- OSI Model, TCP/IP Framework, and Network Topologies Explained
This article provides a detailed explanation of the OSI model and its seven layers. It also explores the TCP/IP model, compares it to the OSI framework, and examines network topologies, their characteristics, advantages, and applications.

Alexander S. Ricciardi
December 12, 2024

In networking, the Open Systems Interconnection (OSI) model is a standardized reference framework that describes how data flows in networks or how networked devices communicate with each other. In 1977, the International Organization for Standardization (ISO) developed OSI to standardize the interoperability of multivendor communications systems into one cohesive model (uCertify, 2019a). The OSI model is a reference model, not a "reverence" model (Wallace, 2020). In other words, the model does not need to be revered as a framework where every network component or device must neatly fit. However, it can be used as a tool to explain and understand where different network components or devices reside. This makes the model very useful for diagnosing and fixing network issues, as it helps isolate problems within its different layers.

The OSI model is composed of seven layers:
Layer 1: The physical layer
Layer 2: The data link layer
Layer 3: The network layer
Layer 4: The transport layer
Layer 5: The session layer
Layer 6: The presentation layer
Layer 7: The application layer

Note that the application layer is the last in the OSI queue, as it is the closest to the user. However, graphically the layers are usually represented as a stack, bottom-up, as illustrated in Figure 1.

Figure 1: OSI Layers. Note: From "The OSI reference model. CompTIA Network+ Pearson N10-007," Figure 2.2, by uCertify (2019a).

Each layer represents a different network functionality, as shown in Figure 2.

Figure 2: OSI vs. TCP/IP. Note: From "Objective 1.01 Explain, compare, and contrast the OSI layers" by vWannabe (n.d.).

In Figure 2, the OSI stack is compared to the TCP/IP stack model, which is a reference model based on the TCP/IP protocol suite. The TCP/IP model is used to describe communications on the Internet and simplifies the OSI layers into four categories: Network Interface (Network Access layer), Internet (Internet layer), Transport (Host-to-Host layer), and Application (Process/Application layer); see Figure 3.

Figure 3: OSI and TCP/IP. Note: From "The OSI reference model. CompTIA Network+ Pearson N10-007," Figure 2.15, by uCertify (2019a).

The TCP/IP layers map to the OSI layers as follows:
Network Interface: Combines the physical and data link layers of the OSI model.
Internet: Corresponds to the network layer of the OSI model.
Transport: Maps directly to the transport layer of the OSI model.
Application: Consolidates the session, presentation, and application layers of the OSI model.

As shown above, I find the OSI model to be a great tool for understanding network systems and diagnosing issues. When connected to the TCP/IP model, it provides practical insights into troubleshooting and understanding Internet systems, which is where most of today's networks operate.
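To make the layer mapping concrete, the short Java sketch below (an illustrative example, not taken from the cited course material) opens a plain TCP connection. The data the program writes lives in the TCP/IP Application layer, the Socket/TCP connection corresponds to the Transport layer, and the operating system and network card handle the Internet and Network Interface layers; the host name and port are placeholder values.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class LayerDemo {
    public static void main(String[] args) throws Exception {
        // Transport layer: open a TCP connection (host and port are placeholders)
        try (Socket socket = new Socket("example.com", 80)) {
            // Application layer: the bytes written here are application data (a minimal HTTP request)
            OutputStream out = socket.getOutputStream();
            out.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Reading the reply: segmentation, IP routing, and framing (Transport,
            // Internet, and Network Interface layers) are all handled below this API
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[1024];
            int read = in.read(buffer);
            if (read > 0) {
                System.out.println(new String(buffer, 0, read, StandardCharsets.US_ASCII));
            }
        }
    }
}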
Another important concept to understand is network topology. Topology classifies the arrangement of devices and connections within a network, either physically (physical topology) or logically (logical topology). Below is an illustration of the most common topologies:

Figure 4: Network Topologies. Note: From "Lesson 1: Computer Network Fundamentals. CompTIA Network+ Pearson N10-007," various figures, by uCertify (2019b). Modified.

The table below describes the characteristics, advantages, and limitations of various topologies.

Table 1: Network Topologies. Note: Data from "Lesson 1: Computer Network Fundamentals. CompTIA Network+ Pearson N10-007" by uCertify (2019b).

As shown in Table 1, each topology has its pros and cons; depending on the needs, budget, and future goals of a business, one topology may be more suitable than another. Below, Table 2 compares the star and generic mesh topologies, showcasing their advantages, disadvantages, and the types of business applications or use cases they are best suited for.

Table 2: Comparison of Star and Generic Mesh Topologies. Note: Data from "Lesson 1: Computer Network Fundamentals. CompTIA Network+ Pearson N10-007" by uCertify (2019b).

As shown in Table 2, a star topology is better suited for small-to-medium businesses due to its low cost, whereas a mesh topology is better suited for large enterprise networks, data centers, and IoT networks that require fault tolerance.

To summarize, the OSI model is a foundational framework for understanding network communication and diagnosing connection issues. It is particularly helpful when used in conjunction with the TCP/IP model to troubleshoot and understand modern Internet systems. Additionally, network topology helps to define the structure of networks by setting the arrangement of devices and connections, both physically and logically, enabling businesses to select the most suitable configuration based on their specific needs and goals.

References:

uCertify. (2019a). Lesson 2: The OSI reference model. CompTIA Network+ Pearson N10-007 (Course & Labs) [Computer software]. uCertify LLC. ISBN: 9781616910327

uCertify. (2019b). Lesson 1: Computer network fundamentals. CompTIA Network+ Pearson N10-007 (Course & Labs) [Computer software]. uCertify LLC. ISBN: 9781616910327

vWannabe (n.d.). Objective 1.01 Explain, compare, and contrast the OSI layers. vWannabe.com. https://vwannabe.com/2013/07/29/objective-1-01-explain-compare-and-contrast-the-osi-layers/

Wallace, K. (2020, December 11). Networking foundations: Networking basics [Video]. LinkedIn Learning. https://www.linkedin.com/learning/networking-foundations-networking-basics/a-high-level-look-at-a-network?autoSkip=true&resume=false&u=2245842
- The Relationship Between Software Modeling and Software Development
This article explores the relationship between Software Modeling (SM) and Software Development (SD) within Software Engineering. It examines how SM, through techniques like UML diagrams, supports and enhances the SD process by improving communication, reducing errors, and providing essential documentation.

Alexander S. Ricciardi
December 11, 2024

Software Engineering (SE) is the art of engineering high-quality software solutions. The Object-Oriented (OO) approach, the Software Development (SD) process, and Software Modeling (SM) are components of SE. This article explores these components, more specifically the relationship between SD and SM, and how, through this relationship, software development teams build systems that are efficient and robust and provide high-quality software solutions to users.

Software Engineering

First, let's define Software Engineering. "The goal of SE is to produce robust, high-quality software solutions that provide value to users. Achieving this goal requires the precision of engineering combined with the subtlety of art" (Unhelkar, 2018, p.1). SE involves a wide range of functions, activities, and tasks, such as:

Project management, business analysis, financial management, regulatory and compliance management, risk management, and service management functions. Functions are SE teams' responsibilities or disciplines that often span the entire software development lifecycle.

Development processes, requirements modeling, usability design, operational performance, security, quality assurance, quality control, and release management activities. Activities are SE actions taken during certain stages of software development; they are often performed repeatedly within functions or processes.

Tasks are small SE actions taken during certain functions or processes. They are often individual steps necessary to perform a specific activity. (Unhelkar, 2018)

As shown above, SE is a complex process that can be decomposed into four components which are essential to learn and to adopt SE. These components are fundamentals of object orientation, modeling (UML standard), process (SDLC, Agile), and experience (case studies and team-based project work).

Figure 1: The Four Essential Components to Adopt Software Engineering. Note: From "Software Engineering Fundamentals with Object Orientation. Software Engineering with UML" by Unhelkar (2018, p.2).

Below is a brief definition of each component:

Object Oriented (OO)
Object Oriented is the concept of object orientation based on Object-Oriented Programming (OOP) languages such as Java and Python. OO is composed of six fundamentals that help in creating classes and programs that process and manipulate data and objects. These OO fundamentals are as follows:
Classification (grouping)
Abstraction (representing)
Encapsulation (modularizing)
Association (relating)
Inheritance (generalizing)
Polymorphism (executing)
(Unhelkar, 2018, p.5)

Software Modeling (SM)
Software Modeling is a project modeling standard based on the Unified Modeling Language (UML) that is used to create diagrams to improve communication and participation from all project stakeholders. This also improves the quality of the software, reduces errors, and encourages easy acceptance of the solution by users. UML's purpose in SE is modeling, developing, and maintaining software.

Figure 2: Purpose of the Unified Modeling Language in Software Engineering. Note: From "Software Engineering Fundamentals with Object Orientation. Software Engineering with UML" by Unhelkar (2018, p.13).

UML's purposes in modeling, developing, and maintaining software can be listed as follows:

Visualizing: The primary purpose of the UML is to visualize the software requirements, processes, solution design, and architecture.

Specifying: UML is used to facilitate the specification of modeling artifacts. For example, a UML class diagram can specify/describe the attributes and methods of a class, along with their relationships.

Constructing: UML is used for software construction because it can be easily translated into code (e.g., C++, Java); see the sketch after this list.

Documenting: UML diagrams can be used as detailed documentation for requirements, architecture, design, project plans, tests, and prototypes.

Maintaining: UML diagrams are an ongoing aid for the maintenance of software systems. Additionally, they provide a visual representation of a project's existing system, architecture, and IT design. This allows the developer to identify the correct places to implement changes and understand the effect of their changes on the software's functionalities and behaviors. (Unhelkar, 2018)
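As an illustration of the "Constructing" purpose above, the short sketch below shows how a simple class diagram, here a hypothetical BankAccount/CheckingAccount pair loosely inspired by the banking example discussed later in this article, translates almost mechanically into Java code. The class names, fields, and methods are illustrative assumptions, not part of the cited material.

// A hypothetical two-class model: CheckingAccount inherits from BankAccount,
// mirroring a generalization arrow in a UML class diagram.
public class BankAccount {
    private double balance;                     // UML attribute: -balance : double

    public void deposit(double amount) {        // UML operation: +deposit(amount : double)
        if (amount > 0) {
            balance += amount;
        }
    }

    public boolean withdraw(double amount) {    // UML operation: +withdraw(amount : double) : boolean
        if (amount > 0 && amount <= balance) {
            balance -= amount;
            return true;
        }
        return false;
    }

    public double getBalance() {                // UML operation: +getBalance() : double
        return balance;
    }
}

class CheckingAccount extends BankAccount {     // UML generalization: CheckingAccount --|> BankAccount
    private double overdraftLimit;              // UML attribute: -overdraftLimit : double

    public CheckingAccount(double overdraftLimit) {
        this.overdraftLimit = overdraftLimit;
    }

    public double getOverdraftLimit() {
        return overdraftLimit;
    }
}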
Process or Software Development (SD)
Process, or Software Development, is the process that defines activities and phases, as well as providing direction to the different teams of designers and developers throughout the Software Development Lifecycle (SDLC). Methodologies such as Waterfall and Agile are used to guide and give structure to the development process. Additionally, they help the development teams complete projects in an efficient manner, meet standards, and meet user requirements. The Waterfall methodological approach is linear and plan-driven, whereas the Agile methodological approach is more flexible and adaptive. These approaches are usually structured around five key components:
Requirements gathering and analysis
Design and architecture
Coding and implementation
Testing and quality assurance
Deployment and maintenance
(Institute of Data, 2023, p.2)

Experience
Experience, or case studies and team-based project work, is the process of learning a project's best approaches and solutions through experimenting with UML and object-oriented fundamentals. "Experience in creating UML models, especially in a team environment, is a must for learning the art of SE" (Unhelkar, 2018, p.2).

How Software Modeling Supports Software Development

As described above, SD and SM are components of SE that play different roles in the SDLC. The difference between SD and SM resides in SD being the methodological process that guides the creation and development of software, and SM being the representation of the software's architecture and functionality through diagrams based on UML. SM's primary role is to support SD by providing a visual representation of the project, reducing errors and scope creep, and providing documentation:

Project visualization: UML diagrams, especially use case diagrams, allow stakeholders to visualize program functionality and behaviors at a high level (Fenn, 2017). This helps teams focus on where they need more requirements, details, and analysis. It supports the SD phase of requirements gathering and analysis.
Reducing errors and scope creep: Software modeling can provide a clear model of the project by serving as a reference for the project requirements, minimizing errors, misunderstandings, and scope creep, particularly during the early stages of the software development process (Fenn, 2017). Scope creep is the expansion of, or addition to, the project requirements or objectives beyond the original scope. It supports the SD phase of design and architecture.

Providing documentation: UML diagrams can serve as living documentation for the project, describing the project's functionality and behaviors as it is developed and after deployment. This documentation can help with decision-making for functionality/behavior implementation and maintenance of the software. It supports the SD phases of "coding and implementation" and "deployment and maintenance."

By applying the concepts listed above, SM helps the SD process create efficient, robust, high-quality software solutions that provide value to users.

UML Example

The following is a UML class diagram of a simple banking manager Java program that utilizes the Swing library, a graphical user interface (GUI) library. The program manages bank accounts and checking accounts with various functionalities such as creating accounts, attaching checking accounts, depositing and withdrawing funds, and viewing account balances.

In UML, class diagrams are one of six types of structural diagrams. Class diagrams are fundamental to the object modeling process and model the static structure of a system. Depending on the complexity of a system, you can use a single class diagram to model an entire system, or you can use several class diagrams to model the components of a system. Class diagrams are the blueprints of your system or subsystem. You can use class diagrams to model the objects that make up the system, to display the relationships between the objects, and to describe what those objects do and the services that they provide. (IBM, 2021)

Figure 3: UML Class Diagram Example. Note: From "Module-4: Portfolio Milestone" by Ricciardi (2024, p.5).

To summarize, SE is the art of engineering high-quality software solutions through OO, SD, and SM. SM helps the SD process by providing clear visual representations of system requirements and architecture, reducing errors, minimizing scope creep, improving communication among stakeholders, and serving as living documentation throughout the software development lifecycle.

References:

Fenn, B. (2017, October). UML in agile development. Control Engineering, 64(10), 48. https://csuglobal.idm.oclc.org/login?qurl=https%3A%2F%2Fwww.proquest.com%2Ftrade-journals%2Fuml-agile-development%2Fdocview%2F2130716718%2Fse-2%3Faccountid%3D38569

IBM (2021, May 5). Rational Software Modeler 7.5.5. IBM. https://www.ibm.com/docs/en/rsm/7.5.0?topic=structure-class-diagrams

Institute of Data (2023, September 5). Understanding software process models: What they are and how they work. Institute of Data. https://www.institutedata.com/us/blog/understand-software-process-models/

Ricciardi, A. (2024, July 7). Module-4: Portfolio Milestone. CSC372: Programming 2. Department of Computer Science, Colorado State University Global. https://github.com/Omegapy/My-Academics-Portfolio/blob/main/Programming-2-CSC372/Module-4%20Portfolio%20Milestone/Module-4%20Portfolio%20Milestone.pdf

Unhelkar, B. (2018). Software engineering fundamentals with object orientation. Software engineering with UML. CRC Press. ISBN 9781138297432
- The Role of Probability in Decision-Making: A Blackjack Case Study
This article examines the concept of probability as a tool for quantifying uncertainty and making informed decisions, using the game of Blackjack as an example. By applying probability principles such as conditional probability, dependency, and Bayes' Theorem, it demonstrates how mathematical methods can evaluate risks, predict outcomes, and guide strategic choices in uncertain scenarios.

Alexander S. Ricciardi
November 17, 2024

Uncertainty, by definition, is a nebulous concept; it encapsulates the unknowns and ambiguities. Probability plays a crucial role in quantifying uncertainty, helping establish degrees of belief, expressed as percentages, in the likelihood of an outcome or outcomes in a given scenario or set of scenarios. This paper explores the concept of probability by applying it to an easy-to-understand example involving the game of Blackjack.

Probability

Probability is the likelihood of something happening. It can also be defined as a mathematical method used to study randomness. In other words, probability is a mathematical method that deals with the chance of an event occurring (Illowsky et al., 2020). This section describes some of the fundamental concepts of probability, starting with the concept of sample space, often denoted Ω. It is the set of all possible outcomes from a scenario or a set of scenarios. An event, denoted E or ω, is a subset of the sample space; it consists of one outcome or multiple outcomes. In probability theory, the probability of a specific possible outcome from the sample space, denoted P(E), is a value between 0 and 1, inclusive (Russell & Norvig, 2021). A probability of 0 means the outcome will never occur, a probability of 1 means the outcome will always occur, and a value between 0 and 1 expresses the likelihood of the outcome, with higher values meaning greater likelihood. This can be formulated as follows:

0 ≤ P(ω) ≤ 1 for every ω ∈ Ω, and Σ P(ω) = 1, where the sum is over all ω ∈ Ω.

The probability method comes with a set of rules, properties, laws, and theorems that are fundamental principles used for computing the likelihood of events occurring. Below is a list of some of these rules, properties, principles, and theorems, where A and B are events.

- Addition rule: Computes the probability of either one of two events occurring (Data Science Discovery, n.d.).
For mutually exclusive events (events that cannot occur simultaneously): P(A ∪ B) = P(A ∨ B) = P(A) + P(B)
For non-mutually exclusive events: P(A ∪ B) = P(A ∨ B) = P(A) + P(B) - P(A ∧ B)

- Multiplication rule: Computes "the joint probability of multiple events occurring together using known probabilities of those events individually" (Foster, n.d., p.1).
For independent events (the occurrence of one does not affect the other): P(A ∩ B) = P(A ∧ B) = P(A) ∙ P(B)
For dependent events (the occurrence of one does affect the other): P(A ∩ B) = P(A ∧ B) = P(A) ⋅ P(B|A), where P(B|A) is the conditional probability (see below).

- Complement rule: "The complement of an event is the probability that the event does not occur" (Eberly College of Science, n.d., Section 2.1.3.2.4). P(¬A) = 1 - P(A)

- Conditional probability: The probability of an event occurring given that another event has already occurred. The probability of A given B: P(A|B) = P(A ∧ B) / P(B)

- Bayes' Theorem: Computes the reverse of the conditional probability. It updates the probability of an outcome based on new evidence. It can also be defined as P(A|B) = (P(B|A) ⋅ P(A)) / P(B) (see the sketch after this list), where:
P(A) is the prior probability of an event A.
P(B) is the probability of an event B.
P(B|A) is the probability of an event B occurring given that A has occurred.
P(A|B) is the probability of an event A occurring given that B has occurred.
(Dziak, 2024)
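As a quick illustration of the conditional probability and Bayes' Theorem formulas above, the sketch below computes P(A|B) for a made-up pair of events; the numbers are illustrative assumptions, not values from the Blackjack scenario that follows.

public class BayesExample {
    public static void main(String[] args) {
        // Illustrative (made-up) probabilities:
        double pA = 0.02;          // P(A): prior probability of event A
        double pBGivenA = 0.95;    // P(B|A): probability of B given A
        double pBGivenNotA = 0.10; // P(B|not A): probability of B given that A did not occur

        // Total probability of B: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
        double pB = pBGivenA * pA + pBGivenNotA * (1 - pA);

        // Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
        double pAGivenB = (pBGivenA * pA) / pB;

        System.out.printf("P(B)   = %.4f%n", pB);
        System.out.printf("P(A|B) = %.4f%n", pAGivenB);
    }
}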
These rules, properties, principles, and theorems provide a range of tools to solve probabilistic problems in simple scenarios such as rolling dice and card games like Blackjack.

Blackjack Scenario

Let's explore a Blackjack scenario where the dealer is showing a 10. In Blackjack, specific rules dictate when to take a hit or stand, especially when the dealer is showing a 10, that is, when the house hand is showing a 10. Suppose a single deck is in play, and four cards are already dealt. If you have a 10 and a 7 visible, and the dealer shows a 10, let's calculate the probability that the dealer's hidden card is an 8, 9, or a face card, and why it makes sense to hit on a 16 but to stand on a 17.

In this Blackjack scenario, the concepts of dependency and conditional probability play an essential role in calculating the probabilities. Two events are said to be dependent if the outcome of one event affects the probability of the other. In this scenario, the events are dependent because the cards are drawn without replacement. This means that each card dealt to the player's or the house's hand changes the composition of the deck and thus affects the probabilities of future events.

Analysis of the Scenario

Now that the probability methods have been established, let's analyze the problem in more detail. In Blackjack, the goal of a player is to finish the game with a higher hand than that of the house without exceeding 21, as going over 21 is known as 'busting' and is an automatic loss (Master Traditional Games, n.d.). The face cards have a value of 10, and an Ace can be treated as either 1 or 11, with the player choosing the value. The player and the house can either hit or stand; the player or players go first, and after all the players stand, the house goes next. Note that all players are playing against the house, not each other, and if the house's hand matches a player's hand, it results in a draw between the player and the house.

A standard deck of cards has 52 cards. The player's hand has a 10 and a 7, totaling 17. The house has a 10 as the dealer up-card, and a fourth card is on the table, the dealer hole-card. Therefore, 3 card values are known and 52 - 3 = 49 cards are unknown. The scenario calls for calculating the probability of the house's other card, the dealer hole-card, being an 8, 9, or face card, as any of those cards would give the house a better hand than the player. A standard deck has 4 8s, 4 9s, 4 Jacks, 4 Queens, and 4 Kings. Mathematically, this can be translated to:

4 (8s) + 4 (9s) + 4 (Jacks) + 4 (Queens) + 4 (Kings) = 20 house-favorable cards

This means that out of a set of 49 unknown cards, 20 of those cards are favorable to the house. Thus, the probability that the house's other card, the dealer hole-card, is one of the house's favorable cards is:

P(favorable) = 20/49 ≈ 0.4082 = 40.82%

This means that the probability of the house having a better hand than the player is 40.82% when considering only the 8, 9, or face cards as the possible cards on the table. On a side note, the Ace card was not considered in this scenario, and an Ace can be treated as a 1 or 11. If the dealer hole-card is an Ace, then the house's hand would be 10 + 11 = 21, Blackjack. A card deck has 4 Aces; additionally, the 2 remaining 10 cards were also not considered in this scenario. This changes the number of house-favorable cards to 20 + 4 (Aces) + 2 (10s) = 26 and the probability to:

P(favorable) = 26/49 ≈ 0.5306 = 53.06%

This considerably improves the probability of the house having a better hand than the player, from 40.82% to 53.06%.
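The hedged sketch below simply re-computes these ratios in Java so the arithmetic can be checked, including the hit-on-16/stand-on-17 ratios discussed next; the card counts are the ones stated in the scenario.

public class BlackjackProbabilities {
    public static void main(String[] args) {
        int unknownCards = 52 - 3;          // 3 cards are visible: player's 10 and 7, dealer's up-card 10

        // Hole cards that beat the player's 17 outright: 8, 9, or any face card (4 of each)
        int favorableToHouse = 4 + 4 + 4 + 4 + 4;              // = 20
        System.out.printf("P(8, 9, or face)  = %d/%d = %.2f%%%n",
                favorableToHouse, unknownCards,
                100.0 * favorableToHouse / unknownCards);       // ~40.82%

        // Adding the 4 Aces (Blackjack) and the 2 remaining 10s (hand of 20)
        int favorableExtended = favorableToHouse + 4 + 2;       // = 26
        System.out.printf("P(house beats 17) = %d/%d = %.2f%%%n",
                favorableExtended, unknownCards,
                100.0 * favorableExtended / unknownCards);      // ~53.06%

        // Player holding 16: only Ace through 5 improve the hand without busting (5 of 13 ranks)
        System.out.printf("P(improve a 16)   = 5/13 = %.2f%%%n", 100.0 * 5 / 13);   // ~38.46%
        // Player holding 17: only Ace through 4 improve the hand without busting (4 of 13 ranks)
        System.out.printf("P(improve a 17)   = 4/13 = %.2f%%%n", 100.0 * 4 / 13);   // ~30.77%
    }
}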
Why Does It Make Sense to Hit on a 16 but Stand on a 17?

The scenario claims that it makes sense to hit on a 16 but to stand on a 17, given that the house will stand on a 17 and above.

Let's explore the scenario where the player has a hand of 16. If the player decides to hit, they can improve their hand by drawing a 1 (Ace), 2, 3, 4, or 5; that is, a total of 5 types of cards out of a set of 13 types of cards (Ace through King) are favorable to the player with a hand of 16. Note that this calculation is based on the number of types of cards found in a deck, 13, rather than the total number of cards left in the deck, as the specific card composition behind the player's hand of 16 is unknown. Therefore, the probability of the player hitting a favorable type of card is:

P(favorable) = 5/13 ≈ 0.3846 = 38.46%

Thus, if the house's hand is a 17 or above, it makes sense for the player to hit, as they have an approximately 38.46% chance to draw a favorable card. If the player does not hit, they will automatically lose since the house has a better hand.

Now let's explore the scenario where the player has a hand of 17. If the player decides to hit, they can only improve their hand by drawing a 1 (Ace), 2, 3, or 4, a total of 4 types of cards out of a set of 13 types of cards favorable to the player with a hand of 17. Therefore, the probability of the player hitting a favorable card is:

P(favorable) = 4/13 ≈ 0.3077 = 30.77%

This means that a house with a hand of 17 or above has an approximately 69.23% or higher chance of hitting an unfavorable card and busting. Thus, if the house's hand is 17 or above, it makes sense for the player to stand, as the house has approximately a 69.23% or higher chance of drawing an unfavorable card and busting if it decides to hit. This is likely because the house typically plays against multiple players who may have better hands than the house.

Conclusion

Probability plays a crucial role in quantifying uncertainty; it helps establish the likelihood of an outcome or outcomes in a given scenario or set of scenarios. This paper explored the concept of probability by applying its principles to a practical example, the game of Blackjack. This simple example shows how powerful the concept of probability can be by demonstrating how probability can be used to evaluate risks, calculate potential outcomes, and make strategic choices. Probability, as a tool for making decisions, can be applied not only in games but also in various real-world situations where uncertainty is a factor.

References:

Data Science Discovery (n.d.). Multi-event probability: Addition rule. University of Illinois at Urbana-Champaign (UIUC). https://discovery.cs.illinois.edu/learn/Prediction-and-Probability/Multi-event-Probability-Addition-Rule/#Addition-Rule-Formula

Dziak, M. (2024). Bayes' theorem. Salem Press Encyclopedia of Science. Accessed November 18, 2024. https://search.ebscohost.com/login.aspx?direct=true&AuthType=ip,uid&db=ers&AN=89142582&site=eds-live

Eberly College of Science (n.d.). 2: Describing data, part 1. STAT 200: Elementary statistics. Department of Statistics, Penn State Eberly College of Science. https://online.stat.psu.edu/stat200/lesson/2/2.1/2.1.3/2.1.3.2/2.1.3.2.4

Foster, J. (n.d.). Multiplication rule for calculating probabilities.
Statistics By Jim. https://statisticsbyjim.com/probability/multiplication-rule-calculating-probabilities/

Illowsky, B., Dean, S., Birmajer, D., Blount, B., Einsohn, M., Helmreich, J., Kenyon, L., Lee, S., & Taub, J. (2020, March 27). 1.1 Definitions of statistics, probability, and key terms. Statistics. OpenStax. https://openstax.org/books/statistics/pages/preface

Master Traditional Games (n.d.). The rules of Blackjack. Master of the Games. https://www.mastersofgames.com/rules/blackjack-rules.htm?srsltid=AfmBOoojETz5j0oD9X_OW-mIYhepbOfCZm3sH6Z4o2klRDmMLHYO6s5m

Russell, S. & Norvig, P. (2021). 12.2 Basic probability notation. Artificial intelligence: A modern approach (4th ed.). Pearson Education, Inc. ISBN: 9780134610993; eISBN: 9780134671932.
- AI and Chess: Shaping the Future of Strategic Thinking and Intelligence
This article explores the evolving relationship between Artificial Intelligence (AI) and chess, highlighting how AI has transformed chess strategy and player training, while chess has contributed to the advancement of AI technologies.

Alexander S. Ricciardi
October 8, 2024

Artificial Intelligence (AI) and the game of chess have an ongoing relationship that began in the 1990s and gained prominence when human chess champions and AI faced each other, notably in 1997 when IBM's Deep Blue defeated chess champion Garry Kasparov, putting AI capabilities into the public spotlight (Rand, 2024). This influenced the evolution of chess strategy over the past few decades by bringing a better understanding of the game (Deverell, 2023). AI is now a tool for game analysis and player training, making chess more popular than ever. For its part, the game influenced the evolution of AI.

The Deep Blue supercomputer was able to evaluate around 200 million chess positions per second, roughly the capacity to look between 12 and 30 moves ahead; this gave the AI greater strategic insight than its human counterpart (Cipra, 1996). Deep Blue was an example of good old-fashioned AI, which uses heuristic reasoning. In other words, it is an example of symbolic planning AI, or narrow AI, also called an expert AI, which can only play chess and operates based on pre-programmed functions and search algorithms. This AI model can also be defined as a model-based, utility-based agent; see Figure 1. A model-based, utility-based agent has an internal model of the chess environment and utilizes a utility function (or functions) to evaluate and choose actions with the goal of winning the game.

Figure 1: Model-based, Utility-based Agent. Note: From "2.4 The Structure of Agents. Artificial Intelligence: A Modern Approach," Figure 2.14, by Russell & Norvig (2021, p.55).

Note that: "A model-based utility-based agent. It uses a model of the world, along with a utility function that measures its preferences among states of the world. Then it chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome" (Russell & Norvig, 2021, p.55).

More modern expert chess AI models, such as Google's AlphaZero, use a deep neural network combined with Monte Carlo tree search. A deep neural network is a type of machine learning model that uses an artificial neural network, a program that tries to mimic the structure of the human brain (Rose, 2023). A machine learning model is a program that learns through supervised or unsupervised learning, or a combination of both. It can be defined as a type of learning agent that improves its performance on a specific task by learning from data, rather than being explicitly programmed with fixed rules (Russell & Norvig, 2021).

AlphaZero, and notably its predecessor AlphaGo, learned to play the game of Go by playing millions of games against itself, using deep neural networks and backpropagation. Backpropagation is a training algorithm used in neural networks that adjusts the weights of the connections between neurons by propagating the error of the output backward through the network. In 2016, AlphaGo beat Go champion Lee Sedol using novel and brilliant moves, such as move 37, "— a move that had a 1 in 10,000 chance of being used. This pivotal and creative move helped AlphaGo win the game and upended centuries of traditional wisdom" (DeepMind, n.d., p.1). This changed how players approach the games of Go and chess; players now study AI-generated strategies and incorporate them into their own gameplay, and the expert AIs are now the teachers.
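As a rough illustration of the expected-utility rule quoted above, and not of Deep Blue's or AlphaZero's actual implementations, the sketch below picks the move whose outcomes have the highest probability-weighted utility; the move names, probabilities, and utility values are invented for the example.

import java.util.Map;

public class ExpectedUtilityAgent {
    // Hypothetical outcome model: each row is {utility, probability} for one possible outcome
    static double expectedUtility(double[][] outcomes) {
        double eu = 0.0;
        for (double[] outcome : outcomes) {
            eu += outcome[0] * outcome[1];   // utility weighted by the probability of that outcome
        }
        return eu;
    }

    public static void main(String[] args) {
        // Invented example: two candidate moves, each with win/draw/loss utilities (1, 0.5, 0) and probabilities
        Map<String, double[][]> model = Map.of(
                "Nf3", new double[][]{{1.0, 0.30}, {0.5, 0.50}, {0.0, 0.20}},
                "g4",  new double[][]{{1.0, 0.40}, {0.5, 0.10}, {0.0, 0.50}});

        String bestMove = null;
        double bestEU = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, double[][]> move : model.entrySet()) {
            double eu = expectedUtility(move.getValue());
            System.out.printf("Move %-4s expected utility = %.3f%n", move.getKey(), eu);
            if (eu > bestEU) {
                bestEU = eu;
                bestMove = move.getKey();
            }
        }
        System.out.println("Chosen move: " + bestMove);  // the action with the highest expected utility
    }
}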
On a side note: Large Language Models such as GPT-4o, o1, Anthropic Claude Sonnet 3.5, Gemini 1.5, and Grok 2 are more generalist models with strong language abilities and are not well suited for playing chess or the game of Go. However, because they are based on the Transformer architecture, which relies on self-attention, meaning the model weighs the importance of different parts of the input data when making predictions, all that may be needed is scaling in computing power, data, and time, and the use of Chain-of-Thought reasoning, for these models to potentially achieve the same levels in chess and Go as the expert AI models. Furthermore, many are predicting that Artificial General Intelligence (AGI), see Figure 2, will be achieved by 2030.

Figure 2: The ANI-AGI-ASI Train. Note: The illustration is a metaphor that depicts the rapid advancement of AI technology, progressing from Artificial Narrow Intelligence (ANI), which is less intelligent than human-level intelligence, to Artificial General Intelligence (AGI), which is equivalent to human-level intelligence, and to Artificial Super-Intelligence (ASI), which surpasses human intelligence. From "The AI revolution: The road to superintelligence Part-2," by Urban (2015).

To summarize, AI's relationship with chess has transformed the game itself and made it more popular than ever, but it has also contributed significantly to the advancement of AI technologies. Deep Blue and more modern models like AlphaZero are the children of this relationship. Moreover, as AI continues to evolve, there is potential for even generalist models, like large language models combined with scaling in computing power, data, and time, and the use of Chain-of-Thought reasoning, to reach the same level of strategic thinking as expert AI, not only in strategic games such as chess but also in other fields such as advanced physics and mathematics, potentially surpassing human abilities, if that is not already the case, ultimately opening the door to AGI and subsequently to ASI.

References:

Cipra, B. (1996, February 2). Will a computer checkmate a chess champion at last? Science, 271(5249), p.599. https://www.proquest.com/docview/213567322?accountid=38569&parentSessionId=cOz1dBEdSipk%2FF9km0uBWbuk2pNTreJUZoVBGhGjMxE%3D&sourcetype=Scholarly%20Journals/

DeepMind (n.d.). AlphaGo. Google. https://deepmind.google/research/breakthroughs/alphago/

Deverell, J. (2023, July 6). Artificial intelligence and chess: An evolving landscape. Regency Chess Company. https://www.regencychess.com/blog/artificial-intelligence-and-chess-an-evolving-landscape/

Rand, M. (2024, March 8). To understand the future of AI, look at what happened to chess. Forbes. https://www.forbes.com/sites/martinrand/2024/03/08/to-understand-the-future-of-ai-look-at-what-happened-to-chess/

Rose, D. (2023, October 12). Artificial intelligence foundations: Thinking machines welcome. LinkedIn Learning. https://www.linkedin.com/learning/artificial-intelligence-foundations-thinking-machines/welcome?resume=false&u=2245842

Russell, S. & Norvig, P. (2021). 2.4 The structure of agents. Artificial intelligence: A modern approach (4th ed.). Pearson Education, Inc. ISBN: 9780134610993; eISBN: 9780134671932.

Urban, T. (2015, January 27). The AI revolution: The road to superintelligence Part-2. Wait But Why.
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html/
- Taxonomy and Frames in Programming Languages: A Hierarchical Approach to Knowledge Representation
This article examines the application of taxonomy and frames in programming languages, focusing on Python and Java. It demonstrates how hierarchical taxonomies and frame-based representations organize and define relationships, properties, and attributes, providing a comprehensive approach to knowledge representation in computer science.

Alexander S. Ricciardi
November 3, 2024

Taxonomy in computer science is the act of classifying and organizing concepts. For example, in software engineering, it is used to classify software testing techniques, model-based testing approaches, and static code analysis tools (Novak et al., 2010). In data management, it is used to organize metadata and to categorize and manage data assets (Knight, 2021). In Artificial Intelligence, it is used to guide models to recognize patterns in data sets. Taxonomy can also be defined as the act of organizing knowledge within a domain by using a controlled vocabulary to make it easier to find related information (Knight, 2021), and it must:

"Follow a hierarchic format and provide names for each object in relation to other objects. May also capture the membership properties of each object in relation to other objects. Have specific rules used to classify or categorize any object in a domain. These rules must be complete, consistent, and unambiguous. Apply rigor in specification, ensuring any newly discovered object must fit into one and only one category or object. Inherit all the properties of the class above it but can also have additional properties." (Knight, 2021, p.1)

In this paper, taxonomic knowledge and frames are implemented in the domain of programming languages, focusing on Python. "A frame is a data structure that can represent the knowledge in a semantic net" (Colorado State University Global, n.d., p.2). To implement the taxonomic knowledge, the paper follows three steps using first-order logic. The steps are Subset Information, Set Membership of Entities, and Properties of Sets and Entities. Then, the paper uses a tree-like structure to show how subcategories relate to parent categories. Additionally, it demonstrates how the hierarchical taxonomic structure interacts with frames by illustrating how attributes and properties are defined in the Python frame and how they align with the broader taxonomic categories. Finally, it explains how the combination of taxonomic relationships and frames provides a comprehensive representation of knowledge.

The Three Steps to Implement Taxonomic Knowledge

Note that the programming languages Java and Python are used as examples.

Step 1: Subset Information
In this step, first-order logic is used to represent the subcategory relationships (a short code analogy follows the list below).

Subcategory Relationships:

Compiled Languages and Interpreted Languages:
∀x High_Level_Compiled_Language(x) ⇒ Compiled_Language(x)
∀x Scripting_Language(x) ⇒ Interpreted_Language(x)

Programming Languages:
∀x Compiled_Language(x) ⇒ Programming_Language(x)
∀x Interpreted_Language(x) ⇒ Programming_Language(x)

Specific Languages:
∀x Java(x) ⇒ High_Level_Compiled_Language(x)
∀x Python(x) ⇒ Scripting_Language(x)

Additional Subcategories (Functional Languages and Logic Programming Languages):
∀x Functional_Language(x) ⇒ Programming_Language(x)
∀x Logic_Programming_Language(x) ⇒ Programming_Language(x)
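For readers who think in code, the Step 1 subset relations can be mirrored as a Java class hierarchy, where 'extends' plays the role of the subset implication. This is an illustrative sketch of the idea, not part of the cited taxonomy material; the class names are invented for the example.

// Each "subset of" relation from Step 1 becomes an inheritance link:
// e.g., ∀x Scripting_Language(x) ⇒ Interpreted_Language(x) maps to
// ScriptingLanguage extends InterpretedLanguage.
abstract class ProgrammingLanguage { }

abstract class CompiledLanguage extends ProgrammingLanguage { }
abstract class HighLevelCompiledLanguage extends CompiledLanguage { }

abstract class InterpretedLanguage extends ProgrammingLanguage { }
abstract class ScriptingLanguage extends InterpretedLanguage { }

class JavaLanguage extends HighLevelCompiledLanguage { }
class PythonLanguage extends ScriptingLanguage { }

public class TaxonomyDemo {
    public static void main(String[] args) {
        // Set membership (Step 2) corresponds to creating instances of the classes
        ProgrammingLanguage python313 = new PythonLanguage();

        // The subset chain Python ⇒ Scripting ⇒ Interpreted ⇒ Programming_Language
        // is reflected by the instanceof checks all being true:
        System.out.println(python313 instanceof ScriptingLanguage);    // true
        System.out.println(python313 instanceof InterpretedLanguage);  // true
        System.out.println(python313 instanceof ProgrammingLanguage);  // true
    }
}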
Step 2: Set Membership of Entities
In this step, the category membership of specific languages is represented.

Instances of Programming Languages:
Java SE 23: Java(JavaSE23)
Python 3.13: Python(Python3_13)
Other Programming Languages (more examples): C(C23), Haskell(Haskell2010)

Step 3: Properties of Sets and Entities
In this step, the properties of the categories and of the programming languages are represented.

Properties of Programming Languages:
All Programming Languages have Syntax and are used for Software Development:
∀x Programming_Language(x) ⇒ Has_Syntax(x)
∀x Programming_Language(x) ⇒ Used_For(x,"Software_Development")

Properties of Compiled and Interpreted Languages:
Compiled Languages have Execution Model 'Compiled':
∀x Compiled_Language(x) ⇒ Execution_Model(x,"Compiled")
Interpreted Languages have Execution Model 'Interpreted':
∀x Interpreted_Language(x) ⇒ Execution_Model(x,"Interpreted")

Properties of Specific Languages:
Java has Static Typing Discipline:
∀x Java(x) ⇒ Typing_Discipline(x,"Static")
Python has Dynamic Typing Discipline:
∀x Python(x) ⇒ Typing_Discipline(x,"Dynamic")
Java Supports Paradigm 'Object-Oriented':
∀x Java(x) ⇒ Supports_Paradigm(x,"Object-Oriented")
Python Supports Multiple Paradigms:
∀x Python(x) ⇒ Supports_Paradigm(x,"Multi-Paradigm")

Properties of Entities:
Java SE 23's Latest Version is 23: Latest_Version(JavaSE23,"23")
Python 3.13's Latest Version is 3.13: Latest_Version(Python3_13,"3.13")

Hierarchical Taxonomy of Programming Languages

Below is a shortened, text-based, tree-like hierarchical structure representing the relationships between different languages in the domain of programming languages:

Programming Language
    Compiled Language
        High-Level Compiled Language
            C
            C++
            Java
            Rust
        Low-Level Compiled Language
            Assembly Language
    Interpreted Language
        Scripting Language
            Python
            Ruby
            Perl
        Shell Scripting Language
            Bash
            PowerShell
    Functional Language
        Pure Functional Language
            Haskell
        Multi-Paradigm Functional Language
            Scala
            F#
    Logic Programming Language
        Prolog

Note that some languages, like Python and Java, can be considered both interpreted and compiled languages; however, for the scope of this exercise, they are categorized as interpreted and compiled languages, respectively.

Hierarchical Taxonomy Visualization

Figure 1: Hierarchical Taxonomy of Programming Languages. Note: The diagram is a visual representation of the hierarchical taxonomy of programming languages. Data adapted from multiple sources: (Epözdemir, 2024; Foster, 2013; Gómez, n.d.; Peter Van Roy, 2008; Saxena, 2024; Startups, 2018; & Wikipedia contributors, 2024).

Interaction of the Python Frame With the Hierarchical Taxonomy

This section illustrates a Python frame, which is a data structure representation of the Python programming language's attributes and properties. For comparison, a Java frame is also provided.

Figure 2: Python Frame

(Python
    Instance_Of: Scripting_Language;
    // Inherited properties and attributes
    Used_For: Software_Development;
    Execution_Model: Interpreted;
    Syntax: Easy_To_Use;
    // Properties and attributes specific to Python
    Creator: Guido van Rossum;
    First_Released: 1991;
    Typing_Discipline: Dynamic, Strong Typing;
    Paradigms: Object-Oriented, Imperative, Functional, Procedural, Reflective;
    License: Python Software Foundation License;
    Latest_Version: 3.13;
    Official_Website: www.python.org
)

Note: This is a frame representation of Python's properties and attributes. An example of an attribute is 'Instance_Of' and of a property is 'Scripting_Language'. For comparison, below is a representation of the Java frame.
Figure 3: Java Frame

(Java
    Instance_Of: High_Level_Compiled_Language;
    // Inherited properties and attributes
    Used_For: Software_Development;
    Debugging: Friendly;
    Execution_Model: Compiled;
    // Properties and attributes specific to Java
    Creator: James Gosling;
    First_Released: 1995;
    Typing_Discipline: Static, Strong Typing;
    Paradigms: Object-Oriented, Class-based, Concurrent;
    License: GNU General Public License with Classpath Exception;
    Latest_Version: 23;
    Official_Website: www.oracle.com/java/
)

Note: This is a frame representation of Java's properties and attributes.

A hierarchical taxonomy organizes entities into a tree-like structure. In the programming language hierarchical taxonomy, the root class (the category representing the domain), or the first node of the tree-like structure, is 'Programming Language,' with all other nodes as subclasses (subcategories) that inherit directly or indirectly from the 'Programming Language' root class. These relationships can be described as "is an instance of." For example, all subclasses show the relation "is an instance of" 'Programming Language': 'High-Level Compiled Language' "is an instance of" 'Compiled Language,' which "is an instance of" 'Programming Language'; therefore, 'High-Level Compiled Language' also shows the relationship "is an instance of" 'Programming Language.' This relationship is defined by the concept of inheritance, where a subclass inherits the properties and attributes of its parent class and grandparent classes. Note that a subclass can have more than one parent class.

For example, the parent class 'Compiled_Language' has a property 'Execution_Model' with the attribute 'Compiled'; the subclass 'High_Level_Compiled_Language' and all the languages that are children of it will inherit the property 'Execution_Model' with the attribute 'Compiled'. This can be translated into first-order logic as follows:

∀x High_Level_Compiled_Language(x) ⇒ Compiled_Language(x) ⇒ Execution_Model(x,"Compiled")

Where 'x' is the instance of a programming language (e.g., Java SE 23) and '⇒' means implies.

When exploring the Python frame, we can see that one of its attributes is 'Instance_Of' with the property 'Scripting_Language'; this shows that Python is a subclass of the 'Scripting_Language' class. Therefore, Python inherits all the properties and attributes from 'Scripting_Language', which are 'Syntax: Easy_To_Use', 'Execution_Model: Interpreted', and 'Used_For: Software_Development'. Additionally, 'Syntax: Easy_To_Use' is specific to 'Scripting_Language.' On the other hand, 'Execution_Model: Interpreted' and 'Used_For: Software_Development' are inherited by 'Scripting_Language' from 'Interpreted_Language.' Furthermore, 'Execution_Model: Interpreted' is specific to 'Interpreted_Language', which inherits 'Used_For: Software_Development' from 'Programming_Language.' This can be translated into first-order logic as follows:

∀x Python(x) ⇒ Scripting_Language(x) ⇒ Syntax(x,Easy_To_Use)
∀x Python(x) ⇒ Scripting_Language(x) ⇒ Interpreted_Language(x) ⇒ Execution_Model(x,Interpreted)
∀x Python(x) ⇒ Scripting_Language(x) ⇒ Interpreted_Language(x) ⇒ Programming_Language(x) ⇒ Used_For(x,Software_Development)

Where 'x' is the instance of a programming language (e.g., Python 3.13) and '⇒' means implies. The rest of Python's properties and attributes are specific to it. On a side note, in polymorphism, a subclass can modify (override) the attribute's value of a property inherited from a parent class.
For example, a language could inherit 'Syntax: Easy_To_Use' from 'Scripting_Language' and modify the attribute 'Easy_To_Use' to 'Hard_To_Use.'

Frame and Hierarchical Taxonomy Interactions Visualization

This section visually illustrates the interactions between the hierarchical taxonomy and the Java and Python frames.

Figure 4: Frame and Hierarchical Taxonomy Interactions (Java and Python). Note: The diagram illustrates the interactions between the hierarchical taxonomy and the Java and Python frames. Only the specific properties and attributes of the subclasses are listed in their node containers, as the inherited properties and attributes can be listed in their parent class container nodes. Data adapted from multiple sources: (Epözdemir, 2024; Foster, 2013; Gómez, n.d.; Peter Van Roy, 2008; Saxena, 2024; Startups, 2018; & Wikipedia contributors, 2024).

As shown in Figure 4, combining hierarchical taxonomic relationships and frames creates a powerful tool for representing knowledge. The hierarchical taxonomy illustrates the relationships between categories; for example, the 'Scripting Language' category is a subcategory of 'Interpreted Language', which is a subcategory of the 'Programming Language' category, making 'Scripting Language' a sub-subcategory of the root category 'Programming Language', which represents the domain. Additionally, the implementation of frames into the diagram shows the entities' properties and attributes and how they are inherited from other categories. For example, Python's specific properties and attributes are listed in its node container, and its inherited properties and attributes are listed in its parent, grandparent, and great-grandparent class node containers. This creates a robust representation of knowledge that provides depth and clarity, allowing users to navigate complex relationships effortlessly.

References:

Colorado State University Global. (n.d.). Module 4: Knowledge representation [Interactive lecture]. Canvas. Retrieved November 1, 2024, from https://csuglobal.instructure.com/courses/100844/pages/4-dot-2-frames?module_item_id=5183634

Epözdemir, J. (2024, April 10). Programming language categories. Medium. https://medium.com/@jepozdemir/programming-language-categories-6b786d70e8f7

Foster, D. (2013, February 20). Visual guide to programming language properties. DaFoster. https://dafoster.net/articles/2013/02/20/visual-guide-to-programming-language-properties/

Gómez, R. (n.d.). Alphabetical list of programming languages. programminglanguages.info. https://programminglanguages.info/languages/

Knight, M. (2021, March 12). What is taxonomy? Dataversity. https://www.dataversity.net/what-is-taxonomy/

Novak, J., Krajnc, A., & Žontar, R. (2010, May 1). Taxonomy of static code analysis tools. IEEE Conference Publication | IEEE Xplore. https://ieeexplore.ieee.org/document/5533417

Peter Van Roy. (2008). The principal programming paradigms. https://webperso.info.ucl.ac.be/~pvr/paradigmsDIAGRAMeng108.pdf

Saxena, C. (2024, October 17). Top programming languages 2025: By type and comparison. ISHIR | Software Development India. https://www.ishir.com/blog/36749/top-75-programming-languages-in-2021-comparison-and-by-type.htm

Startups, A. (2018, June 20). Choosing the right programming language for your startup. Medium. https://medium.com/aws-activate-startup-blog/choosing-the-right-programming-language-for-your-startup-b454be3ed5e2

Wikipedia contributors. (2024, November 3). List of programming languages by type. Wikipedia.
https://en.wikipedia.org/wiki/List_of_programming_languages_by_type
- Truth Tables: Foundations and Applications in Logic and Neural Networks
This article explores the role of Truth Tables (TTs) in evaluating logical statements by systematically analyzing relationships between propositions, providing examples and foundational concepts in propositional logic. Additionally, it examines innovative applications of TTs in Convolutional and Deep Neural Networks.

Alexander S. Ricciardi
October 20, 2024

Truth Tables (TTs) evaluate logical statements by systematically analyzing the relationships of truth and falsehood between the propositions within those statements. This essay demonstrates the use of TTs by providing two examples using three propositions and analyzing their logical relationships. It also briefly explores how TTs can be used to implement Truth Table networks (TT-net), a Convolutional Neural Network (CNN) model that can be expressed in terms of TTs, and how, when combined with Deep Neural Networks (DNNs), they can create a novel Neural Network Framework (NNF) called Truth Table rules (TT-rules).

Definition

A truth table evaluates all possible truth values returned by a logical expression (Sheldon, 2022). The returned truth values are binary, meaning they are either true or false (not true); they may be referred to as Boolean values. Boolean is a term that represents a system of algebraic notation used to represent logical propositions, usually by means of the binary digits 0 (false) and 1 (true) (Oxford Dictionary, 2006). Boolean algebra, related fields of mathematics, and the sciences rely on Boolean logic to show the possible outcomes of a logical expression or operation in terms of its truth or falseness, which can be expressed using numbers, characters, or words. In programming languages such as C++ and C, any non-zero Boolean return value is considered true; however, in the Java programming language, a boolean can only hold the value 'true' or 'false'. Truth tables, on the other hand, usually use the letters 'T' for true and 'F' for false to represent truth values.

Propositional Logic

As mentioned earlier, Boolean values are used to represent logical propositions. A logical proposition, also known as an atomic sentence, is a sentence that can either be true or false, but not both (James, 2014). Propositional logic (also known as sentential logic or Boolean logic) is the process of forming logical statements, also known as complex sentences, by combining logical propositions (Russell & Norvig, 2021). An atomic sentence is represented by a single proposition symbol, such as P, Q, R, or W₁₃, that can be assigned a true or false Boolean value. For example, P = T means P is true, and P = F means P is false, but never both. To combine atomic sentences into a logical statement, operators like AND ('∧'), OR ('∨'), and NOT ('¬') are used, as well as symbols to express implications, such as '⇒' for 'implies' and '⇔' for 'if and only if'. The table below lists the five basic logical operations forming complex sentences using the operators and symbols that were just discussed.

Table 1: Basic Logical Operations. Note: From "2.2: Introduction to truth tables. Mat 1130 mathematical ideas," by Lippman (2022), modified.

Complex sentences can combine more than one operation. For example, (W₁₁ ∧ P₁₃) ∨ W₂₂ ⇔ ¬W₂₄. Additionally, the operators follow a precedence similar to that of arithmetic operators; it is as follows: '¬', '∧', '∨', '⇒', '⇔', with '¬' having the highest precedence (Russell & Norvig, 2021).
Additionally, two atomic sentences P and Q are logically equivalent if they are true in the same set of models, using the notation P ≡ Q. A model is a specific assignment of truth values to all the atomic sentences in a logical expression. This equivalence also applies to complex sentences, and it has the following properties:

o (P ∧ Q) ≡ (Q ∧ P) — commutativity of ∧
o (P ∨ Q) ≡ (Q ∨ P) — commutativity of ∨
o ((P ∧ Q) ∧ W) ≡ (P ∧ (Q ∧ W)) — associativity of ∧
o ((P ∨ Q) ∨ W) ≡ (P ∨ (Q ∨ W)) — associativity of ∨
o ¬(¬P) ≡ P — double-negation elimination
o (P ⇒ Q) ≡ (¬Q ⇒ ¬P) — contraposition
o (P ⇒ Q) ≡ (¬P ∨ Q) — implication elimination
o (P ⇔ Q) ≡ ((P ⇒ Q) ∧ (Q ⇒ P)) — biconditional elimination
o ¬(P ∧ Q) ≡ (¬P ∨ ¬Q) — De Morgan
o ¬(P ∨ Q) ≡ (¬P ∧ ¬Q) — De Morgan
o (P ∧ (Q ∨ W)) ≡ ((P ∧ Q) ∨ (P ∧ W)) — distributivity of ∧ over ∨
o (P ∨ (Q ∧ W)) ≡ ((P ∨ Q) ∧ (P ∨ W)) — distributivity of ∨ over ∧
(Russell & Norvig, 2021, p.222)

Examples of Truth Tables

This section explores the logic of two sentences: one involving a conditional statement with a negation and a conjunction, and the other involving a biconditional statement with a disjunction. The sentences in natural language are:

If it is sunny and I do not work today, then I will go to the beach.
I will pass the exam if and only if I complete all homework assignments or I study for at least 10 hours.

The first step is to identify the atomic sentences that are part of the natural language sentences, followed by the complex sentences and the logical operators that combine them; the last step is to form the table based on the atomic sentences, complex sentences, and logical operators. Note that atomic sentences, also called atomic propositions, are simple propositions that contain no logical connectives (Lavin, n.d.). Thus, their logical values can be set as false or true to evaluate more complex propositions, also called complex sentences.

Example 1

Let's start with the first sentence, "If it is sunny and I do not work today, then I will go to the beach."

The atomic sentences are:
o P: "The weather is sunny."
o Q: "I work today."
o R: "I will go to the beach."

The logic operators are:
o '¬' (not)
o '∧' (and)
o '⇒' (conditional). Note that '⇒' corresponds to the terms 'implies' or 'if ... then'.

The complex sentences are:
o ¬Q
o P ∧ ¬Q
o (P ∧ ¬Q) ⇒ R

Now, let's make the TT:

Table 2: Sentence 1 Truth Table

The TT is a world model that explores and evaluates all the possible truth values of the atomic and complex sentences. However, not all the table values bear relevance in proving the logical validity of sentence 1. In other words, irrelevant propositions can be ignored, no matter how many of them there are (Russell & Norvig, 2021). For example, if R is false, "I am not going to the beach" regardless of whether P and Q are true or false, making the rows where R is false irrelevant in determining the validity of sentence 1. Additionally, to prove the logical validity of sentence 1, both the (P ∧ ¬Q) and R propositions need to be true, and both the P and ¬Q propositions need to be true as well. This concept is similar to coding an 'if' statement in a programming language, where two conditions combined with the logical 'and' operator must both be true for the code after the 'then' clause to execute; for instance, 'if (A && B) then print("A and B are both true");'. Note that the proposition 'print("A and B are both true");' is always true if (A and B) is true.
Thus, the relevant propositions for this example are found in row three of the table:

Table 3: Sentence 1 Truth Table, Row 3

o P: T — "The weather is sunny" is true.
o Q: F — "I work today" is false.
o ¬Q: T — "I do not work today" is true.
o R: T — "I will go to the beach" is true.
o P ∧ ¬Q: T — "It is sunny and I do not work" is true.
o (P ∧ ¬Q) ⇒ R: T — "If it is sunny and I do not work today, then I will go to the beach" is true.

Therefore, the sentence "If it is sunny and I do not work today, then I will go to the beach" is logically sound.

Example 2

Now, let's explore the sentence "I will pass the exam if and only if I complete all homework assignments or I study for at least 10 hours."

The atomic sentences are:
o P: "I complete all homework assignments."
o Q: "I study for at least 10 hours."
o R: "I will pass the exam."

The logic operators are:
o '∨' (or)
o '⇔' (biconditional). Note that '⇔' corresponds to the term 'if and only if'.

The complex sentences are:
o P ∨ Q
o (P ∨ Q) ⇔ R

Now, let's make the TT:

Table 4: Sentence 2 Truth Table

As in example 1, the TT is a world model that explores and evaluates all the possible truth values of the atomic and complex sentences. However, not all the table values bear relevance in proving the validity of sentence 2. Both the (P ∨ Q) and R expressions need to be true to prove that the sentence is logically valid. Additionally, only one of the atomic sentences in the proposition (P ∨ Q) needs to be true for the proposition to be true. Thus, the relevant propositions for this example are found in rows one, three, and five of the table:

Table 5: Sentence 2 Truth Table, Rows 1, 3, and 5

o P: T — "I complete all homework assignments" is true.
o P: F — "I complete all homework assignments" is false.
o Q: T — "I study for at least 10 hours" is true.
o Q: F — "I study for at least 10 hours" is false.
o R: T — "I will pass the exam" is true.
o P ∨ Q: T — "I complete all homework assignments (false) or I study for at least 10 hours (true)" is true.
o P ∨ Q: T — "I complete all homework assignments (true) or I study for at least 10 hours (false)" is true.
o (P ∨ Q) ⇔ R: T — "I will pass the exam if and only if I complete all homework assignments or I study for at least 10 hours" is true.

Therefore, the sentence "I will pass the exam if and only if I complete all homework assignments or I study for at least 10 hours" is logically valid.

Applications

TTs have many applications in mathematics and science. A recently proposed application by Benamira et al. (2023b) suggests using them within Convolutional Neural Networks (CNNs) to create a novel CNN architecture called Truth Table net (TT-net). In traditional CNNs, researchers do not have clear insight into how the network makes decisions, making CNNs "black boxes." The TT-net architecture makes it easier for researchers to understand and interpret how the CNN makes decisions. After training, a TT-net can be analyzed and understood using Boolean decision trees, Disjunctive/Conjunctive Normal Form (DNF/CNF), or Boolean logic circuits. This allows researchers to map the decision-making process of the CNN. A similar proposed application of TTs by Benamira et al. (2023a) suggests using them as a framework called Truth Table rules (TT-rules) within Deep Neural Networks (DNNs). TT-rules is based on the TT-net architecture, with the goal of making DNNs less of a "black box" and more interpretable by transforming DNN-trained models into understandable rule-based systems using TTs.
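To tie the two examples together before summarizing, the sketch below programmatically enumerates the truth table for sentence 1, (P ∧ ¬Q) ⇒ R, using Java's boolean operators; it is an illustrative aid, with implication written as !a || b (implication elimination), and is not part of the cited TT-net/TT-rules work.

public class TruthTableDemo {
    // Material implication a ⇒ b is equivalent to ¬a ∨ b (implication elimination)
    static boolean implies(boolean a, boolean b) {
        return !a || b;
    }

    // Print truth values in the T/F convention used by truth tables
    static String t(boolean b) {
        return b ? "T" : "F";
    }

    public static void main(String[] args) {
        boolean[] values = {true, false};
        System.out.println("P     Q     R     P∧¬Q  (P∧¬Q)⇒R");
        for (boolean p : values) {
            for (boolean q : values) {
                for (boolean r : values) {
                    boolean pAndNotQ = p && !q;               // P ∧ ¬Q
                    boolean sentence1 = implies(pAndNotQ, r); // (P ∧ ¬Q) ⇒ R
                    System.out.printf("%-5s %-5s %-5s %-5s %-5s%n",
                            t(p), t(q), t(r), t(pAndNotQ), t(sentence1));
                }
            }
        }
    }
}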
Summary

Truth Tables are a useful tool to prove the logical validity of sentences using Boolean values. They enumerate all possible models (possible worlds) and evaluate the truth values of atomic and complex sentences in each. In other words, they help evaluate logical propositions by breaking down complex sentences into atomic sentences and analyzing all possible combinations of truth values within the proposition, as shown in Examples 1 and 2. They have many applications in mathematics and science; proposed applications in CNNs and DNNs would involve using TTs to make the models less of a "black box" by making their decision-making processes more transparent and interpretable.

References:
Benamira, A., Guérand, T., Peyrin, T., & Soegeng, H. (2023a, September 18). Neural network-based rule models with truth tables. arXiv. http://arxiv.org/abs/2309.09638
Benamira, A., Guérand, T., Peyrin, T., Yap, T., & Hooi, B. (2023b, February 2). A scalable, interpretable, verifiable & differentiable logic gate convolutional neural network architecture from truth tables. arXiv. http://arxiv.org/abs/2208.08609
James, J. (2014). Math 310: Logic and truth tables [PDF]. Minnesota State University Moorhead, Mathematics Department. https://web.mnstate.edu/jamesju/Spr2014/Content/M310IntroLogic.pdf
Lavin, A. (n.d.). 7.2: Propositions and their connectors. Thinking well - A logic and critical thinking textbook 4e (Lavin). LibreTexts Humanities. https://human.libretexts.org/Bookshelves/Philosophy/Thinking_Well_-_A_Logic_And_Critical_Thinking_Textbook_4e_(Lavin)/07%3A_Propositional_Logic/7.02%3A_Propositions_and_their_Connectors
Lippman, D. (2022). 2.2: Introduction to truth tables. MAT 1130 mathematical ideas. Pierce College via The OpenTextBookStore. https://math.libretexts.org/Courses/Prince_Georges_Community_College/MAT_1130_Mathematical_Ideas_Mirtova_Jones_(PGCC:_Fall_2022)/02:_Logic/2.02:_Introduction_to_Truth_Tables
Oxford Dictionary (2006). The Oxford dictionary of phrase and fable (2nd ed.). Oxford University Press. DOI: 10.1093/acref/9780198609810.001.0001
Russell, S., & Norvig, P. (2021). 7. Logical agents. Artificial intelligence: A modern approach (4th ed.). Pearson Education, Inc. ISBN: 9780134610993; eISBN: 9780134671932.
Sheldon, R. (2022, December). What is a truth table? TechTarget. https://www.techtarget.com/whatis/definition/truth-table
- Minimizing Variable Scope in Java: Best Practices for Secure and Efficient Code
This article explains the importance of minimizing variable scope in Java to enhance code readability, maintainability, and security. It highlights Java's object-oriented approach, contrasts it with languages like C++, and provides examples of best practices, including encapsulation and controlled access through methods. Alexander S. Ricciardi November 20, 2024

In Java, the scope of a variable is the part of a program where the variable can be accessed (Mahrsee, 2024). The scope can be class scope, method scope, or block scope. Java does not have global variables like C++ does; global variables are variables that can be accessed from anywhere in the program. In other words, such variables have global scope. Java inherently minimizes scope by encapsulating everything in classes. Java is a strictly object-oriented programming (OOP) language rather than a procedural one like C; C++, on the other hand, supports both paradigms, OOP and procedural programming. In any case, scope minimization is an approach aimed at improved readability, better maintainability, and a reduced chance of errors (Carter, 2021). DCL53-J of the SEI CERT Oracle Coding Standard for Java (CMU, n.d.) recommends minimizing the scope of variables because it helps to "avoid common programming errors, improves code readability by connecting the declaration and actual use of a variable, and improves maintainability because unused variables are more easily detected and removed. It may also allow objects to be recovered by the garbage collector more quickly, and it prevents violations of DCL51-J. Do not shadow or obscure identifiers in subscopes."

Minimizing the scope of a variable also adds a layer of security, as the variable is restricted to the context where it is needed. This reduces access, manipulation, or misuse by other parts of the program, limiting possible vulnerabilities. For example, in Java, declaring a class variable 'private' restricts its scope to within the class, preventing other classes from directly modifying or accessing it. If the variable needs to be accessed or modified, it can only be done through controlled methods, such as getters or setters, which encapsulate the variable or return a copy of it; additionally, they can implement an extra layer of validation or logic that ensures the variable is properly utilized.
Below is an example of what applying scope minimization in Java can look like:

public class Employee {
    // Private class variables to restrict access
    private String name;
    private double salary;

    // Constructor
    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    // Getter for name (read-only access)
    public String getName() {
        return name;
    }

    // Getter and setter for salary with validation
    public double getSalary() {
        return salary;
    }

    public void setSalary(double salary) {
        if (salary > 0) {
            this.salary = salary;
        } else {
            throw new IllegalArgumentException("Salary must be greater than 0.");
        }
    }

    // Method to provide an increment with controlled logic
    public void applyBonus(double percentage) {
        if (percentage > 0 && percentage <= 20) {
            this.salary += this.salary * (percentage / 100);
        } else {
            throw new IllegalArgumentException("Bonus percentage must be between 0 and 20.");
        }
    }

    // Display employee details
    public void printDetails() {
        System.out.println("Name: " + name);
        System.out.println("Salary: $" + salary);
    }
}

public class Main {
    public static void main(String[] args) {
        // Create an Employee object
        Employee emp = new Employee("Alice", 50000);
        System.out.println("Initial Salary:");
        emp.printDetails();

        // Modify Salary
        emp.setSalary(55000);
        emp.applyBonus(10);
        System.out.println("\nUpdated Salary:");
        emp.printDetails();

        // Attempting to set an invalid salary
        System.out.println("\nInvalid salary (-10000):");
        try {
            emp.setSalary(-10000);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}

Outputs:

Initial Salary:
Name: Alice
Salary: $50000.0

Updated Salary:
Name: Alice
Salary: $60500.0

Invalid salary (-10000):
Salary must be greater than 0.

To summarize, minimizing variable scope in Java improves code readability, maintainability, and security by restricting access to where variables are needed most. Java is a strictly object-oriented programming (OOP) language, meaning that it encapsulates data and variables within classes. This approach not only prevents unintended interactions and vulnerabilities but also aligns with best practices for efficient and secure programming.

References:
Carter, K. (2021, February 10). Effective Java: Minimize the scope of local variables. DEV Community. https://dev.to/kylec32/effective-java-minimize-the-scope-of-local-variables-3e87
CMU — Software Engineering Institute (n.d.). DCL53-J. Minimize the scope of variables. SEI CERT Oracle coding standard for Java. Carnegie Mellon University, Software Engineering Institute.
Mahrsee, R. (2024, May 13). Scope of variables in Java. GeeksforGeeks. https://www.geeksforgeeks.org/variable-scope-in-java/
- Searching vs. Sorting in Java: Key Differences and Applications
This article describes the differences between searching and sorting algorithms in Java, their distinct purposes, methodologies, and time complexities. It includes practical examples and implementations, such as Merge Sort for organizing data and Binary Search for efficient retrieval, demonstrating their roles in solving real-world problems. Alexander S. Ricciardi July 14, 2024 In Java, understanding searching and sorting algorithms and how they differ from each other, is crucial for the correct functionality of the application and for effectively managing data. While searching focuses on locating specific data within a collection, sorting rearranges data. This article explores their differences in purpose, methodology, and applications, by providing examples. The major differences between searching and sorting in Java lie in their purposes and outputs, as well as their efficiencies and time complexities. Please Table 1 for a detailed comparison. Table 1 Searching vs Sorting in Java Choosing between different searching or sorting algorithms often depends on the purpose or output wanted and the specific requirements of your application, such as the size of the dataset, and whether the data is already sorted. The following table, Table 2, gives examples of pseudocode and time complexity for several searches and sort algorithms: Table 2 Runtime Complexities for Various Pseudocode Examples Note: In Java without using the Comparable Interface the code above would only be viable for primitive types. From Programming in Java with ZyLabs , 18.3 O notation, Figure 18.3.2 by Lysecky, R., & Lizarraga, A. (2022). An example of a sort algorithm is the merge sort, which has the divide-and-conquer approach, it recursively divides a data array into smaller subarrays and sorts those subarrays, then merges the subarrays together to create a sorted array (GeeksforGeeks, 2020a).An example of a search algorithm is the binary search; which operates on a pre-sorted array by repeatedly dividing the search interval in half until the target element is found or determined to be absent (GeeksforGeeks, 2020b). The example below sorts using merge sort an ArrayList of book objects by year of publication then searches the sorted list using binary: Book.java /** * Book object with a title and publication year. This class implements * Comparable to allow sorting based on the publication year. * * @author Alexander Ricciardi * @version 1.0 * @date 07/14/2024 */ class Book implements Comparable { String title; int year; /** * Constructs a new Book object. * * @param title The title of the book. * @param year The year the book was published. */ public Book(String title, int year) { this.title = title; this.year = year; } /** * Compares this book with another book based on the publication year. * * @param other The book to compare with. * @return A negative integer, zero, or a positive integer as this book is less * than, equal to, or greater than the specified book. */ @Override public int compareTo(Book other) { return Integer.compare(this.year, other.year); } /** * Returns a string representation of the book. * * @return A string in the format "title (year)". */ @Override public String toString() { return title + " (" + year + ")"; } } BookSortingSearching.java i mport java.util.ArrayList; import java.util.Arrays; import java.util.Scanner; /** * Sorts and search a list of books. It implements merge sort for sorting and * binary search for searching. 
* * @author Alexander Ricciardi * @version 1.0 * @date 07/14/2024 */ public class BookSortingSearching { /** * The main method that demonstrates sorting and searching on a list of books. * * @param args Command line arguments (not used). */ public static void main(String[] args) { // Initialize the list of books ArrayList books = new ArrayList<>( Arrays.asList(new Book("To Kill a Mockingbird", 1960), new Book("1984", 1949), new Book("The Great Gatsby", 1925), new Book("One Hundred Years of Solitude", 1967), new Book("The Catcher in the Rye", 1951), new Book("Brave New World", 1932), new Book("The Hobbit", 1937), new Book("The Lord of the Rings", 1954), new Book("Pride and Prejudice", 1813), new Book("Animal Farm", 1945) )); // Print the original list System.out.println("Original list:"); books.forEach(System.out::println); // Sort the books using merge sort mergeSort(books, 0, books.size() - 1); // Print the sorted list System.out.println("\nSorted list by year:"); books.forEach(System.out::println); // Perform binary search based on user input Scanner scn = new Scanner(System.in); System.out.print("\nEnter a year to search for: "); int searchYear = scn.nextInt(); int result = binarySearch(books, searchYear); if (result != -1) { System.out.println("Book found: " + books.get(result)); } else { System.out.println("No book found for the year " + searchYear); } scn.close(); } /** * Sorts the given list of books using the merge sort algorithm. * * @param books The list of books to sort. * @param left The starting index of the subarray to sort. * @param right The ending index of the subarray to sort. */ private static void mergeSort(ArrayList books, int left, int right) { if (left < right) { int mid = (left + right) / 2; mergeSort(books, left, mid); // Sort left half mergeSort(books, mid + 1, right); // Sort right half merge(books, left, mid, right); // Merge the sorted halves } } /** * Merges two sorted subarrays of the books list. * * @param books The list of books containing the subarrays to merge. * @param left The starting index of the left subarray. * @param mid The ending index of the left subarray. * @param right The ending index of the right subarray. */ private static void merge(ArrayList books, int left, int mid, int right) { // Create temporary arrays ArrayList leftList = new ArrayList<>(books.subList(left, mid + 1)); ArrayList rightList = new ArrayList<>(books.subList(mid + 1, right + 1)); int i = 0, j = 0, k = left; // Merge the two lists while (i < leftList.size() && j < rightList.size()) { if (leftList.get(i).compareTo(rightList.get(j)) <= 0) { books.set(k++, leftList.get(i++)); } else { books.set(k++, rightList.get(j++)); } } // Copy remaining elements of leftList, if any while (i < leftList.size()) { books.set(k++, leftList.get(i++)); } // Copy remaining elements of rightList, if any while (j < rightList.size()) { books.set(k++, rightList.get(j++)); } } /** * Performs a binary search on the sorted list of books to find a book by its * publication year. * * @param books The sorted list of books to search. * @param year The publication year to search for. * @return The index of the book if found, -1 otherwise. 
*/ private static int binarySearch(ArrayList books, int year) { int left = 0, right = books.size() - 1; while (left <= right) { int mid = left + (right - left) / 2; if (books.get(mid).year == year) { return mid; // Book found } if (books.get(mid).year < year) { left = mid + 1; // Search in the right half } else { right = mid - 1; // Search in the left half } } return -1; // Book not found } } Output: To Kill a Mockingbird (1960) 1984 (1949) The Great Gatsby (1925) One Hundred Years of Solitude (1967) The Catcher in the Rye (1951) Brave New World (1932) The Hobbit (1937) The Lord of the Rings (1954) Pride and Prejudice (1813) Animal Farm (1945) Sorted list by year: Pride and Prejudice (1813) The Great Gatsby (1925) Brave New World (1932) The Hobbit (1937) Animal Farm (1945) 1984 (1949) The Catcher in the Rye (1951) The Lord of the Rings (1954) To Kill a Mockingbird (1960) One Hundred Years of Solitude (1967) Enter a year to search for: 1951 Book found: The Catcher in the Rye (1951) In other words, merge sort is efficient for sorting large sets of data due to its complexity of O(n log(n)) , while binary search with its targeted approach to search is better suited for machine learning applications, such as those for training neural networks or finding the optimal hyperparameters for a model. In summary, searching and sorting algorithms have interconnected roles in programming but serve different purposes. Sorting algorithms like Merge Sort organize the data, allowing searching methods such as Binary Search to be more efficient. Together, these algorithms are indispensable for solving real-world problems, from data analysis to application development. References GeeksforGeeks. (2020a, November 18). Merge sort. GeeksforGeeks https://www.geeksforgeeks.org/merge-sort/ GeeksforGeeks. (2020b, February 3). Binary search. GeeksforGeeks. https://www.geeksforgeeks.org/binary-search/ Lysecky, R., & Lizarraga, A. (2022). Programming in Java with ZyLabs[ Table]. Zyante, Inc.
- GUI Design with JavaFX Layout Managers
This article explores how Java Layout Managers provide an abstraction that streamlines the development of Graphical User Interfaces (GUIs) in JavaFX by automating component sizing and positioning. Using predefined layouts like HBox, VBox, and GridPane, developers can create organized and responsive interfaces. Alexander S. Ricciardi June 25, 2024 The Java Layout Manager provides an easy way to develop Graphical User Interfaces (GUIs), particularly by offering tools to manage and organize GUI components. It is responsible for determining the dimensions and placement of components within a container (Oracle Docs. n.d.). While components can suggest their preferred sizes and alignments, the layout manager of the container ultimately decides the final size and position of these components. The Java Layout Manager provides a simpler approach to using panes (Gordon, 2013). It also facilitates the creation and management of standard layouts like rows, columns, stacks, tiles, and more. Additionally, when the window is resized, the layout pane automatically adjusts the positioning and size of its contained nodes based on their properties, it is responsive. Additionally, this article offers a Zoo Exabit program example of how layout managers can be used to arrange UI elements. JavaFX offers a variety of layouts that can fit different GUI needs and functionalities. Layouts such as HBox, VBox, GridPane, BorderPane, StackPane, and FlowPane, see Figure 1. Figure 1 Example of JavaFX Layouts Note : from “3/10 - Introduction and overview of JavaFX panes or GUI containers for layout” by JavaHandsOnTeaching (2021) Zoo Exabit Program Example The program displays the animals found in various exhibits of a Zoo using the JavaFx VBox and HBox layouts, see Figure 2 to see how the different layout panes are positioned. 
Figure 2 Zoo Layout Panes Java File Source Code import javafx.application.Application; import javafx.geometry.Pos; import javafx.scene.Scene; import javafx.scene.control.Label; import javafx.scene.layout.HBox; import javafx.scene.layout.VBox; import javafx.stage.Stage; public class Main extends Application { @Override public void start(Stage primaryStage) { // Title for the window Label title = new Label("Zoo Exhibits"); title.getStyleClass().add("title"); // Create main VBox layout VBox mainLayout = new VBox(20); // Center align the contents of the VBox (title and Horizontal boxes for the two // sets of exhibits) mainLayout.setAlignment(Pos.CENTER); // Horizontal boxes for the two sets of exhibits HBox firstSetExhibits = new HBox(10); firstSetExhibits.setAlignment(Pos.CENTER); // Center align the contents of the HBox firstSetExhibits.getChildren().add(createExhibitSection("Africa", "Lion", "Elephant", "Giraffe")); firstSetExhibits.getChildren().add(createExhibitSection("South America", "Jaguar", "Llama", "Macaw")); firstSetExhibits.getChildren().add(createExhibitSection("Australia", "Kangaroo", "Koala", "Platypus")); HBox secondSetExhibits = new HBox(10); secondSetExhibits.setAlignment(Pos.CENTER); // Center align the contents of the Exhibit // HBox secondSetExhibits.getChildren() .add(createExhibitSection("North America", "Bison", "Bald Eagle", "Grizzly Bear")); secondSetExhibits.getChildren().add(createExhibitSection("Asia", "Tiger", "Panda", "Orangutan")); secondSetExhibits.getChildren().add(createExhibitSection("Europe", "Wolf", "Brown Bear", "Red Deer")); // Add the title and horizontal sets to the main layout mainLayout.getChildren().addAll(title, firstSetExhibits, secondSetExhibits); // Create a Scene Scene scene = new Scene(mainLayout, 500, 500); // Load the CSS file scene.getStylesheets().add(getClass().getResource("application.css").toExternalForm()); // Set the scene on the primary stage primaryStage.setTitle("Zoo Exhibits"); primaryStage.setScene(scene); primaryStage.show(); } // String... passes multiple string an array of strings not a set size private VBox createExhibitSection(String continent, String... 
animals) { VBox exhibitSection = new VBox(5); exhibitSection.setAlignment(Pos.CENTER); // Center align the exhibit section labels exhibitSection.getStyleClass().add("exhibit-section"); // Title label for the continent Label continentLabel = new Label(continent); continentLabel.getStyleClass().add("continent-label"); exhibitSection.getChildren().add(continentLabel); // Vertical box to hold animal labels VBox animalsBox = new VBox(5); animalsBox.setAlignment(Pos.CENTER); // Center align the animal labels for (String animal : animals) { Label animalLabel = new Label(animal); animalLabel.getStyleClass().add("animal-label"); animalsBox.getChildren().add(animalLabel); } // Add the VBox to the section exhibitSection.getChildren().add(animalsBox); return exhibitSection; } public static void main(String[] args) { launch(args); } } CSS File .title { -fx-font-size: 24px; -fx-font-weight: bold; -fx-padding: 10px; -fx-background-color: #f0f0f0; } .main-content-title { -fx-font-size: 20px; -fx-font-weight: bold; -fx-padding: 5px; -fx-background-color: #d0d0d0; } .exhibit-section { -fx-padding: 10px; -fx-background-color: #f9f9f9; -fx-border-radius: 5px; } .continent-label { -fx-font-size: 16px; -fx-font-weight: bold; } .animal-label { -fx-font-size: 14px; -fx-border-radius: 3px; -fx-padding: 3px; } Outputs: To summarize, Java Layout Managers provided an easy way to develop GUI by offering a robust and flexible framework. Java Layout Managers’ abstraction allows developers to focus on creating organized layouts using predefined panes like HBox, VBox, GridPane, and others without having to manually manage the placement and sizing of each component. The Zoo Exhibits program shows how layout managers can be used to arrange UI elements and ensure that applications remain adaptable to window resizing and different display environments. Java Layout Managers is a powerful abstraction that not only facilitates the development process but also enhances the user experience by providing consistent and dynamic interface structures. References: Gordon, J. (2013, June). JavaFX: Working with layouts in JavaFX [PDF]. Oracle Docs. https://docs.oracle.com/javafx/2/layout/jfxpub-layout.pdf/ JavaHandsOnTeaching (2021, June 19). 3/10 - Introduction and overview of JavaFX panes or GUI containers for layout [Video]. YouTube. https://www.youtube.com/watch?v=GH-3YRAuHs0&t=905s Oracle Docs. (n.d). Using layout managers . The Java™ Tutorials. Oracle. https://docs.oracle.com/javase%2Ftutorial%2Fuiswing%2F%2F/layout/using.html
- Bayes' Theorem: Risk Prediction and AI in The Insurance Sector
The article explains Bayes' Theorem as a mathematical formula used to update probabilities based on new evidence, illustrating its applications in risk prediction for insurance companies. It highlights the theorem's integration into AI systems to enhance decision-making while addressing potential biases, privacy, and security concerns. Alexander S. Ricciardi November 13, 2024

In probability theory, Bayes' Theorem, also known as Bayes' Rule or Bayes' Law, is a mathematical formula that updates the probability of a hypothesis based on new evidence. In statistics, it is a way to revise or update existing predictions or theories based on new or additional evidence (Hayes, 2024). In other words, it is a mathematical formula for determining conditional probability. A conditional probability is the likelihood of an outcome occurring based on a previous outcome that occurred under similar circumstances. Thus, Bayes' Theorem can also be defined as a mathematical property that allows a conditional probability to be expressed in terms of the inverse conditional (Data Science Discovery, n.d.).

The Bayes' Theorem formula is:

P(A∣B) = P(B∣A) P(A) / P(B)

Where:
P(A) is the prior probability of an event A.
P(B) is the probability of an event B.
P(B∣A) is the probability of an event B occurring given that A has occurred.
P(A∣B) is the probability of an event A occurring given that B has occurred.

Therefore, Bayes' Theorem computes the reverse of a conditional probability. That is, it gives the probability of a cause A given an effect B, P(A∣B), when the overall probability of the cause, P(A), the probability of the effect, P(B), and the probability of the effect occurring given that the cause has occurred, P(B∣A), are known.
Note: P(B) = P(B∣A) P(A) + P(B∣¬A) P(¬A)
Where:
P(B∣¬A) is the probability of event B occurring given that event ¬A has occurred.
P(B∣A) is the probability of event B occurring given that event A has occurred.
(Taylor, 2023)

Another way to formulate Bayes' Theorem is to base it on a hypothesis context:

P(H∣E) = P(E∣H) P(H) / P(E)

Where:
H: The hypothesis (e.g., a person has a certain disease).
E: The evidence (e.g., the person tests positive in a diagnostic test).
P(H): The prior probability is the known probability of the hypothesis, or the initial belief about the hypothesis before observing the evidence (e.g., the probability someone has a specific disease before considering specific symptoms or test results).
P(E): The marginal probability is the probability of observing the evidence under all possible scenarios. It may be thought of as an unconditional probability, as it is not conditioned on another event (Albright, n.d.). P(E) = P(E∣H) P(H) + P(E∣¬H) P(¬H) (e.g., the probability that a diagnostic test would show a positive result, whether or not the person actually has the disease).
P(E∣H): The likelihood is the probability of observing the evidence (e.g., a positive test result) given that the hypothesis is true. In other words, it is the likelihood of E being true based on H being true (e.g., the likelihood of a positive test result being accurate, knowing that the person has the disease; note that medical tests are not 100% accurate and can return false positives). P(H∣E), on the other hand, is the likelihood of H being true based on E being true.
P(H∣E): The posterior probability is the updated probability of the hypothesis given the observed evidence
(e.g., the probability that the person actually has the disease, given that they tested positive).

The formulation of Bayes' Theorem above can be used to assess risks by calculating the likelihood of an event, such as an accident, illness, or natural disaster. This information is very valuable for an insurance company, enabling it to better understand and predict potential risks. The steps below describe how the theorem can be applied to predict the risk of a person developing a disease:

Prior Knowledge: Establish the disease hypothesis, such as the probability that someone has a specific disease before considering specific symptoms or test results, then collect relevant data and compute the prior probability P(H) from it. For example, if 5 in 1,000 people in the general population have the disease, the prior is P(H) = 0.005.

Incorporate the Evidence: Incorporate evidence such as a positive result from a diagnostic test. That is the likelihood, P(E∣H), the likelihood of a positive test result being accurate, knowing that the person has the disease. For example, if the test is accurate 96% of the time, P(E∣H) = 0.96.

Account for False Positives: Compute the marginal probability P(E), the probability that a diagnostic test would show a positive result, whether or not the person actually has the disease.
- True positives, P(E∣H) P(H): the test correctly identifies the disease.
- False positives, P(E∣¬H) P(¬H): the test incorrectly identifies the disease.
Then P(E) = P(E∣H) P(H) + P(E∣¬H) P(¬H).
For example, with P(E∣¬H) = 0.04 (a 4% false positive rate) and P(¬H) = 1 − P(H) = 0.995:
P(E) = (0.96)(0.005) + (0.04)(0.995) = 0.0048 + 0.0398 = 0.0446

Compute the Posterior: Use Bayes' Theorem to update the probability based on the evidence. That is the probability that the person actually has the disease, given that they tested positive. For example:
P(H∣E) = P(E∣H) P(H) / P(E) = 0.0048 / 0.0446 ≈ 0.1076

Thus, even though the test is 96% accurate, the posterior probability P(H∣E) is only about 10.76%. This is due to P(H) being only 0.5% (5 in 1,000 people in the general population have the disease). In other words, a positive test result increases the person's probability of having the disease from 0.5% to about 10.76%; however, it is still more likely that the person does not have the disease. For the insurer, this means it needs to consider both the test accuracy, P(E∣H) = 0.96, and the disease prevalence, P(H), so as not to overestimate risk, which remains low even with a positive test, and to set insurance premiums that represent a fair and accurate assessment of the probability of a person having the disease.
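The arithmetic above can be checked with a few lines of code. The sketch below is only an illustration (the class and method names are assumptions, not part of the original example); it plugs the example's numbers into Bayes' Theorem.

public class BayesPosteriorExample {

    // Posterior P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|notH) P(notH))
    static double posterior(double pH, double pEgivenH, double pEgivenNotH) {
        double pE = pEgivenH * pH + pEgivenNotH * (1 - pH); // marginal probability P(E)
        return (pEgivenH * pH) / pE;
    }

    public static void main(String[] args) {
        double pH = 0.005;         // prevalence: 5 in 1,000
        double pEgivenH = 0.96;    // test accuracy on people who have the disease
        double pEgivenNotH = 0.04; // false positive rate

        double pE = pEgivenH * pH + pEgivenNotH * (1 - pH);
        System.out.printf("P(E)   = %.4f%n", pE);                                   // prints 0.0446
        System.out.printf("P(H|E) = %.4f%n", posterior(pH, pEgivenH, pEgivenNotH)); // prints about 0.1076
    }
}

Running this reproduces the marginal probability of 0.0446 and the posterior of roughly 10.76% computed by hand above.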
As the example above shows, Bayes' Theorem is a powerful tool that provides more accurate predictions than relying on simple probabilities or raw data alone. An Artificial Intelligence (AI) implementation of Bayes' Theorem can transform and enhance an insurance company's decision-making processes by providing:
- More accurate predictions of risks, such as accidents, illnesses, or natural disasters, through real-time data processing.
- More personalized and refined predictions based on individual-level data, by analyzing vast amounts of data.
- Automated systems that streamline claims processing and fraud detection.
However, a Bayes' Theorem AI may inadvertently create biases. Additionally, because such a system uses vast amounts of personal and private data, such as medical history and financial information, it introduces privacy and security concerns, as well as data regulatory compliance challenges. These potential issues need to be carefully considered when using such systems.

To summarize, Bayes' Theorem is a mathematical formula that updates the probability of a hypothesis based on new evidence, providing a way to incorporate prior knowledge and observed data into predictions in real time. It is a powerful tool that provides more accurate predictions than relying on simple probabilities or data alone, and when implemented in an AI model, it can transform and enhance an insurance company's decision-making processes.

References:
Albright, E. (n.d.). Probability: Joint, marginal and conditional probabilities. ENV710 Statistics Review. Nicholas School of the Environment, Duke University. https://sites.nicholas.duke.edu/statsreview/jmc/
Data Science Discovery (n.d.). Bayes' theorem. University of Illinois at Urbana-Champaign (UIUC). https://discovery.cs.illinois.edu/learn/Prediction-and-Probability/Bayes-Theorem/
Hayes, A. (2024, March 30). Bayes' theorem: What it is, the formula, and examples. Investopedia. https://www.investopedia.com/terms/b/bayes-theorem.asp
Taylor, S. (2023, November 21). Bayes' theorem. Corporate Finance Institute. https://corporatefinanceinstitute.com/resources/data-science/bayes-theorem/
- Securing Sensitive Data in Java: Best Practices and Coding Guidelines
The article explores the importance of protecting sensitive data in Java applications and highlights common vulnerabilities, including improper data handling, injection attacks, and input validation failures. It provides secure coding guidelines from Oracle, along with examples of unsafe and safe code practices. Alexander S. Ricciardi November 14, 2024

Sensitive data is information that individuals or organizations want to protect from public exposure, as its unintentional release or theft could result in harm to the person or the organization in the form, for example, of identity theft or other criminal intent (Baig, 2021). For individuals, this may include personal details like payment information or birth dates, and for organizations, it could be proprietary corporate information. Java, as a programming language, incorporates several abstractions to secure sensitive data. However, data security in an application can still be compromised by different factors, such as improper handling of sensitive information and vulnerabilities to data injection attacks, as well as insufficient input validation and the unsafe handling of mutable objects. Oracle (n.d.), the corporation that owns the rights to Java, provides secure coding guidelines for Java SE. The following is a list of these guidelines.

- Guideline 2 Confidential Information (Oracle, n.d.).
Guideline 2-1 / CONFIDENTIAL-1: Purge sensitive information from exceptions. Sensitive information in exceptions should not reveal internal states or paths.
Guideline 2-2 / CONFIDENTIAL-2: Do not log highly sensitive information. Logs should exclude sensitive details like passwords or security tokens.
Guideline 2-3 / CONFIDENTIAL-3: Consider purging highly sensitive information from memory after use. Clearing sensitive data from memory reduces its exposure window.
If sensitive information is logged or stored insecurely, it becomes vulnerable to unauthorized access.

Code examples:
Unsafe code: an application that logs sensitive user passwords in clear text violates the principle of purging sensitive information from logs.

public class PasswordLogger {
    public void logPassword(String password) {
        // Logs sensitive data - violates secure coding guidelines
        System.out.println("Password: " + password);
    }
}

Safe code: to comply with secure coding guidelines, sensitive data should be sanitized or excluded from logs entirely.

public class SecurePasswordLogger {
    public void logPassword() {
        System.out.println("Password logging is not permitted.");
    }
}

- Guideline 3 Injection and Inclusion (Oracle, n.d.).
Guideline 3-1 / INJECT-1: Generate valid formatting. Input should always be sanitized to prevent incorrect formatting issues.
Guideline 3-2 / INJECT-2: Avoid dynamic SQL. Always use parameterized SQL queries to eliminate SQL injection risks.
These vulnerabilities may allow attackers to manipulate queries and access, modify, or delete sensitive data.

Code examples:
Unsafe code: using dynamic SQL queries to process user inputs without sanitization is a common mistake.

String query = "SELECT * FROM users WHERE username = '" + username + "'";
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery(query);

Safe code: instead, parameterized queries should be used to prevent injection attacks.

String query = "SELECT * FROM users WHERE username = ?";
PreparedStatement pstmt = connection.prepareStatement(query);
pstmt.setString(1, username);
ResultSet rs = pstmt.executeQuery();
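Guideline 2-3 above (purging highly sensitive information from memory after use) is the one guideline in this list without an accompanying snippet. The following is a minimal sketch of one common approach, not taken from the Oracle guide: holding a password in a char array instead of a String and overwriting it once it is no longer needed.

import java.util.Arrays;

public class PasswordPurger {
    public static void authenticate(char[] password) {
        try {
            // Use the password here, e.g., pass it to an authentication API
        } finally {
            // Overwrite the array so the secret does not linger in memory
            Arrays.fill(password, '\0');
        }
    }
}

Unlike a String, which is immutable and may remain in the heap until garbage collected, a char array can be explicitly cleared, shrinking the window in which the secret is exposed.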
- Guideline 5 Input Validation (Oracle, n.d.).
Guideline 5-1 / INPUT-1: Validate inputs. Input from untrusted sources should be sanitized and validated.
Guideline 5-2 / INPUT-2: Validate output from untrusted objects as input. Output from untrusted sources should be revalidated before further processing.
Improperly validated inputs may allow attackers to execute malicious code or access restricted data.

Code example:
Safe code: proper input validation ensures that malicious code is not injected.

public void validateInput(String userInput) {
    if (userInput == null || userInput.isEmpty() || userInput.contains("..")) {
        throw new IllegalArgumentException("Invalid input detected.");
    }
    System.out.println("Validated input: " + userInput);
}

- Guideline 6 Mutability (Oracle, n.d.).
Guideline 6-1 / MUTABLE-1: Prefer immutability for value types. Immutable objects avoid unintended modifications in shared contexts.
Guideline 6-2 / MUTABLE-2: Create copies of mutable output values. Return copies of mutable objects to ensure encapsulation.
Ignoring these guidelines can lead to inconsistent object states or security vulnerabilities, especially when mutable objects expose sensitive information.

Code example:
Safe code: creating immutable objects or making safe copies reduces risks associated with mutable state.

public class ImmutableExample {
    private final List<String> items;

    public ImmutableExample(List<String> items) {
        this.items = new ArrayList<>(items); // Creates a safe copy
    }

    public List<String> getItems() {
        return Collections.unmodifiableList(items); // Returns an immutable view
    }
}

To summarize, sensitive data is information that individuals or organizations want to protect from public exposure because its exposure could result in harm to the person or the organization. Factors such as improper handling of sensitive information, vulnerabilities to data injection attacks, the unsafe handling of mutable objects, and insufficient input validation can compromise an application's integrity. However, by adhering to secure coding guidelines such as avoiding the logging of sensitive information, using parameterized SQL queries to prevent injection attacks, validating all inputs, and handling mutable objects correctly, developers can build Java applications that are secure and keep sensitive data protected.

References:
Baig, A. (2021, May 17). What is sensitive data? Securiti. https://securiti.ai/blog/what-is-sensitive-data/
Oracle (n.d.). Secure coding guidelines for Java SE. Updated May 2023. Oracle. https://www.oracle.com/java/technologies/javase/seccodeguide.html
- Recursion: Concepts, Components, and Practical Applications - Java
This article explains the concept of recursion in programming. It describes its key components: the base case and the recursive case. Using a Java example, it illustrates how recursion is implemented and emphasizes safeguards to prevent infinite loops and stack overflow errors. Alexander S. Ricciardi July 8, 2024

In computer science, understanding the concept of recursion is essential, as it is often the basis of more complex algorithms, and in programming, it is a tool used to solve problems by breaking them down into smaller, more manageable subproblems. This post explores the components of a recursive method, the base case and the recursive case, using the programming language Java.

Recursive Method Explanation

A recursive algorithm or method solves complex problems by calling itself and by breaking the problems into smaller, more manageable subproblems. The basic components needed to create a recursive method are a base case and a recursive case. A base case is a condition that, when met, stops the recursion; it is usually checked in an 'if' statement. A recursive case is a set of code lines or functionalities that are computed if the base case condition is not met, always followed by the recursive method calling itself, usually with a modified input. Typically, the code lines and the recursive call are found in an 'else' statement following the 'if' statement that checks whether the base condition is met. However, if the 'if' statement contains a 'return' statement, the code lines and the recursive call are found right after the 'if' statement. Note that a recursive method that calls itself with an unmodified input, or a recursive method that does not take an input, will not create an infinitely recursive loop if and only if the base case condition is based on external factors that change independently of the method's input.

To avoid creating an infinitely recursive method, the method needs to contain at least one base case that will eventually be reached. Note that a recursive method can have more than one base case. For example, the recursive method can contain a base case that checks a specific condition, and others can act as safeguards. If the first base case condition is never reached, a safeguard such as a counter can limit the number of recursions based on the available computing memory, preventing a stack overflow error. On a side note, the Python programming language has a built-in mechanism that limits the number of recursions a program can perform; if needed, this limit can be modified, either decreased or increased, by using the Python system (sys) library.

Here is an example of a recursive method:

import java.util.Random;

public class AreWeThereYet {

    private static final Random randomGenerateMiles = new Random();

    public static void askAreWeThereYet(int totalMilesDriven, int tripTotalMiles) {
        // ---- Base case ---- We've arrived!
        if (totalMilesDriven >= tripTotalMiles) {
            System.out.println(" We're here! Finally! ");
            return;
        }

        // ---- Recursive case ----
        // Miles driven
        int milesDriven = randomGenerateMiles.nextInt(50) + 1; // Drive 1-50 miles

        // Keep asking and driving
        System.out.println("Are we there yet?");
        System.out.println(" Not yet, we've traveled " + totalMilesDriven + " miles. ");
        if (milesDriven + totalMilesDriven >= tripTotalMiles) {
            milesDriven = tripTotalMiles - totalMilesDriven;
        }
        System.out.println(" --- Drives " + milesDriven + " miles --- ");
        totalMilesDriven += milesDriven;

        // ---- Recursive call ----
        askAreWeThereYet(totalMilesDriven, tripTotalMiles);
    }

    public static void main(String[] args) {
        int tripTotalMiles = 100; // Total trip distance
        System.out.println(" Trip total miles: " + tripTotalMiles);
        askAreWeThereYet(0, tripTotalMiles);
    }
}

Outputs:

 Trip total miles: 100
Are we there yet?
 Not yet, we've traveled 0 miles.
 --- Drives 10 miles ---
Are we there yet?
 Not yet, we've traveled 10 miles.
 --- Drives 26 miles ---
Are we there yet?
 Not yet, we've traveled 36 miles.
 --- Drives 17 miles ---
Are we there yet?
 Not yet, we've traveled 53 miles.
 --- Drives 12 miles ---
Are we there yet?
 Not yet, we've traveled 65 miles.
 --- Drives 23 miles ---
Are we there yet?
 Not yet, we've traveled 88 miles.
 --- Drives 12 miles ---
 We're here! Finally!

To summarize, recursion is an elegant and powerful approach to solving complex problems. By defining a base case and a recursive case, developers can create algorithms that effectively manage problem complexity. However, it is important to ensure that recursion stops appropriately to prevent infinite loops or stack overflow errors. The provided Java example, "AreWeThereYet," illustrates these principles in action, showing how recursion can be used dynamically to solve a problem while maintaining clarity and functionality. As we continue to explore programming techniques, recursion remains an invaluable skill that underscores the importance of thoughtful problem decomposition and method design.













