
SEARCH RESULTS


  • Asymptotic Analysis Exercises - Java

    This article showcases asymptotic analyses evaluating algorithms' time complexity using Big-Oh notation through a series of Java-based programming problems. Alexander S. Ricciardi, September 1st, 2024

This documentation is part of the Critical Thinking 3 Assignment from CSC400: Data Structures and Algorithms at Colorado State University Global. It consists of a series of exercises designed to demonstrate the principles of asymptotic analysis. Asymptotic analysis uses the Big-Oh notation.

"In computer science, the Big-Oh notation is used to describe the time complexity or space complexity of algorithms (Geeks for Geeks, 2024). Mathematically, it defines the upper bound of an algorithm's growth rate, known as the asymptotic upper bound, and is denoted as f(n) is O(g(n)) or f(n) ∈ O(g(n)), pronounced f(n) is Big-Oh of g(n). The term "asymptotic" refers to the behavior of the function as its input size n approaches infinity. In the context of computer science, it describes the worst-case scenario for time complexity or space complexity. For example, an algorithm with O(n²) time complexity will grow much faster than one with O(n) as the input size increases, with n representing the number of primitive operations. Primitive operations are low-level instructions with a constant execution time." (Ricciardi, 2024, post)

Definition of Big-Oh: Let f(n) and g(n) be functions mapping positive integers to positive real numbers. We say that f(n) ∈ O(g(n)) if there is a real constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≤ c⋅g(n), for n ≥ n₀ (Carrano & Henry, 2018).

The Assignment Directions: Complete the following exercises. For each exercise, show your work and all the steps taken to determine the Big-Oh for each problem. Partial points cannot be awarded without showing work.

Exercise 1
What is the Big-Oh of the following computation?

int sum = 0;
for (int counter = n; counter > 0; counter = counter - 2)
    sum = sum + counter;

Exercise 2
Suppose your implementation of a particular algorithm appears in Java as follows:

for (int pass = 1; pass <= n; pass++) {
    for (int index = 0; index < n; index++) {
        for (int count = 1; count < 10; count++) {
            . . .
        } // end for
    } // end for
} // end for

The algorithm involves an array of "n" items. The previous code shows only the repetition in the algorithm, but it does not show the computations that occur within the loops. Those computations, however, are independent of "n." What is the order of the algorithm?

Exercise 3
Consider two programs, A and B. Program A requires 1000 × n² operations and Program B requires 2n operations. For which values of n will Program A execute faster than Program B?

Exercise 4
Consider an array of length "n" containing unique integers in random order and in the range 1 to n + 1. For example, an array of length 5 would contain 5 unique integers selected randomly from the integers 1 through 6. Thus the array might contain 3 6 5 1 4. Of the integers 1 through 6, notice that 2 was not selected and is not in the array. Write Java code that finds the integer that does not appear in such an array. Explain the Big-Oh in your code.

My notes:
- Each exercise starts on a new page.
- g(n) is the number of primitive operations.
- The summation property of a constant: a constant summed over k iterations totals c⋅k.

Exercise 1
What is the Big-Oh of the following computation?

int sum = 0;
for (int counter = n; counter > 0; counter = counter - 2)
    sum = sum + counter;

In this exercise g(n) is the number of primitive operations.
Step 1: Understanding the code
- The 'sum' variable initializes to '0'.
- The loop iterates 'counter' from 'n' down toward '1'. Note that all the variables and constants in the loop are integers.
  o The 'counter' variable initializes to 'n'.
  o The 'counter' variable is compared to '0'; the loop iterates as long as 'counter' is greater than '0'.
  o The 'counter' variable is decremented by '2' at the end of each iteration; this means that 'counter' is compared to '0' before being decremented.
  o The 'sum' variable is incremented by the value of the 'counter' variable in each iteration of the for-loop until 'counter' is less than or equal to '0'.

If 'n = 10':
- 1st iteration: 'counter = 10' and 'sum = 0 + 10 = 10'
- 2nd iteration: 'counter = 8' and 'sum = 10 + 8 = 18'
- 3rd iteration: 'counter = 6' and 'sum = 18 + 6 = 24'
- 4th iteration: 'counter = 4' and 'sum = 24 + 4 = 28'
- 5th iteration: 'counter = 2' and 'sum = 28 + 2 = 30'
The number of iterations is n/2.

If 'n = 11':
- 1st iteration: 'counter = 11' and 'sum = 0 + 11 = 11'
- 2nd iteration: 'counter = 9' and 'sum = 11 + 9 = 20'
- 3rd iteration: 'counter = 7' and 'sum = 20 + 7 = 27'
- 4th iteration: 'counter = 5' and 'sum = 27 + 5 = 32'
- 5th iteration: 'counter = 3' and 'sum = 32 + 3 = 35'
- 6th iteration: 'counter = 1' and 'sum = 35 + 1 = 36'
The number of iterations is (n+1)/2.

Step 2: Counting the number of iterations
The for-loop will iterate as long as 'counter > 0'; 'counter' is initialized to 'n' and decremented by '2'. If k is the number of iterations, then the last iteration occurs when n - 2k ≤ 0. Solving for k, k ≥ n/2. Using the examples above of 'n = 10' and 'n = 11', we can say that k = n/2 when n is even and k = (n+1)/2 when n is odd.

Justification: An odd number is an integer of the form n = 2c + 1, where c is an integer. Then k = (2c+1+1)/2 = c + 1. An even number is an integer of the form n = 2c, where c is an integer. Then k = (2c)/2 = c.
If c = 5, then n_even = 2∙5 = 10 and k_even = n_even/2 = 5 = c; and n_odd = 2∙5+1 = 11 and k_odd = (11+1)/2 = 12/2 = 6 = c+1.

Step 3: Number of primitive operations g(n) per instruction
- The 'sum' variable initializes to '0'; it is executed only once, g₁(n) = 1.
- The 'counter' variable initializes to 'n' by the for-loop statement; this is executed only once, g₂(n) = 1.
- The 'counter' variable is compared to '0'; this is executed in each iteration of the loop. The number of iterations is k_even = n_even/2, then g₃(n_even) = n_even/2; and k_odd = (n_odd+1)/2, then g₃(n_odd) = (n_odd+1)/2.
- The 'counter' variable is decremented by '2'; this is done in each iteration after the comparison of 'counter' to '0' returns true. The number of iterations is k_even = n_even/2, then g₄(n_even) = n_even/2; and k_odd = (n_odd+1)/2, then g₄(n_odd) = (n_odd+1)/2.
- The 'sum' variable is incremented by the value of the 'counter' variable in each iteration of the for-loop until 'counter' is less than or equal to '0'; the number of times this instruction executes is the number of iterations. k_even = n_even/2, then g₅(n_even) = n_even/2; and k_odd = (n_odd+1)/2, then g₅(n_odd) = (n_odd+1)/2.

Step 4: The total of primitive operations
If n is even, k = n/2:
- g(n) = g₁(n) + g₂(n) + g₃(n) + g₄(n) + g₅(n)
- g(n) = 1 + 1 + k + k + k = 1 + 1 + n/2 + n/2 + n/2
- g(n) = 2 + (3/2)n
If n is odd, k = (n+1)/2:
- g(n) = g₁(n) + g₂(n) + g₃(n) + g₄(n) + g₅(n)
- g(n) = 1 + 1 + k + k + k = 1 + 1 + (n+1)/2 + (n+1)/2 + (n+1)/2
- g(n) = 7/2 + (3/2)n

Step 5: The Big-Oh notation
By the definition of the Big-Oh notation, O(g(n)):
If n is even, with g(n) = 2 + (3/2)n, then the asymptotic complexity is O(n).
Justification: 2 + (3/2)n ≤ cn, for c = 4/2 + 3/2 = 7/2, when n ≥ n₀ = 2 and n is even.
If n is odd, with g(n) = 7/2 + (3/2)n, then the asymptotic complexity is O(n).
Justification: 7/2 + (3/2)n ≤ cn, for c = 7/2 + 3/2 = 5, when n ≥ n₀ = 1 and n is odd.
When n is even Big-Oh is O(n) and when n is odd Big-Oh is O(n), thus:
The Big-Oh of the computation is O(n).
In conclusion, the asymptotic analysis shows that the complexity of the algorithm is directly proportional to the size of the input n, indicating a linear growth rate. O(n) represents a linear type of complexity.

Exercise 2
Suppose your implementation of a particular algorithm appears in Java as follows:

for (int pass = 1; pass <= n; pass++) {
    for (int index = 0; index < n; index++) {
        for (int count = 1; count < 10; count++) {
            . . .
        } // end for
    } // end for
} // end for

The algorithm involves an array of "n" items. The previous code shows only the repetition in the algorithm, but it does not show the computations that occur within the loops. Those computations, however, are independent of "n." What is the order of the algorithm?

Note that the order of an algorithm refers to its "time complexity" or Big-Oh notation. Below is the time complexity analysis of the code above.

Step 1: Understanding the code
- The outer loop ('pass') iterates from '1' to 'n' inclusive. Note that all the variables and constants in the loops are integers.
  o The 'pass' variable initializes to '1'.
  o The 'pass' variable is compared to 'n'; the loop iterates as long as 'pass' is less than or equal to 'n'.
  o The 'pass' variable is post-incremented by '1'; this means that 'pass' is compared to 'n' before being incremented.
- The middle loop ('index') iterates from '0' to 'n' exclusive.
  o The 'index' variable initializes to '0'.
  o The 'index' variable is compared to 'n'; the loop iterates as long as 'index' is less than 'n'.
  o The 'index' variable is post-incremented by '1'; this means that 'index' is compared to 'n' before being incremented.
- The inner loop ('count') iterates from '1' to '10' exclusive.
  o The 'count' variable initializes to '1'.
  o The 'count' variable is compared to '10'; the loop iterates as long as 'count' is less than '10'.
  o The 'count' variable is post-incremented by '1'; this means that 'count' is compared to '10' before being incremented.

Step 2: Counting the number of iterations
- The outer loop will iterate as long as 'pass' is less than or equal to 'n', and it is post-incremented by '1', starting at 'pass = 1'. If k is the number of iterations, the loop runs for pass = 1, 2, ..., n, so k = n. Thus, the loop iterates n times.
- The middle loop will iterate as long as 'index' is strictly less than 'n', and it is post-incremented by '1', starting at 'index = 0'. If k is the number of iterations, the loop runs for index = 0, 1, ..., n-1, so k = n. Thus, the loop iterates n times. However, since the loop is nested within the outer loop, it iterates a total of n ∙ n = n² times.
- The inner loop will iterate as long as 'count' is strictly less than '10', and it is post-incremented by '1', starting at 'count = 1'. If k is the number of iterations, the loop runs for count = 1, 2, ..., 9, so k = 9. Thus, the loop iterates 9 times. However, since the loop is nested within the middle loop, it iterates a total of 9 ∙ n² = 9n² times.

Step 3: Number of primitive operations g(n) per instruction
- The number of primitive operations for the outer loop is:
  o The 'pass' variable initializes to '1' by the for-loop statement; this is executed only once, then g₁(n) = 1.
  o The 'pass' variable is compared to 'n'; this is executed in each iteration of the loop, and the number of iterations is k = n, then g₂(n) = n.
  o 'pass' is post-incremented by '1'; this is done in each iteration after the comparison of 'pass' to 'n' returns true. The number of iterations is k = n, then g₃(n) = n.
  The total of primitive operations for the outer loop is:
  g_outerloop(n) = g₁(n) + g₂(n) + g₃(n) = 1 + n + n = 1 + 2n
- The number of primitive operations for the middle loop is:
  o The 'index' variable initializes to '0' by the for-loop statement; this is executed only once per entry into the loop. However, since the middle loop is nested within the outer loop, the 'index' variable is initialized n times, then g₁(n) = 1 ∙ n = n.
  o The 'index' variable is compared to 'n'; this is executed in each iteration of the loop, and the number of iterations is k = n², then g₂(n) = n².
  o 'index' is post-incremented by '1'; this is done in each iteration after the comparison of 'index' to 'n' returns true, and the number of iterations is k = n², then g₃(n) = n².
  The total of primitive operations for the middle loop is:
  g_middleloop(n) = g₁(n) + g₂(n) + g₃(n) = n + n² + n² = n + 2n²
- The number of primitive operations for the inner loop is:
  o The 'count' variable initializes to '1' by the for-loop statement; this is executed only once per entry into the loop. However, since the inner loop is nested within the middle loop, the 'count' variable is initialized n² times, then g₁(n) = 1 ∙ n² = n².
  o The 'count' variable is compared to '10'; this is executed in each iteration of the loop, and the number of iterations is k = 9n², then g₂(n) = 9n².
  o 'count' is post-incremented by '1'; this is done in each iteration after the comparison of 'count' to '10' returns true, and the number of iterations is k = 9n², then g₃(n) = 9n².
The total of primitive operations for the inner loop is:
g_innerloop(n) = g₁(n) + g₂(n) + g₃(n) = n² + 9n² + 9n² = 19n²

Step 4: The total of primitive operations done by the loops
- g_loops(n) = g_outerloop(n) + g_middleloop(n) + g_innerloop(n)
- g_loops(n) = 1 + 2n + n + 2n² + 19n²
- g_loops(n) = 21n² + 3n + 1

Step 5: The Big-Oh notation
The computations inside the inner loop, the only place where computations are done, are independent of 'n' and can therefore be treated as a constant, call it c. The number of times that computation is executed is g_innercomp(n) = 9cn².
With g_innercomp(n) = 9cn², the asymptotic complexity is O_innercomp(n²).
Justification: 9cn², with c being a constant, satisfies 9cn² ≤ (9c)n² = c′n² for the constant c′ = 9c, when n ≥ n₀ = 1. By the definition of the Big-Oh notation, O_innercomp(n²). In other words, we can ignore the array computations in the time complexity analysis; because the computations within the inner loop are independent of "n", they are constant in the context of time complexity and are absorbed by the Big-Oh notation.
The time complexity of the loops is O_loops(n²): with g_loops(n) = 21n² + 3n + 1, the asymptotic complexity is O_loops(n²).
Justification: 21n² + 3n + 1 ≤ (21+3+1)n² = cn², for c = 25, when n ≥ n₀ = 1, by the definition of the Big-Oh notation.
The overall complexity of the algorithm is O(n²).
Justification: With O_loops(n²), g_loops(n) = n², and O_innercomp(n²), g_innercomp(n) = n², then n² + n² = 2n² = cn², for c = 2. By the definition of the Big-Oh notation, the overall complexity of the algorithm is O(n²). Therefore:
The order of the algorithm is O(n²).
In conclusion, the computations within the inner loop are independent of the input 'n', meaning that, in the context of time complexity, the computation in the inner loop is constant. Since this is the only place where the actual computations occur, we can ignore it in the time complexity analysis. The order of the algorithm is O(n²), indicating that the time complexity is quadratic. O(n²) is a quadratic type of time complexity.

Exercise 3
Consider two programs, A and B. Program A requires 1000 × n² operations and Program B requires 2n operations. For which values of n will Program A execute faster than Program B?

The number of operations for Program A is g_A(n) = 1000n², where n is the number of inputs. The number of operations for Program B is g_B(n) = 2n, where n is the number of inputs. When comparing algorithms, a primitive operation corresponds to 1 unit of time, so g_A(n) and g_B(n) can be viewed as growth rates where the output is the duration and the input is the input size 'n'. Therefore, to find the 'n' values where Program A executes faster than Program B, we need to identify where g_A(n) < g_B(n):
g_A(n) < g_B(n)
1000n² < 2n
n²/n < 2/1000
n < 0.002
The values of 'n' where Program A executes faster than Program B lie in the interval (-∞, 0.002), 0.002 excluded. However, 'n', the input size, cannot be a decimal or negative; it can be defined only as a nonnegative integer. Therefore, Program A will never run faster than Program B.
Another method that provides a more accurate representation of the relationship is to use the time complexity of the algorithms to compare them.
The time complexity of Program A is O_A(n²).
Justification: 1000n² ≤ (1000)n² = cn², for c = 1000, when n ≥ n₀ = 1.
The time complexity of Program B is O_B(n).
Justification: 2n ≤ (2)n = cn, for c = 2, when n ≥ n₀ = 1.
To find the values of 'n' where Program A executes faster than Program B, we need to identify where O_A(n²) < O_B(n):
n² < n => n²/n < 1 => n < 1; this is not possible when 'n' is an integer and n ≥ n₀ = 1.
No values of 'n' exist where Program A executes faster than Program B.

Exercise 4
Consider an array of length "n" containing unique integers in random order and in the range 1 to n + 1. For example, an array of length 5 would contain 5 unique integers selected randomly from the integers 1 through 6. Thus the array might contain 3 6 5 1 4. Of the integers 1 through 6, notice that 2 was not selected and is not in the array. Write Java code that finds the integer that does not appear in such an array. Explain the Big-Oh in your code.

Note that the expected sum of the integers 1 through n + 1 is:
(n + 1)(n + 2) / 2

The Java code:

/**
 * Finds the missing number
 *
 * @param array The array of unique integers with one integer missing.
 * @param n     The length of the array
 * @return The missing integer
 */
public static int missingNum(int[] array, int n) {
    // The expected sum of integers from 1 to n+1
    int expectedSum = (n + 1) * (n + 2) / 2;
    int actualSum = 0;
    // The actual sum of the numbers in the array
    for (int i = 0; i < n; i++) {
        actualSum += array[i];
    }
    return expectedSum - actualSum;
}

Step 1: Understanding the code
- The 'expectedSum' variable initializes to '(n + 1) * (n + 2) / 2'.
- The 'actualSum' variable initializes to '0'.
- The loop iterates through the array from '0' to 'n' exclusive. Note that all the variables and constants in the loop are integers.
  o The 'i' variable initializes to '0'.
  o The 'i' variable is compared to 'n'; the loop iterates as long as 'i' is less than 'n'.
  o The 'i' variable is post-incremented by '1'; this means that 'i' is compared to 'n' before being incremented.
  o The 'actualSum' variable is incremented by the value of 'array[i]' in each iteration of the for-loop until 'i' is equal to or greater than 'n'.
- The method returns 'expectedSum - actualSum'.

Step 2: Counting the number of iterations in the array loop
- The array loop will iterate as long as 'i' is less than 'n', and it is post-incremented by '1', starting at 'i = 0'. If k is the number of iterations, the loop runs for i = 0, 1, ..., n-1, so k = n. Thus, the loop iterates n times.

Step 3: Number of primitive operations g(n) per instruction
- The 'expectedSum' variable initializes to '(n + 1) * (n + 2) / 2'; it is executed only once, g₁(n) = 1.
- The 'actualSum' variable initializes to '0'; it is executed only once, g₂(n) = 1.
- The number of primitive operations for the array loop is:
  o The 'i' variable initializes to '0' by the for-loop statement; this is executed only once, g₃(n) = 1.
  o The 'i' variable is compared to 'n'; this is executed in each iteration of the loop, and the number of iterations is k = n, then g₄(n) = n.
  o 'i' is post-incremented by '1'; this is done in each iteration after the comparison of 'i' to 'n' returns true; the number of iterations is k = n, then g₅(n) = n.
  o 'actualSum' is incremented with the value of 'array[i]' in each iteration of the for-loop until 'i' is equal to or greater than 'n'.
The number of times this instruction will be executed is the number of iterations k = n, then g₆(n) = n.
The total of primitive operations for the loop is:
g_loop(n) = g₃(n) + g₄(n) + g₅(n) + g₆(n) = 1 + n + n + n = 1 + 3n
- The method returns 'expectedSum - actualSum'; this instruction is performed only once, then g₇(n) = 1.

Step 4: The total of primitive operations
g(n) = g₁(n) + g₂(n) + g_loop(n) + g₇(n)
g(n) = 1 + 1 + 1 + 3n + 1 = 4 + 3n

Step 5: The Big-Oh notation
The Big-Oh notation is O(n).
Justification: 4 + 3n ≤ (4 + 3)n = cn, for c = 7, when n ≥ n₀ = 1. By the definition of the Big-Oh notation, the overall complexity of the code is O(n).
The Big-Oh notation of my code is O(n). The Big-Oh notation O(n) in my code indicates that the complexity of the code grows in a 1-to-1 proportional relationship with the size of the input 'n', which is the length of the 'array' array. In other words, the complexity of the code grows linearly with the size of the array; this is known as linear complexity. For example, in terms of time complexity, as the array gets larger, the time it takes to run the code increases proportionally. This is because iterating through each element in the loop is the most time-consuming part of the code, and all other primitive operations are ignored by the Big-Oh notation due to their minimal impact on the overall complexity of the code.
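As a quick sanity check, below is a minimal test harness for the missingNum method; the harness itself (the class name MissingNumTest and its main method) is my own illustrative addition, not part of the assignment submission.

public class MissingNumTest {
    public static void main(String[] args) {
        int[] array = {3, 6, 5, 1, 4}; // the example from the prompt: 1..6 with 2 missing
        System.out.println("Missing integer: " + missingNum(array, array.length)); // prints 2
    }

    // missingNum as defined above: expected sum of 1..n+1 minus the actual sum
    public static int missingNum(int[] array, int n) {
        int expectedSum = (n + 1) * (n + 2) / 2; // (5+1)(5+2)/2 = 21 for the example
        int actualSum = 0;
        for (int i = 0; i < n; i++) {
            actualSum += array[i]; // 3+6+5+1+4 = 19
        }
        return expectedSum - actualSum; // 21 - 19 = 2
    }
}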

  • Stack: Concepts and Applications - Java

    This article explains the fundamental concept of the Stack Abstract Data Type (ADT), its Last-In-First-Out (LIFO) principle, real-life applications such as browser history navigation and expression evaluation, and provides a basic Java implementation of a custom Stack. Alexander S. Ricciardi, September 8th, 2024

In computer science, a Stack is a fundamental Abstract Data Type (ADT) that follows the Last-In-First-Out (LIFO) principle, which means that the last item added to the Stack is the first to be removed. The term stack comes from the analogy of a stack of plates in a spring-loaded dispenser, commonly found in cafeterias (Carrano & Henry, 2018), where plates are pushed onto the top of the stack and popped off from the top, making the push and pop operations fundamental to the Stack ADT's functionality. The operations usually implemented in a Stack ADT are:

Push(e): Adds a new element e to the top of the Stack.
Pop(): Removes and returns the top element of the Stack.
Peek(): Returns the top element without removing it.
IsEmpty(): Checks if the Stack is empty.
Clear(): Removes all elements from the Stack.

A real-life scenario where the Stack ADT is used is in web browser history navigation. For example, each time a user leaves a web page, the browser pushes the page URL onto the URL history Stack. When the browser's "Back" button is clicked, the previous page's URL is popped from the top of the history Stack, and the current page's URL is pushed onto another Stack, let's call it the URL forward Stack. Conversely, when the user clicks the "Forward" button, the current page's URL is pushed onto the URL history stack, and the URL at the top of the URL forward stack is popped and displayed. (A small sketch of this two-stack scheme appears at the end of this article.)

A real-life scenario where the Stack ADT is a mandatory choice is in algebraic expression evaluation, particularly when converting an infix algebraic expression to a postfix algebraic expression. In infix expressions, operators are placed between operands (e.g., A + B); this is the notation commonly used by humans rather than by machines, but it requires parentheses and precedence rules. On the other hand, in a postfix expression, operators come after operands (e.g., A B +); there is no need for parentheses, and it is easier for computers to evaluate using a Stack, as it is based solely on the order of the operands and operators. In this scenario, the Stack ADT is perfectly suited to manage the order of operations: a stack is used to temporarily hold operators until their precedence and associativity are determined, ensuring that the algebraic expression is evaluated in the correct order. This makes the Stack an efficient and essential ADT in such algorithms.

Another mandatory application of a Stack ADT is in managing function calls in programming, particularly in environments that support recursion, such as the Java Virtual Machine (JVM) or C++ runtime environments. When a function is called, the function's local variables and return address are pushed onto the call Stack. This allows the JVM, for example, to keep track of where to return in the program execution after the function execution is completed, particularly during recursive and nested function calls. Below is a simple Java program that mimics the basic functionality of the Java Stack memory.
import java.util.EmptyStackException;

// Custom Stack class
public class MyLinkedStack<T> {
    private Node<T> top;
    private int size;

    // Node class to represent each element in the stack
    private static class Node<T> {
        private T data;
        private Node<T> next;

        public Node(T data) {
            this.data = data;
        }
    }

    public MyLinkedStack() {
        this.top = null;
        this.size = 0;
    }

    public void push(T data) {
        Node<T> newNode = new Node<>(data);
        newNode.next = top; // set the next reference to the current top
        top = newNode;      // make the new node the top of the stack
        size++;
    }

    public T pop() {
        if (isEmpty()) {
            throw new EmptyStackException();
        }
        T poppedData = top.data;
        top = top.next; // remove the top node
        size--;
        return poppedData;
    }

    public T peek() {
        if (isEmpty()) {
            throw new EmptyStackException();
        }
        return top.data;
    }

    public boolean isEmpty() {
        return top == null;
    }

    public int size() {
        return size;
    }
}

// Class with main method to test the stack
class Main {
    public static void main(String[] args) {
        MyLinkedStack<Integer> stack = new MyLinkedStack<>();
        stack.push(10);
        stack.push(20);
        stack.push(30);
        System.out.println("Top element: " + stack.peek());
        System.out.println("Stack size: " + stack.size());
        System.out.println("Popped element: " + stack.pop());
        System.out.println("Popped element: " + stack.pop());
        System.out.println("Top element after popping: " + stack.peek());
        System.out.println("Stack size after popping: " + stack.size());
    }
}

Outputs:

Top element: 30
Stack size: 3
Popped element: 30
Popped element: 20
Top element after popping: 10
Stack size after popping: 1

To summarize, a Stack is a fundamental Abstract Data Type (ADT) that follows the Last-In-First-Out (LIFO) principle, making it ideal for applications like browser history navigation and algebraic expression evaluation. It is also essential in managing function calls in programming, particularly in environments that support recursion.

References:
Carrano, F. M., & Henry, T. M. (2018, January 31). 6.1 Stacks. Data structures and abstractions with Java (5th Edition). Pearson.
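As referenced above, here is a minimal sketch of the browser back/forward navigation scenario built on two stacks. It is my own illustrative example; the class and method names (BrowserHistory, visit, back, forward) are assumptions, and java.util.ArrayDeque stands in for the custom stack.

import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: browser navigation with two stacks (names are hypothetical)
class BrowserHistory {
    private final Deque<String> backStack = new ArrayDeque<>();
    private final Deque<String> forwardStack = new ArrayDeque<>();
    private String current;

    BrowserHistory(String homePage) {
        current = homePage;
    }

    void visit(String url) {
        backStack.push(current);   // leaving a page pushes it onto the history stack
        forwardStack.clear();      // a brand-new visit invalidates the forward chain
        current = url;
    }

    String back() {
        if (!backStack.isEmpty()) {
            forwardStack.push(current); // current page becomes "forward"
            current = backStack.pop();
        }
        return current;
    }

    String forward() {
        if (!forwardStack.isEmpty()) {
            backStack.push(current);    // current page goes back onto the history stack
            current = forwardStack.pop();
        }
        return current;
    }
}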

  • Understanding Vectors: Foundations, Applications, and Transformations in Computer Graphics

    This article explores the fundamental concepts of vectors, their mathematical properties, and their applications in computer graphics, particularly in transformations such as translation, scaling, and rotation within multidimensional Euclidean spaces. Alexander S. Ricciardi, September 8th, 2024

Vectors are a fundamental component of geometry, and they are a crucial concept in Computer Science for performing operations in multidimensional spaces. They are used in fields like computer graphics, machine learning, and scientific computing to represent directions, implement transformations, optimize algorithms, and process data. In this post, I will define what a vector is and discuss its applications in graphic visualization.

Mathematically, a vector is characterized by magnitude and direction. In physics and mathematics, the vector is used to define a quantity with direction and magnitude. In a vector space, a mathematical structure formed by a collection of vectors, a vector is an object that can be added to other vectors and multiplied by scalars (real or complex numbers). Some characteristics of a vector space, where u, v, and w are vectors and α and β are scalars, are:

Commutativity: u + v = v + u
Associativity: (u + v) + w = u + (v + w)
Zero vector: There exists a zero vector 0 such that v + 0 = v
Inverse elements of addition: For each vector v there exists a vector -v such that v + (-v) = 0
Distributivity of scalar multiplication over vector addition: α(u + v) = αu + αv
Distributivity of scalar multiplication over scalar addition: (α + β)v = αv + βv
Associativity of scalar multiplication: α(βv) = (αβ)v

In an affine space, a vector space with points, a vector can be defined as a directed line segment between any two points (Angel & Shreiner, 2020). It can be represented in point-point notation in the form v = P - Q, where v is the vector and P and Q are points; the direction of the vector is from Q to P, and its magnitude is the distance between the two points. Some characteristics of an affine space are:

No origin: Affine spaces do not have a fixed point of origin.
Point-vector operations: You can add a vector to a point to get another point, and subtract one point from another to get a vector.
Affine combinations: An affine combination of points is a weighted sum where the weights (coefficients similar to scalars) add up to 1. For example, the point P = αP₁ + βP₂, where α + β = 1, is an affine combination of the points P₁ and P₂.

In Euclidean space, a type of affine space, distances between points can be measured, angles between vectors can also be measured using the vectors' dot product, and Cartesian coordinates are used. Some characteristics of Euclidean space are:

Distance: The distance between two points is given by the Euclidean distance formula, d(P, Q) = √((p₁ - q₁)² + ... + (pₙ - qₙ)²).
Angles: Angles between vectors are measured using the dot product.
Cartesian coordinates: Points in Euclidean space are represented using Cartesian coordinates.
Orthogonality: Vectors are orthogonal (perpendicular) if their dot product is equal to 0.
Dot product: The dot product of two vectors relates to the angle between them, u ⋅ v = |u||v| cos θ.
Norm or magnitude: The length (magnitude) of a vector is computed as the square root of the sum of the squares of its components.
Transformation operations: Computation of linear transformations such as translation, rotation, scaling, and reflection.

A common example of a Euclidean space is Rⁿ.
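To make these Euclidean-space operations concrete, here is a small Java sketch of my own (the class name VectorOps and the example vectors are illustrative, not from Angel & Shreiner) computing the dot product, norm, and angle of two vectors in R³:

// Illustrative example: dot product, norm, and angle in R^3
public class VectorOps {
    // dot product: sum of componentwise products
    public static double dot(double[] u, double[] v) {
        double s = 0;
        for (int i = 0; i < u.length; i++) {
            s += u[i] * v[i];
        }
        return s;
    }

    // norm: square root of the sum of the squared components
    public static double norm(double[] v) {
        return Math.sqrt(dot(v, v));
    }

    public static void main(String[] args) {
        double[] u = {1, 0, 0};
        double[] v = {0, 1, 0};
        System.out.println("u . v = " + dot(u, v)); // 0.0 -> u and v are orthogonal
        System.out.println("|u| = " + norm(u));     // 1.0
        // angle = arccos((u . v) / (|u||v|)) = 90 degrees for these vectors
        double angle = Math.toDegrees(Math.acos(dot(u, v) / (norm(u) * norm(v))));
        System.out.println("angle = " + angle + " degrees");
    }
}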
The 2D plane R² and 3D space R³ are often visualized using Euclidean space operations, making them ideal for applications in computer graphics, where vectors are used in modeling and rendering to determine object positions, camera movements, and lighting calculations. One of the most useful applications of vectors is in transformations, where they are utilized for the following types of transformations:

Translation: Moving objects in space by adding a translation vector to their coordinates.
P' = P + T
Where: P and P' are position vectors, and T is a translation vector.

Scaling: Resizing objects by multiplying their coordinates by scaling factors.
P' = S ⋅ P
Where: P and P' are position vectors, and S is a scaling matrix, not a vector.

Rotation: Rotating objects around an axis using rotation matrices derived from vectors.
P' = R ⋅ P
Where: P and P' are position vectors, and R is a rotation matrix, not a vector.

Shearing and reflection: More complex transformations that modify the shape and orientation of an object using various vector operations.

Let's visit the concept of rotation about the y-axis. To perform this transformation we need the following matrix (counterclockwise rotation):

R_y(θ) = |  cos θ   0   sin θ   0 |
         |    0     1     0     0 |
         | -sin θ   0   cos θ   0 |
         |    0     0     0     1 |

Where: θ is the rotation angle (it dictates the amount of rotation).

Applying the rotation to a point P = (x, y, z, 1)ᵀ (technically the point does not rotate; it changes position following a circular path, because points are dimensionless, they represent a location in a system). Here, 1 is the homogeneous coordinate, used for matrix transformations in 3D space. It converts the point's 3D vector into a form that can easily be used for matrix calculations, particularly for translation transformations. The following video explains why it is used in graphic visualization: Quick Understanding of Homogeneous Coordinates for Computer Graphics.

Computing the rotation using the dot product of each matrix row with P:

P' = R_y(θ) ⋅ P = (x cos θ + z sin θ, y, -x sin θ + z cos θ, 1)ᵀ

Where: P' is the location of the new point after applying the rotation about the y-axis; notice that the value of y did not change, only the x and z values did. (A small numerical sketch of this rotation appears at the end of this article.)

Below is a representation of a 2D animation made of two independent components, the "I"-shaped object and the circles; the object is the logo for my Blog website and GitHub page. Note that the 'I' shape and the circles rotate about the z-axis in 3D (if the object were a 3D object), and about a fixed point, the center of the object, in 2D, in opposite directions. Once both components complete a 360° rotation, they reverse direction, switching from counterclockwise to clockwise and vice versa. The animation runs on an infinite loop.

In summary, vectors are fundamental geometric concepts that are essential in computer science for operations in multidimensional spaces. They have applications in fields like computer graphics, machine learning, and scientific computing. In this post, I explored their applications in graphic visualization, including their role in transformations such as translation, scaling, and rotation within Euclidean space.

References:
Angel, E., & Shreiner, D. (2020). Chapter 4 Geometric objects and transformations. Interactive computer graphics (8th edition). Pearson Education, Inc. ISBN: 9780135258262
Miolith (2023, December 3). Quick understanding of homogeneous coordinates for computer graphics [Video]. YouTube. https://www.youtube.com/watch?v=o-xwmTODTUI&t=134s
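As referenced above, here is a small numerical sketch of the y-axis rotation. It is my own illustrative example (the class name RotateY and the sample point are assumptions): it builds the 4x4 matrix given earlier and applies it to a homogeneous point by taking the dot product of each row with the point.

// Illustrative sketch: rotating a homogeneous point about the y-axis
public class RotateY {
    // 4x4 counterclockwise rotation matrix about the y-axis, as given above
    static double[][] rotationY(double theta) {
        double c = Math.cos(theta), s = Math.sin(theta);
        return new double[][] {
            {  c, 0, s, 0 },
            {  0, 1, 0, 0 },
            { -s, 0, c, 0 },
            {  0, 0, 0, 1 }
        };
    }

    // matrix-vector product: each row of R dotted with P
    static double[] multiply(double[][] r, double[] p) {
        double[] out = new double[4];
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                out[i] += r[i][j] * p[j];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[] p = { 1, 2, 0, 1 }; // point (1, 2, 0) in homogeneous form
        double[] pPrime = multiply(rotationY(Math.toRadians(90)), p);
        // Expected (up to floating-point error): (0, 2, -1, 1); y is unchanged
        System.out.printf("P' = (%.2f, %.2f, %.2f, %.2f)%n",
                pPrime[0], pPrime[1], pPrime[2], pPrime[3]);
    }
}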

  • Physical and Logical Memory: Addressing and Allocation in Operating Systems

    This article explains the differences between physical and logical memory, addresses, and various memory allocation methods, including fixed partitioning, dynamic partitioning, and the buddy system, in operating systems. Alexander S. Ricciardi, July 5th, 2024

In Operating Systems (OS), physical and logical memory refer to distinct concepts related to the actual storage and management of data, while physical and logical addresses relate to how the system locates and accesses that data. In simple terms, memory is a physical place where data is stored, and an address is the location of stored data, held in a register. Registers are commonly described as a type of computer memory built directly into the processor or Central Processing Unit (CPU) that is used to store and manipulate data during the execution of instructions (Hopkins, 2023).

Memory is an essential component of a computer system's functionality. "Computer memory is organized into at least two levels, referred to as main memory and secondary memory" (Stallings, 2018, p. 315). Main memory is also called system memory or Random-Access Memory (RAM). RAM is volatile, meaning that it does not provide permanent storage; i.e., if you turn a computer off, the data stored in the computer's RAM will be lost. Moreover, the CPU utilizes the RAM to store and quickly access the data necessary to execute processes. Secondary memory, on the other hand, is slower than RAM and is mostly utilized for long-term data storage. However, secondary memory can also be utilized to store and access data necessary to execute processes; this concept is called virtual memory (virtual RAM).

To link the different concepts: Physical memory refers to the actual memory hardware, that is, the RAM. While it can also refer to secondary memory, for the purpose of this post, I will define it as the RAM. Logical memory, in the context of this post, can be defined as the CPU address register, which is where data logical addresses are stored.

Logical and physical addresses are sets of finite bits pointing to data stored in the RAM. The logical address, also called the virtual address (not to be confused with virtual memory), is generated by the CPU during program execution. The set of all logical addresses generated for a program is referred to as the logical address space of the program. The physical address, on the other hand, points to a location in the RAM, and it is computed by the Memory Management Unit (MMU), a hardware component responsible for translating a logical address to a physical address. This safeguards the physical memory from being directly accessed by a program. In other words, programs cannot directly access the data stored in the RAM; they access it indirectly through the MMU. The set of all physical addresses corresponding to logical addresses is referred to as the physical address space. The figures below depict the relationship between logical addresses, physical addresses, the CPU, and the MMU (Silberschatz et al., 2018, Figure 9.5, Figure 9.2, Figure 9.1).

Additionally, different mapping schemes exist to map the logical address space to the physical address space. The example above illustrates a simple MMU scheme that is a generalization of the base register scheme. The base register, or relocation register, stores a base value that is used to compute the physical address from the logical address. For example, in the first figure: The CPU generated a logical address value of 346.
The MMU provides a base address (relocation register) value of 14000, corresponding to the base of the physical address space. The MMU translates the logical address into the physical address using the base address: here, 346 + 14000 = 14346. In summary, the CPU generates logical addresses, and the MMU translates them into physical addresses to access the data stored in the physical memory. (A small sketch of this translation appears at the end of this article.)

How is the RAM partitioned to store data? Operating Systems use different methods of allocating memory contiguously, such as fixed partitioning, dynamic partitioning, and the buddy system. These methods allocate memory (RAM) contiguously, meaning that each computing process is assigned a single block of memory that is adjacent to other process blocks. This ensures efficient utilization of memory and reduces fragmentation (Contiguous Memory Allocation, n.d.). (Images from Stallings, 2018, p. 321.) Non-contiguous blocks are also referred to as fragmented blocks.

Fixed partitioning characteristics:
The size of the block is equal for all processes. The size is determined when the system is started up and remains the same until the system is shut down.
Each process that is loaded into memory is assigned to a partition. If a process is larger than a single partition, it must be divided into smaller pieces that fit into separate partitions.
One downside to fixed partitioning is the potential for memory waste, known as internal fragmentation. This happens when any program, no matter how small, occupies an entire partition.
Since the number of partitions is fixed, the number of processes that can be loaded into memory at the same time is limited by the number of partitions.

Dynamic partitioning characteristics:
The size of the partition isn't fixed. When a process needs to be loaded into memory, a partition just large enough to hold the process is allocated.
It often results in more efficient use of memory because it minimizes internal fragmentation. Since each partition is just large enough to hold its process, there's less unused space within each partition.
Despite reducing internal fragmentation, dynamic partitioning can lead to a problem called external fragmentation. This occurs when free memory is broken into small, non-contiguous blocks over time. Even if there is enough total free memory to accommodate a process, it may not be allocatable if the free memory isn't contiguous.
To deal with external fragmentation, a process called compaction may be used. This involves shifting the processes in memory to make all the free memory contiguous. However, compaction can be a resource-intensive task.
It requires efficient algorithms to allocate and de-allocate memory. The most common of these are the "first-fit" and "next-fit" algorithms, which each have their own trade-offs in terms of memory utilization and allocation speed.

The buddy system characteristics:
It is a mix of the fixed and dynamic partitioning methods.
Memory is divided into partitions of different sizes, but each size is a power of 2. For example, starting with a block of size 1024KB, it can be divided into smaller "buddy" blocks of sizes 512KB, 256KB, 128KB, 64KB, 32KB, 16KB, 8KB, 4KB, 2KB, and 1KB.
When a process requests memory, the system checks for a free partition of the exact size requested. If a partition of the exact size doesn't exist, the system splits a larger partition in half (and continues splitting if necessary) until it has a partition of the correct size.
When memory is freed, the system checks if the "buddy" of the freed partition (the adjacent partition of the same size) is also free. If it is, the system combines the two partitions into a larger partition of the next power-of-2 size. This combining (or coalescing) continues until the system can't combine the freed partition with its buddy.
The buddy system offers a good compromise between efficient use of memory and reduction of fragmentation. It's more efficient than fixed partitioning because it can handle processes of different sizes more flexibly, and it reduces the external fragmentation that can be a problem with dynamic partitioning by merging free memory blocks. The buddy system, however, has its drawbacks. One of these is the overhead involved in splitting and combining blocks of memory.

In conclusion, physical and logical memory, and physical and logical addresses, play essential roles in how an OS manages data storage and access. Physical memory refers to the actual hardware (RAM). On the other hand, logical memory is an abstraction for process execution. The Memory Management Unit (MMU) translates logical addresses into physical addresses. Additionally, memory allocation methods like fixed partitioning, dynamic partitioning, and the buddy system help manage RAM usage and minimize fragmentation.

References:
Afteracademy (2020, March 30). What is the difference between logical and physical address WRT operating system? AfterAcademy. https://afteracademy.com/blog/what-is-the-difference-between-logical-and-physical-address-wrt-operating-system/
Contiguous Memory Allocation (n.d.). Online Courses and eBooks Library. https://www.tutorialspoint.com/contiguous-memory-allocation
Hopkins, J. (2023, May 24). What is a register in a CPU and how does it work? Total Phase Blog. https://www.totalphase.com/blog/2023/05/what-is-register-in-cpu-how-does-it-work/
Silberschatz, A., Galvin, P. B., & Gagne, G. (2018). Operating System Concepts [PDF]. Wiley. https://os.ecci.ucr.ac.cr/slides/Abraham-Silberschatz-Operating-System-Concepts-10th-2018.pdf
Stallings, W. (2018). Operating Systems: Internals and design principles. Pearson.
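As referenced above, here is a minimal sketch of the relocation-register translation, using the 346 + 14000 = 14346 example from the article. The class name Mmu, the limit value, and the exception handling are my own illustrative assumptions; a real MMU is hardware and would trap to the OS on an out-of-range access.

// Illustrative sketch: base-register (relocation register) address translation
public class Mmu {
    private final int relocationRegister; // base of the physical address space
    private final int limitRegister;      // size of the logical address space (assumed)

    public Mmu(int base, int limit) {
        this.relocationRegister = base;
        this.limitRegister = limit;
    }

    public int translate(int logicalAddress) {
        if (logicalAddress < 0 || logicalAddress >= limitRegister) {
            // stand-in for the hardware trap on an illegal access
            throw new IllegalArgumentException("Addressing error: " + logicalAddress);
        }
        return logicalAddress + relocationRegister; // physical = logical + base
    }

    public static void main(String[] args) {
        Mmu mmu = new Mmu(14000, 1000); // base 14000 from the example; limit 1000 is assumed
        System.out.println(mmu.translate(346)); // prints 14346, as in the article
    }
}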

  • Double Buffering: Ensuring Smooth Animation and Responsiveness in Graphics

    This article explains double buffering in computer graphics, emphasizing its importance in preventing screen flickering and ensuring smooth, responsive application animations. Alexander S. Ricciardi, August 25th, 2024

In computer graphics visualization, double buffering is a crucial technique used to address the issues of screen refresh rates and image rendering, particularly in applications where smooth animation and user interaction are essential. Within a graphics system, images are generated in the framebuffer, and the framebuffer's content, the image, is shown on an output device, usually a screen, which is managed by the window system or, in the case of WebGL, by the browser. In single-buffer models, there is only one buffer, the color buffer in the framebuffer, for rendering and display (Angel & Shreiner, 2020). However, an output device like a screen has a fixed refresh rate that is independent of how frequently the graphics system updates the framebuffer. Thus, an issue may arise between the screen refresh rate and the graphics system's updating of the framebuffer, particularly when repeatedly clearing and redrawing an area of the screen, an operation often performed in animations. In other words, there is no synchronization between when new images are drawn into the framebuffer and when the framebuffer is redisplayed by the hardware. This may lead to situations where only part of the image is displayed, depending on the screen refresh rate. In most cases, this translates into an image flickering issue.

Figure 1
Double Buffers
Note: From Platform Integration Aspects: Framebuffer Concepts by Embedded Wizard (n.d.)

Double buffering provides a solution to this problem by using two buffers. One buffer, called the front buffer, is used to display the current image, while the other, called the back buffer, is used to draw the next image. Once the image from the front buffer is displayed on the screen, the contents of the back buffer are transferred to the front buffer, a new image is drawn in the back buffer, and so forth. This eliminates flickering and provides seamless animation.

One application that heavily relies on double buffering is Virtual Reality (VR). In VR applications, the user's experience needs to be deeply immersive to be effective. In this scenario, double buffering is crucial as it helps maintain application responsiveness by preventing flickering and ensuring that the display is updated seamlessly (Lenovo, n.d.). This is especially important when users interact with the GUI, such as dragging windows or scrolling through content. The smooth updates provided by double buffering enhance the overall responsiveness and usability of the interface.

Another application where double buffering matters is video games, where the user can also experience screen flickering and incomplete scene rendering. For instance, in a first-person shooter game without double buffering implemented, an incomplete scene can be displayed when the screen refreshes, revealing, for example, an enemy hiding behind a wall because the wall was not yet drawn (Nerd First, 2015). Double buffering prevents such issues, ensuring smooth, fully rendered images and enhancing the player experience.

In conclusion, double buffering eliminates issues like flickering and incomplete scene rendering, ensuring seamless animations.
This technique is particularly vital in applications where user interaction is crucial, such as virtual reality and video games, where maintaining responsiveness and immersion is key to a successful interactive experience.

References:
Angel, E., & Shreiner, D. (2020). Chapter 3 Interaction and animation. Interactive computer graphics (8th edition). Pearson Education, Inc. ISBN: 9780135258262
Embedded Wizard (n.d.). Platform integration aspects: Framebuffer concepts. Embedded Wizard. https://doc.embedded-wizard.de/framebuffer-concepts
Lenovo (n.d.). What is double buffering? Lenovo. https://www.lenovo.com/us/en/glossary/doublebuffering/
Nerd First (2015, April 23). Double buffering - Friday Minis 103 [Video]. YouTube. https://www.youtube.com/watch?v=f3tO_gyyLmk
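As an illustrative aside (my own sketch, not from the article's references), the front-buffer/back-buffer idea can be made explicit in Java Swing: the scene is drawn into an offscreen BufferedImage (the back buffer) and then copied to the screen in a single step. Note that Swing components are already double buffered by default; this sketch only makes the mechanism visible.

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;
import javax.swing.JPanel;

// Minimal manual double-buffering sketch
class DoubleBufferedPanel extends JPanel {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // 1. Draw the next frame into an offscreen back buffer
        BufferedImage back = new BufferedImage(getWidth(), getHeight(),
                BufferedImage.TYPE_INT_RGB);
        Graphics2D bg = back.createGraphics();
        bg.fillOval(50, 50, 100, 100); // render the scene offscreen
        bg.dispose();
        // 2. Copy the completed frame to the screen in one operation,
        //    so the viewer never sees a partially drawn scene
        g.drawImage(back, 0, 0, null);
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Double buffering sketch");
        frame.add(new DoubleBufferedPanel());
        frame.setSize(200, 200);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}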

  • Understanding Process States in Operating Systems

    This article describes the role of process states in Operating Systems, detailing how they manage resource allocation and processor time through transitions between the Ready, Running, Waiting, and Terminated states. Alexander S. Ricciardi, June 20th, 2023

Processes and their states are fundamental components of an Operating System (OS), playing a key role in resource management. Moreover, process states have a crucial part in how processor time and processor resources are allocated and distributed. A process can be defined as:

"A program in execution. An instance of a program running on a computer. The entity that can be assigned to and executed on a processor. A unit of activity characterized by the execution of a sequence of instructions, a current state, and an associated set of system resources." (Stallings, Chapter 3, 2018)

In the context of this discussion, a process can be defined as a set of instructions that need to be executed by a processor, and a process state is the status of a process at a given time. There are four main states in a process's life cycle:

Ready: the process is ready to be assigned to a processor.
Running: a processor is executing the process.
Waiting: the process is idle, waiting for resources.
Terminated: the process is removed from memory.

The first state that I will discuss in this post is the Running state. A process is in the Running state when it is being executed by a processor, and it will be executed until one of the following events happens:

The process is completed. The process state will change from Running to Terminated.
The process is preempted by the OS because a higher-priority process entered the Ready state. The process state will change from Running to Ready.
The process's time slice has expired. The process state will change from Running to Ready.
The process requested an I/O operation. The process state will change from Running to Waiting.
The process is waiting for data from another process. The process state will change from Running to Waiting.
A system call is requested. If the system call was requested by the process, the process state will change from Running to Waiting; if the system call was requested by another process (multiprogramming and/or multiprocessor environments), from Running to Ready.
An exception happened; for example, the process is trying to open a nonexistent file. Depending on the exception, the process state changes from Running to Waiting or from Running to Terminated.

The second state that I will discuss in this post is the Waiting state, also called the Blocked state. A process is in the Waiting state when it is waiting for some event to occur (like user input or disk access), or it's waiting for a resource to become available. The process is idle and not being executed. The process will remain in the Waiting state until one of the following events happens:

Completion of an I/O request. The process state will change from Waiting to Ready.
Receiving requested data (or a signal) from another process. The process state will change from Waiting to Ready.
A requested resource becomes available. The process state will change from Waiting to Ready.

Furthermore, a process in the Waiting state can only transition to the Ready state; in comparison, a process in the Running state can transition to the Terminated, Ready, or Waiting state. The following flow chart depicts the transitions between the four main states (Running, Terminated, Waiting, and Ready) and the New state.
Figure 1
Process State
Note: From Operating System Concepts by Silberschatz et al. (2018)

In conclusion, processes and their states are fundamental components of an operating system used to manage resources, and process states play a crucial part in how processor time and processor resources are allocated and distributed. There are four main states: the Ready, Running, Waiting, and Terminated states. A process is in the Running state when it is being run by a processor, and in the Waiting state when it is waiting for resources or signals from another process. Furthermore, a process in the Waiting state can only transition to the Ready state; on the other hand, a process in the Running state can transition to the Terminated, Ready, or Waiting state. (A small sketch modeling these transitions appears after the references.)

References:
Silberschatz, A., Galvin, P. B., & Gagne, G. (2018). Operating System Concepts [PDF]. Wiley. https://os.ecci.ucr.ac.cr/slides/Abraham-Silberschatz-Operating-System-Concepts-10th-2018.pdf
Stallings, W. (2018). Operating Systems: Internals and design principles. Pearson.
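As referenced above, here is a minimal sketch of my own modeling the transitions described in this article as a lookup table; the class name ProcessStateMachine and the table encoding are illustrative assumptions, not an OS implementation.

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the transitions described above:
// Waiting can only go to Ready; Running can go to Ready, Waiting, or Terminated.
public class ProcessStateMachine {
    enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

    private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.NEW, EnumSet.of(State.READY));
        ALLOWED.put(State.READY, EnumSet.of(State.RUNNING));
        ALLOWED.put(State.RUNNING, EnumSet.of(State.READY, State.WAITING, State.TERMINATED));
        ALLOWED.put(State.WAITING, EnumSet.of(State.READY));
        ALLOWED.put(State.TERMINATED, EnumSet.noneOf(State.class));
    }

    public static boolean canTransition(State from, State to) {
        return ALLOWED.get(from).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canTransition(State.WAITING, State.READY));   // true
        System.out.println(canTransition(State.WAITING, State.RUNNING)); // false
    }
}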

  • Big-Oh Notation: Key to Evaluating Algorithm Efficiency

    This article explains the importance of Big-Oh notation in computer science for evaluating and comparing the time and space complexity of algorithms. Alexander S. Ricciardi, August 28th, 2024

In computer science, Big-Oh notation is used to describe the time complexity or space complexity of algorithms (Geeks for Geeks, 2024). Mathematically, it defines the upper bound of an algorithm's growth rate, known as the asymptotic upper bound, and is denoted as f(n) is O(g(n)) or f(n) ∈ O(g(n)), pronounced f(n) is Big-Oh of g(n). The term "asymptotic" refers to the behavior of the function as its input size n approaches infinity. In the context of computer science, it describes the worst-case scenario for time complexity or space complexity. For example, an algorithm with O(n²) time complexity will grow much faster than one with O(n) as the input size increases, with n representing the number of primitive operations. Primitive operations are low-level instructions with a constant execution time, such as assigning a value to a variable, performing an arithmetic operation, comparing two values, or accessing an element in an array by its index.

Definition of Big-Oh: Let f(n) and g(n) be functions mapping positive integers to positive real numbers. We say that f(n) ∈ O(g(n)) if there is a real constant c > 0 and an integer constant n₀ ≥ 1 such that

f(n) ≤ c⋅g(n), for n ≥ n₀ (Carrano & Henry, 2018)

The graphical representation of the relationship:

Figure 1: Big-Oh Time Complexity
Note: From Big O notation tutorial – A guide to Big O by Geeks for Geeks (2024)

Here are some of the properties of the Big-Oh notation:

f(n) = a is O(1), where a is a constant. Ex: a = 4, f(5) = 4 and f(7) = 4.
Justification: a ≤ c⋅1 = c, for c = a, when n ≥ n₀ = 1.

f(n) = 7n + 2 is O(n)
Justification: 7n + 2 ≤ (7+2)n = cn, for c = 9, when n ≥ n₀ = 1.

f(n) = 6n⁴ + 2n³ + n² - 3n is O(n⁴)
Justification: 6n⁴ + 2n³ + n² - 3n ≤ (6+2+1)n⁴ = cn⁴, for c = 9, when n ≥ n₀ = 1.

f(n) = 7n³ + 5n log n + 9n is O(n³)
Justification: 7n³ + 5n log n + 9n ≤ (7+5+9)n³ = cn³, for c = 21, when n ≥ n₀ = 1.

f(n) = 5 log n + 3 is O(log n)
Justification: 5 log n + 3 ≤ (5+3) log n = c log n, for c = 8, when n ≥ n₀ = 2. Note that log n is 0 for n = 1.

f(n) = 3ⁿ⁺² is O(3ⁿ)
Justification: 3ⁿ⁺² = 3² ⋅ 3ⁿ = 9 ⋅ 3ⁿ = c⋅3ⁿ, for c = 9, when n ≥ n₀ = 1.

f(n) = 7n + 75 log n is O(n)
Justification: 7n + 75 log n ≤ (7+75)n = cn, for c = 82, when n ≥ n₀ = 1.

(Carrano & Henry, 2018)

Note that the Big-Oh notation eliminates constant terms and lower-order factors. The major function types used to analyze time complexity are the Constant, Logarithmic, Linear, and Quadratic time complexities; below are their definitions:

Constant Time Complexity - O(1)
O(g(n)) where g(n) = c, for some fixed constant c, such as c = 2, c = 33, or c = 300. The time complexity remains constant, regardless of the input size. The algorithm takes the same amount of time to complete, no matter how large n is. This is the most efficient time complexity.

Logarithmic Complexity - O(log n)
O(g(n)) where g(n) = log n. Note that in computer science log n = log₂ n, and x = log₂ n iff 2ˣ = n. The time complexity grows logarithmically with input size, meaning it increases slowly.
Linear Complexity - O(n)
O(g(n)) where g(n) = n. Given an input value n, the linear function g assigns the value n itself. The time complexity grows linearly with input size, meaning the time taken increases in direct proportion to the input size.

Quadratic Complexity - O(n²)
O(g(n)) where g(n) = n². The time complexity grows quadratically with input size, meaning the time taken increases by the square of the input size.

As shown above, the time or space complexity directly relates to the Big-Oh bounding function g(n). For example, the Big-Oh quadratic function O(n²) has a greater growth rate than the logarithmic function O(log n). In other words, O(n²) has greater complexity than O(log n), making the algorithm associated with O(n²) worse than the one associated with O(log n). See Figure 2 for different Big-Oh functions' growth rates.

Figure 2
Big-Oh - O(g(n)) Growth Rates

The table below illustrates various O(g(n)) related to common computer science algorithms:

Table 1
Big-Oh Notation Summary
Note: From Big O Notation by Cowan (n.d.)

Big-Oh Pros and Cons
The Big-Oh notation has both pros and cons. One of the main advantages is that it provides a clear way to express algorithm efficiency by eliminating constant factors and lower-order terms, allowing the focus to be on how the algorithm's performance scales as the input size grows, rather than being tied to specific computing system specifications or the type of compiler used. In other words, it is platform independent and programming language agnostic. Both of those factors will have a significant impact on the final running time of an algorithm; however, that impact will most likely be a constant multiple (Educative, n.d.).
The major disadvantage of Big-Oh notation is that it loses important information by eliminating constant factors and lower-order terms, and by ignoring computing system specifications and the type of compiler used. Additionally, it only focuses on the worst-case scenario, the upper bound, leaving out potential insights into how the algorithm might perform under typical conditions or with smaller input sizes. In other words, Big-Oh notation gives a worst-case big picture of algorithm performance and ignores nuances that could be crucial for a better understanding of the algorithm's behavior.

The Importance of Big-Oh
Big-Oh analysis, or any other algorithm analysis, can help uncover performance bottlenecks, identify opportunities for optimization, and influence software design by selecting more appropriate algorithms for specific tasks. This makes Big-Oh and other algorithm analyses crucial for developing efficient and reliable software systems. One real-world application where Big-Oh analysis is essential for efficiency is in e-commerce platforms like Amazon, where millions of products need to be searched quickly and accurately. An inefficient search algorithm can directly impact user experience by slowing down searches, frustrating potential customers, and leading to abandoned carts and missed sales. Thus, optimizing or selecting the right search algorithms using Big-Oh or another algorithm analysis is paramount for the efficiency of the application and the user experience.

In conclusion, Big-Oh notation is a powerful tool for evaluating and comparing the efficiency of algorithms.
In conclusion, Big-Oh notation is a powerful tool for evaluating and comparing the efficiency of algorithms. While it has limitations in ignoring constant factors and average-case scenarios, its ability to highlight worst-case performance and guide algorithm selection makes it indispensable in developing efficient and reliable software systems, particularly in high-demand applications like e-commerce platforms.

References:
Carrano, F. M., & Henry, T. M. (2018, January 31). Algorithms: Algorithm analysis. Data structures and abstractions with Java (5th ed.). Pearson.
Cowan, D. (n.d.). Big O Notation. Don Cowan Blog. https://www.donkcowan.com/blog/2013/5/11/big-o-notation
Educative (n.d.). Advantages and disadvantages of the Big-O notation. Educative. https://www.educative.io/courses/algorithmic-problem-solving-preparing-for-a-coding-interview/advantages-and-disadvantages-of-the-big-o-notation
Geeks for Geeks (2024, March 29). Big O notation tutorial – A guide to Big O analysis. https://www.geeksforgeeks.org/analysis-algorithms-big-o-analysis

  • The Black Box Concept in Graphics Programming and Deep Learning

This article describes the black box concept in engineering, focusing on its application in graphics programming and deep learning. Alexander S. Ricciardi August 27th, 2024

The concept of a black box refers to a system in engineering that is characterized solely by its inputs and outputs; we may not know its internal workings (Angel & Shreiner, 2020). In graphics programming, this concept can be seen in how developers interact with the graphics pipeline through a graphical API such as WebGL or OpenGL: they declare specific inputs, such as vertex data and shaders, and receive the rendered output, without needing to understand or interact with the processes handled internally by the operating system or the GPU; see Figure 1. Figure 1: Graphics Pipeline as a Black Box. Note: From 2.3 WebGL Application Programming Interface, Interactive Computer Graphics (8th edition), Figure 2.3 Graphics System as a Black Box by Angel and Shreiner (2020).

Notice that the flow chart in Figure 1 has three elements: the application program, the graphics system, and the input/output devices. The flow chart can be described as follows:
- Function calls flow from the application program to the graphics system.
- Output flows from the graphics system to the output devices.
- Input flows from the input devices to the graphics system.
- Data flows from the graphics system back to the application program.

This abstraction allows graphics programmers to focus on developing and optimizing visual outputs and user experiences rather than getting bogged down in the complexities of the graphics system architecture and processing details. In other words, application programming interfaces such as WebGL, OpenGL, and DirectX allow graphics programmers to create sophisticated graphics without worrying about the low-level operations managed by the operating system and the functionalities of the GPU, enabling them to concentrate on the creative aspects of their work.

Another domain where the black box concept is used is Artificial Intelligence (AI), particularly Deep Learning (DL). Just as in graphics programming with WebGL and OpenGL, DL developers interact with APIs through Python libraries such as TensorFlow and PyTorch, without needing to understand or interact with the processes handled internally by the operating system or the GPU. Furthermore, AI systems often function as black boxes themselves, making decisions based on complex algorithms and vast amounts of data that are opaque to users. This is known as the "black box problem" in AI, where the inputs and outputs are clear, but the pathway the AI took (its "thinking" or reasoning) from input to output is not easily understood or visible, even to the developers who created the system (Rawashdeh, 2023). The black box concept is a powerful abstraction tool, as shown by its application in graphics programming and DL development, enabling programmers and developers to be more efficient and to concentrate on the creative and engineering aspects of their work.

References:
Angel, E., & Shreiner, D. (2020). Interactive computer graphics (8th ed.). Pearson Education, Inc. ISBN: 9780135258262
Rawashdeh, S. (2023, March 6). AI's mysterious 'black box' problem, explained. University of Michigan-Dearborn. https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained

  • Process Synchronization in Operating Systems: Key Challenges and Solutions

This article discusses methods and techniques of process synchronization in operating systems, focusing on classic problems like the Bounded-Buffer, Readers-Writers, and Dining-Philosophers problems, and their solutions using semaphores, deadlock prevention, and mutual exclusion. Alexander S. Ricciardi March 27th, 2024

In the context of an operating system (OS), process synchronization can be defined as a set of methods and techniques used to coordinate the concurrent execution of multiple processes or threads. In a multi-process environment, which can also be referred to as "multiprocessing" or "parallel computing", process synchronization plays a crucial role in ensuring that shared data is accessed and modified correctly by addressing several problems that may arise in multi-process environments. Classic problems of process synchronization are the Bounded-Buffer, Readers-Writers, and Dining-Philosophers problems. Solutions to these problems can be developed using tools such as semaphores, binary semaphores, message passing, and monitors (Silberschatz et al., 2018, p. 314).

The Bounded-Buffer Problem
The Bounded-Buffer Problem is a classic synchronization problem that involves coordinating the operations of producers and consumers accessing a fixed-size buffer. A bounded buffer can lead to a race condition. A race condition is "a situation in which multiple threads or processes read and write a shared data item, and the final result depends on the relative timing of their execution" (Stallings, 2018, Chapter 5). Mutual exclusion must be enforced to ensure that only one process can access the shared buffer at a given time. The following is a Bounded-Buffer producer and consumer pseudocode:

/* producer */
while (true) {
    /* produce item v */
    while ((in + 1) % n == out)
        /* do nothing */;
    b[in] = v;
    in = (in + 1) % n;
}

/* consumer */
while (true) {
    while (in == out)
        /* do nothing */;
    w = b[out];
    out = (out + 1) % n;
    /* consume item w */;
}
(Stallings, 2018, Chapter 5.4)

The problem description:
- The fixed-size buffer can hold a limited number of items.
- Multiple producers may add items to the buffer concurrently.
- Multiple consumers may remove items from the buffer concurrently.
- Producers must wait if the buffer is full.
- Consumers must wait if the buffer is empty.

A solution to the Bounded-Buffer Problem is the implementation of semaphores, which are a mutual exclusion method. A semaphore (also called a counting semaphore or a general semaphore) is an integer value used for signaling among processes that can only be accessed through two standard atomic operations: semWait() and semSignal(). The letter 's' is generally used to denote a semaphore; below is an example of semaphore pseudocode:

struct semaphore {
    int count;
    queueType queue;
};
void semWait(semaphore s) {
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process */;
    }
}
void semSignal(semaphore s) {
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */;
        /* place process P on ready list */;
    }
}
(Stallings, 2018, Figure 5.6)

Note that only three operations may be performed on a semaphore, all of which are atomic: initialize, decrement, and increment. The decrement operation may result in the blocking of a process, and the increment operation may result in the unblocking of a process. Furthermore, a binary semaphore is a semaphore that takes on only the values 0 and 1, and a mutex is "similar to a binary semaphore.
A key difference between the two is that the process that locks the mutex (sets the value to 0) must be the one to unlock it (sets the value to 1)" (Stallings, 2018, Chapter 5.4). Below is an example of binary semaphore pseudocode:

struct binary_semaphore {
    enum {zero, one} value;
    queueType queue;
};
void semWaitB(binary_semaphore s) {
    if (s.value == one)
        s.value = zero;
    else {
        /* place this process in s.queue */;
        /* block this process */;
    }
}
void semSignalB(binary_semaphore s) {
    if (s.queue is empty())
        s.value = one;
    else {
        /* remove a process P from s.queue */;
        /* place process P on ready list */;
    }
}
(Stallings, 2018, Figure 5.7)

Note that:
- A binary semaphore may be initialized to zero or one.
- The semWaitB(semaphore s) function checks the value: if it is zero, the process executing semWaitB() is blocked; if it is one, the value is changed to zero and the process continues execution.
- The semSignalB(semaphore s) function checks if the queue is empty (s is equal to zero); if so, the semaphore is set to one, else a process blocked by a semWaitB() is unblocked (removed from the queue) and placed on the ready list.

The following pseudocode shows a possible solution to the Bounded-Buffer Problem using semaphores (a runnable Python version of this producer/consumer pattern is sketched after the Dining-Philosophers description below):

/* program boundedbuffer */
const int sizeofbuffer = /* buffer size */;
semaphore s = 1, n = 0, e = sizeofbuffer;
void producer() {
    while (true) {
        produce();    // produce item
        semWait(e);
        semWait(s);
        append();     // add item to the end of the list
        semSignal(s);
        semSignal(n);
    }
}
void consumer() {
    while (true) {
        semWait(n);
        semWait(s);
        take();       // take item from the list
        semSignal(s);
        semSignal(e);
        consume();    // consume item
    }
}
void main() {
    parbegin(producer, consumer);
}
(Stallings, 2018, Figure 5.13)

The Readers-Writers Problem
The Readers-Writers Problem can be described as multiple readers and writers concurrently accessing a resource (e.g., a file or database) and potentially affecting data integrity. The conditions that need to be met to ensure data integrity are:
- Any number of readers may simultaneously read the file.
- Only one writer at a time may write to the file.
- If a writer is writing to the file, no reader may read it.
Note that readers are processes that are not required to exclude one another, and writers are processes that are required to exclude all other processes, readers and writers alike (Stallings, 2018, Chapter 5.7).

A solution to the Readers-Writers Problem is the implementation of semaphores or message passing. Message passing is a mechanism that allows processes or threads to communicate and synchronize their actions. In message passing, communication occurs by explicitly sending a message from a sender to a receiver. The sender encapsulates the data or information to be transmitted into a message and sends it to the receiver. The receiver then receives the message and extracts the information or data from it.

The Dining-Philosophers Problem
The Dining-Philosophers Problem describes a scenario where, for example, five philosophers share a meal at a round table. Every philosopher has a plate, with one fork to their right and one fork to their left, for a total of five forks on the table. Each philosopher needs two forks to eat. The problem arises when all the philosophers decide to eat at the same time.
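Before analyzing the Dining-Philosophers Problem, here is the promised Python version of the bounded-buffer solution (my own sketch, not from Stallings, using Python's standard threading module). The three semaphores mirror the roles of s (mutual exclusion), n (filled slots), and e (empty slots) in the pseudocode above:

import threading

BUFFER_SIZE = 5
buffer = []                              # the shared bounded buffer
s = threading.Semaphore(1)               # mutual exclusion on the buffer
n = threading.Semaphore(0)               # counts filled slots
e = threading.Semaphore(BUFFER_SIZE)     # counts empty slots

def producer(items):
    for item in items:
        e.acquire()                      # semWait(e): wait for an empty slot
        s.acquire()                      # semWait(s): enter critical section
        buffer.append(item)              # append(): add item
        s.release()                      # semSignal(s): leave critical section
        n.release()                      # semSignal(n): one more filled slot

def consumer(count):
    for _ in range(count):
        n.acquire()                      # semWait(n): wait for a filled slot
        s.acquire()
        item = buffer.pop(0)             # take(): remove the oldest item
        s.release()
        e.release()                      # semSignal(e): one more empty slot
        print(f"consumed {item}")

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()

Because e blocks the producer when the buffer is full and n blocks the consumer when it is empty, neither thread can corrupt the buffer or consume a missing item, regardless of scheduling.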
The Dining-Philosophers Problem is a classic synchronization problem; it is a metaphor for the problems of deadlock and starvation, which can occur when multiple processes attempt to access multiple resources simultaneously. Starvation is the situation in which a process or thread waits indefinitely within a semaphore (Silberschatz et al., 2018, p. G-32). Deadlock is the state in which two processes or threads are stuck waiting for an event that can only be caused by one of the processes or threads (Silberschatz et al., 2018, p. G-9).

Two main methods exist to prevent or avoid deadlocks. The first one is the deadlock avoidance approach, which grants access to a resource only if doing so cannot result in a deadlock. The second one is the deadlock prevention approach, which involves changing the rules so that processes will not make requests that could result in deadlock. Note that for a deadlock to occur, each of the four conditions listed below must hold:
- Mutual Exclusion: Only one process at a time can use a resource.
- Hold and Wait: A process holds at least one resource while waiting to acquire additional resources held by other processes.
- No Preemption: A resource can be released only voluntarily by the process holding it, after that process has completed its task.
- Circular Wait: A circular chain of waiting processes exists, in which each process waits for a resource held by the next process in the chain.
In other words, a deadlock will not occur if at least one of these conditions does not hold. Deadlock prevention eliminates at least one of the four conditions. Deadlock avoidance, on the other hand, permits the conditions but makes resource-allocation decisions dynamically so that the deadlock point is never actually reached.

Finally, one of the solutions for the Dining-Philosophers Problem is a monitor-based solution. A monitor is a programming language construct that allows a lock to be placed on objects. The main characteristics of a monitor are the following:
- The local data variables are accessible only by the monitor's procedures and not by any external procedure.
- A process enters the monitor by invoking one of its procedures.
- Only one process may be executing in the monitor at a time; any other processes that have invoked the monitor are blocked, waiting for the monitor to become available (Stallings, 2018, Chapter 5.5).

In conclusion, process synchronization is a complex subject, and it is essential in operating systems to coordinate the concurrent execution of multiple processes or threads. It addresses various problems such as the Bounded-Buffer Problem, the Readers-Writers Problem, and the Dining-Philosophers Problem. These problems highlight the challenges of managing shared resources, ensuring mutual exclusion, avoiding race conditions and deadlocks, and preventing starvation.

References:
Silberschatz, A., Galvin, P. B., & Gagne, G. (2018). Operating system concepts (10th ed.) [PDF]. Wiley. https://os.ecci.ucr.ac.cr/slides/Abraham-Silberschatz-Operating-System-Concepts-10th-2018.pdf
Stallings, W. (2018). Operating systems: Internals and design principles. Pearson

  • Exception Handling in Python

This article explores the various techniques used to handle exceptions in Python, including try-except blocks, custom exceptions, and advanced features like exception chaining and enrichment. Alexander S. Ricciardi March 26, 2024

Python provides a robust exception-handling framework that not only allows programmers to implement code that prevents crashes but also offers feedback and maintains application stability. Moreover, it enables developers to manage errors gracefully using constructs like try-except blocks, custom exceptions, and more.

• The Try-Except Block
In the try-except block, the code that may raise an exception is placed in the try-block, and the except-block specifies the actions to take if an exception occurs (Python Software Foundation, n.d.). For example:

try:
    result = 1 / 0
except ZeroDivisionError:
    print("Cannot divide by zero.")

To catch multiple exceptions in one try-except block, we can use a try-block with several except-blocks to generate specific responses for each exception type, or we can use a tuple to catch multiple exceptions with a single except expression. For example:

# One try block and several except blocks
try:
    result = 1 / 'a'
except ZeroDivisionError:
    print("Cannot divide by zero.")
except TypeError:
    print("Type error occurred.")

# One try block and one except tuple block
try:
    # some operation
    result = 1 / 'a'
except (ZeroDivisionError, TypeError) as e:
    print(f"Error occurred: {e}")

• The Else Clause
The else clause is placed after the try-except blocks and runs if the try block does not raise an exception. For example:

try:
    result = 1 / 2
except ZeroDivisionError:
    print("Cannot divide by zero.")
else:
    print("Division successful.")

• The Finally Clause
The finally clause is always placed after the try-block and any except-blocks. It contains code that runs no matter what, typically for cleaning up resources like files or network connections, even if an exception was raised. For example:

try:
    result = 1 / 'a'
except ZeroDivisionError:
    print("Cannot divide by zero.")
except TypeError:
    print("Type error occurred.")
else:
    print("Division successful.")
finally:
    print("Goodbye, world!")

• The Raise Statement
The raise statement raises exceptions by forcing an exception to occur, usually to indicate that a certain condition has not been met. For example:

a = 10
if a > 5:
    raise ValueError("a must not exceed 5")

• Exception Chaining
You can chain exceptions with the raise ... from clause. This is useful for adding context to an original error. For example:

try:
    open('myfile.txt')
except FileNotFoundError as e:
    raise RuntimeError("Failed to open file") from e

• Custom Exceptions
You can define your own exception classes by inheriting from the Exception class or any other built-in exception class (Mitchell, 2022). For example:

class MyCustomError(Exception):
    pass

try:
    raise MyCustomError("An error occurred")
except MyCustomError as e:
    print(e)
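Combining custom exceptions with chaining is a common pattern: a low-level error is caught and re-raised as a domain-specific one while preserving the original as __cause__. Below is a minimal sketch (my own illustration; the ConfigError name and the settings.ini path are hypothetical):

class ConfigError(Exception):
    '''Application-specific exception for configuration problems.'''
    pass

def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError as e:
        # Chain the low-level error to a domain-level one.
        raise ConfigError(f"Could not load configuration: {path}") from e

try:
    load_config("settings.ini")      # assumed missing for this demo
except ConfigError as e:
    print(e)                         # Could not load configuration: settings.ini
    print(repr(e.__cause__))         # the original FileNotFoundError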
• Enriching Exceptions
You can add information or context to an exception by using the add_note() method (available since Python 3.11) to append custom messages or notes to the exception object (commonly bound as e). For example:

def divide_numbers(a, b):
    try:
        result = a / b
    except ZeroDivisionError as e:
        e.add_note("Cannot divide by zero")
        e.add_note("Please provide a non-zero divisor")
        raise

try:
    num1 = 10
    num2 = 0
    divide_numbers(num1, num2)
except ZeroDivisionError as e:
    print("An error occurred:")
    print(str(e))

Handling exceptions is important for several reasons:
- Prevents program crashes: Unhandled exceptions can cause the program to crash, leading to data loss and a poor user experience.
- Provides meaningful error messages: By handling exceptions, you can provide users with informative error messages that help them understand what went wrong and how to fix it.
- Allows for graceful degradation: Exception handling enables the program to continue running even if an error occurs.

A simple program error-handling example:

##-------------------------------------------
# Pseudocode:
# 1. Define a custom exception class called CustomError.
# 2. Create a function that raises the CustomError exception
#    based on a condition.
# 3. Use a try-except block to handle the CustomError exception.
#-------------------------------------------
# Program Inputs:
# - num: An integer value.
#-------------------------------------------
# Program Outputs:
# - Custom exception message when the CustomError exception is raised.
# - Success message when no exception is raised.
#-------------------------------------------

class CustomError(Exception):
    '''A custom exception class.'''
    pass

def check_number(num):
    '''
    Checks if the given number is positive.
    :param num: an integer value.
    :raises CustomError: If the number is not positive.
    :return: None
    '''
    if num <= 0:
        raise CustomError("Number must be positive.")
    print("Number is valid.")

#--- Main Program
def main() -> None:
    '''The main function that demonstrates the usage of custom exceptions.'''
    try:
        check_number(5)   # Valid number
        check_number(-2)  # Raises CustomError
    except CustomError as e:
        print(f"Error: {str(e)}")

#--- Execute the program
if __name__ == "__main__":
    main()

>>> Number is valid.
Error: Number must be positive.

To summarize, Python provides a comprehensive exception-handling framework that allows programs to handle unexpected situations without failing abruptly. By utilizing constructs such as try-except blocks, custom exceptions, and advanced features like exception chaining and enrichment, developers can ensure that their programs are resilient, user-friendly, and capable of handling unexpected scenarios gracefully.

References:
Mitchell, R. (2022, June 13). Custom exceptions. Python Essential Training [Video]. LinkedIn Learning. https://www.linkedin.com/learning/python-essential-training-14898805/custom-exceptions?autoSkip=true&resume=false&u=2245842
Python Software Foundation. (n.d.). 8. Errors and exceptions. The Python Tutorial. python.org.

  • Understanding Polymorphism in Python

The article provides an in-depth explanation of polymorphism in Python, highlighting its role in object-oriented programming. Alexander S. Ricciardi March 26th, 2024

Polymorphism is a Greek word that means "many shapes" or "many forms". Polymorphism is a fundamental concept in object-oriented programming (OOP). Python is polymorphic, meaning that in Python objects have the ability to take many forms. In simple words, polymorphism allows us to perform the same action in many different ways (Vishal, 2021). Furthermore, in Python, everything is an object. "Guido van Rossum has designed the language according to the principle 'first-class everything'. He wrote: 'One of my goals for Python was to make it so that all objects were "first class." By this, I meant that I wanted all objects that could be named in the language (e.g., integers, strings, functions, classes, modules, methods, and so on) to have equal status.'" (Klein, 2022, 1. Object Oriented Programming)

To understand polymorphism, it is important to understand the "duck typing" concept: "If it looks like a duck and quacks like a duck, then it probably is a duck." In Python, duck typing means that the suitability of an object is determined by the presence of certain methods or attributes, rather than by the actual type of the object. In other words, polymorphism in Python means that a single operator, function, or class method can have multiple forms/behaviors depending on the context.

1. Operator polymorphism
Operator polymorphism, or operator overloading, allows an operator like + to perform different operations based on the operand types (Jergenson, 2022). For example:

# Two integers
int_1 = 10
int_2 = 45
print(str(int_1 + int_2))
>>> 55

# Two strings
str_1 = "10"
str_2 = "45"
print(str_1 + str_2)
>>> 1045

2. Function polymorphism
Built-in functions like len() can act on multiple data types (e.g., strings, lists) and provide the length measured appropriately for each type. For example:

str_1 = "polymorphic"
print(str(len(str_1)))
>>> 11

my_lst = [1, 2, 3, 4, 5]
print(str(len(my_lst)))
>>> 5

3. Class method polymorphism
Class method polymorphism allows subclasses to override methods inherited from the parent class. For example:

# Parent class
class Animal:
    def make_sound(self):
        pass

# Child class
class Dog(Animal):
    def make_sound(self):
        return "Woof!"

# Child class
class Cat(Animal):
    def make_sound(self):
        return "Meow!"

def animal_sound(animal):
    print(animal.make_sound())

dog = Dog()
cat = Cat()
animal_sound(dog)  # Output: Woof!
animal_sound(cat)  # Output: Meow!

4. Independent class polymorphism
Independent classes can also define methods with the same name that behave differently. For example:

def enter_obj(obj):
    return obj.action()

# Independent class
class Animal:
    def __init__(self, food):
        self.food = food
    # same name as the Circle method, different functionality
    def action(self):
        print(f"eats {self.food}")

# Independent class
class Circle:
    def __init__(self, radius):
        self.radius = radius
    # same name as the Animal method, different functionality
    def action(self):
        return 3.14 * (self.radius ** 2)

cow = Animal("grass")
circ = Circle(7)
enter_obj(cow)
print(str(enter_obj(circ)))
>>> eats grass
153.86
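To round out the examples, here is a small sketch of duck typing itself (my own illustration; the class names are hypothetical). The describe() function never checks types; any object that provides a speak() method is acceptable:

class Duck:
    def speak(self):
        return "Quack!"

class Robot:
    def speak(self):
        return "Beep boop."

def describe(thing):
    # No isinstance() check: if it quacks, it is duck enough.
    print(thing.speak())

for obj in (Duck(), Robot()):
    describe(obj)
# Output:
# Quack!
# Beep boop.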
In conclusion, polymorphism is a powerful feature of Python. It allows objects to take on multiple forms and behave differently based on the context. Python's duck typing enables polymorphism by focusing on the presence of certain methods or attributes rather than the actual type of the object.

References:
Jergenson, C. (2022, May 31). What is polymorphism in python?. Educative. https://www.educative.io/blog/what-is-polymorphism-python
Klein, B. (2022, February 1). Object oriented programming / OOP. python-course. https://python-course.eu/oop/object-oriented-programming.php
Vishal. (2021, October 21). Polymorphism in python. PYnative. https://pynative.com/python-polymorphism/

  • Basic Loops in Python

This article explains how to use 'for' and 'while' statements to create loops in Python, each serving different purposes for repetitive tasks. The article also explores additional control statements such as 'break,' 'continue,' 'pass,' and 'else' to manage loop execution. Alexander S. Ricciardi March 7th, 2024

In Python, the major statements required to create loops are 'for' and 'while'. The 'for' statement is mostly used to iterate over iterable objects (such as a string, tuple, or list), much as in other coding languages (Python Software Foundation (a), n.d.). The 'while' loop, on the other hand, is used for repeated execution as long as an expression is true (Python Software Foundation (b), n.d.). In other words, both the 'for' and the 'while' loops are algorithmic, meaning they perform repetitive tasks until a condition is met or as long as a condition remains true. To be more specific, the 'for' loop iterates over sequences, executing a set of instructions until a condition is met, for example, until the end of the sequence is reached. In comparison, the 'while' loop will execute a set of instructions as long as a condition is true. The loops complement each other, and when nested within each other they can be a powerful tool for solving complex problems. This is the main reason Python has more than one loop statement.

The 'for' statement
The 'for' statement goes through each item in the sequence or iterable, one by one, and executes the block of code for each element. The flow chart below depicts the algorithmic nature of the 'for' loop. Figure 1: The 'for' loop. Note: From 4.3 For Loops in Python, by Colorado State University Global (2024a)

A scenario of iterating over a sequence using a 'for' loop could be similar to the following:

user_ids = [101, 102, 103, 104]
for user_id in user_ids:
    print(user_id)

The 'while' statement
The 'while' statement, before each iteration, evaluates the condition; if the condition is true, the loop's body is executed. If the condition becomes false, the loop stops. The flow chart below depicts the algorithmic nature of the 'while' loop. Figure 2: The 'while' loop. Note: From 4.2 While Loops in Python, by Colorado State University Global (2024b)

A scenario of iterating using a 'while' loop as long as a condition is true could be similar to the following:

coffee = 0
homework_num = 100
while coffee < 100:
    coffee += 1
    print(f"Drinking coffee number {coffee} ...")
    if coffee < 100:
        print(f"Doing homework number {homework_num} ...")
        homework_num -= 1
        if homework_num == 0:
            break
else:
    print("Rest in peace!")

The 'break' will exit the loop. The 'break', 'continue', 'pass', and 'else' statements can be used in conjunction with loops to control their execution:
- The 'break' statement is used within loops to exit the loop.
- The 'continue' statement allows the loop to skip the rest of its code block and proceed directly to the next iteration.
- The 'pass' statement acts as a placeholder and does nothing. It is often used by programmers to bypass blocks of code that are under construction or not yet implemented.
- The 'else' statement executes a block of code after the loop completes normally. In other words, the code within the 'else' block runs only if the loop is not terminated by a 'break' statement.
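Before returning to the 'while' example, here is a short sketch (my own illustration) of 'continue' and the loop 'else' working together in a 'for' loop; the 'else' body runs only when the search finishes without hitting 'break':

numbers = [1, 2, 3, 4, 5, 6]
for target in [5, 8]:
    for num in numbers:
        if num % 2 == 0:
            continue                 # skip even values, go to the next item
        if num == target:
            print(f"Found {target}")
            break                    # leaving early skips the 'else' clause
    else:
        print(f"{target} not found among the odd values")
# Output:
# Found 5
# 8 not found among the odd values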
Returning to the 'while' loop example, it can be written with the loop 'else' clause as follows:

coffee = 0
homework_num = 100
while coffee < 100:
    coffee += 1
    print(f"Drinking coffee number {coffee} ...")
    if coffee < 100:
        print(f"Doing homework number {homework_num} ...")
        homework_num -= 1
        if homework_num == 0:
            break
else:
    print("Rest in peace!")

Here the 'else' statement is part of the 'while' loop; the code within the 'else' block is executed only if the loop is not terminated by the 'break' statement. In this case, the 'break' is never reached (homework_num only counts down to 1), so the code within the 'else' statement will run.

In conclusion, Python's 'for' and 'while' loops, along with control statements like 'break,' 'continue,' 'pass,' and 'else,' allow for control and flexibility in managing repetitive tasks in programming and creating effective code.

References:
Colorado State University Global (2024a). 4.3 For Loops in Python. Module 4: Python Repetition. In ITS320: Basic Programming.
Colorado State University Global (2024b). 4.2 While Loops in Python. Module 4: Python Repetition. In ITS320: Basic Programming.
Python Software Foundation (a). (n.d.). 4. More control flow tools. The Python Tutorial. python.org. https://docs.python.org/3/tutorial/controlflow.html#index-0
Python Software Foundation (b). (n.d.). 8. Compound statements. The Python Language Reference. python.org. https://docs.python.org/3/tutorial/controlflow.html#index-0
