Big-O Notation

What is Big-O Notation

Big-O notation is the language and metric we use to describe the efficiency of an algorithm by talking about how long an algorithm takes to run (time complexity) or how much memory is used by an algorithm (space complexity). Big-O notation can express the best, worst, and average-case running time of an algorithm.

An Analogy

To understand this, imagine a scenario: you have some important files on your computer, and you want to share them with a friend who needs them urgently. How should you send them?

You might think of emailing the files to your friend, or using some other means of electronic transfer. That is correct only if the files are small. What if they total several terabytes? Is electronic transfer still the fastest option? Probably not: it could take a day or two. It would be much faster to simply remove the physical hard drive from your computer and hand it to your friend.

Time Complexity

This is what the concept of asymptotic runtime, or big O time, means. We could describe the data transfer "algorithm" runtime as:

  • Electronic Transfer: O(s), where s is the size of the file. This means that the time to transfer the file increases linearly with the size of the file.

  • Hard Drive Transfer: O(1) with respect to the size of the file. As the size of the file increases, it won't take any longer to get the file to your friend. The time is constant.

No matter how big the constant is and how slow the linear growth is, linear will at some point surpass constant.
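The two transfer "algorithms" above can be sketched as toy Python functions; the function names and rates here are invented purely for illustration, not part of any real API:

```python
def electronic_transfer_time(size_tb, tb_per_hour=0.5):
    """O(s): transfer time grows linearly with file size (illustrative rate)."""
    return size_tb / tb_per_hour

def hard_drive_transfer_time(size_tb, trip_hours=1.0):
    """O(1): hand-delivering the drive takes the same time regardless of size."""
    return trip_hours
```

For a small file the electronic transfer wins, but past some size the constant-time hand-off is always faster.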

Here are some common algorithms and their run times in Big O notation:

| Big O notation | Example algorithm      |
| -------------- | ---------------------- |
| O(log n)       | Binary search          |
| O(n)           | Simple search          |
| O(n log n)     | Quicksort              |
| O(n^2)         | Selection sort         |
| O(n!)          | Traveling salesperson  |
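To make the first two rows concrete, here is a minimal sketch of both searches in Python (the function names are my own for illustration):

```python
def simple_search(arr, target):
    """O(n): check every element one by one until a match is found."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(sorted_arr, target):
    """O(log n): halve the search range each step (requires sorted input)."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

On a sorted array of a million elements, simple search may need a million comparisons, while binary search needs at most about twenty.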

1. Big-O Notation

We define an algorithm's worst-case time complexity using Big-O notation: O(expression) is the set of functions that grow slower than or at the same rate as the expression.

2. Omega Notation

Omega notation defines the best case of an algorithm's time complexity: Ω(expression) is the set of functions that grow faster than or at the same rate as the expression.

3. Theta Notation

Theta notation is used when a function lies in both O(expression) and Ω(expression); it gives a tight bound on the running time. This is how we define the average case of an algorithm's time complexity.

Best Case, Worst Case, and Expected Case

These are the three ways we can describe the runtime of an algorithm.

1. Worst Case Analysis (Mostly used)

In the worst-case analysis, we calculate the upper bound on the running time of an algorithm. We must know the case that causes a maximum number of operations to be executed. For Linear Search, the worst case happens when the element to be searched (x) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of the linear search would be O(n).
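A linear search that counts its comparisons makes the worst case concrete. This is a minimal sketch of the search() over arr[] described above, with the counter added for illustration:

```python
def linear_search(arr, x):
    """Return (index, comparisons). If x is absent, index is -1.

    Worst case: x is not in arr, so every one of the n elements
    is compared -> n comparisons -> O(n).
    """
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == x:
            return i, comparisons
    return -1, comparisons
```

Searching a 5-element array for a missing value performs all 5 comparisons, matching the O(n) bound.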

2. Best Case Analysis (Very Rarely used)

In the best-case analysis, we calculate the lower bound on the running time of an algorithm. We must know the case that causes a minimum number of operations to be executed. In the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the time complexity in the best case would be Ω(1).

3. Average Case Analysis (Rarely used)

In average-case analysis, we take all possible inputs and calculate the computing time for each of them, then sum the values and divide by the total number of inputs. We must know (or predict) the distribution of cases. For the linear search problem, assume all cases are uniformly distributed, including the case where x is not present. We sum the comparisons over all n + 1 cases and divide by (n + 1), which works out to roughly n/2 comparisons, so the average-case time complexity is still Θ(n).
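The averaging described above can be computed directly. This sketch enumerates the n "x at position i" cases plus the "x absent" case and divides by (n + 1), as in the text (function names are illustrative):

```python
def comparisons_for(arr, x):
    """Number of comparisons linear search makes before stopping."""
    count = 0
    for value in arr:
        count += 1
        if value == x:
            return count
    return count

def average_comparisons(n):
    """Average comparisons over all n present-cases plus the absent-case."""
    arr = list(range(n))
    cases = [comparisons_for(arr, target) for target in range(n)]  # x present
    cases.append(comparisons_for(arr, n))                          # x absent
    return sum(cases) / (n + 1)
```

For n = 100 the average is about 51 comparisons, i.e. roughly n/2, which is linear in n.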

Space Complexity

Time is not the only thing that matters in an algorithm. We might also care about the amount of memory, or space, required by an algorithm.

Space complexity is a parallel concept to time complexity. If we need to create an array of size n, this will require O(n) space. If we need a two-dimensional array of size n×n, this will require O(n^2) space.
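Both cases can be sketched in a couple of lines (the function names are illustrative):

```python
def one_d(n):
    """O(n) space: a list of n elements."""
    return [0] * n

def two_d(n):
    """O(n^2) space: an n-by-n matrix, i.e. n rows of n elements each."""
    return [[0] * n for _ in range(n)]
```

Doubling n doubles the memory for the list but quadruples it for the matrix.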

3 important rules for big O notation

1. Different steps get added

If your algorithm is in the form "do this, then, when you're all done, do that" then you add the runtimes.
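For example, a function that first processes one input and then, when done, processes another adds the two costs (this sketch and its name are purely illustrative):

```python
def sum_then_max(a, b):
    """'Do this, then do that': O(len(a)) + O(len(b)) = O(a + b)."""
    total = 0
    for x in a:          # first step: O(len(a))
        total += x
    largest = b[0]
    for y in b:          # second step: O(len(b))
        if y > largest:
            largest = y
    return total, largest
```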

2. Drop constants

It is entirely possible for O(N) code to run faster than O(1) code for specific inputs. Big O just describes the rate of increase. For this reason, we drop the constants in runtime: an algorithm that one might have described as O(2N) is actually O(N).
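A classic example is finding the minimum and maximum with two separate passes over the array. One might call this "O(2N)", but the constant is dropped and it is simply O(N) (the function name is illustrative):

```python
def min_and_max_two_passes(arr):
    """Two full passes over arr: '2N' operations, still O(N)."""
    smallest = arr[0]
    for x in arr:        # pass 1: N operations
        if x < smallest:
            smallest = x
    largest = arr[0]
    for x in arr:        # pass 2: another N operations
        if x > largest:
            largest = x
    return smallest, largest
```

Doing both checks in a single loop would halve the constant, but the growth rate, and therefore the Big O, is identical.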

3. Drop non-dominant terms

You should drop the non-dominant terms.

  • O(N^2 + N) becomes O(N^2).

  • O(N + log N) becomes O(N).

  • O(5·2^N + 1000·N^100) becomes O(2^N).

Thanks for reading! I hope this helps you better understand Big-O. Do let me know if you have any questions or suggestions for the next blog.

Happy Coding!