When I first started programming, like most beginners, my main focus was on writing working code that solved the problem at hand, and I couldn't have cared less about the **complexity** or the **performance** of the **algorithm**.

So in this article I will talk about all of this, define what we mean by **“Big O”** notation, and explain why it is so important. So let's get started, shall we?

**The Big O Notation**

When talking about **Big O** we simply mean the **growth rate** of an algorithm: how well, and how fast, it scales.

So calculating the **Big O** is super important to determine both:

* **Time Complexity:** The amount of time taken by an algorithm to run.
* **Space Complexity:** The amount of resources *(space or memory)* taken by an algorithm to run.

Complexity helps us a lot when it comes to choosing the best solution for a certain problem from multiple suggested solutions, based on their effectiveness.
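As a small illustration (this example is mine, not from any particular library), consider summing the numbers from 1 to N: one solution loops N times, while another uses the closed-form formula N(N+1)/2. Both give the same answer, but one runs in O(N) time and the other in O(1):

```java
// Two solutions to the same problem: the sum of 1..N.
public class SumComparison {
    // O(N): one addition per element of the range.
    static long sumLoop(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    // O(1): a single formula, no matter how large N gets.
    static long sumFormula(int n) {
        return (long) n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(100));    // 5050
        System.out.println(sumFormula(100)); // 5050
    }
}
```

For small inputs the difference is invisible, but as N grows the O(1) version keeps the same cost while the O(N) version keeps getting slower; that is the kind of comparison complexity analysis lets us make.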

**Worst Case and Best Case Complexities**

* **Worst Case Complexity:** Used to describe the case where the algorithm needs the most resources possible to run. A good example: searching for an element **(X)** in an array **(A)** of length **(N)**; the worst case is when **(X)** is not an element of **(A)**, or is the last element.
* **Best Case Complexity:** Much like the worst case, this one describes when the algorithm takes the fewest resources possible to run. The same example applies, but this time **X** is the **first element** of the array **A**.
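We can make the best and worst cases concrete with a small sketch (the method name and the comparison counter are mine, just for illustration) that counts how many comparisons a linear search performs:

```java
public class SearchCases {
    // Returns the number of comparisons made while searching for x in a.
    static int comparisons(int[] a, int x) {
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            count++;            // one comparison per visited element
            if (a[i] == x)
                break;          // found it, stop early
        }
        return count;
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 4, 1};
        System.out.println(comparisons(a, 7)); // best case: 1 (first element)
        System.out.println(comparisons(a, 1)); // worst case: 5 (last element)
        System.out.println(comparisons(a, 8)); // worst case: 5 (not present)
    }
}
```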

**Common Big O Examples**

**Constant O(1)**

**O(1)** means that regardless of the input's length, the amount of resources *(time, space)* needed is constant and does not change.

```java
public int max(int a, int b) {
    return a >= b ? a : b;
}
```

**Linear O(N)**

This occurs when a constant amount of statements is executed N times, where N is the length of the input; this makes our **Big O** grow **linearly**, as the example below demonstrates.

```java
public boolean hasElement(int N, int[] a) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == N)
            return true;
    }
    return false;
}
```

**Quadratic O(N²)**

This is mostly the case when we have two nested loops: every time the outer loop executes, the inner loop executes N times, which makes our **“Big O”** equal to **O(N*N)** ==> **O(N²)**.

```java
public int[] insertionSortR(int[] A, int T) {
    int i = 1, j, E;
    while (i < T) {
        E = A[i];
        j = i;
        while (j >= 1 && A[j - 1] > E) {
            A[j] = A[j - 1];
            j = j - 1;
        }
        A[j] = E;
        i++;
    }
    return A;
}
```

**Logarithmic O(log_b N)**

With this one, in every loop iteration or recursive call we usually divide our **N** by **b** *(b >= 2)*. It is also worth mentioning that **O(log_b N)** grows slower than **O(N)** does.

The next example calculates the number of digits in a number **N**.

```java
/*
 * In each recursive call we replace the value
 * of N with N/10, which makes our b = 10;
 * as a result we get O(log10 N).
 */
public int count(int N) {
    if (N / 10 == 0)
        return 1;
    else
        return 1 + count(N / 10);
}
```
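Another common illustration of logarithmic growth (not from the digit-counting example above, just a classic one) is binary search on a sorted array: each iteration halves the remaining search range, so here b = 2 and the running time is O(log2 N):

```java
public class BinarySearchDemo {
    // Binary search: the range [lo, hi] is halved each
    // iteration, so the loop runs about log2(N) times.
    static int binarySearch(int[] a, int x) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] == x)
                return mid;      // found: return its index
            else if (a[mid] < x)
                lo = mid + 1;    // discard the left half
            else
                hi = mid - 1;    // discard the right half
        }
        return -1;               // not found
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(sorted, 7)); // 3
        System.out.println(binarySearch(sorted, 4)); // -1
    }
}
```

Note that binary search only works on sorted input; that precondition is what lets each comparison safely discard half of the remaining elements.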

Note that all the code used in the examples was written in the Java language.

**Wrap Up**

So this was **Big O** in short. I hope it was informative and easy to understand. If you enjoyed the post and want to show appreciation to the writer and the blog as a whole, just subscribe to our newsletter to stay updated with our content, and share, like, and follow us on social media (links below in the footer :D).

**Take care!! Until we meet again in another post from TechTalko.**