Big O Notation (using Ruby)

Big O Notation allows us to calculate the worst possible runtime of an algorithm, or how long it takes the algorithm to complete.

In other words, it informs us of how much it will slow down based on the input size.

It’s always good to know how fast an algorithm runs, because we always want to achieve the fastest runtime possible.

Moreover, it’s an important concept to know when doing coding interviews as it really shows your understanding of problem solving and efficiency.

There are multiple runtimes or time complexities that Big O covers, but in this post, I’ll be covering four main time complexities:

[Graph: the four time complexities plotted to visually compare runtimes]

Constant Runtime: “O(1)”

arr = [3, 1, 6, 9, 10, 2]

def print_all(arr)
  puts "#{arr[0]}" # prints out 3
  puts "#{arr[1]}" # prints out 1
end

print_all(arr)

The runtime of this method is constant, or O(1).

The reason is that no matter how big the array gets, the number of operations we perform never changes.

It’s constant.

We’re just printing out the first two elements of the array every time.

Linear Runtime: “O(n)”

arr = [3, 1, 6, 9, 10, 2]

def print_all(arr)
  arr.each do |num|
    puts "#{num}"
  end
end

print_all(arr)

The runtime of this method is linear, or O(n).

As you can see, this time we have a loop inside the method, which iterates over the entire array and prints out each element.

However, the number of operations this loop performs will change: the loop makes one iteration for every element in the array, so the amount of work grows with the input size.

An array of size 5 will take only 5 iterations whereas an array of size 10 will take 10 iterations, which is twice as long.

So as input size increases, runtime will also increase.

Quadratic Runtime: “O(n²)”

def print_all(arr)
  arr.each do |letter1|
    arr.each do |letter2|
      puts "#{letter1}" + "#{letter2}"
    end
  end
end

print_all(["A", "B", "C"])      # prints out 9 pairs
print_all(["A", "B", "C", "D"]) # prints out 16 pairs

The runtime of this method is quadratic, or O(n²).

In this case, we have a loop inside a loop.

For each iteration of the outer loop, the inner loop iterates over the ENTIRE array before the outer loop moves on to its next element, and this continues until the outer loop reaches the end of the array.

This runtime is very inefficient.

When you have a large array, it’s best to avoid algorithms with this runtime because they will take an extremely long time.

In the method, an array of size 3 prints out 9 pairs, and an array of size 4 prints out 16 pairs: the number of iterations is the square of the input size, which is why it’s called quadratic runtime (true exponential growth, O(2ⁿ), is even worse).

Something with an array size of 100 will take 10,000 iterations….

and nobody wants that.
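If you want to verify that number, a quick throwaway counter (just a sanity check, not one of the examples above) shows the growth:

count = 0
arr = (1..100).to_a

arr.each do |a|
  arr.each do |b|
    count += 1 # one unit of work per inner iteration
  end
end

puts count # => 10000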

Logarithmic Runtime: “O(log n)”

I will not show a method for this right away, as it’s better to explain it first.

Logarithmic runtime is an extremely efficient runtime.

If an array had a size of 4,000 elements, it would take only about 12 operations to find what you are looking for, as opposed to iterating over the entire array.
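As a rough sanity check on that figure, each step of a logarithmic algorithm halves the remaining search space, so the worst case is about log₂(n) steps:

Math.log2(4000).ceil # => 12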

A classic example of an algorithm with logarithmic runtime is binary search.

Binary search works like this: let’s say you’re given an array that is ordered numerically or alphabetically.

In this case, we’ll do a numerically ordered array.

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Our goal is to find a certain number in this array, let’s say 9.

Well, your first guess is probably to make a loop and just start from the beginning of the array, but it’ll take 9 iterations for you to reach 9, which just makes it a linear runtime.

If you had an ordered array of 1 to 1000 and you wanted to find 999, you would have to iterate 999 times to get to the number.

But of course, binary search makes this easier by starting at the middle of the array.

There are two middle values here but we’ll choose 5 and start from there.

Then we look at the left and right of 5.

Is the value we’re searching for, 9, to the left or right of 5? 9 is to the right of 5, so we ignore the left side of 5 and look at the right half instead.

[5, 6, 7, 8, 9, 10]

Now we’re searching through a shortened part of the array.

And we just repeat the same process! Let’s look at the middle of this array; we’ll choose 8 for this one.

Then, we look to the left and right of 8.

9 is to the right of 8, so we completely ignore the left side of 8, and now we’re searching through an even shorter array.

[8, 9, 10]

Again, we start at the middle of the array, which is 9.

But look, the middle of the array is 9, the value we’re looking for! So now we can just retrieve that value easily.

This literally took 3 operations when the array had an input size of 10.
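To make that walkthrough concrete, here’s a minimal sketch of what binary search can look like in Ruby (one illustrative way to write it, not the only one):

def binary_search(arr, target)
  low = 0
  high = arr.length - 1

  while low <= high
    mid = (low + high) / 2    # look at the middle of the current range
    return mid if arr[mid] == target
    if arr[mid] < target
      low = mid + 1           # target is to the right, ignore the left half
    else
      high = mid - 1          # target is to the left, ignore the right half
    end
  end

  nil                         # target not found
end

binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 9) # => 8 (the index of 9)

Each pass through the loop halves the range being searched, which is exactly why the number of steps grows like log n instead of n.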

Conclusion

Knowing the runtimes of algorithms and how to improve them is why Big O Notation exists.

Developing more efficient algorithms leads to faster runtimes and less memory usage (space complexity).

(I only went over the time complexity aspect of Big O, but there’s also space complexity, which I may cover in a future blog.) And like I said before, this doesn’t cover everything in Big O time-complexity-wise; I only covered the fundamentals and the most common runtimes, so don’t take everything here as “oh, that’s it about Big O, now I’m set for coding interviews!”

Just be sure to do some research on the other runtimes and the algorithms that exemplify them.
