Peak Finding Problems and Algorithm Efficiency

Louis Raymond

I’m a full-stack developer who enjoys writing about technology (among other things). If this article interests or helps you, follow me on Medium and Twitter for more content like this.

The Peak Finding Problem: Why Care?

While it’s a bit of a toy problem, the peak finding problem is a great platform to start getting to grips with some of the core concepts in algorithmic thinking.

It serves as an excellent introduction to facing questions about how to efficiently solve problems with large amounts of data.

Why Worry About Algorithmic Efficiency?

You might think that efficiency ought not to be too much of a concern in modern times.

Computers are now more powerful than ever; they can perform staggering numbers of computations every second, and their performance improves year in, year out.

Why worry about algorithm efficiency at all? The answer is that the performance gains from optimizing our algorithms can dwarf the gains we get from improving our hardware, and this is especially true when we are dealing with large data sets.

Consider two algorithms. One has “linear time complexity”, meaning that the time it takes to finish running rises proportionally to the size of the data set.

Here, a data set that triples in size will take three times as long to deal with.

The other has “cubic time complexity”, meaning that if we increase the size of a data set threefold, our program will take 27 times as long to run.
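To make that gap concrete, here is a small sketch (the function names and step counts are purely illustrative, not measurements of any real program) that prints how the work of a linear and a cubic algorithm grows with the input size:

// Illustrative only: how the cost of a linear and a cubic algorithm
// grows as the input size n increases.
function linearSteps(n) {
  return n;       // O(n): work grows in direct proportion to n
}

function cubicSteps(n) {
  return n ** 3;  // O(n^3): tripling n multiplies the work by 27
}

for (const n of [10, 100, 1000]) {
  console.log(`n = ${n}: linear ≈ ${linearSteps(n)} steps, cubic ≈ ${cubicSteps(n)} steps`);
}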

Given the size of data sets in the modern world (where Facebook has data for over 2 billion users, and computers are set to work analysing the human genome, which has well over 3 billion base pairs to sift through), it becomes clear that there is immense value in learning to optimise our procedures. With such large sets of data, algorithmic efficiency can sometimes make the difference between providing a service and providing nothing at all.

Peak Finding in One Dimension

Take a look at the following question:

[a, b, c, d, e, f, g, h, i]

In the above array, we are using letters to represent numbers of unknown value, and we assume that all the numbers involved are positive.

a is a peak if a≥b — since it only has one neighbour.

b is a peak if and only if b≥a and b≥c.

Write an algorithm that finds a peak (if a peak exists).

A Naive Solution

At first glance, this looks like a trivial problem, with the solution being: “Look right, then look left. If you’re equal to or higher than your neighbors, you’re a peak.”

Furthermore, in this case, we do not need to account for the possibility that no peak exists.

Either every number is the same (in which case the whole set consists of peaks), or the array has a maximum element, which is at least as large as its neighbours and is therefore a peak.

Let’s write this example out in JavaScript code:

function peak_finder(array){
  let counter = 0
  let peak = 0
  let peak_index = 0
  while (counter < array.length){
    if (counter === 0){
      // First element: a peak if it is at least as large as its right-hand neighbour
      if (array[0] >= array[1]){
        peak = array[0]
        peak_index = counter
        return `The ${peak_index} indexed number, ${peak}, is a peak`
      }else{
        counter += 1
      }
    }else if (counter === array.length - 1){
      // Last element: a peak if it is at least as large as its left-hand neighbour
      if (array[array.length - 1] >= array[array.length - 2]){
        peak = array[array.length - 1]
        peak_index = counter
        return `The ${peak_index} indexed number, ${peak}, is a peak`
      }else{
        counter += 1
      }
    }else{
      // Interior element: a peak if it is at least as large as both neighbours
      if (array[counter] >= array[counter + 1] && array[counter] >= array[counter - 1]){
        peak = array[counter]
        peak_index = counter
        return `The ${peak_index} indexed number, ${peak}, is a peak`
      }else{
        counter += 1
      }
    }
  }
}

OK, while the above code could probably be refactored to make it a little slicker, it suffices for our discussion.
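As a quick sanity check, here is a hypothetical usage example (the sample arrays are made up for illustration). Note that a strictly ascending array is the worst case for this approach: the loop has to visit every element before it reaches the peak at the end.

console.log(peak_finder([1, 3, 2, 5, 4]))  // "The 1 indexed number, 3, is a peak"
console.log(peak_finder([1, 2, 3, 4, 5]))  // worst case: scans the whole array, peak found at index 4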

How Efficient Is This Method?

This algorithm has linear time complexity.

This means (roughly speaking) that in the worst-case scenario, the time taken to execute this algorithm would be some constant multiplied by the length of the input array.

We can express this by saying that this algorithm’s time complexity belongs to the class O(n).

For large data sets, this may be less than ideal.

Is there any way we could do better?

A Better Solution: Binary Search Algorithm

There is a better way to solve this problem, one that runs much faster for large data sets.

This method uses a technique known as recursion.

Code detailing the method in JavaScript might look something like this:

function peak_finder2(array){
  if (array.length === 0){
    return `Array cannot be empty`
  }else if (array.length === 1){
    return array[0]
  }else{
    let mid_index = Math.floor(array.length * 0.5)
    if (array[mid_index + 1] > array[mid_index]){
      // The right-hand neighbour is bigger, so a peak must exist in the right half
      return peak_finder2(array.slice(mid_index + 1))
    }else if (array[mid_index - 1] > array[mid_index]){
      // The left-hand neighbour is bigger, so a peak must exist in the left half
      return peak_finder2(array.slice(0, mid_index))
    }else{
      // The middle element is at least as large as both neighbours: it is a peak
      return array[mid_index]
    }
  }
}

This is a better way to solve the problem at hand.
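As a hypothetical usage example (the input array is made up; note that this version returns the peak’s value rather than a message about its index):

console.log(peak_finder2([1, 3, 2, 5, 4]))  // prints 5: the search moves right from the middle and finds a peak there

One design note: array.slice copies elements, so a production version might pass start and end indices around instead; the recursive halving idea stays the same.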

Let’s take a look at why.

Working Out Time-Complexity

A good way to think about time-complexity is to think about how many computations will be performed.

In this example, each of the individual operations involved (comparing elements and returning a value) takes roughly constant time.

The worst case time complexity of the algorithm is going to be some multiple of the number of times these operations are performed.

It’s quite clear that the number of computations increases with the size of the array.

However, the relationship between the size of the array and the number of computations is not linear.

Every time the program doesn’t find a peak, it breaks the array it is dealing with in half.

In the worst case, this process will stop when the array has been broken down to just one number.

This reframes the question: how many times can we halve an array of length N before we are left with just one element?

In mathematical notation, our question looks like this:

1 = N / 2ˣ, solve for x

If you work through the problem, you will arrive at the conclusion that

x = log₂(N)

So, now we have worked out the worst-case number of computations; our worst-case time will be some multiple of this.

Thus, the worst-case time complexity of our algorithm belongs to the class O(k log₂(N)), where k is some constant.
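To put a number on that, here is a quick, hypothetical check (the input sizes are chosen arbitrarily for illustration) of how many halvings arrays of different lengths need:

// For an array of length N, the number of halvings needed to get down to a
// single element is roughly log2(N) (rounded up when N is not a power of two).
for (const N of [8, 1024, 1_000_000, 3_000_000_000]) {
  console.log(`N = ${N}: about ${Math.ceil(Math.log2(N))} halvings`);
}
// Even for three billion elements, only around 32 halvings are needed.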

Let’s compare the worst-case time complexity of both of the methods we have come up with thus far, and get a sense of how much better this method fares as the size of our input N increases:

[Graph: T = N (in blue) vs T = log₂(N) (in green), where N is the size of the input.]

As N gets larger and larger, this algorithm becomes increasingly preferable to the algorithm we implemented at the beginning of this article.
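If you want to see that gap without the graph, a small sketch (again with arbitrary, illustrative input sizes) comparing the rough step counts of the two approaches makes the same point:

// Rough step counts: the linear scan needs about N steps in the worst case,
// while the halving approach needs about log2(N).
for (const N of [100, 10_000, 1_000_000]) {
  console.log(`N = ${N}: linear ≈ ${N} steps, halving ≈ ${Math.ceil(Math.log2(N))} steps`);
}

The takeaway matches the graph: as N grows, the divide-and-conquer approach pulls further and further ahead.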
