If not, call getContiguousIds again.

When it returns, we’ll have an updated list of contiguous nodes, which is returned to our reducer and used as the state for the next adjacentId.

You might be wondering where we’re adding values to contiguousIds.

That happens when we concat the current node onto contiguousIds.

Each time we recurse further, we’re making sure to add the current node onto the list of contiguousIds before looping its adjacentIds.

Always adding the current node ensures we don’t infinitely recurse.
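To make that concrete, here’s a minimal sketch of the recursive step being described (not the article’s exact code). It assumes each node looks like { id, color, adjacentIds } and that nodes is a Map keyed by id; the real data shape may differ.

```javascript
const getContiguousIds = ({
  contiguousIds = [],
  node,
  nodes,
}) => (
  node.adjacentIds.reduce(
    (updatedContiguousIds, adjacentId) => {
      const adjacentNode = nodes.get(adjacentId)

      // Skip neighbors we've already collected or that don't match in color.
      if (
        updatedContiguousIds.includes(adjacentId)
        || adjacentNode.color !== node.color
      ) {
        return updatedContiguousIds
      }

      // Recurse with the updated list so the next adjacentId sees it.
      return getContiguousIds({
        contiguousIds: updatedContiguousIds,
        node: adjacentNode,
        nodes,
      })
    },
    // Add the current node before looping its adjacentIds;
    // this is what keeps us from infinitely recursing.
    contiguousIds.concat(node.id),
  )
)
```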

The Loop

The second half of this function also goes through each node once.

We have a reducer surrounding the recursive function.

This one checks if our node’s been scanned.

If so, keep looping until we find a node that hasn’t or until we fall out of the loop.

If our node hasn’t been scanned, call getContiguousIds and wait till it’s done.

This is synchronous, but it can take some time.

Once it comes back with a list of contiguousIds, check those against the largestContiguousIds list.

If larger, store that value.

At the same time, we’re going to add those contiguousIds to our scannedIds list to mark where we’ve been.

It’s pretty simple when you see it all laid out.
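As a rough sketch under the same assumptions, that surrounding reducer might look something like this, reusing the getContiguousIds sketch from above:

```javascript
const getLargestContiguousIds = (nodes) => (
  [...nodes.values()].reduce(
    ({ largestContiguousIds, scannedIds }, node) => {
      // Already part of a block we've scanned? Keep looping.
      if (scannedIds.includes(node.id)) {
        return { largestContiguousIds, scannedIds }
      }

      // Synchronous, but this can take a while for large blocks.
      const contiguousIds = getContiguousIds({ node, nodes })

      return {
        // Keep whichever list is larger.
        largestContiguousIds: (
          contiguousIds.length > largestContiguousIds.length
            ? contiguousIds
            : largestContiguousIds
        ),
        // Mark everywhere we've been so we never rescan this block.
        scannedIds: scannedIds.concat(contiguousIds),
      }
    },
    { largestContiguousIds: [], scannedIds: [] },
  )
  .largestContiguousIds
)
```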

Execution

Even with 10K items, it didn’t run into stack overflow issues with 3 randomized colors.

If I changed everything to use a single color, I ran into a stack overflow.

That’s because our recursive function was going through 10K recursions.

Sequential Iteration

Since memory is larger than the function call stack, my next thought was doing the whole thing in a single loop.

We’re going to keep track of a list of node lists.

We’ll keep adding to them and linking them together until we fall out of the loop.

This method requires we keep all possible node lists in memory until we’ve completed the loop.

In the recursive example, we only kept the largest list in memory.

Another crazy one.

Let’s break this down from the top.

We’re looping each node once.

But now we have to check if our id is in the list of node lists: contiguousIdsList.

If it’s not in any list of contiguousIds, we’ll add it and its adjacentIds.

That way, while looping, something else will link up to it.

If our node is in one of the lists, it’s possible it’s in quite a few of them.

We want to link all those together, and remove the unlinked ones from the contiguousIdsList.

That’s it.

After we’ve come up with a list of node lists, then we check which one’s the largest, and we’re done.
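Here’s a hedged sketch of that list-of-lists approach, again assuming { id, color, adjacentIds } nodes in a Map; the inline color filter and the helper name are my own choices, not the article’s code:

```javascript
const getLargestContiguousIdsSequentially = (nodes) => {
  let contiguousIdsList = []

  nodes.forEach((node) => {
    // Only same-colored neighbors can ever be contiguous with this node.
    const linkedIds = [
      node.id,
      ...node.adjacentIds.filter((adjacentId) => (
        nodes.get(adjacentId).color === node.color
      )),
    ]

    // Every existing list that touches this node or its neighbors
    // is part of the same contiguous block.
    const linkedLists = contiguousIdsList.filter((contiguousIds) => (
      contiguousIds.some((id) => linkedIds.includes(id))
    ))

    // Merge the touched lists with the current node and its neighbors,
    // then drop the now-stale unmerged copies from the list of lists.
    const mergedIds = [...new Set(linkedLists.flat().concat(linkedIds))]

    contiguousIdsList = (
      contiguousIdsList
      .filter((contiguousIds) => !linkedLists.includes(contiguousIds))
      .concat([mergedIds])
    )
  })

  // After the single pass, the largest list wins.
  return contiguousIdsList.reduce(
    (largestContiguousIds, contiguousIds) => (
      contiguousIds.length > largestContiguousIds.length
        ? contiguousIds
        : largestContiguousIds
    ),
    [],
  )
}
```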

Execution

Unlike the recursive version, this one does finish when all 10K items are the same color.

Other than that, it’s pretty slow; much slower than I originally expected.

I’d forgotten to account for looping the list of lists in my performance estimates, and that clearly had an impact on performance.

Random Iteration

I wanted to take the methodology behind the recursive method and apply it iteratively.

I spent a good portion of a night trying to remember how to change the index in the loop dynamically, and then I remembered while(true).

It’s been so long since I’ve written traditional loops, I’d completely forgotten about it.

Now that I had my weapon, I moved in for the attack.

As I spent a lot of time trying to speed up the observable versions (more on that later), I decided to go in lazy and old-school-mutate the data.

The goal of this algorithm was to hit each node exactly once and only store the largest contiguous block.

Even though I wrote this like most people probably would, it’s by far the least readable.

I can’t even tell you what it’s doing without going through it myself first, top to bottom.

Instead of adding to a list of previously scanned IDs, we’re splicing out values from our remainingNodes array.

Lazy! I wouldn’t ever recommend doing this yourself, but I was at the end of my rope creating these samples and wanted to try something different.

The Breakdown

I broke this out into 3 sections separated by if blocks.

Let’s start with the middle section.

We’re checking for queuedIds.

If we have some, we do another loop through the queued items to see if they’re in our remainingNodes.

The third section depends on the results of the second section.

If we don’t have any queuedIds, and remainingNodesIndex is -1, then we’re done with that node list and we need to start at a new root node.

The new root node is always at index 0 because we’re splicing our remainingNodes.

Back at the top of our loop, I could’ve used while(true), but I wanted a way out in case something went wrong.

This was helpful while debugging since infinite loops can be a pain to figure out.

After that, we’re splicing out our node.

We’re adding it to our list of contiguousIds, and adding the adjacentIds onto the queue.
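Putting that breakdown together, a mutation-heavy sketch of the approach might look like the following. Names like remainingNodes, queuedIds, and contiguousIds come straight from the description above, while the maxLoops escape hatch and the findIndex lookup are my own guesses at the details:

```javascript
const getLargestContiguousIdsIteratively = (nodes) => {
  const remainingNodes = [...nodes.values()]
  let largestContiguousIds = []
  let contiguousIds = []
  let queuedIds = []
  let remainingNodesIndex = 0

  // Safety valve instead of while(true): handy when debugging infinite loops.
  let maxLoops = remainingNodes.length * 10

  while (remainingNodes.length > 0 && maxLoops-- > 0) {
    // Splice out the current node so it can never be visited twice.
    const [node] = remainingNodes.splice(remainingNodesIndex, 1)

    contiguousIds.push(node.id)

    // Queue same-colored neighbors to visit next.
    queuedIds.push(
      ...node.adjacentIds.filter((adjacentId) => (
        nodes.get(adjacentId).color === node.color
      )),
    )

    // Pull queued ids until one of them is still in remainingNodes.
    remainingNodesIndex = -1

    while (queuedIds.length > 0 && remainingNodesIndex === -1) {
      const queuedId = queuedIds.shift()

      remainingNodesIndex = remainingNodes.findIndex((remainingNode) => (
        remainingNode.id === queuedId
      ))
    }

    // Nothing queued and nothing found: this block is finished,
    // so compare it and start a new root node at index 0.
    if (remainingNodesIndex === -1) {
      if (contiguousIds.length > largestContiguousIds.length) {
        largestContiguousIds = contiguousIds
      }

      contiguousIds = []
      queuedIds = []
      remainingNodesIndex = 0
    }
  }

  return largestContiguousIds
}
```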

Execution

This ended up being nearly as fast as the recursive version.

It was the fastest of all the algorithms when all nodes were the same color.

Data-Specific Optimizations

Grouping Similar Colors

Since we know only blues go with blues, we could’ve grouped similarly colored nodes together for the sequential iteration version.

Splitting it up into 3 smaller arrays lowers our memory footprint and the amount of looping we need to do in our list of lists.

Still, that doesn’t solve the situation where all the colors are the same, so this wouldn’t fix our recursive version.

It also means we could multi-thread the operation, lowering the execution time by nearly a third.

If we execute these sequentially, we just need to run the largest of the three first.

If the largest is larger than the other two, you don’t need to check them.
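A small sketch of the grouping step, assuming the same node shape as before (nodes is the same Map used earlier):

```javascript
// Bucket nodes by color so each color can be scanned (or threaded) separately.
const groupNodesByColor = (nodes) => (
  [...nodes.values()].reduce(
    (nodesByColor, node) => ({
      ...nodesByColor,
      [node.color]: (nodesByColor[node.color] || []).concat(node),
    }),
    {},
  )
)

// Hypothetical usage: scan the biggest bucket first; if its largest block
// already beats the other buckets' total sizes, skip them entirely.
const colorGroups = (
  Object
  .values(groupNodesByColor(nodes))
  .sort((groupA, groupB) => groupB.length - groupA.length)
)
```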

Largest Possible Size

Instead of checking if we have the largest list at certain intervals, we could be checking each iteration.

If the largest set is greater than or equal to half the available nodes (5K or higher), it’s obvious we already have the largest.

Using the random iteration version, we could find the largest list size so far and see how many nodes are remaining.

If there are fewer nodes remaining than the size of the largest list, we’ve already got the largest.
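In code, that early-exit check could be as simple as this sketch, assuming largestContiguousIds and remainingNodes from the random iteration version are in scope:

```javascript
const halfOfAllNodes = nodes.size / 2

const haveLargestAlready = (
  // More than half the grid is one block; nothing else can beat it.
  largestContiguousIds.length >= halfOfAllNodes
  // Or there aren't enough nodes left to form a bigger block.
  || remainingNodes.length < largestContiguousIds.length
)

if (haveLargestAlready) {
  // Safe to stop looping early here.
}
```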

Use Recursion

While recursion has its limitations, we can still use it.

All we have to do is check the number of remaining nodes.

If it’s under the stack limit, we can switch to the faster recursive version.

Risky, but it’d definitely improve the execution time as you got further along in your loop.
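A hedged sketch of that hybrid check; STACK_SAFE_LIMIT and getLargestRecursively are hypothetical names, and the real safe depth depends on the JavaScript engine:

```javascript
// Rough guess at a depth the engine can handle without overflowing.
const STACK_SAFE_LIMIT = 8000

const canRecurseSafely = remainingNodes.length < STACK_SAFE_LIMIT

if (canRecurseSafely) {
  // Even if every remaining node were part of one giant block, the recursion
  // depth stays under the limit, so switching to the recursive version is safe.
  largestContiguousIds = getLargestRecursively(remainingNodes)
}
```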

Use a `for` Loop

Since we know our max item count, there’ll be a minor benefit from switching the reduce function to a traditional for loop.

For whatever reason, Array.prototype methods are incredibly slow compared to for loops.
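As a trivial illustration (not from the article), here’s the same sum written both ways:

```javascript
// Same result either way; the for loop just avoids the per-iteration
// callback overhead of Array.prototype.reduce.
const sumWithReduce = (values) => (
  values.reduce((total, value) => total + value, 0)
)

const sumWithForLoop = (values) => {
  let total = 0

  for (let i = 0; i < values.length; i++) {
    total += values[i]
  }

  return total
}
```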

Use Tail Recursion

In the same way that I didn’t go over the observable versions in this particular article, I think tail recursion requires an article of its own.

It’s a big topic with a lot to explain, but while it would allow the recursive version to run, it might not end up being faster than the while loop like you’d expect.

RxJS: Maintainability vs Performance

There are ways to rewrite these functions where you’ll have an easier time comprehending and maintaining them.

The primary solution I came up with used RxJS in the Redux-Observable style, but without Redux.

That was actually my challenge for the article.

I wanted to code the regular ways, then stream the data using RxJS to see how far I could push it.

I made 3 versions in RxJS and took a few liberties to speed up the execution time.

Unlike my transducer articles, all three wound up slower even if I increased rows and columns.

I spent my nights that week dreaming up possible solutions and combing over every inch of code.

I’d even lie on the ground, close my eyes, and think think think.

Each time, I came up with better ideas but kept hitting JavaScript speed limitations.

There’s a whole list of optimizations I could’ve done, but at the cost of code readability.

I didn’t want that (still used one anyway).

I finally got one of the observable solutions — now the fastest — running in half the time.

That was the best improvement overall.

The only time I could beat out the memory-heavy sequential iteration with observables was when every node is the same color.

That was the only time.

Technically that beats the recursive one too because it stack overflows in that scenario.

After all that work figuring out how to stream the data with RxJS, I realized it’s way too much for this one article.

Expect a future article to go over those code examples in detail.

If you want to see the code early, you can see it on GitHub: https://github.com/Sawtaytoes/JavaScript-Performance-Interview-Question

Final Stats

In general, the largest contiguous block was anywhere from 30–80 nodes on average.

These are my numbers:

No matter how many times I ran the tests, the relative positions of each method remained the same.

The Redux-Observable Concurrent method suffered when all nodes were the same color.

I tried a bunch of things to make it faster, but nothing worked :/.

Game Development

I’ve come across this code twice in my career.

It was at a much smaller scale, in Lua, and happened while working on my indie game Pulsen.

In one situation, I was working on a world map.

It had a predefined list of nodes, and I processed this list in real-time.

This allowed hitting [LEFT], [RIGHT], [UP], and [DOWN] to move you around the world map even if the angle was slightly off.

I also wrote a node generator for an unknown list of items with X and Y values.

Sound familiar? I also had to center that grid on the screen.

It was a heckuva lot easier to do this in HTML than it was in the game engine though.

Although, centering a bunch of absolutely-positioned divs isn’t easy either.

In both of these solutions, the real-time execution time wasn’t a big deal because I did a lot of preprocessing when you loaded up the game.

I want to emphasize that TechLead’s question might be something you come across in your career. Maybe. But it’s rare that speed is ever a factor in typical JavaScript applications.

Based on TechLead’s other videos, he was using Java at Google.

I’m assuming the positions he interviewed for cared about execution speed.

They probably had a bunch of worker tasks processing huge chunks of data so a solution like this might’ve been necessary.

But then, it could’ve been a job working on HTML and CSS, and he was just trolling the interviewee; who knows!

Conclusion

As you saw with the final stats, the worst-looking code was almost the fastest and accomplished all our requirements.

Good luck maintaining that sucker!

From my own experience, I spent longer developing the non-RxJS versions.

I think it’s because the faster versions required thinking holistically.

Redux-Observable allows you to think in small chunks.

This was a really fun and frustrating problem to solve.

It seemed really difficult at first, but after breaking it up into pieces, the pieces all came together :).

More Reads

If you wanna read more on JavaScript performance, check out my other articles:

Speed Up JavaScript Array Processing
Using Transducers to Speed Up JavaScript Arrays

If you liked what you read, please check out my other articles on similar eye-opening topics:

Callbacks: The Definitive Guide
Promises: The Definitive Guide
Feature Flags: Be Truly Agile
Make Poop from Food Emojis
