It seems a bit slow for such a high-end CPU.
Let’s see what we can do to speed things up.
The fast way

To understand how we would want Python to process things in parallel, it helps to think intuitively about parallel processing itself.
Let’s say we have to perform a single repetitive task, hammering nails into a piece of wood, and that we have 1000 nails in our bucket.
If we say that each nail takes 1 second, then with 1 person we would finish the job in 1000 seconds.
But if we have 4 people on the team, we would divide the bucket into 4 equal piles and then each person on the team would work on their own pile of nails.
With this method, we would finish in only 250 seconds!

We can have Python do something similar for us in our example here with the 1000 images:

1. Split the list of jpg files into 4 smaller groups.
2. Run 4 separate instances of the Python interpreter.
3. Have each instance of Python process one of the 4 smaller groups of data.
4. Combine the results from the 4 processes to get the final list of results.

The great part about all this is that Python handles all the hard work for us. We just tell it which function we want to run and how many instances of Python to use, and it does all the rest! We only have to change 3 lines of code.
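The full fast script isn’t reproduced here, so the sketch below shows roughly what fast_res_conversion.py could look like. Only the concurrent.futures import, the with ProcessPoolExecutor() line, and the executor.map() call reflect what is described next; the body of load_and_resize, the Pillow-based resize, and the images folder path are illustrative assumptions.

# A minimal sketch of the parallel script (assumed name: fast_res_conversion.py).
# Only the concurrent.futures lines are taken from the text; the load_and_resize
# body, the Pillow resize, and the "images" folder are illustrative assumptions.
import concurrent.futures
import glob
import os

from PIL import Image  # assumes Pillow is installed


def load_and_resize(image_filename):
    # Open one jpg and shrink it to an example 600x600 size.
    img = Image.open(image_filename)
    return img.resize((600, 600))


if __name__ == "__main__":
    # The __main__ guard lets the worker processes import this module
    # without re-running the pool setup themselves.
    image_files = glob.glob(os.path.join("images", "*.jpg"))

    # Start one Python process per CPU core and farm the images out to them.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        executor.map(load_and_resize, image_files)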
From the above code, the line

with concurrent.futures.ProcessPoolExecutor() as executor:

boots up as many Python processes as you have CPU cores, in my case 6. The actual processing line is this one:

executor.map(load_and_resize, image_files)

executor.map() takes as input the function you would like to run and a list where each element of the list is a single input to our function. Since we have 6 cores, we will be processing 6 items from that list at the same time!

If we again run our program using:

time python fast_res_conversion.py
We get a run time of 1.14265 seconds, a nearly 6x speed-up!

Note: There is some overhead in spawning more Python processes and shuffling data around between them, so you won’t always get this much of a speed improvement. But overall, your speed-up will usually be quite significant.
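To see where that overhead comes from, the short sketch below (my own illustration, not from the original code) times a process pool against a plain loop on work that is far too cheap per item; here the pool typically loses, because starting workers and pickling every input and result costs more than the squaring itself.

import time
from concurrent.futures import ProcessPoolExecutor

def tiny_task(x):
    # Far too little work per item to be worth shipping to another process.
    return x * x

if __name__ == "__main__":
    data = list(range(10_000))

    # Plain loop: no pickling, no process start-up.
    start = time.perf_counter()
    serial_results = [tiny_task(x) for x in data]
    print(f"serial: {time.perf_counter() - start:.3f}s")

    # Process pool: every input and result is pickled and shuttled between processes.
    start = time.perf_counter()
    with ProcessPoolExecutor() as executor:
        pooled_results = list(executor.map(tiny_task, data))
    print(f"process pool: {time.perf_counter() - start:.3f}s")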
Is it always super fast?

Using Python parallel pools is a great solution when you have a list of data to process and you are performing a similar computation on each data point.
But it’s not always going to be the perfect solution.
The items handed to a parallel pool won’t be processed in any predictable order: executor.map() does return results in the same order as its inputs, but the individual items are completed whenever a worker process gets to them. If your processing depends on items being handled one after another in a fixed order (for example, through side effects like appending to a shared file), then this method probably isn’t right for you.
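If ordering matters to you, the distinction is easy to see directly. The sketch below is my own illustration (not from the original code), using a hypothetical slow_double worker: submit() plus as_completed() hands results back in completion order, while executor.map() keeps input order.

import random
import time
from concurrent.futures import ProcessPoolExecutor, as_completed

def slow_double(x):
    # Sleep a random amount so items finish out of order.
    time.sleep(random.uniform(0, 0.5))
    return 2 * x

if __name__ == "__main__":
    with ProcessPoolExecutor() as executor:
        # Completion order: whichever item finishes first comes out first.
        futures = [executor.submit(slow_double, x) for x in range(8)]
        print([f.result() for f in as_completed(futures)])

        # executor.map() still returns results in input order: [0, 2, 4, ...]
        print(list(executor.map(slow_double, range(8))))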
The data you are processing also needs to be a type that Python knows how to “pickle”.
Luckily, these are quite common.
From the official Python documentation, the following can be pickled:

- None, True, and False
- integers, floating point numbers, complex numbers
- strings, bytes, bytearrays
- tuples, lists, sets, and dictionaries containing only picklable objects
- functions defined at the top level of a module (using def, not lambda)
- built-in functions defined at the top level of a module
- classes that are defined at the top level of a module
- instances of such classes whose __dict__ or the result of calling __getstate__() is picklable (see section Pickling Class Instances for details)
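When in doubt, you can check whether an object qualifies by trying to pickle it directly. The sketch below is an illustration of my own (not from the original code): a function defined with def at module top level pickles fine, while a lambda does not.

import pickle

def top_level_function(x):
    # Defined with def at module level, so it is picklable.
    return x + 1

if __name__ == "__main__":
    pickle.dumps(top_level_function)        # works
    pickle.dumps([1, 2.0, "three", None])   # works: built-in types only

    try:
        pickle.dumps(lambda x: x + 1)       # lambdas are not picklable
    except Exception as err:
        print(f"lambda could not be pickled: {err}")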