Generating color palettes from movies with Python

Andris Gauracs, Jan 10

How we can use Python to automatically generate Pinterest-style color palette images from iconic scenes of our favourite movies.

If you go to Pinterest and type in “movie color palettes”, you can find many great examples of color palettes from iconic scenes of various well-known movies.

I thought it would be a cool idea to create a program that can automatically generate such color palettes.

In this tutorial I will walk you through the steps to create such a program.

The program is a single Python script that launches an instance of the VLC player, supplemented with several buttons and other GUI elements we can use to create our color palettes.

The color palettes themselves are created using a color clustering method with an algorithm called K-means.

Let's dive in! The full project code is available on my GitHub page.

Note: You can also check out the video version of this tutorial on my channel.

1. Installing the prerequisites

For this project we will need three essential packages:

Python VLC — a Python binding for the VLC Player framework.

We will use this to create an instance of a VLC player in our program.

PyQt4 — a GUI package that provides the tools we need to build actual functioning programs with windows, buttons and other graphical UI elements.

OpenCV — A very powerful image manipulation framework, which we will use to generate the final output images with the color palettes.

We will use pip to install our first package, Python VLC.

Open up your Terminal and run the following command:

$ pip install python-vlc

To get PyQt, we need to install it using Homebrew.

Note: We specifically need to install version 4 for this tutorial, so we will specify that when writing our install command:

$ brew install pyqt@4

Lastly, we need to install the OpenCV framework.

We can do this easily through pip, but this package also relies heavily on other packages, such as matplotlib and numpy, so we will install those first:

$ pip install matplotlib
$ pip install numpy
$ pip install scikit-learn
$ pip install opencv-python

Note: At the time of writing this tutorial, I am installing the 2nd version of OpenCV, so in Python it will be imported as the cv2 module.

In the future, however, the version might get updated.

Before we get to the actual coding part, we need to verify that all the packages are actually installed. After all the installations are done, it is always a good idea to open up Python via the command line and try to import each package one by one.

If no errors show up on the command line interface after each import, it is safe to say that everything is working, and we can exit Python and close the Terminal window.

Here's what the process looks like.

First, open Python by executing the following command:

$ python

Now let's import each module sequentially:

import vlc
import PyQt4
import cv2

If the interpreter lets us move on to the next line after each import, everything is working great, and we can exit the Python interface by executing exit(). This will close Python on the command line.

We can also close Terminal itself at this point.

2. Getting the VLC Player instance for our project

The Python VLC repository has some great ready-to-use samples on how to incorporate the VLC Player instance in various Python project setups.

For this project we will copy the sample code that showcases a ready-to-use Python VLC player combined with a PyQt GUI interface.

This will be the backbone of our color palette program.

Let's head to qtvlc.py and copy the file contents to use in our own project. We can then make a new file in our root project directory and call it vlc_player.py.

We can now check out how the file actually works.

Open up the Terminal window, use the cd command to get to our project folder and run the following command:

$ python vlc_player.py

Once we execute the command, we should see the VLC player window being launched. So far so good: we now have a working instance of a VLC player.

We now need to create our main file, where we will do all of our custom coding.

Either create a new file and name it main.py, or use the Terminal command to do this:

$ touch main.py

3. Modifying the UI

Before we begin working on main.py, we should go to the vlc_player.py file and comment out the last part of the file, which acts as the program launcher.

Since we will not use this file as a stand-alone script and will instead use it as a helper class file, we should move this part to our main file.

In vlc_player.py, either comment out this part or delete it completely.

In our main.py file, let's import all the necessary packages:

from vlc_player import Player
import sys
import os
from PyQt4 import QtGui, QtCore

Then let's set up our global variables, which we will use later in our project; a sketch follows below.
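The article does not show the exact values, so here is a minimal sketch of what these globals (together with the extra imports the later image-processing code will need) might look like. The names offset, margin, borderSize and framesTakenList are referenced later in the tutorial; framesToTake and all of the concrete numbers are assumptions.

import cv2
import numpy as np
from sklearn.cluster import KMeans

# number of snapshots to collect before the final image is stitched
# (the UI later shows a grid of ten thumbnail labels)
framesToTake = 10

# palette parameters referenced later in the tutorial
offset = 2        # how many of the darkest cluster colors to skip
margin = 10       # spacing in pixels between the palette color blocks
borderSize = 50   # white border around the final stitched image

# holds every frame-plus-palette image captured so far
framesTakenList = []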

Now the fun part begins. We will create a subclass of the existing class found in the vlc_player.py file, which will inherit all the original attributes of that class. We will then append our own custom functions on top of it to make our custom program.

Let's start by defining our inherited class; a skeleton is sketched below.
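A minimal skeleton of the subclass, assuming the sample's player class is called Player and takes an optional master argument, as in the python-vlc Qt example:

class Custom_VLC_Player(Player):
    # VLC player window extended with our palette-capture controls

    def __init__(self, master=None):
        # initialize the original player UI first
        Player.__init__(self, master)
        # our custom widgets (snapshot button, spinbox, thumbnail
        # labels) will be added here in the next step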

We will come back to this class later, but the next thing we have to do is to set up the main program launcher code that we previously deleted from vlc_player.py.

We will leave everything the same as in the original except for the window dimensions.

We will use a new size, just because this creates a better-looking UI layout for our particular project; a sketch of the launcher follows below.
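This is roughly the launcher block from the python-vlc qtvlc.py sample, adapted to use our subclass; the window dimensions here are placeholder values, and the OpenFile call assumes your copy of the sample exposes that helper:

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    player = Custom_VLC_Player()
    player.show()
    # larger than the sample's default, to leave room for the
    # snapshot controls and the thumbnail row
    player.resize(740, 600)
    if sys.argv[1:]:
        player.OpenFile(sys.argv[1])
    sys.exit(app.exec_())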

Now let's run python main.py. We should see the same window as before.

Nothing has changed yet — we have just created our custom class, which does the same thing as the original vlc_player class.

So far so good.

Let’s start adding our custom code.

Let's go back to the init function of our Custom_VLC_Player class and add the following code, which will add our new custom UI elements to the player window.

Modify the init function to match the code sketched below; if we run the program again, we will see an updated layout in our main window.
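A sketch of the extended init function. The widget names self.sp and self.snapbutton and the two commented-out connections match what the tutorial references later; the thumbnail label list, the hbuttonbox layout attribute (taken from the python-vlc Qt sample) and all sizes are assumptions:

    def __init__(self, master=None):
        Player.__init__(self, master)

        # spinbox used by the valuechange handler defined in the next
        # section (what it controls is an assumption in this sketch)
        self.sp = QtGui.QSpinBox(self)
        self.sp.setRange(1, 10)
        # self.sp.valueChanged.connect(self.valuechange)  # uncommented later

        # button that captures the currently visible frame
        self.snapbutton = QtGui.QPushButton("Take snapshot", self)
        # self.connect(self.snapbutton, QtCore.SIGNAL("clicked()"), self.take_snapshot)

        # a row of ten labels that will hold thumbnails of the
        # generated color palettes
        self.thumbnail_labels = []
        for i in range(10):
            label = QtGui.QLabel(self)
            label.setFixedSize(64, 48)
            self.thumbnail_labels.append(label)
            self.hbuttonbox.addWidget(label)

        # add our new controls to the existing button row of the sample
        self.hbuttonbox.addWidget(self.sp)
        self.hbuttonbox.addWidget(self.snapbutton)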

4. Defining the functions

Now we will define a valuechange function, which will just update the state of our UI elements throughout the frame capture process.

Right after the init function, let's define this new function; a sketch is shown below.
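A minimal sketch of valuechange, assuming the spinbox sets how many snapshots we plan to take; exactly which UI state gets refreshed is not spelled out in the article, so the button-label update here is only an illustration:

    def valuechange(self):
        # read the new spinbox value and refresh the dependent UI state
        global framesToTake
        framesToTake = self.sp.value()
        remaining = framesToTake - len(framesTakenList)
        self.snapbutton.setText("Take snapshot (%d left)" % remaining)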

At this point, we can go back to the init function and uncomment the following line:

self.sp.valueChanged.connect(self.valuechange)

This will trigger our valuechange function every time a user changes the spinbox value.

Now that we have a functional VLC player instance in our program, we can access its core functions, and the most important function we need to take advantage of in this project is the video_take_snapshot function.

It lets us take a snapshot of the frame that is currently visible in the player and save it to a directory of our choice.

So we will first snap the frame, save it to our directory and then use it for our image processing step.

Let's go ahead and write the function for our “Take snapshot” button; a sketch follows below.
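A sketch of the initial version of take_snapshot. The mediaplayer attribute name comes from the python-vlc Qt sample, and video_take_snapshot(0, path, 0, 0) saves the current frame at its original resolution; the file-naming scheme is an assumption:

    def take_snapshot(self):
        # build a path for the captured frame in the project root
        filename = "snapshot_%d.png" % len(framesTakenList)
        path = os.path.join(os.getcwd(), filename)

        # ask the underlying VLC media player to save the visible frame
        self.mediaplayer.video_take_snapshot(0, path, 0, 0)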

We also need to uncomment the code in our init function that connects this function to our button:

self.connect(self.snapbutton, QtCore.SIGNAL("clicked()"), self.take_snapshot)

Now if we run the program and start playing a video, we can see that a new PNG image is saved in the project root directory every time we press the button.

This is exactly how it should work up to this point.

Next, let's check out how we can generate the color palettes for our extracted frames.

5. The color palette generation function

The actual name for this method is color clustering, and if you were to google this term, you could certainly find good tutorials on how to do this process.

I found this very awesome tutorial by Adrian Rosebrock on color clustering, which we'll use as the basis for our own program: https://www.pyimagesearch.com/2014/05/26/opencv-python-k-means-color-clustering/

On a side note: you should definitely check out Adrian's blog at https://www.pyimagesearch.com if you want to learn some cool computer vision and deep learning tutorials.

He is a master at this and he has some excellent free tutorials on his site on other cool subjects like face recognition and object detection.

So in Adrian's tutorial there are two key functions that we need to borrow and tweak a bit: centroid_histogram and plot_colors.

Our version of plot_colors will be a bit different.

We’ll use the original code of centroid_histogram though.

Go ahead and paste this function inside our main.py file as a stand-alone function (outside the VLC player class), and then paste in our plot_colors function beneath it. Our version will be a bit different: instead of plotting each color with a width proportional to how often it appears in the frame, we want to display all n colors in the bar evenly, because that's how these color palettes usually look, plus I find it more visually appealing. Both functions are sketched below.
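centroid_histogram is essentially Adrian's original function, while plot_colors is a sketch of our even-width version that uses the global margin and offset parameters; the block width and bar height are assumed values:

def centroid_histogram(clt):
    # build a histogram of how many pixels were assigned to each cluster
    numLabels = np.arange(0, len(np.unique(clt.labels_)) + 1)
    (hist, _) = np.histogram(clt.labels_, bins=numLabels)
    # normalize the histogram so that it sums to one
    hist = hist.astype("float")
    hist /= hist.sum()
    return hist

def plot_colors(hist, centroids):
    # sort the cluster centers from darkest to brightest so the
    # palette reads like a gradient
    centroids = sorted(centroids, key=lambda c: np.sum(c))

    # skip the first `offset` (darkest) colors, which tend to be
    # near-black in most movie frames
    centroids = centroids[offset:]

    # assumed dimensions for the palette bar
    block_width = 100
    bar_height = 50
    n = len(centroids)
    bar_width = n * block_width + (n + 1) * margin

    # start with a white bar and paint evenly sized color blocks,
    # leaving a margin between them (hist is unused in this version,
    # since every color gets the same width)
    bar = np.full((bar_height, bar_width, 3), 255, dtype="uint8")
    x = margin
    for color in centroids:
        bar[:, x:x + block_width] = color.astype("uint8")
        x += block_width + margin
    return bar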

In the code above we first find the centroids that represent the most frequent colors, then we sort these centroids in an ascending manner to form a sort of gradient look, from the darkest color to the brightest.

Again, this is just to make it look more visually appealing.

We also add a margin defined in our global variables, just to add some spacing between the color blocks for a better overall look.

Notice that here we use the global offset parameter we defined at the beginning of this tutorial.

After experimenting with the program, I noticed that the first one or two colors of the color palette were almost always completely black, because videos often have a high frequency of black tones.

This is why I decided to not include these first black tones in the cluster.

This is why we specify the offset parameter, which tells us how many colors to exclude, starting from the beginning.

This lets us produce a more colourful and thus more interesting color palette.

6. Putting the color generation function to use

We have our color generator function ready; now we just have to put it to use in our snapshot-taking function.

Let's go back to the take_snapshot function and append the code sketched below to it.
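A sketch of the palette-generation step appended to take_snapshot, continuing from the snapshot we saved to path above; the downscaled size and the cluster count are assumptions:

        # load the frame we just saved (OpenCV reads it as BGR) and
        # convert a copy to RGB for clustering
        frame = cv2.imread(path)
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # K-means is expensive, so cluster a heavily downscaled copy
        small = cv2.resize(rgb, (100, 56))
        pixels = small.reshape((-1, 3))

        clt = KMeans(n_clusters=8)
        clt.fit(pixels)

        # build the evenly spaced palette bar from the cluster centers
        hist = centroid_histogram(clt)
        bar = plot_colors(hist, clt.cluster_centers_)

        # the bar was built from RGB centroids, so convert it back to
        # BGR and match its width to the frame before stacking
        bar = cv2.cvtColor(bar, cv2.COLOR_RGB2BGR)
        bar = cv2.resize(bar, (frame.shape[1], 50))

        # append the bar below the frame with a white gap in between
        gap = np.full((margin, frame.shape[1], 3), 255, dtype="uint8")
        palette_image = np.vstack([frame, gap, bar])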

We start by retrieving the image we captured and saved in our root directory. Next, we convert this image into an OpenCV-readable format and make a copy of it to use for our color palette function.

Since the K-means algorithm is computationally intensive, we run it on a downscaled version of the image.

This does not affect the quality in our case, because even if we scale the image down, we can still get a pretty good idea of which colors are the dominant ones.

We run the two color palette functions we defined earlier, and we end up with a new image, which is our color palette bar.

Our final task is to append this bar image to the original one, adding a white margin between the two just for a better look, and the color palette is ready!

7. Supplementing the UI with thumbnails

To provide a better overview of how the taken snapshots look with their accompanying color palettes, and to track how many snapshots have been taken so far, we can add a list of label elements, each holding a thumbnail of the image we just created.

To do this in PyQt4, we need to create a pixmap element that can hold an image inside it. We previously created a grid of ten image boxes as labels.

We can now insert the pixmap inside these labels, as sketched below.
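A sketch of the thumbnail step, continuing inside take_snapshot. Writing a temporary thumbnail file and loading it into a QPixmap is one simple way to bridge OpenCV and Qt; the thumbnail file name and the thumbnail_labels list are assumptions carried over from the sketches above:

        # save a temporary copy and load it into a Qt pixmap
        thumb_path = "thumb_%d.png" % len(framesTakenList)
        cv2.imwrite(thumb_path, palette_image)
        pixmap = QtGui.QPixmap(thumb_path)

        # show the thumbnail in the next free label of our grid
        label = self.thumbnail_labels[len(framesTakenList)]
        label.setPixmap(pixmap.scaled(label.width(), label.height(),
                                      QtCore.Qt.KeepAspectRatio))

        # keep the full-size image in memory for the final stitching
        framesTakenList.append(palette_image)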

Lastly, we want to keep the final color palette image in memory while we continue the snapshot capture process, so we can stitch them all together at the end.

This is why we add the image to framesTakenList, an array that holds all the images taken so far.

8. Stitching the final image together

Let's continue appending code to the take_snapshot function.

The next step is to check whether we have taken all of our required snapshots.

If that is the case, we are ready to stitch them together to form the final image; a sketch follows below.
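A sketch of the stitching step, again continuing inside take_snapshot; framesToTake, the output file name and the white gaps between snapshots are assumptions:

        if len(framesTakenList) >= framesToTake:
            # stack all captured frame-plus-palette images vertically,
            # separated by white gaps
            gap = np.full((margin, framesTakenList[0].shape[1], 3), 255,
                          dtype="uint8")
            rows = []
            for i, img in enumerate(framesTakenList):
                if i > 0:
                    rows.append(gap)
                rows.append(img)
            combined = np.vstack(rows)

            # add a white border around the whole composition and save it
            final = cv2.copyMakeBorder(combined, borderSize, borderSize,
                                       borderSize, borderSize,
                                       cv2.BORDER_CONSTANT,
                                       value=[255, 255, 255])
            cv2.imwrite("final_palette.png", final)

            # quit once the final image has been written
            QtGui.QApplication.quit()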

The code above combines all the images and finishes the task by saving the final output into our root directory. We then automatically exit the program, so as not to confuse the user.

We can also tweak the global parameter borderSize to specify how thick we want the border to be around our image and the color palette.

This is purely a matter of personal preference.

9. The full example

Success! We have now created our own color palette generator program.

The code is available on my GitHub page and is completely open source, so you can play around with it, tweak it or improve it to your own preferences.
