Style up your photos with a touch of Deep Learning magic

See the artistic side of Deep Learning

George Seif · Jun 11

Style Transfer in the context of imaging refers to the process of transferring the “style” of one image to another, while maintaining the “content” of the second image.

For example, the image on the far left is the “content” image.

We apply the “style” of the middle image (the “style” image) to our content image.

We expect that since the middle image has kind of a big city night-time vibe to it, this will be reflected in the final image — which is exactly what happens in the result on the far right!

Source: the original research paper.

One of the most ground-breaking pieces of research in this area came from Adobe Research.

They called it Deep Photo Style Transfer (DPST).

How to transfer photo style

To properly perform a style transfer from one photo to another, the Adobe team framed the goal of their DPST as: “to transfer the style of the reference to the input while keeping the result photorealistic”. The key part here is maintaining the “photorealistic” property of the output.

If we have a content photo like the one above, we don’t want any of the buildings to change at all.

We just want it to look like that exact same photo was taken at night time.

Many style transfer algorithms that came before the publication of this research distorted a lot of the content present in the original image.

Things like making straight lines wavy and changing the shapes of objects were common to see in the results of Neural Style Transfer techniques at the time.

And that was totally acceptable.

Many of the algorithms were designed for artistic style transfer, so a bit of distortion was even welcomed!

Example of a distorted style-transferred image. Source

But in this case, the aim was to create images that were still realistic — as if they were taken by a real-world camera.

There are two main things that the authors do to accomplish this: (1) a photorealism regularisation term in the loss function, and (2) a semantic segmentation of the content image that is used as guidance.
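Putting that together, here’s a rough sketch of what the combined objective looks like (the names and weights are placeholders of my own, not the repository’s actual code):

import torch

# Rough sketch only: the three terms stand in for real losses computed
# from CNN feature maps, and the weights are made-up placeholders.
content_loss = torch.tensor(0.7)       # keeps the "content" of the input photo
style_loss = torch.tensor(1.3)         # matches the "style" of the reference
photorealism_reg = torch.tensor(0.2)   # the locally affine penalty (see below)

lambda_style, lambda_reg = 1e2, 1e4    # hypothetical relative weightings
total_loss = content_loss + lambda_style * style_loss + lambda_reg * photorealism_reg

In the real algorithm, it’s the pixels of the output image that get optimised to minimise a loss of this shape.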

Photorealism Regularisation

Think of how we would intuitively maintain photorealism in an image.

We’d want the lines and shapes of the original image to remain the same.

The colors and lighting might change, but a person should still look like a person, a tree like a tree, a dog like a dog, etc.

Based on this intuitive idea, the regularisation term implemented by the authors forces the transformation of the pixels from the input to the output to be locally affine in colorspace.

An affine transform, by definition, maps points to points, straight lines to straight lines, and planes to planes.
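Here’s a minimal sketch of what “locally affine in colorspace” means, using NumPy (just an illustration, not the authors’ Matting-Laplacian-based implementation): within a small patch, every output color is the same affine function of the input color, so colors can shift but the geometry can’t.

import numpy as np

def local_affine_map(patch, A, b):
    # Apply one affine colorspace transform to a small image patch.
    # patch: (H, W, 3) input RGB values; A: (3, 3) linear part; b: (3,) offset.
    # Each pixel is mapped independently, so edges and straight lines
    # inside the patch keep their exact positions.
    return patch @ A.T + b

patch = np.random.rand(8, 8, 3)     # a toy 8x8 RGB patch
A = np.diag([0.5, 0.5, 0.7])        # dim red and green, keep more blue
b = np.array([0.0, 0.0, 0.2])       # shift the patch toward blue
night_patch = local_affine_map(patch, A, b)  # "night-time" colors, same shapes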

With this constraint, straight lines never go wavy and there won’t be any weird shape shifting in our output!

Segmentation Guidance

In addition to maintaining points, straight lines, and planes, we also want to make sure that the style of the various “things” in the style image is actually transferred realistically.

Imagine if you had a style image that showed a beautiful orange sunset like the one down below.

Source

Most of the image is a reddish orange.

If we were to style transfer this to, say, a city image, all of the buildings would turn red! That’s not really what we want though — a more realistic transfer would make most of the buildings very dark (close to black), with only the sky taking on the sunset and water colors.

The Deep Photo Style Transfer algorithm uses the results of a Semantic Segmentation applied to the content image in order to guide the style transfer.

When the algorithm knows exactly which pixels belong to the foreground and background, it can transfer the style more realistically.

Sky pixels will always be transferred to sky pixels, background pixels to background pixels, and so on.
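Here’s a simplified sketch in PyTorch of how segmentation masks can guide the style loss (my own illustration, not the repository’s implementation): the usual Gram-matrix statistics are computed separately for each semantic class, so each region of the output is only matched against the corresponding region of the style image.

import torch
import torch.nn.functional as F

def masked_gram(features, mask):
    # Gram matrix of CNN feature maps restricted to one semantic class.
    # features: (C, H, W) feature maps; mask: (H, W) soft mask in [0, 1].
    C, H, W = features.shape
    f = (features * mask).reshape(C, H * W)   # zero out pixels outside the class
    return (f @ f.t()) / mask.sum().clamp(min=1.0)

def segmented_style_loss(output_feats, style_feats, output_masks, style_masks):
    # Sum a Gram-matching loss per class, so sky is only compared
    # against sky, buildings against buildings, and so on.
    loss = 0.0
    for om, sm in zip(output_masks, style_masks):
        loss = loss + F.mse_loss(masked_gram(output_feats, om),
                                 masked_gram(style_feats, sm))
    return loss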

The code for transferring style

You can download the repository for Photo Realistic Style Transfer from GitHub:

git clone https://github.com/GeorgeSeif/DeepPhotoStyle_pytorch.git

All that’s required for it to run is a recent version of PyTorch.

Once that’s done, move into the folder and download the models for the Semantic Segmentation with the download script:

cd DeepPhotoStyle_pytorch
sh download_seg_model.sh

Now we’re ready to run our code! Download a style image and a content image — any images of your choice! City and landscape images tend to work best in my experience.

Finally, run the code like so:

python main.py --style_image path_style_image --content_image path_content_image

The algorithm will iteratively improve the style transfer result, so the longer you wait, the better it will get! By default it’s set to run for 3000 steps, but you can increase that if you feel more steps are improving the results.

Give the code a try yourself; it’s great fun! See how your photos look after the style transfer.

Feel free to post a link below to share your photos with the community.

Like to learn?

Follow me on Twitter, where I post all about the latest and greatest AI, Technology, and Science! Connect with me on LinkedIn too!

Recommended Reading

Want to learn more about Deep Learning? The Deep Learning with Python book will teach you how to do real Deep Learning with the easiest Python library ever: Keras!

And just a heads up, I support this blog with Amazon affiliate links to great books, because sharing great books helps everyone! As an Amazon Associate I earn from qualifying purchases.
