Style Transfer of Images with CNN in PyTorch

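The training loop below relies on two helpers that are defined earlier in the full notebook but do not appear in this excerpt: `get_features`, which collects activations from named VGG19 layers, and `gram_matrix`. A minimal sketch of both, assuming the usual VGG19 layer-index-to-name mapping (the exact indices and names here are my assumption, not from this excerpt):

```python
import torch

def get_features(image, model, layers=None):
    """Run an image through the model, collecting features at the given layers.

    Layer names follow the common VGG19 convention (assumed mapping)."""
    if layers is None:
        layers = {'0': 'conv1_1', '5': 'conv2_1', '10': 'conv3_1',
                  '19': 'conv4_1', '21': 'conv4_2',  # conv4_2: content representation
                  '28': 'conv5_1'}
    features = {}
    x = image
    for name, layer in model._modules.items():
        x = layer(x)
        if name in layers:
            features[layers[name]] = x
    return features

def gram_matrix(tensor):
    """Gram matrix of a (batch, depth, height, width) feature map."""
    _, d, h, w = tensor.shape
    tensor = tensor.view(d, h * w)        # flatten the spatial dimensions
    return torch.mm(tensor, tensor.t())   # (d, d) channel-correlation matrix
```

The gram matrix is what makes the style loss insensitive to *where* a texture appears: it only measures which feature channels fire together.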
It's recommended that you leave the content weight = 1 and set the style weight to achieve the ratio you want.

```python
# for displaying the target image, intermittently
show_every = 400

# iteration hyperparameters
optimizer = optim.Adam([target], lr=0.003)
steps = 2000  # decide how many iterations to update your image (5000)

for ii in range(1, steps+1):

    # get the features from your target image
    target_features = get_features(target, vgg)

    # the content loss
    content_loss = torch.mean((target_features['conv4_2'] - content_features['conv4_2'])**2)

    # the style loss
    # initialize the style loss to 0
    style_loss = 0
    # then add to it for each layer's gram matrix loss
    for layer in style_weights:
        # get the "target" style representation for the layer
        target_feature = target_features[layer]
        target_gram = gram_matrix(target_feature)
        _, d, h, w = target_feature.shape
        # get the "style" style representation
        style_gram = style_grams[layer]
        # the style loss for one layer, weighted appropriately
        layer_style_loss = style_weights[layer] * torch.mean((target_gram - style_gram)**2)
        # add to the style loss
        style_loss += layer_style_loss / (d * h * w)

    # calculate the *total* loss
    total_loss = content_weight * content_loss + style_weight * style_loss

    # update your target image
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()

    # display intermediate images and print the loss
    if ii % show_every == 0:
        print('Total loss: ', total_loss.item())
        plt.imshow(im_convert(target))
        plt.show()
```

Finally, display the content image and the final, stylized target image side by side:

```python
# display content and final, target image
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 10))
ax1.imshow(im_convert(content))
ax2.imshow(im_convert(target))
```

The code was run on Google Colab to make use of GPU hardware. The completed code is shared at the link below. You can experiment with various beta values to see how the style is captured in the target image; I experimented with beta values of 1e6 and 1e8. The output below was produced with a beta of 1e8.

Google Colaboratory

Reference

· Paper on Style Transfer
· Udacity — PyTorch Nanodegree
· Stanford CNN for Visual Recognition — More details
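The display calls above use an `im_convert` helper that is also defined earlier in the notebook, not shown in this excerpt. It turns an optimized, normalized tensor back into a displayable image. A minimal sketch, assuming the standard ImageNet normalization was applied when the images were loaded:

```python
import numpy as np
import torch

def im_convert(tensor):
    """Convert a (1, 3, H, W) normalized tensor to an (H, W, 3) image in [0, 1]."""
    image = tensor.to("cpu").clone().detach()
    image = image.numpy().squeeze()      # drop the batch dimension
    image = image.transpose(1, 2, 0)     # (C, H, W) -> (H, W, C) for matplotlib
    # undo the ImageNet normalization (assumed from the image-loading code)
    image = image * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))
    return image.clip(0, 1)
```

The final `clip` matters: during optimization the target tensor can drift outside the valid pixel range, which matplotlib would otherwise render with visible artifacts.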

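The loop also assumes the per-layer style weights and the overall content/style weights were set earlier in the notebook. A sketch of that setup, where beta is the style weight the post experiments with; the specific per-layer values are illustrative assumptions, not taken from this excerpt:

```python
# weight each style layer; earlier layers get larger weights so broader
# style features dominate (values are illustrative)
style_weights = {'conv1_1': 1.0,
                 'conv2_1': 0.75,
                 'conv3_1': 0.2,
                 'conv4_1': 0.2,
                 'conv5_1': 0.2}

content_weight = 1    # alpha: left at 1, per the recommendation above
style_weight = 1e6    # beta: try 1e6 vs 1e8 and compare the results
```

Only the alpha/beta *ratio* matters to the optimum, which is why fixing the content weight at 1 and tuning beta alone is enough.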