How to create Snapchat lenses using pix2pix

I am using the following pix2pix training repository, which uses TensorFlow to train the model and show results:

affinelayer/pix2pix-tensorflow: TensorFlow port of Image-to-Image Translation with Conditional Adversarial Nets (https://phillipi.github.io/pix2pix/) on github.com

Once you finish generating the training data, each sample is a single image containing the normal face and the face with glasses side by side (a sketch of this pairing step appears at the end of this post).

Now we will start training. Run the following command inside the pix2pix-tensorflow repository:

python pix2pix.py --mode train --output_dir dir_to_save_checkpoint --max_epochs 200 --input_dir dir_with_training_data --which_direction AtoB

Here, AtoB defines which direction the model is trained in. For the pairs described above, AtoB means the model will learn to convert a normal face into a face with glasses.

You can see the results on the training data and the training graphs in TensorBoard, which you can start with the following command:

tensorboard --logdir=dir_to_save_checkpoint

Once you start seeing decent results from your model, stop the training and use the evaluation data to check real-time performance (a sketch of a simple real-time loop also appears at the end of this post). If the results are not good enough for real-time use, you can resume training from the last saved checkpoint:

python pix2pix.py --mode train --output_dir dir_to_save_checkpoint --max_epochs 200 --input_dir dir_with_training_data --which_direction AtoB --checkpoint dir_of_saved_checkpoint

Conclusion

Conditional adversarial networks are a promising approach for many image-to-image translation tasks. Train the model properly and use a good GPU; the output I got here came from limited training and not much variance in the training data.

If you like this article, please follow me on Medium or GitHub, or subscribe to my YouTube channel.
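As referenced above, the training data for this repository is a set of combined images with the input and the target placed side by side. Here is a minimal sketch of that pairing step; the directory names faces/, faces_with_glasses/, and combined/ are placeholders for your own data, not paths from the original post.

```python
# Minimal sketch: combine input/target pairs into side-by-side images
# for pix2pix training. Directory names are placeholders -- point them
# at your own data.
import os
from PIL import Image

INPUT_DIR = "faces"                # A: normal faces (assumed layout)
TARGET_DIR = "faces_with_glasses"  # B: the same faces with glasses
OUTPUT_DIR = "combined"            # side-by-side A|B images for training
SIZE = 256                         # per-half resolution used for training

os.makedirs(OUTPUT_DIR, exist_ok=True)

for name in sorted(os.listdir(INPUT_DIR)):
    a_path = os.path.join(INPUT_DIR, name)
    b_path = os.path.join(TARGET_DIR, name)
    if not os.path.exists(b_path):
        continue  # skip images that have no matching target

    # Resize both halves to the same square size.
    a = Image.open(a_path).convert("RGB").resize((SIZE, SIZE))
    b = Image.open(b_path).convert("RGB").resize((SIZE, SIZE))

    # Paste A on the left and B on the right of a 512x256 canvas.
    pair = Image.new("RGB", (SIZE * 2, SIZE))
    pair.paste(a, (0, 0))
    pair.paste(b, (SIZE, 0))
    pair.save(os.path.join(OUTPUT_DIR, name))
```

With --which_direction AtoB the model learns to map the A half to the B half; this sketch puts A (the plain face) on the left and B (the face with glasses) on the right, so double-check that this matches how your copy of the repository splits the combined image before training.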
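For the real-time check mentioned above, the idea is to grab webcam frames, resize them to the resolution the model was trained on, and push each frame through the trained generator. The sketch below only shows the capture loop; run_generator is a hypothetical placeholder for however you load and run your trained model (for example, from the saved checkpoint), not a function from the pix2pix-tensorflow repository.

```python
# Minimal sketch of a real-time evaluation loop with OpenCV.
# run_generator() is a hypothetical stand-in for your own inference code;
# it is not part of the pix2pix-tensorflow repository.
import cv2
import numpy as np

SIZE = 256  # resolution the generator was trained on

def run_generator(frame_rgb):
    # Placeholder: replace with a call into your restored generator.
    # It should take and return an RGB uint8 array of shape (SIZE, SIZE, 3).
    return frame_rgb

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Prepare the frame the same way the training data was prepared.
    rgb = cv2.cvtColor(cv2.resize(frame, (SIZE, SIZE)), cv2.COLOR_BGR2RGB)
    out = run_generator(rgb)

    # Show the input and the generated output side by side.
    combined = np.hstack([rgb, out])
    cv2.imshow("pix2pix lens", cv2.cvtColor(combined, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

If the generated frames lag badly here, that is the signal to go back and resume training from the last checkpoint or add more varied training data.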
