Data Science Projects for Art

Drawing new things and filling them up with color is something most of us did during our childhood.
I could never have imagined that a machine would be able to conjure up art as good as anything we have produced.
But here we are.
A simple neural network trained on a certain art or set of images can now generate stunning visual imagery.
Imagine the fun Leonardo da Vinci would have had with this back in the Renaissance era!

Google's Quick, Draw!

Of course we start with Google.
Who else would rank at the top of the most creative list when it comes to artificial intelligence? Google's Creative Lab and Experiments with Google came together to create this simple tool that guesses what you are trying to draw.
Douglas Eck and David Ha came up with Sketch-RNN, a Recurrent Neural Network which generates drawings of common objects.
The model is trained on human-drawn sketches of everyday objects, each represented as a sequence of pen strokes. This sequence is fed to a sequence-to-sequence autoencoder, which in turn trains the neural network.
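To make the sequence representation concrete, here is a minimal sketch (the function name is mine, and the format details are simplified) of how a drawing made of pen strokes can be flattened into the offset-based sequence that Sketch-RNN-style models consume:

```python
def strokes_to_offsets(strokes):
    """Convert absolute-coordinate strokes into a stroke-3-style format:
    rows of (dx, dy, pen_lifted), where pen_lifted == 1 marks the last
    point of each stroke (the pen is lifted before the next stroke)."""
    seq = []
    prev = (0, 0)  # assume the drawing starts at the origin
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            lifted = 1 if i == len(stroke) - 1 else 0
            seq.append((x - prev[0], y - prev[1], lifted))
            prev = (x, y)
    return seq

# Two strokes: a short horizontal line, then a vertical one.
drawing = [[(0, 0), (5, 0)], [(5, 5), (5, 10)]]
print(strokes_to_offsets(drawing))
# [(0, 0, 0), (5, 0, 1), (0, 5, 0), (0, 5, 1)]
```

Storing offsets rather than absolute coordinates keeps the values small and translation-invariant, which makes the sequence easier for a recurrent network to model.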
The team also maintains a dataset of 50 million drawings contributed by players of the Quick, Draw! game.
Here are a few resources to get you started with Quick, Draw!:

- Dataset
- Paper by David Ha and Douglas Eck
- TensorFlow tutorial

Intricate Art Generation

Have you ever heard of Zentangles? Chances are that you have without ever realizing it.
The intricate patterns some of us draw at the corner of pages, which we used to call just ‘doodles’, are actually Zentangles.
They have recently become extremely popular in coloring books and pop art as well.
Zentangles, however, are bound by certain visual rules and recurring patterns.
Here are a few examples of various Zentangles:

Kalai Ramea, a researcher at the Palo Alto Research Center (formerly Xerox PARC), believed that such art was a good domain for style transfer algorithms (Neural Style Transfer). The generated designs she came up with are unique and colorful. The project applies a style transfer algorithm to an image: the content image is a silhouette of the image we would like to apply the Zentangle style to, while the style image is any pattern (black-and-white or colorful). The algorithm transfers the style of the style image onto the content image.
A brief explanation of neural style transfer: the weights come from a pre-trained network called VGGNet, a deep convolutional network for object recognition developed and trained by the University of Oxford's Visual Geometry Group.
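At the heart of style transfer is the Gram matrix of a feature map, whose channel-to-channel correlations capture "style" independently of layout. The following is a minimal NumPy sketch of the style loss for a single layer (in practice the features would come from several VGGNet layers, and a content loss is added as well):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    correlations between channels, which encode texture/style."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between Gram matrices at one layer."""
    return np.mean((gram_matrix(gen_features) - gram_matrix(style_features)) ** 2)

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16, 16))  # stand-in for a VGG feature map
print(style_loss(a, a))  # identical features give a loss of 0.0
```

Minimizing this loss (plus a content loss on the silhouette) with gradient descent on the pixels of the generated image is what produces the stylized Zentangle.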
A sample of Kalai's work, created from a quilt pattern and a Darth Vader image:

Awesome, right? Kalai also presented this research at the Self-Organizing Conference on Machine Learning (SOCML) 2017.
You can start learning more about intricate art generation here:

- Open source project on GitHub
- Article explaining how intricate art generation works underneath

The Belamy Family

Generative Adversarial Networks (GANs) are the flavor of the month in the deep learning community right now.
GANs are being used to generate photographs of people who don’t exist and even draw up landscapes and portraits.
The trio of Gauthier Vernier, Pierre Fautrel and Hugo Caselles-Dupré took the applications of GANs a step further.
As part of Obvious, a Paris-based collective of artists and machine learning researchers, they created portraits of an entirely fictional Belamy family using GANs.
The ‘Family’ is a collection of 11 portraits of different family members with the crowning glory being the portrait of Edmond de Belamy, which fetched $432,500 at the world-famous auction house, Christie’s.
The classical look of the portraits stems from the training data: 15,000 portraits painted between the 14th and 20th centuries.
The best part? 'Belamy' is derived from 'Bel ami', French for 'good friend', a nod to Ian Goodfellow, the creator of GANs, and each portrait is signed with the loss function formula of the GAN model.
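For the curious, that signature is the standard GAN minimax objective from Goodfellow et al., in which the discriminator $D$ and the generator $G$ play a two-player game:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\!\big[\log\big(1 - D(G(z))\big)\big]
```

The discriminator is rewarded for telling real portraits from generated ones, while the generator is rewarded for fooling it; training the two against each other is what lets the generator produce convincing new "paintings".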
- The Obvious Collective
- Article from the creators at Obvious explaining their method in detail

Data Science Projects for Music

Using AI algorithms to generate music seems like a natural choice at first glance.
Music is essentially a collection of notes – and AI thrives on that kind of data.
So, it’s not surprising to see the kind of progress researchers have made with AI for music.
Google's Magenta Project

Ah yes, Google again.
Launched in the summer of 2016, Google's Magenta project was initially known mainly among researchers and AI enthusiasts, but its claim to fame with the wider public was the Bach doodle.
Created to celebrate Johann Sebastian Bach's 334th birthday, the model harmonizes any user-provided melody in Bach's style.
The AI model is called Coconet, a convolutional neural network that fills in missing pieces of music.
To train this model, the team used 306 chorale harmonizations written by Bach.
The model erases some random notes from the training set and regenerates new notes to fill in the blanks.
More on the Bach doodle, explained by the team behind it:

- Google's interface to experiment with this model
- Link to the complete code
- An in-depth explanation on the Google Magenta Blog
- Other awesome AI projects by the Google Magenta team

Clara

Clara, created by Christine McLeavey Payne (pianist and Fellow at OpenAI), is a neural network that composes piano and chamber music.
Based on the idea that music is also a language, Christine developed an LSTM-based neural network that predicts which notes or chords should come next.
To accomplish this, Christine first obtained a dataset of MIDI files and converted them into text format.
The text was then encoded at either the notewise or the chordwise level, analogous to character-level and word-level language models respectively.
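The two encoding levels can be illustrated with a small sketch. The token scheme below is my own simplification, not Christine's exact format; it just shows how the same chord progression becomes a "sentence" of coarse tokens (chordwise) or fine tokens (notewise):

```python
def encode_chordwise(chords):
    """One token per chord (like a word-level language model).
    Each chord is a tuple of MIDI pitches."""
    return " ".join("-".join(str(p) for p in sorted(chord)) for chord in chords)

def encode_notewise(chords):
    """One token per note, with '|' separating time steps
    (the analogue of a character-level model)."""
    tokens = []
    for chord in chords:
        tokens.extend(str(p) for p in sorted(chord))
        tokens.append("|")
    return " ".join(tokens)

progression = [(60, 64, 67), (62, 65, 69)]  # C major, D minor triads
print(encode_chordwise(progression))  # "60-64-67 62-65-69"
print(encode_notewise(progression))   # "60 64 67 | 62 65 69 |"
```

Once music is text like this, any standard language-model training loop can be pointed at it, which is precisely what makes the "music is a language" framing so practical.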
Christine shows us a demo of Clara:

Christine also created a Music Critic model that classifies music as human-generated or machine-generated.
- Open source code on GitHub
- Project overview with more details

The AI DJ Project

You knew this one was coming: the mashup of AI and DJs! This was actually the first image I visualized when I heard about AI and music.
A good DJ can completely transform the mood of a live audience, something AI does to us all the time! Qosmo, a Japanese company focused exclusively on computational creativity, created the AI DJ Project.
This is a combination of both human and algorithmic DJs creating new music.
The project consists of three phases:

- Music Selection: a neural network each for genre inference, instrument inference, and drum machine inference; together, these extract features from the music the human DJ is playing.
- Beat Matching: implemented with reinforcement learning, enabling the AI to control the speed of the turntable based on the beats played by the human DJ.
- Crowd-Reading: the most interesting (and most complex) part of the project. It uses motion tracking and deep learning to gauge the mood of the live crowd and changes the music accordingly.

Check out the project page explaining these three steps in detail.
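To get an intuition for the beat-matching phase, here is a deliberately simplified sketch. A plain proportional controller stands in for the project's reinforcement-learning agent (the function, gain, and BPM values are all illustrative): at each step the AI nudges its turntable tempo toward the human DJ's tempo.

```python
def match_tempo(ai_bpm, human_bpm, gain=0.5, steps=20):
    """Repeatedly nudge the AI turntable's tempo toward the human DJ's.
    A proportional controller standing in for an RL policy."""
    history = [ai_bpm]
    for _ in range(steps):
        ai_bpm += gain * (human_bpm - ai_bpm)  # close half the gap each step
        history.append(ai_bpm)
    return history

trace = match_tempo(ai_bpm=120.0, human_bpm=128.0)
print(round(trace[-1], 2))  # converges toward 128.0
```

The real system is harder: the agent only observes audio, must estimate the human's tempo from detected beats, and is rewarded for keeping the two decks phase-aligned, which is why reinforcement learning is a natural fit.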
Miscellaneous Projects

AI's use cases in the creative arts extend beyond art and music.
Curious to see what else AI can do that you might not have imagined yet? Let's find out!

IBM's Project Debater

Natural Language Processing (NLP) is never far away from any AI list these days.
And this project by IBM is by far the most complex project on this list.
Every component that went into the making of this AI debater explores the concepts of machine learning.
Get a hold of this: the Argument Mining component detects claims and evidence in a corpus and assesses the quality of the arguments.
The Stance Classification and Sentiment Analysis modules deal with classifying expert opinions on a stance, analyzing the sentiments of sentences and idioms, and even identifying a stance based on a claim.
Next, the Deep Neural Nets, along with weak supervision, predict phase-breaks, score an argument and further enhance the argument mining.
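As a toy illustration of what the argument-mining step does, here is a crude cue-phrase claim detector. This is emphatically not how Project Debater works (IBM's components are trained statistical models over large corpora); the cue list and function are my own stand-ins, meant only to show the shape of the task, sentence in, claim-or-not out:

```python
# Cue phrases that often signal a normative claim rather than a fact.
CLAIM_CUES = ("should", "must", "ought to", "it is clear that")

def looks_like_claim(sentence):
    """Naive detector: flag a sentence as a claim if it contains a cue phrase."""
    s = sentence.lower()
    return any(cue in s for cue in CLAIM_CUES)

corpus = [
    "Governments should subsidize preschools.",
    "The study surveyed 2,000 families.",
]
print([looks_like_claim(s) for s in corpus])  # [True, False]
```

A real argument-mining system replaces the cue list with learned models, and then goes further: scoring evidence quality, clustering arguments by theme, and matching them to the debate motion.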
Finally, the text-to-speech system determines which words and phrases to emphasize and generates effective speech patterns.
Join the debate on subsidizing pre-schools, with Project Debater taking on champion debater Harish Natarajan:

- Entire project explained in detail

MIT's Generate Almost Anything Series

Students of MIT's most popular class, Neil Gershenfeld's How to Make (Almost) Anything, are breaking new barriers and applying AI models in the most ingenious ways.
They have released a series of 6 episodes, with each episode featuring an expert human collaborator working with the team to explore different ways AI can be used to further their domain.
The list includes AI-generated music, AI-generated fashion, graffiti, and, by far my favorite, AI-generated pizza.
Make sure you browse through the project site.
Other honorable mentions include the AI-generated short film 'Eclipse':

Also, here's a really cool repository of faces of people who don't exist, created using GANs. Incredible!

More details