A.I.-enhanced molecular discovery and optimization

We just need to know a couple more details.

These details are known as hyperparameters, such as the number of neurons, the number of layers, the learning rate, etc.

Hyperparameters tell us how we use our tools (machine learning algorithms) to work our wood (data).

Most of the time, the instructions are up to the carpenter and their intuition.

It’s your job as the architect to decide on the hyperparameter values, while the model’s parameters (like the weights between neurons) are learned during training.

But when you do, choose wisely.

These hyperparameters have significant sway over the results of the model, so it’s up to the machine learner’s intuition and experience to pick a good starting point.

Oftentimes the difference between success and failure lies in changing just a single hyperparameter.

Optimization is all about tuning these hyperparameters until the resulting output of the model is as accurate as it can be.
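To make that concrete, here’s a minimal sketch of automated guess-and-check using scikit-learn’s GridSearchCV; the dataset and the parameter ranges are invented for illustration:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.datasets import make_regression

# A stand-in dataset; in practice this would be molecular descriptors and properties
X, y = make_regression(n_samples=500, n_features=20, noise=0.1, random_state=0)

# The hyperparameters we get to choose as "architects"
param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],  # number of neurons and layers
    "learning_rate_init": [1e-2, 1e-3],              # learning rate
}

# Guess-and-check, automated: try every combination and keep the best
search = GridSearchCV(MLPRegressor(max_iter=2000, random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```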

We’ll have to choose our loss function (the way we measure how well or how badly our model is doing).

There are also plenty of activation functions to choose from (the non-linear functions that transform values as they pass between each layer of the neural network).
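Here’s a tiny sketch of where those two choices actually live in code, using PyTorch; ReLU and mean squared error are just one reasonable pairing for property prediction, not the only one:

```python
import torch
import torch.nn as nn

# A small forward model: molecular features in, one property value out
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),          # activation: a non-linearity applied between layers
    nn.Linear(64, 1),
)

loss_fn = nn.MSELoss()  # loss function: how wrong are our predictions, on average?

x = torch.randn(8, 20)          # a dummy batch of 8 molecules
target = torch.randn(8, 1)
loss = loss_fn(model(x), target)
loss.backward()                 # gradients flow back so an optimizer can improve the weights
print(loss.item())
```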

Developing an intuition for how to choose all these hyperparameters requires an understanding of how and why each choice works; sometimes you just gotta guess and check!

I could really use a saw and some polisher…

A carpenter’s job isn’t just to build the thing; it is also their job to make it presentable.

The final step is therefore to return the output of the model in a comprehensible format.

In a forward model, this means presenting the properties of a given material as accurately as possible and with the appropriate units of measurement.

In an inverse model, this means presenting the generated molecule in correct SMILES string notation.
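That last check can be automated. Here’s a minimal sketch using the open-source RDKit library (assuming it’s installed); its parser simply returns nothing when a SMILES string doesn’t describe a chemically valid molecule:

```python
from rdkit import Chem

def is_valid_smiles(smiles: str) -> bool:
    """Return True if the SMILES string parses into a real molecule."""
    return Chem.MolFromSmiles(smiles) is not None

print(is_valid_smiles("CCO"))             # True  -- ethanol
print(is_valid_smiles("C(C)(C)(C)(C)C"))  # False -- a carbon with five bonds

# RDKit can also canonicalize: one standard way to present the same molecule
mol = Chem.MolFromSmiles("OCC")
print(Chem.MolToSmiles(mol))  # "CCO"
```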

There are ways we can improve the finish of our product.

When looking for potential drug candidates in particular, this is the most dangerous stage.

There is no way to know if a molecule is stable without actually testing it, which is why uncertainties and error rates are so important in science.
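One common way to attach those error rates to a prediction, sketched here with a random forest (just one of many approaches), is to train a collection of models and report the spread of their answers; the data below is randomly generated for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical features and property values for a set of molecules
X = np.random.rand(100, 20)
y = np.random.rand(100)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Each tree in the forest is its own model; their disagreement is our uncertainty
x_new = np.random.rand(1, 20)
per_tree = np.array([tree.predict(x_new)[0] for tree in model.estimators_])
print(f"prediction: {per_tree.mean():.3f} +/- {per_tree.std():.3f}")
```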

This is where many researchers employ another add-on to their original model.

It’s very rare to find a one-shot solution and there is no silver bullet.

Good solutions for molecular discovery combine multiple machine learning algorithms, much like ensemble methods.

The ReLeaSE architecture (Mariya Popova et al.) combines RNNs with reinforcement learning techniques.

The RNN portion comprises two distinct networks that work together to generate valid molecules.

The reinforcement training then biases these results towards desired properties.

The ECAAE architecture (Daniil Polykovskiy et al.) first disentangles the autoencoder’s latent code from the molecular properties, then pushes the latent code to match a prior distribution.

This is trained with an adversarial network until the discriminator can no longer distinguish the latent code from the prior.

The ORGANIC architecture (Benjamin Sanchez-Lengeling et al.) uses generative adversarial networks (GANs) with reinforcement learning techniques.

Similar to the ReLeaSE architecture, the GAN generates valid molecules before reinforcement learning (dubbed “objective reinforcement”) shifts the output towards desired properties.
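Neither paper’s code is reproduced here, but the shared idea, nudging a generator toward molecules that score well by weighting its likelihood updates with a reward, can be sketched in a toy PyTorch example; the candidate molecules and their scores are made up:

```python
import torch

# Toy "generator": a learnable distribution over a handful of candidate molecules.
# (In ReLeaSE/ORGANIC the generator is an RNN/GAN over SMILES characters; this
# stand-in keeps the policy-gradient idea visible without the machinery.)
candidates = ["CCO", "CCCC", "c1ccccc1", "CC(=O)O"]
rewards = torch.tensor([0.1, 0.3, 0.9, 0.5])  # made-up property scores

logits = torch.zeros(len(candidates), requires_grad=True)
optimizer = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    idx = dist.sample()                        # "generate" a molecule
    loss = -rewards[idx] * dist.log_prob(idx)  # REINFORCE: reward-weighted log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the generator favors the high-reward molecule
print(candidates[int(torch.argmax(logits))])  # likely "c1ccccc1"
```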

That’s about it.

I WANT REAL WORLD EVIDENCE!!!

This generalized process comes not from intuition, but rather from the patterns observed across the numerous papers, projects, and case studies generously provided by two factions: academia and industry.

University Research

The aforementioned architectures like ReLeaSE, ECAAE, and ORGANIC are all state-of-the-art examples of supervised deep learning with a twist.

The incredible institutions behind these innovations are some of the world’s top universities.

Harvard University

Papers like “What is high throughput virtual screening…” and the aforementioned ORGANIC architecture come from the top-ranked university in the world.

Harvard’s Clean Energy Project is an example of research pushed to the bleeding edge of A.I. Contributors include people from chemistry, A.I., data science, and numerous other fields.

This sort of collaboration is necessary if we are to continue growing the applications in these fields.

University of Cambridge

The simply titled “Machine learning for material science” is an in-depth paper that covers the recent innovations in the space.

Cambridge is also home to very specific applications, like the probabilistic design of alloys using neural networks.

With companies like DeepMind stationed in the UK, it’s no surprise that Cambridge continues to put out quality content.

Northwestern University

The holistic idea of data-driven science is a highlight of the work that Northwestern University has put out in recent times.

Research ranging from high-throughput DFT for molecular discovery to the prediction of the high-dimensional thermal history in directed energy deposition processes via recurrent neural networks originates from Northwestern University.

Startups and Companies

My mentor, Navid Nathoo, gave me a piece of grounded advice: “A problem becomes an opportunity when people are willing to pay for it to be solved; there must be an economic incentive.”

Without the money, everything up to this point is a fun science project that sounds cool but is of no interest from a business standpoint.

That being said, here are some companies, big and small (the size of which should be telling of how much economic incentive there is), that are looking to shake things up.

Citrine Informatics

This incredible company, working out of San Francisco, is one of the leaders of the industry, specifically in cheminformatics, and has made great strides in the molecular discovery and optimization research fields.

I would place particular emphasis on their methods of operation.

Citrine understands that the scientific community isn’t as privileged as other fields in terms of the size, quality, and consistency of data.

We may have enormous datasets of images, text, and audio, but you’d be hard-pressed to find a solid dataset of carbon molecules, let alone one that’s cleaned or labeled.

Citrine cuts through the “small data” problem by taking advantage of as many techniques as possible.

Techniques like data augmentation, transfer learning, and stacked architectures squeeze every ounce of value from existing datasets.
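As an illustration of one of those techniques, here’s a minimal transfer-learning sketch in PyTorch (not Citrine’s actual method): a backbone pretrained on a large, related dataset is frozen, and only a small new head is trained on the scarce molecular data; every shape and value here is invented:

```python
import torch
import torch.nn as nn

# Pretend this backbone was pretrained on a large, related dataset
backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64), nn.ReLU())

# Freeze it: its general-purpose features are the "transferred" knowledge
for param in backbone.parameters():
    param.requires_grad = False

# Only this small head is trained on our scarce molecular dataset
head = nn.Linear(64, 1)
model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

x, y = torch.randn(32, 128), torch.randn(32, 1)  # a tiny stand-in dataset
loss = nn.MSELoss()(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```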

IBM Research

Little is known about the elusive research centers of Fortune 500 companies like Microsoft, Facebook, Google, and especially IBM (who still remembers what IBM stands for?).

After losing the bid for computing and missing out on mobile, IBM has shifted its focus to what is to come instead of what’s happening now.

IBM still fights to stay relevant today, not as the computer company we once knew, but as a quantum computing, A.I. researching, technology innovating company that’s looking to make a comeback in the near future.

Recently, IBM released a free tool that predicts chemical reactions; as with the majority of such projects, SMILES strings are the chosen molecular representation.

With 2 million data points on chemical reactions, the A.I. managed to get considerably accurate results.

Google Research

Unsurprisingly, the world’s most influential company also happens to have its hand in the cookie jar.

Google’s A.I. research has a special team called Google Accelerated Science, which works on computational chemistry and biology with the goal of advancing scientific research and accelerating scientific innovation.

They’ve collaborated with DeepMind on several occasions, putting out mind-blowing work.

Rumor has it that their recent work uses the third of the four possible molecular representations: molecular graphs.

This is a natural direction for their research, as geometric deep learning is beginning to gain ground and its benefits become clearer.
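To make the representation concrete, here’s a short sketch (again assuming RDKit) that turns a SMILES string into the two ingredients a graph neural network consumes, node features and an adjacency matrix:

```python
from rdkit import Chem
import numpy as np

mol = Chem.MolFromSmiles("CCO")  # ethanol

# Nodes: one feature per atom (here, just the atomic number)
atom_features = np.array([atom.GetAtomicNum() for atom in mol.GetAtoms()])

# Edges: which atoms are bonded to which
adjacency = Chem.GetAdjacencyMatrix(mol)

print(atom_features)  # [6 6 8]
print(adjacency)      # [[0 1 0], [1 0 1], [0 1 0]]
```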

Google consistently puts out their research publications and sometimes their relevant code.

Keep an eye out for news; if anyone is able to pull off the next big thing, it’s Google.

Key Takeaways

- We are now entering the 4th paradigm of science, one that is driven by data instead of theory, experiment, or computation
- A.I. is the determining factor in how this change will impact science and what it means for society
- Current research follows a process and is limited by the tools currently available in A.I.
- There are many real-world problems being solved by both academia and industry, and the list is only growing
- The future is bright

What’s to come

Chamath Palihapitiya believes that while Google may be the master of search data, Facebook may be the master of communication data, and Amazon may be the master of consumerism data, there has yet to be a clear master of health care data, molecular data, and plenty of other growing fields.

Search, communication, and consumerism data are flashy and superficially important, but there aren’t enough people working on the world’s toughest problems.

Artificial intelligence can change that.

You can change that.

Need to see more content like this?

Follow me on LinkedIn, Facebook, Instagram, and of course, Medium for more content.

All my content is on my website and all my projects are on GitHub. I’m always looking to meet new people, collaborate, or learn something new, so feel free to reach out to flawnsontong1@gmail.com.
