A radical new neural network design could overcome big challenges in AI

With a traditional neural net, you have to specify the number of layers you want before training begins, then wait until training is done to find out how accurate the model is. The new method lets you specify your desired accuracy first, and it will find the most efficient way to train itself within that margin of error. On the flip side, with a traditional neural net you know from the start how much time training will take. That’s not the case when using an ODE solver. These are the trade-offs researchers will have to weigh, explains Duvenaud, when they decide which technique to use in the future. (A rough sketch of the accuracy-first idea appears at the end of this piece.)

Currently, the paper offers a proof of concept for the design, “but it’s not ready for prime time yet,” Duvenaud says. Like any newly proposed technique, it still needs to be fleshed out, experimented on, and improved before it can be put into production. But the method has the potential to shake up the field, much as Ian Goodfellow did when he published his paper on GANs.

“Many of the key advances in the field of machine learning have come in the area of neural networks,” says Richard Zemel, the research director at the Vector Institute, who was not involved in the paper. “The paper will likely spur a whole range of follow-up work, particularly in time-series models, which are foundational in AI applications such as health care.”

Just remember that when ODE solvers blow up, you read about it here first.

This originally appeared in our AI newsletter The Algorithm. To have it delivered directly to your inbox, subscribe here for free.
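To make the trade-off concrete, here is a minimal sketch, not the authors’ code, of what “specify the accuracy first” can look like in practice. It assumes PyTorch and the torchdiffeq package, whose odeint function wraps adaptive-step ODE solvers; the Dynamics and ODEBlock class names are illustrative, not from the paper or the article.

```python
# A sketch of an ODE-based block: instead of stacking a fixed number of
# layers, the hidden state is evolved by a learned vector field, and the
# solver's rtol/atol tolerances (the "desired accuracy") decide how much
# work it does.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # adaptive-step ODE solvers (assumed installed)


class Dynamics(nn.Module):
    """Learned vector field f(t, h) that plays the role of a layer."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)


class ODEBlock(nn.Module):
    """Integrates the hidden state from t=0 to t=1.

    rtol/atol are set up front; the solver then chooses how many function
    evaluations (the analogue of depth) it needs to hit that accuracy.
    """

    def __init__(self, dim, rtol=1e-3, atol=1e-4):
        super().__init__()
        self.dynamics = Dynamics(dim)
        self.rtol, self.atol = rtol, atol

    def forward(self, h0):
        t = torch.tensor([0.0, 1.0])
        # Output has shape (len(t), *h0.shape); keep the state at t=1.
        hT = odeint(self.dynamics, h0, t, rtol=self.rtol, atol=self.atol)
        return hT[-1]


# Tightening the tolerances buys accuracy at the cost of more solver steps
# (and less predictable runtime); loosening them trades accuracy for speed.
block = ODEBlock(dim=2, rtol=1e-5, atol=1e-6)
out = block(torch.randn(8, 2))
print(out.shape)  # torch.Size([8, 2])
```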
