Generating New Ideas for Machine Learning Projects Through Machine Learning

- prediction for a mass neural network
- learning of human activity recognition from analysis of text
- an nba player's approach to learning and character forecasting through video game ecg
- playing a vocal instrument in local mri learning
- real-time music recordings
- finding new artistic and artistic features in music videos
- an analysis of musical genres
- predicting a single image-specific musical style
- a cost approach to crime prediction
- automatic user prediction and automated review recognition
- food processing via machine learning
- human activity recognition using multi-label fantasy
- predicting a match in the keystroke poker
- estimation of game types
- ai identification of deep learning in locomotion monitoring using neural networks
- the value of collaborative attention projecting for real-time playing
- the sea level and low speed: the two waves
- learning to predict the price of beer and personal genomes
- trading and removing a novel image from the text
- real-time news user identification on google gestures
- removing and re-learning to play game and lyrics
- rapid-mass dynamics with acoustic images
- real-time music direction
- what's your right?
- exploring event and music
- human activity prediction using machine learning
- model of architecture in california
- vs light crime
- adaptive learning for image recognition
- predicting the approach of human activity using machine learning
- the win given trajectories
- a machine learning approach to online design
- a massive based multi-layer feature unsupervised approach for multi-agent music
- can you learn from a single hand
- reaction with the media
- measurement of time to order over time
- how people can stop: learning the objects of blood and blood
- machine learning for autonomous vehicles
- vehicle types in neural networks
- building a model for what does it store?
- for enhanced identification of machine learning techniques
- exploring new york city's public image through machine learning
- a novel approach to career image recognition
- in general game playing
- structure classification for adaptation of text
- a variance learning approach for speech recognition
- the optimization of a non-peer temporal layer
- a distinguishing feature of a song's legal expression
- learning to sound in english: learning to learn using word learning
- information sharing with adaptive neural networks
- playing the game with multi-touch neural networks
- recursive estimation of dynamic and static images
- predicting the quality of the net-style result in the media
- the character of the sea snake robot
- predicting the stock market price of machine learning
- using inverted nucleotide data to predict the price of convolutional protein models
- search engine
- using twitter data to predict prices in high-cost trading
- a machine learning approach
- creating a new approach to building a deep learning approach
- fingerprint learning component
- machine learning techniques for functional change learning for the building of new york city college football networks
- predicting cancer risk of breast cancer risk
- cancer diagnosis and prediction
- stock market classification
- identifying the outcome of the news media

I haven't checked thoroughly, but random spot-checks suggest that most of the generated ideas are unique.
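If you want to try this recipe on your own corpus, here is a minimal sketch of the general approach: take a pre-trained language model, fine-tune it on a small file of titles (one per line), then sample new titles from it. This sketch uses GPT-2 via the Hugging Face transformers library purely as an illustration; it is not necessarily the exact model or code behind this post, and the file name `titles.txt`, the prompt, and all hyperparameters are placeholder assumptions.

```python
# Minimal sketch (illustrative, not the exact setup used in this post):
# fine-tune a pre-trained GPT-2 on a small corpus of project titles
# stored in "titles.txt" (one title per line), then sample new titles.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Wrap the small domain corpus as a language-modeling dataset.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="titles.txt",  # assumed corpus file
                            block_size=64)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="titles-model",
                           num_train_epochs=5,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()

# Sample a few new titles from the fine-tuned model.
inputs = tokenizer("machine learning for", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, top_k=50, max_length=20,
                         num_return_sequences=5,
                         pad_token_id=tokenizer.eos_token_id)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

Any comparable pre-trained language model and fine-tuning loop should work here; the pre-training does most of the heavy lifting, so even a small file of titles is enough to get interesting samples.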
I think the reason the generated text isn't simply memorized from the training corpus is that we're using a pre-trained model. The pre-trained language model was trained on Wikipedia, so it already has strong opinions about how concepts and words are related even before it sees the training data. For a randomly initialized model, the easiest way to reduce training error is to memorize the training corpus, which results in over-fitting. A pre-trained model, however, can only memorize the training corpus if it first forgets its previously learned weights. Since that would increase the error, the easier path is to accommodate the training corpus within the context of the earlier learned weights. The network is therefore forced to generalize: it generates grammatically correct sentences (thanks to pre-training on Wikipedia) but uses domain-specific concepts and words (thanks to your dataset).

What would you train using this approach? Before pre-trained models were available, you needed a huge corpus of text to do anything meaningful. Now even a small dataset is enough to do interesting things. Let me know in the comments what project ideas come to your mind that could use a small text corpus along with a pre-trained model.

Some ideas to get your neurons firing:

- Using your tweets, train a model that tweets like you
- Using a data dump from your WhatsApp chats, make a bot that chats like you
- For your company, classify support tickets into BUG or FEATURE REQUEST
- Make a bot that generates quotes similar to your favorite author
- Make your own customized AUTO-REPLY drafter for Gmail
- Given a photo and an Instagram account, generate a caption in the style of the account's previous captions
- Generate new blog post ideas for your blog (based on your previous blog post titles)

Also, it'll be super cool if you end up implementing a machine learning project idea generated by my model (or one contained in this post). You'll be part of the world's first project that a machine has thought of and a human has implemented!

Thanks for reading so far. Let me know your thoughts and questions in the comments.

PS: Check out my previous hands-on tutorial on Bayesian Neural Networks.

Follow me on Twitter

I regularly tweet on AI, deep learning, startups, science and philosophy. Follow me on https://twitter.com/paraschopra
