Interspeech 2018: Highlights for Data Scientists

Under the hood, Speech2Vec uses an encoder-decoder model with attention (a minimal sketch of such a model appears below). Other topics included speech synthesis, manners discrimination, unsupervised phone recognition, and many more.

With the development of deep learning, the Interspeech conference, originally intended for the speech processing and DSP community, is gradually transforming into a broader platform for machine learning scientists to communicate, irrespective of their field of interest. It is becoming the place to share common ideas across different areas of machine learning, and to inspire multi-modal solutions where speech processing occurs together (and sometimes in the same pipeline) with video and natural language processing. Sharing ideas between fields undoubtedly speeds up progress, and this year's Interspeech showed several examples of such sharing.
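To make the encoder-decoder-with-attention idea behind Speech2Vec concrete, here is a minimal sketch in PyTorch. This is not the authors' implementation: the feature dimensionality (13-dim MFCC-like frames), the GRU layers, the additive attention variant, and all layer sizes are assumptions chosen only to illustrate the architecture.

```python
# Illustrative sketch of an RNN encoder-decoder with additive attention,
# in the spirit of Speech2Vec. NOT the original implementation; dimensions,
# layer choices, and the attention variant are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    def __init__(self, feat_dim=13, hidden_dim=128):
        super().__init__()
        # Bidirectional GRU over a sequence of acoustic features (e.g. MFCCs).
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True,
                          bidirectional=True)

    def forward(self, x):                      # x: (batch, time, feat_dim)
        outputs, _ = self.rnn(x)               # (batch, time, 2 * hidden_dim)
        return outputs


class AttentionDecoder(nn.Module):
    def __init__(self, feat_dim=13, hidden_dim=128, enc_dim=256):
        super().__init__()
        self.attn = nn.Linear(hidden_dim + enc_dim, 1)
        self.rnn = nn.GRUCell(feat_dim + enc_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, enc_outputs, target_len):
        batch, time, _ = enc_outputs.shape
        h = enc_outputs.new_zeros(batch, self.rnn.hidden_size)
        y = enc_outputs.new_zeros(batch, self.out.out_features)
        preds = []
        for _ in range(target_len):
            # Additive attention: score every encoder step against the
            # current decoder state, then form a weighted context vector.
            scores = self.attn(torch.cat(
                [h.unsqueeze(1).expand(-1, time, -1), enc_outputs],
                dim=-1)).squeeze(-1)                       # (batch, time)
            weights = F.softmax(scores, dim=-1)
            context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
            h = self.rnn(torch.cat([y, context], dim=-1), h)
            y = self.out(h)                                # predict one frame
            preds.append(y)
        return torch.stack(preds, dim=1)        # (batch, target_len, feat_dim)


# Usage: encode a "word" spoken as 50 frames of 13-dim features and decode
# a 40-frame target sequence (e.g. a neighbouring word, skip-gram style).
enc, dec = Encoder(), AttentionDecoder()
features = torch.randn(8, 50, 13)
reconstruction = dec(enc(features), target_len=40)
print(reconstruction.shape)  # torch.Size([8, 40, 13])
```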
