NIPS/NeurIPS 2018: Best* of the First Two Poster Sessions

Prabhu prakash Kagitha · Dec 4

NeurIPS 2018 is a great conference, attracting state-of-the-art work in almost every aspect of machine learning research. There are a few things that a researcher in the field should, without question, pay attention to at a conference. The posters below, from the first two sessions, are my picks; each comes with a short verdict and excerpts from its abstract.

Are GANs Created Equal? A Large-Scale Study

"Finally, we did not find evidence that any of the tested algorithms consistently outperforms the non-saturating GAN introduced in Goodfellow et al. (2014)."

An intriguing failing of convolutional neural networks and the CoordConv solution

Interesting, about time. "We have shown the curious inability of CNNs to model the coordinate transform task, shown a simple fix in the form of the CoordConv layer, and given results that suggest including these layers can boost performance in a wide range of applications. A Faster R-CNN detection model trained on MNIST detection showed 24% better IOU when using CoordConv, and in the Reinforcement Learning (RL) domain, agents playing Atari games benefit significantly from the use of CoordConv layers." A minimal sketch of the idea appears below, after this roundup.

A Linear Speedup Analysis of Distributed Deep Learning with Sparse and Quantized Communication

Efficiency. "The large communication overhead has imposed a bottleneck on the performance of distributed Stochastic Gradient Descent (SGD) for training deep neural networks." One common sparsification scheme is sketched below.

On the Dimensionality of Word Embedding

"Motivated by the unitary-invariance of word embedding, we propose the Pairwise Inner Product (PIP) loss, a novel metric on the dissimilarity between word embeddings. Using techniques from matrix perturbation theory, we reveal a fundamental bias-variance trade-off in dimensionality selection for word embeddings. Moreover, new insights and discoveries, such as when and how word embeddings are robust to over-fitting, are revealed. By optimizing over the bias-variance trade-off of the PIP loss, we can explicitly answer the open question of dimensionality selection for word embedding." The PIP loss is written out below.

Adversarial Examples that Fool both Computer Vision and Time-Limited Humans

Fundamental. "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes, such as identifying a school bus as an ostrich. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers." The classic construction of such perturbations is recalled below.

Dendritic cortical microcircuits approximate the backpropagation algorithm

Insight. "Deep learning has seen remarkable developments over the last years, many of them inspired by neuroscience. Overall, we introduce a novel view of learning on dendritic cortical circuits and on how the brain may solve the long-standing synaptic credit assignment problem."

On Neuronal Capacity

Novel formulation. "We define the capacity of a learning machine to be the logarithm of the number (or volume) of the functions it can implement. We also derive capacity estimates and bounds for fully recurrent networks and layered feedforward networks." The definition is restated formally below.

Bias and Generalization in Deep Generative Models: An Empirical Study

True understanding. "In high dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this paper we propose a framework to systematically investigate bias and generalization in deep generative models of images by probing the learning algorithm with carefully designed training datasets. We verify that these patterns are consistent across datasets, common models and architectures."
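On the CoordConv paper: the fix is simple enough to sketch. Below is a minimal NumPy version of its core idea, appending normalized coordinate channels so that an ordinary convolution applied afterwards can condition on position. The helper name add_coord_channels and the channels-last layout are my choices for illustration, not the authors' reference code.

```python
import numpy as np

def add_coord_channels(x):
    """Append normalized i/j coordinate channels to a batch of feature
    maps; a standard convolution applied afterwards is, in essence, a
    CoordConv layer (illustrative sketch, not the paper's reference code).

    x: (batch, height, width, channels) -> (batch, height, width, channels + 2)
    """
    b, h, w, _ = x.shape
    # Row and column coordinates scaled to [-1, 1], one channel per axis.
    i = np.broadcast_to(np.linspace(-1.0, 1.0, h).reshape(1, h, 1, 1), (b, h, w, 1))
    j = np.broadcast_to(np.linspace(-1.0, 1.0, w).reshape(1, 1, w, 1), (b, h, w, 1))
    return np.concatenate([x, i, j], axis=-1)

x = np.random.randn(4, 8, 8, 3)
print(add_coord_channels(x).shape)  # (4, 8, 8, 5)
```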
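On the distributed-SGD paper: it analyzes sparse and quantized communication in general, so the snippet below shows just one common instance, a top-k gradient sparsifier whose leftover residual error-feedback variants add back into the next step. The function name topk_sparsify is hypothetical, and this is not necessarily the paper's exact scheme.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude gradient entries; workers then
    communicate only (index, value) pairs instead of the dense tensor.
    Returns the zeroed-out remainder too, so error-feedback variants
    can fold it into the next step's gradient."""
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # k largest by magnitude
    vals = flat[idx]
    residual = flat.copy()
    residual[idx] = 0.0  # what was *not* transmitted, kept locally
    return idx, vals, residual.reshape(grad.shape)

g = np.random.randn(10_000)
idx, vals, res = topk_sparsify(g, 100)  # ~1% of the original traffic
```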
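On the dimensionality paper: the PIP loss compares two embedding matrices through their inner-product (Gram) matrices, which is what makes it invariant to unitary rotations of the embedding space:

```latex
% PIP matrix of an embedding E \in \mathbb{R}^{n \times d}
\mathrm{PIP}(E) = E E^{\top}

% PIP loss between embeddings E_1, E_2 (possibly of different dimensions)
\big\| \mathrm{PIP}(E_1) - \mathrm{PIP}(E_2) \big\|_F
  = \big\| E_1 E_1^{\top} - E_2 E_2^{\top} \big\|_F

% Unitary invariance: for any unitary U, (EU)(EU)^{\top} = E E^{\top},
% so the loss ignores rotations that leave pairwise similarities intact.
```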
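On the adversarial-examples paper: for readers who have not seen how such perturbations are built, the classic single-step construction is the fast gradient sign method of Goodfellow et al. (2015); the paper relies on attacks that transfer strongly across models, not necessarily this exact one:

```latex
x_{\mathrm{adv}} = x + \varepsilon \cdot \operatorname{sign}\!\big( \nabla_x J(\theta, x, y) \big)
```

where J is the training loss, \theta the model parameters, and \varepsilon a small perturbation budget.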
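On the capacity paper: the definition is compact enough to state exactly. The Boolean counting in the comments is standard background I am adding for scale, not a claim from this abstract.

```latex
% Capacity of a learning architecture A with realizable function class F(A):
C(\mathcal{A}) = \log_2 \lvert \mathcal{F}(\mathcal{A}) \rvert

% Scale: there are 2^{2^n} Boolean functions of n inputs in total, while a
% single linear threshold neuron realizes about 2^{n^2(1+o(1))} of them,
% i.e. its capacity is roughly n^2 bits.
```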
How Does Batch Normalization Help Optimization?

Perspective. "Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs)." A minimal forward pass is sketched below.
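As a refresher on what BatchNorm computes (the paper is about why it helps, not what it is), here is a minimal training-time forward pass over a (batch, features) activation; at inference time, running averages replace the per-batch statistics.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then apply the
    learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta

x = np.random.randn(32, 4) * 3.0 + 1.0
y = batch_norm(x, np.ones(4), np.zeros(4))
print(y.mean(axis=0).round(6))  # ~0 for every feature
```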
