Deep neural networks are all the rage now in Machine Learning. This happened for a very good reason: they have shown spectacular results on a number of difficult learning challenges [pointers].
A lot of people in Machine Learning were probably taken by surprise, since the new results came out of a fairly small community. Even NIPS (Neural Information Processing Systems), which is supposed to be the conference for neural networks, was until recently dominated by papers on Support Vector Machines.
In this blog post, I want to address the question "Why should one care about neural networks?" and also collect some pointers that will help one become an expert in neural networks.
Are neural networks superior to other learning methods (in particular trees, SVMs, and linear methods)?
- neural networks are interpretable
Neural networks are not only about deep architectures but also about complex modeling
- neural networks encode prior knowledge
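One concrete way to read "encode prior knowledge" is weight sharing: a convolutional layer applies the same small filter at every position, which builds the assumption of translation invariance directly into the model. A minimal NumPy sketch (the input and the edge-detecting filter are hypothetical values chosen for illustration):

```python
import numpy as np

# Weight sharing as a built-in prior: a 1-D "convolutional" layer applies the
# same 3-tap filter at every position of the input, encoding the assumption
# that the pattern of interest can occur anywhere (translation invariance).
def conv1d(x, w):
    n, k = len(x), len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(n - k + 1)])

x = np.array([0., 0., 1., 0., 0., 1., 0., 0.])  # the same pattern, twice
w = np.array([1., -1., 0.])                     # hypothetical detector filter

y = conv1d(x, w)
print(y)  # the shared filter produces the identical response at both occurrences
```

A fully connected layer over the same input would need separate weights for each position and would have to learn the pattern at every location independently; the shared filter gets that invariance for free.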
One is actually able to train neural networks today.
- stochastic gradient descent vs. batch gradient descent
- RMSprop and other tricks
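The two update rules in the bullets above can be sketched on a toy least-squares problem (a hypothetical example; the learning rates, decay constant, and batch size are illustrative choices, not recommendations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem: find w minimizing the mean of (Xw - y)^2.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

def grad(w, Xb, yb):
    # Gradient of the mean squared error on the given (mini)batch.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

# Batch gradient descent: one update per pass over the full data set.
w = np.zeros(3)
for _ in range(200):
    w -= 0.1 * grad(w, X, y)

# Stochastic (minibatch) gradient descent with RMSprop: each coordinate's
# step is divided by a running root-mean-square of its past gradients, so
# the effective step size adapts per parameter.
w2, s = np.zeros(3), np.zeros(3)
lr, decay, eps = 0.01, 0.9, 1e-8
for epoch in range(50):
    for i in range(0, len(y), 20):          # minibatches of 20 examples
        g = grad(w2, X[i:i + 20], y[i:i + 20])
        s = decay * s + (1 - decay) * g ** 2
        w2 -= lr * g / (np.sqrt(s) + eps)

print(w, w2)  # both approach true_w = [2.0, -1.0, 0.5]
```

Batch gradient descent computes the exact gradient but touches every example per step; minibatch SGD makes many cheap, noisy steps per pass, and RMSprop's per-coordinate scaling is one of the tricks that makes such training stable in practice.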
Some deficiencies in neural networks
- e.g., how does a neural network perform smoothing?
- how does a neural network extract patterns (and what is the statistical significance of those patterns)?