Machine learning for the curious but scared
Deep convolutional networks, overfitting, RBF kernels, GANs, nonlinear transformations, stochastic gradient descent, and of course the coming singularity and superintelligence. Machine learning buzzwords are all the rage now, but what does it mean for a machine to actually learn? This talk covers the things that are usually assumed as prerequisites in "introductory" ML courses and tutorials. It describes what happens during learning, how we can represent experience and learned knowledge in a machine-friendly way, and how different types of ML systems work at a high level.