Deep Learning 2
As you watch the video:
What three changes does the lecturer (Kilian Weinberger) make to convolutional neural networks to improve their efficiency?
What, intuitively, is a manifold, and why are manifolds useful for understanding deep learning for machine vision?
What demonstrations does Kilian give of transformations in the "latent space" of neural networks? What "latent space" does he use?