
Takeaways from NIPS: meta-learning and one-shot learning

Figure: encoder-decoder with attention mechanism.

Before the representation learning revolution, hand-crafted features were a prerequisite for successfully applying most machine learning algorithms. Just as learned features have proved massively successful across applications, recent work shows that the learning algorithms themselves can also be learned. In this talk, I'll cover some of the related ideas presented at this year's NIPS conference. Bits and pieces will be taken mainly from the papers below; rough code sketches of two of the core ideas follow the reading list.

Reading:

  • Andrychowicz et al., Learning to learn by gradient descent by gradient descent: ABS, PDF, arXiv
  • Vinyals et al., Matching Networks for One Shot Learning: ABS, PDF, arXiv
  • Vezhnevets et al., Strategic Attentive Writer for Learning Macro-Actions: ABS, PDF, arXiv
  • Gómez-Bombarelli et al., Automatic Chemical Design using Variational Autoencoders: PDF, cs.nott.ac.uk
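
To make the first of these concrete: Andrychowicz et al. replace the hand-designed update rule of gradient descent with a learned function, so that θ_{t+1} = θ_t + m_φ(∇f(θ_t), h_t), where m_φ is a coordinatewise LSTM trained on the optimizee's summed loss. The sketch below keeps only this loop structure; the fixed momentum-like rule standing in for the learned m_φ, and the quadratic toy objective, are illustrative assumptions, not the paper's setup.

    import numpy as np

    def f(theta):
        """Toy optimizee: a simple quadratic loss (assumption)."""
        return 0.5 * np.sum(theta ** 2)

    def grad_f(theta):
        return theta  # gradient of the quadratic above

    def m_phi(grad, state):
        """Stand-in for the learned optimizer m_phi. In the paper this is
        a coordinatewise LSTM trained on the optimizee's summed loss;
        here a fixed momentum-like rule keeps the sketch runnable."""
        state = 0.9 * state + grad
        return -0.1 * state, state

    theta = np.random.default_rng(0).normal(size=5)
    state = np.zeros_like(theta)
    for t in range(100):
        update, state = m_phi(grad_f(theta), state)
        theta = theta + update  # theta_{t+1} = theta_t + m(grad, h_t)
    print(f"final loss: {f(theta):.2e}")  # shrinks towards 0

Matching networks (Vinyals et al.) cast one-shot classification as attention over a small labelled support set: a query x̂ gets the label distribution ŷ = Σ_i a(x̂, x_i) y_i, with a a softmax over similarities of embedded inputs. In this sketch, cosine similarity and an identity embedding stand in for the paper's trained embedding networks f and g.

    import numpy as np

    def embed(x):
        """Identity stand-in for the trained embedding networks (assumption)."""
        return x

    def matching_predict(query, support_x, support_y, n_classes):
        """Classify `query` by softmax attention over the support set."""
        q = embed(query)
        sims = np.array([q @ embed(x) /
                         (np.linalg.norm(q) * np.linalg.norm(embed(x)) + 1e-8)
                         for x in support_x])
        attn = np.exp(sims) / np.exp(sims).sum()  # attention weights a(query, x_i)
        probs = np.zeros(n_classes)
        for a, y in zip(attn, support_y):  # weighted vote of one-hot labels
            probs[y] += a
        return int(probs.argmax()), probs

    # One labelled example per class; the query lies close to class 1.
    support_x = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    support_y = [0, 1]
    print(matching_predict(np.array([0.1, 0.9]), support_x, support_y, n_classes=2))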

Slides (PDF)

Chalmers Machine Learning Seminars, 2017-02-02
Olof Mogren
