
Talks

2017

  • 2017-05-14: Can we trust AI: A talk at the science festival
    (Vetenskapsfestivalen)
    During the science festival in Gothenburg, we had a session discussing artificial intelligence. The theme for the whole festival was “trust”, so we naturally named our session “Can we trust AI”. I gave an introduction, sharing my view of some of the recent progress in AI and machine learning, and then four other speakers gave their views of the current state of the art. Finally, I chaired a discussion session that was much appreciated by the audience. The room was filled, and many people came up to us afterwards to keep the discussion going. Among the other speakers were Annika Larsson from Autoliv, Ola Gustavsson from Dagens Nyheter, and Hans Salomonsson from Data Intelligence Sweden AB.

    Click title for slides and more info.
  • 2017-02-02: Takeaways from NIPS: meta-learning and one-shot learning
    (Chalmers Machine Learning Seminars)
    Before the representation learning revolution, hand-crafted features were a prerequisite for successfully applying most machine learning algorithms. Just as learned features have proved massively successful in many applications, some recent work has shown that the learning algorithms themselves can also be automated. In this talk, I'll cover some of the related ideas presented at this year's NIPS conference.

    Click title for slides and more info.

2016

  • 2016-10-06: Deep Learning Guest Lecture
    (FFR135, Artificial Neural Networks)

    A motivational talk about deep artificial neural networks, given to the students in FFR135 (Artificial Neural Networks). I motivated the use of deep architectures and of learning hierarchical representations of data.

    Click title for slides and more info.
  • 2016-09-29: Recent Advances in Neural Machine Translation
    (Chalmers Machine Learning Seminars)

    Neural models for machine translation were first seriously introduced in 2014. With the introduction of attention models, their performance improved to levels comparable to those of statistical phrase-based machine translation, the type of translation we are all familiar with through services like Google Translate.

    However, the models have struggled with problems such as limited vocabularies, the need for large amounts of training data, and the fact that they are expensive to train and use.

    In recent months, a number of papers have been published to remedy some of these issues, including techniques to battle the limited-vocabulary problem and ways of using monolingual data to improve performance. As recently as Monday evening (Sept 26), Google uploaded a paper on their implementation of these ideas, claiming performance on par with human translators, both in terms of BLEU scores and in human evaluations.

    During this talk, I'll go through the ideas behind these recent papers.

    Click title for slides and more info.
  • 2016-09-22: ACL overview
    (Chalmers Machine Learning Seminars)
    An overview of some of the interesting papers presented at ACL this year.

    Click title for slides and more info.
  • 2016-08-11: Assisting Discussion Forum Users using Deep Recurrent Neural Networks
    (Poster Presentation, RepL4NLP 2016, at ACL, Berlin)
    A presentation of our work on a virtual assistant for discussion forum users. The recurrent neural assistant was evaluated in a user study in a realistic discussion forum setting within an IT consultancy. For more information, see publications.

  • 2016-06-07: Modelling the World with Deep Learning
    (Invited Talk, Sigma Smart Developers Society)
    An introduction to Deep Artificial Neural Networks and their applications in image recognition, natural language processing, and reinforcement learning.

    Click title for slides and more info.
  • 2016-02-25: Recognizing Entities and Assisting Discussion Forum Users using Neural Networks
    (Invited Talk, Machine Learning and Data Science GBG Meetup)
    Recurrent Neural Networks can model sequences of arbitrary length, and have been successfully applied to tasks such as language modelling, machine translation, sequence labelling, and sentiment analysis. In this talk, I gave an overview of some ongoing research in our group related to this technology: first, a master's thesis project in collaboration with the meetup host Findwise, concerning entity recognition in the Swedish medical domain; and second, our effort to build a system that gives useful feedback to users in a discussion forum.

  • 2016-02-18: Neural Attention Models
    (Talk, Chalmers Machine Learning Seminars)
    In artificial neural networks, attention models allow the system to focus on certain parts of the input. This has been shown to improve model accuracy in a number of applications. In image caption generation, attention models help guide the model towards the parts of the image currently of interest. In neural machine translation, the attention mechanism gives the model an alignment between the words of the source sequence and those of the target sequence. In this talk, we'll go through the basic ideas and workings of attention models, both for recurrent networks and for convolutional networks. In conclusion, we will see some recent papers that apply attention mechanisms to solve different tasks in natural language processing and computer vision. A minimal sketch of the core computation follows below.
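
    To make the mechanism concrete, here is a minimal numpy sketch of soft (dot-product) attention; all names and sizes are illustrative, not taken from any particular paper:

        import numpy as np

        def soft_attention(decoder_state, encoder_states):
            # Score each source position against the current decoder state
            # (dot-product scoring; additive MLP scoring is a common alternative).
            scores = encoder_states @ decoder_state        # shape (T,)
            # Softmax turns the scores into a distribution over source positions.
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()                       # shape (T,), sums to 1
            # The context vector is the attention-weighted sum of encoder states.
            context = weights @ encoder_states             # shape (d,)
            return context, weights

        # Illustrative usage: 5 source positions, hidden dimension 4.
        encoder_states = np.random.randn(5, 4)
        decoder_state = np.random.randn(4)
        context, weights = soft_attention(decoder_state, encoder_states)
        print(weights)  # the soft "alignment" over source positions

    In translation, these weights are what align each target word with the source words it depends on.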

    Click title for slides and more info.
  • 2016-01-13: Recurrent Networks and Sequence Labelling
    (Talk, Chalmers Deep Learning Seminar)
    An introduction to character-based recurrent neural networks and how they can be used for sequence labelling.
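
    As a rough illustration of the idea, here is a toy character-level RNN that predicts one label per character; the weights are random and untrained, so all sizes and outputs are purely illustrative:

        import numpy as np

        chars = "abc "                      # toy character vocabulary
        V, H, L = len(chars), 8, 3          # vocab size, hidden size, label count

        rng = np.random.default_rng(0)
        Wxh = rng.normal(scale=0.1, size=(H, V))  # input-to-hidden weights
        Whh = rng.normal(scale=0.1, size=(H, H))  # hidden-to-hidden weights
        Why = rng.normal(scale=0.1, size=(L, H))  # hidden-to-label weights

        def label_sequence(text):
            h = np.zeros(H)
            labels = []
            for ch in text:
                x = np.zeros(V)
                x[chars.index(ch)] = 1.0                # one-hot character input
                h = np.tanh(Wxh @ x + Whh @ h)          # recurrent state update
                labels.append(int(np.argmax(Why @ h)))  # one label per character
            return labels

        print(label_sequence("abba c"))  # untrained, so the labels are arbitrary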

2015

  • 2015-11-12: Machine Learning on GPUs using Torch7
    (Talk, Chalmers GPU Computing Workshop)
    An introduction to GPU computing from the machine learning perspective. I presented a survey of three different libraries: Theano, Torch, and TensorFlow. The first two have backends for both CPUs and GPUs; TensorFlow has a more flexible backend, and also allows distributed computing on clusters. The talk also included a discussion of throughput and arithmetic intensity, inspired by Adam Coates' lecture at the Deep Learning Summer School 2015. A back-of-the-envelope example of arithmetic intensity follows below.
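
    Arithmetic intensity is the number of floating-point operations performed per byte of memory traffic, and operations with high intensity are the ones that benefit most from a GPU. As a back-of-the-envelope illustration (the matrix size is arbitrary, and the byte counts assume single precision with ideal cache reuse):

        # Arithmetic intensity = FLOPs per byte moved between memory and compute.
        n = 4096                 # illustrative square-matrix dimension
        bytes_per_float = 4      # single precision

        # Matrix multiply C = A @ B: ~2*n^3 FLOPs over 3 matrices of traffic.
        matmul_intensity = (2 * n**3) / (3 * n**2 * bytes_per_float)
        print(f"matmul:      {matmul_intensity:.1f} FLOPs/byte")   # ~682.7, compute-bound

        # Elementwise add C = A + B: n^2 FLOPs over the same 3 matrices of traffic.
        add_intensity = n**2 / (3 * n**2 * bytes_per_float)
        print(f"elementwise: {add_intensity:.3f} FLOPs/byte")      # ~0.083, memory-bound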

  • 2015-11-05: Deep Learning and Algorithms
    (Talk for first-year students, in Swedish)
    A high-level overview of the fields of algorithms, machine learning, and artificial intelligence. I talked about some recent advances in deep learning and gave an overview of the courses that students can take at Chalmers.

  • 2015-09-07: Extractive Summarization by Aggregating Multiple Similarities
    (Poster Presentation, RANLP 2015, Hissar, Bulgaria)
    Poster presentation covering results in extractive multi-document summarization. For more information, see publications.

Olof Mogren, Department of Computer Science and Engineering, Chalmers University of Technology
