Presentations



  • A presentation for the CMS group, Experimental Physics Department, CERN, April 20, 2022
    TITLE: Introduction to Flow-based Generative Models
    SLIDES: [here]

  • A presentation at GenU 2021, 12-13 October 2021, Copenhagen (Denmark)
    TITLE: Is the Likelihood-based Deep Generative Modeling appropriate for Representation Learning?
    SLIDES: [here]

  • A presentation at beIT, 29 May 2021
    TITLE: Deep Generative Modeling with Variational Auto-Encoders
    SLIDES: [here]

  • A presentation for Vinted (Vilnius, Lithuania), 25 May 2021
    TITLE: Why AI needs Deep Generative Modeling?
    SLIDES: [here]

  • A presentation at SPP Zurich, 7-8 May 2021
    TITLE: There is no AI without Deep Generative Modelling
    SLIDES: [here]

  • A presentation for Booking.com @ Amsterdam
    TITLE: Introduction to Deep Generative Modeling
    SLIDES: [here]

  • A presentation at AI4Science Lab @ Univ. of Amsterdam
    TITLE: All that glitters is not Deep Learning in Life Sciences (but sometimes it is!)
    ABSTRACT: Life sciences is a fascinating field that tries to answer fundamental questions about ourselves, other species, and interactions within and among various environments. (Bio)chemistry and physics are the typical tools to study and comprehend our world. However, due to the high complexity of biological systems, these standard tools are not enough to understand and model all underlying relationships. Computational methods can serve as a possible remedy. In this talk, I will show how we can use computational intelligence, Bayesian inference, and deep learning to deal with some problems in life sciences. Specifically, we will discuss how to identify parameters in dynamical models of biological networks, how to find values of kinetic parameters in enzyme kinetics (including COVID-19), and how to count cells automatically.
    SLIDES: [here]
    VIDEO: [here]

  • A presentation at Weekly Artificial Intelligence (WAI) meeting @ VU Amsterdam
    TITLE: Why do we need deep generative modeling?
    SLIDES: [here]

  • An invited talk at ML in PL (Warsaw, Poland)
    TITLE: Why do we need deep generative modeling?
    ABSTRACT: Deep learning achieves state-of-the-art results in tasks like image or audio classification. However, adding noise to the data can easily fool a deep learning model. During this talk, we will discuss a possible remedy to this issue, namely, learning generative models. We will start with a motivating example of image classification and highlight that learning a joint distribution over a label and an object (image) is crucial for uncertainty quantification. Next, we will outline different approaches to modeling a distribution over objects (e.g., images). More specifically, we will focus on Variational Auto-Encoders and Flow-based models, which allow us to learn (approximate) probability distributions. To conclude, we will show successes and failures of these models, indicating possible future research directions.
    SLIDES: [here]
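    NOTE: As a short illustration of the point about the joint distribution above (an illustrative note, not taken from the talk), the factorization

      p(x, y) = p(y \mid x)\, p(x)

    shows that a purely discriminative classifier supplies only p(y \mid x); the marginal p(x) is what can indicate that a (possibly noise-perturbed) input is unlikely under the training distribution, and hence that a confident prediction on it should not be trusted.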

  • A presentation at AwesomeIT (Amsterdam, the Netherlands)
    TITLE: The Future of Deep Learning: Deep Generative Modeling
    ABSTRACT: Deep learning has become almost a default tool for many real-life problems like image, audio, and text analysis. Due to increasing computational capabilities, neural networks are deeper and their training is faster. However, learning models that are capable of capturing rich distributions from vast amounts of (unlabeled) data remains one of the major challenges of artificial intelligence. For instance, there is an enormous number of images available online, but labeling them all is an almost impossible task. Therefore, in order to take advantage of this flood of unlabeled data, deep generative modeling provides a natural way of dealing with both labeled and unlabeled data.
    In recent years, different approaches to deep generative modeling have been proposed, either by formulating training objectives alternative to the log-likelihood, like the adversarial loss that leads to Generative Adversarial Networks (GANs), or by utilizing variational inference, which results in the family of Variational Auto-Encoders (VAE). A third way is the application of autoregressive models like PixelCNN or WaveNet. During our meeting we will discuss these three approaches and point out their advantages and disadvantages. In order to present their successes, the most promising applications of these deep generative models will be outlined.
    SLIDES: [here]
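    NOTE: Of the three approaches listed above, the autoregressive one (PixelCNN, WaveNet) is the simplest to write down (a standard formulation, not taken from the slides): the likelihood is factorized over dimensions,

      p(x) = \prod_{i=1}^{D} p(x_i \mid x_{<i}),

    so training maximizes an exact log-likelihood, at the price of slow, sequential sampling.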

  • A presentation at Summer School on Data Science (Split, Croatia)
    TITLE: Deep Generative Models: GANs and VAE
    ABSTRACT: During this talk I present why generative modeling is important and what the main trends in generative modeling are. I focus on Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAE). I outline the basics of these two approaches and point to recent developments. Moreover, I discuss the pros and cons of these methods.
    SLIDES: [here]

  • A presentation at Technische Universiteit Eindhoven
    TITLE: Variational Auto-Encoder: Deep Learning meets Generative Modeling
    ABSTRACT: The variational auto-encoder (VAE) is a scalable and powerful generative framework. The main advantage of the VAE is that it allows us to model stochastic dependencies between random variables using deep neural networks that can be trained by gradient-based methods (backpropagation). There are three main components within the VAE framework: (i) a decoder, (ii) an encoder, and (iii) a prior. During the talk I will introduce the basic ideas of the VAE and show how these three components can be formulated. I will especially focus on increasing the flexibility of the encoder using the idea of normalizing flows. Further, I will present how to choose the prior so as to learn better latent representations. Finally, I will outline possible extensions and future directions.
    SLIDES: [here]
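    NOTE: For reference, the three components named in the abstract enter the VAE objective (the evidence lower bound) as follows (a standard formulation, not taken from the slides):

      \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p_\lambda(z)\big),

    where p_\theta(x \mid z) is the decoder, q_\phi(z \mid x) the encoder, and p_\lambda(z) the prior; a normalizing flow increases the flexibility of q_\phi(z \mid x) by pushing a simple Gaussian through a sequence of invertible transformations.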

  • A presentation at CWI (Dept. of Life Sciences and Health)
    TITLE: Deep Generative Modeling using Variational Auto-Encoders
    ABSTRACT: Learning generative models that are capable of capturing rich distributions from vast amounts of data, like image collections, remains one of the major challenges of artificial intelligence. In recent years, different approaches to achieve this goal have been proposed, either by formulating training objectives alternative to the log-likelihood, like the adversarial loss, or by utilizing variational inference. The latter approach can be made especially efficient through the application of the reparameterization trick, resulting in a highly scalable framework now known as the variational auto-encoder (VAE). VAEs are scalable and powerful generative models that can be easily utilized in any probabilistic framework. The tractability and the flexibility of the VAE follow from the choice of the variational posterior (the encoder), the prior over latent variables, and the decoder.
    In this presentation I will outline different ways of improving the VAE. Moreover, I will discuss current applications and possible future directions.
    SLIDES: [here]
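    NOTE: A minimal PyTorch sketch of the reparameterization trick mentioned in the abstract (an illustrative sketch with assumed layer sizes, not code from the talk):

      import torch
      import torch.nn as nn

      class GaussianEncoder(nn.Module):
          """Encoder q(z|x) that returns a reparameterized sample."""
          def __init__(self, x_dim=784, z_dim=16, h_dim=256):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                       nn.Linear(h_dim, 2 * z_dim))

          def forward(self, x):
              mu, log_var = self.net(x).chunk(2, dim=-1)
              eps = torch.randn_like(mu)               # all randomness isolated in eps
              z = mu + torch.exp(0.5 * log_var) * eps  # differentiable w.r.t. mu and log_var
              return z, mu, log_var

      # Closed-form KL term of the ELBO for a standard Gaussian prior p(z) = N(0, I):
      def kl_to_standard_normal(mu, log_var):
          return 0.5 * (torch.exp(log_var) + mu ** 2 - 1.0 - log_var).sum(dim=-1)

    Because the noise eps is sampled independently of the parameters, gradients of the ELBO flow through mu and log_var, which is what makes the framework trainable by backpropagation.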

  • An oral presentation by Jakub M. Tomczak at AISTATS (the Canary Islands)
    TITLE: VAE with a VampPrior
    ABSTRACT: Many different methods to train deep generative models have been introduced in the past. In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call "Variational Mixture of Posteriors" prior, or VampPrior for short. The VampPrior consists of a mixture distribution (e.g., a mixture of Gaussians) with components given by variational posteriors conditioned on learnable pseudo-inputs. We further extend this prior to a two-layer hierarchical model and show that this architecture, with a coupled prior and posterior, learns significantly better models. The model also avoids the usual local optima issues related to useless latent dimensions that plague VAEs. We provide empirical studies on six datasets, namely, static and dynamic MNIST, OMNIGLOT, Caltech 101 Silhouettes, Frey Faces and Histopathology patches, and show that applying the hierarchical VampPrior delivers state-of-the-art results on all datasets in the unsupervised permutation-invariant setting, and the best results or results comparable to SOTA methods for the approach with convolutional networks.
    SLIDES: [here]
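    NOTE: A minimal PyTorch sketch of the VampPrior construction described in the abstract (illustrative only; it assumes a hypothetical encoder module mapping a batch of inputs to the mean and log-variance of the variational posterior, and all sizes are placeholders):

      import math
      import torch
      import torch.nn as nn

      class VampPrior(nn.Module):
          """Mixture of variational posteriors evaluated at K learnable pseudo-inputs."""
          def __init__(self, encoder, num_pseudo_inputs=500, x_dim=784):
              super().__init__()
              self.encoder = encoder  # assumed interface: encoder(x) -> (mu, log_var)
              self.pseudo_inputs = nn.Parameter(torch.randn(num_pseudo_inputs, x_dim))

          def log_prob(self, z):
              # Component parameters: the variational posterior at each pseudo-input.
              mu, log_var = self.encoder(self.pseudo_inputs)        # each: (K, z_dim)
              z = z.unsqueeze(1)                                    # (B, 1, z_dim)
              log_normal = -0.5 * (log_var + (z - mu) ** 2 / log_var.exp()
                                   + math.log(2.0 * math.pi))
              log_components = log_normal.sum(dim=-1)               # (B, K)
              # Uniform mixture: log p(z) = logsumexp_k log N(z; mu_k, sigma_k^2) - log K
              return torch.logsumexp(log_components, dim=1) - math.log(
                  self.pseudo_inputs.shape[0])

    Because the mixture components are the variational posteriors themselves, the prior is coupled to the encoder and is trained jointly with it.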

  • A presentation at the PASC 2018 Conference (2 July 2018) and at CERN (3 July 2018)
    TITLE: The Success of Deep Generative Models
    ABSTRACT: Deep generative models allow us to learn hidden representations of data and generate new examples. There are two major families of models that are exploited in current applications: Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAE). The principle of GANs is to train a generator that can generate examples from random noise, in an adversarial game against a discriminative model that is forced to distinguish true samples from generated ones. Images generated by GANs are very sharp and detailed. The biggest disadvantage of GANs is that they are trained by solving a minimax optimization problem, which causes significant learning instability issues. VAEs are based on a fully probabilistic perspective of variational inference. The learning problem aims at maximizing the variational lower bound for a given family of variational posteriors. The model can be trained by backpropagation, but it has been noticed that the resulting generated images are rather blurry. However, VAEs are probabilistic models and thus can be incorporated into almost any probabilistic framework. We will discuss the basics of both approaches and present recent extensions. We will point out the advantages and disadvantages of GANs and VAE. Some of the most promising applications of deep generative models will be shown.
    ANNOUNCEMENT at CERN: [link]
    SLIDES: [PASC], [CERN]
    VIDEO: [link]
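    NOTE: For reference, the two training objectives contrasted in the abstract, in their standard forms (not taken from the slides). GANs solve the minimax problem

      \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))],

    which is the source of the instability mentioned above, whereas VAEs maximize the variational lower bound

      \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)] - \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big) \;\le\; \log p_\theta(x),

    which can be optimized by backpropagation thanks to the reparameterization trick.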

  • A presentation for Tooploox
    TITLE: Attention-based Deep Multiple Instance Learning
    ABSTRACT: Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label where the bag label probability is fully parameterized by neural networks. Furthermore, we propose a neural network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves comparable performance to the best MIL methods on benchmark MIL datasets and it outperforms other methods on a MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.
    SLIDES: [here]
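    NOTE: A minimal PyTorch sketch of the attention-based, permutation-invariant MIL pooling described in the abstract (layer sizes are illustrative placeholders, and the gated variant is omitted):

      import torch
      import torch.nn as nn

      class AttentionMILPooling(nn.Module):
          """Weighted average of instance embeddings with attention weights computed
          by a small neural network; the pooled embedding parameterizes the Bernoulli
          distribution of the bag label."""
          def __init__(self, in_dim=500, attn_dim=128):
              super().__init__()
              self.V = nn.Linear(in_dim, attn_dim)    # tanh branch of the attention network
              self.w = nn.Linear(attn_dim, 1)         # one unnormalized score per instance
              self.classifier = nn.Linear(in_dim, 1)  # maps the bag embedding to a logit

          def forward(self, h):                       # h: (num_instances, in_dim), one bag
              scores = self.w(torch.tanh(self.V(h)))             # (num_instances, 1)
              attn = torch.softmax(scores, dim=0)                # weights sum to 1 over the bag
              bag_embedding = (attn * h).sum(dim=0)              # permutation-invariant pooling
              bag_prob = torch.sigmoid(self.classifier(bag_embedding))
              return bag_prob, attn                              # bag label probability + weights

    The attention weights returned alongside the bag probability are what make the prediction interpretable: they indicate how much each instance contributes to the bag label.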