Learning Hierarchical Priors in VAEs

A constrained optimisation approach

Alexej Klushyn
Graph-based interpolation of human motion

We address the issue of learning informative latent representations of data. In the standard VAE, the latent-space prior is a standard normal distribution. This over-regularises the posterior distribution, resulting in latent representations that do not reflect the structure of the data well. This post, describing our 2019 NeurIPS publication, proposes and demonstrates a solution based on a hierarchical latent-space prior.
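To make the idea concrete, here is a minimal PyTorch sketch of a VAE whose prior is itself learned through a second latent level, p(z1) = ∫ p(z1|z2) p(z2) dz2, instead of being fixed to a standard normal. This is an illustration only, not the paper's constrained-optimisation scheme: the layer sizes, the auxiliary posterior q(z2|z1), and the way the bound is assembled are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(inp, out, hidden=256):
    return nn.Sequential(nn.Linear(inp, hidden), nn.ReLU(), nn.Linear(hidden, out))

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions.
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1).sum(-1)

class HierarchicalPriorVAE(nn.Module):
    """Illustrative two-level hierarchy: the prior over z1 is learned via z2
    instead of being fixed to N(0, I)."""
    def __init__(self, x_dim=784, z1_dim=8, z2_dim=4):
        super().__init__()
        self.enc_z1 = mlp(x_dim, 2 * z1_dim)      # q(z1|x)
        self.enc_z2 = mlp(z1_dim, 2 * z2_dim)     # auxiliary q(z2|z1)
        self.prior_z1 = mlp(z2_dim, 2 * z1_dim)   # learned p(z1|z2)
        self.dec = mlp(z1_dim, x_dim)             # p(x|z1), Bernoulli logits

    @staticmethod
    def sample(mu, logvar):
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def neg_elbo(self, x):
        mu1, logvar1 = self.enc_z1(x).chunk(2, -1)
        z1 = self.sample(mu1, logvar1)
        recon = F.binary_cross_entropy_with_logits(
            self.dec(z1), x, reduction='none').sum(-1)

        # Instead of KL(q(z1|x) || N(0, I)), evaluate q(z1|x) against the
        # learned prior p(z1|z2), with z2 drawn from the auxiliary posterior.
        mu2, logvar2 = self.enc_z2(z1).chunk(2, -1)
        z2 = self.sample(mu2, logvar2)
        pmu1, plogvar1 = self.prior_z1(z2).chunk(2, -1)
        kl_z1 = gaussian_kl(mu1, logvar1, pmu1, plogvar1)
        kl_z2 = gaussian_kl(mu2, logvar2,
                            torch.zeros_like(mu2), torch.zeros_like(logvar2))
        return (recon + kl_z1 + kl_z2).mean()
```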

How to Learn Functions on Sets with Neural Networks

And how to choose your aggregation

Maximilian Soelch
The basic Deep Sets architecture for set functions: embed, aggregate, process.

If you feed a vector of data into a neural network, the order of its elements matters. But sometimes the order doesn't carry any useful information: sometimes we are interested in working on sets of data. In this post, we will look into functions on sets and how to learn them with the help of neural networks.
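As a rough illustration of the embed, aggregate, process pattern from the figure caption above, here is a short PyTorch sketch. The layer sizes and the particular aggregation choices (sum, mean, max) are assumptions made for this example, not recommendations from the post.

```python
import torch
import torch.nn as nn

class DeepSets(nn.Module):
    """Permutation-invariant set function: embed each element with a shared
    network, aggregate with a symmetric operation, then process the result."""
    def __init__(self, in_dim=2, embed_dim=64, out_dim=1, aggregation="sum"):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU(),
                                   nn.Linear(embed_dim, embed_dim))
        self.process = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.ReLU(),
                                     nn.Linear(embed_dim, out_dim))
        self.aggregation = aggregation

    def forward(self, x):
        # x: (batch, set_size, in_dim); the set dimension carries no order.
        h = self.embed(x)
        if self.aggregation == "sum":
            h = h.sum(dim=1)
        elif self.aggregation == "mean":
            h = h.mean(dim=1)
        else:  # max pooling
            h = h.max(dim=1).values
        return self.process(h)

# Permuting the set elements leaves the output (essentially) unchanged:
f = DeepSets()
x = torch.randn(4, 10, 2)
perm = torch.randperm(10)
assert torch.allclose(f(x), f(x[:, perm]), atol=1e-5)
```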

Approximate Geodesics for Deep Generative Models

How to efficiently find the shortest path in latent space

Nutan Chen
The graph of the fashion MNIST dataset in a 2D latent space, along with the magnification factor.

Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source (the latent space) into a more complex distribution: the one from which the data is sampled.

Typically, the data set is sparse, while the latent space is compact. Consequently, points that are separated by low-density regions in observation space get pushed together in latent space, and the two spaces become distorted relative to each other. As a result, Euclidean distances in the latent space are poor proxies for similarity in the observation space. How can this be solved?
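One way to make this concrete, loosely following the graph shown in the figure above, is to measure path lengths through the decoder rather than in latent space: build a nearest-neighbour graph over latent codes, weight each edge by the distance between the decoded endpoints, and run a shortest-path search. The sketch below assumes a hypothetical decode function and standard NumPy/SciPy/scikit-learn tooling; a more faithful edge weight would integrate along each edge rather than compare only its endpoints.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import NearestNeighbors

def latent_graph_geodesic(z, decode, start, goal, k=10):
    """Approximate the geodesic between two latent codes (node indices `start`
    and `goal`) via shortest paths on a k-nearest-neighbour graph whose edge
    weights are measured in observation space, not in the latent space."""
    # decode is a placeholder: it should map latent codes to observations.
    x = decode(z).reshape(len(z), -1)
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(z).kneighbors(z)

    n = len(z)
    w = lil_matrix((n, n))
    for i in range(n):
        for j in idx[i, 1:]:                    # skip the node itself
            # Edge weight: distance between the *decoded* neighbours.
            w[i, j] = np.linalg.norm(x[i] - x[j])

    dist, pred = dijkstra(w.tocsr(), directed=False, indices=start,
                          return_predecessors=True)
    # Backtrack the shortest path from goal to start.
    path, node = [goal], goal
    while node != start:
        node = pred[node]
        path.append(node)
    return dist[goal], path[::-1]
```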

Georgi Dikov

Intuition and experience. That's probably the answer you would get if you asked deep learning engineers how they chose the hyperparameters of a neural network. Depending on their familiarity with the problem, they might have needed a good three to five full-dataset runs until a satisfactory result popped up. Now, you might say, surely we could automate this, right? After all, we do it implicitly in our heads. Well, yes, we definitely could, but should we?

Deep Variational Bayes Filter

DVBF: filter to learn what to filter

Maximilian Soelch
Learnt latent representation of a swinging pendulum

Machine-learning algorithms thrive in environments where data is abundant. In the land of scarce data, blessed are those who have simulators. The recent successes in Go or Atari games would be much harder to achieve without the ability to parallelise millions of perfect game simulations.

About

Machine Learning Research Lab, Volkswagen Group

The front view of the lab building

Established in 2016 and located in Munich, the Volkswagen Group Machine Learning Research Lab conducts fundamental research on topics close to what we think artificial intelligence is about: machine learning and probabilistic inference, efficient exploration, and optimal control.