30 Days in the Life of a Machine Learning Researcher

As a PhD student, you’ll find it hard not to think about research problems even when you’re in one of the most beautiful places on Earth.
Here, a VAE trained on celebrity images is used to generate new celebrity images by varying one latent dimension at a time. The dimension running from bottom-left to top-right seems to represent hair length (or background color), while the dimension running from top-left to bottom-right seems to represent skin color. Since we allow only 2 latent variables, the generated images are not very realistic.
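
To give a concrete sense of how a traversal like this is produced, here is a minimal sketch (not the code from the original project): it assumes a trained Keras-style VAE whose decoder maps a 2-dimensional latent vector to an image, and simply decodes a grid of latent points, varying one dimension along each axis.

# Minimal latent-traversal sketch, assuming a trained VAE whose `decoder`
# maps a 2-dimensional latent vector to an image (names are illustrative).
import numpy as np
import matplotlib.pyplot as plt

def latent_traversal(decoder, grid_size=8, span=3.0):
    """Decode a grid of latent points, varying one dimension per axis."""
    values = np.linspace(-span, span, grid_size)
    fig, axes = plt.subplots(grid_size, grid_size, figsize=(8, 8))
    for i, z1 in enumerate(values):
        for j, z2 in enumerate(values):
            z = np.array([[z1, z2]], dtype=np.float32)   # shape (1, 2)
            img = decoder.predict(z, verbose=0)[0]        # decoded image
            axes[i, j].imshow(img.squeeze())
            axes[i, j].axis("off")
    plt.tight_layout()
    plt.show()
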
Examples of images in the target dataset (left) and background dataset (right).

I’ve told myself many times that I will not work on Saturdays and Sundays… but when the going gets tough, I immediately default to going back to the lab on weekends.

ResourceExhaustedError: OOM when allocating tensor
Gene expression matrices often show high degrees of correlation. Do we really need to measure every gene, or can we just measure a few (saving experimental cost and time) and then impute the rest?
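
As a rough illustration of the idea (on synthetic data, not the datasets from the actual project), one can measure a small subset of genes and predict the remaining ones with a simple linear model; when the genes are highly correlated, the held-out expression values can be recovered surprisingly well.

# Sketch of the imputation idea: measure only a subset of genes and predict
# the rest with a linear model. The data and gene subset are synthetic
# placeholders, not the expression matrices used in the project.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells, n_genes, n_measured = 1000, 500, 50

# Synthetic, highly correlated expression matrix (cells x genes).
latent = rng.normal(size=(n_cells, 10))
X = latent @ rng.normal(size=(10, n_genes)) + 0.1 * rng.normal(size=(n_cells, n_genes))

measured_idx = rng.choice(n_genes, size=n_measured, replace=False)
rest_idx = np.setdiff1d(np.arange(n_genes), measured_idx)

X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

# Fit: measured genes -> remaining genes.
model = Ridge(alpha=1.0).fit(X_train[:, measured_idx], X_train[:, rest_idx])
r2 = model.score(X_test[:, measured_idx], X_test[:, rest_idx])
print(f"Imputation R^2 on held-out cells: {r2:.3f}")
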
Figure showing the methodology of the contrastive VAE paper. The early version is shown on the left, while the final version, which appeared in the paper, is shown on the right.
Simplified architecture of the concrete autoencoder. The figure was generated entirely in LaTeX.

“A reader should understand your paper just from looking at the figures, or without looking at the figures” — Zachary Lipton

(a) The 20 most important pixels (out of 784 in total). (b) Sample images from the MNIST dataset. (c) The 20 pixels selected from each sample image in the previous panel. (d) The images reconstructed using only the 20 selected pixels, which approximate the originals quite well. Thank you Melih for making this figure and not getting frustrated by my comments!
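
For readers curious about panel (d), here is a simplified sketch of the reconstruction step: it fixes a set of 20 pixel indices and trains a small decoder to recover the full 784-pixel image. In the actual concrete autoencoder, the pixel subset itself is learned end-to-end through a Concrete (Gumbel-softmax) selector layer; the fixed indices below are only a stand-in.

# Reconstruction-from-20-pixels sketch on MNIST. The selected indices are
# random here, standing in for the pixels the concrete autoencoder learns.
import numpy as np
import tensorflow as tf

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

k = 20
rng = np.random.default_rng(0)
selected = rng.choice(784, size=k, replace=False)  # stand-in for learned pixels

decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(k,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
decoder.compile(optimizer="adam", loss="mse")
decoder.fit(x_train[:, selected], x_train, epochs=5, batch_size=256,
            validation_data=(x_test[:, selected], x_test))
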
Figure adapted from Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
Here, a contrastive VAE trained on celebrity images with hats (target) and without hats (background) is used to generate new celebrity images by varying one latent dimension at a time. The vertical dimension seems to involve hat color, while the horizontal dimension seems to involve hat shape. Since we allow only 2 latent variables, the generated images are not very realistic.
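
The core idea behind the contrastive VAE, sketched below with illustrative layer sizes (and with the KL terms and training loop omitted), is that target images are encoded into both "salient" latents s and shared latents z, while background images receive only z with s zeroed out, so that s ends up capturing what distinguishes the target set (here, hats).

# Minimal sketch of the contrastive VAE idea; shapes and layer sizes are
# illustrative, and the full training objective is omitted.
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 2           # size of salient and shared latents (2 each here)
input_dim = 64 * 64 * 3  # flattened celebrity image

def make_encoder(name):
    return tf.keras.Sequential([
        layers.Dense(256, activation="relu", input_shape=(input_dim,)),
        layers.Dense(2 * latent_dim, name=name),  # mean and log-variance
    ])

encoder_s = make_encoder("salient")   # used only for target images
encoder_z = make_encoder("shared")    # used for target and background
decoder = tf.keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(2 * latent_dim,)),
    layers.Dense(input_dim, activation="sigmoid"),
])

def reparameterize(stats):
    mean, logvar = tf.split(stats, 2, axis=-1)
    eps = tf.random.normal(tf.shape(mean))
    return mean + tf.exp(0.5 * logvar) * eps

def reconstruct(x, is_target):
    z = reparameterize(encoder_z(x))
    s = reparameterize(encoder_s(x)) if is_target else tf.zeros_like(z)
    return decoder(tf.concat([s, z], axis=-1))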
