
28 November 2023

Geometric Autoencoders -- What You See is What You Decode

Visualization is a crucial step in exploratory data analysis. One possible approach is to train an autoencoder with a low-dimensional latent space. Large network depth and width can help unfold the data. However, such expressive networks can achieve low reconstruction error even when the latent representation is distorted. To avoid such misleading visualizations, we propose, first, a differential geometric perspective on the decoder, leading to insightful diagnostics for an embedding's distortion, and, second, a new regularizer mitigating such distortion. Our "Geometric Autoencoder" avoids stretching the embedding spuriously, so that the visualization captures the data structure more faithfully. It also flags areas where little distortion could not be achieved, thus guarding against misinterpretation.
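
Both the diagnostics and the regularizer rest on the pullback metric of the decoder, i.e. J^T J with J the decoder's Jacobian at a latent point. The following PyTorch sketch is only an illustration of this idea, not the authors' implementation; the function names pullback_logdet and geometric_regularizer are placeholders, and the exact regularizer used in the paper may differ.

    import torch
    from torch.autograd.functional import jacobian

    def pullback_logdet(decoder, z, create_graph=False):
        # Jacobian of the decoder at a single latent point z (shape: [latent_dim]).
        # The decoder is assumed to take batched input, so we add and remove a batch axis.
        J = jacobian(lambda v: decoder(v.unsqueeze(0)).flatten(), z,
                     create_graph=create_graph)      # shape: [data_dim, latent_dim]
        G = J.T @ J                                  # pullback metric J^T J
        return torch.logdet(G)                       # log of the squared generalized Jacobian determinant

    def geometric_regularizer(decoder, z_batch):
        # Illustrative regularizer: variance of the log-determinant over the batch,
        # discouraging the decoder from stretching some latent regions more than others.
        logdets = torch.stack([pullback_logdet(decoder, z, create_graph=True)
                               for z in z_batch])
        return logdets.var()

In training, such a term would be added with some weight to the reconstruction loss; the per-point log-determinants can also be visualized over the embedding to flag regions where the decoder distorts the data.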

What you see is what you decode

Original Publication

Philipp Nazari, Sebastian Damrich, Fred A. Hamprecht
"Geometric Autoencoders -- What You See is What You Decode"
arXiv:2306.17638v1 [cs.LG] 30 Jun 2023

Further information