# Auto Encoders
Auto encoders are neural networks that learn an internal representation of the input data without supervision.
It's like a lossy compression method: the reconstruction is an approximation of the input, not an exact copy.
## Constraints
- Number of outputs = number of inputs.
- The bottleneck layer (sometimes called the encoding layer) must be smaller than the number of inputs. Without a bottleneck smaller than the inputs/outputs, the network can simply pass all the data straight through instead of learning a compressed representation. See the minimal sketch after this list.
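A minimal sketch of these constraints using Keras `Dense` layers. The layer sizes and `input_dim = 784` are assumptions for illustration, not fixed choices:

```python
import tensorflow as tf
from tensorflow.keras import layers

input_dim = 784   # e.g. a flattened 28x28 image (assumed)
latent_dim = 32   # bottleneck, much smaller than input_dim

# Encoder: compresses the input down to the bottleneck
encoder = tf.keras.Sequential([
    layers.Input(shape=(input_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(latent_dim, activation="relu"),   # bottleneck layer
])

# Decoder: reconstructs the input from the bottleneck
decoder = tf.keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"), # outputs match the input size
])

autoencoder = tf.keras.Sequential([encoder, decoder])
```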
## Loss Function
Use MSE when the autoencoder reconstructs continuous data. Use [[Loss Functions#Binary Cross Entropy]] for categorical data or for pixel values normalized to [0, 1] (e.g. greyscale images).
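For example, compiling the autoencoder sketched above with one of these losses (the optimizer choice is an assumption; pick the line that matches your data):

```python
# Continuous-valued data
autoencoder.compile(optimizer="adam", loss="mse")

# Data in [0, 1], e.g. greyscale pixel intensities
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```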
# Variational Auto Encoders
A variational autoencoder (VAE) provides a _probabilistic_ manner for describing an observation in latent space. Thus, rather than building an encoder which outputs a single value to describe each latent state attribute, we'll formulate our encoder to describe a probability distribution for each latent attribute.
## Latent Representation
The bottleneck produces two outputs: the mean of the encoding and the standard deviation (in practice the log-variance) of the encoding, both with no activation. We then sample noise from a standard Gaussian and combine them, so the latent vector = mean + std × noise (the reparameterization trick).
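A minimal sketch of this sampling step as a Keras layer, assuming `sigma` holds the log-variance so that `exp(0.5 * sigma)` is the standard deviation (consistent with the KL loss below):

```python
import tensorflow as tf

class Sampling(tf.keras.layers.Layer):
    """Draws z = mean + std * noise using standard Gaussian noise."""
    def call(self, inputs):
        mu, sigma = inputs                           # mean and log-variance from the bottleneck
        eps = tf.random.normal(shape=tf.shape(mu))   # Gaussian noise
        return mu + tf.exp(0.5 * sigma) * eps        # std = exp(0.5 * log-variance)
```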
## KL Divergence Loss
In addition to the reconstruction loss (MSE or binary cross-entropy, as above), the VAE loss includes a Kullback-Leibler divergence term ([[Loss Functions|loss function]]) that pushes the latent distribution towards a standard normal.
```python
import tensorflow as tf

def kl_reconstruction_loss(inputs, outputs, mu, sigma):
    # mu = latent mean, sigma = latent log-variance; inputs/outputs unused here
    kl_loss = 1 + sigma - tf.square(mu) - tf.math.exp(sigma)
    return tf.reduce_mean(kl_loss) * -0.5  # -0.5 * mean of the KL terms
```
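A sketch of how this KL term might be combined with the reconstruction loss into the total VAE loss, assuming flattened inputs with values in [0, 1] and a hypothetical `input_dim`:

```python
def vae_loss(inputs, outputs, mu, sigma, input_dim=784):
    # Reconstruction term: binary cross-entropy averaged over the batch,
    # scaled back up by the (assumed) number of input features
    reconstruction = tf.reduce_mean(
        tf.keras.losses.binary_crossentropy(inputs, outputs)
    ) * input_dim
    # KL term from kl_reconstruction_loss above
    return reconstruction + kl_reconstruction_loss(inputs, outputs, mu, sigma)
```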