Variational autoencoders are generative models, unlike other types of autoencoders. Like GANs, variational autoencoders learn the distribution of the training set, which is why they are widely used for generative tasks.
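To make this concrete, a variational autoencoder is typically trained with a loss that combines a reconstruction term with a KL divergence term that pulls the learned latent distribution toward a standard Gaussian prior. The following is a minimal sketch of that loss in PyTorch, assuming Gaussian latents; the function name and arguments are illustrative, not from the original text:

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat, x, mu, log_var):
    # Reconstruction term: how closely the decoder output matches the input.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # KL divergence between the learned Gaussian q(z|x) and the prior N(0, I);
    # this is the term that shapes the latent space for generation.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```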
We use contractive autoencoders when we want the encodings to be robust to small perturbations of the inputs in the training set.
Contractive autoencoders add a penalty term to the loss function that penalizes representations that are too sensitive to the input.
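This penalty is the squared Frobenius norm of the Jacobian of the hidden representation with respect to the input. For a single sigmoid encoder layer it has a simple closed form; the sketch below assumes that setup in PyTorch, with illustrative names:

```python
import torch

def contractive_penalty(h, W):
    # h: hidden activations (batch, hidden) from a sigmoid encoder layer
    # W: encoder weight matrix (hidden, input)
    # For a sigmoid unit, dh_j/dx_i = h_j * (1 - h_j) * W_ji, so the squared
    # Frobenius norm of the Jacobian factorizes as below.
    dh = h * (1 - h)                    # (batch, hidden)
    w_sq = W.pow(2).sum(dim=1)          # (hidden,)
    return torch.sum(dh.pow(2) * w_sq)  # added to the reconstruction loss, scaled by a coefficient
```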
The sparse autoencoder introduces a special constraint in the loss function called a sparsity constraint. The sparsity constraint ensures that the autoencoder does not overfit the training data when we use many nodes in the hidden layer.
When we use many nodes in the hidden layer, we can learn a better and more robust latent representation of the input. The problem is that with more nodes in the hidden layer, the autoencoder tends to overfit the training data.
To combat this problem of overfitting, we use a sparse autoencoder, whose sparsity constraint keeps only a small fraction of the hidden units active at any time.
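One common way to implement the sparsity constraint is to penalize the KL divergence between a small target activation level and the average activation actually observed for each hidden unit. Here is a minimal sketch in PyTorch, assuming sigmoid hidden activations; the target level rho and the names are illustrative:

```python
import torch

def sparsity_penalty(h, rho=0.05, eps=1e-8):
    # h: hidden activations in (0, 1), e.g. from a sigmoid layer (batch, hidden)
    # rho: target average activation per hidden unit (the sparsity level)
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)  # observed mean activation per unit
    # KL divergence between Bernoulli(rho) and Bernoulli(rho_hat), summed over
    # units; it grows whenever a unit's average activation drifts away from rho.
    kl = rho * torch.log(rho / rho_hat) + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()
```

An even simpler alternative is an L1 penalty on the hidden activations, which likewise pushes most units toward zero.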
We can use denoising autoencoders to denoise images. First, we corrupt the input by adding some noise and feed this corrupted input to the encoder instead of the raw input.
While learning the representation of the input, the encoder learns that the noise is unwanted information and discards it. Thus, the encoder learns a compact representation of the input that keeps only the necessary information, without the noise, and maps this learned representation to the bottleneck.
Next, the decoder takes the bottleneck and reconstructs the image. Since the bottleneck does not contain any representation of the noise, the decoder can generate a denoised image from it.
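The key detail is that the loss compares the reconstruction with the clean input, not the corrupted one. Below is a minimal training-step sketch in PyTorch, with an illustrative architecture and Gaussian corruption assumed:

```python
import torch
import torch.nn as nn

# Illustrative encoder/decoder for flattened 28x28 images.
encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

def train_step(x, noise_std=0.3):
    # Corrupt the input, but keep the clean input as the target.
    x_noisy = (x + noise_std * torch.randn_like(x)).clamp(0, 1)
    z = encoder(x_noisy)                     # bottleneck without the noise
    x_hat = decoder(z)                       # reconstructed (denoised) image
    loss = nn.functional.mse_loss(x_hat, x)  # compare against the *clean* input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```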

