The generator of the DCGAN is built from transposed convolution and batch norm layers with ReLU activations. First, we draw a noise vector from a normal distribution and feed it to the generator, which transforms this noise into an image similar to the ones present in the training set.
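A minimal sketch of such a generator in PyTorch is shown below. The layer widths, noise dimension, and 32x32 output size are illustrative assumptions, not fixed by the text; each block is a transposed convolution followed by batch norm and ReLU, with a final tanh to produce pixel values.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=100, feat=64, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            # project the 1x1 noise "image" to a 4x4 feature map
            nn.ConvTranspose2d(noise_dim, feat * 4, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat),
            nn.ReLU(True),
            # 16x16 -> 32x32, squashed to [-1, 1]
            nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# draw noise from a normal distribution and generate a batch of images
z = torch.randn(8, 100, 1, 1)
fake_images = Generator()(z)
print(fake_images.shape)  # torch.Size([8, 1, 32, 32])
```

Note that the noise is shaped as an 8x100x1x1 tensor so the transposed convolutions can progressively upsample it into an image.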
GANs are widely used in applications that involve images, such as image generation, converting grayscale images to color, and so on. When dealing with images, we use a CNN instead of a feed-forward neural network, since CNNs are effective at handling images.
Similarly, instead of a vanilla GAN we can use a DCGAN, whose generator and discriminator are convnets rather than feed-forward networks. DCGANs are much more effective than vanilla GANs at image-related tasks.
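The discriminator side of a DCGAN can be sketched the same way. The sizes below are illustrative assumptions: strided convolutions with batch norm and LeakyReLU downsample the image, and a final sigmoid outputs the probability that the input is real.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, feat=64, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            # 32x32 -> 16x16
            nn.Conv2d(channels, feat, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # 16x16 -> 8x8
            nn.Conv2d(feat, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # 8x8 -> 4x4
            nn.Conv2d(feat * 2, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # 4x4 -> a single probability of "real"
            nn.Conv2d(feat * 4, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)

images = torch.randn(8, 1, 32, 32)
scores = Discriminator()(images)
print(scores.shape)  # torch.Size([8])
```

Using strided convolutions instead of pooling layers to downsample is one of the architectural choices commonly associated with DCGANs.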
The role of the generator is to generate new data points similar to the ones present in the training set, while the role of the discriminator is to classify a given data point as either a real data point or one produced by the generator.
The generator network generates new data points similar to the ones present in the training set. To do so, it implicitly learns the distribution of the training set and produces new data points based on this learned distribution.
Since the generator network implicitly learns the distribution of the training set, GANs are often called implicit density models.
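The interplay between the two roles can be sketched as a single adversarial training step. This is a toy illustration using stand-in linear models and made-up sizes rather than the real convolutional networks: the discriminator is pushed to output 1 on real data and 0 on generated data, while the generator is pushed to make the discriminator output 1 on its samples.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Linear(10, 2)                               # noise -> fake data point
D = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())   # data point -> P(real)
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

real = torch.randn(16, 2) + 3.0    # stand-in "training set" batch
noise = torch.randn(16, 10)

# --- discriminator step: classify real as 1, fake as 0 ---
fake = G(noise).detach()           # detach so G is not updated here
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# --- generator step: fool the discriminator into outputting 1 ---
g_loss = bce(D(G(noise)), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

print(float(d_loss), float(g_loss))
```

In practice these two steps alternate for many iterations, with the generator gradually learning to produce samples the discriminator cannot tell apart from the training data.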
The discriminative model classifies data points into their respective classes by learning the decision boundary that optimally separates the classes.
Generative models can also classify data points; however, instead of learning the decision boundary, they learn the characteristics of each class.
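A toy NumPy sketch of this generative approach to classification (an illustrative assumption, not taken from the text): fit a Gaussian to each class, then label a new point by which class's learned distribution assigns it higher likelihood, with no explicit decision boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # class A samples
class_b = rng.normal(loc=5.0, scale=1.0, size=(200, 2))  # class B samples

def fit_gaussian(x):
    """Learn the characteristics (mean, variance) of one class."""
    return x.mean(axis=0), x.var(axis=0)

def log_likelihood(x, mean, var):
    """Log-density of x under an axis-aligned Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

params_a = fit_gaussian(class_a)
params_b = fit_gaussian(class_b)

# classify a new point by comparing likelihoods under each class model
point = np.array([4.8, 5.1])  # lies near the class B cluster
label = ("A" if log_likelihood(point, *params_a) > log_likelihood(point, *params_b)
         else "B")
print(label)  # B
```

Here each class is summarized by its own distribution, so classification falls out of asking "which class could have generated this point?" rather than "which side of a boundary is it on?".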