Discriminative and generative models form the backbone of machine learning: discriminative models learn the conditional distribution of labels given features, carving out boundaries that distinguish data points, while generative models learn the distribution that generates the features themselves, making it possible to sample new data. Variational Autoencoders (VAEs), a type of generative model introduced in 2013, have gained attention for their ability to generate data by mapping each input to the parameters of a distribution in the latent space rather than to a single point embedding. This probabilistic encoding turns the latent space into a smooth feature landscape suitable for data generation, unlike ordinary Autoencoders, which compress data only for reconstruction. VAEs have been used to generate realistic images, such as human faces and clothing items, by learning to map the salient features of the training data into the latent space. Through the reparameterization trick, which keeps the sampling step differentiable so the network can be trained by backpropagation, and the maximization of the Evidence Lower Bound (ELBO), VAEs overcome the limitations of traditional Autoencoders and stand alongside Generative Adversarial Networks (GANs) as a powerful tool for data generation.
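
To make the reparameterization trick and the ELBO objective concrete, the following is a minimal PyTorch sketch of a VAE. The layer sizes, the 784-dimensional (MNIST-style) input, and names such as `VAE` and `elbo_loss` are illustrative assumptions rather than details taken from the discussion above.

```python
# Minimal VAE sketch (illustrative assumptions: layer sizes, 784-dim input).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.enc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.enc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec_hidden = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        # The encoder outputs distribution parameters, not a point embedding.
        h = F.relu(self.enc(x))
        return self.enc_mu(h), self.enc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps moves the randomness
        # into eps, so gradients flow through mu and logvar during training.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def decode(self, z):
        h = F.relu(self.dec_hidden(z))
        return torch.sigmoid(self.dec_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def elbo_loss(recon_x, x, mu, logvar):
    # Negative ELBO = reconstruction term + KL divergence from the prior N(0, I);
    # minimizing this loss maximizes the Evidence Lower Bound.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Once trained, new data can be generated by sampling z from the standard normal prior and passing it through `decode`; the KL term in the loss is what keeps the learned latent distribution close to that prior, so such samples land in well-populated regions of the feature landscape.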