Machine learning and artificial intelligence have seen the rise of diffusion models, generative models notable for producing high-quality samples in tasks such as data generation and image synthesis. A diffusion model defines a forward process that gradually corrupts data with noise until only a simple distribution (typically a Gaussian) remains, and learns a reverse denoising process that transforms that simple distribution back into the complex data distribution. Key examples include Denoising Diffusion Probabilistic Models (DDPMs) and Score-Based Generative Models (SGMs), both of which use stochastic processes to produce high-quality samples. These models excel in applications such as image denoising, inpainting, super-resolution, and text-to-video synthesis. Compared with traditional generative models such as GANs and VAEs, they offer high sample quality, stable training, and better mode coverage, avoiding the mode collapse that often affects GANs. Notable diffusion models for image generation include Stable Diffusion, DALL-E 2, Imagen, and GLIDE, each with distinct features and applications in creative and technical fields. Diffusion models have also proven effective on high-dimensional data and have been explored for privacy-preserving synthetic data generation, making them suitable for a wide range of tasks from image synthesis to text-to-video generation.
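
To make the forward (noising) half of this process concrete, the sketch below implements the closed-form DDPM corruption step, which samples a noised version of clean data x_0 at an arbitrary step t. This is a minimal illustration assuming a linear beta schedule; the names (`betas`, `alpha_bars`, `q_sample`) are illustrative rather than drawn from any particular library.

```python
import numpy as np

T = 1000                                    # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)             # cumulative signal retained by step t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

# Usage: progressively corrupting a toy "image".
rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=(32, 32))  # toy 32x32 image in [-1, 1]
x_mid = q_sample(x0, t=500, rng=rng)        # partially noised
x_end = q_sample(x0, t=T - 1, rng=rng)      # nearly pure Gaussian noise
```

Training a full diffusion model then amounts to learning a network that predicts the noise injected by a step like `q_sample`, so the reverse process can remove it incrementally; samplers in systems such as Stable Diffusion run that learned reverse process starting from pure noise.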