Experimenting with CLIP and VQGAN to Create AI Generated Art
Blog post from Roboflow
Brad Dwyer's blog post explores combining OpenAI's CLIP with VQGAN, a GAN-based image generator, to create AI-generated art. The experiments run in a Google Colab notebook and examine how parameter tuning shapes the image outputs. Dwyer revisits an earlier project, paint.wtf, in which CLIP judged human drawings against a prompt, and replaces the human artist with VQGAN to see how the generator's outputs compare against human drawings under the same CLIP judge.

The post details attempts to improve image quality by adjusting output resolution, initializing the optimization from a human drawing instead of noise, and using more descriptive prompts. Dwyer finds that the AI-generated images can outperform human artwork on CLIP's scoring metric, largely because the generator is optimized directly against that score and can effectively "collude" with the judge. The experiments also show that the overall composition of an image is locked in early in the optimization, and that prompt modifiers such as "unreal engine" did little to make the outputs match the prompt more closely.

Overall, the post highlights the potential of CLIP+VQGAN for creative experimentation and encourages further exploration to uncover new techniques and applications.
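For context on the scoring step described above, the sketch below shows how an image can be compared against a text prompt using OpenAI's CLIP library. This is a minimal illustration of CLIP similarity scoring under stated assumptions, not Dwyer's actual notebook code; the image path and prompt are placeholders.

```python
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image and prompt: in paint.wtf the image is a human drawing,
# while in the CLIP+VQGAN loop it is the generator's current output.
image = preprocess(Image.open("drawing.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a raccoon astronaut"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Cosine similarity between the image and prompt embeddings:
# higher means CLIP judges the image a better match for the prompt.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
score = (image_features @ text_features.T).item()
print(f"CLIP similarity: {score:.4f}")
```

The CLIP+VQGAN notebook essentially runs this comparison in a loop, nudging VQGAN's latent code by gradient descent so the score keeps rising, which is why the generator can "collude" with the judge in a way a human artist drawing blind to the metric cannot.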