
How vulnerable are vibe-coded apps?

Blog post from Bugcrowd

Post Details
Company: Bugcrowd
Author: Guest Post
Word Count: 1,876
Language: English
Summary

"Vibe coding," a term coined by OpenAI co-founder Andrej Karpathy, is the practice of having an AI system, typically a large language model (LLM), write code from plain-language descriptions of the desired functionality. The approach has gained popularity for its speed and ease of use and is reshaping software development, but it has drawbacks, particularly around security and the finite context window of LLMs, which can cause a model to "forget" parts of the codebase it wrote earlier.

An experiment comparing two LLMs, Claude and ChatGPT, on building a basic invoice app highlighted these weaknesses. Neither implemented essential security measures such as multi-factor authentication or rate limiting without explicit prompting. Claude's app was vulnerable to code injection and cross-site scripting (XSS), while ChatGPT's app contained an insecure direct object reference (IDOR) vulnerability. Despite being built on modern frameworks, both apps carried serious security risks, underscoring the need for vigilant review by knowledgeable developers.

Because limitations such as probabilistic output and context window constraints are inherent to LLMs, organizations integrating AI more deeply into their workflows need correspondingly stronger methods for detecting the vulnerabilities it introduces.