A developer's guide to prompt engineering and LLMs
Blog post from GitHub
Over a decade after Marc Andreessen's prediction that "software is eating the world," generative AI, particularly large language models (LLMs), is transforming technology with unprecedented speed. These models, capable of outperforming humans on specific tasks, are accessible even to those without advanced machine learning expertise.

GitHub's engagement with LLMs, illustrated through its GitHub Copilot tool, showcases how developers can leverage these models for various applications, including code completion. The article explains the fundamentals of LLMs, emphasizing their ability to predict text sequences based on extensive training datasets, while also acknowledging limitations such as fabricated or incorrect output, known as "hallucinations."

By discussing the intricacies of prompt engineering, the craft of assembling context-rich prompts to enhance model output, GitHub highlights how it has harnessed LLMs to improve Copilot's functionality. This process involves prioritizing relevant context during code completion, balancing speed and accuracy, and optimizing the generative model's responses to developer inputs.
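The context-prioritization idea described above can be sketched in a few lines: rank candidate context snippets by relevance, then pack as many as fit into a fixed token budget ahead of the developer's code. This is a minimal illustration, not GitHub Copilot's actual implementation; all names (`build_prompt`, `estimate_tokens`) and the 4-characters-per-token heuristic are assumptions made for the example.

```python
# Hypothetical sketch of context-prioritized prompt assembly for
# code completion. Names and heuristics are illustrative only.

def estimate_tokens(text: str) -> int:
    # Crude approximation: roughly 4 characters per token.
    return max(1, len(text) // 4)

def build_prompt(snippets, code_prefix, budget=2048):
    """Pack the highest-scoring snippets into the prompt,
    most relevant first, without exceeding the token budget.

    snippets: list of (relevance_score, text) pairs.
    """
    budget -= estimate_tokens(code_prefix)
    chosen = []
    for score, snippet in sorted(snippets, reverse=True):
        cost = estimate_tokens(snippet)
        if cost <= budget:
            chosen.append(snippet)
            budget -= cost
    # The developer's own code goes last, closest to the cursor.
    return "\n".join(chosen + [code_prefix])

snippets = [
    (0.9, "# From open tab: def parse_config(path): ..."),
    (0.4, "# From README: installation instructions ..."),
]
prompt = build_prompt(snippets, "def load_settings(", budget=64)
```

Under a tight budget, low-scoring snippets are dropped first, mirroring the trade-off the article describes between giving the model enough context and keeping completions fast.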