
Safeguarding VS Code against prompt injections

Blog post from GitHub

Post Details
Company: GitHub
Author: Michael Stepankin
Word Count: 2,874
Language: English
Summary

The Copilot Chat extension for VS Code has rapidly evolved to include an agent mode that combines multiple large language models (LLMs), tools, and MCP servers to extend its coding capabilities. This flexibility lets users pick specific tools to speed up development, but it also raises security concerns when external data, such as GitHub issues, is pulled into a chat session. A security assessment uncovered prompt injection vulnerabilities that could lead to data leaks, unauthorized code execution, or exposure of sensitive information without user consent. To mitigate these risks, GitHub introduced countermeasures such as requiring user confirmation for sensitive actions and isolating web content behind security policies. Complementary protections, including Workspace Trust and sandboxing with Dev Containers or GitHub Codespaces, restrict unsafe operations and provide further isolation. As LLMs advance, the goal is to preserve user control and transparency while minimizing the number of confirmations required, maintaining a robust defense against potential exploits.
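The confirmation-gate idea described above can be sketched in TypeScript. This is a minimal illustration, not the actual Copilot Chat implementation: the names (`ToolCall`, `SENSITIVE_TOOLS`, `runAgentTool`) and the specific tool list are assumptions made for the example. The key design point is that the confirmation callback is injected, so the policy can be exercised in tests while a real extension would surface a dialog to the user.

```typescript
// Hypothetical sketch of gating sensitive agent tool calls behind user
// confirmation. All identifiers here are illustrative, not a real VS Code API.

type ToolCall = { name: string; args: Record<string, string> };

// Tools that could leak data or execute code are treated as sensitive
// (an assumed example list, not Copilot Chat's actual classification).
const SENSITIVE_TOOLS = new Set(["runInTerminal", "fetchWebPage", "editFile"]);

function isSensitive(call: ToolCall): boolean {
  return SENSITIVE_TOOLS.has(call.name);
}

// The confirm and execute callbacks are injected: confirm asks the user for
// consent, execute performs the tool's action once the call is approved.
async function runAgentTool(
  call: ToolCall,
  confirm: (call: ToolCall) => Promise<boolean>,
  execute: (call: ToolCall) => Promise<string>
): Promise<string> {
  if (isSensitive(call) && !(await confirm(call))) {
    // A prompt-injected request to run a sensitive tool stops here
    // unless the user explicitly approves it.
    return `blocked: ${call.name} requires user consent`;
  }
  return execute(call);
}
```

Under this scheme, a GitHub issue that tries to trick the agent into running `runInTerminal` still produces a visible confirmation prompt, while non-sensitive tools proceed without interrupting the user.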