AI Agents Vulnerable to 'Comment and Control' Prompt Injection
A new AI attack method, dubbed ‘Comment and Control,’ has been detailed by a researcher, according to SecurityWeek. This technique exploits vulnerabilities in leading AI models and tools, including Claude Code, Gemini CLI, and GitHub Copilot Agents, through the seemingly innocuous medium of comments.
The core of the ‘Comment and Control’ attack lies in its ability to inject malicious prompts into AI models via code comments or other metadata that these models process during their operations. By manipulating how AI interprets these comments, an attacker can coerce the AI into performing unintended actions, ranging from code generation that introduces backdoors to data exfiltration or even altering the AI’s behavior in critical applications. This is a classic prompt injection, just with a new vector.
SecurityWeek’s report underscores that this isn’t just a theoretical concern. It highlights a significant oversight in how these AI tools parse and sanitize input, especially within development environments where comments are a ubiquitous part of the workflow. The implications for secure software development and AI-powered automation are considerable, as these tools are increasingly integrated into critical infrastructure and enterprise operations.
What This Means For You
- If your development teams are leveraging AI coding assistants like GitHub Copilot, Claude Code, or Gemini CLI, you need to understand this 'Comment and Control' prompt injection vector. Immediately review your policies for AI tool usage, especially concerning external or untrusted code snippets. Ensure your development workflows include robust validation of all AI-generated output and consider sandboxing AI agent environments to mitigate potential misuse.
Related ATT&CK Techniques
🛡️ Detection Rules
4 rules · 6 SIEM formats4 detection rules auto-generated for this incident, mapped to MITRE ATT&CK. Sigma YAML is free — export to any SIEM format via the Intel Bot.
AI 'Comment and Control' Prompt Injection via Code Comments
Indicators of Compromise
| ID | Type | Indicator |
|---|---|---|
| SecurityWeek-Prompt-Injection | Prompt Injection | Claude Code |
| SecurityWeek-Prompt-Injection | Prompt Injection | Gemini CLI |
| SecurityWeek-Prompt-Injection | Prompt Injection | GitHub Copilot Agents |
| SecurityWeek-Prompt-Injection | Prompt Injection | Attack method: 'Comment and Control' |