Prompt injection
How hidden instructions trick coding agents into unsafe work.
A prompt injection is a trick note for an AI tool.
Model context can carry hostile instructions from files, pages, issues, or package metadata into privileged tool calls.
HOL Guard turns these moments into private receipts first, then public lessons only after redaction and moderation.
Harness setup guides
Protect the coding tools your team already uses without forcing everyone to become a security expert.
harness
Codex
Terminal-native coding agent with broad shell reach.
Open guideharness
Claude Code
Agentic coding harness with MCP and file access.
Open guideharness
GitHub Copilot
IDE and CLI assistant across code and terminal flows.
Open guideharness
Cursor
AI-first IDE with repo and terminal context.
Open guideRedacted warnings
Real protection moments, scrubbed for safety before becoming public learning pages.
Safe labs
Practice attack patterns with static simulations. Nothing dangerous executes.