Threat explainer

Prompt injection

How hidden instructions trick coding agents into unsafe work.

A prompt injection is a trick note for an AI tool.

Model context can carry hostile instructions from files, pages, issues, or package metadata into privileged tool calls.

HOL Guard turns these moments into private receipts first, then public lessons only after redaction and moderation.

Harness setup guides

Protect the coding tools your team already uses without forcing everyone to become a security expert.

Redacted warnings

Real protection moments, scrubbed for safety before becoming public learning pages.

Safe labs

Practice attack patterns with static simulations. Nothing dangerous executes.