TrustSkills Blogs
Field notes for the AI agent security frontier
Every post on this page is written to support a security decision. We cite the reporting, standards, and vendor documentation we relied on so readers can validate the analysis themselves.
ClawJacked shows why localhost is not a security boundary
The ClawJacked disclosure is a strong reminder that a local gateway is still reachable from a malicious browser tab if the surrounding trust model is weak.
8 best practices before you install an AI agent skill
Installing an AI skill is not like installing a harmless theme. You are often extending a control plane that can read data, reach services, and trigger real actions on your behalf.
What is prompt injection?
Prompt injection is not just a clever string. It is any input that changes a model's behavior in a way the system designer did not intend, especially when the model can reach tools, data, and accounts.
The OpenClaw inbox incident is a security lesson, not a meme
The reported OpenClaw inbox wipe did not just expose model unreliability. It showed why approvals, identity separation, and destructive-action controls must live outside the prompt.