Autonomy demands Accountability
The APAAI Protocol defines the accountability loop for agentic AI — Action → Policy → Evidence.
Modern agents execute code, move funds, and publish content. APAAI ensures those actions carry a verifiable record of why and how they occurred.
Why APAAI exists
Existing agent frameworks execute actions without consistent accountability. APAAI introduces a common record schema and policy interface that bridges intent, execution, and evidence across models and platforms.
The core loop
Action → Policy → Evidence
• Action — structured intent (type, actor, target, params, timestamp)
• Policy — constraints (enforce or observe); may require approval
• Evidence — attestable outcomes (checks, artifacts, signatures)
The protocol does not prescribe storage, identity, or cryptography; it defines the record shape so that implementations can vary while remaining interoperable.
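As a minimal sketch of what those record shapes might look like in a TypeScript SDK, consider the interfaces below. Only the fields named above come from the protocol text; everything else (identifiers, decision values, field types) is an illustrative assumption, not part of the specification.

```ts
// Illustrative shapes for the Action → Policy → Evidence loop.
// Fields beyond those listed in the spec text are assumptions for this sketch.

interface Action {
  type: string;                       // e.g. "payments.transfer", "code.execute"
  actor: string;                      // the agent (or principal) taking the action
  target: string;                     // the resource the action operates on
  params: Record<string, unknown>;    // structured arguments for the action
  timestamp: string;                  // ISO-8601 time the intent was declared
}

interface PolicyResult {
  mode: "enforce" | "observe";        // block violations vs. record them only
  decision: "allow" | "deny" | "needs_approval";
  approver?: string;                  // set when a human reviewed the action
}

interface Evidence {
  checks: Array<{ name: string; passed: boolean }>;  // outcomes of policy checks
  artifacts: string[];                // references to logs, diffs, receipts, etc.
  signatures: string[];               // attestations over the record
}

// One accountability record per action, however it is stored or transported.
interface ApaaiRecord {
  action: Action;
  policy: PolicyResult;
  evidence: Evidence;
}
```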
Principles
- Neutrality — independent of model provider or runtime.
- Transparency — records explain why and how, not just what.
- Human Oversight — review/approval as first-class policy.
- Interoperability — minimal schema; extensible via RFCs.
- Verifiability — evidence others can independently validate.
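To make the verifiability principle concrete, one possible approach (the protocol does not mandate any particular cryptography or canonicalization scheme) is to sign a canonical serialization of each evidence record so a third party can check it later. The sketch below uses Node's built-in crypto module; the payload fields and identifier format are hypothetical.

```ts
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Canonical serialization: sort object keys so producer and verifier hash the
// same bytes. Illustrative only; APAAI does not prescribe a canonical form.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  if (value !== null && typeof value === "object") {
    const entries = Object.keys(value)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalize((value as Record<string, unknown>)[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

// Hypothetical evidence payload produced after an action ran.
const evidence = {
  action_id: "act_123",            // assumed identifier format
  checks: [{ name: "budget_limit", passed: true }],
  artifacts: ["sha256:9f86d08..."],
  timestamp: "2024-05-01T12:00:00Z",
};

// The executing side signs the canonical record...
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const payload = Buffer.from(canonicalize(evidence));
const signature = sign(null, payload, privateKey);

// ...and any party holding the public key can validate it independently.
const digest = createHash("sha256").update(payload).digest("hex");
const ok = verify(null, payload, publicKey, signature);
console.log({ digest, verified: ok });
```

Publishing the public key alongside the record lets anyone re-run the same check without trusting the agent that executed the action.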
Governance
APAAI defines accountability primitives; governance is intentionally out of scope. The protocol enables governance by ensuring consistent per-action records of intent, applied policy, and evidence. apaAI Labs stewards the reference server, SDKs, and the RFC process. Changes follow public discussion and semantic versioning.
Non-goals
- Not a full agent framework or LLM runtime.
- Not a centralized trust authority or walled garden.
- Not tied to a specific storage backend, identity system, or cloud.
Licensing
Code & reference implementations: Apache-2.0
Specification: CC BY 4.0
Participate
If your systems take action, they should leave a trail. Adopt the primitives, propose improvements, and contribute integrations.