aevs/v0.1.0
AEVS is currently in beta — things may change as we iterate.
AGENT EXECUTION VERIFICATION SYSTEM · BUILT AT FETCH.AI

Every tool call,
a signed receipt. A tamper-proof record of everything
your AI agent does — that anyone can check.

AEVS keeps a receipt every time your AI agent takes an action — sending an email, charging a card, updating a record. The receipts can't be faked, and anyone you choose can verify them later. Two lines of code to add. No changes to your existing tools.

$ pip install aevs
RECEIPT ref d4e5f6a7…
tool           approve_payment
inputs         { amount: 142.00, to: "acme" }
status         success · 450ms
session_id     5db7d195-f84c-4f90-…
prev_hash      a1b2c3d4 e5f6a7b8
payload_hmac   c4a8e2f1 b9d3…
framework      langchain 0.3.1
ended_at       2026-04-28T10:00:00.450Z
HMAC SEALED · SDK 0.1.0
§ 01 · THE PROBLEM

Logs lie. Receipts don't.

Your agent says it processed a refund for user_91. Did it really? With the right amount? In the right order? Logs are easy to edit. Receipts aren't.

Before AEVS

STATUS · Take its word for it

"The agent says it ran the refund. We think it did. The log says yes."

  • Logs can be edited, dropped, or hallucinated by the model itself.
  • No cryptographic chain — silent gaps go unnoticed.
  • Reviewers re-run tools by hand to confirm what happened.
  • Compliance team sends you a Friday email. You sigh.
With AEVS

STATUS · Proven, not promised

"Receipt #000042. Signature checks out. Nothing missing in between. Move on."

  • Every call hash-anchored to the previous — break one, break all.
  • HMAC at build, HMAC in transit, hash-chain integrity end-to-end. KMS signing on the roadmap.
  • Auditors verify in seconds. No re-execution required.
  • Backend down? Local SQLite buffer keeps the chain. Background drainer flushes on a configurable interval.
§ 02 · HOW IT WORKS

Catch it. Sign it. Prove it.

Three steps. Your agent and your tools don't notice a thing. Works out of the box with the most common agent frameworks; a small adapter handles the rest.

01 / CATCH

Sit quietly between the agent and its tools.

AEVS quietly listens whenever your agent uses a tool — capturing what was sent, what came back, how long it took, and any errors. Your tools stay exactly the same.
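The catch step above can be sketched as a plain wrapper around any tool function. The names here (`record_receipt`, `sink`) are hypothetical, not the SDK's actual API; this just shows how a call can be captured without touching the tool itself.

```python
import functools
import time

def record_receipt(tool_fn, sink):
    """Wrap a tool so each call is captured as a receipt dict.
    `sink` is any callable that stores the dict (hypothetical name)."""
    @functools.wraps(tool_fn)
    def wrapper(*args, **kwargs):
        started = time.time()
        result, error = None, None
        try:
            result = tool_fn(*args, **kwargs)
            return result
        except Exception as exc:
            error = repr(exc)
            raise
        finally:
            # runs on success and on failure, so errors are recorded too
            sink({
                "tool_name": tool_fn.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": None if error else result,
                "error": error,
                "duration_ms": round((time.time() - started) * 1000, 1),
            })
    return wrapper

def approve_payment(amount, to):
    return {"status": "success"}

receipts = []
wrapped = record_receipt(approve_payment, receipts.append)
out = wrapped(142.00, to="acme")
print(receipts[0]["tool_name"], receipts[0]["status" if False else "error"])
```

The tool's return value passes through unchanged, which is why the agent and the tool "don't notice a thing."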

02 / SIGN

Lock each receipt to the one before it.

Every receipt is fingerprinted and tied to the previous one — like links in a chain. Change any receipt and the rest stop adding up. Three layers of protection (an HMAC at build, an HMAC in transit, and the hash chain itself) make any tampering immediately evident.
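The sealing idea can be sketched in a few lines. Field names, the demo key, and the linking rule here are illustrative, not the actual AEVS wire format or key handling:

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # illustrative only; the real SDK manages its own keys

def seal(receipt: dict, prev_hash: str) -> dict:
    """Fingerprint a receipt and anchor it to the previous one."""
    body = dict(receipt, prev_hash=prev_hash)
    payload = json.dumps(body, sort_keys=True).encode()
    body["payload_hmac"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return body

def hmac_ok(sealed: dict) -> bool:
    """Recompute the HMAC and compare against the stored one."""
    body = {k: v for k, v in sealed.items() if k != "payload_hmac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["payload_hmac"])

# build a two-link chain, then tamper with the first receipt
chain, prev = [], "0" * 64
for call in ({"tool": "approve_payment"}, {"tool": "send_email"}):
    sealed = seal(call, prev)
    prev = hashlib.sha256(sealed["payload_hmac"].encode()).hexdigest()
    chain.append(sealed)

chain[0]["tool"] = "approve_refund"          # edit one field...
print(hmac_ok(chain[0]), hmac_ok(chain[1]))  # False True (the edit is caught)
```

Because each receipt's `prev_hash` is derived from the one before it, an edit also invalidates every later link, not just the edited receipt.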

03 / VERIFY

Anyone can check. Anytime.

Paste a receipt ID into the public explorer. It checks the signature, makes sure none are missing, and confirms the chain is intact — in seconds. No login. No special tools.
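The "nothing missing in between" check a verifier performs can be sketched like this. The receipts and the linking rule (each `prev_hash` equals the SHA-256 of the previous receipt's `payload_hmac`) are toy assumptions; the explorer's actual scheme may differ.

```python
import hashlib

def link(payload_hmac: str) -> str:
    return hashlib.sha256(payload_hmac.encode()).hexdigest()

def verify_chain(receipts):
    """Walk the chain; return (ok, index_of_first_broken_link)."""
    prev = "0" * 64  # genesis value
    for i, r in enumerate(receipts):
        if r["prev_hash"] != prev:
            return False, i  # broken or missing link at position i
        prev = link(r["payload_hmac"])
    return True, None

# build a toy chain of three receipts
receipts, prev = [], "0" * 64
for n in range(3):
    r = {"prev_hash": prev, "payload_hmac": f"hmac-{n}"}
    prev = link(r["payload_hmac"])
    receipts.append(r)

print(verify_chain(receipts))  # (True, None)
del receipts[1]                # a silent gap...
print(verify_chain(receipts))  # (False, 1): the gap is detected
```

Dropping a receipt breaks the link just as surely as editing one, which is why silent gaps cannot go unnoticed.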

§ 03 · FOR DEVELOPERS

Sixty seconds
from install to signed.

Install the package, add two lines, and run your agent like normal. Receipts are written instantly to a local buffer and sent in the background — your agent never waits on the network.
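The local-buffer idea can be sketched with SQLite. `LocalBuffer` and its schema are hypothetical, not the SDK's internals; the point is that writes are durable before the agent moves on, and a background drainer flushes later.

```python
import json
import sqlite3

class LocalBuffer:
    """Crash-safe local receipt buffer (a sketch of the idea).
    In the real system a background drainer would call drain() on a
    configurable interval and POST batches to the backend."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS receipts "
            "(id INTEGER PRIMARY KEY, body TEXT, sent INTEGER DEFAULT 0)")

    def write(self, receipt: dict):
        self.db.execute("INSERT INTO receipts (body) VALUES (?)",
                        (json.dumps(receipt),))
        self.db.commit()  # durable before the agent continues

    def drain(self, send) -> int:
        """Flush unsent receipts via `send`; returns how many were flushed."""
        rows = self.db.execute(
            "SELECT id, body FROM receipts WHERE sent = 0").fetchall()
        for row_id, body in rows:
            send(json.loads(body))
            self.db.execute("UPDATE receipts SET sent = 1 WHERE id = ?",
                            (row_id,))
        self.db.commit()
        return len(rows)

buf = LocalBuffer()
buf.write({"tool": "approve_payment"})
sent = []
print(buf.drain(sent.append))  # 1
print(buf.drain(sent.append))  # 0 (nothing left to flush)
```

Writing synchronously to disk and sending asynchronously is what keeps the agent from ever waiting on the network.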

The whole
integration.

AEVS automatically detects the agent frameworks you already use. Don't change your tools. Don't change your prompts. Don't change your model. Just get receipts.

  • LangChain 0.2+ · async + sync
  • MCP 1.20+ · stdio & http
  • Python 3.10 – 3.13 · py.typed
  • Open source fetchai/AEVS-sdk
$ pip install aevs

# with framework adapters
$ pip install "aevs[langchain,mcp]"

$ python -c "import aevs; print(aevs.__version__)"
0.1.0
import os
import aevs
from dotenv import load_dotenv

load_dotenv()  # reads ./.env into os.environ

aevs.configure(
    api_key=os.environ["AEVS_API_KEY"],
    agent_id=os.environ["AEVS_AGENT_ID"],
)
aevs.enable()  # auto-detects langchain, mcp

# run your agent normally — every tool call is
# intercepted, HMAC-signed, and hash-chained.

for r in aevs.get_reference_ids(clear=True):
    print(r["tool_name"], "→", r["reference_id"])
import os
import aevs
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent

load_dotenv()  # reads ./.env into os.environ

aevs.configure(
    api_key=os.environ["AEVS_API_KEY"],
    agent_id=os.environ["AEVS_AGENT_ID"],
)
aevs.enable(frameworks=["langchain"])

agent = create_agent(
    model=ChatOpenAI(model="gpt-4o"),
    tools=[],  # pass your tools here
)
# inside an async entrypoint, e.g. asyncio.run(main())
await agent.ainvoke({"messages": [
    ("user", "pay $142 to acme for inv 91"),
]})
aevs.flush()  # drain the local buffer before exit
# Public verify endpoint — no auth required
$ curl https://api.aevs.fetch.ai/v1/receipts/verify/<reference_id>

{
  "data": {
    "verified":      true,
    "reference_id":  "d4e5f6a7-b8c9-…",
    "receipt_id":    "7c1e2a90-3b4d-…",
    "tool_name":     "approve_payment",
    "agent_id":      "3b9a4d52-9c01-4e8f-9c11-…",
    "status":        "success",
    "input_hash":    "a1b2c3d4e5f6…",
    "output_hash":   "c4a8e2f1b9d3…",
    "in_timestamp":  "2026-04-28T10:00:00.000Z",
    "out_timestamp": "2026-04-28T10:00:00.450Z"
  }
}
§ 04 · WHO IT'S FOR

One SDK. Three teams who sleep better.

FOR · ENGINEERING

Ship agents you can actually debug.

Look up any tool call by its receipt ID. Compare runs side by side. Find exactly where things went wrong. Stop guessing what your agent did at 3am.

  • Runs in the background — no slowdown
  • Local buffer survives crashes
  • Look up any call by reference_id, run_id, or session_id
FOR · COMPLIANCE

Audit trails that pass the auditor.

Send a regulator one link — they can confirm everything for themselves. The receipts are signed and chained, so any edits or missing entries are immediately visible.

  • Tamper-evident by design: HMAC-sealed, hash-chained receipts
  • Open, documented receipt format
  • Filter receipts by agent, tool, run, or session
FOR · LEADERSHIP

Run agents for real. Without praying.

See what your agents actually do, in plain numbers. Catch problems early. Approve big deployments because you have proof — not because someone said it was probably fine.

  • Per-agent activity and receipt history
  • Alerts when behavior shifts (planned)
  • Open source

"If your agent can spend money, send email, or delete a row — it should leave a receipt you can check."

— THE AEVS TEAM · FETCH.AI · 2026