Use case

Regulated evidence for AI workflows.

An outside auditor — a regulator, a customer’s counsel, your own internal review — will increasingly want byte-identical evidence of what your AI system was running under, what it produced, and when. Satsignal anchors that evidence to a public chain, so the auditor can verify it hasn’t been edited since the run — without trusting your platform, your vendor, or us.

Frame. Satsignal is one piece of an evidence stack. Whether your full controls posture satisfies a given regulator is a judgment call only you and your counsel can make. This page does not claim Satsignal makes you compliant; it describes which obligations the receipts honestly help with.

01 · Three things a regulator typically wants evidenced

Configuration, authorization, and what came out.

Across the regimes below, the recurring auditor questions land on three artifacts. Each maps to a Satsignal primitive that is live on the API today; each verifies independently in any browser against any public block explorer.

1 · The operating policy at decision time

What system prompt, user instruction, tool permissions, budget caps, and model config was the agent running under? A policy_snapshot hashes those five components, anchors the snapshot, and lets an auditor who holds any one component verify it against the snapshot without seeing the other four.

Policy snapshots →
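As a sketch of the mechanism only — the component bytes, field names, and serialization below are illustrative assumptions, not the canonicalization `policy_snapshot.py` actually uses — per-component hashing plus a snapshot hash over all five looks like this in stdlib Python:

```python
import hashlib
import json

def hash_component(data: bytes) -> str:
    """SHA-256 over one policy component's bytes (sketch)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical components; real canonicalization is defined by
# policy_snapshot.py and is not reproduced here.
components = {
    "system_policy_hash":    hash_component(b"You are a claims reviewer."),
    "user_instruction_hash": hash_component(b"review this filing"),
    "tool_permissions_hash": hash_component(
        json.dumps({"tools": ["search"]}, sort_keys=True).encode()),
    "budget_limits_hash":    hash_component(
        json.dumps({"max_usd": 5}, sort_keys=True).encode()),
    "model_config_hash":     hash_component(
        json.dumps({"model": "m-1", "temperature": 0}, sort_keys=True).encode()),
}

# The snapshot hash commits to all five component hashes at once. An
# auditor holding one component's plaintext recomputes that component's
# hash, matches it against the snapshot, and never sees the other four.
snapshot_bytes = json.dumps(components, sort_keys=True).encode()
snapshot_hash = hashlib.sha256(snapshot_bytes).hexdigest()
print(snapshot_hash)
```

The one-way property of SHA-256 is what makes the selective part work: each component hash reveals nothing about the other components' plaintexts.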

2 · The authorization that triggered an action

What decision, score, or instruction did the agent commit to before the result was visible? A commitment anchors the hash now; the payload is revealed later. With commit-then-reveal under a 32-byte nonce, even low-entropy decisions stay unguessable until the reveal.

Commit-reveal →
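The nonce arithmetic is the standard commit-then-reveal construction; Satsignal's exact commitment encoding may differ, so treat this stdlib sketch as the general pattern rather than the wire format:

```python
import hashlib
import secrets

def commit(payload: bytes) -> tuple[str, bytes]:
    """Anchor sha256(nonce || payload) now; reveal nonce + payload later.
    The 32-byte random nonce is what keeps a low-entropy payload (e.g.
    "approve" vs "deny") safe from brute-force guessing before reveal."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + payload).hexdigest()
    return digest, nonce

def verify_reveal(digest: str, nonce: bytes, payload: bytes) -> bool:
    """Auditor side: recompute the hash from the revealed parts."""
    return hashlib.sha256(nonce + payload).hexdigest() == digest

digest, nonce = commit(b"approve claim #4471")
assert verify_reveal(digest, nonce, b"approve claim #4471")
assert not verify_reveal(digest, nonce, b"deny claim #4471")
```

Without the nonce, an auditor (or anyone watching the chain) could enumerate the handful of plausible decisions and match hashes; with it, the commitment is binding but not revealing.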

3 · The evidence the agent used or produced

Up to 10,000 items — tool-call logs, retrieved documents, intermediate outputs, evaluation rows — Merkle-batched into one on-chain receipt. Selective disclosure: hand a single item to an auditor with its inclusion path; the other 9,999 stay private.

Manifest receipts →
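Selective disclosure rests on an ordinary Merkle inclusion proof. The tree convention below (SHA-256, odd nodes promoted a level) is one common choice and an assumption on our part — the actual padding and domain-separation rules are whatever the Satsignal API defines — but it shows why handing over one item plus its sibling path is enough to tie it to the on-chain root:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def _next_level(level):
    """Pair adjacent nodes; promote a lone trailing node unchanged."""
    nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
    if len(level) % 2:
        nxt.append(level[-1])
    return nxt

def merkle_root(leaves):
    level = leaves
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def inclusion_path(leaves, idx):
    """Sibling hashes needed to recompute the root from leaf idx."""
    path, level = [], leaves
    while len(level) > 1:
        sib = idx ^ 1
        if sib < len(level):
            path.append((sib < idx, level[sib]))  # (sibling is on the left?, hash)
        level, idx = _next_level(level), idx // 2
    return path

def verify_path(leaf, path, root):
    """Auditor side: fold the disclosed item's hash up to the root."""
    node = leaf
    for is_left, sib in path:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root
```

For 10,000 leaves the path is only 14 hashes, so the receipt an auditor needs is a few hundred bytes regardless of how large the undisclosed remainder is.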

None of these three artifacts answers the harder questions an auditor will also ask — was the model unbiased, was the training data lawful, was the human-in-the-loop adequate. Those questions need their own evidence and their own controls. Satsignal’s contribution is narrower: when the answers exist, it lets a third party verify them without trusting your platform.

02 · Regulatory context

Which obligations these primitives honestly help with.

Three live regimes — one EU, one US financial, one US federal — each push deployers toward keeping verifiable artifacts about their AI systems. What follows is a brief, honest read of where Satsignal’s primitives fit.

EU AI Act — Article 12 (logging) and Article 26 (deployer duties). Until 7 May 2026, the deployer obligations for Annex III high-risk systems were due to become operational on 2 August 2026. On 7 May 2026 the Council, Parliament, and Commission reached a provisional political agreement on the Digital Omnibus on AI: standalone high-risk systems are now expected to be deferred to December 2027, and high-risk systems embedded into products to August 2028. Final adoption (Council + Parliament plenary) is still pending as we write this; treat the new dates as the working assumption, but verify with your own counsel before any compliance decision. Article 26(5)–(6) requires deployers to monitor operation and to retain automatically generated logs for a period appropriate to the system’s intended purpose, and for at least six months. The canonical hash of those logs — whatever format you keep them in — can be anchored as a Satsignal evidence_bundle when the run ends, so an auditor can later verify that the file you produce matches the bytes that existed at retention time. Satsignal does not satisfy Article 27 (fundamental-rights impact assessment), Article 14 (human oversight), or the deployer’s Article 26 classification rationale; the receipt is one artifact in that stack, not the whole stack. Background reading: Council press release on the 7 May 2026 political agreement (official source); Computing.co.uk summary of the new dates; IAPP on deployer evidence gaps (written before the deferral; obligation structure still accurate, deadline now superseded); Raconteur technical audit guide (same caveat).
FINRA 2026 Annual Regulatory Oversight Report — GenAI under Rule 3110. The report notes that existing FINRA rules continue to apply when firms deploy generative or agentic AI, and recommends, as considerations under Rule 3110 (Supervision), ongoing monitoring of prompts, responses, and outputs; storing prompt and output logs for accountability and troubleshooting; and tracking which model version was used and when. These are framed as supervisory considerations rather than new mandates — but if your supervisory system relies on those logs and version histories, the question of whether the record can be edited after the fact becomes its own integrity question. A Satsignal policy_snapshot binds the model + config in force at run time; an evidence_bundle binds the prompt/response logs. Whether that integrity layer is required, or merely useful, is a call for your compliance team and counsel. Source: FINRA 2026 report — GenAI section.
OMB Memorandum M-26-04 — federal AI accountability. Issued 11 December 2025, M-26-04 implements Executive Order 14319 and applies to LLMs procured by US executive agencies. Its core requirements are about unbiased AI principles — truth-seeking and ideological neutrality — and about ongoing evaluation of model behavior, safeguards, and supply-chain modifications. Vendors must supply acceptable use policies and model / system / data cards. Satsignal does not produce those documents; it does not assess bias, neutrality, or training data. What it does help with is the “ongoing evaluation” surface: when an agency runs an evaluation against a procured LLM, anchoring the evaluation policy snapshot before the run and the result manifest after gives a later auditor a way to verify the evaluation artifacts haven’t been edited since. The memo’s substantive requirements remain the agency’s and the vendor’s to meet. Source: M-26-04 memorandum (PDF, whitehouse.gov).

All three regimes share a structural pattern: the deployer (or procuring agency) holds the obligation, and the obligation is defended with documentation. Satsignal makes one specific property of that documentation independently verifiable — the property that the bytes haven’t changed since the run.
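On Satsignal’s side, the mechanical step shared by all three regimes is small: compute the hash and size of the retained file over exactly the bytes you will later produce. A minimal streaming sketch — the field names mirror the /api/v1/anchors body used in section 03, and the chunked read means a large retained log never has to fit in memory:

```python
import hashlib

def anchor_fields(path: str, chunk: int = 1 << 20) -> dict:
    """Return the sha256_hex and file_size of a retained log file,
    computed in a single streaming pass over the file's exact bytes."""
    sha = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            sha.update(block)
            size += len(block)
    return {"sha256_hex": sha.hexdigest(), "file_size": size}
```

Whatever log format you retain, anchor the exact bytes you will hand an auditor later; re-serializing the log afterwards (pretty-printing, re-encoding) produces a different hash and a failed verification.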

03 · A 30-line example

Anchor a policy snapshot before the agent acts.

The most common opening move: hash the five components of the operating policy, build a snapshot, anchor its sha256 via POST /api/v1/anchors with category: "policy_snapshot". The policy_snapshot.py helper is stdlib-only; no Satsignal SDK to install.

curl -O https://satsignal.cloud/policy_snapshot.py

# 1. Hash the five components of the operating policy. Each command
#    prints {"sha256_hex": "..."}.
SYS=$(python3 policy_snapshot.py hash-component --file system_prompt.txt | jq -r .sha256_hex)
USR=$(python3 policy_snapshot.py hash-component --text "review this filing" | jq -r .sha256_hex)
TLS=$(python3 policy_snapshot.py hash-component --json-file tools.json    | jq -r .sha256_hex)
BUD=$(python3 policy_snapshot.py hash-component --json-string '{"max_usd":5}' | jq -r .sha256_hex)
MOD=$(python3 policy_snapshot.py hash-component --json-file model_cfg.json | jq -r .sha256_hex)

# 2. Build the snapshot. Produces snapshot.json with anchor.sha256_hex
#    and anchor.file_size ready for /api/v1/anchors.
python3 policy_snapshot.py build \
    --agent-name claims-reviewer \
    --agent-version 2026-05-09 \
    --system-policy-hash    $SYS \
    --user-instruction-hash $USR \
    --tool-permissions-hash $TLS \
    --budget-limits-hash    $BUD \
    --model-config-hash     $MOD \
    --out snapshot.json

# 3. Anchor on chain.
SHA=$(jq -r .anchor.sha256_hex snapshot.json)
SIZE=$(jq -r .anchor.file_size snapshot.json)
curl -H "Authorization: Bearer sk_..." \
     -H "Content-Type: application/json" \
     -d "{\"matter_slug\":\"agent-runs-prod\",\"sha256_hex\":\"$SHA\", \
          \"file_size\":$SIZE,\"category\":\"policy_snapshot\", \
          \"label\":\"claims-reviewer policy $(date -u +%FT%TZ)\"}" \
     https://app.satsignal.cloud/api/v1/anchors

# 4. Auditor side, later: verify one component without seeing the others.
python3 policy_snapshot.py verify \
    --snapshot snapshot.json \
    --system-policy-file system_prompt.txt
# {"verified": true, "matched": ["system_policy_hash"]}

Pair the snapshot with an evidence_bundle after the run (the prompts, the model outputs, any retrieved documents) and an auditor has both ends of the run bound to the chain. The Agent Evaluation demo walks the full flow end-to-end with real on-chain receipts.
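A hypothetical shape for that post-run bundle — the item names, run_id, and manifest layout here are illustrative assumptions, not the API’s actual evidence_bundle schema — showing the “hash each item, then hash the manifest” pattern:

```python
import hashlib
import json

def sha256_hex(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

# Hypothetical run artifacts collected after the agent finishes.
items = {
    "prompt.txt":        b"review this filing",
    "output.txt":        b"claim approved with conditions",
    "retrieved_doc.pdf": b"%PDF-1.7 ...",
}

# The manifest records one hash per item; the bundle hash commits to
# the whole manifest. Anchoring bundle_hash binds every item at once,
# while any single item can still be disclosed and checked on its own.
manifest = {
    "run_id": "claims-reviewer/2026-05-09/0042",  # hypothetical identifier
    "items": {name: sha256_hex(data) for name, data in items.items()},
}
bundle_bytes = json.dumps(manifest, sort_keys=True).encode()
bundle_hash = sha256_hex(bundle_bytes)  # the value you would anchor
```

Deterministic serialization (here, `sort_keys=True`) matters: the auditor must be able to rebuild byte-identical manifest bytes from the disclosed manifest to reproduce the anchored hash.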

04 · What this page does not say

The honest line between “evidence” and “compliance.”

This page is not legal advice and Satsignal is not a compliance service. Specifically:

  • Whether you are EU AI Act compliant, FINRA compliant, or aligned with OMB M-26-04 depends on your full controls posture — risk assessment, governance, human oversight, vendor diligence, training data provenance, incident response, and several other workstreams that have nothing to do with anchoring a hash.
  • Satsignal does not produce model bias reports, model cards, fundamental-rights impact assessments, training data lineage, or human-oversight documentation. Those remain the deployer’s or vendor’s job.
  • A receipt confirms that a specific snapshot, commitment, or evidence bundle existed at a specific moment. It does not, by itself, establish that the bundle is complete, that the snapshot is what the agent was actually running, or that the underlying behavior was lawful.
  • Citing the regimes above is a description of where the primitives plausibly help, not a representation that Satsignal has been certified, audited, or accepted as evidence in any specific proceeding.

What Satsignal supplies is one verifiable artifact in your stack: a third party can re-hash the payload, walk the Merkle path if needed, and check the on-chain transaction in any block explorer — without trusting Satsignal, your platform, or your vendor. That property is useful in many regulatory conversations. It is not a substitute for any of the others.

Working on something where this would help? Mail hello@satsignal.cloud — we read every email and prefer concrete threat models to abstract pitches.