
I Want the Final Say

When it really matters, a human should always have the last word.

When AI influences credit, insurance, jobs, or access to services, an automated score without human final authority is not enough. AP-2.1 draws the line: AI recommends, humans decide. [1] [2]

What This Means

This policy is simple: in high-impact decisions, humans decide last. AI may rank, recommend, and flag risk, but it must not become the final authority over jobs, credit, health, education, or public services.

A Real-World Scenario

A job applicant is auto-rejected because a career gap is scored as a risk signal. In many systems today, that automated rejection stands even when the gap was family caregiving. With AP-2.1, a human reviewer must examine the context before any final decision. Without that review, an incorrect model judgment becomes a practical dead end.

Why It Matters to You

People need a real path to correction in consequential decisions. If the final layer is an opaque model, errors scale while appeals lose force. AP-2.1 protects both fairness and practical recoverability when systems get it wrong. [1] [3]

If We Do Nothing...

If we do nothing, automated finality becomes the norm across daily life. With agentic and, eventually, AGI-like systems, humans risk becoming nominal sign-off points while real authority moves to automation. AP-2.1 prevents that authority drift. [1] [3]

For the technically inclined

AP-2.1: Human Final Decision

Humans retain final authority over consequential decisions. AI systems should provide recommendations, not autonomous determinations, in high-stakes domains.
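As a rough sketch of how "AI recommends, humans decide" can be enforced in software, the TypeScript fragment below keeps the model's output and the human's decision as separate records and refuses to finalize a case without the latter. The names used here (AiRecommendation, HumanDecision, finalizeCase) are illustrative assumptions for this page, not part of the AP-2.1 specification or the AIPolicy registry.

// Hypothetical sketch only; type and function names are illustrative,
// not defined by AP-2.1 or the AIPolicy web standard.

// What the model is allowed to produce: a recommendation, never a verdict.
interface AiRecommendation {
  subjectId: string;
  suggestedOutcome: "approve" | "deny";
  riskFlags: string[];      // e.g. "employment gap" -- context the reviewer must weigh
  modelVersion: string;
}

// What the system records before any outcome takes effect.
interface HumanDecision {
  reviewerId: string;       // the accountable human
  outcome: "approve" | "deny";
  rationale: string;        // written justification, available on appeal
  decidedAt: Date;
}

interface FinalizedCase {
  recommendation: AiRecommendation;
  decision: HumanDecision;  // required field: no human decision, no finalization
}

// The only path to a final outcome goes through a human decision record.
function finalizeCase(
  recommendation: AiRecommendation,
  decision: HumanDecision | null
): FinalizedCase {
  if (decision === null) {
    // The AI output alone cannot close the case.
    throw new Error("AP-2.1: a consequential outcome requires a human reviewer's decision");
  }
  return { recommendation, decision };
}

Making the human decision a required field also preserves the accountability trail the policy cares about: every finalized case names the reviewer and their rationale, which is exactly what an appeal needs to work with.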

What You Can Do

For important AI decisions, always ask: who is the accountable human, and where is the effective appeal path? If those answers are unclear, the governance is weak.


Sources & References

[1] AIPolicy Policy Handbook, AP-2.1 Human Final Decision. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/policy-handbook.md?ref_type=heads
[2] AIPolicy Categories: Decision Authority. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/categories.md?ref_type=heads
[3] NIST AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework
[4] CFPB: Chatbots in Consumer Finance. https://www.consumerfinance.gov/data-research/research-reports/chatbots-in-consumer-finance/
[5] Constitutional AI (Bai et al., 2022). https://arxiv.org/abs/2212.08073

