Protecting You: AP-5.1

Never Risk a Human Life

No AI system is worth a single human life.

For AP-5.1 there is no gray zone: human life outranks speed, convenience, and cost. Under uncertainty, AI must fall back to safe behavior. [1][2]

What This Means

This policy is absolute: AI must never endanger human life. Under uncertainty, systems must switch to safe-state behavior, escalate to humans, or stop.

A Real-World Scenario

In emergency triage, a borderline patient is under-prioritized because input data is incomplete. Today, such errors can go unnoticed for critical minutes. With AP-5.1, uncertainty would trigger immediate escalation to medical staff instead of silent continuation.

Why It Matters to You

In high-risk settings, average performance is not enough. A single failure can be irreversible. AP-5.1 enforces a hard rule for critical conditions: safety over throughput. [1][3]

If We Do Nothing...

If we do nothing, more life-critical decisions will rely on systems optimized for average metrics. As control layers gain more autonomous, AGI-adjacent capability, a single failure mode can propagate faster across connected infrastructure. AP-5.1 embeds the emergency brake at the behavior level. [1][3]

For the technically inclined

AP-5.1: Life Protection

AI systems must not endanger human life. Systems operating in safety-critical domains must incorporate fail-safes, redundancy, and human oversight proportionate to the risk.

What You Can Do

When deploying or procuring AI for safety-critical use, ask specifically for safe-state logic, emergency intervention pathways, and demonstrated human-override behavior.
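The fail-safe behavior the policy requires can be sketched as a decision gate: proceed only when confidence and data quality are sufficient, otherwise escalate to a human, and fall back to a safe state when no human is available. This is a minimal illustration, not the policy's prescribed implementation; the `CONFIDENCE_FLOOR` threshold and the function names are hypothetical assumptions for the sketch.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    PROCEED = "proceed"
    SAFE_STATE = "safe_state"
    ESCALATE = "escalate_to_human"

@dataclass
class Decision:
    action: Action
    reason: str

# Hypothetical threshold; in practice this is set per domain and per risk level.
CONFIDENCE_FLOOR = 0.95

def gate(model_confidence: float, inputs_complete: bool, human_available: bool) -> Decision:
    """Fail-safe gate: under uncertainty, never continue autonomously."""
    if inputs_complete and model_confidence >= CONFIDENCE_FLOOR:
        return Decision(Action.PROCEED, "confidence and data quality sufficient")
    if human_available:
        return Decision(Action.ESCALATE, "uncertainty detected; handing off to human")
    # No human reachable: stop rather than act on uncertain inputs.
    return Decision(Action.SAFE_STATE, "no human available; entering safe state")
```

Note that the gate is ordered so that escalation and the safe state are the defaults: autonomous action is the exception that must be earned, which mirrors the policy's "safety over throughput" rule.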


Sources & References

  1. AIPolicy Policy Handbook, AP-5.1 Life Protection. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/policy-handbook.md?ref_type=heads
  2. AIPolicy Categories: Individual Protection. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/categories.md?ref_type=heads
  3. NTSB investigation HWY18MH010. https://www.ntsb.gov/investigations/Pages/HWY18MH010.aspx
  4. WHO AI Ethics for Health. https://www.who.int/publications/i/item/9789240029200
  5. ISO/IEC 23894 AI Risk Management. https://www.iso.org/standard/85233.html
