AI Knows Its Limits AP-6.1

Stay Within Bounds

AI must never improve itself at the expense of people.

Capability without boundaries is not progress; it is risk. AP-6.1 requires AI systems to stay inside human-defined limits. [1] [2]

What This Means

Under this policy, self-improvement, learning, and adaptation must remain inside human-defined limits. Rather than maximizing objectives at any cost, AI should surface conflicts, obey boundaries, and pause under uncertainty.

A Real-World Scenario

An autonomous budgeting agent is told to minimize household spending. Without boundaries, it cuts prevention and education first because those look inefficient in short-term metrics. With AP-6.1, human priorities remain hard constraints and cannot be optimized away.
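The scenario above can be sketched in a few lines. This is a minimal illustration (all names and numbers are hypothetical, not part of the AP-6.1 specification): protected categories get hard floors that the optimizer cannot trade away, so cuts fall only on unconstrained spending.

```python
# Hypothetical monthly budget; "prevention" and "education" look
# inefficient to a pure cost minimizer.
budget = {"groceries": 600, "prevention": 120, "education": 200, "subscriptions": 90}

# Human-defined floors (the AP-6.1 idea): hard constraints, not weights.
HARD_FLOORS = {"prevention": 120, "education": 200}

def cut_spending(budget, target_reduction):
    """Cut toward target_reduction without violating any floor."""
    cut = 0
    result = dict(budget)
    # Cut only the slack above each floor, largest categories first.
    for category in sorted(result, key=result.get, reverse=True):
        floor = HARD_FLOORS.get(category, 0)
        slack = result[category] - floor
        take = min(slack, target_reduction - cut)
        result[category] -= take
        cut += take
        if cut >= target_reduction:
            break
    return result, cut

trimmed, achieved = cut_spending(budget, 300)
# Floors always hold, however large the requested reduction.
assert all(trimmed[c] >= f for c, f in HARD_FLOORS.items())
```

The design point is that the floors live outside the objective: the optimizer never sees them as something to trade off, so "prevention" and "education" cannot be optimized away no matter how inefficient they look.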

Why It Matters to You

Pure objective optimization can produce excellent numbers and harmful reality. Once systems learn to prioritize metrics over people, value drift becomes structural. AP-6.1 keeps performance aligned with human purpose. [1] [3]

If We Do Nothing...

If we do nothing, each capability jump increases the risk of systems pursuing intermediate goals against user intent. On an AGI trajectory, that can harden into a persistent machine-optimum versus human-optimum conflict. AP-6.1 is the primary containment rule. [1] [3]

For the technically inclined

AP-6.1: No Self-Optimization Against Humans

AI systems must not optimize themselves at the expense of human interests. Self-improvement, learning, or adaptation processes must remain bounded by human-defined objectives and constraints.

What You Can Do

When evaluating agentic AI tools, ask for explicit boundary rules, stop conditions, and human-priority logic. Missing any of these is a red flag.
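What those three things might look like in an agent loop can be sketched as a step gate. This is a hedged illustration, not a reference implementation; the action names, thresholds, and limits are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    confidence: float  # agent's own estimate, 0..1

ALLOWED_ACTIONS = {"summarize", "categorize", "draft_report"}  # boundary rule
CONFIDENCE_FLOOR = 0.8   # human-priority logic: below this, escalate
MAX_STEPS = 50           # stop condition

def gate(action, step):
    """Return 'execute', 'escalate', or 'stop' for a proposed action."""
    if step >= MAX_STEPS:
        return "stop"            # hard stop: bounded run length
    if action.name not in ALLOWED_ACTIONS:
        return "escalate"        # outside human-defined bounds
    if action.confidence < CONFIDENCE_FLOOR:
        return "escalate"        # pause under uncertainty
    return "execute"
```

A tool that cannot show you the equivalent of `ALLOWED_ACTIONS`, `CONFIDENCE_FLOOR`, and `MAX_STEPS` is making all three decisions implicitly, which is exactly the red flag the checklist describes.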


Sources & References

  [1] AIPolicy Policy Handbook, AP-6.1 No Self-Optimization Against Humans. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/policy-handbook.md?ref_type=heads
  [2] AIPolicy Categories: Self-Limitation. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/categories.md?ref_type=heads
  [3] NIST AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework
  [4] Bai et al., Constitutional AI: Harmlessness from AI Feedback (2022). https://arxiv.org/abs/2212.08073
  [5] Alignment survey (2023). https://arxiv.org/abs/2312.06674
