A Role-Based Human-Centered AI Decision-Support Framework
(Conceptual Overview)
1. Purpose
This framework proposes a human-centered approach to using generative AI, designed to support human thinking and judgment without transferring decision authority to AI systems.
Its primary goals are to reduce over-reliance on AI, preserve human accountability, and enhance multi-perspective reasoning in policy, education, research, and professional practice.
2. Core Principle
AI is not a decision-maker.
AI is a bounded tool that supports human judgment.
All final decisions and responsibilities remain with humans.
3. Structural Design
Instead of treating AI as a single, general-purpose agent, this framework decomposes AI usage into multiple clearly defined roles.
Each role is treated as a decision-support unit with explicitly documented boundaries.
For every role, the following are specified:
- What the AI can do
- What the AI cannot do
- Risks, limitations, and points requiring human attention
This role-based structure makes AI behavior predictable, auditable, and explainable.
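The framework does not prescribe any particular notation, but as a minimal sketch, the boundaries of one role could be recorded in a machine-readable form along the following lines. The `RoleDefinition` structure, its field names, and the "Counterargument Generator" role are purely illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class RoleDefinition:
    """Illustrative record of one decision-support role and its documented boundaries."""
    name: str
    can_do: list[str] = field(default_factory=list)     # what the AI may be asked to do in this role
    cannot_do: list[str] = field(default_factory=list)  # actions explicitly outside the role
    risks: list[str] = field(default_factory=list)      # limitations requiring human attention

# Hypothetical example role, written out so its behavior stays predictable and auditable.
counterargument_role = RoleDefinition(
    name="Counterargument Generator",
    can_do=["List objections to a draft proposal", "Surface overlooked stakeholder concerns"],
    cannot_do=["Rank the options", "Recommend or record a final decision"],
    risks=["May produce plausible but unsupported objections; every item needs human verification"],
)
```

Keeping each role's boundaries in an explicit, reviewable artifact of this kind is one way to make the structure auditable; any equivalent documentation format would serve.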
4. Human-in-the-Loop by Design
Human involvement is not an afterthought or a safety add-on.
It is embedded at the structural level.
Humans are responsible for:
- Selecting which roles to activate
- Interpreting AI outputs
- Integrating multiple perspectives
- Making final judgments and decisions
AI outputs are treated as inputs for thinking, not conclusions.
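A minimal sketch of this flow, assuming nothing about any particular model or platform: `ask_role` and `human_decision` below are hypothetical stand-ins, not a real API. Their only purpose is to show that a person selects the roles, interprets the outputs, and records the final judgment outside the AI call.

```python
def ask_role(role_name: str, question: str) -> str:
    """Placeholder for a bounded AI role; returns material to think with, never a verdict."""
    return f"[{role_name}] perspective on: {question}"

def human_decision(question: str, perspectives: list[str]) -> str:
    """The human reads each perspective, integrates them, and records their own decision."""
    for p in perspectives:
        print("input for thinking:", p)
    return input(f"Your decision on '{question}': ")  # decision authority stays with the person

# The human chooses which roles to activate for this question.
active_roles = ["Risk Scanner", "Counterargument Generator"]
question = "Should we pilot the new curriculum next term?"

# AI outputs are collected as inputs for thinking, not conclusions.
perspectives = [ask_role(role, question) for role in active_roles]
decision = human_decision(question, perspectives)
```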
5. Benefits
This framework enables:
- Prevention of automation bias in decision-making
- Reduced risk of blind dependence on AI
- Clear accountability and traceability
- Improved critical thinking and meta-cognition
- Safe use of advanced generative models in sensitive domains
It is especially suitable for education, policy design, research analysis, training, and knowledge work.
6. Positioning
This is not an AI system or product.
It is a usage framework / operational protocol that can be applied to existing AI models, platforms, or tools without vendor lock-in.
The framework is model-agnostic and adaptable to future AI advancements.
7. Policy Relevance
The framework aligns with widely shared policy goals, including:
- Responsible AI use
- Human accountability
- Explainability and transparency
- Risk-aware deployment
- AI literacy and human judgment preservation
It offers a practical, implementation-level complement to high-level AI governance principles.
(Reference Note – optional, not for main text)
This framework originates from a Japanese private-sector initiative focused on AI literacy, human judgment preservation, and educational use of generative AI.
Specific implementations include book-based and card-based learning systems, but the conceptual framework itself is independent of any product or vendor.