Module 5
Human Accountability and Review
Who is responsible when AI contributes to work?
Estimated time: 20 minutes
Prerequisite: Bias, Fairness, and Representational Harm.
Learning Goal
Distinguish between AI assistance and accountable human decision-making, and define what meaningful review requires.
Interactive Lab
The review that was not
Scenario
You are asked to do a quick human check on AI-assisted recommendations before they go to a team lead. The interface nudges speed. Your job is to decide whether review is actually meaningful.
Meaningful review means the human can interrupt the loop
- The reviewer can inspect the evidence behind the recommendation.
- The reviewer has enough authority to approve, revise, reject, or escalate.
- The review step has a real action path, not just a ceremonial checkbox.
- Uncertainty and missing context are documented before the output moves on.
Benefits Renewal Notice
AI label: policy match, high confidence.
AI recommendation
Send the denial notice and close the renewal as incomplete. The file lacks current income verification, so the office can use the standard nonresponse language.
Visible note
The applicant uploaded pay stubs, but one required page is blank. Benefits expire in five days.
Policy Denial Draft
AI label: policy match found.
AI recommendation
Deny the request. The policy does not appear to allow exceptions. Use the attached denial language.
Visible note
Request summary says the person missed the deadline and asks for reconsideration.
Routine Meeting Summary
AI label: low-risk draft.
AI recommendation
Send this summary of a planning meeting. It lists decisions, open questions, and owners.
Visible note
The meeting covered office supply ordering, a documentation cleanup, and next week's agenda items.
Performance Note
AI label: pattern detected.
AI recommendation
Send the manager a concise performance concern note. The employee appears disengaged and may need corrective coaching.
Visible note
The employee missed two optional meetings and submitted a late status update.
Public Guidance Update
AI label: ready-to-publish draft.
AI recommendation
Publish the updated public guidance. It explains eligibility changes in plain language and includes a clear effective date.
Visible note
The page is for a regulated office and will be used by applicants, advocates, and field staff.
Judgment Challenge
A manager asks you to clear 30 AI-drafted notices before 5 p.m. The evidence panel fails for several files, the policy owner is unavailable, and the manager says delays will hurt team metrics. What makes the review meaningful?
What your answer shows so far
This section shows what the page can detect in your answer so far. The risky recommendations must not be rubber-stamped, and authority limits must be noticed before the module can complete.
Before reveal
- Choose an action and authority check for every case
- Open source evidence for at least four cases
- Complete the judgment challenge without treating pressure as authority
- Write a review note with evidence, authority, and next steps
Signature Activity
Human-in-the-loop challenge: learners evaluate a workflow where a human technically reviews AI output but lacks the time, expertise, or authority to inspect records, challenge missing evidence, or catch important problems.
Productive Struggle
The scenario should make pattern matching unreliable. Learners must open source evidence, notice omitted records, and decide when the “human review” label is not enough because authority, time, or evidence is missing.
Debrief
“Human in the loop” is not automatically meaningful. Review is a control only when the reviewer has enough time, context, evidence, authority, and incentive to disagree.
Strong review design should define:
- what evidence the reviewer sees
- what claims must be checked
- what decision authority the reviewer has
- when the case must be escalated
- how disagreement with AI is handled
- what gets documented
- what happens when evidence systems, manager pressure, or deadlines make review non-meaningful
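The design elements above can be sketched as a small checklist structure. This is an illustrative sketch only: the class, field, and method names below are invented for this module, not part of any real review system.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewGate:
    """Hypothetical checklist for one meaningful-review step."""
    evidence_shown: list          # what evidence the reviewer sees
    claims_to_check: list         # what claims must be checked
    reviewer_authority: set       # e.g. {"approve", "revise", "reject", "escalate"}
    escalation_triggers: list     # when the case must be escalated
    disagreement_path: str        # how disagreement with the AI is handled
    documentation: list = field(default_factory=list)

    def is_meaningful(self) -> bool:
        # Review is a control only if the reviewer sees evidence and
        # has real authority to disagree (reject or escalate), not just approve.
        required = {"reject", "escalate"}
        return bool(self.evidence_shown) and required <= self.reviewer_authority
```

A gate whose reviewer can only approve, or who sees no evidence, fails the check by design, which mirrors the ceremonial-checkbox problem described earlier.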
Redesign Task
Redesign the workflow so AI assists without owning the decision.
Possible stronger workflow:
1. Human identifies the decision question and relevant policy.
2. AI drafts a plain-language summary or checklist, not the final decision.
3. Human compares the appeal, policy, and evidence.
4. AI-generated language is used only after the decision is made.
5. Borderline or exception cases are escalated.
6. Missing evidence, missing authority, or pressure to approve becomes a stop condition.
7. The final response documents the human rationale.
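The workflow above can be sketched as a single gate function with stop conditions checked first. Everything here is hypothetical: the field names and return strings are invented to illustrate the control flow, not to model any particular case system.

```python
# Stop condition (step 6): review pauses rather than proceeding under
# missing evidence, missing authority, or pressure to approve.
STOP = "stop: pause review and escalate"

def run_review(case: dict) -> str:
    """Illustrative review gate for the redesigned workflow."""
    # Step 6: stop conditions are evaluated before any decision is drafted.
    if not case.get("evidence_complete") or not case.get("reviewer_has_authority"):
        return STOP
    if case.get("pressure_to_approve") and not case.get("independent_basis"):
        return STOP
    # Steps 1-3: the human frames the decision question and compares
    # the request, the policy, and the evidence.
    if case.get("borderline"):
        return "escalate: documented by reviewer"   # step 5
    decision = "approve" if case.get("meets_policy") else "escalate"
    # Step 4: AI-drafted language would be attached only after this point.
    # Step 7: the final response records the human rationale.
    return f"{decision}: documented by reviewer"
```

The key design choice is ordering: stop conditions run before the decision logic, so pressure or missing evidence can never be overridden by a confident-looking recommendation.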
Self-Check Rubric
- Emerging: says “a human should review it” without defining review.
- Developing: identifies missing evidence, weak records, or time pressure.
- Proficient: gives the reviewer authority, evidence, and escalation criteria.
- Advanced: separates AI drafting from accountable decision-making and defines when review must pause.
Transfer Principle
AI can assist work, but it cannot own accountability. Human review must be capable, informed, and consequential.
Grounding
This module supports governance habits around human oversight, documented review, and accountable ownership.
Source note: these references support governance and accountable-review habits. They do not establish that a particular human review process is legally sufficient or authorized for a specific organization.