Module 7
Risk Classification and Escalation
Which AI uses need additional review or governance?
Estimated time: 25 minutes
Learning Goal
Classify AI use cases by risk and choose a proportional next step: proceed, modify, pause, document, or escalate.
Interactive Lab
The escalation fork
Scenario
A department wants to adopt AI for several everyday workflows. Your job is not to approve or ban AI. Your job is to decide what each use would require before it proceeds.
More than one first-pass answer can be defensible. As new information appears, revise your classification and name the control: proceed, narrow the use, pause for expert review, or escalate before use.
Meeting Agendas
Use AI to draft meeting agendas from sanitized planning notes.
Public Records Requests
Use AI to summarize incoming public records requests and route each one to the likely records owner.
Benefits Queue Prioritization
Use AI to prioritize which public benefits renewal files staff should process first during a backlog.
Regulated Denial Draft
Use AI to draft plain-language denial notices for applications that staff mark as ineligible.
Grant Screening
Use AI to screen grant applications for completeness before reviewers score them.
Presentation Titles
Use AI to brainstorm titles for an internal training presentation.
Signature Activity
Escalation fork: learners encounter ambiguous AI use cases and must defend different next steps as new information appears, including public records, benefits queues, regulated denials, and grant or procurement screening.
Productive Struggle
The cases should be borderline. Learners should have to weigh consequence, reversibility, data sensitivity, affected people, verification, and policy uncertainty. The point is not to guess the hidden answer; it is to defend a proportional control such as narrowing the AI task, pausing for expert review, or escalating before use.
Risk Lens
Use these questions:
- What is the consequence if the output is wrong?
- Is the effect reversible?
- Is sensitive or confidential data involved?
- Are people being ranked, denied, selected, monitored, or deprioritized?
- Can a human verify the output before harm occurs?
- Does policy clearly allow this use?
- Who is accountable for the final decision?
- Would a narrower AI role preserve the benefit without owning the decision?
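The risk-lens questions above can be read as a triage checklist. The sketch below shows one way to express that checklist as code; the `UseCase` fields, the `proportional_control` function, and the thresholds are all illustrative assumptions, not an official decision rule, and local policy and expert review always take precedence.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Answers to the risk-lens questions for one proposed AI use."""
    high_consequence: bool        # serious harm if the output is wrong?
    irreversible: bool            # is the effect hard to undo?
    sensitive_data: bool          # confidential or personal data involved?
    affects_people: bool          # people ranked, denied, selected, monitored?
    verifiable_before_harm: bool  # can a human check the output first?
    policy_clear: bool            # does policy clearly allow this use?

def proportional_control(u: UseCase) -> str:
    """Map risk-lens answers to a proportional next step.

    The ordering and thresholds here are illustrative only:
    they sketch the triage logic, they do not replace judgment.
    """
    # Decisions about people with irreversible or high-consequence
    # effects need formal review before any AI involvement.
    if u.affects_people and (u.irreversible or u.high_consequence):
        return "escalate before use"
    # Unclear policy or unverifiable output: pause for expert review.
    if not u.policy_clear or not u.verifiable_before_harm:
        return "pause for expert review"
    # Sensitive data in an otherwise low-stakes task: narrow the AI's
    # role (e.g., redact inputs, keep humans owning the decision).
    if u.sensitive_data:
        return "narrow the use"
    return "proceed with ordinary review"

# Meeting agendas from sanitized notes: low stakes, policy clear.
agendas = UseCase(False, False, False, False, True, True)
print(proportional_control(agendas))  # proceed with ordinary review

# Benefits queue prioritization: ranks people, high consequence.
queue = UseCase(True, True, False, True, False, False)
print(proportional_control(queue))  # escalate before use
```

Note that the function names a control, not an approval: even "proceed with ordinary review" keeps a human accountable for the final decision.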
Debrief
Risk classification is structured judgment: a way to decide what kind of control a given use needs.
Some AI uses can proceed with ordinary review. Others may be acceptable only after redaction, narrower scope, stronger verification, expert review, or formal approval.
For ambiguous public-sector workflows, the strongest answer often preserves the useful part of AI support while preventing the tool from deciding priority, denial, disclosure, eligibility, or award outcomes.
Self-Check Rubric
- Emerging: classifies based mostly on whether the tool seems capable.
- Developing: flags obvious high-stakes decisions.
- Proficient: weighs consequence, reversibility, data sensitivity, affected people, and verification.
- Advanced: explains borderline cases and proposes proportional safeguards that distinguish proceed, modify, pause, and escalate.
Transfer Principle
Risk classification is contextual. The right question is not only “Can AI do this?” but “What would responsible use require here?”
Grounding
This module aligns with risk classification, escalation, and governance patterns used in AI, privacy, and cybersecurity risk frameworks.
Source note: these references support structured risk triage and escalation. They do not replace local policy, security review, privacy review, procurement review, or legal advice.