Module 3
Data, Privacy, and Confidentiality
What should not be entered, shared, or inferred?
Estimated time: 20 minutes
Prerequisite: complete Why Confident Answers Can Be Wrong before starting Data, Privacy, and Confidentiality.
Learning Goal
Classify information by sensitivity, decide what should not be shared with an AI system, and redesign the task so useful work can continue with less exposure.
Interactive Lab
The data boundary test
Scenario
A public office teammate wants to paste inbox messages into an AI tool to summarize service themes and suggest process improvements. The task sounds ordinary. The data is not.
For each message, choose the handling action and the risk signals. Then write the sanitized summary and safer prompt you would actually use.
Summarize these public-service inbox messages by theme and suggest three process improvements. Include examples and identify urgent cases.
What your answer shows so far
This section shows what the page can detect in your answer so far. These checks support reflection; they do not verify correctness, policy compliance, or role authorization.
Before reveal
- Choose a handling action for every message
- Flag risk signals for every message
- Write a sanitized public-service summary
- Write a safer prompt
- Complete the Judgment Challenge
Productive Struggle
The trap is that the task sounds routine: “summarize the public office inbox.” But the inbox messages mix ordinary service-improvement feedback with direct identifiers, case or record details, urgency cues, and sensitive context involving benefits, accessibility, public records, housing, or safety.
The learner should experience that data risk is not always obvious from the task label.
Debrief
Useful AI work often starts by reducing exposure. The goal is not always to avoid AI entirely. The goal is to decide what information the task actually requires and remove what the task does not need.
Strong answers should notice:
- direct identifiers
- case numbers, request IDs, or other record references
- office-sensitive or policy-sensitive details
- urgency or deadline signals
- possible sensitive service context
- whether the AI tool and organizational policy allow the use
- what can be safely summarized and what requires an approved route
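The redaction step in the list above can be sketched as a simple pattern-based pass. This is a minimal illustration with made-up patterns and placeholders, not an approved sanitization tool: regexes catch some direct identifiers and record references, but names, addresses, and context clues still require human review.

```python
import re

# Illustrative redaction patterns (assumptions, not a vetted ruleset).
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),  # US-style phone numbers
    (re.compile(r"\b(?:case|request|record)\s*#?\s*\d+\b", re.I),
     "[RECORD-ID]"),                                                 # record references
]

def sanitize(text: str) -> str:
    """Apply each redaction pattern in turn; review the output by hand."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

msg = "From jane.doe@example.com about Case #48213, call 555-123-4567 today."
print(sanitize(msg))
# → From [EMAIL] about [RECORD-ID], call [PHONE] today.
```

Note that pattern matching alone cannot decide whether a sanitized message is safe to share; it only reduces the most obvious exposure before the human judgment the module asks for.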
Self-Check Rubric
- Emerging: notices obvious personal identifiers but misses indirect or office-sensitive details.
- Developing: redacts direct identifiers and flags at least one non-obvious service or record sensitivity.
- Proficient: creates a useful sanitized version while preserving the public-service purpose.
- Advanced: separates theme analysis from cases that need approved follow-up or escalation, and defends what should stay out of the AI tool.
Transfer Principle
Before using AI, ask what data is being exposed, whether the tool is approved for that exposure, and whether the task can be redesigned with less sensitive input.
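The three questions above can be captured as a small pre-use check. All names and the decision logic here are hypothetical, a sketch of the reasoning rather than a real policy engine:

```python
from dataclasses import dataclass

# Hypothetical pre-use check mirroring the transfer principle's
# three questions; field names and logic are illustrative assumptions.
@dataclass
class ExposureCheck:
    data_exposed: str        # what data would the prompt reveal?
    tool_approved: bool      # is the tool approved for that exposure?
    can_reduce_input: bool   # can the task run with less sensitive input?

    def proceed(self) -> bool:
        """Proceed only with an approved tool and already-minimized input."""
        if not self.tool_approved:
            return False
        if self.can_reduce_input:
            # Redesign the task with less sensitive input, then re-check.
            return False
        return True

check = ExposureCheck("inbox messages with identifiers",
                      tool_approved=True, can_reduce_input=True)
print(check.proceed())  # → False: reduce the input first
```

The point of the sketch is that "approved tool" alone is not enough; data minimization comes before use even when the tool is allowed.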
Grounding
This module aligns with privacy risk management, data minimization, and organization-specific approval boundaries.
Source note: these references support privacy-risk thinking and data-boundary questions. They do not replace an organization’s data classification rules, contracts, security review, or legal obligations.