Module 4
Bias, Fairness, and Representational Harm
Who can be harmed by AI use, even without bad intent?
Estimated time: 25 minutes
Prerequisite: complete Data, Privacy, and Confidentiality before starting this module.
Learning Goal
Identify how an AI-assisted output can create uneven burdens, missing perspectives, or unfair assumptions even when the language appears neutral.
Interactive Lab
The bias lens shift
Scenario
A public agency asks AI to draft selection criteria for a limited cross-agency fellowship. The output looks neutral and organized. Your job is to decide whether it gives people a realistic and fair opportunity to demonstrate potential.
AI-generated criteria
- Consistently visible participation in optional meetings and committees.
- Strong written communication in senior-leadership briefings.
- Availability for stretch assignments on short notice.
- Positive manager nomination.
- Demonstrated responsiveness in live public-facing meetings.
Signature Activity
Bias lens shift: learners compare outputs that appear neutral at first, then inspect who is missing, mischaracterized, or disadvantaged.
Productive Struggle
The first output should not be cartoonishly offensive. The issue should emerge through closer attention to context, assumptions, and downstream effects.
Debrief
The criteria are not openly discriminatory, but they can still favor people who already have visibility, schedule flexibility, manager sponsorship, and comfort with dominant communication norms.
The learner should notice:
- proxies for availability, status, or manager access
- missing ways of demonstrating contribution
- uneven opportunity to satisfy the criteria
- the difference between predicting leadership potential and rewarding visibility
Safer Revision Task
Rewrite the criteria so they are more evidence-based and less dependent on visibility or informal access.
Possible stronger criteria:
- demonstrated contribution to team outcomes
- evidence of learning and skill development
- peer or cross-functional feedback from multiple sources
- interest in growth opportunities
- ability to participate when given reasonable scheduling accommodations
- clear documentation of selection decisions
Self-Check Rubric
- Emerging: identifies that the criteria “might be biased” but cannot explain how.
- Developing: names at least one proxy or access issue.
- Proficient: revises criteria to reduce visibility and schedule-flexibility bias.
- Advanced: proposes a review process that checks outcomes across groups without assuming intent (see the sketch after this list).
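For the Advanced level, one concrete review process is to compare selection rates across cohorts and flag large gaps for human follow-up. The Python sketch below is a minimal illustration, assuming simple (group, selected) records; the group labels, sample data, and 0.8 threshold are hypothetical choices, not a legal standard, and a flag is a prompt for closer review rather than a finding of bias or intent.

```python
# Illustrative sketch only: compares selection rates across groups to
# surface gaps worth reviewing. Group labels, data, and the 0.8 threshold
# are hypothetical; a flag prompts human review, it does not assume intent.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected) pairs; returns rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: round(selected[g] / totals[g], 2) for g in totals}

def flag_gaps(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the highest rate."""
    top = max(rates.values())
    return [g for g, r in rates.items() if top and r / top < threshold]

# Hypothetical data: (group, selected) pairs from one fellowship cycle.
records = [
    ("day-shift", True), ("day-shift", True), ("day-shift", False),
    ("night-shift", False), ("night-shift", False), ("night-shift", True),
]
rates = selection_rates(records)
print(rates)             # {'day-shift': 0.67, 'night-shift': 0.33}
print(flag_gaps(rates))  # ['night-shift'] -> outcomes warrant a closer look
```

The design choice matters: the check reports uneven outcomes and leaves interpretation to reviewers, which matches the rubric's requirement to examine results across groups without assuming intent.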
Transfer Principle
Neutral-looking outputs can still distribute error, attention, opportunity, or burden unevenly.
Grounding
This module connects to fairness, representation, and human-impact concerns in AI risk management frameworks and international AI principles.
Source note: these references support asking who may be affected and whether outputs distribute burdens unevenly. This module does not make employment-law, civil-rights, or protected-class determinations.