Resources
Standards and learning-science references.
These links are intentionally durable: primary frameworks, standards resources, and learning-science sources that support scenario-based, effortful practice. Source notes are included so the curriculum can be reviewed against its sources without tying the learner experience to any single point in time.
Regulated-Setting Context
Learning Design
How these shape the lab
- Learners commit to a judgment before seeing a cleaner answer.
- Scenarios include incomplete evidence, competing constraints, and revision pressure.
- Feedback is local and transparent; it supports reflection without pretending to certify identity or mastery.
- Governance language is kept cross-sector and behavior-focused.
How to read these sources
These references inform the lab's learning design and risk vocabulary. They are not endorsements or legal advice, and completing the public lab is not proof that any organization's policy or regulatory obligation has been satisfied.
- NIST AI RMF is used as a voluntary risk-management reference.
- EU AI Act Article 4 is cited for AI literacy context, not as a claim of legal sufficiency.
- ISO and OECD links are included for governance vocabulary and principles.
Module source map
- Module 1: NIST AI RMF context mapping and generative AI profile concepts.
- Module 2: reliability, transparency, and verification habits.
- Module 3: privacy risk management and data-boundary questions.
- Module 4: fairness, representation, and human-impact concerns.
- Module 5: accountable review and governance habits.
- Module 6: workflow discipline and management-system thinking.
- Module 7: structured risk triage and escalation.
- Capstone: integrated risk-management practice and AI-literacy context.