Project Thesis

AI literacy is the ability to make sound judgments about AI use in context.

AI Literacy Lab is a free, open-source training tool for office workers who need to use AI responsibly in everyday work. That includes regulated settings where data handling, documentation, human review, fairness, and escalation are part of the job.

The lab is built on a simple premise: AI literacy should change behavior. A learner should leave better able to decide when AI is useful, when it is fragile, what must be checked, what should not be shared, who remains accountable, and when a use should be escalated before it proceeds.

Why This Exists

Many AI trainings treat literacy as awareness: read a policy summary, recognize a few terms, answer a few recall questions, and receive a completion record. That may document exposure, but it does not reliably build judgment.

This project treats AI literacy as practice. Learners make decisions in realistic scenarios, encounter complications, revise their reasoning, and compare their choices against transparent checks. The goal is not to make people pro-AI or anti-AI. The goal is to make them more careful, capable, and accountable when AI enters ordinary work.

What The Lab Trains

Design Principles

Scope

The lab is not legal advice, a proctored credential, an identity-verified certificate, or a replacement for an organization's own policies. It is a structured practice environment for building the habits that make responsible AI policies more likely to work in real behavior.

For Organizations

Teams can use the lab as a practice layer for AI literacy, but should pair it with local guidance: approved tools, data classification rules, escalation contacts, accessibility review, and role-specific obligations. The browser-based learning record documents local practice; it does not verify identity or prove compliance.
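To make the "documents local practice, does not verify identity" distinction concrete, the sketch below imagines the learning record as a small store in the learner's own browser. Everything here is a hypothetical illustration: the names (`AttemptRecord`, `recordAttempt`), the storage key, and the record shape are assumptions, not the lab's actual schema. In a real page the store would be `window.localStorage`; a minimal in-memory stand-in is used so the sketch is self-contained.

```typescript
// Hypothetical sketch of a browser-local learning record.
// Names and shapes are illustrative assumptions, not the lab's real schema.

interface AttemptRecord {
  scenarioId: string;
  decision: string;    // e.g. "escalate" or "proceed-with-review"
  completedAt: string; // ISO timestamp
}

// Minimal storage interface so the sketch runs outside a browser;
// in a real page this role is played by window.localStorage.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORE_KEY = "ai-literacy-lab/record"; // assumed key name

// Append one attempt to the locally stored history and return the history.
function recordAttempt(store: KeyValueStore, attempt: AttemptRecord): AttemptRecord[] {
  const raw = store.getItem(STORE_KEY);
  const history: AttemptRecord[] = raw ? JSON.parse(raw) : [];
  history.push(attempt);
  store.setItem(STORE_KEY, JSON.stringify(history));
  return history;
}

// In-memory stand-in for localStorage, for demonstration only.
const memoryStore: KeyValueStore = (() => {
  const data = new Map<string, string>();
  return {
    getItem: (k) => data.get(k) ?? null,
    setItem: (k, v) => void data.set(k, v),
  };
})();

const history = recordAttempt(memoryStore, {
  scenarioId: "data-sharing-01",
  decision: "escalate",
  completedAt: new Date().toISOString(),
});
```

Because the record never leaves the browser, it can document that practice happened on a device, but it cannot prove who did it — which is exactly why it is no substitute for identity-verified compliance records.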

See For Teams, Facilitators, and Privacy for adoption guidance.

Standards And Research

The curriculum is informed by durable ideas from AI risk management, privacy risk management, and learning science. It draws especially on the NIST AI Risk Management Framework, the NIST Privacy Framework, and research on scenario-based learning, retrieval practice, and productive failure.

See the Resources page for the working source list.

Accessibility And Testing

AI Literacy Lab is built with semantic page structure, a skip link, visible focus styles, labeled controls, and local browser feedback. The public site should be usable without learner accounts and without submitting learner writing, but it is not a substitute for an organization's own accessibility acceptance testing.

Organizations adopting the lab should run their own accessibility, assistive-technology, browser, storage, and policy checks in the environment where learners will use it.

Open Delivery

The site is built with Astro, MDX, and React islands. The curriculum remains readable as open-source content while interactive lab activities provide the practice moments that matter.
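For orientation, a stack like the one described above is usually wired together in a configuration file along these lines. This is a minimal sketch, not the lab's actual config: the integration packages (`@astrojs/mdx`, `@astrojs/react`) are the standard Astro integrations for MDX content and React islands, and everything else is assumed.

```javascript
// astro.config.mjs — minimal sketch of an Astro + MDX + React islands setup.
import { defineConfig } from 'astro/config';
import mdx from '@astrojs/mdx';
import react from '@astrojs/react';

export default defineConfig({
  // MDX keeps curriculum pages readable as plain content files in the repo;
  // the React integration hydrates only the interactive lab activities
  // (the "islands"), leaving the rest of each page as static HTML.
  integrations: [mdx(), react()],
});
```

The islands approach matches the open-delivery goal: the curriculum stays legible as source text, while interactivity is added only where a practice activity needs it.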