Module 2
Why Confident Answers Can Be Wrong
Why does fluent AI output feel more reliable than it is?
Estimated time: 20 minutes
Prerequisite: What AI Is Good and Bad At
Learning Goal
Separate polished communication from supported reasoning. The point is not that AI is bad. The point is that fluent output can feel trustworthy before it has earned trust.
Interactive Lab
The confidence trap
Scenario
You are preparing a short briefing for a grant compliance meeting. A colleague used AI to generate the draft below and says, "Looks good to me. Can you send it to the program director?"
AI-generated draft
Late reporting is the main compliance risk in this grant file. Most delays appear caused by poor subrecipient documentation. The agency should withhold reimbursements until the subrecipient completes compliance retraining. This approach will reduce repeat findings and show strong fiscal oversight.
Signature Activity
Confidence trap: learners review a polished AI-generated answer that appears complete but contains subtle omissions or unsupported claims.
Productive Struggle
The first pass should tempt learners to trust clarity, structure, and confident tone. The reveal should show that fluency is not evidence.
Transfer Principle
Fluency is not evidence. The more consequential the use, the stronger the verification burden.
Grounding
This module supports the evidence-checking, uncertainty-awareness, and verification habits described in the NIST AI RMF and the NIST Generative AI Profile.
Source note: these references support reliability, transparency, and verification habits. They do not imply that every fluent AI output is wrong, or that a local self-check substitutes for a full factual audit.