Module 1

What AI Is Good and Bad At

When is AI useful, and when is it fragile?

Estimated time: 15 minutes

Complete the lab review to unlock the next module.

Learning Goal

Classify potential AI uses by considering task type, context, stakes, data sensitivity, verification options, and human accountability.

Opening Scenario

You are preparing for a busy workday. Several people have suggested using AI to save time. Some uses look routine. Others have hidden stakes.

Pre-Reflection

Before you start

Write a short note. It will appear beside your post-reflection so you can see how your thinking changed.

Interactive Lab

Module 1 lab locked

Save the pre-reflection above first. It will be added to your final learning record next to your post-reflection.

Debrief

AI suitability is not only about the task category. It depends on context.

Useful questions:

  • What is the intended use?
  • What happens if the output is wrong?
  • Can a human verify the output?
  • Is sensitive data involved?
  • Does the output affect someone else’s opportunity, rights, safety, finances, or reputation?
  • Who remains accountable?
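The questions above can be sketched as a rough triage function. This is an illustrative assumption, not an official rubric: the field names, answer categories, and decision order below are invented for the sketch, and any real screening would need your organization's own policy.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # Each field mirrors one debrief question (names are illustrative).
    wrong_output_harm: bool    # What happens if the output is wrong?
    human_can_verify: bool     # Can a human verify the output?
    sensitive_data: bool       # Is sensitive data involved?
    affects_others: bool       # Opportunity, rights, safety, finances, reputation?
    accountable_owner: bool    # Is a named human accountable?

def screen(use: UseCase) -> str:
    """Rough triage based on the debrief questions (assumed thresholds)."""
    # Sensitive data, or consequential outputs no one can check, get escalated.
    if use.sensitive_data or (use.affects_others and not use.human_can_verify):
        return "escalate"
    # Harmful-if-wrong outputs with no accountable owner also escalate.
    if use.wrong_output_harm and not use.accountable_owner:
        return "escalate"
    # Low-harm, verifiable uses can proceed with a human review step.
    if use.human_can_verify and not use.wrong_output_harm:
        return "proceed with review"
    return "proceed with caution"
```

The point of the sketch is the shape of the reasoning, not the specific thresholds: context questions come first, and "escalate" is the default whenever verification or accountability is missing.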

Transfer Principle

AI is often useful for generating, transforming, organizing, and critiquing information. It is more fragile when work requires current facts, hidden context, sensitive data, high-stakes judgment, or accountable decisions.

Grounding

This module connects to the NIST AI RMF habit of mapping context before managing risk, and to the NIST Generative AI Profile’s focus on bounded, context-aware use.

Source note: these references support context mapping and proportionate risk management. They do not certify that a specific workplace use is approved, compliant, or low risk.