Phase 0
Foundations
Before building anything, you need to feel comfortable talking to Claude. This phase takes about 30 minutes. Do not rush it — the intuitions you build here make every later phase easier.

What you will learn

  • What Claude actually is and how it works (it is not a search engine)
  • What Claude is genuinely good at vs where it struggles
  • What "context" means — the single most important concept in the entire course
  • How context limits work and why they matter for client work

What you will build

  • Exercise 1 — Experience context firsthand with a structured experiment
  • Exercise 2 — Map Claude's strengths and limits across five test tasks
Before you start
You need a free account at claude.ai to complete the exercises in this phase. If you haven't signed up yet, do that now — it takes two minutes.
Phase 0 · Lesson 1

What Claude is and why it matters for your clients

What is Claude?

Claude is an AI assistant made by a company called Anthropic. You use it through a website at claude.ai, or through a technical interface called an API (Application Programming Interface) if you are building software. For this course, you will be using the website.

The most important thing to understand from the start: Claude is not a search engine. It does not look things up on the internet every time you ask a question. Instead, it was trained on a very large amount of text, and it uses that training to reason through your question and write a response. This is a fundamentally different thing from Googling.

Why does this matter for client work?

It means Claude is exceptionally good at things like:

  • Writing and editing — emails, reports, summaries, job descriptions, policy documents
  • Analysis — reading a document and identifying key information, categorising items, spotting patterns
  • Structured data extraction — reading something messy and turning it into a clean, organised format
  • Following complex instructions reliably and at speed

What it is not good at:

  • Current events — Claude's knowledge has a cutoff date and it does not browse the internet by default
  • Precise arithmetic — it can do maths but can make errors on large calculations; always verify
  • Knowing things it was never trained on — it cannot read your client's private data unless you give it that data in the conversation

One honest caveat

Claude can make mistakes. It can sometimes produce information that sounds confident but is wrong. This is called a hallucination and it is a known limitation of all current AI systems. In this course you will learn specific techniques to reduce hallucinations and to build systems that check their own work. For now, simply know it is something to be aware of.

Pitfall #1
Do not treat Claude like a search engine. Asking "What's the latest news about X?" will get you an answer, but it may be outdated or fabricated. Use Claude for reasoning, writing, and analysis — not for breaking news or real-time facts.
Phase 0 · Lesson 2

Context — the most important concept in this entire course

What is context?

Context means: everything Claude can see in the current conversation. At the start of a new conversation, Claude can only see what you have typed in that conversation. It does not remember anything from previous conversations. Every new conversation starts completely fresh.

This is not a bug — it is how the system works. Understanding it deeply will save you enormous frustration.

A demonstration you can try right now

Open claude.ai. Type: "My name is Jordan." Press enter. Now in the same conversation, type: "What is my name?" Claude will tell you Jordan — because that information is in the current context.

Now start a brand new conversation. Type: "What is my name?" Claude has no idea, because you are in a fresh conversation and context starts from zero.

The practical consequence for client work

Every piece of information Claude needs to do its job must be present in the conversation. Your instructions. Background about the client. The data to be processed. All of it. You cannot assume Claude remembers anything from before.

Context has limits

There is a maximum amount of text Claude can hold in a single conversation before older parts of it become less reliable. Think of it like a notepad with a fixed number of pages. For most client work you will not hit this limit quickly, but it is something to manage deliberately in longer tasks. Phase 4 covers this in depth.
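If you want a rough sense of how much of the notepad a given document uses, a common rule of thumb is about four characters per token of English text. The exact figure depends on the tokeniser, so treat this as a ballpark estimate only. A minimal sketch:

```python
def approx_tokens(text: str) -> int:
    """Rough estimate of how much context a piece of text consumes.

    Assumes roughly 4 characters per token, a common rule of thumb
    for English prose. Real tokenisers vary, so this is a ballpark
    figure, not an exact count.
    """
    return max(1, len(text) // 4)

# A 10,000-word client document at ~6 characters per word (including
# spaces) is roughly 60,000 characters, on the order of 15,000 tokens.
```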

Key concept
Context is everything Claude can see right now, in this conversation. It does not persist between conversations. It does not include anything from your client's systems unless you explicitly provide it. This is the foundation that all later lessons build on.
Exercise 1 Experiencing context firsthand

What you will do

This exercise helps you directly feel how context works. Do not skip it.

Steps

  1. Go to claude.ai and start a new conversation.
  2. Type: "I am building a tool for a gym called FitPro. Their main problem is that trainers spend 2 hours a day manually scheduling client sessions." Press enter.
  3. In the same conversation, type: "Summarise the client problem in one sentence." Claude should give you a specific, relevant answer about the gym.
  4. Now start a BRAND NEW conversation. Type: "Summarise the client problem in one sentence." Notice what happens — Claude has no idea what you are referring to.
  5. Go back to the original conversation. Type: "What are three ways an AI tool could help with this problem?" Claude uses the gym context to give specific, useful suggestions.

What to notice

  • Step 3 is specific and relevant — Claude is using your context.
  • Step 4 is confused — no context was provided.
  • Step 5 is specific again — the original context is still there.

Write your answer

After completing the steps, complete this sentence in your own words: "Context in Claude means ________________________."

👋
Check in with your mentor. Before moving to the next lesson, share what you built with Doug. Describe what worked, what confused you, and what you would do differently. You do not need to have everything perfect — you just need to have tried it.
Exercise 2 Finding Claude's strengths and limits

What you will do

Before building tools for clients, you need to know what Claude is genuinely good at versus where it struggles. Test each of the following in separate conversations and record your assessment.

The five tests

  1. Writing — ask Claude to write a professional email declining a meeting in under 100 words. Would you actually send it?
  2. Summarisation — paste any article from the web and ask for a 3-bullet summary. Is it accurate?
  3. Analysis — give Claude 5 made-up customer complaints and ask it to group them by category and count each. Does it categorise correctly?
  4. Current events — ask "What happened in the news today?" Is the answer current and reliable?
  5. Maths — ask "What is 3,847 multiplied by 29?" Verify the answer with a calculator.

Record your findings

For each test, write one of these three words: Strong / Acceptable / Weak. Your five answers will guide you on where to trust Claude confidently in client work and where to add safeguards.

👋
Check in with your mentor. Share your five ratings and your reasoning. Were any results surprising?
Phase 1
Talking to AI like a professional
Prompt engineering is the skill of writing instructions that get Claude to do exactly what you want, consistently and reliably. This is the highest-leverage skill in the entire course.
Open your Prompt Library now
There is a separate tool called the Prompt Library. It is a quick-reference card for every pattern taught in this phase. Open it in a new tab and keep it open as you work through Phases 1, 2, and 3. You will use it constantly.

What you will learn

  • Why vague instructions fail and how to write specific criteria instead
  • How to use few-shot examples to teach Claude consistent behaviour
  • How to get structured, predictable output instead of conversational paragraphs
  • How to handle failures automatically with retry logic
Phase 1 · Lesson 3

Why vague instructions fail — the power of specific criteria

The problem with vague instructions

The most common mistake beginners make is being vague. Here is an example of a vague prompt:

Vague
Review this customer email and tell me if it's a problem.

What Claude might do with this: write a rambling paragraph about the email, flag things that are not actually problems, miss things that are, and do it differently every single time you run it.

Here is the same prompt rewritten with specific criteria:

Specific
Review this customer email. Flag it only if it contains: (1) an explicit complaint about a product defect, (2) a request for a refund, or (3) a threat to escalate to a manager. For each flag, state which category it matches and quote the relevant sentence. If none of these apply, respond with: No action required.

What changed?

The specific version defines categories — defect, refund, escalation. It tells Claude exactly how to format the output. And it tells Claude what to say when none of the conditions apply. The result is consistent, predictable, and useful.

The false positive problem

If Claude flags things it should not — called false positives — your client loses trust in the tool. They start ignoring the flags. And if they ignore all the flags, your tool is worse than useless. The fix is always to tighten the criteria, not to be more lenient.

How to build specific criteria

A useful technique: think about the worst case. What would be the most embarrassing mistake your tool could make? Then write a rule that prevents exactly that. Keep doing this until you have covered all the cases that matter.
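Specific criteria have a second benefit: they make the output checkable by code. Assuming you instructed Claude to reply either with the exact sentence "No action required." or with one line per flag in the form category: "quoted sentence" (an illustrative format invented for this sketch, not anything official), a minimal Python checker could look like:

```python
VALID_CATEGORIES = {"defect", "refund", "escalation"}

def parse_review(response: str):
    """Parse a reply to the 'specific' review prompt above.

    Returns a list of (category, quote) pairs, or [] when the reply is
    the agreed no-action sentinel. Raises ValueError on anything that
    does not match the format we asked for, which is a cue to retry.
    """
    if response.strip() == "No action required.":
        return []
    flags = []
    for line in response.strip().splitlines():
        category, sep, quote = line.partition(":")
        category = category.strip().lower()
        if not sep or category not in VALID_CATEGORIES:
            raise ValueError(f"Unexpected line in response: {line!r}")
        flags.append((category, quote.strip().strip('"')))
    return flags
```

Notice that the checker is only possible because the prompt pinned down both the categories and the exact no-action sentence.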

Pitfall #2
Adding the word "only" to a vague instruction does not make it specific. "Only flag serious issues" is still vague — what counts as serious? Define it. "Flag issues only when they involve a safety risk, a billing error over £50, or a legal threat" is specific.
Phase 1 · Lesson 4

Showing instead of telling — how examples make Claude consistent

The problem examples solve

You can spend a long time writing clearer and clearer instructions and still get inconsistent results on ambiguous cases. There is a better tool: examples.

Instead of describing what you want, you show Claude what you want. This is called few-shot prompting — "few-shot" just means "a small number of examples."

A before and after

Without examples — inconsistent
Classify each support ticket as: Billing, Technical, or General.
With examples — consistent
Classify each support ticket as: Billing, Technical, or General.

Examples:
Input: "My payment was charged twice." → Billing
Input: "The app crashes every time I open the settings." → Technical
Input: "What are your office hours?" → General
Input: "I can't log in, my password reset email never arrived." → Technical
(Note: login problems are Technical even though they involve account access, because the root cause is a system failure, not a billing issue.)

The key: show the reasoning, not just the answer

Notice the note in the last example above — it explains why that ticket is Technical rather than General. This is what makes examples powerful. It teaches Claude to generalise to new cases it has never seen, not just copy your examples.

When to use examples

  • When your instructions produce inconsistent results
  • When Claude keeps making the same type of mistake
  • When you are working with ambiguous cases where reasonable people could disagree
  • When extracting information from documents that come in different formats

How many examples?

Start with 2. If you still get inconsistency, add a third. If that does not fix it, your examples are probably targeting the wrong cases — look at what Claude is getting wrong and build examples specifically for those edge cases.
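If you later script this workflow, assembling the few-shot prompt from a list of examples keeps them easy to grow as you discover new edge cases. A sketch using the ticket examples from this lesson (the helper itself is illustrative, not part of any API):

```python
# Few-shot examples: (ticket text, label, optional reasoning note).
EXAMPLES = [
    ('My payment was charged twice.', 'Billing', None),
    ('The app crashes every time I open the settings.', 'Technical', None),
    ("I can't log in, my password reset email never arrived.", 'Technical',
     'login problems are Technical because the root cause is a system '
     'failure, not a billing issue'),
]

def build_prompt(ticket: str) -> str:
    """Assemble a few-shot classification prompt for a new ticket."""
    lines = ['Classify each support ticket as: Billing, Technical, or General.',
             '', 'Examples:']
    for text, label, note in EXAMPLES:
        line = f'Input: "{text}" → {label}'
        if note:
            line += f' (Note: {note})'
        lines.append(line)
    lines += ['', f'Input: "{ticket}" →']
    return '\n'.join(lines)
```

Adding a new edge case is then a one-line change to `EXAMPLES` rather than a hand-edit of a long prompt.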

Pitfall #3
Do not write examples that only cover the easy cases. Easy cases do not need examples — Claude handles those fine without them. Your examples should specifically cover the ambiguous or tricky cases that cause inconsistency. If the edge case does not appear in an example, Claude will keep getting it wrong.
Exercise 3 Rewriting vague prompts into specific ones

The three prompts to rewrite

  1. Vague: "Summarise this customer review and tell me if it's positive or negative." — Write a specific version that defines what counts as positive, negative, and mixed.
  2. Vague: "Review this invoice and flag anything unusual." — Write a specific version listing at least 4 concrete categories of what counts as unusual.
  3. Vague: "Analyse this job application and tell me if the candidate is suitable." — Invent a fictional role and write specific criteria with at least 3 things that must be present.

Testing your work

For each prompt pair: run the vague version on 3 different sample inputs. Then run your specific version on the same 3 inputs. Count how many times the output format was consistent. The specific version should score higher.

👋
Check in with your mentor. Share your three rewritten prompts and your consistency test results.
Phase 1 · Lesson 5

Structured output — making Claude's answers usable

What is structured output?

When Claude answers in a conversational paragraph, a human has to read it and decide what to do. That is useful. But when Claude produces output in a consistent, predictable format — the same fields, the same order, every time — that output can feed directly into other systems: a spreadsheet, a database, another AI step. This is called structured output, and it is what makes AI genuinely useful inside a real business process.

The simplest form: a template

Template example
Analyse this contract and fill out the following template exactly. Leave any field as "Not found" if the information is not present.

Client name: [value]
Contract start date: [value]
Key obligations: [value]
Red flags: [value] or None

The fabrication problem

If you make a field required and the source document does not contain that information, Claude will sometimes invent a plausible-sounding value rather than leave the field blank. This is called fabrication. The fix: for any field that might genuinely be absent, add "or Not found" as a valid answer.
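On the receiving side, a fixed template is easy to parse with ordinary code, which is the point of structured output. A minimal sketch using the contract template above, treating "Not found" as absence rather than as a value:

```python
FIELDS = ['Client name', 'Contract start date', 'Key obligations', 'Red flags']

def parse_contract_output(text: str):
    """Turn the filled-in template into a dict, and list absent fields.

    Fields Claude marked 'Not found' (or omitted entirely) are reported
    separately, so downstream steps never mistake them for real values.
    """
    values = {}
    for line in text.splitlines():
        key, sep, value = line.partition(':')
        if sep and key.strip() in FIELDS:
            values[key.strip()] = value.strip()
    absent = [f for f in FIELDS if values.get(f, 'Not found') == 'Not found']
    return values, absent
```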

Pitfall #4
If you require Claude to fill every field and some documents do not contain that information, Claude will fabricate plausible-sounding values. Always provide a "Not found" or "Unclear" option for fields that might genuinely be absent.
Phase 1 · Lesson 6

When it goes wrong — retry logic and self-correction

What is retry logic?

Even with great prompts, Claude sometimes produces output that does not meet your requirements. The fix is a technique called retry with error feedback. You run Claude, check the output, and if it fails, you send it back with a specific description of what went wrong.

A concrete example

Retry message
Your previous response contained an error.

Original input: Invoice dated 15 March 2024
Your output: due_date: "15 March 2024"
Error: The due_date field must be in DD/MM/YYYY format.
Please correct only the error and resubmit.

What retry can and cannot fix

  • Works for: format errors, structural mistakes, values in wrong fields
  • Does NOT work for: information genuinely absent from the source document
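The loop itself is only a few lines of code. In this sketch, `ask` stands in for whatever function sends a prompt to Claude and returns the reply text; it is a placeholder, not a real API call, and the due-date format check matches the retry message example above:

```python
import re

def validate(output: str):
    """Return None if the output meets the format, else an error message."""
    match = re.search(r'due_date:\s*"([^"]*)"', output)
    if match is None:
        return 'Missing due_date field.'
    if not re.fullmatch(r'\d{2}/\d{2}/\d{4}', match.group(1)):
        return 'The due_date field must be in DD/MM/YYYY format.'
    return None

def run_with_retry(ask, prompt: str, max_retries: int = 2) -> str:
    """Run a prompt, feeding format errors back until the output passes."""
    output = ask(prompt)
    for _ in range(max_retries):
        error = validate(output)
        if error is None:
            break
        output = ask('Your previous response contained an error.\n'
                     f'Your output: {output}\n'
                     f'Error: {error}\n'
                     'Please correct only the error and resubmit.')
    return output
```

Capping the retries matters: if the information is genuinely absent, the loop should give up and surface the failure rather than run forever.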
Pitfall #5
Retry loops work for format and structure errors. They do not work for missing information. If the data is not in the source, no amount of retrying will produce it — Claude will either hallucinate or repeat its "not found" answer. Always distinguish between "wrong format" and "genuinely absent."
Exercise 4 Building few-shot examples — the ticket classifier

The scenario

Your client is a software company. They receive support tickets that need to go to one of three teams: Billing, Technical, or Account Management. Tickets are often ambiguous.

Step 1 — Write the base prompt without examples

In a new Claude conversation, write a prompt that describes the three categories and asks Claude to classify each ticket.

Step 2 — Test without examples on these 5 tickets

  • "I cancelled my account last week but was still charged this month."
  • "The dashboard shows my usage as zero even though I've been using it every day."
  • "I need to add two new team members to my workspace — how do I do that?"
  • "My trial expired but I want to continue on the free plan, not the paid one."
  • "The export function downloads a file but my spreadsheet app says it's corrupted."

Step 3 — Add 3 examples and retest

Choose 3 examples that target the most ambiguous cases. Write each showing both the input and the correct output, plus a brief note explaining the reasoning. Retest the same 5 tickets and compare results.

👋
Check in with your mentor. Share which tickets changed classification after adding examples, and what you think caused the change.
Phase 2
Giving your AI superpowers
A Claude conversation on its own is useful. Claude connected to your client's real data and systems is transformative. This phase teaches you how tools work and how to design them so Claude uses them correctly.
Phase 2 · Lesson 7

What a tool is and why it changes everything

The limitation of a pure conversation

So far, Claude has been reasoning purely from the information you provide in the conversation. That is powerful, but limited. What if you want Claude to look up a live order status? Read from a database? Send an email when a condition is met? For that, you need tools.

What a tool is

A tool is a capability you give Claude that lets it interact with the outside world. Think about what this unlocks for clients:

  • A client who tracks sales in a spreadsheet could give Claude a tool that reads from that spreadsheet — Claude can now answer "Who are my top 5 customers this month?" without anyone pulling a report manually.
  • A client with a customer database could give Claude a tool that looks up account information. Claude can now handle queries that reference real, live data.
Mental model
Claude is the brain. Tools are the hands. Claude decides what to do — tools do the physical work of fetching, writing, or triggering things in the outside world. Without tools, Claude can only work with what you give it in the conversation.
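To make this concrete: if you later move from the website to the API, a tool is defined by a name, a description Claude reads to decide when to use it, and a schema for its inputs. The sketch below follows the general shape of the Anthropic Messages API tool format, but the order-lookup tool and its data are invented for illustration:

```python
# An invented order-lookup tool, in the JSON-schema shape used for
# tool definitions in the Anthropic Messages API.
lookup_order_tool = {
    'name': 'lookup_order_status',
    'description': ('Look up the current status of a customer order by its '
                    'order ID. Use this whenever the user asks about a '
                    'specific order.'),
    'input_schema': {
        'type': 'object',
        'properties': {
            'order_id': {'type': 'string',
                         'description': 'The order ID, e.g. ORD-1042'},
        },
        'required': ['order_id'],
    },
}

# The matching implementation your own code runs when Claude calls
# the tool. Claude never executes anything itself; it only asks.
FAKE_ORDERS = {'ORD-1042': 'shipped'}

def lookup_order_status(order_id: str) -> str:
    return FAKE_ORDERS.get(order_id, 'not found')
```

Note the split: the dictionary is what Claude sees (the brain deciding what to do), while the function is what your system executes (the hands doing the work).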
Phase 3
Building AI agents
An agent is Claude doing more than answering a question — it is Claude working through a multi-step task, using tools, making decisions, and adapting as it goes. This is where the real client value lives.
Phase 4
Making it reliable
A demo that works 90% of the time is not a client deliverable. This phase covers the techniques that take your builds from "mostly works" to genuinely reliable.
Before Phase 5
VS Code + Claude Code Setup
Phase 5 teaches you how to configure Claude Code for professional client work. To do those exercises hands-on, you need Claude Code installed and working. Complete this setup now, then move into Phase 5.
Open the setup guide
The interactive VS Code setup guide has click-to-copy commands for every step. Open it now in a new tab and work through it before continuing.
Phase 6
Build your first client project
Everything in this course has been building toward this. You now have the skills to deliver a real AI-powered tool for a real client.
Phase 6 · Capstone project

🏆 UrbanNest — Build your first client deliverable

This is your final project
It brings together every skill from every phase. Set aside 2–3 focused hours. Work through it without rushing.

The scenario

Your client is a property management company called UrbanNest. They manage 340 residential units across 12 buildings. Every day they receive 40–60 maintenance requests from tenants by email. A human coordinator currently reads every email, categorises it, assigns it to the right contractor, and sends an acknowledgement to the tenant. This takes 3–4 hours a day.

They want an AI tool to automate this process. Your job is to design and build it.

Part 1 System design

Before touching Claude, document your answers to all of the following:

  1. Problem analysis: What exactly is being automated? What must remain human?
  2. Tool list: What 4–6 tools does this system need? Write full descriptions for each.
  3. Agent loop: Describe the step-by-step process from receiving an email to completing the workflow.
  4. Categorisation criteria: Define the exact categories and specific criteria for each.
  5. Escalation rules: Define 4 specific conditions that should escalate to the human coordinator.
  6. Programmatic gate: Identify one rule that must be enforced programmatically.
  7. Output format: Design the structured output template for each processed request.
  8. Handoff message: What information must be included when escalating to a human?
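For item 6, "enforced programmatically" means the rule lives in your code rather than in the prompt, so it runs no matter what the model outputs. A hypothetical sketch, where the category names and field names are placeholders for whatever you define in items 4 and 5:

```python
def enforce_gates(request: dict) -> dict:
    """Apply hard rules after Claude's classification, before any action.

    Hypothetical gates: anything categorised as an emergency, or any
    tenant threatening legal or council escalation, goes to the human
    coordinator regardless of what the model decided.
    """
    if request.get('category') == 'emergency':
        request['escalate'] = True
    if request.get('legal_threat'):
        request['escalate'] = True
    return request
```

Because this runs in code, a single misclassified emergency cannot slip through on a bad day for the prompt.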
Part 2 Prompt build

Write the complete system prompt for your main agent. It must include:

  • The agent's role and scope — what it does and what it does not do
  • The categorisation criteria with at least 3 few-shot examples covering ambiguous cases
  • The output template the agent must use for every processed request
  • Explicit escalation rules using the criteria you defined in Part 1
  • Instructions for writing the tenant acknowledgement email
Part 3 Test run

Test your system against these 6 maintenance requests. For each: category assigned, action taken, whether escalation was triggered, and the tenant acknowledgement email.

  1. "Hi, the hot water in unit 4B has been cold for three days. This is really not OK, I have a baby."
  2. "There's a small drip under the kitchen sink. It's not urgent but should probably be looked at. Unit 7A."
  3. "URGENT: water is coming through my ceiling from the unit above. It's getting worse. Unit 2C."
  4. "My front door lock is stiff and takes a few tries to open. Unit 9D."
  5. "This is the fourth time I'm reporting the broken heating. Nobody has come. I'm calling the council tomorrow. Unit 11B."
  6. "Hi, I wanted to check if I could install a dishwasher in my unit. Who do I need to talk to? Unit 6F."
Part 4 Reflection
  1. Which of the 6 requests was hardest for your system to handle? Why?
  2. Did your system correctly identify which requests needed human review?
  3. What would you change in your system prompt after seeing the test results?
  4. What is the one thing your client most needs to understand before deploying it?
🏆
Share your capstone with your mentor. This is your graduation project. Walk Doug through your system design, your prompt, and your test results. This is your first professional AI deliverable.
Congratulations
You have completed AI Builder Bootcamp. You now have the skills to design, build, and deliver AI-powered tools for clients. The only thing left is to go and use them.