Teach Your Workflow to Decide
Add intelligence to your automation. Classify inputs. Route to the right response. Different inputs, different treatment.
Welcome to Mission 4
You've completed Mission 3. You built a workflow. Data flows in. Your packaged AI processes it. The result goes where it needs to go. You stopped pressing the button.
But your workflow treats every input the same way.
That's the problem we fix today.
A smart workflow doesn't just respond. It listens first. It classifies what it's hearing. Then it routes to the right response. Your workflow needs this intelligence layer. Not a new tool. A new way of thinking.
This mission adds logic to your automation. Different inputs get different treatment. A glowing review gets one response. A furious complaint gets another. The pattern: Classify. Route. Respond.
Your Mission Briefs
Five briefs will walk you through the complete picture:
Brief 1: The Linear Workflow Problem - Your Mission 3 workflow is fast. It's not smart. Why linear workflows miss nuance and what it costs. (20 min)
Brief 2: Classify, Route, Respond - The three-step pattern that adds intelligence. Classifiers label. Routers direct. Responses vary. (20 min)
Brief 3: The Tiny Decision Canvas - Design on paper before you wire in a system. If you can't sketch it, you can't build it. (15 min)
Brief 4: Build the Logic Layer - Step-by-step walkthrough of adding classification and routing to your Mission 3 workflow. (30 min)
Brief 5: Test, Validate, and Ship - How to validate your logic layer before your team uses it. (25 min)
Learning Guide
Each brief includes:
- Teaching content that explains the concept
- A worked example showing exactly how it works in practice (using Jordan as your student character)
- A starter prompt to use with your AI Buddy
- A key insight that connects to leadership, not just technology
At the end, you'll have practice challenges and a final project: your upgraded Mission 3 workflow with a logic layer that handles multiple input types intelligently.
The test isn't whether it works for you. The test is whether your team can send different inputs and get appropriately different responses without your involvement.
What You Need Before Starting:
- Your Mission 3 workflow
- A list of different input types your workflow receives
- Your automation platform of choice (Make, Zapier, or equivalent)
- 10 minutes to sketch your decision logic on paper
Rules of the Road:
- Classifiers must output clean labels. One word, no explanation. If your classifier reasons through its answer, your router can't read it.
- The router is dumb logic. It doesn't think. If A then B. That's its job. All the intelligence lives in the classifier.
- Design on paper first. If you can't draw your logic branches on one page, the system is too complex to build. Simplify before you wire.
- Test edge cases, not just happy paths. What breaks your classifier? What inputs don't fit your categories? Find those now.
Brief 1: The Linear Workflow Problem
Teaching: Why One-Size-Fits-All Doesn't Work
Your Mission 3 workflow is fast. It's not smart. That's the problem we fix today.
A linear workflow treats every input the same way. Customer writes praise. System sends response A. Customer writes complaint. System sends response A. That's broken.
Think about how a smart receptionist works. A call comes in. They listen first. "Is this a new lead or an angry customer?" Once they know the answer, they pick the right script. They transfer to the right department. They change their tone. They adjust their urgency.
Your workflow needs the same intelligence. The mental model is simple:
- First listen.
- Then decide.
- Then respond.
Right now you're skipping the listen and decide steps. You're jumping straight to respond.
Every input gets the same treatment. That works if every input is identical. But they're not. Praise should get a warm thank you. Complaints should get an apology and a solution. Suggestions should get documentation and a timeline. One generic response can't do all three well.
A linear workflow is blind to nuance. It's not your fault. You haven't added the decision layer yet.
Worked Example: Jordan's Customer Feedback Automation
Jordan works in customer success at a retail company. In Mission 3, Jordan built a workflow that triggers whenever customer feedback comes in. The workflow reads the feedback. Packaged AI writes a response. The response goes out. It logs the interaction.
The problem: every customer gets the same template response. "Thank you for reaching out. We appreciate your feedback and will review it." Send. Log. Done.
A glowing review gets that response. "Your team was amazing. Best service ever." Response: "Thank you for reaching out. We appreciate your feedback and will review it."
A furious complaint gets the same response. "I've been waiting three days for my package and have no idea where it is. This is ridiculous." Response: "Thank you for reaching out. We appreciate your feedback and will review it."
Both customers got the same thing. The praise customer feels like they're being brushed off. The complaint customer feels like they're being ignored. Neither is satisfied.
After Jordan adds a logic layer:
Feedback comes in: "Your team was amazing!"
Classifier reads it: "This is PRAISE."
Router sends it down the praise path.
Praise response prompt fires: "This customer left praise. Write a warm, genuine thank you. Ask if they'd consider leaving a public review."
Output: "Thank you so much! Your kind words mean everything to our team. Would you consider sharing this review on our site?"
Same workflow. Completely different response. And this one is appropriate.
Second input: "I've been waiting three days for my package."
Classifier: "This is COMPLAINT."
Router sends down complaint path.
Complaint response prompt: "This customer reported a problem. Apologize. Ask what we can do to help. Offer a specific next step."
Output: "I'm truly sorry you're experiencing this. I want to help. Can you reply with your order number? I'll personally look into your tracking right away."
Personalized. Appropriate. Effective.
Key Insight
Before AI responds, it needs to know what it's responding to. Classification comes before response. Always. This is a leadership principle, not a technical one. It's the difference between hearing and listening. A linear system hears input. A smart system listens first, understands context, then chooses a response. That's maturity in automation.
Brief 2: Classify, Route, Respond
Teaching: The Three-Step Intelligence Layer
The pattern has three steps. Learn it once. You'll use it everywhere.
The classifier is an AI step with one simple job: read the input, apply a label, output that label and nothing else.
System prompt: "Classify this input as PRAISE, COMPLAINT, or SUGGESTION. Output ONLY one word. No explanation. No punctuation. Just the word."
That's tight. No reasoning. No explanation. One clean label.
Why? Because your router needs to read that label and route based on it. If the classifier starts explaining itself ("This is clearly PRAISE because the customer said..."), your router can't parse that. It gets confused. The routing logic breaks.
Clean input. Clean output. One word. That's the requirement.
The router is simpler than the classifier because it's not AI. It's logic. If-then. That's all.
If label = PRAISE, go to path A. If label = COMPLAINT, go to path B. If label = SUGGESTION, go to path C.
The router is traffic control. No thinking. Just direction. Which path does this labeled input belong in? Route it there.
This happens in your automation platform. Make. Zapier. Whatever you use. You set up a conditional step. If the classifier output contains the word "PRAISE", route to the praise response step. Simple.
Then each path has its own response step. Same structure. Different AI prompt for each path.
Praise path has an AI prompt designed for happy customers: "This customer left praise. Write a warm, genuine thank you. Keep it brief. Ask if they'd consider sharing this as a review."
Complaint path has a prompt designed for problem-solving: "This customer reported a problem. Start with a genuine apology. Ask what we can do to help. Offer a specific next step."
Suggestion path has a prompt designed for capturing ideas: "This customer shared an idea. Thank them. Confirm you've documented it. Tell them how ideas are reviewed."
Same five-part workflow from Mission 3 (Trigger, Data, AI, Action, Log). But now there's intelligence in the middle. Three classifier categories. Three router paths. Three responses, each tuned for its input type.
One workflow. Three different behaviors. That's the pattern.
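The whole pattern can be sketched in a few lines of code. This is a minimal illustration, not a real build: the `classify` function below uses hypothetical keyword rules as a stand-in for the AI classifier step, and the response prompts are abbreviated versions of the ones above.

```python
# Sketch of the Classify -> Route -> Respond pattern.
# classify() is a keyword-based STAND-IN for the AI classifier step;
# in a real build it would call your packaged AI with the
# one-word-label system prompt.

RESPONSE_PROMPTS = {
    "PRAISE": "This customer left praise. Write a warm, genuine thank you.",
    "COMPLAINT": "This customer reported a problem. Apologize and offer a next step.",
    "SUGGESTION": "This customer shared an idea. Thank them and confirm it is documented.",
}

def classify(feedback: str) -> str:
    """Stand-in classifier: simple keyword rules instead of an AI call."""
    text = feedback.lower()
    if any(word in text for word in ("amazing", "love", "thank")):
        return "PRAISE"
    if any(word in text for word in ("waiting", "crash", "late", "refund")):
        return "COMPLAINT"
    return "SUGGESTION"

def route(label: str) -> str:
    """Dumb router: no thinking, just if-then direction to a prompt."""
    return RESPONSE_PROMPTS[label]

feedback = "Your team was amazing!"
label = classify(feedback)   # one clean label
prompt = route(label)        # the response prompt for that path
print(label, "->", prompt)
```

Notice where the work happens: `classify` carries all the judgment, while `route` is a plain dictionary lookup. That split is the whole pattern.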
Worked Example: Jordan's Complete Three-Step Build
Jordan is building this for customer feedback.
System prompt: "You are a feedback classifier. Read the following feedback. Classify it as PRAISE, COMPLAINT, or SUGGESTION. Output ONLY one word. No explanation. No punctuation. Just the word."
Test inputs and outputs:
- Input: "Your team was so helpful on my last order!" Output: PRAISE
- Input: "My package is late again and I still don't have tracking." Output: COMPLAINT
- Input: "Have you ever thought about offering a subscription?" Output: SUGGESTION
- Input: "I love your product!" Output: PRAISE
- Input: "The app keeps crashing on login." Output: COMPLAINT
Clean outputs. Consistent. One word each. The classifier is working.
If classifier output = "PRAISE", route to Praise Response path. If classifier output = "COMPLAINT", route to Complaint Response path. If classifier output = "SUGGESTION", route to Suggestion Response path.
Jordan sets this up in Make (or Zapier). Conditional logic. Simple and clean.
Praise Response Path:
System prompt: "A customer left positive feedback. Write a warm, genuine thank you. Keep it brief. Two sentences max. End by asking if they'd consider sharing this as a public review. Sound authentic, not corporate."
Sample output: "Thank you so much! Your kind words mean everything to our team. Would you consider sharing your experience on our review page?"
Complaint Response Path:
System prompt: "A customer reported a problem. Start with a genuine apology. Ask what we can do to help. Offer a specific next step you can take immediately. Sound like you care about solving this."
Sample output: "I'm truly sorry you're experiencing this. I want to help you get your package. Can you reply with your order number? I'll personally check your tracking right now and follow up with an update in 2 hours."
Suggestion Response Path:
System prompt: "A customer shared an idea. Thank them for thinking about your business. Confirm you've documented it. Tell them how suggestions are reviewed (e.g., 'Our product team reviews all suggestions monthly'). Sound grateful and professional."
Sample output: "Thank you for this suggestion! We love hearing ideas from our customers. I've documented this and passed it to our product team. We review suggestions monthly and will let you know if we move forward."
Same workflow. Three different starting prompts. Three different outputs. All appropriate to the input type.
Key Insight
The classifier is the foundation. If it can't label correctly, nothing else works. A router with bad input produces bad routing. Responses built on bad classification create confusion. Every decision system's success depends on getting the classification right first. That's a leadership lesson too. You can't route to the right solution if you haven't diagnosed the problem correctly. Diagnosis first. Then routing. Then response.
Brief 3: The Tiny Decision Canvas
Teaching: Design on Paper First
Here's the hard truth: if you can't sketch your logic on paper, you can't wire it in a system.
Before you open your automation platform, grab a piece of paper. Actually grab it. Or open a doc. Something you can write on.
The Tiny Decision Canvas is simple. For each input type you identified, write two things:
First: What does this type mean? A clear definition. Not vague. Specific.
"PRAISE: The customer expresses satisfaction, thanks, or positive emotion. Not reporting a problem. They're happy."
Not: "PRAISE: Good feedback."
Second: What happens next? The action. The response. The follow-up.
"Action: Thank them warmly. Ask if they'd consider a public review. Log as positive sentiment. Follow up: Add to customer success file for renewal conversations."
Not: "Send a response."
Why? Because if your logic is too complex to draw on a page, it's too complex to automate. You'll get confused. Your classifier will get confused. Your router will send things to the wrong place. Your responses will miss the point.
A one-page canvas forces clarity. It forces you to answer hard questions before you build.
Mistake 1: Too many categories. You defined eight types of input. Eight is too many. More than five is a problem. Your classifier will get confused. Your router will have too many paths. Simplify. Combine types that are similar. Narrow to the essential differences.
Mistake 2: Categories that overlap. You defined COMPLAINT and ISSUE. A customer reports a technical issue. Is that COMPLAINT or ISSUE? Both? Neither? If something could fit two categories, you haven't defined them clearly. Go back. Clarify the boundary between categories.
Mistake 3: Missing the action step. You identified types but didn't think through what to do. "We have PRAISE, COMPLAINT, and SUGGESTION." Great. For each one, what happens next? If you can't answer that, you're not ready to build.
Fix these before you move forward.
Worked Example: Jordan's Tiny Decision Canvas for Customer Feedback
INPUT TYPE 1: PRAISE
Definition: The customer expresses satisfaction, thanks, or positive emotion about their experience or service. They might mention what specifically went well. This is not a suggestion for improvement. It's appreciation.
Action:
- Thank them warmly and specifically (reference what they praised)
- Ask if they would consider sharing their experience as a public review
- Log interaction as "positive sentiment"
- Follow-up: Add to customer success file; use in renewal conversations
INPUT TYPE 2: COMPLAINT
Definition: The customer reports a problem, issue, or dissatisfaction. Something Jordan's company did or failed to do caused harm or inconvenience. They're frustrated. They want it fixed.
Action:
- Start with genuine apology (not defensive)
- Understand what the specific problem is (ask clarifying questions if needed)
- Offer a concrete next step (not vague promises)
- Take ownership (don't blame processes or other departments)
- Follow-up: Track resolution; check in to confirm satisfaction
INPUT TYPE 3: SUGGESTION
Definition: The customer proposes an idea, feature, or improvement. They're not describing a problem they're having right now. They're thinking about what would be better. Forward-looking. Constructive.
Action:
- Thank them for thinking about the company
- Confirm you've documented the idea
- Explain where ideas go and when they'll hear back (e.g., "Our product team reviews all suggestions monthly. You'll hear from us by the 15th of next month.")
- Sound grateful, not dismissive
- Follow-up: Monthly review with product team; track status
This canvas is one page. A colleague could read it and understand Jordan's logic immediately. No confusion. No overlap. Clear actions. This is the blueprint for the automated system.
Key Insight
If you can't draw the branches on paper, you can't wire them in a system. This is about clarity before complexity. The best automated systems look simple because the thinking was done first. The worst ones look complicated because the designer tried to build before thinking. Take the time to think on paper. It saves time when you build.
Brief 4: Build the Logic Layer
Teaching: Four Steps to Add Intelligence
You're going to upgrade your Mission 3 workflow to handle multiple input types. You have your Tiny Decision Canvas. You know your categories. You know what should happen for each one. Now you build.
Four steps. 30 minutes if you move fast. An hour if you're thorough. Both are fine.
Step 1: Create Your Classifier Prompt (5 minutes)
Write your system prompt for the classifier AI step.
It must say: "Output ONLY one word" and "No explanation."
No exceptions. If your classifier starts reasoning through its answer, the router can't read that. You need clean labels.
Example: "You are a feedback classifier. Read the following feedback. Classify it as PRAISE, COMPLAINT, or SUGGESTION. Output ONLY one word. No explanation. No punctuation. Just the word."
Tight. Clear. One job.
Now test it. Feed it five different inputs from your categories. Does it output one word consistently? "PRAISE". Not "PRAISE because the customer said..." Just "PRAISE".
If it reasons through its answer, your system prompt isn't tight enough. Tighten it. Make it more direct. Test again.
If it sometimes says "PRAISE/COMPLAINT" (blending categories), your categories overlap. Go back to your Tiny Decision Canvas. Clarify the boundaries.
Don't move to Step 2 until your classifier outputs clean labels consistently.
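One way to enforce this in practice is a small validation step between classifier and router. This is a hedged sketch, assuming your categories are the three used throughout this mission; the `clean_label` helper is hypothetical, not a platform feature.

```python
# Defensive check on classifier output before routing (a sketch).
# Real classifier output can drift: stray whitespace, trailing
# punctuation, or a sentence of reasoning. Normalize it, and fail
# loudly when it isn't one clean label.

VALID_LABELS = {"PRAISE", "COMPLAINT", "SUGGESTION"}

def clean_label(raw_output: str) -> str:
    """Normalize a raw classifier output to one clean label, or raise."""
    label = raw_output.strip().strip(".").upper()
    if label not in VALID_LABELS:
        # An unknown or multi-word output means the classifier prompt
        # isn't tight enough -- better to stop than to misroute.
        raise ValueError(f"Unclean classifier output: {raw_output!r}")
    return label

print(clean_label("  PRAISE. "))  # -> PRAISE
```

Failing loudly here beats silent misrouting: a crashed run tells you to tighten the prompt, while a misrouted complaint tells an angry customer "thanks for the praise."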
Step 2: Build Your Router (5 minutes)
This is the easiest step because it's not AI. It's logic.
In your automation platform (Make, Zapier, etc.), add a conditional step.
If the classifier output contains "PRAISE", route to Path A. If the classifier output contains "COMPLAINT", route to Path B. If the classifier output contains "SUGGESTION", route to Path C.
Simple if-then branching. Every platform has this. Look for "Router", "Conditional", or "Branch" in your tool.
Test it. Feed a labeled input through. Does it route to the right path? Does the output go where it should?
Step 3: Write Three Distinct Response Prompts (10 minutes)
Each path needs its own AI step. Each AI step needs its own system prompt. Each prompt should be designed for that specific input type.
The praise prompt should sound different from the complaint prompt. The tone is different. The action is different. The urgency is different.
Path A (Praise Response): System prompt: "A customer left positive feedback. Write a warm, genuine thank you. Keep it brief. Two sentences max. End by asking if they would consider sharing this as a public review. Sound authentic, not corporate."
Path B (Complaint Response): System prompt: "A customer reported a problem. Start with a genuine apology. Ask what we can do to help. Offer a specific next step you can take immediately. Sound like you care about solving this."
Path C (Suggestion Response): System prompt: "A customer shared an idea. Thank them for thinking about your business. Confirm you have documented it. Tell them how ideas are reviewed. Sound grateful and professional."
Each one tuned. Each one appropriate to the input type. Not generic. Specific to the category.
Step 4: Set Up Logging for Each Path (3 minutes)
You want to track data. Which input types are coming in most? Which paths are being triggered? What's the distribution?
Add a logging step after each path's response. The log should capture:
- Input type (from classifier label)
- Response generated
- Timestamp
- Any additional context
Over a week or month, you'll see patterns. "We're getting 60% praise, 30% complaints, 10% suggestions." That data tells you about your customers or your business. It tells you what's working and what's not.
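If your platform exports the log, a few lines of code can compute that distribution. A minimal sketch, assuming each log entry is a record with a "type" field holding the classifier label (the sample entries below are made up):

```python
# Tally the label distribution from logged interactions (a sketch).
# Assumes each log entry is a dict with a "type" field set from
# the classifier label at logging time.
from collections import Counter

log_entries = [  # sample data standing in for a week of real logs
    {"type": "PRAISE"}, {"type": "COMPLAINT"}, {"type": "PRAISE"},
    {"type": "SUGGESTION"}, {"type": "PRAISE"},
]

counts = Counter(entry["type"] for entry in log_entries)
total = len(log_entries)
for label, n in counts.most_common():
    print(f"{label}: {n}/{total} ({n / total:.0%})")
# e.g. PRAISE: 3/5 (60%)
```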
Worked Example: Jordan's Complete Build
Scenario: Jordan has the Tiny Decision Canvas. Three input types. Clear definitions. Clear actions. Now Jordan builds.
Step 1: Classifier Prompt
System prompt: "You are a feedback classifier. Read the following feedback. Classify it as PRAISE, COMPLAINT, or SUGGESTION. Output ONLY one word. No explanation. No punctuation. Just the word."
Test inputs:
- "Your team was amazing!" → Output: PRAISE ✓
- "The app keeps crashing." → Output: COMPLAINT ✓
- "Have you thought about adding subscriptions?" → Output: SUGGESTION ✓
- "I love your service!" → Output: PRAISE ✓
- "Still waiting for my refund after 2 weeks." → Output: COMPLAINT ✓
Classifier working. Clean outputs.
Step 2: Router in Make
Conditional router step:
```
If classifier_output = "PRAISE" → go to Praise Response step
If classifier_output = "COMPLAINT" → go to Complaint Response step
If classifier_output = "SUGGESTION" → go to Suggestion Response step
```
Test: Input "Your team was amazing!" gets classified as PRAISE and routes to Praise Response step. Works.
Step 3: Three Response Prompts
Praise Response AI step: "A customer left positive feedback. Write a warm, genuine thank you. Keep it brief. Two sentences max. End by asking if they would consider sharing this as a public review. Sound authentic."
Complaint Response AI step: "A customer reported a problem. Start with a genuine apology. Ask what we can do to help. Offer a specific next step. Sound like you care about solving this."
Suggestion Response AI step: "A customer shared an idea. Thank them for thinking about your business. Confirm you have documented it. Tell them when they'll hear back."
After each path fires, log the interaction:
- Type: [from classifier]
- Response: [from AI]
- Timestamp: [automatic]
Updated Workflow Structure:
- Trigger: Customer feedback received
- Data: Extract feedback text
- Classifier AI: Outputs PRAISE, COMPLAINT, or SUGGESTION
- Router: Conditional branch based on label
- Action: Send response via email/platform
- Log: Record type, response, timestamp
Same workflow from Mission 3. Enhanced with logic. Now it's intelligent.
Key Insight
The router doesn't think. It just directs. All the intelligence is in the classifier. That's the inversion from how many people think about automation. You'd expect the "smarts" to be in the routing. But no. The routing is dumb logic. The thinking happens in classification. Get that reversed and your system will fail. This is a leadership principle too. Where are the actual decisions being made in your business? Not in the routing of work. In the diagnosis. In understanding what you're dealing with. That diagnosis is everything.
Brief 5: Test, Validate, and Ship
Teaching: Validation Before Handoff
You've built your logic layer. Your classifier works. Your router routes. Your responses vary. It works when everything goes right.
Now comes validation. There's a difference between "it works for me" and "it works for anyone."
Happy Path Testing
Does it work when everything goes right? When the user provides clear input? When they follow your expected format? Yes? Great. That's baseline. But baseline isn't enough.
Edge Case Testing
What breaks? What happens with unclear input? What happens if someone asks for something outside your design? What happens if they provide too much context? Too little? Wrong format?
Ambiguous input is where classifiers struggle. A customer says "Your product is amazing but I wish it had X feature." Is that PRAISE or SUGGESTION? Both? Your classifier has to pick one. Does it pick consistently?
A customer writes one word: "Crashed." Is that a COMPLAINT? Maybe. But maybe it's incomplete. Your classifier has to decide.
A very long message that mentions multiple issues. Does your classifier stay focused on the overall type? Or does it get confused by multiple signals?
Break your classifier now. Every edge case you find now is a problem your team doesn't hit later.
Adjustment Process
When you hit an edge case that's a real problem (the AI gets confused, the output is wrong, it doesn't follow your standards), you adjust.
Adjust what? Usually one of three things:
- Your classifier prompt. Make it tighter. More specific. Add examples of edge cases to the system prompt.
- Your categories. Maybe your overlap is bigger than you thought. Maybe you need four categories instead of three.
- Your router. Maybe a gray-area input should go to a default path instead of trying to pick between two.
Test. Adjust. Test again.
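The default-path idea can be sketched in code. This is an illustration with hypothetical path names, not a platform recipe: `route` stands in for your conditional step, and `"human_review"` is a made-up fallback destination.

```python
# Sketch of routing with a default path for gray-area labels.
# Instead of forcing an ambiguous input into the wrong category,
# anything the router doesn't recognize goes to a review path.

ROUTES = {  # hypothetical path names for illustration
    "PRAISE": "praise_response",
    "COMPLAINT": "complaint_response",
    "SUGGESTION": "suggestion_response",
}

def route(label: str) -> str:
    # dict.get() with a default keeps the router dumb and safe:
    # an unknown label never crashes the workflow or misroutes.
    return ROUTES.get(label.strip().upper(), "human_review")

print(route("COMPLAINT"))         # -> complaint_response
print(route("PRAISE/COMPLAINT"))  # -> human_review
```

Most automation platforms offer the same thing as a "fallback" or "else" branch on the router. Use it.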
Validation Timeline
Don't scale this to your team after one day of testing. Run real inputs through for a week. Collect 20 or 30 real examples. Let your classifier label them. Do you like the distribution you're seeing? Are there inputs that don't fit any category? That tells you whether your canvas was complete.
When you're confident that your team can use this without asking you questions and without the system breaking, it's ready to ship.
Practice Challenges (45 minutes total)
Your AI Buddy is waiting to help with each of these.
Challenge 1: Complete Your Tiny Decision Canvas (15 minutes)
Take your input types. Define each one clearly. Write the action for each one. Make it one page.
Get specific. Not vague.
Not: "PRAISE - positive feedback. Action: respond positively."
But: "PRAISE - The customer expresses satisfaction or appreciation for a specific experience. Action: Thank them warmly, reference what they praised, ask for public review."
> Want to learn more first? Ask your AI Buddy: "Help me complete my Tiny Decision Canvas. Here's my workflow and my input types: [share them]. For each type, help me write a clear definition (what distinguishes it from the others?) and the specific action I should take."
What good looks like: Three to five clearly defined input types. For each, a definition that distinguishes it from the others. For each, an action that's specific and concrete.
Common mistakes: Categories that overlap. Actions that are too vague. Missing categories that don't fit your three types.
Challenge 2: Build and Test Your Classifier (10 minutes)
Write the classifier system prompt. The one that will output clean labels.
Then feed it five test inputs. At least one of each type. Ideally include one edge case. Document the outputs. Did it classify correctly every time?
> Want to learn more first? Ask your AI Buddy: "Help me write a classifier system prompt for my categories: [list them]. The prompt needs to output ONLY one word per input with no explanation. Then help me come up with 5 test inputs (at least one per category, one edge case). What should the classifier output for each?"
What good looks like: A tight system prompt that produces one-word outputs consistently. Five test cases with documented results. All outputs match your expected labels.
Common mistakes: Prompts that allow explanation. Test inputs that are all clear examples (no edge cases). Not verifying the outputs are actually one word.
Challenge 3: Add the Router (10 minutes)
Wire the if-then logic in your automation platform. If the classifier outputs PRAISE, direct to Path A. If COMPLAINT, Path B. If SUGGESTION, Path C.
Test that the routing logic works. Feed through one example of each type. Does each one go to the right path?
> Want to learn more first? Ask your AI Buddy: "Help me set up the router in [your platform]. I have three classifier outputs (PRAISE, COMPLAINT, SUGGESTION) and three response paths. Walk me through the conditional logic I need to set up. Then help me test that each output routes to the right path."
What good looks like: A working conditional router. Three test inputs that each route to the correct path. No errors.
Common mistakes: Router configured for the wrong values (looking for an exact match on "PRAISE" when the classifier outputs "PRAISE." with trailing punctuation, or "Praise" with different casing). Not testing the routing.
Challenge 4: Test All Paths (10 minutes)
Run one complete test for each path. Pick a realistic input. Let it flow through the entire workflow (trigger to log). Document the output.
Does the praise path sound warm? Does the complaint path sound helpful? Does the suggestion path sound grateful?
> Want to learn more first? Ask your AI Buddy: "I've built my complete workflow with three paths. Help me test each path. For each path, suggest a realistic example input. Then I'll run it through and show you the output. Does each path produce the right kind of response?"
What good looks like: Three complete workflow tests (one per path). Each test shows input, classifier output, router decision, and final response. Responses are appropriate to the input type.
Common mistakes: Testing only happy path, not edge cases. Not checking if responses match the intended tone. Moving forward without validating each path separately.
Final Project: Your Upgraded Mission 3 Workflow
This is the real deliverable. Not a practice challenge. Your actual workflow with a logic layer. Your Mission 3 workflow, upgraded.
Part 1: Your Tiny Decision Canvas
List your input types. Minimum three. For each type, write:
- Clear definition (what distinguishes it?)
- Specific action (what happens next?)
Keep to one page. If it doesn't fit on one page, simplify.
Submit either:
- A photo of your handwritten canvas, OR
- A typed version in a doc
Clarity over presentation.
Part 2: Your Classifier System Prompt
The exact system prompt for your classifier AI step.
Include:
- Your categories (PRAISE, COMPLAINT, SUGGESTION, etc.)
- The "Output ONLY one word" constraint
- Any specific guidance on edge cases
> Example: "You are a feedback classifier. Read the following feedback. Classify it as PRAISE, COMPLAINT, or SUGGESTION. Output ONLY one word. No explanation. No punctuation. For ambiguous inputs that mention both praise and ideas, prioritize the dominant emotion."
Part 3: Your Complete Workflow Map
Draw or describe your updated workflow from end to end:
- Trigger: [what triggers the workflow?]
- Data: [what data flows in?]
- Classifier: [your classifier step]
- Router: [your three conditional paths]
- Action: [how does output go out?]
- Log: [what gets logged?]
Describe or draw this clearly. A colleague should be able to read your workflow map and understand how logic flows.
Part 4: Test Results Documentation
Run three complete workflow tests. One input per category.
For each test, document:
- Input: What did the customer say?
- Classified As: What did the classifier output?
- Path Taken: Which path did the router direct to?
- Final Response: What was the output?
- Assessment: Was this response appropriate for the input type? Why or why not?
Test 1:
- Input: "Your team was amazing on my last order!"
- Classified As: PRAISE
- Path Taken: Praise Response
- Final Response: "Thank you so much! Your kind words mean everything to our team. Would you consider sharing this review?"
- Assessment: Perfect. Warm tone, specific reference, asks for review. Appropriate for praise.
Part 5: Edge Case Testing
Identify one edge case. An input that's ambiguous or tricky.
Run it through your system.
Document:
- Edge Case Input: What made this tricky?
- Classified As: What did the classifier output?
- Was it correct? Did it classify appropriately?
- What you learned: Did this reveal any gaps in your system?
This shows you tested beyond happy path.
Before You Submit Checklist
- [ ] I've completed my Tiny Decision Canvas with minimum three input types
- [ ] Each type has a clear definition that distinguishes it from others
- [ ] Each type has a specific action that's concrete, not vague
- [ ] I've written a classifier system prompt that outputs one word only
- [ ] I've tested my classifier with five different inputs including one edge case
- [ ] My classifier outputs clean labels (one word, no explanation) consistently
- [ ] I've set up the router logic in my automation platform (Make, Zapier, etc.)
- [ ] I've tested that inputs route to the correct paths
- [ ] I've written three distinct response prompts (one for each path)
- [ ] I've tested all three paths with realistic examples
- [ ] Each response is appropriate in tone and action for its input type
- [ ] I've set up logging to capture type, response, and timestamp
- [ ] I can explain my workflow map clearly to a colleague
- [ ] My submission includes all five parts (Canvas, Classifier, Workflow Map, Test Results, Edge Case Testing)
Key Takeaways
- Before AI responds, it needs to know what it's responding to. Classification comes before response. Every time. This is both a logic principle and a leadership principle.
- The classifier is the foundation. If it can't label correctly, nothing else works. Bad classification breaks routing. Bad routing creates bad responses. Get classification right first.
- The router doesn't think. It just directs. All the intelligence is in the classifier. The router is dumb logic (if A then B). Don't expect the router to be smart. Make the classifier smart.
- Design on paper before you wire in a system. If you can't sketch your logic on one page, the system is too complex. Simplify before you build.
- This pattern works across all four offices. Classify. Route. Respond. Whether you're in operations, delivery, revenue, or insights, this pattern applies. It's not mission-specific. It's a fundamental structure for intelligent systems.
- Personalization at scale. You're now treating different inputs differently automatically. Different tones. Different actions. Different responses. Same efficiency. That's what scale looks like in the AI era. Not automation that's one-size-fits-all. Automation that adapts.
- You're adding intelligence, not just speed. An automated system without classification is fast but dumb. Fast and dumb is the opposite of useful. Add the logic layer and you've got something that actually works.
- Test edge cases, not just happy paths. Your system works when everything is clear and fits your categories. Real life is messy. Test what breaks. Test ambiguous inputs. Fix those before your team uses this.
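Two of the takeaways above, "the classifier is the foundation" and "real life is messy", meet in one practical habit: never trust raw classifier output. A minimal Python sketch (label names from this lesson; the `NEEDS_REVIEW` fallback is an assumption, not a required category) shows how to normalize the label and catch anything unexpected before it reaches the router:

```python
# Labels the router knows how to handle (from the lesson's example).
VALID_LABELS = {"PRAISE", "COMPLAINT", "SUGGESTION"}

def normalize_label(raw: str) -> str:
    """Clean up raw classifier output and reject anything unexpected.

    Real AI output can arrive with stray whitespace, casing, or a
    trailing period. Normalize first; route unknowns to a safety net.
    """
    label = raw.strip().upper().rstrip(".")
    return label if label in VALID_LABELS else "NEEDS_REVIEW"

print(normalize_label("  praise \n"))          # PRAISE
print(normalize_label("Complaint."))           # COMPLAINT
print(normalize_label("I think it's praise"))  # NEEDS_REVIEW
```

The third case is the important one: a classifier that explains itself breaks routing, so anything that isn't a clean known label gets flagged for human review instead of silently taking the wrong path.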
Your Commitment
Name one workflow you're going to add a logic layer to this week. Not someday. This week.
It could be the Session 3 workflow you're upgrading here. It could be something different. Whatever it is, the test is whether the logic actually makes a difference. Does your system give different inputs different treatment? If yes, you've succeeded.
What's your one thing? Describe it briefly. The workflow. The input types. The key decision you're making.
Checkpoint: Quick Knowledge Check
Answer these five questions. Your AI Buddy will tell you if you're on track.
Question 1: What is the purpose of a classifier in a workflow?
A) To generate responses to customer inputs
B) To read input and assign a single label to indicate type or category
C) To route data to the correct database
D) To log all interactions
Correct Answer: B - A classifier's single job is to read an input and assign a clean label. The label tells the system what type of input it is so the router can direct it appropriately. The classifier is pure diagnosis.
Question 2: What makes a good classifier system prompt?
A) It explains the reasoning behind the classification
B) It outputs one word with no explanation, using a tight constraint
C) It provides three possible classifications so the AI can choose the best one
D) It includes examples of every possible input
Correct Answer: B - A good classifier outputs one clean word (PRAISE, COMPLAINT, SUGGESTION) with absolutely no explanation or reasoning. If it explains itself, the router can't parse it. Tightness is the requirement.
Question 3: What does the router do in a Classify, Route, Respond system?
A) It uses AI to decide which path is best
B) It applies logic (if-then) to direct input to different paths based on the classifier label
C) It writes the response for each input
D) It tests the classifier for accuracy
Correct Answer: B - The router is dumb logic. If the classifier says "PRAISE", route to Path A. If "COMPLAINT", route to Path B. No AI. No thinking. Just directing based on the label.
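The "dumb logic" in this answer can be written out as plain if-then statements. A Python sketch for illustration only; in Make or Zapier this is a router step with filter conditions, not code, and the path names here are made up:

```python
def route(label: str) -> str:
    """Dumb router: no AI, no thinking. Just directing based on the label."""
    if label == "PRAISE":
        return "praise_response_path"
    elif label == "COMPLAINT":
        return "complaint_response_path"
    elif label == "SUGGESTION":
        return "suggestion_response_path"
    else:
        return "human_review_path"  # safety net for unexpected labels

print(route("PRAISE"))  # praise_response_path
```

Notice there is zero intelligence here. All the judgment lives in the classifier that produced the label; the router only matches strings.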
Question 4: What is the Tiny Decision Canvas and why does it matter?
A) A template for writing system prompts for classifiers
B) A one-page design tool where you define input types, their meanings, and corresponding actions before building
C) A testing framework for validating classifier accuracy
D) A log file that tracks all decisions made by the system
Correct Answer: B - The Tiny Decision Canvas is your blueprint. For each input type, you define what it means and what should happen. It forces clarity before complexity. If you can't sketch it on one page, the system is too complicated to build.
Question 5: Why does the Classify, Route, Respond pattern work across all four offices?
A) It only applies to customer-facing workflows
B) It depends on the tool you're using
C) It's a fundamental structure for intelligent decision-making that applies anywhere you get different input types that need different treatment
D) It requires specific AI models to work
Correct Answer: C - This pattern isn't specific to customer service. A revenue officer might classify sales leads. A delivery officer might classify project requests. An insight officer might classify data sources. Different domains. Same pattern. Same power.
Certificate of Completion
You've completed Mission 4. You've added intelligence to your automation.
You came in with a workflow that treated every input the same way. You're leaving with a system that classifies input, routes appropriately, and responds with nuance.
In Session 5, you'll take this logic layer and turn it into an agent. A system that doesn't just respond to requests. A system that sees what needs doing and does it. A system that runs without waiting for your trigger.
That's when automation becomes truly powerful.
Course Experience Survey
Your feedback shapes the next sessions. Take two minutes. [Link to survey]
Words to Know
Classifier - An AI step that reads input and outputs a single label or category. One job. One output. Clean and tight.
Router - Logic (if-then) that directs input to different paths based on the classifier label. Not AI. Just smart traffic control.
Logic Layer - The decision-making intelligence added to a workflow between input and response. What makes workflows smart instead of just fast.
Decision Tree - A branching structure where each branch represents a different input type and its corresponding actions. Like a flowchart.
Input Type - A distinct category of input that should be handled differently from other categories. Different tone. Different action. Different follow-up.
Classification - The process of assigning a label to input based on its characteristics. Diagnosis before response.
Tiny Decision Canvas - A one-page template for defining input types, what they mean, and what happens next. Design before build.
Branching Logic - Conditional pathways in a workflow where different inputs take different routes based on their classification.
Personalization at Scale - Treating different inputs differently (different tones, different actions, different responses) automatically while maintaining efficiency. That's maturity in automation.
Conditional Path - A route through a workflow that is triggered only when a specific condition (like a classifier label = PRAISE) is met. Different paths for different types.
Edge Case - An unusual input that doesn't fit your expected categories or that's ambiguous. The thing that breaks your system if you don't think about it.
> Your AI Buddy is always here. If any term in this list is unclear, ask your AI Buddy: "Explain [term] in plain language with an example from [your role or industry]."
Prompt Library
DEFINING YOUR INPUT TYPES
```
I have a workflow that treats every input the same way. Help me identify different input types that should get different treatment.

My workflow is: [describe your trigger and what the workflow does].

What are three to five distinct types of input my system might receive? For each type, what makes it different? What should the response be?

Once we identify the types, help me define each one clearly.
```
WRITING A TIGHT CLASSIFIER PROMPT
```
I need to classify inputs into these categories: [list your categories].

Write a classifier system prompt that:
- Tells the AI to output ONE word only
- Lists the categories clearly
- Says "No explanation"
- Includes guidance on edge cases: [describe any ambiguous situations]

Then give me 5 test inputs (at least one per category, one edge case) and tell me what the classifier should output for each.
```
BUILDING YOUR TINY DECISION CANVAS
```
Help me design my Tiny Decision Canvas. I have these input types: [list them].

For each type, help me write:
1. A clear definition (what distinguishes this type from others?)
2. The specific action I should take (not vague, concrete)

Make this simple and clear. One page. By the end I should be able to show this to a colleague and they'd understand my logic immediately.
```
SETTING UP YOUR ROUTER
```
I'm building a router in [your platform: Make, Zapier, etc.].

I have three classifier outputs: [list them, e.g., PRAISE, COMPLAINT, SUGGESTION].

I have three corresponding AI response steps: [list them].

Walk me through the conditional logic I need to set up. If classifier output = X, route to step Y.

Then help me test that each type actually routes to the right step.
```
TESTING YOUR CLASSIFIER EDGE CASES
```
I've built a classifier that outputs: [list your categories].

Here are ambiguous inputs my system might receive: [describe 2-3 edge cases].

For each one, run it through my classifier. What does it output? Is that the right classification? If not, how should I adjust my classifier prompt or my category definitions?

Help me identify gaps in my system before my team uses it.
```
ACROSS THE FOUR OFFICES: CLASSIFY, ROUTE, RESPOND
```
I'm a [revenue officer / insight officer / delivery officer / trust officer].

Help me identify one workflow in my role that could benefit from a Classify, Route, Respond logic layer. What are the input types? What are the different response paths? What would the classifier need to determine?

Sketch out my logic layer using the Tiny Decision Canvas.
```
> More prompts added each session. [PROMPT LIBRARY - Full version in Lark, link provided by Kate]
End of Mission 4: Teach Your Workflow to Decide