From Prompts to Packaged AI
Turn a great prompt into a reusable AI tool your team can use on day one without asking you a single question.
Welcome to Mission 2
You've completed Session 1. You identified opportunities. You selected one. You defined the problem. You set a FAST goal with a number attached.
Now comes the part that turns that work into something real.
You're about to build your first Packaged AI prototype. Not a concept. Not a plan. A working AI tool that someone on your team could use this week without asking you a single question.
This is where good prompts become great systems.
Your Mission Briefs
Five briefs will walk you through the complete picture:
Brief 1: The Setup Time Problem - Why the invisible cost of setup time is the problem you're actually solving. (20 min)
Brief 2: Anatomy of Packaged AI - The three components (Data, Instructions, Interface) and how they fit together. (20 min)
Brief 3: Platform Options - ChatGPT Custom GPTs vs Claude Projects vs Gemini Gems. Same pattern. Different tools. (15 min)
Brief 4: Build Your Prototype - Step-by-step walkthrough of building your first packaged AI. (30 min)
Brief 5: Test and Ship - How to validate your build before you hand it to your team. (20 min)
Learning Guide
Each brief includes:
- Teaching content that explains the concept
- A worked example showing exactly how it works in practice
- A starter prompt to use with your AI Buddy
- A key insight that connects to leadership, not just technology
At the end, you'll have practice challenges and a final project: your actual Packaged AI prototype for your selected item from Session 1.
The test isn't whether it works for you. The test is whether someone else can use it on day one without asking you anything.
Brief 1: The Setup Time Problem
Teaching: The Real Cost of Starting Over
Here's what nobody measures: the time you spend setting up AI before you can actually use it.
You open a tool. You explain your role. You upload the same file you uploaded yesterday. You describe the format you need. You re-explain the context. This takes five minutes. Maybe 10 if it's complex.
That doesn't sound like much. But it happens four times a day. Five days a week. That's 100 minutes per week. That's 86 hours per year on setup alone. On one repeated task.
Most people have five of these tasks. Now multiply by your team. Even counting just one task per person, a team of 10 loses 860 hours per year. That's a full-time person spending time getting ready to work instead of working.
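The arithmetic here is easy to sanity-check. A minimal sketch, using the example figures from this brief (5 minutes of setup, 4 times a day, 5 days a week):

```python
# Setup-time cost calculator using the example figures from this brief.
MINUTES_PER_SETUP = 5
SETUPS_PER_DAY = 4
WORKDAYS_PER_WEEK = 5
WEEKS_PER_YEAR = 52

minutes_per_week = MINUTES_PER_SETUP * SETUPS_PER_DAY * WORKDAYS_PER_WEEK  # 100
hours_per_year = minutes_per_week * WEEKS_PER_YEAR / 60  # ~86.7; the brief rounds to 86

team_size = 10
team_hours_per_year = hours_per_year * team_size  # ~867 for a team of 10

print(f"Per person: {hours_per_year:.1f} hours/year")
print(f"Team of {team_size}: {team_hours_per_year:.0f} hours/year")
```

Swap in your own numbers to estimate the cost for your team.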
Nobody tracks it because it feels like working. Setup time is invisible until you measure it. Then it's everywhere.
The problem you identified in Session 1 is actually a setup time problem. Every time you manually do what AI could do, you're paying this hidden cost. Packaging solves it.
Worked Example: The Marketing Manager
Sarah manages content for a mid-size SaaS company. Every day, marketing teams ask her to turn rough ideas into brief formats. She opens Claude. She pastes in the company brand guide. She explains the brief template. She shows an example. She uploads the request. She gets the output.
Five minutes. Every single time. Four times a day.
When she packages it: She creates a Claude Project called "Content Brief Generator." She uploads the brand guide as permanent context. She writes system instructions using RACE that explain the template, the tone, the format. She adds conversation starters: "Turn this idea into a brief," "Brief for a technical audience," "Brief for a business audience." She uploads the template as a reference file.
Now her team opens it. They see three conversation starters. They pick one. They type their idea. The AI generates the brief using her standards, her template, her tone. Two minutes instead of five. And no Sarah in the loop.
The prompt worked. But the system is what scaled.
> Want to learn more first? Ask your AI Buddy: "Walk me through the financial impact of setup time on my team. How much time does my team spend each week getting AI ready versus actually using it?"
#### Key Insight
The prompt worked. The system didn't. Setup time is the invisible tax on every AI user. Packaging is how you eliminate it so your team has time to do actual work.
Brief 2: Anatomy of Packaged AI
Teaching: Three Components That Make It Work
Every Packaged AI tool has three components. Custom GPTs have them. Claude Projects have them. Gemini Gems have them. Get all three right and your tool just works. Miss one and your team needs training.
Component 1: Data
This is what your AI knows. Not what the user tells it. What it already has.
If you're building a customer response tool, this is your FAQ. Your tone guidelines. Your common product issues and how to address them. Your pricing. Your policies.
If you're building a content tool, this is your brand guide. Your templates. Your style examples. Your do's and don'ts.
If you're building a sales tool, this is your product info. Your competitive positioning. Your case studies. Your talking points.
Data is the foundation. Your Packaged AI is only as good as what you feed it. Upload the real files. Not summaries. Not descriptions. The actual documents your team uses.
Component 2: Instructions
This is what your AI does. Not in general. Specifically. Using RACE.
Role: Who is the AI acting as? A customer service manager. A content editor. A sales coach. Be specific.
Action: What exactly does it do? Not "help with customer responses." But "take a customer email complaint, provide an empathetic response using our tone guidelines, and flag any issues that need escalation." Specific.
Context: What are the constraints? What format? What should it always do? What should it never do? "Always check the FAQ first. Never promise a timeline we can't meet. Always include a reference number."
Example: What does good output look like? Show it. Show bad output too. Show why it's bad. The more specific your instructions, the less explanation your team needs.
Component 3: Interface
This is how users interact. The name. The description. The conversation starters.
Name: Tells someone what this does without explanation. "Email Tone Adjuster." "Contract Reviewer." "Content Brief Generator."
Description: One sentence. What does it do? For whom? What format? "Takes customer emails and suggests responses using our tone guidelines and FAQ."
Conversation Starters: Pre-written prompts that appear when users open the tool. These are gold. They guide users to the right input. They show what the tool can do without a manual. "Respond to a complaint," "Respond to a feature request," "Respond to billing confusion."
Get all three components right and your team can use it without training. They open the tool. They see what it does. They see examples. They pick a conversation starter. They get the output. No questions.
Worked Example: Email Tone Adjuster
We'll use the Email Tone Adjuster throughout this brief. Here's how all three components work together.
Data:
- Company tone guidelines (friendly, helpful, never defensive)
- FAQ document with 50 common customer issues
- 10 example emails and ideal responses
- Pricing page and policies
- Common complaint scenarios and escalation rules
Instructions (RACE):
"You are an expert customer service manager who specializes in tone and empathy. You take incoming customer emails and generate professional, empathetic responses following our tone guidelines and FAQ.

Role: Act as a senior customer service manager for our team.

Action: For each customer email provided, (1) understand the core issue, (2) check the FAQ for relevant answers, (3) generate a response that's empathetic, helpful, and on-brand, (4) if the issue requires escalation, flag it.

Context: Always prioritize empathy. Never promise timelines we can't keep. Always include a reference number. Always check the FAQ first before generating new answers. Format responses as: [Response Text] [Escalation Flag: Yes/No] [Reason if escalated].

Example: [show a sample customer email and an ideal response]"
Name: Email Tone Adjuster
Description: Takes customer emails and generates on-brand responses using our FAQ and tone guidelines.
Conversation Starters:
- "Respond to a billing complaint"
- "Respond to a feature request"
- "Respond to a technical issue with escalation check"
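If it helps to see the anatomy as a single structure, the three components can be sketched as a plain config object. This is purely illustrative: the field names and file names below are assumptions for this example, not any platform's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class PackagedAI:
    """Illustrative sketch of the three components of a Packaged AI."""
    # Interface: what users see when they open the tool
    name: str
    description: str
    conversation_starters: list[str]
    # Instructions: the RACE system prompt that runs every time
    system_prompt: str
    # Data: the reference files the tool already knows
    data_files: list[str] = field(default_factory=list)


email_tone_adjuster = PackagedAI(
    name="Email Tone Adjuster",
    description=(
        "Takes customer emails and generates on-brand responses "
        "using our FAQ and tone guidelines."
    ),
    conversation_starters=[
        "Respond to a billing complaint",
        "Respond to a feature request",
        "Respond to a technical issue with escalation check",
    ],
    system_prompt="Role: Act as a senior customer service manager...",  # full RACE prompt goes here
    data_files=["tone_guidelines.pdf", "faq.pdf", "example_emails.pdf"],  # hypothetical file names
)
```

Whatever platform you build on, you are filling in exactly these five slots.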
> Want to learn more first? Ask your AI Buddy: "For my selected item from Session 1, break down what data, instructions, and interface I need to build a Packaged AI. Be specific about what files to upload, what RACE instructions to write, and what interface elements to include."
#### Key Insight
Get all three components right and your team can use it without training. Miss one and you become the manual. Data without clear instructions means your team has to figure out how to use the information. Instructions without data means the AI has to ask for context every time. Interface without conversation starters means users have to know what to ask. Three components. All three matter.
Brief 3: Platform Options
Teaching: Same Pattern, Different Tools
All three major platforms support Packaged AI. The names are different. The architecture is identical.
ChatGPT Custom GPTs
Custom GPTs let you upload files, write system instructions, and give your GPT a name and description. Users access it through ChatGPT. They can use it as long as you've shared it with them or published it. Data goes into file uploads or knowledge. Instructions go into the system prompt. Interface is the name and description and conversation starters.
Strength: Most people already use ChatGPT. Low friction. Everyone knows how to chat.
Limitation: All conversations are in ChatGPT. Less integration with other tools.
Claude Projects

Claude Projects let you upload files, write custom instructions, and name your project. Users access projects from the top of Claude. Each project has its own context window. Data goes into project files. Instructions go into the custom instructions area. Interface is the project name, description, and conversation starters.
Strength: Separate context means cleaner conversations. You can build multiple Projects without interference. Good if you're already using Claude.
Limitation: Requires a Claude subscription (which you already have if you use Claude Pro or Team).
Gemini Gems

Gemini Gems work similarly. You build a Gem with custom instructions and uploaded context. Users access it through Gemini. Data goes into custom context. Instructions go into the system prompt. Interface is the name and description.
Strength: Gemini is increasingly popular. Good integration with Google Workspace tools.
Limitation: Newest platform. Still growing.
The pattern is identical across all three. Data. Instructions. Interface. Learn it once, build on any platform.
Worked Example: Email Tone Adjuster on All Three
The Email Tone Adjuster (from Brief 2) works identically on all three platforms.
On ChatGPT Custom GPTs:
- Upload FAQ, tone guidelines, and examples as knowledge files
- Set system instructions with RACE
- Name: "Email Tone Adjuster"
- Add conversation starters in the GPT configuration
- Share with your team
On Claude Projects:
- Upload FAQ, tone guidelines, and examples as project files
- Set custom instructions with RACE
- Name project: "Email Tone Adjuster"
- Add conversation starters in the welcome message
- Share the project with teammates
On Gemini Gems:
- Upload FAQ, tone guidelines, and examples as custom context
- Set custom instructions with RACE
- Name: "Email Tone Adjuster"
- Add conversation starters
- Share with your team
Same tool. Same outcome. Different interface.
> Want to learn more first? Ask your AI Buddy: "Which platform should I use to build my Packaged AI? I'm planning to build [describe your item]. Help me pick between ChatGPT Custom GPTs, Claude Projects, and Gemini Gems based on my workflow and team setup."
#### Key Insight
The platform is the tool. The pattern is the skill. Learn it once, build on any platform. The best choice is the platform your team is already using.
Brief 4: Build Your Prototype
Teaching: Step-by-Step Build Process
You're going to build your Packaged AI for your selected item from Session 1. You have the problem statement. You have the FAST goal. Now you build.
Five steps. 30 minutes total if you move fast. An hour if you're thorough. Both are fine.
Step 1: Create and Name Your Tool (2 minutes)
Open your platform. ChatGPT, Claude, or Gemini. Create a new Custom GPT, Project, or Gem.
Give it a name that explains what it does. Not "AI Tool 1." But something that tells someone what this does in plain language. "Content Brief Generator." "Customer Email Responder." "Product Comparison Tool."
Write a one-sentence description. What does it do? For whom? What format?
Example: "Turns product feedback into prioritized feature requests using our product strategy framework."
This is the start of your interface. Users will see this. It needs to make sense on its own.
Step 2: Write Your System Prompt (5 minutes)
These are your instructions, in RACE format. Write them as one prompt or split them into sections, depending on your platform.
Role: Who is this AI acting as? Be specific.
Action: What exactly does it do? A real action. With specifics.
Context: What are the rules? What format? What should it always do? What should it never do?
Example: Show what good output looks like. Show what bad output looks like.
Don't skip the example. The AI learns from examples better than from rules.
Make your RACE as specific as your domain allows. The more specific, the less your team has to explain every time.
Step 3: Identify and Upload Your Data (5 minutes)
What does your AI need to know to do this right?
If it's a customer tool: tone guidelines, FAQ, common issues, pricing, policies.
If it's a content tool: brand guide, templates, style examples, tone samples.
If it's a sales tool: product info, positioning, case studies, talking points, competitor comparison.
Upload the actual files. PDFs. Word docs. Text files. The real documents your team uses.
If you don't have these documents yet, create them. Or start with a summary version. The key is: your AI has your standards built in. It doesn't have to ask every time.
Step 4: Add Conversation Starters (3 minutes)
Think about the three most common ways your team will use this tool.
What are the most common requests? The most common starting points?
Write three conversation starters that guide users toward these common use cases.
Examples for an Email Tone Adjuster:
- "Respond to a billing complaint"
- "Respond to a feature request"
- "Respond to a technical issue"
Examples for a Content Brief Generator:
- "Turn this idea into a brief for a technical audience"
- "Turn this idea into a brief for a business audience"
- "Quick one-liner brief for a blog post"
These starters do two things. They guide users. And they show the AI a clean starting point. Users get value in under 10 seconds without training.
Step 5: Test the Happy Path (5 minutes)
Test it. Use a real example from your work. Not a fake example. Real.
Does it work the way you designed it? Is the output usable? Can someone on your team take it and run?
If it works: move to edge cases.
If it doesn't: adjust your instructions or data and test again.
You're not shipping until the happy path works.
Worked Example: Complete Build
We'll walk through building a complete Packaged AI for a specific problem.
Scenario: Sarah is a product manager at a SaaS company. In Session 1, she identified that the team spends hours turning customer feedback into prioritized feature requests. Her FAST goal: "Reduce time to turn feedback into prioritized request from 45 minutes to 15 minutes in the next 30 days." She selected this as her item.
Step 1: Create and Name
Platform: Claude Projects (she already uses Claude)
Name: "Feature Request Prioritizer"
Description: "Turns customer feedback into prioritized feature requests using our product strategy framework."
Step 2: Write System Prompt (RACE)
Role: You are a senior product manager who specializes in customer feedback analysis and feature prioritization.
Action: For each piece of customer feedback provided, you will: (1) Extract the core need or problem, (2) Map it to our product strategy framework, (3) Generate a structured feature request using our template, (4) Assign a priority level (critical, high, medium, low) based on our prioritization rules.
Context: Always reference our product strategy framework first. Categorize feedback as bug, small enhancement, medium feature, or major initiative. Critical issues are those affecting core workflows or security. High priority items solve problems for multiple customer segments. Never assign critical or high priority without clear business impact. Always include the original customer quote. Format as: [Feature Request Template with all required sections filled].
Example: [Show a real customer feedback example and the ideal feature request output]
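The prioritization rules in Sarah's Context section are effectively a small decision tree. A hypothetical sketch of that logic (the function name, parameters, and thresholds are inferred from the rules above, not from any real document):

```python
def assign_priority(affects_core_workflow: bool,
                    is_security_issue: bool,
                    segments_affected: int,
                    has_clear_business_impact: bool) -> str:
    """Hypothetical encoding of the prioritization rules described above."""
    # Critical: issues affecting core workflows or security
    if affects_core_workflow or is_security_issue:
        return "critical"
    # High: solves problems for multiple customer segments,
    # but never without clear business impact
    if segments_affected > 1 and has_clear_business_impact:
        return "high"
    # Everything else steps down the ladder
    if has_clear_business_impact:
        return "medium"
    return "low"
```

For example, `assign_priority(False, True, 1, False)` returns `"critical"`. Writing rules this explicitly, even just on paper, is what makes the AI's priority assignments predictable.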
Step 3: Upload Data
- Product strategy framework (2-page document)
- Feature request template (standardized format)
- Prioritization rules (decision tree)
- 10 past examples of good feature requests
- Competitive feature comparison (for context)
Real files. Sarah's actual documents.
Step 4: Add Conversation Starters
- "Prioritize this customer feedback"
- "Turn this support ticket into a feature request"
- "Bulk process these feedback items"
Step 5: Test the Happy Path

Happy path test: Sarah pastes in a real customer feedback email. The AI generates a feature request following her template, maps it to her strategy framework, and assigns a priority level. It's usable. She can take it directly to her planning meeting. Works.
Edge case 1: What if the feedback is unclear? The AI asks for clarification instead of guessing.
Edge case 2: What if it's a security issue? The AI flags it as critical and recommends escalation.
> Want to learn more first? Ask your AI Buddy: "I'm building a Packaged AI for [describe your selected item from Session 1]. Walk me through exactly how I would write my RACE system prompt. Give me a template with my specific role, action, context, and example."
#### Key Insight
The leadership test isn't whether it works for you. The test is whether someone else on your team can open it on day one, pick a conversation starter, and get usable output without asking you anything. That's the Packaged AI standard.
Brief 5: Test and Ship
Teaching: Validation Before Handoff
You've built your Packaged AI. You tested the happy path. It works. Now comes validation.
There's a difference between "it works for me" and "it works for anyone."
Happy path testing: Does it work when everything goes right? When the user provides clear input? When they follow the conversation starter? Yes? Great. That's baseline.
Edge case testing: What breaks? What happens with unclear input? What happens if someone asks for something outside your design? What happens if they provide too much context? Too little? Wrong format?
Break it now. Every edge case you find now is a problem your team doesn't hit later.
When you hit an edge case that's a real problem (the AI gets confused, the output is wrong, it doesn't follow your standards), you adjust.
Adjust what? Usually one of three things:
- Your data. Upload a clarifying document or example.
- Your instructions. Make your RACE clearer or more specific.
- Your conversation starters. Guide users away from the edge case or add a starter that handles it.
Test. Adjust. Test again.
When you're confident that your team can use this without asking you questions, it's ready to ship.
Practice Challenges (45 minutes total)
Your AI Buddy is waiting to help with each of these.
Challenge 1: Map Your Data Layer (10 minutes)
What data does your Packaged AI need to know to do its job right?
What are the reference documents? The standards? The templates? The examples?
List them. Describe each one in one sentence.
If you don't have these documents yet, describe what you would create.
> Want to learn more first? Ask your AI Buddy: "For my Packaged AI on [your selected item], what data should I upload or create? List all the documents, guidelines, templates, and examples I should include."
What good looks like: A specific list of 5-10 real or planned documents that your AI needs. Not vague. Specific files.
Common mistakes: Being too vague ("upload customer info" instead of "upload our 15-page customer FAQ and 5 examples of ideal responses"). Not differentiating between documents you have and documents you need to create.
Challenge 2: Write Your RACE System Prompt (15 minutes)
Write your full RACE system prompt for your Packaged AI.
Role: Who is your AI acting as? Action: What exactly does it do? Context: What are the rules, constraints, format? Example: What does good output look like?
Write it complete. Not outline. The actual system prompt text.
> Want to learn more first? Ask your AI Buddy: "Help me write my RACE system prompt for [your selected item]. Be specific about the role, action, context, and example. Here's my context: [your problem statement from Session 1]."
What good looks like: A 200-400 word system prompt that's specific to your domain, not generic. Every section (Role, Action, Context, Example) is filled in with real specifics.
Common mistakes: Being too generic. Making the action vague ("helps with content" instead of "converts a one-line content request into a full brief using our template"). Skipping the example.
Challenge 3: Design Your Interface (10 minutes)
Name: What's the name of your Packaged AI? Make it clear.
Description: One sentence. What does it do? For whom? What format?
Conversation Starters: Three conversation starters that guide your team to the three most common uses.
> Want to learn more first? Ask your AI Buddy: "Design my interface. Give me a great name for my Packaged AI on [your item], a one-sentence description that explains what it does, and three conversation starters for the most common uses."
What good looks like: A name someone could read without explanation and understand what the tool does. A description that sets clear expectations. Three conversation starters that cover the most common use cases.
Common mistakes: Names that are too clever or vague ("AI Helper" instead of "Product Feedback Analyzer"). Descriptions that are too long or technical. Conversation starters that are too similar to each other.
Challenge 4: Test Your Build (10 minutes)
Create your Packaged AI using the platform of your choice.
Add your name. Add your description. Add your system prompt. Upload your data. Add your conversation starters.
Test it with three examples:
- Happy path: A real example from your work. Does it work?
- Edge case 1: What if the input is unclear or incomplete?
- Edge case 2: What if the user asks for something outside your design?
Document what you found. What worked? What didn't? What would you adjust?
> Want to learn more first? Ask your AI Buddy: "I'm about to test my Packaged AI. Give me three test cases I should run: one happy path, two edge cases that might break it. Then ask me what I learned from testing."
What good looks like: Three actual test runs with documented results. Clear notes on what worked and what didn't. Specific ideas for what to adjust if something didn't work.
Common mistakes: Testing only the happy path and skipping edge cases. Not documenting results. Moving forward without understanding where the tool breaks.
Final Project: Your Packaged AI Prototype
This is the real deliverable. Not a practice challenge. Your actual Packaged AI for your selected item from Session 1.
Part 1: Your Packaged AI Prototype
Create your tool using your platform of choice (ChatGPT Custom GPT, Claude Project, or Gemini Gem).
Submit either:
- A screenshot of your tool showing name, description, and conversation starters, OR
- A written description of your tool with these elements included
Include your system prompt (RACE).
Part 2: Test Results
Run three tests. Document the results.
Test 1 - Happy Path: A real example from your work. What input did you use? What was the output? Is it usable?
Test 2 - Edge Case 1: What input did you use? What was the output? Did it handle it well?
Test 3 - Edge Case 2: What input did you use? What was the output? What did you learn?
Show your three test examples and results. Explain what you learned from testing.
Part 3: The Leadership Test
Write one paragraph (150-200 words):
Imagine a new hire joined your team tomorrow. They have no context about your business, your standards, or this tool. They open your Packaged AI for the first time. Explain how they could use it successfully on their first day without asking you a single question. What guides them? What makes it obvious what to do?
This is the test. If you can't describe how a stranger would succeed with your tool, your data, instructions, or interface need work.
Part 4: Your FAST Goal Connection
Write one paragraph (100-150 words):
You set a FAST goal in Session 1. Something specific and measurable. How does this Packaged AI prototype contribute to reaching that goal? What part of the goal does it address? What's the impact if your team starts using this tool?
Before You Submit Checklist
- [ ] I've created my Packaged AI on my chosen platform (ChatGPT, Claude, or Gemini)
- [ ] My tool has a clear name that explains its purpose
- [ ] My tool has a one-sentence description that sets expectations
- [ ] I've written a complete RACE system prompt
- [ ] I've uploaded or identified the data my tool needs to know
- [ ] I've added 3 conversation starters for common use cases
- [ ] I've tested the happy path and it works
- [ ] I've tested two edge cases and documented what happened
- [ ] I can explain how a new hire would use this tool successfully on day one
- [ ] My tool addresses a specific part of my Session 1 FAST goal
- [ ] My submission includes all four parts (Prototype, Test Results, Leadership Test, Goal Connection)
Key Takeaways
- Setup time is the hidden cost. Five minutes of setup, four times a day, five days a week is 100 minutes per week, roughly 86 hours per year per person. Package your work and eliminate it.
- Three components, all three required. Data (what your AI knows), Instructions (what your AI does via RACE), Interface (how users interact). Miss one and your team needs training.
- The leadership test is the filter. Could a new hire use this on day one without asking you anything? If no, keep iterating.
- Good prompts don't scale. Good systems do. You can write a perfect prompt. It disappears into chat history. Package it once. Use it forever.
- Semi-automated is the bridge. Manual is typing everything every time. Automated is agents running alone. Semi-automated is you triggering, AI executing with your standards built in. Most people are stuck on manual. This mission moves you to semi-automated.
- Platform doesn't matter. Pattern does. ChatGPT, Claude, Gemini. All three support the same pattern. Learn it once, build on any platform.
- Your selected item gets the first prototype. You came in with a Session 1 FAST goal. This Packaged AI addresses part of it. Next session you wire it into a workflow. Session after that you add decision logic. You're building something real, one step at a time.
Your Commitment
Name one repeated task you're going to package this week. Not someday. This week.
It could be your Session 1 selected item. It could be something different. Whatever it is, the test isn't whether it works for you. The test is whether someone else on your team can open it and get value without calling you.
What's your one thing?
Checkpoint: Quick Knowledge Check
Answer these four questions. Your AI Buddy will tell you if you're on track.
Question 1: If a team member spends 5 minutes setting up an AI tool four times daily, five days a week, how many hours per year is that on setup alone?
A) 20 hours per year
B) 86 hours per year
C) 150 hours per year
D) 260 hours per year
Correct Answer: B - 5 minutes x 4 times x 5 days = 100 minutes per week. 100 minutes per week x 52 weeks = 5,200 minutes per year. 5,200 minutes / 60 = 86.67 hours per year.
Question 2: What are the three components of Packaged AI?
A) Platform, Plugins, Performance
B) Data, Instructions, Interface
C) Input, Processing, Output
D) Chat, Documents, Feedback
Correct Answer: B - Every Packaged AI tool has Data (what it knows), Instructions (what it does, using RACE), and Interface (how users interact).
Question 3: RACE stands for:
A) Rapid, Accurate, Clear, Effective
B) Role, Action, Context, Example
C) Request, Analyze, Create, Execute
D) Retrieve, Analyze, Compose, Export
Correct Answer: B - RACE is a framework for writing clear system prompts: Role (who is the AI), Action (what it does), Context (constraints and rules), Example (what good looks like).
Question 4: The leadership test for Packaged AI is:
A) Does it work for you?
B) Did it cost less than hiring someone?
C) Could a new hire use it on day one without asking you questions?
D) How many features does it have?
Correct Answer: C - The leadership test is whether someone else on your team could open your tool, use it successfully, and get the right output without asking you for guidance. That's the standard.
Certificate of Completion
You've completed Mission 2. You're building momentum.
You came in with a problem statement and a goal. You're leaving with a working prototype your team can use.
In Session 3, you'll take this prototype and wire it into a workflow. A trigger fires. Data flows in. Your packaged AI processes it. The result goes where it needs to go. You stop pressing the button.
Course Experience Survey
Your feedback shapes the next sessions. Take two minutes. [Link to survey]
Words to Know
Packaged AI - A reusable AI tool with three components (data, instructions, interface) built into a platform so users can access it without rebuilding it each time.
System Prompt - The permanent instructions that run every time someone uses your Packaged AI. Written using RACE framework.
RACE Framework - Role (who the AI is acting as), Action (what it does), Context (constraints and rules), Example (what good output looks like).
Interface - How users interact with your Packaged AI. Includes name, description, and conversation starters.
Conversation Starters - Pre-written prompts that guide users toward common use cases when they open your tool.
Data Layer - The reference documents, guidelines, templates, and examples your Packaged AI needs to do its job right.
Semi-Automated - Workflows where you trigger the AI execution but the data, instructions, and format are pre-configured. You press the button, AI executes with your standards built in.
Automation Spectrum - Manual (you do everything), Semi-Automated (you trigger, AI executes), Automated (agents run without you).
Custom GPT - ChatGPT's version of Packaged AI. Upload files, write instructions, name it, share it.
Claude Project - Claude's version of Packaged AI. Upload files, write custom instructions, name it, share it.
Gemini Gem - Google's version of Packaged AI. Upload context, write instructions, name it, share it.
Setup Time - Time spent getting AI ready to use before you can actually work (uploading files, explaining context, describing format).
The Leadership Test - Could a new hire use this tool on day one without asking you questions?
> Your AI Buddy is always here. If any term in this list is unclear, ask your AI Buddy: "Explain [term] in plain language with an example from [your role or industry]."
Prompt Library
CALCULATING SETUP TIME COST

```
Audit my daily setup time. For one day, track every time I:
- Re-explain a role or context
- Re-upload a file or document
- Re-describe a format or output type
- Re-provide background information

For each instance, note: What did I do? How long did it take?

Then calculate: If this happened 4 times daily, 5 days a week, how many hours per year would this be?

What's the business impact if my team of [X people] spent this much time on setup instead of on core work?
```
BUILDING YOUR DATA LAYER

```
For my Packaged AI on [describe your selected item], what data should I create or upload?

Think about: What would your AI need to know to do this job right the first time? What reference documents? What standards? What examples?

List specific documents with one-sentence descriptions of what each contains.

For documents I don't have yet, what would I need to create?
```
WRITING YOUR SYSTEM PROMPT (RACE TEMPLATE)

```
I'm building a system prompt for my Packaged AI. Help me fill in RACE:

Role: I want my AI to act as a [describe role]. Specifically, I want it to [one specific responsibility].

Action: The main action should be: [describe what it does step-by-step].

Context: Here are my constraints and rules:
- Format requirement: [describe format]
- Always do: [list 2-3 must-do behaviors]
- Never do: [list 2-3 must-not behaviors]
- Context: [relevant background for the AI to understand]

Example: Here's a real example of good output: [provide example]. Here's what makes this good: [explain]. Here's an example of bad output: [provide example]. Here's what makes this bad: [explain].

Now write my full RACE system prompt in one cohesive block.
```
DESIGNING YOUR INTERFACE

```
I'm building a Packaged AI for [describe your item]. Help me design the interface:

Name: What's a clear, one-word or two-word name that explains what this tool does? [I suggest: ...]

Description: Write a one-sentence description. What does it do? For whom? In what format?

Conversation Starters: What are the three most common use cases for this tool? Create three conversation starters that guide users toward these uses. Format each as: "[Action] to [specific outcome]"
```
TESTING YOUR PACKAGED AI

```
Help me test my Packaged AI. I've built a tool on [platform] called [name].

Happy path test: Here's a real example from my work: [provide input]. What output did the AI generate? Is it usable? What would I need to adjust?

Edge case test 1: What if [describe unusual scenario]? How would my tool handle this?

Edge case test 2: What if [describe different unusual scenario]? How would my tool handle this?

For any edge cases that don't work, what should I adjust: my data, my instructions, or my interface?
```
ACROSS THE FOUR OFFICES: PACKAGED AI BY ROLE

```
I'm a [revenue officer / insight officer / delivery officer / trust officer].

Build a Packaged AI prototype for my most repeated task. What would I call it? What would the system prompt look like? What data would it need? What would the three conversation starters be?

Focus on: A tool I could test with my team this week. Something that actually saves time or improves quality.
```
> More prompts added each session. [PROMPT LIBRARY - Full version in Lark, link provided by Kate]
End of Mission 2: From Prompts to Packaged AI