AI Officer Institute
Generative AI for Business · Mission 03

Advanced Prompt Frameworks

This is where prompts become professional deliverables. Stack RACE and CRA, master the AI Interview, and assemble a complete go-to-market brief that leadership can trust.

Brief 1

Stacking RACE and CRA

You already have RACE from Mission 1. Today you add CRA, and learn how stacking them transforms a decent result into something you would actually send.

RACE: Defines the Task

Use whenever you want AI to perform something specific.

R - Role: Who is the AI acting as?
A - Action: What should it do?
C - Context: What is the situation?
E - Example: What does good look like?

CRA: Locks the Quality

Use as a quality control layer on top of your RACE draft.

C - Constraints: What are the limits? Budget, time, brand rules, scope restrictions.
R - Requirements: What must be included? The non-negotiables.
A - Acceptance Criteria: How do you know it is good enough? Specific pass-fail checks.
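If you like to think in code, here is a minimal sketch of the stacking idea. This is not from the course materials: the helper name and every field value are illustrative placeholders. It simply shows that RACE and CRA are two layers of one prompt, not two prompts.

```python
def build_prompt(race: dict, cra: dict) -> str:
    """Stack a RACE task definition with a CRA quality-control layer."""
    # Pass 1: RACE defines the task.
    race_part = "\n".join([
        f"Role: {race['role']}",
        f"Action: {race['action']}",
        f"Context: {race['context']}",
        f"Example: {race['example']}",
    ])
    # Pass 2: CRA locks the quality on top of the same prompt.
    cra_part = "\n".join([
        f"Constraints: {cra['constraints']}",
        f"Requirements: {cra['requirements']}",
        f"Acceptance criteria: {cra['acceptance']}",
    ])
    return race_part + "\n\n" + cra_part

prompt = build_prompt(
    race={
        "role": "market analyst",
        "action": "write a competitor analysis for BOLT",
        "context": "premium energy drink, health-conscious millennials",
        "example": "include a comparison table",
    },
    cra={
        "constraints": "five direct competitors in the $3-5 range",
        "requirements": "pricing, distribution, one exploitable weakness each",
        "acceptance": "each entry shows what BOLT does differently",
    },
)
print(prompt)
```

The point of the sketch is the return statement: the CRA layer is appended to, never substituted for, the RACE definition.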

Worked Example: BOLT Competitor Analysis

Here is what the two-pass system looks like in practice.

Pass 1: RACE only. You prompt AI with a role (market analyst), an action (write a competitor analysis for BOLT), context (energy drink market, targeting health-conscious millennials), and an example (include a comparison table). The output you get back is decent. It lists competitors, describes their positioning, and gives you a table. But it reads like it could be about any energy drink. The tone is generic. It includes competitors that are not actually relevant to BOLT's price point. It is a B-minus.

Pass 2: RACE + CRA. You take that same output and add CRA on top. Constraints: limit to five direct competitors in the $3-5 premium range, exclude legacy brands like Monster and Red Bull that compete on volume not wellness. Requirements: each competitor entry must include pricing, distribution channel, and one weakness BOLT can exploit. Acceptance Criteria: Dana should be able to read any single competitor entry and immediately know what BOLT does differently.

Now the output is specific. It is focused on the five competitors that actually matter. Every entry has the three data points Dana needs. That is the jump from 70% to something your boss can use.

Want to go deeper? Ask your AI Buddy:

"You are a market analyst specializing in the premium beverage industry. I need a competitor analysis for BOLT, a new energy drink from Buddy Bevs targeting health-conscious millennials. Use only the data I have uploaded. Do not make up numbers or invent competitors. Start with a RACE-structured first draft. Then I will ask you to apply CRA to tighten it. For the RACE draft: identify the top 5 direct competitors in the $3-5 premium energy drink range, compare their positioning, pricing, and distribution, and highlight one weakness per competitor that BOLT can exploit. Keep the total analysis under 800 words."

Key Insight

RACE alone gets you 70% there. RACE + CRA gets you something your boss can use. Think of them as two passes, not two separate processes. First pass gets the content. Second pass tightens it to the standard you need. If you cannot define what good looks like before you start, you will never know if AI got there. This is what separates someone who uses AI from someone who leads an AI program.

Brief 2

What Is the AI Interview?

This is the most important thing you will learn today. Not the most important thing in this brief. The most important thing in the entire session.

Here is why. AI programs fail when leadership cannot articulate what they need. They say "use AI to improve marketing" and wonder why the output is generic. The AI Interview is the discipline of getting specific before you build. It is the same skill you will use when scoping an AI initiative, defining a workflow, or briefing a team. Master it here on a prompt. Apply it everywhere else.


Instead of trying to write the perfect prompt, you tell AI what you want to accomplish and say: "Before you start, ask me every question you need answered."

AI asks you questions. You answer them. AI uses your answers to build something specific to your situation. Not generic. Yours.

Why This Changes Everything

Without it: You guess what AI needs. Output is generic. It could have come from anyone.

With it: AI finds the gaps you did not know you had. Output sounds like it came from someone who actually knows your business.

The AI Interview flips the whole process. Instead of you anticipating everything AI needs to know, AI asks you the questions you did not think to answer: who is the audience, what tone, what to avoid, what has already been tried.

Worked Example: What an AI Interview Actually Looks Like

You tell your AI Buddy: "I need to build customer profiles for BOLT's launch. Before you start, ask me every question you need answered."

Here is what AI comes back with:

  1. Who is buying BOLT today vs. who do you want buying it in 12 months?
  2. What is the price point, and does that filter out any segments?
  3. Are these profiles for the sales team, for marketing creative, or for a leadership presentation?
  4. What distribution channels are confirmed?
  5. What has Buddy Bevs tried before that did not land?
  6. Are there any segments Dana has explicitly said to avoid?

Notice what just happened. You were going to write "create customer profiles for a health-conscious millennial audience." That would have gotten you something generic. Instead, AI asked about the difference between current and aspirational customers, about distribution, about what failed before. Those are the questions that make the output specific to Buddy Bevs, not just any beverage company.

You answer honestly. Now AI builds profiles that reflect your actual business, your actual pricing, and your actual failed experiment. The answer about what did not land before alone changes the entire output. That is what you would have missed.

Want to go deeper? Ask your AI Buddy:

"I need to build customer profiles for BOLT, a new premium energy drink from Buddy Bevs. These profiles will go into a go-to-market brief that my manager Dana Reyes will present to leadership. Before you write anything, ask me every question you need answered first. I want you to understand my business, my audience, and my constraints before you produce any output. Use only the data I have uploaded. Do not invent demographics or market data."

Key Insight

The gap between what you would have written and what AI asks. That is exactly what makes this the master skill. AI finds what you did not know was missing. When you lead an AI program, this gap is where the ROI lives. Companies that skip this step map AI to departments instead of outcomes. The AI Interview forces you to map to outcomes first.

Brief 3

The AI Officer Does Not Memorize. The AI Officer Asks.

You do not need to memorize frameworks. You need to know how to ask for the right one.

There are hundreds of business frameworks. You will never memorize them all and you do not need to. What you need is the habit of asking AI to teach you the right one for the problem in front of you, and then applying it immediately.

This is one of the most powerful habits of someone who leads AI programs. When your CEO asks you to evaluate a new market, you do not need to already know Porter's Five Forces by heart. You need to know that AI can teach it to you in two minutes and apply it to your data in five. The AI Officer's advantage is not knowing more frameworks. It is accessing the right framework faster than anyone in the room.


  1. Ask AI: "What framework should I use for this problem?"
  2. Ask AI: "Explain it simply, then apply it to my situation."
  3. You learn the framework and get the deliverable at the same time.

Worked Example: Learning a Brand Positioning Framework Live

You need brand positioning for BOLT. You do not know which framework to use. Watch what happens when you ask.

You: "I need to write brand positioning for a new premium energy drink. What is the best framework for this? Explain it simply, then apply it to my product."

AI recommends the Brand Positioning Statement framework:

"For [target audience] who [need or opportunity], [brand name] is the [category] that [key benefit] because [reason to believe]."

Then AI applies it to BOLT:

"For health-conscious professionals aged 25-40 who want sustained energy without the crash or the chemicals, BOLT is the premium clean energy drink that delivers steady focus through natural ingredients, backed by a transparent formula and real pilot data from 500+ users."

That took two minutes. You now know a framework you did not know before, and you have a working positioning statement you can refine. Then ask a follow-up: "What are two other frameworks I could have used instead, and why might they work better?" AI might suggest Perceptual Mapping for competitive differentiation or the Brand Key model for a deeper strategic foundation. You just learned three frameworks in the time it would have taken to half-learn one.

Want to go deeper? Ask your AI Buddy:

"I need to develop brand positioning for BOLT, a new premium energy drink from Buddy Bevs. I am not sure which framework to use. Recommend the best brand positioning framework for a product launch. Explain the framework simply so I understand how it works. Then apply it directly to BOLT using the data I have uploaded. After you apply it, suggest two alternative frameworks I could have used and explain when each one works better. Use only the data I have uploaded. Do not invent claims or statistics."

Key Insight

Every time you use AI, you have the opportunity to learn something. The AI Officer who asks "what framework should I use for this?" and then applies it immediately is learning faster than anyone in a traditional training program. This skill compounds. Every session you leave knowing something new. And every framework you learn becomes a tool you can deploy across your AI program.

Brief 4

VRC: Value, Risk, Checkpoint

You have learned to create. Now you learn to evaluate. This is the section that separates people who generate from people who lead.

Here is the truth about AI programs. Everyone says AI is working. They feel more productive. But there is no measurable return on investment. That is because nobody is evaluating the output before it ships. VRC is your evaluation discipline. It works on a single prompt output today. It works on an entire AI initiative tomorrow. The habit is the same: before anything goes out, you check the value, name the risks, and define what success looks like early.


Run VRC before you put your name on anything.

V - Value: Where does the benefit come from? Is the payoff clear?

R - Risk: What are the top two ways this fails?

C - Checkpoint: What is the early signal it is working, before you are too far in?

Worked Example: Running VRC on Your Competitor Analysis

You finished your BOLT competitor analysis from Challenge 1. It looks good. Before you send it to Dana, you run VRC. You paste the competitor analysis into your AI Buddy and say: "Evaluate this output using the VRC framework."

Value: The analysis clearly identifies five direct competitors and highlights a specific weakness for each that BOLT can exploit. The strongest value is the pricing gap analysis showing that no competitor in the $3-5 range is leading with a clean-ingredient story. Dana can use this to justify BOLT's positioning in one slide.

Risk 1: The analysis assumes BOLT's pilot data from 500 users is representative of the broader market. If the pilot skewed toward early adopters, the competitive advantages may not hold when selling to mainstream buyers.

Risk 2: Two of the five competitors were founded in the last 18 months. Their positioning and pricing could shift before BOLT's retail launch. The analysis treats their current strategy as fixed.

Checkpoint: Track whether BOLT's first 90 days of retail sales match the customer segments predicted in the analysis. If the actual buyer profile diverges, the entire analysis needs revisiting.

Now you have a decision to make. The value is clear. Risk 1 is real but manageable if you add a one-line caveat about the pilot sample. Risk 2 means you should flag the two newer competitors as "watch closely." The checkpoint gives Dana a concrete milestone to revisit the analysis. Your call: fix the two risks and send it. That took five minutes and saved you from sending something that would have gotten picked apart in the leadership meeting.

Want to go deeper? Ask your AI Buddy:

"I am going to paste my best output from today's challenges. I need you to evaluate it using the VRC framework. V - Value: Where does the benefit come from? Is the payoff clear and specific? R - Risk: What are the top two ways this output fails or misleads? Be honest. I need to know before my manager sees this. C - Checkpoint: What is the earliest signal that this output is actually working in the real world? After your evaluation, give me a clear recommendation: send it as-is, fix specific things (list them), or start over. Explain why."

Key Insight

RACE and CRA help you create. VRC helps you decide. Should you trust this output? What could go wrong? Every AI Officer runs VRC before putting anything in front of their manager. The question is always: send it, fix it, or start over. This same discipline scales. When you are leading an AI program, VRC becomes the framework you use to evaluate every AI initiative before it gets budget, headcount, or a launch date.

Brief 5

Practice Challenges

Everything you have learned in this mission comes together here. Dana Reyes needs the complete go-to-market package before leadership finalizes the BOLT launch plan. This is not a practice run. This is the job.


Challenge 1: Stack Your Frameworks (10 min)
Use RACE to draft a competitor analysis for BOLT, then apply CRA to tighten it.

Challenge 2: The AI Interview (15 min)
Build BOLT customer profiles by letting AI interview you first. No structured prompt.

Challenge 3: Let AI Teach You a Framework (10 min)
Ask AI to recommend and teach you the right brand positioning framework, then apply it to BOLT.

Challenge 4: The Decision Filter - VRC (10 min)
Pick your best output from Challenges 1-3 and run VRC. Then decide: send it, fix it, or start over.

Start the Practice Challenges: https://lab.ai-officer.com/program/785403/mission/2267520

Downloads:
- Mission 3 Challenge Guide
- Mission 3 Words to Know
- Mission 3 Prompt Library

REQUIRED FOR CERTIFICATION

Final Project: The BOLT Go-to-Market Brief

This is the work that earns your Mission 3 certification. Two deliverables, each built using every framework from this session. These are not practice exercises. They are professional deliverables Dana could present to leadership.

Part 1: The Go-to-Market Brief (60-90 min)
Assemble a complete seven-section brief for BOLT using your cleaned dataset and every framework from this mission:

  1. Executive summary
  2. Market opportunity
  3. Competitor analysis (RACE + CRA)
  4. Customer profiles (AI Interview)
  5. Brand positioning (framework learned on demand)
  6. Risks (VRC applied at scale)
  7. Next steps

Part 2: The BOLT Content Pack (60-90 min)
Turn your go-to-market brief into a content package ready for execution. Choose at least four content pieces from the content pack list and produce them using the frameworks from this mission.

Before You Submit: Run VRC on the full brief, not just individual sections. Ask yourself: where does the value come from, what are the top two ways this fails, and what is the early signal it is working? Only submit when you would be comfortable if Dana sent this to leadership with your name on it.

Launch Final Project: https://lab.ai-officer.com/program/785403/mission/2267520

SECTION 3: WRAP-UP

Key Takeaways

Anyone can type a prompt. Not everyone can produce something worth trusting. That is the difference between using AI and leading it.

Stack your frameworks. RACE defines the task. CRA locks the quality. VRC evaluates the impact. They work as a system. The same system scales from a single prompt to an entire AI program.

The AI Interview is the master skill. Tell AI your goal. Let AI ask you the questions. The gap between what you would have written and what AI asks is where the value lives. This is also why AI programs fail: leadership cannot articulate what they need. The AI Interview is the discipline that fixes that.

You do not need to memorize frameworks. You need the habit of asking for the right one. AI is your instructor. This skill compounds every session.

Evaluate before you send. Creating output is only half the job. VRC is the habit that earns trust. Everyone says AI is working, but without evaluation, there is no ROI. VRC is how you prove it.

Think in deliverables, not prompts. Every challenge today builds one piece of a brief Dana could put in front of leadership. That is what professional-grade AI output looks like. That is what leading an AI program looks like.

Your Commitment

Before you close out: name one habit you will start, stop, or continue this week. Not a tool. A behavior. Something your manager would notice.

Checkpoint

Question 1: What is the purpose of CRA in the two-pass system?

A) CRA replaces RACE for more advanced tasks
B) CRA is applied before RACE to define the problem
C) CRA is a quality control layer stacked on top of a RACE draft to tighten it to the standard required
D) CRA stands for Create, Review, Approve - the three approval steps before publishing

Correct answer: C. RACE gets you 70% of the way to a usable output. CRA gets you the rest by adding Constraints, Requirements, and Acceptance Criteria. Two passes, not two separate processes.

Question 2: You are about to write a structured prompt for a complex deliverable. What does the AI Officer do first?

A) Choose the right RACE components for the task
B) Tell AI the goal and ask AI to ask you questions before starting
C) Look up the right framework to use
D) Write the full RACE + CRA prompt from scratch

Correct answer: B. The AI Interview comes first. Before any structure, you let AI identify what it needs to know. The questions AI asks reveal gaps in your thinking that no framework alone would catch.

Question 3: What does VRC stand for and when do you use it?

A) Verify, Review, Confirm - used when you finish a project
B) Value, Risk, Checkpoint - used before any output goes to your manager, a client, or a leadership meeting
C) Vision, Requirements, Criteria - used at the start of a project to define success
D) Validate, Run, Confirm - used to test an AI prompt before building from it

Correct answer: B. VRC is your evaluation discipline. Value: where does the benefit come from? Risk: what are the top two ways this fails? Checkpoint: what is the early signal it is working? Run this before anything goes out.

Question 4: You need to build competitive positioning for your product but you do not know which framework to use. What does the AI Officer do?

A) Look up business frameworks online
B) Use RACE to figure out which framework fits
C) Ask AI to recommend and teach you the right framework for this type of problem, then apply it immediately
D) Use the most well-known framework by default

Correct answer: C. The AI Officer does not memorize frameworks. The AI Officer asks for the right one on demand. "What framework should I use for this? Explain it simply, then apply it to my situation." This skill compounds every session.

Question 5: Your BOLT competitor analysis looks great. You are about to send it to Dana for her leadership presentation. What is the last thing you do?

A) Read it one more time and fix any typos
B) Run VRC: identify where the value comes from, name the top two risks, and define the early checkpoint - then decide to send, fix, or start over
C) Ask AI to review it for grammar and tone
D) Show it to a colleague for a second opinion

Correct answer: B. Creating output is only half the job. VRC is what separates someone who generates from someone who leads. Send it, fix it, or start over - but decide before Dana sees it.

Certificate of Completion

AI Essentials Program
Advanced Prompt Frameworks - Mission 3
Checkpoint Reached

Solid work, Cadet. You have completed Mission 3 of Generative AI Essentials and built the skills that separate prompt writers from AI program leaders: frameworks that guide quality output, the AI Interview that finds your blind spots, and VRC that tells you whether to trust what you have made.

You now know how to produce output your leadership trusts. Output that has your name on it and deserves to. More importantly, you have started building the operational habits of someone who leads AI programs, not just someone who uses AI tools.

Keep going. One more mission stands between you and the AI Specialist Certification.

Progress: Mission 1 (done) | Mission 2 (done) | Mission 3 (current) | Mission 4

Your Next Mission: Prompting Perfect Visuals

Issued by AI Officer Institute
Instructors: Dave Hajdu and David Nilssen
dave@ai-officer.com | ai-officer.com

Course Experience Survey

[Survey placeholder - link to be added by Kate]

Words to Know

For definitions of all key terms from this mission, see Mission 3 Words to Know or visit: https://aiofficer.sg.larksuite.com/sync/[Mission3_Words_Link]

You can always ask your AI Buddy to explain any of these concepts in more detail. That is what he is there for.

Prompt Library

For copy-paste prompts from this mission, see Mission 3 Prompt Library or visit: https://aiofficer.sg.larksuite.com/sync/[Mission3_Prompts_Link]
