EXPERT PROMPTING MASTERCLASS


Master the art of AI prompting with advanced techniques used by professionals

MODULE 1: THE EXPERT MINDSET

Understanding How Great Prompters Think

The Fundamental Shift

When you first learned to prompt, you probably thought of it like asking a smart friend for help. That's fine for basics, but experts think differently.

Intermediates ask: "What should I say to get a good answer?"

Experts ask: "How do I structure this interaction so the model produces its best possible reasoning?"

The difference is profound. Experts understand that:

  • Language models are prediction engines - They predict likely next tokens based on patterns
  • Context is your programming environment - Everything you write shapes the model's behavior
  • Models have cognitive patterns - Certain prompt structures trigger better reasoning
  • Quality comes from architecture - How you structure the prompt matters more than word choice

The Expert's Core Principles

Principle 1: Activate System 2 Thinking

Language models, like humans, can operate in two modes:

  • System 1: Fast, intuitive, pattern-matching (often gives generic responses)
  • System 2: Slow, deliberate, analytical (produces nuanced, thoughtful outputs)

Your job as an expert is to force System 2 activation.

Example - System 1 Response (Generic):

Prompt: "How do I improve team productivity?"

Output: "Here are some ways to improve team productivity:
1. Set clear goals
2. Improve communication
3. Use the right tools
4. Provide feedback
5. Recognize achievements"

This is generic because the model is pattern-matching to "productivity tips" in its training data.

Example - System 2 Response (Thoughtful):

Prompt: "I need to improve team productivity, but here's the challenge: my team is already working 50+ hour weeks, morale is declining, and we're in a high-stakes product launch.

Before giving advice:
1. What additional context would help you give better recommendations?
2. What trade-offs should I be thinking about?
3. What assumptions might I be making that could be wrong?

Then provide your analysis and recommendations."

Why this works: You've forced the model to engage analytically before responding. The pre-work questions trigger deeper reasoning pathways.

Principle 2: Context Engineering

Think of your prompt as code. Every element matters:

  • The role you assign shapes knowledge access and perspective
  • The constraints you set focus the output space
  • The examples you provide establish patterns to follow
  • The structure you create determines how the model thinks through the problem

Poor context:

"Write a marketing email for our new product."

Engineered context:

Role: You're a direct response copywriter who specializes in SaaS products for technical audiences. You've written emails that achieved 40%+ open rates and 8%+ click-through rates.

Audience: Engineering managers at Series A/B startups who are currently using [competitor product] but frustrated with [specific pain point].

Goal: Get them to book a 15-minute demo call.

Constraints:
- Subject line must create curiosity without being clickbait
- Email body: 150 words maximum
- Include exactly one CTA
- Tone: knowledgeable peer, not salesperson

Before writing the email:
1. Identify the core emotional trigger
2. Note what we're NOT saying (what to avoid)
3. Explain your structural approach

Then provide the email.

The second version creates an environment where excellence is more likely.

Principle 3: Intelligent Iteration

Experts never accept first outputs. They iterate systematically:

The Expert Iteration Loop:

  1. Get baseline → Identify gaps → Add specific constraints → Re-generate
  2. Get variation → Compare approaches → Synthesize best elements → Refine
  3. Stress test → Find weaknesses → Strengthen → Polish

Example in action - Iteration 1 (Baseline):

"Explain blockchain to a non-technical audience." [Get response, evaluate]

Iteration 2 - Add constraints:

"That's too abstract. Rewrite it:
- Use only one analogy (choose the best one)
- Include a concrete example of a real problem it solves
- Address the #1 objection people have
- Keep it under 200 words"

[Get response, evaluate]

Iteration 3 - Refine:

"Better. Now make it more engaging:
- Start with a surprising fact or question
- Remove any jargon that remains
- End with a clear 'so what' moment"

[Get response, evaluate]

Iteration 4 - Polish:

"Final pass: tighten the language. Every sentence should be essential." [Get final response]

This takes 5-10 minutes but produces dramatically better results than spending 30 minutes trying to write the "perfect" first prompt.
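The iteration loop can be sketched as code. This is a minimal, hypothetical harness: `iterate` runs a baseline prompt and then feeds each refinement as a follow-up turn, carrying the full conversation history forward. The `model` callable is a stand-in for whatever chat API you actually use; no specific vendor API is assumed.

```python
# A sketch of the expert iteration loop. `model` is any callable that
# takes the message history (a list of role/content dicts, the common
# chat-API shape) and returns the assistant's reply as a string.

def iterate(model, first_prompt, refinements):
    """Run a baseline prompt, then apply each refinement in order.

    Each refinement is sent as a new user turn so the model sees its
    own previous draft and the critique together.
    """
    history = [{"role": "user", "content": first_prompt}]
    reply = model(history)
    for follow_up in refinements:
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": follow_up})
        reply = model(history)
    return reply
```

The point of the structure is that critiques ("too abstract", "tighten the language") are cheap to write once the draft exists, which is why four quick turns beat one elaborate first prompt.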

Practice Exercises

✍️ PRACTICE EXERCISE 1.1: Activate System 2

Your turn. Take this generic prompt and transform it using System 2 activation:

Generic prompt:

"Give me ideas for reducing customer churn."

Your enhanced prompt should:

  • Provide specific context about the business
  • Include pre-work questions that force analysis
  • Request structured thinking
  • Set clear output requirements

Spend 5 minutes writing your enhanced version. Then test it in your preferred AI tool.

✍️ PRACTICE EXERCISE 1.2: Context Engineering

Scenario: You need to write a performance review for a team member who's technically excellent but struggles with communication.

Bad prompt:

"Write a performance review for John."

Your task: Engineer a complete context that includes:

  • Your role and relationship
  • Specific details about John's work
  • The purpose and audience of the review
  • Tone and format requirements
  • Any constraints or sensitivities

Write your engineered prompt and test it.

The Expert Self-Assessment

Before moving to the next module, rate yourself honestly:

I understand that:

  • ☐ Prompts structure thinking, not just request information
  • ☐ Generic inputs produce generic outputs
  • ☐ Iteration is where quality comes from
  • ☐ Context engineering is a skill I can develop
  • ☐ Different prompts trigger different reasoning modes

If you checked all boxes, you're ready for Module 2.

MODULE 2: ADVANCED PROMPTING FRAMEWORKS

The Techniques That Produce Excellence

Now that you understand the expert mindset, let's build your technical toolkit. These are battle-tested frameworks that work across all AI models.

Framework 1: Layered Prompting

The single most powerful technique experts use is building prompts in conceptual layers rather than dumping everything at once.

The Five Layers

  • Layer 1: Role & Expertise - Who is the AI in this interaction? What knowledge should it access?
  • Layer 2: Context & Constraints - What's the situation? What are the boundaries?
  • Layer 3: Task Structure - How should the AI approach this? What's the thinking process?
  • Layer 4: Output Specifications - What format? What length? What elements must be included?
  • Layer 5: Quality Controls - How will we validate? What checks should happen?

Layered Prompting in Action

Scenario: You need a content strategy for a product launch.

❌ Intermediate Approach:

"Create a content strategy for launching our new project management tool targeting remote teams."

✅ Expert Approach:

LAYER 1 - ROLE & EXPERTISE:
You are a content strategist who has launched 20+ B2B SaaS products. Your specialty is creating content engines that generate qualified leads with minimal paid spend. You think in terms of customer journey stages and content formats that actually convert.

LAYER 2 - CONTEXT & CONSTRAINTS:
Product: Project management tool with unique async collaboration features
Target: Remote-first teams, 10-50 people, currently using Asana or Trello
Timeline: 8 weeks pre-launch, ongoing post-launch
Budget: $15K (must include content creation and promotion)
Team: 1 content writer, 1 designer, founder available for thought leadership

LAYER 3 - TASK STRUCTURE:
Approach this in phases:

Phase 1 - Strategic Foundation:
- Define our 3 core content pillars based on product differentiation
- Map content types to each funnel stage (awareness → consideration → decision)
- Identify our unique POV that will stand out

Phase 2 - Content Planning:
- Create 8-week pre-launch calendar
- Specify content format, channel, and goal for each piece
- Note dependencies and production timelines

Phase 3 - Distribution Strategy:
- Organic channels (which platforms, why, frequency)
- Paid promotion (where to allocate budget for maximum impact)
- Partnership/collaboration opportunities

LAYER 4 - OUTPUT SPECIFICATIONS:
Deliver as:
1. Strategic narrative (2-3 paragraphs explaining the approach)
2. Content calendar (table format: Week | Content Piece | Format | Channel | Goal | Owner)
3. Budget allocation (breakdown of $15K)
4. Success metrics (3-5 KPIs we should track)

LAYER 5 - QUALITY CONTROLS:
After completing the strategy:
- Identify the 3 riskiest assumptions you've made
- Note what additional information would strengthen this plan
- Provide 2-3 alternative approaches we should consider

Let's begin with Phase 1.

Why this is powerful:

  • The AI knows exactly what expertise to draw from
  • Clear constraints prevent generic advice
  • Phased approach enables complex reasoning
  • Specific output format ensures usability
  • Quality controls catch gaps and assumptions
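If you build layered prompts often, the five layers are easy to turn into a small template helper. This is a sketch, not a library API: the layer names come straight from the framework above, and the `build_layered_prompt` function and its keyword names are my own invention for illustration.

```python
# A sketch of layered prompting as a template builder. Only the layers
# you supply are emitted, in canonical order.

LAYERS = [
    "ROLE & EXPERTISE",
    "CONTEXT & CONSTRAINTS",
    "TASK STRUCTURE",
    "OUTPUT SPECIFICATIONS",
    "QUALITY CONTROLS",
]

def build_layered_prompt(**layer_text):
    """Assemble a prompt from the five layers.

    Keyword names are lowercase with underscores, e.g.
    role_expertise="...", quality_controls="...".
    """
    sections = []
    for i, name in enumerate(LAYERS, start=1):
        key = name.lower().replace(" & ", "_").replace(" ", "_")
        if key in layer_text:
            sections.append(f"LAYER {i} - {name}:\n{layer_text[key]}")
    return "\n\n".join(sections)
```

Keeping the layers as data also makes it easy to reuse Layer 1 and Layer 5 across projects while swapping out the middle layers per task.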

When to Use Layered Prompting

✅ Use it for:

  • Complex business problems
  • Creative projects requiring strategy
  • Technical documentation
  • Any task where quality matters more than speed

❌ Don't use it for:

  • Simple factual queries
  • Quick edits or formatting
  • Straightforward tasks with obvious approaches

Framework 2: Constitutional Prompting

Give the AI a "constitution"—a set of principles that govern how it should think and respond throughout your interaction.

The Power of Constitutions

A constitution creates consistency across multiple exchanges and embeds quality standards directly into the model's behavior.

Example Constitution:

OPERATING PRINCIPLES FOR THIS CONVERSATION:

1. Specificity over Generalization
- Provide concrete examples, not abstract concepts
- Use numbers, names, and specific scenarios
- Replace "often" with "in X% of cases" when possible

2. Reasoning Transparency
- Show your thinking process, not just conclusions
- Explain why you chose approach A over approach B
- Note when you're uncertain

3. Productive Disagreement
- Present counter-arguments to your own recommendations
- Identify when conventional wisdom might be wrong
- Challenge my assumptions if you spot flaws

4. Practical Orientation
- Every recommendation must be actionable
- Include what it costs (time, money, complexity)
- Flag what could go wrong

5. No Platitudes
- Ban: "think outside the box," "synergy," "leverage," "circle back"
- If it sounds like it came from a corporate memo, rewrite it
- Be direct and human

ACKNOWLEDGE THESE PRINCIPLES, THEN: Help me decide whether to build or buy a customer support ticketing system.

The AI will follow these principles throughout the conversation, producing much higher quality responses with less hand-holding.

Constitutional Prompting Templates

For Analysis Work:

ANALYTICAL PRINCIPLES:
1. Data before opinions - cite sources and numbers
2. Acknowledge uncertainty - if confidence <80%, say so
3. Multiple perspectives - always include alternate interpretations
4. Falsifiability - state what evidence would prove you wrong

Now analyze: [your task]

For Creative Work:

CREATIVE PRINCIPLES:
1. Original over familiar - avoid the first idea that comes to mind
2. Specific over general - use concrete details and sensory language
3. Surprising over expected - look for the unusual angle
4. Purposeful over decorative - every element should serve the goal

Now create: [your task]

For Technical Work:

TECHNICAL PRINCIPLES:
1. Correctness over cleverness - working code beats elegant code
2. Explain trade-offs - note what you're optimizing for and against
3. Consider maintenance - flag what will be hard to change later
4. Security-conscious - point out potential vulnerabilities

Now build: [your task]
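Because constitutions are reused verbatim across conversations, they are a natural fit for a small lookup table. A minimal sketch, assuming you keep the full principle text in one place (the text below is abbreviated; in practice you would paste the complete versions from the templates above):

```python
# A sketch that stores constitutions as reusable templates and prepends
# the chosen one to any task. Principle text abbreviated for brevity.

CONSTITUTIONS = {
    "analysis": ("ANALYTICAL PRINCIPLES:\n1. Data before opinions\n"
                 "2. Acknowledge uncertainty\n3. Multiple perspectives\n"
                 "4. Falsifiability"),
    "creative": ("CREATIVE PRINCIPLES:\n1. Original over familiar\n"
                 "2. Specific over general\n3. Surprising over expected\n"
                 "4. Purposeful over decorative"),
    "technical": ("TECHNICAL PRINCIPLES:\n1. Correctness over cleverness\n"
                  "2. Explain trade-offs\n3. Consider maintenance\n"
                  "4. Security-conscious"),
}

def with_constitution(kind, task):
    """Prepend the chosen constitution to a task description."""
    return CONSTITUTIONS[kind] + "\n\nNow: " + task
```

Usage: `with_constitution("technical", "build or buy a customer support ticketing system")` yields a ready-to-paste prompt with the principles on top.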

✍️ PRACTICE EXERCISE 2.1: Build Your Constitution

Scenario: You're working with an AI to develop your business strategy for the next year.

Your task: Write a 4-5 principle constitution that will ensure the AI gives you the kind of strategic thinking you need (not generic business advice).

Consider:

  • What bad habits do generic business recommendations have?
  • What qualities do you value in strategic advice?
  • What should the AI prioritize or avoid?

Write your constitution, then test it with a real strategic question.

Framework 3: Chain-of-Thought Scaffolding

Don't just ask for reasoning—provide the exact scaffolding the model should use.

The Scaffolding Principle

When you provide a thinking structure, the model produces dramatically more thorough and logical outputs.

Without Scaffolding:

"Should we expand our product to serve enterprise customers? Explain your reasoning."

Result: You'll get some pros and cons, but the analysis will be shallow.

With Scaffolding:

Should we expand our product to serve enterprise customers? Use this analytical framework:

STEP 1 - OPPORTUNITY ASSESSMENT:
- Market size and growth rate for enterprise segment
- Current competitors and their positioning
- Our unique advantages in this segment

STEP 2 - CAPABILITY ANALYSIS:
- What product capabilities do we have?
- What gaps exist for enterprise needs?
- Estimated development cost and time for each gap

STEP 3 - GO-TO-MARKET CHALLENGE:
- What does enterprise sales require (team, process, timeline)?
- Expected CAC and sales cycle length
- Realistic first-year revenue forecast

STEP 4 - RISK ASSESSMENT:
- What could go wrong?
- What could distract from our core business?
- What's the opportunity cost?

STEP 5 - DECISION FRAMEWORK:
- Under what conditions is this a YES?
- Under what conditions is this a NO?
- What information would change your recommendation?

STEP 6 - RECOMMENDATION:
Based on the above analysis, provide a clear recommendation with:
- Your confidence level (1-10)
- The 3 most important factors in your decision
- Next steps if we proceed
- Alternative approaches we should consider

Why this works: Each step builds on the previous one, forcing comprehensive analysis rather than surface-level thinking.

Advanced Scaffolding Patterns

The Comparison Scaffold:

Compare [Option A] vs [Option B] using this structure:

FOR EACH OPTION:
1. Core strengths (top 3)
2. Critical weaknesses (top 3)
3. Best-case scenario
4. Worst-case scenario
5. Hidden costs or complications

THEN COMPARE:
6. Which is better for [specific criterion]?
7. Which has lower risk?
8. Which has higher upside?
9. What would make you choose one over the other?
10. Is there a hybrid approach that takes the best of both?

The Problem-Solving Scaffold:

Problem: [describe problem]

STEP 1 - PROBLEM DISSECTION:
Break this into sub-problems. What are the 3-4 core issues?

STEP 2 - ROOT CAUSE ANALYSIS:
For each sub-problem, what's causing it? Go at least 2 levels deep.

STEP 3 - SOLUTION GENERATION:
For each root cause, generate 2-3 potential solutions.
Rate each: Impact (1-10), Difficulty (1-10), Time to implement.

STEP 4 - DEPENDENCIES & SEQUENCING:
What needs to happen first? What depends on what?

STEP 5 - RECOMMENDATION:
Given the above, what's the optimal sequence of actions? What resources are needed? What could block success?
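Scaffolds are worth keeping as reusable templates with their bracketed placeholders intact. A minimal sketch, assuming the placeholder spellings match the scaffold text exactly (the `fill_scaffold` helper and the truncated template constant are illustrative, not from any library):

```python
# A sketch of reusable scaffolds: store the template with its
# [placeholders] as written, then substitute per decision.
# Template text truncated here for brevity.

COMPARISON_SCAFFOLD = (
    "Compare [Option A] vs [Option B] using this structure:\n"
    "FOR EACH OPTION:\n"
    "1. Core strengths (top 3)\n"
    "2. Critical weaknesses (top 3)\n"
    "THEN COMPARE:\n"
    "6. Which is better for [specific criterion]?"
)

def fill_scaffold(template, slots):
    """Replace each [name] placeholder with its value.

    Keys in `slots` are the exact bracket text, e.g. "Option A".
    """
    for name, value in slots.items():
        template = template.replace(f"[{name}]", value)
    return template
```

This keeps the analytical structure stable while only the decision-specific details change between uses.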

✍️ PRACTICE EXERCISE 2.2: Create Your Scaffold

Choose one of these scenarios:

  • A) You need to decide whether to hire a senior developer or two junior developers
  • B) You're evaluating whether to rebrand your company
  • C) You're deciding which of three marketing channels to focus on

Your task: Create a complete analytical scaffold (6-8 steps) that would force thorough, logical analysis.

Then test it with an AI and see the difference in output quality.

Framework 4: Few-Shot Mastery

The most underrated expert technique: showing the AI exactly what good looks like through examples.

The Power of Examples

Few-shot learning means providing 2-4 examples of exactly what you want before asking the AI to generate something new.

Why it's powerful:

  • Examples are more precise than descriptions
  • The AI can pattern-match to your specific quality bar
  • You control style, tone, format, and structure
  • Works across all models

Few-Shot Structure

I need [type of output]. Here are examples of exactly what I want:

EXAMPLE 1: [paste example]
EXAMPLE 2: [paste example]
EXAMPLE 3: [paste example]

KEY PATTERNS TO NOTICE:
- [element 1 that's important]
- [element 2 that's important]
- [element 3 that's important]

Now create one for: [your new case]
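The few-shot structure above is mechanical enough to generate. A minimal sketch, assuming you keep your best examples in a list (the `few_shot_prompt` function is a hypothetical helper, not a library API):

```python
# A sketch that assembles the few-shot structure from a list of
# examples, explicit pattern notes, and the new case to generate.

def few_shot_prompt(output_type, examples, patterns, new_case):
    """Build a few-shot prompt in the structure shown above."""
    parts = [f"I need {output_type}. Here are examples of exactly what I want:"]
    for i, example in enumerate(examples, start=1):
        parts.append(f"EXAMPLE {i}:\n{example}")
    parts.append("KEY PATTERNS TO NOTICE:\n"
                 + "\n".join(f"- {p}" for p in patterns))
    parts.append(f"Now create one for: {new_case}")
    return "\n\n".join(parts)
```

Because the examples live in data rather than in the prompt text, swapping in a different example set (say, for a different brand voice) is a one-line change.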

Few-Shot in Practice

Scenario: Product Descriptions

I need a product description for wireless earbuds. Here are 3 examples of our brand voice:

EXAMPLE 1 (Phone Case):
"Your phone survives your life. Military-grade protection meets slim design—because bulk is not a security strategy. $24."

EXAMPLE 2 (Laptop Stand):
"Your neck deserves better. Aluminum stand puts your screen at eye level. Folds flat, works anywhere. $49."

EXAMPLE 3 (USB-C Cable):
"The cable that stays plugged in. Reinforced connector survives 12,000+ bends. Charges fast, transfers faster. $15."

PATTERNS TO MATCH:
- Opens with customer pain point
- Technical benefits in plain language
- Crisp, confident tone
- Ends with price

Now write for: Wireless earbuds, noise cancellation, 24hr battery, $129.

✍️ PRACTICE EXERCISE 2.3: Few-Shot Training

Your task: Find 3 examples of something you create regularly (emails, posts, reports, code).

Create a few-shot prompt with:

  • Your 3 examples
  • Explicit pattern notes
  • A new case to generate

Test it and compare quality to outputs without examples.

Framework 5: Constraint-Based Creativity

Counter-intuitive truth: More constraints = better creativity.

The Constraint Paradox

Total freedom produces generic outputs. Smart constraints force originality.

Why constraints work:

  • They eliminate obvious/generic options
  • They force deeper search in possibility space
  • They create novel combinations
  • They give you control over output

The Constraint Formula

[Task] + [What to avoid] + [Format] + [Unusual element] = Original output

Example:

Write a job posting for a senior engineer.

Constraints:
- Avoid: "rockstar," "ninja," "passionate," "fast-paced"
- Format: 4 paragraphs max, no bullets
- Include: One surprising perk, one honest challenge
- Tone: How you'd describe it to a friend

Write it.
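The constraint formula is also easy to apply programmatically. A minimal sketch of the Task + Avoid + Format + Unusual element pattern (the `constrained_prompt` function is illustrative):

```python
# A sketch of the constraint formula: task + what to avoid + format
# + one unusual element, assembled in the layout used above.

def constrained_prompt(task, avoid, output_format, unusual):
    """Build a constraint-based prompt from the four formula parts."""
    lines = [
        task,
        "Constraints:",
        "- Avoid: " + ", ".join(avoid),
        f"- Format: {output_format}",
        f"- Include: {unusual}",
    ]
    return "\n".join(lines)
```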

✍️ PRACTICE EXERCISE 2.4: Constrain for Quality

Add 4-5 constraints to force better output:

  • A) "Write blog post about time management"
  • B) "Create social post about new product"
  • C) "Explain blockchain technology"

Test constrained vs generic version. Notice the difference.

🎯 MODULE 2 CHECKPOINT

You've learned five advanced frameworks:

  • Layered Prompting - Build in conceptual layers
  • Constitutional Prompting - Set operating principles
  • Chain-of-Thought Scaffolding - Provide thinking structures
  • Few-Shot Mastery - Show exactly what you want
  • Constraint-Based Creativity - Use limits for originality

Integration exercise: Create a competitive analysis using at least 3 frameworks combined.

MODULE 3: MODEL-SPECIFIC MASTERY

Getting the Best from Each AI Tool

Not all AI models are created equal. Each has distinct strengths, weaknesses, and quirks. Experts know how to optimize for each one.

ChatGPT (GPT-4) Mastery

Model Characteristics

  • Strengths: Creative tasks, conversational flow, broad knowledge, coding, accessibility
  • Weaknesses: Can be verbose, sometimes overconfident, may require more constraint for precision
  • Best for: Brainstorming, content creation, explanations, code generation, conversational interactions

GPT-4 Optimization Techniques

Technique 1: Custom Instructions as Environment Variables

ChatGPT Plus allows custom instructions. Use these like programming environment variables.

Example Custom Instructions:

WHAT WOULD YOU LIKE CHATGPT TO KNOW:
- I'm a B2B SaaS founder, technical background, 8 years experience
- Company: project management tool, 50 customers, $30K MRR
- I value: directness, speed, practical over theoretical

HOW WOULD YOU LIKE CHATGPT TO RESPOND:
- Match my tone: casual = casual, formal = formal
- Show reasoning before complex answers
- Include: what could go wrong
- Keep under 300 words unless I ask for comprehensive
- Never use: "delve," "leverage," "synergy"

Technique 2: Verbosity Control

GPT-4 can be wordy. Control through language:

For concise output:

"Explain X. Be ruthlessly concise."
"Give me the 3-bullet version."
"Pretend I'm in an elevator, 30 seconds."

For comprehensive output:

"Give me a deep dive on X. Be thorough."
"Walk me through step-by-step, assume I know nothing."

Technique 3: Multi-Turn Sculpting

Instead of perfect first prompt, use rapid turns to sculpt:

Turn 1: "Draft cold email for enterprise sales." [Review]
Turn 2: "Too salesy. Rewrite as peer reaching out with value." [Review]
Turn 3: "Better. Cut by 40% without losing impact." [Review]
Turn 4: "Perfect length. Stronger opening that shows research." [Done - 5 minutes total]

Technique 4: Code Optimization

For coding, GPT-4 needs direction:

Expert request:

Write a Python function to validate email addresses.

Requirements:
- Use regex for validation
- Handle: plus addressing, subdomains, international domains
- Return: (is_valid, error_message)
- Include docstring with examples
- Type hints required

Then:
- Explain regex pattern choice
- Note what valid emails this might reject
- Show 3 test cases
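For reference, here is one plausible shape of the answer this prompt should elicit. It is a sketch, not the canonical solution: the regex is a pragmatic pattern rather than full RFC 5322, and internationalized domains are only handled in their punycode form, a limitation the prompt explicitly asks the model to note.

```python
import re

# Pragmatic email pattern: allows plus addressing in the local part and
# any number of subdomain labels. Raw Unicode local parts or domains
# are rejected; internationalized domains must already be punycode.
EMAIL_RE = re.compile(
    r"^[A-Za-z0-9._%+-]+"       # local part, including + tags
    r"@(?:[A-Za-z0-9-]+\.)+"    # one or more domain labels
    r"[A-Za-z]{2,}$"            # top-level domain
)

def validate_email(address: str) -> tuple[bool, str]:
    """Validate an email address with a regex.

    >>> validate_email("user+tag@mail.example.co.uk")
    (True, '')
    >>> validate_email("no-at-sign.example.com")
    (False, 'missing @')
    """
    if "@" not in address:
        return False, "missing @"
    if not EMAIL_RE.match(address):
        return False, "does not match expected pattern"
    return True, ""
```

Note how the prompt's follow-up questions (what valid emails might be rejected, test cases) map directly onto the limitations documented in the comments.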

✍️ PRACTICE EXERCISE 3.1: GPT-4 Sculpting

Task: Product announcement via GPT-4

Steps:

  • Write basic prompt, get output
  • Use 3-4 sculpting turns to improve
  • Document what each turn accomplished

Notice how much faster this is than trying to write the "perfect" first prompt.

Claude (Sonnet 4.5) Mastery

Model Characteristics

  • Strengths: Long-context reasoning, nuanced analysis, following complex instructions, structured thinking, precision
  • Weaknesses: Can be formal, sometimes cautious
  • Best for: Analysis, research synthesis, complex reasoning, detailed documentation, large documents

Claude Optimization Techniques

Technique 1: Extended Context Exploitation

Claude excels with large context. Feed everything relevant:

I'm uploading 4 documents:
1. Q1-Q3 board presentations (63 pages)
2. Customer interviews (24 interviews)
3. Product roadmap (15 pages)
4. Competitive analysis (8 companies)

Task: Identify gaps between customer asks and roadmap. Recommend 3 Q4 priority features.

Requirements:
- Reference specific quotes and slides
- Note contradictions between documents
- Include: customer evidence, competitive context, effort estimate
- Flag assumptions due to missing info

Technique 2: Artifact-Driven Workflows

Claude creates "artifacts" you can iterate on:

Create a PRD for [feature] as an artifact.

Include:
- Problem statement
- User stories (3-5)
- Technical requirements
- Success metrics
- Open questions

Then critique as 3 stakeholders:
1. Engineering Lead: Technical feasibility
2. Customer Success: User adoption risks
3. Product Marketing: Positioning challenges

Update the PRD after each critique.

Technique 3: Meta-Cognitive Prompting

Claude responds well to prompts about its thinking:

Analyze [business problem].

Before your analysis:
1. What framework is most useful here? Why?
2. What key information is missing?
3. What's the strongest counter-argument?
4. What assumptions are you making? Rate confidence (1-10).

Now provide your analysis.

Technique 4: Structured Thinking Frameworks

Claude excels with explicit structures:

Evaluate European market expansion. Framework:

SECTION 1 - OPPORTUNITY: Market size, growth, competitive landscape
SECTION 2 - CAPABILITY: What we have, what we need, gaps
SECTION 3 - RISK MAPPING: Regulatory, operational, financial, competitive. Rate: Likelihood (1-10), Impact (1-10)
SECTION 4 - SCENARIOS: Best case, base case, worst case
SECTION 5 - DECISION: Go/No-Go criteria

✍️ PRACTICE EXERCISE 3.2: Claude Analysis

Task: Complex business scenario with documents

Use meta-cognitive prompting for deeper analysis.

Compare to simple prompt output quality.

Perplexity Mastery

Model Characteristics

  • Strengths: Research, current information, source synthesis, citations
  • Weaknesses: Shorter responses, less creative, more constrained
  • Best for: Research, fact-finding, comparing sources, staying current

Perplexity Optimization

Technique 1: Research Query Structuring

Weak:

"What's happening with AI regulation?"

Strong:

Compare AI regulation: US, EU, China (2024-2025).

Focus:
- Major legislation passed/proposed
- Definition of 'high-risk AI' per jurisdiction
- Enforcement mechanisms and penalties
- Industry response and compliance challenges

Prioritize: government docs, official statements, policy papers
Include: company examples affected by each approach

Technique 2: Source Quality Control

Research [topic].

Source requirements:
- Prioritize: Academic papers, gov reports, company filings
- Avoid: News aggregators, opinion pieces, marketing
- Date range: Last 6 months
- Include contradicting viewpoints if they exist

Cite sources with publication date.

Technique 3: Iterative Research

Turn 1: "Overview of quantum computing commercial applications 2024-2025" [Review]
Turn 2: "Focus on top 3 nearest-term applications. Leading companies?" [Review]
Turn 3: "For quantum optimization: technical blockers, timelines, expert predictions" [Review]
Turn 4: "Find case studies or pilots testing quantum optimization. Results?"

Technique 4: Comparative Research

Compare [Tech A] vs [Tech B] for [use case].

Structure:
1. Technical maturity (evidence/milestones)
2. Current adoption rate (numbers)
3. Cost considerations (pricing range)
4. Key limitations (technical, not marketing)
5. Expert predictions 2025-2026

For each: cite sources, note if experts disagree

✍️ PRACTICE EXERCISE 3.3: Research Challenge

Choose research topic for work/project.

Use Perplexity with:

  • Well-structured initial query
  • Source quality requirements
  • 2+ rounds of iterative refinement

Document the insights you gained versus a simple Google search.

Model Selection Framework

Which Tool for Which Task?

Use ChatGPT when:

  • Brainstorming and ideation
  • Creative content generation
  • Conversational interactions
  • Code generation and debugging
  • Quick iterations and sculpting

Use Claude when:

  • Deep analysis of complex problems
  • Working with large documents
  • Structured thinking required
  • Detailed documentation
  • Nuanced reasoning needed

Use Perplexity when:

  • Researching current events
  • Finding and citing sources
  • Comparing multiple viewpoints
  • Fact-checking and verification
  • Staying up-to-date

Cross-Model Workflow

Experts often use multiple models in sequence:

Example: Product Launch Strategy
1. Perplexity: Research market trends, competitor analysis
2. Claude: Synthesize research into strategic framework
3. ChatGPT: Generate creative campaign ideas
4. Claude: Evaluate and refine best ideas
5. ChatGPT: Write final copy and content
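The selection framework above can be written down as a routing table, which is how some teams encode it in tooling. A minimal sketch: the task categories and tool names come from the lists above, and the mapping is a heuristic default, not a hard rule.

```python
# A sketch of the model-selection framework as a routing table.
# Each workflow step is mapped to its default tool.

ROUTING = {
    "research": "Perplexity",     # current info, sources, citations
    "synthesis": "Claude",        # large-document analysis, structure
    "ideation": "ChatGPT",        # brainstorming, creative generation
    "evaluation": "Claude",       # nuanced critique and refinement
    "copywriting": "ChatGPT",     # final copy and content
}

def plan_workflow(steps):
    """Map each workflow step to its default tool, in order."""
    return [(step, ROUTING[step]) for step in steps]
```

The product-launch sequence above is exactly `plan_workflow(["research", "synthesis", "ideation", "evaluation", "copywriting"])`.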

🎯 MODULE 3 CHECKPOINT

You now understand optimization for:

  • ChatGPT - Creative, conversational, needs sculpting
  • Claude - Analytical, handles complexity, loves structure
  • Perplexity - Research-focused, needs specific queries

Integration Exercise:

New product launch needs:

  • Market research (which tool?)
  • Positioning strategy (which tool?)
  • Marketing copy (which tool?)

Task: Map each to optimal tool and write expert prompts.

MODULE 4: TROUBLESHOOTING LIKE AN EXPERT

Diagnosing and Fixing Common Problems

Even expert prompts sometimes produce suboptimal outputs. The difference is that experts know how to diagnose and fix problems quickly.

The Expert Troubleshooting Framework

  1. IDENTIFY THE GAP - What specifically is wrong?
  2. DIAGNOSE ROOT CAUSE - Why did the model produce this?
  3. APPLY THE FIX - What modification addresses it?
  4. VALIDATE - Did it work? If not, iterate.

Problem 1: Generic / Bland Output

The Symptom

AI gives advice that sounds like corporate handbook or generic blog.

Example:

Prompt: "How can I improve team productivity?"

Output: "To improve team productivity:
1. Set clear goals
2. Foster communication
3. Provide right tools
4. Recognize good work
5. Encourage work-life balance"

Useless—everyone knows this.

The Diagnosis

Root cause: Model is pattern-matching to generic training data. Giving "average" response.

The Fix

Add specificity and constraints:

Improve team productivity.

Specific situation:
- Team: 7 engineers, 2 designers
- Current: Shipping features but missing deadlines by 30%
- Constraints: Already 50+ hour weeks, morale decent
- Tools: Jira, Slack, GitHub
- Recent: Switched to 2-week sprints 3 months ago

Requirements:
- No generic productivity tips
- Focus on root cause, not symptoms
- Consider that time/effort is maxed
- Suggest what to STOP doing
- Include validation methods

What's actually going on?

Problem 2: Factual Errors / Hallucinations

The Symptom

AI states something confidently that's incorrect or makes up details.

The Diagnosis

Root cause: Models predict plausible text, not truth. Can't distinguish fact from fiction.

The Fix

Strategy 1: Verification Loops

Research [topic]. After your response:
1. List every factual claim you made
2. Rate confidence in each (1-10)
3. For claims with confidence <8, mark [VERIFY]
4. Suggest how I could verify [VERIFY] claims

Strategy 2: Request Sources

Research [topic]. For every factual claim, cite a source.

Format: [claim](source URL)

If uncertain, say "I'm uncertain about [X] because [reason]" rather than guessing.

Strategy 3: Cross-Validation

  • Get response from AI
  • Ask: "What might be wrong about this?"
  • Verify key claims independently
  • Feed corrections back if needed

Problem 3: Not Following Complex Instructions

The Symptom

Detailed instructions written, but AI ignores parts or gets confused.

Example:

Asked for: 5 sections, each with examples and data

Got: 3 sections, no examples, generic statements

The Diagnosis

Root causes:

  • Instructions too long/complex (working memory overload)
  • Ambiguous instructions
  • Conflicting requirements
  • Model lost track mid-generation

The Fix

Strategy 1: Chunking + Confirmation

This task has 3 steps. I'll describe all of them, you confirm, then we execute one at a time.

STEP 1: Analyze data, identify 5 key trends
STEP 2: For each trend, find 2 supporting examples
STEP 3: Create summary table with trends, examples, implications

Confirm: What are you doing in each step?

[Wait for confirmation]

Good. Start Step 1.

Strategy 2: Checklist Method

Create a market analysis report.

Checklist (confirm you'll include all):
- [ ] Executive summary (2-3 paragraphs)
- [ ] Market size with numbers
- [ ] 3-5 key trends with evidence
- [ ] Competitive landscape (4+ companies)
- [ ] SWOT analysis in table
- [ ] 3 strategic recommendations
- [ ] Sources cited

Confirm the checklist, then begin.
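The checklist method also works after generation: verify that the output actually contains each required section. A minimal sketch, assuming keyword-substring matching is good enough for your section headers (a crude heuristic; a real validator might match headings more precisely):

```python
# A sketch of post-generation checklist validation: report which
# required items are absent from the model's output. Matching is
# case-insensitive substring search, assumed sufficient for headers.

def missing_items(output, checklist):
    """Return checklist entries whose keyword is absent from the output."""
    lowered = output.lower()
    return [item for item in checklist if item.lower() not in lowered]
```

If `missing_items` returns anything, feed it straight back as the next turn: "You omitted these sections: ... Add them."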

Strategy 3: Simplify + Iterate

Instead of: "Do A, B, C, D, E all at once"

Try: "Let's start with A. Once done, we'll do B."

Problem 4: Wrong Tone or Style

The Symptom

Output is too formal when you want casual, or too casual when you want professional.

The Diagnosis

Root cause: Without explicit guidance, models default to "safe" professional tone.

The Fix

Strategy 1: Tone Specification

Write an email to my colleague Jake about the project delay.

Tone requirements:
- Talk like you would to a friend, not formal business
- Use "I" and "you," not "we" or "one"
- Contractions are fine (we're, didn't)
- Direct—no corporate jargon
- Conversational but professional

Think: conversation over coffee, not quarterly report.

Strategy 2: Provide Reference

Write [content]. Tone example (different topic):

"Hey Sarah—quick update on mockups. Pushed to your review stack, but heads up: mobile needs work. Thinking we simplify the nav. Thoughts?"

Match this: casual, direct, brief, human.

Strategy 3: Iterative Adjustment

Turn 1: [Get output]
Turn 2: "Too formal. Rewrite like you're texting a colleague, not writing a memo."
Turn 3: "Better, but still stiff. Sound like how people actually talk."

Problem 5: Too Long or Too Short

The Symptom

Wall of text when you wanted summary, or bullets when you needed depth.

The Diagnosis

Root cause: No explicit length guidance.

The Fix

Strategy 1: Explicit Constraints

Explain [topic].

Length: exactly 3 paragraphs, ~150 words total.

Structure:
- Para 1: Core concept (2-3 sentences)
- Para 2: Why it matters (2-3 sentences)
- Para 3: Common misconception (2-3 sentences)
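Explicit constraints have another advantage: they are checkable. A few lines of code can verify paragraph count and approximate word count before you accept the output, and trigger a "cut this down" follow-up turn when the check fails. A minimal sketch:

```python
# Minimal sketch: verify an output against "3 paragraphs, ~150 words".

def check_length(text: str, paragraphs: int = 3,
                 words: int = 150, tolerance: float = 0.2):
    """Return (ok, details) for a paragraph- and word-count constraint.

    Paragraphs are assumed to be separated by blank lines; word count
    must land within +/- tolerance of the target.
    """
    paras = [p for p in text.split("\n\n") if p.strip()]
    n_words = len(text.split())
    ok = (len(paras) == paragraphs
          and abs(n_words - words) <= words * tolerance)
    return ok, {"paragraphs": len(paras), "words": n_words}

sample = "One two three.\n\nFour five six.\n\nSeven eight nine."
ok, details = check_length(sample)  # fails: only 9 words, far below ~150
```

When the check fails, the `details` dict gives you the exact numbers to quote back to the model ("you wrote 450 words; the limit is 150").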

Strategy 2: Compression/Expansion

Turn 1: "Explain [topic]"
[Review]
Turn 2: "Cut this by 60% without losing the key insights."
OR
Turn 2: "Expand with 3 concrete examples and more detail."

Strategy 3: Reference Length

"Write this at about the same length as: [paste example of desired length]"

Problem 6: Lacks Specific Examples

The Symptom

All theory, no practical examples.

The Diagnosis

Root cause: Models default to abstract explanations unless explicitly prompted for concrete examples.

The Fix

Strategy 1: Explicit Requirements

Explain [concept].

Requirements:
- For every principle, provide a concrete example
- Examples must include specific numbers, names, and scenarios
- No abstract examples (bad: "like a company might do X")
- Good: "like how Slack uses X to achieve Y"

Minimum 3 examples total.

Strategy 2: Example-First Structure

Explain [concept].

Structure:
1. Start with a specific, concrete example
2. Extract the general principle from the example
3. Show 2-3 variations
4. End with a counter-example (when it doesn't work)

Problem 7: Doesn't Challenge Your Thinking

The Symptom

AI agrees with everything or gives what you asked without pushback.

The Diagnosis

Root cause: Models are trained to be helpful and agreeable; they won't naturally play devil's advocate.

The Fix

Strategy 1: Request Challenge

Here's my plan: [describe]

Your job:
1. Identify the 3 weakest parts
2. Present the strongest counter-argument
3. What am I not considering?
4. Why might this fail?

Be direct. I want critique, not validation.

Strategy 2: Red Team

I'm proposing [decision/strategy].

Act as three skeptics:
1. The pessimist: What's wrong with this?
2. The competitor: How would you exploit its weaknesses?
3. The experienced advisor: What mistake am I repeating?

Each gives their perspective, then I'll revise.
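If you run red-team reviews often, the persona framing is easy to template so every proposal gets the same three skeptics. A sketch; the persona list mirrors the example above and is freely editable:

```python
# Sketch: template the red-team prompt from a list of skeptic personas.

PERSONAS = [
    ("Pessimist", "What's wrong with this plan?"),
    ("Competitor", "How would you exploit its weaknesses?"),
    ("Experienced advisor", "What common mistake am I repeating?"),
]

def red_team_prompt(proposal: str) -> str:
    """Build a red-team prompt asking each persona for a critique."""
    lines = [f"I'm proposing: {proposal}", "", "Act as three skeptics:"]
    for i, (name, question) in enumerate(PERSONAS, 1):
        lines.append(f"{i}. The {name.lower()}: {question}")
    lines.append("Each gives their perspective; then I'll revise.")
    return "\n".join(lines)
```

Templating the personas keeps the critique pressure consistent instead of depending on how forcefully you happen to phrase the request that day.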

The Expert Troubleshooting Checklist

When output quality is poor:

Content Issues:

  • ☐ Too generic? → Add specific context and constraints
  • ☐ Wrong? → Add verification loops and sources
  • ☐ Lacks examples? → Require concrete, specific examples
  • ☐ Too abstract? → Force practical application

Structure Issues:

  • ☐ Ignoring instructions? → Chunk task, use confirmation
  • ☐ Wrong length? → Set explicit length constraints
  • ☐ Poorly organized? → Provide clear structure

Style Issues:

  • ☐ Tone wrong? → Specify tone with examples
  • ☐ Too agreeable? → Request challenge and critique
  • ☐ Too cautious? → Reframe request

✍️ PRACTICE EXERCISE 4.1: Diagnostic Practice

Three problematic outputs. For each: diagnose, fix, test.

Problem A:

Asked: "How should I price my SaaS?"

Got: "Consider value-based, competitive, cost-plus pricing. Research market and test."

Problem B:

Asked for 150-word ML explanation

Got: 450 words of dense technical content

Problem C:

Asked: "Critique my go-to-market strategy"

Got: "Solid strategy with clear focus and good channels. Timeline reasonable, budget sensible."

Write improved prompts, test them.

🎯 MODULE 4 CHECKPOINT

You now have systematic approaches to:

  • Diagnose why outputs are poor
  • Apply specific fixes for common problems
  • Iterate efficiently to quality
  • Build quality controls into prompts

Integration Exercise:

Take a prompt you use regularly that doesn't always work. Apply framework:

  • What specifically goes wrong?
  • What's the root cause?
  • What fix addresses it?
  • Test and validate

Document before/after results.