Prompt Engineering Guide

Mastering Meeting Notes Extraction on GPT-4o-mini

Stop guessing. See how professional prompt engineering transforms GPT-4o-mini's output for specific technical tasks.

The "Vibe" Prompt

"Extract the key points, decisions, and action items from these meeting notes: [MEETING_NOTES_TEXT]"
Low specificity, inconsistent output

Optimized Version

You are an expert meeting notes transcriber and summarizer. Your goal is to accurately and concisely extract critical information from raw meeting notes.

Here's how you'll approach this task, step-by-step (Chain-of-Thought):

1. **Read and Understand**: Thoroughly read the entire meeting notes to grasp the overall context, topics discussed, and speaker interactions.
2. **Identify Key Discussion Topics**: List the main subjects or agenda items that were discussed. Prioritize topics that consumed significant time or led to actions/decisions.
3. **Extract Decisions**: For each discussion topic, identify any explicit decisions made. A decision is a firm agreement to proceed with a specific course of action, a choice between options, or a resolved matter. Clearly state what was decided and, if present, who is responsible.
4. **Extract Action Items**: For each discussion topic, identify any tasks or follow-up activities assigned. An action item typically includes: what needs to be done, who is responsible, and if mentioned, a deadline. If no deadline is specified, note its absence.
5. **Summarize Key Points**: For each discussion topic, distill the most important information, arguments, or updates that led to decisions or actions, or were significant on their own. Avoid conversational filler.
6. **Format Output**: Present the extracted information clearly and structurally, prioritizing readability. Use markdown headers and bullet points. Combine relevant points under their respective topics.

Meeting Notes to Process:
[MEETING_NOTES_TEXT]

Output Structure:

### Meeting Summary
[Overview of main topics]

### Decisions Made
* [Decision 1: What was decided. Responsible: [Name (if present)]]
* [Decision 2: ...]

### Action Items
* [Action 1: What needs to be done. Responsible: [Name]. Due: [Date (if present) or N/A]]
* [Action 2: ...]

### Key Discussion Points
* [Topic 1]
  * [Key point 1.1]
  * [Key point 1.2]
* [Topic 2]
  * [Key point 2.1]

Begin extraction now.
Structured, task-focused, reduced hallucinations
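In practice, the `[MEETING_NOTES_TEXT]` placeholder is filled in programmatically before the prompt is sent to the model. Below is a minimal sketch using the OpenAI Python SDK; the `build_prompt` helper, the abridged template, and the sample notes are illustrative, not part of the original guide.

```python
# Abridged version of the optimized template above; in real use,
# paste the full prompt text here.
OPTIMIZED_PROMPT = """You are an expert meeting notes transcriber and summarizer.
Your goal is to accurately and concisely extract critical information
from raw meeting notes.

Meeting Notes to Process:
[MEETING_NOTES_TEXT]

Begin extraction now."""


def build_prompt(notes: str) -> str:
    """Substitute the raw notes into the template placeholder."""
    return OPTIMIZED_PROMPT.replace("[MEETING_NOTES_TEXT]", notes)


# Sending it to GPT-4o-mini (requires OPENAI_API_KEY in the environment):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     temperature=0,  # low temperature favors deterministic extraction
#     messages=[{"role": "user", "content": build_prompt(raw_notes)}],
# )
# print(response.choices[0].message.content)
```

Keeping the template as a constant and substituting notes at call time keeps the prompt under version control and makes A/B comparisons against the naive version straightforward.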

Engineering Rationale

The optimized prompt leverages several best practices for LLM prompting:

1. **Role Assignment**: 'You are an expert...' sets the context and expectation for the model's persona, enhancing focus.
2. **Chain-of-Thought (CoT)**: The explicit step-by-step instructions (Read and Understand, Identify Topics, Extract Decisions, etc.) guide the model through the reasoning process, reducing errors and improving accuracy. This breaks down a complex task into manageable sub-tasks.
3. **Specific Definitions**: Clearly defining 'Decisions' and 'Action Items' helps the model distinguish between similar concepts and extract targeted information.
4. **Output Structure Enforcement**: Providing a detailed output structure with markdown elements ensures consistent, parseable, and human-readable results. This reduces the need for post-processing.
5. **Explicitness**: The prompt is highly explicit about what information to extract and how to present it, minimizing ambiguity.
6. **Reduced Ambiguity**: The naive prompt's 'key points' is vague; the optimized version breaks it down into 'Decisions', 'Action Items', and 'Key Discussion Points' with clearer guidelines for each.
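The "parseable" payoff of output structure enforcement can be made concrete: because the model is told to emit fixed `###` markdown headers, downstream code can split the response into named sections with a few lines of string handling. This parser and the sample response are an illustrative sketch, not part of the original guide.

```python
def parse_sections(output: str) -> dict:
    """Split a markdown response on '### ' headers into {header: body}."""
    sections = {}
    current = None
    for line in output.splitlines():
        if line.startswith("### "):
            current = line[4:].strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    # Join each section's lines and trim surrounding blank lines.
    return {k: "\n".join(v).strip() for k, v in sections.items()}


# Hypothetical model output following the enforced structure:
sample = """### Meeting Summary
Discussed Q3 launch timing.

### Decisions Made
* Launch moves to September. Responsible: Dana

### Action Items
* Update the roadmap. Responsible: Lee. Due: N/A

### Key Discussion Points
* Launch timing
"""
parsed = parse_sections(sample)
```

With the naive prompt, the response shape varies run to run and this kind of trivial parsing is unreliable; the enforced structure is what makes it safe.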

The optimized prompt will yield demonstrably more structured and accurate extractions, especially for longer or more complex meeting notes.
The 'Decisions Made' and 'Action Items' sections in the optimized output will be distinct and clearly formatted.
The optimized prompt's output for 'Key Discussion Points' will be more concise and relevant than the 'key points' from the naive prompt.

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts