Prompt Engineering Guide

Mastering Meeting Notes Extraction on Gemini 2.0 Flash

Stop guessing. See how professional prompt engineering transforms Gemini 2.0 Flash's output for specific technical tasks.

The "Vibe" Prompt

"Extract the key decisions, action items, and discussion points from these meeting notes. Summarize each briefly."
Low specificity, inconsistent output

Optimized Version

You are an expert meeting assistant. Your goal is to extract structured, actionable information from raw meeting notes.

**Meeting Notes:**
```
[INSERT_MEETING_NOTES_HERE]
```

**Instructions:**

1. **Identify Decisions:** What explicit decisions were made? For each decision, identify the key outcome and any associated participants responsible or impacted.
2. **Identify Action Items:** What are the specific tasks that need to be completed? For each action item, identify:
   * **Action:** The task description.
   * **Assignee(s):** Who is responsible?
   * **Due Date (if specified):** When is it expected to be completed?
   * **Status:** (e.g., 'Open', 'Pending', 'Completed' - default to 'Open' if not specified).
3. **Identify Key Discussion Points:** What were the main topics discussed, and what were the key takeaways or unresolved questions for each? Focus on substantive discussions, not conversational filler.
4. **Formatting:** Output the extracted information strictly in JSON format. Use the following structure:

```json
{
  "decisions": [
    { "id": int, "description": "string", "participants_impacted": ["string"] }
  ],
  "action_items": [
    { "id": int, "action": "string", "assignees": ["string"], "due_date": "YYYY-MM-DD" | null, "status": "Open" | "Pending" | "Completed" }
  ],
  "discussion_points": [
    { "id": int, "topic": "string", "summary": "string", "unresolved_questions": ["string"] }
  ]
}
```

**Chain of Thought:**

1. Read through the entire meeting notes to get an overall understanding of the meeting's purpose and flow.
2. First pass: scan for keywords indicating decisions (e.g., 'decided to', 'agreed that', 'will proceed with'). Extract and structure them into the 'decisions' array, assigning an 'id' and identifying participants.
3. Second pass: scan for action verbs or explicit assignments (e.g., 'John will', 'we need to', 'task:'). Extract and structure into the 'action_items' array, identifying the action and assignee(s), inferring due dates where possible, and setting status to 'Open' by default.
4. Third pass: identify distinct thematic sections or significant exchanges. For each, synthesize a 'topic', a concise 'summary' of what was discussed, and list any 'unresolved_questions' that emerge from that discussion. Populate the 'discussion_points' array.
5. Finally, review the extractions against the original notes one last time to ensure accuracy, completeness, and adherence to the JSON schema. Ensure all 'id' fields are populated with unique integers.
Structured, task-focused, reduced hallucinations
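In practice, using a template like this means substituting the raw notes into the placeholder and parsing the model's JSON reply before anything downstream touches it. A minimal sketch in Python, assuming a hypothetical `call_model` function that returns the model's text response (the template string is abbreviated here for brevity):

```python
import json

# Abbreviated stand-in for the full optimized prompt shown above.
PROMPT_TEMPLATE = """You are an expert meeting assistant. Your goal is to extract
structured, actionable information from raw meeting notes.

**Meeting Notes:**
[INSERT_MEETING_NOTES_HERE]
"""

def build_prompt(notes: str) -> str:
    # Substitute the raw notes into the placeholder slot.
    return PROMPT_TEMPLATE.replace("[INSERT_MEETING_NOTES_HERE]", notes)

def parse_extraction(raw_reply: str) -> dict:
    # Models sometimes wrap JSON replies in a ```json fence; strip it first.
    text = raw_reply.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)
```

Keeping prompt construction and reply parsing in separate functions makes each independently testable, which matters once the prompt is versioned and iterated on.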

Engineering Rationale

The optimized prompt works better for several reasons:

1. **Role Assignment:** It explicitly assigns the model the role of an 'expert meeting assistant,' which helps it contextualize the task and adopt an appropriate tone and focus.
2. **Clear Instructions with Examples:** It breaks down the extraction process into specific sub-tasks (decisions, action items, discussion points) and provides clear, detailed instructions for each, including what to look for and how to structure the output.
3. **Strict JSON Schema:** Providing a precise JSON schema eliminates ambiguity in output format, ensuring a parseable and consistent result. This is crucial for automation and downstream processing.
4. **Chain of Thought (CoT):** The CoT section guides the model through a step-by-step reasoning process, mimicking how a human would approach the task. This leads to more systematic and accurate extraction by encouraging multiple passes and structured thinking. It prompts the model to look for specific cues and organize information logically.
5. **Reduced Ambiguity:** The detailed instructions and CoT minimize assumptions the model needs to make, reducing the likelihood of irrelevant information or incorrect formatting. The prompt explicitly tells the model to distinguish 'substantive discussions' from 'conversational filler'.
6. **Enhanced Completeness & Accuracy:** The iterative passes suggested in the CoT (first pass, second pass, third pass, final review) help ensure that all relevant information is captured and correctly categorized.
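The cue-scanning "first pass" described above can also be approximated outside the model, for example to sanity-check that the notes contain decision language at all before spending tokens. A rough illustration (not the model's internal process) using Python's `re` module, with the cue phrases taken directly from the prompt:

```python
import re

# Cue phrases listed in the prompt's Chain of Thought, step 2.
DECISION_CUES = re.compile(
    r"\b(decided to|agreed that|will proceed with)\b", re.IGNORECASE
)

def find_decision_lines(notes: str) -> list[str]:
    # Return every line of the notes that contains a decision cue phrase.
    return [
        line.strip()
        for line in notes.splitlines()
        if DECISION_CUES.search(line)
    ]
```

A keyword scan like this is far cruder than the model's extraction, but it is a cheap pre-filter and a useful baseline when evaluating the prompt's recall.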

Validation Checks

* The output JSON must contain 'decisions', 'action_items', and 'discussion_points' as top-level keys.
* Each item in 'decisions' must have 'id', 'description', and 'participants_impacted' keys.
* Each item in 'action_items' must have 'id', 'action', 'assignees', 'due_date', and 'status' keys.
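These checks can be enforced with a small validator before any downstream processing trusts the model's output. A sketch using only the standard library:

```python
def validate_extraction(data: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the output passed."""
    errors = []
    # Top-level keys must all be present.
    for key in ("decisions", "action_items", "discussion_points"):
        if key not in data:
            errors.append(f"missing top-level key: {key}")
    # Per-item required fields for decisions.
    for i, d in enumerate(data.get("decisions", [])):
        for field in ("id", "description", "participants_impacted"):
            if field not in d:
                errors.append(f"decisions[{i}] missing {field}")
    # Per-item required fields for action items.
    for i, a in enumerate(data.get("action_items", [])):
        for field in ("id", "action", "assignees", "due_date", "status"):
            if field not in a:
                errors.append(f"action_items[{i}] missing {field}")
    return errors
```

Returning the full list of violations, rather than failing on the first one, makes it easier to log recurring failure modes and feed them back into prompt iteration.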

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts