Prompt Engineering Guide

Mastering Meeting Notes Extraction
on Llama 3.1 70B

Stop guessing. See how professional prompt engineering transforms Llama 3.1 70B's output for specific technical tasks.

The "Vibe" Prompt

"Extract the key decisions, action items, and discussion points from the following meeting transcript: [Transcript Content]"
Low specificity, inconsistent output

Optimized Version

STABLE
You are an expert meeting summarizer. Your task is to meticulously extract and categorize information from a meeting transcript. Follow these steps precisely, and output your answer in JSON format.

## Input Transcript

[Transcript Content]

## Extraction Steps

1. **Identify Key Decisions:** Scan the transcript for definitive agreements, choices made, or conclusions reached. Extract the exact phrasing where possible, or clearly summarize the decision if paraphrasing is necessary. For each decision, identify who is responsible for it (if stated) and any associated deadlines.
2. **Identify Action Items:** Look for specific tasks assigned to individuals or teams, indicating future work or follow-up. Extract the action item, the person or team responsible, and the due date if mentioned. If no due date is mentioned, mark it as 'TBD'.
3. **Summarize Discussion Points:** Go through the transcript and concisely summarize the main topics discussed, the key arguments or viewpoints presented, and any significant questions raised or answered. Group related discussion points together for clarity.
4. **Identify Open Questions/Parking Lot Items:** Extract any questions that were raised but not answered, or topics explicitly designated for later discussion (e.g., 'parking lot').

## Output Format (JSON)

```json
{
  "meeting_title": "(Infer from context or leave null if not explicit)",
  "date": "(Infer from context or leave null if not explicit, format: YYYY-MM-DD)",
  "attendees": [
    "(List all distinct speakers identified in the transcript, e.g., 'John P.', 'Sarah K.')"
  ],
  "key_decisions": [
    { "decision": "string", "responsible_party": "string | null", "deadline": "string | null" }
  ],
  "action_items": [
    { "action": "string", "assigned_to": "string | null", "due_date": "string | null" }
  ],
  "discussion_points": [
    { "topic": "string", "summary": "string", "related_speakers": "array of strings" }
  ],
  "open_questions": [ "string" ]
}
```

Begin your extraction now, following these instructions precisely.
Structured, task-focused, reduced hallucinations

Engineering Rationale

The optimized prompt leverages a chain-of-thought approach by breaking the complex task into discrete, actionable steps. It explicitly defines the output format in JSON, reducing hallucination and ensuring structured, machine-readable output. By specifying the role ('expert meeting summarizer') and providing clear definitions for each extraction category (key decisions, action items, discussion points, open questions), it guides the model to focus on relevant information and reduces ambiguity. The inclusion of examples for identifying responsibility and deadlines further refines the extraction process. This structured approach significantly improves the accuracy, completeness, and consistency of the extracted information compared to the "vibe" prompt.
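In practice, you still need a thin layer of glue code around a prompt like this: filling the `[Transcript Content]` placeholder and parsing the model's reply, which may arrive wrapped in markdown fences or surrounding prose despite the instructions. The sketch below is a minimal, API-agnostic illustration (the helper names are ours, not part of any library; any client serving Llama 3.1 70B would slot in where the reply string comes from):

```python
import json

def build_prompt(template: str, transcript: str) -> str:
    """Fill the optimized template's [Transcript Content] placeholder."""
    return template.replace("[Transcript Content]", transcript)

def parse_model_json(raw: str) -> dict:
    """Extract the JSON object from a model reply.

    Tolerates ```json fences and surrounding prose by slicing from the
    first '{' to the last '}' before parsing.
    """
    start = raw.find("{")
    end = raw.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])
```

Slicing to the outermost braces is a deliberately simple recovery strategy; for production use you would pair it with a retry on `json.JSONDecodeError`.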

Output Constraints

The output JSON MUST strictly conform to the specified schema, including nested objects and array types.
All 'key_decisions' entries MUST include 'responsible_party' and 'deadline' fields (even if null).
All 'action_items' entries MUST include 'assigned_to' and 'due_date' fields (even if null).

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts