Prompt Engineering Guide

Mastering Meeting Notes Extraction
on Groq Llama 3.1 70B

Stop guessing. See how professional prompt engineering transforms Groq Llama 3.1 70B's output for specific technical tasks.

The "Vibe" Prompt

"Extract the key decisions, action items, and participants from these meeting notes. Summarize the main discussion points. Be concise. Meeting Notes: {meeting_notes_text}"
Low specificity, inconsistent output

Optimized Version

STABLE
You are an expert meeting summarizer. Your task is to meticulously extract and organize information from meeting transcripts. Follow these steps:

1. **Identify Key Decisions**: Create a numbered list of all explicit decisions made during the meeting. For each decision, include the topic and the outcome.
2. **Extract Action Items**: Create a numbered list of all action items. For each action item, state the task, the assigned person (if present), and the deadline (if present).
3. **List Participants**: Identify all individuals who spoke or were mentioned as attending the meeting.
4. **Summarize Main Discussion Points**: Provide a concise, bullet-point summary of the core topics discussed and their significant details, excluding decisions and action items already listed above.

Meeting Notes: {meeting_notes_text}

Output in the following structured JSON format:

```json
{
  "decisions": [
    {"topic": "<topic>", "outcome": "<outcome>"}
  ],
  "action_items": [
    {"task": "<task>", "assigned_to": "<person>", "deadline": "<date>"}
  ],
  "participants": ["<name>"],
  "discussion_summary": [
    "<point_1>",
    "<point_2>"
  ]
}
```
Structured, task-focused, reduced hallucinations

Engineering Rationale

The optimized prompt gives clear, step-by-step instructions for the extraction process, explicitly defining what information to look for in each category. It also specifies a rigid JSON output format, which removes ambiguity and makes the output programmatically parseable. Because the task and output are pre-structured, the model spends less effort interpreting the request, yielding more accurate and consistent extractions. Explicitly excluding decisions and action items from the discussion summary prevents redundancy. The structure also saves tokens: the model is guided directly to the desired information and format, avoiding verbose, unstructured responses and reducing the need for post-processing.
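In practice, wiring this prompt into a pipeline involves two small steps: filling the `{meeting_notes_text}` placeholder and parsing the model's JSON reply. A minimal sketch (the abridged `OPTIMIZED_PROMPT` constant and both helper names are illustrative, not part of any SDK) — note that `str.format` would choke on the literal braces in the JSON example, so a targeted `replace` is safer:

```python
import json

# Abridged stand-in for the full optimized prompt shown above.
OPTIMIZED_PROMPT = (
    "You are an expert meeting summarizer. Follow the steps above.\n"
    "Meeting Notes: {meeting_notes_text}\n"
    "Output in the structured JSON format shown above."
)

def build_prompt(meeting_notes_text: str) -> str:
    # str.format() would raise on the literal {braces} in the JSON template,
    # so substitute only the single placeholder.
    return OPTIMIZED_PROMPT.replace("{meeting_notes_text}", meeting_notes_text)

def parse_response(raw: str) -> dict:
    # Models sometimes wrap JSON in a ```json fence; strip it before parsing.
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(cleaned)
```

The same two helpers work regardless of which client library sends the request, since they only touch strings.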

15%
Token Efficiency Gain
The output for the optimized prompt MUST be valid JSON.
The 'decisions' array in the optimized output MUST contain JSON objects with 'topic' and 'outcome' keys.
The 'action_items' array in the optimized output MUST contain JSON objects with 'task', 'assigned_to', and 'deadline' keys (deadline can be null if not present).
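The three rules above can be enforced programmatically before the output enters downstream tooling. A hedged sketch (the `validate_output` function is illustrative, not a library API):

```python
import json

def validate_output(raw: str) -> list[str]:
    """Check model output against the rules above; empty list means it passes."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"output is not valid JSON: {exc}"]

    errors = []
    for i, d in enumerate(data.get("decisions", [])):
        if not isinstance(d, dict) or not {"topic", "outcome"} <= d.keys():
            errors.append(f"decisions[{i}] missing 'topic'/'outcome' keys")

    for i, a in enumerate(data.get("action_items", [])):
        # 'deadline' may be null, but the key itself must be present.
        if not isinstance(a, dict) or not {"task", "assigned_to", "deadline"} <= a.keys():
            errors.append(f"action_items[{i}] missing 'task'/'assigned_to'/'deadline' keys")

    return errors
```

Running this check on every response lets you retry or flag malformed outputs instead of silently passing them downstream.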

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts