Prompt Engineering Guide

Mastering Meeting Notes Extraction
on Mistral Large 2

Stop guessing. See how professional prompt engineering transforms Mistral Large 2's output for specific technical tasks.

The "Vibe" Prompt

"Extract the key points, decisions, and action items from these meeting notes. Give me the gist."
Low specificity, inconsistent output

Optimized Version

You are an AI assistant specialized in meeting analysis. Your goal is to accurately extract and categorize information from meeting transcripts or notes. Follow these steps meticulously:

1. **Identify the Meeting Title/Topic:** Scan the notes for explicit titles or infer the main subject discussed.
2. **Extract Attendees:** List all named individuals participating in the meeting.
3. **Summarize Key Discussion Points:** Condense the core topics and arguments presented with high fidelity, avoiding redundant information.
4. **Identify Decisions Made:** Isolate and clearly state any definitive choices, approvals, or resolutions.
5. **List Action Items:** Extract all tasks assigned to individuals, including:
   a. The specific task.
   b. The person responsible (if mentioned).
   c. Any associated deadline (if mentioned).
6. **Highlight Open Questions/Follow-up Items:** Note any unresolved issues or topics requiring further discussion.

Present the extracted information in a structured, easy-to-read format using markdown headings and bullet points. Ensure accuracy and conciseness. Do not invent information not present in the notes.

Meeting Notes:
"""
[MEETING_NOTES_PLACEHOLDER]
"""

Extracted Information:
Structured, task-focused, reduced hallucinations
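At request time, the placeholder is replaced with the raw notes before the prompt is sent to the model. A minimal sketch of that substitution step (the template is abbreviated here, and the function name is an illustration, not part of any SDK):

```python
# Template abbreviated; the full instruction block is the optimized
# prompt shown above. Only the placeholder mechanics matter here.
OPTIMIZED_PROMPT = '''You are an AI assistant specialized in meeting analysis.
...
Meeting Notes:
"""
[MEETING_NOTES_PLACEHOLDER]
"""

Extracted Information:'''


def build_prompt(notes: str) -> str:
    """Substitute the raw meeting notes into the template's delimited block."""
    return OPTIMIZED_PROMPT.replace("[MEETING_NOTES_PLACEHOLDER]", notes.strip())
```

The filled string can then be sent as the user message to Mistral Large 2 through whatever client you use; the API call itself is omitted since SDK signatures vary between versions.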

Engineering Rationale

The optimized prompt applies Chain-of-Thought (CoT) prompting by breaking the complex task into discrete, logical steps, guiding the model through a structured process so that every information category is addressed systematically. It explicitly defines the output format, reduces ambiguity, and limits hallucination by instructing the model not to invent information. The vibe prompt, by contrast, is vague and tends to produce inconsistent or incomplete extractions that force follow-up prompts to refine the output. By anticipating the categories users typically need from meeting notes, the optimized prompt reduces those follow-ups and saves tokens over the interaction as a whole.

25%
Token Efficiency Gain
The optimized prompt's output must contain distinct sections for Key Discussion Points, Decisions Made, and Action Items.
Every action item in the optimized output must name the responsible person when one appears in the source notes.
The vibe prompt's output will likely be a single, undifferentiated block of text.
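The first of these criteria can be checked mechanically before the output is accepted. A hedged sketch (the function name is illustrative, and the heading strings are assumptions based on the step names in the optimized prompt):

```python
import re

# Section headings the optimized prompt asks the model to emit.
REQUIRED_SECTIONS = ("Key Discussion Points", "Decisions Made", "Action Items")


def missing_sections(output: str) -> list:
    """Return the required markdown headings absent from a model response."""
    missing = []
    for title in REQUIRED_SECTIONS:
        # Match the title at any markdown heading level (#, ##, ...).
        pattern = rf"^#+\s*{re.escape(title)}"
        if not re.search(pattern, output, re.MULTILINE | re.IGNORECASE):
            missing.append(title)
    return missing
```

An empty result means the response is structurally complete; a non-empty result is a cheap signal to retry or flag the extraction.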

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts