Prompt Engineering Guide

Mastering Sentiment Analysis
on Mistral Large 2

Stop guessing. See how professional prompt engineering transforms Mistral Large 2's output for specific technical tasks.

The "Vibe" Prompt

"Tell me the sentiment of this text: [TEXT]"
Low specificity, inconsistent output

Optimized Version

You are a highly analytical and precise sentiment analysis AI. Your task is to determine the sentiment of the provided text. Follow these steps meticulously:

1. **Understand the Core Subject**: Identify the main entities, topics, or ideas discussed in the text.
2. **Identify Keywords and Phrases**: Extract all words and phrases that inherently carry a positive, negative, or neutral connotation. Consider context and modifiers (e.g., 'not good' vs. 'good').
3. **Assess Contextual Nuance**: Evaluate how these keywords and phrases interact within the sentences and paragraphs. Look for ironic statements, sarcasm, negation, or intensifiers.
4. **Aggregate Sentiment Scores**: Assign a preliminary sentiment score (e.g., -1 for negative, 0 for neutral, +1 for positive) to each significant clause or sentence. Adjust based on contextual nuances.
5. **Determine Overall Sentiment**: Based on the aggregation, decide the overriding sentiment of the entire text.

Finally, present your sentiment analysis in a clear, concise, and definitive manner. State the overall sentiment first, then briefly justify your conclusion by referencing specific elements from the text. Use one of these labels ONLY: 'Positive', 'Negative', 'Neutral', 'Mixed'.

**Text for Analysis:** "[TEXT]"
Structured, task-focused, reduced hallucinations
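Since the optimized prompt is a template with a single [TEXT] slot, it can be filled programmatically before each API call. A minimal sketch in Python (the template string is abbreviated here for space; in practice, use the full prompt above):

```python
# Sketch: fill the [TEXT] placeholder in the optimized prompt template.
# OPTIMIZED_TEMPLATE is abbreviated -- paste the full prompt in real use.
OPTIMIZED_TEMPLATE = (
    "You are a highly analytical and precise sentiment analysis AI. "
    "Follow the analysis steps meticulously, then state one label ONLY: "
    "'Positive', 'Negative', 'Neutral', 'Mixed'.\n\n"
    '**Text for Analysis:** "[TEXT]"'
)

def build_prompt(text: str) -> str:
    """Insert the user's text into the [TEXT] slot.

    Double quotes in the input are downgraded to single quotes so they
    cannot break the quoted "Text for Analysis" section.
    """
    return OPTIMIZED_TEMPLATE.replace("[TEXT]", text.replace('"', "'"))

prompt = build_prompt("The battery life is great, but the screen is not good.")
```

The filled prompt can then be sent as the user message of a chat-completion request to Mistral Large 2.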

Engineering Rationale

The optimized prompt leverages chain-of-thought prompting, breaking a complex task into manageable, sequential steps. This forces the model to process information systematically, leading to more accurate and justifiable sentiment analysis. By first understanding the core subject, then identifying key sentiment-carrying elements, assessing context, and finally aggregating, the model builds a robust internal representation before making a final decision. The explicit instruction to 'justify your conclusion' enhances transparency and reduces hallucination. The constraint on output labels ('Positive', 'Negative', 'Neutral', 'Mixed') ensures consistency and ease of parsing for downstream applications. This structured approach reduces ambiguity and the likelihood of the generalized, less accurate responses often seen with simpler prompts.
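Because the prompt constrains the output to four labels and requires the label to be stated first, downstream parsing can be strict. A hypothetical validator (illustrative only, not part of any Mistral SDK):

```python
# Sketch: strictly parse a model response that states the label first.
ALLOWED_LABELS = {"Positive", "Negative", "Neutral", "Mixed"}

def parse_sentiment(response: str) -> str:
    """Return the leading sentiment label, or raise if the model drifted."""
    first_word = response.strip().split(maxsplit=1)[0].strip(".,:;'\"")
    if first_word not in ALLOWED_LABELS:
        raise ValueError(f"Unexpected label: {first_word!r}")
    return first_word

label = parse_sentiment("Mixed. The review praises the battery but pans the screen.")
```

Raising on unexpected output, rather than guessing, is what makes the label constraint valuable: a pipeline can retry or flag the response instead of silently ingesting an ambiguous answer.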

The optimized prompt consistently produces more accurate sentiment labels than the naive prompt for nuanced texts.
The justifications provided by the optimized prompt are logical and directly reference parts of the input text.
The optimized prompt reduces instances of 'conflicting' or 'ambiguous' sentiment responses by forcing a definitive choice from the allowed labels.
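The aggregation logic the prompt asks the model to follow (steps 4 and 5) can also be mirrored in code for spot-checking responses. A toy sketch under the simplifying assumption that clause scores are already available; it is no substitute for the model's contextual judgment:

```python
# Toy mirror of prompt steps 4-5: per-clause scores -> one allowed label.
def aggregate(scores: list[int]) -> str:
    """Map clause scores (-1 / 0 / +1) to one of the four allowed labels."""
    if not scores or all(s == 0 for s in scores):
        return "Neutral"
    has_pos = any(s > 0 for s in scores)
    has_neg = any(s < 0 for s in scores)
    if has_pos and has_neg:
        return "Mixed"          # conflicting clauses -> definitive 'Mixed'
    return "Positive" if has_pos else "Negative"

# e.g. 'battery life is great' (+1), 'screen is not good' (-1)
overall = aggregate([1, -1])
```

Note how the four-label scheme forces a definitive choice even for conflicting inputs: 'Mixed' is a first-class answer, not a failure mode.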

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts