Prompt Engineering Guide

Mastering Sentiment Analysis
on Qwen 2.5 72B

Stop guessing. See how professional prompt engineering transforms Qwen 2.5 72B's output for specific technical tasks.

The "Vibe" Prompt

"Analyze the sentiment of this text: {{text}}"
Low specificity, inconsistent output

Optimized Version

STABLE
You are an expert sentiment analysis AI. Here's the text for analysis: "{{text}}"

First, consider the overall tone and key emotional indicators. Identify any positive, negative, or neutral keywords, phrases, and their intensifiers or mitigators. Note any sarcasm, irony, or nuanced expressions that might alter the literal sentiment.

Next, synthesize these observations to determine the dominant sentiment. Assign one of the following labels:
- POSITIVE
- NEGATIVE
- NEUTRAL
- MIXED (if both strong positive and negative elements are present and roughly balanced)

Finally, provide a brief justification for your chosen sentiment label, referencing specific parts of the text.

Sentiment:
Justification:
Structured, task-focused, reduced hallucinations
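In practice, the `{{text}}` placeholder is filled programmatically before each call. A minimal sketch (the template string below is an abbreviated stand-in for the full prompt above, and `build_prompt` is an illustrative helper, not part of any library):

```python
# Abbreviated copy of the optimized prompt; the full version above
# would be used verbatim in production.
OPTIMIZED_PROMPT = (
    "You are an expert sentiment analysis AI. "
    'Here\'s the text for analysis: "{{text}}"\n\n'
    "First, consider the overall tone and key emotional indicators. ...\n\n"
    "Sentiment:\nJustification:"
)

def build_prompt(text: str) -> str:
    """Substitute the user's text into the prompt template.

    Plain string replacement is enough here because the placeholder
    appears exactly once and the input needs no special escaping.
    """
    return OPTIMIZED_PROMPT.replace("{{text}}", text)
```

The resulting string is what gets sent to Qwen 2.5 72B as the user message.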

Engineering Rationale

The optimized prompt provides clear instructions, defines a persona (expert sentiment analysis AI), and outlines a step-by-step chain-of-thought process: identify keywords, intensifiers, and nuance (sarcasm, irony), then synthesize. It also constrains the output to a fixed label set (POSITIVE, NEGATIVE, NEUTRAL, MIXED) and requires a justification, which reduces ambiguity and forces the model to show its work. This structure guides the model toward more accurate and consistent output than the vague "vibe" prompt.
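Because the prompt fixes both the label set and the output format (`Sentiment:` / `Justification:`), the response can be parsed and validated deterministically. A hedged sketch, assuming the model follows the requested format (`parse_sentiment` is an illustrative helper):

```python
import re

# The constrained label set defined in the optimized prompt.
VALID_LABELS = {"POSITIVE", "NEGATIVE", "NEUTRAL", "MIXED"}

def parse_sentiment(response: str) -> dict:
    """Extract the label and justification from a formatted response.

    Raises ValueError when the label is missing or falls outside the
    constrained set, so malformed outputs fail loudly instead of
    silently propagating.
    """
    label_match = re.search(r"Sentiment:\s*(\w+)", response)
    just_match = re.search(r"Justification:\s*(.+)", response, re.DOTALL)
    if not label_match or label_match.group(1).upper() not in VALID_LABELS:
        raise ValueError("response missing a valid sentiment label")
    return {
        "sentiment": label_match.group(1).upper(),
        "justification": just_match.group(1).strip() if just_match else "",
    }
```

Validating against the fixed label set is what makes the constrained output practical downstream: any drift from the four labels is caught at parse time rather than in later analysis.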

The optimized prompt should yield more accurate sentiment classifications for complex texts, thanks to its explicit instructions on handling nuanced language.
The optimized prompt should produce consistent output labels (POSITIVE, NEGATIVE, NEUTRAL, MIXED).
The required justification should improve the trust and interpretability of the model's decisions.

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts