Prompt Engineering Guide

Mastering Sentiment Analysis
on Phi-3.5 MoE

Stop guessing. See how professional prompt engineering transforms Phi-3.5 MoE's output for specific technical tasks.

The "Vibe" Prompt

"Analyze the sentiment of this text: 'I absolutely love this new phone! The camera is amazing and the battery lasts forever.'"
Low specificity, inconsistent output

Optimized Version

STABLE
Here's a review: 'I absolutely love this new phone! The camera is amazing and the battery lasts forever.' Think step by step:
1. Identify phrases or words indicating positive, negative, or neutral sentiment.
2. "absolutely love" indicates strong positive sentiment.
3. "amazing" indicates strong positive sentiment.
4. "lasts forever" (referring to the battery) indicates strong positive sentiment.
5. Synthesize these individual sentiments into an overall sentiment. All indicators are positive.
Based on this analysis, what is the overall sentiment (Positive, Negative, or Neutral)?
Structured, task-focused, reduced hallucinations

Engineering Rationale

The optimized prompt leverages chain-of-thought (CoT) to guide the model's reasoning process. By breaking down the task into explicit steps, it encourages the model to first identify sentiment-bearing phrases, then interpret them, and finally synthesize an overall sentiment. This structured approach reduces ambiguity, improves accuracy, and makes the model's 'thinking' transparent. For MoE models like Phi-3.5 MoE, clear step-by-step instructions can help activate relevant expert sub-models more effectively, leading to more robust and accurate sentiment analysis by ensuring a thorough consideration of all sentiment indicators.
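The step-by-step structure above can be generated programmatically, so any review gets the same scaffold. A minimal sketch in Python; the function name and the generic step wording are illustrative, not part of any library or of the Prompt Optimizer product:

```python
def build_cot_sentiment_prompt(review: str) -> str:
    """Wrap a review in a chain-of-thought sentiment-analysis prompt."""
    # Generic reasoning steps; unlike the worked example above, these
    # leave the per-phrase judgments to the model rather than pre-filling them.
    steps = [
        "Identify phrases or words indicating positive, negative, or neutral sentiment.",
        "Interpret the strength and polarity of each phrase you identified.",
        "Synthesize these individual sentiments into an overall sentiment.",
    ]
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return (
        f"Here's a review: '{review}'\n"
        "Think step by step:\n"
        f"{numbered}\n"
        "Based on this analysis, what is the overall sentiment "
        "(Positive, Negative, or Neutral)?"
    )

print(build_cot_sentiment_prompt(
    "I absolutely love this new phone! "
    "The camera is amazing and the battery lasts forever."
))
```

The resulting string can be sent to Phi-3.5 MoE through whatever inference client you already use; only the prompt text changes, not the call.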

The optimized prompt should consistently yield 'Positive' for the given example.
The optimized prompt should demonstrate a clearer reasoning path than the vibe prompt.
The optimized prompt should be less prone to misinterpreting nuanced or mixed sentiment, thanks to its structured approach.

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts