Prompt Engineering Guide
Mastering Sentiment Analysis
on Llama 3.1 70B
Stop guessing. See how professional prompt engineering transforms Llama 3.1 70B's output for specific technical tasks.
The "Vibe" Prompt
"Analyze the sentiment of the following text: [Text to analyze]"
Low specificity, inconsistent output
Optimized Version
You are Llama 3.1 70B, a highly advanced sentiment analysis engine. Your task is to perform a detailed sentiment analysis of the provided text. Follow these steps:
1. **Identify Key Entities/Topics:** List the main subjects or entities discussed in the text.
2. **Extract Sentiment Bearing Phrases:** For each key entity, identify specific words or phrases that convey sentiment (positive, negative, or neutral).
3. **Determine Overall Sentiment for Each Entity:** Based on the extracted phrases, assign a sentiment (Positive, Negative, Neutral, or Mixed) to each key entity.
4. **Synthesize Overall Document Sentiment:** Considering the sentiments of all entities, determine the overarching sentiment of the entire document. If mixed, explain why.
5. **Provide a Confidence Score:** Assign a confidence score (0-100%) for your overall sentiment determination.
Text: [Text to analyze]
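In practice, a template like this is filled in programmatically before being sent to the model. Below is a minimal sketch: the template text is taken verbatim from the optimized prompt above, while the helper function name (`build_sentiment_prompt`) is our own and not part of any particular SDK.

```python
# The optimized prompt, reproduced verbatim as a template with a {text} slot.
TEMPLATE = """You are Llama 3.1 70B, a highly advanced sentiment analysis engine. Your task is to perform a detailed sentiment analysis of the provided text. Follow these steps:
1. **Identify Key Entities/Topics:** List the main subjects or entities discussed in the text.
2. **Extract Sentiment Bearing Phrases:** For each key entity, identify specific words or phrases that convey sentiment (positive, negative, or neutral).
3. **Determine Overall Sentiment for Each Entity:** Based on the extracted phrases, assign a sentiment (Positive, Negative, Neutral, or Mixed) to each key entity.
4. **Synthesize Overall Document Sentiment:** Considering the sentiments of all entities, determine the overarching sentiment of the entire document. If mixed, explain why.
5. **Provide a Confidence Score:** Assign a confidence score (0-100%) for your overall sentiment determination.

Text: {text}"""


def build_sentiment_prompt(text: str) -> str:
    """Substitute the text to analyze into the optimized template."""
    return TEMPLATE.format(text=text)
```

The resulting string can then be passed as the user message to whichever inference API hosts the model.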
Structured, task-focused output with fewer hallucinations
Engineering Rationale
The optimized prompt leverages chain-of-thought prompting, breaking down the complex 'sentiment analysis' task into granular, logical steps. This guides the model through a structured thinking process, making its reasoning explicit and improving its accuracy and consistency. By first identifying entities and then their associated sentiments, it avoids superficial analysis and addresses potential mixed sentiments more effectively. The requirement for a confidence score also encourages the model to evaluate its own output.
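Because step 5 asks for an explicit 0-100% confidence score, downstream code can pull that number out of the response and, for example, route low-confidence analyses to human review. A minimal sketch, assuming the model phrases the score roughly as the prompt requests (the exact response wording is an assumption, not a guarantee):

```python
import re


def extract_confidence(response: str):
    """Extract the 0-100% confidence score from a model response.

    Returns the score as an int, or None if no valid score is found.
    The expected phrasing ("Confidence Score: 85%") is an assumption
    based on step 5 of the optimized prompt.
    """
    match = re.search(r"confidence\s*score\s*[:=]?\s*(\d{1,3})\s*%", response,
                      re.IGNORECASE)
    if not match:
        return None
    score = int(match.group(1))
    # Guard against malformed values outside the requested 0-100 range.
    return score if 0 <= score <= 100 else None
```

A score below some threshold (say, 70) could then trigger a retry or manual check.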
The optimized prompt explicitly asks for step-by-step reasoning, fulfilling the chain-of-thought requirement.
The optimized prompt introduces a persona ('You are Llama 3.1 70B') to improve response quality.
It requests a confidence score, which is a useful addition for downstream applications.
Ready to stop burning tokens?
Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.
Optimize My Prompts