Prompt Engineering Guide

Mastering the Language-Learning Tutor Prompt on Phi-3.5 MoE

Stop guessing. See how professional prompt engineering transforms Phi-3.5 MoE's output for specific technical tasks.

The "Vibe" Prompt

"Hey! I want to learn Spanish. Teach me some basic phrases and grammar. Make it fun!"
Low specificity, inconsistent output

Optimized Version

STABLE
{
  "task": "Language Learning Tutor",
  "language": "Spanish",
  "level": "Beginner",
  "persona": {
    "name": "LinguaBot",
    "role": "Encouraging Language Tutor",
    "tone": "Friendly, Patient, Enthusiastic"
  },
  "constraints": [
    "Use a simplified vocabulary for explanations.",
    "Focus on practical, conversational phrases.",
    "Introduce grammar concepts incrementally.",
    "Provide immediate constructive feedback.",
    "Maintain a positive and supportive learning environment."
  ],
  "chain_of_thought_steps": [
    "1. Acknowledge user's goal with enthusiasm.",
    "2. Start with a common greeting phrase in Spanish and its translation.",
    "3. Introduce a core grammar concept (e.g., gender of nouns, basic conjugations) relevant to the greeting.",
    "4. Provide 2-3 additional useful vocabulary words/phrases.",
    "5. Formulate a simple practice exercise (e.g., 'Translate this:', 'Fill in the blank:').",
    "6. Offer encouragement and prompt the user for their next step or question."
  ],
  "example_interaction": {
    "user_input": "Hello, teach me some Spanish!",
    "assistant_response": "¡Hola! ¡Excelente decisión iniciar tu viaje en español! Soy LinguaBot, tu tutor personal. Vamos a empezar con algo esencial: 'Hola' significa 'Hello'. \n\nEn español, muchas palabras tienen género. Por ejemplo, 'el libro' (masculino) y 'la mesa' (femenino). Esto es importante para los artículos.\n\nAhora, ¿cómo dirías 'Good morning' en español? Try saying 'Buenos días'. Practica: ¿Cómo saludarías a alguien por la noche?"
  },
  "goals": [
    "Introduce foundational vocabulary and phrases.",
    "Explain basic grammar concepts clearly.",
    "Encourage active participation through exercises.",
    "Build learner confidence."
  ],
  "format_output": "conversational"
}
Structured, task-focused, reduced hallucinations

Engineering Rationale

The optimized prompt uses a structured JSON format to explicitly define the AI's role, constraints, goals, and a detailed chain of thought. This reduces ambiguity, guides the model's responses more effectively, and enforces a consistent pedagogical approach.

The chain_of_thought_steps break the learning process into manageable, logical actions, preventing the model from skipping essential teaching components. The example_interaction further clarifies expectations for both the input and the desired output style, and the explicit persona and tone keep the learning experience positive and supportive.

While the structured prompt has a higher raw token count up front, it reduces the need for follow-up prompts to correct off-topic or unhelpful responses. The more accurate initial generation yields token savings in the long run by minimizing wasted generations and steering the model toward the desired output immediately.
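In practice, a spec like this is typically sent as the system message of a chat-style request. A minimal Python sketch of that wiring (the build_messages helper and the wrapper sentence are illustrative choices, not part of any particular API; the spec is abbreviated):

```python
import json

# Abbreviated version of the structured spec shown above.
tutor_spec = {
    "task": "Language Learning Tutor",
    "language": "Spanish",
    "level": "Beginner",
    "persona": {
        "name": "LinguaBot",
        "role": "Encouraging Language Tutor",
        "tone": "Friendly, Patient, Enthusiastic",
    },
    "constraints": [
        "Use a simplified vocabulary for explanations.",
        "Introduce grammar concepts incrementally.",
    ],
    "format_output": "conversational",
}


def build_messages(spec, user_input):
    """Wrap the JSON spec in a system message for a chat-style API."""
    system_prompt = (
        "Follow this task specification exactly:\n"
        + json.dumps(spec, ensure_ascii=False, indent=2)
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]


messages = build_messages(tutor_spec, "Hello, teach me some Spanish!")
print(messages[0]["role"])  # system
```

The resulting messages list can then be handed to whichever client serves Phi-3.5 MoE; the spec itself stays model-agnostic.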

25%
Token Efficiency Gain
The optimized prompt explicitly defines the teaching methodology, whereas the naive prompt does not.
The optimized prompt ensures grammar concepts are introduced incrementally, which the naive prompt only vaguely implies.
The optimized prompt provides clear instructions for immediate feedback and practice exercises, unlike the naive prompt.
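The exact efficiency figure will vary by task, but the mechanism behind it can be sketched with a toy accounting of session cost. The turn contents and the word-count proxy below are hypothetical; real token counts depend on the model's tokenizer:

```python
# Illustrative session-cost comparison: a vague prompt often needs
# correction turns, while a structured prompt front-loads the guidance.
# Whitespace-separated word counts stand in for real tokens.

def rough_tokens(text):
    # Crude proxy: one token per whitespace-separated word.
    return len(text.split())


# Hypothetical multi-turn session starting from the vague prompt.
naive_turns = [
    "Hey! I want to learn Spanish. Teach me some basic phrases and grammar. Make it fun!",
    "That was too advanced, keep it at a beginner level, please.",
    "Can you add a short practice exercise after each concept?",
]

# One up-front structured spec (paraphrased); no correction turns needed.
structured_turns = [
    "JSON spec: task, language, level, persona, constraints, "
    "chain_of_thought_steps, example_interaction, goals, format_output.",
]

naive_cost = sum(rough_tokens(t) for t in naive_turns)
structured_cost = sum(rough_tokens(t) for t in structured_turns)
print(naive_cost, structured_cost)
```

Under these toy assumptions the corrected naive session costs more input tokens than the single structured turn; the point is the shape of the trade-off, not the specific numbers.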

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts