Prompt Engineering Guide

Mastering Customer Support Responses on Command R+

Stop guessing. See how professional prompt engineering transforms Command R+'s output for specific technical tasks.

The "Vibe" Prompt

"Hey R+, a customer's asking about their order, it's delayed. They're a bit annoyed. Can you draft a friendly but informative response? Make sure to sound empathetic and explain what's happening without giving too many technical details. Just tell them we're on it and when they can expect an update."
Low specificity, inconsistent output

Optimized Version

STABLE
{
  "task": "Customer Support Response",
  "sub_task": "Order Delay",
  "customer_sentiment": "Annoyed, Expectant",
  "response_tone": "Empathetic, Informative, Reassuring",
  "key_information_to_convey": [
    "Acknowledge delay",
    "Apologize for inconvenience",
    "Brief, non-technical reason (if known & appropriate)",
    "Action being taken (e.g., investigating, monitoring)",
    "Expected timeline for next update/resolution",
    "Offer further assistance"
  ],
  "constraints": [
    "Do not provide highly technical details",
    "Keep language accessible",
    "Maintain brand voice (professional, friendly)",
    "Do not make false promises or specific delivery dates unless confirmed"
  ],
  "example_input": {
    "customer_name": "Sarah P.",
    "order_number": "ORD-2023-5678",
    "original_eta": "October 26th",
    "new_information": "Shipping partner experienced unforeseen logistical issue, investigating impact on specific order."
  },
  "output_format": "Polite email/message",
  "follow_up_action": "Monitor order status and provide update by specified timeline."
}

Based on the provided example_input, draft the customer support response following the instructions.
Structured, task-focused, reduced hallucinations
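In practice you would not hand-write this JSON for every ticket. A minimal sketch of assembling the same structured prompt programmatically, so per-ticket fields (name, order number, ETA) can be filled in from your support system — the helper name is illustrative, not part of any SDK:

```python
import json

def build_support_prompt(customer_name, order_number, original_eta, new_information):
    """Assemble the structured order-delay prompt with per-ticket details."""
    spec = {
        "task": "Customer Support Response",
        "sub_task": "Order Delay",
        "customer_sentiment": "Annoyed, Expectant",
        "response_tone": "Empathetic, Informative, Reassuring",
        "key_information_to_convey": [
            "Acknowledge delay",
            "Apologize for inconvenience",
            "Brief, non-technical reason (if known & appropriate)",
            "Action being taken (e.g., investigating, monitoring)",
            "Expected timeline for next update/resolution",
            "Offer further assistance",
        ],
        "constraints": [
            "Do not provide highly technical details",
            "Keep language accessible",
            "Maintain brand voice (professional, friendly)",
            "Do not make false promises or specific delivery dates unless confirmed",
        ],
        "example_input": {
            "customer_name": customer_name,
            "order_number": order_number,
            "original_eta": original_eta,
            "new_information": new_information,
        },
        "output_format": "Polite email/message",
        "follow_up_action": "Monitor order status and provide update by specified timeline.",
    }
    # The model receives the JSON spec followed by the plain-language instruction.
    return (
        json.dumps(spec)
        + " Based on the provided example_input, draft the customer support"
        + " response following the instructions."
    )

prompt = build_support_prompt(
    "Sarah P.", "ORD-2023-5678", "October 26th",
    "Shipping partner experienced unforeseen logistical issue, "
    "investigating impact on specific order.",
)
```

Serializing from a dict also guarantees the JSON stays well-formed as the schema evolves, which hand-edited prompt strings do not.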

Engineering Rationale

The optimized prompt works by leveraging a highly structured JSON format that explicitly defines the task, sub-task, desired tone, key information, and constraints. Rather than leaving the model to infer requirements, it decomposes the request into clear, actionable components. The 'example_input' field guides the model toward the expected content and demonstrates how specific details should be incorporated. This specificity reduces ambiguity and directs the model toward a more precise, relevant output, so fewer tokens are spent on clarification or correction during generation.

35% Token Efficiency Gain
The optimized prompt explicitly defines customer sentiment, which the naive prompt only implies.
The optimized prompt provides a structured list of 'key_information_to_convey', ensuring all necessary points are covered systematically.
Constraints are clearly outlined in the optimized prompt, preventing undesirable outputs such as overly technical descriptions or false promises.
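Because 'key_information_to_convey' is an explicit list, coverage can also be checked after generation. An illustrative post-check — the keyword map below is an assumption for demonstration, and a production system would use something more robust (e.g. an LLM-as-judge pass):

```python
# Hypothetical keyword map from required points to surface cues in the reply.
REQUIRED_POINTS = {
    "Acknowledge delay": ["delay", "delayed"],
    "Apologize for inconvenience": ["apologize", "sorry"],
    "Offer further assistance": ["assist", "help", "questions"],
}

def missing_points(reply: str) -> list:
    """Return the required points that the drafted reply never touches."""
    text = reply.lower()
    return [
        point
        for point, keywords in REQUIRED_POINTS.items()
        if not any(kw in text for kw in keywords)
    ]

draft = (
    "We're sorry your order is delayed. Our team is investigating, and we'll "
    "update you by Friday. Please reach out if you have any questions."
)
```

A draft that skips a point can then be flagged or regenerated instead of being sent as-is.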

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts