Prompt Engineering Guide

Mastering Regular Expression Writing on Llama 3.1 70B

Stop guessing. See how professional prompt engineering transforms Llama 3.1 70B's output for specific technical tasks.

The "Vibe" Prompt

"Write a regular expression to extract all email addresses from a given text."
Low specificity, inconsistent output

Optimized Version

STABLE
You are an expert in regular expressions, specifically for data extraction. Your task is to write a regular expression that accurately and comprehensively extracts all valid email addresses from a given text. Consider edge cases and common email address formats, including subdomains and plus addressing (e.g., user+alias@domain.com). To achieve this, follow these steps:

1. **Analyze Email Structure:** Recall the standard parts of an email address: local part, '@' symbol, and domain part.
2. **Local Part:** Determine a robust pattern for the local part, allowing for alphanumeric characters, dots, hyphens, and plus signs, ensuring it doesn't start or end with a dot or hyphen.
3. **Domain Part:** Define a pattern for the domain, including subdomains. It should consist of alphanumeric characters and hyphens, separated by dots, and end with a top-level domain (TLD) of at least two letters.
4. **Combine:** Integrate these parts with the '@' symbol.
5. **Refine (Optional but Recommended):** Consider word boundaries to prevent partial matches within other words.

Now, provide the regular expression.
Structured, task-focused, reduced hallucinations
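The steps in the optimized prompt can be followed by hand to sanity-check the model's answer. Below is one plausible regex built from those five steps, shown in Python. This is a sketch for illustration, not the model's verbatim output; it intentionally simplifies the full email grammar.

```python
import re

# One plausible pattern following the prompt's five steps.
# It is a practical approximation, not a full RFC-grade email grammar.
EMAIL_RE = re.compile(
    r"\b[A-Za-z0-9]"                      # Step 2: local part starts alphanumeric
    r"(?:[A-Za-z0-9._+-]*[A-Za-z0-9])?"   # Step 2: dots/hyphens/plus allowed inside, ends alphanumeric
    r"@"                                  # Step 4: '@' joins local and domain parts
    r"(?:[A-Za-z0-9-]+\.)+"               # Step 3: one or more labels, so subdomains match
    r"[A-Za-z]{2,}\b"                     # Step 3: TLD of at least two letters; Step 5: word boundary
)

text = "Reach us at user+alias@mail.example.com or support@example.org today."
print(EMAIL_RE.findall(text))
# → ['user+alias@mail.example.com', 'support@example.org']
```

Because every group is non-capturing (`(?:...)`), `findall` returns the full matched addresses rather than fragments, which matches the prompt's "comprehensively extracts" requirement.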

Engineering Rationale

The optimized prompt uses a chain-of-thought approach, breaking the problem into smaller, manageable steps. It explicitly tells the model what to consider (edge cases, specific formats like plus addressing, subdomains) and guides it through the construction process (local part, domain part, combination, refinement). This structured approach reduces ambiguity, steers the model toward a more accurate and robust solution, and leverages its reasoning capabilities. The expert persona also encourages higher-quality output. The "vibe" prompt, by contrast, is too generic to provide any of this guidance.

-200%
Token Efficiency Gain
The "vibe" prompt is concise but lacks detail.
The optimized prompt is significantly longer but provides a structured approach.
Neither prompt mentions 'Llama 3.1 70B' in its text; the optimization targets that model because the original request specified it.
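For readers curious how a negative "gain" arises: a minimal sketch of how such a figure could be computed, assuming the metric is the percentage change in token count relative to the vibe prompt. The token counts below are hypothetical, chosen only to show that -200% corresponds to the optimized prompt using three times as many tokens.

```python
def token_efficiency_gain(vibe_tokens: int, optimized_tokens: int) -> float:
    """Percentage change in prompt token count.

    Negative values mean the optimized prompt consumes MORE tokens
    than the vibe prompt (trading token cost for output reliability).
    """
    return (vibe_tokens - optimized_tokens) / vibe_tokens * 100

# Hypothetical counts: an optimized prompt 3x the size of the vibe prompt.
print(token_efficiency_gain(100, 300))
# → -200.0
```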

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts