Prompt Engineering Guide

Mastering Regular Expression Writing on Gemini 2.0 Flash

Stop guessing. See how professional prompt engineering transforms Gemini 2.0 Flash's output for specific technical tasks.

The "Vibe" Prompt

"Hey Gemini, can you help me write some regular expressions? I need to find patterns in text. What regex do I need for email validation, and then for extracting all numbers from a string?"
Low specificity, inconsistent output

Optimized Version

STABLE
You are an expert in regular expressions, specifically proficient with the `re` module in Python. Your task is to generate accurate and efficient regular expressions based on precisely defined requirements. For each request, provide:

1. The regular expression pattern itself (e.g., r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$').
2. A brief, clear explanation of how the regex works.
3. A Python `re` module example demonstrating its usage with a sample string.

Here are the requests:

**Request 1: Email Validation**
- **Target**: Validate a standard email address format.
- **Constraints**: Must check for the presence of '@' and a domain with at least two characters after a dot. Allows alphanumeric characters and common special characters in the local part and domain name.

**Request 2: Number Extraction**
- **Target**: Extract all sequences of digits from a given string.
- **Constraints**: Numbers can be integers or decimals (e.g., '123', '3.14', '-5'). Do not extract numbers embedded within words (e.g., 'word123' should not yield '123' unless explicitly part of the number itself).

Think step-by-step for each request to construct the most appropriate regex. First, outline the components needed for the pattern, then combine them. Ensure edge cases are considered for both validation and extraction.
Structured, task-focused, reduced hallucinations
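
To make the requested output format concrete, here is a sketch of what a well-formed answer to Request 1 looks like. The pattern is the example quoted in the prompt itself; the helper name `is_valid_email` is our illustration, not actual Gemini 2.0 Flash output.

```python
import re

# Pattern quoted in the optimized prompt: local part, '@', domain,
# then a dot followed by at least two letters.
EMAIL_RE = re.compile(r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$")

def is_valid_email(text: str) -> bool:
    """Return True if the whole string looks like a standard email address."""
    return EMAIL_RE.fullmatch(text) is not None

print(is_valid_email("user.name+tag@example.co"))  # True
print(is_valid_email("user@example.c"))            # False: TLD too short
```

Note the use of `fullmatch` rather than `search`, so partial matches inside a longer string can never pass validation.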

Engineering Rationale

The optimized prompt leverages several best practices for interacting with LLMs. It establishes a clear persona ('expert in regular expressions, proficient with the `re` module in Python'), which guides the model's tone and expertise. It defines a rigid output format with numbered points for the regex, explanation, and example, making the output predictable and easy to parse. Crucially, it uses chain-of-thought prompting by explicitly instructing the model to 'Think step-by-step' and 'First, outline the components...', which yields more accurate and robust regex patterns. The constraints for each request are well-defined, eliminating the ambiguity of the naive prompt, which is conversational and high-level and tends to produce generic, less precise regexes without detailed explanations or usage examples.

0%
Token Efficiency Gain
The vibe prompt is shorter than the optimized prompt in raw token count, so the token savings here are zero or negative; the payoff comes from output quality, not input length.
The 'optimized_prompt' will produce a more structured output with distinct regex, explanation, and Python example for each request.
The 'optimized_prompt' will generate more accurate and robust regex patterns due to explicit constraints and chain-of-thought instructions.
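
The robustness claim is easy to spot-check. Below is a sketch of one pattern that satisfies Request 2's constraints (integers, decimals, an optional minus sign, and no digits embedded in words); the lookaround approach and the helper name `extract_numbers` are our illustrative choices, not output from Gemini 2.0 Flash.

```python
import re

# Digits with an optional sign and decimal part, fenced by lookarounds
# so digit runs glued to letters (e.g. 'word123') are skipped.
NUMBER_RE = re.compile(r"(?<![A-Za-z0-9])-?\d+(?:\.\d+)?(?![A-Za-z0-9])")

def extract_numbers(text: str) -> list[str]:
    """Return every standalone integer or decimal found in the text."""
    return NUMBER_RE.findall(text)

print(extract_numbers("123 apples, pi is 3.14, temp -5"))  # ['123', '3.14', '-5']
print(extract_numbers("word123 is ignored"))               # []
```

Lookarounds are used instead of `\b` word boundaries because `\b` sits between a digit and a letter, so a plain `\b\d+\b` would still pull '123' out of 'word123'.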

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts