Prompt Engineering Guide

Mastering Code Debugging on GPT-4o-mini

Stop guessing. See how professional prompt engineering transforms GPT-4o-mini's output for specific technical tasks.

The "Vibe" Prompt

"Debug this code:

```python
import pandas as pd

def analyze_data(file_path):
    try:
        df = pd.read_csv(file_path)
        # Assuming 'value_column' exists and we want to calculate its mean
        mean_val = df['value_column'].mean()  # Potential error if 'value_column' doesn't exist
        print(f"Mean value: {mean_val}")
        return mean_val
    except FileNotFoundError:
        print(f"Error: File not found at {file_path}")
        return None
    except KeyError as e:
        print(f"Error: Column not found - {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None

# Example usage:
# analyze_data('non_existent_file.csv')
# analyze_data('data_with_missing_column.csv')  # Assume this file exists but lacks 'value_column'
# analyze_data('valid_data.csv')

# The issue is when 'value_column' is missing or the file is malformed.
```

Explain the issues and suggest fixes."
Low specificity, inconsistent output

Optimized Version

You are an expert Python debugger. Your task is to identify and resolve issues in the provided Python code.

**Code to Debug:**

```python
import pandas as pd

def analyze_data(file_path):
    try:
        df = pd.read_csv(file_path)
        # Assuming 'value_column' exists and we want to calculate its mean
        mean_val = df['value_column'].mean()  # Potential error if 'value_column' doesn't exist
        print(f"Mean value: {mean_val}")
        return mean_val
    except FileNotFoundError:
        print(f"Error: File not found at {file_path}")
        return None
    except KeyError as e:
        print(f"Error: Column not found - {e}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None

# Example usage:
# analyze_data('non_existent_file.csv')
# analyze_data('data_with_missing_column.csv')  # Assume this file exists but lacks 'value_column'
# analyze_data('valid_data.csv')

# The issue is when 'value_column' is missing or the file is malformed.
```

**Debugging Process (Chain of Thought):**

1. **Understand the Goal:** The function `analyze_data` aims to read a CSV, calculate the mean of 'value_column', and handle various errors.
2. **Identify Potential Failure Points/Symptoms:**
   * `FileNotFoundError`: Already handled.
   * `KeyError`: Handled; occurs if 'value_column' is missing. This is a common and important case.
   * `Exception`: General catch-all. Could mask more specific issues.
   * `pd.read_csv` issues: Malformed CSVs (e.g., parsing errors, incorrect delimiter) could raise various `pandas` errors, which would currently fall under the generic `Exception`.
   * `df['value_column'].mean()`: If 'value_column' contains non-numeric data, `.mean()` would raise a `TypeError` or `DataError` (depending on pandas version/data), currently caught by `Exception`.
3. **Analyze Current Error Handling:** The existing `try-except` block is good but could be more granular or proactive, especially for issues related to data quality within the CSV.
4. **Propose Solutions & Refinements:**
   * **Column Existence Check (Proactive):** Instead of waiting for `KeyError`, check `if 'value_column' in df.columns:` before accessing it. This allows for a more controlled error message or alternative logic.
   * **Data Type Validation (Proactive):** After ensuring the column exists, validate its data type. Convert to numeric if necessary, handling conversion errors.
   * **Specific Pandas Exceptions:** Catch `pd.errors.EmptyDataError` (for empty CSVs) and `pd.errors.ParserError` (for malformed CSVs) more specifically than the general `Exception`.
   * **Logging vs. Printing:** For a real application, using a logging framework is preferable to `print` statements for error messages.
   * **Return Value Consistency:** Returning `None` for errors is fine, but sometimes raising custom exceptions might be better for the caller.

**Revised Code (with suggested improvements):**

```python
import pandas as pd
import logging

# Configure logging (optional, but good practice)
logging.basicConfig(level=logging.ERROR, format='%(asctime)s - %(levelname)s - %(message)s')

def analyze_data_improved(file_path, column_name='value_column'):
    try:
        df = pd.read_csv(file_path)
        if df.empty:
            logging.warning(f"Warning: CSV file '{file_path}' is empty.")
            return None
        if column_name not in df.columns:
            logging.error(f"Error: Required column '{column_name}' not found in '{file_path}'. Available columns: {list(df.columns)}")
            return None
        # Attempt to convert column to numeric, coercing errors to NaN
        df[column_name] = pd.to_numeric(df[column_name], errors='coerce')
        if df[column_name].isnull().all():
            logging.error(f"Error: Column '{column_name}' in '{file_path}' contains no valid numeric data after conversion.")
            return None
        mean_val = df[column_name].mean()
        print(f"Mean value for '{column_name}': {mean_val}")  # Keep print for direct output indication
        return mean_val
    except FileNotFoundError:
        logging.error(f"Error: File not found at '{file_path}'.")
        return None
    except pd.errors.EmptyDataError:
        logging.error(f"Error: CSV file '{file_path}' is empty or only contains headers.")
        return None
    except pd.errors.ParserError as e:
        logging.error(f"Error: Failed to parse CSV file '{file_path}' - {e}")
        return None
    except Exception as e:
        # Catch any other unexpected errors
        logging.critical(f"An unexpected critical error occurred while processing '{file_path}': {e}")
        return None
```
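As a quick illustration of the coercion step used in the revised code, here is a minimal sketch (the column values are made up for the demo):

```python
import pandas as pd

# A column with mixed numeric and non-numeric entries (hypothetical data)
df = pd.DataFrame({"value_column": ["10", "20", "oops", "30"]})

# errors='coerce' turns unparseable entries into NaN instead of raising
df["value_column"] = pd.to_numeric(df["value_column"], errors="coerce")

# .mean() skips NaN by default, so the result is (10 + 20 + 30) / 3
print(df["value_column"].mean())  # 20.0
```

This is why the revised function can safely call `.mean()` after conversion: bad cells become `NaN` rather than crashing the calculation.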
Structured, task-focused, reduced hallucinations

Engineering Rationale

The optimized prompt uses a structured chain-of-thought approach, instructing the model to first understand the goal, then methodically identify failure points, analyze existing error handling, and finally propose refined solutions. This guided thought process leads to a more comprehensive and robust debugging analysis. It specifically asks for a 'Revised Code' section, ensuring a concrete output. Key improvements include proactive checks (`column_name not in df.columns`, `pd.to_numeric` with `errors='coerce'`) and more specific exception handling (`pd.errors.EmptyDataError`, `pd.errors.ParserError`) beyond the general `KeyError` and `Exception` caught in the original. It also introduces logging as a best practice.

- The optimized prompt explicitly asks for a 'Chain of Thought' breakdown.
- The optimized prompt specifies the expected output structure: an analysis followed by revised code.
- The optimized prompt requests an 'expert Python debugger' persona.
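Putting those pieces together, the persona and structured task might be wired up like this. This is a sketch: `optimized_prompt` stands in for the full prompt body shown above, and the actual API call is left commented out because it requires an API key.

```python
# Stand-in for the full structured prompt shown above (truncated here)
optimized_prompt = (
    "**Code to Debug:** ...\n"
    "**Debugging Process (Chain of Thought):** ...\n"
    "**Revised Code (with suggested improvements):** ..."
)

messages = [
    # The persona goes in the system message
    {"role": "system", "content": "You are an expert Python debugger."},
    # The structured, chain-of-thought task goes in the user message
    {"role": "user", "content": optimized_prompt},
]

# With the official OpenAI Python client, this would be sent roughly as:
# from openai import OpenAI
# client = OpenAI()  # expects OPENAI_API_KEY in the environment
# response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(response.choices[0].message.content)

print(messages[0]["role"])  # system
```

Separating the persona (system role) from the task (user role) keeps the debugger framing stable across turns while the user content varies.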

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts