Prompt Engineering Guide

Mastering Medical Report Summarization
on Llama 3.1 70B

Stop guessing. See how professional prompt engineering transforms Llama 3.1 70B's output for specific technical tasks.

The "Vibe" Prompt

"Summarize this medical report for a general audience. Make it easy to understand and highlight the key findings."
Low specificity, inconsistent output

Optimized Version

STABLE
You are a highly experienced medical summarization AI. Your task is to extract and synthesize critical medical information from the provided patient report. Present the summary in a clear, concise, and structured manner, suitable for a general audience.

Before generating the summary, follow these steps:

1. IDENTIFY_PATIENT_DEMOGRAPHICS: Extract patient gender, age, and any relevant identifying but non-private information.
2. EXTRACT_CHIEF_COMPLAINT: Pinpoint the primary reason for the patient's visit or the main health concern.
3. SUMMARIZE_PAST_MEDICAL_HISTORY: Briefly state any significant pre-existing conditions or relevant medical history.
4. OUTLINE_CURRENT_FINDINGS: Detail the most important diagnostic results, physical examination findings, and lab results.
5. IDENTIFY_DIAGNOSIS: State the established diagnosis or differential diagnoses.
6. LIST_TREATMENT_PLAN: Describe the proposed treatment, medications, procedures, or recommendations.
7. SYNTHESIZE_SUMMARY: Combine the extracted information into a coherent, easy-to-understand summary. Ensure medical jargon is explained simply or replaced with common terms where appropriate. Focus on impact and significance.

MEDICAL REPORT: [Insert Medical Report Here]

SUMMARY:
Structured, task-focused, reduced hallucinations
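In practice, the optimized template is filled with a concrete report before each request. A minimal sketch in Python, using only the standard library; `build_prompt` is an illustrative helper, not part of any SDK, and the template shown here is condensed (use the full optimized prompt in practice):

```python
# Sketch: substitute the placeholder in the optimized template with the raw
# report text before sending the prompt to Llama 3.1 70B. The template is
# condensed for brevity; `build_prompt` is a hypothetical helper.

PROMPT_TEMPLATE = (
    "You are a highly experienced medical summarization AI. "
    "Extract and synthesize critical medical information from the provided "
    "patient report, following the seven numbered steps. "
    "MEDICAL REPORT: [Insert Medical Report Here] SUMMARY:"
)

def build_prompt(report: str) -> str:
    # str.replace is used instead of str.format so that braces inside the
    # report text cannot break the substitution.
    return PROMPT_TEMPLATE.replace("[Insert Medical Report Here]", report.strip())

prompt = build_prompt("58-year-old male presenting with chest pain.")
```

Keeping the template as a single constant and substituting only the report text is what makes the prompt "stable": every request differs only in the report, so output structure stays consistent across runs.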

Engineering Rationale

The optimized prompt leverages chain-of-thought reasoning by breaking the summarization task into distinct, sequential steps. This forces the model to systematically process and extract specific types of information before synthesizing the final output. The role assignment ('highly experienced medical summarization AI') primes the model for a professional and accurate tone, and the explicit instructions to simplify jargon and focus on impact keep the summary tailored to a general audience. This structured approach reduces the cognitive load on the LLM and steers it toward a high-quality, relevant summary, avoiding the bare 'summarize' instruction that tends to produce less structured or incomplete output.
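Because the prompt names seven extraction steps, a generated summary can be spot-checked for completeness before it is shown to a reader. A minimal sketch, assuming the model labels its sections with recognizable keywords (real outputs may need looser matching; `missing_sections` is a hypothetical helper):

```python
# Sketch: check that a generated summary touches every section the optimized
# prompt asks for. Assumes section names appear verbatim in the output, which
# depends on the model and decoding settings.

REQUIRED_SECTIONS = [
    "demographics",
    "chief complaint",
    "medical history",
    "findings",
    "diagnosis",
    "treatment",
]

def missing_sections(summary: str) -> list[str]:
    # Case-insensitive keyword check; returns the sections not found.
    text = summary.lower()
    return [s for s in REQUIRED_SECTIONS if s not in text]

draft = (
    "Demographics: 58-year-old male. Chief complaint: chest pain. "
    "Medical history: hypertension. Findings: elevated troponin. "
    "Diagnosis: NSTEMI. Treatment: antiplatelet therapy."
)
```

A check like this turns the prompt's step list into a lightweight regression test: if a model update starts dropping sections, the gap is caught mechanically rather than by eye.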

The optimized prompt consistently extracts patient demographics, chief complaint, current findings, diagnosis, and treatment plan.
The optimized prompt ensures medical jargon is simplified or replaced for a general audience.
The optimized prompt produces summaries that are more structured and easier to read than those from the naive prompt.

Ready to stop burning tokens?

Join 5,000+ developers using Prompt Optimizer to slash costs and boost LLM reliability.

Optimize My Prompts