Best practices of prompt engineering

Key takeaways:

  • Prompt engineering involves crafting precise instructions to guide language models toward accurate and relevant responses.

  • Best practices include clearly defining desired outputs, providing context, and using examples to improve response quality.

  • Iterative testing and refinement of prompts, along with awareness of model biases, enhance the effectiveness of language model applications.

When interacting with AI language models, well-crafted prompts are essential for obtaining good results. Prompt engineering is the practice of designing prompts in a way that guides the model to produce high-quality, relevant responses. This Answer discusses prompt engineering best practices, using sample Python code to demonstrate each technique in action. Following these best practices can help you get the most out of AI language models like OpenAI’s GPT.

What is prompt engineering?

Prompt engineering is the technique of crafting carefully formulated instructions that elicit the desired responses from a model. It extends beyond the standard way of engaging with AI, in which users frequently enter simple commands or queries. Instead, prompt engineering uses language’s rich structure to direct the model toward more accurate, contextually appropriate, and complex outputs.

Best practices for prompt engineering

Setting up the code

The example code below connects to OpenAI’s GPT-3.5-turbo model using the OpenAI API. Each prompt demonstrates a specific best practice.

Note: We are using the OpenAI API for this example, and running the provided code requires an OpenAI API key. Replace 'your-api-key' in the code below with your actual key. (The code targets the pre-1.0 openai Python SDK, which exposes openai.ChatCompletion.)
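
If you prefer not to hard-code the key, a common and safer pattern is to load it from an environment variable. Here is a minimal sketch using only Python’s standard library:

import os
import openai

# Read the key from the OPENAI_API_KEY environment variable instead of
# hard-coding it in source; set the variable in your shell before running.
openai.api_key = os.environ.get("OPENAI_API_KEY")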

import openai

# Set up your OpenAI API key (replace 'your-api-key' with your actual key securely)
openai.api_key = 'your-api-key'

def generate_response(prompt, temperature=0.7, max_tokens=150):
    """
    Function to generate a response using OpenAI API with best practices for prompt engineering.
    """
    try:
        # Sending a request to OpenAI's GPT model
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
            max_tokens=max_tokens
        )
        # Extracting and returning the response text
        return response['choices'][0]['message']['content'].strip()
    except Exception as e:
        return f"Error occurred: {e}"

# 1. Clear and Specific Prompt Example
prompt_1 = "Explain the process of photosynthesis in 100 words."
response_1 = generate_response(prompt_1)
print("Response for Prompt 1:\n", response_1)

# 2. Provide Context and Structure
prompt_2 = "List the steps involved in writing a good research paper. Provide a detailed explanation for each step in bullet points."
response_2 = generate_response(prompt_2)
print("\nResponse for Prompt 2:\n", response_2)

# 3. Using Constraints (Word Count and Format)
prompt_3 = "Describe artificial intelligence in 50 words or less. Keep the tone formal and precise."
response_3 = generate_response(prompt_3, temperature=0.5, max_tokens=60)
print("\nResponse for Prompt 3:\n", response_3)

# 4. Experiment with Variations of Phrasing
prompt_4 = "What are the benefits of machine learning in healthcare?"
response_4 = generate_response(prompt_4)
print("\nResponse for Prompt 4:\n", response_4)

prompt_5 = "How is machine learning transforming the healthcare industry?"
response_5 = generate_response(prompt_5)
print("\nResponse for Prompt 5:\n", response_5)

# 5. Few-Shot Learning Example (Provide Examples)
prompt_6 = """
Translate the following English sentences to Spanish:
1. "How are you?" -> "¿Cómo estás?"
2. "Good morning!" -> "¡Buenos días!"
Now, translate: "Where is the nearest hospital?"
"""
response_6 = generate_response(prompt_6, temperature=0.3, max_tokens=60)
print("\nResponse for Prompt 6:\n", response_6)

# 6. Testing Temperature and Max Tokens
prompt_7 = "Write a creative and imaginative story about a lost astronaut."
response_7 = generate_response(prompt_7, temperature=0.9, max_tokens=200)
print("\nResponse for Prompt 7 (Creative Story):\n", response_7)

prompt_8 = "Write a concise description of the process of photosynthesis."
response_8 = generate_response(prompt_8, temperature=0.3, max_tokens=60)
print("\nResponse for Prompt 8 (Concise Description):\n", response_8)

With this setup, let’s explore the best practices applied to each example.

1. Crafting clear and specific prompts

A clear, specific prompt helps the model produce precise, accurate responses.

prompt_1 = "Explain the process of photosynthesis in 100 words."
response_1 = generate_response(prompt_1)
print("Response for Prompt 1:\n", response_1)

A prompt like this defines both the topic and the desired length, reducing ambiguity. Such specificity helps the model stay focused, generating responses that meet content and length requirements.
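
For contrast, consider a vaguer prompt. This hypothetical counterexample, reusing the generate_response helper, will typically produce a less focused answer of unpredictable length:

# A vague prompt with no length or scope guidance, for comparison
vague_prompt = "Tell me about photosynthesis."
vague_response = generate_response(vague_prompt)
print("Response for vague prompt:\n", vague_response)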

2. Providing context and structure

Providing context or a structured prompt often yields more accurate, well-organized responses.

prompt_2 = "List the steps involved in writing a good research paper. Provide a detailed explanation for each step in bullet points."
response_2 = generate_response(prompt_2)
print("\nResponse for Prompt 2:\n", response_2)

By requesting bullet points and specifying that each step should be explained, you’re guiding the model to provide a step-by-step breakdown. This context encourages structured, relevant answers.
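
The helper above sends only a user message. Another common way to supply context with the ChatCompletion API is a system message that frames the model’s role. Here is a small sketch with a hypothetical tutor persona:

# Provide standing context through a "system" message (the persona is illustrative)
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an experienced academic writing tutor."},
        {"role": "user", "content": prompt_2},
    ],
    temperature=0.7,
    max_tokens=150,
)
print(response['choices'][0]['message']['content'].strip())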

3. Using constraints (word count, tone, and format)

Constraining the response in terms of word count, tone, or other specific formats can further improve relevance and clarity.

prompt_3 = "Describe artificial intelligence in 50 words or less. Keep the tone formal and precise."
response_3 = generate_response(prompt_3, temperature=0.5, max_tokens=60)
print("\nResponse for Prompt 3:\n", response_3)

Constraints help the model tailor its response to your needs. By specifying a word limit and formal tone, this prompt ensures a concise, precise description that aligns with formal communication needs.
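
Constraints can also target the output format. As an illustrative sketch (the prompt and parameters here are hypothetical), you can ask for machine-readable output such as JSON:

# Constrain the format: request valid JSON only
prompt_json = "List three uses of AI in education as a JSON array of strings. Return only valid JSON, with no extra text."
response_json = generate_response(prompt_json, temperature=0.2, max_tokens=80)
print("\nResponse in JSON format:\n", response_json)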

4. Experimenting with variations of phrasing

Variations in phrasing can often result in subtle differences in the responses.

prompt_4 = "What are the benefits of machine learning in healthcare?"
response_4 = generate_response(prompt_4)
print("\nResponse for Prompt 4:\n", response_4)
prompt_5 = "How is machine learning transforming the healthcare industry?"
response_5 = generate_response(prompt_5)
print("\nResponse for Prompt 5:\n", response_5)

Although similar, these prompts may yield slightly different angles of response. Small changes in phrasing can guide the model to produce different types of content, from general benefits to specific transformative impacts.

5. Few-shot learning by providing examples

Few-shot learning involves giving the model example prompts and responses to set the format or pattern for a new task.

prompt_6 = """
Translate the following English sentences to Spanish:
1. "How are you?" -> "¿Cómo estás?"
2. "Good morning!" -> "¡Buenos días!"
Now, translate: "Where is the nearest hospital?"
"""
response_6 = generate_response(prompt_6, temperature=0.3, max_tokens=60)
print("\nResponse for Prompt 6:\n", response_6)

By showing examples, you help the model understand the desired format and context. This can be particularly useful in translation, pattern recognition, or step-by-step problem-solving tasks.
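
The same pattern generalizes beyond translation. Here is a hypothetical few-shot sketch for sentiment classification, where the examples establish both the labels and the output format:

# Few-shot sentiment classification (the reviews and labels are illustrative)
prompt_sentiment = """
Classify the sentiment of each review as Positive or Negative:
Review: "Absolutely loved it!" -> Positive
Review: "A complete waste of money." -> Negative
Review: "The battery lasts all day and the screen is gorgeous." ->
"""
response_sentiment = generate_response(prompt_sentiment, temperature=0.0, max_tokens=5)
print("\nResponse for sentiment prompt:\n", response_sentiment)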

6. Testing temperature and max tokens for desired creativity and length

Temperature controls the randomness of the model’s response: higher values generate more creative output, while lower values make responses more predictable and precise. The max_tokens setting, by contrast, caps the length of the response.

prompt_7 = "Write a creative and imaginative story about a lost astronaut."
response_7 = generate_response(prompt_7, temperature=0.9, max_tokens=200)
print("\nResponse for Prompt 7 (Creative Story):\n", response_7)
prompt_8 = "Write a concise description of the process of photosynthesis."
response_8 = generate_response(prompt_8, temperature=0.3, max_tokens=60)
print("\nResponse for Prompt 8 (Concise Description):\n", response_8)

Adjusting temperature allows you to control how creatively the model responds. For storytelling, a higher temperature encourages uniqueness, while for concise explanations, a lower temperature yields straightforward answers. Setting max tokens prevents overly long or verbose responses.
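
Keep in mind that max_tokens truncates rather than summarizes: if the limit is reached mid-answer, the API simply stops. In the ChatCompletion response, a finish_reason of 'length' signals such a cutoff, while 'stop' means the model finished on its own. A small sketch for detecting truncation:

# Detect whether max_tokens cut the reply short
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt_8}],
    temperature=0.3,
    max_tokens=60,
)
# 'length' means the reply was cut off by max_tokens; 'stop' means it ended naturally
if response['choices'][0]['finish_reason'] == 'length':
    print("Warning: response was truncated; consider raising max_tokens.")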

Additional best practices to keep in mind

  • Iterative testing and refinement: Prompt engineering often involves multiple rounds of testing and refinement. You might need to tweak prompts based on the output to get the most useful results; a minimal sketch follows this list.

  • Domain-specific knowledge: If your task is highly specialized, use industry terms or jargon to help guide the model. This narrows its focus and often results in more relevant responses.

  • Evaluating model biases: Be aware of biases that might affect responses. Adjust prompts to minimize these effects where possible.
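
As a concrete illustration of the first bullet, here is a minimal sketch of iterative refinement, reusing the generate_response helper from earlier (the prompts are hypothetical and exist only to show the progression):

# Successive refinements of the same request, from vague to constrained
draft_prompts = [
    "Tell me about neural networks.",  # v1: vague
    "Explain neural networks to a beginner in three short paragraphs.",  # v2: adds audience and length
    "Explain neural networks to a beginner in three short paragraphs, using a cooking analogy and no math notation.",  # v3: adds style constraints
]
for version, prompt in enumerate(draft_prompts, start=1):
    print(f"\n--- Refinement v{version} ---\n", generate_response(prompt))

Compare the outputs at each step and keep the wording that produces the most useful result.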

Conclusion

Prompt engineering is a critical skill for maximizing the effectiveness of AI language models. By crafting prompts carefully, providing context, experimenting with parameters, and iterating on results, you can guide models like GPT to produce highly relevant, accurate responses. With practice and these best practices, prompt engineering becomes a powerful tool in your AI interaction toolkit.

Frequently asked questions

How can I engineer prompts effectively?

Effective prompt engineering involves crafting clear, concise, and specific instructions that help AI models deliver their best results. Key principles include specificity, clarity, conciseness, context, and open-endedness.


Which of the five principles of prompt engineering is the most helpful?

Specificity is often the most impactful principle. By providing detailed instructions, you minimize misinterpretation and help ensure the AI generates the desired results.


What is a key best practice in effective prompting?

Iterative refinement is a crucial best practice. Start with a basic prompt and gradually refine it based on the AI’s output to achieve optimal results.

