2. Practical Prompt Engineering Strategies & Techniques

Prompts are the inputs you provide to a model, such as a question, a command, or background context. Prompts influence the quality, relevance, and accuracy of the model's outputs, so it is important to design them carefully and strategically.

2.1. General Prompting Tips, Tricks, and Hacks

Tip #1: Use context effectively

Context is the information that helps the model understand the task, the goal, and the expectations of the user. Context can include instructions, examples, keywords, formatting, tone, and more. Let's take a look at an example.

Original Prompt (bad):
Write a summary for the following court case below.

Revised Prompt (good):
Summarize the court case below in 3 sentences or less. 
Include the names of the parties, the main issue, and the outcome. 
Do not include any opinions or irrelevant details.

Our revised prompt includes context that instructs the model what kind of text to generate (a summary), how long it should be (3 sentences or less), what information it should include (names, issue, outcome), and what it should avoid (opinions, irrelevant details). This helps the model focus on the task and produce a relevant and concise summary.
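If you are building prompts programmatically, the same constraints can be assembled in code. The sketch below is illustrative only; the function name and parameters are hypothetical, not part of any library.

```python
# A minimal sketch of composing a prompt from explicit context constraints.
# build_summary_prompt and its parameters are illustrative, not a real API.
def build_summary_prompt(case_text, max_sentences=3):
    """Assemble a summarization prompt with task, length, and content constraints."""
    instructions = [
        f"Summarize the court case below in {max_sentences} sentences or less.",
        "Include the names of the parties, the main issue, and the outcome.",
        "Do not include any opinions or irrelevant details.",
    ]
    # Separate the instructions from the document with a blank line.
    return "\n".join(instructions) + "\n\n" + case_text

prompt = build_summary_prompt("Smith v. Jones: The plaintiff alleges ...")
```

Keeping each constraint as its own line makes it easy to add or drop requirements without rewriting the whole prompt.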

Tip #2: Provide examples

A popular technique is to provide the model with examples demonstrating the desired input-output format or behavior. We call this "few-example" prompting. The idea is simple: if you provide the model with a few examples, it can use them as a reference when producing its result. This strategy is especially effective when the examples you provide are real-world scenarios. You can think of it as quickly training the model to perform a specific task. Let's take a look at how to format the examples you want to include within your prompt.

Let's say you want to provide the model with a legal document and have it provide answers to questions you may have regarding that particular document. Here is how you can craft a prompt to do so:

Example Prompt #2A:
Document: The Constitution of the United States 
Question: How many amendments are there in the Constitution? 
Answer: There are 27 amendments in the Constitution.

Document: The Constitution of the United States 
Question: What is the name of the first amendment? 
Answer: The name of the first amendment is the Freedom of Speech Amendment.

Document: The Constitution of the United States 
Question: What is the name of the amendment that abolished slavery? 
Answer: The name of the amendment that abolished slavery is the Thirteenth Amendment.

Document: The Constitution of the United States 
Question: What is the name of the amendment allowing you the right to own a firearm?

Now, if you copy and paste Example Prompt #2A into ChatGPT, you'll see a response that follows the same format as the previous answers:

The name of the amendment allowing the right to own a firearm is the Second Amendment.
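The pattern above can also be generated in code when you have many questions to ask against the same document. This is a minimal sketch; the helper name and the example data are illustrative, not part of any library.

```python
# A minimal sketch of assembling a few-example prompt programmatically.
# build_few_example_prompt is a hypothetical helper, not a real API.
def build_few_example_prompt(document, examples, question):
    """Format prior Q/A pairs as demonstrations, then append the new question."""
    blocks = []
    for q, a in examples:
        blocks.append(f"Document: {document}\nQuestion: {q}\nAnswer: {a}")
    # The final block ends at "Answer:" so the model completes it
    # in the same format as the demonstrations above.
    blocks.append(f"Document: {document}\nQuestion: {question}\nAnswer:")
    return "\n\n".join(blocks)

examples = [
    ("How many amendments are there in the Constitution?",
     "There are 27 amendments in the Constitution."),
]
prompt = build_few_example_prompt(
    "The Constitution of the United States",
    examples,
    "What is the name of the amendment that abolished slavery?",
)
```

Ending the prompt with a trailing "Answer:" label is what cues the model to continue in the demonstrated format rather than answering free-form.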

Section 2.2, Few-Example Prompting, dives much deeper into this technique and how it can be applied to a wide range of prompts and tasks.

Tip #3: Use strong verbs

One of the ways to improve your ChatGPT prompts is to use strong verbs. Strong verbs are more specific, clear, and concise. They can help prevent vague, passive, or redundant responses from the AI.

Here are some examples of how you can use strong verbs in your prompts:

  • Instead of "Write a summary of the case", use "Summarize the case".

  • Instead of "Make a list of the key points", use "List the key points".

  • Instead of "Create a document that outlines the legal arguments", use "Outline the legal arguments".

  • Instead of "Explain the benefits and risks of each option", use "Compare the benefits and risks of each option".

Tip #4: Prevent hallucination

We touched on AI hallucination in the previous chapter. To recap, LLM hallucination occurs when a language model generates text that is fluent but factually incorrect or inconsistent. This can happen when the model is given a request that it does not have enough information or knowledge to answer. There are a few ways to prevent hallucination. The simplest approach involves making a very small change to your prompt. Here is what that change might look like:

Answer the following question based on the information provided. If you do not
know the answer, respond by saying "I do not know."

Including a simple statement like this in the instruction or context portion of your prompt can save you from a world of trouble. By telling the model not to fulfill the request if it does not know the answer or lacks the relevant information, we can reduce the risk of LLM hallucination and improve the quality and reliability of the generated text.
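If you apply this instruction to many prompts, it is convenient to wrap it in a small helper. The sketch below is illustrative; the function and constant names are hypothetical, not part of any library.

```python
# A minimal sketch of prepending a grounding instruction to any question.
# FALLBACK_INSTRUCTION and guard_against_hallucination are illustrative names.
FALLBACK_INSTRUCTION = (
    "Answer the following question based on the information provided. "
    'If you do not know the answer, respond by saying "I do not know."'
)

def guard_against_hallucination(context, question):
    """Combine the fallback instruction, the supplied context, and the question."""
    return (
        f"{FALLBACK_INSTRUCTION}\n\n"
        f"Information: {context}\n\n"
        f"Question: {question}"
    )

prompt = guard_against_hallucination(
    "The lease term is 12 months, beginning January 1.",
    "What is the monthly rent?",
)
```

Because the instruction always appears before the context and question, the model is steered toward declining rather than inventing an answer when the supplied information is insufficient.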

Other techniques that can be used to prevent hallucination are discussed in Chapter 4 of this guide.
