2.2. Few-Example Prompting

AI researchers have studied extensively how to get better results from LLMs. One area that has received significant attention is providing a few examples within prompts: can examples help guide an LLM's understanding of the task it's been asked to perform?

Researchers have found that, in many situations, providing examples within your prompts leads to more accurate results from the LLM. In research circles, this approach is formally known as "few-shot" prompting. We've opted to call this strategy "few-example" prompting to make it more human-readable.

At this point, you've already seen few-example prompting on numerous occasions in previous sections of this guide. The strategy applies to almost any prompt-engineering task. Let's dive into some "examples" 😂.
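Under the hood, a few-example prompt is nothing more than worked input/output pairs concatenated ahead of the new request. Here's a rough sketch in Python of how you might assemble one programmatically; the helper name and the exact "Input:"/"Output:" layout are our own illustration, not part of any library:

```python
# Sketch: assemble a few-example ("few-shot") prompt by stacking worked
# input/output pairs ahead of the new input. The function name and the
# exact layout are illustrative choices, not from any particular library.

def build_few_example_prompt(examples, new_input):
    """Concatenate example pairs, then the new input with an empty Output."""
    parts = []
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput:\n{example_output}")
    # The trailing empty "Output:" cues the model to complete the pattern.
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("Generate questions for a witness in a car accident case.",
     "- Can you describe the events leading up to the accident?"),
    ("Generate questions for a defendant in a discrimination case.",
     "- Are you aware of the company's discrimination policies?"),
]
prompt = build_few_example_prompt(
    examples,
    "Generate questions for a witness in a commercial lease dispute.",
)
print(prompt)
```

The resulting string is what you'd paste into ChatGPT (or send via an API call): two completed examples followed by the new input, with the blank "Output:" inviting the model to follow the established pattern.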

Example: Preparing for Depositions

You can have ChatGPT generate an exhaustive list of questions to ask a witness during a deposition. However, you don't want to ask questions that are not relevant to the case.

Prompt:



Input: Generate questions to ask a witness during a deposition in a car accident case?
Output:
- Can you describe the events leading up to the accident?
- What were the weather and road conditions?
- Did you admit fault or make any statements about the accident at the scene?

Input: Create a list of questions to ask a defendant during a deposition in a 
workplace discrimination case?
Output:
- Are you aware of the company's policies regarding workplace discrimination?
- Did the plaintiff make you aware of the alleged discriminatory behavior? 
- Were any actions taken by the company after the alleged incidents were reported?

Input: Questions to ask a witness during a deposition in a commercial lease dispute?

If you'd like to follow along and see ChatGPT's response, take a look at our conversation here.

Example: Case Briefings

Supercharge your legal research, case analysis, and strategy by providing ChatGPT a few examples.

Prompt:



Case: "Brown v. Board of Education, 347 U.S. 483 (1954)"
Output:
- Issue: Does segregation of public schools based on race deprive minority children 
of equal protection under the law as guaranteed by the 14th Amendment?
- Rule: The Equal Protection Clause of the 14th Amendment.
- Analysis: The Court found that segregation in public education has a detrimental 
effect on minority children because it is interpreted as a sign of inferiority. 
The impact is greater when it has the sanction of the law.
- Conclusion: The Court held that "in the field of public education the doctrine of 
'separate but equal' has no place," as segregated schools are inherently unequal.
 
Case: "Roe v. Wade, 410 U.S. 113 (1973)"

Again, we can see here that ChatGPT provided a response matching the format of the example we provided.

Example: Medical Chronologies

Let's change things up a bit by demonstrating how we can help guide the AI model to return its response in a format we desire by giving it a few examples. This example touches on output parsers covered in chapter 4.

Many firms choose to produce medical chronology reports using a spreadsheet. You can import data into Microsoft Excel (or similar software) in a variety of data formats. One of the more common formats is comma-separated values (CSV). Lucky for us, ChatGPT can work with CSV data. Here's an example of how we can have ChatGPT return CSV data that we can then import into our spreadsheet.

Prompt:
Extract the date, event, and generate a one-sentence description from the provided 
medical summaries and output in CSV format.

Examples:
Summary: On July 1, 2023, John Doe was involved in a car accident, during which he 
reported immediate pain in his neck, back, and left shoulder.
Output:
"2023-07-01","Accident Occurred","John Doe was involved in a car accident. Reported immediate pain in neck, back and left shoulder."

Summary: A day after the accident, on July 2, John had a detailed consultation with an orthopedic specialist. The orthopedist, recognizing the urgency of the situation, reset his shoulder back into place. Despite this progress, John was not out of the woods yet – the orthopedist strongly recommended that he commence physical therapy immediately.
Output:
"2023-07-02","Orthopedic Visit","Consultation with an orthopedist. Shoulder set back into place. Recommended physical therapy."

Summary: John started his physical therapy on July 4. It was a grueling session, with John reporting significant pain throughout the process. Despite the discomfort, John remained committed to the rehabilitation process, understanding that this was a crucial step towards his recovery.
Output:

Don't worry about trying to interpret all of the quotes and commas in the output. Instead, have ChatGPT display a table for you like the one shown below. Reference our interaction with ChatGPT, and experiment further on how you can generate structured data with small changes to your prompt.
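If you'd rather process the model's CSV lines in code than paste them into a spreadsheet, Python's built-in csv module handles the quotes and commas for you. A minimal sketch, using a sample line that mirrors the date/event/description format from the prompt above:

```python
import csv
import io

# Sample CSV line in the format the prompt above asks for:
# "date","event","one-sentence description".
raw = '"2023-07-01","Accident Occurred","John Doe was involved in a car accident."\n'

# csv.reader handles the quoting, so commas inside a field won't split it.
rows = list(csv.reader(io.StringIO(raw)))
date, event, description = rows[0]
print(date)   # 2023-07-01
print(event)  # Accident Occurred
```

The same approach scales to a multi-line response: collect the model's output into one string and let csv.reader split it into rows ready for a spreadsheet or database.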

Summary

Few-example prompting is analogous to how humans learn from only a few examples. For instance, if you show a child a few pictures of dogs and tell them "These are dogs," they can typically recognize other dogs they haven't seen before. It's important to note that while few-example prompting can be quite effective, it's not perfect. The quality of responses can vary based on the complexity of the prompt, the quality and relevance of the examples given, and the inherent limitations of the AI model itself. Even so, this is a valuable, low-cost prompting strategy you should be using regularly.
