2.2. Few-Example Prompting

AI researchers have spent a great deal of time studying how to get better results from LLMs. One idea they've looked into closely is providing a few examples within prompts. Can providing examples help guide the LLM's understanding of the task it's been asked to perform?

Researchers found that in many situations, providing examples within your prompts leads to more accurate results from the LLM. In research circles, this approach is more formally referred to as "few-shot" prompting. We've opted to call this strategy "few-example" prompting to make it more readable.
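Everything in this chapter can be done directly in the ChatGPT interface, but the same idea carries over if you ever call the model from code: the examples simply become part of the prompt text you send. Below is a minimal sketch using the openai Python package; the helper function, the placeholder task, and the model name are illustrative assumptions rather than prompts from this chapter.

Python sketch:
# Minimal sketch of few-example ("few-shot") prompting through the OpenAI API.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY environment
# variable; the task, example pair, and model name are placeholders.
from openai import OpenAI

client = OpenAI()

def build_few_example_prompt(instruction, examples, new_input):
    # Assemble the instruction, (input, output) example pairs, and the new input
    # into one prompt, leaving the final "Output:" blank for the model to fill in.
    parts = [instruction, "", "Examples:"]
    for example_input, example_output in examples:
        parts += [f"Input: {example_input}", f"Output: {example_output}", ""]
    parts += [f"Input: {new_input}", "Output:"]
    return "\n".join(parts)

prompt = build_few_example_prompt(
    instruction="Rewrite each case note as one plain-English sentence.",
    examples=[(
        "Pltf. slipped on wet floor at def.'s store, 3/1/23.",
        "The plaintiff slipped on a wet floor at the defendant's store on March 1, 2023.",
    )],
    new_input="Def. failed to post warning signs per store policy.",
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)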

Example: Preparing for Depositions

You can have ChatGPT generate an exhaustive list of questions to ask a witness during a deposition. However, you don't want to ask questions that aren't relevant to the case. Including a few example questions in your prompt helps steer ChatGPT toward the kinds of questions that actually matter for your case.

If you'd like to follow along and see ChatGPT's response, take a look at our conversation here.

Example: Case Briefings

Supercharge your legal research, case analysis, and strategy by providing ChatGPT with a few examples.

Again, we can see that ChatGPT's response matches the format of the example we provided.

Example: Medical Chronologies

Let's change things up a bit and demonstrate how giving the AI model a few examples can guide it to return its response in the format we want. This example touches on output parsers, which are covered in chapter 4.

Many firms choose to produce medical chronology reports using a spreadsheet. You can import data into Microsoft Excel (or similar software) in a variety of data formats, and one of the more common is comma-separated values (CSV). Lucky for us, ChatGPT can work with CSV data. Here's an example of how we can have ChatGPT return CSV data that we can then import into our spreadsheet.

Prompt:
Extract the date, event, and generate a one-sentence description from the provided medical summaries and output in CSV format.

Examples:
Summary: On July 1, 2023, John Doe was involved in a car accident, during which he reported immediate pain in his neck, back, and left shoulder.
Output:
"2023-07-01","Accident Occurred","John Doe was involved in a car accident. Reported immediate pain in neck, back and left shoulder."

Summary: A day after the accident, on July 2, John had a detailed consultation with an orthopedic specialist. The orthopedist, recognizing the urgency of the situation, reset his shoulder back into place. Despite this progress, John was not out of the woods yet – the orthopedist strongly recommended that he commence physical therapy immediately.
Output:
"2023-07-02","Orthopedic Visit","Consultation with an orthopedist. Shoulder set back into place. Recommended physical therapy."

Summary: John started his physical therapy on July 4. It was a grueling session, with John reporting significant pain throughout the process. Despite the discomfort, John remained committed to the rehabilitation process, understanding that this was a crucial step towards his recovery.
Output:

Don't worry about trying to interpret all of the quotes and commas in the output. Instead, have ChatGPT display the results as a table for you. Reference our interaction with ChatGPT, and experiment with how you can generate structured data through small changes to your prompt.
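If you'd rather handle the raw CSV yourself, here's a minimal sketch using Python's built-in csv module. It assumes the model's CSV output has been copied into the csv_text variable (the two rows shown are the example outputs from the prompt above); the header row and the medical_chronology.csv file name are placeholders you can change before importing the file into Excel.

Python sketch:
# Minimal sketch: turn ChatGPT's raw CSV output into a file Excel can open.
# Assumes csv_text holds the comma-separated lines ChatGPT returned; the header
# row and output file name are placeholders.
import csv
import io

csv_text = '''"2023-07-01","Accident Occurred","John Doe was involved in a car accident. Reported immediate pain in neck, back and left shoulder."
"2023-07-02","Orthopedic Visit","Consultation with an orthopedist. Shoulder set back into place. Recommended physical therapy."'''

# csv.reader correctly handles the quotes and embedded commas for us.
rows = list(csv.reader(io.StringIO(csv_text)))

with open("medical_chronology.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Date", "Event", "Description"])  # header row for the spreadsheet
    writer.writerows(rows)

# Quick sanity check: print the rows as a simple table in the terminal.
for date, event, description in rows:
    print(f"{date} | {event} | {description}")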

Summary

Few-example prompting is analogous to how humans learn from only a few examples. For instance, if you show a child a few pictures of dogs and tell them "These are dogs," they can typically recognize other dogs they haven't seen before. It's important to note that while few-example prompting can be quite effective, it's not perfect. The quality of responses can vary based on the complexity of the prompt, the quality and relevance of the examples given, and the inherent limitations of the AI model itself. However, this is a valuable, low-cost prompting strategy you should be using regularly.
