2.2. Few-Example Prompting
AI researchers have studied extensively how to get better results from LLMs. One area they've spent a lot of time looking into is providing a few examples within prompts. Can providing examples help guide the LLM's understanding of the task it's been asked to perform?
Researchers found that in many situations, providing examples within your prompts leads to more accurate results from the LLM. In research circles, this approach is more formally referred to as "few-shot" prompting. We've opted to call this strategy "few-example" prompting to make it more human readable.
At this point, you've seen few-example prompting on numerous occasions in previous sections of this guide. This strategy is somewhat universal when it comes to prompt engineering. Let's dive into some "examples".
You can have ChatGPT generate an exhaustive list of questions to ask a witness during a deposition. However, you don't want to ask questions that are not relevant to the case.
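To make the pattern concrete, here's a minimal sketch of how a few-example prompt like this could be assembled in code. The case facts, example questions, and labels below are hypothetical placeholders, and the actual API call is omitted; the point is simply the structure: labeled examples first, then the new item for the model to complete.

```python
# Hypothetical example questions, each labeled the way we want
# the model to label new questions.
examples = [
    ("Where were you on the evening of the incident?", "relevant"),
    ("What is your favorite restaurant?", "not relevant"),
]

def build_prompt(task, examples, query):
    """Prepend labeled examples so the model mimics the pattern."""
    lines = [task, ""]
    for question, label in examples:
        lines.append(f"Question: {question}")
        lines.append(f"Label: {label}")
        lines.append("")
    # End with the new question and an open "Label:" for the model to fill in.
    lines.append(f"Question: {query}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_prompt(
    "Classify each deposition question as relevant or not relevant to the case.",
    examples,
    "Did you sign the contract on March 3rd?",
)
print(prompt)
```

Because the prompt ends mid-pattern, the model's most natural continuation is a label in the same format as the examples, which is exactly the behavior few-example prompting relies on.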
If you're curious about following along and seeing ChatGPT's response, take a look at our conversation here.
Supercharge your legal research and case analysis & strategy by providing ChatGPT a few examples.
Again, we can see here that ChatGPT provided a response matching the format of the example we provided.
Let's change things up a bit by demonstrating how we can help guide the AI model to return its response in a format we desire by giving it a few examples. This example touches on output parsers covered in chapter 4.
Many firms choose to produce medical chronology reports using a spreadsheet. You can import data into Microsoft Excel (or similar software) in a variety of different data formats. One of the more common formats is comma-separated values (CSV). Lucky for us, ChatGPT can work with CSV data. Here's an example of how we can have ChatGPT return CSV data that we can then import into our spreadsheet.
Don't worry about trying to interpret all of the quotes and commas in the output. Instead, have ChatGPT display a table for you like the one shown below. Reference our interaction with ChatGPT, and experiment further on how you can generate structured data with small changes to your prompt.
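If you'd rather process the CSV output programmatically than paste it into a spreadsheet, Python's standard `csv` module handles the quotes and commas for you. The chronology rows below are hypothetical stand-ins for what ChatGPT might return:

```python
import csv
import io

# Hypothetical CSV text of the kind ChatGPT might return
# for a medical chronology report.
csv_text = '''Date,Provider,Summary
"2021-03-04","Dr. Smith","Initial consultation for back pain"
"2021-03-18","Dr. Smith","MRI ordered; follow-up scheduled"
'''

# csv.reader interprets the quoting, so commas inside fields are safe.
rows = list(csv.reader(io.StringIO(csv_text)))
header, records = rows[0], rows[1:]

print(header)           # ['Date', 'Provider', 'Summary']
for record in records:
    print(record)
```

From here the rows can be written to an `.xlsx` file or loaded into any spreadsheet tool that accepts CSV.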
Few-example prompting is analogous to how humans learn from only a few examples. For instance, if you show a child a few pictures of dogs and tell them "These are dogs," they can typically recognize other dogs they haven't seen before. It's important to note that while few-shot learning can be quite effective, it's not perfect. The quality of responses can vary based on the complexity of the prompt, the quality and relevance of the examples given, and the inherent limitations of the AI model itself. However, this is a valuable low-cost prompting strategy you should be using regularly.