How to Write a Good Prompt? —Part II.
Few-shot and Zero-shot Prompting
It is important to distinguish between two basic terms used when composing prompts. Zero-shot prompting is a technique in which the model solves a given task without any prior examples. The model has to work from a single question or instruction alone, without having been shown similar tasks beforehand. This method is particularly useful when you do not have the time or space to provide examples and simply want a direct answer to a specific question. The expert prompt illustrated in the first part of this post implicitly used this technique as well.
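For readers who assemble prompts in code, the following minimal Python sketch shows what a zero-shot prompt can look like in practice. The wording of the instruction, the sample text, and the variable names are only assumptions for illustration; only the structure of the prompt matters here.

# A zero-shot prompt: a single, direct instruction with no examples attached.
# The wording is only an illustration and does not reproduce the exact expert prompt from Part I.
legal_text = "The contract requires the Tenant to pay the rent by the 5th of each month."

zero_shot_prompt = (
    "You are a legal adviser. Summarize the following legal text concisely, "
    "preserving its normative content.\n\n"
    f"Original Text: {legal_text}\n"
    "Summary:"
)

# The assembled string is then sent to whichever language model or API you use.
print(zero_shot_prompt)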
Few-shot prompting follows a slightly different logic: before the actual task, the model is also given a few examples related to it. These examples help the AI understand the nature of the task and the type of response required. This method can be more effective for complex tasks, as it gives the model context and patterns from which it can generate more accurate and relevant responses.
In the case of a few-shot prompt, the expert prompt from the previous part can be supplemented with examples, for instance short passages of text together with their summaries:
Example 1:
Original Text: “The contract requires the Tenant to pay the rent by the 5th of each month. In the event of late payment, the Tenant shall pay interest on arrears at the rate of 0.5% per day.”
Summary: “The Tenant must pay by the 5th of each month, and in the event of late payment, the Tenant will be charged interest on arrears of 0.5% per day.”
Example 2:
Original Text: “The parties agree that in the event of breach of contract, the remedy shall be exclusively within the jurisdiction of the Hungarian courts.”
Summary: “In the event of breach of contract, the remedy shall be subject to the jurisdiction of the Hungarian courts.”
The examples attached to the prompt help the model find the logical thread along which to structure its answer, as the short sketch below illustrates.
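Here is a minimal Python sketch of how the examples above can be assembled into a few-shot prompt. The two examples are the ones quoted above; the new text at the end, the variable names, and the exact wording of the instruction are only assumptions for illustration.

# Few-shot: the worked examples (original text + summary) are placed in front of the new task,
# so the model can imitate the "Original Text / Summary" pattern.
examples = [
    ("The contract requires the Tenant to pay the rent by the 5th of each month. "
     "In the event of late payment, the Tenant shall pay interest on arrears at the rate of 0.5% per day.",
     "The Tenant must pay by the 5th of each month, and in the event of late payment, "
     "the Tenant will be charged interest on arrears of 0.5% per day."),
    ("The parties agree that in the event of breach of contract, the remedy shall be "
     "exclusively within the jurisdiction of the Hungarian courts.",
     "In the event of breach of contract, the remedy shall be subject to the jurisdiction of the Hungarian courts."),
]

def build_few_shot_prompt(new_text: str) -> str:
    parts = ["You are a legal adviser. Summarize the legal text concisely, preserving its normative content.\n"]
    for i, (original, summary) in enumerate(examples, start=1):
        parts.append(f'Example {i}:\nOriginal Text: "{original}"\nSummary: "{summary}"\n')
    parts.append(f'Original Text: "{new_text}"\nSummary:')
    return "\n".join(parts)

# The new text here is a made-up illustration; the assembled prompt is sent to the model as before.
print(build_few_shot_prompt("The Lessor may terminate the contract with 30 days' notice."))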
Chain of Thought Prompting
Chain of thought prompting (CoT) is a technique that encourages a model to reason step by step. It is particularly useful for complex problems, as it helps the model approach them in a logical and systematic way. With CoT, the AI is asked to spell out, in a detailed and systematic way, all the steps required to reach the final solution. The point is to force the model to form a chain of reasoning. In theory, this helps structure the information used in the answer while also reducing the hallucinations that occur during generation.
The latter can be particularly problematic, as hallucinations can introduce factual errors, incorrect information, or content that does not fit the context, and can therefore push the response far from what was expected. In addition, detecting them is far from trivial; it can take almost as much time as writing the expected answer yourself.
Advantages:
- Logical Thinking: Helps the model to approach problems as a series of logical steps.
- Transparency: The model’s answers become more transparent and understandable, as each step is explained in detail.
- Accuracy: Increases the accuracy and relevance of the answers as the model better understands the context of the problem.
Of course, the methods can be freely combined. The example below shows how role allocation can be combined with the step-by-step logic required by CoT:
“You are a legal adviser, and your job is to summarize a legal text concisely and in a professionally correct way, while preserving its normative content. Use legal terminology and highlight all relevant legal provisions and obligations. Make sure that the text is clear and understandable and, where relevant, highlight legal consequences and deadlines. At the end of the summary, give a brief overview of the purpose and scope of the document.
Introduction: First, define the purpose and context of the legal text. Describe the type of legal document (e.g. contract, court judgment, legislation).
Summary of main points: Identify the main provisions and obligations of the text. Describe the legal requirements and obligations in the text. Highlight any important deadlines and legal consequences.
Conclusion: Give a brief overview of the purpose and scope of the document. Summarize the key points and their relevance in the legal context.
Original Text: […]”
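As a rough sketch of how such a combined prompt might be assembled in practice, the Python snippet below puts the role description and the CoT-style steps into one prompt string. The variable names and the condensed wording of the steps are assumptions for illustration, and the exact way of sending the prompt depends on the model or API you use.

# Role assignment and chain-of-thought structure combined in a single prompt.
ROLE = (
    "You are a legal adviser, and your job is to summarize a legal text concisely and in a "
    "professionally correct way, while preserving its normative content."
)

COT_STEPS = (
    "Work through the task step by step:\n"
    "1. Introduction: define the purpose and context of the legal text and name the document type.\n"
    "2. Summary of main points: identify the main provisions, obligations, deadlines and legal consequences.\n"
    "3. Conclusion: give a brief overview of the purpose and scope of the document."
)

def build_cot_prompt(legal_text: str) -> str:
    return f'{ROLE}\n\n{COT_STEPS}\n\nOriginal Text: "{legal_text}"'

# Insert the document to be summarized in place of the ellipsis; the assembled prompt
# is then passed to whichever language model you use.
print(build_cot_prompt("…"))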
Of course, in addition to summarizing legal texts, many other tasks can be carried out efficiently with language models, provided that appropriate prompting techniques are used. Chain of thought prompting (CoT) enables logical, step-by-step reasoning, which is particularly useful when dealing with complex legal issues. The expert prompting technique uses role allocation to help the model interpret the context accurately and generate professionally correct answers. Few-shot prompting increases the accuracy and relevance of the model’s answers by providing examples, while zero-shot prompting provides quick and direct answers without prior examples.
Together, these techniques allow legal professionals to make the most of the capabilities of language models and to effectively summarize and analyze legal documents. The examples and techniques used have, of course, only briefly illustrated how specific questions and instructions can be given to a language model to enable it to produce the best possible answers.
Choosing and applying the right prompting techniques is essential to tailoring the output of language models to our specific goals. It is important to stress, however, that there is no one-size-fits-all approach: for each task, we should experiment until we find the technique or combination of techniques that works best.
István ÜVEGES is a researcher in Computer Linguistics at MONTANA Knowledge Management Ltd. and a researcher at the HUN-REN Centre for Social Sciences, Political and Legal Text Mining and Artificial Intelligence Laboratory (poltextLAB). His main interests include practical applications of Automation, Artificial Intelligence (Machine Learning), Legal Language (legalese) studies and the Plain Language Movement.