
What is prompt engineering? Explaining basic and applied techniques, examples, and tips!

 



The quality of outputs generated by AI (Artificial Intelligence) varies significantly depending on what kind of "prompt" is input.

As a result, "Prompt Engineering" is attracting considerable attention.

Prompt Engineering is not merely the task of creating instructions for generative AI. It is a strategic approach to creating "templates" that allow anyone to obtain outputs close to their ideal results at any time.

In this article, we explain everything from the basic concepts of Prompt Engineering to fundamental and applied techniques, along with specific prompt examples. We also introduce methods to enhance the accuracy and safety of prompts, such as tips and points to consider in Prompt Engineering.

This content is useful for those who want to learn various templates ranging from basic to technical prompts and utilize generative AI more effectively.

 

Nextremer offers data annotation services to achieve highly accurate AI models. If you are considering outsourcing annotation, free consultation is available. Please feel free to contact us.

 

 

1. What Is Prompt Engineering?



Prompt Engineering is the technology and methodology of designing and executing precise and effective prompts for generative AI.

The output accuracy of generative AI and the quality of the information it produces depend heavily on the quality of the prompts provided. If the precision of the prompt is low, the generative capabilities of the AI cannot be fully realized, resulting in only ambiguous and low-quality output.

Therefore, the design of appropriate prompts can be considered an extremely crucial element in the utilization of generative AI.

 

The Importance of Prompt Engineering in Generative AI

In recent years, as generative AI technology has advanced remarkably, the importance of Prompt Engineering has increased significantly. The primary roles of Prompt Engineering are as follows:

 

  • Reducing the risk of hallucinations (imaginary answers without factual basis)
  • Reducing the risk of adversarial prompts
  • Improving output accuracy

 

Prompt Engineering plays a vital role particularly in generative AI systems that use Web searches or RAG (Retrieval-Augmented Generation) to search external databases, as it enables precise searching, appropriate interpretation of vast amounts of relevant information, and consistent output.

 

Basic Components of a Prompt

The four basic components of a prompt are:

 

  • Instruction: Providing clear instructions
  • Context: Giving background information such as current status, relevant information, and necessary prerequisites
  • Input Data: Providing data related to keywords or case studies
  • Output Indicator: Clearly stating the format or style in which the answer should be output

 

Based on the above, a fundamental prompt would look like the following. Let's look at an example of asking generative AI to propose a recipe.

 

【Prompt Example】
You are a professional Japanese chef. → Context
Currently, I have chicken, onions, and eggs in my refrigerator. → Input Data
Please propose a recipe that will satisfy a family of four ranging in age from teens to 40s. → Instruction
Please provide the recipe in a step-by-step format at a level an elementary school student can understand. → Output Indicator

 

The "Input Data" is the most important point because it consolidates the core information for the task. By providing specific and accurate data, the AI can achieve higher precision output, resulting in a significant improvement in the effectiveness of the entire prompt.

To make the input data in the above example more specific, changing "chicken" to "1kg of chicken breast" would likely result in a more practical recipe.

By using these elements as a foundation and combining them with the techniques described later, you can design more advanced and accurate prompts to obtain higher precision output.
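
As a rough illustration of how the four components can be kept separate and then combined, here is a minimal Python sketch. The build_prompt helper and its labels are hypothetical, not part of any particular library or service.

【Code Example (Python)】
def build_prompt(context: str, input_data: str, instruction: str, output_indicator: str) -> str:
    """Assemble a prompt from the four basic components.

    The component order and wording here are one possible arrangement,
    not a fixed rule.
    """
    parts = [
        context,           # Context: role and background information
        input_data,        # Input Data: the concrete data the task works on
        instruction,       # Instruction: what the AI should do
        output_indicator,  # Output Indicator: the required format or style
    ]
    return "\n".join(part.strip() for part in parts if part.strip())

prompt = build_prompt(
    context="You are a professional Japanese chef.",
    input_data="Currently, I have 1kg of chicken breast, onions, and eggs in my refrigerator.",
    instruction="Please propose a recipe that will satisfy a family of four ranging in age from teens to 40s.",
    output_indicator="Please provide the recipe in a step-by-step format at a level an elementary school student can understand.",
)
print(prompt)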

 

 


 

2. Prompt Engineering Basics: Zero-shot Prompting and Few-shot Prompting



First, we introduce "Zero-shot prompting" and "Few-shot prompting" as basic Prompt Engineering techniques.

Zero-shot Prompting

Zero-shot prompting is the simplest type of prompt: it consists of the question or instruction alone, with no examples.

 

【Prompt Example】
Classify the sentiment of the following text into "Positive," "Neutral," or "Negative."
Text: I don't feel like doing anything.
Sentiment:

 

Zero-shot prompting is effective for universal matters or questions about common knowledge. However, for problems that involve the latest information not included in the training data, or that require reasoning ability, it often fails to produce a correct answer.

 

Few-shot Prompting

Few-shot prompting goes a step beyond Zero-shot prompting by providing a few simple worked examples of the task.

 

【Prompt Example】
Text: I don't want to do anything.
Sentiment: Negative
Based on the example above, classify the sentiment of the following text into "Positive," "Neutral," or "Negative."
Text: I don't feel like doing anything.
Sentiment:

 

Compared to Zero-shot prompting, higher precision output can be expected. However, it is still insufficient for more complex tasks with multiple conditions. This is where more advanced Prompt Engineering becomes necessary.
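
As a rough sketch of how the two approaches differ only in whether worked examples are included, the following Python helper builds the sentiment-classification prompt above from a list of example pairs. The build_few_shot_prompt function is hypothetical; with an empty example list it degenerates into a Zero-shot prompt.

【Code Example (Python)】
def build_few_shot_prompt(examples, target_text):
    """Build a sentiment-classification prompt from (text, sentiment) pairs.

    With an empty `examples` list this is a Zero-shot prompt; each added
    pair turns it into a Few-shot prompt.
    """
    lines = []
    for text, sentiment in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {sentiment}")
    lines.append('Classify the sentiment of the following text into '
                 '"Positive," "Neutral," or "Negative."')
    lines.append(f"Text: {target_text}")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(build_few_shot_prompt(
    examples=[("I don't want to do anything.", "Negative")],
    target_text="I don't feel like doing anything.",
))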

 

3. Three Tips to Keep in Mind for Prompt Engineering



There are three tips to keep in mind for more advanced Prompt Engineering beyond Zero-shot or Few-shot prompting:

 

  • Adding constraint conditions
  • Asking questions in stages
  • Repeating trial and error

 

We will introduce each of these in detail.

 

Adding Constraint Conditions

Including specific constraint conditions in the prompt is effective for improving the quality of the desired answer.

By setting various constraints, such as specifying the answer volume, the technical terms to be used, or setting prohibited words, you can prevent unintended answers or ambiguous expressions.

By combining "absolute conditions" stating what should be done with "constraint conditions" stating what must be strictly avoided, you can more effectively narrow down the output.

Below is an example of adding constraint conditions to the previously mentioned recipe example.

 

【Prompt Example】
You are a professional Japanese chef.
Currently, I have chicken, onions, and eggs in my refrigerator.
Please propose a recipe that will satisfy a family of four ranging in age from teens to 40s.
Avoid rice bowl dishes. → Constraint Condition
Please provide the recipe in a step-by-step format at a level an elementary school student can understand. → Output Indicator
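
One convenient way to manage such conditions while experimenting is to keep them in separate lists and append them to a base prompt. The sketch below is illustrative; the add_conditions helper and its wording are assumptions, not a standard API.

【Code Example (Python)】
def add_conditions(base_prompt, absolute_conditions=(), constraint_conditions=()):
    """Append 'must do' and 'must avoid' conditions to an existing prompt.

    Keeping the two kinds of conditions in separate lists makes it easy to
    add, remove, or reorder them while adjusting the prompt.
    """
    lines = [base_prompt]
    for cond in absolute_conditions:
        lines.append(f"Absolute condition: {cond}")
    for cond in constraint_conditions:
        lines.append(f"Constraint condition: {cond}")
    return "\n".join(lines)

print(add_conditions(
    base_prompt="Please propose a recipe that will satisfy a family of four.",
    absolute_conditions=["Use only chicken, onions, and eggs as the main ingredients."],
    constraint_conditions=["Avoid rice bowl dishes."],
))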


Asking Questions in Stages

Instead of seeking the entire answer at once, an approach that breaks the question into stages is effective for complex challenges or problems requiring multifaceted perspectives.

For instance, by first asking for an overall summary and then following up with questions that delve deeper into each section, gaps or omissions in the answer are less likely to occur. Additionally, the process of integrating the staged answers maintains overall consistency and logic, thereby improving the quality of the final output.

Using the recipe example above, rather than asking for the recipe right away, first have the AI list dishes that fit the input data and constraint conditions, then choose a preferred dish from that list and have the AI output its recipe.
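
A minimal sketch of this two-stage flow is shown below. It assumes a call_llm function that sends a prompt string to your chosen generative AI and returns its text answer; that function is a placeholder, not a real API.

【Code Example (Python)】
def ask_in_stages(call_llm):
    """Two-stage questioning: first list candidate dishes, then request one recipe.

    `call_llm(prompt)` is assumed to return the model's answer as a string;
    wire it to whichever generative AI service you actually use.
    """
    # Stage 1: ask only for candidate dishes that satisfy the conditions.
    candidates = call_llm(
        "You are a professional Japanese chef.\n"
        "I have chicken, onions, and eggs in my refrigerator.\n"
        "List five dishes (names only) that use these ingredients and are not rice bowl dishes."
    )
    print("Candidates:\n" + candidates)

    # Stage 2: pick one dish from the list, then ask for its recipe.
    chosen = input("Which dish would you like the recipe for? ")
    return call_llm(
        f"Please provide a step-by-step recipe for {chosen}, "
        "at a level an elementary school student can understand."
    )
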
Repeating Trial and Error

A problem that is particularly likely to occur when first introducing generative AI is receiving only irrelevant output that falls short of the expected answer. It is not uncommon for people to blame the generative AI model and give up on using it when the real cause of the poor answer is the prompt.

However, as the initially input prompt does not necessarily produce optimal results, it is necessary to adjust the prompt through trial and error based on the generative AI's output to obtain more ideal answers.

It is advisable to verify which parts of the prompt should be improved while actually looking at the AI's response and optimize it incrementally.

 

4. 8 Applied Techniques of Prompt Engineering

 


Here, we introduce applied-level Prompt Engineering techniques along with examples of those prompts. Utilizing applied-level prompts makes it possible to handle more advanced tasks.

 

Directional Stimulus (Directing the answer's course)

Directional Stimulus is a technique for clearly indicating the direction of an answer. If the desired information or content is decided to some extent, you can obtain output closer to your preference by controlling the direction of the output.

Directional Stimulus is a particularly effective technique for summarization.

 

【Prompt Example】
Summarize the following text annotation article.
Key points: recommended annotation outsourcing services and the reasons for them.
XXX (Article)
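
As a small illustration, the "key points" hint can be kept separate from the article text and combined only when the prompt is built. The build_directional_prompt helper below is hypothetical, and the article text is a placeholder.

【Code Example (Python)】
def build_directional_prompt(article_text, key_points):
    """Directional Stimulus: add a hint line that steers the summary."""
    hint = "Key points: " + "; ".join(key_points)
    return f"Summarize the following article.\n{hint}\n{article_text}"

print(build_directional_prompt(
    article_text="(Paste the article text here.)",
    key_points=["recommended annotation outsourcing services",
                "the reasons for recommending them"],
))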

 

Generate Knowledge Prompting (Prompting for knowledge generation)

Generate Knowledge Prompting is a technique where information highly relevant to the answer is provided in the prompt. This is effective when you want a more rigorous answer or wish to analyze specific information.

Additionally, a generative AI model only knows information up to the time of its training and may not be able to answer based on the latest information. In such cases, you can supply the latest knowledge or information in the prompt and have the AI answer based on it.

 

【Prompt Example】
Analyze the current status of the aging society in Japan and its impact based on the following statistical data.
・The latest demographic statistics from the Ministry of Internal Affairs and Communications
・The growth rate of the elderly population and the decline rate of the labor force population
・Aging rates by region
Based on this information, consider the challenges and countermeasures for future social security systems.
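
A rough sketch of this pattern is shown below: relevant facts are listed before the question so that the model answers on that basis. The build_knowledge_prompt helper is hypothetical, and the knowledge items are placeholders, not real statistics.

【Code Example (Python)】
def build_knowledge_prompt(knowledge_items, question):
    """Generate Knowledge Prompting: supply relevant facts before the question."""
    lines = ["Use the following information as the factual basis for your answer."]
    for item in knowledge_items:
        lines.append("- " + item)   # one fact or statistic per line
    lines.append(question)
    return "\n".join(lines)

print(build_knowledge_prompt(
    knowledge_items=[
        "Elderly population ratio (placeholder value): XX%",
        "Labor force population trend (placeholder value): declining",
    ],
    question="Based on this information, consider the challenges and countermeasures for future social security systems.",
))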

 

Chain-of-Thought (Presenting the sequence of thought)

Chain-of-Thought is a technique that provides a chain of step-by-step reasoning through examples. Generative AI sometimes tends to skip reasoning steps and jump to an answer, and this technique aims to prevent that.

It is particularly effective for simple math problems.

 

【Prompt Example】
Problem: Sum up only the odd numbers in a group of several numbers and determine if the result will be even or odd according to the following steps.
Step 1: Extract the odd numbers in the group.
Step 2: Count the number of extracted odd numbers.
Step 3: Output "It will be even" if the count is even, and "It will be odd" if the count is odd.

Now, execute this for the following group: 2, 5, 8, 11, 14, 17, 20
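
One common way to apply Chain-of-Thought is to include a fully worked example whose reasoning is spelled out, then pose the new problem in the same format. The sketch below does this for the odd-number task; the build_cot_prompt helper and the hand-written example are illustrative only.

【Code Example (Python)】
def build_cot_prompt(new_problem: str) -> str:
    """Chain-of-Thought: show a worked example with explicit reasoning steps,
    then pose the new problem in the same format."""
    worked_example = (
        "Problem: Sum up only the odd numbers in the group 1, 4, 9, 12 and state whether the result is even or odd.\n"
        "Step 1: The odd numbers are 1 and 9.\n"
        "Step 2: There are 2 odd numbers, an even count.\n"
        "Step 3: The sum of an even count of odd numbers is even, so: It will be even.\n"
    )
    return worked_example + "\n" + new_problem + "\nFollow the same steps."

print(build_cot_prompt(
    "Problem: Sum up only the odd numbers in the group 2, 5, 8, 11, 14, 17, 20 "
    "and state whether the result is even or odd."
))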

 

Self-Consistency

In Self-Consistency, rather than settling for a single answer to a complex question or task, you have the AI answer the same problem multiple times. You then compare the answers and adopt the most consistent one, or the points common across the responses.

This reduces accidental errors or biases present in a single answer, allowing for more reliable results.

 

【Prompt Example】
You are a project manager. You are deciding the price for a new smartwatch with the following features.

Battery life: 1 week
Health monitoring functions (heart rate, sleep tracking, stress measurement)
Luxurious design (titanium body)
Target audience: Business people and health-conscious users in their 30s to 50s.
The price range for competitor products is 20,000 yen to 50,000 yen.

Provide three independent answers to this challenge, showing the price derived in each answer along with its rationale. Finally, choose and recommend the most logical, consistent, and reliable of the three answers.
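
A minimal sketch of the voting step is shown below. call_llm and extract_answer are placeholders: the former sends a prompt to your generative AI (ideally with some randomness between runs), and the latter pulls the final answer, such as a price, out of each response.

【Code Example (Python)】
from collections import Counter

def self_consistency(call_llm, extract_answer, prompt, n_samples=3):
    """Self-Consistency: ask the same question several times and keep the
    answer that appears most often across the responses."""
    answers = [extract_answer(call_llm(prompt)) for _ in range(n_samples)]
    # Majority vote: the most frequent answer is treated as the most consistent.
    best_answer, votes = Counter(answers).most_common(1)[0]
    return best_answer, votes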

 

Zero-shot CoT (Prompting Chain-of-Thought without examples)

Zero-shot CoT is a technique that adds a phrase such as "step-by-step" to an existing prompt. With just this one added phrase, the model advances its reasoning incrementally, as in Chain-of-Thought, but without the worked examples that CoT prompting requires, making it easier to apply.

Because the generative AI does the reasoning itself even when you cannot come up with an example, this technique can be used widely, from difficult math problems to questions requiring logical thinking.

 

【Prompt Example】
Problem: Initially, there were 12 apples. Later, 7 apples were added, and 4 were removed. Afterwards, 2 rotten apples were found and replaced with 2 new ones. What is the total number of apples now?
Please think step-by-step.
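
Since the only change is the added sentence, the code side is trivial; the sketch below simply appends the phrase to whatever problem text is given. The zero_shot_cot helper is illustrative.

【Code Example (Python)】
def zero_shot_cot(problem: str) -> str:
    """Zero-shot CoT: append a single 'think step-by-step' instruction,
    with no worked example."""
    return problem + "\nPlease think step-by-step."

print(zero_shot_cot(
    "Problem: Initially, there were 12 apples. Later, 7 apples were added, and 4 were removed. "
    "Afterwards, 2 rotten apples were found and replaced with 2 new ones. "
    "What is the total number of apples now?"
))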

 

ReAct (Deriving reasoning and action examples)

ReAct is a technique that solves a task by executing both reasoning and action. It is executed through the following steps:

  1. Execute reasoning related to the prompt's inquiry.
  2. Provide specific actions to achieve the reasoned content.

By repeating the cycle of reasoning, action, and further reasoning based on the information obtained, more advanced tasks, such as complex decision-making, can be carried out.

 

【Prompt Example】
Please tell me how to automate text annotation according to the following framework.
#Framework
Thought: [Reasoning regarding the problem]
Action: [Specific action to be executed]
Observation: [Observation obtained as a result of the action]
Thought: [Next reasoning based on new information]
Action: [Next action]
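
A rough sketch of the Thought → Action → Observation loop is shown below. call_llm and run_action are placeholders for your model call and your tool execution (a search, a calculator, and so on); real ReAct implementations parse the model output more robustly and define a clearer stop condition.

【Code Example (Python)】
def react_loop(call_llm, run_action, question, max_steps=5):
    """Minimal ReAct-style loop: the model alternates Thought and Action lines,
    and each Action's result is fed back to it as an Observation."""
    transcript = (
        "Answer the question by alternating Thought, Action, and Observation lines. "
        "Write 'Final Answer:' when you are done.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        step = call_llm(transcript)          # model produces Thought/Action text
        transcript += step + "\n"
        if "Final Answer:" in step:          # assumed stop marker
            break
        action_lines = [line for line in step.splitlines() if line.startswith("Action:")]
        if not action_lines:
            break                            # nothing to execute
        action = action_lines[-1][len("Action:"):].strip()
        observation = run_action(action)     # execute the requested action
        transcript += f"Observation: {observation}\n"
    return transcript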

 

Tree of Thoughts Prompting (Encouraging hierarchical thinking)

Tree of Thoughts (ToT) prompting is a technique that has the AI output multiple ideas, keep those closest to the correct answer, and then derive further ideas from them.

ToT prompting breaks a complex problem into manageable steps and organizes the information hierarchically. In doing so, it has the AI self-evaluate intermediate stages of thought, aiming for more creative and deeper output.

 

【Prompt Example】
Explore three different thought paths for the following problem. Follow these steps in each path:
1. Generate initial ideas (3 different approaches).
2. Evaluate each idea (pros and cons).
3. Select and develop the most promising idea.
4. Identify new challenges or questions.
5. Return to Step 1 if necessary and explore a new thought path.
Finally, present the most effective solution and explain the reason for its selection.
Problem: [Insert specific problem or question here]
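
The generate–evaluate–keep cycle can also be sketched as a small search loop. In the sketch below, generate_ideas and evaluate_idea are placeholders for prompts sent to your generative AI; real Tree of Thoughts implementations use more elaborate search strategies.

【Code Example (Python)】
def tree_of_thoughts(generate_ideas, evaluate_idea, problem, depth=2, branch=3, keep=1):
    """Simplified Tree of Thoughts: generate several ideas, score them,
    keep the most promising ones, and expand again from the survivors.

    `generate_ideas(state, n)` returns n candidate next thoughts and
    `evaluate_idea(state)` returns a numeric score; both are assumed to be
    implemented with prompts to the generative AI.
    """
    frontier = [problem]                      # current partial solutions
    for _ in range(depth):
        candidates = []
        for state in frontier:
            candidates.extend(generate_ideas(state, branch))
        # Self-evaluation of intermediate thoughts: keep only the best ones.
        candidates.sort(key=evaluate_idea, reverse=True)
        frontier = candidates[:keep] or frontier
    return frontier[0]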


Multimodal CoT Prompting (Incorporating text and image information)

Multimodal CoT prompting refers to a technique that provides an image as additional information alongside a traditional text question. Because it targets multimodal generative AI, it is effective with models that accept images, such as ChatGPT or Gemini.

Because image information is provided, the answer can take visual information into account. With text-only questions, conditions must be written out in detail; with Multimodal CoT prompting, the added visual information allows a precise answer to be derived even from a short text prompt.
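
As one concrete example, the sketch below sends a text question together with an image URL, assuming the OpenAI Python SDK (the openai package, v1 interface). The model name, image URL, and question are placeholders, and other multimodal services have their own request formats.

【Code Example (Python)】
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any model that accepts image input
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Looking at this photo of the ingredients in my refrigerator, "
                         "propose one dinner recipe. Please think step-by-step."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/refrigerator.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)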

If you want the AI to refer to large amounts of external information rather than just images, RAG is also worth considering. With RAG, information can be retrieved from images, text, and audio in a large database, enabling more comprehensive and accurate answers.

5. Summary

Prompt Engineering is an extremely crucial technology that determines the quality of AI output.

By combining various methods and techniques, such as providing concrete worked examples, asking questions in stages, and supplying image information, you can obtain higher precision answers and proposals.

It is also important to prepare high-quality input data as a prerequisite. High-quality input data contains the necessary background information and preconditions, so the prompt can convey rich context and the quality of the output improves.

 

 


 

 

Author

 


 

Toshiyuki Kita
Nextremer VP of Engineering

After graduating from the Graduate School of Science at Tohoku University in 2013, he joined Mitsui Knowledge Industry Co., Ltd. As an engineer in the SI and R&D departments, he was involved in time series forecasting, data analysis, and machine learning. Since 2017, he has been involved in system development for a wide range of industries and scales as a machine learning engineer at a group company of a major manufacturer. Since 2019, he has been in his current position as manager of the R&D department, responsible for the development of machine learning systems such as image recognition and dialogue systems.

 
