Prompt engineering is the process of crafting effective prompts for large language models like GPT-4. Here are six strategies for getting better results from these models:
- Write clear instructions: These models can’t read your mind. If you want brief replies, ask for them. If you want expert-level writing, ask for that. If you dislike the format, demonstrate the format you’d like to see. The less the model has to guess at what you want, the more likely you’ll get it.
- Include details in your query to get more relevant answers: Make sure your request includes any important details or context. Otherwise, you are leaving it up to the model to guess what you mean.
- Ask the model to adopt a persona: This can help the model understand the tone and style of your writing. For example, you could ask the model to write like a news reporter or a poet.
- Use delimiters to clearly indicate distinct parts of the input: This can help the model understand the structure of your prompt. For example, you could use triple quotes, XML tags, or headings to separate different sections of your prompt.
- Specify the steps required to complete a task: This can help the model understand what you’re asking it to do. For example, you could ask the model to write a recipe with specific ingredients and steps.
- Provide examples: This can help the model understand what you’re looking for. For example, you could provide a few sentences of text that demonstrate the style or tone you’re looking for.
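Several of these strategies can be combined in a single prompt. Below is a minimal sketch in Python; the helper name `build_prompt`, the triple-quote delimiter choice, and the sample strings are illustrative assumptions, not part of any particular library:

```python
def build_prompt(persona, instructions, document, examples=None):
    """Assemble a prompt that combines a persona, explicit instructions,
    triple-quote delimiters around the input text, and optional
    demonstrations of the desired output format."""
    parts = [f"You are {persona}.", instructions]
    # Few-shot examples demonstrate the format you'd like to see.
    for sample_input, sample_output in (examples or []):
        parts.append(f'Example input: """{sample_input}"""')
        parts.append(f"Example output: {sample_output}")
    # Delimiters mark where the text to operate on begins and ends.
    parts.append(f'Text: """{document}"""')
    return "\n\n".join(parts)

prompt = build_prompt(
    persona="a news reporter",
    instructions="Summarize the text below in one sentence.",
    document="OpenAI released a guide on prompt engineering.",
    examples=[("A long article about cats.", "Cats remain popular pets.")],
)
print(prompt)
```

The resulting string can be sent as the user message of a chat-completion request; the delimiters make it unambiguous which part is instruction and which part is data.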
In addition to these strategies, several tactics can improve the quality of the model’s output:
- Constrain the reply: If responses are too long, ask for brief replies; if they are too simplistic, ask for expert-level writing.
- Demonstrate the format you’d like to see: If you dislike the format of the model’s output, show a sample of the format you want instead.
- Provide reference text: Language models can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. Providing reference text can help the model answer with fewer fabrications.
- Split complex tasks into simpler subtasks: Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.
- Give the model time to “think”: Asking for a “chain of thought” before an answer can help the model reason its way toward correct answers more reliably.
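The last two tactics can be sketched together: a complex summarization job is re-framed as per-section subtasks whose outputs feed a final combining step that asks for step-by-step reasoning. The helper names and wording below are illustrative assumptions:

```python
def section_prompts(sections):
    """Subtask prompts: summarize each section independently."""
    return [
        f'Summarize this section in two sentences:\n"""{s}"""'
        for s in sections
    ]

def combine_prompt(summaries):
    """Final prompt: merge the subtask outputs, asking the model to
    reason step by step before producing the overall summary."""
    bullets = "\n".join(f"- {s}" for s in summaries)
    return (
        "Combine the section summaries below into one overall summary. "
        "Reason step by step, then give the final summary.\n" + bullets
    )

sections = ["Q1 revenue grew 10%.", "Costs fell 5%."]
prompts = section_prompts(sections)
final = combine_prompt(["Revenue grew.", "Costs fell."])
```

In practice, each prompt in `prompts` would be sent as its own request, and the responses would be collected into the list passed to `combine_prompt`.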
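The reference-text tactic amounts to wrapping the source material in the prompt and instructing the model to stay within it. A minimal sketch, assuming a hypothetical `grounded_prompt` helper:

```python
def grounded_prompt(question, reference):
    """Ask the model to answer only from the supplied reference text,
    and to admit when the reference does not contain the answer."""
    return (
        "Answer the question using only the reference text below. "
        "If the answer is not in the reference, say \"I don't know.\"\n\n"
        f'Reference: """{reference}"""\n\n'
        f"Question: {question}"
    )

p = grounded_prompt(
    "Who published the guide?",
    "The prompt engineering guide was published by OpenAI.",
)
```

Giving the model an explicit "I don't know" escape hatch is what discourages it from fabricating an answer when the reference is silent.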
By following these strategies and tactics, you can get better results from large language models like GPT-4. Remember to experiment and find the methods that work best for you.