10 tips to nail Prompt Engineering
4 min read · Jul 17, 2023
The way to nail prompt engineering is simple: practice. A lot. :) Still, to make your journey faster than mine was, here are some tips for your practice:
- Select an LLM and stick to it: In a perfect world, all LLMs would interpret all your prompts the same way. Right now, however, each of them has a somewhat different logic. So select one, stick to it and practice a lot with it. You might still use different versions of the same LLM (e.g. for some tasks I use GPT-3.5, for others GPT-4), but be aware of the differences in how they process prompts.
- Be clear: Think of it as presenting to a CEO with very limited time. No bullshitting: add only what is needed for the task, in the clearest possible form.
- Be simple: LLMs usually try to use every piece of information you add. Therefore, add only the information needed to perform the task. Obviously, in many cases you do not know exactly what will be needed. In that case, give clear instructions on the circumstances under which the information should be used and when it should be ignored. (Especially if you add history to your prompts.)
- Split tasks: Prompts have a higher chance of doing what you want if you can be very clear and simple. This is only possible by reducing the complexity of the tasks you want the prompt to do. Therefore, it is usually a good idea to set up scenarios and have specialised prompts for them. (E.g. if you are dealing with tax advisory, you should have different prompts for private individuals and enterprises, and at the beginning use a sorter prompt to decide which one to use. A minimal router sketch follows this list.)
- Add context: To get the best answer, you need to add context to your question. This might include simple things like the current date, or even the purpose of the prompt with some ‘personality’ expectations. To start with a simple example: “What are the seasonal fruits now?” Umm, where are you? What is the date today? Or going a bit further: 42 was probably a perfect answer to the question about ‘Life, the Universe and Everything’ in a vacuum, but the engineers forgot to specify the intellectual capabilities of the receiver at the beginning. (See the context sketch after this list.)
- Tell what not to do: Even if you are not allowing external users to add input to your prompt, so you are not afraid of prompt injection, it is still pretty helpful to tell the LLM what NOT to do. LLMs usually try to be helpful, but that also produces lots of false answers. So, besides adding the context, add what is out of scope; it helps a lot. And be prepared: even if you state very specifically what not to do, you will still need to figure out HOW to say it. E.g. if you instruct your LLM “do not ever add hummingbird to your answer”, it might still add one. Even if you tell it that it will make a kitten sad or break your whole code. After trying enough times, and also simplifying your prompt, you will eventually see how to make the LLM act as you asked. Most of the time. ;) (A sketch with explicit out-of-scope rules is below.)
- Format your input: By putting everything into a logical order and clearly separating the different parts, your chance of success will be much higher. Even if you use an API. You can use commas, capitals and other special characters for this; LLMs usually recognize the pattern. (A delimiter example is sketched below.)
- Format the output: You have a much higher chance of getting what you want if you ask for it. So, tell the LLM what you expect from the output: size, language, style, format, whatever matters to you. LLMs have great capabilities, use them. Also note which kinds of formatting make your prompts bigger and responses slower; you might want to simplify the output and do some post-processing on your own. (A JSON-output sketch is below.)
- Provide examples: If you want good output, especially when it comes to formatting, give some examples. It will significantly help the LLM do exactly what you want. Try to use real use-case examples. (See the few-shot sketch below.)
- Try and refine: With a sharp, well-written prompt, your LLM will do mostly what you want. However, there will be edge cases for sure. So, do not save on testing. Once you see those edge cases, try to cover them in your prompt. However, the more complex the prompt, the higher the chance of inconsistencies, so you might want to evaluate whether your changes do more harm than good, and either accept errors in edge cases or create a new, specialized prompt for them. Continuous refinement will also teach you a lot about the LLM’s prompting logic. E.g. in some cases the order of instructions matters in execution, even if logically there should be no difference in the output.
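To illustrate the split-tasks tip, here is a minimal sketch of a sorter prompt routing between specialised prompts. The `call_llm` helper and the prompt texts are hypothetical placeholders for whatever LLM client you actually use:

```python
# Hypothetical helper: replace with your actual LLM API call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

SORTER_PROMPT = (
    "Classify the tax question below as either PRIVATE (private individual) "
    "or ENTERPRISE (company). Answer with exactly one word.\n\n"
    "Question: {question}"
)

SPECIALISED_PROMPTS = {
    "PRIVATE": "You are a tax advisor for private individuals. {question}",
    "ENTERPRISE": "You are a corporate tax advisor. {question}",
}

def answer(question: str) -> str:
    # Step 1: a cheap sorter prompt decides which scenario we are in.
    category = call_llm(SORTER_PROMPT.format(question=question)).strip().upper()
    # Step 2: the matching specialised prompt does the actual work.
    template = SPECIALISED_PROMPTS.get(category, SPECIALISED_PROMPTS["PRIVATE"])
    return call_llm(template.format(question=question))
```

The point is that each specialised prompt stays simple, while the sorter itself is a trivial classification task.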
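For the add-context tip, a small sketch of injecting the date, location and a ‘personality’ into the prompt (the location and persona are made up):

```python
from datetime import date

CONTEXT_TEMPLATE = (
    "Today is {today}. The user is located in {location}.\n"
    "You are a friendly greengrocer answering in one short paragraph.\n\n"
    "Question: {question}"
)

prompt = CONTEXT_TEMPLATE.format(
    today=date.today().isoformat(),      # the current date the model cannot know
    location="Budapest, Hungary",        # made-up example location
    question="What are the seasonal fruits now?",
)
```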
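And for telling the LLM what not to do, the same kind of prompt extended with explicit out-of-scope rules; as described above, the exact wording is something you will have to iterate on:

```python
RULES = (
    "Rules:\n"
    "- Only discuss fruits and vegetables; politely refuse anything else.\n"
    "- Do not guess the user's location; if it is missing, ask for it.\n"
    "- Do not invent prices or availability you are not sure about.\n"
)

QUESTION = "What are the seasonal fruits now?"

# One common heuristic: put the rules first so they are harder to miss.
prompt_with_rules = f"{RULES}\nQuestion: {QUESTION}"
```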
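For input formatting, one common pattern is to separate instruction and data with labels and delimiters so the model can tell them apart; the section labels and `###` fences here are just one possible convention:

```python
# Made-up example data.
email_text = "Hi, I was charged twice for invoice 1234, please refund one payment."

# Delimiters keep the instruction and the data visually and logically apart.
prompt = (
    "INSTRUCTION:\n"
    "Summarise the customer email below in two sentences.\n\n"
    "EMAIL:\n"
    "###\n"
    f"{email_text}\n"
    "###"
)
```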
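For output formatting, asking for machine-readable output and validating it on your own side pays off. A sketch that assumes the model returns plain JSON; in practice you must handle the cases when it does not:

```python
import json

OUTPUT_PROMPT = (
    "Extract the product name and price from the text below. "
    'Respond with only a JSON object with keys "product" and "price", '
    "no extra text.\n\n"
    "Text: {text}"
)

def parse_response(raw: str) -> dict:
    # Post-processing on our side: keep the prompt simple, validate here.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError(f"Model did not return valid JSON: {raw!r}")
    if not {"product", "price"} <= data.keys():
        raise ValueError(f"Missing keys in: {data}")
    return data
```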
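And for providing examples, the classic few-shot pattern: show one or two real input/output pairs before the actual input (the reviews here are invented):

```python
FEW_SHOT_PROMPT = (
    "Classify the sentiment of each review as POSITIVE or NEGATIVE.\n\n"
    # Two worked examples teach the model the exact format we expect.
    "Review: The delivery was fast and the product works great.\n"
    "Sentiment: POSITIVE\n\n"
    "Review: Broke after two days, support never answered.\n"
    "Sentiment: NEGATIVE\n\n"
    # The real input goes last, ending where the model should continue.
    "Review: {review}\n"
    "Sentiment:"
)
```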
As a +1, I would add: write some automated tests monitoring your prompts. If the LLM you use is under active development, the way it treats prompts can change with a new release, so you might need to adjust your prompts from time to time. It is better to know as soon as it happens, and not from your customers.
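A minimal sketch of such a monitoring test, using pytest; `call_llm` again stands in for your real LLM wrapper, and in practice you would run this on a schedule and alert on failures:

```python
import pytest

def call_llm(prompt: str) -> str:
    # Hypothetical wrapper around your actual LLM call; replace in real tests.
    raise NotImplementedError

CASES = [
    # (question, substring the answer must contain) -- made-up examples
    ("What is the VAT treatment of books?", "VAT"),
    ("Can a private individual deduct home office costs?", "deduct"),
]

@pytest.mark.parametrize("question, expected", CASES)
def test_prompt_still_behaves(question: str, expected: str) -> None:
    answer = call_llm(question)
    # Loose assertion: LLM wording varies, so check key content, not exact text.
    assert expected.lower() in answer.lower()
```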
And the last: Have fun! It is an exciting journey, enjoy it!