
Cooking has long been a hobby of mine. It is a creative outlet, but it also offers near-term gratification when you sit down to enjoy the meal – or, better yet, watch others enjoy it. In the daily pursuit of building new products, that kind of near-term gratification is a wonderful escape.

Now, let’s imagine you’re getting ready to make a delicious meal. You wouldn’t just toss all your ingredients into a pot at once. You’d prep and add them one by one to make sure each element is perfect on its own and that the flavors build and harmonize with one another.

Using large language models (LLMs) to deliver a feature or a product is pretty similar. If you give the LLM all of the ingredients at once and ask it to execute a complex task, the results are often underwhelming. You have to take care to add a little bit at a time, building upon the results of each previous step.

That’s where a multi-step prompting strategy can make all the difference, empowering you to build in perfect harmony with an LLM.  

Demystifying LLMs

Before we dive into strategies, let’s set the stage. LLMs are advanced AI tools capable of understanding and generating human-like text. They can write essays, summarize articles, and even draft code. However, they’re not plug-and-play devices. They require precise instructions—prompts—to deliver quality results. Think of LLMs as talented sous chefs who need a recipe; without it, they can’t deliver gourmet results. 

The Step-by-Step Guide

This guide is like a recipe for success. Instead of asking the LLM to do a big job all at once, you ask it to complete a sequence of smaller jobs, one after another, with the output of each job informing the next. This approach can deliver superior results on larger, more complex tasks like the following (there’s a short code sketch after the list):

  • Creating comprehensive product documentation
  • Rewriting a resume to better fit a job description
  • Analyzing unstructured data and compiling formatted summaries 
  • Even drafting an amazing blog post about using a recipe to get more from your LLMs 
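
To make that concrete, here’s a minimal sketch of the resume example as a two-step chain. It’s Python against a placeholder `complete()` function; `complete`, `tailor_resume`, and the prompt wording are all illustrative stand-ins rather than any particular library’s API, so swap in whatever LLM client you actually use.

```python
def complete(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM of choice and return its text reply."""
    raise NotImplementedError("Wire this up to your LLM client.")


def tailor_resume(job_description: str, resume: str) -> str:
    # Step 1: analyze the existing content on its own.
    requirements = complete(
        "List the key skills and qualifications this job description asks for, "
        f"as short bullet points:\n\n{job_description}"
    )

    # Step 2: generate new content, building on the output of step 1
    # instead of asking the model to analyze and rewrite in one shot.
    return complete(
        "Rewrite the resume below so it highlights these requirements. "
        "Reframe existing experience; don't invent anything new.\n\n"
        f"Requirements:\n{requirements}\n\nResume:\n{resume}"
    )
```

The helper itself isn’t the point; the point is that each call does one focused job, and the second prompt gets to build on a clean, already-structured result.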

How to Use the Guide

  1. Choose the right jobs: Pick more complex tasks that need a bit more attention. This strategy is particularly effective for tasks that require both the analysis of existing content and the generation of new content.
  2. Identify the inputs: It is important to provide the LLM with the context it needs to perform the task effectively. That might mean writing a system prompt, assigning a role, uploading sample content, or allowing for custom context.
  3. Dissect the task: Identify the components of the complex task. If you were to complete the task manually, what set of micro-tasks would you need to accomplish?
  4. Write clear prompts: Give the LLM simple, step-by-step instructions so it knows what to do next. 
  5. Link the steps together: Ensure the output of each prompt flows into the input of the next, allowing the results to build upon one another (see the sketch after this list).
  6. Test and iterate: Check what the LLM gives you. If it’s not quite right, go back and adjust your instructions. It’s like tasting your cooking and adding spices to make it just right.
  7. Be flexible with your plan: If you see something’s not working, be ready to change your steps. The best chefs aren’t afraid to adjust their recipes, and neither should you be.
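
Steps 4 and 5 reduce to a small loop once each step is just a prompt template that receives the previous step’s output. The sketch below is one way to wire that up; it’s Python again, reusing the same `complete()` placeholder as before, and the `run_chain` helper, the `{previous}` slot convention, and the meeting-notes example are all hypothetical illustrations rather than anything prescribed by this guide.

```python
def complete(prompt: str) -> str:
    """Same placeholder as before: call your LLM and return its text reply."""
    raise NotImplementedError("Wire this up to your LLM client.")


def run_chain(steps: list[str], initial_input: str) -> str:
    """Run a sequence of prompt templates, feeding each output into the next.

    Every template contains a {previous} slot that receives the prior step's
    result (or the initial input, for the very first step).
    """
    previous = initial_input
    for template in steps:
        previous = complete(template.format(previous=previous))
    return previous


if __name__ == "__main__":
    # Illustrative use (the file name and prompts are made up): turn rough
    # meeting notes into a formatted summary. This only runs once complete()
    # is wired up to a real model.
    summary = run_chain(
        steps=[
            "Extract every decision and action item from these notes:\n\n{previous}",
            "Group these items by owner and due date:\n\n{previous}",
            "Format the grouped items as a short status-update email:\n\n{previous}",
        ],
        initial_input=open("meeting_notes.txt").read(),
    )
    print(summary)
```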

It’s not always a walk in the park. Sometimes the steps might not work perfectly, or you might need to give the LLM a little help. Plus, LLMs are always getting better, so you’ll want to keep your steps up to date. 

In Conclusion

To sum it up, this multi-step prompting strategy can be your secret recipe for taking your LLM results from okay to amazing! It helps you get superior results for complex tasks, one step at a time, and keeps you in control from start to finish. So, give this guide a try, and you’ll be making awesome LLM creations in no time.
