rvtheverett

LLM Prompt Engineering for Beginners [Webinar Recap]

Updated: May 10

Last week I attended a lightning lesson webinar hosted by Britney Muller on LLM Prompt Engineering for Beginners. While this is a summary of my key takeaways from the session, I would recommend watching the recording for even more insights, examples and answers from Britney. There’s also a discount code for her upcoming course, Fundamentals of Generative AI, which I’m excited to start on 21st May - if it’s in your means to join I’d love to see you as a fellow cohort peer.


Onto the takeaways! 


What is prompt engineering?

‘Prompt engineering is the process of writing, refining and optimising inputs to get GenAI systems to create high quality outputs’


The most well-known GenAI system is ChatGPT - you can type any prompt in natural, human-written language and get a natural-language response back from the tool.


Prompt engineering research and evaluation

It’s important to think critically when consuming content from papers or the media when it comes to GenAI. There are a lot of clickbait articles swirling around social media, and even research studies should be read mindfully. The research paper most often cited as the most data-backed study on prompt engineering shares 26 guiding principles to streamline the process of prompting. However, each of these 26 principles was tested with just 20 tasks - meaning prompt engineering research is evaluated on a tiny subset of tasks.


That’s why it’s important to get good at developing prompts that work for us and our needs. As Google said, ‘prompt engineering to date is more of an art form than a science and much is based on trial and error’. 


Prompt engineering framework: SPEAR

Start with the problem

Provide formatting guidance/examples (get specific)

Explain the situation (clarify)

Ask the model what you want it to do

Evaluate the results

Rinse and repeat
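The first four SPEAR steps can be sketched as a simple prompt template. This is a minimal Python illustration of my own - the `build_spear_prompt` helper, its field names, and the example values are not from the webinar:

```python
# Hypothetical helper that assembles a prompt following the SPEAR steps.
def build_spear_prompt(problem, formatting, situation, ask):
    """Combine the SPEAR fields into a single prompt string."""
    return "\n\n".join([
        f"Problem: {problem}",                 # Start with the problem
        f"Format the output as: {formatting}", # Provide formatting guidance
        f"Context: {situation}",               # Explain the situation
        ask,                                   # Ask the model what you want
    ])
    # The Evaluate and Rinse-and-repeat steps happen outside the prompt:
    # inspect the output, tweak the fields, and call this again.

prompt = build_spear_prompt(
    problem="Our newsletter open rate dropped 15% last quarter.",
    formatting="a bulleted list of five subject-line ideas",
    situation="The audience is technical SEO professionals.",
    ask="Suggest subject lines likely to improve open rates.",
)
print(prompt)
```

Keeping the fields separate like this makes the iteration loop easier: you change one field at a time and compare outputs.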


Prompt engineering is an iterative process; very rarely will the tool get it right on the first try. It’s also a moving target: what works now may not work in the future.


It’s also important to never enter any private or confidential information into an LLM. Companies use the prompts we enter to help train future iterations of their models, which can sometimes lead to private information being used in ways that we don’t intend.
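As a rough illustration of stripping sensitive details before text ever reaches an LLM, here is a minimal Python sketch. The two regex patterns are toy examples of my own - real PII scrubbing needs a proper tool:

```python
import re

# Two illustrative patterns only - real PII detection is much harder than this.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Mask obvious emails and phone numbers before the text leaves your machine."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Contact jo@example.com or +44 20 7946 0958 about the invoice."))
```

Running the redaction locally, before copy-pasting into a chat window, means the original details are never part of anyone’s training data.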


Common and unreliable prompt engineering tips 

As with all technical-sounding topics, there is a lot of misinformation shared, and prompt engineering is no different. Some of these common but unreliable tips include:

  1. Give it a role

  2. Give it a monetary ‘tip’ 

  3. Give the model time to think

  4. You need an expensive prompt AI tool or specialist prompt engineering IDE

  5. Technical training is required

  6. You need to know how to use APIs 

  7. You need state of the art LLMs for quality outputs 


While some of these may work for certain queries, it’s important not to assume there are any hacks to getting the perfect output. Systems are changing all of the time, so our approach to generating prompts also needs to be tested and refined over time. 


Prompt engineering will only get you so far

No prompt will get you outputs from a model that it is incapable of providing. This is because generative AI models are predictive engines - they generate the average of everything they have seen or read.


An example of this is that, to date, no prompt can get Midjourney to produce an image of someone writing with their left hand. 


Knowing what GenAI is good at

The key to efficient prompt engineering is knowing what generative models are good and bad at. Because they are statistical machines, their outputs are sampled from a probability distribution. This means that what worked one day might not work the next, and there is an element of randomness.
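To make the probability-distribution point concrete, here is a toy Python sketch of temperature-based sampling. It is an illustration of how randomness enters generation, not how any particular LLM is implemented, and the numbers are invented:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from model scores; higher temperature = more randomness."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one index according to the probability distribution.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens
random.seed(0)
picks = [sample_next_token(logits, temperature=1.5) for _ in range(1000)]
print(picks.count(0) / 1000)  # token 0 is likeliest, but never guaranteed
```

Even with identical inputs, the sampled token varies from run to run - which is exactly why the same prompt can produce different outputs on different days.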


What are LLMs good at?

  • Language translation

  • Content summarisation

  • Helping with writer’s block

  • Content generation

  • Sentiment analysis

  • Question answering 

  • Personalisation 

  • Stylised writing 

  • Repurposing content for social

  • Simplifying long or complex text 

  • Correcting spelling and grammar

  • Prompt engineering

  • Writing and debugging code


What are LLMs bad at?

  • Being factual 100% of the time

  • Common sense

  • Representing marginalised groups 

  • Research

  • Current events

  • High level strategy

  • Reasoning and logic

  • Understanding humour 

  • Being environmentally friendly 

  • Handling uncommon scenarios 

  • Emotional intelligence 

  • Consistency

  • Remembering beyond context limits


What takes up most of your time?

When thinking about prompt engineering and integrating the use of LLMs into regular tasks, the key is to think about the tasks that take up most of your time during the day. Then consider how tools like ChatGPT and Gemini can be used as assistants throughout the day.


Dos and Don'ts


Don’t

  • Submit confidential information or PII

  • Submit large, complex or multi-step tasks in a single prompt

  • Use it for human-centric tasks like being empathetic

  • Feel that you have to be super technical

  • Fall for expensive AI wrapper tools

Do 

  • Remove any confidential information 

  • Chunk out big tasks into sub tasks 

  • Use AI for tasks that free you up to be more present with people

  • Customise prompts for specific tasks 
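The “chunk out big tasks” advice also applies to long inputs. Here is a minimal sketch of my own that splits text into word-count chunks you could then summarise one at a time (the 500-word default is an arbitrary choice, not a recommendation from the webinar):

```python
def chunk_words(text: str, max_words: int = 500):
    """Split text into chunks of at most max_words words, e.g. to summarise one at a time."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

chunks = chunk_words("one two three four five six seven", max_words=3)
print(chunks)  # ['one two three', 'four five six', 'seven']
```

Summarising each chunk and then summarising the summaries is one simple way to handle documents that would otherwise overwhelm a single prompt.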


Remember to use GenAI responsibly and always make sure there is a human in the loop to review all outputs. 


Examples of tasks

I would definitely recommend reviewing the recording, as Britney shares prompt examples for a large list of tasks including:

  • Data summarisation

  • Turning content into social posts 

  • CSV formulas 

  • Game ideation

  • Learning complex topics 

  • Data analysis 

  • Data cleaning

  • Job interview practice

  • Writing code documentation 

  • Text classification

  • Code assistance

  • Care support

  • Email blasts

  • Summarising long text

  • Writing how to guides 

  • Negotiation prep

  • Contract reviews

  • Creative ideation

  • Image creation 


Image prompt engineering framework: SADSWEET

Start with a big picture

Add details

Describe the subject

Stylise (get specific)

Write your prompt as if you’re describing the scene to someone

Extra parameters to finetune

Experiment with your prompt

Test different prompt combinations 
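The SADSWEET steps can likewise be sketched as a small prompt builder. The helper below is hypothetical and my own; the `--ar` parameter syntax is Midjourney-style and only an example of the “extra parameters” step:

```python
# Hypothetical builder following the SADSWEET ordering: big picture first,
# then details, subject, and style, written like a scene description.
def build_image_prompt(big_picture, details, subject, style, extra_params=""):
    parts = [big_picture, details, subject, style]
    prompt = ", ".join(p for p in parts if p)  # skip any empty fields
    return f"{prompt} {extra_params}".strip()

print(build_image_prompt(
    big_picture="a cosy home office at golden hour",
    details="soft window light, plants on the desk",
    subject="a person sketching notes on a tablet",
    style="watercolour illustration",
    extra_params="--ar 16:9",
))
```

The experiment-and-test steps again happen outside the code: vary one field, regenerate, and compare.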


Remember that outputs are not information retrieval; there will be errors and misinformation. That is why there must always be a human in the loop.


To finish up the webinar, Britney shared a really helpful list of words that always seem to be present in content generated by LLMs. I’m sure we’ve all seen these words, and they are usually a real giveaway that the content has been AI-generated. However, you can feed a list of words into your prompts to ensure they are not used - I’m sure I’m not the only person who has asked ChatGPT to make an output sound less wanky, so I’m hopeful that this will help!
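As a rough sketch of checking outputs for giveaway words, here is a minimal Python example. The word list below is my own illustration - Britney’s actual list is in the recording:

```python
# Illustrative giveaway words only - substitute the real list from the webinar.
GIVEAWAY_WORDS = {"delve", "tapestry", "leverage", "seamless", "elevate"}

def flag_giveaways(text: str):
    """Return any giveaway words found in the text, case-insensitively."""
    tokens = {t.strip(".,;:!?").lower() for t in text.split()}
    return sorted(GIVEAWAY_WORDS & tokens)

print(flag_giveaways("Let's delve into the rich tapestry of SEO."))  # ['delve', 'tapestry']
```

The same list can go straight into a prompt too, e.g. “Do not use any of these words: …”, and then this check confirms the model actually complied.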


As a reminder, Britney’s upcoming course, Fundamentals of Generative AI, starts on 21st May - if it’s in your means to join I’d love to see you as a fellow cohort peer! 


I also made some handwritten digital notes covering the main takeaways; you can download those below :)









