Being Prompt with Prompt Engineering

Krista Yuen, The University of Waikato
Danielle Degiorgio, Edith Cowan University

Warning – ChatGPT and DALL-E were used in the making of this post.

Experienced AI users have been experimenting with the art of prompt engineering to get the most useful and accurate responses from generative AI systems, and have synthesised techniques for drawing the best output from them. As the use of AI continues to grow, crafting an effective prompt is arguably a skill that anyone seeking information will need.

Whilst AI continues to improve, and many systems now encourage more precise prompting from their users, AI is still only as good as the prompts it is given. Essentially, if you want quality content, you must use quality prompts. A solid prompt requires critical thinking and reflection, both in how you design the prompt and in how you interact with the output. While there are many ways to structure a prompt, these are the three most important things to remember:


1. Provide background information
   • Set the scene
   • Use exact keywords
   • Specify the audience
   • You could also give the AI tool a role to play, e.g. “Act as an expert community organiser!”

2. Clearly define tasks
   • Be as specific as possible about exactly what you want the AI tool to do
   • Break down the steps involved if needed
   • Include any extra detail, information or text that the AI tool needs

3. Specify desired format, style, and tone
   • Specify inclusions and exclusions
   • Tell it how you would like the results formatted, e.g. a table, bullet point list, or even HTML or CSS.
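As a rough illustration, the three-part structure above can be sketched as a small Python helper. The `build_prompt` function and its section labels are our own invention for this sketch, not part of any AI tool's API:

```python
def build_prompt(background: str, task: str, output_format: str) -> str:
    """Assemble a prompt from the three parts described above:
    scene-setting background, a clearly defined task, and the
    desired format, style, and tone for the output."""
    sections = [
        f"Background: {background}",
        f"Task: {task}",
        f"Output format: {output_format}",
    ]
    return "\n\n".join(sections)


prompt = build_prompt(
    background="You are an expert community organiser writing for the general public.",
    task="Summarise the key threats facing local wetlands in five bullet points.",
    output_format="A bullet point list in a persuasive but plain-English tone.",
)
print(prompt)
```

The assembled string would then be pasted into, or sent to, whichever generative AI tool you are using.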

Example prompt for text generation (e.g. ChatGPT)

You are an expert marketing and communications advisor working on a project for dolphin conservation and need to create a comprehensive marketing proposal. The goal is to raise awareness and promote actions that contribute to the protection of dolphins and their habitats. The target audience includes environmental activists and the general public who might be interested in marine conservation.

The proposal should highlight the current challenges faced by dolphins, including threats like pollution, overfishing, and habitat destruction. It should emphasise the importance of dolphins to marine ecosystems and their appeal to people due to their intelligence and playful nature. It should include five bullet points for each area: campaign objectives, target audience, key messages, marketing channels, content ideas, partnerships, budget estimation, timeline, and evaluation metrics.

Please structure it in a format that is easy to present to stakeholders, such as a PowerPoint presentation or a detailed report. It should be professionally written, persuasive, and visually appealing with suggestions for imagery and design elements that align with the theme of dolphin conservation.

Example prompt for image generation (e.g. DALL·E)

Create a captivating and colourful image for a marketing campaign focused on dolphin conservation. The setting is a serene, crystal-clear ocean under a bright blue sky with soft, fluffy clouds. In the foreground, a group of three playful dolphins is leaping gracefully out of the water. These dolphins should appear joyful and full of life, symbolising the beauty and intelligence of marine life.

The central dolphin, a majestic bottlenose, is at the peak of its jump, with water droplets sparkling around it like diamonds under the sunlight. On the left, a smaller, younger dolphin, mirrors its movement, adding a sense of playfulness and family. To the right, another dolphin is partially submerged, preparing to leap. In the background, a distant, unspoiled coastline with lush greenery and a few palm trees provides a natural, pristine environment. This idyllic scene should evoke a sense of peace and the importance of preserving such beautiful natural habitats.

This image was created with DALL·E 2 via ChatGPT 4 (November 22 Version).

Not getting the results you want?

If your first response has not given you exactly what you need, remember you can try and try again! You may need to add more guidelines to your prompt:

  • Try adding more words or ideas. What kinds of instructions might help your prompt obtain more relevant results?
  • Provide some more context, like “I’m not an expert and I need this explained to me in simpler terms.”
  • Do you need more detailed information that will make your response more relevant and useful?

Want to learn more?

There are a few places you can go to learn more about developing good prompts for your generative AI tool:

LinkedIn Learning: How to write an effective prompt for AI

Learn Prompting: Prompt Engineering Guide

Is ChatGPT cheating? The complexities of AI use in tertiary education. 

Craig Wattam, Rachael Richardson-Bullock

Te Mātāpuna Library & Learning Services, Auckland University of Technology

“The university is at the stage of reviewing its rules for misconduct because they really don’t apply as much anymore.” 

– Tom, Student Advocate, on the Noisy Librarian Podcast

Cheating in the tertiary education sector is not new. Generative AI technologies, while presenting enormous opportunity, are the latest threat to academic integrity. AI tools like ChatGPT blur the lines between human-generated and machine-generated content. They present a raft of issues, including ambiguous standards for legitimate and illegitimate use, variations in acceptance and usage across disciplinary contexts, and little or inadequate evidence of their use. A nuanced response is required.

Fostering academic integrity through AI literacy

Academic integrity research argues persuasively that a systematic, multi-stakeholder, networked approach is the best way to foster a culture of academic integrity (Kenny & Eaton, 2022). Fortunately, this is also the way to foster ethical, critically reflective, and skilful use of AI tools, in other words, a culture of AI literacy. Ironically, to support integrity, we must shift our attention away from merely preventing cheating to ensuring that students learn how to use these tools responsibly. In this way, we can keep our focus on learning and on helping students develop the skills necessary to navigate the digital age ethically and effectively.

Hybrid future 

So, the challenge of AI is both an opportunity and an imperative. As we humans continue to interact with technology in highly complex systems, the way we approach academic work will continue to develop. Rather than backing away or banning AI technologies from the classroom altogether, forging a hybrid future, where AI tools play a role in setting students up for success, will benefit both staff and students.

Information and academic literacy practitioners, and other educators, will need to be dexterous enough to respond to the eclipsing, revision, and constant evolution of some of our most ingrained concepts: authorship, originality, plagiarism, and acknowledgement.

What do students say? 

This was the topic of discussion in a recent episode of the Noisy Librarian Podcast. Featured guests were an academic and a student – a library Learning Advisor and a Student Advocate. The guests delved into the complexities of academic integrity in today’s digital landscape. Importantly, their discussion underscored the need for organisations to understand and hear from students about how AI is impacting them, how they are using it, and what they might be concerned about. Incorporating the student voice and understanding student perspectives is crucial for developing guidelines and support services that are truly effective and relevant.

Forget supervillains! 

Both podcast guests emphasised that few cases of student misconduct involve serial offenders or supervillains who have made a career out of gaming the system. More often, misconduct is related to a lack of knowledge or skill rather than an intention to cheat. Meanwhile, universities face their own challenges: they need to adapt their misconduct rules and provide clear guidelines on the acceptable use of AI tools.

Listen to the Noisy Librarian podcast episode Is ChatGPT cheating? The complexities of AI use in tertiary education


Or find us on Google Podcasts, Apple Podcasts or iHeartRadio


Kenny, N., & Eaton, S. E. (2022). Academic Integrity Through a SoTL Lens and 4M Framework: An Institutional Self-Study. In Academic Integrity in Canada (pp. 573–592). Springer, Cham.

The power of large language models to augment human learning 

By Fernando Marmolejo-Ramos, Tim Simon and Rhoda Abadia; University of South Australia

In early 2023, OpenAI’s ChatGPT became the buzzword in the Artificial Intelligence (AI) world: a cutting-edge large language model (LLM) that is part of the revolutionary generative AI movement. Google’s Bard and Anthropic’s Claude are other notable LLMs in this league, transforming the way we interact with AI applications. LLMs are super-sized dynamic libraries that can respond to queries, summarise text, and even tackle complex mathematical problems. Ever since ChatGPT’s debut, there has been an overwhelming surge of academic papers and grey literature (including blogs and pre-prints) both praising and critiquing the impact of LLMs. In this discussion, we aim to emphasise the importance of recognising LLMs as technologies that can augment human learning. Through examples, we illustrate how interacting with LLMs can foster AI literacy and augment learning, ultimately boosting innovation and creativity in problem-solving scenarios.

In the field of education, LLMs have emerged as powerful tools with the potential to enhance the learning experience for both students and teachers. They can be used as powerful supplements for reading, research, and personalised tutoring, benefiting students in various ways. 

For students, LLMs offer the convenience of summarising lengthy textbook chapters and locating relevant literature with tools like ChatPDF, ChatDOC, Perplexity, or Consensus. We believe that these tools not only accelerate students’ understanding of the material but also enable a deeper grasp of the subject matter. LLMs can also act as personalised tutors that are readily available to answer students’ queries and provide guided explanations. 

For teachers, LLMs may help in reducing repetitive tasks like grading assignments. By analysing students’ essays and short answers, they can assess coherence, reasoning, and plagiarism, thereby saving valuable time for meaningful teaching. Additionally, LLMs have the potential to suggest personalised feedback and improvements for individual students, enhancing the overall learning experience. The caveat, though, is that human judgement is to be ‘in-the-loop’ as LLMs have limited understanding of teaching methodologies, curriculum, and student needs. UNESCO has recognised this importance and produced a short guide on the use of LLMs in higher education, providing valuable insights for educators (see table on page 10). 

Achieving remarkable results with LLMs is made possible through “prompt engineering” (PE) – the craft of designing effective prompts that guide these language models towards informed responses. For instance, a prompt could be as straightforward as “rewrite the following = X”, where X represents the text to be rephrased. Alternatively, a more complex prompt like “explain what Z is in layman’s terms” can help clarify intricate concepts. In Figure 1, we present an example demonstrating how students can use specific prompts to learn statistical concepts while simultaneously gaining familiarity with R coding.

Figure 1.  Example of a prompt given to ChatGPT to create R code. The plot on the right shows the result when the code is run in R. Note how the LLM features good code commenting practices and secures reproducibility via the ‘set.seed( )’ function.
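As a minimal sketch of the template idea, simple prompt patterns like the ones above can be stored and filled in programmatically. The `TEMPLATES` dictionary and `fill_template` helper below are illustrative only, not part of any LLM's interface:

```python
# Hypothetical prompt templates following the patterns described in the text.
TEMPLATES = {
    "rewrite": "Rewrite the following: {text}",
    "explain": "Explain what {concept} is in layman's terms.",
}


def fill_template(name: str, **values: str) -> str:
    """Substitute the caller's values (the X or Z of the examples)
    into a named template, producing a ready-to-send prompt."""
    return TEMPLATES[name].format(**values)


print(fill_template("explain", concept="a confidence interval"))
```

The resulting string would then be submitted to the LLM of your choice.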

Additionally, Figure 2 reveals that not all LLMs offer identical responses to the same prompts, highlighting the uniqueness of each model’s output.

Figure 2. Example of how ChatGPT (left) and Claude (right) respond to the same prompt. Claude seemed to give a better response than ChatGPT and provided an explanation of what was done.

However, the most interesting aspect of PE lies in formulating appropriate questions for the LLMs, making it a matter of problem formulation. We believe this crucial element is at the core of effective prompting in educational contexts. Seen this way, it’s clear that good prompts should provide context for the question being asked, as context provides reference points for the intended meaning. For example, a teacher or student could design a prompt like: “Given the information in texts A and B, produce a text that discusses concepts a1 and a2 in text A in terms of concepts b1 and b2 in text B”, where A and B are paragraphs or texts given along with the prompt and a1, a2, b1 and b2 are specific aspects of texts A and B. Admittedly, that template is abstract, but context-rich prompts can be built in the same way (see Figure 3). These examples also hint at the idea that prompts work in a “rubbish prompts in; rubbish responses out” fashion; i.e. the quality of the response is directly proportional to the quality of the prompt.

Figure 3.  Example of a prompt with good context. This prompt was obtained via Bard through the prompt “construct a prompt on the subject of cognitive science and artificial intelligence that provides adequate context for any LLM to generate a meaningful response”.

PE is thus a process that involves engaging in a dialogue with the LLM to discover creative and innovative solutions to problems. One effective approach is “chain-of-thought” (CoT) prompting, which entails eliciting more in-depth responses from the LLM by following up on previously introduced ideas. The example shown in Figure 4 was output by Bard after the prompt “provide an example of a chain of thought prompting to be submitted to a large language model”. The green box contains the initial prompt, the orange box represents three subsequent questions, and the blue box represents a potential answer given by the LLM. CoT prompting can also be achieved by first setting a topic (e.g. “The Role of Artificial Intelligence (AI) in Education”) and then asking a sequence of questions such as:

  • “Start by defining Artificial Intelligence (AI) and its relevance in the context of education, including its potential applications in learning, teaching, and educational administration.”
  • “Explore how AI can personalise the learning experience for students, catering to individual needs, learning styles, and pace of progress.”
  • “Discuss the benefits of AI-powered adaptive learning systems in identifying students’ strengths and weaknesses, providing targeted interventions, and improving overall academic performance.”
  • “Examine the role of AI in automating administrative tasks, such as grading, scheduling, and resource management, to enhance efficiency and reduce the burden on educators.”

Figure 4. Example of a CoT prompt.
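A CoT exchange like this can be sketched as an accumulating conversation. The role/content message format below mirrors the shape common chat APIs use, but `ask_llm` is a placeholder standing in for a real model call:

```python
def ask_llm(messages):
    # Placeholder: a real implementation would call an LLM API here,
    # passing the full message history so each answer builds on the chain.
    return f"[model response to: {messages[-1]['content'][:40]}...]"


topic = "The Role of Artificial Intelligence (AI) in Education"
follow_ups = [
    "Start by defining AI and its relevance in the context of education.",
    "Explore how AI can personalise the learning experience for students.",
    "Discuss the benefits of AI-powered adaptive learning systems.",
    "Examine the role of AI in automating administrative tasks.",
]

# The conversation opens with the topic, then each follow-up question and
# its answer are appended, so later questions see all earlier ideas.
messages = [{"role": "user", "content": f"Topic: {topic}"}]
for question in follow_ups:
    messages.append({"role": "user", "content": question})
    reply = ask_llm(messages)
    messages.append({"role": "assistant", "content": reply})

print(len(messages))  # 1 topic message + 4 question/answer pairs = 9
```

The key point is that the whole history is resubmitted each turn, which is what lets the model follow up on previously introduced ideas.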

Variants of CoT prompting can be considered by generating several CoT reasoning paths (see the articles Tree of Thoughts: Deliberate Problem Solving with Large Language Models and Large Language Model Guided Tree-of-Thought). Regardless of the CoT prompting used, the ultimate goal is to solve a problem in original and informative ways.

It’s crucial not to overlook AI technologies but rather embrace them, finding the right balance between tasks delegated to AI and those best suited for human involvement. Fine-tuning interactions between humans and AI is key when exchanging information, ensuring a seamless and effective collaboration between the two.

A new direction: Our journey creating a chatbot

By Bryony Hawthorn, Information Services Manager, University of Waikato Library

The University of Waikato Library has been using a live chat service successfully for more than 14 years. This is a very popular service with students – and that was even before the pandemic flipped our lives upside down!

In 2019 library staff numbers were reduced, and we realised we may not always be able to staff the live chat as we have done in the past. This led to the idea of a chatbot.

Chatbot box. University of Waikato Library.

Meet our chatbot, Libby
We chose to build our chatbot using the LibraryH3lp platform as we already use this for our live chat service. So bonus = no extra costs! We named our chatbot Libby.

Libby’s interface is similar to live chat so it creates a consistent experience for users. The only difference is the colour: green for live chat and orange for the chatbot.

We create the responses that Libby sends. The chatbot administration back end has been set up to be simple to use and this means library staff creating responses don’t need to be tech experts. We’ve chosen to focus primarily on library-related topics.

Bumpy beginnings
Libby was very basic when we started. We struggled to get her to reply to keywords (the user had to type the EXACT word or phrase we had in our response bank) and she couldn’t return multiple responses to a single question. Because of this, Libby’s most common response was, “Sorry, I could not process your request. Please try a different word or phrase”. Let’s just say it was a bumpy beginning and a frustrating experience for our early users.

Stepping up
The road became a lot smoother when we introduced a natural language toolkit. This included:
● Text filtering – keywords can appear anywhere in a user’s question, so there is no need to type an exact phrase anymore.
● Stop word removal – common words (e.g. a, at, the, not, and) are ignored.
● Tokenizing – isolates words so they are compared separately.
● Stemming – allows for different endings for keywords.
● Synonyms – increase the range of words that trigger a response.
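As a pure-Python sketch of these steps, a user's question can be reduced to a set of matchable keywords. The tiny stop word list, suffix stripper, and synonym table below are illustrative stand-ins for what a real natural language toolkit (such as NLTK) provides:

```python
STOP_WORDS = {"a", "at", "the", "not", "and", "is", "do", "i", "how"}
SYNONYMS = {"book": "borrow", "loan": "borrow"}  # map variants to one trigger


def stem(word: str) -> str:
    """Crude suffix stripping so 'borrowing' and 'borrowed' both match."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word


def keywords(question: str) -> set[str]:
    tokens = question.lower().split()                    # tokenise
    tokens = [t.strip(".,!?") for t in tokens]           # strip punctuation
    tokens = [t for t in tokens if t not in STOP_WORDS]  # drop stop words
    stems = [stem(t) for t in tokens]                    # stem
    return {SYNONYMS.get(s, s) for s in stems}           # map synonyms


print(keywords("How do I book a study room?"))
```

With this pipeline, “book”, “booking” and “loan” can all trigger the same “borrow” response, wherever they appear in the question.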

We also improved the way Libby greets users and made it clear how to receive help from a person. Most recently we added a module to assist with spelling errors.

One of our biggest successes has been introducing a prompt that encourages users to type their email address if they want a follow-up from a librarian. Prior to Libby’s introduction, if the chat service was offline, users were told to email the library for assistance. This didn’t happen very often. But now users find it easy to add their email address and thus allow us to contact them. This has markedly increased the number of users receiving further help.

Example chat with chatbot. University of Waikato Library.

What we learned along the way
● Don’t do it alone – draw on those around you with the right technical experience.
● Simple fixes can make a big difference.
● Make it clear to your users that they are chatting with a bot that won’t be able to answer everything.
● Make it easy for users to request a follow-up from a librarian.

Libby is still a work in progress and our journey is ongoing. Who knows where the road will lead. There are other ways to build a chatbot and some are simpler than what we have done. If you are interested in creating something similar, do look around for options to find something that will suit your needs.

If you’d like to learn more about our journey so far, you can watch our presentation from the LearnFest2021 conference.