Data poisoning: how artists are sabotaging AI to take revenge on image generators

Over the break we read and loved this article from The Conversation, originally published on 18 December 2023. We hope you do too!

T.J. Thomson, Author provided

T.J. Thomson, RMIT University and Daniel Angus, Queensland University of Technology

Imagine this. You need an image of a balloon for a work presentation and turn to a text-to-image generator, like Midjourney or DALL-E, to create a suitable image.

You enter the prompt: “red balloon against a blue sky” but the generator returns an image of an egg instead. You try again but this time, the generator shows an image of a watermelon.

What’s going on?

The generator you’re using may have been “poisoned”.

What is ‘data poisoning’?

Text-to-image generators work by being trained on large datasets that include millions or billions of images. Some generators, like those offered by Adobe or Getty, are only trained with images the generator’s maker owns or has a licence to use.

But other generators have been trained by indiscriminately scraping online images, many of which may be under copyright. This has led to a slew of copyright infringement cases where artists have accused big tech companies of stealing and profiting from their work.

This is also where the idea of “poison” comes in. Researchers who want to empower individual artists have recently created a tool named “Nightshade” to fight back against unauthorised image scraping.

The tool works by subtly altering an image’s pixels in a way that wreaks havoc on computer vision but leaves the image looking unchanged to human eyes.

If an organisation then scrapes one of these images to train a future AI model, its data pool becomes “poisoned”. This can result in the algorithm mistakenly learning to classify an image as something a human would visually know to be untrue. As a result, the generator can start returning unpredictable and unintended results.

Symptoms of poisoning

As in our earlier example, a balloon might become an egg. A request for an image in the style of Monet might instead return an image in the style of Picasso.

Some of the issues with earlier AI models, such as trouble accurately rendering hands, for example, could return. The models could also introduce other odd and illogical features to images – think six-legged dogs or deformed couches.

The higher the number of “poisoned” images in the training data, the greater the disruption. Because of how generative AI works, the damage from “poisoned” images also affects related prompt keywords.

For example, if a “poisoned” image of a Ferrari is used in training data, prompt results for other car brands and for other related terms, such as vehicle and automobile, can also be affected.

Nightshade’s developer hopes the tool will make big tech companies more respectful of copyright, but it’s also possible users could abuse the tool and intentionally upload “poisoned” images to generators to try to disrupt their services.

Is there an antidote?

In response, stakeholders have proposed a range of technological and human solutions. The most obvious is paying greater attention to where input data are coming from and how they can be used. Doing so would result in less indiscriminate data harvesting.

This approach does challenge a common belief among computer scientists: that data found online can be used for any purpose they see fit.

Other technological fixes also include the use of “ensemble modeling” where different models are trained on many different subsets of data and compared to locate specific outliers. This approach can be used not only for training but also to detect and discard suspected “poisoned” images.
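As an illustration only, not how Nightshade or any production system actually works, the ensemble idea can be sketched in a few lines of Python: several toy models are trained on different random subsets of the data, and any sample whose stored label most of the models dispute is flagged as possibly poisoned. The model, data, and thresholds here are all hypothetical stand-ins.

```python
# Hypothetical sketch of ensemble-based outlier detection for poisoned data.
# Each toy "model" is a 1-D nearest-centroid classifier trained on a random
# subset; samples whose labels disagree with the ensemble majority are flagged.
import random
from collections import Counter

def nearest_centroid_model(samples):
    """Train a toy nearest-centroid classifier on (feature, label) pairs."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    centroids = {label: sums[label] / counts[label] for label in sums}
    def predict(x):
        return min(centroids, key=lambda label: abs(x - centroids[label]))
    return predict

def flag_suspects(dataset, n_models=7, agreement_threshold=0.5, seed=0):
    """Flag samples whose stored label most ensemble members vote against."""
    rng = random.Random(seed)
    models = [nearest_centroid_model(rng.sample(dataset, k=len(dataset) // 2))
              for _ in range(n_models)]
    suspects = []
    for x, label in dataset:
        votes = Counter(m(x) for m in models)
        if votes[label] / n_models < agreement_threshold:
            suspects.append((x, label))
    return suspects

# Toy data: features near 1.0 are "balloon" images, near 10.0 are "egg" images.
clean = [(1.0 + 0.1 * i, "balloon") for i in range(10)] + \
        [(10.0 + 0.1 * i, "egg") for i in range(10)]
poisoned = [(1.2, "egg")]  # looks like a balloon but is labelled "egg"
print(flag_suspects(clean + poisoned))
```

Run on this toy data, the ensemble flags only the mislabelled sample, because every subset-trained model votes “balloon” for a feature the dataset labels “egg”.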

Audits are another option. One audit approach involves developing a “test battery” – a small, highly curated, and well-labelled dataset – using “hold-out” data that are never used for training. This dataset can then be used to examine the model’s accuracy.
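The audit idea can be sketched just as simply: score the model against the curated hold-out battery and raise an alarm when accuracy falls below an agreed baseline. The model and battery below are hypothetical stand-ins, not any real generator’s interface.

```python
# Hypothetical sketch of a "test battery" audit: evaluate a model on a small,
# well-labelled hold-out set that was never used for training.

def audit(model, test_battery, baseline=0.9):
    """Return (accuracy, passed) for a model over held-out (input, label) pairs."""
    correct = sum(1 for x, label in test_battery if model(x) == label)
    accuracy = correct / len(test_battery)
    return accuracy, accuracy >= baseline

# Toy stand-in for a poisoned classifier that has learned balloons are eggs.
poisoned_model = lambda prompt: "egg" if "balloon" in prompt else prompt
battery = [("red balloon", "balloon"), ("blue sky", "blue sky"),
           ("hen's egg", "hen's egg"), ("green balloon", "balloon")]
print(audit(poisoned_model, battery))
```

Because the battery is never used for training, poisoned images cannot contaminate it, so a sudden accuracy drop on the battery is a useful symptom of a contaminated training pool.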

Strategies against technology

So-called “adversarial approaches” (those that degrade, deny, deceive, or manipulate AI systems), including data poisoning, are nothing new. They have also historically included using make-up and costumes to circumvent facial recognition systems.

Human rights activists, for example, have been concerned for some time about the indiscriminate use of machine vision in wider society. This concern is particularly acute concerning facial recognition.

Systems like Clearview AI, which hosts a massive searchable database of faces scraped from the internet, are used by law enforcement and government agencies worldwide. In 2021, Australia’s privacy regulator determined Clearview AI had breached the privacy of Australians.

In response to facial recognition systems being used to profile specific individuals, including legitimate protesters, artists devised adversarial make-up patterns of jagged lines and asymmetric curves that prevent surveillance systems from accurately identifying them.

There is a clear connection between these cases and the issue of data poisoning, as both relate to larger questions around technological governance.

Many technology vendors will consider data poisoning a pesky issue to be fixed with technological solutions. However, it may be better to see data poisoning as an innovative solution to an intrusion on the fundamental moral rights of artists and users.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University and Daniel Angus, Professor of Digital Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Being Prompt with Prompt Engineering

Krista Yuen, The University of Waikato
Danielle Degiorgio, Edith Cowan University

Warning – ChatGPT and DALL-E were used in the making of this post.

Experienced AI users have been experimenting with the art of prompt engineering to get the most useful and accurate responses from generative AI systems, and in the process have synthesised techniques for drawing the best output from these systems. Crafting an effective prompt, also known as prompt engineering, is a skill increasingly needed for information seeking as the use of AI continues to grow.

While AI continues to improve, and many systems now encourage more precise prompting from their users, AI is still only as good as the prompts it is given. Essentially, if you want quality content, you must use quality prompts. A solid prompt requires critical thinking and reflection, both in how you design it and in how you interact with the output. While there are many ways to structure a prompt, these are the three most important things to remember when constructing yours:

Context

  • Provide background information
  • Set the scene
  • Use exact keywords
  • Specify audience
  • You could also give the AI tool a role to play, e.g. “Act as an expert community organiser!”

Task

  • Clearly define tasks
  • Be as specific as possible about exactly what you want the AI tool to do
  • Break down the steps involved if needed
  • Put in any extra detail, information or text that the AI tool needs

Output

  • Specify desired format, style, and tone
  • Specify inclusions and exclusions
  • Tell it how you would like the results formatted, e.g. a table, a bullet-point list, or even HTML or CSS.
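The three-part structure above can be sketched as a small helper that assembles a prompt from its components. The function name and the example values are illustrative, not tied to any particular AI tool’s API.

```python
# Minimal sketch of the Context / Task / Output prompt structure.

def build_prompt(context, task, output):
    """Join the three prompt components into one instruction string."""
    return "\n\n".join([f"Context: {context}",
                        f"Task: {task}",
                        f"Output: {output}"])

prompt = build_prompt(
    context="Act as an expert community organiser writing for local volunteers.",
    task="Draft a plan for a neighbourhood clean-up day, breaking down the key steps.",
    output="A bullet-point list in a friendly, encouraging tone.",
)
print(prompt)
```

Keeping the three components separate like this makes it easy to iterate: if the response misses the mark, you can revise the context, task, or output specification independently and resubmit.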

Example prompt for text generation e.g., ChatGPT

You are an expert marketing and communications advisor working on a project for dolphin conservation and need to create a comprehensive marketing proposal. The goal is to raise awareness and promote actions that contribute to the protection of dolphins and their habitats. The target audience includes environmental activists and the general public who might be interested in marine conservation.

The proposal should highlight the current challenges faced by dolphins, including threats like pollution, overfishing, and habitat destruction. It should emphasise the importance of dolphins to marine ecosystems and their appeal to people due to their intelligence and playful nature. It should include five bullet points for each area: campaign objectives, target audience, key messages, marketing channels, content ideas, partnerships, budget estimation, timeline, and evaluation metrics.

Please structure it in a format that is easy to present to stakeholders, such as a PowerPoint presentation or a detailed report. It should be professionally written, persuasive, and visually appealing with suggestions for imagery and design elements that align with the theme of dolphin conservation.

Example prompt for image generation e.g., DALL·E

Create a captivating and colourful image for a marketing campaign focused on dolphin conservation. The setting is a serene, crystal-clear ocean under a bright blue sky with soft, fluffy clouds. In the foreground, a group of three playful dolphins is leaping gracefully out of the water. These dolphins should appear joyful and full of life, symbolising the beauty and intelligence of marine life.

The central dolphin, a majestic bottlenose, is at the peak of its jump, with water droplets sparkling around it like diamonds under the sunlight. On the left, a smaller, younger dolphin, mirrors its movement, adding a sense of playfulness and family. To the right, another dolphin is partially submerged, preparing to leap. In the background, a distant, unspoiled coastline with lush greenery and a few palm trees provides a natural, pristine environment. This idyllic scene should evoke a sense of peace and the importance of preserving such beautiful natural habitats.

This image was created with DALL·E 2 via ChatGPT 4 (November 22 Version).

Not getting the results you want?

If your first response has not given you exactly what you need, remember you can try and try again! You may need to add more guidelines to your prompt:

  • Try adding more words or ideas. What kind of instructions might help your prompt return more of what you need?
  • Provide some more context, like “I’m not an expert and I need this explained to me in simpler terms.”
  • Do you need more detailed information that will make your response more relevant and useful?

Want to learn more?

There are a few places you can go to learn more about developing good prompts for your generative AI tool:

LinkedIn Learning: How to write an effective prompt for AI

Learn Prompting: Prompt Engineering Guide

Is ChatGPT cheating? The complexities of AI use in tertiary education. 

Craig Wattam, Rachael Richardson-Bullock

Te Mātāpuna Library & Learning Services, Auckland University of Technology

“The university is at the stage of reviewing its rules for misconduct because they really don’t apply as much anymore.” 

– Tom, Student Advocate, on the Noisy Librarian Podcast

Cheating in the tertiary education sector is not new. Generative AI technologies, while presenting enormous opportunity, are the latest threat to academic integrity. AI tools like ChatGPT blur the lines between human-generated and machine-generated content. They present a raft of issues, including ambiguous standards for legitimate and illegitimate use, variations in acceptance and usage across discipline contexts, and little or inadequate evidence of their use. A nuanced response is required.

Fostering academic integrity through AI literacy

Academic integrity research argues persuasively that a systematic, multi-stakeholder, networked approach is the best way to foster a culture of academic integrity (Kenny & Eaton, 2022). Fortunately, this is also the way to foster ethical, critically reflective and skilful use of AI tools, in other words, a culture of AI literacy. Ironically, to support integrity, we must shift our attention away from merely preventing cheating to ensuring that students learn how to use these tools responsibly. Thus, we can keep our focus on learning and help students develop the skills necessary to navigate the digital age ethically and effectively.

Hybrid future 

So, the challenge of AI is both an opportunity and an imperative. As we humans continue to interact with technology in highly complex systems, the way we approach academic work will continue to develop. Rather than backing away or banning AI technologies from the classroom altogether, forging a hybrid future, where AI tools play a role in setting students up for success, will benefit both staff and students.

Information and academic literacy practitioners, and other educators, will need to be dexterous enough to respond to the eclipsing, revision, and constant evolution of some of our most ingrained concepts, such as authorship, originality, plagiarism, and acknowledgement.

What do students say? 

This was the topic of discussion in a recent episode of the Noisy Librarian Podcast. Featured guests were an academic and a student – a library Learning Advisor and a Student Advocate. The guests delved into the complexities of academic integrity in today’s digital landscape. Importantly, their discussion underscored the need for organisations to understand and hear from students about how AI is impacting them, how they are using it, and what they might be concerned about. Incorporating the student voice and understanding student perspectives is crucial for developing guidelines and support services that are truly effective and relevant.

Forget supervillains! 

Both podcast guests emphasised that few cases of student misconduct involve serial offenders or supervillains who have made a career out of gaming the system. Rather than stemming from an intent to cheat, misconduct is more often related to a lack of knowledge or skill. Meanwhile, universities are facing challenges: needing to adapt their misconduct rules and provide clear guidelines on the acceptable use of AI tools.

Listen to the Noisy Librarian podcast episode Is ChatGPT cheating? The complexities of AI use in tertiary education

Podbean

Or find us on Google Podcasts, Apple Podcasts or iHeartRadio

Reference:

Kenny, N., & Eaton, S. E. (2022). Academic Integrity Through a SoTL Lens and 4M Framework: An Institutional Self-Study. In Academic Integrity in Canada (pp. 573–592). Springer, Cham. https://doi.org/10.1007/978-3-030-83255-1_30

The power of large language models to augment human learning 

By Fernando Marmolejo-Ramos, Tim Simon and Rhoda Abadia; University of South Australia

In early 2023, OpenAI’s ChatGPT became the buzzword in the Artificial Intelligence (AI) world: a cutting-edge large language model (LLM) that is part of the revolutionary generative AI movement. Google’s Bard and Anthropic’s Claude are other notable LLMs in this league, transforming the way we interact with AI applications. LLMs are super-sized dynamic libraries that can respond to queries, summarise text, and even tackle complex mathematical problems. Ever since ChatGPT’s debut, there has been an overwhelming surge of academic papers and grey literature (including blogs and pre-prints) both praising and critiquing the impact of LLMs. In this discussion, we aim to emphasise the importance of recognising LLMs as technologies that can augment human learning. Through examples, we illustrate how interacting with LLMs can foster AI literacy and augment learning, ultimately boosting innovation and creativity in problem-solving scenarios.

In the field of education, LLMs have emerged as powerful tools with the potential to enhance the learning experience for both students and teachers. They can be used as powerful supplements for reading, research, and personalised tutoring, benefiting students in various ways. 

For students, LLMs offer the convenience of summarising lengthy textbook chapters and locating relevant literature with tools like ChatPDF, ChatDOC, Perplexity, or Consensus. We believe that these tools not only accelerate students’ understanding of the material but also enable a deeper grasp of the subject matter. LLMs can also act as personalised tutors that are readily available to answer students’ queries and provide guided explanations. 

For teachers, LLMs may help in reducing repetitive tasks like grading assignments. By analysing students’ essays and short answers, they can assess coherence, reasoning, and plagiarism, thereby saving valuable time for meaningful teaching. Additionally, LLMs have the potential to suggest personalised feedback and improvements for individual students, enhancing the overall learning experience. The caveat, though, is that human judgement is to be ‘in-the-loop’ as LLMs have limited understanding of teaching methodologies, curriculum, and student needs. UNESCO has recognised this importance and produced a short guide on the use of LLMs in higher education, providing valuable insights for educators (see table on page 10). 

Achieving remarkable results with LLMs is made possible through “prompt engineering” (PE), the craft of designing effective prompts to guide these language models towards informed responses. For instance, a prompt could be as straightforward as “rewrite the following = X,” where X represents the text to be rephrased. Alternatively, a more complex prompt like “explain what Z is in layman’s terms” can help clarify intricate concepts. In Figure 1, we present an example demonstrating how students can use specific prompts to learn statistical concepts while simultaneously gaining familiarity with R coding.

Figure 1.  Example of a prompt given to ChatGPT to create R code. The plot on the right shows the result when the code is run in R. Note how the LLM features good code commenting practices and secures reproducibility via the ‘set.seed( )’ function.

Additionally, Figure 2 reveals that not all LLMs offer identical responses to the same prompts, highlighting the uniqueness of each model’s output.

Figure 2. Example of how ChatGPT (left) and Claude (right) respond to the same prompt. Claude seemed to give a better response than ChatGPT and provided an explanation of what was done.

However, the most interesting aspect of PE lies in formulating appropriate questions for the LLMs, making it a matter of problem formulation. We believe this crucial element is at the core of effective prompting in educational contexts. Seen this way, it’s clear that good prompts should have context for the question being asked, as context provides reference points for the intended meaning. For example, a teacher or student could design a prompt like: “Given the information in texts A and B, produce a text that discusses concepts a1 and a2 in text A in terms of concepts b1 and b2 in text B”; where A and B are paragraphs or texts given along with the prompt and a1, a2, b1 and b2 are specific aspects from texts A and B. Admittedly, that prompt lacks context. Nonetheless, context-rich prompts could still be conceived (see Figure 3). These examples also hint at the idea that prompts work in a “rubbish prompts in; rubbish responses out” fashion; i.e. the quality of the prompt is directly proportional to the quality of the response.
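The comparative template above can be sketched as a small function; the notation A, B, a1, a2, b1 and b2 follows the text, while the fill-in values below are purely hypothetical examples.

```python
# Sketch of the "discuss concepts in text A in terms of concepts in text B"
# prompt template, filled with illustrative values.

def comparative_prompt(A, B, a1, a2, b1, b2):
    """Build the comparative prompt from two texts and their key concepts."""
    return (
        f"Given the information in texts A and B, produce a text that "
        f"discusses {a1} and {a2} in text A in terms of {b1} and {b2} in text B.\n\n"
        f"Text A: {A}\n\nText B: {B}"
    )

prompt = comparative_prompt(
    A="A short passage about supervised machine learning.",
    B="A short passage about classroom feedback practices.",
    a1="training data", a2="error correction",
    b1="worked examples", b2="formative assessment",
)
print(prompt)
```

Supplying the two texts inline, rather than merely naming them, is what gives the LLM the reference points it needs; without them, the same template would be one of the context-poor prompts the paragraph above warns against.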

Figure 3.  Example of a prompt with good context. This prompt was obtained via Bard through the prompt “construct a prompt on the subject of cognitive science and artificial intelligence that provides adequate context for any LLM to generate a meaningful response”.

PE is thus a process of engaging in a dialogue with the LLM to discover creative and innovative solutions to problems. One effective approach is “chain-of-thought” (CoT) prompting, which elicits more in-depth responses from the LLM by following up on previously introduced ideas. The example shown in Figure 4 was output by Bard after the prompt “provide an example of a chain of thought prompting to be submitted to a large language model”. The green box contains the initial prompt, the orange box represents three subsequent questions, and the blue box represents a potential answer given by the LLM. Another way of CoT prompting starts by setting a topic (e.g. “The Role of Artificial Intelligence (AI) in Education”) and then asking a sequence of questions such as:

  • “Start by defining Artificial Intelligence (AI) and its relevance in the context of education, including its potential applications in learning, teaching, and educational administration.”
  • “Explore how AI can personalise the learning experience for students, catering to individual needs, learning styles, and pace of progress.”
  • “Discuss the benefits of AI-powered adaptive learning systems in identifying students’ strengths and weaknesses, providing targeted interventions, and improving overall academic performance.”
  • “Examine the role of AI in automating administrative tasks, such as grading, scheduling, and resource management, to enhance efficiency and reduce the burden on educators.”
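The topic-then-follow-ups pattern of CoT prompting is essentially a loop that feeds each new question into the accumulated dialogue. The sketch below uses a stub in place of a real LLM call (an actual implementation would send the conversation to an LLM API); everything here is illustrative.

```python
# Sketch of chain-of-thought style follow-up prompting as a loop.
# ask_llm is a stand-in stub; a real version would call an LLM service
# with the conversation so far.

def ask_llm(conversation):
    """Stub LLM: echoes the latest prompt instead of generating a response."""
    return f"[response to: {conversation[-1]}]"

def chain_of_thought(topic, follow_ups):
    """Open with a topic, then pose each follow-up against the growing dialogue."""
    conversation = [f"Topic: {topic}"]
    for question in follow_ups:
        conversation.append(question)
        conversation.append(ask_llm(conversation))
    return conversation

dialogue = chain_of_thought(
    "The Role of Artificial Intelligence (AI) in Education",
    ["Start by defining AI and its relevance in the context of education.",
     "Explore how AI can personalise the learning experience for students.",
     "Examine the role of AI in automating administrative tasks."],
)
for turn in dialogue:
    print(turn)
```

The key point the loop makes concrete is that each follow-up is asked in the context of everything said before, which is what lets the dialogue deepen rather than restart with every question.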

Figure 4. Example of a CoT prompt.

Variants of CoT prompting can be considered by generating several CoT reasoning paths (see the articles Tree of Thoughts: Deliberate Problem Solving with Large Language Models and Large Language Model Guided Tree-of-Thought). Regardless of the CoT prompting used, the ultimate goal is to solve a problem in original and informative ways.

It’s crucial not to overlook AI technologies but rather embrace them, finding the right balance between tasks delegated to AI and those best suited for human involvement. Fine-tuning interactions between humans and AI is key when exchanging information, ensuring a seamless and effective collaboration between the two.