This is complicated.
Since AI is trained on existing information to generate text and images, a major open legal question is whether AI outputs violate copyright law.
Text-to-image generators, including DALL-E (from OpenAI, the developers of ChatGPT), respond to prompts with images instead of text. Since the AI has to get the "pieces" of its output from somewhere (the work of others, much of which is copyright protected), two major lawsuits are underway against several text-to-image AI companies.
While there is no campus-wide policy in place at this time (as of Spring 2023) for the use of AI in any capacity, your colleagues who are also lawyers have deep reservations about any use of AI-generated images in your classes.
This box will get more attention soon so that we can provide better information, but for now, here are a few resources explaining these lawsuits.
Bias is very much a concern when we use generative AI (gAI) tools like ChatGPT. In the letters GPT, the "P" stands for "pre-trained."* The GPT-3 model was trained on around 45 TB of free web text data from multiple sources, including books, articles, webpages, and more. In simple terms, ChatGPT has absorbed a lot of our knowledge and ideas, and our biases, into its algorithm. For example, a student who enters a prompt about the signs and symptoms of an impending heart attack may not realize the inherent bias in the results: most heart disease research has been conducted on males, who exhibit distinctly different signs and symptoms compared to females. That bias will not be apparent in the output, and it may not be easy to overcome unless the user already knows about the bias in the research and can craft a prompt to compensate for it.
Still other entrenched biases may present themselves in unexpected ways. Unless a user deliberately engineers their prompts and their conversation with the chatbot to uncover biases, bias can be subtle and play into our own predisposition toward confirmation bias.
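For instructors or students who reach a model through its API rather than the chat window, here is a minimal sketch of what "crafting a prompt to compensate" for a known bias can look like in practice. This is an illustration only: it assumes the OpenAI Python client, the model name is a placeholder, and the same idea applies to prompts typed directly into ChatGPT.

# A minimal sketch, for illustration only. The OpenAI Python client and the
# model name below are assumptions, not campus-endorsed or required tools.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

# Naive prompt: the answer will tend to reflect research that skewed toward males.
naive_prompt = "What are the signs and symptoms of an impending heart attack?"

# Bias-aware prompt: only possible if the user already knows the bias exists.
aware_prompt = (
    "What are the signs and symptoms of an impending heart attack? "
    "Explain how symptoms commonly differ between males and females, "
    "given that much heart disease research has historically studied males."
)

for prompt in (naive_prompt, aware_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")

Comparing the two responses side by side is a quick way to show students how much the framing of a prompt, and not just the model, shapes what comes back.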
Another example, currently making its way around social media, gives ChatGPT a chance to solve a classic circa-1970 gender/sex-bias riddle and to develop some interesting rationales to explain its inconsistencies and double standards (remember: it learned them from us).
See the link to the chat transcript below, available as an image and as an accessible Word document.
---------------------
*see the Start Here tab for definitions
In part due to the built-in biases of our world, and therefore of our technology and the algorithms that make it hum, technology sets us up for inequity, whether through the access to and cost of technology or through its products.
Here are a few thoughts from around the Web on AI as an equity issue.
When interacting with generative AI (gAI) models, be cautious about supplying sensitive information, including personal, confidential, or proprietary information or data. AI prompts and conversations belong to the AI tool's provider and are used in its research and development.
For this reason, please:
Except where otherwise noted, the content in these guides by Tacoma Community College Library is licensed under CC BY SA 4.0.
This openly licensed content allows others to cite, share, or modify this content, with credit to TCC Library. When reusing or adapting this content, include this statement in the new document: This content was originally created by Tacoma Community College Library and shared with a CC BY SA 4.0 license.