Generative artificial intelligence (GAI) is a form of artificial intelligence capable of generating text, images, or other media. GAI learns the patterns and structure of its training data and then generates new data with similar characteristics (Wikipedia). Naturally, there are many concerns about GAI, with intellectual property rights and academic dishonesty among them. However, GAI can be a useful tool and can be adapted to classroom teaching and learning once its capabilities, limitations, and inner workings are understood.
Hoffman (2023) urges us to reconsider how we view technology that affects classroom learning, stating "If we decide that it's important to use [generative artificial intelligence], then by definition it becomes part of the curriculum, not cheating." Already in wide use in areas such as visual arts, programming, library and information technology, help desks, and other business sectors, GAI is certain to become a natural part of the workforce and daily living, and thus cannot be ignored. One analyst (Bersin, 2023) predicts that "8% (800,000) [of U.S. jobs] will immediately be impacted" by GAI, mostly through upgrades and enhancements, and that additional jobs will be created as a result of GAI's proliferation. Further, Rutter and Mintz (2023) challenge us to re-evaluate how we think of intelligent technology:
...if a program can do a job as well as a person, then humans shouldn’t duplicate those abilities; they must surpass them. The next task for higher education, then, is to prepare graduates to make the most effective use of the new tools and to rise above and go beyond their limitations. That means pedagogies that emphasize active and experiential learning, that show students how to take advantage of these new technologies and that produce graduates who can do those things that the tools can’t.
It is our responsibility as educators to teach students to use GAI ethically and with full knowledge of its advantages and limitations. This page will help UMass Global faculty incorporate GAI into their classroom teaching and design assignments that resist completion by GAI alone while discouraging cheating more generally.
The developers of GAI tell us that tools such as ChatGPT and Bard, also known as "large language models" or LLMs, work by predicting the flow of language. GAI uses publicly available sources to "recognize the relationships that most commonly exist between individual units of meaning (including full or partial words, phrases and sentences)" (Hoffman, 2023). According to ChatGPT (2023, August) itself, this type of artificial intelligence
...is designed to generate human-like text based on the input it receives. For this to happen, the model is trained on a massive amount of text data from the internet. It learns to predict the next word in a sentence, which helps it capture grammar, syntax, and some level of world knowledge. The result is a 'pre-trained' model with knowledge of language patterns and facts. After pre-training, the model is fine-tuned on specific tasks using more narrow and focused datasets. This process tailors the model's behavior to perform certain tasks, like language translation, text completion, or question answering. The GPT architecture employs attention mechanisms that allow it to weigh the importance of different words in a sentence and generate coherent and contextually relevant responses. The large number of parameters (weights) in the model, often in the tens or hundreds of billions, enables it to generalize well to various tasks and generate text that is often remarkably coherent and contextually appropriate.
ChatGPT was trained on a large corpus of material; as it was developed, humans interacted with the model, ranking answers it gave so that it could learn which were better responses. Essentially, this model can recognize, summarize, and predict text based on giant data sets (Northwestern University, 2023).
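To make the "predict the next word" idea concrete, here is a minimal, purely illustrative sketch in Python. It is not how ChatGPT is actually built (real LLMs use transformer networks with billions of parameters, attention mechanisms, and human feedback on ranked answers); it simply counts which word most often follows another in a tiny sample of text, which is the simplest possible version of next-word prediction. The sample sentences and function name are invented for illustration.

# Toy next-word predictor: counts word pairs (bigrams) in a small sample
# of text and predicts the most frequent follower. Illustrative only.
from collections import Counter, defaultdict

training_text = (
    "generative ai can draft a lesson plan . "
    "generative ai can draft a quiz . "
    "students can draft a lesson plan with guidance ."
)

# Count how often each word is followed by each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    candidates = next_word_counts.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("draft"))   # -> 'a'
print(predict_next("lesson"))  # -> 'plan'

A model like this only ever echoes patterns in its training text; the scale and architecture of commercial LLMs are what allow the same basic prediction task to produce fluent, contextually appropriate prose.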
GAI has the potential to improve certain aspects of higher education. For example, Rutter and Mintz (2023) believe that GAI may be able to help students with traditionally underfunded areas of university life such as career counseling. For administrative work, GAI can act as an assistant, taking notes and providing meeting summaries. Available built-in tools include:
For course development and classroom work, GAI can be used to help design lessons and assignments and help students produce better work. Some ways that GAI can be used effectively in courses include:
Naturally, any material generated by any form of artificial intelligence should be proofread, personalized, improved, and appropriately cited.
Despite its amazing capabilities, there are things that GAI cannot do. Understanding its limitations may help you work confidently with GAI in the classroom. Here are examples of what GAI cannot do:
Naturally, for all its benefits, new technology brings with it the threat that students will use it for cheating and plagiarism, as addressed elsewhere in this guide. To compound the problem, some of the training material fed to these programs was used illegally, poorly written, inaccurate, biased, or even violent. While companies have tried to cleanse their GAI models of the worst content, they have not been able to remove it all. When students use GAI to produce work without parameters or guidance, the initial product is likely to include some if not all of these problems. In addition, GAI tools are growing in capability by the day, increasing concerns that artificial intelligence could replace the work of humans; GAI has even been able to pass medical licensing exams and law school courses (Hoffman, 2023).
In addition, like any technology, GAI is subject to inconsistencies, misinformation, and technical issues including:
Beyond that, GAI has created other knowledge divides that contribute to the phenomenon of information privilege: