
Generative Artificial Intelligence (GAI) Resource Guide for Faculty

This page will help faculty learn about GAI, use it effectively, and prevent cheating with it.

Generative Artificial Intelligence

Generative artificial intelligence (GAI) is a form of artificial intelligence capable of generating text, images, or other media. GAI learns the patterns and structure of input training data and then generates new data that has similar characteristics (Wikipedia). Naturally, there are many concerns about GAI, with intellectual property rights and academic dishonesty among them. However, GAI can be a useful tool and can be adapted to classroom teaching and learning once its capabilities, limitations, and programming are understood.

Hoffman (2023) urges us to reconsider how we view technology that affects classroom learning, stating "If we decide that it's important to use [generative artificial intelligence], then by definition it becomes part of the curriculum, not cheating." Already in wide use in areas such as visual arts, programming, library and information technology, help desks, and other business sectors, GAI is certain to become a natural part of the workforce and daily living, and thus cannot be ignored. One analyst (Bersin, 2023) predicts that "8% (800,000) [of U.S. jobs] will immediately be impacted" by GAI, mostly through upgrades and enhancements, and that additional jobs will be created as a result of GAI's proliferation. Further, Rutter and Mintz (2023) challenge us to re-evaluate how we think of intelligent technology:  

...if a program can do a job as well as a person, then humans shouldn’t duplicate those abilities; they must surpass them. The next task for higher education, then, is to prepare graduates to make the most effective use of the new tools and to rise above and go beyond their limitations. That means pedagogies that emphasize active and experiential learning, that show students how to take advantage of these new technologies and that produce graduates who can do those things that the tools can’t. 

It is our responsibility as educators to teach students to use GAI ethically and with full knowledge of its advantages and limitations. This page will help UMass Global faculty incorporate GAI into their classroom teaching and design assignments that are difficult to complete with GAI alone while discouraging cheating generally.

How Does GAI Work?

The developers of GAI tell us that tools such as ChatGPT and Bard, also known as "large language models" or LLMs, work by predicting the flow of language. GAI uses publicly available sources to "recognize the relationships that most commonly exist between individual units of meaning (including full or partial words, phrases and sentences)" (Hoffman, 2023). According to ChatGPT (2023, August) itself, this type of artificial intelligence

 ...is designed to generate human-like text based on the input it receives. For this to happen, the model is trained on a massive amount of text data from the internet. It learns to predict the next word in a sentence, which helps it capture grammar, syntax, and some level of world knowledge. The result is a 'pre-trained' model with knowledge of language patterns and facts. After pre-training, the model is fine-tuned on specific tasks using more narrow and focused datasets. This process tailors the model's behavior to perform certain tasks, like language translation, text completion, or question answering. The GPT architecture employs attention mechanisms that allow it to weigh the importance of different words in a sentence and generate coherent and contextually relevant responses. The large number of parameters (weights) in the model, often in the tens or hundreds of billions, enables it to generalize well to various tasks and generate text that is often remarkably coherent and contextually appropriate.

ChatGPT was trained on a large corpus of material; as it was developed, humans interacted with the model, ranking answers it gave so that it could learn which were better responses. Essentially, this model can recognize, summarize, and predict text based on giant data sets (Northwestern University, 2023).
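The idea of "predicting the next word" can be made concrete with a deliberately tiny sketch. Real LLMs use neural networks with attention mechanisms and billions of parameters; the toy bigram model below (with a made-up training corpus) is only an illustration of the core principle described above: learning which word most often follows another in the training text, then using those frequencies to predict a continuation.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, how often each following word appears in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the continuation seen most often in training, or None if the word is unknown."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A made-up miniature "training corpus" for illustration only.
corpus = (
    "the model predicts the next word "
    "the model learns patterns from text "
    "the model generates text"
)
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # prints "model" ("model" follows "the" most often)
```

An actual LLM works over sub-word tokens rather than whole words, weighs the entire preceding context rather than a single word, and produces a probability distribution rather than one fixed answer, but the prediction-from-training-data principle is the same.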


What GAI CAN Do

GAI has the potential to improve certain aspects of higher education. For example, Rutter and Mintz (2023) believe that GAI may be able to help students in traditionally under-funded areas of university life such as career counseling. For administrative work, GAI can act as an assistant, taking notes and providing meeting summaries.

Generative artificial intelligence technology is currently not in FERPA compliance. Using GAI to record students is not recommended. See the FAQ Page for more information.


For course development and classroom work, GAI can be used to help design lessons and assignments and help students produce better work. Some ways that GAI can be used effectively in courses include:

  • Drafting of assignment prompts.
  • Drafting of syllabi.
  • Suggestions for project topics.
  • Keywords for library database searching.
  • Drafting of sections of a paper, or suggestions for improvement.
  • Synthesizing and arranging recorded knowledge, so that basic facts can be learned and checked in one place instead of many.
  • Solving math problems (when the student knowing the process is not critical).
  • Coding, providing code snippets, explanations of programming concepts, and debugging assistance.

Naturally, any material generated by any form of artificial intelligence should be proofread, personalized, improved, and appropriately cited.


What GAI CANNOT Do

Despite its amazing capabilities, there are things that GAI cannot do. Understanding its limitations may help you work confidently with GAI in the classroom. Here are examples of what GAI cannot do:

  • Access Paid Content: Like Google, GAI surfaces popular and advertiser-supported content first, and it generally cannot access material behind paywalls, which excludes premium library material. GAI can search abstracts and summaries on Google Scholar and publishers' websites, but that type of analysis is likely to be shallow.
  • Mine the Most Current Internet Content: At this time, GAI is a few years behind in the content it can mine; however, this may change quickly.
  • Analyze Newer or Under-Represented Topics: Topics or subjects with minimal content on the Internet will not produce good GAI output.
  • Analyze Certain Sensitive Topics: Some GAI models like ChatGPT are programmed to avoid discussion of what they classify as "harmful ideologies." Questions that ask GAI to consider problematic concepts, no matter how carefully they are phrased, may be met with a message to the effect of, "I'm sorry, but I cannot engage in discussions that promote or glorify harmful ideologies." When asked "What harmful ideologies will you refuse to discuss?" ChatGPT (2023, August) responded that topics it will not engage include but are not limited to:
    • Hate Speech
    • Extremism
    • Misinformation
    • Harassment and Bullying
    • Illegal Activities
    • Self Harm or Suicide
    • Adult Content

Concerns about GAI

Naturally, for all its benefits, new technology brings with it the threat of students using it for cheating and plagiarism, as addressed elsewhere in this guide. To compound the problem, some of what these programs were fed as training material was illegally used, poorly written, inaccurate, biased, or even violent. While companies have tried to cleanse their GAI models of the worst content, they have not been able to remove it all. When students use GAI without parameters or guidance to produce work, the initial product is likely to include some if not all of these problems. In addition, GAI tools are growing in capability by the day, increasing concerns that artificial intelligence can replace the work of humans. GAI has even been able to pass medical licensing exams and law school courses (Hoffman, 2023).

In addition, like any technology, GAI is subject to inconsistencies, misinformation, and technical issues including:

  • Hallucinations: An overload of input or a lack of data to mine can result in GAI generating nonsensical, incorrect, or fabricated information.
  • Being Flat-Out Wrong: GAI can re-generate whatever bad information it mines, or conflate closely related ideas that are frequently mentioned together. 
  • Theft of Intellectual Property: GAI programs mine material from online sources not located behind a paywall, without the permission of the authors/creators; the output, particularly when unattributed, may therefore constitute intellectual property theft. Further, a number of well-known authors and other entertainers have filed lawsuits against OpenAI (the creator of ChatGPT) for violating copyright by using their protected works without permission to train ChatGPT, which in turn allows the creation of "'derivative works' that can mimic and summarize the authors' books, potentially harming the market for authors' work" without compensation (New York Times, September 20).
  • Propaganda: GAI output is subject to be tainted by misinformation, malinformation, and propaganda, such as doctored images and "deepfake" videos. For example, until at least 2018, there was a website with "martinlutherking" as part of the URL that was managed by neo-Nazi propagandists. There is an active organization that promotes "conversion therapy," and though this might seem easy to identify as harmful pseudo-science, its website and documentation follow the standards laid out by the American Psychological Association, making it extremely difficult for a novice to distinguish from results of studies based on true scientific method. Mistaken, nefarious, and harmful information such as that in these examples can be mined and repeated by GAI in a way that sounds authoritative.
  • Lack of Transparency: Because the developers of GAI do not disclose their algorithms, it is impossible for researchers to analyze output for inherent bias and accuracy of source material. ChatGPT (2023, August) had this to say about its own transparency: "ChatGPT does not have access to its training data, and it cannot provide specific details about the sources used during its training. The model was trained on a mixture of licensed data, data created by human trainers, and publicly available data. OpenAI, the organization behind ChatGPT, has not publicly disclosed the specifics of the training duration, the individual datasets used, or the proportion of the data that comes from different sources. Therefore, ChatGPT cannot provide transparency regarding its sources in terms of specific documents or data sets."
  • Reliability Checks: GAI currently offers no reliability predictors or crowdsourced fact-checking of the kind Wikipedia provides.
  • Security: Like any other technology, GAI can be susceptible to malware, spam, phishing, and other forms of privacy breaches and data theft. For more see GAI and Personal Security.

Beyond that, GAI has created other knowledge divides that contribute to the phenomenon of information privilege:

  • Equity: Who can use these tools most effectively when use is appropriate, or without being detected when use is inappropriate or disallowed? Who can afford to pay for the tools as they become monetized?
  • Data Quality and Accuracy: GAI provides data that it can find by scouring the free Internet, where data sets and statistics may be incomplete, incorrect, or subject to misinterpretation, and these errors may be repeated in a GAI inquiry. Per ChatGPT: "Inequities in data collection can lead to underrepresentation or misrepresentation of certain groups, making it difficult for the models to provide fair and unbiased information or support for those groups" (ChatGPT, September 2023).
  • Algorithmic Bias, which favors patterns from the user's own searches and paid content, and reflects the inherent bias of the content it mines.
  • Language and Cultural Bias: GAI tends to be biased toward the dominant languages and cultures that are prevalent in the data used to train the program. This can result in less accurate and less comprehensive responses for users from marginalized linguistic or cultural backgrounds (ChatGPT, August 2023).
