
Generative Artificial Intelligence (GAI) Resource Guide for Faculty

This page will help faculty learn about GAI, use it effectively, and prevent cheating with it.

Introduction

Generative artificial intelligence (GAI) is a form of artificial intelligence capable of generating text, images, or other media. GAI learns the patterns and structure of input training data and then generates new data that has similar characteristics (Wikipedia). Naturally, there are many concerns about GAI, with intellectual property rights and academic dishonesty among them. However, GAI can be a useful tool and can be adapted to classroom teaching and learning once its capabilities, limitations, and programming are understood.

Hoffman (2023) urges us to reconsider how we view technology that affects classroom learning, stating "If we decide that it's important to use [generative artificial intelligence], then by definition it becomes part of the curriculum, not cheating." Already in wide use in areas such as visual arts, programming, library and information technology, help desks, and other business sectors, GAI is certain to become a natural part of the workforce and daily living, and thus cannot be ignored. One analyst (Bersin, 2023) predicts that "8% (800,000) [of U.S. jobs] will immediately be impacted" by GAI, mostly through upgrades and enhancements, and that additional jobs will be created as a result of GAI's proliferation. Further, Rutter and Mintz (2023) challenge us to re-evaluate how we think of intelligent technology:  

...if a program can do a job as well as a person, then humans shouldn’t duplicate those abilities; they must surpass them. The next task for higher education, then, is to prepare graduates to make the most effective use of the new tools and to rise above and go beyond their limitations. That means pedagogies that emphasize active and experiential learning, that show students how to take advantage of these new technologies and that produce graduates who can do those things that the tools can’t. 

It is our responsibility as educators to teach students to use GAI ethically and with full knowledge of its advantages and limitations. This guide aims to help UMass Global faculty incorporate GAI into their classroom teaching and design assignments that make use of AI.

What GAI can do for you and your students

The developers of GAI tell us that tools such as ChatGPT and Bard, also known as "large language models" or LLMs, work by predicting the flow of language. GAI uses publicly available sources to "recognize the relationships that most commonly exist between individual units of meaning (including full or partial words, phrases and sentences)" (Hoffman, 2023). According to ChatGPT (August 2023) itself, this type of artificial intelligence

 ...is designed to generate human-like text based on the input it receives. For this to happen, the model is trained on a massive amount of text data from the internet. It learns to predict the next word in a sentence, which helps it capture grammar, syntax, and some level of world knowledge. The result is a 'pre-trained' model with knowledge of language patterns and facts. After pre-training, the model is fine-tuned on specific tasks using more narrow and focused datasets. This process tailors the model's behavior to perform certain tasks, like language translation, text completion, or question answering. The GPT architecture employs attention mechanisms that allow it to weigh the importance of different words in a sentence and generate coherent and contextually relevant responses. The large number of parameters (weights) in the model, often in the tens or hundreds of billions, enables it to generalize well to various tasks and generate text that is often remarkably coherent and contextually appropriate.

ChatGPT was trained on a large corpus of material; as it was developed, humans interacted with the model, ranking answers it gave so that it could learn which were better responses. Essentially, this model can recognize, summarize, and predict text based on giant data sets (Northwestern University, 2023).
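The "predict the next word" idea described above can be illustrated with a deliberately tiny sketch. Real LLMs use transformer networks with attention and billions of parameters; the toy bigram counter below (all names and the sample corpus are invented for illustration) only records which word most often follows another, but it shows the basic pattern-learning intuition.

```python
from collections import Counter, defaultdict

# Tiny sample "training data" -- a real model is trained on a massive
# corpus of text from the internet, not a single sentence.
corpus = (
    "the model predicts the next word and the model learns patterns "
    "in the training data so the model can generate new text"
).split()

# Count which word follows which: a crude stand-in for learned weights.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word that most frequently follows `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # -> "model" (follows "the" most often here)
```

The gap between this sketch and ChatGPT is, of course, enormous: attention mechanisms let the model weigh every word in the context, not just the immediately preceding one, and fine-tuning with human feedback shapes which continuations are preferred.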

GAI has the potential to improve certain aspects of higher education. For example, Rutter and Mintz (2023) believe that GAI may be able to help students with traditionally under-funded areas of university life, such as career counseling. For administrative work, GAI can act as an assistant, taking notes and providing meeting summaries.

Note: Generative artificial intelligence technology is currently not FERPA-compliant. Using GAI to record students is not recommended.


For course development and classroom work, GAI can be used to help design lessons and assignments and help students produce better work. Some ways that GAI can be used effectively in courses include:

  • Drafting assignment prompts
  • Drafting syllabi
  • Suggesting project topics
  • Generating keywords for library database searching
  • Editing for grammar
  • Outlining a project
  • Solving math problems (when knowing the process is not critical for the student)
  • Coding: providing code snippets, explanations of programming concepts, and debugging assistance

Naturally, any material generated by any form of artificial intelligence should be proofread, personalized, improved, and appropriately cited.
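The coding-assistance item in the list above can be made concrete. A student might paste a small function that crashes and ask a GAI tool to explain the error; a hypothetical exchange of that kind could look like the following sketch (the scenario and function are invented for illustration, not output from any particular tool).

```python
# Hypothetical GAI-style debugging assistance: a student asks why their
# averaging function crashes on an empty list, and the tool points out
# the division by zero and suggests guarding against the empty case.

def average(values):
    """Return the arithmetic mean of `values`, or None for an empty sequence."""
    if not values:  # the suggested fix: handle the empty case explicitly
        return None
    return sum(values) / len(values)

print(average([80, 90, 100]))  # 90.0
print(average([]))             # None instead of ZeroDivisionError
```

As the note above stresses, a student should be able to explain why the fix works, not just paste it in; the explanation the tool provides matters more than the snippet itself.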

See our guide for students' use of GAI to find what we recommend to our students.

Concerns around GAI

Despite its amazing capabilities, there are things that GAI cannot do. Understanding its limitations may help you work confidently with GAI in the classroom. Here are examples of what GAI cannot do:

  • Access Paid Content: Like Google, GAI surfaces content that is popular or supported by advertising first, and it generally cannot access material behind paywalls, which excludes premium library material. GAI can search abstracts and summaries on Google Scholar and publishers' websites, but that type of analysis is likely to be shallow.
  • Mine the Most Current Internet Content: At this time, GAI is a few years behind in the content it can mine; however, this may change quickly.
  • Analyze Newer or Under-Represented Topics: Topics or subjects with minimal content on the Internet will not produce good GAI output.
  • Analyze Certain Sensitive Topics: Some GAI models like ChatGPT are programmed to avoid discussing what they classify as "harmful ideologies." Questions that ask GAI to consider problematic concepts, no matter how carefully they are phrased, may be met with a message to the effect of, "I'm sorry, but I cannot engage in discussions that promote or glorify harmful ideologies." When asked "What harmful or sensitive topics will you refuse to discuss?" ChatGPT (May 2025) responded that the topics it will not engage with include, but are not limited to:
    1. Hate Speech and Discrimination: Any content that promotes hate, violence, or discrimination against individuals or groups based on race, ethnicity, nationality, religion, gender, sexual orientation, disability, or any other characteristic.
    2. Violence and Self-Harm: Discussions that encourage, glorify, or provide methods for self-harm, suicide, or violence toward others.
    3. Illegal Activities: Content that promotes or provides instructions for engaging in illegal activities, including drug manufacturing, hacking, or committing crimes.
    4. Misinformation and Conspiracy Theories: Promotion of false information, conspiracy theories, or debunked claims that could cause harm or spread fear.
    5. Sexual Content and Exploitation: Inappropriate or explicit sexual content, including discussions of sexual exploitation, abuse, or any form of non-consensual acts.
    6. Personal and Sensitive Data: Sharing or requesting personal, private, or sensitive information about individuals, which could violate privacy and data protection guidelines.
    7. Medical, Legal, and Financial Advice: Providing professional-level medical, legal, or financial advice, which should only come from qualified experts.
    8. Graphic Violence or Gore: Descriptions of extreme violence or gore that may be disturbing or distressing to individuals.
    9. Mental Health Crises: While I can provide general information about mental health topics, I do not engage in conversations that could replace professional help or provide direct support during a mental health crisis.
    10. Sensitive Historical Events: Discussions surrounding sensitive historical events, such as genocides or major tragedies, which require careful handling and respect toward affected individuals and communities.

Make sure to also take a look at the Issues with AI page that we've prepared for students.

Beyond that, GAI has created other knowledge divides that contribute to the phenomenon of information privilege:

  • Equity: Who can use these tools most effectively when use is appropriate, or without being detected when use is inappropriate or disallowed? Who can afford to pay for the tools as they become monetized?
  • Data Quality and Accuracy: GAI provides data that it can find by scouring the free Internet, where data sets and statistics may be incomplete, incorrect, or subject to misinterpretation, and these errors may be repeated in a GAI inquiry. Per ChatGPT: "Inequities in data collection can lead to underrepresentation or misrepresentation of certain groups, making it difficult for the models to provide fair and unbiased information or support for those groups" (ChatGPT, September 2023).
  • Algorithmic Bias: GAI favors patterns from the user's own searches, paid content, and inherent bias in the content it mines.
  • Language and Cultural Bias: GAI tends to be biased toward the dominant languages and cultures that are prevalent in the data used to train the program. This can result in less accurate and less comprehensive responses for users from marginalized linguistic or cultural backgrounds (ChatGPT, August 2023).

This work is licensed under CC BY-SA 4.0.