Faculty and students at Lehigh are focused on artificial intelligence (AI) and its place in university classrooms. AI as an aid to productivity. AI as a coding assistant and teaching consultant. AI for improvisation and shared humor. And a harder question: is students’ use of AI incompatible with academic integrity?
This web page on AI and approaches to teaching and learning will:
(1) help faculty talk with students and set policies regarding the use of AI; and
(2) aid faculty in the design of assignments and assessments related to AI.
Further reading and resources
You can continue your study with Lehigh-specific Resources on generative AI for Fall 2023 and guides on Generative Artificial Intelligence (AI), Resources for directed learning about AI, and AI Tools and Library Research.
What is Generative AI?
In a few words, “AI can be defined as ‘automation based on associations.’”1 In longer form: “AI is a branch of computer science. AI systems use hardware, algorithms, and data to create ‘intelligence’ to do things like make decisions, discover patterns, and perform some sort of action.”2 Generative AI refers to AI systems that produce text and images, among other possibilities.
What is ChatGPT?
ChatGPT is an AI-powered text generator. It is also a chatbot and an application on the Internet. ChatGPT is powered by a large language model, or LLM. At their inception, LLMs depend on text mining and/or web scraping to build a corpus for study by a computer. An LLM then learns based on rules, or algorithms, set by developers, as well as training by humans.
LLMs “learn a probability distribution of the next word/pixel/value in a sequence.”3 This means that they do not output whole sentences from a text-mined foundation based on a user's prompt. Instead, LLMs and their chatbot intermediaries build outputs word by word, based on machine learning. This is one reason why proper citation becomes an issue for people who rely on text generators. Outputs from ChatGPT are probable and plausible, but they do not draw from specific, reliable sources. This is also why text generators make up references, or hallucinate, even when a user asks for citations within a prompt.4
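The word-by-word process described above can be illustrated with a toy sketch. This is not how a production LLM works (real models compute a distribution over an entire vocabulary using a neural network); the hand-written probabilities and the `generate` function below are hypothetical, chosen only to show sampling the next word from a probability distribution:

```python
import random

# Toy next-word probabilities (illustrative numbers, not from a real model).
# A real LLM computes a distribution like this over its whole vocabulary
# at every step, conditioned on everything generated so far.
next_word_probs = {
    ("the", "cat"): {"sat": 0.5, "ran": 0.3, "slept": 0.2},
    ("cat", "sat"): {"on": 0.7, "down": 0.3},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.6, "rug": 0.4},
}

def generate(prompt, steps, rng=None):
    """Build output word by word by sampling from each step's distribution."""
    rng = rng or random.Random(0)
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])          # last two words as context
        dist = next_word_probs.get(context)
        if dist is None:                     # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the cat", steps=4))
```

Because each word is drawn from a probability distribution rather than copied from a source document, the output can be fluent yet have no traceable source, which is the mechanism behind hallucinated references.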
The impact of AI
In a recent critical literature review, researchers identified a discourse of imperative change surrounding AI in higher education.5 But that imperative should be critiqued and analyzed in specific scenarios of teaching and learning. Faculty responsibilities are wide and faculty bandwidth is limited. Take time to consider the impact of AI on your course and field of study. Know that there will be many opportunities for change in semesters and years to come.
Speaking to students about AI
Break the ice
Use a quote, image, multimedia, the news, or a movie reference to break the ice of a difficult (and technical!) issue such as the appropriate use of AI-powered tools in teaching and learning.6 Ask questions to open up a conversation about students’ use of AI. Are they using AI-powered tools? If not, why?7 If so, how?8 Aim to create a community of inquiry in which you learn from students and students learn from you.
Trust your students
One reason to introduce AI through a movie reference or news coverage is to pique students’ interest while addressing a pressing issue. Thinking about movies helps us identify anxieties related to AI and realize that, pedagogically, a productive starting place is to trust your students.
Summarize your policies
Set aside time early in a course to clarify your policies related to AI. At a time of technological change, such policies will be uneven from a student perspective and across their course load. Some faculty will prohibit the use of AI in their classrooms. Others will make AI a constitutive part of the learning environment they create. Consider adding a policy on AI to your course syllabus. An LU Syllabus Template developed by Greg Reihman provides a few examples. Feel free to select one – or modify the language as you see fit.9
If you are considering the use of AI detection software, read this Lehigh guide on the use of the Turnitin AI Detection feature.10 Also have your students review the Undergraduate and Graduate Student Senate Statements on Academic Integrity and Article III of Lehigh's Code of Conduct.
Foreground discussions of equity and algorithmic biases in your first discussions with students about AI. One example of algorithmic biases within AI systems involves police departments that use facial recognition programs to compare surveillance footage to a database of potential suspects. Algorithmic decision-making is also easily polluted by the racial and gender biases embedded in our society, which are reflected in large training sets and the actions of developers.11 Gender biases – Who wasn't working hard enough? Who was late? – are prominent in these responses to prompts in ChatGPT.12
Explore AI tools together
You will be better prepared to lead students through a critical use of ChatGPT and other AI tools with a general understanding of how text generators are built and operate. You can also practice prompting, together. What is ChatGPT good at? Where does ChatGPT struggle? Have students fact-check and refine outputs, identify biases, and compete with ChatGPT on responses to course-related queries.
Consider students' professional development
The current conversation around AI is market-driven. According to a recent report from Stanford University, “The demand for AI-related professional skills is increasing across virtually every American industrial sector.”13 A contribution of a Lehigh education, however, is to consider the ethical use of AI alongside its possible adoption in specific disciplines, to track misuse, and to imagine a better future shaped by AI.
Follow the news
One way to stay up-to-date with the ever-changing landscape of AI is with a newsreel created by Future Tools. The AIAAIC also keeps a list of “AI, algorithmic, and automation incidents and controversies.”
The Next Generation
AI-powered tools generate prose, poems, music lyrics, and code. AI for image and video generation is already here. So what's next? Chart a path for the future of AI alongside developers and policymakers, one that considers intellectual property and copyright, ethical labor practices, public safety without over-policing, and a cleaner planet. In the words of Jean-Luc Picard: Engage!
7. According to a recent study, “Only 35% of sampled Americans (among the lowest of surveyed countries) agreed that products and services using AI had more benefits than drawbacks.” Nestor Maslej, et al., “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. Source.
8. A survey of 399 undergraduate and postgraduate students from various disciplines in Hong Kong revealed a generally positive attitude towards GenAI in teaching and learning. Cecilia Ka Yuk Chan and Wenjie Hu, “Students' voices on generative AI: perceptions, benefits, and challenges in higher education,” International Journal of Educational Technology in Higher Education 20, no. 43 (2023): 1-18. Source.
9. For more examples of language you might borrow for your syllabus, see: “Classroom Policies for AI Generative Tools,” a crowdsourced Google Doc organized by Lance Eaton, the Director of Digital Pedagogy at College Unbound in Providence, RI; and Boris Steipe et al., The Sentient Syllabus Project: Charting a Course for the Academy in an Era of Synthesized Thought, founded December 2022. Source. The project's website, print materials, and Substack include guides for understanding AI issues, sample text for a syllabus, and course activities involving AI.
10. Also see: Rhiannon Williams, “AI-text detection tools are really easy to fool,” MIT Technology Review, published July 7, 2023, source; and Andrew Myers, “AI-Detectors Biased Against Non-Native English Writers,” Stanford University Human-Centered Artificial Intelligence blog, published May 15, 2023, source.
11. See the case of Robert Williams and the Detroit Police Department from January 2020. Kashmir Hill, “Wrongfully Accused by an Algorithm,” The New York Times, June 24, 2020. Source. A National Institute of Standards and Technology Interagency or Internal Report from December 2019 found that "With domestic law enforcement images, the highest false positives are in American Indians, with elevated rates in African American and Asian populations; the relative ordering depends on sex and varies with algorithm. We found false positives to be higher in women than men, and this is consistent across algorithms and datasets. This effect is smaller than that due to race." For details of the corpus of images used by the NIST in their study, see: Patrick Grother, Mei Ngan, and Kayee Hanaoka, “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects,” National Institute of Standards and Technology Interagency or Internal Report, published December 2019, 2. Source.
13. Nestor Maslej, et al., “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023. Source. According to the same report, “Global AI private investment was $91.9 billion in 2022, which represented a 26.7% decrease since 2021.”