On Wednesday, November 19th, the Center for Teaching Excellence and Innovation (CTEI) hosted a Lunch and Learn featuring faculty panelists discussing the Ethics of
Using AI as an Instructor. Faculty panelists included Dan Ryan, Teaching Professor of Computer Science; Louis Hyman, Dorothy Ross Professor of Political Economy in History and Professor at the SNF Agora Institute; and Steven Gross, Professor and Chair of Philosophy. Caroline Egan, Teaching Academy Program Manager, moderated the discussion.
The session began with the panelists introducing themselves and discussing how they integrate AI into their course content and whether they’ve received feedback, positive or negative, from students. Dan Ryan uses AI “all the time and for everything.” He described how he uses it as a thought partner when building and designing his courses; he has also created tools for students to use, such as a custom GPT that helps them prepare for oral exams or assesses their answers to particular questions. Ryan has always been very open with his students about his use of AI and so far has not received any negative feedback.
Louis Hyman uses AI to varying degrees, depending on the course he is teaching. In his AI methods and history course, his students use AI to teach themselves how to write code and then use the code to analyze historical data. This semester, he is teaching a social theories course that involves much less interaction with AI. However, Hyman encourages his students to use AI where it makes sense, for “boring, tedious, non-creative work,” such as helping with grammar. He frequently uses it this way himself to offload mundane tasks, such as extracting keywords from his lectures to create study handouts or revising dates in his syllabus. He does not support using AI when creative thinking is required, such as when writing a paper. Hyman is encouraged that the students in his class do not seem to be overly reliant on AI.
In his course, Philosophy and AI, Steven Gross and his students spend a lot of time discussing the philosophical underpinnings of artificial intelligence but use ChatGPT sparingly for in-class demonstrations of its capabilities.
Gross expressed concern that students may be tempted to lean on AI, especially in humanities courses that involve a lot of dense reading and written assessments. He designed his course to reduce this temptation, using in-class blue-book exams and discussions to encourage critical thinking without AI. Gross does, however, support the use of AI as a study companion or for help in working through difficult concepts.
Egan continued the discussion by asking panelists what ethical considerations, if any, they addressed when using AI to create instructional materials. The conversation started with the panelists’ own use but broadened to include student use, underscoring how interrelated and complex these questions are. All panelists agreed that AI is a tool like any other, with its own advantages and disadvantages. Hyman is fully transparent with his students and encourages them to use AI where they think it would help, such as interpreting difficult readings. He reminds students that it is not “an oracle” but that they should treat it as “another point of contact” for understanding course material. Ryan holds class discussions about AI’s capabilities and appropriate applications, and he emphasized the importance of being explicit with students about AI usage. Gross agreed on the need for transparency and being upfront about when students can or cannot use it. He also stated that instructors have an obligation to prepare students to use and understand AI.
All panelists agreed that full disclosure about their own use of AI as instructors is essential. Hyman noted the “sacred bond” between teachers and students that must be preserved through mutual trust, which includes being open and honest about the use of AI. Ryan commented that deception of any kind is off-limits: one should no more present AI-generated work as one’s own than present another person’s work as one’s own. He also does not support the use of AI for grading. Gross agreed that using AI for grading is not a good strategy, although he feels this may change in the future for certain narrow applications, such as logic problem sets. For now, Gross said, instructors should err on the side of greater transparency and help students understand “why AI is ok here and not ok there.” The panelists also agreed that it is both reasonable and appropriate to place different limits on AI usage for learners than for instructors, just as we already do with other tools and resources.
An audience member asked the panel to recommend “good practices” for instructors and students using AI. Responses from the panel included:
- Use the “project” feature inside an LLM platform to keep prompts and chats organized. The LLM will build up its knowledge base about your topic as more interactions occur within the project space. (Note: this feature is not available in HopGPT, the Hopkins AI platform.)
- Group interaction idea: Have students invite the LLM to join a study group with them. Before doing a pair-and-share in class, ask students to do a pair-and-share with the LLM as a warm-up.
- Experiment with more than one LLM: ChatGPT, Claude, Google NotebookLM, HopGPT, etc. Ask ChatGPT (or another LLM) what other tools are available and which are best suited to which tasks.
- Get in the habit of asking the model what else it needs from you. The more detail you can provide, the better the responses will be.
- Clearly outline your expectations for students regarding AI usage. Explain why you allow or prohibit it in specific situations and for what purpose.
Another question from the audience asked how panelists handle the range of student reactions and attitudes toward using AI, particularly from students concerned about its environmental impact. Gross said his impression was that the environmental impact of individual users is fairly small; it is the training of the models that has the larger impact. Ryan stated that we need to put this issue into context, and that perhaps it is not as large a problem as it is sometimes made out to be. Hyman described a recent meeting with someone from Baltimore Gas and Electric (BGE) who stated that the price of electricity is going to increase to about ten times its current level in just a few years, due in part to demand from a large data center in Maryland that helps power AI. He acknowledged that this is concerning but argued that it should not be a reason to avoid using AI. He tried to provide context by comparing this issue to other environmental concerns, such as water use on golf courses or the power required for cell phones. Gross agreed and encouraged audience members to consider the consequences of choosing not to use AI, asking whether it could help generate alternative approaches to solving global problems like climate change – and what might be lost if climate scientists were denied access to a tool that could potentially contribute to solutions.
Some audience members pushed back, arguing that AI’s environmental impact is far greater than the other issues mentioned – especially for people living near data centers – and that students should be able to exercise their individual political agency in deciding whether to use AI. One faculty member described having a student who refused to complete an assignment that required AI, so she prepared an alternative assignment for that situation. The panelists agreed that the environmental impact of AI is not trivial and needs to be addressed. Hyman suggested insisting that data centers find ways to use alternative energy sources, such as solar. Ryan and Gross agreed, with Ryan questioning why a few big companies are being allowed to make all of the decisions regarding regulation and arguing for better overall governance of AI use.
Panel members wrapped up by sharing final thoughts with the group. Ryan encouraged everyone to remember that academia already has an established set of principles (honesty, integrity, etc.) that can guide us in answering questions about AI usage. Gross encouraged responsible use of AI, pointing out that we would be failing pedagogically if we all used AI to represent ourselves in our work. Hyman echoed this, expressing the need for intentional educational experiences with students while also recognizing the value of AI in certain situations.
Amy Brusini, Senior Instructional Designer
Center for Teaching Excellence and Innovation
Image source: Lunch and Learn logo, Unsplash