Panel Discussion: “Teaching and Learning in the Age of Chatbots and Artificial Intelligence”

On April 4th, the Center for Teaching Excellence and Innovation hosted “Teaching and Learning in the Age of Chatbots and Artificial Intelligence,” a panel discussion on the implications of artificial intelligence in Hopkins classrooms. The discussion, open to attendees from all schools and divisions at Hopkins, yielded insights into the opportunities and limitations of chatbots, particularly ChatGPT; identified ways to frame their pedagogical uses for students and faculty; and offered guidance for integrating them into classrooms.

The five-person panel consisted of Victoria Harms, DAAD Visiting Assistant Professor, History; Austin Heath, PhD Candidate, Philosophy; Mike Todasco, MFA student, Writing Seminars and former PayPal executive; and Opal Sitzman and Timothy Huang, first-year students taking the Reintroduction to Writing seminar with Alex Lewis, a Post-Doctoral Fellow in the University Writing Program who is using ChatGPT in his courses.

The discussion produced several incisive observations about chatbots and their role in higher education classrooms.

Here is a summary of the main points:

  • Teaching and learning: There was broad consensus that instructors should engage in active inquiry into artificial intelligence (AI) with their students, leveraging the tool to help students think critically about evaluating texts and their accuracy, and about a chatbot’s opportunities and limitations as a source, creator, and partner in their work.
  • A metacognitive tool: Both instructors and students said one of the best ways to use ChatGPT is as a tool to help students think about their learning and knowledge, from helping to improve writing to assessing the substance of texts.
  • Academic integrity: Panelists agreed that the written work produced by ChatGPT fell below the standard for a finished product; it could be inaccurate, incorrect, and overly broad.
  • Academic integrity and assessments: One student urged faculty to identify the core issues driving the need for assessment and to use those ideas to motivate students to produce original work. This approach to assessment design contrasts with more mechanical, easily plagiarized assignments.
  • The students taught the faculty: Opal and Tim offered a wealth of guidance to faculty, including recommended readings, results from their individual research projects, and thoughts on assessment design.

And words of wisdom from some of the panelists:

  • Austin Heath urged attendees to conceptualize ChatGPT as “a tool [of] inquiry vs. a received text or received piece” of truth.
  • Opal Sitzman warned against a “tendency to overestimate ChatGPT’s current prowess.”
  • Mike Todasco compared ChatGPT’s current capabilities to “mansplaining,” with all the attendant drawbacks of the term.

Tim and Opal kicked off the conversation, describing the ways that students are using AI technology. Opal assured people that AI is not a “nefarious actor” in student lives: “In general, students like playing around with it like writing a Seinfeld episode, but it’s used more for inspiration than cheating.” Tim said, “You can use it to create the first draft of a paper,” and he’s using it as a self-tutoring tool “to adjust how I write.” Mike, in his MFA classes, used it “to be the voice of a computer in a story I was writing. The key is to always acknowledge it.”

Austin and Victoria discussed how they are guiding students to use and think about artificial intelligence. Austin thinks of chatbots “as a student’s student,” a way for students to learn how to evaluate and critique writing. He gives students output from a chatbot explaining a concept and invites them to grade it and offer suggestions for improvement. In Victoria’s class on Europe since 1945, she asked the chatbot, “Why did the Soviet Union collapse?” Her students critiqued the answer for “accuracy and substance,” an exercise that taught “students that they know something, too.” She urged the audience “to teach students to be critical digesters of information.”

The panelists also weighed in on how their subject matter expertise influenced the way they used and thought about artificial intelligence. Mike, who has been writing about AI for a while, said, “I felt like a Cassandra in that no one was listening and now everyone is talking about it.” He added, “People who don’t have access to JHU resources can use it to learn […] the more people use it – not just for teaching, but for life – will help us learn.” Victoria teaches her students “to fact check results, like I do with Wikipedia. We need to integrate these tools into our assessments so they will use them appropriately.”

Opal, who’s interested in neuroscience, wrote a paper considering whether AI is conscious. Her verdict: “[I]t’s still much more simple than our brain,” but, importantly, “it helps us understand the concept of consciousness even if it isn’t conscious itself.” Austin, as a philosopher, applauded Opal’s interest in consciousness before explaining his own interest in “generat[ing] alternative thoughts about writing and giving credit,” saying, “I’m interested in exploring what it means to give attribution. Did a student write this work? Or did AI write this? Or did students work with AI to write this?”

When queried about chatbots and academic integrity, the panelists mostly talked about their limitations as easily accessible cheating tools. Opal said, “ChatGPT has a bad reputation for helping students cheat, but people overestimate its abilities. You still have to do a lot of work that requires critical thinking when using it because it doesn’t produce sophisticated results. It might help with a basic prompt.” Mike and Victoria echoed Opal’s opinion. Mike said, “If you were teaching middle schoolers, you might be concerned with cheating,” though he went on to add, “That said, the future version will get better.” Victoria added, “The pandemic taught us that not all students are excited about technology or are tech savvy.”

When Kwame Kutton, a Lecturer in Biomedical Engineering, raised a question about students using ChatGPT to plagiarize code in a computing course, Tim, currently enrolled in a computer science course himself, offered a thoughtful response: “In BME there are unique opportunities to write code that saves lives. Therefore, students need to tackle the core issue to solve before they even write code. We want faculty to teach us how to think about the logic of the problem, not just writing code.” His comment encouraged instructors to think deeply about first framing and identifying the problem for students, which will help motivate them to produce original and independent work.

Mike offered another perspective: “I don’t know any programmer who doesn’t use Copilot,” GitHub’s AI-powered assistant that suggests code as developers write. “My analogy is calculators,” he said. “You need to know how to do math without a calculator, but once you are doing the calculations after setting up the problem, you should use a calculator to help solve the problem.”

A question from the audience about languages, accents, and ChatGPT turned the discussion to issues of accessibility and political bias. Tim saw one of his friends use the chatbot to translate English to Japanese, and he then used it himself to translate a Spanish article he was familiar with. His opinion: “It does a better job than Google Translate,” though “there are lots of metaphors that get lost in translation by these tools.”

Mike then gave two examples of how ChatGPT is providing access and support to people with disabilities. He said, “ChatGPT 4 is available, but they haven’t released the picture-to-text feature that exists yet. They shared video of someone with visual impairment using ChatGPT 4 to learn what was in the fridge using their phone. It will be able to do amazing things in the future to help us.” He went on to describe a friend’s acquaintance in San Francisco, the owner of a lawn-care business, who struggled to communicate via email. The owner now uses ChatGPT “to help polish his emails,” thus improving his client relationships.

Opal talked about how ChatGPT struggles with dialects, which turned the conversation to political bias. She is using ChatGPT to write a short story “in the style of Kate Chopin,” a 19th-century American writer known for her depictions of Louisiana Creole culture. Opal said, “[Chopin] used a lot of Louisiana dialect” and ChatGPT “struggles” with this because it “is filtered so it doesn’t mimic the racist language used during that time.” She noted that people have found ChatGPT to be “an establishment liberal” in its political biases. Victoria brought up “issues of bias in Silicon Valley” and wondered how ChatGPT would address Critical Race Theory (CRT). Mike asked ChatGPT whether we should ban CRT and pasted its response into the Zoom chat:

As an AI language model, I don’t have personal opinions. However, I can provide you with an analysis. Whether Critical Race Theory (CRT) should be banned in schools is a subjective question, often debated among educators, policymakers, and parents. Supporters argue that CRT promotes understanding of systemic racism and its impacts, while opponents believe it can be divisive and foster racial animosity. Ultimately, the decision to include or exclude CRT in schools depends on the goals and values of the educational community involved.[1]

The conversation ended with speculation about how quickly ChatGPT would progress. Mike said, “The current GPT-4 has been remarkable. I’ve written fiction in each version and I’d say it’s getting two grade levels better in each version.” Opal also weighed in: “It will be quick, but I’m not wary yet. We need to keep considering these questions, but I think it’s less something to be scared of and more something to utilize. I don’t see anything being more powerful than humans in the near future.”

[1] OpenAI. (2023). ChatGPT (Apr 4 version) [Large language model]. https://chat.openai.com/

Caroline Egan
Caroline Egan is a Project Manager in the Center for Teaching Excellence and Innovation, supporting instructional training and development for Hopkins faculty, graduate students, post-doctoral fellows, and staff.

Image source: Pixabay, Unsplash