Panel Discussion: “Teaching and Learning in the Age of Chatbots and Artificial Intelligence”

On April 4th, the Center for Teaching Excellence and Innovation hosted “Teaching and Learning in the Age of Chatbots and Artificial Intelligence,” a panel discussion on the implications of artificial intelligence in Hopkins classrooms. The discussion, open to attendees from all schools and divisions at Hopkins, yielded insights into the opportunities and limitations of chatbots, particularly ChatGPT; identified ways to frame their pedagogical uses for students and faculty; and offered guidance for integrating them into classrooms.

The five-person panel consisted of Victoria Harms, DAAD Visiting Assistant Professor, History; Austin Heath, PhD Candidate, Philosophy; Mike Todasco, MFA student in the Writing Seminars and former PayPal executive; and Opal Sitzman and Timothy Huang, first-year students taking the Reintroduction to Writing seminar with Alex Lewis, a Post-Doctoral Fellow in the University Writing Program who is using ChatGPT in his courses.

The discussion produced several incisive observations about chatbots and their role in higher education classrooms.

Here is a summary of the main points:

  • Teaching and learning: There was broad consensus that instructors should engage in active inquiry into artificial intelligence (AI) with their students and leverage the tool to help students think critically about evaluating texts and their accuracy, and about a chatbot’s opportunities and limitations as a source, creator, and partner in their work.
  • A metacognitive tool: Both instructors and students said one of the best ways to use ChatGPT is as a tool that helps students think about their learning and knowledge, from improving their writing to assessing the substance of texts.
  • Academic Integrity: Panelists agreed that the written work produced by ChatGPT fell below the standards for a finished product; it could be inaccurate and overly broad.
  • Academic Integrity and Assessments: One student urged faculty to identify the core issues driving the need for assessment and to use those ideas to motivate students to produce original work. This approach to assessment design contrasts with more mechanical, easily plagiarized assignments.
  • The students were teaching the faculty: Opal and Tim provided a wealth of guidance to faculty, including recommended readings, results from their individual research projects, and thoughts on assessment design.

And words of wisdom from some of the panelists:

  • Austin Heath urged attendees to conceptualize ChatGPT as “a tool [of] inquiry vs. a received text or received piece” of truth.
  • Opal Sitzman warned against a “tend[ency] to overestimate ChatGPT’s current prowess.”
  • Mike Todasco compared ChatGPT’s current capabilities to “mansplaining,” with all the attendant drawbacks of the term.

Tim and Opal kicked off the conversation, describing the ways that students are using AI technology. Opal assured people that AI is not a “nefarious actor” in students’ lives: “In general, students like playing around with it like writing a Seinfeld episode, but it’s used more for inspiration than cheating.” Tim said, “You can use it to create the first draft of a paper,” and he’s using it as a self-tutoring tool “to adjust how I write.” Mike, in his MFA classes, used it “to be the voice of a computer in a story I was writing. The key is to always acknowledge it.”

Austin and Victoria discussed how they are guiding students to use and think about artificial intelligence. Austin thinks of chatbots “as a student’s student,” a way for students to learn how to evaluate and critique writing. He gives students output from a chatbot explaining a concept and invites them to grade it and offer suggestions for improvement. In her class on Europe since 1945, Victoria asked the chatbot, “Why did the Soviet Union collapse?” Her students critiqued the answer for “accuracy and substance,” which taught “students that they know something, too.” She urged the audience “to teach students to be critical digesters of information.”

The panelists also weighed in on how their subject matter expertise influenced the way they used and thought about artificial intelligence. Mike, who has been writing about AI for a while, said, “I felt like a Cassandra in that no one was listening and now everyone is talking about it.” He then talked about how “People who don’t have access to JHU resources can use it to learn […] the more people use it – not just for teaching, but for life – will help us learn.” Victoria teaches her students “to fact check results, like I do with Wikipedia. We need to integrate these tools into our assessments so they will use them appropriately.”

Opal, who’s interested in neuroscience, wrote a paper considering whether AI is conscious. Her verdict: “[I]t’s still much more simple than our brain,” but, importantly, “it helps us understand the concept of consciousness even if it isn’t conscious itself.” Austin, as a philosopher, applauded Opal’s interest in consciousness before explaining his own interest in “generat[ing] alternative thoughts about writing and giving credit,” saying, “I’m interested in exploring what it means to give attribution. Did a student write this work? Or did AI write this? Or did students work with AI to write this?”

When queried about chatbots and academic integrity, the panelists mostly talked about ChatGPT’s limitations as an easily accessible cheating tool. Opal said, “ChatGPT has a bad reputation for helping students cheat, but people overestimate its abilities. You still have to do a lot of work that requires critical thinking when using it because it doesn’t produce sophisticated results. It might help with a basic prompt.” Mike and Victoria echoed Opal’s opinion. Mike said, “If you were teaching middle schoolers, you might be concerned with cheating,” though he went on to add, “That said, the future version will get better.” Victoria added, “The pandemic taught us that not all students are excited about technology or are tech savvy.”

When Kwame Kutton, a Lecturer in Biomedical Engineering, raised a question about students using ChatGPT to plagiarize code in a computing course, Tim offered a thoughtful response. Currently enrolled in a computer science course himself, Tim said, “In BME there are unique opportunities to write code that saves lives. Therefore, students need to tackle the core issue to solve before they even write code. We want faculty to teach us how to think about the logic of the problem, not just writing code.” His comment encouraged instructors to think deeply about first framing and identifying the problem for students, which will help motivate them to produce original and independent work.

Mike offered another perspective: “I don’t know any programmer who doesn’t use Copilot,” GitHub’s AI-powered assistant that suggests code as developers type. “My analogy is calculators,” he said. “You need to know how to do math without a calculator, but once you are doing the calculations after setting up the problem, you should use a calculator to help solve the problem.”

A question from the audience about languages, accents, and ChatGPT turned the discussion to issues of accessibility and political bias. Tim saw one of his friends using the chatbot to translate English to Japanese and then used it himself to translate a Spanish article he was familiar with. His opinion: “It does a better job than Google Translate,” though “there are lots of metaphors that get lost in translation by these tools.”

Mike then gave two examples of how ChatGPT is providing access and support to people with disabilities. He said, “ChatGPT 4 is available, but they haven’t released the picture-to-text feature that exists yet. They shared video of someone with visual impairment using ChatGPT 4 to learn what was in the fridge using their phone. It will be able to do amazing things in the future to help us.” He went on to talk about a friend of his who knew someone in San Francisco with a lawn-care business who struggled to communicate via email. The owner of the business now uses ChatGPT “to help polish his emails,” thus improving his client relationships.

Opal talked about how ChatGPT struggles with dialects, which turned the conversation to political bias. She used ChatGPT to write a short story “in the style of Kate Chopin,” a 19th-century American writer known for writing about Louisiana Creole culture. Opal said, “[Chopin] used a lot of Louisiana dialect” and ChatGPT “struggles” with this because it “is filtered so it doesn’t mimic the racist language used during that time.” She said that people have found ChatGPT to be “an establishment liberal” in its political biases. Victoria brought up “issues of bias in Silicon Valley” and wondered how ChatGPT would address Critical Race Theory (CRT). Mike decided to ask ChatGPT whether we should ban CRT and pasted its response into the Zoom chat:

As an AI language model, I don’t have personal opinions. However, I can provide you with an analysis. Whether Critical Race Theory (CRT) should be banned in schools is a subjective question, often debated among educators, policymakers, and parents. Supporters argue that CRT promotes understanding of systemic racism and its impacts, while opponents believe it can be divisive and foster racial animosity. Ultimately, the decision to include or exclude CRT in schools depends on the goals and values of the educational community involved.[1]

The conversation ended with speculation about how quickly ChatGPT would progress. Mike said, “The current GPT-4 has been remarkable. I’ve written fiction in each version and I’d say it’s getting two grade levels better in each version.” Opal also weighed in: “It will be quick, but I’m not wary yet. We need to keep considering these questions, but I think it’s less something to be scared of and more something to utilize. I don’t see anything being more powerful than humans in the near future.”

[1] OpenAI. (2023). ChatGPT (Apr 4 version) [Large language model]. https://chat.openai.com/

Caroline Egan
Caroline Egan is a Project Manager in the Center for Teaching Excellence and Innovation, supporting instructional training and development for Hopkins faculty, graduate students, post-doctoral fellows, and staff.

ChatGPT: A Brief Introduction and Considerations for Academic Integrity

I’ve been reading about the potential impact of artificial intelligence (AI) on teaching and learning for some time. A close family friend gave me a book entitled In the Mind of the Machine by Kevin Warwick in 1998. The Education Horizon Report Advisory Committee, of which I was a member, first listed artificial intelligence in 2017 as an emerging technology likely to have an impact on learning, teaching, and creative inquiry in education. November 2022 brought the long-anticipated arrival of the ChatGPT beta, with accompanying media attention.

What are OpenAI and ChatGPT?

OpenAI is an artificial intelligence research lab. OpenAI developed a chatbot called ChatGPT (GPT = Generative Pre-trained Transformer) and an image-generation application, DALL-E 2. ChatGPT is a natural language processing model trained on hundreds of billions of words drawn from documents and websites. Users interact by asking it questions or submitting statements to which it generates responses. For example, here is ChatGPT answering a question about itself:

Prompt: What is ChatGPT?
“ChatGPT is a large language model developed by OpenAI. It is trained on a diverse range of internet text and is able to generate human-like text in response to various prompts. The model can be fine-tuned for various natural language processing tasks such as language translation, question answering, and conversation.”
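
For instructors curious about what this interaction looks like under the hood, here is a minimal sketch of querying the same model through OpenAI’s Python library, using its early-2023 interface; the model name, placeholder key, and single-question prompt are illustrative assumptions, and running it requires an OpenAI account and API key:

    # Minimal sketch: querying ChatGPT programmatically with the openai
    # Python package (early-2023 interface). Assumes `pip install openai`
    # and an API key from your OpenAI account.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; substitute your own key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model behind ChatGPT at its launch
        messages=[{"role": "user", "content": "What is ChatGPT?"}],
    )

    # The generated reply is in the first choice's message content.
    print(response["choices"][0]["message"]["content"])

Note that the same prompt, whether typed into the web interface or sent through the API, can produce different wording from one run to the next; the model generates a fresh response each time.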

While ChatGPT received most of the media attention in winter 2022–23, other chatbots exist, such as Jasper and Chinchilla.

What are the main concerns?

The main concern for instructors is students asking OpenAI applications to complete assignments for them. This includes writing essays or research papers, as well as completing the kinds of coding assignments on which ChatGPT was trained. Students can also ask ChatGPT to answer test questions.

Things to Consider

While the capabilities of artificial intelligence applications will continue to evolve, there are currently some limitations. For example, the training data for current models does not include articles behind paywalls (e.g., subscription journals), which makes it harder for students to generate essays based on peer-reviewed research. And while the models are trained on a large number of documents, the applications’ responses to specific, focused inquiries tend to be vague. My colleagues and I asked ChatGPT to write a strategic plan for the Center for Teaching Excellence and Innovation. It suggested relevant ideas, but the result was generic and too broad to be useful. That said, we could have used it as a starting point for brainstorming a draft.

Some applications, like Turnitin, claim they can detect whether students used ChatGPT, but like any technology, these applications are not perfect, and students can work around them (e.g., by editing the essay produced to bring it closer to their own writing style).

Academic Integrity

Use of OpenAI applications can fall under academic integrity policies like plagiarism, but the gray zone between clearly plagiarized work and an academic support tool is large. For example, most instructors would consider it plagiarism for students to ask ChatGPT to write a paper based on a writing prompt from class. But is it acceptable for students to ask ChatGPT for a summary of research on a topic, which they then use to generate a bibliography as the basis for a research paper they write? Instructors should learn more about how ChatGPT and other AI technologies work so they can tell students what is considered appropriate use and what is not. Here are additional strategies to consider to help you and your students navigate this new territory:

  • Scaffold the activity by asking students to turn in an outline and iterative drafts that address comments and feedback from the instructor or teaching assistants. This requires students to show progression in a way that is difficult for tools like ChatGPT to produce.
  • Ask students to write papers in a shared Microsoft Word document on OneDrive so you can see the version history.
  • Use writing prompts that are more specific or require students to cite specific texts.
  • Use AI tools to teach students. For example, generate essays in ChatGPT and have students critique them.
  • Discuss with students what is considered acceptable use of AI technologies (e.g., generating a summary of a field) and what is not (e.g., responding to a specific assignment prompt).

A colleague also commented that as we engage with ChatGPT and other AI technologies, we are feeding them data they can use to improve their models. Under the terms of agreement accepted when accounts are created, the companies own these submissions. Explain to students that they may be handing over their intellectual property when they use these tools. If they submit your tests for ChatGPT to answer, they may be violating your intellectual property rights.

Where to Learn More

We are all orienting ourselves to this new technology and best practices are evolving. The CTEI will continue to share more information and host discussions over the semester.

Mike Reese
Mike Reese is Associate Dean of the Center for Teaching Excellence and Innovation and associate teaching professor in Sociology.
