On Thursday, February 19th, the Center for Teaching Excellence and Innovation (CTEI) hosted a Lunch and Learn featuring two faculty members discussing Practical AI Pedagogies. Emily Fisher, Director of Undergraduate Studies and Associate Teaching
Professor of Biology, and Illysa Izenberg, Associate Teaching Professor in JHU’s Center for Leadership Education, presented two approaches to integrating AI into their courses. Caroline Egan, Teaching Academy Program Manager, moderated the discussion.
Fisher began the presentation by describing how she and co-instructors Nichole Broderick and Chiara De Luca redesigned an exam preparation assignment, dubbed the “AI Study Buddy,” to intentionally incorporate generative AI in their Molecular Biology course. Their goal was not to outsource thinking to AI but to foster metacognition: they wanted students to appreciate the role of retrieval in learning, view exam prep as a learning opportunity, and practice self-directed learning and studying.
In previous iterations of this assignment (2020–2024), students were asked to write and answer their own exam questions without AI. While this exercise was useful, it was unclear how many students had improperly used AI to complete their work, and Fisher noted that many of the student-written questions were of low quality.
In 2025, the instructors reframed the activity: instead of avoiding the use of AI, they integrated it deliberately. Students were asked to use HopGPT (GPT-4o mini) to complete the following steps:
- Prompt AI to generate a challenging exam question about molecular biology (without providing the answer).
- Answer the AI-generated question themselves.
- Ask AI to answer the same question.
- Evaluate the AI’s response for:
  - Accuracy
  - Similarity to their own answer
  - Alignment with the style of course exams
  - Overall usefulness as a study tool
The assignment was worth 2 points out of a total of 106 for the course. A non-AI alternative was available for students who preferred not to use generative AI. (Two students selected this option.)
Overall, students rated the AI answers as highly accurate, though their own answers often differed from the AI’s. Questions and answers were often very broad, whereas the course content is much more detailed and covers experimental techniques that the AI responses omitted. Still, student feedback was mostly positive, rating the exercise about “medium” in usefulness for studying. Fisher added that it was a valuable exercise, shifting the focus away from writing exam questions and toward answering, analyzing, and critically evaluating content.
There were a few questions from the audience about the AI Study Buddy:
Q: Are there any data or future plans to compare these results with having students study together, asking each other questions?
EF: Not at the moment. This would be a challenge. I would have to trust that they weren’t using AI.
Q: Did students do better on the exam after doing this exercise?
EF: Students usually do well on exams in this class. I did not look to compare.
Q: Have you thought about asking HopGPT for additional topics to study? For example, telling students, “Here is what we think you should focus on; how does that compare with what AI recommends?”
EF: This is a good idea. Students are telling us they use AI to help them study. I guess I just wanted them to wonder how well it was helping.
Q: What about equitable student access to AI platforms?
EF: That’s why I used HopGPT, since all students have access to it. If I did it again, I would select Claude.
Q: Next time, would you keep it the same or expand it? Also, next time would you continue to focus on the answers?
EF: I thought it was useful the way it went. I would keep it. I love that there are ways to go deeper, but this is the right number of points for this class.
Illysa Izenberg continued the presentation by sharing her approach to using AI in her Engineering Management and Leadership course. Like many instructors, she wants to help students use AI tools productively without becoming overly dependent on them. Izenberg believes in Total AI Transparency between instructors and students and provides very clear guidance about AI use. She uses color-coded symbols to show when and how students may use AI, and she requires them to cite all sources (including AI tools), giving detailed instructions on proper citation.
Izenberg has developed a unique framework that encourages students to view AI as “teammates” they can rely on for different types of support, depending on the task. Each teammate has a clearly defined role:
- The Tasker handles repetitive work like formatting citations, organizing notes, or cleaning data — tasks that are routine but still require students to verify accuracy.
- The Draftsmith can help refine writing, generate study materials, or suggest improvements. Students need discipline when using this role, as allowing AI to draft too much early on can lead to missed learning opportunities. For example, Izenberg notes that students need to develop their own writer’s voice, which can be severely undermined if they rely on AI to draft everything for them.
- The Facilitator acts as a thinking partner, asking questions that help the student consider alternatives, evaluate plans, and expand and sharpen their analyses. This role also requires discipline so that learning is not undermined. When used thoughtfully, this role can promote reflection and deeper understanding.
In addition to the AI teammates framework, Izenberg also introduced students to an “AI Gatekeeper,” a tool similar to a rubric that guides students in regulating their use of AI. The AI Gatekeeper asks students to first define their own criteria for using AI and then rate each task on a scale from 1 to 5 to decide whether to proceed (1 = no AI use, 5 = AI use is appropriate). Izenberg recommends a rating of at least “3” for students to consider using AI for assistance. Example student criteria and ratings:
- Practice and expertise:
  - 1 if this task gives me essential practice in a skill I’ll need later.
  - 5 if I’ve done it many times and already have the needed expertise.
- Creativity vs. rote work:
  - 1 if the task requires creativity or original thought.
  - 5 if it is mostly rote.
Students were not required to use AI or these tools in class, but the tools were available to those who chose to use them. According to Izenberg, students who used them realized that they knew less about AI than they initially thought and learned a great deal in the process: to push themselves when articulating their reasoning, to avoid the temptation of letting AI make decisions for them, and to resist using it too early so that their creativity would not be compromised. Several students found the tools extremely helpful and suggested that they be made available and integrated across all of their courses.
The presentation concluded with additional questions from the audience and facilitator Caroline Egan:
Q: Given that the paid LLMs seem to do a better job, what are the equity issues around giving students GPTs and prompts to use for coursework?
II: This is something we need to acknowledge. First, no assignment required AI use, and students could succeed in the course without it. For those who chose to use it, many didn’t realize that HopGPT can access various ChatGPT models as well as Claude and others. I tried to mitigate the equity issues by either giving them prompts that they could edit and use in HopGPT or giving them access to GPTs I created for my engineering students working on senior design. Every time I create a GPT for students, I hire 2–3 people to try it out at least 3 times, and at least one of those times has to be with the free version of ChatGPT. I made up a rubric to evaluate GPTs so that, before I launch one, I can see where it might not work well for someone with a free account and edit it. For now, that’s the best I can do, I think. I’m going to be working on porting my GPTs over to another tool during the next few months in the hopes of resolving this and other issues.
Q: Maybe [the AI Gatekeeper] could be addressed as part of the introductions for all classes.
II: I love that idea. Students are harming themselves [with their unregulated approach to using AI] and they don’t know it.
CE: A question for Emily and Illysa: both of you have had students critically reflect on the use of AI. What would you recommend for faculty who haven’t done that yet?
II: It is important to decide for yourself and your course where the lines are. Ask yourself, what do students need to be able to do? From there, draw the lines. Then talk to the students about what happens if they go outside of the lines.
EF: I agree. In another course, I showed students data from a paper about what happens when you use AI summaries vs. a web search. Participants in the experiment learned less overall with the AI summaries.
II: Also, in my opinion, students are overloaded with classes, credits, and extracurriculars. It is costing them the ability to reflect and learn. So we have to help them and ourselves by planning ahead on what we want to turn to AI for and what we want to do on our own. Without clear guidance from faculty, students are never sure if they are doing something they should hide. This harms trust. Make the lines clear, relate them to your learning outcomes, and then teach them Total AI Transparency. You don’t have to hide what you’re allowed to do!
CE: Emily, the case study you brought up was on metacognition; it gave students an opportunity to practice it. Illysa, did you also do this?
II: Yes – anytime students used AI, they had to explain why they chose to use it, why they made those choices, and why they used or didn’t use the output. They had to evaluate the output compared to their original goal. They weren’t just doing it, they were explaining why and how. The point is to teach them discernment.
Amy Brusini, Senior Instructional Designer
Center for Teaching Excellence and Innovation
Image source: Lunch and Learn logo, Unsplash



