Lunch and Learn: Generative AI Uses in the Classroom

On Tuesday, April 23rd, the Center for Teaching Excellence and Innovation (CTEI) hosted a Lunch and Learn on Generative AI Uses in the Classroom. Faculty panelists included Louis Hyman, Dorothy Ross Professor of Political Economy in History and Professor at the SNF Agora Institute; Jeffrey Gray, Professor of Chemical and Biomolecular Engineering in the Whiting School; and Brian Klaas, Assistant Director for Technology and instructor at the Bloomberg School of Public Health. Caroline Egan, Teaching Academy Program Manager, moderated the discussion.

Louis Hyman began the presentation by reminding the audience what large language models (LLMs) like ChatGPT can and cannot do. For example, ChatGPT does not “know” anything and is incapable of reasoning. It generates text that it predicts will best answer the prompt it was given, based on how it was trained. In addition to his course work, Hyman mentioned several tasks he uses ChatGPT to assist with, including summarizing text, writing complicated Excel formulas, writing and editing drafts, making PowerPoint tables, and rotating image files to the correct orientation.

In Hyman’s course, AI and Data Methods in History, students are introduced to a variety of tools (e.g., Google Sheets, ChatGPT, Python) that help them analyze and think critically about historical data. Hyman described how students used primers from LinkedIn Learning as well as generative AI prompts to build their technical skills, which enabled them to take a deeper dive into data analysis. For example, while it would have been too complicated for most students to write code on their own, they learned how to prompt ChatGPT to write code for them. By the end of the semester, students used application programming interface (API) calls to send data to Google, used OpenAI tools to clean up historical documents and images processed with optical character recognition (OCR), and used ChatGPT and Python to plot and map historical data.

Two maps of 1850 New England showing the number of congregational churches and the value of congregational property. Data points plotted by students using AI.
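For readers who want a concrete picture of that workflow, here is a minimal sketch (not taken from the course materials) of prompting the OpenAI API to clean up OCR output from a historical document. The package, model name, sample text, and prompt wording are illustrative assumptions.

from openai import OpenAI  # assumes the openai package is installed and OPENAI_API_KEY is set

client = OpenAI()

# Hypothetical OCR output from a scanned 1850 document, with typical recognition errors.
ocr_text = "The Congregational churche s of New Eng1and in 1850 numbered ..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You correct OCR errors without changing the meaning of the text."},
        {"role": "user", "content": "Clean up the OCR errors in this passage:\n\n" + ocr_text},
    ],
)

print(response.choices[0].message.content)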

Hyman noted that one of the most challenging parts of the course was convincing students that it was OK to use ChatGPT – that they were not cheating. Another challenge was that many students lacked basic computer literacy skills, so getting everyone up to speed took some time. Students also did not all share a common computing platform. The successes of the course included students’ ability to use libraries and APIs to make arguments in their data analysis, apply statistical analysis to the data, and ask historical questions about the results they were seeing in the data.

Jeff Gray continued by describing his Computational Protein Structure Prediction and Design course, which he has taught for over 18 years. In this course, students use molecular visualization and prediction tools like PyRosetta, an interactive Python-based interface that allows them to design custom molecular modeling algorithms. Recently, Gray has introduced open-source AI tools into the curriculum (AlphaFold and RoseTTAFold), which predict 3D models of protein structures.
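As a rough illustration of the kind of scripting students do in PyRosetta (a sketch under assumptions, not an excerpt from Gray’s course), the snippet below builds a short peptide and scores it with a standard energy function; it assumes a licensed PyRosetta installation.

import pyrosetta

pyrosetta.init()  # start the Rosetta runtime (prints license and version information)

# Build a 10-residue peptide from sequence and score it with the default full-atom energy function.
pose = pyrosetta.pose_from_sequence("ACDEFGHIKL")
scorefxn = pyrosetta.get_fa_scorefxn()

print("Number of residues:", pose.total_residue())
print("Total score (Rosetta energy units):", scorefxn(pose))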

Example of protein folding using AlphaFold.

One of the challenges Gray mentioned was the diversity of student academic backgrounds. There were students from engineering, biology, bioinformatics, computer science, and applied math, among others. To address this challenge, Gray used specifications grading, a grading method in which students are graded pass/fail on individual assessments that align directly with learning goals. In Gray’s class, students were presented with a bundle of problem sets categorized at various difficulty levels. Students selected which ones they wanted to complete and had the option of resubmitting them a second time for full credit. Gray is undecided about using this method going forward, noting that half of the students ended up dropping the course after trying to complete all of the problems instead of just a few and finding the workload too heavy. Another challenge was balancing the fundamental depth of the subject matter against hands-on application. To address this, Gray structured the twice-weekly class with a lecture on one day and a hands-on workshop on the other, which seemed to work well.

Brian Klaas teaches a one-credit pass/fail course called Using Generative AI to Improve Public Health. The goal of this course is to allow students to explore AI tools, gain a basic understanding of how they work, and then apply them to their academic work and research. In addition to using the tools, students discussed the possible harms of generative AI, such as confabulations and biases; the impact of these tools on public health research; and future concerns such as the impact on the environment and copyright law. Klaas shared his syllabus statement regarding the usage of AI tools in class, something he strongly recommends all faculty share with their students.

Hands-on assignments included various ways of using generative AI. In one assignment, students were asked to write a summary of a journal article and then have GenAI write a summary of the same article geared towards different audiences (academics vs. high school students). Students were then asked to analyze the differences between the summaries. For another assignment, students were asked to pick from a set of topics and use generative AI to teach them about the selected topic, noting any confabulations or biases present. They then asked GenAI to create a five-question quiz on the topic and took the quiz. A final assignment was to create an Instagram post on the same topic, including a single image and a few sentences explaining the topic to a lay audience. All assignments included a reflection piece, which often required peer review.

Sample Instagram post created using AI showing people from different cultures dressed as medical professionals.
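For instructors curious how the audience-targeted summary exercise could be scripted rather than run in a chat window (the course itself let students use whichever chat tools they had), here is a minimal sketch; the package, model name, and prompt wording are assumptions for illustration.

from openai import OpenAI  # assumes the openai package is installed and OPENAI_API_KEY is set

client = OpenAI()
article_text = "..."  # full text of the journal article would go here

# Ask for the same summary pitched at two different audiences, then compare the outputs.
for audience in ["academic researchers", "high school students"]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Summarize the following article for {audience} in about 150 words:\n\n{article_text}",
        }],
    )
    print(f"--- Summary for {audience} ---")
    print(response.choices[0].message.content)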

Lessons learned: Students loved the interdisciplinary approach to the course, encountering confabulations reinforces core data research skills, and learning from each other is key.

The discussion continued with questions from the audience: 

Q: What would you recommend to an instructor who is considering implementing GenAI in the classroom? How do they start thinking about GenAI?
JG: Jupyter notebooks are pretty easy to use. I think students should just give it a try.
LH: I recommend showing students what “bad” examples look like. The truth is, we can still write better than computers. Use AI to draft papers and then use it as an editing tool – it’s very good at editing. Students can learn a lot from that.
BK: I recommend having students experiment and see where the strengths lie, to get an overall awareness of it. Reflect on that process and see what went well and what did not. Feed in an assignment and see what happens. Use a rubric to evaluate the assignment. Put a transcript in and ask it to create a quiz on that information. It can save you some time.

Q for Brian Klaas: What version of GPT were you using?
BK: Any of them – I didn’t prescribe specific tools or versions. We have students all over the world, so they used whatever they had. ChatGPT, Claude, MidJourney, etc. I let the students decide and allowed them to compare differences.

Q for Jeff Gray: Regarding the number of students who dropped, is the aim of the course to have as many students as possible, or a group who is wholly into it?
JG: I don’t know, I’m struggling with this. I want to invite all students but also need to be able to dig into the math and material. It feels like we just scratched the surface. Maybe offering an intersession course to learn the tools before they take this class would be helpful. There is no standard curriculum yet for AI. Where to begin…we’re all over the map as far as what should be included in the curriculum.
LH: I guess it depends on what your goals are. Students are good at “plug and chug,” but bad at asking questions like, “what does this mean?”
BK: We didn’t get to cover everything, either – there is not enough time in a one-credit class. There are just so many things to cover.

Q: What advice do you have for faculty who are not computer scientists? Where should we start learning? What should we teach students?
LH: You can ask it to teach you Python, or how to do an API call. It’s amazing at this. I don’t know coding as well as others, but it helps. Just start asking it [GenAI]. Trust it for teaching something like getting PyTorch running on your PC. Encourage students to be curious and just start prompting it.
BK: If you’re not interested in Jupyter notebooks, or some of the more complicated functions, you can use these tools without dealing in data science. It can do other things. It’s about figuring out how to use it to save time, for ideation, for brainstorming.
JG: I have to push back – what if I want to know about what’s going on in Palestine and Israel? I don’t know what I don’t know. How do I know what it’s telling me is correct?
LH: I don’t use it for history – but where is the line of what it’s good and not good at?
BK: I would use it for task lists, areas to explore further, but remember that it has no concept of truth. If you are someone who knows something about the topic, it does get you over the hurdles.
JG: You have to be an expert in the area to rely on it.
LH: Students at the end of my course made so much progress in coding. It depends on what you ask it to do – protein folding is very different from history that has already happened.

Q: How can we address concerns with fairness and bias with these tools in teaching?
BK: Give students foundational knowledge about how the tools work. Understand that these are prediction machines that make stuff up. Studies have shown how biased they can be, even with simple prompts. Tell students to experiment – they will learn from this. I suggest working this in as a discussion or some practice for themselves.

Q: Students have learned to ask questions better – would you rather be living now with these tools, or without them?
JG: Students are brainstorming better. They are using more data and more statistics.
BK: AI requires exploration and play to get good responses. It really takes time to learn how to prompt well. You have to keep trying. Culturally, our students are optimized for finding the “right answer”; AI programs us to think that there are multiple answers. There is no one right answer for how to get there.
LH: Using AI is just a different process to get there. It’s different from what we had to do in college. It was hard to use computers because many of us had to play with them to get things to work. Now it all works beautifully with smartphones. Students today aren’t comfortable experimenting. How do we move from memorization to asking questions? It’s very important to me that students have this experience. It’s uncomfortable to be free and questioning, and then go back to the data. How do we reconcile this?

JG: What age is appropriate to introduce AI to kids?
LH: Students don’t read and write as much as they used to. I’m not sure about the balance.
Guest: I work with middle and high school teachers. Middle school is a great time to introduce AI. Middle school kids are already good at taking information in and figuring out what it means. Teachers need time to learn the tools before introducing it to students, including how the tools can be biased, etc.

Q: How can we encourage creative uses of AI?
BK: Ethan Mollick is a good person to follow regarding creative uses of AI in education and what frameworks are out there. To encourage creativity, the more we expose students to AI, the better. They need to play and experiment. We need to teach them to push through and figure things out.
LH: AI enables all of us to do things now that weren’t possible. We need to remember it’s an augment to what we do, not a substitute for our work.

Resources:
Hyman slides
Gray slides
Klaas slides

Amy Brusini, Senior Instructional Designer
Center for Teaching Excellence and Innovation
 

Image source: Lunch and Learn logo, Hyman, Gray, and Klaas presentation slides, Unsplash

Lunch and Learn: Generative AI – Teaching Uses, Learning Curves, and Classroom Guidelines

On Tuesday, October 3rd, the Center for Teaching Excellence and Innovation (CTEI) hosted its first Lunch and Learn of the academic year, a panel discussion titled “Generative AI: Teaching Uses, Learning Curves, and Classroom Guidelines.” The three panelists included Jun Fang, Assistant Director of the Instructional Design and Technology Team in the Carey Business School; Carly Schnitzler, KSAS instructor in the University Writing Program; and Sean Tackett, Associate Professor in the School of Medicine. The discussion was moderated by Caroline Egan, project manager in the CTEI. Mike Reese, director of the CTEI, also helped to facilitate the event.

The panelists began by introducing themselves and then describing their experiences with generative AI. Jun Fang loves new technology and has been experimenting with AI since its inception. He noticed that the faculty he works with generally fall into two categories when it comes to using AI: some are quite concerned about students using it to cheat and are not ready to use it, while others see a great deal of potential and are very excited to use it in the classroom. In speaking with colleagues from across the institution, Fang quickly realized these are common sentiments expressed by faculty in all JHU divisions. This motivated him to lead an effort to create a set of AI guidelines specifically geared toward faculty. The document contains a number of strategies for using AI, including designing engaging course activities, providing feedback for students on their assignments, and redesigning course assessments. The section on redesigning course assessments describes two approaches: the “avoidance approach,” which involves deliberately designing assessments without AI, and the “activation approach,” which intentionally integrates AI tools into the curriculum. The document includes specific examples of many of the strategies mentioned as well as links to widely used generative AI tools.

Fang described a recent scenario in which a faculty member was concerned that students were using ChatGPT to generate answers to online discussion board questions. To address this, Fang suggested the faculty member revise the questions so that they were tied to a specific reading or perhaps to a topic generated in one of his online synchronous class sessions. Another suggestion was to have students submit two answers for each question – one original answer and one generated by ChatGPT – and then have the students compare the two answers. The faculty member was not comfortable with either of these suggestions and ended up making the discussion more of a synchronous activity rather than an asynchronous one. Fang acknowledged that everyone has a different comfort level with using AI and that one approach is not necessarily better than another.

Carly Schnitzler currently teaches two introductory writing courses to undergraduates and is very open to using generative AI in her classroom. At the start of the semester, she asked students to fill out an intake survey which included questions about previous writing experiences and any technologies used, including generative AI. She found that students were reluctant to admit that they had used these technologies, such as ChatGPT, for anything other than ‘novelty’ purposes because they associated these tools with cheating. After seeing the results of the survey, Schnitzler thought it would be beneficial for students to explore the potential use of generative AI in class. She asked students to complete an assignment in which they created standards of conduct for a first-year writing class, which included discussing their expectations of the course, the instructor, their peers, and how AI would fit in among these expectations. The class came up with three standards: 

  1. AI tools should support (and not distract from) the goals of the class, such as critical thinking, analytical skills, developing a personal voice, etc.  
  2. AI tools can be used for certain parts of the writing process, such as brainstorming, revising, or editing, but students must disclose that AI tools were used. 
  3. If there appears to be an over-use or over-reliance on AI tools, a discussion will take place to address the situation rather than disciplinary action. (Schnitzler wants students to feel safe exploring the tools without fear of repercussion.) 

This assignment comes from an open collection of cross-disciplinary assignments that use text generation technologies, mostly in a writing context. TextGenEd: Teaching with Text Generation Technologies, co-edited by Schnitzler, consists of freely accessible assignments submitted by scholars from across the nation. Assignments are divided into categories, such as AI literacy, rhetorical engagements, professional writing, creative explorations, and ethical considerations. Most are designed so that the technologies used are explored by students and instructors together, requiring very little ‘expert’ technological skill. Schnitzler noted that there is a call for new submissions twice each year and encouraged instructors to consider submitting their own assignments that use text generation AI.

Sean Tackett was initially fearful of ChatGPT when it was released last year. Reading article after article stating how generative AI was going to “take over” pushed him to learn as much as he could about this new technology. He began experimenting with it and initially did not find it easy to use or even necessarily useful in his work with medical school faculty. However, he and some colleagues recognized potential in these tools and ended up applying for and receiving a JHU DELTA grant to find ways they could apply generative AI to faculty development in the medical school. Tackett described how they are experimenting with generative AI in a curriculum development course that he teaches to the med school faculty. For example, one of the tasks is for faculty to learn to write learning objectives, so they’ve been developing prompts that can be used to specifically critique learning objectives. Another example is developing prompts to critique writing. Most of Tackett’s students are medical professionals who do not have a lot of time to learn new technologies, so his team is continually trying to refine prompts in these systems to make them as useful and efficient as possible. Despite the faculty’s busy schedules, Tackett noted, they are generally enthusiastic about having the opportunity to use these tools.

The discussion continued with a question and answer session with audience members: 

Q: How do we transfer and integrate this knowledge with teaching assistants who help manage the larger sized classes? What about grading?
ST: I would advocate for the potential of AI to replace a TA in terms of grading, but not in terms of a TA having a meaningful dialogue with a student. 
JF: Generative AI tools can be used to provide valuable feedback on assessments. There are a lot of tools out there to help make grading easier for your TAs, but AI can be used for the feedback piece. 

Q: How might professors provide guidelines to students to use generative AI to help them study better for difficult and complex topics?
MR: One possibility is to generate quiz questions – and then have students follow up by checking the accuracy of the quizzes that have been generated.
CS: Using a ChatGPT or other text generation tool as a reading comprehension aid is something that has been useful for non-native English speakers. For example, adding a paragraph from an academic article into ChatGPT and asking what this means in plain language can be helpful.

CE: This gets to what I call ‘prompt literacy,’ which is designing better prompts to give you better answers. There is a very good series about this on YouTube from the University of Pennsylvania.
Sean, what have you experienced with prompting right now, in terms of challenges and opportunities?
ST: We’re trying to put together advice on how to better prompt the system to get more refined and accurate answers. After a few iterations of prompting the system, we refine the prompt and put it into a template for our faculty, leaving a few ‘blanks’ for them to fill in with their specific variables. The faculty are experts in their subject areas, so they can tell if the output is accurate or not. We’re in the process of collecting their output, to put together best practices about what works, what does not work.  
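To make the “template with blanks” idea concrete, here is a hypothetical sketch of what such a fill-in prompt might look like in Python; the wording, criteria, and example values are illustrative assumptions, not the actual templates Tackett’s team developed.

# Hypothetical prompt template with "blanks" for faculty to fill in;
# not the actual template described in the session.
TEMPLATE = (
    "You are an experienced medical educator. Critique the following learning "
    "objective for a {course_topic} session aimed at {learner_level}. "
    "Comment on whether it is specific, measurable, and appropriately scoped, "
    "and suggest one improved version.\n\n"
    "Learning objective: {objective}"
)

prompt = TEMPLATE.format(
    course_topic="clinical reasoning",               # blank filled in by the faculty member
    learner_level="second-year medical students",    # blank filled in by the faculty member
    objective="Understand the differential diagnosis of chest pain",
)
print(prompt)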

CE: What would you all like to see in terms of guidelines and best practices for AI on a web page geared towards using AI in the classroom?
Guest: And along those lines, how do we move forward with assigning research projects, knowing that these tools are available for students?
ST: I think it could be useful for students to learn research skills. They could use the tools to research something, then critique the results and explain how they verified those results. It can also be useful for generating ideas and brainstorming. Another thought is that there are a number of domain-specific generative AI databases, such as Open Evidence, which is useful in the medical field.
CS: To Sean’s point, I think a comparative approach is useful with these tools. The tools are very good at pattern matching genre conventions, so doing comparative work within a genre could be useful.
JF: I think ChatGPT and other generative AI tools can be useful for different parts of the research process, such as brainstorming, structure, and editing. But not for something like providing or validating evidence.  

Q: As a grad student, I’m wondering how the presence of AI might force us to refine the types of questions and evaluations that we give our students. Are there ways to engineer our own questions so that the shift of the question is changed to avoid the problem [of having to refine and update the question] in the first place?
CS: There is an assignment in our collection that talks about bringing an assignment from past to present. Again, thinking in terms of a comparative approach, ask ChatGPT the question, and then ask your students the same question and see how they compare, if there are any patterns.  I think it can be helpful to think of ChatGPT as adding another voice to the room.
JF: We have a section in the guidelines on how to redesign assessment to cope with generative AI related issues. We suggest two approaches: the avoidance approach and the activation approach. The avoidance approach is for faculty who are not yet comfortable using this technology and want to avoid having students use it.  One example of this approach is for faculty to rework their assignments to focus on a higher level of learning, such as creativity or analysis, which will hopefully reduce or eliminate the opportunity for students to use AI tools. The activation approach encourages faculty to proactively integrate AI tools into the assessment process. One example of this approach I mentioned earlier is when I suggested to a faculty member to rework their discussion board questions to allow students to submit two versions of the answers, one created by them and the other by ChatGPT, and then analyze the results. 

Q: What is the ultimate goal of education? We may have different goals for different schools. Also, AI may bridge people from different social backgrounds. In China, where I grew up, the ability to read or write strongly depends on the social status of the family you come from. So there is some discomfort using it in the classroom.
CS: I feel some discomfort also, and that’s what led to the development of the guidelines in my classroom. I posed a similar question to my students: if we have these tools that can allegedly write for us, what is the point of taking a writing class?  They responded by saying things like, “writing helps to develop critical thinking and analytical skills,” to which I added, “being here is an investment in yourself as a student, a scholar, and a thinker.” I think asking students to articulate the value of the education that they want to get is really helpful in determining guidelines for AI.
ST: Going to school and getting an education is an investment of your time. You pay now so you can be paid later. But it’s not as transactional as that. AI is already in the work environment and will become more prevalent. If we’re not preparing students to succeed in the work environment, we are doing them a disservice. We teach students to apply generative AI in their classes so they are prepared to use it in the workforce.
JF: In the business school, everything is market driven. I think education can fit into that framework as well. We’re trying to provide graduates with the confidence they need to finish the work and meet the market’s needs. We know that generative AI tools have really changed the world and they’re starting to emerge in every part of our life. We need to train students to realize that ChatGPT might be part of their education, part of life in the future, and part of the work in the future as well. There are things AI can help us do, but there are still fundamentals that students need to learn. One example is calculators: we still need to learn from the beginning that 1 + 1 = 2.
CE: This question also reminded me of asking your students, what is the ultimate purpose of a research paper? Where do they think ChatGPT should fit into the research process?  

Q: I work at the library and we’re getting lots of questions about how to detect if students are using AI. And also, how do you determine if students are relying too heavily on AI?
JF: We also get this question from our faculty. The most used detection tool right now is Turnitin, which is embedded in Canvas. But the level of accuracy is not reliable. We encourage faculty to always validate before accepting the results.  For faculty who are actively using AI in the classroom, we also encourage them to provide clear guidance and expectations to students on how they are allowed to use it.  This may make it a little easier to determine if they are using it correctly or not.
MR: There are some other tools out there, such as GPTZero and ZeroGPT, but to Jun’s point, the difficult thing is that it’s different from plagiarism detection, which says this is copied, and here’s the source. These tools say there’s a probability that part of this was taken, but you can’t point to a direct source. It’s up to instructors whether or not to use these tools, but consider using them to facilitate a conversation with students. In my own classes, if I suspect academic misconduct, I usually start by asking students to explain and talk to me about what is happening before I make accusations. With these tools, there tends to be no hard evidence, just probabilities that something may have happened. This is definitely an area we’re all still learning about.
Guest: I was just thinking that having a conversation with students about why they are turning to the tool in the first place might prevent misconduct.  Instead of sending them to an academic misconduct committee, we could have these conversations, like Carly mentioned. Making students aware of the limitations of the tool could also be helpful.
CS: Yes, I say in our guidelines that I’m prioritizing conferences with students over immediate disciplinary action. I try to pre-empt anxiety students might feel around using these tools. Designing your assignments in a way that reduces anxiety is also helpful. For example, I tend to design assignments that build on one another throughout the semester in smaller bits, rather than one giant chunk all at once.

Q: Is there any discussion around combining AI with teaching, such as generating personalized explanations of a topic? Students will have different levels of expertise and comfort with different topics.
ST: We’re trying to do this, to create a teaching aid for the future. We’re planning to use it to create assessment items.  

Amy Brusini, Senior Instructional Designer
Center for Teaching Excellence and Innovation
 

Image Source: Pixabay, Unsplash