Quick Tips: Low to No Prep Classroom Activities

Student engagement is a critical component of higher education and a frequent topic of interest among instructors. Actively engaging students in the learning process helps increase motivation, supports collaboration, and deepens understanding of course material. Finding activities that instructors can implement quickly while also proving worthwhile to students can be a challenge. I recently attended a conference session titled “Low to No Prep Classroom Activities,” in which Jennifer Merrill, a psychology professor from San Mateo County Community College, shared some simple classroom activities that require little or no advance preparation. Several seemed worth passing along:

Music:
Playing music as students enter the classroom creates a shared experience which can encourage social interaction, inspire creative thinking, and lead to positive classroom dynamics. It can be used as an icebreaker, to set a particular mood, or specifically relate to the course in some way. Research shows that music stimulates activity in the brain that is tied to improved focus, attention, and memory.

  • Incorporate music as part of a regular classroom routine to indicate that it’s time to focus on the upcoming lesson.
  • Use it to introduce a new topic or review a current or past topic. Ask students to articulate how they think the music/artist/song relates to the course material and then share with the class.
  • Allow students to suggest/select what type of music they would like to hear.

Academic Speed Dating:
Like traditional speed dating, academic speed dating consists of short, timed conversations with a series of partners around a particular topic. In this case, students are given a prompt from the instructor, briefly discuss their response with a partner, and then rotate to a new partner when the time is up. Partners face each other in two lines, with one line of students continuously shifting through the other line until they return to their original partner. This can also be done by having students form inner and outer circles, instead of lines. A few of the benefits of academic speed dating include:

  • Encouraging students to share and question their own knowledge while gaining different perspectives on a topic.
  • Enhancing communication skills as students learn to express their ideas quickly and efficiently.
  • Providing a safe space to share ideas as students interact with others, which can lead to a positive classroom climate.

Memory:
The classic “Memory Game” consists of a set of cards with matching pairs of text or images. Cards are shuffled and placed face down; players take turns turning over 2 cards at a time, trying to find matching pairs.  In this version, students take part in creating the cards themselves, using index cards, before playing the game. Memory can be used to reinforce learning and enhance the retention of course material.

Suggested steps for implementation:
1. On the board, the instructor lists 10 terms or concepts related to the course in some way.
2. Students are divided into groups of no more than 5 people. Each student in the group selects 2 terms/concepts from the list.
3. Using index cards, students write the name of the term/concept on one card, and an example of the term/concept on another card (e.g., “supply and demand” and “gasoline prices rising in the summer with more people driving”). Examples could also include images, instead of text.
4. When the groups are finished creating their sets of cards, they exchange their cards with another group and play the game, trying to match as many pairs as they can.

  • Use Memory to review definitions, formulas, or other test material in a fun, collaborative environment.
  • Enhance cognitive skills, such as concentration, short-term memory, and pattern recognition.
  • Facilitate team building skills as students work in groups to create and play the game.

Pictionary:
In this version of classroom Pictionary, students are divided into groups that are each assigned a particular topic. Each group is tasked with drawing an image representation of their topic, e.g., “Create images that represent the function of two glial cells assigned to your group.” It works best if drawings are large enough to be displayed around the classroom, such as on an easel, whiteboard, or large Post-it note paper. When each group is finished with their drawings, all students participate in a gallery walk, offering feedback to the other groups. Facilitate a small or whole group discussion to reflect on the feedback each group received.

  • Enhance problem solving skills and creativity by asking students to think critically about how to represent information visually.
  • Use Pictionary to get students up and moving around the classroom, which will help keep them actively engaged with course content.
  • Help students develop constructive feedback skills as they participate in the gallery walk part of the activity.

Hawks and Eagles:
This activity is a version of “think-pair-share” that gets students up and moving around the classroom.

Suggested steps for implementation:
1. Students pair with someone nearby and decide who will be the Hawk and who will be the Eagle.
2. Give all students a prompt or topic to discuss and allow them time to think about their response (1-3 minutes).
3. Students share their responses with their paired partner (1-3 minutes).
4. Ask Hawks to raise their hands. Ask the Eagles to get up and go find a different Hawk.
5. Students share their responses with their new partner.
6. Repeat steps 4 and 5, if desired, to allow students to pair with multiple partners.
7. Debrief topic with the whole class.

  • Use Hawks and Eagles as an icebreaker activity for students to introduce and get to know one another.
  • Use this activity as a formative assessment to gauge student comprehension of a particular topic.
  • Expose students to multiple perspectives or viewpoints on a particular topic by having them engage with multiple partners.

IQ Cards:
IQ cards (“Insight/Question Cards”) is an exit ticket activity that acts as a formative assessment strategy. At the end of a class or unit, ask students to write down on an index card any takeaways or new information they have learned. On the other side of the card, ask them to write down any remaining questions they have about the lesson or unit. Collect student responses and share their “insights” and “questions” with the class at the next meeting.

  • Gather instant feedback from students and quickly assess their grasp of the material, noting where any changes or adjustments might be needed.
  • Reinforce knowledge by asking students to recall key concepts of the lesson or unit.
  • Use IQ Cards as a self-assessment activity for students to reflect on their own learning.

Do you have any additional low or no prep activities you use in the classroom? Please feel free to share them in the comments. If you have any questions about any of the activities described above or other questions about student engagement, please contact the CTEI – we are here to help!

Amy Brusini, Senior Instructional Designer
Center for Teaching Excellence and Innovation
 

References:
Baker, M. (2007). Music moves brain to pay attention, Stanford study finds. Stanford Medicine: News Center. Retrieved August 26, 2024, from https://med.stanford.edu/news/all-news/2007/07/music-moves-brain-to-pay-attention-stanford-study-finds.html

Image source: Jennifer Merrill, Pixabay

Quick Tips: Alternative Assessments

Throughout the past year and a half, instructors have made significant changes to the way they design and deliver their courses. The sudden shift to being fully remote, then hybrid, and now back to face-to-face for some courses has required instructors to rethink not only the way they teach, but also the way they assess their students. Many who have previously found success with traditional tests and exams are now seeking alternative forms of assessment, some of which are described below:

Homework assignments: Adding more weight to homework assignments is one way to take the pressure off high-stakes exams while keeping students engaged with course material. Homework assignments will vary according to the subject, but they may include answering questions from a chapter in a textbook, writing a summary of a reading or topic discussed in class, participating in an online discussion board, writing a letter, solving a problem set, etc.

Research paper:  Students can apply their knowledge by writing a research paper. To help ensure a successful outcome, a research paper can be set up as a scaffolded assignment, where students turn in different elements of the paper, such as a proposal, an outline, first and second drafts, bibliography, etc. throughout the semester, and then the cumulative work at the end.

Individual or group presentations: Student presentations can be done live for the class or prerecorded ahead of time using multimedia software (e.g., Panopto, VoiceThread) that can be viewed asynchronously. Depending on the subject matter, presentations may consist of a summary of content, a persuasive argument, a demonstration, a case study, an oral report, etc. Students can present individually or in groups.

Reflective paper or journal: Reflective exercises allow students to analyze what they have learned and experienced and how these experiences relate to their learning goals. Students develop an awareness of how they best acquire knowledge and can apply these metacognitive skills to both academic and non-academic settings. Reflective exercises can be guided or unguided and may include journaling, self-assessment, creating a concept map, writing a reflective essay, etc.

Individual or group projects: Student projects may be short-term, designed in a few weeks, or long-term, designed over an entire semester or more. If the project is longer term, it may be a good idea to provide checkpoints for students to check in about their progress and make sure they are meeting deadlines. Ideas for student projects include: creating a podcast, blog, interactive website, interactive map, short film, digital simulation, how-to guide, poster, interview, infographic, etc. Depending on the circumstances, it may be possible for students to partner with a community-based organization as part of their project. Another idea is to consider allowing students to propose their own project ideas.

Online Tests and Exams: For instructors who have moved their tests online, it may be worth lowering the stakes of these assessments by replacing high-stakes midterms and finals with weekly quizzes that each carry less weight. Giving more frequent assessments provides additional opportunities to give students feedback and help them reach their goals successfully. To reduce the potential for cheating, include questions that are unique and require higher-level critical thinking. Another option is to make at least some of the quizzes open-book.

It’s worth noting that offering students a variety of ways to demonstrate their knowledge aligns with the principles of universal design for learning (UDL). Going beyond traditional tests and exams helps to ensure that all learners have an opportunity to show what they have learned in a way that works best for them. If you’re looking for more ideas, here are a few sites containing additional alternative assessment strategies:

https://www.scholarlyteacher.com/post/alternatives-to-the-traditional-exam-as-measures-of-student-learning-outcomes

https://teaching.berkeley.edu/resources/course-design-guide/design-effective-assessments/alternatives-traditional-testing

https://cei.umn.edu/alternative-assessment-strategies

Amy Brusini, Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay

Expanding Students’ Research Skills with a Virtual Museum Exhibit

Morgan Shahan received her PhD in History from Johns Hopkins University in 2020. While at Hopkins, she received Dean’s Teaching and Prize Fellowships. In 2019, her department recognized her work with the inaugural Toby Ditz Prize for Excellence in Graduate Student Teaching. Allon Brann from the Center for Educational Resources spoke to Morgan about an interesting project she designed for her fall 2019 course, “Caged America: Policing, Confinement, and Criminality in the ‘Land of the Free.’”

I’d like to start by asking you to give us a brief description of the final project.  What did your students do?

Students created virtual museum exhibits on topics of their choice related to the themes of our course, including the rise of mass incarceration, the repeated failure of corrections reform, changing conceptions of criminality, and the militarization of policing. Each exhibit included a written introduction and interpretive labels for 7-10 artifacts, which students assembled using the image annotation program Reveal.  On the last day of class, students presented these projects to their classmates. Examples of projects included: “Birthed Behind Bars: Policing Pregnancy and Motherhood in the 19th and 20th Centuries,” “Baseball in American Prisons,” and “Intentional Designs: The Evolution of Prison Architecture in America in the 19th and 20th Centuries.”

Can you describe how you used scaffolding to help students prepare for the final project?

I think you need to scaffold any semester-long project. My students completed several component tasks before turning in their final digital exhibits. Several weeks into the semester, they submitted a short statement outlining the “big idea” behind their exhibitions. The “big idea statement,” a concept I borrowed from museum consultant Beverly Serrell, explained the theme, story, or argument that defined the exhibition’s tone and dictated its content. I asked students to think of the “big idea statement” as the thesis for their exhibition.

Students then used the big idea to guide them as they chose one artifact and drafted a 200-word label for it. I looked for artifact labels that were clearly connected to the student’s big idea statement, included the context visitors would need to know to understand the artifact, and presented the student’s original interpretation of the artifact. The brevity of the assignment gave me time to provide each student with extensive written comments. In these comments and in conversations during office hours, I helped students narrow their topics, posed questions to help guide analysis and interpretation of artifacts, and suggested additional revisions focused on writing mechanics and tone.

Later in the semester, students expanded their big idea statements into rough drafts of the introductions for their digital exhibit. I asked that each introduction orient viewers to the exhibition, outline necessary historical context, and set the tone for the online visit. I also set aside part of a class period for a peer review exercise involving these drafts. I hoped that their classmates’ comments, along with my own, would help students revise their introductions before they submitted their final exhibit.

If I assigned this project again, I would probably ask students to turn in another label for a second artifact. This additional assignment would allow me to give each student more individualized feedback and would help to further clarify my grading criteria before the final project due date.

When you first taught this course a few years ago, you assigned students a more traditional task—a research paper. Can you explain why you decided to change the final assignment this time around?

I wanted to try a more flexible and creative assignment that would push students to develop research and analytical skills in a different format. The exhibit project allows students to showcase their own interpretation of a theme, put together a compelling historical narrative, and advance an argument. The project remains analytically rigorous, pushing students to think about how history is constructed. Each exhibit makes a claim—there is reasoning behind each choice the student makes when building the exhibit and each question he or she asks of the artifacts included. The format encourages students to focus on their visual analysis skills, which tend to get sidelined in favor of textual interpretation in most of the student research papers I have read. Additionally, the exhibit assignment asks students to write for a broader audience, emphasizing clarity and brevity in their written work.

What challenges did you encounter while designing this assignment from scratch?  

In the past I have faced certain risks whenever I have designed a new assignment. First, I have found it difficult to strike a balance between clearly stating expectations for student work and leaving room for students to be creative. Finding that balance was even harder with a non-traditional assignment. I knew that many of my students would not have encountered an exhibit project before my course, so I needed to clarify the utility of the project and my expectations for their submissions.

Second, I never expected to go down such a long research rabbit hole when creating the assignment directions. I naively assumed that it would be fairly simple to put together an assignment sheet outlining the requirements for the virtual museum project.  I quickly learned, however, that it was difficult to describe exactly what I expected from students without diving into museum studies literature and scholarship on teaching and learning.

I also needed to find a digital platform for student projects. Did I want student projects to be accessible to the public? How much time was I willing to invest in teaching students how to navigate a program or platform? After discussing my options with Reid Sczerba in the Center for Educational Resources (CER), I eventually settled on Reveal, a Hopkins-exclusive image-annotation program. The program would keep student projects private, foreground written work, and allow for creative organization of artifacts within the digital exhibits. Additionally, I needed to determine the criteria for the written component of the assignment. I gave myself a crash course in museum writing, scouring teaching blogs, museum websites, journals on exhibition theory and practice, and books on curation for the right language for the assignment sheet. I spoke with Chesney Medical Archives Curator Natalie Elder about exhibit design and conceptualization. My research helped me understand the kind of writing I was looking for, identify models for students, and ultimately create my own exhibit to share with them.

Given all the work that this design process entailed, do you have any advice for other teachers who are thinking about trying something similar?

This experience pushed me to think about structuring assignments beyond the research paper for future courses. Instructors need to make sure that students understand the requirements for the project, develop clear standards for grading, and prepare themselves mentally for the possibility that the assignment could crash and burn. Personally, I like taking risks when I teach—coming up with new activities for each class session and adjusting in the moment should these activities fall flat—but developing a semester-long project from scratch was a big gamble.

How would you describe the students’ responses to the project? How did they react to the requirements and how do you think the final projects turned out?

I think that many students ended up enjoying the project, but responses varied at first. Students expressed frustration with the technology, saying they were not computer-savvy and were worried about having to learn a new program. I tried to reassure these students by outing myself as a millennial, promising half-jokingly that if I could learn to use it, they would find it a cinch. Unfortunately, I noticed that many students found the technology somewhat confusing despite the tutorial I delivered in class. After reading through student evaluations, I also realized that I should have weighted the final digital exhibit and presentation less heavily and included additional scaffolded assignments to minimize the end-of-semester crunch.

Despite these challenges, I was really impressed with the outcome. While clicking through the online exhibits, I could often imagine the artifacts and text set up in a physical museum space. Many students composed engaging label text, keeping their writing accessible to their imaginary museum visitors while still delivering a sophisticated interpretation of each artifact. In some cases, I found myself wishing students had prioritized deeper analysis over background information in their labels; if I assigned this project again, I would emphasize that aspect.

I learned a lot about what it means to support students through an unfamiliar semester-long project, and I’m glad they were willing to take on the challenge. I found that students appreciated the flexibility of the guidelines and the room this left for creativity. One student wrote that the project was “unique and fun, but still challenging, and let me pursue something I couldn’t have if we were just assigned a normal paper.”

If you’re interested in pursuing a project like this one and have more questions for Morgan, you can contact her at: morganjshahan@gmail.com. 

For other questions or help developing new assessments to use in your courses, contact the Center for Educational Resources (cerweb@jhu.edu).

Allon Brann, Teacher Support Specialist
Center for Educational Resources

Image Source: Morgan Shahan

Navigating Grades During Covid-19 

Like many other universities nationwide, Johns Hopkins has made the decision to forgo letter grades this semester for its undergraduates. Faculty in the Krieger and Whiting schools have been instructed to use the special designation S*/U* this semester. On Friday, April 3, the Center for Educational Resources (CER) hosted an online session, “Transitioning to S/U Grading.” Jessie Martin, Assistant Dean, Office of Academic Advising, and Janet Weise, Assistant Dean, Office of Undergraduate Affairs, provided an overview of JHU’s updated grading policy, which was followed by a question-and-answer sharing session moderated by Allon Brann from the CER.

Highlights of the grading policy for both KSAS and WSE faculty include: 

  • All AS and EN undergraduate students will receive S* or U* grades for the spring 2020 semester. (The asterisk (*) distinguishes this semester from a regular S/U grade given during past semesters.) There will be a semester-specific transcript notation explaining that students were not eligible for a letter grade.
  • This applies to AS and EN undergraduate students even if they are enrolled in graduate-level courses or in courses offered by other schools.
  • There will be an option to assign a grade of I/U*.
  • Faculty may have students enrolled in their undergraduate classes who are grad students and/or from other JHU schools and therefore have different emergency grading systems.  

More details about the policy can be found here:
KSAS: https://krieger.jhu.edu/covid19/teaching/
WSE: https://engineering.jhu.edu/novel-coronavirus-information/faculty-undergrad-grading-faqs/
(Note: the links are different, but the information is identical for both the Krieger and Whiting Schools)

Session participants shared strategies in terms of how to move forward with grading this semester, which are summarized below:  

  • Consulting the students: One faculty member shared how she consulted with her students to help decide how to move her course forward this semester. She facilitated student discussions and allowed them a say in how things would be adapted. The outcome: course work has been scaled back, but no assessments have been eliminated. For example, instead of students turning in a full assignment, they now have to submit a list of bullet points highlighting the main ideas, or an outline instead of a full analysis. Lectures have been replaced by students working in groups through Zoom and then regrouping as a full class to report out. The faculty member has been very pleased with the results, noting that because students were involved in the decision-making, they are working even harder because they chose this path.
    Another idea related to consulting students, mentioned by a CER staff member, is to ask students how they are going to demonstrate that they’ve met the goals of the course.
  • Using technology to monitor students: Another faculty member described how Zoom can take attendance, record how many minutes students are on a call, and even track how attentive they are during a session. She also mentioned the detailed statistics provided by Panopto (lecture capture software), which record which video recordings students have viewed and for how long. While it is possible to incorporate this information into students’ grades, this faculty member stated she prefers to use these tools in a more informal way to monitor students and flag those who are not engaged.
    A CER staff member mentioned additional ways faculty are using technology, including: 
    • Embedding quizzes inside of Panopto as a knowledge check while watching video recordings. 
    • Creating a Blackboard quiz that is dependent on students having watched a video recording or attended a Zoom session.
  • Alternate grading strategies: A list of alternate grading strategies shared by the CER that may be useful in adjusting your approach this semester or in future semesters.
  • Specific S/U grading approaches: A list of approaches shared by the CER that might be worth considering as you transition to S/U grading this semester.

What modifications, if any, are you making in order to shift to S/U? We encourage you to share your ideas in the comments section. 

Amy Brusini, Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay

 

Quick Tips: Formative Assessment Strategies

Designing effective assessments is a critical part of the teaching and learning process. Instructors use assessments, ideally aligned with learning objectives, to measure student achievement and determine whether students are meeting those objectives. Assessments can also indicate whether instructors should consider making changes to their instructional method or delivery.

Assessments are generally categorized as either summative or formative. Summative assessments, usually graded, are used to measure student comprehension of material at the end of an instructional unit. They are often cumulative, providing a means for instructors to see how well students are meeting certain standards. Instructors are largely familiar with summative assessments. Examples include:

  • Final exam at the end of the semester
  • Term paper due mid-semester
  • Final project at the end of a course

In contrast, formative assessments provide ongoing feedback to students in order to help identify gaps in their learning. They are lower stakes than summative assessments and often ungraded. Additionally, formative assessments help instructors determine the effectiveness of their teaching; instructors can then use this information to make adjustments to their instructional approach which may lead to improved student success (Boston). As discussed in a previous Innovative Instructor post about the value of formative assessments, when instructors provide formative feedback to students, they give students the tools to assess their own progress toward learning goals (Wilson). This empowers students to recognize their strengths and weaknesses and may help motivate them to improve their academic performance.

Examples of formative assessment strategies:

  • Surveys – Surveys can be given at the beginning, middle, and/or end of the semester.
  • Minute papers – Very short, in-class writing activity in which students summarize the main ideas of a lecture or class activity, usually at the end of class.
  • Polling – Students respond as a group to questions posed by the instructor using technology such as iClickers, software such as Poll Everywhere, or simply raising their hands.
  • Exit tickets – At the end of class, students respond to a short prompt given by the instructor usually having to do with that day’s lesson, such as, “What readings were most helpful to you in preparing for today’s lesson?”
  • Muddiest point – Students write down what they think was the most confusing or difficult part of a lesson.
  • Concept map – Students create a diagram of how concepts relate to each other.
  • First draft – Students submit a first draft of a paper, assignment, etc. and receive targeted feedback before submitting a final draft.
  • Student self-evaluation/reflection
  • Low/no-grade quizzes

Formative assessments do not have to take a lot of time to administer. They can be spontaneous, such as having an in-class question and answer session which provides results in real time, or they can be planned, such as giving a short, ungraded quiz used as a knowledge check. In either case, the goal is the same: to monitor student learning and guide instructors in future decision making regarding their instruction. Following best practices, instructors should strive to use a variety of both formative and summative assessments in order to meet the needs of all students.

References:

Boston, C. (2002). The Concept of Formative Assessment. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation. (ERIC Document Reproduction Service No. ED470206).

Wilson, S. (February 13, 2014). The Characteristics of High-Quality Formative Assessments. The Innovative Instructor Blog. http://ii.library.jhu.edu/2014/02/13/the-characteristics-of-high-quality-formative-assessments/

Amy Brusini
Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay

Lunch and Learn: Strategies to Minimize Cheating (A Faculty Brainstorming Session)

On Wednesday, April 17, the Center for Educational Resources (CER) hosted the final Lunch and Learn for the 2018-2019 academic year: Strategies to Minimize Cheating (A Faculty Brainstorming Session). As the title suggests, the format of this event was slightly different from past Lunch and Learns. Faculty attendees openly discussed their experiences with cheating as well as possible solutions to the problem. The conversation was moderated by James Spicer, Professor, Materials Science and Engineering, and Dana Broadnax, Director of Student Conduct.

The discussion began with attendees sharing examples of academic misconduct they identified. The results included: copying homework, problem solutions, and lab reports; using other students’ clickers; working together on take-home exams; plagiarizing material from Wikipedia (or other sites); and using online solution guides (such as chegg.com, coursehero.com, etc.).

Broadnax presented data from the Office of the Dean of Student Life regarding the numbers of cheating incidents per school, types of violations, and outcomes. She stressed to faculty members how important it is to report incidents to help her staff identify patterns and repeat offenders. If it’s a student’s first offense, faculty are allowed to determine outcomes that do not result in failure of the course, transcript notation, or change to student status. Options include: assigning a zero to the assessment, offering a retake of the assessment, lowering the course grade, or giving a formal warning.  A student’s second or subsequent offense must be adjudicated by a hearing panel (Section D – https://studentaffairs.jhu.edu/policies-guidelines/undergrad-ethics/).

Some faculty shared their reluctance to report misconduct because of the time required to submit a report. Someone else remarked that when reporting, she felt like a prosecutor.  As a longtime ethics board member, Spicer acknowledged the burdens of reporting but stressed the importance of reporting incidents. He also shared that faculty do not act as prosecutors at a hearing. They only provide evidence for the hearing panel to consider. Broadnax agreed and expressed interest in finding ways to help make the process easier for faculty. She encouraged faculty to share more of their experiences with her.

The discussion continued with faculty sharing ideas and strategies they’ve used to help reduce incidents of cheating. A summary follows:

  • Do not assume that students know what is considered cheating. Communicate clearly what is acceptable/not acceptable for group work, independent work, etc. Clearly state on your syllabus or assignment instructions what is considered a violation.
  • Let students know that you are serious about this issue. Some faculty reported their first assignment of the semester requires students to review the ethics board website and answer questions. If you serve or have served on the ethics board, let students know.
  • Include an ethics statement at the beginning of assignment instructions rather than at the end. Research suggests that signing ethics statements placed at the beginning of tax forms rather than at the end reduces dishonest reporting.
  • Do not let ‘low levels’ of dishonesty go without following University protocol – small infractions may lead to more serious ones. The message needs to be that no level of dishonesty is acceptable.
  • Create multiple opportunities for students to submit writing samples (example: submit weekly class notes to Blackboard) so you can get to know their writing styles and recognize possible instances of plagiarism.
  • Plagiarism detection software, such as Turnitin, can be used to flag possible misconduct, but can also be used as an instructional tool to help students recognize when they are unintentionally plagiarizing.
  • Emphasize the point of doing assignments: to learn new material and gain valuable critical thinking skills. Take the time to personally discuss assignments and paper topics with students so they know you are taking their work seriously.
  • If using clickers, send a TA to the back of the classroom to monitor clicker usage. Pay close attention to attendance so you can recognize if a clicker score appears for an absent student.
  • Ban the use of electronic devices during exams if possible. Be aware that Apple Watches can be consulted.
  • Create and hand out multiple versions of exams, but don’t tell students there are different versions. Try not to re-use exam questions.
  • Check restrooms before or during exams to make sure information is not posted.
  • Ask students to move to different seats (such as the front row) if you suspect they are cheating during an exam. If a student becomes defensive, tell him/her that you don’t know for sure whether or not cheating has occurred, but that you would like him/her to move anyway.
  • Make your Blackboard site ‘unavailable’ during exams; turn it back on after everyone has completed the exam.
  • To discourage students from faking illness on exam days, only offer make-ups as oral exams. One faculty member shared this policy significantly reduced the number of make-ups due to illness in his class.

Several faculty noted the high-stress culture among JHU students and how it may play a part in driving them to cheat. Many agreed that in order to resolve this, we need to create an environment where students don’t feel the pressure to cheat. One suggestion was to avoid curving grades in a way that puts students in competition with each other.  Another suggestion was to offer more pass/fail classes. This was met with some resistance as faculty considered the rigor required by courses students need to get into medical school. Yet another suggestion was to encourage students to consult with their instructor if they feel the temptation to cheat. The instructor can help address the problem by considering different ways of handling the situation, including offering alternative assessments when appropriate. Broadnax acknowledged the stress, pressure, and competition among students, but also noted that these are not excuses to cheat: “Our students are better served by learning to best navigate those factors and still maintain a standard of excellence.”

Amy Brusini, Senior Instructional Designer
Center for Educational Resources

Image Source: Lunch and Learn Logo

Lunch and Learn: Innovative Grading Strategies

On Thursday, February 28, the Center for Educational Resources (CER) hosted the third Lunch and Learn for the 2018-2019 academic year. Rebecca Kelly, Associate Teaching Professor, Earth and Planetary Sciences and Director of the Environmental Science and Studies Program, and Pedro Julian, Associate Professor, Electrical and Computer Engineering, presented on Innovative Grading Strategies.

Rebecca Kelly began the presentation by discussing some of the problems with traditional grading. There is a general lack of clarity about what grades actually mean and how differently they are viewed by students and faculty. Faculty use grades to elicit certain behaviors from students, but eliciting those behaviors doesn’t necessarily mean students are learning. Kelly noted that students, especially those at JHU, tend to be focused on the grade itself, aiming for a specific number rather than the learning; this often results in high levels of student anxiety, something she sees often. She explained that students here don’t get many chances to fail without their grades being negatively affected. Therefore, every assessment is a source of stress because it counts toward their grade. There are too few opportunities for students to learn from their mistakes.

Kelly mentioned additional challenges that faculty face when grading: it is often time consuming, energy draining, and stressful, especially when haggling over points, for example.  She makes an effort to provide clearly stated learning goals and rubrics for each assignment, which do help, but are not always enough to ease the burden.

Kelly introduced the audience to specifications grading and described how she’s recently started using this approach in Introduction to Geographic Information Systems (GIS). With specifications grading (also described in a recent CER Innovative Instructor article), students are graded pass/fail or satisfactory/unsatisfactory on individual assessments that align directly with learning goals. Course grades are determined by the number of learning goals mastered. This is measured by the number of assessments passed. For example, passing 20 or more assignments out of 23 would equate to an A; 17-19 assignments would equate to a B. Kelly stresses the importance of maintaining high standards; for rigor, the threshold for passing should be a B or better.
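To make the arithmetic concrete, here is a minimal sketch of how such a count-based grade table could be computed. Only the A and B cutoffs come from Kelly’s example; the C cutoff and the code itself are hypothetical illustrations, not her actual scheme.

```python
# Illustrative sketch of count-based specifications grading.
# Only the A (20+) and B (17-19) cutoffs come from Kelly's example;
# the C cutoff below is a hypothetical placeholder.
GRADE_CUTOFFS = [
    (20, "A"),  # pass 20 or more of the 23 assessments
    (17, "B"),  # pass 17-19
    (14, "C"),  # hypothetical; not specified in the talk
]

def course_grade(passed: int) -> str:
    """Map the number of passed (satisfactory) assessments to a letter grade."""
    for cutoff, grade in GRADE_CUTOFFS:
        if passed >= cutoff:
            return grade
    return "F"

print(course_grade(21))  # A
print(course_grade(18))  # B
```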

In Kelly’s class, students have multiple opportunities to achieve their goals. Each student receives three tokens that he/she can use to re-do an assignment that doesn’t pass, or select a different assignment altogether from the ‘bundle’ of assignments available. Kelly noted the tendency of students to ‘hoard’ their tokens and how it actually works out favorably; instead of risking having to use a token, students often seek out her feedback before turning anything in.

Introduction to GIS has both a lecture and a lab component. The lab requires students to use software to create maps that are then used to perform data analysis. The very specific nature of the assignments in this class lends itself well to the specifications grading approach. Kelly noted that students are somewhat anxious about this approach at first, but settle into it once they fully understand it. In addition to clearly laying out expectations, Kelly lists the learning goals of the course and how they align with each assignment (see slides). She also provides students with a table showing the bundles of assignments required to reach final course grades. Additionally, she distributes a pacing guide to help students avoid procrastination.

The results that Kelly has experienced with specifications grading have been positive. Students generally like it because the expectations are very clear and initial failure does not count against them; there are multiple opportunities to succeed. Grading is quick and easy because of the pass/fail system; if something doesn’t meet the requirements, it is simply marked unsatisfactory. The quality of student work is high because there is no credit for sloppy work. Kelly acknowledged that specifications grading is not ideal for all courses, but feels the grade earned in her GIS course is a true representation of the student’s skill level in GIS.

Pedro Julian described a different grading practice that he is using, something he calls the “extra grade approach.” He currently uses this approach in Digital Systems Fundamentals, a hands-on design course for freshmen. In this course, Julian uses a typical grading scale: 20% for the midterm, 40% for labs and homework, and 40% for the final project. However, he augments the scale by offering another 20% if students agree to put in extra work throughout the semester. How much extra work? Students must commit to working collaboratively with instructors (and other students seeking the 20% credit) for one hour or more per week on an additional project.  This year, the project is to build a vending machine. Past projects include building an elevator out of Legos and building a robot that followed a specific path on the floor.
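As a rough sketch, the augmented scale might be computed as below. The talk does not specify how the extra 20% combines with the base 100% (for instance, whether the total is capped), so that detail is an assumption here.

```python
# Sketch of the "extra grade" scale described above: 20% midterm,
# 40% labs/homework, 40% final project, plus up to 20% for the optional
# extra project. Capping at 100 is an assumption; the talk does not say
# how the extra credit combines with the base scale.
def extra_grade(midterm, labs_homework, final_project, extra_project=None):
    base = 0.20 * midterm + 0.40 * labs_homework + 0.40 * final_project
    if extra_project is not None:  # student opted into the extra project
        base += 0.20 * extra_project
    return min(base, 100.0)

print(extra_grade(70, 80, 75))                    # 76.0 without extra work
print(extra_grade(70, 80, 75, extra_project=90))  # 94.0 with it
```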

Julian described how motivated students are to complete the extra project once they commit to putting in the time. Students quickly realize that they learn all sorts of skills they would not have otherwise learned and are very proud and engaged. Student participation in the “extra grade” option has grown steadily since Julian started using this approach three years ago. The first year there were 5-10 students who signed up, and this year there are 30. Julian showed histograms (see slides) of student grades from past semesters in his class and how the extra grade has helped push overall grades higher.  The histograms also show that it’s not just students who may be struggling with the class who are choosing to participate in the extra grade, but “A students” as well.

Similar to Rebecca Kelly’s experience, Julian expressed how grade-focused JHU students are, much to his dismay. In an attempt to take some of the pressure off, he described how he repeatedly tells his students that if they work hard, they will get a good grade; he even includes this phrase in his syllabus. Julian explained how he truly wants students to concentrate more on the learning and not on the grade, which is his motivation behind the “extra grade” approach.

An interesting discussion with several questions from the audience followed the presentations. Below are some of the questions asked and responses given by Kelly and Julian, as well as audience members.

Q: (for Julian) Some students may not have the time or flexibility in their schedule to take part in an extra project. Do you have suggestions for them? Did you consider this when creating the “extra grade” option?

Julian responded that in his experience, freshmen seem to be available. Many of them make time to come in on the weekends. He wants students to know he’s giving them an “escape route,” a way for them to make up their grade, and they seem to find the time to make it happen.  Julian has never had a student come to him saying he/she cannot participate because of scheduling conflicts.

Q: How has grade distribution changed?

Kelly remarked how motivated the students are; as a result, she had no Cs, very few Bs, and the rest As this past semester. She expressed how important it is to make sure that the A is attainable for students. She feels confident that she’s had enough experience to know what counts as an A. Every student can do it; the question is, will they?

Q: (for Kelly) Would there ever be a scenario where students would do the last half of the goals and skip the first half?

Kelly responded that she has never seen anyone jump over everything and that it makes more sense to work sequentially.

Q: (for Kelly) Is there detailed feedback provided when students fail an assignment?

Kelly commented that it depends on the assignment, but if students don’t follow the directions, that’s the feedback – to follow the directions. If it’s a project, Kelly will meet with the student, go over the assignment, and provide immediate feedback. She noted that she finds oral feedback much more effective than written feedback.

Q: (for Kelly) Could specs grading be applied in online classes?

Kelly responded that she thinks this approach could definitely be used in online classes, as long as feedback could be provided effectively. She also stressed the need for rubrics, examples, and clear goals.

Q: Has anyone tried measuring individual learning gains within a class? What skills are students coming in with? Are we actually measuring gain?

Kelly commented that specifications grading works as a complement to competency-based grading, which focuses on measuring gains in very specific skills.

Julian commented that this issue comes up in his class, with students coming in with varying degrees of experience. He stated that this is another reason to offer the extra credit: to keep things interesting for those who want to move at a faster pace.

The discussion continued among presenters and audience members about what students are learning in a class vs. what they are bringing in with them. A point was raised that if students already know the material in a class, should they even be there? Another comment questioned whether it is even an instructor’s place to determine what students already know. Additional comments were made about what grades mean and concerns about grades being used for different purposes, e.g., employers looking for specific skills, instructors writing recommendation letters, etc.

Q: Could these methods be used in group work?

Kelly responded that with specifications grading, you would have to find a way to evaluate the group. It might be possible to still score on an individual basis within the group, but it would depend on the goals. She mentioned peer evaluations as a possibility.

Julian stated that all grades are based on individual work in his class. He does use groups in a senior level class that he teaches, but students are still graded individually.

The event concluded with a discussion about how using “curve balls” – intentionally difficult questions designed to catch students off-guard – on exams can lead to challenging grading situations. For example, to ultimately solve a problem, students would need to first select the correct tools before beginning the solution process. Some faculty were in favor of including this type of question on exams, while others were not, noting the already high levels of exam stress.  A suggestion was made to give students partial credit for the process even if they don’t end up with the correct answer. Another suggestion was to give an oral exam in order to hear the student’s thought process as he/she worked through the challenge. This would be another way for students to receive partial credit for their ideas and effort, even if the final answer was incorrect.

Amy Brusini, Senior Instructional Designer
Center for Educational Resources

Image Sources: Lunch and Learn Logo, slide from Kelly presentation

What is Specifications Grading and Why Should You Consider Using It?

During the fall semester I came across the concept of specifications grading. We had a faculty member interested in trying it out, and another professor who was already using a version of it in his courses. For today’s post, I’d like to give an overview of specifications grading with resources to turn to for more information.

Specifications grading is not a brand new concept. In the spring of 2016, both Inside Higher Ed and The Chronicle of Higher Education ran articles on this grading method. The Inside Higher Ed piece, Yes, Virginia, There’s a Better Way to Grade (January 19, 2016), was written by Linda Nilson, who authored the seminal work on the concept: Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time (Stylus Publishing, 2015).

Nilson starts her book, which is a relatively short read (131 pages of text), by giving an overview of the history of grading. While the origins of our university system go back to the 6th century, grading students is a more recent idea, first appearing in the 1700s and becoming more formalized in the 19th century. There is little standardization across institutions, and practices vary considerably. Nilson notes that grading on the curve, grade inflation, and interpretations attached to grades further complicate the practice, leading to a system that she characterizes as broken and damaging to both faculty and students. Moreover, it is not at all clear that grades are an accurate predictor of future success.

Nilson contends there is a better system (see summary pp. 129-131), i.e., specifications grading (also called specs grading), which will:

  1. Uphold high academic standards,
  2. Reflect student learning outcomes,
  3. Motivate students to learn,
  4. Motivate students to excel,
  5. Discourage cheating,
  6. Reduce student stress,
  7. Make students feel responsible for their grades,
  8. Minimize conflict between faculty and students,
  9. Save faculty time,
  10. Give students feedback they will use,
  11. Make expectations clear,
  12. Foster higher-order cognitive development and creativity,
  13. Assess authentically,
  14. Achieve high interrater agreement,
  15. Be simple.

Her grading construct, which can be adapted in part or fully (as she explains in detail in her book), relies on pass/fail grading of assignments and assessments, the structuring of course content into modules linked to learning outcomes, and the bundling of assignments and assessments within those modules. The completion of course modules and bundles is linked to traditional course grades. In the pure form of specs grading, students determine what grade they want and complete the modules and bundles that correspond to that grade.

Nilson provides a summary of the features of specifications grading (p. 128):

  • Students are graded pass/fail on individual assignments and tests or on bundles or modules of assignments and tests.
  • Instructors provide very clear, detailed specifications (specs)—even models if necessary—for what constitutes a passing (acceptable/satisfactory) piece of work. Specs reflect the standards of B-level or better work.
  • Students are allowed at least one opportunity to revise an unacceptable piece of work, or start the course with a limited number of tokens that they can exchange to revise or drop unacceptable work or to submit work late.
  • Bundles and modules that earn higher course grades require students to demonstrate mastery of more skills and content, more advanced/complex skills and content, or both.
  • Bundles and modules are tied to the learning outcomes of the course or the program. Students will not necessarily achieve all the possible outcomes, but their course grade will indicate which ones they have and have not achieved.
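One way to picture the module/bundle construct summarized above is as a mapping from course grades to the set of modules that must be completed pass/fail. The sketch below is a hypothetical illustration of that structure, not code from Nilson’s book; the module names and grade tiers are invented.

```python
# Hypothetical illustration of Nilson's bundle construct: each course
# grade corresponds to a bundle of modules, each graded pass/fail and
# tied to learning outcomes. Module names and tiers are invented.
BUNDLES = {
    "A": {"core_1", "core_2", "advanced_1", "advanced_2"},
    "B": {"core_1", "core_2", "advanced_1"},
    "C": {"core_1", "core_2"},
}

def earned_grade(completed: set) -> str:
    """Return the highest grade whose full bundle has been completed."""
    for grade in ("A", "B", "C"):
        if BUNDLES[grade] <= completed:  # bundle is a subset of completed work
            return grade
    return "F"

# A student aiming for a B completes exactly that bundle:
print(earned_grade({"core_1", "core_2", "advanced_1"}))  # B
```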

Nilson’s article in Inside Higher Ed, referenced above, gives a quick overview of specifications grading basics. It’s a good starting place to determine if the concept holds appeal for you. While any new system of teaching, including grading, will have a learning curve, specs grading offers a great deal of flexibility. In her book, Nilson gives examples of ways to partially integrate the concept into your course planning. It is also clear that once implemented, the system saves faculty time by eliminating “hairsplitting decisions” about how many points to award on an assignment or test. Rubrics are required, but they are based on a satisfactory/unsatisfactory set of criteria, rather than spelling out what is expected for a full range of grades. Yes, faculty must be transparent and up front about this system with the students, but the anecdotal experiences that Nilson shares in her book indicate that students find specs grading to be less stressful and more motivating than traditional methods.

I recommend reading Nilson’s full book to understand the nuances and to determine which aspects of the system you will want to employ. To give you a sense of the scope of the book, following is an outline of the chapters and material covered.

Chapter 1: Introduction to and history of grading, and rationale for a new grading system.
Chapter 2: Discussion of learning outcomes and course design.
Chapter 3: Linking grades to outcomes—covers Bloom’s Taxonomy and how specs grading works in this regard.
Chapter 4: The efficacy of pass/fail grading.
Chapter 5: Details of specifications grading with detailed examples, including the role of rubrics, and adding flexibility through the use of tokens and second chances.
Chapter 6: How to convert specs graded student work to final letter grades. This chapter explains the concept of modules and bundling as related to levels of learning and grades that will be earned as modules/bundles are completed.
Chapter 7: Examples of specifications-graded course design. Nilson presents nine case studies from a variety of disciplines that include the types of activities and assessments that can be used.
Chapter 8: How and why specs grading motivates students—this chapter examines theories and research on student motivation to build a case for specs grading.
Chapter 9: Detailed instructions for developing a course with specs grading. This chapter includes tips for a hybrid course model that combines elements of specs grading with traditional grading constructs, and ideas for introducing students to specs grading.
Chapter 10: Conclusion and evaluation of specifications grading.

I also want to mention the article in The Chronicle of Higher Education, Prof Hacker Blog, Experimenting with Specifications Grading, Jason B. Jones, March 23, 2016, which reports on an instructor’s experience with specifications grading. It links to this blog post Rethinking Grading: An In-Progress Experiment, February 16, 2016, by Jason Mittell, who teaches at Middlebury College. The first-hand experience will be enlightening to those considering specs grading. Mittell includes the statement on his syllabus explaining specs grading to students, which will help you formulate your own explanation for this important part of a successful implementation of specs grading. You should also be sure to read the comments on the Prof Hacker piece for some additional ideas and resources.

As always, I am interested in comments from those who have tried or are considering this idea. Please share your thoughts.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay.com

Grading in the fast lane with Gradescope

[Guest post by Scott Smith, Professor, Computer Science, Johns Hopkins University]

Grading can be one of the most time consuming and tedious aspects of teaching a course, but it’s important to give prompt and meaningful feedback to your students. In large courses, aligning grading practices across multiple teaching assistants (TAs) necessitates a level of coordination that includes scheduling grading meetings, reviewing materials for correct answers, and calibrating point evaluations, all of which can take up valuable time during the semester.

In courses that teach programming, we typically assign students projects that require them to write programs to solve problems. When instructors grade this type of assignment, they have to evaluate not only the program’s results but also the student’s approach. If the results are not correct or the program doesn’t run, we have to spend time reviewing hundreds of lines of code to debug the program and give thoughtful feedback.

In the past, my method for grading assignments with my TAs may have been arduous but it worked. However, last year, no TAs were assigned to my Principles of Programming Languages course. Concerned that I wouldn’t have enough time to do all the work, I looked for another solution.

Grading consistently and providing meaningful feedback on every student submission is challenging, especially with multiple TAs. Typically, I would schedule a time to sit down with all of my TAs, review the assignment or exam, give each TA a set of questions to grade, pass the submissions around until all were graded, and finally calculate the grades. When a TA had a question, we would address it as a group and make the related adjustments throughout the submissions as needed. While this system worked, it was tedious and time consuming. Occasionally, inconsistencies in the grades came up, which could prompt regrade requests from students. I kept thinking there had to be a better way.

About a year and a half ago, a colleague introduced me to Gradescope, an application for managing the grading of assignments and exams. After a relatively short time getting familiar with the application, I used it in a course in the fall of 2016 for both student-submitted homework assignments and in-class paper exams. For the homework, students uploaded a digital version of the assignment to Gradescope; the application then prompted them to designate the areas in the document where their answers could be found, so that it could sort and organize the submissions for ease of grading. For the in-class exams, students worked on a paper-based exam that I had set up in Gradescope with the question areas established. I then scanned and uploaded the exams so that Gradescope could associate the established question areas with the student submissions automatically. Digitizing the completed tests and correlating them to the class roster was easy with a scanner and Gradescope’s automatic roster matching feature. Gradescope became a centralized location where my TAs and I could grade student work.

There are a few ways to consider incorporating Gradescope into your course. Here is a non-exhaustive list of scenarios for both assignments and exams that can be accommodated:

  • Handwritten/drawn homework (students scan them and upload the images/PDFs)
  • Electronic written homework (students upload PDFs)
  • In-class exams (instructor scans them and uploads the PDFs)
  • Coding scripts for programming assignments (students upload their program files for auto-grading)
  • Code assignments graded by hand (students upload PDFs of code)

The real power of Gradescope is its reusable rubric (a list of competencies or qualities used to assess answers) for grading each question. When grading, you select from or add to the rubric items to add or deduct points, which keeps grading consistent across submissions. Because the rubric is part of the assignment, you can also update point values at any time if you later decide a larger addition or deduction is warranted, and the grade calculations update automatically.
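To make the design idea concrete: what enables that automatic recalculation is storing, for each submission, which rubric items were applied rather than a raw point total. The following is a conceptual sketch of that idea in Python (my illustration, with hypothetical names; this is not Gradescope’s actual code):

    # Sketch of rubric-based grading: submissions record *which* rubric
    # items were applied, not point totals, so scores can be recomputed.
    from dataclasses import dataclass, field

    @dataclass
    class RubricItem:
        description: str
        points: float  # negative for a deduction, positive for an addition

    @dataclass
    class Submission:
        student: str
        applied_items: list = field(default_factory=list)  # RubricItem refs

        def score(self, base_points: float) -> float:
            # Recomputed on demand from the current rubric values.
            return base_points + sum(item.points for item in self.applied_items)

    # One shared rubric item for a question, applied to a submission:
    off_by_one = RubricItem("Off-by-one error in loop bound", -2.0)
    alice = Submission("alice", [off_by_one])

    print(alice.score(base_points=10))  # 8.0
    off_by_one.points = -3.0            # later decide on a larger deduction
    print(alice.score(base_points=10))  # 7.0 -- rescored automatically

Because every submission references the shared rubric item, changing that item’s point value in one place re-scores every affected submission.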


Screenshot of Gradescope’s Review Grade for an assignment

After being informed that I wouldn’t have any TAs for my Principles of Programming Languages course the following semester, I was motivated to use one of Gradescope’s features, the programming assignment auto-grader. Being able to automatically provide grades and feedback on students’ submitted code has long been a dream of instructors who teach programming. Gradescope offers a language-agnostic environment in which the instructor sets up the components and libraries needed for the students’ programs to run. The instructor writes a grading script that performs the analysis, producing grades and feedback for issues found in each student’s submitted program.
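To give a sense of what such a grading script involves, here is a minimal sketch in Python. It follows the general shape of Gradescope’s published autograder conventions (a script runs the submission and writes a results.json file of per-test scores), but the test cases, file paths, and entry-point name below are hypothetical, for illustration only:

    # Minimal auto-grading sketch: run the student's program on test
    # cases and write per-test scores/feedback to results.json.
    import json
    import subprocess

    # Hypothetical test cases: (name, stdin, expected stdout, points)
    TEST_CASES = [
        ("adds two numbers", "2 3\n", "5", 10),
        ("handles negatives", "-4 1\n", "-3", 10),
    ]

    def run_student_program(stdin_text):
        """Run the submitted program and capture its output."""
        result = subprocess.run(
            ["python3", "/autograder/submission/solution.py"],  # hypothetical entry point
            input=stdin_text, capture_output=True, text=True, timeout=10,
        )
        return result.stdout.strip()

    tests = []
    for name, stdin_text, expected, points in TEST_CASES:
        try:
            actual = run_student_program(stdin_text)
            passed = actual == expected
            output = "Passed" if passed else f"Expected {expected!r}, got {actual!r}"
        except Exception as exc:  # program crashed or timed out
            passed, output = False, f"Program failed to run: {exc}"
        tests.append({
            "name": name,
            "score": points if passed else 0,
            "max_score": points,
            "output": output,
        })

    with open("/autograder/results/results.json", "w") as f:
        json.dump({"tests": tests}, f, indent=2)

When a student submits, each test runs against their program, and the per-test scores and messages in results.json become the grades and feedback they see.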

Overall, using Gradescope has reduced the time I spend grading and improved the quality of feedback I am able to provide students. For instance, when I release grades, students can review each of the descriptive rubric items applied to their submissions, as well as any additional comments. The auto-grader was really the star feature in this case: students could submit their code, determine whether it ran, and make corrections before the deadline to increase their chances of a better grade. There are settings to limit the number of allowed submissions, but I chose not to set a limit so that students could take an iterative approach to reaching the right solution.

Gradescope is only as effective as your rubrics and grading criteria, and the auto-grading scripts take some time to set up. Creating grading scripts for programming assignments may seem time-intensive, but frontloading the work with detailed rubrics and test cases saves more time during grading. The value of this preparation scales as enrollment increases, and the rubrics and scripts can be reused the next time you teach the course. With the time freed up by streamlining the grading process, my TAs and I were able to increase office hours, which is more beneficial to students in the long run.


Student’s submission with rubric items used in grading

The regrading process is much easier for both students and instructors. Before Gradescope, a regrade request meant determining which TA graded the question, discussing the request with them, and then potentially adjusting the grade. With the regrade feature, students submit a request, which is routed to that question’s grader (me or a TA) along with the student’s comments. The grader can then award the regrade points directly on the student’s assignment. As the instructor, I can see all regrade requests and override them if necessary, which reduces the bureaucracy and logistics of manual regrading. Additionally, regrade requests and Gradescope’s assignment statistics can help you pinpoint issues with a particular question or gauge how well students have understood a topic.

I have found that when preparing assignments with Gradescope, I am more willing to create multiple mini-assignments. In large courses, the tendency is to create fewer, larger assignments to lessen the grading load. But when there are too few submission points, deadline-oriented students wait until the last few days to start the assignment, which can make the learning process less effective. By adding more assignments, I can scaffold the learning to incrementally build on topics taught in class.

After using Gradescope for a year, I realized that it can also help detect cheating. Gradescope lets you page through submissions to a specific question in sequence, making it easy to spot identical answers, a red flag for copying. While not an advertised feature, it is a useful bonus. It should also be noted that Gradescope adheres to FERPA (Family Educational Rights and Privacy Act) standards for educational tools.
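If you wanted to automate that first pass instead of paging through submissions by eye, one simple approach (my own illustration, not a Gradescope feature) is to hash a normalized form of each answer and flag collisions:

    # Flag identical answers by hashing a normalized form of each one.
    # Illustrative only; 'answers' would come from your own export.
    import hashlib
    from collections import defaultdict

    # Hypothetical data: student -> answer text for one question
    answers = {
        "alice": "x = 42  # final answer",
        "bob":   "X = 42   # Final Answer",
        "carol": "x = 7",
    }

    def normalize(text: str) -> str:
        """Collapse whitespace and case so trivial edits don't hide copying."""
        return " ".join(text.lower().split())

    groups = defaultdict(list)
    for student, answer in answers.items():
        digest = hashlib.sha256(normalize(answer).encode()).hexdigest()
        groups[digest].append(student)

    for students in groups.values():
        if len(students) > 1:
            print("Identical answers:", ", ".join(students))  # alice, bob

This only catches exact copies after trivial formatting changes; genuinely reworded answers still require human review.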

Additional Resources:

  • Gradescope website: https://gradescope.com
  • NOTE TO JHU READERS ONLY: The institutional version of Gradescope is currently available to JHU faculty users through a pilot program. If you are faculty at Johns Hopkins University’s Homewood campus interested in learning more about how Gradescope might work for your courses, contact Reid Sczerba in the Center for Educational Resources at rsczerb1@jhu.edu.


Scott Smith, Professor
Department of Computer Science, Johns Hopkins University

Scott Smith has been a professor of Computer Science at Hopkins for almost 30 years. His research specialty is programming languages. For the past several years, he has taught two main courses: Software Engineering, a 100-student project-based class, and Principles of Programming Languages, a mathematically oriented course with both written and small programming assignments.

Image sources: Reid Sczerba (CC); Gradescope screenshots courtesy of Scott Smith

New Mobile Application to Improve Your Teaching

Tcrunch logo: Tcrunch in white letters on a blue background.

Finding time to implement effective teaching strategies can be challenging, especially for professors for whom teaching is only one of many responsibilities. PhD student John Hickey is trying to solve this problem with Tcrunch, a new application he has created, available for free on the Apple and Google app stores.

Tcrunch enables more efficient and frequent teacher-student communication. You can think of it as an electronic version of the teaching strategy called an “exit ticket.” An exit ticket is traditionally a 3×5 card given to students at the end of class; the teacher asks a question to gain feedback, and students write a brief response. Tcrunch does the same thing, but it eliminates the paper and collects and analyzes the responses in real time.

Tcrunch Teacher Portal screen shot.

The app has both a teacher portal and a student portal. Teachers can create and manage different classes. Within a class, a teacher can create a question or prompt and release it to students, who also have Tcrunch. Students can then see the question, click on it, and answer it. Answers come into the teacher’s app in real time. Teachers can evaluate the results in the app or email themselves the results as an Excel document. Other features include multiple-choice questions, a bank of pre-existing questions designed to help improve teaching, and an anonymous setting for student users.

John developed Tcrunch because of his own struggles with time and improving learning in the classroom:

“I taught my first university-level class at Johns Hopkins, and I wanted more regular feedback on my teaching style, classroom activities, and student comprehension than just the course evaluation at the end of the year. As an engineer, I know frequent feedback is critical to iterative improvement. I also knew that I was not going to hand out, collect, read, and analyze dozens of papers at the end of each class. So, I created Tcrunch.”

Tcrunch student view of app. Screen shot.

The app development process took nearly a year, with iterative coding and testing with teachers and students. Both student and teacher users have enjoyed using Tcrunch, citing its ease of use, the ability to create and answer questions on the go, and having all their classes on one platform. John has personally found that Tcrunch has helped him restructure classroom time and assignment load, and even find out why students are missing class.

John cites this development process as the main difference between his app and already existing polling technologies.

“Finding out what the professors and students wanted allowed me to see the needs that were not filled by existing technologies. This resulted in an app specifically designed to help teachers, instead of the other way around (for example, a generalized polling tool that is then applied to teaching). The specificity in design gives it its unique functionality and user experience.”

In the future, John wants to extend Tcrunch’s reach to more teachers through advertising and partnerships with edtech organizations.

While the app may not be as flashy as Pokémon Go, Tcrunch has great utility and potential in the classroom.

To find the app, search for Tcrunch in the Apple or Google app stores and download it. John Hickey can be contacted at jhickey8@jhmi.edu.

John Hickey
National Science Foundation Fellow
Biomedical Engineering Ph.D. Candidate
Johns Hopkins University

Image source: John Hickey, 2018