Grading in the fast lane with Gradescope

[Guest post by Scott Smith, Professor, Computer Science, Johns Hopkins University]

Three speedometers for quality, grades per hour, and efficiency.

Grading can be one of the most time-consuming and tedious aspects of teaching a course, but it's important to give prompt and meaningful feedback to your students. In large courses, aligning grading practices across multiple teaching assistants (TAs) requires coordination: scheduling grading meetings, reviewing materials for correct answers, and calibrating point values, all of which eat into valuable time during the semester.

In courses that teach programming, we typically assign projects that require students to write programs to solve problems. When grading this type of assignment, instructors have to evaluate not only the program's results but also the student's approach. If the results are incorrect or the program doesn't run, we have to spend time reviewing hundreds of lines of code to debug the program before we can give thoughtful feedback.

In the past, my method for grading assignments with my TAs may have been arduous, but it worked. However, last year, no TAs were assigned to my Principles of Programming Languages course. Concerned that I wouldn't have enough time to do all the work, I looked for another solution.

Grading consistently and providing meaningful feedback on every student submission can be challenging, especially with multiple TAs. Typically, when grading, I would schedule a time to sit down with all of my TAs, review the assignment or exam, give each TA a set of questions to grade, pass the submissions around until all were graded, and finally calculate the grades. When a TA had a question, we could address it as a group and make the related adjustments throughout the submissions as needed. While this system worked, it was tedious and time-consuming. Occasionally, inconsistencies in the grades came up, which could prompt regrade requests from students. I kept thinking that there had to be a better way.

About a year and a half ago, a colleague introduced me to an application called Gradescope that manages the grading of assignments and exams. I spent a relatively short amount of time getting familiar with the application and used it in a course in the fall of 2016, for both student-submitted homework assignments and in-class paper exams. For the homework, students would upload a digital version of the assignment to Gradescope. The application would then prompt the student to designate the areas in the document where their answers could be found, so that the application could sort and organize the submissions for ease of grading. For the in-class exams, I would have the students work on a paper-based exam that I had set up in Gradescope with the question areas established. I then scanned and uploaded the exams so that Gradescope could associate the established question areas with the student submissions automatically. A scanner and Gradescope's automatic roster matching feature made it easy to digitize the completed tests and correlate them to the class roster. Gradescope became a centralized location where my TAs and I could grade student work.

There are a few ways to consider incorporating Gradescope into your course. Here is a non-exhaustive list of scenarios for both assignments and exams that can be accommodated:

  • Handwritten/drawn homework (students scan them and upload the images/PDFs)
  • Electronic written homework (students upload PDFs)
  • In-class exams (instructor scans them and uploads the PDFs)
  • Programming assignments (students upload their program files for auto-grading)
  • Code assignments graded by hand (students upload PDFs of code)

The real power of Gradescope is that it requires setting up a reusable rubric (a list of competencies or qualities used to assess correct answers) for grading each question. When grading, you select from or add to the rubric items to award or deduct points, which keeps the grading consistent across submissions. Because the rubric is part of the assignment, you can also update the point values at any time if you later decide that a larger addition or deduction is warranted, and the grade calculations will update automatically.
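Gradescope's internals aren't shown in this post, but a toy sketch illustrates why rubric-based grading recalculates so easily: if each submission stores which rubric items were applied, rather than a raw point total, a later change to an item's value flows through to every affected grade. (The rubric items and student names below are hypothetical, purely for illustration.)

```python
# Toy illustration (not Gradescope's actual implementation): submissions
# record rubric-item selections, not raw totals, so changing an item's
# point value propagates to every graded submission automatically.
rubric = {
    "correct_recursion": +6,
    "handles_base_case": +2,
    "off_by_one_error": -1,
}

# Each submission stores the rubric items the grader selected.
submissions = {
    "student_a": ["correct_recursion", "handles_base_case"],
    "student_b": ["correct_recursion", "off_by_one_error"],
}

def score(items):
    """Compute a grade from the current rubric values."""
    return sum(rubric[item] for item in items)

print({name: score(items) for name, items in submissions.items()})
# {'student_a': 8, 'student_b': 5}

# Deciding mid-semester that the deduction should be larger is a one-line
# change; every affected score updates the next time it is computed.
rubric["off_by_one_error"] = -3
print(score(submissions["student_b"]))  # now 3
```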

Screenshot of Gradescope’s Review Grade for an assignment

After being informed that I wouldn't have any TAs for my Principles of Programming Languages course the following semester, I was motivated to use one of Gradescope's features, the programming assignment auto-grader. Being able to automatically provide grades and feedback on students' submitted code has long been a dream of instructors who teach programming. Gradescope offers a language-agnostic environment in which the instructor sets up the components and libraries needed for the students' programs to run. The instructor then writes a grading script that analyzes each student's submitted program and produces the grades and feedback.
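The post doesn't include Smith's actual grading scripts, so here is a minimal sketch of the general shape such a script can take. It assumes a hypothetical assignment whose submission exposes a function fib(n), and Gradescope's documented convention of writing per-test results to /autograder/results/results.json; check Gradescope's current autograder documentation for the exact schema and container layout.

```python
# Minimal autograder sketch. Assumptions: the student's file submission.py
# defines fib(n), and results are reported via Gradescope's documented
# results.json format. Adapt names, paths, and point values to your course.
import json

from submission import fib  # hypothetical student module and function


def run_tests():
    cases = [
        # (test name, input, expected output, points)
        ("base case fib(0)", 0, 0, 2),
        ("base case fib(1)", 1, 1, 2),
        ("general case fib(10)", 10, 55, 6),
    ]
    results = []
    for name, arg, expected, points in cases:
        try:
            actual = fib(arg)
            passed = actual == expected
            output = "passed" if passed else f"expected {expected}, got {actual}"
        except Exception as exc:
            # A crash earns zero points but still yields useful feedback.
            passed, output = False, f"raised {type(exc).__name__}: {exc}"
        results.append({
            "name": name,
            "score": points if passed else 0,
            "max_score": points,
            "output": output,
        })
    return results


if __name__ == "__main__":
    tests = run_tests()
    report = {"score": sum(t["score"] for t in tests), "tests": tests}
    # Gradescope reads this file to display per-test scores and feedback.
    with open("/autograder/results/results.json", "w") as f:
        json.dump(report, f, indent=2)
```

Because every submission runs against the same cases, the feedback is consistent by construction, and the same script can be reused the next time the course is offered.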

Overall, using Gradescope has reduced the time I spend grading and improved the quality of the feedback I am able to provide students. For instance, when I release grades, students can review each of the descriptive rubric items that were applied to their submissions, as well as any additional comments. The auto-grader was really the star feature in this case. Students could submit their code, see whether it ran, and make corrections before the deadline to increase their chances of a better grade. There is a feature to limit the number of allowed submissions, but I chose not to set a limit so that students could iterate toward the right solution.

Gradescope is only effective if your rubrics and grading criteria are well thought out, and the auto-grading scripts require some time to set up. Creating the grading scripts for programming assignments may seem time-intensive, but frontloading the work with detailed rubrics and test cases saves more time during the grading process. The value of this preparation scales as enrollment increases, and the rubrics and scripts can be reused when you teach the course again. With more time freed up during the semester by the streamlined grading process, my TAs and I were able to increase office hours, which is more beneficial to students in the long run.

Student’s submission with rubric items used in grading

The regrading process is also much easier for both students and instructors. Before Gradescope, a regrade request meant determining which TA graded the question, discussing the request with them, and then potentially adjusting the grade. With the regrade feature, students submit a request, which is routed to that question's grader (me or the TA) along with the student's comments. The grader can then award the regrade points directly on the student's assignment. As the instructor, I can see all regrade requests and can override a decision if necessary, which reduces the bureaucracy and logistics of manual regrading. Additionally, regrade requests and Gradescope's assignment statistics may help you pinpoint issues with a particular question or gauge how well students have understood a topic.

I have found that when preparing assignments with Gradescope, I am more willing to create multiple mini-assignments. With large courses, the tendency is to create fewer, larger assignments to lessen the amount of grading. But when there are too few submission points, deadline-oriented students wait until the last few days to start, which can make the learning process less effective. By adding more assignments, I can scaffold the learning to incrementally build on topics taught in class.

After using Gradescope for a year, I realized that it can also help detect cheating. Gradescope lets you page through the submissions to a specific question in sequence, making it easy to spot answers that are identical, a red flag for copying. While not an advertised feature, it is a welcome bonus. It should also be noted that Gradescope adheres to FERPA (Family Educational Rights and Privacy Act) standards for educational tools.

Additional Resources:

  • Gradescope website: https://gradescope.com
  • NOTE TO JHU READERS ONLY: The institutional version of Gradescope is currently available to JHU faculty users through a pilot program. If you are faculty at Johns Hopkins University’s Homewood campus interested in learning more about how Gradescope might work for your courses, contact Reid Sczerba in the Center for Educational Resources at rsczerb1@jhu.edu.

Scott Smith, Professor
Department of Computer Science, Johns Hopkins University

Scott Smith has been a professor of Computer Science at Hopkins for almost 30 years. His research specialty is programming languages. For the past several years, he has taught two main courses: Software Engineering, a 100-student project-based class, and Principles of Programming Languages, a mathematically oriented course with both written and small programming assignments.

Image sources: CC Reid Sczerba; Gradescope screenshots courtesy of Scott Smith

New Mobile Application to Improve Your Teaching

Tcrunch logo: Tcrunch in white letters on a blue background.

Finding time to implement effective teaching strategies can be challenging, especially for professors for whom teaching is only one of many responsibilities. PhD student John Hickey is trying to solve this problem with Tcrunch, a new application he has created, available for free in the Apple and Google app stores.

Tcrunch enables more efficient and frequent teacher-student communication. You can think of it as an electronic version of the teaching strategy called an "exit ticket." An exit ticket is traditionally a 3×5 card handed to students at the end of class; the teacher asks a question to gain feedback, and the students write a brief response. Tcrunch does the same thing but eliminates the paper, collecting and analyzing the responses in real time.

Tcrunch teacher portal screenshot.

The app has both a teacher and a student portal. Teachers can create and manage different classes. Within a class, teachers can create a question or prompt and release it to their students, who also have Tcrunch. Students can then see the question, click on it, and answer it. Student answers arrive in the teacher's app in real time. Teachers can review the results in the app or email them to themselves as an Excel document. Other features include multiple-choice questions, a bank of pre-existing questions to help improve teaching, and an anonymous setting for student users.

John developed Tcrunch because of his own struggles with time and improving learning in the classroom:

“I taught my first university-level class at Johns Hopkins, and I wanted more regular feedback to my teaching style, classroom activities, and student comprehension than just the course evaluation at the end of the year. As an engineer, frequent feedback is critical to iterative improvements. I also knew that I was not going to handout, collect, read, and analyze dozens of papers at the end of each class. So, I created Tcrunch.”

Tcrunch student view of the app (screenshot).

The app development process took nearly a year, with iterative coding and testing with teachers and students. Both student and teacher users have enjoyed Tcrunch, citing its ease of use, the ability to create and answer questions on the go, and having a platform for all their classes in one place. John has personally found that Tcrunch has helped him restructure classroom time and assignment load, and even discover why students miss class.

John cites this development process as the main difference between his app and existing polling technologies.

“Finding out what the professors and students wanted allowed me to see the needs that were not filled by existing technologies. This resulted in an app specifically designed to help teachers, instead of the other way around, for example, a generalized polling tool that is also applied to teaching. The specificity in design gives it its unique functionality and user experience.”

In the future, John wants to extend Tcrunch's reach to more teachers through advertising and by partnering with edtech organizations.

While the app may not be as flashy as Pokémon Go, Tcrunch has great utility and potential in the classroom.

To find and use the app, search for Tcrunch in the Apple or Google app stores and download it. John Hickey can be contacted at jhickey8@jhmi.edu.

John Hickey
National Science Foundation Fellow
Biomedical Engineering Ph.D. Candidate
Johns Hopkins University

Image source: John Hickey, 2018

Midterm Course Evaluations

Many of us are reaching the mid-semester mark and students are anticipating or completing midterm exams. Perhaps you are in the throes of grading.  Now is a good time to think about letting your students grade you, in the sense of evaluating your teaching. Think of this as a type of formative assessment, an opportunity for you to make corrections to your teaching strategies and clarify student misconceptions.

Two buttons, green with a thumbs up and red with a thumbs down.

There are several ways to obtain feedback, and these evaluations do not need to be lengthy. Examples and resources are explored below. Popular among instructors I've talked to are short, anonymous surveys, offered either online or on paper. Blackboard and other course management systems allow you to create surveys in which student responses are anonymous but you can see who has responded and who has not, making them easy to track. Keep these evaluations focused, with three or four questions, which might include: What is working in the class/what is not working? What change(s) would you suggest to improve [class discussions/lectures/lab sessions]? What is something you are confused about? Have you found [specific course assignment] to be a useful learning activity?

As the Yale Center for Teaching and Learning states on their website page Midterm Student Course Evaluations: “Midterm course evaluations (MCE) are a powerful tool for improving instructors’ teaching and students’ learning.  … MCE provide two critical benefits for teaching and learning: the temporal advantage of improving the course immediately, and the qualitative benefit of making teaching adjustments specific to the particular needs and desires of current students. In addition, MCE generally produce better quality feedback than end-of-term evaluations since students have a shared stake in the results and instructors can seek clarification on any contradicting or confusing responses.” The Yale site offers useful examples, strategies, and resources.

The Michigan State University Academic Advancement Network offers a comprehensive guide, Mid-term Student Feedback, which includes research citations as well as examples. Here, too, you will find a list of resources from other universities on the topic, as well as more in-depth methods for gaining student feedback. There is also a section with tips on making effective use of the information gained from student feedback.

A sampling of survey-type midterm evaluations can be found in PDF format at the UC Berkeley Center for Teaching and Learning: Teaching Resources: Sample Midterm Evaluations. This document will get you off and running with little effort.

Ideally, you will use the results of the midterm exam or other learning assessment as a gauge alongside the teaching evaluations. If the learning assessment indicates gaps in content understanding, you can see how it aligns with the feedback from the student evaluations. The value is that you can make timely course corrections. Another plus: students will see that you are genuinely interested in your teaching and their learning.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay.com

Facilitating and Evaluating Student Writing

Over the summer I worked on revising a manual for teaching assistants that we hand out each year at our annual TA Orientation. One of the sections deals with writing intensive courses across disciplines and how TAs can facilitate and evaluate writing assignments. The information, advice, and resources in the manual speak to an audience beyond graduate student teaching assistants. Even seasoned instructors may struggle with teaching writing skills and evaluating written assignments.

View from above and to the right of a woman's hands at a desk, writing in a journal next to a laptop computer.

Two mistakes that teachers may make are assuming that students in their courses know how to write a scholarly paper and failing to provide appropriate directions for assignments. These missteps are likely to guarantee that the resulting student writing will disappoint.

As a quick aside, faculty often complain about the poor quality of student writing, claiming that students today don’t write as well as students in some vaguely imagined past, perhaps when the faculty member was a college freshman. However, the results of an interesting longitudinal study suggest otherwise. A report in JSTOR Daily, Student Writing in the Digital Age by Anne Trubek (October 19, 2016), summarizes the findings of the  2006 study by Andrea A. Lunsford and Karen J. Lunsford, Mistakes Are a Fact of Life: A National Comparative Study. “Lunsford and Lunsford, decided, in reaction to government studies worrying that students’ literacy levels were declining, to crunch the numbers and determine if students were making more errors in the digital age.” Their conclusion? “College students are making mistakes, of course, and they have much to learn about writing. But they are not making more mistakes than did their parents, grandparents, and great-grandparents.” Regardless of your take on the writing of current students, it is worth giving thoughtful consideration to your part in improving your students’ writing.

Good writing comes as a result of practice, and it is the role of the instructor to facilitate that practice. Students may arrive at university knowing how to compose a decent five-paragraph essay, but no one has taught them how to write a scholarly paper. They must learn to read critically, summarize what they have read, and identify an issue, problem, flaw, or new development that challenges what they have read. They must then construct an argument, back it with evidence (and understand what constitutes acceptable evidence), identify and address counter-arguments, and reach a conclusion. Along the way they should learn how to locate appropriate source materials, assemble a bibliography, and properly cite their sources. As an instructor, you must show them the way.

Students will benefit from having the task of writing a term paper broken into smaller components or assignments. Have students start by researching a topic and creating a bibliography. Librarians are often available to come to your class to instruct students in the art of finding sources and citing them correctly. Next, assign students to produce a summary of the materials they've read and identify the issue they will tackle in their paper. Have them outline their argument. Ask for a draft. Consider using peer review for some of these steps to distribute the burden of commenting and grading; evaluating others' work will improve their own. [See the May 29, 2015 Innovative Instructor post Using the Critique Method for Peer Assessment.] The opportunity also exists to have students meet with you in office hours to discuss some of these assignments so that you may provide direct guidance and mentoring. Their writing skills will not develop in a vacuum.

Your guidance is critical to their success. This starts with clear directions for each assignment. For an essay you will be writing a prompt that should specify the topic choices, genre, length, formal requirements (whether outside sources should be used, your expectations on thesis and argument, etc.), and formatting, including margins, font size, spacing, titling, and student identification. Directions for research papers, fiction pieces, technical reports, and other writing assignments should include the elements that you expect to find in student submissions. Do not assume students know what to include or how to format their work.

As part of the direction you give, consider sharing with your students the rubric by which you will evaluate their work. See the June 26, 2014 Innovative Instructor post Sharing Assignment Rubrics with Your Students for more detail. Not sure how to create a rubric? See previous posts: from October 8, 2012 Using a Rubric for Grading Assignments, November 21, 2014 Creating Rubrics (by Louise Pasternak), and June 14, 2017 Quick Tips: Tools for Creating Rubrics. Rubrics will save you time grading, ensure that your grading is equitable, and provide you with a tangible defense against students complaining about their grades.

Giving feedback on writing assignments can be time consuming so focus on what is most important. This means, for example, noting spelling and grammar errors but not fixing them. That should be the student’s job. For a short assignment, writing a few comments in the margins and on the last page may be doable, but for a longer paper consider typing up your comments on a separate page. Remember to start with something positive, then offer a constructive critique.

As well, bring writing into your class in concrete ways. For example, at the beginning of class, have students write for three to five minutes on the topic to be discussed that day, drawing from the assigned readings. Discuss the assigned readings in terms of the authors’ writing skills. Make students’ writing the subject of class activities through peer review. Incorporate contributions to a class blog as part of the course work. Remember, good writing is a result of practice.

Finally, there are some great resources out there to help you help your students improve their writing. Purdue University’s Online Writing Lab—OWL—website is all encompassing with sections for instructors (K-12 and Higher Ed) and students. For a quick start go to the section Non-Purdue College Level Instructors and Students. The University of Michigan Center for Research on Learning and Teaching offers a page on Evaluating Student Writing that includes Designing Rubrics and Grading Standards, Rubric Examples, Written Comments on Student Writing, and tips on managing your time grading writing.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Photo by Matthew Henry. CC License via Burst.com.

Lunch and Learn: Team-Based Learning

Logo for the Lunch and Learn program, showing the words Lunch and Learn in orange with a fork above and a pen below the lettering, and Faculty Conversations on Teaching at the bottom.

On Friday, December 16, the Center for Educational Resources (CER) hosted the second Lunch and Learn—Faculty Conversations on Teaching of the 2016-2017 academic year. Eileen Haase, Senior Lecturer in Biomedical Engineering, and Mike Reese, Director of the Center for Educational Resources and Instructor in Sociology, discussed their approaches to team-based learning (TBL).

Eileen Haase teaches a number of core courses in Biomedical Engineering at the Whiting School of Engineering, including Freshmen Modeling and Design, BME Teaching Practicum, Molecules and Cells, and System Bioengineering Lab I and II; she is also course director for Cell and Tissue Engineering and assists with System Bioengineering II. She has long been a proponent of teamwork in the classroom.

In her presentation, Haase focused on the Molecules and Cells course, required for BME majors in the sophomore year, which she co-teaches with Harry Goldberg, Assistant Dean at the School of Medicine, Director of Academic Computing and faculty member, Department of Biomedical Engineering. The slides from Haase’s presentation are available here.

In the first class, Haase has the students do a short exercise that demonstrates the value of teamwork. Then the students take the VARK Questionnaire. VARK stands for Visual, Aural, Read/Write, Kinesthetic, and is a guide to learning styles. The questionnaire helps students and instructors by suggesting strategies for teaching and learning that align with these different styles. Haase and Goldberg found that 62% of their students were "multimodal" learners, who benefit from having the same material presented in several modes in order to learn it. In Haase's class, in addition to group work, students work at the blackboard, use clickers, have access to online materials, participate in think-pair-share exercises, and get some content explained in lecture form.

Slide from Eileen Haase's presentation on team-based learning showing a scratch-card test.

Teamwork takes place in sections most Fridays. At the start of class, students take an individual 10-question quiz called the iRAT (Individual Readiness Assurance Test), which consists of multiple-choice questions based on pre-class assigned materials. The students then take the same test as a group (the gRAT). Haase uses IF-AT scratch cards for these quizzes. Both tests count toward the students' grades.

To provide evidence for the efficacy of team-based learning, Haase and Goldberg retested students from their course five months after the original final exam (99 of the 137 students enrolled in the course were retested). The data showed that students scored significantly better on the final exam on material that had been taught using team-based learning strategies and on the retest, retained significantly more of the TBL taught material.

Slide from Mike Reese's presentation on team-based learning showing four students collecting data at a Baltimore neighborhood market.

Mike Reese, Director of the Center for Educational Resources and instructor in the Department of Sociology, presented on his experiences with team-based learning in courses that include community-based learning in Baltimore City neighborhoods [presentation slides]. His courses are typically small and discussion oriented. Students read papers on urban issues and, in class, discuss them and develop research methodologies for gathering data in the field. Students are divided into teams, and Reese accompanies each team as it goes out into neighborhoods to gather data by talking to people on the street and observing the surroundings. The students then give group presentations on their field work and write individual papers. Reese says that teamwork is hard, but students realize that they could not collect and analyze data in such a short time frame without a group effort.

Reese noted that learning is a social process. We are social beings, and while many students dislike group projects, they will learn and retain more (as Haase and Goldberg demonstrated). This is not automatic. Instructors need to be thoughtful about structuring team work in their courses. The emotional climate created by the teacher is important. Reese shared a list of things to consider when designing a course that will incorporate team-based learning.

  1. Purpose: Why are you doing it? For Reese, teamwork is a skill that students should acquire, but primarily it serves his learning objectives.  If students are going to conduct a mini-research project in a short amount of time, they need multiple people working collectively to help with data collection and analysis.
  2. Group Size: This depends on the context and the course, but experts agree that having three to five students in a group is best to prevent slacking by team members.
  3. Roles: Reese finds that assigning roles works well as students don’t necessarily come into the course with strong project management skills, and projects typically require a division of labor. It was suggested that assigning roles is essential to the concept of true team-based learning as opposed to group work.
  4. Formation: One key to teamwork success is having the instructor assign students to groups rather than allowing them to self-select. [Research supports this. See Fiechtner, S. B., & Davis, E. A. (1985). Why some groups fail: A survey of students’ experiences with learning groups. The Organizational Behavior Teaching Review, 9(4), 75-88.] In Reese’s experience assigning students to groups helps them to build social capital and relationships at the institution beyond their current group of friends.
  5. Diversity: It is important not to isolate at-risk minorities. See: Heller, P. and Hollabaugh, M. (1992). Teaching problem solving through cooperative grouping. American Journal of Physics, 60 (7), 637-644.
  6. Ice Breakers: The use of ice breakers can help establish healthy team relationships. Have students create a team name, for example, to promote an identity within the group.
  7. Contracts: Having a contract for teamwork is a good idea. In the contract, students agree to support each other and commit to doing their share of the work. Students can create contracts themselves, but it is best if the instructor provides structured questions to guide them.
  8. Persistence: Consider the purpose of having groups and how long they will last. Depending on learning goals, teams may work together over an entire semester, or reform after each course module is completed.
  9. Check-ins: It is important to check in with teams on a regular basis, especially if the team is working together over an entire semester, to make sure that the group hasn’t developed problems and become dysfunctional.
  10. Peer Evaluation: Using peer evaluation keeps a check on the students to ensure that everyone is doing a fair share of the work. The instructor can develop a rubric, or have students work together to create one. Evaluation should be on specific tasks. Ratings should be anonymous (to the students, not the instructor) to ensure honest evaluation, and students should also self-evaluate.

In the discussion that followed the presentations, mentoring of teams and peer assessment were key topics. Several faculty with experience in team-based learning recommended providing support systems in the form of mentors and/or coaches assigned to the groups. These could be teaching assistants or undergraduate assistants who have previously taken the course. Resources for team-based learning were mentioned. CATME, "which stands for 'Comprehensive Assessment of Team Member Effectiveness,' is a free set of tools designed to help instructors manage group work and team assignments more effectively."

Doodle was suggested as another tool for scheduling collaborative work. Many are familiar with the Doodle poll concept, but there are also free tools such as Connect Calendars and Meet Me that can be used by students.

An Innovative Instructor print article, Making Group Projects Work by Pam Sheff and Leslie Kendrick, Center for Leadership Education,  August 2012, covers many aspects of successful teamwork.

Another resource of interest is a scholarly article by Barbara Oakley and Richard Felder, Turning Student Groups into Effective Teams [Oakley, B., Felder, R. M., Brent, R., & Elhajj, I., Journal of Student Centered Learning, 2004]. "This paper is a guide to the effective design and management of team assignments in a college classroom where little class time is available for instruction on teaming skills. Topics discussed include forming teams, helping them become effective, and using peer ratings to adjust team grades for individual performance. A Frequently Asked Questions section offers suggestions for dealing with several problems that commonly arise with student teams, and forms and handouts are provided to assist in team formation and management."

If you are an instructor on the Homewood campus, staff in the Center for Educational Resources will be happy to talk with you about team-based learning and your courses.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Sources: Lunch and Learn logo by Reid Sczerba, presentation slides by Eileen Haase and Mike Reese

Tips for Writing Effective Multiple Choice Questions

Writing test questions is a daunting task for many instructors. It can be challenging to come up with questions that accurately assess students' comprehension of course objectives. Multiple choice questions are no exception; despite their popularity, instructors often struggle to construct them well.

Piece of notebook paper with Questions at the top, followed by numbers and ABCD for each of the six numbers; answers are circled in red.

Multiple choice questions have several advantages. They lend themselves to covering a broad range of content and assessing a wide variety of learning objectives. They are very useful for testing a student's lower-level knowledge of a topic, such as factual recall and definitions, but if written correctly, they can also assess higher-level skills such as analysis, evaluation, and critical thinking. Multiple choice questions are scored efficiently (even automatically, if an electronic test is used); therefore, they are frequently the evaluation method preferred by instructors of large courses.

There are some disadvantages, including the fact that this type of question can be time-consuming to construct. Multiple choice questions are made up of two parts: the stem, which poses the question, and the alternative responses, which include the correct answer as well as incorrect alternatives, known as distractors. Coming up with plausible distractors for each question can be difficult. And while some higher-level thinking skills can be addressed, multiple choice questions cannot measure a student's ability to organize and express ideas. Another consideration is that student success on multiple choice questions can be influenced by factors unrelated to the subject matter, such as reading ability, deductive reasoning, and the use of context clues.

The following guidelines are offered to help streamline the process of creating multiple choice questions as well as minimize the disadvantages of using them.

General guidelines for writing stems:

  1. When possible, prepare the stem as a clearly written question rather than an incomplete statement.

Poor example: Psychoanalysis is….

Better example: What is the definition of psychoanalysis? 

  2. Eliminate excessive or irrelevant information from the stem.

Poor example: Jane recently started a new job and can finally afford her own car, a Honda Civic, but is surprised at the high cost of gasoline. Gasoline prices are affected by:

Better example: Which of the following are factors that affect the consumer price of gasoline? 

  3. Include words/phrases in the stem that would otherwise be repeated in the alternatives.

Poor example: Which of the following statements are true?
1. Slowing population growth can prevent global warming
2. Halting deforestation can prevent global warming
3. Increasing beef production on viable land can prevent global warming
4. Improving energy efficiency can prevent global warming

Better example: Which of the following techniques can be used to prevent global warming?
1. Slowing population growth
2. Halting deforestation
3. Increasing beef production on viable land
4. Improving energy efficiency 

  4. Avoid using negatively stated stems. If you must use them, highlight the negative word so that it is obvious to students.

Poor example: Which of the following is not a mandatory qualification to be the president of the United States?

Better example: Which of the following is NOT a mandatory qualification to be the president of the United States?

General guidelines for writing alternative responses:

  1. Make sure there is only one correct answer.
  2. Create distractors that are plausible to avoid students guessing the correct answer.

Poor example:
Who was the third president of the United States?
1. George Washington
2. Bugs Bunny
3. Thomas Jefferson
4. Daffy Duck

Better example: Who was the third president of the United States?
1. George Washington
2. Benjamin Franklin
3. Thomas Jefferson
4. John Adams 

  3. Make sure alternative responses are grammatically parallel to each other.

Poor example: Which of the following is the best way to build muscle?
1. Sign up to run a marathon
2. Drinking lots of water
3. Exercise classes
4. Eat protein

Better example: Which of the following is the best way to build muscle?
1. Running on a treadmill
2. Drinking lots of water
3. Lifting weights
4. Eating lots of protein

  4. When possible, list the alternative responses in a logical order (numerical, alphabetical, etc.).

Poor example: How many ounces are in a gallon?
1. 16
2. 148
3. 4
4. 128

Better example: How many ounces are in a gallon?
1. 4
2. 16
3. 128
4. 148

  5. Avoid using ‘All of the above’ or ‘None of the above’ to prevent students from using partial knowledge to arrive at the correct answer.
  6. Use at least four alternative responses to enhance the reliability of the test.

References:

Brame, C. (2013). Writing good multiple choice test questions. Retrieved December 14, 2016, from https://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions/

Burton, S. J., Sudweeks, R. R., Merrill, P.F., and Wood, B. (1991). How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty. Provo, Utah: Brigham Young University Testing Services and The Department of Instructional Science.

Multiple choice questions. The University of Texas at Austin Faculty Innovation Center. Retrieved December 14, 2016, from https://facultyinnovate.utexas.edu/teaching/check-learning/question-types/multiple-choice

Amy Brusini, Blackboard Training Specialist
Center for Educational Resources

Image Source: Pixabay.com

To Curve or Not to Curve Revisited

Yellow traffic signs showing a bell curve and a stylized graph, referencing norm-referenced and criterion-referenced grading.

The practice of normalizing grades, more popularly known as curving, was the subject of an earlier Innovative Instructor post, To Curve or Not to Curve, on May 13, 2013. That article discussed both norm-referenced grading (curving) and criterion-referenced grading (not curving). As the practice of curving has become more controversial in recent years, an op-ed piece in this past Sunday's New York Times caught my eye. In Why We Should Stop Grading Students on a Curve (The New York Times Sunday Review, September 10, 2016), Adam Grant argues that grade deflation, which occurs when teachers use a curve, is more worrisome than grade inflation. First, by limiting the number of students who can excel, curving unfairly punishes students who may have mastered the course content. Second, curving creates a "toxic," "hypercompetitive" culture in which one student's success means another's failure.
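Neither post pins down a specific method, and practices vary; as one common example (an illustration, not necessarily the scheme either author has in mind), a linear curve rescales each raw score $x_i$ so that the class mean $\bar{x}$ and standard deviation $s$ map onto chosen targets:

$$x_i' = \mu_{\text{target}} + \sigma_{\text{target}} \cdot \frac{x_i - \bar{x}}{s}$$

For instance, with a class mean of 75 and standard deviation of 10, a raw score of 82 curved to a target mean of 80 and standard deviation of 8 becomes $80 + 8 \times 0.7 = 85.6$. Any such rescaling is norm-referenced: a student's grade depends on how the class performed, which is precisely the property the two pieces debate.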

Grant, a professor of psychology at the Wharton School at the University of Pennsylvania, cites evidence that curving is a “disincentive to study.” Taking observations from his work as an organizational psychologist and applying those in his classroom, Grant has found he could both disrupt the culture of cutthroat competition and get students to work together as a team to prepare for exams. Teamwork has numerous advantages in both the classroom and the workplace as Grant details. Another important aspect is “…that one of the best ways to learn something is to teach it.” When students study together for an exam they benefit from each other’s strengths and expertise. Grant details the methods he used in constructing the exams and how his students have leveraged teamwork to improve their scores on course assessments. One device he uses is a Who Wants to Be a Millionaire-type “lifeline” for students taking the final exam. While his particular approaches may not be suitable for your teaching, the article provides food for thought.

Because I am not advocating for one way of grading over another, but rather encouraging instructors to think about why they are taking a particular approach and whether it is the best solution, I'd like to present a counterargument. In praise of grading on a curve by Eugene Volokh appeared in The Washington Post on February 9, 2015. "Eugene Volokh teaches free speech law, religious freedom law, church-state relations law, a First Amendment Amicus Brief Clinic, and tort law, at UCLA School of Law, where he has also often taught copyright law, criminal law, and a seminar on firearms regulation policy." He counters some of the standard arguments against curving by pointing out that students and exams will vary from year to year, making it difficult to draw consistent lines between, say, an A- exam and a B+ exam. This may be even more difficult for a less experienced teacher. Volokh also believes in the value of the curve for reducing the pressure to inflate grades. He points out that competing law schools tend to align their curves, making it an accepted practice for law school faculty to curve. As well, he suggests some tweaks to curving that strengthen its application.

As was pointed out in the earlier post, curving is often used in large lecture or lab courses that may have multiple sections and graders, as it provides a way to standardize grades. However, that issue may be resolved by instructing multiple graders how to assign grades based on a rubric. See The Innovative Instructor on creating rubrics and calibrating multiple graders.

Designing effective assessments is another important skill for instructors to learn, and one that can eliminate the need to use curving to adjust grades on a poorly conceived test. A good place to start is Brown University’s Harriet W. Sheridan Center for Teaching and Learning webpages on designing assessments where you will find resources compiled from a number of Teaching and Learning Centers on designing “assessments that promote and measure student learning.”  The topics include: Classroom Assessment and Feedback, Quizzes, Tests and Exams, Homework Assignments and Problem Sets, Writing Assignments, Student Presentations, Group Projects and Presentations, Labs, and Field Work.

Macie Hall, Instructional Designer
Center for Educational Resources


Image Source: © Reid Sczerba, 2013.

Rethinking Oral Examinations for Undergraduate Students

Oral examinations, also called viva voce, have long been associated with graduate studies, but many years ago, when I was an undergraduate, oral exams were not unheard of. All undergraduates at my university were required to write a thesis, and many of us took comprehensive written and oral examinations in our fields. I had several courses in my major field, art history, which held oral examinations as the final assessment of our work. At the time, this practice was not uncommon in British and European universities for undergraduates. Since then it has become a rarity both here and abroad, replaced by other forms of assessment for undergraduate students.

Stack of triangular, caution-type road signs with red borders and the word TEST in the white center.

Recently I learned that Richard Brown, Director of Undergraduate Studies and an associate teaching professor in the JHU Department of Mathematics, had experimented with oral examinations of the undergraduate students in his Honors Multivariable Calculus course.

Some background: Honors Multivariable Calculus is designed for students who are very interested in mathematics but are still learning the basics. Students must have the permission of the instructor to enroll, so they are likely to be highly motivated learners. In this instance, Brown had only six students in the class: five freshmen and one sophomore. For the freshmen, this fall course was their first college math course. They came in with varying levels of skill and knowledge, knowing that the course would be challenging. The course format was two 75-minute lectures a week and one hour-long recitation (problem-solving) session with a graduate teaching assistant. The recitation is the part of the course where students work in an interactive environment, applying theory to practice, answering questions, and getting an alternate point of view from the graduate assistant instructor.

Assessments in the course included two in-class midterms (written and timed), weekly graded homework assignments (usually problem sets), and the final exam. As Brown thought about the final, he realized that he had already seen his students' approach to timed and untimed "mathematical writing" in the midterms and homework. So why not try a different environment for the final and do an oral examination? He discussed the concept with the students in class and allowed them to decide as a class which option they preferred. The students agreed to the oral exam.

Brown made sequential appointments with the students, giving them 20 minutes each for the exam. He asked them different questions to minimize the potential for sharing information, but the questions were of the same category. For example, one student might be asked to discuss the physical or geometric interpretation of Gauss’s Theorem, and another would be given the same question about Stokes’s Theorem. If a student got stuck in answering, Brown would reword the question or provide a small hint. In contrast, on a written exam, if a student gets stuck, they are stuck. You may never identify exactly what they know and don’t know. Another advantage, Brown discovered, was that by seeing how a student answered a question, he could adjust follow up questions to get a deeper understanding of the student’s depth of learning. He could probe to assess understanding or push to see how far the student could go. He found the oral exam gave him a much more comprehensive view of their knowledge than a written one.

In terms of grading, Brown noted that by the end of the semester he knew the students quite well and had a feel for their levels of comprehension, so in many ways the exam was a confirmation. He did not have a written rubric for the exam, as he did for the midterms, but he did take notes to share with students who wanted to debrief on their performance. He saw this as a more subjective assessment, balanced by the relatively objective assessment of the homework and midterms.

Following up with students after the exam, Brown found that four of the six students really liked the format and found it easier than anticipated. Only two of the students had planned to become majors at the start of the course, but ultimately four declared a mathematics major. Brown noted that he would like to use the oral examination again in the future, but felt that it would not be possible with more than 10 students in a class.

After talking with Brown, I searched to find recent literature on undergraduate oral exams. Two papers are worth reading if the concept is of interest:

Oral vs. Written Evaluation of Students, Ulf Asklund and Lars Bendix, Department of Computer Science, Lund Institute of Technology, Pedagogisk Inspirationskonferens, Lund University Publications, 2003. A conference paper detailing the advantages and disadvantages of the two formats. The authors, based on their experience, found that oral examinations are better suited than written ones for evaluating higher levels of understanding in Bloom's Taxonomy.

Oral versus written assessments: A test of student performance and attitudes, Mark Huxham, Fiona Campbell, and Jenny Westwood, Assessment & Evaluation in Higher Education 37(1):125-136, January 2012. This study of two cohorts of students examined “…[s]tudent performance in and attitudes towards oral and written assessments using quantitative and qualitative methods.” Many positive aspects of oral examinations were found. See also a SlideShare Summary of this paper. Possible benefits of oral assessment included: “1) Development of oral communication skills 2) More ‘authentic’ assessment 3) More inclusive 4) Gauging understanding & Encouraging critical thinking 5) Less potential for plagiarism 6) Better at conveying nuances of meaning 7) Easier to spot rote-learning.”

A site to explore is the University of Pittsburgh’s Speaking in the Disciplines initiative. “…committed to the centrality of oral expression in the educational process.” Detailed information for those considering oral examinations is provided, including benefits (“direct, dialogic feedback,” “encourages in-depth preparation,” “demands different skills,” “valuable practice for future professional activity,” and “reduced grading stress”) and potential drawbacks (“time,” “student resistance and inexperience,” “subjective grading,” and “inadequate coverage of material”).

**********************************************************************************

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Pixabay.com

Report on the JHU Symposium on Excellence in Teaching and Learning in the Sciences

Logo for the JHU Gateway Sciences Initiative.

On January 11th and 12th, Johns Hopkins University held its fourth Symposium on Excellence in Teaching and Learning in the Sciences, a two-day event co-sponsored by the Science of Learning Institute and the Gateway Sciences Initiative (GSI). The first day highlighted cognitive learning research; the second day examined the practical application of techniques, programs, tools, and strategies that promote gateway science learning. The objective was to explore recent findings about how humans learn and pair them with the latest thinking on teaching strategies that work. Four hundred people attended over the course of the two days: approximately 80% from Johns Hopkins University, with representation from all divisions, and 20% from other universities, K-12 school systems, organizations, and companies. Videos of the January 12th presentations are now available.

The GSI program included four guest speakers and three Johns Hopkins speakers. David Asai, Senior Director of Science Education at Howard Hughes Medical Institute, argued persuasively for the impact of diversity and inclusion as essential to scientific excellence.  He said that while linear interventions (i.e., summer bridge activities, research experiences, remedial courses, and mentoring/advising programs) can be effective at times, they are not capable of scaling to support the exponential change needed to mobilize a diverse group of problem solvers prepared to address the difficult and complex problems of the 21st Century.  He asked audience participants to consider this:  “Rather than developing programs to ‘fix the student’ and measuring success by counting participants, how can we change the capacity of the institution to create an inclusive campus climate and leverage the strengths of diversity?” [video]

Sheri Sheppard, professor of mechanical engineering at Stanford University, discussed learning objectives and course design in her presentation: Cooking up the modern undergraduate engineering education—learning objectives are a key ingredient [video].

Eileen Haase, senior lecturer in biomedical engineering at Johns Hopkins, discussed the development of the biomedical engineering design studio from the perspectives of both active learning classroom space and curriculum [video]. Evidence-based approaches to curriculum reform and assessment was the topic addressed by Melanie Cooper, the Lappan-Phillips Chair of Science Education at Michigan State University [video]. Tyrel McQueen, associate professor of chemistry at Johns Hopkins, talked about his experience with discovery-driven experiential learning in a report on the chemical structure and bonding laboratory, a new course developed for advanced freshmen [video]. Also from Hopkins, Robert Leheny, professor of physics, spoke on his work developing an active-learning-based course in introductory physics [video].

Steven Luck, professor of psychology at the University of California at Davis, provided an informative and inspiring conclusion to the day with his presentation of the methods, benefits, challenges, and assessment recommendations for how to transform a traditional large lecture course into a hybrid format [video].

Also of interest may be the videos of the presentations from the Science of Learning Symposium on January 11, 2016. Speakers included: Ed Connor, Johns Hopkins University; Jason Eisner, Johns Hopkins University; Richard Huganir, Johns Hopkins University; Katherine Kinzler, University of Chicago; Bruce McCandliss, Stanford University; Elissa Newport, Georgetown University; Jonathan Plucker, University of Connecticut; Brenda Rapp, Johns Hopkins University; and Alan Yuille, Johns Hopkins University.

*********************************************************************************************************

Kelly Clark, Program Manager
Center for Educational Resources

Image Source: JHU Gateway Sciences Initiative logo

Using the Critique Method for Peer Assessment

As a writer, I have been an active participant in a formal critique group facilitated by a professional author and editor. The critique process, for those who aren't familiar with the practice, involves sharing work (traditionally, writing and studio arts) with a group for review and discussion. Typically, the person whose work is being critiqued must listen without interrupting as others provide comments and suggestions. Critiques are most useful if a rubric and a set of standards for review are provided and adhered to during the commentary. For example, in my group, we are not allowed to say, "I don't like stories that are set in the past." Instead we must provide specific examples to improve the writing: "In terms of authoritative writing, telephones were not yet in wide use in 1870. This creates a problem for your storyline." After everyone has made their comments, the facilitator adds to and summarizes them, correcting any misconceptions. Then the writer has a chance to ask questions for clarification or offer brief explanations. In critique, both the creator and the reviewers benefit. Speaking personally, the process of peer evaluation has honed my editorial skills as well as improved my writing.

Looking down on a group of four students with laptops sitting at a table in discussion.

With peer assessment becoming a pedagogical practice of interest to our faculty, could the critique process provide an established model that might be useful in disciplines outside the arts? A recent post on the Tomorrow's Professor Mailing List, Teaching Through Critique: An Extra-Disciplinary Approach, by Johanna Inman, MFA, Assistant Director, Teaching and Learning Center, Temple University, addresses this topic.

"The critique is both a learning activity and assessment that aligns with several significant learning goals such as critical thinking, verbal communication, and analytical or evaluation skills. The critique provides an excellent platform for faculty to model these skills and evaluate if students are attaining them." Inman notes that critiquing involves active learning, formative assessment, and community building. Critiques can be used to evaluate a number of different assignments in almost any discipline, including short papers and other writing assignments, multimedia projects, oral presentations, performances, clinical procedures, interviews, and business plans. In short, any assignment that can be shared and evaluated through a specific rubric can be evaluated through critique.

A concrete rubric is at the heart of recommended best practices for critique. “Providing students with the learning goals for the assignment or a specific rubric before they complete the assignment and then reviewing it before critique can establish a focused dialogue. Additionally, prompts such as Is this work effective and why? or Does this effectively fulfill the assignment? or even Is the planning of the work evident? generally lead to more meaningful conversations than questions such as What do you think?

It is equally important to establish guidelines for the process, what Inman refers to as an etiquette for providing and receiving constructive criticism. Those on the receiving end should listen and keep an open mind. Learning to accept criticism without getting defensive is a life skill that will serve students well. Those providing the assessment, Inman says, should critique the work, not the student, and offer specific suggestions for improvement. The instructor or facilitator should foster a climate of civility.

Inman offers tips for managing class time during a critique session and specific advice for instructors to ensure a balanced discussion. For more on peer assessment generally, see the University of Texas at Austin Center for Teaching and Learning's page on Peer Assessment. The Cornell Center for Teaching Excellence also has good advice for instructors interested in peer assessment, answering questions about how students might perceive and push back against the activity. Peer assessment, whether using a traditional critique method or another approach, benefits students in many ways: as they learn to evaluate others' work, they strengthen their own.

*********************************************************************************************************

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Meeting. CC BY-SA Marco Antonio Torres https://www.flickr.com/photos/torres21/3052366680/in/photostream/