Lunch and Learn: Strategies to Minimize Cheating (A Faculty Brainstorming Session)

On Wednesday, April 17, the Center for Educational Resources (CER) hosted the final Lunch and Learn for the 2018-2019 academic year: Strategies to Minimize Cheating (A Faculty Brainstorming Session). As the title suggests, the format of this event was slightly different from that of past Lunch and Learns. Faculty attendees openly discussed their experiences with cheating as well as possible solutions to the problem. The conversation was moderated by James Spicer, Professor, Materials Science and Engineering, and Dana Broadnax, Director of Student Conduct.

The discussion began with attendees sharing examples of academic misconduct they identified. The results included: copying homework, problem solutions, and lab reports; using other students’ clickers; working together on take-home exams; plagiarizing material from Wikipedia (or other sites); and using online solution guides (such as chegg.com, coursehero.com, etc.).

Broadnax presented data from the Office of the Dean of Student Life regarding the numbers of cheating incidents per school, types of violations, and outcomes. She stressed to faculty members how important it is to report incidents to help her staff identify patterns and repeat offenders. If it’s a student’s first offense, faculty are allowed to determine outcomes that do not result in failure of the course, transcript notation, or change to student status. Options include: assigning a zero to the assessment, offering a retake of the assessment, lowering the course grade, or giving a formal warning.  A student’s second or subsequent offense must be adjudicated by a hearing panel (Section D – https://studentaffairs.jhu.edu/policies-guidelines/undergrad-ethics/).

Some faculty shared their reluctance to report misconduct because of the time required to submit a report. Someone else remarked that when reporting, she felt like a prosecutor. As a longtime ethics board member, Spicer acknowledged the burdens of reporting but stressed its importance. He also shared that faculty do not act as prosecutors at a hearing; they only provide evidence for the hearing panel to consider. Broadnax agreed and expressed interest in finding ways to make the process easier for faculty. She encouraged faculty to share more of their experiences with her.

The discussion continued with faculty sharing ideas and strategies they’ve used to help reduce incidents of cheating. A summary follows:

  • Do not assume that students know what is considered cheating. Communicate clearly what is acceptable/not acceptable for group work, independent work, etc. Clearly state on your syllabus or assignment instructions what is considered a violation.
  • Let students know that you are serious about this issue. Some faculty reported that their first assignment of the semester requires students to review the ethics board website and answer questions. If you serve or have served on the ethics board, let students know.
  • Include an ethics statement at the beginning of assignment instructions rather than at the end. Research suggests that signing ethics statements placed at the beginning of tax forms rather than at the end reduces dishonest reporting.
  • Do not let ‘low levels’ of dishonesty go without following University protocol – small infractions may lead to more serious ones. The message needs to be that no level of dishonesty is acceptable.
  • Create multiple opportunities for students to submit writing samples (example: submit weekly class notes to Blackboard) so you can get to know their writing styles and recognize possible instances of plagiarism.
  • Plagiarism detection software, such as Turnitin, can be used to flag possible misconduct, but can also be used as an instructional tool to help students recognize when they are unintentionally plagiarizing.
  • Emphasize the point of doing assignments: to learn new material and gain valuable critical thinking skills. Take the time to personally discuss assignments and paper topics with students so they know you are taking their work seriously.
  • If using clickers, send a TA to the back of the classroom to monitor clicker usage. Pay close attention to attendance so you can recognize if a clicker score appears for an absent student.
  • Ban the use of electronic devices during exams if possible. Be aware that Apple Watches can be consulted.
  • Create and hand out multiple versions of exams, but don’t tell students there are different versions. Try not to re-use exam questions.
  • Check restrooms before or during exams to make sure information is not posted.
  • Ask students to move to different seats (such as the front row) if you suspect they are cheating during an exam. If a student becomes defensive, tell him/her that you don’t know for sure whether or not cheating has occurred, but that you would like him/her to move anyway.
  • Make your Blackboard site ‘unavailable’ during exams; turn it back on after everyone has completed the exam.
  • To discourage students from faking illness on exam days, only offer make-ups as oral exams. One faculty member shared this policy significantly reduced the number of make-ups due to illness in his class.

Several faculty noted the high-stress culture among JHU students and how it may play a part in driving them to cheat. Many agreed that in order to resolve this, we need to create an environment where students don’t feel the pressure to cheat. One suggestion was to avoid curving grades in a way that puts students in competition with each other.  Another suggestion was to offer more pass/fail classes. This was met with some resistance as faculty considered the rigor required by courses students need to get into medical school. Yet another suggestion was to encourage students to consult with their instructor if they feel the temptation to cheat. The instructor can help address the problem by considering different ways of handling the situation, including offering alternative assessments when appropriate. Broadnax acknowledged the stress, pressure, and competition among students, but also noted that these are not excuses to cheat: “Our students are better served by learning to best navigate those factors and still maintain a standard of excellence.”

Amy Brusini, Senior Instructional Designer
Center for Educational Resources

Image Source: Lunch and Learn Logo

Lunch and Learn: Innovative Grading Strategies

On Thursday, February 28, the Center for Educational Resources (CER) hosted the third Lunch and Learn for the 2018-2019 academic year. Rebecca Kelly, Associate Teaching Professor, Earth and Planetary Sciences and Director of the Environmental Science and Studies Program, and Pedro Julian, Associate Professor, Electrical and Computer Engineering, presented on Innovative Grading Strategies.

Rebecca Kelly began the presentation by discussing some of the problems in traditional grading. There is a general lack of clarity in what grades actually mean, and students and faculty view them quite differently. Faculty use grades to elicit certain behaviors from students, but eliciting those behaviors doesn’t necessarily mean that students are learning. Kelly noted that students, especially those at JHU, tend to be focused on the grade itself, aiming for a specific number rather than the learning; this often results in high levels of student anxiety, something she sees often. She explained that students here get few chances to fail without their grades being negatively affected. Therefore, every assessment is a source of stress because it counts toward their grade. There are too few opportunities for students to learn from their mistakes.

Kelly mentioned additional challenges that faculty face when grading: it is often time consuming, energy draining, and stressful, especially when haggling over points with students. She makes an effort to provide clearly stated learning goals and rubrics for each assignment, which do help, but are not always enough to ease the burden.

Kelly introduced the audience to specifications grading and described how she’s recently started using this approach in Introduction to Geographic Information Systems (GIS). With specifications grading (also described in a recent CER Innovative Instructor article), students are graded pass/fail or satisfactory/unsatisfactory on individual assessments that align directly with learning goals. Course grades are determined by the number of learning goals mastered. This is measured by the number of assessments passed. For example, passing 20 or more assignments out of 23 would equate to an A; 17-19 assignments would equate to a B. Kelly stresses the importance of maintaining high standards; for rigor, the threshold for passing should be a B or better.
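The bundle mapping Kelly describes is easy to express in code. As a rough sketch (only the A and B cutoffs come from the presentation; the C threshold below is a hypothetical placeholder):

```python
# Sketch of a specifications-grading bundle mapping. Only the A cutoff
# (20+ of 23 assessments) and B cutoff (17-19) were given in the talk;
# the C threshold is an invented placeholder for illustration.
GRADE_BUNDLES = [
    (20, "A"),  # pass 20 or more of the 23 assessments
    (17, "B"),  # pass 17-19
    (14, "C"),  # hypothetical threshold, not from the presentation
]

def course_grade(assessments_passed: int) -> str:
    """Map the number of passed assessments to a letter grade."""
    for minimum, letter in GRADE_BUNDLES:
        if assessments_passed >= minimum:
            return letter
    return "F"
```

Because each assessment is simply pass/fail, the whole course grade reduces to counting passes against these thresholds.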

In Kelly’s class, students have multiple opportunities to achieve their goals. Each student receives three tokens that he/she can use to re-do an assignment that doesn’t pass, or select a different assignment altogether from the ‘bundle’ of assignments available. Kelly noted the tendency of students to ‘hoard’ their tokens and how it actually works out favorably; instead of risking having to use a token, students often seek out her feedback before turning anything in.

Introduction to GIS has both a lecture and a lab component. The lab requires students to use software to create maps that are then used to perform data analysis. The very specific nature of the assignments in this class lends itself well to the specifications grading approach. Kelly noted that students are somewhat anxious about this approach at first, but settle into it once they fully understand it. In addition to clearly laying out expectations, Kelly lists the learning goals of the course and how they align with each assignment (see slides). She also provides students with a table showing the bundles of assignments required to reach final course grades. Additionally, she distributes a pacing guide to help students avoid procrastination.

The results that Kelly has experienced with specifications grading have been positive. Students generally like it because the expectations are very clear and initial failure does not count against them; there are multiple opportunities to succeed. Grading is quick and easy because of the pass/fail system; if something doesn’t meet the requirements, it is simply marked unsatisfactory. The quality of student work is high because there is no credit for sloppy work. Kelly acknowledged that specifications grading is not ideal for all courses, but feels the grade earned in her GIS course is a true representation of the student’s skill level in GIS.

Pedro Julian described a different grading practice that he is using, something he calls the “extra grade approach.” He currently uses this approach in Digital Systems Fundamentals, a hands-on design course for freshmen. In this course, Julian uses a typical grading scale: 20% for the midterm, 40% for labs and homework, and 40% for the final project. However, he augments the scale by offering another 20% if students agree to put in extra work throughout the semester. How much extra work? Students must commit to working collaboratively with instructors (and other students seeking the 20% credit) for one hour or more per week on an additional project.  This year, the project is to build a vending machine. Past projects include building an elevator out of Legos and building a robot that followed a specific path on the floor.
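As a back-of-the-envelope sketch, Julian’s weighting can be written as a simple calculation (all scores assumed to be on a 0-100 scale; capping the total at 100 is my assumption, not something stated in the talk):

```python
def final_score(midterm: float, labs_homework: float, project: float,
                extra: float = 0.0) -> float:
    """Weighted course total: 20% midterm, 40% labs/homework, 40% final
    project, plus up to 20 extra points for the optional weekly project.
    Inputs are assumed to be on a 0-100 scale; capping the result at 100
    is an assumption for illustration.
    """
    base = 0.20 * midterm + 0.40 * labs_homework + 0.40 * project
    return min(100.0, base + min(extra, 20.0))
```

The point of the extra 20% is visible in the arithmetic: a student sitting at a B-range base score can commit the weekly hour and move a full letter grade.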

Julian described how motivated students are to complete the extra project once they commit to putting in the time. Students quickly realize that they learn all sorts of skills they would not have otherwise learned and are very proud and engaged. Student participation in the “extra grade” option has grown steadily since Julian started using this approach three years ago. The first year there were 5-10 students who signed up, and this year there are 30. Julian showed histograms (see slides) of student grades from past semesters in his class and how the extra grade has helped push overall grades higher.  The histograms also show that it’s not just students who may be struggling with the class who are choosing to participate in the extra grade, but “A students” as well.

Similar to Rebecca Kelly’s experience, Julian expressed how grade-focused JHU students are, much to his dismay. In an attempt to take some of the pressure off, he described how he repeatedly tells his students that if they work hard, they will get a good grade; he even includes this phrase in his syllabus. Julian explained how he truly wants students to concentrate more on the learning and not on the grade, which is his motivation behind the “extra grade” approach.

An interesting discussion with several questions from the audience followed the presentations. Below are some of the questions asked and responses given by Kelly and Julian, as well as audience members.

Q: (for Julian) Some students may not have the time or flexibility in their schedule to take part in an extra project. Do you have suggestions for them? Did you consider this when creating the “extra grade” option?

Julian responded that in his experience, freshmen seem to be available. Many of them make time to come in on the weekends. He wants students to know he’s giving them an “escape route,” a way for them to make up their grade, and they seem to find the time to make it happen.  Julian has never had a student come to him saying he/she cannot participate because of scheduling conflicts.

Q: How has grade distribution changed?

Kelly remarked how motivated the students are and therefore she had no Cs, very few Bs, and the rest As this past semester. She expressed how important it is to make sure that the A is attainable for students. She feels confident that she’s had enough experience to know what counts as an A. Every student can do it, the question is, will they?

Q: (for Kelly) Would there ever be a scenario where students would do the last half of the goals and skip the first half?

Kelly responded that she has never seen anyone jump over everything and that it makes more sense to work sequentially.

Q: (for Kelly) Is there detailed feedback provided when students fail an assignment?

Kelly commented that it depends on the assignment, but if students don’t follow the directions, that’s the feedback – to follow the directions. If it’s a project, Kelly will meet with the student, go over the assignment, and provide immediate feedback. She noted that she finds oral feedback much more effective than written feedback.

Q: (for Kelly) Could specs grading be applied in online classes?

Kelly responded that she thinks this approach could definitely be used in online classes, as long as feedback could be provided effectively. She also stressed the need for rubrics, examples, and clear goals.

Q: Has anyone tried measuring individual learning gains within a class? What skills are students coming in with? Are we actually measuring gain?

Kelly commented that specifications grading works as a complement to competency-based grading, which focuses on measuring gains in very specific skills.

Julian commented that this issue comes up in his class: students come in with varying degrees of experience. He stated that this is another reason to offer the extra credit, to keep things interesting for those who want to move at a faster pace.

The discussion continued among presenters and audience members about what students are learning in a class vs. what they bring in with them. A point was raised: if students already know the material in a class, should they even be there? Another comment questioned whether it is even an instructor’s place to determine what students already know. Additional comments were made about what grades mean and concerns about grades being used for different purposes, i.e., employers looking for specific skills, instructors writing recommendation letters, etc.

Q: Could these methods be used in group work?

Kelly responded that with specifications grading, you would have to find a way to evaluate the group. It might be possible to still score on an individual basis within the group, but it would depend on the goals. She mentioned peer evaluations as a possibility.

Julian stated that all grades are based on individual work in his class. He does use groups in a senior level class that he teaches, but students are still graded individually.

The event concluded with a discussion about how using “curve balls” – intentionally difficult questions designed to catch students off-guard – on exams can lead to challenging grading situations. For example, to ultimately solve a problem, students would need to first select the correct tools before beginning the solution process. Some faculty were in favor of including this type of question on exams, while others were not, noting the already high levels of exam stress.  A suggestion was made to give students partial credit for the process even if they don’t end up with the correct answer. Another suggestion was to give an oral exam in order to hear the student’s thought process as he/she worked through the challenge. This would be another way for students to receive partial credit for their ideas and effort, even if the final answer was incorrect.

Amy Brusini, Senior Instructional Designer
Center for Educational Resources

Image Sources: Lunch and Learn Logo, slide from Kelly presentation

An Evidence-based Approach to Effective Studying

Dr. Culhane is Professor and Chair of the Department of Pharmaceutical Sciences at Notre Dame of Maryland University School of Pharmacy.

If you are like me, much of your time is spent ensuring that the classroom learning experience you provide for your students is stimulating, interactive and impactful. But how invested are we in ensuring that what students do outside of class is productive? Based on my anecdotal experience and several studies [1,2,3] looking at study strategies employed by students, the answer to this question is not nearly enough! Much like professional athletes or musicians, our students are asked to perform at a high level, mastering advanced, information-dense subjects; yet unlike these specialists who have spent years honing the skills of their craft, very few students have had any formal training in the basic skills necessary to learn successfully. It should be no surprise to us that when left to their own devices, our students tend to mismanage their time, fall victim to distractions and gravitate towards low impact or inefficient learning strategies. Even if students are familiar with high impact strategies and how to use them, it is easy for them to default back to bad habits, especially when they are overloaded with work and pressed for time.

Several years ago, I began to seriously think about and research this issue in hopes of developing an evidence-based process that would be easy for students to learn and implement. Out of this work I developed a strategy focused on the development of metacognition – thinking about how one learns. I based it on extensively studied, high impact learning techniques including distributed learning, self-testing, interleaving and application practice [4]. I call this strategy the S.A.L.A.M.I. method. This method is named after a metaphor used by one of my graduate school professors. He argued that learning is like eating a salami. If you eat the salami one slice at a time, rather than trying to eat the whole salami in one sitting, the salami is more likely to stay with you. Many readers will see that this analogy represents the effectiveness of distributed learning over the “binge and purge” method which many of our students gravitate towards.

S.A.L.A.M.I. is a “backronym” for Systematic Approach to Learning And Metacognitive Improvement. The method is structured around typical, daily learning experiences that I refer to as the five S.A.L.A.M.I. steps:

  1. Pre-class preparation
  2. In-class engagement
  3. Post-class review
  4. Pre-exam preparation
  5. Post-assessment review

When teaching the S.A.L.A.M.I. method, I explain how each of the five steps corresponds to different “stages” or components of learning (see figure 1). Through mastery of the skills associated with each of the five S.A.L.A.M.I. steps, students can more efficiently and effectively master a subject area.

S.A.L.A.M.I. Steps

Figure 1

Despite its simplicity, this model provides a starting point to help students understand that learning is a process that takes time, requires the use of different learning strategies and can benefit from the development of metacognitive awareness. Specific techniques designed to enhance metacognition and learning are employed during each of the five steps, helping students use their time effectively, maximize learning and achieve subject mastery. Describing all the tools and techniques recommended for each of the five steps would be beyond the scope of this post, but I would like to share two that I have found useful for students to evaluate the effectiveness of their learning and make data driven changes to their study strategies.

Let us return to our example of professional athletes and musicians: these individuals maintain high levels of performance by consistently monitoring and evaluating the efficacy of their practice as well as reviewing their performance after games or concerts. If we translate this example to an academic environment, the practice or rehearsal becomes student learning (in and out of class) and the game or concert acts as the assessment. We often evaluate students’ formative or summative “performances” with grades and written or verbal feedback. But what type of feedback do we give them to help improve the efficacy of their preparation for those “performances?” If we do give them feedback about how to improve their learning process, is it evidence-based and directed at improving metacognition, or do we simply tell them they need to study harder or join a study group in order to improve their learning? I would contend that we could do more to help students evaluate their approach to learning outside of class and examination performance. This is where a pre-exam checklist and exam wrapper can be helpful.

The inspiration for the pre-exam checklist came from the pre-flight checklist a pilot friend of mine uses to ensure that he and his private aircraft are ready for flight.  I decided to develop a similar tool for my students that would allow them to monitor and evaluate the effectiveness of their preparation for upcoming assessments. The form is based on a series of reflective questions that help students think about the effectiveness of their daily study habits. If used consistently over time and evaluated by a knowledgeable faculty or learning specialist, this tool can help students be more successful in making sustainable, data driven changes in their approach to learning.

Another tool that I use is called an exam wrapper. There are many examples of exam wrappers online, however, I developed my own wrapper based on the different stages or components of learning shown in figure 1. The S.A.L.A.M.I. wrapper is divided into five different sections. Three of the five sections focus on the following stages or components of learning: understanding and building context, consolidation, and application. The remaining two sections focus on exam skills and environmental factors that may impact performance. Under each of the five sections is a series of statements that describe possible reasons for missing an exam question. The student analyzes each missed question and matches one or more of the statements on the wrapper to each one. Based on the results of the analysis, the student can identify the component of learning, exam skill or environmental factors that they are struggling with and begin to take corrective action. Both the pre-exam checklist and exam wrapper can be used to help “diagnose” the learning issue that academically struggling students may be experiencing.
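The wrapper analysis described above amounts to tallying which sections of the wrapper are matched most often across the missed questions. A minimal sketch, with section labels paraphrased from the post and hypothetical per-question data:

```python
from collections import Counter

# The five wrapper section names, paraphrased from the post.
SECTIONS = [
    "understanding and building context",
    "consolidation",
    "application",
    "exam skills",
    "environmental factors",
]

def diagnose(missed_questions: list[list[str]]) -> list[tuple[str, int]]:
    """Given, for each missed exam question, the wrapper sections the
    student matched to it, return sections ordered by how often they
    were implicated (most frequent first)."""
    tally = Counter(section
                    for matches in missed_questions
                    for section in matches)
    return tally.most_common()
```

The section at the top of the tally points to the component of learning (or exam skill, or environmental factor) the student should target first.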

Two of the most common issues that I diagnose involve illusions of learning [5]. Students who suffer from the ‘illusion of knowledge’ often mistake their understanding of a topic for mastery. These students anticipate getting a high grade on an assessment but end up frustrated and confused when receiving a much lower grade than expected. Information from the S.A.L.A.M.I. wrapper can help them realize that although they may have understood the concept being taught, they could not effectively recall important facts and apply them. Students who suffer from the ‘illusion of productivity’ often spend extensive time preparing for an exam, however, the techniques they use are extremely passive. Commonly used passive study strategies include: highlighting, recopying and re-reading notes, or listening to audio/video recordings of lectures in their entirety. The pre-exam checklist can help students identify the learning strategies they are using and reflect on their effectiveness. When I encounter students favoring the use of passive learning strategies I use the analogy of trying to dig a six-foot-deep hole with a spoon: “You will certainly work hard for hours moving dirt with a spoon, but you would be a lot more productive if you learned how to use a shovel.” The shovel in this case represents adopting strategies such as distributed practice, self-testing, interleaving and application practice.

Rather than relying on anecdotal advice from classmates or old habits that are no longer working, students should seek help early, consistently practice effective and efficient study strategies, and remember that digesting information (e.g., a S.A.L.A.M.I.) in small doses is always more effective at ‘keeping the information down’ so it may be applied and utilized successfully later.

  1. Kornell, N., Bjork, R. The promise and perils of self-regulated study. Psychon Bull Rev. 2007;14 (2): 219-224.
  2. Karpicke, J. D., Butler, A. C., & Roediger, H. L. Metacognitive strategies in student learning: Do students practice retrieval when they study on their own? Memory. 2009; 17: 471– 479.
  3. Persky, A.M., Hudson, S. L. A snapshot of student study strategies across a professional pharmacy curriculum: Are students using evidence-based practice? Curr Pharm Teach Learn. 2016; 8: 141-147.
  4. Dunlosky, J., Rawson, K.A., Marsh, E.J., Nathan, M.J., Willingham, D.T. Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychol Sci Publ Int. 2013; 14 (1): 4-58.
  5. Koriat, A., & Bjork, R. A. Illusions of competence during study can be remedied by manipulations that enhance learners’ sensitivity to retrieval conditions at test. Memory & Cognition. 2006; 34: 959-972.

James M. Culhane, Ph.D.
Chair and Professor, School of Pharmacy, Notre Dame of Maryland University

Tips for Teaching International Students

As with many of our Innovative Instructor posts, this one was prompted by an inquiry from an instructor looking for resources, in this case for teaching international students. Johns Hopkins, among other American universities, has increased the number of international students admitted over the past ten years, both at the graduate and undergraduate level. These students bring welcome diversity to our campuses, but some of them face challenges in adapting to American educational practices and social customs. Fluency in English may be a barrier to their academic and social success. Following are three articles and an online guide that examine the issues and provide strategies for faculty teaching international students.

First up, a scholarly article that both summarizes some of the past research on international students and reports on a study undertaken by the authors: Best Practices in Teaching International Students in Higher Education: Issues and Strategies, Alexander Macgregor and Giacomo Folinazzo, TESOL Journal, Volume 9, Issue 2, June 18, 2018, pp. 299-329. https://doi.org/10.1002/tesj.324  “This article discusses an online survey carried out in a Canadian college [Niagara College, Niagara-on-the-Lake, Ontario] that identified academic and sociocultural issues faced by international students and highlighted current or potential strategies from the input of 229 international students, 343 domestic students, and 125 professors.” The study sought to address the challenges that international students face in English-language colleges and universities, understand the difference in the perceptions of those challenges among faculty, domestic students, and the international students themselves, and suggest strategies for improving learning outcomes for international students.

International students need to know technical terms (and other vocabulary) and concepts to succeed, but complex cultural mores may hinder them from seeking assistance when needed and they may be reluctant to speak in class. These barriers exist even among students with high TOEFL (Test of English as a Foreign Language) scores. Unfamiliarity with American pedagogical practices, such as classroom participation and active learning, along with lack of awareness of American social rules and skills may further isolate these students.

The researchers used an online survey to identify the challenges that international students face and to suggest solutions. Key points in the findings include: 1) international students feel the area they most need to improve is proactive academic behavior, rather than language skills per se; 2) a lack of clarity on academic expectations of assessments and assignments hinders their success; 3) both faculty and domestic students feel that some accommodations for international students are appropriate (e.g., dictionary use in class and during exams, extra time for exams, lecture notes given out before class).

The authors conclude that “IS [International Student] input suggests professors could respond by providing clear guidelines for task expectations, aims, and instructions in multisensory formats (simplify the message without changing the material), clarifying content/format expectations with exemplars, and collecting exemplars of outstanding student work and substandard student work from past terms and using them as examples to clarify expectations.” The authors suggest faculty provide opportunities for language development, create a positive classroom climate, become informed about their students’ cultures, avoid fostering fear of error, reinforce students’ strengths, and emphasize the importance of office hours.

An article from Inside Higher Ed, Teaching International Students, Elizabeth Redden, December 1, 2014, looks at the challenges for institutions of higher education and their instructors in teaching international students and the implications for classroom “dynamics and practices”.

The author interviewed faculty at the University of Denver on the challenges they faced in teaching international students. Plagiarism is mentioned as a problem in some cases due to different practices in other countries. English as a second language (ESL) barriers were cited by a professor of classics and humanities, who has made an effort to teach a first-year seminar that compares Chinese and Western classical literature in order to bridge the cultural gap.

Faculty at the University of Denver have pushed the administration to change admission policies with regard to the TOEFL, raising the score requirements. “In addition, Denver now requires admitted students who are non-native English speakers to take the university’s own English language proficiency test upon arrival. Despite having already achieved the standardized test scores required for admission, students who score poorly on Denver’s assessment may be required to enroll full-time in the university’s English Language Center before being allowed to begin their degree program.” This has meant potentially losing international students to competing undergraduate programs, but the school wanted to make sure that its students had a positive classroom experience.

Several faculty describe courses they have taught that “…will serve to enhance the quality of education by creating the opportunity for more cross-cultural conversations and a kind of perspective-shifting.” This is an ideal situation, of course, and not all instructors have the flexibility to create new courses to take advantage of global viewpoints. Nonetheless, there are other strategies University of Denver faculty shared to improve learning experiences for international students, as well as their domestic counterparts.

Students may self-segregate when seated in the classroom, so breaking up cultural groups and ensuring that students work across nationalities is important. Instructors should be aware that cultural references, slang, and idioms may not be understood by international students. Careful use of PowerPoint slides to reinforce course concepts, and sharing those slides with all students, ideally in advance of class, is recommended. Learn students’ names and how to pronounce them correctly. Learn something about their countries and cultures. “Professors talked about priming non-native speakers in various ways so they would be more apt to participate in class discussions, whether by allowing students to prepare their thoughts in a homework or in-class writing assignment, starting off class with a think-pair-share type activity, or appointing a different student to be a discussion leader each week.” The University of Denver Office of Teaching and Learning provides a web page on Teaching International Students with helpful advice. Many of these recommendations are best practices for all students.

The article addresses the issues of consistency of standards and assessment. The consensus is that standards must be applied across the board to English speakers and ESL speakers alike. Writing assignments are particularly challenging. Doug Hesse, professor and executive director of the writing program at Denver, notes that gaining fluency in writing may take non-native speakers five to ten years. What, then, are fair expectations in terms of grading writing assignments?

“Hesse emphasizes the need to distinguish between global problems and micro-level errors in student writing. He isolates three dimensions of student writing: ‘aptness of content and approach to the task,’ ‘rhetorical fit,’ and ‘conformity to conventions of edited American English.’ He advises that professors ‘read charitably,’ reading for ‘content and rhetorical strategy’ as much as — or, actually, even prior to — reading for surface errors.” Hesse concedes that if the errors interfere with comprehension, that’s a problem, but he focuses his attention on content and approach. And he recommends “…sharing models for writing assignments, spending class time generating ideas for a paper, reading a draft and offering feedback, and structuring long projects in stages.” These, like the suggestions above, will be beneficial to all students. The University of Denver Writing Program offers a set of Guidelines for Responding to the Writing of International Students.

The University of Michigan, Center for Research on Teaching and Learning offers Teaching International Students: Pedagogical Issues and Strategies, another useful web guide for instructors. While some of the materials are specific to University of Michigan faculty, the topics Bridging Differences in Background Knowledge and Classroom Practice, Teaching Non-Native Speakers of English, Improving Climate, and Promoting Academic Integrity will be useful to all instructors.

If the deep dive of the first two articles is more than you are looking for, Teaching International Students: Six Ways to Smooth the Transition, Eman Elturki, Faculty Focus, June 29, 2018, cuts straight to the chase with practical tips. In a nutshell:

  • Communicate classroom expectations and policies clearly.
  • Encourage students to make use of office hours.
  • Discuss academic integrity.
  • Make course materials available.
  • Demystify assignment requirements.
  • Incorporate opportunities for collaborative learning.

More detail is provided on implementing these suggestions. Elturki sums up by repeating advice similar to that of the faculty at the University of Denver: “…pursuing higher education in a foreign country can be challenging. Being mindful of international students in your classroom and incorporating ways to help them adapt to the new educational system can reduce their stress and help them succeed. In fact, adopting these practices have the potential to help all students, whether they grew up in the next town over or the other side of the globe.”

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay.com

Grading in the fast lane with Gradescope

[Guest post by Scott Smith, Professor, Computer Science, Johns Hopkins University]

Three speedometers for quality, grades per hour, and efficiency.

Grading can be one of the most time consuming and tedious aspects of teaching a course, but it’s important to give prompt and meaningful feedback to your students. In large courses, aligning grading practices across multiple teaching assistants (TAs) necessitates a level of coordination that includes scheduling grading meetings, reviewing materials for correct answers, and calibrating point evaluations, all of which can take up valuable time during the semester.

In courses that teach programming, we typically assign students projects that require them to write programs to solve problems. When instructors grade this type of assignment, they not only have to observe the program’s results but also the student’s approach. If the results are not correct or the program doesn’t run, we have to spend time reviewing hundreds of lines of code to debug the program to give thoughtful feedback.

In the past, my method for grading assignments with my TAs may have been arduous but it worked. However, last year, no TAs were assigned to my Principles of Programming Languages course. Concerned that I wouldn’t have enough time to do all the work, I looked for another solution.

Providing consistent grading and meaningful feedback on every student submission, especially with multiple TAs, can be challenging. Typically, when grading, I would schedule a time to sit down with all of my TAs, review the assignment or exam, give each TA a set of questions to grade, pass the submissions around until all were graded, and finally calculate the grades. When a TA had a question, we could address it as a group and make the related adjustments throughout the submissions as needed. While this system worked, it was tedious and time consuming. Occasionally, inconsistencies in the grades came up, which could prompt regrade requests from students. I kept thinking that there had to be a better way.

About a year and a half ago, a colleague introduced me to an application called Gradescope to manage the grading of assignments and exams. I spent a relatively short amount of time getting familiar with the application and used it in a course in the fall of 2016, for both student-submitted homework assignments and in-class paper exams. In the case of the homework, students would upload a digital version of the assignment to Gradescope. The application would then prompt the student to designate the areas in the document where their answers could be found so that the application could sort and organize the submissions for ease of grading. For the in-class exams, I would have the students work on a paper-based exam that I set up in Gradescope with the question areas established. I would then scan and upload the exams so that Gradescope could associate the established question areas with the student submissions automatically. The process of digitizing the completed tests and correlating them to the class roster was made easy with a scanner and Gradescope’s automatic roster matching feature. Gradescope became a centralized location where my TAs and I could grade student work.

There are a few ways to consider incorporating Gradescope into your course. Here is a non-exhaustive list of scenarios for both assignments and exams that can be accommodated:

  • Handwritten/drawn homework (students scan them and upload the images/PDFs)
  • Electronic written homework (students upload PDFs)
  • In-class exams (instructor scans them and uploads the PDFs)
  • Coding scripts for programming assignments (students upload their program’s files for auto-grading)
  • Code assignments graded by hand (students upload PDFs of code)

The real power of Gradescope is that it requires setting up a reusable rubric (a list of competencies or qualities used to assess correct answers) to grade each question. When grading, you select from or add to the rubric to add or deduct points. This keeps the grading consistent across multiple submissions. As the rubric is established as a part of the assignment, you can also update the point values at any time if you later determine that a larger point addition/deduction is advisable, and the grade calculations will update automatically.
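As a rough mental model of why this works (Gradescope’s internals are not public, so the rubric items, names, and point values below are purely illustrative), each submission records which rubric items were applied rather than a raw score, so changing an item’s value re-scores every affected submission:

```python
# Illustrative sketch: a rubric maps item names to point values, and each
# submission stores only the item names that were applied to it.
rubric = {
    "correct_base_case": 2.0,
    "handles_recursion": 3.0,
    "missing_edge_case": -1.0,
}

submissions = {
    "alice": ["correct_base_case", "handles_recursion"],
    "bob": ["correct_base_case", "missing_edge_case"],
}

def score(applied_items, rubric):
    """Total the point values of the rubric items applied to a submission."""
    return sum(rubric[item] for item in applied_items)

grades = {s: score(items, rubric) for s, items in submissions.items()}
print(grades)  # {'alice': 5.0, 'bob': 1.0}

# Later, the grader decides the edge case deserves a larger deduction;
# recomputing from the rubric updates every affected grade at once.
rubric["missing_edge_case"] = -2.0
grades = {s: score(items, rubric) for s, items in submissions.items()}
print(grades)  # {'alice': 5.0, 'bob': 0.0}
```

Because each grade is derived from the rubric rather than stored as a bare number, consistency across hundreds of submissions comes essentially for free.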

Screenshot from Gradescope--Review grade for assignment feature.

Screenshot of Gradescope’s Review Grade for an assignment

After being informed that I wouldn’t have any TAs for my Principles of Programming Languages course the following semester, I was motivated to use one of Gradescope’s features, the programming assignment auto-grader platform. Being able to automatically provide grades and feedback for students’ submitted code has long been a dream of instructors who teach programming. Gradescope offers a language-agnostic environment in which the instructor sets up the components and libraries needed for the students’ programs to run. The instructor establishes a grading script that is the basis for the analysis, providing grades and feedback for issues found in each student’s submitted program.
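The details of Gradescope’s auto-grader harness are beyond the scope of this post, but the core idea of a grading script can be sketched in a few lines of Python (the function, test cases, and point values here are invented for illustration, not Gradescope’s actual API):

```python
# Sketch of an auto-grading script: run each test case against the
# student's submission and tally the points for passing cases.

def student_solution(x):  # stand-in for the student's submitted code
    return x * x

test_cases = [
    {"name": "squares a positive number", "input": 3, "expected": 9, "points": 5},
    {"name": "squares zero", "input": 0, "expected": 0, "points": 2},
    {"name": "squares a negative number", "input": -4, "expected": 16, "points": 3},
]

def grade(solution, tests):
    results, earned = [], 0
    for t in tests:
        try:
            passed = solution(t["input"]) == t["expected"]
        except Exception:
            passed = False  # a crashing submission earns no points for this case
        earned += t["points"] if passed else 0
        results.append((t["name"], passed))
    return earned, results

total, feedback = grade(student_solution, test_cases)
print(total)  # 10: all three cases pass
```

A real harness would also capture per-case feedback messages and report them back to the student, but the pattern is the same: the test cases, not the grader’s mood that day, determine the score.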

Overall, the use of Gradescope has reduced the time I spend grading and improved the quality of feedback that I am able to provide students. For instance, when I release grades to the students, they are able to review each of the descriptive rubrics that were used when grading their submissions, as well as any additional comments. The auto-grader was really the star feature in this case. Students were able to submit their code, determine if it would run, and make corrections before the deadline to increase their chances of a better grade. There are features to limit the number of allowed submissions, but I chose not to set a limit so that students could use an iterative approach to arriving at the right solution.

Gradescope is only effective if your rubrics and grading criteria are well thought out, and the auto-grading scripts require some time to set up. Creating the grading scripts for the programming assignments may seem time-intensive, but by frontloading the work with detailed rubrics and test cases, more time is saved in the grading process. The value of this preparation scales as enrollment increases, and the rubrics and scripts can be reused when you teach the course again. With more time during the semester freed up by streamlining the grading process, my TAs and I were able to increase office hours, which is more beneficial in the long run for the students.

Screenshot showing student's submission with rubric items used in grading.

Student’s submission with rubric items used in grading

The process for regrading is much easier for both students and instructors. Before Gradescope, a regrade request meant determining which TA graded that question, discussing the request with them, and then potentially adjusting the grade. With the regrade feature, students submit a regrade request, which gets routed to that question’s grader (me or the TA) with comments for the grader to consider. The grader can then award the regrade points directly to the student’s assignment. As the instructor, I can see all regrade requests, and can override if necessary, which helps to reduce the bureaucracy and logistics involved with manual regrading. Additionally, regrade requests and Gradescope’s assignment statistics feature may allow you to pinpoint issues with a particular question or how well students have understood a topic.

I have found that when preparing assignments with Gradescope, I am more willing to create multiple mini-assignments. With large courses, the tendency would be to create fewer assignments that are larger in scope to lessen the amount of grading. When there are too few submission points for students who are deadline-oriented, I find that they wait until the last few days to start the assignment, which can make the learning process less effective. By adding more assignments, I can scaffold the learning to incrementally build on topics taught in class.

After using Gradescope for a year, I realized that it could also help detect cheating. Gradescope allows you to see submissions to a specific question in sequence, making it easy to spot submissions that are identical, a red flag for copied answers. While not an advertised feature, it is a welcome bonus. It should also be noted that Gradescope adheres to FERPA (Family Educational Rights and Privacy Act) standards for educational tools.
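To see why identical submissions are easy to spot, consider a simple fingerprinting approach (this is my own sketch, not how Gradescope implements it): normalize each answer and group students whose answers collapse to the same hash.

```python
# Sketch: flag identical answers by hashing whitespace/case-normalized text.
import hashlib
from collections import defaultdict

answers = {
    "student_a": "The derivative of x^2 is 2x.",
    "student_b": "the derivative  of x^2 is 2x.",  # same answer, different spacing/case
    "student_c": "By the power rule, d/dx x^2 = 2x.",
}

def fingerprint(text):
    """Collapse whitespace and ignore case before hashing."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

groups = defaultdict(list)
for student, answer in answers.items():
    groups[fingerprint(answer)].append(student)

duplicates = [names for names in groups.values() if len(names) > 1]
print(duplicates)  # [['student_a', 'student_b']]
```

Of course, exact matching only catches verbatim copying; paraphrased answers require human review, which is where seeing submissions in sequence helps.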

Additional Resources:

  • Gradescope website: https://gradescope.com
  • NOTE TO JHU READERS ONLY: The institutional version of Gradescope is currently available to JHU faculty users through a pilot program. If you are faculty at Johns Hopkins University’s Homewood campus interested in learning more about how Gradescope might work for your courses, contact Reid Sczerba in the Center for Educational Resources at rsczerb1@jhu.edu.

 

Scott Smith, Professor
Department of Computer Science, Johns Hopkins University

Scott Smith has been a professor of Computer Science at Hopkins for almost 30 years. His research specialty is programming languages. For the past several years, he has taught two main courses: Software Engineering, a 100-student project-based class, and Principles of Programming Languages, a mathematically-oriented course with both written and small programming assignments.

Images Sources: CC Reid Sczerba, Gradescope screenshots courtesy Scott Smith

Lunch and Learn: Creating Rubrics and Calibrating Multiple Graders

Logo for Lunch and Learn program showing the words Lunch and Learn in orange with a fork above and a pen below the lettering. Faculty Conversations on Teaching at the bottom.

On Friday, December 15, the Center for Educational Resources (CER) hosted the second Lunch and Learn—Faculty Conversations on Teaching—for the 2017-2018 academic year. Laura Foster, Academic Advisor, Public Health Studies, and Reid Mumford, Instructional Resource Advisor, Physics & Astronomy, presented on Creating Rubrics and Calibrating Multiple Graders.

Laura Foster led off with a demonstration of her use of Blackboard for creating rubrics. She noted that she might be “preaching to the choir” but hoped that those present might take these best practices back to their colleagues. Noting that many faculty have negative opinions of Blackboard, she put in a plug for its organizational benefits and facilitation of communication with students.

Foster started using Blackboard tools for a Public Health Studies class where she was grading student reflections. The subject matter—public health studies in the media—was outside of her field of physical chemistry. Blackboard facilitates creating a rubric that students can see when doing an assignment and the instructor then uses to grade that work. She showed the rubric detail that students see in Blackboard, and how the rubric can be used in grading. [See the CER Tutorial on Blackboard Rubrics and Rubrics-Helpful Hints] The rubric gives the students direction and assures that the instructor (or other graders) will apply the same standards across all student work.

It empowers students when they know exactly what criteria will be used in evaluating their work and how many points will be assigned to each component. Foster has found that using rubrics is an effective way to communicate assignment requirements to students, and that it helps her to clarify for herself what the most important points are. She noted that a rubric is very useful when there are multiple graders, such as Teaching Assistants (TAs), as it helps to calibrate the grading.

In response to questions from the audience, Foster stated that rubrics can be developed to cover both qualitative and quantitative elements. Developing good rubrics is an iterative process; it took her some time to sharpen her skills. There is flexibility in differentiating points allotted, but the instructor must be thoughtful, plan for a desired outcome, and communicate clearly. The rubric tool can be used to grade PDF files as well as Word documents. Foster noted that it is important to take opportunities to teach students to learn to write, learn to use technology, learn to read instructions, and learn to look at feedback given on assignments. Being transparent and explaining why you are using a particular technology will go a long way.

Reid Mumford gave his presentation on how he calibrates multiple graders (see slides). Mumford oversees the General Physics lab courses. This is a two-semester required sequence, so not all students are excited to be there. The sequences are on Mechanics and Electricity and Magnetism; both labs are taught every semester with multiple sections for each course. Approximately 600 to 700 students take these lab sequences each semester; students are divided into sections of about 24 students. The labs are open-ended and flexible, so students aren’t filling in blanks and checking boxes, which would be easier to grade. Lab sections are taught and graded by graduate student TAs, with about 30 TAs teaching each semester. Teaching and grading styles vary among these TAs, as would be expected. Clearly, calibrating their grading is a challenge.

Grades are based on the best 9 of 10 lab activities, which consist of a pre-lab quiz and a lab note. All activities are graded using the same rubric. The grading scale used can be seen in the slides. One of the criteria for grading is “style,” which allows some flexibility and qualitative assessment. Students have access to the rubric, which is also shown in the slides.

About three years ago, Mumford adopted Turnitin (TII), the plagiarism detection tool, for its efficient grading tools. It works well for his use because it is integrated with Blackboard. TII does its job in detecting cheating (and Mumford noted that lots of students are cheating), but it is the grading tools that are really important for the TAs. TAs are encouraged to be demanding in their grading and leave a lot of feedback, so grading takes them two to four hours each week. TII’s Feedback Studio (formerly known as GradeMark) allows TAs to accomplish their mission. [See CER tutorial on Feedback Studio and The Innovative Instructor post on GradeMark.] It was the QuickMark feature that sold Mumford on Feedback Studio and TII grading. Using the rubric for each activity, QuickMark can be pre-populated with commonly-used comments, which can then be dragged and dropped onto the student’s submitted work.

Screenshot of the QuickMark grading tool.

Graph showing General Physics Laboratory section grading trends.

These tools helped make the grading load more efficient, but calibrating the multiple graders was another challenge. Mumford found that the TAs need lots of feedback on their grading. Each week he downloads all the grades from the Blackboard grade centers. He creates a plot that shows the average score for the weekly lab assignment. Outliers to the average scores are identified, and these TAs are counseled so that their grading can be brought into line. Mumford also looks at section grading trends and can see which sections are being graded more leniently or harshly than average. He works with those TAs to standardize their grading.
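The weekly check Mumford describes can be approximated with a short script (the section data and the flagging threshold below are made up for illustration): compute each section’s average and flag sections that drift too far from the overall mean.

```python
# Sketch of a weekly calibration check: flag lab sections whose average
# score drifts far from the mean of all sections.
from statistics import mean

# Illustrative data: two typical sections, one lenient and one harsh grader.
section_scores = {
    "section_01": [8.5, 9.0, 7.5, 8.0],
    "section_02": [9.8, 9.9, 10.0, 9.9],  # possibly graded leniently
    "section_03": [8.2, 7.9, 8.4, 8.1],
    "section_04": [5.0, 5.5, 4.8, 5.2],   # possibly graded harshly
}

THRESHOLD = 1.5  # grade points; an arbitrary cutoff for this sketch

section_means = {s: mean(v) for s, v in section_scores.items()}
overall = mean(section_means.values())

flagged = [s for s, m in section_means.items()
           if abs(m - overall) > THRESHOLD]
print(flagged)  # ['section_02', 'section_04']
```

The flagged sections are the ones whose TAs would get a conversation about bringing their grading into line; a real version would plot the trends rather than just print a list.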

In calculating final grades for the course, Mumford keeps three points in mind: final letter grades must be calculated, there should be no “easy” or “hard” sections of lab, and distribution should not vary (significantly) between sections. He makes use of per-section mapping and uses average and standard deviation to map results to a final letter grade model. Mumford noted that students are made aware, repeatedly, of the model being used. He is very transparent—everything is explained in the syllabus and reiterated weekly in lab sessions.
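One plausible reading of this per-section model (the z-score cutoffs below are illustrative, not Mumford’s actual curve) is to standardize each raw score against the section’s mean and standard deviation before assigning letters, so a leniently graded section and a harshly graded section map to the same letter distribution:

```python
# Sketch: map raw section scores to letters via within-section z-scores.
from statistics import mean, stdev

def letter_grades(scores):
    """Convert raw scores to letters using z-scores within the section."""
    mu, sigma = mean(scores), stdev(scores)

    def letter(z):
        if z >= 1.0:
            return "A"
        if z >= 0.0:
            return "B"
        if z >= -1.0:
            return "C"
        return "D"

    return [letter((s - mu) / sigma) for s in scores]

# A uniformly lenient grader shifts every score up, but the z-scores --
# and therefore the letters -- come out the same.
print(letter_grades([92, 85, 78, 70, 88]))  # ['A', 'B', 'C', 'D', 'B']
print(letter_grades([82, 75, 68, 60, 78]))  # same letters, 10 points lower
```

This is what makes “no easy or hard sections” achievable: a uniformly generous or stingy grader changes the section mean, not the relative standings.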

In conclusion, Mumford offered these take-aways:

  • Calibrating Multiple Graders is not easy
  • Tools are needed to handle multiple sections efficiently
  • Rubrics help but do not solve the calibration problem
  • Regular feedback to graders is essential
  • Limit of the system: student standing is ambiguous

In the future Mumford plans to give students a better understanding of course standing, to calculate a per-section curve each week, and to overcome some technical issues and the greater time investment that will be required with weekly calibrating and rescaling.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Sources: Lunch and Learn Logo, slides from Mumford presentation

Facilitating and Evaluating Student Writing

Over the summer I worked on revising a manual for teaching assistants that we hand out each year at our annual TA Orientation. One of the sections deals with writing intensive courses across disciplines and how TAs can facilitate and evaluate writing assignments. The information, advice, and resources in the manual speak to an audience beyond graduate student teaching assistants. Even seasoned instructors may struggle with teaching writing skills and evaluating written assignments.

View from above and to the right of a woman’s hands at a desk, writing in a journal next to a laptop computer.

Two mistakes that teachers may make are assuming that students in their courses know how to write a scholarly paper and not providing appropriate directions for assignments. These assumptions are likely to guarantee that the resulting student writing will disappoint.

As a quick aside, faculty often complain about the poor quality of student writing, claiming that students today don’t write as well as students in some vaguely imagined past, perhaps when the faculty member was a college freshman. However, the results of an interesting longitudinal study suggest otherwise. A report in JSTOR Daily, Student Writing in the Digital Age by Anne Trubek (October 19, 2016), summarizes the findings of the 2006 study by Andrea A. Lunsford and Karen J. Lunsford, Mistakes Are a Fact of Life: A National Comparative Study. “Lunsford and Lunsford, decided, in reaction to government studies worrying that students’ literacy levels were declining, to crunch the numbers and determine if students were making more errors in the digital age.” Their conclusion? “College students are making mistakes, of course, and they have much to learn about writing. But they are not making more mistakes than did their parents, grandparents, and great-grandparents.” Regardless of your take on the writing of current students, it is worth giving thoughtful consideration to your part in improving your students’ writing.

Good writing comes as a result of practice and it is the role of the instructor to facilitate that practice. Students may arrive at university knowing how to compose a decent five-paragraph essay, but no one has taught them how to write a scholarly paper. They must learn to read critically, summarize what they have read, identify an issue, problem, flaw, or new development that challenges what they have read. They must then construct an argument, back it with evidence (and understand what constitutes acceptable evidence), identify and address counter-arguments, and reach a conclusion. Along the way they should learn how to locate appropriate source materials, assemble a bibliography, and properly cite their sources. As an instructor, you must show them the way.

Students will benefit from having the task of writing a term paper broken into smaller components or assignments. Have students start with researching a topic and creating a bibliography. Librarians are often available to come to your class to instruct students in the art of finding sources and citing them correctly. Next, assign students to produce a summary of the materials they’ve read and identify the issue they will tackle in their paper. Have them outline their argument. Ask for a draft. Consider using peer review for some of these steps to distribute the burden of commenting and grading. Evaluating others’ work will improve their own. [See the May 29, 2015 Innovative Instructor post Using the Critique Method for Peer Assessment.] And the opportunity exists to have students meet with you in office hours to discuss some of these assignments so that you may provide direct guidance and mentoring. Their writing skills will not develop in a vacuum.

Your guidance is critical to their success. This starts with clear directions for each assignment. For an essay you will be writing a prompt that should specify the topic choices, genre, length, formal requirements (whether outside sources should be used, your expectations on thesis and argument, etc.), and formatting, including margins, font size, spacing, titling, and student identification. Directions for research papers, fiction pieces, technical reports, and other writing assignments should include the elements that you expect to find in student submissions. Do not assume students know what to include or how to format their work.

As part of the direction you give, consider sharing with your students the rubric by which you will evaluate their work. See the June 26, 2014 Innovative Instructor post Sharing Assignment Rubrics with Your Students for more detail. Not sure how to create a rubric? See previous posts: from October 8, 2012 Using a Rubric for Grading Assignments, November 21, 2014 Creating Rubrics (by Louise Pasternak), and June 14, 2017 Quick Tips: Tools for Creating Rubrics. Rubrics will save you time grading, ensure that your grading is equitable, and provide you with a tangible defense against students complaining about their grades.

Giving feedback on writing assignments can be time consuming so focus on what is most important. This means, for example, noting spelling and grammar errors but not fixing them. That should be the student’s job. For a short assignment, writing a few comments in the margins and on the last page may be doable, but for a longer paper consider typing up your comments on a separate page. Remember to start with something positive, then offer a constructive critique.

As well, bring writing into your class in concrete ways. For example, at the beginning of class, have students write for three to five minutes on the topic to be discussed that day, drawing from the assigned readings. Discuss the assigned readings in terms of the authors’ writing skills. Make students’ writing the subject of class activities through peer review. Incorporate contributions to a class blog as part of the course work. Remember, good writing is a result of practice.

Finally, there are some great resources out there to help you help your students improve their writing. Purdue University’s Online Writing Lab—OWL—website is all encompassing with sections for instructors (K-12 and Higher Ed) and students. For a quick start go to the section Non-Purdue College Level Instructors and Students. The University of Michigan Center for Research on Learning and Teaching offers a page on Evaluating Student Writing that includes Designing Rubrics and Grading Standards, Rubric Examples, Written Comments on Student Writing, and tips on managing your time grading writing.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Photo by: Matthew Henry. CC License via Burst.com.

 

 

Considering the Use of Turnitin

Sign with hand and text reading prevent plagiarism.

Earlier this week an article from Inside Higher Ed (IHE) caught my eye. In New Salvo Against Turnitin (June 19, 2017) Nick Roll summarizes an essay by Sean Michael Morris, Instructional Designer in the Office of Digital Learning at Middlebury College, and Jesse Stommel, Executive Director, Division of Teaching and Learning Technologies at the University of Mary Washington. The essay authors argue that faculty should rethink the use of Turnitin, questioning not only “…the control and use of people’s data by corporations…” but “…Turnitin’s entire business model, as well as the effects on academia brought on by its widespread popularity.” Morris and Stommel further contend that those using Turnitin “supplant good teaching with the use of inferior technology” reducing the student-instructor relationship to one where suspicion and mistrust are at the forefront. [Turnitin is a software application used to detect plagiarism, and Morris and Stommel are not the first to decry the company’s business model and practices.]

Although the IHE article provides a fair summary, as well as additional comments by Morris and Stommel, it is worth reading the 3,928-word essay—A Guide for Resisting Edtech: The Case Against Turnitin (Digital Pedagogy Lab, June 15, 2017)—to appreciate the complex argument. I agree with some of the concerns the authors address and feel we should be doing more individually and collectively to school ourselves and our students in the critical evaluation of digital tools, but disagree with what I feel are over-simplifications and unfair assumptions. Morris and Stommel cast faculty who use Turnitin as “surrendering efficiency over complication” by not taking the time and effort to use plagiarism as a teachable moment. Further, they state that Turnitin takes advantage of faculty who are characterized as being, at the core, mistrustful of students.

The assumption that faculty using Turnitin are not actively engaging in conversations around and instruction of ethical behavior, including plagiarism, and are not using other tools and resources in these activities is simply not correct. The assertion that faculty using Turnitin are suspicious teachers who are embracing an easy out via an efficient educational technology is also not accurate.

The reality is that some students will plagiarize, intentionally or not, and the Internet, social media practices, and cultural differences have complicated students’ understanding of intellectual property. I believe that many of our institutions of higher learning, and the faculty and library staff therein, make concerted efforts to teach students about academic integrity. This includes the meaning and value of intellectual property, as well as finer points of what constitutes plagiarism and strategies to avoid it.

I believe it is relevant to note that Middlebury College’s website boasts a mean class size of 16, while the University of Mary Washington lists an average class size of 19. Student-faculty ratios are 8 to 1 and 14 to 1, respectively. I cannot help but feel that Morris and Stommel are speaking from a point of privilege working in these two institutions. Instructors who teach at large, underfunded state universities with classes of hundreds of students, relying on a corps of teaching assistants to grade their essays, are in a different boat.

The authors state: “So, if you’re not worried about paying Turnitin to traffic your students’ intellectual property, and you’re not worried about how the company has glossed a complicated pedagogical issue to offer a simple solution, you might worry about how Turnitin reinforces the divide between teachers and students, short-circuiting the human tools we have to cross that divide.” In fact, we may all be worried about Turnitin’s business model and be seeking a better solution. Yet in this essay nothing more concrete is given us on those human tools and how faculty in less privileged circumstances can realistically and effectively make use of them.

The Innovative Instructor has in the past posted on Teaching Your Students to Avoid Plagiarism (November 5, 2012, Macie Hall), and using Turnitin as a teaching tool: Plagiarism Detection: Moving from “Gotcha” to Teachable Moment (October 9, 2013, Brian Cole and Macie Hall). These articles may be helpful for faculty struggling with the issues at hand.

Yes, we should all be critical thinkers about the pedagogical tools we use; in the real world, sometimes we face hard choices and must fall back on less than ideal solutions.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Microsoft Clip Art edited by Macie Hall

Quick Tips: Tools for Creating Rubrics

The Innovative Instructor has previously shared posts on the value of using rubrics (Creating Rubrics, Louise Pasternak, November 21, 2014 and Sharing Assignment Rubrics with Your Students, Macie Hall June 26, 2014). Today’s Quick Tips post offers some tools and resources for creating rubrics.


Red Rubric Marker

If you are an instructor at Johns Hopkins or another institution that uses the Blackboard learning management system or Turnitin plagiarism detection, check out these platforms for their built-in rubric creation applications. Blackboard has an online tutorial here. Turnitin offers a user guide here.

If neither of these options is available to you, there is a free online application called Rubistar that offers templates for rubric design based on various disciplines, projects, and assignments. If none of the templates fits your needs, you can create a rubric from scratch. You must register to use Rubistar. A tutorial is available to get you started, and you can save a printable rubric at the end of the process.

Wondering how others in your field have designed rubrics for specific assignments or projects? Google for a model: searches such as “history paper rubric college,” “science poster rubric college,” or “video project rubric college” will yield examples to get you started. Adding the word “college” to the search helps ensure that you are seeing rubrics geared to an appropriate level.

With free, easy-to-use tools and plentiful examples to work from, there is no excuse for not using rubrics for your course assignments.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source © 2014 Reid Sczerba

To Curve or Not to Curve Revisited

Yellow traffic signs showing a bell curve and a stylized graph referencing criterion-referenced grading

The practice of normalizing grades, more popularly known as curving, was the subject of an Innovative Instructor post, To Curve or Not to Curve, on May 13, 2013. That article discussed both norm-referenced grading (curving) and criterion-referenced grading (not curving). As the practice of curving has become more controversial in recent years, an op-ed piece in this past Sunday’s New York Times caught my eye. In Why We Should Stop Grading Students on a Curve (The New York Times Sunday Review, September 10, 2016), Adam Grant argues that grade deflation, which occurs when teachers grade on a curve, is more worrisome than grade inflation. First, by limiting the number of students who can excel, curving unfairly punishes other students who may also have mastered the course content. Second, curving creates a “toxic” environment, a “hypercompetitive culture” in which one student’s success means another’s failure.
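To see why a curve caps the number of top grades regardless of mastery, consider a minimal sketch of norm-referenced grading. The function and percentile cutoffs below are purely illustrative assumptions, not any instructor’s actual scheme: students are ranked within the class and letters are assigned by percentile rank, so grades depend on relative standing rather than on how much material a student has mastered.

```python
# Illustrative sketch of norm-referenced ("curved") grading.
# The percentile cutoffs are hypothetical, chosen only for demonstration.

def curve_grades(scores, cutoffs=((0.90, "A"), (0.60, "B"), (0.30, "C"), (0.0, "D"))):
    """Assign letter grades by each student's percentile rank within the class."""
    n = len(scores)
    # Indices of students ordered from lowest to highest score.
    ranked = sorted(range(n), key=lambda i: scores[i])
    grades = [None] * n
    for rank, i in enumerate(ranked):
        # Percentile rank in [0, 1]; a class of one student gets the top grade.
        pct = rank / (n - 1) if n > 1 else 1.0
        for cut, letter in cutoffs:
            if pct >= cut:
                grades[i] = letter
                break
    return grades

print(curve_grades([50, 60, 70, 80, 90]))  # ['D', 'D', 'C', 'B', 'A']
print(curve_grades([95, 94]))              # ['A', 'D']
```

Note that in the second example a 94 earns a D simply because another student scored 95: under a strict curve, rank, not mastery, determines the grade, which is exactly the deflation Grant describes.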

Grant, a professor of psychology at the Wharton School at the University of Pennsylvania, cites evidence that curving is a “disincentive to study.” Taking observations from his work as an organizational psychologist and applying those in his classroom, Grant has found he could both disrupt the culture of cutthroat competition and get students to work together as a team to prepare for exams. Teamwork has numerous advantages in both the classroom and the workplace as Grant details. Another important aspect is “…that one of the best ways to learn something is to teach it.” When students study together for an exam they benefit from each other’s strengths and expertise. Grant details the methods he used in constructing the exams and how his students have leveraged teamwork to improve their scores on course assessments. One device he uses is a Who Wants to Be a Millionaire-type “lifeline” for students taking the final exam. While his particular approaches may not be suitable for your teaching, the article provides food for thought.

Because I am not advocating for one way of grading over another, but rather encouraging instructors to think about why they are taking a particular approach and whether it is the best solution, I’d like to present a counterargument. In Praise of Grading on a Curve by Eugene Volokh appeared in The Washington Post on February 9, 2015. “Eugene Volokh teaches free speech law, religious freedom law, church-state relations law, a First Amendment Amicus Brief Clinic, and tort law, at UCLA School of Law, where he has also often taught copyright law, criminal law, and a seminar on firearms regulation policy.” He counters some of the standard arguments against curving by pointing out that students and exams will vary from year to year, making it difficult to draw consistent lines between, say, an A- and a B+ exam. This may be even more difficult for a less experienced teacher. Volokh also believes in the value of the curve for reducing the pressure to inflate grades. He points out that competing law schools tend to align their curves, making curving an accepted practice among law school faculty. He also suggests some tweaks to curving that strengthen its application.

As was pointed out in the earlier post, curving is often used in large lecture or lab courses that may have multiple sections and graders, as it provides a way to standardize grades. However, that issue may be resolved by instructing multiple graders how to assign grades based on a rubric. See The Innovative Instructor on creating rubrics and calibrating multiple graders.

Designing effective assessments is another important skill for instructors to learn, and one that can eliminate the need to use curving to adjust grades on a poorly conceived test. A good place to start is Brown University’s Harriet W. Sheridan Center for Teaching and Learning webpages on designing assessments where you will find resources compiled from a number of Teaching and Learning Centers on designing “assessments that promote and measure student learning.”  The topics include: Classroom Assessment and Feedback, Quizzes, Tests and Exams, Homework Assignments and Problem Sets, Writing Assignments, Student Presentations, Group Projects and Presentations, Labs, and Field Work.

Macie Hall, Instructional Designer
Center for Educational Resources


Image Source: © Reid Sczerba, 2013.