Lunch and Learn: Innovative Grading Strategies

On Thursday, February 28, the Center for Educational Resources (CER) hosted the third Lunch and Learn for the 2018-2019 academic year. Rebecca Kelly, Associate Teaching Professor, Earth and Planetary Sciences, and Director of the Environmental Science and Studies Program, and Pedro Julian, Associate Professor, Electrical and Computer Engineering, presented on Innovative Grading Strategies.

Rebecca Kelly began the presentation by discussing some of the problems with traditional grading. There is a general lack of clarity about what grades actually mean, and students and faculty view them quite differently. Faculty use grades to elicit certain behaviors from students, but those behaviors do not necessarily indicate learning. Kelly noted that students, especially those at JHU, tend to focus on the grade itself, aiming for a specific number rather than the learning; this often results in the high levels of student anxiety she frequently observes. She explained that students here get few chances to fail without their grades being negatively affected. Every assessment therefore becomes a source of stress because it counts toward the grade, and there are too few opportunities for students to learn from their mistakes.

Kelly mentioned additional challenges that faculty face when grading: it is often time consuming, energy draining, and stressful, especially when haggling with students over points. She makes an effort to provide clearly stated learning goals and rubrics for each assignment, which help but are not always enough to ease the burden.

Kelly introduced the audience to specifications grading and described how she recently started using this approach in Introduction to Geographic Information Systems (GIS). With specifications grading (also described in a recent CER Innovative Instructor article), students are graded pass/fail or satisfactory/unsatisfactory on individual assessments that align directly with learning goals. Course grades are determined by the number of learning goals mastered, as measured by the number of assessments passed. For example, passing 20 or more assignments out of 23 would equate to an A; 17-19 assignments would equate to a B. Kelly stresses the importance of maintaining high standards: to preserve rigor, the threshold for passing an assessment should correspond to B-level work or better.
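Because each assessment is simply pass/fail, translating a semester's results into a course grade is a small counting exercise. Here is a minimal sketch in Python: the A and B cutoffs are the ones Kelly gives, but the post does not specify what happens below 17 passes, so the fallback label is an assumption.

```python
def specs_course_grade(passed_count, total=23):
    """Map the number of passed assessments to a course grade.

    The A and B thresholds come from the example in the post
    (20+ of 23 = A, 17-19 = B); the behavior below 17 passes is
    not specified there, so it is labeled generically here.
    """
    if passed_count >= 20:
        return "A"
    if passed_count >= 17:
        return "B"
    return "below B"
```

For example, `specs_course_grade(18)` returns `"B"`; there is no partial credit to tally, which is what makes the grading itself so quick.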

In Kelly’s class, students have multiple opportunities to achieve their goals. Each student receives three tokens that can be used to re-do an assignment that does not pass, or to select a different assignment altogether from the ‘bundle’ of assignments available. Kelly noted that students tend to ‘hoard’ their tokens, and that this actually works out favorably: rather than risk having to use a token, students often seek out her feedback before turning anything in.

Introduction to GIS has both a lecture and a lab component. The lab requires students to use software to create maps that are then used to perform data analysis. The very specific nature of the assignments in this class lends itself well to the specifications grading approach. Kelly noted that students are somewhat anxious about this approach at first, but settle into it once they fully understand it. In addition to clearly laying out expectations, Kelly lists the learning goals of the course and how they align with each assignment (see slides). She also provides students with a table showing the bundles of assignments required to reach final course grades. Additionally, she distributes a pacing guide to help students avoid procrastination.

The results that Kelly has experienced with specifications grading have been positive. Students generally like it because the expectations are very clear and initial failure does not count against them; there are multiple opportunities to succeed. Grading is quick and easy because of the pass/fail system; if something doesn’t meet the requirements, it is simply marked unsatisfactory. The quality of student work is high because there is no credit for sloppy work. Kelly acknowledged that specifications grading is not ideal for all courses, but feels the grade earned in her GIS course is a true representation of the student’s skill level in GIS.

Pedro Julian described a different grading practice that he is using, something he calls the “extra grade” approach. He currently uses it in Digital Systems Fundamentals, a hands-on design course for freshmen. The course uses a typical grading scale: 20% for the midterm, 40% for labs and homework, and 40% for the final project. However, Julian augments the scale by offering an additional 20% to students who agree to put in extra work throughout the semester. How much extra work? Students must commit to working collaboratively with instructors (and other students seeking the 20% credit) for one hour or more per week on an additional project. This year, the project is to build a vending machine; past projects include building an elevator out of Legos and a robot that followed a specific path on the floor.
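One way to read that arithmetic is as a weighted sum on a 0-100 scale. The 20/40/40 split is from the talk, but the post does not say exactly how the extra 20% combines with the base score, so simply adding it on top, as in this sketch, is an assumption.

```python
def extra_grade(midterm, labs_homework, final_project, extra=0.0):
    """Weighted course score using the 20/40/40 split from the talk.

    Students who opt in to the weekly extra project can earn up to
    20 more points ('extra' is that project's 0-100 score); adding
    those points directly on top of the base score is an assumption.
    """
    base = 0.20 * midterm + 0.40 * labs_homework + 0.40 * final_project
    return base + 0.20 * extra
```

Under this reading, a student who opts in can recover a weak midterm or homework average, which matches Julian's description of the option as an “escape route.”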

Julian described how motivated students are to complete the extra project once they commit to putting in the time. They quickly realize that they learn all sorts of skills they would not otherwise have learned, and they are very proud and engaged. Student participation in the “extra grade” option has grown steadily since Julian started using this approach three years ago: the first year, 5-10 students signed up; this year, there are 30. Julian showed histograms (see slides) of student grades from past semesters, illustrating how the extra grade has pushed overall grades higher. The histograms also show that it is not just students who may be struggling with the class who choose to participate, but “A students” as well.

Similar to Rebecca Kelly’s experience, Julian expressed how grade-focused JHU students are, much to his dismay. In an attempt to take some of the pressure off, he described how he repeatedly tells his students that if they work hard, they will get a good grade; he even includes this phrase in his syllabus. Julian explained how he truly wants students to concentrate more on the learning and not on the grade, which is his motivation behind the “extra grade” approach.

An interesting discussion with several questions from the audience followed the presentations. Below are some of the questions asked and responses given by Kelly and Julian, as well as audience members.

Q: (for Julian) Some students may not have the time or flexibility in their schedule to take part in an extra project. Do you have suggestions for them? Did you consider this when creating the “extra grade” option?

Julian responded that in his experience, freshmen seem to be available. Many of them make time to come in on the weekends. He wants students to know he’s giving them an “escape route,” a way for them to make up their grade, and they seem to find the time to make it happen.  Julian has never had a student come to him saying he/she cannot participate because of scheduling conflicts.

Q: How has grade distribution changed?

Kelly remarked on how motivated the students are: this past semester she had no Cs, very few Bs, and the rest As. She stressed how important it is to make sure the A is attainable for students, and she is confident that she has enough experience to know what counts as an A. Every student can do it; the question is, will they?

Q: (for Kelly) Would there ever be a scenario where students would do the last half of the goals and skip the first half?

Kelly responded that she has never seen anyone jump over everything and that it makes more sense to work sequentially.

Q: (for Kelly) Is there detailed feedback provided when students fail an assignment?

Kelly commented that it depends on the assignment, but if students don’t follow the directions, that’s the feedback – to follow the directions. If it’s a project, Kelly will meet with the student, go over the assignment, and provide immediate feedback. She noted that she finds oral feedback much more effective than written feedback.

Q: (for Kelly) Could specs grading be applied in online classes?

Kelly responded that she thinks this approach could definitely be used in online classes, as long as feedback could be provided effectively. She also stressed the need for rubrics, examples, and clear goals.

Q: Has anyone tried measuring individual learning gains within a class? What skills are students coming in with? Are we actually measuring gain?

Kelly commented that specifications grading works as a complement to competency-based grading, which focuses on measuring gains in very specific skills.

Julian commented that this issue comes up in his class: students come in with varying degrees of experience. He stated that this is another reason to offer the extra credit option, to keep things interesting for those who want to move at a faster pace.

The discussion continued among presenters and audience members about what students are learning in a class versus what they bring in with them. One point raised was that if students already know the material in a class, should they even be there? Another comment questioned whether it is even an instructor’s place to determine what students already know. Additional comments concerned what grades mean, and raised concerns about grades being used for different purposes, e.g., by employers looking for specific skills or by instructors writing recommendation letters.

Q: Could these methods be used in group work?

Kelly responded that with specifications grading, you would have to find a way to evaluate the group. It might be possible to still score on an individual basis within the group, but it would depend on the goals. She mentioned peer evaluations as a possibility.

Julian stated that all grades are based on individual work in his class. He does use groups in a senior level class that he teaches, but students are still graded individually.

The event concluded with a discussion about how using “curve balls” – intentionally difficult questions designed to catch students off-guard – on exams can lead to challenging grading situations. For example, to solve such a problem, students would first need to select the correct tools before beginning the solution process. Some faculty were in favor of including this type of question on exams, while others were not, noting the already high levels of exam stress. One suggestion was to give students partial credit for the process even if they do not end up with the correct answer. Another was to give an oral exam in order to hear the student’s thought process while working through the challenge; this would be another way for students to receive partial credit for their ideas and effort, even if the final answer was incorrect.

Amy Brusini, Senior Instructional Designer
Center for Educational Resources

Image Sources: Lunch and Learn Logo, slide from Kelly presentation

An Evidence-based Approach to Effective Studying

Dr. Culhane is Professor and Chair of the Department of Pharmaceutical Sciences at Notre Dame of Maryland University School of Pharmacy.

If you are like me, much of your time is spent ensuring that the classroom learning experience you provide for your students is stimulating, interactive, and impactful. But how invested are we in ensuring that what students do outside of class is productive? Based on my anecdotal experience and several studies [1,2,3] of the study strategies students employ, the answer is: not nearly enough! Much like professional athletes or musicians, our students are asked to perform at a high level, mastering advanced, information-dense subjects; yet unlike these specialists, who have spent years honing the skills of their craft, very few students have had any formal training in the basic skills necessary to learn successfully. It should be no surprise that, left to their own devices, our students tend to mismanage their time, fall victim to distractions, and gravitate toward low-impact or inefficient learning strategies. Even if students are familiar with high-impact strategies and how to use them, it is easy for them to default back to bad habits, especially when they are overloaded with work and pressed for time.

Several years ago, I began to seriously think about and research this issue in hopes of developing an evidence-based process that would be easy for students to learn and implement. Out of this work I developed a strategy focused on the development of metacognition – thinking about how one learns. I based it on extensively studied, high-impact learning techniques including distributed learning, self-testing, interleaving, and application practice [4]. I call this strategy the S.A.L.A.M.I. method, named after a metaphor used by one of my graduate school professors. He argued that learning is like eating a salami: if you eat the salami one slice at a time, rather than trying to eat the whole salami in one sitting, the salami is more likely to stay with you. Many readers will recognize that this analogy captures the effectiveness of distributed learning over the “binge and purge” method toward which many of our students gravitate.

S.A.L.A.M.I. is a “backronym” for Systematic Approach to Learning And Metacognitive Improvement. The method is structured around typical, daily learning experiences that I refer to as the five S.A.L.A.M.I. steps:

  1. Pre-class preparation
  2. In-class engagement
  3. Post-class review
  4. Pre-exam preparation
  5. Post-assessment review

When teaching the S.A.L.A.M.I. method, I explain how each of the five steps corresponds to a different “stage” or component of learning (see Figure 1). Through mastery of the skills associated with each of the five S.A.L.A.M.I. steps, students can more efficiently and effectively master a subject area.

S.A.L.A.M.I. Steps

Figure 1

Despite its simplicity, this model provides a starting point for helping students understand that learning is a process that takes time, requires different learning strategies, and benefits from the development of metacognitive awareness. Specific techniques designed to enhance metacognition and learning are employed during each of the five steps, helping students use their time effectively, maximize learning, and achieve subject mastery. Describing all the tools and techniques recommended for each of the five steps would be beyond the scope of this post, but I would like to share two that I have found useful for helping students evaluate the effectiveness of their learning and make data-driven changes to their study strategies.

Let us return to our example of professional athletes and musicians: these individuals maintain high levels of performance by consistently monitoring and evaluating the efficacy of their practice as well as reviewing their performance after games or concerts. Translated to an academic environment, the practice or rehearsal becomes student learning (in and out of class), and the game or concert becomes the assessment. We often evaluate students’ formative or summative “performances” with grades and written or verbal feedback. But what type of feedback do we give them to help improve the efficacy of their preparation for those “performances”? If we do give them feedback about how to improve their learning process, is it evidence-based and directed at improving metacognition, or do we simply tell them to study harder or join a study group? I would contend that we could do more to help students evaluate their approach to learning outside of class and their examination performance. This is where a pre-exam checklist and exam wrapper can be helpful.

The inspiration for the pre-exam checklist came from the pre-flight checklist a pilot friend of mine uses to ensure that he and his private aircraft are ready for flight. I decided to develop a similar tool that would allow my students to monitor and evaluate the effectiveness of their preparation for upcoming assessments. The form is based on a series of reflective questions that help students think about the effectiveness of their daily study habits. If used consistently over time and reviewed with a knowledgeable faculty member or learning specialist, this tool can help students make sustainable, data-driven changes in their approach to learning.

Another tool that I use is called an exam wrapper. There are many examples of exam wrappers online; however, I developed my own based on the stages or components of learning shown in Figure 1. The S.A.L.A.M.I. wrapper is divided into five sections. Three of the five focus on the following stages or components of learning: understanding and building context, consolidation, and application. The remaining two focus on exam skills and environmental factors that may impact performance. Under each of the five sections is a series of statements describing possible reasons for missing an exam question. The student analyzes each missed question and matches one or more of the statements on the wrapper to it. Based on the results of this analysis, the student can identify the component of learning, exam skill, or environmental factor they are struggling with and begin to take corrective action. Both the pre-exam checklist and the exam wrapper can be used to help “diagnose” the learning issues that academically struggling students may be experiencing.
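In spirit, the wrapper analysis is a tally: each missed question is matched to one or more wrapper sections, and the sections implicated most often point to where corrective action should focus. A rough sketch of that tallying follows; the section names are from the wrapper described above, but the counting logic is my own illustration, not the actual form.

```python
from collections import Counter

# The five wrapper sections: three stages of learning,
# plus exam skills and environmental factors.
SECTIONS = (
    "understanding and building context",
    "consolidation",
    "application",
    "exam skills",
    "environmental factors",
)

def diagnose(matched_sections):
    """Given the wrapper section matched to each missed exam question,
    return sections ordered from most to least frequently implicated."""
    return Counter(matched_sections).most_common()
```

For instance, `diagnose(["consolidation", "application", "consolidation"])` puts consolidation first, suggesting the student understood the material but did not retain it.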

Two of the most common issues that I diagnose involve illusions of learning [5]. Students who suffer from the ‘illusion of knowledge’ often mistake their understanding of a topic for mastery. These students anticipate a high grade on an assessment but end up frustrated and confused when they receive a much lower one than expected. Information from the S.A.L.A.M.I. wrapper can help them realize that although they may have understood the concept being taught, they could not effectively recall important facts and apply them. Students who suffer from the ‘illusion of productivity’ often spend extensive time preparing for an exam; however, the techniques they use are extremely passive. Commonly used passive study strategies include highlighting, recopying and re-reading notes, and listening to audio/video recordings of lectures in their entirety. The pre-exam checklist can help students identify the learning strategies they are using and reflect on their effectiveness. When I encounter students favoring passive learning strategies, I use the analogy of trying to dig a six-foot-deep hole with a spoon: “You will certainly work hard for hours moving dirt with a spoon, but you would be a lot more productive if you learned how to use a shovel.” The shovel in this case represents adopting strategies such as distributed practice, self-testing, interleaving, and application practice.

Rather than relying on anecdotal advice from classmates or old habits that are no longer working, students should seek help early, consistently practice effective and efficient study strategies, and remember that digesting information (e.g., a S.A.L.A.M.I.) in small doses is always more effective at ‘keeping the information down’ so it may be applied and utilized successfully later.

  1. Kornell, N., Bjork, R. The promise and perils of self-regulated study. Psychon Bull Rev. 2007;14 (2): 219-224.
  2. Karpicke, J.D., Butler, A.C., Roediger, H.L. Metacognitive strategies in student learning: Do students practice retrieval when they study on their own? Memory. 2009; 17: 471-479.
  3. Persky, A.M., Hudson, S. L. A snapshot of student study strategies across a professional pharmacy curriculum: Are students using evidence-based practice? Curr Pharm Teach Learn. 2016; 8: 141-147.
  4. Dunlosky, J., Rawson, K.A., Marsh, E.J., Nathan, M.J., Willingham, D.T. Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychol Sci Publ Int. 2013; 14 (1): 4-58.
  5. Koriat, A., Bjork, R.A. Illusions of competence during study can be remedied by manipulations that enhance learners’ sensitivity to retrieval conditions at test. Memory & Cognition. 2006; 34: 959-972.

James M. Culhane, Ph.D.
Chair and Professor, School of Pharmacy, Notre Dame of Maryland University