We heard our guest writer, Stephanie Chasteen (Associate Director, Science Education Initiative, University of Colorado at Boulder), talk about feedback codes in the CIRTL MOOC, An Introduction to Evidence-Based Undergraduate STEM Teaching, which has now completed but is due to run again in the near future. She presented in Week 2: Learning Objectives and Assessment, segment 4.7.0 – Feedback Codes. Below is her explanation of this technique.
One of the most important things in learning is timely, targeted feedback. What exactly does that mean? It means that in order to learn to do something well, we need someone to tell us…
- Specifically, what we can do to improve
- Soon after we’ve completed the task.
Unfortunately, most feedback that students receive is too general to be of much use, and it usually arrives a week or two after the assignment was turned in – at which point the student is less invested in the outcome and doesn't remember their difficulties as well. The main reason is that we, as instructors, just don't have the time to give students feedback that is specific to their learning difficulties – especially in large classes.
So, consider ways to give that feedback that don’t put such a burden on you. One such method is using feedback codes.
The main idea behind feedback codes is to identify common student errors and assign each of those errors a code. When grading papers, you (or the grader) need only write down the letter of the feedback code, and the student can refer to the list of what these codes mean to get fairly rich feedback about what they did wrong.
Let me give an example of how this might work. In a classic physics problem, you might have two carts on a track, which collide and bounce off one another. The students must calculate the final speeds of the carts.
Below is a set of codes for this problem that were developed by Ed Price at California State University at San Marcos.
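In software terms, a set of feedback codes is just a lookup table from a short code to a fuller comment. The sketch below shows the idea in Python; the codes and messages are hypothetical stand-ins for illustration, not the actual codes Ed Price developed for this problem.

```python
# A set of feedback codes is essentially a lookup table: a short code
# the grader writes on the paper, mapped to a fuller comment the
# student can look up afterward. These codes are hypothetical examples.
FEEDBACK_CODES = {
    "A": "Momentum conservation was not applied to the collision.",
    "B": "Sign error in the velocity of one of the carts.",
    "C": "Units missing or inconsistent in the final answer.",
}

def expand_codes(marked):
    """Expand the letter codes written on one paper into full feedback."""
    return [f"{code}: {FEEDBACK_CODES[code]}" for code in marked]

# A paper marked "A, C" yields two specific comments for the student.
for line in expand_codes(["A", "C"]):
    print(line)
```

Because the grader writes only a letter per error, the per-paper effort stays low even though each student receives a specific, substantive comment.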
How to come up with the codes?
If you already know what types of errors students make, you might come up with feedback codes on your own. In our classes, we typically have the grader go through the student work, and come up with a first pass of what those feedback codes might look like. This set of codes can be iterated during the grading process, resulting in a complete set of codes which describe most errors – along with feedback for improvement.
How does the code relate to a score?
Do these feedback codes correspond to the students' grades? They might – for example, each code might carry a point value. But I wouldn't communicate this to the students! The point of the feedback codes is to give students information about what they did wrong, so they can improve in the future. Research shows that when qualitative feedback like this is combined with a grade, the grade trumps everything: students ignore the written comments and pay attention only to the score.
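One way to implement that separation – a hypothetical sketch, not a prescribed tool – is to attach a point deduction to each code for the grader's use, while showing students only the qualitative comments:

```python
# Hypothetical sketch: each feedback code carries both a comment and a
# point deduction. The deduction drives the score, but only the
# comment is ever shown to the student.
CODES = {
    "A": {"comment": "Momentum conservation was not applied.", "deduction": 3},
    "B": {"comment": "Sign error in one cart's velocity.", "deduction": 1},
}

def score(total_points, marked):
    """Grader-side: subtract each marked code's deduction from the total."""
    return max(0, total_points - sum(CODES[c]["deduction"] for c in marked))

def student_view(marked):
    """Student-side: qualitative feedback only, with no point values."""
    return [CODES[c]["comment"] for c in marked]

# A 10-point problem marked with codes A and B scores 6 for the
# gradebook, while the student sees only the two comments.
```

Keeping the deduction out of `student_view` reflects the research above: the student's attention stays on what went wrong rather than on the number.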
Using GradeMark to provide feedback codes
Mike Reese, a doctoral student at Johns Hopkins, uses the feedback code function in Turnitin. The GradeMark tool in Turnitin allows the instructor to create custom feedback codes for comments commonly shared with students. Mike provides feedback on the electronic copy of the document through Turnitin by dragging and dropping feedback codes onto the paper and writing paper-specific comments as needed.
Advantages of feedback codes
The advantages of using feedback codes are:
- Students get specific feedback without a lot of extra writing
- The instructor gets qualitative feedback on how student work falls into broad categories
- The grader uses the overall quality of the response to assign a score, rather than nit-picking the details
Another way to provide opportunities for this kind of feedback is to give students rubrics and ask them to evaluate themselves or their peers – but that's a topic for another article.
- Assessments that Support Student Learning (2-page research summary): http://www.colorado.edu/sei/fac-resources/files/Assessment_That_Support_Learning.pdf.
- An example of using feedback codes for mathematical modeling problems: Diefes-Dux, Heidi; Zawojewski, Judith S.; Hjalmarson, Margret A.; Cardella, Monica E. A Framework for Analyzing Feedback in a Formative Assessment System for Mathematical Modeling Problems, Engineering Education, 2012, 101(2), 375–406.
- SERC at Carleton is geosciences focused, with good resources on rubrics and examples: http://serc.carleton.edu/introgeo/assessment/scorerubrics.html.
- The Rutgers Physics and Astronomy Education Research group has developed detailed rubrics on science skills: http://paer.rutgers.edu/ScientificAbilities/Rubrics/default.aspx.
Associate Director, Science Education Initiative
University of Colorado Boulder
Stephanie Chasteen earned a PhD in Condensed Matter Physics from the University of California Santa Cruz. She has been involved in science communication and education since that time, as a freelance science writer, a postdoctoral fellow at the Exploratorium Museum of Science in San Francisco, an instructional designer at the University of Colorado, and the multimedia director of PhET Interactive Simulations. She currently works with several projects aimed at supporting instructors in using research-based methods in their teaching.
Image Sources: Macie Hall, Colliding Carts Diagram, adapted from the CIRTL MOOC An Introduction to Evidence-Based Undergraduate STEM Teaching video 4.7.0; Ed Price, Feedback Codes Table; Amy Brusini, Screen Shot of GradeMark Example.