To Curve or Not to Curve Revisited

Yellow traffic signs showing a bell curve and a stylized graph referencing criterion-referenced grading.

The practice of normalizing grades, more popularly known as curving, was the subject of an Innovative Instructor post, To Curve or Not to Curve, on May 13, 2013. That article discussed both norm-referenced grading (curving) and criterion-referenced grading (not curving). As the practice of curving has become more controversial in recent years, an op-ed piece in this past Sunday’s New York Times caught my eye. In Why We Should Stop Grading Students on a Curve (The New York Times Sunday Review, September 10, 2016), Adam Grant argues that grade deflation, which occurs when teachers use a curve, is more worrisome than grade inflation. First, by limiting the number of students who can excel, curving unfairly punishes students who may have mastered the course content. Second, curving creates a “toxic” environment, a “hypercompetitive culture” in which one student’s success means another’s failure.

Grant, a professor of psychology at the Wharton School at the University of Pennsylvania, cites evidence that curving is a “disincentive to study.” Taking observations from his work as an organizational psychologist and applying those in his classroom, Grant has found he could both disrupt the culture of cutthroat competition and get students to work together as a team to prepare for exams. Teamwork has numerous advantages in both the classroom and the workplace as Grant details. Another important aspect is “…that one of the best ways to learn something is to teach it.” When students study together for an exam they benefit from each other’s strengths and expertise. Grant details the methods he used in constructing the exams and how his students have leveraged teamwork to improve their scores on course assessments. One device he uses is a Who Wants to Be a Millionaire-type “lifeline” for students taking the final exam. While his particular approaches may not be suitable for your teaching, the article provides food for thought.

Because I am not advocating for one way of grading over another, but rather encouraging instructors to think about why they are taking a particular approach and whether it is the best solution, I’d like to present a counterargument. In praise of grading on a curve by Eugene Volokh appeared in The Washington Post on February 9, 2015. “Eugene Volokh teaches free speech law, religious freedom law, church-state relations law, a First Amendment Amicus Brief Clinic, and tort law, at UCLA School of Law, where he has also often taught copyright law, criminal law, and a seminar on firearms regulation policy.” He counters some of the standard arguments against curving by pointing out that students and exams vary from year to year, making it difficult to draw consistent lines between, say, an A- and a B+ exam. This may be even more difficult for a less experienced teacher. Volokh also believes in the value of the curve for reducing the pressure to inflate grades. He points out that competing law schools tend to align their curves, making curving an accepted practice among law school faculty. He also suggests some tweaks to curving that strengthen its application.

As was pointed out in the earlier post, curving is often used in large lecture or lab courses that may have multiple sections and graders, as it provides a way to standardize grades. However, that issue may be resolved by instructing multiple graders how to assign grades based on a rubric. See The Innovative Instructor on creating rubrics and calibrating multiple graders.

Designing effective assessments is another important skill for instructors to learn, and one that can eliminate the need to use curving to adjust grades on a poorly conceived test. A good place to start is Brown University’s Harriet W. Sheridan Center for Teaching and Learning webpages on designing assessments where you will find resources compiled from a number of Teaching and Learning Centers on designing “assessments that promote and measure student learning.”  The topics include: Classroom Assessment and Feedback, Quizzes, Tests and Exams, Homework Assignments and Problem Sets, Writing Assignments, Student Presentations, Group Projects and Presentations, Labs, and Field Work.

Macie Hall, Instructional Designer
Center for Educational Resources


Image Source: © Reid Sczerba, 2013.

Quick Tips: Grading Essays and Papers More Efficiently

If you are among those who don’t teach during the summers, grading papers may be the furthest thing from your mind at the moment. Before we know it, however, a new semester will be starting. And now is a good time to be thinking about new directions in your assessment and evaluation of student work, especially if your syllabus will need changing as a result.

Male instructor's head between two stacks of papers.

Earlier this week (June 22, 2015) an article in The Chronicle of Higher Education by Rob Jenkins, an associate professor of English at Georgia Perimeter College, Conquering Mountains of Essays: How to effectively and fairly grade a lot of papers without making yourself miserable, caught my attention. Even the most dedicated instructors find grading to be a chore.

Jenkins, who teaches several writing-intensive courses every semester, notes that it is easy to take on the pose of a martyr when faced with stacks and stacks of multiple-paged papers, especially when the process is repeated a few times for each class. He offers eight guidelines for keeping grading in balance with the aspects of teaching that are more enjoyable. Jenkins proposes that you:

  1. Change your bad attitude about grading. Grading is an integral part of teaching. View grading student work as an opportunity to reinforce class concepts and use misconceptions that arise in papers as a basis for class discussion.
  2. Stagger due dates. Plan in advance and have students in different sections turn in essays on different dates.
  3. Break it down. Determine an optimum number of papers to grade at one sitting. Take a break for an hour before starting another session.
  4. Schedule grading time. Literally. Put it on your calendar.
  5. Have a realistic return policy. Jenkins says, “I’ve chosen to define ‘a reasonable amount of time’ as one week, or two class sessions. Occasionally, if I get four stacks of papers in the same week, it might take me three class meetings to finish grading.”
  6. Be a teacher, not an editor. Stay out of the weeds and focus on the major problems with the essay. Jenkins limits editing “to situations where a simple change of wording or construction might have broader application than to that one essay.”
  7. Limit your comments. For undergraduates, a few observations will be more useful as a teaching strategy than pages of commentary. Jenkins tries to offer one positive comment and three suggestions for improvement.
  8. Limit grading time on each essay. Following the suggestions above will help you reduce the time you need to spend on each paper.

One thing Jenkins doesn’t mention is using a rubric for grading. Rubrics can be a powerful tool for consistent grading across the class or sections, as well as a means for students to understand how the assignment is being evaluated. See previous Innovative Instructor posts on rubrics: Creating Rubrics and Sharing Assignment Rubrics with Your Students.

You might also be interested in some of The Innovative Instructor’s past posts on grading: Feedback codes: Giving Student Feedback While Maintaining Sanity and Quick Tips: Paperless Grading.

******************************************************************************************************

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Microsoft Clip Art

Using the Critique Method for Peer Assessment

As a writer I have been an active participant in a formal critique group facilitated by a professional author and editor. The critique process, for those who aren’t familiar with the practice, involves sharing work (traditionally, writing and studio arts) with a group to review and discuss. Typically, the person whose work is being critiqued must listen without interrupting as others provide comments and suggestions. Critiques are most useful if a rubric and a set of standards for review are provided and adhered to during the commentary. For example, in my group, we are not allowed to say, “I don’t like stories that are set in the past.” Instead we must provide specific examples to improve the writing: “In terms of authoritative writing, telephones were not yet in wide use in 1870. This creates a problem for your storyline.” After everyone has made their comments, the facilitator adds comments and summarizes, correcting any misconceptions. Then the writer has a chance to ask questions for clarification or offer brief explanations. In critique, both the creator and the reviewers benefit. Speaking personally, the process of peer evaluation has honed my editorial skills as well as improved my writing.

Looking down on a group of four students with laptops sitting at a table in discussion.

With peer assessment becoming a pedagogical practice of interest to our faculty, could the critique process provide an established model that might be useful in disciplines outside the arts? A recent post on the Tomorrow’s Professor Mailing List, Teaching Through Critique: An Extra-Disciplinary Approach, by Johanna Inman, MFA, Assistant Director of the Teaching and Learning Center at Temple University, addresses this topic.

“The critique is both a learning activity and assessment that aligns with several significant learning goals such as critical thinking, verbal communication, and analytical or evaluation skills. The critique provides an excellent platform for faculty to model these skills and evaluate if students are attaining them.” Inman notes that critiquing involves active learning, formative assessment, and community building. Critiques can be used to evaluate a number of different assignments as might be found in almost any discipline including, short papers and other writing assignments, multimedia projects, oral presentations, performances, clinical procedures, interviews, and business plans. In short, any assignment that can be shared and evaluated through a specific rubric can be evaluated through critique.

A concrete rubric is at the heart of recommended best practices for critique. “Providing students with the learning goals for the assignment or a specific rubric before they complete the assignment and then reviewing it before critique can establish a focused dialogue,” Inman writes. Additionally, prompts such as “Is this work effective and why?” or “Does this effectively fulfill the assignment?” or even “Is the planning of the work evident?” generally lead to more meaningful conversations than questions such as “What do you think?”

It is equally important to establish guidelines for the process, what Inman refers to as an etiquette for providing and receiving constructive criticism. Those on the receiving end should listen and keep an open mind. Learning to accept criticism without getting defensive is a life skill that will serve students well. Those providing the assessment, Inman says, should critique the work, not the student, and offer specific suggestions for improvement. The instructor or facilitator should foster a climate of civility.

Inman offers tips for managing class time for a critique session and specific advice for instructors to ensure a balanced discussion. For more on peer assessment generally, see the University of Texas at Austin Center for Teaching and Learning’s page on Peer Assessment. The Cornell Center for Teaching Excellence also has some good advice for instructors interested in peer assessment, answering questions about how students might perceive and push back against the activity. Peer assessment, whether using a traditional critique method or another approach, benefits students in many ways. As students learn to evaluate others’ work, they strengthen their own.

*********************************************************************************************************

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Meeting. CC BY-SA Marco Antonio Torres https://www.flickr.com/photos/torres21/3052366680/in/photostream/

Feedback codes: Giving Student Feedback While Maintaining Sanity

We heard our guest writer, Stephanie Chasteen (Associate Director, Science Education Initiative, University of Colorado at Boulder), talk about feedback codes in the CIRTL MOOC, An Introduction to Evidence-Based Undergraduate STEM Teaching, now completed, but due to run again in the near future.  She presented in Week 2: Learning Objectives and Assessment, segment 4.7.0 – Feedback Codes. Below is her explanation of this technique.


One of the most important things in learning is timely, targeted feedback.  What exactly does that mean?  It means that in order to learn to do something well, we need someone to tell us…

  • Specifically, what we can do to improve
  • Soon after we’ve completed the task.

Unfortunately, most feedback that students receive is too general to be of much use, and it usually arrives a week or two after the assignment is turned in – at which point the student is less invested in the outcome and doesn’t remember their difficulties as well.  The main reason is that we, as instructors, just don’t have the time to give students feedback that is specific to their learning difficulties – especially in large classes.

So, consider ways to give that feedback that don’t put such a burden on you.  One such method is using feedback codes.

The main idea behind feedback codes is to determine common student errors and assign each error a code. When grading papers, you (or the grader) need only write down the letter of the feedback code, and the student can refer to the list of what these codes mean to get fairly rich feedback about what they did wrong.

Example

Let me give an example of how this might work.  In a classic physics problem, you might have two carts on a track, which collide and bounce off one another.   The students must calculate the final speed of the cart.

Diagram of classic physics problem of colliding carts on a track.

Below is a set of codes for this problem that were developed by Ed Price at California State University at San Marcos.

Feedback codes table
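
The mechanics of such a code sheet amount to a small lookup table. Here is a minimal sketch in Python; the codes and comments are invented for illustration and are not Ed Price's actual set.

```python
# Hypothetical feedback codes for the colliding-carts problem.
# Each short code a grader writes on a paper expands to a fuller comment.
FEEDBACK_CODES = {
    "A": "Sign error: check the direction of each velocity before and after the collision.",
    "B": "Applied conservation of energy where conservation of momentum is needed.",
    "C": "Algebra slip: the final expression does not follow from the previous line.",
    "D": "Units are missing or inconsistent in the final answer.",
}

def expand_codes(marked_codes):
    """Turn the code letters written on a paper into full feedback text."""
    return [f"{code}: {FEEDBACK_CODES[code]}" for code in marked_codes]
```

A grader jots "A, D" in the margin; the student looks up both comments on the shared code sheet.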

How to come up with the codes?

If you already know what types of errors students make, you might come up with feedback codes on your own.  In our classes, we typically have the grader go through the student work, and come up with a first pass of what those feedback codes might look like.  This set of codes can be iterated during the grading process, resulting in a complete set of codes which describe most errors – along with feedback for improvement.

How does the code relate to a score?

Do these feedback codes correspond to the students’ grades?  They might – for example, each code might have a point value.  But, I wouldn’t communicate this to the students!  The point of the feedback codes is to give students information about what they did wrong, so they can improve for the future.  There is research that shows that when qualitative feedback like this is combined with a grade, the score trumps everything; students ignore the writing, and only pay attention to the evaluation.

Using GradeMark to provide feedback codes

Mike Reese, a doctoral student at Johns Hopkins, uses the feedback codes function in Turnitin.  The GradeMark tool in Turnitin allows the instructor to create custom feedback codes for comments commonly shared with students.  Mike provides feedback on the electronic copy of the document through Turnitin by dragging and dropping feedback codes onto the paper and writing paper-specific comments as needed.

Screen shot showing example of using GradeMark

Advantages of feedback codes

The advantages of using feedback codes are:

  1. Students get feedback without a lot of extra writing by the grader
  2. The instructor gets qualitative feedback on how student work falls into broad categories
  3. The grader uses the overall quality of the response to assign a score, rather than nit-picking the details

Another way to provide opportunities for this feedback is through giving students rubrics for their own success, and asking them to evaluate themselves or their peers – but that’s a topic for another article.


Stephanie Chasteen
Associate Director, Science Education Initiative
University of Colorado Boulder

Stephanie Chasteen earned a PhD in Condensed Matter Physics from University of California Santa Cruz.  She has been involved in science communication and education since that time, as a freelance science writer, a postdoctoral fellow at the Exploratorium Museum of Science in San Francisco, an instructional designer at the University of Colorado, and the multimedia director of the PhET Interactive Simulations.  She currently works with several projects aimed at supporting instructors in using research-based methods in their teaching.

Image Sources: Macie Hall, Colliding Carts Diagram, adapted from the CIRTL MOOC An Introduction to Evidence-Based Undergraduate STEM Teaching video 4.7.0; Ed Price, Feedback Codes Table; Amy Brusini, Screen Shot of GradeMark Example.

Creating Rubrics

Red sharpie-type marker reading "Rubrics Guiding Graders: Good Point" with an A+ marked below

Red Rubric Marker

Instructors have many tasks to perform during the semester. Among those is grading, which can be subjective and unstructured. Time spent constructing grading rubrics while developing assignments benefits all parties involved with the course: students, teaching assistants, and instructors alike. Sometimes referred to as a grading schema or matrix, a rubric is a tool for assessing student knowledge and providing constructive feedback. A rubric is composed of a list of skills or qualities students must demonstrate in completing an assignment, each with a rating criterion for evaluating the student’s performance. Rubrics bring clarity and consistency to the grading process and make grading more efficient.

Rubrics can be established for a variety of assignments such as essays, papers, lab observations, science posters, presentations, etc. Regardless of the discipline, every assignment contains elements that address an important skill or quality. The rubric helps bring focus to those elements and serves as a guide for consistent grading that can be used from year to year.

Whether used in a large survey course or a small upper-level seminar, rubrics benefit both students and instructors. The most obvious benefit is the production of a structured, consistent guideline for assigning grades. With clearly established criteria, there is less concern about subjective evaluation. Once created, a rubric can be used every time to normalize grading across sections or semesters. When the rubric for an assignment is shared with teaching assistants, it provides guidance on how to translate the instructor’s expectations for evaluating student submissions consistently. The rubric makes it easier for teaching assistants to give constructive feedback to students. In addition, the instructor can supply pre-constructed comments for uniformity in grading.

Some instructors supply copies of the grading rubric to their students so they can use it as a guide for completing their assignments. This can also reduce grade disputes. When discussing grades with students, a rubric acts as a reminder of the important aspects of the assignment and how each is evaluated.

Below are basic elements of rubrics, with two types to consider.

I. Anatomy of a rubric

All rubrics have three elements: the objective, its criteria, and the evaluation scores.

Learning Objective
Before creating a rubric, it is important to determine the learning objectives for the assignment. What you expect your students to learn will be the foundation for the criteria you establish for assessing their performance. As you are considering the criteria or writing the assignment, you may revise the learning objectives or adjust the significance of an objective within the assignment. This iteration can help you home in on the most important aspects of the assignment, choose the appropriate criteria, and determine how to weight the scoring.

Criteria
When writing the criteria (i.e., evaluation descriptors), start by describing the highest exemplary result for the objective, the lowest that is still acceptable for credit, and what would be considered unacceptable. You can express variations between the highest and the lowest if desired. Be concise by using explicit verbs that relate directly to the quality or skill that demonstrates student competency. There are lists of verbs associated with the cognitive categories found in Bloom’s taxonomy (Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation). These lists express the qualities and skills required to achieve knowledge, comprehension, or critical thinking (Google “verbs for Bloom’s Taxonomy”).

Evaluation Score
The evaluation score for the criterion can use any schema as long as it is clear how it equates to a total grade. Keep in mind that the scores for objectives can be weighted differently so that you can emphasize the skills and qualities that have the most significance to the learning objectives.
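
As a concrete sketch of weighted scoring, the following computes a percentage from per-objective criterion scores; the objectives, the 0–4 scale, and the weights are invented for illustration.

```python
# Hypothetical analytical rubric: each learning objective gets a criterion
# score on a 0-4 scale and a weight reflecting its importance.
MAX_CRITERION_SCORE = 4

rubric = {
    "thesis":       {"weight": 3, "score": 4},
    "evidence":     {"weight": 2, "score": 3},
    "organization": {"weight": 1, "score": 2},
}

def weighted_total(rubric):
    """Weighted score expressed as a percentage of the maximum possible."""
    earned = sum(r["weight"] * r["score"] for r in rubric.values())
    maximum = sum(r["weight"] * MAX_CRITERION_SCORE for r in rubric.values())
    return 100 * earned / maximum
```

Here the thesis objective counts three times as much as organization, so the same criterion score moves the total grade three times as far.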

II. Types of rubrics

There are two main types of rubrics: holistic (simple) and analytical (detailed).

Selecting your rubric type depends on how multi-faceted the tasks are and whether or not the skill requires a high degree of proficiency on the part of the student.

Holistic rubric
A holistic rubric contains broad objectives and lists evaluation scores, each with an overall criterion summary that encompasses multiple skills or qualities of the objective. This approach is simpler and relies on generalizations when writing the criteria.

The criterion descriptions can list the skills or qualities as separate bullets to make it easier for a grader to see what makes up an evaluation score. Below is an example of a holistic rubric for a simple writing assignment.

Table showing an example of a holistic rubric

Analytical rubric
An analytical rubric provides a list of detailed learning objectives, each with its own rating scheme corresponding to a specific skill or quality to be evaluated using the criterion. Analytical rubrics provide scoring for individual aspects of a learning objective, but they usually require more time to create. When using analytical rubrics, it may be necessary to weight the scores using a different scoring scale or score multipliers for the learning objectives. Below is an example of an analytical rubric for a chemistry lab that uses multipliers.

Table showing an example of an analytical rubric

It is beneficial to view rubrics for similar courses to get an idea how others evaluate their course work. A keyword search for “grading rubrics” in a web search engine like Google will return many useful examples. Both Blackboard and Turnitin have tools for creating grading rubrics for a variety of course assignments.

Louise Pasternack
Teaching Professor, Chemistry, JHU

Louise Pasternack earned a Ph.D. in chemistry from Johns Hopkins. Prior to returning to JHU as a senior lecturer, Louise Pasternack was a research scientist at the Naval Research Laboratory. She has been teaching introductory chemistry laboratory at JHU since 2001 and has taught more than 7000 students with the help of more than 250 teaching assistants. She became a teaching professor at Hopkins in 2013.

Image sources: © 2014 Reid Sczerba

Quick Tips: Paperless Grading

Just in time for the end of semester assignment and exam grading marathon, The Innovative Instructor has some tips for making these tasks a bit less stressful.

Male instructor's head between two stacks of papers.

Last year we wrote about the GradeMark paperless grading system, a tool offered within Turnitin, the plagiarism detection software product used at JHU. The application is fully integrated with Blackboard, our learning management system. For assignments and assessments where you don’t wish to use Turnitin, Blackboard offers another grading option for online submissions. Recent updates to Blackboard include new features built into the assignment tool that allow instructors to easily make inline comments, highlight or strike out text, and use drawing tools for freeform edits. All this without having to handle a single piece of paper.

If you don’t use Blackboard, don’t despair. The Innovative Instructor has solutions for you, too. A recent post in one of our favorite blogs, the Chronicle of Higher Education’s Professor Hacker, titled Using iAnnotate as a Grading Tool, offers another resource. According to its creators, the iAnnotate app “turns your tablet into a world-class productivity tool for reading, marking up, and sharing PDFs, Word documents, PowerPoint files, and images.” This means that if your students submit documents in any of these formats (Professor Hacker suggests using DropBox, SkyDrive, Google Drive, or other cloud storage services for submission and return of assignments), you can grade them on your iPad using iAnnotate.

Erin E. Templeton, Anne Morrison Chapman Distinguished Professor of International Study and an associate professor of English at Converse College and author of the post, has this to say about how she uses iAnnotate’s features.

With iAnnotate, you can underline or highlight parts of the paper. I will often highlight typos, sentences that are unclear, or phrases that I find especially interesting. I can add comments to the highlight to explain why I’ve highlighted that particular word or phrase. You can also add comment boxes to make more general observations or ask questions, or if you would prefer, you can type directly on the document and adjust the font, size, and color to fit the available space.

I frequently use the stamp feature, which offers letters and numbers (I use these to indicate scores or letter grades), check marks, question marks, stars of various colors, smiley faces–even a skull and crossbones…. And if you’d rather, you can transform a word or phrase that you find yourself repeatedly typing onto the document into a stamp–I have added things like “yes and?” and “example?” to my collection. Finally, there is a pencil tool for those who want to write with either a stylus or a finger on the document.

Not an iDevice user? iAnnotate is available for Android too, although at the time of this posting it is limited to reading and annotating PDF files.

The Professor Hacker post offers additional links and resources for paperless grading and more generally for those looking to move to a paperless course environment.  Be sure to read the comments for additional solutions.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Microsoft Clip Art

To Curve or Not to Curve

A version of this post appeared in the print series of The Innovative Instructor.

Yellow traffic signs showing a bell curve and a stylized graph referencing criterion-referenced grading.

Instructors choose grading schemes for a variety of reasons. Some may select a method that reflects the way they were assessed as students; others may follow the lead of a mentor or senior faculty member in their department. To curve or not to curve is a big question. Understanding the motivations behind and reasons for curving or not curving grades can help instructors select the most appropriate grading schemes for their courses.

Curving defines grades according to the distribution of student scores. Grades are determined after all student scores for the assignment or test are assigned. Often called norm-referenced grading, curving assigns grades to students based on their performance relative to the class as a whole. Criterion-referenced grading (i.e., not curving) assigns grades without this reference. The instructor determines the threshold for grades before the assignment is submitted or the test is taken. For example, a 92 could be defined as the base threshold for an A, regardless of how many students score above or below the threshold.

Choosing to curve grades or use a criterion-referenced grading system can affect the culture of competition and/or the students’ sense of faculty fairness in a class. Curving grades provides a way to standardize grades. If a department rotates faculty responsibility for teaching a course (such as a large introductory science course), norm-referenced grading can ensure that the distribution of grades is comparable from year to year. A course with multiple graders, such as a science lab that uses a fleet of graduate students for grading, may also employ a norm-referencing technique to standardize grades across sections. In this case, standardization across multiple graders should begin with training the graders. Curving grades should not be a substitute for instructing multiple graders how to assign grades based on a pre-defined rubric (The Innovative Instructor: “Calibrating Multiple Graders”).

In addition to standardizing grades, norm-referenced grading can enable faculty to design more challenging assignments that differentiate top performers who score significantly above the mean. More challenging assignments can skew the grade distribution; norm-referenced grading can then minimize the impact on the majority of students whose scores will likely be lower.

A critique of curving grades is that some students, no matter how well they perform, will be assigned a lower grade than they feel they deserve. Shouldn’t all students have an equal chance to earn an A? For this reason, some instructors do not pre-determine the distribution of grades. The benefit of using a criterion-referenced grading scheme is that it minimizes the sense of competition among students because they are not competing for a limited number of A’s or B’s. Their absolute score, not relative performance, determines their grade.

There are multiple ways to curve grades.

Image showing a bell curve.

I. The Bell Curve

This method rescales scores using a statistical technique so that their distribution approximates a bell curve. The instructor then assigns a grade (e.g., C+) to the middle (median) score and determines grade thresholds based on the distance of scores from this reference point. A spreadsheet application like Excel can be used to normalize scores. CER staff can assist instructors in normalizing scores.
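
The same rescaling Excel performs can be sketched in a few lines of Python; the target mean of 75 and spread of 10 below are arbitrary choices, not a recommendation.

```python
import statistics

def normalize(scores, target_mean=75, target_sd=10):
    """Rescale raw scores so their distribution has a chosen mean and spread.

    Each score is converted to a z-score (its distance from the mean in
    standard deviations), then mapped onto the target scale."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    if sd == 0:  # all scores identical: nothing to reshape
        return [float(target_mean)] * len(scores)
    return [target_mean + target_sd * (s - mean) / sd for s in scores]
```

The reshaped scores keep each student's rank; only the spacing between scores changes.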

Image showing clumping.

II. Clumping

The instructor creates a distribution of the scores and identifies clusters of scores separated by breaks in the distribution, then uses these gaps as thresholds for assigning grades.
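
A minimal sketch of clumping in Python, assuming the grade labels and the minimum gap size are the instructor's choices (the values here are invented):

```python
def clump_grades(scores, grades=("A", "B", "C", "D", "F"), min_gap=5):
    """Assign letter grades by splitting rank-ordered scores wherever
    consecutive scores are separated by a gap of at least min_gap points."""
    result, tier, prev = {}, 0, None
    for s in sorted(set(scores), reverse=True):
        if prev is not None and prev - s >= min_gap and tier < len(grades) - 1:
            tier += 1  # a break in the distribution starts the next grade tier
        result[s] = grades[tier]
        prev = s
    return result
```

For example, scores of 95 and 94 clump together as A's, while the 12-point gap down to 82 starts the B tier.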

Image showing quota system.

III. Quota Systems

Often used in law schools, this approach pre-determines the number of students who can earn each grade. The instructor applies these quotas after rank-ordering student scores.
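
A sketch of a quota system in Python; the quotas themselves are invented, and in this version a tied score keeps the higher grade.

```python
def quota_grades(scores, quotas=(("A", 2), ("B", 3), ("C", 2))):
    """Rank-order scores, then hand out each grade until its quota is filled.
    Students beyond the listed quotas fall into the last grade tier."""
    ranked = sorted(scores, reverse=True)
    result, start = {}, 0
    for grade, n in quotas:
        for s in ranked[start:start + n]:
            result.setdefault(s, grade)  # a tied score keeps its higher grade
        start += n
    for s in ranked[start:]:  # anyone past the quotas gets the lowest grade
        result.setdefault(s, quotas[-1][0])
    return result
```

With quotas of two A's, three B's, and two C's, seven distinct scores fill the tiers strictly by rank.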

Image showing criterion-referenced grading.

IV. Criterion-referenced grading

Using a pre-determined scale, assessments are based on clearly defined learning objectives and grading rubrics so students know the instructor’s expectations for an A, B, C, etc.

During the 2011 Robert Resnick Lecture at Johns Hopkins, Carl Wieman, Nobel Laureate and Associate Director for Science in the White House Office of Science and Technology Policy, argued that most instructors are not trained to create valid assessments of student learning. Curving can be used as a tool to adjust grades on a poorly designed test, but consistent use of curving should not be a substitute for designing assessments that accurately measure what the instructor wants students to learn by the end of the course. CER staff are happy to talk to faculty about defining learning objectives and/or strategies for designing challenging and accurate student assessment instruments.

Additional Resources

• Campbell, C. (2012). Learning-centered grading practices. Leadership, 41(5), 30-33.

• Jacobson, N. (2001). A method for normalizing students’ scores when employing multiple graders. ACM SIGCSE Bulletin, 33(4), 35-38.

• Joe Champion’s Grading Transformation Spreadsheet. This spreadsheet automatically curves students’ scores after the instructor copies the scores into the spreadsheet and sets a variable defining the amount of curve.

Michael J. Reese, Associate Director
Center for Educational Resources


Image Sources: © Reid Sczerba, 2013.

GradeMark Paperless Grading

GradeMark is a paperless grading system that gives instructors the ability to add comments and corrections to assignments submitted electronically. It is a tool offered within Turnitin, the plagiarism detection software product used at JHU. With its drag and drop functionality, among other features, GradeMark has the potential to save instructors a great deal of time when grading online assignments.  It is also easily integrated with Blackboard.

(Note: In order to use GradeMark, online assignments must be created using Turnitin. If using Turnitin within Blackboard, accounts are automatically created for instructors and students through the Blackboard system. If using Turnitin outside of Blackboard, the instructor is responsible for creating separate accounts for each student. More information on Turnitin’s integration with Blackboard is available in Turnitin’s documentation.)

Screen shot showing example of using GradeMark

GradeMark contains several different grading features:

  • Dragging and Dropping Quickmarks – Quickmarks are frequently used comments that are readily available to drag and drop into a student’s assignment. While viewing an assignment, the instructor can select from a panel of standard Quickmarks that come with GradeMark, or from a custom set that s/he has created.  For example, the abbreviation ‘Awk.’ is a Quickmark indicating an awkward phrase. The ability to drag and drop Quickmarks to an assignment, instead of typing them over and over again, can save instructors a lot of time.
  • General Comments – Each assignment has a generous space where general comments can be added.  General comments can be used to further clarify any Quickmarks that were added as well as discuss the assignment as a whole.
  • Voice Comments – A recent addition to GradeMark is the ability to add voice comments. A voice comment of up to three minutes can be added to each assignment. An instructor can use the computer’s built-in microphone to easily record the message.
  • Rubrics – Rubrics created within GradeMark can help streamline the grading process by using a ‘scorecard’ approach. Specific criteria and scores are defined in a rubric that is then associated with an assignment. Instructors grade the assignment by filling in the scores based on the evaluative criteria in the rubric. There is also the option of associating Quickmarks with rubrics when they are added to the assignment.

Students are able to view their graded assignments when the ‘post date’ is reached. The post date is set by the instructor when setting up the assignment. Students have the option to print or save a copy of the graded assignment and can view only their own submissions.

GradeMark Logo showing grade book and apple

Advantages:

  • Flexibility in marking up assignments – Quickmarks, rubrics, text, and voice comments are all available.
  • Time saved dragging and dropping reusable comments.
  • Increased consistency in grading.
  • Clear feedback to students, instead of ‘scribbled margins.’
  • Opportunity to provide more detailed feedback to students including links and resources.
  • No need to download assignments – everything is web-based, stored online.
  • If the instructor is using Blackboard, when the assignment is graded the grade is automatically transferred and recorded into the Blackboard Grade Center.

Amy Brusini, Course Management Training Specialist
Center for Educational Resources


Image sources: Amy Brusini screen shot of GradeMark example; GradeMark logo

Using a Rubric for Grading Assignments

Rubric comes from the Latin word rubrica meaning red chalk. In early medieval manuscripts the first letter of an important paragraph was often enlarged, painted in red, and called a rubric, leading to definitions of the term denoting the authority of what was written “under the rubric.” In the academic world the term has come to mean an authoritative rule or guide for assessment.

Instructor grading using a rubric

Most faculty, when preparing a graded assignment or exam, have expectations about how it should be completed, what will constitute a correct answer, or what will make the difference between an A and a C on a paper. Formalizing those thoughts into a written rubric – a template or checklist where those expectations are specified – has real advantages. First, it can save time when it comes to grading the assignment or test. Second, if you have Teaching Assistants, they will have a clear understanding of how to grade, and grading will be consistent across the sections. Third, it will make it easy to explain to students why they didn’t get that A they thought they deserved.
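To make the “template or checklist” idea concrete, a rubric turns grading into checking off one level per criterion and summing the points. The criteria, levels, and point values below are invented for the example; any real rubric would use the instructor’s own:

```python
# Hypothetical rubric for an essay: each criterion has point values per level.
rubric = {
    "thesis":       {"excellent": 30, "adequate": 20, "weak": 10},
    "evidence":     {"excellent": 40, "adequate": 25, "weak": 10},
    "organization": {"excellent": 30, "adequate": 20, "weak": 10},
}

def score_paper(selections):
    """Sum the points for the level the grader checked for each criterion."""
    return sum(rubric[criterion][level] for criterion, level in selections.items())

total = score_paper({"thesis": "excellent",
                     "evidence": "adequate",
                     "organization": "adequate"})
```

Because every grader works from the same point table, two TAs scoring the same paper should land on the same total, which is exactly the consistency benefit described above.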

For a graded paper or project, it can be very helpful to share the rubric with the students when you give them the assignment. Seeing the rubric will help them to focus on what you feel is important. They will have a better understanding of the assignment and you will not only see better results, but have an easier time with the grading.

For more about creating rubrics see the CER’s Innovative Instructor print series article on Calibrating Multiple Graders: http://www.cer.jhu.edu/ii/InnovInstruct-BP_CalibratingGraders.pdf. For more on the reasons to use rubrics see: http://chronicle.com/blogs/profhacker/is-it-too-early-to-think-about-grading/22660.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image source: Microsoft Clip Art, edited by Macie Hall