Resources for Peer Learning and Peer Assessment

Several weeks ago our colleagues in the Center for Teaching and Learning at the Johns Hopkins School of Public Health presented the very informative half-day symposium Peer to Peer: Engaging Students in Learning and Assessment. Speakers presented on their real-life experiences implementing peer learning strategies in the classroom, and hands-on activities demonstrated the efficacy of peer-to-peer learning. Two presentations focused on peer assessment, highlighting the data on how peer assessment measures up to instructor assessment and giving examples of its use. If you missed it, the presentations were recorded and are now available.

A previous post highlighted Howard Rheingold’s presentation From Pedagogy to Peeragogy: Social Media as Scaffold for Co-learning. You will need to bring up the slides separately – there is a link for them on the CTL page.

For additional material on peer learning and assessment, the JHSPH CTL resources page is loaded with information on the subject, outlining best practices and highlighting references and examples.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Microsoft Clip Art

The Characteristics of High-Quality Formative Assessments

As we explore different theories of learning, two points seem salient: that students’ understanding of intelligence affects their self-perception, their determination, their motivation, and their achievement (Dweck, 2002); and that a student’s ability to self-regulate learning, to be metacognitive, ensures more successful learning and achievement (Ormrod, 2012, pp. 352-353). As instructors plan curriculum and assessments, they ought to consider how to use these points as guides to ensure student learning and success.

Formative assessment, understood as both a tool for instructors to gauge student learning and a teaching method, works iteratively with student understanding of intelligence and learner-regulation. That is, formative assessment is based on the idea that learners should learn to take control of their learning, and that intelligence is a malleable quality. In turn, formative assessment improves self-reflection in students and reinforces the idea that intelligence can be increased as opposed to being a fixed entity, reflecting Carol S. Dweck’s important work on growth mindset, discussed in a recent Innovative Instructor post.

An understanding of just what formative assessment entails highlights the recursive relationships of formative assessment, self-reflection, and a malleable view of intelligence. Lorrie Shepard describes formative assessment as a process through which an instructor and a student come to better understand both the learning goals and the student’s work towards those goals in order to “alter the course of instruction and thus support the development of greater competence” (2005, p. 67). This definition identifies formative assessment as a process of feedback that improves student learning.

Using formative feedback as a teaching method means that a classroom becomes the locus of ongoing dialogue that helps students measure and improve as they work to meet goals, expectations, and objectives.  The instructor takes in information about student progress and understanding, which creates the opportunity for a feedback loop that the instructor can use to shape teaching.  It is the moment when student progress shapes instruction that formative feedback becomes formative assessment.

When practiced effectively, this iterative relationship between instruction, feedback, student adjustment, and instructional adjustment maps onto self-reflection and a view of malleable intelligence. As instructors provide formative feedback to students, they give students the tools to assess their own progress toward learning goals. Over time, students learn self-reflective strategies (Shepard, 2005, p. 69; Wiggins, 2004, pp. 2-3, 6), allowing for moments such as the one Black and Wiliam noted, when “one class, subsequently taught by a teacher not emphasizing assessment for learning, surprised that teacher by complaining, ‘Look, we’ve told you we don’t understand this. Why are you going on to the next topic?’” (2004, p. 36). As students reveal their learning progress, either directly (as in the example above) or indirectly through tasks that foster formative feedback, instructors have the opportunity to adapt their instruction. As teaching becomes more closely aligned with student progress, students are given increasingly refined opportunities for comprehension or alignment with expectations. As students chart their own progress, they implicitly buy into the idea that they can improve their performance by making changes in their approach (Black & Wiliam, 2004, p. 30; Shepard, 2000, p. 43; Wiggins, 2004, p. 5). They come to understand, either overtly or tacitly, that their achievement is based on effort, not an unchanging quantity of intelligence (Shepard, 2005, p. 68; Lipnevich & Smith, 2009b, p. 364). When formative assessment works, students become self-regulating learners who practice self-reflection and learn a malleable view of intelligence—and are more motivated and more likely to achieve (Dweck, 2002).

Given the value of formative assessment, how can instructors use the characteristics of exemplary formative assessment as they plan their courses? Rather than inserting a few well-crafted formative assessments into the curriculum, instructors should understand that adopting formative assessment means implementing a course-long instructional approach. Specifically, instructors can use formative feedback in every class through effective questioning strategies that elicit information about student understanding and help students monitor and adjust their learning (Black & Wiliam, 2004, pp. 25-27). Instructors can assess students’ prior knowledge and use “knowledge-activation routines” such as the K-W-L strategy to “develop students’ metacognitive abilities while providing relevant knowledge connections for specific units of study” (Shepard, 2005, p. 68). Comments on work, marking of papers (Black & Wiliam, 2004, pp. 27-31; Lipnevich & Smith, 2009a, 2009b), peer assessment, self-critique exercises (Black & Wiliam, 2004, pp. 31-33), one-on-one tutorials, small group remediation, instructor and student modeling, analysis of exemplars (Wiggins, 2004), and revision exercises can be used throughout.

Although methods may be similar across disciplines, the precise use of formative feedback will naturally vary between disciplines (Black & Wiliam, 2004, pp. 36-37; Shepard, 2000, p. 36). Nonetheless, Black and Wiliam (2004) and Shepard (2005) stress that adopting formative assessment as an instructional approach requires a cultural change within a learning community. Because students activate and practice self-reflective strategies in an effective formative feedback loop, they ought to be given a chance to develop and hone these skills in every classroom. Since formative assessment relies on students clearly understanding the expected outcomes of their learning and work, they need exemplars. If instructors within a department, discipline, or, ideally, school can agree upon the characteristics of exemplary work and learning, student self-regulation is more natural and more likely to be accurate.

References

Black, P. & Wiliam, D. (2004). The Formative Purpose: Assessment Must First Promote Learning. Yearbook of the National Society for the Study of Education, 103(2), 20-50.

Dweck, C. (2002). Messages That Motivate: How Praise Molds Students’ Beliefs, Motivation, and Performance (in Surprising Ways). In J. Aronson (Ed.), Improving Academic Achievement: Impact of Psychological Factors on Education (pp. 37-60). San Diego: Academic Press.

Lipnevich, A. & Smith, J. (2009a). “I Really Need Feedback to Learn:” Students’ Perspectives on the Effectiveness of the Differential Feedback Messages. Educational Assessment, Evaluation and Accountability, 21(4), 347-367.

Lipnevich, A. & Smith, J. (2009b). Effects of Differential Feedback on Students’ Examination Performance. Journal of Experimental Psychology: Applied, 15(4), 319-333.

Ormrod, J. (2012). Human Learning (6th ed.). Boston: Pearson.

Shepard, L. (2000). The Role of Classroom Assessment in Teaching and Learning. CSE Technical Report, University of California, Graduate School of Education & Information Studies, Los Angeles.

Shepard, L. (2005). Linking Formative Assessment to Scaffolding. Educational Leadership, 63, 66-70.

Shute, V. (2008). Focus on Formative Feedback. Review of Educational Research, 78, 153-189.

Wiggins, G. (2004). Assessment as Feedback. New Horizons for Learning Online Journal, 1-8.

Sarah Wilson is the co-director of the Upper School at Laurel School in Shaker Heights, Ohio. She has a B.A. in English from Kenyon College and an M.A. from Teachers College, Columbia University. She has taught middle and high school English for 13 years.


Image Source: Formative Assessment Wordle created by Macie Hall

A Tip of the Hat to Tomorrow’s Professor

For writing The Innovative Instructor blog posts I read a lot of books and articles related to teaching and follow various educational blogs. One resource that I’d like to pass along is the Tomorrow’s Professor e-Newsletter. Sponsored by the Stanford University Center for Teaching and Learning, Tomorrow’s Professor is edited by Richard M. Reis, Ph.D., a consulting professor in the Department of Mechanical Engineering at Stanford.

Twice a week (Mondays and Thursdays) during the academic year Reis passes along articles from journals or excerpts from books on a wide range of topics in the following categories:

  • Tomorrow’s Teaching and Learning
  • Tomorrow’s Academy
  • Tomorrow’s Graduate Students and Postdocs
  • Tomorrow’s Academic Careers
  • Tomorrow’s Research

“Tomorrows Professor seeks to foster a diverse, world-wide teaching and learning ecology among its over 49,000 subscribers at over 800 institutions and organizations in over 100 countries around the world.”

The more than 1250 posts to date have been archived, so you can search past posts as well as subscribe to receive new postings via email.

As an introduction, I found a recent post on The Three Most Time-Efficient Teaching Practices [#1218] to reflect some of the pedagogical best practices that The Innovative Instructor tries to promote. The author, Linda C. Hodges, Associate Vice Provost for Faculty Affairs and Director of the Faculty Development Center, University of Maryland, Baltimore County, states:

What constitutes productivity in teaching is a point of debate, of course, but many of us agree that we want to facilitate student learning. When faculty are challenged to change traditional teaching practices to promote better student success, all we may see looming before us is additional class preparation time. The best kept secret, however, is how much more time-efficient some of these touted teaching practices are.

The three practices she describes are 1) beginning planning with the end in mind by using backward course design, 2) generating criteria or rubrics to describe disciplinary work for students, and 3) embedding “assessment” into assessments.

Hodges asserts that spending time in the planning and development of your courses using proven pedagogical methods will save you time in your teaching in the long run. Taking a few minutes each week to peruse Tomorrow’s Professor could help you in all aspects of your academic life.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Screenshot of Tomorrow’s Professor logo
http://cgi.stanford.edu/~dept-ctl/cgi-bin/tomprof/postings.php

Quick Tips: Paperless Grading

Just in time for the end of semester assignment and exam grading marathon, The Innovative Instructor has some tips for making these tasks a bit less stressful.

Last year we wrote about the GradeMark paperless grading system, a tool offered within Turnitin, the plagiarism detection software product used at JHU. The application is fully integrated with Blackboard, our learning management system. For assignments and assessments where you don’t wish to use Turnitin, Blackboard offers another grading option for online submissions. Recent updates to Blackboard include new features built into the assignment tool that allow instructors to easily make inline comments, highlight or strike out text, and use drawing tools for freeform edits. All this without having to handle a single piece of paper.

If you don’t use Blackboard, don’t despair. The Innovative Instructor has solutions for you, too. A recent post in one of our favorite blogs, the Chronicle of Higher Education’s Professor Hacker, titled Using iAnnotate as a Grading Tool, offers another resource. According to its creators, the iAnnotate app “turns your tablet into a world-class productivity tool for reading, marking up, and sharing PDFs, Word documents, PowerPoint files, and images.” This means that if your students submit documents in any of these formats (Professor Hacker suggests using Dropbox, SkyDrive, Google Drive, or other cloud storage services for submission and return of assignments), you can grade them on your iPad using iAnnotate.

Erin E. Templeton, Anne Morrison Chapman Distinguished Professor of International Study and associate professor of English at Converse College, and author of the post, has this to say about how she uses iAnnotate’s features:

With iAnnotate, you can underline or highlight parts of the paper. I will often highlight typos, sentences that are unclear, or phrases that I find especially interesting. I can add comments to the highlight to explain why I’ve highlighted that particular word or phrase. You can also add comment boxes to make more general observations or ask questions, or if you would prefer, you can type directly on the document and adjust the font, size, and color to fit the available space.

I frequently use the stamp feature, which offers letters and numbers (I use these to indicate scores or letter grades), check marks, question marks, stars of various colors, smiley faces–even a skull and crossbones…. And if you’d rather, you can transform a word or phrase that you find yourself repeatedly typing onto the document into a stamp–I have added things like “yes and?” and “example?” to my collection. Finally, there is a pencil tool for those who want to write with either a stylus or a finger on the document.

Not an iDevice user? iAnnotate is available for Android too, although at the time of this posting it is limited to reading and annotating PDF files.

The Professor Hacker post offers additional links and resources for paperless grading and more generally for those looking to move to a paperless course environment.  Be sure to read the comments for additional solutions.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Microsoft Clip Art

Should you stop telling your students to study for exams?

The Innovative Instructor recently came across a thought-provoking article by David Jaffee in the Chronicle of Higher Education entitled Stop Telling Students to Study for Exams. In a nutshell, Jaffee advocates for telling students that they should study for learning and understanding rather than for tests or exams. He reminds us that just because content is covered in class does not mean that students really learn it. Regurgitating information for an exam does not equal long-term retention. He points out that there are real consequences to this traditional approach.

On the one hand, we tell students to value learning for learning’s sake; on the other, we tell students they’d better know this or that, or they’d better take notes, or they’d better read the book, because it will be on the next exam; if they don’t do these things, they will pay a price in academic failure. This communicates to students that the process of intellectual inquiry, academic exploration, and acquiring knowledge is a purely instrumental activity—designed to ensure success on the next assessment.

His claims are backed with evidence. Numerous studies have shown that students who use rote memorization to cram for tests and exams do not retain the information studied over the long term. Real learning, which involves retention and transfer of knowledge to new situations, is a complicated process reflected by the vast amount of research on the subject.

As a side note, for those interested in learning more about cognitive development and student learning, there is a nice summary of key studies and models in James M. Lang’s book On Course: A Week by Week Guide to Your First Semester of College Teaching [Harvard University Press, 2008]. See Week 7, Students as Learners, for an overview and bibliography.

Instead of a cumulative final exam, Jaffee recommends using formative and authentic assessments, which “[u]sed jointly…can move us toward a healthier learning environment that avoids high-stakes examinations and intermittent cramming.” Formative assessments, performed in class, provide opportunities for students to understand where their knowledge gaps are. [See The Innovative Instructor 2013 GSI Symposium Breakout Session 2: Formative Assessment and Teaching Tips: Classroom Assessment.] Authentic assessments allow students “to demonstrate their abilities in a real-world context.” Examples include group and individual projects, in-class presentations, multi-media assignments, and poster sessions.

The article has obviously provoked some controversy as evidenced by the number of comments made – 225 as of this posting. One of the commenters supporting Jaffee with several rebuttals to critics is Robert Talbert, Professor of Mathematics at Grand Valley State University in Allendale, Michigan, and author of The Chronicle of Higher Education blog Casting Out Nines. Talbert has blogged extensively on his experiences with flipping his classroom.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Microsoft Clip Art

To Curve or Not to Curve

A version of this post appeared in the print series of The Innovative Instructor.

Instructors choose grading schemes for a variety of reasons. Some may select a method that reflects the way they were assessed as students; others may follow the lead of a mentor or senior faculty member in their department. To curve or not to curve is a big question. Understanding the motivations behind and reasons for curving or not curving grades can help instructors select the most appropriate grading schemes for their courses.

Curving defines grades according to the distribution of student scores. Grades are determined after all student scores for the assignment or test are assigned. Often called norm-referenced grading, curving assigns grades to students based on their performance relative to the class as a whole. Criterion-referenced grading (i.e., not curving) assigns grades without this reference. The instructor determines the threshold for grades before the assignment is submitted or the test is taken. For example, a 92 could be defined as the base threshold for an A, regardless of how many students score above or below the threshold.

Choosing to curve grades or use a criterion-referenced grading system can affect the culture of competition and/or the students’ sense of faculty fairness in a class. Curving grades provides a way to standardize grades. If a department rotates faculty responsibility for teaching a course (such as a large introductory science course), norm-referenced grading can ensure that the distribution of grades is comparable from year to year. A course with multiple graders, such as a science lab that uses a fleet of graduate students for grading, may also employ a norm-referencing technique to standardize grades across sections. In this case, standardization across multiple graders should begin with training the graders. Curving grades should not be a substitute for instructing multiple graders how to assign grades based on a pre-defined rubric (The Innovative Instructor: “Calibrating Multiple Graders”).

In addition to standardizing grades, norm-referenced grading can enable faculty to design more challenging assignments that differentiate top performers who score significantly above the mean. More challenging assignments can skew the grade distribution; norm-referenced grading can then minimize the impact on the majority of students whose scores will likely be lower.

A critique of curving grades is that some students, no matter how well they perform, will be assigned a lower grade than they feel they deserve. Shouldn’t all students have an equal chance to earn an A? For this reason, some instructors do not pre-determine the distribution of grades. The benefit of using a criterion-referenced grading scheme is that it minimizes the sense of competition among students because they are not competing for a limited number of A’s or B’s. Their absolute score, not relative performance, determines their grade.

There are multiple ways to curve grades.

I. The Bell Curve

The instructor normalizes scores using a statistical technique to reshape the distribution into a bell curve, then assigns a grade (e.g., C+) to the middle (median) score and determines grade thresholds based on the distance of scores from this reference point. A spreadsheet application like Excel can be used to normalize scores, and CER staff can assist instructors with this process.
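For instructors who prefer a script to a spreadsheet, the same idea can be expressed in a few lines of Python. The sketch below is illustrative only: it anchors grades to each score’s distance (in standard deviations) from the class median, and the grade bands and letter labels are assumptions, not a recommended scale.

    # A minimal sketch of norm-referenced (bell curve) grading.
    # The band cutoffs and letters are illustrative assumptions.
    from statistics import median, stdev

    def curve_grades(scores, bands=(("A", 1.5), ("B", 0.5), ("C", -0.5), ("D", -1.5))):
        """Map raw scores to letters by distance from the class median, in SD units."""
        center, spread = median(scores.values()), stdev(scores.values())
        grades = {}
        for name, raw in scores.items():
            z = (raw - center) / spread
            for letter, cutoff in bands:
                if z >= cutoff:
                    grades[name] = letter
                    break
            else:
                grades[name] = "F"  # fell below every band
        return grades

    scores = {"Ada": 91, "Ben": 78, "Cai": 84, "Dee": 66, "Eli": 73}
    print(curve_grades(scores))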

II. Clumping

The instructor creates a distribution of the scores and identifies clusters of scores separated by breaks in the distribution, then uses these gaps as a threshold for assigning grades.
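A rough sketch of this approach in Python might look like the following; treating the three widest gaps in the sorted scores as the A/B/C boundaries is an assumption made for illustration, not part of any standard recipe.

    # A minimal sketch of clumping: sort the scores, find the widest gaps
    # between adjacent scores, and use those gaps as grade boundaries.
    def clump_thresholds(scores, n_breaks=3):
        ordered = sorted(scores, reverse=True)
        # (gap size, position) for each adjacent pair, highest scores first
        gaps = [(ordered[i] - ordered[i + 1], i) for i in range(len(ordered) - 1)]
        widest = sorted(gaps, reverse=True)[:n_breaks]
        # each threshold is the lowest score of the cluster above a wide gap
        return sorted((ordered[i] for _, i in widest), reverse=True)

    scores = [95, 93, 92, 84, 83, 82, 81, 70, 69, 55]
    print(clump_thresholds(scores))  # [92, 81, 69] -> cutoffs for A, B, C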

 

III. Quota Systems

Often used in law schools, a quota system pre-determines the number of students who can earn each grade. The instructor applies these quotas after rank ordering the student scores.
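In code, a quota system reduces to rank ordering and slicing; the quota counts below (two A’s, three B’s, and four C’s in a class of ten) are purely illustrative.

    # A minimal sketch of a quota system: rank the scores and hand out a
    # pre-determined number of each grade, best scores first.
    def quota_grades(scores, quotas=(("A", 2), ("B", 3), ("C", 4))):
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        grades, i = {}, 0
        for letter, count in quotas:
            for name, _ in ranked[i:i + count]:
                grades[name] = letter
            i += count
        for name, _ in ranked[i:]:  # anyone left once the quotas are filled
            grades[name] = "D"
        return grades

    scores = {"Ada": 91, "Ben": 78, "Cai": 84, "Dee": 66, "Eli": 73,
              "Fay": 88, "Gus": 59, "Hal": 81, "Ivy": 95, "Jo": 77}
    print(quota_grades(scores))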

 

IV. Criterion-Referenced Grading

Using a pre-determined scale, the instructor bases assessments on clearly defined learning objectives and grading rubrics, so students know the expectations for an A, B, C, etc.
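By contrast, a criterion-referenced scheme needs no information about how the rest of the class performed, as this short sketch shows. The cutoffs echo the earlier example of 92 as the base threshold for an A and are not a recommended scale.

    # A minimal sketch of criterion-referenced grading: thresholds are fixed
    # in advance and do not depend on classmates' scores.
    CUTOFFS = (("A", 92), ("B", 82), ("C", 72), ("D", 62))

    def criterion_grade(score):
        for letter, cutoff in CUTOFFS:
            if score >= cutoff:
                return letter
        return "F"

    print([criterion_grade(s) for s in (95, 84, 61)])  # ['A', 'B', 'F']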

 

During the 2011 Robert Resnick Lecture at Johns Hopkins, Carl Wieman, Nobel Laureate and Associate Director for Science in the White House Office of Science and Technology Policy, argued that most instructors are not trained to create valid assessments of student learning. Curving can be used as a tool to adjust grades on a poorly designed test, but consistent use of curving should not be a substitute for designing assessments that accurately measure what the instructor wants students to learn by the end of the course. CER staff are happy to talk to faculty about defining learning objectives and/or strategies for designing challenging and accurate student assessment instruments.

Additional Resources

• Campbell, C. (2012). Learning-centered grading practices. Leadership, 41(5), 30-33.

• Jacobson, N. (2001). A method for normalizing students’ scores when employing multiple graders. ACM SIGCSE Bulletin, 33(4), 35-38.

• Joe Champion’s Grading Transformation Spreadsheet. This spreadsheet automatically curves students’ scores after the instructor copies the scores into the spreadsheet and sets a variable defining the amount of curve.

Michael J. Reese, Associate Director
Center for Educational Resources


Image Sources: © Reid Sczerba, 2013.

2013 GSI Symposium Breakout Session 2: Formative Assessment

A Report from the Trenches

We’re continuing with our reports from the JHU Gateway Sciences Initiative (GSI) 2nd Annual Symposium on Excellence in Teaching and Learning in the Sciences. Next up is “Assessing Student Learning during a Course: Tools and Strategies for Formative Assessment” presented by Toni Ungaretti, Ph.D., School of Education and Mike Reese, M.Ed., Center for Educational Resources.

Please note that links to examples and explanations in the text below were added by CER staff and were not included in the breakout session presentation.

The objectives for this breakout session were to differentiate summative and formative assessment, review and demonstrate approaches to formative assessment, and describe how faculty use assessment techniques to engage in scholarly teaching.

Summarizing Dr. Ungaretti’s key points:

Assessment is a culture of continuous improvement that parallels the University’s focus on scholarship and research. It ensures learners’ performance, program effectiveness, and unit efficiency. It is an essential feature in the teaching and learning process. Learners place high value on marks or grades: “Assessment defines what [learners] regard as important.” [Brown, G., Bull, J., & Pendlebury, M. 1997. Assessing Student Learning in Higher Education. Routledge.]  Assessment ensures that what is important is learned.

Summative Assessment is often referred to as assessment of learning. This is regarded as high stakes assessment – typically a test, exam, presentation, or paper at the midterm and end of a course.

Formative Assessment focuses on learning instead of assigning grades. “Creating a climate that maximizes student accomplishment in any discipline focuses on student learning instead of assigning grades. This requires students to be involved as partners in the assessment of learning and to use assessment results to change their own learning tactics.” [Fluckiger, J., Tixier y Vigil, Y., Pasco, R., and Danielson, K. (2010). Formative Feedback: Involving Students as Partners in Assessment to Enhance Learning. College Teaching, 58, 136-140.]

Effective formative assessment involves feedback. That feedback has the greatest benefit when it addresses multiple aspects of learning: feedback on the product (the completed task), feedback on progress (the extent to which the learner is improving over time), and feedback on the process (if the learner is involved in the process, feedback can be given more frequently).

Diagram showing the Three Ps of Formative Assessment

From this point on in the session, the participants engaged in active learning exercises that demonstrated various examples of formative assessment, including graphic organizers (Venn Diagrams, Mind Maps, KWL Charts, and Kaizen/T-Charts – practices that focus on continuous improvement), classroom discussion with higher-order questioning (based on Bloom’s Taxonomy), minute papers, and admit/exit slips.

Classroom discussions can tell the instructor much about student mastery of basic concepts. The teacher can initiate the discussion by presenting students with an open-ended question.

A minute paper is a quick in-class writing exercise where students answer a question focused on material recently presented, such as: What was the most important thing that you learned? What important question remains? This allows the instructor to gauge the understanding of concepts just taught.

Admit/exit slips are collected at the beginning or end of a class. Students provide short answers to questions such as: What questions do I have? What did I learn today? What did I find interesting?

There are many ways in which faculty can determine learner mastery. These may include the use of journaling or learning/response logs to gauge growth over time, constructive quizzes, modifications of games such as Jeopardy, or structures such as a guided action or Jigsaw. There are also ways to quickly check student understanding, such as thumbs-up/thumbs-down votes or i>Clickers.

Assessment may also be achieved by using “learner-involved” formative assessment.  Some ways to achieve this are through the use of three-color group quizzes, mid-term student conferencing, assignment blogs, think-pair-share, and practice presentations.

When incorporated into classroom practice, the formative assessment process provides information needed to adjust teaching and learning while they are still happening. Finally, faculty should look on formative assessment as an opportunity. No matter which methods are used, it is important that they allow students to be creative, have fun, learn, and make a difference.

Faculty may also use assessment methods as research. This allows them the opportunity to advance hypotheses-based teaching, gather data on instructional changes and student outcomes, and to prepare scholarly submissions to advance the knowledge on teaching in their discipline. Teaching as research is the deliberate, systematic, and reflective use of research methods to develop and implement teaching practices that advance the learning experiences and outcomes of students and teachers.

Cheryl Wagner, Program/Administrative Manager
Center for Educational Resources

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Macie Hall

 

GradeMark Paperless Grading

GradeMark is a paperless grading system that gives instructors the ability to add comments and corrections to assignments submitted electronically. It is a tool offered within Turnitin, the plagiarism detection software product used at JHU. With its drag and drop functionality, among other features, GradeMark has the potential to save instructors a great deal of time when grading online assignments.  It is also easily integrated with Blackboard.

(Note: In order to use GradeMark, online assignments must be created using Turnitin. If using Turnitin within Blackboard, accounts are automatically created for instructors and students through the Blackboard system. If using Turnitin outside of Blackboard, the instructor is responsible for creating separate accounts for each student. Please click here for more information on Turnitin’s integration with Blackboard.)

Screen shot showing example of using GradeMark

GradeMark contains several different grading features:

  • Dragging and Dropping Quickmarks – Quickmarks are frequently used comments that are readily available to drag and drop into a student’s assignment. While viewing an assignment, the instructor can select from a panel of standard Quickmarks that come with GradeMark, or from a custom set that s/he has created.  For example, the abbreviation ‘Awk.’ is a Quickmark indicating an awkward phrase. The ability to drag and drop Quickmarks to an assignment, instead of typing them over and over again, can save instructors a lot of time.
  • General Comments – Each assignment has a generous space where general comments can be added.  General comments can be used to further clarify any Quickmarks that were added as well as discuss the assignment as a whole.
  • Voice Comments – A recent addition to GradeMark is the ability to add voice comments. A voice comment of up to three minutes in length can be added to the assignment. An instructor can use the built-in microphone on his/her computer to easily record the message.
  • Rubrics – Rubrics created within GradeMark can help streamline the grading process by using a ‘scorecard’ approach. Specific criteria and scores are defined in a rubric that is then associated with an assignment. Instructors grade the assignment by filling in the scores based on the evaluative criteria in the rubric. There is also the option of associating Quickmarks with rubrics when they are added to the assignment.

Students are able to view their graded assignments when the ‘post date’ is reached. The post date is set by the instructor when setting up the assignment. Students have the option to print or save a copy of the graded assignment and can view only their own submissions.

GradeMark Logo showing grade book and apple

Advantages:

  • Flexibility in marking up assignments – Quickmarks, rubrics, text, voice comments all available.
  • Time saved dragging and dropping reusable comments.
  • Increased consistency in grading.
  • Clear feedback to students, instead of ‘scribbled margins.’
  • Opportunity to provide more detailed feedback to students including links and resources.
  • No need to download assignments – everything is web-based, stored online.
  • If the instructor is using Blackboard, when the assignment is graded the grade is automatically transferred and recorded into the Blackboard Grade Center.

Amy Brusini, Course Management Training Specialist
Center for Educational Resources


Image sources: Amy Brusini screen shot of GradeMark example; GradeMark logo

Using a Rubric for Grading Assignments

Rubric comes from the Latin word rubrica meaning red chalk. In early medieval manuscripts the first letter of an important paragraph was often enlarged, painted in red, and called a rubric, leading to definitions of the term denoting the authority of what was written “under the rubric.” In the academic world the term has come to mean an authoritative rule or guide for assessment.

Instructor grading using a rubric

Most faculty, when preparing a graded assignment or exam, have expectations about how it should be completed, what will constitute a correct answer, or what will make the difference between an A and a C on a paper. Formalizing those thoughts into a written rubric – a template or checklist where those expectations are specified – has real advantages. First, it can save time when it comes to grading the assignment or test. Second, if you have Teaching Assistants, they will have a clear understanding of how to grade, and grading will be consistent across the sections. Third, it will make it easy to explain to students why they didn’t get that A they thought they deserved.

For a graded paper or project, it can be very helpful to share the rubric with the students when you give them the assignment. Seeing the rubric will help them to focus on what you feel is important. They will have a better understanding of the assignment and you will not only see better results, but have an easier time with the grading.

For more about creating rubrics see the CER’s Innovative Instructor print series article on Calibrating Multiple Graders: http://www.cer.jhu.edu/ii/InnovInstruct-BP_CalibratingGraders.pdf. For more on the reasons to use rubrics see: http://chronicle.com/blogs/profhacker/is-it-too-early-to-think-about-grading/22660.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image source: Microsoft Clip Art, edited by Macie Hall

Teaching Tips: Classroom Assessment

Increasing emphasis is being placed on assessment, and many faculty are looking for evaluation practices that extend beyond giving a mid-term and final exam. In particular, the concept of non-graded classroom assessment is gaining traction. In their book Classroom Assessment Techniques, Thomas Angelo and Patricia Cross (Jossey-Bass, 1993) stress the importance of student evaluation that is “learner-centered, teacher-directed, mutually beneficial, formative, context-specific, ongoing, and firmly rooted in good practice.”

Students in a classroom.

While the authors describe in detail numerous techniques for ascertaining in a timely manner whether or not students are learning what is being taught, here are several quick and easy to implement methods:

 

The Minute Paper: At an appropriate break, ask students to answer on paper a specific question pertaining to what has just been taught. After a minute or two, collect the papers for review after class, or, to promote class interaction, ask students to pair off and discuss their responses. After a few minutes, call on a few students to report their answers and results of discussion. If papers are turned in, there is value to both the anonymous and the signed approach. Grading, however, is not the point; this is a way to gather information about the effectiveness of teaching and learning.

In-Class Survey: Think of this as a short, non-graded pop quiz. Pass out a prepared set of questions, or have students provide answers on their own paper to questions on a PowerPoint/Keynote slide. Focus on a few key concepts. Again, the idea is to assess whether students understand what is being taught.

Exit Ticket: Select one of the following items and near the end of class ask your students to write on a sheet of paper 1) a question they have that didn’t get answered, 2) a concept or problem that they didn’t understand, 3) a bullet list of the major points covered in class, or 4) a specific question to assess their learning. Students must hand in the paper to exit class. Allow anonymous responses so that students will answer honestly. If you do this regularly, you may want to put the exit ticket question on your final PowerPoint/Keynote slide.

Tools that can help with assessment

Classroom polling devices (a.k.a. clickers) offer an excellent means of obtaining evidence of student learning. See http://www.cer.jhu.edu/clickers.html for information about the in-class voting system used at JHU. Faculty who are interested in learning more should contact Brian Cole in the CER.

Faculty at the JHU School of Nursing have been piloting an online application called Course Canary to obtain student assessment data. Formative course evaluation surveys allow faculty to collect student feedback quickly and anonymously. A free account is available (offering two online surveys and two exit ticket surveys) at: https://coursecanary.com/.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image source: Microsoft Clip Art