An Evidence-based Approach to Effective Studying

Dr. Culhane is Professor and Chair of the Department of Pharmaceutical Sciences at Notre Dame of Maryland University School of Pharmacy.

If you are like me, much of your time is spent ensuring that the classroom learning experience you provide for your students is stimulating, interactive, and impactful. But how invested are we in ensuring that what students do outside of class is productive? Based on my anecdotal experience and several studies1,2,3 looking at the study strategies students employ, the answer to this question is: not nearly enough! Much like professional athletes or musicians, our students are asked to perform at a high level, mastering advanced, information-dense subjects; yet unlike these specialists, who have spent years honing the skills of their craft, very few students have had any formal training in the basic skills necessary to learn successfully. It should be no surprise to us that when left to their own devices, our students tend to mismanage their time, fall victim to distractions, and gravitate toward low-impact or inefficient learning strategies. Even if students are familiar with high-impact strategies and how to use them, it is easy for them to default back to bad habits, especially when they are overloaded with work and pressed for time.

Several years ago, I began to think seriously about and research this issue in hopes of developing an evidence-based process that would be easy for students to learn and implement. Out of this work I developed a strategy focused on the development of metacognition – thinking about how one learns. I based it on extensively studied, high-impact learning techniques including distributed learning, self-testing, interleaving, and application practice.4 I call this strategy the S.A.L.A.M.I. method. The method is named after a metaphor used by one of my graduate school professors. He argued that learning is like eating a salami: if you eat the salami one slice at a time, rather than trying to eat the whole salami in one sitting, the salami is more likely to stay with you. Many readers will see that this analogy represents the effectiveness of distributed learning over the “binge and purge” method toward which many of our students gravitate.

S.A.L.A.M.I. is a “backronym” for Systematic Approach to Learning And Metacognitive Improvement. The method is structured around typical, daily learning experiences that I refer to as the five S.A.L.A.M.I. steps:

  1. Pre-class preparation
  2. In-class engagement
  3. Post-class review
  4. Pre-exam preparation
  5. Post-assessment review

When teaching the S.A.L.A.M.I. method, I explain how each of the five steps corresponds to a different “stage” or component of learning (see Figure 1). Through mastery of the skills associated with each of the five S.A.L.A.M.I. steps, students can more efficiently and effectively master a subject area.

S.A.L.A.M.I. Steps

Figure 1

Despite its simplicity, this model provides a starting point to help students understand that learning is a process that takes time, requires the use of different learning strategies, and can benefit from the development of metacognitive awareness. Specific techniques designed to enhance metacognition and learning are employed during each of the five steps, helping students use their time effectively, maximize learning, and achieve subject mastery. Describing all the tools and techniques recommended for each of the five steps would be beyond the scope of this post, but I would like to share two that I have found useful for helping students evaluate the effectiveness of their learning and make data-driven changes to their study strategies.

Let us return to our example of professional athletes and musicians: these individuals maintain high levels of performance by consistently monitoring and evaluating the efficacy of their practice as well as reviewing their performance after games or concerts. If we translate this example to an academic environment, the practice or rehearsal becomes student learning (in and out of class) and the game or concert becomes the assessment. We often evaluate students’ formative or summative “performances” with grades and written or verbal feedback. But what type of feedback do we give them to help improve the efficacy of their preparation for those “performances?” If we do give them feedback about how to improve their learning process, is it evidence-based and directed at improving metacognition, or do we simply tell them they need to study harder or join a study group in order to improve their learning? I would contend that we could do more to help students evaluate their approach to learning outside of class and their examination performance. This is where a pre-exam checklist and exam wrapper can be helpful.

The inspiration for the pre-exam checklist came from the pre-flight checklist a pilot friend of mine uses to ensure that he and his private aircraft are ready for flight. I decided to develop a similar tool for my students that would allow them to monitor and evaluate the effectiveness of their preparation for upcoming assessments. The form is based on a series of reflective questions that help students think about the effectiveness of their daily study habits. If used consistently over time and evaluated by a knowledgeable faculty member or learning specialist, this tool can help students make sustainable, data-driven changes in their approach to learning.

Another tool that I use is called an exam wrapper. There are many examples of exam wrappers online; however, I developed my own wrapper based on the different stages or components of learning shown in Figure 1. The S.A.L.A.M.I. wrapper is divided into five sections. Three of the five sections focus on the following stages or components of learning: understanding and building context, consolidation, and application. The remaining two sections focus on exam skills and environmental factors that may impact performance. Under each of the five sections is a series of statements that describe possible reasons for missing an exam question. The student analyzes each missed question and matches it to one or more of the statements on the wrapper. Based on the results of the analysis, the student can identify the component of learning, exam skill, or environmental factor that they are struggling with and begin to take corrective action. Both the pre-exam checklist and exam wrapper can be used to help “diagnose” the learning issues that academically struggling students may be experiencing.

Two of the most common issues that I diagnose involve illusions of learning5. Students who suffer from the ‘illusion of knowledge’ often mistake their understanding of a topic for mastery. These students anticipate getting a high grade on an assessment but end up frustrated and confused when receiving a much lower grade than expected. Information from the S.A.L.A.M.I. wrapper can help them realize that although they may have understood the concept being taught, they could not effectively recall important facts and apply them. Students who suffer from the ‘illusion of productivity’ often spend extensive time preparing for an exam; however, the techniques they use are extremely passive. Commonly used passive study strategies include highlighting, recopying and re-reading notes, or listening to audio/video recordings of lectures in their entirety. The pre-exam checklist can help students identify the learning strategies they are using and reflect on their effectiveness. When I encounter students favoring passive learning strategies, I use the analogy of trying to dig a six-foot-deep hole with a spoon: “You will certainly work hard for hours moving dirt with a spoon, but you would be a lot more productive if you learned how to use a shovel.” The shovel in this case represents adopting strategies such as distributed practice, self-testing, interleaving and application practice.

Rather than relying on anecdotal advice from classmates or old habits that are no longer working, students should seek help early, consistently practice effective and efficient study strategies, and remember that digesting information (e.g., a S.A.L.A.M.I.) in small doses is always more effective at ‘keeping the information down’ so it may be applied and utilized successfully later.

  1. Kornell, N., Bjork, R. The promise and perils of self-regulated study. Psychon Bull Rev. 2007;14 (2): 219-224.
  2. Karpicke, J. D., Butler, A. C., Roediger, H. L. Metacognitive strategies in student learning: Do students practice retrieval when they study on their own? Memory. 2009; 17: 471-479.
  3. Persky, A.M., Hudson, S. L. A snapshot of student study strategies across a professional pharmacy curriculum: Are students using evidence-based practice? Curr Pharm Teach Learn. 2016; 8: 141-147.
  4. Dunlosky, J., Rawson, K.A., Marsh, E.J., Nathan, M.J., Willingham, D.T. Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychol Sci Publ Int. 2013; 14 (1): 4-58.
  5. Koriat, A., & Bjork, R. A. Illusions of competence during study can be remedied by manipulations that enhance learners’ sensitivity to retrieval conditions at test. Memory & Cognition. 2006; 34: 959-972.

James M. Culhane, Ph.D.
Chair and Professor, School of Pharmacy, Notre Dame of Maryland University

Tips for Teaching International Students

As with many of our Innovative Instructor posts, this one was prompted by an inquiry from an instructor looking for resources, in this case for teaching international students. Johns Hopkins, among other American universities, has increased the number of international students admitted over the past ten years, at both the graduate and undergraduate levels. These students bring welcome diversity to our campuses, but some of them face challenges in adapting to American educational practices and social customs. Limited fluency in English may be a barrier to their academic and social success. Following are three articles and an online guide that examine the issues and provide strategies for faculty teaching international students.

Silhouettes of people standing in a row, covered by flags of different nationalities.

First up, a scholarly article that both summarizes some of the past research on international students and reports on a study undertaken by the authors: Best Practices in Teaching International Students in Higher Education: Issues and Strategies, Alexander Macgregor and Giacomo Folinazzo, TESOL Journal, Volume 9, Issue 2, June 18, 2018, pp. 299-329. https://doi.org/10.1002/tesj.324  “This article discusses an online survey carried out in a Canadian college [Niagara College, Niagara-on-the-Lake, Ontario] that identified academic and sociocultural issues faced by international students and highlighted current or potential strategies from the input of 229 international students, 343 domestic students, and 125 professors.” The study sought to address the challenges that international students face in English-language colleges and universities, understand the difference in the perceptions of those challenges among faculty, domestic students, and the international students themselves, and suggest strategies for improving learning outcomes for international students.

International students need to know technical terms (and other vocabulary) and concepts to succeed, but complex cultural mores may hinder them from seeking assistance when needed and they may be reluctant to speak in class. These barriers exist even among students with high TOEFL (Test of English as a Foreign Language) scores. Unfamiliarity with American pedagogical practices, such as classroom participation and active learning, along with lack of awareness of American social rules and skills may further isolate these students.

The researchers used an online survey to identify the challenges that international students face and to suggest solutions. Key points in the findings include: 1) international students feel the area they most need to improve is proactive academic behavior, rather than language skills per se; 2) a lack of clarity on academic expectations of assessments and assignments hinders their success; 3) both faculty and domestic students feel that some accommodations for international students are appropriate (e.g., dictionary use in class and during exams, extra time for exams, lecture notes given out before class).

The authors conclude that “IS [International Student] input suggests professors could respond by providing clear guidelines for task expectations, aims, and instructions in multisensory formats (simplify the message without changing the material), clarifying content/format expectations with exemplars, and collecting exemplars of outstanding student work and substandard student work from past terms and using them as examples to clarify expectations.” The authors suggest faculty provide opportunities for language development, create a positive classroom climate, become informed about their students’ cultures, avoid fostering fear of error, reinforce students’ strengths, and emphasize the importance of office hours.

An article from Inside Higher Ed, Teaching International Students, Elizabeth Redden, December 1, 2014, looks at the challenges for institutions of higher education and their instructors in teaching international students and the implications for classroom “dynamics and practices”.

The author interviewed faculty at the University of Denver on the challenges they faced in teaching international students. Plagiarism is mentioned as a problem in some cases due to different practices in other countries. English as a second language (ESL) barriers were cited by a professor of classics and humanities, who has made an effort to teach a first-year seminar that compares Chinese and Western classical literature in order to bridge the cultural gap.

Faculty at the University of Denver have pushed the administration to change admission policies regarding the TOEFL, raising the score requirements. “In addition, Denver now requires admitted students who are non-native English speakers to take the university’s own English language proficiency test upon arrival. Despite having already achieved the standardized test scores required for admission, students who score poorly on Denver’s assessment may be required to enroll full-time in the university’s English Language Center before being allowed to begin their degree program.” This has meant potentially losing international students to competing undergraduate programs, but the school wanted to make sure that its students had a positive classroom experience.

Several faculty members describe courses they have taught that “…will serve to enhance the quality of education by creating the opportunity for more cross-cultural conversations and a kind of perspective-shifting.” This is an ideal situation, of course, and not all instructors have the flexibility to create new courses to take advantage of global viewpoints. Nonetheless, there are other strategies University of Denver faculty shared to improve learning experiences for international students, as well as their domestic counterparts.

Students may self-segregate when seated in the classroom, so breaking up cultural groups and ensuring that students work across nationalities is important. Instructors should be aware that cultural references, slang, and idioms may not be understood by international students. Careful use of PowerPoint slides to reinforce course concepts, and sharing those slides with all students, ideally in advance of class, is recommended. Learn students’ names and how to pronounce them correctly. Learn something about their countries and cultures. “Professors talked about priming non-native speakers in various ways so they would be more apt to participate in class discussions, whether by allowing students to prepare their thoughts in a homework or in-class writing assignment, starting off class with a think-pair-share type activity, or appointing a different student to be a discussion leader each week.” The University of Denver Office of Teaching and Learning provides a web page on Teaching International Students with helpful advice. Many of these recommendations are best practices for all students.

The article addresses the issues of consistency of standards and assessment. The consensus is that standards must be applied across the board, to English speakers and ESL speakers alike. Writing assignments are particularly challenging. Doug Hesse, professor and executive director of the writing program at Denver, notes that for non-native speakers, gaining fluency in writing may take five to ten years. What, then, are fair expectations in terms of grading writing assignments?

“Hesse emphasizes the need to distinguish between global problems and micro-level errors in student writing. He isolates three dimensions of student writing: ‘aptness of content and approach to the task,’ ‘rhetorical fit,’ and ‘conformity to conventions of edited American English.’ He advises that professors ‘read charitably,’ reading for ‘content and rhetorical strategy’ as much as — or, actually, even prior to — reading for surface errors.” Hesse concedes that if the errors interfere with comprehension, that’s a problem, but he focuses his attention on content and approach. And he recommends “…sharing models for writing assignments, spending class time generating ideas for a paper, reading a draft and offering feedback, and structuring long projects in stages.” These, like the suggestions above, will be beneficial to all students. The University of Denver Writing Program offers a set of Guidelines for Responding to the Writing of International Students.

The University of Michigan Center for Research on Teaching and Learning offers Teaching International Students: Pedagogical Issues and Strategies, another useful web guide for instructors. While some of the materials are specific to University of Michigan faculty, the topics Bridging Differences in Background Knowledge and Classroom Practice, Teaching Non-Native Speakers of English, Improving Climate, and Promoting Academic Integrity will be useful to all instructors.

If the deep dive of the first two articles is more than you are looking for, Teaching International Students: Six Ways to Smooth the Transition, Eman Elturki, Faculty Focus, June 29, 2018, cuts straight to the chase with practical tips. In a nutshell:

  • Communicate classroom expectations and policies clearly.
  • Encourage students to make use of office hours.
  • Discuss academic integrity.
  • Make course materials available.
  • Demystify assignment requirements.
  • Incorporate opportunities for collaborative learning.

More detail is provided on implementing these suggestions. Elturki sums up by repeating advice similar to that of the faculty at University of Denver, “…pursuing higher education in a foreign country can be challenging. Being mindful of international students in your classroom and incorporating ways to help them adapt to the new educational system can reduce their stress and help them succeed. In fact, adopting these practices have the potential to help all students, whether they grew up in the next town over or the other side of the globe.”

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay.com

Grading in the fast lane with Gradescope

[Guest post by Scott Smith, Professor, Computer Science, Johns Hopkins University]

Three speedometers for quality, grades per hour, and efficiency.

Grading can be one of the most time-consuming and tedious aspects of teaching a course, but it’s important to give prompt and meaningful feedback to your students. In large courses, aligning grading practices across multiple teaching assistants (TAs) necessitates a level of coordination that includes scheduling grading meetings, reviewing materials for correct answers, and calibrating point evaluations, all of which can take up valuable time during the semester.

In courses that teach programming, we typically assign students projects that require them to write programs to solve problems. When instructors grade this type of assignment, they have to evaluate not only the program’s results but also the student’s approach. If the results are not correct or the program doesn’t run, we have to spend time reviewing hundreds of lines of code to debug the program and give thoughtful feedback.

In the past, my method for grading assignments with my TAs may have been arduous but it worked. However, last year, no TAs were assigned to my Principles of Programming Languages course. Concerned that I wouldn’t have enough time to do all the work, I looked for another solution.

Providing consistent grading and meaningful feedback on every student submission, especially with multiple TAs, can be challenging. Typically, when grading, I would schedule a time to sit down with all of my TAs, review the assignment or exam, give each TA a set of questions to grade, pass the submissions around until all were graded, and finally calculate the grades. When a TA had a question, we could address it as a group and make the related adjustments throughout the submissions as needed. While this system worked, it was tedious and time-consuming. Occasionally, inconsistencies in the grades came up, which could prompt regrade requests from students. I kept thinking that there had to be a better way.

About a year and a half ago, a colleague introduced me to an application called Gradescope to manage the grading of assignments and exams. I spent a relatively short amount of time getting familiar with the application and used it in a course in the fall of 2016, for both student-submitted homework assignments and in-class paper exams. In the case of the homework, students would upload a digital version of the assignment to Gradescope. The application would then prompt the student to designate the areas in the document where their answers could be found, so that the application could sort and organize the submissions for ease of grading. For the in-class exams, I would have the students work on a paper-based exam that I had set up in Gradescope with the question areas established. I would then scan and upload the exams so that Gradescope could associate the established question areas with the student submissions automatically. The process of digitizing the completed tests and correlating them to the class roster was made easy with a scanner and Gradescope’s automatic roster matching feature. Gradescope became a centralized location where my TAs and I could grade student work.

There are a few ways to consider incorporating Gradescope into your course. Here is a non-exhaustive list of scenarios for both assignments and exams that can be accommodated:

  • Handwritten/drawn homework (students scan them and upload the images/PDFs)
  • Electronic written homework (students upload PDFs)
  • In-class exams (instructor scans them and uploads the PDFs)
  • Code for programming assignments (students upload their program files for auto-grading)
  • Code assignments graded by hand (students upload PDFs of code)

The real power of Gradescope is that it requires setting up a reusable rubric (a list of competencies or qualities used to assess correct answers) to grade each question. When grading, you select from or add to the rubric to add or deduct points. This keeps the grading consistent across multiple submissions. Because the rubric is established as part of the assignment, you can also update the point values at any time if you later determine that a larger or smaller point addition or deduction is advisable, and the grade calculations will update automatically.
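To make the mechanics concrete, here is a minimal sketch of the underlying idea: if graders record which rubric items apply to each submission rather than a hand-computed total, then changing an item’s point value later automatically changes every affected grade. This is an illustration only, not Gradescope’s actual data model; the rubric items, student names, and point values are invented.

```python
# Minimal sketch of rubric-based scoring (illustrative; not Gradescope's
# internal data model). Grades are derived from the rubric items applied to
# each submission, so editing an item's value updates every affected grade.

QUESTION_MAX = 10.0

# Reusable rubric for one question: each item names a reason for a deduction
# (or bonus) and carries a point value.
rubric = {
    "off_by_one_error": -2.0,
    "missing_base_case": -4.0,
    "elegant_solution_bonus": +1.0,
}

# Graders record which rubric items apply to each submission.
applied = {
    "student_a": [],
    "student_b": ["off_by_one_error"],
    "student_c": ["off_by_one_error", "missing_base_case"],
}

def score(student):
    """Derive a grade from the rubric items applied to a submission."""
    return QUESTION_MAX + sum(rubric[item] for item in applied[student])

print({s: score(s) for s in applied})  # {'student_a': 10.0, 'student_b': 8.0, 'student_c': 4.0}

# Decide later that the off-by-one deduction was too harsh? Change it once and
# every affected grade updates the next time scores are computed.
rubric["off_by_one_error"] = -1.0
print({s: score(s) for s in applied})  # {'student_a': 10.0, 'student_b': 9.0, 'student_c': 5.0}
```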

Screenshot from Gradescope--Review grade for assignment feature.

Screenshot of Gradescope’s Review Grade for an assignment

After being informed that I wouldn’t have any TAs for my Principles of Programming Languages course the following semester, I was motivated to use one of Gradescope’s features, the programming assignment auto-grader platform. Being able to automatically provide grades and feedback for students’ submitted code has long been a dream of instructors who teach programming. Gradescope offers a language-agnostic environment in which the instructor sets up the components and libraries needed for the students’ programs to run. The instructor establishes a grading script that is the basis for the analysis, providing grades and feedback for issues found in each student’s submitted program.
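To give a sense of what a grading script involves, here is a minimal sketch: it runs the student’s program against a few test cases, compares the output, and writes a score with per-test feedback. Everything here is a hypothetical illustration (the file names, test cases, and JSON layout are invented); an actual Gradescope autograder follows the platform’s own documented setup and results format instead.

```python
#!/usr/bin/env python3
# Minimal sketch of an auto-grading script (illustrative only): run the
# student's program on a set of test cases, compare output, and report a
# score with per-test feedback. File names, test cases, and the JSON layout
# are hypothetical, not Gradescope's actual specification.
import json
import subprocess

TEST_CASES = [
    {"name": "adds small numbers", "stdin": "2 3\n", "expected": "5", "points": 5},
    {"name": "handles negatives", "stdin": "-4 1\n", "expected": "-3", "points": 5},
]

def run_tests(program=("python3", "submission.py")):
    results, total = [], 0
    for case in TEST_CASES:
        try:
            proc = subprocess.run(list(program), input=case["stdin"],
                                  capture_output=True, text=True, timeout=10)
            passed = proc.stdout.strip() == case["expected"]
        except (subprocess.TimeoutExpired, OSError):
            passed = False  # crash, hang, or missing file earns no credit
        earned = case["points"] if passed else 0
        total += earned
        results.append({
            "name": case["name"],
            "score": earned,
            "max_score": case["points"],
            "output": "passed" if passed else "check your output format and edge cases",
        })
    return {"score": total, "tests": results}

if __name__ == "__main__":
    # Write results where the grading platform expects them (path is illustrative).
    with open("results.json", "w") as f:
        json.dump(run_tests(), f, indent=2)
```

The same shape works for any language the students submit in: the script only needs to be able to run the submission and report structured results.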

Overall, the use of Gradescope has reduced the time I spend grading and improved the quality of feedback that I am able to provide students. For instance, when I release grades to the students, they are able to review each of the descriptive rubric items that were used when grading their submissions, as well as any additional comments. The auto-grader was really the star feature in this case. Students were able to submit their code, determine if it would run, and make corrections before the deadline to increase their chances of a better grade. There are features to limit the number of allowed submissions, but I chose not to set a limit so that the students could use an iterative approach to getting the right solution.

Gradescope is only effective if your rubrics and grading criteria are well thought out, and the auto-grading scripts require some time to set up. Creating the grading scripts for the programming assignments may seem time-intensive, but by frontloading the work with detailed rubrics and test cases, more time is saved in the grading process. The value of this preparation scales as enrollment increases, and the rubrics and scripts can be reused when you teach the course again. With more time during the semester freed up by streamlining the grading process, my TAs and I were able to increase office hours, which is more beneficial in the long run for the students.

Screenshot showing student's submission with rubric items used in grading.

Student’s submission with rubric items used in grading

The process for regrading is much easier for both students and instructors. Before Gradescope, a regrade request meant determining which TA graded that question, discussing the request with them, and then potentially adjusting the grade. With the regrade feature, students submit a regrade request, which gets routed to that question’s grader (me or the TA) with comments for the grader to consider. The grader can then award the regrade points directly to the student’s assignment. As the instructor, I can see all regrade requests, and can override if necessary, which helps to reduce the bureaucracy and logistics involved with manual regrading. Additionally, regrade requests and Gradescope’s assignment statistics feature may allow you to pinpoint issues with a particular question or how well students have understood a topic.

I have found that when preparing assignments with Gradescope, I am more willing to create multiple mini-assignments. With large courses, the tendency would be to create fewer assignments that are larger in scope to lessen the amount of grading. When there are too few submission points, I find that deadline-oriented students wait until the last few days to start the assignment, which can make the learning process less effective. By adding more assignments, I can scaffold the learning to incrementally build on topics taught in class.

After using Gradescope for a year, I realized that it could be used to detect cheating. Gradescope allows you to see submissions to specific questions in sequence, making it easy to spot submissions that are identical, a red flag for copied answers. While not an official feature, it is an undocumented bonus. It should also be noted that Gradescope adheres to FERPA (Family Educational Rights and Privacy Act) standards for educational tools.
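Outside of any particular platform, the basic idea of flagging identical answers is easy to sketch: normalize each answer to a single question and group submissions that collapse to the same fingerprint. This is purely an illustration (the student names and answers are invented), not a description of how Gradescope works, and real similarity detection needs far more nuance than exact matching after normalization.

```python
# Illustrative sketch (not a Gradescope feature): group answers to a single
# question by a normalized fingerprint so identical submissions stand out
# for a closer look. Student names and answers are invented.
import hashlib
from collections import defaultdict

answers = {
    "student_a": "The limit is 4 because the middle terms cancel.",
    "student_b": "the   limit is 4 because the middle terms cancel.",
    "student_c": "The series diverges by the ratio test.",
}

groups = defaultdict(list)
for student, text in answers.items():
    # Normalize whitespace and case before hashing so trivial edits still match.
    fingerprint = hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()
    groups[fingerprint].append(student)

for students in groups.values():
    if len(students) > 1:
        print("Identical answers worth reviewing together:", students)
```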

Additional Resources:

  • Gradescope website: https://gradescope.com
  • NOTE TO JHU READERS ONLY: The institutional version of Gradescope is currently available to JHU faculty users through a pilot program. If you are faculty at Johns Hopkins University’s Homewood campus interested in learning more about how Gradescope might work for your courses, contact Reid Sczerba in the Center for Educational Resources at rsczerb1@jhu.edu.


Scott Smith, Professor
Department of Computer Science, Johns Hopkins University

Scott Smith has been a professor of Computer Science at Hopkins for almost 30 years. His research specialty is programming languages. For the past several years, he has taught two main courses: Software Engineering, a 100-student, project-based class, and Principles of Programming Languages, a mathematically oriented course with both written and small programming assignments.

Image Sources: CC Reid Sczerba; Gradescope screenshots courtesy of Scott Smith

Lunch and Learn: Creating Rubrics and Calibrating Multiple Graders

Logo for the Lunch and Learn program showing the words Lunch and Learn in orange with a fork above and a pen below the lettering, and Faculty Conversations on Teaching at the bottom.

On Friday, December 15, the Center for Educational Resources (CER) hosted the second Lunch and Learn—Faculty Conversations on Teaching—for the 2017-2018 academic year. Laura Foster, Academic Advisor, Public Health Studies, and Reid Mumford, Instructional Resource Advisor, Physics & Astronomy, presented on Creating Rubrics and Calibrating Multiple Graders.

Laura Foster led off by giving us a demonstration of her use of Blackboard for creating rubrics. She noted that she might be “preaching to the choir” but hoped that those present might take these best practices back to their colleagues. Noting that many faculty have negative opinions of Blackboard, she put in a plug for its organizational benefits and facilitation of communication with students.

Foster started using Blackboard tools for a Public Health Studies class where she was grading student reflections. The subject matter—public health studies in the media—was outside of her field of physical chemistry. Blackboard facilitates creating a rubric that students can see when doing an assignment and the instructor then uses to grade that work. She showed the rubric detail that students see in Blackboard, and how the rubric can be used in grading. [See the CER Tutorial on Blackboard Rubrics and Rubrics-Helpful Hints] The rubric gives the students direction and assures that the instructor (or other graders) will apply the same standards across all student work.

It empowers students when they know exactly what criteria will be used in evaluating their work and how many points will be assigned to each component. Foster has found that using rubrics is an effective way to communicate assignment requirements to students, and that it helps her clarify for herself what the most important points are. She noted that a rubric is very useful when there are multiple graders, such as Teaching Assistants (TAs), as it helps to calibrate the grading.

In response to questions from the audience, Foster stated that rubrics can be developed to cover both qualitative and quantitative elements. Developing good rubrics is an iterative process; it took her some time to sharpen her skills. There is flexibility in differentiating points allotted, but the instructor must be thoughtful, plan for a desired outcome, and communicate clearly. The rubric tool can be used to grade PDF files as well as Word documents. Foster noted that it is important to take opportunities to teach students to learn to write, learn to use technology, learn to read instructions, and learn to look at feedback given on assignments. Being transparent and explaining why you are using a particular technology will go a long way.

Reid Mumford gave his presentation on how he calibrates multiple graders (see slides). Mumford oversees the General Physics lab courses. This is a two-semester required sequence, so not all students are excited to be there. The sequences are on Mechanics and Electricity and Magnetism; both labs are taught every semester with multiple sections for each course. Approximately 600 to 700 students take these lab sequences each semester; students are divided into sections of about 24 students. The labs are open-ended and flexible, so students aren’t filling in blanks and checking boxes, which would be easier to grade. Lab sections are taught and graded by graduate student TAs, with about 30 TAs teaching each semester. Teaching and grading styles vary among these TAs, as would be expected. Clearly, calibrating their grading is a challenge.

Grades are based on the best 9 of 10 lab activities, which consist of a pre-lab quiz and a lab note. All activities are graded using the same rubric. The grading scale used can be seen in the slides. One of the criteria for grading is “style,” which allows some flexibility and qualitative assessment. Students have access to the rubric, which is also shown in the slides.

About three years ago, Mumford adopted Turnitin (TII), the plagiarism detection tool, for its efficient grading tools. It works well for his use because it is integrated with Blackboard. TII does its job in detecting cheating (and Mumford noted that lots of students are cheating), but it is the grading tools that are really important for the TAs. TAs are encouraged to be demanding in their grading and leave a lot of feedback, so grading takes them two to four hours each week. TII’s Feedback Studio (formerly known as GradeMark) allows TAs to accomplish their mission. [See CER tutorial on Feedback Studio and The Innovative Instructor post on GradeMark.] It was the QuickMark feature that sold Mumford on Feedback Studio and TII grading. Using the rubric for each activity, QuickMark can be pre-populated with commonly used comments, which can then be dragged and dropped onto the student’s submitted work.

Screenshot of the QuickMark grading tool.

Graph showing General Physics Laboratory section grading trends.

These tools helped make grading more efficient, but calibrating the multiple graders was another challenge. Mumford found that the TAs need lots of feedback on their grading. Each week he downloads all the grades from the Blackboard grade centers. He creates a plot that shows the average score for the weekly lab assignment. Outliers to the average scores are identified, and these TAs are counseled so that their grading can be brought into line. Mumford also looks at section grading trends and can see which sections are being graded more leniently or harshly than average. He works with those TAs to standardize their grading.
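The mechanics of that weekly check are simple enough to sketch. The snippet below is a hypothetical illustration, assuming the week’s grades have already been exported per section; the section names, scores, and drift threshold are invented rather than taken from Mumford’s actual workflow.

```python
# Sketch of a weekly calibration check: compute each lab section's average
# for the week's assignment and flag sections far from the overall mean.
# Section names, scores, and the threshold are hypothetical.
from statistics import mean

section_scores = {
    "Section 01": [17, 18, 19, 18, 20],
    "Section 02": [14, 13, 15, 14, 12],   # graded more harshly than the rest?
    "Section 03": [18, 19, 17, 18, 19],
}

overall = mean(s for scores in section_scores.values() for s in scores)
THRESHOLD = 2.0  # how far a section average may drift before following up

for section, scores in section_scores.items():
    avg = mean(scores)
    if abs(avg - overall) > THRESHOLD:
        print(f"{section}: average {avg:.1f} vs overall {overall:.1f} -- check in with this TA")
```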

In calculating final grades for the course, Mumford keeps three points in mind: final letter grades must be calculated, there should be no “easy” or “hard” sections of lab, and distribution should not vary (significantly) between sections. He makes use of per-section mapping and uses average and standard deviation to map results to a final letter grade model. Mumford noted that students are made aware, repeatedly, of the model being used. He is very transparent—everything is explained in the syllabus and reiterated weekly in lab sessions.
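One common way to implement per-section mapping of that kind is to standardize each student’s total within their own section and then apply a single set of cutoffs to the standardized scores, so that no section ends up systematically “easy” or “hard.” The sketch below illustrates that generic approach with invented cutoffs and scores; it is not Mumford’s actual grade model.

```python
# Generic sketch of per-section grade mapping: standardize each student's
# total within their own section (z-score), then apply one set of letter
# cutoffs to every section. Cutoffs and scores are hypothetical.
from statistics import mean, stdev

def letter(z):
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "D"

def map_section(totals):
    """Map one section's raw totals to letters using that section's own stats."""
    mu, sigma = mean(totals.values()), stdev(totals.values())
    return {student: letter((score - mu) / sigma) for student, score in totals.items()}

section_totals = {"s1": 180, "s2": 165, "s3": 172, "s4": 150}
print(map_section(section_totals))
```

Because the mapping uses each section’s own average and standard deviation, a harshly graded section and a leniently graded one produce comparable letter distributions.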

In conclusion, Mumford offered these take-aways:

  • Calibrating Multiple Graders is not easy
  • Tools are needed to handle multiple sections efficiently
  • Rubrics help but do not solve the calibration problem
  • Regular feedback to graders is essential
  • Limit of the system: student standing is ambiguous

In the future, Mumford plans to give students a better understanding of their course standing and to calculate a per-section curve each week, which will require overcoming some technical issues and committing to the greater time investment of weekly calibrating and rescaling.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Sources: Lunch and Learn Logo, slides from Mumford presentation

Facilitating and Evaluating Student Writing

Over the summer I worked on revising a manual for teaching assistants that we hand out each year at our annual TA Orientation. One of the sections deals with writing intensive courses across disciplines and how TAs can facilitate and evaluate writing assignments. The information, advice, and resources in the manual speak to an audience beyond graduate student teaching assistants. Even seasoned instructors may struggle with teaching writing skills and evaluating written assignments.

View from above and to the right of a woman’s hands at a desk, writing in a journal next to a laptop computer.

Two mistakes that teachers may make are assuming that students in their courses know how to write a scholarly paper and not providing appropriate directions for assignments. These assumptions are likely to guarantee that the resulting student writing will disappoint.

As a quick aside, faculty often complain about the poor quality of student writing, claiming that students today don’t write as well as students in some vaguely imagined past, perhaps when the faculty member was a college freshman. However, the results of an interesting longitudinal study suggest otherwise. A report in JSTOR Daily, Student Writing in the Digital Age by Anne Trubek (October 19, 2016), summarizes the findings of the  2006 study by Andrea A. Lunsford and Karen J. Lunsford, Mistakes Are a Fact of Life: A National Comparative Study. “Lunsford and Lunsford, decided, in reaction to government studies worrying that students’ literacy levels were declining, to crunch the numbers and determine if students were making more errors in the digital age.” Their conclusion? “College students are making mistakes, of course, and they have much to learn about writing. But they are not making more mistakes than did their parents, grandparents, and great-grandparents.” Regardless of your take on the writing of current students, it is worth giving thoughtful consideration to your part in improving your students’ writing.

Good writing comes as a result of practice and it is the role of the instructor to facilitate that practice. Students may arrive at university knowing how to compose a decent five-paragraph essay, but no one has taught them how to write a scholarly paper. They must learn to read critically, summarize what they have read, identify an issue, problem, flaw, or new development that challenges what they have read. They must then construct an argument, back it with evidence (and understand what constitutes acceptable evidence), identify and address counter-arguments, and reach a conclusion. Along the way they should learn how to locate appropriate source materials, assemble a bibliography, and properly cite their sources. As an instructor, you must show them the way.

Students will benefit from having the task of writing a term paper broken into smaller components or assignments. Have students start by researching a topic and creating a bibliography. Librarians are often available to come to your class to instruct students in the art of finding sources and citing them correctly. Next, ask students to produce a summary of the materials they’ve read and identify the issue they will tackle in their paper. Have them outline their argument. Ask for a draft. Consider using peer review for some of these steps to distribute the burden of commenting and grading. Evaluating others’ work will improve their own. [See the May 29, 2015 Innovative Instructor post Using the Critique Method for Peer Assessment.] And the opportunity exists to have students meet with you in office hours to discuss some of these assignments so that you may provide direct guidance and mentoring. Their writing skills will not develop in a vacuum.

Your guidance is critical to their success. This starts with clear directions for each assignment. For an essay you will be writing a prompt that should specify the topic choices, genre, length, formal requirements (whether outside sources should be used, your expectations on thesis and argument, etc.), and formatting, including margins, font size, spacing, titling, and student identification. Directions for research papers, fiction pieces, technical reports, and other writing assignments should include the elements that you expect to find in student submissions. Do not assume students know what to include or how to format their work.

As part of the direction you give, consider sharing with your students the rubric by which you will evaluate their work. See the June 26, 2014 Innovative Instructor post Sharing Assignment Rubrics with Your Students for more detail. Not sure how to create a rubric? See previous posts: from October 8, 2012 Using a Rubric for Grading Assignments, November 21, 2014 Creating Rubrics (by Louise Pasternak), and June 14, 2017 Quick Tips: Tools for Creating Rubrics. Rubrics will save you time grading, ensure that your grading is equitable, and provide you with a tangible defense against students complaining about their grades.

Giving feedback on writing assignments can be time consuming so focus on what is most important. This means, for example, noting spelling and grammar errors but not fixing them. That should be the student’s job. For a short assignment, writing a few comments in the margins and on the last page may be doable, but for a longer paper consider typing up your comments on a separate page. Remember to start with something positive, then offer a constructive critique.

As well, bring writing into your class in concrete ways. For example, at the beginning of class, have students write for three to five minutes on the topic to be discussed that day, drawing from the assigned readings. Discuss the assigned readings in terms of the authors’ writing skills. Make students’ writing the subject of class activities through peer review. Incorporate contributions to a class blog as part of the course work. Remember, good writing is a result of practice.

Finally, there are some great resources out there to help you help your students improve their writing. Purdue University’s Online Writing Lab—OWL—website is all encompassing with sections for instructors (K-12 and Higher Ed) and students. For a quick start go to the section Non-Purdue College Level Instructors and Students. The University of Michigan Center for Research on Learning and Teaching offers a page on Evaluating Student Writing that includes Designing Rubrics and Grading Standards, Rubric Examples, Written Comments on Student Writing, and tips on managing your time grading writing.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Photo by: Matthew Henry. CC License via Burst.com.


Considering the Use of Turnitin

Sign with hand and text reading prevent plagiarism.

Earlier this week an article from Inside Higher Ed (IHE) caught my eye. In New Salvo Against Turnitin (June 19, 2017), Nick Roll summarizes an essay by Sean Michael Morris, Instructional Designer in the Office of Digital Learning at Middlebury College, and Jesse Stommel, Executive Director, Division of Teaching and Learning Technologies at the University of Mary Washington. The essay authors argue that faculty should rethink the use of Turnitin, questioning not only “…the control and use of people’s data by corporations…” but “…Turnitin’s entire business model, as well as the effects on academia brought on by its widespread popularity.” Morris and Stommel further contend that those using Turnitin “supplant good teaching with the use of inferior technology,” reducing the student-instructor relationship to one where suspicion and mistrust are at the forefront. [Turnitin is a software application used to detect plagiarism, and Morris and Stommel are not the first to decry the company’s business model and practices.]

Although the IHE article provides a fair summary, as well as additional comments by Morris and Stommel, it is worth reading the 3,928-word essay—A Guide for Resisting Edtech: The Case Against Turnitin (Digital Pedagogy Lab, June 15, 2017)—to appreciate the complex argument. I agree with some of the concerns the authors address and feel we should be doing more, individually and collectively, to school ourselves and our students in the critical evaluation of digital tools, but I disagree with what I feel are over-simplifications and unfair assumptions. Morris and Stommel cast faculty who use Turnitin as “surrendering efficiency over complication” by not taking the time and effort to use plagiarism as a teachable moment. Further, they state that Turnitin takes advantage of faculty who are characterized as being, at the core, mistrustful of students.

The assumption that faculty using Turnitin are not actively engaging in conversations around and instruction of ethical behavior, including plagiarism, and are not using other tools and resources in these activities is simply not correct. The assertion that faculty using Turnitin are suspicious teachers who are embracing an easy out via an efficient educational technology is also not accurate.

The reality is that some students will plagiarize, intentionally or not, and the Internet, social media practices, and cultural differences have complicated students’ understanding of intellectual property. I believe that many of our institutions of higher learning, and the faculty and library staff therein, make concerted efforts to teach students about academic integrity. This includes the meaning and value of intellectual property, as well as finer points of what constitutes plagiarism and strategies to avoid it.

I believe it is relevant to note that Middlebury College’s website boasts a mean class size of 16, while the University of Mary Washington lists an average class size of 19. Student-faculty ratios are 8 to 1 and 14 to 1, respectively. I cannot help but feel that Morris and Stommel are speaking from a point of privilege working at these two institutions. Instructors who teach at large, underfunded state universities, with classes of hundreds of students and a corps of teaching assistants to grade their essays, are in a different boat.

The authors state: “So, if you’re not worried about paying Turnitin to traffic your students’ intellectual property, and you’re not worried about how the company has glossed a complicated pedagogical issue to offer a simple solution, you might worry about how Turnitin reinforces the divide between teachers and students, short-circuiting the human tools we have to cross that divide.” In fact, we may all be worried about Turnitin’s business model and be seeking a better solution. Yet the essay offers nothing more concrete about those human tools or how faculty in less privileged circumstances can realistically and effectively make use of them.

The Innovative Instructor has in the past posted on Teaching Your Students to Avoid Plagiarism (November 5, 2012, Macie Hall), and using Turnitin as a teaching tool: Plagiarism Detection: Moving from “Gotcha” to Teachable Moment (October 9, 2013, Brian Cole and Macie Hall). These articles may be helpful for faculty struggling with the issues at hand.

Yes, we should all be critical thinkers about the pedagogical tools we use; in the real world, sometimes we face hard choices and must fall back on less than ideal solutions.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Microsoft Clip Art edited by Macie Hall

Quick Tips: Tools for Creating Rubrics

The Innovative Instructor has previously shared posts on the value of using rubrics (Creating Rubrics, Louise Pasternak, November 21, 2014 and Sharing Assignment Rubrics with Your Students, Macie Hall June 26, 2014). Today’s Quick Tips post offers some tools and resources for creating rubrics.

Red sharpie-type marker reading "Rubrics Guiding Graders: Good Point" with an A+ marked below

Red Rubric Marker

If you are an instructor at Johns Hopkins or another institution that uses the Blackboard learning management system or Turnitin plagiarism detection, check out these platforms for their built-in rubric creation applications. Blackboard has an online tutorial here. Turnitin offers a user guide here.

If neither of these options is available to you, there is a free online application called Rubistar that offers templates for rubric design based on various disciplines, projects, and assignments. If none of the templates fits your need, you can create a rubric from scratch. You must register to use Rubistar. A tutorial is available to get you started. And you can save a printable rubric at the end of the process.

Wondering how others in your field have designed rubrics for specific assignments or projects? Google for a model: searches such as “history paper rubric college,” “science poster rubric college,” or “video project rubric college” will yield examples to get you started. Adding the word “college” to the search will ensure that you are seeing rubrics geared to an appropriate level.

With free, easy to use tools and plentiful examples to work from, there is no excuse for not using rubrics for your course assignments.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source © 2014 Reid Sczerba


To Curve or Not to Curve Revisited

Yellow traffic signs showing a bell curve and a stylized graph referencing criterion-referenced grading.

The practice of normalizing grades, more popularly known as curving, was the subject of an Innovative Instructor post, To Curve or Not to Curve, on May 13, 2013. That article discussed both norm-referenced grading (curving) and criterion-referenced grading (not curving). As the practice of curving has become more controversial in recent years, an op-ed piece in this past Sunday’s New York Times caught my eye. In Why We Should Stop Grading Students on a Curve (The New York Times Sunday Review, September 10, 2016), Adam Grant argues that grade deflation, which occurs when teachers use a curve, is more worrisome than grade inflation. First, by limiting the number of students who can excel, curving unfairly punishes students who may have mastered the course content. Second, curving creates a “toxic” environment, a “hypercompetitive culture” where one student’s success means another’s failure.

Grant, a professor of psychology at the Wharton School at the University of Pennsylvania, cites evidence that curving is a “disincentive to study.” Taking observations from his work as an organizational psychologist and applying those in his classroom, Grant has found he could both disrupt the culture of cutthroat competition and get students to work together as a team to prepare for exams. Teamwork has numerous advantages in both the classroom and the workplace as Grant details. Another important aspect is “…that one of the best ways to learn something is to teach it.” When students study together for an exam they benefit from each other’s strengths and expertise. Grant details the methods he used in constructing the exams and how his students have leveraged teamwork to improve their scores on course assessments. One device he uses is a Who Wants to Be a Millionaire-type “lifeline” for students taking the final exam. While his particular approaches may not be suitable for your teaching, the article provides food for thought.

Because I am not advocating for one way of grading over another, but rather encouraging instructors to think about why they are taking a particular approach and whether it is the best solution, I’d like to present a counterargument. In praise of grading on a curve by Eugene Volokh appeared in The Washington Post on February 9, 2015. “Eugene Volokh teaches free speech law, religious freedom law, church-state relations law, a First Amendment Amicus Brief Clinic, and tort law, at UCLA School of Law, where he has also often taught copyright law, criminal law, and a seminar on firearms regulation policy.” He counters some of the standard arguments against curving by pointing out that students and exams will vary from year to year, making it difficult to draw consistent lines between, say, an A- and a B+ exam. This may be even more difficult for a less experienced teacher. Volokh also believes in the value of the curve for reducing the pressure to inflate grades. He points out that competing law schools tend to align their curves, making it an accepted practice for law school faculty to curve. As well, he suggests some tweaks to curving that strengthen its application.

As was pointed out in the earlier post, curving is often used in large lecture or lab courses that may have multiple sections and graders, as it provides a way to standardize grades. However, that issue may be resolved by instructing multiple graders how to assign grades based on a rubric. See The Innovative Instructor on creating rubrics and calibrating multiple graders.

Designing effective assessments is another important skill for instructors to learn, and one that can eliminate the need to use curving to adjust grades on a poorly conceived test. A good place to start is Brown University’s Harriet W. Sheridan Center for Teaching and Learning webpages on designing assessments where you will find resources compiled from a number of Teaching and Learning Centers on designing “assessments that promote and measure student learning.”  The topics include: Classroom Assessment and Feedback, Quizzes, Tests and Exams, Homework Assignments and Problem Sets, Writing Assignments, Student Presentations, Group Projects and Presentations, Labs, and Field Work.

Macie Hall, Instructional Designer
Center for Educational Resources


Image Source: © Reid Sczerba, 2013.


Quick Tips: Grading Essays and Papers More Efficiently

If you are among those who don’t teach during the summers, grading papers may be the furthest thing from your mind at the moment. Before we know it, however, a new semester will be starting. And now is a good time to be thinking about new directions in your assessment and evaluation of student work, especially if your syllabus will need changing as a result.

Male instructor’s head visible between two stacks of papers.

Earlier this week (June 22, 2015) an article in The Chronicle of Higher Education by Rob Jenkins, an associate professor of English at Georgia Perimeter College, Conquering Mountains of Essays: How to effectively and fairly grade a lot of papers without making yourself miserable, caught my attention. Even the most dedicated instructors find grading to be a chore.

Jenkins, who teaches several writing-intensive courses every semester, notes that it is easy to take on the pose of a martyr when faced with stacks and stacks of multiple-paged papers, especially when the process is repeated a few times for each class. He offers eight guidelines for keeping grading in balance with the aspects of teaching that are more enjoyable. Jenkins proposes that you:

  1. Change your bad attitude about grading. Grading is an integral part of teaching. View grading student work as an opportunity to reinforce class concepts and use misconceptions that arise in their papers as a basis for class discussion.
  2. Stagger due dates. Plan in advance and have students in different sections turn in essays on different dates.
  3. Break it down. Determine an optimum number of papers to grade at one sitting. Take a break for an hour before starting another session.
  4. Schedule grading time. Literally. Put it on your calendar.
  5. Have a realistic return policy. Jenkins says, “I’ve chosen to define ‘a reasonable amount of time’ as one week, or two class sessions. Occasionally, if I get four stacks of papers in the same week, it might take me three class meetings to finish grading.”
  6. Be a teacher, not an editor. Stay out of the weeds and focus on the major problems with the essay. Jenkins limits editing “to situations where a simple change of wording or construction might have broader application than to that one essay.”
  7. Limit your comments. For undergraduates, a few observations will be more useful as a teaching strategy than pages of commentary. Jenkins tries to offer one positive comment and three suggestions for improvement.
  8. Limit grading time on each essay. Following the suggestions above will help you reduce the time you need to spend on each paper.

One thing Jenkins doesn’t mention is using a rubric for grading. Rubrics can be a powerful tool for consistent grading across the class or sections, as well as a means for students to understand how the assignment is being evaluated. See previous Innovative Instructor posts on rubrics: Creating Rubrics and Sharing Assignment Rubrics with Your Students.

You might also be interested in some of The Innovative Instructor’s past posts on grading: Feedback codes: Giving Student Feedback While Maintaining Sanity and Quick Tips: Paperless Grading.

******************************************************************************************************

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Microsoft Clip Art

Using the Critique Method for Peer Assessment

As a writer I have been an active participant in a formal critique group facilitated by a professional author and editor. The critique process, for those who aren’t familiar with the practice, involves sharing work (traditionally, writing and studio arts) with a group to review and discuss. Typically, the person whose work is being critiqued must listen without interrupting as others provide comments and suggestions. Critiques are most useful if a rubric and a set of standards for review is provided and adhered to during the commentary. For example, in my group, we are not allowed to say, “I don’t like stories that are set in the past.” Instead we must provide specific examples to improve the writing: “In terms of authoritative writing, telephones were not yet in wide use in 1870. This creates a problem for your storyline.”  After everyone has made their comments, the facilitator adds and summarizes, correcting any misconceptions. Then the writer has a chance to ask questions for clarification or offer brief explanations. In critique, both the creator and the reviewers benefit. Speaking personally, the process of peer evaluation has honed my editorial skills as well as improved my writing.

Looking down on a group of four students with laptops sitting at a table in discussion.

With peer assessment becoming a pedagogical practice of interest to our faculty, could the critique process provide an established model that might be useful in disciplines outside the arts? A recent post on the Tomorrow’s Professor Mailing List, Teaching Through Critique: An Extra-Disciplinary Approach, by Johanna Inman, MFA, Assistant Director, Teaching and Learning Center, Temple University, addresses this topic.

“The critique is both a learning activity and assessment that aligns with several significant learning goals such as critical thinking, verbal communication, and analytical or evaluation skills. The critique provides an excellent platform for faculty to model these skills and evaluate if students are attaining them.” Inman notes that critiquing involves active learning, formative assessment, and community building. Critiques can be used to evaluate a number of different assignments as might be found in almost any discipline, including short papers and other writing assignments, multimedia projects, oral presentations, performances, clinical procedures, interviews, and business plans. In short, any assignment that can be shared and evaluated through a specific rubric can be evaluated through critique.

A concrete rubric is at the heart of recommended best practices for critique. “Providing students with the learning goals for the assignment or a specific rubric before they complete the assignment and then reviewing it before critique can establish a focused dialogue. Additionally, prompts such as Is this work effective and why? or Does this effectively fulfill the assignment? or even Is the planning of the work evident? generally lead to more meaningful conversations than questions such as What do you think?”

It is equally important to establish guidelines for the process, what Inman refers to as an etiquette for providing and receiving constructive criticism. Those on the receiving end should listen and keep an open mind. Learning to accept criticism without getting defensive is a life skill that will serve students well. Those providing the assessment, Inman says, should critique the work, not the student, and offer specific suggestions for improvement. The instructor or facilitator should foster a climate of civility.

Inman offers tips for managing class time for a critique session and specific advice for instructors to ensure a balanced discussion. For more on peer assessment generally, see the University of Texas at Austin Center for Teaching and Learning’s page on Peer Assessment. The Cornell Center for Teaching Excellence also has some good advice for instructors interested in Peer Assessment, answering some questions about how students might perceive and push back against the activity. Peer assessment, whether using a traditional critique method or another approach, benefits students in many ways. As students learn to evaluate others’ work, they strengthen their own.

*********************************************************************************************************

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Meeting. CC BY-SA Marco Antonio Torres https://www.flickr.com/photos/torres21/3052366680/in/photostream/