Facilitating and Evaluating Student Writing

Over the summer I worked on revising a manual for teaching assistants that we hand out each year at our annual TA Orientation. One of the sections deals with writing intensive courses across disciplines and how TAs can facilitate and evaluate writing assignments. The information, advice, and resources in the manual speak to an audience beyond graduate student teaching assistants. Even seasoned instructors may struggle with teaching writing skills and evaluating written assignments.

Two mistakes that teachers commonly make are assuming that students in their courses already know how to write a scholarly paper and failing to provide appropriate directions for assignments. Either mistake is likely to guarantee that the resulting student writing will disappoint.

As a quick aside, faculty often complain about the poor quality of student writing, claiming that students today don’t write as well as students in some vaguely imagined past, perhaps when the faculty member was a college freshman. However, the results of an interesting longitudinal study suggest otherwise. A report in JSTOR Daily, Student Writing in the Digital Age by Anne Trubek (October 19, 2016), summarizes the findings of the 2006 study by Andrea A. Lunsford and Karen J. Lunsford, Mistakes Are a Fact of Life: A National Comparative Study. “Lunsford and Lunsford decided, in reaction to government studies worrying that students’ literacy levels were declining, to crunch the numbers and determine if students were making more errors in the digital age.” Their conclusion? “College students are making mistakes, of course, and they have much to learn about writing. But they are not making more mistakes than did their parents, grandparents, and great-grandparents.” Regardless of your take on the writing of current students, it is worth giving thoughtful consideration to your part in improving your students’ writing.

Good writing comes as a result of practice, and it is the role of the instructor to facilitate that practice. Students may arrive at university knowing how to compose a decent five-paragraph essay, but no one has taught them how to write a scholarly paper. They must learn to read critically, summarize what they have read, and identify an issue, problem, flaw, or new development that challenges what they have read. They must then construct an argument, back it with evidence (and understand what constitutes acceptable evidence), identify and address counter-arguments, and reach a conclusion. Along the way they should learn how to locate appropriate source materials, assemble a bibliography, and properly cite their sources. As an instructor, you must show them the way.

Students will benefit from having the task of writing a term paper broken into smaller components or assignments. Have students start with researching a topic and creating a bibliography. Librarians are often available to come to your class to instruct students in the art of finding sources and citing them correctly. Next, assign students to produce a summary of the materials they’ve read and identify the issue they will tackle in their paper. Have them outline their argument. Ask for a draft. Consider using peer review for some of these steps to distribute the burden of commenting and grading. Evaluating others’ work will improve their own. [See the May 29, 2015 Innovative Instructor post Using the Critique Method for Peer Assessment.] And the opportunity exists to have students meet with you in office hours to discuss some of these assignments so that you may provide direct guidance and mentoring. Their writing skills will not develop in a vacuum.

Your guidance is critical to their success. This starts with clear directions for each assignment. For an essay, the prompt you write should specify the topic choices, genre, length, formal requirements (whether outside sources should be used, your expectations on thesis and argument, etc.), and formatting, including margins, font size, spacing, titling, and student identification. Directions for research papers, fiction pieces, technical reports, and other writing assignments should include the elements that you expect to find in student submissions. Do not assume students know what to include or how to format their work.

As part of the directions you give, consider sharing with your students the rubric by which you will evaluate their work. See the June 26, 2014 Innovative Instructor post Sharing Assignment Rubrics with Your Students for more detail. Not sure how to create a rubric? See these previous posts: Using a Rubric for Grading Assignments (October 8, 2012), Creating Rubrics by Louise Pasternak (November 21, 2014), and Quick Tips: Tools for Creating Rubrics (June 14, 2017). Rubrics will save you time grading, ensure that your grading is equitable, and provide you with a tangible defense against students complaining about their grades.

Giving feedback on writing assignments can be time-consuming, so focus on what is most important. This means, for example, noting spelling and grammar errors but not fixing them. That should be the student’s job. For a short assignment, writing a few comments in the margins and on the last page may be doable, but for a longer paper consider typing up your comments on a separate page. Remember to start with something positive, then offer a constructive critique.

Bring writing into your class in concrete ways as well. For example, at the beginning of class, have students write for three to five minutes on the topic to be discussed that day, drawing from the assigned readings. Discuss the assigned readings in terms of the authors’ writing skills. Make students’ writing the subject of class activities through peer review. Incorporate contributions to a class blog as part of the course work. Remember, good writing is a result of practice.

Finally, there are some great resources out there to help you help your students improve their writing. Purdue University’s Online Writing Lab—OWL—website is all-encompassing, with sections for instructors (K-12 and Higher Ed) and students. For a quick start go to the section Non-Purdue College Level Instructors and Students. The University of Michigan Center for Research on Learning and Teaching offers a page on Evaluating Student Writing that includes Designing Rubrics and Grading Standards, Rubric Examples, Written Comments on Student Writing, and tips on managing your time grading writing.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Photo by: Matthew Henry. CC License via Burst.com.


Lunch and Learn: Team-Based Learning

On Friday, December 16, the Center for Educational Resources (CER) hosted the second Lunch and Learn—Faculty Conversations on Teaching—of the 2016-2017 academic year. Eileen Haase, Senior Lecturer in Biomedical Engineering, and Mike Reese, Director, Center for Educational Resources, and Instructor in Sociology, discussed their approaches to team-based learning (TBL).

Eileen Haase teaches a number of core courses in Biomedical Engineering at the Whiting School of Engineering, including Freshmen Modeling and Design, BME Teaching Practicum, Molecules and Cells, and System Bioengineering Lab I and II, and she serves as course director for Cell and Tissue Engineering and assists with System Bioengineering II. She has long been a proponent of teamwork in the classroom.

In her presentation, Haase focused on the Molecules and Cells course, required for BME majors in the sophomore year, which she co-teaches with Harry Goldberg, Assistant Dean at the School of Medicine, Director of Academic Computing and faculty member, Department of Biomedical Engineering. The slides from Haase’s presentation are available here.

In the first class, Haase has the students do a short exercise that demonstrates the value of teamwork. Then the students take the VARK Questionnaire. VARK stands for Visual, Aural, Read/Write, Kinesthetic, and the questionnaire is a guide to learning styles. It helps students and instructors by suggesting strategies for teaching and learning that align with these different styles. Haase and Goldberg found that 62% of their students were “multimodal” learners who benefit from having the same material presented in several modes in order to learn it. In Haase’s class, in addition to group work, students work at the blackboard, use clickers, have access to online materials, participate in think-pair-share exercises, and get some content explained in lecture form.

Team work takes place in sections most Fridays. At the start of class, students take an individual, 10-question quiz called the iRAT (Individual Readiness Assurance Test), which consists of multiple-choice questions based on pre-class assigned materials. The students then take the same test as a group (the gRAT). Haase uses IF-AT scratch cards for these quizzes. Both tests count toward the students’ grades.
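If you are curious how the two scores might combine, here is a minimal sketch in Python of one possible scheme. The 70/30 weighting and the scratch-card partial-credit values are illustrative assumptions, not the grading formula used in Haase’s course.

```python
# Illustrative sketch of scoring a two-stage readiness quiz (iRAT then
# gRAT). The 70/30 weighting and the IF-AT partial-credit values are
# assumptions for demonstration, not the scheme used in this course.

IRAT_WEIGHT = 0.7   # assumed weight for the individual test
GRAT_WEIGHT = 0.3   # assumed weight for the group test

# IF-AT scratch cards reveal the answer incrementally, so a common
# convention is partial credit by number of scratches (assumed values).
IFAT_CREDIT = {1: 1.0, 2: 0.5, 3: 0.25, 4: 0.0}

def grat_question_score(scratches: int) -> float:
    """Partial credit for one gRAT question, by scratch attempts."""
    return IFAT_CREDIT.get(scratches, 0.0)

def rat_grade(irat_correct: int, grat_scratches: list[int],
              n_questions: int = 10) -> float:
    """Weighted Readiness Assurance Test grade on a 0-100 scale."""
    irat = irat_correct / n_questions
    grat = sum(grat_question_score(s) for s in grat_scratches) / n_questions
    return round(100 * (IRAT_WEIGHT * irat + GRAT_WEIGHT * grat), 2)

# Example: 8/10 individually; the group needed one scratch on most questions.
print(rat_grade(8, [1, 1, 2, 1, 1, 1, 3, 1, 1, 2]))  # 80.75
```

Weighting the individual test more heavily, as sketched here, keeps students accountable for the pre-class preparation while still rewarding the group discussion.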

To provide evidence for the efficacy of team-based learning, Haase and Goldberg retested students from their course five months after the original final exam (99 of the 137 students enrolled in the course were retested). The data showed that students scored significantly better on the final exam on material that had been taught using team-based learning strategies and, on the retest, retained significantly more of the TBL-taught material.

Mike Reese, Director of the Center for Educational Resources and instructor in the Department of Sociology, presented on his experiences with team-based learning in courses that included community-based learning in Baltimore City neighborhoods [presentation slides]. His courses are typically small and discussion oriented. Students read papers on urban issues and, in class, discuss these and develop research methodologies for gathering data in the field. Students are divided into teams, and Reese accompanies each team as they go out into neighborhoods to gather data by talking to people on the street and making observations on their surroundings. The students then do group presentations on their field work and write individual papers. Reese says that team work is hard, but students realize that they could not collect and analyze data in such a short time-frame without a group effort.

Reese noted that learning is a social process. We are social beings, and while many students dislike group projects, they will learn and retain more (as Haase and Goldberg demonstrated). This is not automatic. Instructors need to be thoughtful about structuring team work in their courses. The emotional climate created by the teacher is important. Reese shared a list of things to consider when designing a course that will incorporate team-based learning.

  1. Purpose: Why are you doing it? For Reese, teamwork is a skill that students should acquire, but primarily it serves his learning objectives.  If students are going to conduct a mini-research project in a short amount of time, they need multiple people working collectively to help with data collection and analysis.
  2. Group Size: This depends on the context and the course, but experts agree that having three to five students in a group is best to prevent slacking by team members.
  3. Roles: Reese finds that assigning roles works well as students don’t necessarily come into the course with strong project management skills, and projects typically require a division of labor. It was suggested that assigning roles is essential to the concept of true team-based learning as opposed to group work.
  4. Formation: One key to teamwork success is having the instructor assign students to groups rather than allowing them to self-select. [Research supports this. See Fiechtner, S. B., & Davis, E. A. (1985). Why some groups fail: A survey of students’ experiences with learning groups. The Organizational Behavior Teaching Review, 9(4), 75-88.] In Reese’s experience assigning students to groups helps them to build social capital and relationships at the institution beyond their current group of friends.
  5. Diversity: It is important not to isolate at-risk minorities. See: Heller, P. and Hollabaugh, M. (1992). Teaching problem solving through cooperative grouping. American Journal of Physics, 60 (7), 637-644.
  6. Ice Breakers: The use of ice breakers can help establish healthy team relationships. Have students create a team name, for example, to promote an identity within the group.
  7. Contracts: Having a contract for teamwork is a good idea. In the contract, students agree to support each other and commit to doing their share of the work. Students can create contracts themselves, but it is best if the instructor provides structured questions to guide them.
  8. Persistence: Consider the purpose of having groups and how long they will last. Depending on learning goals, teams may work together over an entire semester, or re-form after each course module is completed.
  9. Check-ins: It is important to check in with teams on a regular basis, especially if the team is working together over an entire semester, to make sure that the group hasn’t developed problems and become dysfunctional.
  10. Peer Evaluation: Using peer evaluation keeps a check on the students to ensure that everyone is doing a fair share of the work. The instructor can develop a rubric, or have students work together to create one. Evaluation should be on specific tasks. Ratings should be anonymous (to the students, not the instructor) to ensure honest evaluation, and students should also self-evaluate.

In the discussion that followed the presentation, mentoring of teams and peer assessment were key topics. Several faculty with experience working with team-based learning recommended providing support systems in the form of mentors and/or coaches who are assigned to the groups. These could be teaching assistants or undergraduate assistants who have previously taken the course. Resources for team-based learning were also mentioned. CATME, “which stands for ‘Comprehensive Assessment of Team Member Effectiveness,’ is a free set of tools designed to help instructors manage group work and team assignments more effectively.”

Doodle was suggested as another tool for scheduling collaborative work. Many are familiar with the Doodle poll concept, but there are also free tools such as Connect Calendars and Meet Me that can be used by students.

An Innovative Instructor print article, Making Group Projects Work by Pam Sheff and Leslie Kendrick, Center for Leadership Education, August 2012, covers many aspects of successful teamwork.

Another resource of interest is a scholarly article by Barbara Oakley and Richard Felder, Turning Student Groups into Effective Teams [Oakley, B., Felder, R. M., Brent, R., & Elhajj, I., Journal of Student Centered Learning, 2004]. “This paper is a guide to the effective design and management of team assignments in a college classroom where little class time is available for instruction on teaming skills. Topics discussed include forming teams, helping them become effective, and using peer ratings to adjust team grades for individual performance. A Frequently Asked Questions section offers suggestions for dealing with several problems that commonly arise with student teams, and forms and handouts are provided to assist in team formation and management.”
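The peer-rating adjustment the paper mentions lends itself to a short illustration. Below is a minimal sketch of one such scheme, where each student’s grade is the team grade scaled by that member’s average rating relative to the team-wide average; the 1.05 cap and the 1-5 rating scale are my assumptions, not the paper’s exact formulas.

```python
# Sketch of adjusting a team grade by peer ratings, in the spirit of the
# autorating approach Oakley and Felder describe. The 1.05 cap and the
# 1-5 rating scale are illustrative assumptions.

def adjustment_factors(ratings: dict[str, list[float]],
                       cap: float = 1.05) -> dict[str, float]:
    """Each member's average rating over the team-wide average, capped."""
    averages = {name: sum(rs) / len(rs) for name, rs in ratings.items()}
    team_avg = sum(averages.values()) / len(averages)
    return {name: min(avg / team_avg, cap) for name, avg in averages.items()}

def individual_grades(team_grade: float,
                      ratings: dict[str, list[float]]) -> dict[str, float]:
    """Individual grade = team grade x that member's adjustment factor."""
    return {name: round(team_grade * factor, 1)
            for name, factor in adjustment_factors(ratings).items()}

# Example: anonymous 1-5 ratings each member received from teammates.
ratings = {"Ana": [5, 5, 4], "Ben": [4, 4, 4], "Chloe": [2, 3, 2]}
print(individual_grades(88.0, ratings))
# {'Ana': 92.4, 'Ben': 92.4, 'Chloe': 56.0} -- strong contributors gain
# a little; a weak contributor's grade drops well below the team grade.
```

Capping the upward adjustment keeps teammates from inflating each other’s grades, while leaving the downward adjustment uncapped preserves accountability.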

If you are an instructor on the Homewood campus, staff in the Center for Educational Resources will be happy to talk with you about team-based learning and your courses.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Sources: Lunch and Learn logo by Reid Sczerba, presentation slides by Eileen Haase and Mike Reese

Tips for Writing Effective Multiple Choice Questions

Writing test questions is a daunting task for many instructors. It can be challenging to come up with questions that accurately assess students’ comprehension of course objectives. Multiple choice questions are no exception; despite their popularity, instructors often struggle to create well-constructed questions.

Multiple choice questions have several advantages. They lend themselves to covering a broad range of content and assessing a wide variety of learning objectives. They are very useful when testing a student’s lower level knowledge of a topic, such as factual recall and definitions, but if written correctly, they can be used to assess the higher levels of analysis, evaluation, and critical thinking skills. Multiple choice questions are scored efficiently (even automatically, if an electronic test is used); therefore, they are frequently the evaluation method preferred by instructors of large courses.

There are some disadvantages, including the fact that this type of question can be time-consuming to construct. Multiple choice questions are made up of two parts: the stem, which identifies the question, and the alternative responses, which include the correct answer as well as incorrect alternatives, known as distractors. Coming up with plausible distractors for each question can be a difficult task. And, while some higher level thinking skills can be addressed, multiple choice questions cannot measure a student’s ability to organize and express ideas. Another thing to consider is that student success when answering multiple choice questions can be influenced by factors unrelated to the subject matter, such as reading ability, deductive reasoning, and the use of context clues.
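To make that two-part anatomy concrete, here is a minimal sketch representing a question as a stem plus alternatives, with the distractors being whichever alternatives are not keyed correct. The class and field names are illustrative, not drawn from any particular testing package.

```python
# Minimal sketch of the two-part anatomy described above: a stem plus
# alternatives, one keyed correct, the rest distractors. Names are
# illustrative, not from any particular testing system.
from dataclasses import dataclass
import random

@dataclass
class MultipleChoiceQuestion:
    stem: str                 # the question itself
    alternatives: list[str]   # the correct answer plus the distractors
    answer_index: int         # position of the correct answer

    @property
    def distractors(self) -> list[str]:
        return [a for i, a in enumerate(self.alternatives)
                if i != self.answer_index]

    def shuffled(self) -> "MultipleChoiceQuestion":
        """Copy with alternatives reordered and the answer index updated."""
        order = random.sample(range(len(self.alternatives)),
                              len(self.alternatives))
        return MultipleChoiceQuestion(
            stem=self.stem,
            alternatives=[self.alternatives[i] for i in order],
            answer_index=order.index(self.answer_index),
        )

q = MultipleChoiceQuestion(
    stem="Who was the third president of the United States?",
    alternatives=["George Washington", "Benjamin Franklin",
                  "Thomas Jefferson", "John Adams"],
    answer_index=2,
)
print(q.distractors)  # the three plausible wrong answers
```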

The following guidelines are offered to help streamline the process of creating multiple choice questions as well as minimize the disadvantages of using them.

General guidelines for writing stems:

  1. When possible, prepare the stem as a clearly written question rather than an incomplete statement.

Poor Example: Psychoanalysis is….

Better example: What is the definition of psychoanalysis? 

  2. Eliminate excessive or irrelevant information from the stem.

Poor example: Jane recently started a new job and can finally afford her own car, a Honda Civic, but is surprised at the high cost of gasoline. Gasoline prices are affected by:

Better example: Which of the following are factors that affect the consumer price of gasoline? 

  3. Include words/phrases in the stem that would otherwise be repeated in the alternatives.

Poor example: Which of the following statements are true?
1. Slowing population growth can prevent global warming
2. Halting deforestation can prevent global warming
3. Increasing beef production on viable land can prevent global warming
4. Improving energy efficiency can prevent global warming

Better example: Which of the following techniques can be used to prevent global warming?
1. Slowing population growth
2. Halting deforestation
3. Increasing beef production on viable land
4. Improving energy efficiency 

  4. Avoid using negatively stated stems. If you must use them, highlight the negative word so that it is obvious to students.

Poor example: Which of the following is not a mandatory qualification to be the president of the United States?

Better example: Which of the following is NOT a mandatory qualification to be the president of the United States?

General guidelines for writing alternative responses:

  1. Make sure there is only one correct answer.
  2. Create distractors that are plausible to avoid students guessing the correct answer.

Poor example:
Who was the third president of the United States?
1. George Washington
2. Bugs Bunny
3. Thomas Jefferson
4. Daffy Duck

Better example: Who was the third president of the United States?
1. George Washington
2. Benjamin Franklin
3. Thomas Jefferson
4. John Adams 

  3. Make sure alternative responses are grammatically parallel to each other.

Poor example: Which of the following is the best way to build muscle?
1. Sign up to run a marathon
2. Drinking lots of water
3. Exercise classes
4. Eat protein

Better example: Which of the following is the best way to build muscle?
1. Running on a treadmill
2. Drinking lots of water
3. Lifting weights
4. Eating lots of protein

  4. When possible, list the alternative responses in a logical order (numerical, alphabetical, etc.).

Poor example: How many ounces are in a gallon?
1. 16
2. 148
3. 4
4. 128

Better example: How many ounces are in a gallon?
1. 4
2. 16
3. 128
4. 148

  5. Avoid using ‘All of the above’ or ‘None of the above’ to prevent students from using partial knowledge to arrive at the correct answer.
  6. Use at least four alternative responses to enhance the reliability of the test.

References:

Brame, C. (2013). Writing good multiple choice test questions. Retrieved December 14, 2016, from https://cft.vanderbilt.edu/guides-sub-pages/writing-good-multiple-choice-test-questions/

Burton, S. J., Sudweeks, R. R., Merrill, P.F., and Wood, B. (1991). How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty. Provo, Utah: Brigham Young University Testing Services and The Department of Instructional Science.

“Multiple Choice Questions.” The University of Texas at Austin Faculty Innovation Center, 14 Dec. 2016, https://facultyinnovate.utexas.edu/teaching/check-learning/question-types/multiple-choice.

Amy Brusini, Blackboard Training Specialist
Center for Educational Resources

Image Source: Pixabay.com

To Curve or Not to Curve Revisited

The practice of normalizing grades, more popularly known as curving, was the subject of an Innovative Instructor post, To Curve or Not to Curve, on May 13, 2013. That article discussed both norm-referenced grading (curving) and criterion-referenced grading (not curving). As the practice of curving has become more controversial in recent years, an op-ed piece in this past Sunday’s New York Times caught my eye. In Why We Should Stop Grading Students on a Curve (The New York Times Sunday Review, September 10, 2016), Adam Grant argues that grade deflation, which occurs when teachers use a curve, is more worrisome than grade inflation. First, by limiting the number of students who can excel, curving unfairly punishes students who may nonetheless have mastered the course content. Second, curving creates a “toxic” environment, a “hypercompetitive culture” where one student’s success means another’s failure.
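Before weighing the arguments on either side, it may help to see the mechanical difference between the two approaches. The sketch below contrasts them; the z-score boundaries and fixed cutoffs are illustrative assumptions, not a standard scheme.

```python
# Illustrative contrast between norm-referenced (curved) and
# criterion-referenced grading. The z-score boundaries and the fixed
# cutoffs are assumptions for demonstration only.
from statistics import mean, stdev

def curved_letter(score: float, all_scores: list[float]) -> str:
    """Norm-referenced: the letter depends on position in the class."""
    z = (score - mean(all_scores)) / stdev(all_scores)
    if z >= 1.0:
        return "A"
    if z >= 0.0:
        return "B"
    if z >= -1.0:
        return "C"
    return "D"

def criterion_letter(score: float) -> str:
    """Criterion-referenced: the letter depends only on fixed cutoffs."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "D"

scores = [95, 92, 90, 88, 85, 83, 80, 78, 75, 60]
for s in (90, 80):
    print(s, curved_letter(s, scores), criterion_letter(s))
# 90 B A
# 80 C B
# On a strong exam the curve pushes a 90 down to a B -- the grade
# deflation Grant worries about.
```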

Grant, a professor of psychology at the Wharton School at the University of Pennsylvania, cites evidence that curving is a “disincentive to study.” Taking observations from his work as an organizational psychologist and applying them in his classroom, Grant has found he could both disrupt the culture of cutthroat competition and get students to work together as a team to prepare for exams. Teamwork has numerous advantages in both the classroom and the workplace, as Grant details. Another important aspect is “…that one of the best ways to learn something is to teach it.” When students study together for an exam they benefit from each other’s strengths and expertise. Grant describes the methods he used in constructing the exams and how his students have leveraged teamwork to improve their scores on course assessments. One device he uses is a Who Wants to Be a Millionaire-type “lifeline” for students taking the final exam. While his particular approaches may not be suitable for your teaching, the article provides food for thought.

Because I am not advocating for one way of grading over another, but rather encouraging instructors to think about why they are taking a particular approach and whether it is the best solution, I’d like to present a counter argument. In praise of grading on a curve by Eugene Volokh appeared in The Washington Post on February 9, 2015. “Eugene Volokh teaches free speech law, religious freedom law, church-state relations law, a First Amendment Amicus Brief Clinic, and tort law, at UCLA School of Law, where he has also often taught copyright law, criminal law, and a seminar on firearms regulation policy.” He counters some of the standard arguments against curving by pointing out that students and exams will vary from year to year, making it difficult to draw consistent lines between, say, an A- and a B+ exam. This may be even more difficult for a less experienced teacher. Volokh also believes in the value of the curve for reducing the pressure to inflate grades. He points out that competing law schools tend to align their curves, making curving an accepted practice among law school faculty. He also suggests some tweaks to curving that strengthen its application.

As was pointed out in the earlier post, curving is often used in large lecture or lab courses that may have multiple sections and graders, as it provides a way to standardize grades. However, that issue may be resolved by instructing multiple graders how to assign grades based on a rubric. See The Innovative Instructor on creating rubrics and calibrating multiple graders.

Designing effective assessments is another important skill for instructors to learn, and one that can eliminate the need to use curving to adjust grades on a poorly conceived test. A good place to start is Brown University’s Harriet W. Sheridan Center for Teaching and Learning webpages on designing assessments where you will find resources compiled from a number of Teaching and Learning Centers on designing “assessments that promote and measure student learning.”  The topics include: Classroom Assessment and Feedback, Quizzes, Tests and Exams, Homework Assignments and Problem Sets, Writing Assignments, Student Presentations, Group Projects and Presentations, Labs, and Field Work.

Macie Hall, Instructional Designer
Center for Educational Resources


Image Source: © Reid Sczerba, 2013.


Rethinking Oral Examinations for Undergraduate Students

Oral examinations, also called viva voce, have long been associated with graduate studies, but many years ago, when I was an undergraduate, oral exams were not unheard of. All undergraduates at my university were required to write a thesis, and many of us took comprehensive written and oral examinations in our fields. I had several courses in my major field, art history, which held oral examinations as the final assessment of our work. At the time, this practice was not uncommon in British and European universities for undergraduates. Since then it has become a rarity both here and abroad, replaced by other forms of assessment for undergraduate students.

Recently I learned that Richard Brown, Director of Undergraduate Studies and an associate teaching professor in the JHU Department of Mathematics, had experimented with oral examinations of his undergraduate students in Honors Multivariable Calculus.

Some background: Honors Multivariable Calculus is designed to be a course for students who are very interested in mathematics but are still learning the basics. Students must have the permission of the instructor to enroll. They are likely to be highly motivated learners. In this instance, Brown had only six students in the class—five freshmen and one sophomore. For the freshmen, this fall course was their first introduction to a college math course. They came in with varying levels of skill and knowledge, knowing that the course would be challenging. The course format was two 75-minute lectures a week and one hour-long recitation (problem solving) session with a graduate student teaching assistant. This is the part of the course where students work in an interactive environment, applying theory to practice, answering questions, and getting an alternate point of view from the graduate student assistant instructor.

Assessments in the course included two in-class midterms (written and timed), weekly graded homework assignments (usually problems), and the final exam. As Brown thought about the final exam, he realized that he had already seen his students’ approaches to timed and untimed “mathematical writing” in the midterms and homeworks. So, why not try a different environment for the final and do an oral examination? He discussed the concept with the students in class and allowed them to decide as a class which option they preferred. The students agreed to the oral exam.

Brown made sequential appointments with the students, giving them 20 minutes each for the exam. He asked them different questions to minimize the potential for sharing information, but the questions were of the same category. For example, one student might be asked to discuss the physical or geometric interpretation of Gauss’s Theorem, and another would be given the same question about Stokes’s Theorem. If a student got stuck in answering, Brown would reword the question or provide a small hint. In contrast, on a written exam, if a student gets stuck, they are stuck. You may never identify exactly what they know and don’t know. Another advantage, Brown discovered, was that by seeing how a student answered a question, he could adjust follow up questions to get a deeper understanding of the student’s depth of learning. He could probe to assess understanding or push to see how far the student could go. He found the oral exam gave him a much more comprehensive view of their knowledge than a written one.

In terms of grading, Brown noted that by the end of the semester he knew the students quite well and had a feel for their levels of comprehension, so in many ways the exam was a confirmation. He did not have a written rubric for the exam, as he did for the midterms, but he did take notes to share with the students if they wanted to debrief on their performance. He saw this as a more subjective assessment, balanced by the relatively objective assessment of the homeworks and midterms.

Following up with students after the exam, Brown found that four of the six students really liked the format and found it easier than anticipated. Only two of the students had planned to become majors at the start of the course, but ultimately four declared a mathematics major. Brown noted that he would like to use the oral examination again in the future, but felt that it would not be possible with more than 10 students in a class.

After talking with Brown, I searched to find recent literature on undergraduate oral exams. Two papers are worth reading if the concept is of interest:

Oral vs. Written Evaluation of Students, Ulf Asklund and Lars Bendix, Department of Computer Science, Lund Institute of Technology, Pedagogisk Inspirationskonferens, Lund University Publications, 2003. A conference paper detailing the advantages and disadvantages of the two formats. The authors, based on their experience, found that oral examinations are better suited than written ones for evaluating higher levels of understanding based on Bloom’s Taxonomy.

Oral versus written assessments: A test of student performance and attitudes, Mark Huxham, Fiona Campbell, and Jenny Westwood, Assessment & Evaluation in Higher Education 37(1):125-136, January 2012. This study of two cohorts of students examined “…[s]tudent performance in and attitudes towards oral and written assessments using quantitative and qualitative methods.” Many positive aspects of oral examinations were found. See also a SlideShare Summary of this paper. Possible benefits of oral assessment included: “1) Development of oral communication skills 2) More ‘authentic’ assessment 3) More inclusive 4) Gauging understanding & Encouraging critical thinking 5) Less potential for plagiarism 6) Better at conveying nuances of meaning 7) Easier to spot rote-learning.”

A site to explore is the University of Pittsburgh’s Speaking in the Disciplines initiative. “…committed to the centrality of oral expression in the educational process.” Detailed information for those considering oral examinations is provided, including benefits (“direct, dialogic feedback,” “encourages in-depth preparation,” “demands different skills,” “valuable practice for future professional activity,” and “reduced grading stress”) and potential drawbacks (“time,” “student resistance and inexperience,” “subjective grading,” and “inadequate coverage of material”).


Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Pixabay.com

Report on the JHU Symposium on Excellence in Teaching and Learning in the Sciences

On January 11th and 12th Johns Hopkins University held its fourth Symposium on Excellence in Teaching and Learning in the Sciences. The event was part of a two-day symposium co-sponsored by the Science of Learning Institute and the Gateway Sciences Initiative (GSI). The first day highlighted cognitive learning research; the second day examined the practical application of techniques, programs, tools, and strategies that promote gateway science learning. The objective was to explore recent findings about how humans learn and pair those findings with the latest thinking on teaching strategies that work. Four hundred people attended over the course of the two days: approximately 80% from Johns Hopkins University, with representation from all divisions, and 20% from other universities, K-12 school systems, organizations, and companies. Videos of the presentations from the January 12th sessions are now available.

The GSI program included four guest speakers and three Johns Hopkins speakers. David Asai, Senior Director of Science Education at Howard Hughes Medical Institute, argued persuasively that diversity and inclusion are essential to scientific excellence. He said that while linear interventions (i.e., summer bridge activities, research experiences, remedial courses, and mentoring/advising programs) can be effective at times, they are not capable of scaling to support the exponential change needed to mobilize a diverse group of problem solvers prepared to address the difficult and complex problems of the 21st century. He asked audience participants to consider this: “Rather than developing programs to ‘fix the student’ and measuring success by counting participants, how can we change the capacity of the institution to create an inclusive campus climate and leverage the strengths of diversity?” [video]

Sheri Sheppard, professor of mechanical engineering at Stanford University, discussed learning objectives and course design in her presentation: Cooking up the modern undergraduate engineering education—learning objectives are a key ingredient [video].

Eileen Haase, senior lecturer in biomedical engineering at Johns Hopkins, discussed the development of the biomedical engineering design studio from the perspective of both active learning classroom space and curriculum [video]. Evidence-based approaches to curriculum reform and assessment was the topic addressed by Melanie Cooper, the Lappan-Phillips Chair of Science Education at Michigan State University [video]. Tyrel McQueen, associate professor of chemistry at Johns Hopkins, talked about his experience with discovery-driven experiential learning in a report on the chemical structure and bonding laboratory, a new course developed for advanced freshmen [video]. Also from Hopkins, Robert Leheny, professor of physics, spoke on his work in the development of an active-learning-based course in introductory physics [video].

Steven Luck, professor of psychology at the University of California at Davis, provided an informative and inspiring conclusion to the day with his presentation of the methods, benefits, challenges, and assessment recommendations for how to transform a traditional large lecture course into a hybrid format [video].

Also of interest may be the videos of the presentations from the Science of Learning Symposium on January 11, 2016. Speakers included: Ed Connor, Johns Hopkins University; Jason Eisner, Johns Hopkins University; Richard Huganir, Johns Hopkins University; Katherine Kinzler, University of Chicago; Bruce McCandliss, Stanford University; Elissa Newport, Georgetown University; Jonathan Plucker, University of Connecticut; Brenda Rapp, Johns Hopkins University; and Alan Yuille, Johns Hopkins University.


Kelly Clark, Program Manager
Center for Educational Resources

Image Source: JHU Gateway Sciences Initiative logo

Using the Critique Method for Peer Assessment

As a writer I have been an active participant in a formal critique group facilitated by a professional author and editor. The critique process, for those who aren’t familiar with the practice, involves sharing work (traditionally, writing and studio arts) with a group to review and discuss. Typically, the person whose work is being critiqued must listen without interrupting as others provide comments and suggestions. Critiques are most useful if a rubric and a set of standards for review are provided and adhered to during the commentary. For example, in my group, we are not allowed to say, “I don’t like stories that are set in the past.” Instead we must provide specific examples to improve the writing: “In terms of authoritative writing, telephones were not yet in wide use in 1870. This creates a problem for your storyline.” After everyone has made their comments, the facilitator adds to and summarizes them, correcting any misconceptions. Then the writer has a chance to ask questions for clarification or offer brief explanations. In critique, both the creator and the reviewers benefit. Speaking personally, the process of peer evaluation has honed my editorial skills as well as improved my writing.

With peer assessment becoming a pedagogical practice of interest to our faculty, could the critique process provide an established model that might be useful in disciplines outside the arts? A recent post on the Tomorrow’s Professor Mailing List, Teaching Through Critique: An Extra-Disciplinary Approach, by Johanna Inman, MFA, Assistant Director, Teaching and Learning Center, Temple University, addresses this topic.

“The critique is both a learning activity and assessment that aligns with several significant learning goals such as critical thinking, verbal communication, and analytical or evaluation skills. The critique provides an excellent platform for faculty to model these skills and evaluate if students are attaining them.” Inman notes that critiquing involves active learning, formative assessment, and community building. Critiques can be used to evaluate a number of different assignments as might be found in almost any discipline, including short papers and other writing assignments, multimedia projects, oral presentations, performances, clinical procedures, interviews, and business plans. In short, any assignment that can be shared and evaluated through a specific rubric can be evaluated through critique.

A concrete rubric is at the heart of recommended best practices for critique. “Providing students with the learning goals for the assignment or a specific rubric before they complete the assignment and then reviewing it before critique can establish a focused dialogue. Additionally, prompts such as Is this work effective and why? or Does this effectively fulfill the assignment? or even Is the planning of the work evident? generally lead to more meaningful conversations than questions such as What do you think?”

It is equally important to establish guidelines for the process, what Inman refers to as an etiquette for providing and receiving constructive criticism. Those on the receiving end should listen and keep an open mind. Learning to accept criticism without getting defensive is a life skill that will serve students well. Those providing the assessment, Inman says, should critique the work, not the student, and offer specific suggestions for improvement. The instructor or facilitator should foster a climate of civility.

Inman offers tips for managing class time for a critique session and specific advice for instructors to ensure a balanced discussion. For more on peer assessment generally, see the University of Texas at Austin Center for Teaching and Learning’s page on Peer Assessment. The Cornell Center for Teaching Excellence also has some good advice for instructors interested in Peer Assessment, answering some questions about how students might perceive and push back against the activity. Peer assessment, whether using a traditional critique method or another approach, benefits students in many ways. As they learn to evaluate others’ work, they strengthen their own.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Meeting. CC BY-SA Marco Antonio Torres https://www.flickr.com/photos/torres21/3052366680/in/photostream/

Feedback Codes: Giving Student Feedback While Maintaining Sanity

We heard our guest writer, Stephanie Chasteen (Associate Director, Science Education Initiative, University of Colorado at Boulder), talk about feedback codes in the CIRTL MOOC, An Introduction to Evidence-Based Undergraduate STEM Teaching, now completed but due to run again in the near future. She presented in Week 2: Learning Objectives and Assessment, segment 4.7.0 – Feedback Codes. Below is her explanation of this technique.


One of the most important things in learning is timely, targeted feedback.  What exactly does that mean?  It means that in order to learn to do something well, we need someone to tell us…

  • Specifically, what we can do to improve
  • Soon after we’ve completed the task.

Unfortunately, most feedback that students receive is too general to be of much use, and usually arrives a week or two after the assignment was turned in – at which point the student is less invested in the outcome and doesn’t remember their difficulties as well. The main reason is that we, as instructors, just don’t have the time to give students feedback that is specific to their learning difficulties – especially in large classes.

So, consider ways to give that feedback that don’t put such a burden on you.  One such method is using feedback codes.

The main idea behind feedback codes is to determine common student errors and assign each of those errors a code. When grading papers, you (or the grader) need only write down the letter of the feedback code, and the student can refer to the list of what these codes mean in order to get fairly rich feedback about what they did wrong.

Example

Let me give an example of how this might work. In a classic physics problem, you might have two carts on a track, which collide and bounce off one another. The students must calculate the final speed of the cart.

[Diagram: two carts colliding on a track]

Below is a set of codes for this problem that were developed by Ed Price at California State University at San Marcos.

[Table: feedback codes for the collision problem]
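A minimal sketch of how such a code sheet might work in practice follows; the codes and messages here are illustrative stand-ins for the collision problem, not Price’s actual set.

```python
# Sketch of a feedback-code lookup for the collision problem above.
# These codes and messages are illustrative stand-ins, not Ed Price's
# actual set.

FEEDBACK_CODES = {
    "A": "Momentum conservation not applied; start from m1*v1 + m2*v2.",
    "B": "Sign error on a velocity; define a positive direction first.",
    "C": "Units mixed when substituting; convert masses and speeds first.",
    "D": "Algebra slip isolating the final speed; recheck the last step.",
}

def expand_codes(codes: str) -> list[str]:
    """Turn the letters a grader wrote in the margin into full feedback."""
    return [f"{c}: {FEEDBACK_CODES[c]}" for c in codes if c in FEEDBACK_CODES]

# The grader wrote "BD" on a student's paper; the student looks up the details.
for line in expand_codes("BD"):
    print(line)
```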

How to come up with the codes?

If you already know what types of errors students make, you might come up with feedback codes on your own.  In our classes, we typically have the grader go through the student work, and come up with a first pass of what those feedback codes might look like.  This set of codes can be iterated during the grading process, resulting in a complete set of codes which describe most errors – along with feedback for improvement.

How does the code relate to a score?

Do these feedback codes correspond to the students’ grades?  They might – for example, each code might have a point value.  But, I wouldn’t communicate this to the students!  The point of the feedback codes is to give students information about what they did wrong, so they can improve for the future.  There is research that shows that when qualitative feedback like this is combined with a grade, the score trumps everything; students ignore the writing, and only pay attention to the evaluation.

Using GradeMark to provide feedback codes

Mike Reese, a doctoral student at Johns Hopkins, uses the feedback codes function in Turnitin. The GradeMark tool in Turnitin allows the instructor to create custom feedback codes for comments commonly shared with students. Mike provides feedback on the electronic copy of the document through Turnitin by dragging and dropping feedback codes onto the paper and writing paper-specific comments as needed.

[Screen shot: GradeMark example]

Advantages of feedback codes

The advantages of using feedback codes are:

  1. Students get feedback without a lot of extra writing by the grader
  2. The instructor gets qualitative feedback on how student work falls into broad categories
  3. The grader uses the overall quality of the response to assign a score, rather than nit-picking the details

Another way to provide opportunities for this feedback is through giving students rubrics for their own success, and asking them to evaluate themselves or their peers – but that’s a topic for another article.

Stephanie Chasteen
Associate Director, Science Education Initiative
University of Colorado Boulder

Stephanie Chasteen earned a PhD in Condensed Matter Physics from the University of California, Santa Cruz. She has been involved in science communication and education since that time, as a freelance science writer, a postdoctoral fellow at the Exploratorium Museum of Science in San Francisco, an instructional designer at the University of Colorado, and the multimedia director of the PhET Interactive Simulations. She currently works with several projects aimed at supporting instructors in using research-based methods in their teaching.

Image Sources: Macie Hall, Colliding Carts Diagram, adapted from the CIRTL MOOC An Introduction to Evidence-Based Undergraduate STEM Teaching video 4.7.0; Ed Price, Feedback Codes Table; Amy Brusini, Screen Shot of GradeMark Example.

Sharing Assignment Rubrics with Your Students

We’ve written about rubrics before, but it is certainly a topic that bears additional coverage. In its broadest meaning, a rubric is a guide for evaluation. More specifically, rubrics establish the criteria, qualitative markers, and associated scores for assessment of student work. Recently I have been talking and thinking about rubrics in a number of contexts – in consultations with faculty, as a workshop facilitator, and in planning for a hands-on exercise for an instruction module.

In consultation with faculty on assessing assignments I sometimes hear, “I’ve been teaching this course for years. It’s a small seminar so I assign a term paper. I don’t need a rubric because I know what qualifies as an ‘A’ paper.” What that means is that the person has a rubric of sorts in his or her head. The problem is that the students aren’t mind readers. As useful as rubrics are for an instructor to ensure that grading is consistent across the class, they are equally useful when shared with students, who then can understand the criteria, qualitative markers, and associated scores for the assignment.

As a workshop facilitator I recently saw the advantage for students in having a rubric to guide them in preparing a final project. Beyond the instructions for the assignment, they could see clearly the points on which their work would be evaluated and what would constitute excellent, good, and unacceptable work. Unsure about how to create rubrics to share with your students? There are some great resources to help you develop rubrics for your classes.

The University of California at Berkeley Center for Teaching and Learning webpage on rubrics offers general information on using rubrics and on how to create a rubric. The CTL notes that “[r]ubrics help students focus their efforts on completing assignments in line with clearly set expectations.” There are also example rubrics for downloading and a bibliography for further reading.

The Eberly Center for Teaching Excellence & Educational Innovation at Carnegie Mellon University has a section on rubrics as part of their resources on designing and teaching a course (also worth a look).  Their advice on sharing rubrics with students: “A rubric can help instructors communicate to students the specific requirements and acceptable performance standards of an assignment. When rubrics are given to students with the assignment description, they can help students monitor and assess their progress as they work toward clearly indicated goals. When assignments are scored and returned with the rubric, students can more easily recognize the strengths and weaknesses of their work and direct their efforts accordingly.” There are also examples of rubrics for paper assignments (Philosophy, Psychology, Anthropology, History); projects (including an Engineering Design project); oral presentations (including group presentations); and for assessing student in-class participation.

The Cornell University Center for Teaching Excellence section on rubrics states that “[r]ubrics are a powerful tool for supporting learning by guiding learners’ activities and increasing their understanding of their own learning process.” They provide a template for creating a rubric – a rubric for rubrics, so to speak. There are a number of sample rubrics and scoring feedback sheets, sources for sample rubrics, and links to presentations on using rubrics.

All three of these sites gave me useful examples and resources for developing a rubric to use in the instructional module I’ll teach in August.
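For instructors who like to see the structure spelled out, here is a minimal sketch of a rubric as criteria crossed with performance levels; the criteria, descriptors, and point values are illustrative, not taken from any of the sites above.

```python
# Minimal sketch of a rubric: criteria crossed with performance levels,
# each cell holding a descriptor. Criteria, descriptors, and points are
# illustrative only.

RUBRIC = {
    "Thesis & argument": {
        4: "Clear, arguable thesis sustained with evidence throughout.",
        3: "Clear thesis; evidence uneven in places.",
        2: "Thesis present but vague or weakly supported.",
        1: "No identifiable thesis.",
    },
    "Use of sources": {
        4: "Appropriate sources, correctly cited and well integrated.",
        3: "Appropriate sources with minor citation errors.",
        2: "Thin or uneven sourcing; citation problems.",
        1: "Sources missing or miscited.",
    },
    "Organization & mechanics": {
        4: "Logical structure; essentially error-free prose.",
        3: "Sound structure; occasional errors.",
        2: "Weak transitions; frequent errors.",
        1: "Disorganized; errors impede reading.",
    },
}

def score_paper(levels: dict[str, int]) -> float:
    """Total the assigned levels as a percentage of the maximum."""
    earned = sum(levels[criterion] for criterion in RUBRIC)
    possible = 4 * len(RUBRIC)
    return round(100 * earned / possible, 1)

print(score_paper({"Thesis & argument": 3,
                   "Use of sources": 4,
                   "Organization & mechanics": 3}))  # 83.3
```

Writing the descriptors out cell by cell, as sketched here, is exactly what makes grading consistent across a class: the qualitative markers are fixed before the first paper is read.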

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image source: Microsoft Clip Art, edited by Macie Hall

Resources for Peer Learning and Peer Assessment

Several weeks ago our colleagues in the Center for Teaching and Learning at the Johns Hopkins School of Public Health presented the very informative half-day symposium Peer to Peer: Engaging Students in Learning and Assessment. Speakers presented on their real-life experiences implementing peer learning strategies in the classroom. There were hands-on activities demonstrating the efficacy of peer-to-peer learning. Two presentations focused on peer assessment, highlighting the data on how peer assessment measures up to instructor assessment and giving examples of its use. If you missed it, the presentations were recorded and are now available.

A previous post highlighted Howard Rheingold’s presentation From Pedagogy to Peeragogy: Social Media as Scaffold for Co-learning. You will need to bring up the slides separately – there is a link for them on the CTL page.

For additional material on peer learning and assessment the JHSPH CTL resources page is loaded with information on the subject, outlining best practices and highlighting references and examples.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Microsoft Clip Art