The Hazards of Teaching for the First Time

This post was submitted by Atousa Saberi, a graduate student in the Johns Hopkins Teaching Academy who reflected on her first-time teaching experience.

I would like to share my learning experience as a PhD student teaching my first undergraduate course during fall 2020: Natural Hazards.

Teaching this course, I wrestled with several questions: How can I engage students in a virtual setting? How can I make them think? What is the purpose of education after all and what do I want them to take away from the course?

About the course setting

Fall semester 2020 was a unique time to teach a course on natural hazards in the sense that all students were directly impacted by at least one type of disaster – the global pandemic. In addition, the semester coincided with a record-breaking Atlantic hurricane season on the East Coast and wildfires on the West Coast. I used these events as an opportunity to spark students’ curiosity and motivate them to learn about the science of natural hazards.

As a student, my best learning experiences happened through dialogue and the exchange of ideas between classmates and instructors during class time. This experience inspired me to hold more than half of the class sessions synchronously.

To focus students’ attention, I opened every class session by posing questions. For example: Which hazards are the most destructive, frequent, or deadly? What is the effect of climate change on these hazards? What can we do about them? Some of these questions are open-ended and may sound overwhelming at first, but to me, the essential step in learning is to become curious enough to engage with questions and take steps to answer them. Isn’t the purpose of education to train future thinkers?

The course included clear learning objectives following Bloom’s Taxonomy to target both lower- and higher-level thinking skills. I designed multiple forms of assignments, such as completing readings, listening to podcasts, watching documentaries, working through analytical exercises, and participating in group discussions. To foster a sense of exploration in students, instead of exams I assigned a final term paper in which students investigated a natural disaster case study of their own interest.

The assessment was structured using specifications grading. This method directly links course grades to achievement of learning objectives and motivates students to focus on learning instead of earning points (Kelly, 2018). A grading rubric was provided for each individual assignment.
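As a hypothetical sketch of the idea (the objective names and grade bundles below are invented for illustration, not the actual course’s specifications), specifications grading ties each letter grade to a bundle of objectives that must all be met at a satisfactory level, with no point averaging:

```python
# Hypothetical illustration of specifications grading; the objective names
# and grade bundles are invented, not the course's actual specifications.
GRADE_BUNDLES = {
    "A": {"core", "analysis", "term_paper", "discussions"},
    "B": {"core", "analysis", "term_paper"},
    "C": {"core", "analysis"},
    "D": {"core"},
}

def course_grade(objectives_met):
    """Award the highest grade whose entire bundle of objectives was met
    at a satisfactory level -- no points, no partial-credit averaging."""
    for grade in ("A", "B", "C", "D"):
        if GRADE_BUNDLES[grade] <= set(objectives_met):
            return grade
    return "F"

print(course_grade({"core", "analysis"}))  # -> C
```

The pass/fail character of each specification is what shifts attention from accumulating points to demonstrating the objectives themselves.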

Lab demonstrations

Just as a picture is worth a thousand words, lab demonstrations go a long way to supplement lectures and to improve conceptual understanding of learning materials. But is it possible to perform them in a remote setting?

Simple demonstrations were still possible; I just needed to get creative in implementing them! For example, I used a rubber band and a biscuit to demonstrate the strength of brittle versus elastic materials under various modes of deformation. This illustrated how the choice of materials makes a drastic difference in which modes of deformation a building tolerates during an earthquake, and therefore in the survival rate.

I also used a musical instrument, my setar, as an analogy for seismic waves. Just seeing the instrument immediately captured the students’ attention. I played the same note in different octaves and reminded them that the difference in pitch comes from the string being confined to two different lengths. This is analogous to a short versus a long earthquake fault producing higher- or lower-frequency seismic waves (Figure 1). Students were also given an exercise to listen to the sound of earthquakes from an archive and infer the fault length.
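The physics behind the analogy can be sketched with the fundamental-frequency relation for a vibrating string, f = v/(2L); the wave speed below is an assumed illustrative value, not a measurement from the instrument:

```python
# Illustrative sketch of why a shorter string (or, by analogy, a shorter
# fault) produces a higher-frequency wave: f = v / (2L) for a string's
# fundamental mode.
def fundamental_frequency(wave_speed, length):
    """Fundamental frequency (Hz) of a vibrating string of the given length (m)."""
    return wave_speed / (2.0 * length)

v = 300.0  # assumed wave speed along the string, m/s
full_length = fundamental_frequency(v, 1.0)   # open string
half_length = fundamental_frequency(v, 0.5)   # string stopped at half length

# Halving the vibrating length doubles the frequency: one octave higher.
assert half_length == 2 * full_length
```

The same inverse scaling is why a long fault rupture radiates lower-frequency seismic energy than a short one.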

Figure 1. Comparison of seismic waves to the sound waves generated by a string instrument. (a) Lengths of two earthquake faults (USGS). (b) Musical instrument producing analogous sound waves. The red and green arrows show the note D played on the same string in different octaves.

Freedom to learn

Noam Chomsky often says in his interviews about education that students are taught to be passive and obedient rather than independent and creative (Robichaud, 2014). He believes education is a matter of laying out a string along which students will develop, but in their own way (Chomsky & Barsamian, 1996). Chomsky quotes a colleague’s response to students asking about course content: “it is not important what we cover in the class but rather what we discover” (Chomsky & Barsamian, 1996). Inspired by this perspective, I decided to encourage this enlightenment style of learning in my students by giving them freedom in their final term paper writing style. I encouraged the students to pick a case study based on what they loved to learn about natural hazards and gave them freedom in how to structure their writing and what to expand on (the science of the disaster, the losses, the social impacts, the aftermath, etc.). I was surprised that so many of the students asked for strict guidelines, templates, and sample term papers from previous semesters, as if the meaning of freedom and creativity in learning were unfamiliar to them!

Student perceptions of the class

I administered two anonymous feedback surveys, one in the middle of the semester and the other at the end. The mid-semester survey focused on understanding what was (and was not) working for students, what I should keep (or stop) doing, and what additional activities we could start doing to better adapt to the unexpected transition to online learning. I learned that students had a lot to say, some of which I incorporated in the second half of the semester, such as devoting a class session to practicing term paper writing and holding a Q&A session.

The end-of-semester survey was more focused on their takeaways from the class, and what assignments/activities were most helpful in their learning experience. I specifically asked them questions such as, “What do you think you will remember from this course?  What did you discover?”

The final survey revealed that by the end of the semester students, regardless of their background, understood the major earth processes and reflected on the relationship between humans and natural disasters. They grasped the interdisciplinary nature of the course and how one can explore the intersection of physics, humanities, and international relations through studying natural hazards and disasters. They also developed an appreciation for the role of science in predicting and dealing with natural hazards.

What I learned

Even though universities like Hopkins often train Ph.D. students to focus on producing publications rather than doing curiosity-driven research, I found that teaching a course like this led me to ask the kind of fundamental questions that could stimulate future research. This experience helped me develop as a teacher, as well as a scientist, while raising awareness and sharing important knowledge about natural hazards in a changing climate in which the frequency of hazardous events will likely increase. I captured students’ attention by making the learning relevant to their lives, which inspired their curiosity. Feedback surveys confirmed my sense that synchronous class discussions, constant questioning, and engaging lab demos hook students and motivate them to enter into dialogue.

I am grateful to the KSAS Dean’s Office for making teaching as a graduate student possible, to the Center for Educational Resources for providing great teaching resources, to Dr. Rebecca Kelly for her continuous support and valuable insights while I was teaching, and to Dr. Sabine Stanley and Dr. Thomas Haine for their encouragement and feedback on this essay.

Atousa Saberi

References:

Kelly, R. (2018). Meaningful grades with specification grading. https://cer.jhu.edu/files/InnovInstruct-Ped-18 specifications-grading.pdf

Robichaud, A. (2014). Interview with Noam Chomsky on education. Radical Pedagogy, 11 (1), 4.

Chomsky, N., & Barsamian, D. (1996). Class warfare: Interviews with David Barsamian. Monroe, ME: Common Courage Press.

USGS (2020). Listening to earthquakes: https://earthquake.usgs.gov/education/listen/index.php

Image Source: Pixabay, Atousa Saberi

Quick Tips: Formative Assessment Strategies

Designing effective assessments is a critical part of the teaching and learning process. Instructors use assessments, ideally aligned with learning objectives, to measure student achievement and determine whether or not students are meeting the objectives. Assessments can also tell instructors whether they should consider making changes to their instructional method or delivery.

Assessments are generally categorized as either summative or formative. Summative assessments, usually graded, are used to measure student comprehension of material at the end of an instructional unit. They are often cumulative, providing a means for instructors to see how well students are meeting certain standards. Instructors are largely familiar with summative assessments. Examples include:

  • Final exam at the end of the semester
  • Term paper due mid-semester
  • Final project at the end of a course

In contrast, formative assessments provide ongoing feedback to students in order to help identify gaps in their learning. They are lower stakes than summative assessments and often ungraded. Additionally, formative assessments help instructors determine the effectiveness of their teaching; instructors can then use this information to make adjustments to their instructional approach which may lead to improved student success (Boston). As discussed in a previous Innovative Instructor post about the value of formative assessments, when instructors provide formative feedback to students, they give students the tools to assess their own progress toward learning goals (Wilson). This empowers students to recognize their strengths and weaknesses and may help motivate them to improve their academic performance.

Examples of formative assessment strategies:

  • Surveys – Surveys can be given at the beginning, middle, and/or end of the semester.
  • Minute papers – Very short, in-class writing activity in which students summarize the main ideas of a lecture or class activity, usually at the end of class.
  • Polling – Students respond as a group to questions posed by the instructor using technology such as iClickers, software such as Poll Everywhere, or simply raising their hands.
  • Exit tickets – At the end of class, students respond to a short prompt given by the instructor usually having to do with that day’s lesson, such as, “What readings were most helpful to you in preparing for today’s lesson?”
  • Muddiest point – Students write down what they think was the most confusing or difficult part of a lesson.
  • Concept map – Students create a diagram of how concepts relate to each other.
  • First draft – Students submit a first draft of a paper, assignment, etc. and receive targeted feedback before submitting a final draft.
  • Student self-evaluation/reflection
  • Low/no-grade quizzes

Formative assessments do not have to take a lot of time to administer. They can be spontaneous, such as having an in-class question and answer session which provides results in real time, or they can be planned, such as giving a short, ungraded quiz used as a knowledge check. In either case, the goal is the same: to monitor student learning and guide instructors in future decision making regarding their instruction. Following best practices, instructors should strive to use a variety of both formative and summative assessments in order to meet the needs of all students.

References:

Boston, C. (2002). The Concept of Formative Assessment. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation. (ERIC Document Reproduction Service No. ED470206).

Wilson, S. (February 13, 2014). The Characteristics of High-Quality Formative Assessments. The Innovative Instructor Blog. http://ii.library.jhu.edu/2014/02/13/the-characteristics-of-high-quality-formative-assessments/

Amy Brusini
Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay

New Mobile Application to Improve Your Teaching

Finding time to implement effective teaching strategies can be challenging, especially for professors for whom teaching is only one of many responsibilities. PhD student John Hickey is trying to solve this problem with Tcrunch, a new application (available for free on the Apple and Google app stores) he has created.

Tcrunch enables more efficient and frequent teacher-student communication. You can think of it as an electronic version of the teaching strategy called an “exit ticket.” An exit ticket is traditionally a 3×5 card given to students at the end of class; the teacher asks a question to gain feedback from the students, and the students write a brief response. Tcrunch does the same thing, but eliminates paper and performs all collecting and analyzing in real time.

There are separate teacher and student portals into the app. Teachers can create and manage different classes. Within a class, teachers can create a question or prompt and release it to their students, who will also have Tcrunch. Students can then see this question, click on it, and answer it. Student answers come into the teacher’s app in real time. Teachers can evaluate the results in the app or email themselves the results as an Excel document. Other functionality includes multiple-choice questions, a bank of pre-existing questions to help improve teaching, and an anonymous setting for student users.
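As a rough illustration of that release-answer-export cycle (all class and method names here are invented for illustration; they are not Tcrunch’s actual code or API), the flow might be modeled as:

```python
# Hypothetical sketch of an exit-ticket cycle like the one Tcrunch supports.
from dataclasses import dataclass, field

@dataclass
class Question:
    prompt: str
    answers: dict = field(default_factory=dict)  # student id -> response text

    def respond(self, student_id, text):
        self.answers[student_id] = text

@dataclass
class Course:
    name: str
    questions: list = field(default_factory=list)

    def release(self, prompt):
        """Teacher posts a question; students see it once it is released."""
        question = Question(prompt)
        self.questions.append(question)
        return question

    def export_rows(self):
        """Flatten all responses into rows, as for an emailed Excel export."""
        return [(q.prompt, sid, text)
                for q in self.questions
                for sid, text in q.answers.items()]

course = Course("Natural Hazards")
ticket = course.release("What was the muddiest point of today's class?")
ticket.respond("student_1", "Elastic rebound theory")
print(course.export_rows())
```

The anonymous setting mentioned above would simply replace the student identifier with an opaque token before export.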

John developed Tcrunch because of his own struggles with time and improving learning in the classroom:

“I taught my first university-level class at Johns Hopkins, and I wanted more regular feedback to my teaching style, classroom activities, and student comprehension than just the course evaluation at the end of the year. As an engineer, frequent feedback is critical to iterative improvements. I also knew that I was not going to handout, collect, read, and analyze dozens of papers at the end of each class. So, I created Tcrunch.”

The app development process took nearly a year, with iterative coding and testing with teachers and students. Both student and teacher users have enjoyed using Tcrunch, mentioning the ease of use, being able to create and answer questions on the go, and having a platform for all their classes in one place. John has personally found that Tcrunch has helped him restructure classroom time and assignment load, and even find out why students are missing class.

John cites this development process as the main difference between his app and already existing polling technologies.

“Finding out what the professors and students wanted allowed me to see the needs that were not filled by existing technologies. This resulted in an app specifically designed to help teachers, instead of the other way around, for example, a generalized polling tool that is also applied to teaching. The specificity in design gives it its unique functionality and user experience.”

In the future John wants to extend the reach of Tcrunch to more teachers through advertising and partnering with Edtech organizations.

While the app may not be as flashy as Pokémon Go, Tcrunch has great utility and potential in the classroom.

To find and use the app, search for Tcrunch in the Apple or Google app store and download it. John Hickey can be contacted at jhickey8@jhmi.edu.

John Hickey
National Science Foundation Fellow
Biomedical Engineering Ph.D. Candidate
Johns Hopkins University

Images source: John Hickey 2018

Midterm Course Evaluations

Many of us are reaching the mid-semester mark and students are anticipating or completing midterm exams. Perhaps you are in the throes of grading.  Now is a good time to think about letting your students grade you, in the sense of evaluating your teaching. Think of this as a type of formative assessment, an opportunity for you to make corrections to your teaching strategies and clarify student misconceptions.

There are several ways to obtain feedback, and these evaluations do not need to be lengthy. Examples and resources are explored below. Popular among instructors I’ve talked to are short, anonymous surveys, offered either online or on paper. Blackboard and other course management systems allow you to create surveys where student responses are anonymous but you can see who has responded and who has not, making them easy to track. Keep these evaluations focused, with three or four questions, which might include: What is working in the class/what is not working? What change(s) would you suggest to improve [class discussions/lectures/lab sessions]? What is something you are confused about? Have you found [specific course assignment] to be a useful learning activity?

As the Yale Center for Teaching and Learning states on their website page Midterm Student Course Evaluations: “Midterm course evaluations (MCE) are a powerful tool for improving instructors’ teaching and students’ learning.  … MCE provide two critical benefits for teaching and learning: the temporal advantage of improving the course immediately, and the qualitative benefit of making teaching adjustments specific to the particular needs and desires of current students. In addition, MCE generally produce better quality feedback than end-of-term evaluations since students have a shared stake in the results and instructors can seek clarification on any contradicting or confusing responses.” The Yale site offers useful examples, strategies, and resources.

The Michigan State University Academic Advancement Network offers a comprehensive guide to Mid-term Student Feedback, which includes research citations as well as examples. Here, too, you will find a list of resources from other universities on the topic, as well as more in-depth methods for gaining student feedback. There is also a section with tips on making effective use of information gained from student feedback.

A sampling of survey-type midterm evaluations can be found in PDF format at the UC Berkeley Center for Teaching and Learning: Teaching Resources: Sample Midterm Evaluations. This document will get you off and running with little effort.

Ideally you will use the results of the midterm exam or other learning assessment as a gauge alongside the teaching evaluations. If the learning assessment indicates gaps in content understanding, you can see how it aligns with feedback from the student evaluations. The value is that you can make timely course corrections. Another plus: students will see that you are genuinely interested in your teaching and their learning.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Pixabay.com

The Characteristics of High-Quality Formative Assessments

As we explore different theories of learning, two points seem salient: that students’ understanding of intelligence affects their self-perception, their determination, their motivation, and their achievement (Dweck, 2002); and that a student’s ability to self-regulate learning, to be metacognitive, ensures more successful learning and achievement (Ormrod, 2012, pp. 352-3). As instructors plan curricula and assessments, they ought to consider how to use these points as guides to ensure student learning and success.

Formative assessment, understood as both a tool for instructors to gauge student learning and a teaching method, works iteratively with student understanding of intelligence and learner self-regulation. That is, formative assessment is based on the idea that learners should learn to take control of their learning, and that intelligence is a malleable quality. In turn, formative assessment improves self-reflection in students and reinforces the idea that intelligence can be increased rather than being a fixed entity, reflecting Carol S. Dweck’s important work on growth mindset, discussed in a recent Innovative Instructor post.

An understanding of just what formative assessment entails highlights the recursive relationships among formative assessment, self-reflection, and a malleable view of intelligence. Lorrie Shepard describes formative assessment as a process through which an instructor and a student come to better understand both the learning goals and the student’s work toward those goals in order to “alter the course of instruction and thus support the development of greater competence” (2005, p. 67). This definition identifies formative assessment as a process of feedback that improves student learning.

Using formative feedback as a teaching method means that a classroom becomes the locus of ongoing dialogue that helps students measure and improve as they work to meet goals, expectations, and objectives.  The instructor takes in information about student progress and understanding, which creates the opportunity for a feedback loop that the instructor can use to shape teaching.  It is the moment when student progress shapes instruction that formative feedback becomes formative assessment.

When practiced effectively, this iterative relationship between instruction, feedback, student adjustment, and instructional adjustment maps onto self-reflection and a view of malleable intelligence. As instructors provide formative feedback to students, they give students the tools to assess their own progress toward learning goals. Over time, students learn self-reflecting strategies (Shepard, 2005, p. 69; Wiggins, 2004, pp. 2-3, 6), allowing for moments such as Black and Wiliam noted when “one class, subsequently taught by a teacher not emphasizing assessment for learning, surprised that teacher by complaining, ‘Look, we’ve told you we don’t understand this. Why are you going on to the next topic?’” (2004, p. 36). As students reveal their learning progress, either directly (as in the example above) or indirectly through tasks that foster formative feedback, instructors have the opportunity to adapt their instruction. As teaching becomes more closely aligned with student progress, students are given increasingly refined opportunities for comprehension or alignment with expectations. As students chart their own progress, they implicitly buy in to the idea that they can improve their performance by making changes in their approach (Black & Wiliam, 2004, p. 30; Shepard, 2000, p. 43; Wiggins, 2004, p. 5). They come to understand, either overtly or tacitly, that their achievement is based on effort, not an unchanging quantity of intelligence (Shepard, 2005, p. 68; Lipnevich & Smith, 2009b, p. 364). When formative assessment works, students become self-regulating learners who practice self-reflection and learn a malleable view of intelligence—and are more motivated and more likely to achieve (Dweck, 2002).

Given the value of formative assessment, how can instructors use the characteristics of exemplary formative assessment as they plan their courses? Rather than inserting a few well-crafted formative assessments into the curriculum, instructors should understand that adopting formative assessment means implementing a course-long instructional approach. Specifically, instructors can use formative feedback in every class through effective questioning strategies that elicit information about student understanding and help students monitor and adjust their learning (Black & Wiliam, 2004, pp. 25-7). Instructors can assess students’ prior knowledge and use “knowledge-activation routines” such as the K-W-L strategy to “develop students’ metacognitive abilities while providing relevant knowledge connections for specific units of study” (Shepard, 2005, p. 68). Comments on work, marking of papers (Black & Wiliam, 2004, pp. 27-31; Lipnevich, 2009a; Lipnevich, 2009b), peer assessment, self-critique exercises (Black & Wiliam, 2004, pp. 31-3), one-on-one tutorials, small group remediation, instructor and student modeling, analysis of exemplars (Wiggins, 2004), and revision exercises can be used throughout.

Although methods may be similar across disciplines, the precise use of formative feedback will naturally vary between disciplines (Black & Wiliam, 2004, pp. 36-37; Shepard, 2000, p. 36). Nonetheless, Black & Wiliam and Shepard (2005) stress that adopting formative assessment as an instructional approach requires a cultural change within a learning community. Because students activate and practice self-reflective strategies in an effective formative feedback loop, they ought to be given a chance to develop and hone these skills in every classroom. Since formative assessment relies on students clearly understanding the expected outcomes of their learning and work, they need exemplars. If instructors within a department, discipline or, ideally, school can agree upon the characteristics of exemplary work and learning, student self-regulation is more natural and more likely to be accurate.

References

Black, P. & Wiliam, D. (2004). The Formative Purpose: Assessment Must First Promote Learning. Yearbook of the National Society for the Study of Education, 103(2), 20-50.

Dweck, C. (2002). Messages That Motivate: How Praise Molds Students’ Beliefs, Motivation, and Performance (in Surprising Ways). In J. Aronson (Ed.), Improving Academic Achievement: Impacts of Psychological Factors on Education (pp. 37-60). San Diego: Academic Press.

Lipnevich, A. & Smith, J. (2009a). “I Really Need Feedback to Learn:” Students’ Perspectives on the Effectiveness of the Differential Feedback Messages. Educational Assessment, Evaluation and Accountability, 21(4), 347-67.

Lipnevich, A. & Smith, J. (2009b). Effects of Differential Feedback on Students’ Examination Performance. Journal of Experimental Psychology: Applied, 15(4), 319-33.

Ormrod, J. (2012). Human Learning (6th ed.). Boston: Pearson.

Shepard, L. (2000). The Role of Classroom Assessment in Teaching and Learning. CSE Technical Report, University of California, Graduate School of Education & Information Studies, Los Angeles.

Shepard, L. (2005). Linking Formative Assessment to Scaffolding. Educational Leadership, 63, 66-70.

Shute, V. (2008). Focus on Formative Feedback. Review of Educational Research , 78, 153-89.

Wiggins, G. (2004). Assessment as Feedback. New Horizons for Learning Online Journal, 1-8.

Sarah Wilson is the co-director of the Upper School at Laurel School in Shaker Heights, Ohio. She has a B.A. (English) from Kenyon College, and an M.A. from Teachers College, Columbia University. She has taught middle and high school English for 13 years.


Image Source: Formative Assessment Wordle created by Macie Hall

Should you stop telling your students to study for exams?

The Innovative Instructor recently came across a thought-provoking article by David Jaffee in the Chronicle of Higher Education entitled Stop Telling Students to Study for Exams. In a nutshell, Jaffee advocates telling students that they should study for learning and understanding rather than for tests or exams. He reminds us that just because content is covered in class does not mean that students really learn it. Regurgitating information for an exam does not equal long-term retention. He points out that there are real consequences to this traditional approach.

On the one hand, we tell students to value learning for learning’s sake; on the other, we tell students they’d better know this or that, or they’d better take notes, or they’d better read the book, because it will be on the next exam; if they don’t do these things, they will pay a price in academic failure. This communicates to students that the process of intellectual inquiry, academic exploration, and acquiring knowledge is a purely instrumental activity—designed to ensure success on the next assessment.

His claims are backed with evidence. Numerous studies have shown that students who use rote memorization to cram for tests and exams do not retain the information studied over the long term. Real learning, which involves retention and transfer of knowledge to new situations, is a complicated process reflected by the vast amount of research on the subject.

As a side note, for those interested in learning more about cognitive development and student learning, there is a nice summary of key studies and models in the book by James M. Lang On Course: A Week by Week Guide to Your First Semester of College Teaching [Harvard University Press, 2008]. See Week 7 Students as Learners for an overview and bibliography.

Instead of a cumulative final exam, Jaffee recommends using formative and authentic assessments, which “[u]sed jointly…can move us toward a healthier learning environment that avoids high-stakes examinations and intermittent cramming.” Formative assessments, performed in class, provide opportunities for students to understand where their knowledge gaps are. [See The Innovative Instructor 2013 GSI Symposium Breakout Session 2: Formative Assessment and Teaching Tips: Classroom Assessment.] Authentic assessments allow students “to demonstrate their abilities in a real-world context.” Examples include group and individual projects, in-class presentations, multi-media assignments, and poster sessions.

The article has obviously provoked some controversy, as evidenced by the number of comments made – 225 as of this posting. One commenter supporting Jaffee with several rebuttals to critics is Robert Talbert, Professor in the Mathematics Department at Grand Valley State University in Allendale, Michigan, and author of The Chronicle of Higher Education blog Casting Out Nines. Talbert has blogged extensively about his experiences with flipping his classroom.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Microsoft Clip Art

2013 GSI Symposium Breakout Session 2: Formative Assessment

A Report from the Trenches

We’re continuing with our reports from the JHU Gateway Sciences Initiative (GSI) 2nd Annual Symposium on Excellence in Teaching and Learning in the Sciences. Next up is “Assessing Student Learning during a Course: Tools and Strategies for Formative Assessment,” presented by Toni Ungaretti, Ph.D., School of Education, and Mike Reese, M.Ed., Center for Educational Resources.

Please note that links to examples and explanations in the text below were added by CER staff and were not included in the breakout session presentation.

The objectives for this breakout session were to differentiate summative and formative assessment, review and demonstrate approaches to formative assessment, and describe how faculty use assessment techniques to engage in scholarly teaching.

Summarizing Dr. Ungaretti’s key points:

Assessment is a culture of continuous improvement that parallels the University’s focus on scholarship and research. It ensures learners’ performance, program effectiveness, and unit efficiency. It is an essential feature in the teaching and learning process. Learners place high value on marks or grades: “Assessment defines what [learners] regard as important.” [Brown, G., Bull, J., & Pendlebury, M. 1997. Assessing Student Learning in Higher Education. Routledge.]  Assessment ensures that what is important is learned.

Summative Assessment is often referred to as assessment of learning. It is regarded as high-stakes assessment – typically a test, exam, presentation, or paper at the midterm or end of a course.

Formative Assessment focuses on learning instead of assigning grades. “Creating a climate that maximizes student accomplishment in any discipline focuses on student learning instead of assigning grades. This requires students to be involved as partners in the assessment of learning and to use assessment results to change their own learning tactics.” [Fluckiger, J., Tixier y Virgil, Y., Pasco, R., and Danielson, K. (2010). Formative Feedback: Involving Students as Partners in Assessment to Enhance Learning. College Teaching, 58, 136-140.]

Effective formative assessment involves feedback, and that feedback has the greatest benefit when it addresses multiple aspects of learning: feedback on the product (the completed task), feedback on progress (the extent to which the learner is improving over time), and feedback on the process (how the learner approaches the task). When learners are involved as partners in assessment, feedback can be given more frequently.

[Diagram: the Three Ps of formative assessment – product, progress, and process]

From this point on in the session, participants engaged in active learning exercises that demonstrated various examples of formative assessment, including graphic organizers (Venn diagrams, mind maps, KWL charts, and Kaizen/T-charts – practices that focus on continuous improvement), classroom discussion with higher-order questioning (based on Bloom's Taxonomy), minute papers, and admit/exit slips.

Classroom discussions can tell the instructor much about student mastery of basic concepts. The teacher can initiate the discussion by presenting students with an open-ended question.

A minute paper is a quick in-class writing exercise where students answer a question focused on material recently presented, such as: What was the most important thing that you learned? What important question remains? This allows the instructor to gauge the understanding of concepts just taught.

Admit/exit slips are collected at the beginning or end of a class. Students provide short answers to questions such as: What questions do I have? What did I learn today? What did I find interesting?

There are many ways in which faculty can determine learner mastery. These may include the use of journaling or learning/response logs to gauge growth over time, constructive quizzes, using modifications of games such as Jeopardy, or structures such as a guided action or Jigsaw. There are also ways to quickly check student understanding such as using thumbs-up–thumbs-down, or i>Clickers.

Assessment may also be achieved by using “learner-involved” formative assessment.  Some ways to achieve this are through the use of three-color group quizzes, mid-term student conferencing, assignment blogs, think-pair-share, and practice presentations.

When incorporated into classroom practice, the formative assessment process provides the information needed to adjust teaching and learning while they are still happening. Finally, faculty should view formative assessment as an opportunity: no matter which methods are used, it is important that they allow students to be creative, have fun, learn, and make a difference.

Faculty may also use assessment methods as research. This gives them the opportunity to advance hypothesis-based teaching, gather data on instructional changes and student outcomes, and prepare scholarly submissions that advance the knowledge of teaching in their discipline. Teaching as research is the deliberate, systematic, and reflective use of research methods to develop and implement teaching practices that advance the learning experiences and outcomes of both students and teachers.

Cheryl Wagner, Program/Administrative Manager
Center for Educational Resources

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image Source: Macie Hall


Teaching Tips: Classroom Assessment

Increasing emphasis is being placed on assessment, and many faculty are looking for evaluation practices that extend beyond giving a mid-term and final exam. In particular the concept of non-graded classroom assessment is gaining traction. In their book Classroom Assessment Techniques, Thomas Angelo and Patricia Cross (Jossey-Bass, 1993) stress the importance of student evaluation that is “learner-centered, teacher-directed, mutually beneficial, formative, context-specific, ongoing, and firmly rooted in good practice.”

Students in a classroom.

While the authors describe in detail numerous techniques for ascertaining in a timely manner whether students are learning what is being taught, here are several quick, easy-to-implement methods:


The Minute Paper: At an appropriate break, ask students to answer on paper a specific question pertaining to what has just been taught. After a minute or two, collect the papers for review after class, or, to promote class interaction, ask students to pair off and discuss their responses. After a few minutes, call on a few students to report their answers and results of discussion. If papers are turned in, there is value to both the anonymous and the signed approach. Grading, however, is not the point; this is a way to gather information about the effectiveness of teaching and learning.

In Class Survey: Think of this as a short, non-graded pop quiz. Pass out a prepared set of questions, or have students provide answers on their own paper to questions on a PowerPoint/Keynote slide. Focus on a few key concepts. Again, the idea is to assess whether students understand what is being taught.

Exit Ticket: Select one of the following items and, near the end of class, ask your students to write on a sheet of paper 1) a question they have that didn’t get answered, 2) a concept or problem that they didn’t understand, 3) a bullet list of the major points covered in class, or 4) a specific question to assess their learning. Students must hand in the paper to exit class. Allow anonymous responses so that students will answer honestly. If you do this regularly, you may want to put the exit ticket question on your final PowerPoint/Keynote slide.

Tools that can help with assessment

Classroom polling devices (a.k.a. clickers) offer an excellent means of obtaining evidence of student learning. See http://www.cer.jhu.edu/clickers.html for information about the in-class voting system used at JHU. Faculty who are interested in learning more should contact Brian Cole in the CER.

Faculty at the JHU School of Nursing have been piloting an online application called Course Canary to collect student assessment data. Its formative course evaluation surveys allow faculty to gather student feedback quickly and anonymously. A free account (offering two online surveys and two exit ticket surveys) is available at: https://coursecanary.com/.

Macie Hall, Senior Instructional Designer
Center for Educational Resources


Image source: Microsoft Clip Art