Case Studies for an Inclusive STEM Classroom

As part of our work on the TILE – Toolkit for Inclusive Learning Environments – project (see previous post) my colleagues and I have been uncovering some great resources for fostering diversity and inclusion in the classroom.  I am always on the lookout for sources for case studies (see Quick Tips: Using Case Studies) and the Gendered Innovations project covers both bases.

Screenshot of the Gendered Innovations science case studies web page.

Gendered Innovations is a peer-reviewed project developed by Londa Schiebinger at Stanford University.  “Londa Schiebinger is the John L. Hinds Professor of History of Science in the History Department at Stanford University and Director of the EU/US Gendered Innovations in Science, Health & Medicine, Engineering, and Environment Project. Over the past twenty years, Schiebinger’s work has been devoted to teasing apart three analytically distinct but interlocking pieces of the gender and science puzzle: the history of women’s participation in science; the structure of scientific institutions; and the gendering of human knowledge.” [http://web.stanford.edu/dept/HPS/schiebinger.html]

From the Gendered Innovations website we learn that research has shown that sex and gender bias is counterproductive and costly.  It can result in human suffering and death when drugs are developed and released without proper testing on women, and leads to “missed market opportunities” when products fail to accommodate shorter people, both women and men. In research, failing to recognize gender differences may yield faulty results. The goal of the Gendered Innovations project is to provide scientists and engineers with practical methods for sex and gender analysis.

As a means to that end, there are a number of case studies provided for science, health and medicine, engineering, and the environment. These include extensive bibliographies. There is also a wealth of information on the website that provides a framework for thinking and teaching differently in your classroom.

Macie Hall, Senior Instructional Designer, Center for Educational Resources

Image Source: Screenshot of the Gendered Innovations science case studies web page - http://genderedinnovations.stanford.edu/case-studies-science.html

Fostering an Inclusive Classroom

Logo for TILE - Toolkit for Inclusive Learning Environments

I am excited to report on a project here at Johns Hopkins that will provide resources (available to all) for supporting inclusive practices in the classroom.  Sharing diverse perspectives and validating students’ varied experiences is a challenge for many faculty. Even those with the best intentions may unwittingly create classroom environments where students from minority communities feel uncomfortable or excluded. However, when executed effectively, an inclusive classroom becomes a layered and rich learning environment that not only engages students, but creates more culturally competent citizens. Enter TILE – Toolkit for Inclusive Learning Environments.

Funded by a Diversity Innovation Grant (DIG) from the Diversity Leadership Council (DLC), TILE will be a repository of examples and best practices that instructors can use to spark classroom conversations that foster diversity and inclusion.

The funding is being used to begin conversations with faculty who are currently implementing inclusive practices in the classroom. The conversations will result in a report-out session, scheduled for April 2015, when faculty will share ways in which they specifically support and foster an environment of inclusion that can then be replicated in other classrooms. These conversations will lead to the development of a toolkit that will include examples of best practices. The toolkit will offer inclusive instructional approaches from across the disciplines. For example, a biology professor might discuss intersex development as part of the curriculum, and an introductory engineering class might discuss Aprille Ericsson and some of her challenges at NASA.  When professors use these best practices in the classroom, they not only help students learn about some of the issues surrounding diverse populations, but also give students the voice to be more conversant about diverse issues. Most important is the engagement of students who otherwise may feel marginalized when their own unique experiences remain invisible.

Project collaborators are Demere Woolway, Director of LGBTQ Life; Shannon Simpson, Student Engagement and Information Fluency Librarian, and myself, with support from the Sheridan Libraries and Museums Diversity Committee. Most important will be the various lecturers and faculty from across the disciplines who will work with us on developing the toolkit.

More information on TILE can be found here. While TILE is in development, here are two resources for those interested in exploring ways to improve classroom climate.

The National Education Association (NEA) offers strategies for developing cultural competence for educators. “Cultural competence is the ability to successfully teach students who come from a culture or cultures other than our own. It entails developing certain personal and interpersonal awareness and sensitivities, understanding certain bodies of cultural knowledge, and mastering a set of skills that, taken together, underlie effective cross-cultural teaching and culturally responsive teaching.”

The Center for Integration of Research, Teaching and Learning (CIRTL) has some excellent diversity resources on its website, including a literature review, case studies, and a resource book for new instructors.

 

Macie Hall, Senior Instructional Designer, Center for Educational Resources
Shannon Simpson, Librarian for Student Engagement and Information Fluency, Sheridan Libraries and Museums

Image Source: TILE logo © 2015 Shannon Simpson

A Guide to Bloom’s Taxonomy

A few years ago at an instructional workshop for university professors the following question was posed to the attendees: “What do you know about Bloom’s Taxonomy of the Cognitive Domain?” Most of the respondents answered, “Whose taxonomy of what?”

That answer indicates a general lack of knowledge about one of the most basic pedagogical principles in education. Here are some straightforward guidelines on what Bloom’s taxonomy is and how you can use it in your class.

In 1956, Benjamin Bloom (an American educational psychologist), with collaborators Max Engelhart, Edward Furst, Walter Hill, and David Krathwohl, published a framework for categorizing educational goals: the Taxonomy of Educational Objectives, familiarly known as Bloom’s Taxonomy. The framework consisted of six major categories: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The categories after Knowledge were presented as “skills and abilities,” with the understanding that knowledge was the necessary precondition for putting these skills and abilities into practice.

The New Version of Bloom's Taxonomy

In 2001, Bloom’s taxonomy was revised by a group of cognitive psychologists led by Lorin Anderson (a former student of Bloom). To update the taxonomy to reflect 21st-century work, the authors used verbs to re-label the six categories and included “action words” to describe the cognitive processes by which learners encounter and work with knowledge. The figures accompanying this article reflect that work. The revised taxonomy is a useful tool in any classroom, for several reasons described below.

Bloom’s Levels of Understanding - Actions

About ninety percent of the questions students handle in any class are memory questions. The memory level is perfectly respectable and even essential in many learning situations. There are, however, disadvantages to relying on pure memory that an instructor should keep in mind. The memory level promotes the use of short-term memory, and the information may be forgotten if it is not used. Another problem with the memory level is that it does not guarantee understanding. We often assume that just because a student can cough up words, facts, and figures, he or she has “learned” and understands the material. That is simply not the case. When instruction moves up the scale, asking students to understand, apply, and analyze information, their learning outcomes improve.

That is not likely to happen, though, without some thoughtful preparation. In instructional design, questioning strategies can be as simple as the intentional progression of questions leading to higher levels of thinking and involvement. Bloom’s revised taxonomy can provide a framework for constructing those questions.

Some examples of how to incorporate Bloom’s taxonomy into classes include the following:

1. Creating Course Learning Objectives 

In education, learning objectives are brief statements that describe what students will be expected to learn by the end of a course, unit, or class period. Instructors can benefit from using a framework to construct and organize learning objectives for themselves and for students. Having an organized set of learning objectives helps instructors plan and deliver appropriate instruction, design valid assessment tasks and strategies, and ensure that instruction and assessment are aligned with the objectives.

For example, learning objectives following Bloom’s revised taxonomy could be constructed as follows.
Students should be able to:

  1. Exhibit previously learned material by recalling facts, terms and basic concepts.
  2. Demonstrate understanding of facts and ideas by organizing, comparing, interpreting and giving descriptions and stating main ideas.
  3. Solve problems by applying acquired knowledge, facts, techniques and rules in a different way.
  4. Examine and break information into parts by identifying motives or causes, making inferences, and finding evidence to support generalizations.
  5. Compile information together in a different way by combining elements in a new pattern or proposing alternative solutions.
  6. Present and defend opinions by making judgments about information, validity of ideas or quality of work based on a set of criteria.

2. Asking Questions

In-class questioning can be varied from the most simple to those that require more thought. These questions can be categorized following Bloom’s hierarchy of cognitive skills. Here are some examples of questions asked about the story Goldilocks and the Three Bears. Do you remember the story line? The little girl Goldilocks visits the home of the papa, mamma, and baby bear where she sleeps in their beds, eats their food, and sits in their chairs.

Remembering: List the items used by Goldilocks while she was in the Bears’ house.
Understanding: Explain why Goldilocks liked Baby Bear’s chair the best.
Applying: Demonstrate what Goldilocks would use if she came to your house.
Analyzing: Compare this story to reality. What events could not really happen?
Evaluating: Judge whether Goldilocks was good or bad. Defend your opinion.
Creating: Propose how the story would be different if it were Goldilocks and the Three Fish.

3: Constructing Test or Exam Questions

This is a combination of the above two points. If the course is arranged around learning objectives, designed with Bloom’s taxonomy in mind, then those objectives can be used to construct test and exam questions. This process will ensure alignment between instruction and assessment and provide validity to your evaluation of students’ knowledge and skills.

Additional Resources

  1. Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.
  2. Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: Longmans, Green.
  3. Davis, B. G. (2009). Tools for teaching (2nd ed.). San Francisco: Jossey-Bass.
  4. Southey, R. (1837). The Three Bears. [Note: this original version involves a nameless old woman instead of the little girl Goldilocks.]

Richard Shingles, Lecturer, Department of Biology
Director, TA Training Institute and The Summer Teaching Institute, Center for Educational Resources

Richard Shingles is a faculty member in the Biology department and also works with the Center for Educational Resources at Johns Hopkins University. He is the Director of the TA Training Institute and The Summer Teaching Institute on the Homewood campus of JHU. Dr. Shingles also provides pedagogical and technological support to instructional faculty, post-docs and graduate students

Image Source – CC Revised Bloom’s Taxonomy: Andrea Hernandez
Image Source – Bloom’s Levels of Understanding – Actions: Preparing Future Faculty Teaching Academy, Johns Hopkins University
http://www.cer.jhu.edu/graduatestudents/pffta.html

Should you ban laptops (and other devices) from your classroom?

Students using laptops in a lecture hall, view from the back looking at the students' screens.

This question was cogently addressed in two recent articles. One, by Tal Gross, an assistant professor at Columbia University, appeared December 30, 2014, as a Washington Post op-ed titled This Year, I Resolve to Ban Laptops from My Classroom. Gross references the other article, by Clay Shirky, a professor at New York University: Why I Just Asked My Students To Put Their Laptops Away, which appeared September 8, 2014, on Medium. To be clear, neither is teaching in an active learning classroom, where laptops might be considered a necessary piece of equipment for the pedagogical process.  Gross describes a lecture format with 85 students. Shirky, who calls himself “an advocate and activist for the free culture movement, [and] a pretty unlikely candidate for internet censor,” asked the students in his “fall seminar to refrain from using laptops, tablets, and phones in class.”

Shirky noticed a change over time as mobile devices became both more technically robust and more widely used. Rather than being a useful tool for note taking, these devices have become a distraction. There is also the issue of multitasking. Shirky states, “We’ve known for some time that multi-tasking is bad for the quality of cognitive work, and is especially punishing of the kind of cognitive work we ask of college students.” Any number of studies have shown that multi-taskers are deluded in their belief that the practice enhances their work performance. The seductive immediacy of social media makes it even more difficult for students using laptops, tablets, and cellphones in the classroom to focus on the material being taught. But what tipped Shirky over was the paper Laptop Multitasking Hinders Classroom Learning for Both Users and Nearby Peers, with results that “demonstrate that multitasking on a laptop poses a significant distraction to both users and fellow students and can be detrimental to comprehension of lecture content.” In justifying his decision to have students put away their laptops (and other devices), he says that he now sees teaching and learning as a collaborative effort with his students. “It’s not me demanding that they focus — its (sic) me and them working together to help defend their precious focus against outside distractions.”

Tal Gross focuses on another aspect of the issue: note taking. Typing on laptops can become “an exercise in dictation.” In a study undertaken by Pam A. Mueller (Princeton) and Daniel M. Oppenheimer (UCLA) titled The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking, the results showed “…that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand.” [Psychological Science, April 23, 2014, doi: 10.1177/0956797614524581.]

Both articles provide food for thought. Anecdotal evidence from our faculty here at Johns Hopkins suggests that students are becoming less adept at taking notes by hand, and even writing by hand at all. Old-fashioned essay-style exams taken in blue books seem to provide a challenge to students who complain of hand cramps at the end of the test. Yet the learning gains may be significant. Maybe it’s time to revive an old, tried and true practice. For students (and instructors) who need some tutoring on how to take notes, here is a resource to check out: The Sketchnote Handbook: The Illustrated Guide to Visual Note Taking, by Mike Rohde [Peachpit Press, November 30, 2012.]

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: CC MCGunner on Imgur at http://imgur.com/N2PYK8S?tags

A Manual for Flipping Your Classroom

The Innovative Instructor has featured several posts on flipping your classroom (see here, here, here, and here), a technique that has students learning content on their own time and using class time to work on problems, discuss materials, or engage in collaborative activities.

Text reading flipping the classroom with the classroom upside down

Just in time for the upcoming semester, the Chronicle of Higher Education has published A Guide to the Flipped Classroom, available for free download. The manual, in PDF form, collects seven case studies and articles on the process of flipping the classroom that appeared in the CHE over the past three years. Faculty teaching evolutionary biology, chemistry, mathematics, and business topics weigh in on their experiences.

The experiences of Andrew Martin, a professor of ecology and evolutionary biology at the University of Colorado, Boulder, are highlighted in the first article. The article notes that innovations in pedagogy, technology such as clickers, support and advocacy from those who want to improve higher education, and economic realities have helped to popularize this teaching technique.

The second article describes a student’s view of a flipped chemistry course at Southwestern University in Georgetown, Texas. With the flipped classroom, learning takes center stage over teaching.

Stephen Neshyba describes his experience flipping his chemistry class at the University of Puget Sound, noting that moving to a flipped class may change “which kinds of students excel and which ones struggle.”

Two articles by Robert Talbert, a mathematician and educator at Grand Valley State University, look at the pedagogical reasons and advantages for flipping a class, and why students may push back when a course is flipped. There are suggestions on how to handle this. Talbert also blogs for the CHE at Casting Out Nines, where he has documented in detail his experiences with flipping his classes.

A study shows that physics faculty often try new methods and then abandon them in the face of student challenges. An article addresses what faculty who want to explore new teaching methods can learn from this research.

Finally, there is a profile of Norm Nemrow, a professor of business at Brigham Young University, who began recording his lectures about 15 years ago. His experience raises the question: “Are professors willing to become sidekicks to slick video productions?”

At the end of the manual there is a short list of resources to help you whether you are a novice or a seasoned flipper.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: © Macie Hall, 2013

Scalar: A Multimedia Authoring Tool to Investigate

For a new initiative here in the JHU Center for Educational Resources I have been researching multimedia authoring tools.  What is a multimedia authoring tool? These are software or online applications that allow for the creation of web- or computer-based content using multimedia objects. Media includes, but is not limited to, text, image, audio, and video files. This is a broad definition and there are many examples of such applications. I’m especially interested in tools that can be used by students (and faculty) for course projects, especially ones that allow for collaboration. Omeka, which I wrote about here, allows for the creation of online exhibitions and display of collections of content, and can be used collaboratively or individually.

Scalar logo

Recently another tool came to my attention: Scalar. Scalar, advertised as “born-digital, open source, media-rich scholarly publishing that’s as easy as blogging,” was developed by the Alliance for Networking Visual Culture (ANVC), which includes people from an impressive list of universities. Scalar was developed with funding from the Andrew W. Mellon Foundation and the National Endowment for the Humanities.

Scalar allows a user to take media files from multiple sources, lay them out in a variety of ways, and provide extensive annotation or commentary. It is flexible in that it allows users to “take advantage of the unique capabilities of digital writing, including nested, recursive, and non-linear formats.” Collaborative authoring is supported and readers can comment on the materials presented. Showing is better than telling, so take a look at the Scalar Showcase for some examples of how it has been used.

I found a number of articles on using Scalar in teaching by Googling for “using scalar for student projects.” Two immediately caught my attention.

Practicing Collaborative Digital Pedagogy to Foster Digital Literacies in Humanities Classrooms, by Anita Say Chan and Harriett Green, published in the Educause Review on October 13, 2014, offers a case study of students using both Omeka and Scalar in courses on information ethics and the economics of media. The article also mentions two other tools that might be of interest: Voyant (“a web-based reading and analysis environment for digital texts”) and Easel.ly (an application for creating infographics). I liked the article because it addresses some of the challenges of introducing “digital pedagogy practices” to students.

Jentery Sayers, Assistant Professor of English at the University of Victoria, lists “research interests in comparative media studies, digital humanities, Anglo-American modernism, computers and composition, and teaching with technologies.” On his blog he has posted examples of how he and other faculty use Scalar in their teaching.

It’s free and easy to create an account and try out Scalar for yourself. Just click on the Sign Up button found on most of the site’s webpages.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Scalar logo - http://scalar.usc.edu/scalar/

Feedback codes: Giving Student Feedback While Maintaining Sanity

We heard our guest writer, Stephanie Chasteen (Associate Director, Science Education Initiative, University of Colorado at Boulder), talk about feedback codes in the CIRTL MOOC, An Introduction to Evidence-Based Undergraduate STEM Teaching, now completed, but due to run again in the near future.  She presented in Week 2: Learning Objectives and Assessment, segment 4.7.0 – Feedback Codes. Below is her explanation of this technique.


One of the most important things in learning is timely, targeted feedback.  What exactly does that mean?  It means that in order to learn to do something well, we need someone to tell us…

  • Specifically, what we can do to improve
  • Soon after we’ve completed the task.

Unfortunately, most feedback that students receive is too general to be of much use, and usually occurs a week or two after turning in the assignment – at which point the student is less invested in the outcome and doesn’t remember their difficulties as well.  The main reason is that we, as instructors, just don’t have the time to give students feedback that is specific to their learning difficulties – especially in large classes.

So, consider ways to give that feedback that don’t put such a burden on you.  One such method is using feedback codes.

The main idea behind feedback codes is to determine common student errors and assign each error a code. When grading papers, you (or the grader) need only write down the letter of the feedback code, and the student can refer to the list of what the codes mean to get fairly rich feedback about what they did wrong.

Example

Let me give an example of how this might work.  In a classic physics problem, you might have two carts on a track, which collide and bounce off one another.   The students must calculate the final speed of the cart.

Diagram of classic physics problem of colliding carts on a track.

Below is a set of codes for this problem, developed by Ed Price at California State University San Marcos.

Feedback codes table

How to come up with the codes?

If you already know what types of errors students make, you might come up with feedback codes on your own.  In our classes, we typically have the grader go through the student work, and come up with a first pass of what those feedback codes might look like.  This set of codes can be iterated during the grading process, resulting in a complete set of codes which describe most errors – along with feedback for improvement.
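In code, a set of feedback codes is just a lookup table mapping a short letter to a fuller comment. Here is a minimal sketch; the codes and wording below are hypothetical illustrations, not Ed Price's actual set:

```python
# Hypothetical feedback codes for a collision problem. A real set should be
# built from the errors your own students actually make.
FEEDBACK_CODES = {
    "A": "Treated momentum as a scalar; watch the direction of each cart.",
    "B": "Applied conservation of kinetic energy to an inelastic collision.",
    "C": "Algebra slip when solving for the final velocity.",
    "D": "Method is correct, but units are missing or inconsistent.",
}

def expand_codes(marked_codes):
    """Expand the letter codes written on a paper into full feedback text."""
    return [f"{code}: {FEEDBACK_CODES[code]}" for code in marked_codes]

# A grader who wrote "A" and "D" on a paper hands back two full comments.
print("\n".join(expand_codes(["A", "D"])))
```

Because the table lives in one place, iterating on the codes during grading means editing a few lines, and every previously marked paper still expands to the updated feedback text.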

How does the code relate to a score?

Do these feedback codes correspond to the students’ grades?  They might – for example, each code might have a point value.  But, I wouldn’t communicate this to the students!  The point of the feedback codes is to give students information about what they did wrong, so they can improve for the future.  There is research that shows that when qualitative feedback like this is combined with a grade, the score trumps everything; students ignore the writing, and only pay attention to the evaluation.

Using GradeMark to provide feedback codes

Mike Reese, a doctoral student at Johns Hopkins, uses the feedback codes function in Turnitin. The GradeMark tool in Turnitin allows the instructor to create custom feedback codes for comments commonly shared with students. Mike provides feedback on the electronic copy of the document through Turnitin by dragging and dropping feedback codes onto the paper and writing paper-specific comments as needed.

Screen shot showing example of using GradeMark

Advantages of feedback codes

The advantages of using feedback codes are:

  1. Students get specific feedback without a lot of extra writing by the grader
  2. The instructor gets qualitative information on how student work falls into broad categories
  3. The grader scores the overall quality of the response, rather than nit-picking the details

Another way to provide opportunities for this feedback is through giving students rubrics for their own success, and asking them to evaluate themselves or their peers – but that’s a topic for another article.

Additional resources:

Stephanie Chasteen
Associate Director, Science Education Initiative
University of Colorado Boulder

Stephanie Chasteen earned a PhD in Condensed Matter Physics from University of California Santa Cruz.  She has been involved in science communication and education since that time, as a freelance science writer, a postdoctoral fellow at the Exploratorium Museum of Science in San Francisco, an instructional designer at the University of Colorado, and the multimedia director of the PhET Interactive Simulations.  She currently works with several projects aimed at supporting instructors in using research-based methods in their teaching.

Image Sources: Macie Hall, Colliding Carts Diagram, adapted from the CIRTL MOOC An Introduction to Evidence-Based Undergraduate STEM Teaching video 4.7.0; Ed Price, Feedback Codes Table; Amy Brusini, Screen Shot of GradeMark Example.

Using Twitter in Your Course

The Innovative Instructor has written about using Facebook in the classroom, so what about Twitter? What’s next, you might ask, Pinterest? Yes, even Pinterest seems to have inspired faculty to find uses for its boards in the classroom. Today, however, I want to make a case for using Twitter.

Twitter Logo Blue Bird

What is Twitter? Wikipedia tells us that “Twitter is an online social networking service that enables users to send and read short 140-character messages called ‘tweets’. Registered users can read and post tweets, but unregistered users can only read them.” From celebrities to revolutionaries, the Twitterverse (aka the Twittersphere) comprises more than 500 million users; 271 million of these use Twitter actively. While many complain that the content is mostly inane babble, there are serious, even scholarly, conversations taking place on Twitter every day.

This example of an educational use comes from the CIRTL MOOC, An Introduction to Evidence-Based Undergraduate STEM Teaching, now completed, but due to run again in the near future.  If you signed up for the MOOC, you may still be able to access the content. The Twitter example was presented in Week Five: Inclusive Teaching and Student Motivation.

Margaret Rubega, Associate Professor in the Department of Ecology and Evolutionary Biology at the University of Connecticut with a PhD in ornithology, decided to use Twitter, appropriately enough, for her introductory ornithology course. Rubega describes the course as face-to-face with approximately 100 students each semester it is taught. There is no lab component, so she struggled to find ways to introduce active learning in what has been primarily a lecture format. Another issue is that most of the students have grown up watching nature programs on TV (or YouTube videos), which exposed them to the concept that animals and birds are exotic species that live in remote areas. To her incoming students, nature was something that takes place somewhere else.

Rubega wanted to get her students to appreciate the way that biology plays out in their world, something they could observe when they walked out of the classroom onto campus. She knew that telling them (in lecture form) did not equal the appreciation that comes from observation and experience. She wondered if she could get students to use their electronic devices in some way that would force them to look up and see what was happening around them.

Thus was born #birdclass. The # sign is called a hashtag and is used to identify a specific conversation within the cacophony of tweets. By using the hashtag, Rubega and her students could have a targeted discussion. You can search Twitter for #birdclass to see the class-related tweets. Rubega assigned her students to tweet once a week. Each tweet was to 1) identify where they were, 2) describe what bird-related phenomenon they saw, and 3) explain how it connected to course content. If it had the required three components, the tweet was awarded three points. She put a cap on the total number of points she would award each student.
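Rubega's rubric is simple enough to express as a small scoring function. A sketch, assuming one point per required component; the component names and the semester cap value are made up for illustration (the article says she capped total points but does not give the number):

```python
# One point for each of the three required tweet components, with a
# semester-long cap on total points. POINT_CAP is an assumed value.
POINT_CAP = 30
REQUIRED = {"location", "phenomenon", "course_connection"}

def score_tweet(components):
    """Score one tweet, given the set of required components it contains."""
    return len(REQUIRED & set(components))

def semester_total(tweet_scores):
    """Sum weekly tweet scores, capped at POINT_CAP per student."""
    return min(sum(tweet_scores), POINT_CAP)

# A complete tweet earns 3 points; extra content earns nothing beyond that.
print(score_tweet(["location", "phenomenon", "course_connection"]))
```

The cap matters: it rewards consistent weekly participation rather than a flood of tweets, which matches the once-a-week assignment.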

Rubega’s initial goal was to make students take the course content outside of the classroom and see that what was described in class actually occurs in their world. She looked at Twitter as a tool that would allow her and her students to gather their observations in a way that was immediate and easy to access. She was not thinking about the social implications.

As soon as the students started using Twitter (and Rubega was posting to encourage them and provide examples of her expectations), their interest in engaging in conversation with her and their peers became immediately apparent. She began retweeting (forwarding and promoting in Twitter parlance) their best tweets to a larger audience interested in ornithology and thus facilitating a broader conversation outside of the class. This provided feedback from others in the field. The social aspect created instructional value that Rubega had not anticipated.

The second year she taught the course using Twitter, she traveled to Belize during spring break. She had not mentioned this trip to her students. While in Belize she began posting a list of birds she had seen and asked if her students could identify where she was. Even though it was spring break and she had no expectation that any of her students would be monitoring their Twitter feeds, several students responded immediately. In a series of tweets, they worked on figuring out her location by looking at bird range and distribution charts. Rubega described being “blown away” by this experience. Further, when she returned to class, she gave the winning student (the first to correctly guess her location) a token souvenir as a prize. This young woman commented that she had learned more about geography doing research during this tweet exchange than she had in high school.

Rubega maintains that Twitter works for her students because it allows self-directed, real-life discovery of the world around them. Their observations bring affirmation of what they have heard in class. The reward comes via interaction with their peers and a larger community of ornithologists, as well as acknowledgement of their tweets with the point system. By the end of the course, the students are using their knowledge to teach others in the Twitter ornithology community – by correcting and commenting on others’ identifications and observations, for example.

In thinking about the kind of learning that students achieve in the tweeting assignment, it is notable that many of their tweets involved application and analysis (Bloom’s Taxonomy). This represents a higher level than is normally associated with a straight lecture format, which typically produces transfer of knowledge and comprehension by the students.

You can see Margaret Rubega’s tweets at https://twitter.com/profrubega. Besides teaching at the University of Connecticut, she is also Connecticut’s state ornithologist.

If you are interested in using social network applications, such as Twitter, in your classroom, there are several articles by Derek Bruff, director of the Vanderbilt University Center for Teaching and a senior lecturer in the Vanderbilt Department of Mathematics, that will be informative. In an article in the Chronicle of Higher Education, A Social Network Can Be a Learning Network (November 6, 2011), Bruff references the concept of “social pedagogies,” a term coined by Randall Bass and Heidi Elmendorf of Georgetown University. They define these as “design approaches for teaching and learning that engage students with what we might call an ‘authentic audience’ (other than the teacher), where the representation of knowledge for an audience is absolutely central to the construction of knowledge in a course.” Leveraging student interests through social bookmarking, a CIRTL Network blog post from August 22, 2012, describes Bruff’s experiences using social bookmarking in two classes he has taught. And his students’ preferences for social bookmarking tools are discussed in a post, Diigo Versus Pinterest: The Student Perspective (May 31, 2012), on Bruff’s Agile Learning blog.

Macie Hall, Senior Instructional Designer
Center for Educational Resources

Image Source: Twitter blue logo https://about.twitter.com/press/brand-assets

Managing Teamwork with CATME

Many instructors recognize the value of having students work collaboratively on team-based assignments. Not only can students gain a greater understanding of the subject material, but they can also develop lifelong learning skills through active engagement with team members. Managing team-based assignments, however, is not something most instructors look forward to; the administrative tasks can be quite cumbersome, especially in large classes. Thankfully there is a tool to help with this process: CATME.

Logo for CATME

CATME, which stands for ‘Comprehensive Assessment of Team Member Effectiveness,’ is a free set of tools designed to help instructors manage group work and team assignments more effectively. It was developed by a diverse group of professors with extensive teaching experience, as well as researchers and students. First released in 2005, CATME takes away much of the administrative burden that instructors face when trying to organize and manage teams, communicate with students, and facilitate effective peer evaluation.

‘Team Maker,’ one of two main parts of CATME, assists with the team creation process. First, it allows instructors to easily create and send a survey to students. The survey collects demographic data, previously completed coursework, and student availability information. Instructors can also add their own questions to the survey if desired. Once the data are collected, instructors decide which criteria will be used to create the teams and then assign a weight to each criterion. Team Maker then uses the weights in an algorithm to create the teams. Instructors are free to adjust the teams, if necessary, to their satisfaction. Once the teams are finalized, the instructor releases the results to students, who are provided with their team members’ names, email addresses, and a schedule matrix showing member availability.
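CATME’s actual algorithm is not published here, but the idea of combining weighted criteria into a team score can be sketched in a few lines of Python. Everything below is hypothetical: the student records, the two criteria (shared free time slots and a spread of experience levels), and the weight values are invented for illustration only.

```python
from itertools import combinations

# Hypothetical student records: free time slots and prior-coursework level.
students = {
    "Ana":   {"slots": {"Mon", "Wed"}, "level": 3},
    "Ben":   {"slots": {"Mon", "Tue"}, "level": 1},
    "Chloe": {"slots": {"Mon", "Wed"}, "level": 2},
    "Dev":   {"slots": {"Tue", "Thu"}, "level": 3},
}

# Instructor-assigned weights for each criterion (hypothetical values):
# shared availability matters twice as much as mixing experience levels.
weights = {"overlap": 2.0, "level_spread": 1.0}

def team_score(team, weights):
    """Weighted sum: reward shared free slots and mixed experience levels."""
    members = [students[name] for name in team]
    shared = set.intersection(*(m["slots"] for m in members))
    spread = max(m["level"] for m in members) - min(m["level"] for m in members)
    return weights["overlap"] * len(shared) + weights["level_spread"] * spread

# Pick the best partition of the four students into two pairs by total score.
names = list(students)
best = max(
    ((frozenset(t), frozenset(set(names) - set(t)))
     for t in combinations(names, 2)),
    key=lambda pair: sum(team_score(team, weights) for team in pair),
)
```

Changing the weights changes the outcome: raising the level-spread weight would favor pairings that mix strong and weak students even at the cost of shared meeting times, which mirrors how an instructor’s weighting choices steer Team Maker’s results before any manual adjustment.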

‘Peer Evaluation,’ the other core component of CATME, is used by students to evaluate their teammates’ performance as well as their own.  The web-based ratings page is presented on one screen, making it easy to fill out and submit results. Students select the behaviors that most closely describe themselves and their peers. There is also a place where students can include confidential comments that are seen only by the instructor.  Once the evaluations are completed, instructors decide when to release the results to students. Peer ratings appear anonymous to students but are identified for instructors.

Another tool included in CATME is the ‘Rater Calibration’ tool, which helps train students in the peer evaluation process. Students are asked to rate a series of fictional team members and then receive feedback about their ratings. Other tools include the ‘Student Team Training’ tool, designed to help students recognize effective team behaviors, and the ‘Meeting Support’ tool, which provides templates that students can use to plan and organize meetings, such as writing a team charter, taking minutes, etc.

To view a video demo of CATME and learn more about the product, visit the CATME website. Instructors interested in using CATME can go to https://www.catme.org/login/request to register for an account.

Amy Brusini, Course Management Training Specialist Center for Educational Resources

Image Source: CATME logo from http://info.catme.org/

Creating Rubrics

Red sharpie-type marker reading "Rubrics Guiding Graders: Good Point" with an A+ marked below

Instructors have many tasks to perform during the semester. Among those is grading, which can be subjective and unstructured. Time spent constructing grading rubrics while developing assignments benefits all parties involved with the course: students, teaching assistants and instructors alike. Sometimes referred to as a grading schema or matrix, a rubric is a tool for assessing student knowledge and providing constructive feedback. Rubrics consist of a list of skills or qualities students must demonstrate in completing an assignment, each with a rating criterion for evaluating the student’s performance. Rubrics bring clarity and consistency to the grading process and make grading more efficient.

Rubrics can be established for a variety of assignments such as essays, papers, lab observations, science posters, presentations, etc. Regardless of the discipline, every assignment contains elements that address an important skill or quality. The rubric helps bring focus to those elements and serves as a guide for consistent grading that can be used from year to year.

Whether used in a large survey course or a small upper-level seminar, rubrics benefit both students and instructors. The most obvious benefit is a structured, consistent guideline for assigning grades. With clearly established criteria, there is less concern about subjective evaluation. Once created, a rubric can be reused to normalize grading across sections or semesters. Sharing the rubric with teaching assistants shows them how to apply the instructor’s expectations consistently when evaluating student submissions, and makes it easier for them to give constructive feedback. In addition, the instructor can supply pre-constructed comments for uniformity in grading.

Some instructors supply copies of the grading rubric to their students so they can use it as a guide for completing their assignments. This can also reduce grade disputes. When discussing grades with students, a rubric acts as a reminder of the important aspects of the assignment and how each is evaluated.

Below are basic elements of rubrics, with two types to consider.

I. Anatomy of a rubric

All rubrics have three elements: the objective, its criteria, and the evaluation scores.

Learning Objective
Before creating a rubric, it is important to determine learning objectives for the assignment. What you expect your students to learn will be the foundation for the criteria you establish for assessing their performance. As you are considering the criteria or writing the assignment, you may revise the learning objectives or adjust the significance of an objective within the assignment. This iteration can help you home in on the most important aspects of the assignment, choose appropriate criteria, and determine how to weight the scoring.

Criteria
When writing the criteria (i.e., evaluation descriptors), start by describing the highest exemplary result for the objective, the lowest that is still acceptable for credit, and what would be considered unacceptable. You can express variations between the highest and the lowest if desired. Be concise by using explicit verbs that relate directly to the quality or skill that demonstrates student competency. There are lists of verbs associated with the cognitive categories in Bloom’s taxonomy (Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation). These lists express the qualities and skills required to achieve knowledge, comprehension or critical thinking (Google “verbs for Bloom’s Taxonomy”).

Evaluation Score
The evaluation score for the criterion can use any schema as long as it is clear how it equates to a total grade. Keep in mind that the scores for objectives can be weighted differently so that you can emphasize the skills and qualities that have the most significance to the learning objectives.
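How weighted scores combine into a grade can be made concrete with a short sketch. The objectives, point values, and weights below are hypothetical, invented only to show the arithmetic; any schema works as long as the mapping to a total grade is equally explicit.

```python
# Hypothetical rubric: each objective has an earned score, a maximum
# score, and a weight (multiplier) reflecting its significance.
rubric = {
    "thesis":    {"score": 4, "max": 5, "weight": 3},
    "evidence":  {"score": 3, "max": 5, "weight": 2},
    "mechanics": {"score": 5, "max": 5, "weight": 1},
}

def weighted_grade(rubric):
    """Percentage grade implied by the weighted criterion scores."""
    earned = sum(r["score"] * r["weight"] for r in rubric.values())
    possible = sum(r["max"] * r["weight"] for r in rubric.values())
    return 100 * earned / possible

print(round(weighted_grade(rubric), 1))  # prints 76.7
```

Here the thesis counts three times as heavily as mechanics, so the middling evidence and thesis scores pull the grade down more than the perfect mechanics score pulls it up; with equal weights the same marks would yield 80%.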

II. Types of rubrics

There are two main types of rubrics: holistic (simplistic) and analytical (detailed).

Selecting your rubric type depends on how multi-faceted the tasks are and whether or not the skill requires a high degree of proficiency on the part of the student.

Holistic rubric
A holistic rubric contains broad objectives and lists evaluation scores, each with an overall criterion summary that encompasses multiple skills or qualities of the objective. This approach is simpler and relies on generalizations when writing the criteria.

The criterion descriptions can list the skills or qualities as separate bullets to make it easier for a grader to see what makes up an evaluation score. Below is an example of a holistic rubric for a simple writing assignment.

Table showing an example of a holistic rubric

Analytical rubric
An analytical rubric provides a list of detailed learning objectives, each with its own rating scheme that corresponds to a specific skill or quality to be evaluated using the criterion. Analytical rubrics provide scoring for individual aspects of a learning objective, but they usually require more time to create. When using analytical rubrics, it may be necessary to weight the scores using a different scoring scale or score multipliers for the learning objectives. Below is an example of an analytical rubric for a chemistry lab that uses multipliers.

Table showing an example of an analytical rubric

It is beneficial to view rubrics for similar courses to get an idea how others evaluate their course work. A keyword search for “grading rubrics” in a web search engine like Google will return many useful examples. Both Blackboard and Turnitin have tools for creating grading rubrics for a variety of course assignments.

Louise Pasternack
Teaching Professor, Chemistry, JHU

Louise Pasternack earned a Ph.D. in chemistry from Johns Hopkins. Prior to returning to JHU as a senior lecturer, Louise Pasternack was a research scientist at the Naval Research Laboratory. She has been teaching introductory chemistry laboratory at JHU since 2001 and has taught more than 7000 students with the help of more than 250 teaching assistants. She became a teaching professor at Hopkins in 2013.

Image sources: © 2014 Reid Sczerba