Description of a grade: An inadequate report of an inaccurate judgment by a biased and variable judge of the extent to which a student has attained an undefined level of mastery of an unknown proportion of an indefinite amount of material. – P. Dressel
Rubrics can be thought of as rating scales or scoring guides (Mertler, 2001) with defined performance criteria. Two characteristics distinguish rubrics from other assessment tools: they list the criteria for the assignment, and they describe degrees of quality, usually in descriptive language (Goodrich Andrade, 2000). A rubric can serve as both a scoring tool and an instructional tool. Well-designed rubrics clearly describe teachers' expectations and provide indicators of quality that both students and the instructor can use. Because they contain qualitative descriptions of performance criteria, rubrics are useful in assessment to inform learning and serve a valuable purpose in formative evaluation (Tierney & Simon, 2004).
There are two general types of rubrics: holistic and analytic. Holistic rubrics provide a single overall score for the total product or process. Analytic rubrics score individual components of the product or process and sum these component scores to determine the total score (Mertler, 2001).
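The analytic approach has a simple computational shape: each criterion gets its own rating on the scale, and the total is the sum. A minimal sketch in Python, where the criteria, descriptors, and ratings are all invented for illustration (the source prescribes no particular criteria or scale):

```python
# Hypothetical analytic rubric: each criterion carries level descriptors on a
# 1-4 scale (only the top and bottom levels are shown here, as in Mertler's
# design process). All names and descriptors are made up for this example.
analytic_rubric = {
    "thesis": {4: "Clear, arguable thesis", 1: "No identifiable thesis"},
    "evidence": {4: "Relevant, well-supported evidence", 1: "Little or no evidence"},
    "mechanics": {4: "Virtually error-free", 1: "Errors impede reading"},
}

def analytic_score(ratings: dict) -> int:
    """Sum the per-criterion ratings to get the total analytic score."""
    # Every criterion must be rated, consistent with scoring each component.
    assert ratings.keys() == analytic_rubric.keys(), "rate every criterion"
    return sum(ratings.values())

print(analytic_score({"thesis": 4, "evidence": 3, "mechanics": 2}))  # 9
```

A holistic rubric, by contrast, would assign one overall level to the whole product rather than summing component ratings.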
Goodrich Andrade (n.d.) suggests the following advantages of rubrics:
- Improve student performance
- Monitor student performance
- Make instructors' expectations transparent
- Provide feedback during and after the activity
- Develop self-assessment and quality indicators in students
- Reduce time spent evaluating student work
- Flexibility to accommodate varied student abilities
- Ease of use
- Ease of explanation
Mertler (2001) provides the following process for designing rubrics:
- Match learning objectives, instruction, and the rubric's categories
- Identify specific observable attributes demonstrating mastery or understanding
- Define characteristics to describe the attributes
- Write narrative descriptions
- Holistic: Write complete descriptions for the highest and lowest level of performance using each attribute in each description (high level with all attributes and low level with all attributes)
- Analytic: Write complete descriptions for the highest and lowest level of performance for each attribute (each individual attribute will have a high and a low level)
- Describe intermediate levels of performance
- Collect student work samples illustrating each level
- Revise the rubric as necessary
Tierney and Simon (2004) offer the following guidelines for establishing consistent criteria descriptors:
- Performance criteria should be explicitly stated.
- Attributes should be explicitly stated for each performance criterion.
- Attributes should be consistently addressed across each level.
Tierney and Simon (2004, ¶ 5) argue that many rubrics are flawed by a lack of consistency across the performance criteria descriptors. Performance criteria reflect the "dimensions of the performance or product that is being taught and assessed." The authors recommend that performance criteria remain consistent from level to level: the attributes listed for each criterion should be the same across all levels of quality. Another common source of inconsistency is the use of excessively negative language to describe categories at the lower end of the quality continuum and excessively positive descriptors at the opposite end.
- Avoid imprecise language
- Avoid negative language
- Establish valid categories
- Match criteria to task requirements and goals/objectives
- Use observable behaviors or product characteristics
- Use clear language that students can understand
- Make the differences between levels clear
Moskal (2003) recommends several tips for accurate scoring (provided the rubric is consistent):
- Multiple raters should yield similar scores if scoring the same product or performance.
- A single rater should be able to yield consistent scores across time.
- Anchor papers (student work samples for categories or levels) assist the scoring process.
- Anchor papers assist student understanding of the task requirements.
- The connection between the score and the rubric should be transparent.
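Moskal's first two tips describe inter-rater and intra-rater consistency, which are often checked numerically. One simple sketch (not from the source) is percent exact agreement, the fraction of products on which two raters assign identical scores; the rater scores below are hypothetical:

```python
def exact_agreement(rater_a: list, rater_b: list) -> float:
    """Fraction of products on which two raters assign the identical score."""
    assert len(rater_a) == len(rater_b), "raters must score the same products"
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical holistic scores from two raters on the same five essays:
print(exact_agreement([4, 3, 2, 4, 1], [4, 3, 3, 4, 1]))  # 0.8
```

If a well-designed rubric with anchor papers is doing its job, multiple raters scoring the same work should push this value toward 1.0; a low value suggests the level descriptors need revision.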
Additional resources:
- Sample Discussion Board Rubrics
- Midlink Magazine Teacher Tools Rubrics and Evaluation Resources
- Rubistar Rubric Generator
- Rubrics for Wikis
Adapted from the following sources:
- Goodrich Andrade, H. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-18.
- Goodrich Andrade, H. (n.d.). Understanding rubrics.
- Mertler, C. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25).
- Moskal, B. (2003). Recommendations for developing classroom performance assessments and scoring rubrics. Practical Assessment, Research & Evaluation, 8(14).
- Tierney, R. & Simon, M. (2004). What's still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation, 9(2).
page last updated 12/4/2013 2:09 PM