A rubric is a scoring guide composed of criteria used to evaluate a performance, a product, or a project. For instructors and students alike, a rubric defines what will be assessed: it shows students what the instructor expects from their submission, and it allows evaluation against specified criteria, making grading and ranking simpler, fairer, and more transparent.


Advantages

  • Creates objectivity and consistency across all students
  • Clarifies grading criteria in specific terms for the performance or product
  • Shows expectations and how work will be evaluated
  • Promotes students' awareness and provides benchmarks for improving their performance or product


Disadvantages

  • Creating effective rubrics is time consuming
  • Rubrics cannot measure all aspects of student learning
  • Students may require additional feedback after receiving their score

What does a rubric look like? 

On the left side, the criteria describe the key elements of the student work or product.  At the top, the rating scale (the numbers) identifies the levels of performance.  The blank boxes in the table below contain the indicators, which provide examples or concrete descriptors for each level of performance.

                 4          3          2          1
  Criterion 1
  Criterion 2
  Criterion 3
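The table above can also be thought of as a simple data structure: each criterion maps each level of the rating scale to an indicator.  The sketch below is a hedged illustration only; the criterion names, levels, and indicator texts are invented placeholders, not taken from this guide.

```python
# A rubric as a plain data structure: each criterion maps each level
# of the rating scale (4 = highest) to an indicator -- the concrete
# descriptor of performance at that level.
# Criterion names and indicator texts are illustrative placeholders.
rubric = {
    "Organization": {
        4: "Ideas are logically ordered with clear transitions",
        3: "Ideas are mostly ordered; transitions are adequate",
        2: "Some ideas are out of order; transitions are weak",
        1: "No discernible order to the ideas",
    },
    "Evidence": {
        4: "Every claim is fully supported",
        3: "Most claims are adequately supported",
        2: "Claims are partially supported",
        1: "Claims are minimally supported",
    },
}

def indicator(criterion: str, level: int) -> str:
    """Look up the descriptor in one cell of the rubric table."""
    return rubric[criterion][level]
```

For example, `indicator("Evidence", 3)` returns the cell at the row "Evidence" and the column for level 3.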

Things to consider when developing a rubric

Start the process by consulting professional literature and online resources to find rubrics that others have already created.  Many rubrics, already in use in a variety of subject areas, have been refined using professional standards and empirical research; others have been classroom-tested by instructors and their students.  It makes sense to use these, at least as models, in designing your own rubrics.

Once you find a model, you can adapt the criteria, rating scale, and indicators to your needs.  Sample sites for rubric models can be found in Additional Resources below.

Whether you have found a rubric to adapt or are designing a rubric from scratch, the developmental process is the same.  It begins with identifying basic components of a rubric:  the performance criteria, the rating scale, and the indicators of performance.


Performance criteria

Figure out which areas really matter to the quality of the work being produced.  Whether it is an essay, a project, or a presentation, what evidence of learning or thinking do you want to see in the final product?  Then:

List this evidence, and select the components that are most important to evaluate in the given task and instructional context.  These will become your criteria.

Decide which of those criteria are “non-negotiable.”  Ideally, your rubric will have three to five performance criteria. If you’re having a hard time deciding, prioritize the criteria by asking:

  1. What are the learning outcomes of this unit?
  2. Which learning outcomes will be listed in the rubric?
  3. Which skills are essential at a competent or proficient level for the task or assignment to be complete?
  4. How important is the overall completion of the task or project (interest, logic, organization, creativity)?

The selected criteria should reflect both process and product quality.


Rating scale

Rating scales can use either numerical or descriptive labels.  Usually, a rating scale consists of an even number of performance levels; if an odd number is used, the middle level tends to become a catch-all category.

On the chart below, the highest level of performance is described on the left.  A few possible labels for a four-point scale include:

  4 points               3 points              1 point              0 points
  Exceeds expectations   Meets expectations    Not there yet        Needs work
  Highly competent       Fairly competent      Needs improvement    Not yet competent


Indicators of performance

Define the performance quality of the ideal assessment for each criterion, one criterion at a time.  Begin with the highest level of the scale to define top-quality performance.  Remember, this is the level that you want all students to achieve, and it should be challenging.  Then:

  1. Create indicators that are present at all performance levels.
  2. Make certain there is continuity in the difference between the criteria for exceeds vs. meets, and meets vs. does not meet expectations. The difference between a 2 and a 3 performance should not be more than the difference between a 3 and a 4 performance.
  3. Edit the indicators to ensure that the levels reflect variance in quality and not a shift in importance of the criteria.
  4. Make certain that the indicators reflect equal steps along the scale. The difference between a 4 and a 3 should be equivalent to the difference between a 3 and a 2, and between a 2 and a 1. “Yes, and more,” “Yes,” “Yes, but,” and “No” are ways for the rubric developer to think about how to describe performance at each scale point.

Some common descriptive terms to indicate this progression, from highest to lowest level, are listed below:

  • Task requirements:  Some of the time | Rarely or not at all | Very few or none
  • Errors:  No errors | Few errors | Some errors | Frequent errors
  • Comprehensibility:  Always comprehensible | Almost always comprehensible | Gist and main ideas are comprehensible | Isolated bits are comprehensible
  • Content coverage:  Fully developed, fully supported | Adequately developed, adequately supported | Partially developed, partially supported | Minimally developed, minimally supported
  • Vocabulary range:  Highly varied; non-repetitive | Varied; occasionally repetitive | Lacks variety; repetitive | Basic, memorized; highly repetitive | Very limited

Things to consider when reviewing a rubric

The following questions can help determine if the rubric will be effective:

  1. Are the characteristics of each performance level clear?  Will students be able to self-assess by using the descriptors?  Will the descriptors give students enough information to know what they need to improve?
  2. Does the rubric adequately reflect the range of levels at which students may actually perform given tasks?
  3. Are the criteria defined at each level clear enough to ensure accurate, unbiased, and consistent scoring? Could several instructors use the rubric and score a student’s performance within the same range?
  4. Does the rubric reflect both process and product?
  5. Are all criteria equally important, or is one variable stronger than the others?
  6. Is the language used descriptive enough for students to determine what is being measured through both qualitative and quantitative methods?

Additional considerations related to rubrics are listed below:

  • When possible, rubrics need to be piloted, or field tested, to ensure they measure the variable intended by the designer.
  • Rubrics should be discussed with students to establish an understanding of expectations.
  • Rubrics should increase the chance that scoring is accurate, unbiased, and consistent.
  • Expectations of student performance mentioned in rubrics should align with the conceptual lesson or unit delivered. Students shouldn’t be expected to do what they haven’t been previously taught or shown.

Rubric Types

The rubrics shown above are analytic rubrics, which provide feedback along several dimensions. Their advantages are that feedback is more detailed and that scoring is more consistent across students and graders than with other rubric types; their disadvantage is that they are more time consuming to score. They are best used when you need to see relative strengths and weaknesses, to give detailed feedback, to assess complicated skills or performances, or to encourage students to self-assess their understanding or performance.

Holistic rubrics are the counterpart of analytic rubrics: they provide a single score based on an overall impression of a student’s performance on a task. Their advantage is quick scoring that provides a broad overview; their disadvantages are that detailed information is excluded and that it may be difficult to settle on one general score.  They are best used when you want a quick snapshot of achievement and a single dimension is adequate to define quality.
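The scoring difference between the two types can be sketched in a few lines.  This is a hedged illustration under assumed conventions: the criterion names and ratings are invented, and the analytic total is an unweighted sum, which is only one common way to combine per-criterion ratings.

```python
# Hedged sketch: scoring with an analytic vs. a holistic rubric.
# Criterion names and ratings below are illustrative placeholders.

def analytic_score(ratings: dict) -> int:
    """Analytic rubric: one rating per criterion, combined into a total.
    The per-criterion ratings themselves are the detailed feedback."""
    return sum(ratings.values())

def holistic_score(overall_impression: int) -> int:
    """Holistic rubric: a single rating for the whole performance."""
    return overall_impression

# One rating per criterion on a 4-point scale (4 = highest).
ratings = {"Organization": 4, "Evidence": 3, "Mechanics": 2}
total = analytic_score(ratings)  # 9 out of a possible 12
```

The analytic version preserves the relative strengths and weaknesses (here, stronger organization than mechanics), while the holistic version collapses everything into one number.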

Additionally, rubrics can be general, with criteria that apply across tasks, or task specific, with criteria tailored to each task.

General
  • Advantages: can use the same criteria across multiple tasks
  • Disadvantages: feedback may not be specific enough; less reliable for grading
  • Use when: assessing reasoning, skills, and products, and when students are not all doing the same task

Task Specific
  • Advantages: more reliable for grading (scores from multiple scorers are consistent)
  • Disadvantages: difficult to construct criteria for all tasks
  • Use when: assessing knowledge, and when consistency of scoring is important