Sometimes it seems like collaborative teams of educators are on different pages when it comes to assessment. Are we assessing students to put grades in our grade books? To measure what they’ve learned? To determine what (if any) interventions are necessary?

It’s been my experience that sometimes we try to accomplish all of these in a single assessment… and often when we try to do it all at once, we wind up not doing any single thing particularly well. So, in this post I want to reflect on why we assess our students.

To begin, I want to share my favorite analogy for understanding the differences between two major types of assessment: formative and summative. This quote comes from Robert Stake:

“When the cook tastes the soup, that’s formative; when the guests taste the soup, that’s summative.”

I love this analogy because it makes the line between formative and summative so much clearer. When the cook tastes the soup, she is gathering information that tells her whether she needs to modify the soup while changes are still possible. By the time the guests taste the soup, it is done and no changes can be made. The important takeaway is that the cook and the guests both taste the soup, but for different reasons.

So, here are some of my thoughts on assessment broken down by formative and summative.

Formative Assessment
When we formatively assess students, we are trying to get information that we can use to help students right away. Generally, you have a specific question and you need data to answer it. For example, “Why are students struggling with adding integers?”

Ideally, you know ahead of time what you will do if the assessment shows that students understand the concept, and what you will do if they don’t. You can also ask yourself, “What’s the fewest number of questions I need to give students to get this data?” For example, why give students 20 questions when 2 would do just fine? Similarly, why would the cook taste 20 spoonfuls of soup when 2 spoonfuls would give her the information she needs?

Sometimes I see teachers adding more questions to an assessment just so it takes the entire period! For a formative assessment, this is a waste of time: the extra questions don’t answer your question any better, and they’re more work for you to grade. A good rule to follow is this: if you can’t explain what you will do as a result of a student getting each additional question right or wrong, then you’ve got too many questions.

Also, something strange about formative assessment is that you don’t have to review every single student’s assessment. Consider that in a class of 30 students, if 29 students showed that no intervention was necessary, would you do a whole-class intervention if the 30th student was struggling? Similarly, if 29 students showed that they needed intervention, would you skip doing intervention if the 30th student was fine? Depending on your situation, you might only need to grade 50% to 70% of them to have sufficient data to make a decision.

Remember that with formative assessment, you are looking for general trends that can inform your instruction. It’s similar to what the United States does with the Census: for some questions, it doesn’t survey everyone but rather a sample (around 5% of the population) and then uses that data to extrapolate results for the rest of the country.
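To make the sampling idea concrete, here is a minimal sketch with hypothetical numbers (the class size matches the example above, but the 60% intervention rate is invented for illustration). It estimates the share of a class that needs intervention by “grading” only a random subset of papers:

```python
import random

def estimate_from_sample(results, sample_fraction, seed=0):
    """Estimate the fraction of students needing intervention
    by grading only a random subset of their papers."""
    random.seed(seed)
    sample_size = max(1, round(len(results) * sample_fraction))
    sample = random.sample(results, sample_size)
    return sum(sample) / sample_size  # fraction flagged within the sample

# Hypothetical class of 30: True = needs intervention (true rate: 60%).
results = [True] * 18 + [False] * 12

estimate = estimate_from_sample(results, sample_fraction=0.6)
print(f"Graded 18 of 30 papers; estimated intervention rate: {estimate:.0%}")
```

With a class this small, grading 50–70% of the papers usually lands close enough to the true trend to support a whole-class decision, which is the point of the paragraph above: you need enough data to choose an intervention, not a grade for every paper.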

Finally, formative assessments should rarely (or never?) be used as grades in the grade book. Again, that’s not their purpose.


Summative Assessment
Summative assessments tend to be more comprehensive, with more questions, so that you can more reliably determine what students do or don’t know. These might be end-of-chapter or end-of-unit assessments where you are not planning to return to the topic immediately but still need to measure what students learned for mixed review.

This can lead to other problems, though. I’ve talked to teachers in multiple districts who told me about putting additional easy questions on their assessments because of administrator pressure to raise assessment scores. Obviously, this leads to false positive results, where students appear to understand a topic but don’t.

Also consider the pros and cons of doing reviews before assessments. How might a review affect the assessment results? What does it mean if scores are different when there is or is not a review? Which would be a more accurate measure of what students know?


Collaborative teams must have conversations about why they’re giving assessments. It’s been my experience that educators assume they have the same intentions as their colleagues, but upon digging deeper often discover that this isn’t the case.

I’ll end with a quote from John Hattie’s Visible Learning for Teachers: Maximizing Impact on Learning:

“The major reason for administering tests in classrooms is for teachers to find out what they taught well or not, who they taught well or not, and where they should focus next.”


  1. I found this thoughtful and interesting. It certainly addressed some questions that I hadn’t thought about directly myself and should have done.
    It occurs to me, though, that there is (at least) one other major reason for assessment that is really distinct from the two you mention. That is the issue of student motivation. When you’re working really hard on something, it can be quite demoralising if you feel that no one is really noticing how well you’re doing. A graded test, even if it doesn’t count in any way towards any official score for the course, is a way for a student to show to the world how well they’re doing and to know that it’s been recognised. In fact, up until that point even a very strong student may have very little idea that their performance is satisfactory – this is particularly true in a new environment such as their first course in a new school.

    • Hmm. Very interesting. I hadn’t thought of it this way. I don’t know how often this actually takes place in classrooms, though. For example, how often is a teacher giving an assessment solely to give students a potentially positive self-reflection? I think that more often it might be a shorter, non-graded assignment.

      I’ll keep this on my radar though as maybe it’s been happening but I haven’t paid enough attention. Thanks for letting me know.

  2. This is interesting and a good reminder of the main purposes of different types of assessments. I would add that when we do formative assessments, we may be missing the boat if we don’t have students quickly reflect on how well they’ve learned the information so far and/or involve students in analyzing the assessments. For example, if a student is doing three problems to assess understanding of the distributive property, they might also comment on their level of confidence with the topic, or the class can quickly analyze errors shown by the teacher (without names) as a large group, providing both a quick reflection on their own understanding and an in-the-moment review. It seems that formative assessments can be a great opportunity to build student self-efficacy, as opposed to receiving a summative grade – which students often see as an event wherein the teacher has informed them that they are good or bad at the skills in the unit (or worse – good or bad at math).

  3. I do grade formative quizzes so that students study and try their best. If grades are very low, I then give another quiz after we try again, with the objective (for them) of raising their grade.

    • I wonder how that affects your results. Might you ever get false positive results because they’re graded?

  4. Whatever the nature of the assessment, one characteristic must be necessary – that the assessment is reliable and trustworthy. This is not the case for national school exams in the UK: GCSE (age 16, graded U, 1, 2… 9 (top)), AS (age 17, graded U, E, D… A (top)) and A level (age 18, graded U, E, D… A, A* (top)). According to Ofqual, the government regulator, grades are “reliable to one grade either way”. This is true, but disguises a deeper truth – on average, 1 grade in every 4 is wrong. This does much damage. The full story can be found in “Missing the Mark – Why so many school exam grades are wrong, and how to get results we can trust”.
