Insights into Personalized Literacy

The Essential Difference Between Evaluation and Assessment

I recently had a conversation with a classroom teacher regarding multiple measures of assessment in literacy. Tell me if you have heard this one before:

Concerned Educator: Billy, the data I’m viewing on my recent NWEA MAP assessment paints a different picture for several of my students when compared to other assessment results in class.

Me: Yes, that can absolutely happen when you take into account the many variables that are in play when students take high-stakes tests. Is this happening with a substantial number of your students?

Concerned Educator: Yes! Some of my students who are showing big gains in class across a variety of collection tools, including our personalized digital literacy platform, one-on-one conferring, and running records, are showing minimal to no gains on the MAP assessment.

Me: You aren’t the only teacher to observe this pattern with a sample of students in your classroom. We know the data isn’t always going to match up across different assessment tools, but big differences in growth are certainly worth bringing up in discussion with colleagues and administrators.

Concerned Educator: Right! Is one means of assessment more valuable and valid than the other? What do I say when parents ask questions about their child's growth or lack thereof?

Me: Just remember, test scores do one thing really well: they tell you whether or not a student, in that exact moment in time, can accurately answer a question, not how deeply they understand a concept or skill. I’m not saying they provide useless data, but rather that they are one part of the puzzle when viewing the reading identity of a young learner. Putting too much emphasis on one measure may limit your reach with that student. Instead, turn your attention to the personalized digital literacy platform, which contains multiple measures of growth through Lexile benchmarks, as well as your one-on-one assessments through running records, where you are having a conversation with the student. That’s the data I’m most interested in, not the test scores.

Similar conversations are occurring in classrooms across the globe: teachers asking “What do I do now?” when test scores don’t match up with other measures in class. These can be difficult conversations to have, especially if the leadership within a district bases future decisions about students and teacher evaluation on those test scores. As an instructional coach, I have participated in countless discussions with classroom educators and administrators seeking guidance on how to synthesize such findings, what they mean, how to use them accurately, and how to form an action plan.

First things first: test scores are but one part of the profile of a learner. Relying on them too heavily will produce a very narrow picture of a student’s capabilities. Educational author and researcher Alfie Kohn takes this a step further when it comes to testing: “Standardized-test scores often measure superficial thinking. In a study published in the Journal of Educational Psychology, elementary school students were classified as ‘actively’ engaged in learning if they asked questions of themselves while they read and tried to connect what they were doing to past learning; and as ‘superficially’ engaged if they just copied down answers, guessed a lot, and skipped the hard parts.” Once educators recognize the tests for what they are and are not, they can begin to change the way they assess growth in reading.

Assessment? Evaluation?

There’s an important difference between assessment and evaluation. Evaluation involves making judgments, usually by assigning a quantitative score to a task, while assessment is the ongoing process of gathering evidence of learning in order to guide instruction. If the goal is to move into an authentic environment for assessing students’ reading growth, it starts with moving beyond the test scores. Meaningful assessment should help clarify three very important questions within a unit of instruction:

1. Where are they now?
2. Where are they going?
3. How are they going to get there? 

In other words, are we being effective in our methods of instruction? If the answer is no, we don’t simply continue to push the same multiple-choice test in front of our learners. Instead, we reflect on our strategy and approach, and change things up. Responsive teaching goes with assessment like chips go with guacamole. A strong instructional leader will recognize this.

According to Dr. Royce Avery, the Superintendent of Manor ISD, everyone in his district has started assessing reading in a new way. Dr. Avery said, “Our students and teachers have started tracking literacy engagement by measuring time spent reading and analyzing data weekly to ensure improvement is happening in every classroom and on every level.” Dr. Avery and his teachers are assessing reading with—wait for it—reading! Better yet, they are doing it each and every day through meaningful measures. Daily assessment findings inform future instruction, while evaluation is reserved for the moments when students need to demonstrate their understanding.

Teaching the Value of Thinking About Thinking

It’s no accident that John Hattie’s Visible Learning study has revealed metacognition as one of the top impact strategies for teachers. Guess what is not on the list of high-impact strategies? Test preparation. Recent research has demonstrated that educators can improve students’ reading skills by explicitly teaching metacognitive strategies. Look no further than the Visible Thinking Project at Harvard University.

I first came across Visible Thinking, part of Harvard’s Project Zero, a few years ago when I began reading about visible thinking strategies as a more meaningful approach to reading comprehension with my students, and I was inspired to begin trying some of them straight away. I had found what I was looking for: meaningful strategies that were collaborative in design and fostered a greater motivation for learning.

These strategies quickly became embedded in the culture of our classroom, and we used them throughout the content areas. We used them to build our background knowledge in science leading up to investigations, Socratic seminars, and book clubs. It was amazing to witness the discourse and the new enthusiasm students brought to conversations about text. As a result, something remarkable happened: it became easier to assess their thinking and understanding. Why? I could see and hear students’ thinking because misconceptions, prior knowledge, reasoning, and levels of understanding were more likely to be uncovered.

Young readers reveal their comprehension by responding to what they read, hear, or view. With the purposeful use of technology, educators can move beyond test scores and “work smarter, not harder.” Through the use of digital tools, students are able to take control of their comprehension by sharing out their learning and receiving meaningful feedback from both their teacher and their peers.

Tools to Assess Thinking and Understanding

There are a handful of go-to tools I continue to share with teachers who are seeking new and authentic ways to assess students’ growth in reading. Each one puts the student in control of communicating their learning while also seeking a genuine audience.

Explain Everything allows students to learn crucial 21st-century skills in a presentation platform that gives them the ability to create multimedia slides while narrating on top of them. Students learn how to present information using multiple forms of expression (images, text, video, and audio), with easy options for sharing and exporting to their destination of choice.

In addition to introducing visible thinking strategies to my students, the other big game-changer was giving my students a place to write whenever and wherever they wanted. Better still was giving them an authentic audience across the globe. Kidblog did all of this and then some. If personal responses give us a window into students’ minds, then Kidblog gave me the squeegee to polish up each and every window! Students in my classroom had blog expectations throughout the week, including at least one reading response post. The amount of insight these posts provided was stunning and went a long way toward building a roadmap for future instruction.

Seesaw is a student-driven digital portfolio platform that empowers students to independently document what they are learning in class. Seesaw quickly became the place where screencasts from Explain Everything and published Kidblog posts would live. The focus with this tool is reflection. Seesaw helps capture the learning process, not just the end result. Along the way, teachers can provide comments and feedback to students as they work towards publishing their best work.

Culture of Thinking

By making the move beyond test scores, I reaffirmed my commitment not only to providing my students with meaningful strategies as readers, but also to the means by which I assessed their learning. What I realized is that I was not looking for a new, packaged set of assessments to print and distribute in class. I was looking for a change in culture. The results brought new learning for me, changing the way I assessed reading across a wide range of readers, and gave my students a pathway toward a deeper understanding of content and a greater motivation to learn.

Billy Spicer

Billy currently teaches in Lake Zurich Community Unit School District 95 in suburban Chicago. After spending a decade teaching 3rd through 5th grades, Billy served students and teachers as an instructional coach. He recently spent time in the Bahamas with the Shedd Aquarium, where he lived on a research vessel for a week conducting scientific inquiry. Prior to teaching in Lake Zurich, he worked at Walt Disney World as a member of the Animal Programs department, entertaining, educating, and inspiring conservation action.