
Page history last edited by Keith Schoch 7 years, 3 months ago

 

Tech Tools for Assessment:

How Do They Measure Up?

 

By Keith Schoch

 

“It is only through assessment that we can discover whether the instructional activities in which we engaged our students resulted in the intended learning. Assessment really is the bridge between teaching and learning” (Wiliam, 2013, p. 15).


INTRODUCTION

 

When my older daughter was five, she purchased a rubber squeaky hammer from the dollar store. For days she walked around the house, asking what needed to be hammered. So I guess she proved Abraham Maslow to be correct when he said, "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail" (Maslow, 1966, pp. 15-16). I sometimes feel that teachers are like that when it comes to a new technology. They'll get excited about a single site or application and seek ways to use it immediately and often, regardless of its appropriateness to the task. In a position paper on formative assessment, the NCTE (National Council of Teachers of English) states, “While well-designed tools or assessment strategies are a key component to authentic formative assessment, if they are not what teachers consider the right tools for the immediate task at hand, they are frustrating and counterproductive” (Formative assessment that truly informs instruction, 2013, p. 2).

 

We should strive to work in a more purposeful manner. Let's first consider our instructional objective, which is never "to use a new technology." Let's then ask, "Can I incorporate a technology to somehow facilitate, extend, or improve this lesson?" When it comes to tools for assessment, we are seeking measures that are timely, frequent, authentic, engaging, practical, collaborative, and reflective. That’s a long list, and I apologize that I haven't developed a clever acronym with which to brand it. You might also wonder if every assessment needs to meet every criterion, and the answer is no. We as mindful teachers need to select judiciously; if a technology doesn't fit the bill, we shouldn't force it.

 

Technology tools are especially effective in administering formative assessment. Formative assessment can be defined as “a deliberate process used by teachers and students during instruction that provides actionable feedback used to adjust ongoing teaching and learning strategies to improve students’ attainment of curricular learning targets or goals” (Mursky, 2015). Mursky goes on to describe its attributes, which include clarifying intended learning, eliciting evidence, interpreting evidence, and acting on that evidence. Few teachers, however, were trained in how to elicit evidence of learning in a manner that is authentic, practical, or engaging. Other teachers fail to differentiate between formative and summative assessments; instead, they simply set assessment tasks, not fully realizing “that formative assessments are for learning, not necessarily of it” (Miller, 2015). As a result, these teachers are likely to associate assessments with summative examinations and standardized testing and to see them as something that happens discretely apart from instruction. It’s even possible, in cases such as the PARCC test, that teachers view assessment as an intrusion on, or distraction from, classroom instruction and learning. Heritage (2007) confirms this, stating, “In a context in which assessment is overwhelmingly identified with the competitive evaluation of schools, teachers, and students, it is scarcely surprising that classroom teachers identify assessment as something external to their everyday practice.” We need teachers to see assessment as a key factor in learning, and digital tools can play a huge role in this mission.

 

The purpose of a collection such as Tech Tools for Assessment (http://techtoolsforassessment.pbworks.com) is to introduce teachers to digital assessment applications which will motivate and inspire students while yielding practical outcomes for reflection and continued growth. The tools provide measures that are timely, frequent, authentic, engaging, practical, collaborative, and reflective. Whenever possible, teaching applications and exemplars are included in each site’s description to assist the teacher in understanding each application’s use and to aid the teacher in introducing the applications to students. As teachers, we need to “effectively communicate to our learners both a description of how they will perform an assessment activity as well as a description of how we will judge the quality of their performance” (Vega, 2015). These tools can play a critical role in meeting these objectives.

 

TIMELINESS

 

When my younger daughter searches Pinterest to find new projects for her Crafts for Charity group, she’ll often run across an intriguing idea and immediately ask, “How’d they do that?” She’ll follow the link to the directions, or more often she’ll head over to YouTube and find a crafter who demonstrates the process on-screen. In these moments, my daughter’s brain is primed to receive this information. She craves it in the same way that, ten minutes later, she might crave strawberries or an orange and isn't satisfied until she gets one or the other. Through our anticipatory sets, our prompts, and our discrepant events, we create these same moments in the classroom. This just-in-time learning stokes students’ minds and prepares them for the lesson ahead. Likewise, when a student discovers he’s missing a critical technical skill to complete a project, he is exceptionally ready to receive instruction, whether from the teacher, a classmate, or a YouTube video.

 

Assessments should occur as close to the targeted instruction as possible. Instruction makes the biggest impact on students when they are developmentally ready to receive it, in what Vygotsky (1978) labeled the “zone of proximal development.” But in addition to just-in-time learning, students deserve “just-in-time feedback” (Miller, 2015). Too often, by the time most assessments are graded and returned, the data is well past what Wiliam (2015) calls its “sell-by date,” and classroom instruction has moved on to new concepts and skills. What makes a formative assessment formative is whether students are intentionally provided a chance to receive and use feedback in a later version of the ‘same’ performance (Wiggins, 2011).

 

This emphasis on immediacy and timeliness is what makes technology such a powerful tool for formative assessment. Self-checking sites like Testmoz, Quizlet, and Zaption offer students instantaneous feedback. Some of my students have taken the same Testmoz assessment over and over, striving each time to remedy those learning gaps (they call them mistakes) which were revealed in the first attempt. Teachers can upload more complex questions and prompts to sites like Curriculet and Edulastic, but even these sites provide immediate feedback to let students know if they are grasping the concepts of texts they’re attempting to analyze.

 

Bill Ferriter (2011) reports that in What Works in Schools, Robert J. Marzano shares findings suggesting that “providing students with timely and specific feedback on their levels of mastery” can account for percentile gains of anywhere from 21 to 41 points—higher than gains caused by other school-based achievement factors, including parent and community involvement, safe and orderly environments, and collegiality in the schoolhouse. Busy teachers can’t always deliver timely feedback, but technology can. Sites like ForAllRubrics additionally provide teachers ways to increase the specificity of responses to students in ways that take far less time than traditional pen-and-paper commenting.

 

FREQUENCY

 

When I sign up for a gym membership, I might plan to spend seven hours a week at that facility in pursuit of my fitness goals. So should I spend seven hours there on Monday and take the rest of the week off to recuperate, or should I instead distribute my efforts, spending just one hour an evening on each of the seven days? The answer is obvious, and yet we sometimes fail to see the connection to our assessment.

 

In addition to timeliness, the frequency with which students are assessed is key to success. How many of us in the early stages of our teaching career have given a large summative assignment, only to discover that students failed to comprehend a key concept or skill? A final unit test is too late in the game to reveal this gap. Had we instead distributed our assessments, in the same way that we distributed the instruction, then we would have detected these gaps in learning much sooner and adjusted our teaching in response.

 

Some teachers may conduct diagnostic assessments in hopes of discovering these gaps early on. While diagnostic assessments do serve very real purposes, including compacting and differentiation, they cannot replace frequent formative measures. Heritage (2007) points out that “If teachers use evidence effectively to inform their instruction, it will render previous assessment information out of date… For this reason, a constant stream of evidence from formative assessment is necessary during lessons.” In my own ELA classroom, for example, students are assigned periodic written responses to current events articles found on Tween Tribune. After students have submitted their work, I assess it using an online rubric at ForAllRubrics, which permits me not only to assign points per rubric item, but also to include extensive written comments. These rubrics are then emailed to all students and their parents with the click of a mouse. In some cases, I’ll also review the results with students one on one. Over time, a marked improvement appears in all responses due to the thorough feedback I’m able to provide. Even when the written expectation for responses becomes more rigorous in the third marking period, the combination of an exemplar and the previous feedback forms helps ensure student success. This confirms what researchers have discovered: if students “revisit content over carefully spaced intervals, they retain information longer than if presented with information once and then only assessed immediately after initial (short-term) mastery” (Kornell, Castel, Eich, & Bjork, 2010, pp. 489-503).

 

Is it always possible for teachers to provide feedback in a timely manner? Probably not, especially in the case of large classes. That is why many teachers have devised ways for students to assess the work of their peers. Some educators might feel that students are poorly qualified to assess the work of classmates, but I’d argue that if students understand the critical attributes being assessed, either through a rubric or a checklist, then they are capable of providing useful feedback. According to Graham Gibbs (2015), “quick and dirty feedback (such as model answers, generic feedback on a sample of the cohort of students’ work, and peer feedback or discussion) can work much better than slow and perfect feedback – it has to be fast enough that students are still interested.” When using Google Docs, for example, students can offer peers anecdotal comments and suggestions using the Comments feature. When looking at work that doesn't allow online commenting, peers can use simple one-point rubrics to assess it (Gonzalez, 2015). Due to the amount of writing involved, one-point rubrics would be prohibitive for teachers, but they are perfect for students, who often have lots to share when critiquing a partner. John Hattie writes, “The most powerful single modification that enhances achievement is feedback. The simplest prescription for improving education must be 'dollops of feedback'” (as quoted in Marzano, 2003, p. 37).


AUTHENTICITY

 

In my first year teaching Grade 6 English Language Arts, we had discussed portmanteau words, those words like smog and motel which are created by blending two existing words while dropping some letters. I had done my due diligence by explaining how they differed from compound words (which don’t lose letters) and contractions (which signify dropped letters by the inclusion of an apostrophe). Students had quizzed well and we had moved on. Or so I thought, until one day Keira, one of my students, paused by the door before leaving.

 

“Is something wrong?” I asked.

 

“No,” she responded, “I’m just waiting for the eighth graders to pass. I don’t want to get stampled.”

 

“Stampled?”

 

“Yeah, it’s a portmanteau I invented. It’s when you get trampled by a stampede. You can use it if you want.” And off she went.

 

The next day I overheard another student complaining that Spanish class was aggronizing. “Is that a real word?” I asked. He replied that of course it was, since it combined agonizing and aggravating. It turns out that my students had been coining new words at lunch each day, and using them in an effort to make the better ones stick. When I asked Keira why she hadn't told me about this, she shrugged and said, “I thought we were done with that in class.”

 

Her remark haunted me for days, and greatly influenced the way I taught reading and writing from that day forward. I became obsessed with showing students the reasons we were learning all this “stuff,” from portmanteaus to paragraphs, and why we would never consider ourselves “done with” anything we discussed in class.

 

Grant Wiggins defines authentic assessment as “engaging and worthy problems or questions of importance, in which students must use knowledge to fashion performances effectively and creatively. The tasks are either replicas of or analogous to the kinds of problems faced by adult citizens and consumers or professionals in the field” (Wiggins, 1993, p. 229). What his definition fails to mention is the motivation and sense of ownership which well-designed authentic tasks can generate. Nearly every assignment created in the traditional classroom is owned by the teacher, and remains the property of the teacher, until he or she anoints it as correct and completed. The authentic task, however, becomes the property of the student, as she must choose the best methods and materials to complete it. In the example above, students took ownership of portmanteaus long after we had finished with them in class.

 

How important is ownership of a task to a student? Scott Keller, author of Beyond Performance: How Great Organizations Build Ultimate Competitive Advantage, recounts the following anecdote in the Harvard Business Review:

 

In a famous experiment, researchers ran a lottery with a twist. Half the participants were randomly assigned a lottery number. The remaining half were given a blank piece of paper and a pen and asked to write down any number they would like as their lottery number. Just before drawing the winning number, the researchers offered to buy back the tickets. The question researchers wanted to answer is, “How much more do you have to pay someone who ‘wrote their own number’ versus someone who was handed a number randomly?” The rational answer would be that there is no difference (given that a lottery is pure chance and therefore every ticket number, chosen or assigned, should have the same value). A more savvy answer would be that you would have to pay less for the tickets where the participant chose the number, given the possibility of duplicate numbers in the population who wrote their own number. The real answer? No matter what location or demographic the experiment has taken place in, researchers have always found that they have to pay at least five times more to those who wrote their own number. This result reveals an inconvenient truth about human nature: When we choose for ourselves, we are far more committed to the outcome — by a factor of five to one.

 

In most classrooms, formative assessment bears little resemblance to summative assessment, and even less resemblance to “real life.”  Learners rarely struggle to construct their own understandings, and complex questions with open-ended or divergent answers are rarely asked. Most assessments are multiple choice, matching, or fill-in-the-blank. Joyner & Muri (2011) call these types of questions the “tip of the iceberg,” as they yield little information about what students understand, especially since students can guess with fairly good odds. Conversely, constructed response strategies such as open-ended questions, interviews, journals, and short answer questions tell us “what’s below the surface” because these types of assessment provide more evidence about what students understand or don’t understand. Typically, the entirety of an authentic assessment occurs below the surface, where more complex thinking resides.

 

As part of an argumentative unit, for example, I told my students about a new museum called the Hunters of the Wild Lands Museum, or The HOWL for short. I assigned students their own predators and challenged them to research and present those traits and skills which made their animal a perfect candidate for exhibition in this new museum. This project involved an open approach not only to the research process, but to the final presentation as well. Students chose from myriad options for the latter, including Hstry, Screenr, EDPuzzle, Prezi, Lino, and PowerPoint. Some students changed their minds and restarted after whole periods of work. This illustrates one of the most profound benefits of authentic learning: it provides learners with ongoing feedback. As students progress from one step of a performance task to the next, their decisions yield immediate consequences which guide future actions. Wiggins (2010) explains this phenomenon best by saying, “Sometimes, feedback comes from teachers. In the most powerful cases, however, it comes from the activity itself.” By project’s end, students truly owned their animal and their work. Bernard Bull, assistant vice president of academics and education professor at Concordia University, would describe this as creating a “culture of learning” instead of a “culture of earning” (Schwartz, 2014). The most telling piece of evidence that authentic assessment works: in the five years I've assigned the HOWL project, fewer than a dozen students have ever asked about their grades.

 

ENGAGEMENT

 

Have you ever found yourself so involved in an activity that time just “got away” from you? Whether it’s reading, gardening, running, sewing, or spending time with family, those activities which engage us rarely seem like hard work, and instead leave us feeling rewarded and refreshed. When was the last time you assessed students and they left your classroom feeling that same way?

 

Some assessments, through their game-like play, are naturally engaging to students. A site like Kahoot seems more like a game show than a serious assessment tool. Similarly, Socrative, Zaption, and WeJITS all have potential to be fun as well as educational. Other sites like Hstry, Telescopic Texts, and Google Story Builder are engaging because they allow students the opportunity to create. From sand castles to fingerpaints to Legos, the human desire to bring something from figment to form lives well beyond childhood; and yet, how many of us provide opportunities for creation in our classrooms? This, again, is a perfect place for technology to play a role.

 

Why is engagement so powerful? Because it puts the mind to work. Rather than just thinking and remembering, the mind now commands the rest of the body to action, whether it’s through a painting or a poem or a performance assessment. Karpicke and Blunt (2011) explain that “while any assessment requires some type of active retrieval, having students reconstruct what they know through alternative assessments leads to deeper understanding and consolidates learning in more powerful ways than traditional testing.”

 

An added benefit of engagement is that it builds confidence in the students’ grasp of the content. “Because critical assessment techniques, or CATs, are worked into classroom activities and content, they eliminate the anxiety surrounding quizzes and tests, and they allow teachers to better monitor student morale and confidence” (Teachthought Staff, 2013). If you and I find ourselves staring at a unit test for a novel and it seems more a trivia exercise than a measurement of true student learning, then we've likely realized a thing or two about authenticity and engagement. The bubble test is neither authentic nor engaging for the student, and it's selling us all short in the end.

 

PRACTICALITY

 

For formative assessment to be truly formative, it needs to be practical. The word “practical,” in its most etymological sense, means "fit for action... active, effective, vigorous" (Harper, 2015). In other words, formative assessment should be actionable. It should provide both the teacher and the student with a game plan for future instruction and growth. John Hattie (2013) states, “While teachers see feedback as corrections, criticism, comments, and clarifications, for students, unless it includes ‘where to next’ information, they tend to not use it. Students want feedback just for them, just in time, and just helping nudge forward.” In other words, our feedback needs to implicitly or explicitly include the notion of “from now on…,” providing students with keys to improving their work.

 

For assessment to be practical, then, it needs to be specific. If we were rated “average” in our job performance, wouldn't we desire specific information which would help us to improve? Grant Wiggins (2010) explains it in this way: “‘Good job!’ is not feedback. ‘You used many interesting details to make your characters come alive in this story’ is feedback. ‘B-’ is not feedback. ‘Your thesis is an interesting one, but you have not provided sufficient evidence to support it,’ is.” Likewise, telling a student or a parent that reading comprehension is an area of difficulty isn't especially useful. What aspect of the reading process is lacking?

 

When discussing the site Edulastic, for example, I mention that teachers need to exercise caution when designing assessments in order to focus on those exact skills which they mean to measure. Additional questions meant to "fill out" the quiz or to increase the likelihood (percentage-wise) that students will score well do not help teachers determine acquisition of a standard. For instance, an assessment on summarizing and drawing conclusions should not be cluttered with questions about vocabulary or character motives. Additionally, the two skills mentioned (summarizing and drawing conclusions) should have their particular questions coded to different standards (possible with many of these tools, such as Actively Learn and Curriculet), so that the resulting data reads correctly as two discrete skills, rather than a lumped score generically called something like "reading comprehension." William Ferriter (2014) explains that “assessing nonessential standards just makes it more difficult to get — and to take action on — information quickly and easily.”

 

I mentioned earlier that I provide clear rubrics for my students, which I later return with specific comments, praising what worked well while suggesting ways to improve. What I neglected to mention is that I also provide a model essay to which they can refer as they work. Ron Berger, chief academic officer of Expeditionary Learning, does the same for his students. “The main thing for me about student work is it creates a model and a discussion point of what we’re aiming for.” Berger has also created a downloadable “museum of beautiful student work,” hand-selected from the thousands of projects he’s seen over his career, that educators can use to help inspire their students (Schwartz, 2014). The goal here isn't to suggest that students copy these exemplars, nor is it to shame them about the work they've produced in the past. Instead the goal is to provide them with targets for their own work. “If we hope to improve student learning, we need to get inside student minds and turn up the dial for quality. Most importantly, we need to build into every student a growth mindset — the confidence that he or she can improve through hard work — and a passion for becoming a better student and a better person” (Berger, 2015).

 

One danger here, of course, is expecting too much progress at once. When we meet our students in September, we still recall those students we taught in June. We wonder why these new students are “so far behind,” and it’s tempting to demand immediate results which exceed their capabilities. A balanced sense of expectation is in order because, “if the gap is perceived as too large by a student, the goal may be unattainable, resulting in a sense of failure and discouragement on the part of the student. Similarly, if the gap is perceived as too ‘small,’ closing it might not be worth any individual effort. Hence, to borrow from Goldilocks, formative assessment is a process that needs to identify the ‘just right gap’” (Sadler, 1989, p. 130).

 

COLLABORATION

 

Some teachers might be surprised that I've included collaboration as key to formative assessment. But think back to the last time you purchased something online from Amazon. Did you, like me, check out the number of stars that the product had received through customer feedback? And did you, like me, go to the bottom of the page to read both the best and the worst ratings before making a decision to buy? The fact is, there is truth to be found in crowdsourcing. Often the wisdom of the crowd trumps the bias of the lone individual. Realize, also, that by collaboration I don’t mean just between students, but between the student and teacher as well.

 

Many tools on this site can foster collaboration or crowdsourcing in the classroom. The power of sites such as AnswerGarden, EpicDecide, Soapbox, ClassResponder, Plickers, Padlet, Lino, Kahoot, WeJITS, and Today’sMeet lies within the communal dynamic. The saddest day I ever spent in school was attempting to generate ideas for a Lino board with just four students. They managed it, but the exponential power of my later class of twenty dwarfed the efforts of the four in no time!

 

Collaboration signals to students that their input is crucial to the classroom. In too many classrooms students sit as passive receivers of knowledge, rarely sharing out what they know about a topic. Dean Shareski (2014) explains it this way:

 

Some think of it as a contract. They see themselves as consumers and teachers are selling a product. They either buy it or they don’t.  I’m trying to create a community where everyone has a stake and responsibility. The ultimate goal is empowerment. Sometimes structure and scaffolding can lead to that, but that scaffolding still requires student input. The more you are the sole creator of this structure, the less ownership the learner has. You perpetuate the idea of expert and novice. Yes, there are some types of learning and situations where the learner is without any background knowledge but this is rare. Most of us come to new learning with some background, some familiarity and this is what a great host/teacher does. They help them see those connections and use background knowledge to build upon.

 

Think of your own school experience. It’s likely that your love or disdain for a certain subject was largely determined by your feelings about the teacher of that subject or your perception of that teacher’s feelings about you. It’s important that we, as teachers, get to know our students and show an interest in what interests them. Recently, my students engaged in a brainstorming session using AnswerGarden. They loved the site, so I gave them a second prompt which asked, “How could a teacher use this?” Many students suggested it as a tool for teachers to get to know their students at the beginning of the year. Students do desire this, even though most high schoolers wouldn’t readily admit it. Why is this important? Because it’s the beginning of building trust between teacher and student. After all, “When teachers know their students well, they know when to push and when to back off. Moreover, if students don’t believe their teachers know what they’re talking about or don’t have the students’ best interests at heart, they won’t invest the time to process and put to work the feedback teachers give them” (Wiliam, 2014). It’s a truly reciprocal process, made possible only when a constant stream of assessment is flowing. Schwartz (2014) elaborates by explaining, “With formative assessment, students and teachers can continually make small adjustments. This... makes it easier for teachers and students to become partners in learning, giving students ownership over their success and asking them to show responsibility for improvement.”

 

REFLECTION

 

As part of a graduate course, I completed a twenty-page paper which involved a significant amount of work. I received the assignment back a week later with an A written on the last page. Nothing else. I wondered if the teacher had even read it. I stuffed it into my backpack, pleased with the grade nonetheless. Looking over that same paper ten years later, I noticed some instances of brilliance, but also some instances of banality. I wasn't credited for the former, and I wasn't criticized for the latter. I think an opportunity was lost for all involved.

 

Students make meaningful gains when they are provided with feed-forward ideas and related instruction that supports their continuing growth (Lapp, Fisher, & Frey, 2012, p. 7). Teacher-generated letter grades alone don’t deliver this feedback, especially if they’re withheld until it’s too late for students to improve their practice. After all, “it's not teaching that causes successful, eventual learning – i.e. accomplishment. It's the attempts and adjustments by the learner to perform that cause accomplishment. And without feedback, all of the teaching, no matter how extensive, remains theoretical to the learner” (Wiggins, 2010).

 

One of my greatest victories as a teacher has been to increase my students’ audience. When I was in school, I would dutifully complete my work, hand it to the teacher, and receive it back with a grade. That was it. This simple academic transaction was hardly worth the effort! Schwartz (2014) echoes this sentiment, saying, “Much of the work students produce is read only by their teachers. That’s why examining and critiquing student work as a regular part of classroom interactions can be a powerful way for both teachers and students to reflect on their work, while building a community culture that focuses on the process of learning.”

 

One way to get students to think about the quality of their work is to have them examine the work of others. Most students love sharing their ideas and responding to the ideas of others. Wiliam (2015) explains that “Some teachers believe that it is wasteful to take time that students could be generating their own work to look at the work of others, but there are two immediate benefits of… look(ing) at samples of student work. First, we are all better at spotting mistakes in the work of others than we are in our own work. Second, when we notice mistakes in the work of others, we are less likely to make the same mistakes in our own work.” Students likewise feel a sense of comfort and camaraderie knowing that what confuses them might confuse others. Actively Learn, for example, allows students to annotate texts as they read and share their notes with the whole class. WeJITs and Today’sMeet provide opportunities for threaded discussions and interaction beneath the verbal discourse of the classroom. Graham Gibbs (2015) suggests putting class time aside for students to “discuss their assignments, and the marks and feedback they received, with other students, and to draw conclusions about how they should tackle future assignments.” We need to allow this time, and hold students accountable for serious reflection on their work. Now that they know what they know, how will this impact future actions? “The only thing that matters with feedback is the reaction of the recipient. That’s it. Feedback, no matter how well designed, that is not acted upon by the student is a waste of time” (Wiliam, 2015).

 

Rubrics are another tool that teachers can use to increase reflection. With a rubric in hand, a student has a real fighting chance of completing a task with competency, certainly more so than if they had to guess what the teacher wanted. A tool like ForAllRubrics provides students with specific comments in addition to scores, increasing their understanding of what was done well and what needs improvement. By combining exemplars with rubrics, students have both the map and the destination in mind before setting themselves to a task. “Rubrics can be useful tools, but absent a picture of what the final goal actually looks like, for many students they are just a bunch of words. Students need to see high-quality student essays, geometric proofs, experimental designs, book reviews, research papers — whatever the genre — so that they can understand and analyze what ‘good’ is” (Berger, 2015).

 

Finally, students need to be taught how to self-assess, and then be given the time to do it. Because “improving learning through formative assessment also depends on the active involvement of students in their own assessment” (Assessment Reform Group, 1999), we can’t skip this step with students simply because “they’re not good at it.” No one is good at anything unless they’re taught how to do it well, and are then given time and opportunity to practice! Imagine if, when your child first tried to ride a two-wheeler, they fell over onto the driveway and you simply shrugged your shoulders and said, “Never mind, you’re no good at this. I’ll ride the bike from now on.” Gibbs (2015) also recommends “requiring students to complete a self-assessment sheet, attached to the assignment when it is submitted, structured around the formal criteria and standards, so that they are obliged to reflect upon their own work before they receive feedback.” This self-reflection is likely to result in redoubled efforts as students notice discrepancies in their own work. This leads to an added bonus for teacher and students alike, because “when a student can effectively self-evaluate…grading gets simpler! To the learner, this makes your classroom feel fair, and feelings of instructional fairness is a sign of well-designed assessment and curriculum alignment” (Vega, 2015).

 

IN CLOSING

 

What now?

 

I’m in agreement with Heick (2013), who argues, “if the goal of any assessment is to provide data to refine planned instruction, then the primary function of any assessment, whether an authentic, challenge-based learning performance or a standardized test, should be to answer the following question for any teacher: ‘What now?’” The primary function of many of the technology tools I've reviewed and vetted is likewise to assess what the student understands or can do, and then to ask, “What now?”

 

According to Stockman (2015), assessments “aren't about grades, evaluation, or standardization. They’re about making expectations as transparent as possible, providing learners a clear pathway to follow as they strive to get better at something that really matters, and giving everyone criteria to speak to as they provide one another formative feedback.” Online tools can assist, but not replace, teachers in fulfilling all of these goals for effective assessment. All that remains is for teachers to put these tools into play, on a regular basis, in order to see their real power.

 

Hopefully this collection will get you started on that journey.

 

________________________________________________________________________________________________________________

 

 

 

References

 

Assessment Reform Group. (1999). Assessment for learning: Beyond the black box. Cambridge: University of Cambridge, School of Education.

 

DeWitt, P. (2015, February 1). Do students need a bill of assessment rights? Retrieved from http://blogs.edweek.org/edweek/finding_common_ground/2015/02/do_students_need_a_bill_of_assessment_rights.html

 

Ferriter, W. (2014, April 4). Ten tips for writing common formative assessments. Retrieved from http://blog.williamferriter.com/2014/04/04/ten-tips-for-writing-common-formative-assessments/

 

Formative assessment that truly informs instruction. (2013). Retrieved from National Council of Teachers of English website: http://www.ncte.org/library/NCTEFiles/Resources/Positions/formative-assessment_single.pdf

 

Gallagher, C. W. (2009). Kairos and formative assessment: Rethinking the formative/summative distinction in Nebraska. Theory Into Practice, 48, 81-88. doi:10.1080/00405840802577676

 

Gibbs, G. (2015, January). Making feedback work involves more than giving feedback – Part 1 the assessment context. Retrieved from http://www.seda.ac.uk/resources/files/publications_176_27%20Making%20feedback%20work%20Part%201.pdf

 

Gibbs, G. (2015, January). Making feedback work involves more than giving feedback - Part 2 The students. Retrieved from http://www.seda.ac.uk/resources/files/publications_177_28%20Making%20Feedback%20work%20Part%202.pdf

 

Gonzalez, J. (2015, February 4). Show us your single point rubric [Web log post]. Retrieved from http://www.cultofpedagogy.com/single-point-rubric/

 

Harper, D. (n.d.). Online Etymology Dictionary. Retrieved March 10, 2015, from http://www.etymonline.com/

 

Hattie, J. (2013, January). John Hattie: “Think of feedback that is received not given” [Interview]. Retrieved from http://visible-learning.org/2013/01/john-hattie-visible-learning-interview/

 

Heick, T. (2013, September 26). The most important question every assessment should answer. Retrieved from http://www.teachthought.com/learning/the-most-important-question-every-assessment-should-answer/

 

Heritage, M. (2007, October). Formative assessment: What do teachers need to know and do? Phi Delta Kappan, 89(2), 140-145.

 

Heritage, M. (2011). Formative assessment: An enabler of learning. Better: Evidence-based Education, 18-19. Retrieved from http://www.amplify.com/assets/regional/Heritage_FA.pdf

 

Joyner, J. M., & Muri, M. (2011). INFORMative assessment: Formative assessment to improve math achievement, grades K-6. Sausalito, Calif.: Math Solutions.

 

Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science. doi:10.1126/science.1199327

 

Keller, S. (2012, April 26). Increase your team’s motivation five-fold - Harvard Business Review. Retrieved from https://hbr.org/2012/04/increase-your-teams-motivation/

 

Kornell, N., Castel, A. D., Eich, T. S., & Bjork, R. A. (2010). Spacing as the friend of both memory and induction in young and older adults. Psychology and Aging, 25(2), 498-503. doi:10.1037/a0017807

 

Lapp, D., Fisher, D., & Frey, N. (2012, December). Feed-forward: Linking instruction with assessment. Voices from the Middle, 21(2), 7-9.

 

Marzano, R. J. (2003). What works in schools: Translating research into action. Alexandria, VA: Association for Supervision and Curriculum Development.

 

Maslow, A. H. (1966). The psychology of science: A reconnaissance. New York: Harper & Row.

 

Miller, A. (2015, February 3). Formative assessment is transformational! Retrieved from http://www.edutopia.org/blog/formative-assessment-is-transformational-andrew-miller

 

Mursky, C. (2015, January 30). Formative assessment practices to support student learning. Retrieved from https://www.teachingchannel.org/blog/2015/01/30/formative-assessment-practices-sbac/

 

Pforts, A. (2015, March 5). Formative teaching and learning [Web log post]. Retrieved from https://www.teachingchannel.org/blog/2015/03/05/formative-teaching-and-learning-sbac/#more-181774

 

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 130. doi:10.1007/BF00117714

 

Schwartz, K. (2014, September 19). How looking at student work keeps teachers and kids on track [Web log post]. Retrieved from http://blogs.kqed.org/mindshift/2014/09/how-looking-at-student-work-keeps-teachers-and-kids-on-track/

 

Schwartz, K. (2014, January 6). The importance of low-stakes student feedback [Web log post]. Retrieved from http://blogs.kqed.org/mindshift/2014/01/the-importance-of-low-stakes-student-feedback/

 

Shareski, D. (2014, July 16). Encouraging ownership [Web log post]. Retrieved from http://ideasandthoughts.org/2014/07/16/what-ownership-means/

 

Siwak, H. (2015, January 29). Allowing students to deeply understand assessment [Web log post]. Retrieved from http://www.heidisiwak.com/2015/01/allowing-students-to-deeply-understand-assessment/

 

Stockman, A. (2015, January 19). That's not a rubric, and you're using it wrong: 5 ways to clean up the mess [Web log post]. Retrieved from www.brilliant-insane.com/2015/01/thats-not-rubric-youre-using-wrong-5-ways-clean-mess.html

 

TeachThought Staff. (2013, April 19). 12 strategies for critical assessment. Retrieved from http://www.teachthought.com/learning/assessment/12-strategies-for-critical-assessment/

 

Vega, A. (2015, January 27). Blended and online assessment taxonomy design. Retrieved from http://www.fulltiltahead.com/edtech/blended-online-assessment-taxonomy-design-infographic/

 

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Whitman, G. (2014, June 27). Assessment, choice, and the learning brain [Web log post]. Retrieved from http://www.edutopia.org/blog/assessment-choice-and-learning-brain-glenn-whitman

 

Wiggins, G. (1993). Assessing student performance. San Francisco: Jossey-Bass Publishers.

 

Wiggins, G. (2010, May 22). Authentic education - Feedback: how learning occurs. Retrieved from http://www.authenticeducation.org/ae_bigideas/article.lasso?artId=61

 

Wiggins, G. (2011, August 25). Formative vs summative assessment – and unthinking policy about them. Retrieved from https://grantwiggins.wordpress.com/2011/08/25/formative-vs-summative-assessment-and-unthinking-policy-about-them/

 

Wiliam, D. (2014, November 29). Is the feedback you’re giving students helping or hindering? Retrieved from http://www.dylanwiliamcenter.com/is-the-feedback-you-are-giving-students-helping-or-hindering/

 

Wiliam, D. (2013, December). Assessment: The bridge between teaching and learning. Voices from the Middle, 21(2), 15-20.

 

Wiliam, D. (2015, February 3). Practical ideas for classroom formative assessment. Retrieved from http://www.dylanwiliamcenter.com/practical-ideas-for-classroom-formative-assessment/

 


Return to Main Page 
