Teacher-structured to Student-structured: When completing a traditional assessment, what a student can and will demonstrate has been carefully structured by the person(s) who developed the test. A student's attention will understandably be focused on and limited to what is on the test. In contrast, authentic assessments allow more student choice and construction in determining what is presented as evidence of proficiency. Even when students cannot choose their own topics or formats, there are usually multiple acceptable routes towards constructing a product or performance. Obviously, assessments more carefully controlled by the teachers offer advantages and disadvantages. Similarly, more student-structured tasks have strengths and weaknesses that must be considered when choosing and designing an assessment.
Indirect Evidence to Direct Evidence: Even if a multiple-choice question asks a student to analyze or apply facts to a new situation rather than just recall the facts, and the student selects the correct answer, what do you now know about that student? Did that student get lucky and pick the right answer? What thinking led the student to pick that answer? We really do not know. At best, we can make some inferences about what that student might know and might be able to do with that knowledge. The evidence is very indirect, particularly for claims of meaningful application in complex, real-world situations. Authentic assessments, on the other hand, offer more direct evidence of application and construction of knowledge. As in the golf example above, putting a golf student on the golf course to play provides much more direct evidence of proficiency than giving the student a written test. Can a student effectively critique the arguments someone else has presented (an important skill often required in the real world)? Asking a student to write a critique should provide more direct evidence of that skill than asking the student a series of multiple-choice, analytical questions about a passage, although both assessments may be useful.
The accountability movement has placed a great deal of stress on teachers to prepare students for state standardized tests, and even greater stress on students to perform well on those tests. These tests were mandated by the No Child Left Behind legislation and will continue under provisions of the Every Student Succeeds Act, which requires math assessments in grades 3-8 and once in grades 9-12 (Moran, 2015; 114th Congress, 2015).
Performance assessments will also be included in assessments related to the implementation of the Common Core State Standards. To this end, W. James Popham (2007) suggested that schools also need interim tests that they "can administer every few months to predict students' performances on upcoming accountability tests" (p. 80).
Popham, W. J. (2011, Spring). Exposing the imbalance in 'balanced assessment'. Baltimore, MD: Johns Hopkins University, Better: Evidence-based Education, 14-15.
Fisher, D., & Frey, N. (2014). Checking for understanding: Formative assessment techniques for your classroom, 2nd edition. Alexandria, VA: ASCD.
Davies, A. (2004). Transforming learning and teaching through quality classroom assessment: What does the research say? National Council of Teachers of English. School Talk, 10(1), 2-3.
Chappuis, S., & Chappuis, J. (2007/2008, December/January). The best value in formative assessment. Educational Leadership, 65(4), 14-18.
Brown, C., & Mevs, P. (2012). Quality performance assessments: Harnessing the power of teacher and student learning. Quincy and Boston, MA: Nellie Mae Education Foundation and Center for Collaborative Education. Retrieved from
Black, P., & Wiliam, D. (1998, October). Inside the black box: Raising standards through classroom assessment [Online]. Phi Delta Kappan, 80(2), 139-144, 146-148. [Note: also see the article at].
Heritage, M. (2010). Formative assessment and next-generation assessment systems: Are we losing an opportunity? Washington, DC: Council of Chief School Officers. Retrieved from
Foster, D., & Poppers, A. (2009, November). Using formative assessment to drive learning: The Silicon Valley Mathematics Initiative: A twelve-year research and development project. Retrieved from
Everyone makes mistakes. In viewing assessment for learning, "One of the best ways to encourage students to learn from their mistakes is to allow them to redo their work" for full credit (Lent, 2012, p. 141). However, there are some guidelines to consider so that redos do not become a logistical nightmare, nor are they used inappropriately simply to swap one grade for another. The goal of redos is to engage learners in deeper learning. ReLeah Lent provided tips to help educators develop their policy on redos. Key ideas included:
As vendors create tests for online delivery, it is important to note that the types of assessment questions being developed have moved beyond traditional multiple-choice, true-false, and fill-in-the-blank items. Per the U.S. Department of Education (2016), technology-based assessments enable expanded question types. Examples include graphic responses, in which students might draw, move, arrange, or select graphic regions; hot text within passages, where students select or rearrange sentences or phrases; math questions in which students respond by entering an equation; and performance-based assessments in which students perform a series of complex tasks. A math task might ask students to analyze a graph of actual data and determine the linear relationship between quantities, thus testing their cognitive thinking skills and their ability to apply knowledge to solving real-world problems (p. 55).