Maryam Wagner, McGill University

In general, feedback is information provided to learners following assessment. Arguably, feedback has the greatest impact and potential for advancing learning when it is used formatively, because its primary purpose is to modify learners’ thinking or behaviour (Nicol & MacFarlane-Dick, 2006; Sadler, 1998; Shute, 2008). Cognitively diagnostic feedback (CDF) (Jang & Wagner, 2014; Wagner, 2015) brings together this formative potential with cognitively based theories of diagnostic assessment (Alderson, 2005; Hartz & Roussos, 2008; Huhta, 2010; Jang, 2005; Leighton & Gierl, 2007; Nichols, Chipman & Brennan, 1995). CDF targets gaps in learners’ cognitive processing and strategy use rather than knowledge gaps.

The characteristics of CDF can be discussed across several domains, including purpose, content, and grain size (Jang & Wagner, 2014). The purpose of CDF is ultimately to advance learners’ self-regulated learning through feedback that addresses conceptual errors, cognitive gaps, and strategy use. In this respect, the purpose and content of CDF contrast with feedback that delivers holistic, outcome-based judgements. Another goal of CDF is to provide feedback that is fine-grained, rather than coarse or so excessively detailed that learners’ attention is drawn to micro-level aspects of their work. For example, CDF on writing would provide sub-skill-specific feedback (e.g., on vocabulary use, content generation, organizational strategies) focusing on learners’ strengths and areas for improvement, rather than identifying typographical errors or misplaced commas (Wagner, 2015). A question I have been grappling with recently is the extent to which the provision of feedback, and more specifically CDF, would or should be affected by context.

I am a new scholar. My development as a researcher has focused primarily on assessment in classroom-based educational settings. I have recently shifted my focus to include assessment in workplace-based contexts. Workplace-based contexts are characterized as ‘real-life’ settings in which learners are engaged in on-the-job tasks (Hamdy, 2009); examples include training contexts for physicians, nurses, and pilots. There are numerous similarities between these workplace-based contexts and traditional classroom-based learning environments. For example, both contexts provide opportunities for in vivo or in situ assessments, wherein teachers directly observe tasks in the setting in which they are used (Hamdy, 2009; Wigglesworth, 2008). Another commonality is that in both contexts the curriculum, teaching, and assessment need to be aligned to advance learning, and feedback needs to be delivered during and/or after assessment tasks (Norcini & Burch, 2007). Numerous other similarities exist; however, two of the primary differences between these two assessment contexts are: 1) the characteristics of the tasks; and 2) the agents delivering the feedback (Greenberg, 2012). Table 1 summarizes the similarities and differences across these domains.

Table 1.

Task and Agent Characteristics in Workplace- and Classroom-Based Assessment Contexts

|  | Workplace-Based Assessment Context | Classroom-Based Assessment Context |
| --- | --- | --- |
| Task Characteristics | Primarily performance-based | Variety of task types employed, including performance-based, essays, and portfolios |
|  | Setting and content authentic to real-life situations (defines relationship between task and performance) (Bachman & Palmer, 1996; Wigglesworth, 2008) | Struggles to balance authenticity with generalizability of outcomes to specific contexts (Wigglesworth, 2008) |
|  | Tasks serve as tools for eliciting samples for assessment and provision of feedback | Tasks serve as tools for eliciting samples for assessment and provision of feedback |
| Agent Characteristics | Assessment and subsequent provision of feedback is the primary responsibility of content experts (Greenberg, 2012) | Assessment and provision of feedback is the primary responsibility of task experts (Greenberg, 2012) |
|  | Assessments are driven by external stakeholders who define requisite knowledge and skills | Teachers drive assessment and the type of feedback generated |
|  |  | Frequently employs peer- and self-assessments |

The use of tasks is similar across both contexts: tasks are primarily used to elicit evidence of learning and to generate opportunities for feedback. However, the nature of the tasks is not necessarily identical. While workplace-based settings employ primarily performance-based tasks that replicate real life, classroom-based contexts use a variety of tasks but struggle with the authenticity of some task types to real-world settings. Therefore, the delivery of CDF would not necessarily be influenced by the context; rather, the opportunities to provide it could be affected, as there is generally more variety in task types in classroom-based contexts. This variability arguably provides more diversity in the types of activities in which learners are engaged, and thus different opportunities for observing and generating information about learners’ strengths and areas for improvement.

The primary difference between the feedback providers in the two contexts is their knowledge and expertise. In workplace-based contexts, the agents are primarily content experts, while in classroom-based contexts, the agents are more likely to be task experts. Again, while both contexts engage learners in tasks that could be used to generate and deliver CDF, the differences in the agents might affect the content of the feedback and whether emphasis or priority is placed on some facets (based on the agents’ knowledge and expertise).

My transition to a new research context has provided rich opportunities for work, exploration, and investigation of educational issues, including cognitively diagnostic feedback, which extend across contexts. I greatly welcome the opportunity to connect with anyone interested in discussing these topics further. Please email me: maryam.wagner@mcgill.ca

References

Alderson, J. C. (2005). Diagnosing foreign language proficiency: the interface between learning and assessment. London: Continuum.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests (Vol. 1). Oxford University Press.

Greenberg, I. (2012). ESL Needs Analysis and Assessment in the Workplace. In P. Davidson, B. O’Sullivan, C. Coombe, & S. Stoynoff (Eds.), The Cambridge guide to second language assessment (pp. 178-181). Cambridge University Press.

Hamdy, H. (2009). AMEE Guide Supplements: Workplace-based assessment as an educational tool. Guide supplement 31.1–Viewpoint. Medical Teacher, 31(1), 59-60.

Hartz, S., & Roussos, L. (2008). The fusion model for skills building diagnosis: Blending theory with practicality (Report No. RR-08-71). Princeton, NJ: Educational Testing Service. Retrieved from http://www.ets.org/Media/Research/pdf/RR-08-71.pdf

Huhta, A. (2010). Diagnostic and formative assessment. In B. Spolsky & F. M. Hult (Eds.), The handbook of educational linguistics (pp. 469-482). Oxford: Wiley-Blackwell.

Jang, E. E. (2005). A validity narrative: The effects of cognitive reading skills diagnosis on ESL adult learners’ reading comprehension ability in the context of Next Generation TOEFL (Unpublished doctoral dissertation). University of Illinois at Urbana-Champaign.

Jang, E. E., & Wagner, M. (2014). Diagnostic feedback in the classroom. In A.J. Kunnan (Ed.), Companion to Language Assessment, (pp. 693-711). Wiley-Blackwell.

Leighton, J. P., & Gierl, M. J. (Eds.). (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge: Cambridge University Press.

Nichols, P. D., Chipman, S. F., & Brennan, R. L. (Eds.). (1995). Cognitively diagnostic assessment. NJ: Lawrence Erlbaum.

Nicol, D.J., & MacFarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29(9-10), 855-871.

Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77-84.

Shute, V. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189.

Wagner, M. (2015). The centrality of cognitively diagnostic assessment for advancing secondary school ESL students’ writing: A mixed methods study (Unpublished doctoral dissertation). Ontario Institute for Studies in Education/University of Toronto, Toronto, Ontario, Canada.

Wigglesworth, G. (2008). Task and performance based assessment. In Encyclopedia of language and education (pp. 2251-2262). Springer US.
