
Using Scenario-Based Tasks to assess along a Learning Progression

January 3, 2018

Posted by clarissalau67 in Assessment Design


Gabrielle Cayton-Hodges, ETS

Learning progressions (LPs; similar to learning trajectories) have been defined in many ways over decades of cognitive research and in different subject areas. At ETS, through the Cognitively Based Assessment of, for, and as Learning (CBAL™) initiative, we developed a single definition to be used across content areas: “a description of qualitative change in a student’s level of sophistication for a key concept, process, strategy, practice, or habit of mind. Change in student standing on such a progression may be due to a variety of factors, including maturation and instruction. Each progression is presumed to be modal—i.e., to hold for most, but not all, students. Finally, it is provisional, subject to empirical verification and theoretical challenge” (ETS, 2012).

The development and documentation of LPs can be useful in creating sound diagnostic tools for both formative and summative assessment. By developing research-based LPs intended for use in assessment development, and then building the assessment around them, an assessment system can provide teachers with information about where their students stand on the progression and, from that, derive the information needed to move the students forward.
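To make the idea of “locating” a student concrete, here is a minimal sketch in Python. Everything in it is an assumption made for illustration: the mastery rule, the threshold, and the level-keyed 0/1 scores are invented, and this is not a CBAL scoring algorithm.

```python
# A minimal sketch of placing a student on a learning progression.
# The mastery rule, threshold, and scores are invented for illustration;
# this is not a CBAL scoring algorithm.

MASTERY_THRESHOLD = 2 / 3  # assumed cutoff, for illustration only

def place_on_progression(scores_by_level: dict[int, list[int]]) -> int:
    """Map LP level -> 0/1 item scores keyed to that level; return the
    highest mastered level, stopping at the first unmastered level
    (a progression is ordered, so higher levels presuppose lower ones)."""
    placement = 0
    for level in sorted(scores_by_level):
        scores = scores_by_level[level]
        if scores and sum(scores) / len(scores) >= MASTERY_THRESHOLD:
            placement = level
        else:
            break
    return placement

# Example: consistent at levels 1 and 2, inconsistent at level 3 -> placed at 2.
print(place_on_progression({1: [1, 1, 1], 2: [1, 0, 1], 3: [0, 1, 0]}))  # 2
```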

In practice, however, this is not always a simple and straightforward task. Since LPs often articulate not just what students can do but also what they understand and what misconceptions they may hold, a single item, even with a solution or explanation, may give us only a small part of the story.

Scenario-Based Tasks (SBTs) lead students through a larger, often real-world context in which they can apply various aspects of subject-matter knowledge to solving the problem. SBTs and learning progressions fit naturally together: students demonstrate knowledge in context, and scaffolding can be applied as needed to determine the level at which a student can perform both with and without assistance. In mathematics, for example, since both SBTs and LPs are designed for one particular content area, they can be tailored to highlight a precise area of mathematics, and even specific LP levels, while filling in the other mathematical content knowledge needed for the particular problem. An SBT can also provide some amount of “branching.” Students who do not need scaffolding move on to show what they can do independently, while students who would otherwise be left floundering with an open-ended problem can be given the results of the parts where they are stumbling, allowing them to advance to other aspects of the problem that they may be able to solve without difficulty. A minimal sketch of this branching logic follows.
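Everything in this sketch is hypothetical: the step contents, the exact-match correctness check, and the evidence labels are invented for illustration and do not reflect an actual ETS task design. The point is only the branch: an independent solver carries their own answer forward, while a struggling student is handed the worked result so later steps remain accessible, and the evidence record notes which was which.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str      # key under which this step's result is stored
    prompt: str    # what the student is asked
    answer: float  # keyed correct result for this step
    skill: str     # LP strand the step gives evidence about

@dataclass
class EvidenceRecord:
    observations: list = field(default_factory=list)

    def record(self, skill: str, outcome: str) -> None:
        self.observations.append((skill, outcome))

def run_scenario(steps, respond, record: EvidenceRecord):
    """Run the SBT; `respond` is a callable standing in for the student."""
    context = {}
    for step in steps:
        response = respond(step.prompt, context)
        if response == step.answer:
            record.record(step.skill, "independent")  # solved without help
            context[step.name] = response
        else:
            record.record(step.skill, "scaffolded")   # show the worked result
            context[step.name] = step.answer          # branch: carry it forward
    return context

# Example: a two-step scenario in which step 2 builds on step 1's result.
steps = [
    Step("unit_price", "What is the price per kg?", 2.5, "ratio_reasoning"),
    Step("total_cost", "What do 12 kg cost?", 30.0, "proportional_reasoning"),
]
record = EvidenceRecord()
run_scenario(steps, respond=lambda prompt, ctx: 2.5, record=record)
print(record.observations)
# [('ratio_reasoning', 'independent'), ('proportional_reasoning', 'scaffolded')]
```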

While there are multiple approaches to assessment along a learning progression, and SBTs alone may not provide all of the information we need, incorporating SBTs into summative or formative assessment can certainly help fill some inevitable gaps.

For more information on the design and development of assessments that incorporate SBTs, see Oranje, Keehner, Persky, Cayton‐Hodges, and Feng (2016).

 

References

Educational Testing Service. (2012). Outline of provisional learning progressions. Retrieved from the CBAL English language arts (ELA) competency model and provisional learning progressions website: http://elalp.cbalwiki.ets.org/Outline+of+Provisional+Learning+Progressions

Oranje, A., Keehner, M., Persky, H., Cayton‐Hodges, G., & Feng, G. (2016). Educational survey assessments. In A. A. Rupp & J. P. Leighton (Eds.), The Wiley handbook of cognition and assessment: Frameworks, methodologies, and applications (pp. 427-445). Wiley-Blackwell.

 


Cognitively Diagnostic Feedback in Context

November 27, 2017

Posted by clarissalau67 in Assessment Design


Maryam Wagner, McGill University

In general, feedback is information provided to learners following assessment. Arguably, feedback has the greatest impact and potential for advancing learning when it is used formatively, because its primary purpose is then to modify learners’ thinking or behaviour (Nicol & MacFarlane-Dick, 2006; Sadler, 1998; Shute, 2008). Cognitively diagnostic feedback (CDF; Jang & Wagner, 2014; Wagner, 2015) combines this formative potential with cognitively based theories of diagnostic assessment (Alderson, 2005; Hartz & Roussos, 2008; Huhta, 2010; Jang, 2005; Leighton & Gierl, 2007; Nichols, Chipman, & Brennan, 1995). CDF targets gaps in learners’ cognitive processing and strategy use rather than knowledge gaps.

The characteristics of CDF can be discussed across several domains, including purpose, content, and grain size (Jang & Wagner, 2014). The purpose of CDF is ultimately to advance learners’ self-regulated learning by providing feedback that addresses conceptual errors, cognitive gaps, and strategy use. In purpose and content, CDF stands in contrast to feedback that delivers holistic, outcome-based judgements. Another goal of CDF is to provide feedback that is fine-grained rather than coarse, but not so excessively detailed that learners’ attention is drawn only to micro aspects of their work. For example, CDF on writing would provide sub-skill-specific information (e.g., on vocabulary use, content generation, organizational strategies) focusing on learners’ strengths and areas for improvement, rather than identifying typographical errors or misplaced commas (Wagner, 2015). A question that I have been grappling with recently is the extent to which the provision of feedback, and more specifically CDF, would or should be affected by context.
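The grain-size contrast can be illustrated as data. In this hypothetical sketch the sub-skill names follow the writing example above, but the messages and the structure itself are invented for illustration and are not drawn from the cited work.

```python
# Hypothetical illustration of grain size (invented content, not from the
# cited work): holistic, outcome-based feedback collapses a performance to
# one judgement, while CDF keeps it at the sub-skill level, pairing each
# sub-skill with a strength, an area for improvement, and a strategy.

holistic_feedback = {"essay_1": "B-. Good effort; revise and resubmit."}

cognitively_diagnostic_feedback = {
    "essay_1": {
        "vocabulary_use": {
            "strength": "Uses topic-specific terms accurately.",
            "area_for_improvement": "Relies on repetition of a few words.",
            "strategy": "Keep a word bank of alternatives while drafting.",
        },
        "content_generation": {
            "strength": "The main claim is clearly stated.",
            "area_for_improvement": "The second argument lacks evidence.",
            "strategy": "List supporting evidence for each point first.",
        },
        "organizational_strategies": {
            "strength": "Paragraphs follow a logical order.",
            "area_for_improvement": "Transitions between sections are abrupt.",
            "strategy": "Draft an outline and note linking sentences.",
        },
    }
}
```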

I am a new scholar. My research to date has focused primarily on assessment in classroom-based educational settings, and I have recently broadened my focus to include assessment in workplace-based contexts. Workplace-based contexts are ‘real-life’ settings in which learners are engaged in on-the-job tasks (Hamdy, 2009); examples include training contexts for physicians, nurses, and pilots. There are numerous similarities between these workplace-based contexts and traditional classroom-based learning environments. For example, both provide opportunities for in vivo or in situ assessment, wherein teachers directly observe tasks in the setting in which they are used (Hamdy, 2009; Wigglesworth, 2008). Another commonality is that in both contexts the curriculum, teaching, and assessment need to be aligned to advance learning, and feedback needs to be delivered during and/or after assessment tasks (Norcini & Burch, 2007). Numerous other similarities exist; however, two of the primary differences between the two assessment contexts are (1) the characteristics of the tasks and (2) the agents delivering the feedback (Greenberg, 2012). Table 1 summarizes the similarities and differences across these domains.

Table 1. Task and Agent Characteristics in Workplace- and Classroom-Based Assessment Contexts

| | Workplace-based | Classroom-based |
| --- | --- | --- |
| Task characteristics | Primarily performance-based | Variety of task types employed, including performance-based tasks, essays, and portfolios |
| | Setting and content authentic to real-life situations, which defines the relationship between task and performance (Bachman & Palmer, 1996; Wigglesworth, 2008) | Struggles to balance authenticity with generalizability of outcomes to specific contexts (Wigglesworth, 2008) |
| | Tasks serve as tools for eliciting samples for assessment and provision of feedback | Tasks serve as tools for eliciting samples for assessment and provision of feedback |
| Agent characteristics | Assessment and subsequent provision of feedback is the primary responsibility of content experts (Greenberg, 2012) | Assessment and provision of feedback is the primary responsibility of task experts (Greenberg, 2012) |
| | Assessments are driven by external stakeholders who define the requisite knowledge and skills | Teachers drive assessment and the type of feedback generated |
| | | Frequently employs peer- and self-assessments |

The use of tasks is similar across both contexts: tasks primarily serve to elicit evidence of learning and to generate opportunities for feedback. However, the nature of the tasks is not necessarily identical. While workplace-based settings employ primarily performance-based tasks that replicate real life, classroom-based contexts use a variety of tasks but struggle with the authenticity of some task types to real-world settings. The delivery of CDF would therefore not necessarily be influenced by the context, but the opportunities to provide it could be, as there is generally more variety in task types in classroom-based contexts. This variability arguably produces more diversity in the activities in which learners are engaged, and thus different opportunities for observing and generating information about learners’ strengths and areas for improvement.

The primary difference between the feedback providers in the two contexts is their knowledge and expertise. In workplace-based contexts, the agents are primarily content experts, while in classroom-based contexts, the agents are more likely to be task experts. Again, while both contexts engage learners in tasks that could be used to generate and deliver CDF, the differences in the agents might affect the content of the feedback and whether emphasis or priority is placed on some facets (based on the agents’ knowledge and expertise).

My transition to a new research context has provided rich opportunities for work, exploration, and investigation of educational issues, including cognitively diagnostic feedback, which extend across contexts. I greatly welcome the opportunity to connect with anyone interested in discussing these topics further. Please email me: maryam.wagner@mcgill.ca

References

Alderson, J. C. (2005). Diagnosing foreign language proficiency: The interface between learning and assessment. London: Continuum.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests (Vol. 1). Oxford University Press.

Greenberg, I. (2012). ESL needs analysis and assessment in the workplace. In P. Davidson, B. O’Sullivan, C. Coombe, & S. Stoynoff (Eds.), The Cambridge guide to second language assessment (pp. 178-181). Cambridge University Press.

Hamdy, H. (2009). AMEE guide supplements: Workplace-based assessment as an educational tool. Guide supplement 31.1–Viewpoint. Medical Teacher, 31(1), 59-60.

Hartz, S., & Roussos, L. (2008). The fusion model for skills diagnosis: Blending theory with practicality (Research Report No. RR-08-71). Princeton, NJ: Educational Testing Service. Retrieved from http://www.ets.org/Media/Research/pdf/RR-08-71.pdf

Huhta, A. (2010). Diagnostic and formative assessment. In B. Spolsky & F. M. Hult (Eds.), The handbook of educational linguistics (pp. 469-482). Oxford: Wiley-Blackwell.

Jang, E. E. (2005). A validity narrative: The effects of cognitive reading skills diagnosis on ESL adult learners’ reading comprehension ability in the context of Next Generation TOEFL (Unpublished doctoral dissertation). University of Illinois at Urbana-Champaign.

Jang, E. E., & Wagner, M. (2014). Diagnostic feedback in the classroom. In A. J. Kunnan (Ed.), The companion to language assessment (pp. 693-711). Wiley-Blackwell.

Leighton, J. P., & Gierl, M. J. (Eds.). (2007). Cognitive diagnostic assessment for education: Theory and applications. Cambridge: Cambridge University Press.

Nichols, P. D., Chipman, S. F., & Brennan, R. L. (Eds.). (1995). Cognitively diagnostic assessment. Hillsdale, NJ: Lawrence Erlbaum.

Nicol, D.J., & MacFarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. Medical Teacher, 29(9-10), 855-871.

Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77-84.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189.

Wagner, M. (2015). The centrality of cognitively diagnostic assessment for advancing secondary school ESL students’ writing: A mixed methods study (Unpublished doctoral dissertation). Ontario Institute for Studies in Education, University of Toronto, Toronto, Ontario, Canada.

Wigglesworth, G. (2008). Task and performance based assessment. In Encyclopedia of language and education (pp. 2251-2262). Springer US.
