An exploration into the criteria used in assessing design activities with adaptive comparative judgment in technology education.
Abstract
The use of design assignments for teaching, learning, and assessment is considered a signature of technology education. However, there are difficulties in the valid and reliable assessment of features of quality within designerly outputs. In light of recent educational reforms in Ireland, which see the introduction of classroom-based assessments centring on design in the technology subjects, it is paramount that the implementation of design assessment is critically considered.
An exploratory study was conducted with a first-year cohort of initial technology teacher education students (N = 126), which involved them completing a design assignment and a subsequent assessment process through the use of adaptive comparative judgement (ACJ). In considering the use of ACJ as a potential tool for design assessment at post-primary level, data analysis focused on the criteria used for assessment. Results indicate that quantitative variables, i.e., the amount of work done, can significantly predict performance (R² = .333, p < .001); however, qualitative findings suggest that quantity may simply align with quality. Further results illustrate that a significant yet practically meaningless bias may exist in the judgement of work through ACJ (φ = .082, p < .01) and that there was a need to use varying criteria in the assessment of design outputs.