A few weeks ago I had the privilege of attending a talk hosted by the Department of Physics at the University of Guelph. The talk – Integrated Testlets for Physics and Chemistry: Multiple-Choice Testing 2.0 – was led by Dr. Aaron Slepkov, Associate Professor and Canada Research Chair in the Physics of Biomaterials at Trent University.
In the talk, Dr. Slepkov presented findings from several years of work investigating the utility of Integrated Testlets (ITs) to evaluate higher-level thinking while providing immediate feedback to students in undergraduate Chemistry and Physics classes. Using an Answer Until Correct (AUC) multiple choice format, he was able to demonstrate improvements in both student grades and satisfaction.
For those who are unfamiliar with the AUC multiple-choice format (as I was before the talk), imagine a scratch lottery ticket, which asks the user to reveal hidden information by scratching away a silvery covering. Now replace the lottery concept with, for example, five answers (a through e) to a multiple-choice exam question. The student reads the question and the available answers, then scratches away the covering over the answer they believe is correct. If they reveal a star, they know immediately that they have selected the correct answer. If they do not, they have the opportunity to rethink the question and scratch again, and they can continue until they have uncovered the correct answer.
From the student's point of view, grading is essentially immediate. By inspection, a student can quickly determine how many questions they answered by revealing one answer, and how many required two or more. Full credit is given for each question answered with a single scratch; partial credit is assigned for each question requiring more than one, with each additional scratch translating to fewer points earned.
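The scoring scheme above can be sketched in a few lines of Python. Note that the talk did not specify the exact point values per scratch, so the declining-credit table below is purely illustrative:

```python
# A minimal sketch of AUC (Answer Until Correct) scoring. The credit
# values per scratch are an assumption for illustration; actual values
# vary by instructor and were not specified in the talk.

# Credit awarded by the number of scratches needed to find the star:
# one scratch earns full credit, each extra scratch earns less.
CREDIT_BY_SCRATCHES = {1: 1.0, 2: 0.5, 3: 0.25, 4: 0.1, 5: 0.0}

def question_score(scratches: int) -> float:
    """Credit for one question, given how many answers were scratched."""
    if scratches < 1 or scratches > len(CREDIT_BY_SCRATCHES):
        raise ValueError("scratches must be between 1 and 5")
    return CREDIT_BY_SCRATCHES[scratches]

def exam_score(scratch_counts: list[int]) -> float:
    """Total credit across an exam, one scratch count per question."""
    return sum(question_score(s) for s in scratch_counts)

# A student who answered three questions on the first scratch and one
# on the second scratch earns 3.5 of a possible 4 points.
print(exam_score([1, 1, 2, 1]))  # 3.5
```

The key design property is that a student can compute this total themselves at a glance, simply by counting scratches on the card.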
On its own, AUC multiple choice struck me as a potentially powerful tool. But when combined with the development of ITs [1,2] – scaffolded structures that use AUC multiple choice to assess a student's knowledge of a topic by building sequentially from lower- to higher-order thinking – I immediately began wondering how I might use them in my own classes.
After the talk, I asked Dr. Slepkov whether AUC and ITs had been used in undergraduate design classrooms – or in any classroom that might not follow the structured empirical/quantitative logic of, say, a chemistry, physics, mathematics, or statistics course. To his knowledge, they had not.
And so I was left wondering whether the success of AUC and ITs would carry over to a software design classroom – one that involves qualitative logic, design thinking, and more best practices than hard-and-fast rules.
Fortunately, I have just received funding from the University of Guelph's Office of Teaching and Learning through their Scholarship of Teaching and Learning Fund to explore exactly these questions. In particular, the funding will be used to evaluate the effectiveness of ITs and AUC multiple choice as an immediate feedback tool for measuring student mastery of course learning outcomes in CIS3750 – a required design course in the School of Computer Science. To do this, I am looking to hire a graduate research assistant for the summer to develop a set of ITs specific to the major learning outcomes of the course. I will also hire a graduate research assistant in each of the fall and winter semesters to evaluate the utility of ITs and AUC in achieving learning outcome mastery, and to evaluate student engagement and satisfaction with this form of immediate feedback.
If you are a graduate student who is interested in taking part in this research, please contact me.
[1] Slepkov AD, Vreugdenhil AJ, Shiell RC (2016). Score Increase and Partial-Credit Validity When Administering Multiple-Choice Tests Using an Answer-Until-Correct Format. J. Chem. Educ., 93, 1839-1846.
[2] Slepkov AD (2013). Integrated Testlets and the Immediate Feedback Assessment Technique. Am. J. Phys., 81, 782-791.