Dr. DiBattista also shares his statistical approach for weeding out
poor-performing questions, and he urges professors to put more time into
writing, reviewing and revising their tests. “We have an ethical
obligation to do this work. If we are measuring our students’ performance
and have no idea how well our tests are doing, it’s entirely possible we are
measuring very badly.”
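The article doesn’t spell out those statistics, but the standard technique is classical item analysis: compute each question’s difficulty (the share of students answering it correctly) and its discrimination (how strongly success on the question tracks overall test performance), then review anything that scores poorly. A minimal sketch in Python, with thresholds chosen purely for illustration:

    # Classical item analysis for flagging weak multiple-choice questions.
    # Illustrative sketch only; the article does not give Dr. DiBattista's
    # exact statistics.

    def item_statistics(responses):
        """responses: one list per student; 1 = correct, 0 = incorrect."""
        n = len(responses)
        totals = [sum(student) for student in responses]   # each student's score
        mean_t = sum(totals) / n
        var_t = sum((t - mean_t) ** 2 for t in totals) / n

        stats = []
        for i in range(len(responses[0])):
            scores = [student[i] for student in responses]
            difficulty = sum(scores) / n                   # proportion correct
            # Point-biserial discrimination: correlation between getting this
            # item right and doing well on the test overall.
            cov = sum((s - difficulty) * (t - mean_t)
                      for s, t in zip(scores, totals)) / n
            var_i = difficulty * (1 - difficulty)          # variance of a 0/1 item
            disc = cov / (var_i * var_t) ** 0.5 if var_i and var_t else 0.0
            stats.append((i + 1, difficulty, disc))
        return stats

    # Flag items that nearly everyone gets right or wrong, or on which
    # strong students do no better than weak ones.
    exam = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 1]]
    for item, diff, disc in item_statistics(exam):
        if diff < 0.2 or diff > 0.9 or disc < 0.2:
            print(f"review Q{item}: difficulty {diff:.2f}, discrimination {disc:.2f}")

A question with near-zero or negative discrimination is one the strongest students miss as often as the weakest, which usually signals ambiguous wording or a miskeyed answer.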
But in the digital era, testing formats themselves may change. “Multiple-choice
testing is on its last legs,” asserts Mark Gierl, professor of educational
psychology and holder of the Canada Research Chair in Educational
Measurement at the University of Alberta. He researches and uses digital
exams and thinks they will eventually become the norm; he says the switch will
save schools millions of dollars, since dealing with paper eats up two-thirds
of most institutions’ assessment budgets.
Computers also allow for multi-part answers, drop-down menus,
drawing or manipulating maps, videos or sound clips, graphs and other
visuals. “A computer is a very powerful tool,” says Dr. Gierl. Duplicating a
paper-based format on a computer “is kind of a waste of having it online
in the first place,” he says. The software can mark the exam as the student
writes it. (For more on a software program that specializes in math and
the sciences, see the box “Maple T.A.’s web-based testing for math courses”
below.)
Algorithms can mark just about anything, and well. To test the effectiveness of the array of computer-based testing platforms on the market,
in 2012 the Hewlett Foundation staged the Automated Student Assessment Prize, offering $60,000 to the product that could most reliably
mark short-answer essays. The results: all nine software entries met or
exceeded the marking accuracy of human graders. Dr. Gierl is not surprised. “This is just a sliver of what computers can do right now,” he says.
Such programs have been widely adopted by private organizations
offering large-scale testing of English-language competency and by the
company that provides the Graduate Record Examinations, or GRE.
Universities, on the other hand, have been much slower to bring computers on board to mark tests or long essays, although the data show they’re
capable of both. “We prefer humans to score essays,” says Dr. Gierl.
Technology aside, the best new ideas in assessment are those that are
falling in line with emerging pedagogical research. That includes a shift
towards looking at outcomes as a more integral part of course design.
“It used to be we’d think about what content we’d want to cover. Now
we’re asking, what do you want students to be able to demonstrate by
the end of a course?” says Donna Ellis, director of the Centre for Teaching
Excellence at the University of Waterloo. She runs a four-day course-redesign
academy that’s driven by the idea of course outcomes and how they influence
assignments. The program caused one faculty member, for instance, to realize
that the final project and the final exam for the course were assessing the
same skill set.

Maple T.A.’s web-based testing for math courses

Of the numerous computer-based testing products on the market, one of the
most innovative comes from a company launched by University of Waterloo
professors. Maple T.A. allows instructors to write questions directly on a
computer using math notation; the students answer in kind. It uses algorithms
to mark the answers, accounting for variations in correct answers and for
different ways of notating an answer (for instance, x+y is the same as y+x).
The software, which is used by an estimated 100,000 students around the
world, can also offer so-called adaptive testing, where a student who’s
getting everything wrong is dropped down to an easier set of questions.
“You can use that for placement testing, or just for homework and practice
testing,” says Paul DeMarco, director of development for Maplesoft, which
makes the software.
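The notation-independent marking described above amounts to comparing answers symbolically rather than as strings. Here is a minimal sketch of that idea using the open-source SymPy library (purely illustrative; Maple T.A. is built on Maplesoft’s own mathematical engine, not SymPy):

    # Mark a free-form math answer by symbolic equivalence, so notational
    # variants (x + y vs. y + x) all count as correct. Illustrative only:
    # Maple T.A. uses Maplesoft's engine, not SymPy.
    from sympy import simplify, sympify

    def mark_answer(student_answer: str, correct_answer: str) -> bool:
        try:
            student = sympify(student_answer)
            correct = sympify(correct_answer)
        except Exception:
            return False  # unparseable input is simply marked wrong
        # Two expressions are equivalent when their difference simplifies to zero.
        return simplify(student - correct) == 0

    print(mark_answer("x + y", "y + x"))                # True
    print(mark_answer("(x + 1)**2", "x**2 + 2*x + 1"))  # True
    print(mark_answer("x - y", "y - x"))                # False

An adaptive layer like the one Mr. DeMarco describes could then simply watch the stream of right/wrong results and switch to an easier question pool after a run of incorrect answers.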
Dr. Ellis has found the learning-objective approach makes creating
rubrics and designing and weighting assignments much more straightforward.
Beyond that, she encourages assessments that don’t simply
measure skills and knowledge but are teaching tools in themselves.
Meanwhile, studies are showing that students respond best to quick
feedback. “The frequency of feedback is more important than getting
high-quality or individualized feedback,” says U of Guelph’s Dr. Aspenlieder.
The combination of these two ideas is leading to a rise in so-called
formative assignments: smaller projects throughout the term that offer
stepping stones to later work. Assignments such as quizzes on readings,
essay outlines and bibliographies give students the tools to succeed at
summative assessments like final essays and presentations.
Ms. McNeilly, the journalism professor at Ryerson, came across an
approach called minimal marking, first proposed in the 1980s, as a way
to separate language mechanics from style and content. She finds her
students’ papers are often riddled with grammar, punctuation and other
basic errors. “They’re really bright, capable students, but they never learned
this stuff,” she says. “They don’t know what a subject or a verb is.”
She has been following research showing that children who know their
multiplication tables do better at advanced mathematics, and she feels the
same is true of basic grammar: “When you learn the basics, only then do
you have the scaffolding to express yourself more articulately.” Moreover,
other studies, she says, show that people marking essays often don’t look
past those distracting surface errors to assess the content of a paper.
She admits the name “minimal marking” is misleading: the technique
actually takes longer than the usual way of correcting errors (for more
details, see the box “Minimal marking: how it works”). But it has
helped her separate content from errors, and it gets students motivated to
finally bone up on their grammar.
Meanwhile, the desire to speed up marking is creating interest in student peer assessment, which in theory can allow an instructor to offer students more regular feedback. “Students think peer review just makes the
job easier for the instructor, but in fact it helps raise the bar. There’s some
meta learning that goes on when you’re reading someone else’s paper,” says