Engaging Students: Essays in Music Pedagogy, vol. 2


Part 3: Assessing Problem-Based Learning

Kris Shaffer, University of Colorado–Boulder

How does one assess student work in a problem-based learning (PBL) environment? As we have seen, problem-based learning often centers on coursework other than daily homework assignments and regular quizzes and exams. Without these regular check-in moments, how do we provide consistent feedback to students? And how do students know where they stand in the course (passing or failing, keeping or losing a scholarship, etc.)? Likewise, problem-based courses often involve collaborative projects in which students do not perform identical work. How do we assess students fairly when they perform different work and, therefore, cannot be assessed identically?

While these questions are pressing in any teaching situation, the emphasis on open-ended learning experiences and collaborative work in PBL environments makes them especially urgent: when we adjust our learning objectives and teaching practices, we must take a correspondingly fresh approach to feedback, assessment, and grading.

Purpose and ideology of assessment

Assessment and standards are elephants in almost every room where discussions of education are underway.
Jesse Stommel, “MMDU: ‘I Would Prefer Not To.’”

Before addressing any questions about assessment, instructors need to make explicit the purpose of their assessments. No system of academic assessment is intrinsically good, only good for a purpose, and that purpose must be established first.

The learning goals, and thus the purposes behind assessment practices, will change depending on the students, instructor, institution, and many other factors influencing a course. My purpose as an educator is largely not to instill content knowledge. My general goal is for my students to learn how to learn, and to gain skills that will allow them to continue learning independently once the course is over—in other words, general intellectual maturity. My specific goal in music theory courses is for students to gain fluency in the discipline of music. While there are “grammar” and “vocabulary” elements to fluency in music (as there are in a language), fluency and intellectual maturity require much more than ready knowledge of musical “vocabulary” and “grammar.” These aspects of musical “content” are important primarily to the extent that they support students’ intellectual growth within the discipline.

In line with this goal, assessment serves three purposes: 1) assessment determines whether a student has mastered a concept sufficiently to apply it and build on it in subsequent coursework or professional activity (a combination of what some educators call summative assessment and formative assessment); if not, 2) assessment identifies areas that need correction and provides feedback for improvement (more specific formative assessment); finally, 3) assessment guides students to assess themselves better, so they can better direct their own learning and work. (For more detail on my goals and practices in assessment, please see my blog post, “Why Grade?”)

Given these goals, how do we assess student work fairly when students do different work? And how do we ensure regular, helpful feedback outside of a daily or weekly homework schedule?

Fair grading of differentiated student work

The question of fairness arises at the summative stage, when grades are assigned, under the assumption that grades are rewards and that rewards must be doled out consistently: all A-minuses must reflect comparable work and/or mastery of the material, and all A-minuses must be incrementally better than all B-pluses. Framed this way, fairness is primarily a ranking question, not a question of instructing students and guiding them toward intellectual growth. If we clearly define specific learning objectives and remove ranking from consideration, the question becomes much easier to answer.

For example, I recently taught a vertically integrated (master’s-level and undergraduate students together), interdisciplinary (music and computer science), project-based course built around a single collaborative project seeking an answer to the question, “What different harmonic practices are represented in the McGill Billboard Pop/Rock Corpus?” The class divided into three groups of mixed background and experience level. Because of these differences, the work each student did sometimes differed radically from that of the others.

To accommodate these differences, we used a method of contract grading for summative assessment. (See the syllabus and sample contracts for musicians and computer scientists for this course.) Students were assessed according to four conceptual areas that anchored the content of the course, and the students themselves (with guidance) proposed the skills and work output that would demonstrate requisite knowledge of each concept, given their discipline, level, and interests.

Removing the top-down nature of the assessment standards minimized concerns over fairness. Students proposed their own definition of mastery for each standard, and I negotiated that standard with them before much work had been performed. Further, the syllabus left open the option of re-negotiating the contract after work began, if a standard proved too high or if the research direction shifted on the basis of the group’s findings. This allowed complaints (or regrets) to be acted upon in ways that led both to satisfactory research results and to student satisfaction with the assessment standards.

At the end of the course, final summative assessment was quick and easy. Contract expectations were articulated clearly, so students almost universally fulfilled them, and the syllabus laid out a simple formula for final grades that left no surprises. Students learned a lot and produced high-level work (see “Results” on the course website), and there were zero complaints about assessments.
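
To make “a simple formula” concrete, here is one hypothetical contract-to-grade mapping in the spirit of this approach. It is a sketch of my own devising, not the formula from the linked syllabus: the contract fixes a target grade, and the final grade scales with the share of contracted items fulfilled at the negotiated standard.

    % Hypothetical contract-to-grade mapping (illustration only;
    % the course's actual formula appears in the linked syllabus)
    \[
      G_{\mathrm{final}} = G_{\mathrm{contracted}} \times \frac{n_{\mathrm{fulfilled}}}{n_{\mathrm{contracted}}}
    \]
    % G_contracted: the grade negotiated in the contract
    % n_contracted: the number of contracted work items
    % n_fulfilled:  the number judged to meet the negotiated standard

Under such a mapping, a student who contracts for an A and fulfills every contracted item earns the A; unfulfilled items lower the grade proportionally, with no surprises at the end of the term.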

While this whole course was built around a single collaborative project, instructors of undergraduate music theory courses that incorporate PBL will likely do so in smaller units. For instance, in a unit on keyboard-style voice-leading, I gave second-year theory students a set of exemplars and asked them to work in small groups to analyze the exemplars and use them as the basis of a voice-leading primer, one that expressed as many of the contrapuntal characteristics of those exemplars as possible in as few basic principles (or “rules”) as possible. Along the way, we devoted some class time to comparing notes (and stealing ideas!). The primers had to express the ideas clearly, concisely, and in the students’ own words, and the students were graded on self-evaluations (similar, but not identical, to contracts), in which they made the case to me that their contributions to the primer demonstrated mastery of the musical concepts on which the unit of study focused (in this case, a broad, synthetic understanding of tonal harmony and voice-leading).

Even smaller PBL problems can be assessed this way. The voice-leading primer focused on broad concepts after many individual elements had already been explored in other ways, but contract grading (or self-evaluation) can be employed for narrower concepts as well. For example, the same primer project can be applied to a smaller topic, like voice-leading in augmented-sixth chords or composing a third-species counterpoint exercise (both of which could be chapters in larger primers on tonal chromaticism or species counterpoint). Contract-based grading can also apply well to projects like Philip Duker’s “Day in the Life of a Forensic Musicologist.” In each case, core musical concepts can be identified, and collaborative student groups can then articulate project components for each group member that will allow them to develop, and demonstrate, mastery of those concepts.

Contract grading is not the only way to accomplish fair assessment of differentiated student work. The core elements of this contract-based system (clearly articulated expectations, student input into standards, and room for renegotiation) can be found in other assessment systems that can likewise eliminate questions of fairness when students do not perform identical work.

By balancing clarity, flexibility, and student agency, we can generate fair and meaningful assessments of differentiated student work.

Regular feedback without regular assignments

Regular formative feedback is essential to student growth. As discussed above, my ultimate goal is for students to self-assess along the way. However, I may need to guide them away from dependence on top-down grades through various forms of instructor-, peer-, and self-evaluation, until they reach the point where they can confidently and reliably manage their own progress. With that progression in mind, I offer below formative feedback possibilities for multiple stages along the way.

In simpler PBL settings, a rubric, checklist, or set of objectives can frame both feedback and self-assessment. At SMT 2013, I described a graduated process for generating a species counterpoint portfolio: students worked in pairs to compose and perform two examples of well-formed, two-voice counterpoint for each species before proceeding to the next. Dividing a project into a series of small components with sequential soft deadlines (or hard deadlines followed by reassessment opportunities) minimally disturbs the “traditional” progression of class meetings and homework assignments while helping students focus on the bigger picture of the larger project.
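
As an illustration of such a checklist, the sketch below reconstructs what a portfolio checklist for that project might look like. It is my own reconstruction from the description above, not the actual SMT 2013 materials; the two-example requirement follows the description, while the list of species assumes the standard five-species sequence.

    Species counterpoint portfolio
    (each item: two well-formed, two-voice examples,
     composed and performed, before proceeding to the next)
    [ ] First species
    [ ] Second species
    [ ] Third species
    [ ] Fourth species
    [ ] Fifth (florid) species

Each item can double as a soft deadline, giving student and instructor alike a quick view of progress through the larger project.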

In inquiry-driven projects where a clear rubric or checklist is not possible (or at least not desirable), Google Drive can be a helpful tool for communication between students, their groupmates, and the instructor. The course contracts I described above were all Google Documents in Google Drive, built on a template I created. Under each conceptual heading, students articulated the planned activities that would demonstrate an appropriate level of mastery of the concept, followed by links to the work they did to demonstrate that mastery. At every stage in the process, the student kept track of their progress in the document, which I reviewed regularly, adding comments with critiques or suggestions and, at the end of the course, a summative assessment. Jan Miyake describes a similar process for non-PBL settings in her essay in this volume. In fact, even in “traditional” homework/quiz/test-based classes, having a single document per student in which course objectives are defined, coursework is linked, and progress is discussed can be a very helpful pedagogical tool.
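
For readers who want to adapt this approach, the sketch below shows one possible layout for such a per-student document. It is a hypothetical reconstruction from the description above, not the course’s actual template (which is linked earlier), and the concept headings shown are invented placeholders.

    Concept 1: [conceptual area; placeholder name]
      Planned activities: the student's proposal of work that will
        demonstrate an appropriate level of mastery
      Evidence: links to completed work, added as it is produced
      Comments: instructor critiques and suggestions, ongoing;
        a summative assessment at the end of the course
    Concept 2: ...
    (one heading per conceptual area; four in the course described above)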

Lastly, since many PBL projects involve collaboration, and since my ultimate goal is to help students assess themselves, a measure of peer assessment can be valuable. This is the core of Cathy Davidson’s crowdsourced contract grading model, in which the class collaboratively sets summative assessment standards, evaluates student work in light of those standards, and provides formative feedback along the way. Much of the instructor’s assessment work thus involves giving feedback on the peer feedback. Knowing that their work will be presented to peers motivates students to produce quality work, and peer assessment helps students learn how to articulate goals and provide good feedback on progress toward those goals, skills they can then apply to their own work. It also builds working relationships among peers and multiplies the number of feedback providers whom students can consult.

Conclusion

Asking how to assess new kinds of student work in PBL re-opens a question we have left unasked for too long: what student work should we assess, and why? As educators, we should constantly question our pedagogical practices, especially those surrounding assessment. As Peter Elbow argues, our assessment practices can hinder learning and harm students. We should therefore carefully consider how our assessment practices fulfill our pedagogical goals in the specific institutional, musical, and human contexts in which we teach.

The techniques I offer here have worked in specific contexts. While they may be helpful in other contexts, I hope that those reading this article will take the opportunity first and foremost to lay out their pedagogical goals and to think critically and carefully about which of the assessment practices raised here or elsewhere will help them accomplish those goals in the contexts in which they teach.

Part 1: Problem-Based Learning in the Music Classroom, A Rationale, Daniel Stevens
Part 2: Applying Problem-Based Learning, Philip Duker

This work is copyright © 2014 Kris Shaffer and licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.