Engaging Students: Essays in Music Pedagogy


From Distress to Success: Collaborative Learning in Music Theory Assessments

Deborah Rifkin, Ithaca College

In music theory and sight-singing classes, we often assess student learning individually in hearings, in which students conduct, clap, tap, sing, and/or play the piano for their teacher. From a student’s perspective, these assessments can be harrowing because they require on-the-spot application of complex concepts under time pressure. From a teacher’s perspective, these assessments are not always accurate because students are too nervous and stressed to perform up to their abilities. The assessment itself can create debilitating anxiety that impedes performance. For years, I followed traditional assessment techniques but grew increasingly dissatisfied with the results. For a particular kind of student, the assessment itself promoted acute frustration and self-loathing that eroded morale and subsequent motivation to learn.

Cooperative Learning as an Antidote to Test Anxiety

The recognition that test anxiety negatively influences performance is not new. In the 1930s, test anxiety received considerable attention through the work of Luria (1932), Neumann (1933), and Brown (1938a, 1938b). Several decades later, Sarason (1958, 1960, 1961, 1975) studied the relationship between test anxiety and test performance, concluding that higher test anxiety correlated with lower test performance. More recent meta-analyses also support this claim (Hembree 1988; Seipp 1991; Smith & Smith 2002).

While test anxiety can be present in any field, it is often heightened in music because one’s identity, sense of self, and personal vulnerability are bound up in the performance. For some students, the results of evaluations in music classes form the basis of their sense of identity as musicians. Consequently, feelings of fear or embarrassment can be magnified by the student’s intense personal investment in the outcomes. P.J. Howard’s neuroscience research supports the idea that negative feelings, such as fear and embarrassment, can interfere with a learner’s ability to process information (Howard 2006). In addition, unlike a written test, in which a student usually has the opportunity to consider responses and modify them if necessary, a music performance affords only one opportunity for students to demonstrate their skills. An effective performance must not only be correct, but also creative, emotional, and communicative (Mitchell 2011). The inherent time pressure makes assessments of music performance particularly vulnerable to test anxiety and false outcomes; indeed, Hill and Eaton (1977) observed that the degree of time pressure during an exam affects the anxiety/performance relationship.

One antidote to the stressful, hierarchical environment of a one-on-one performance assessment between teacher and student is to depressurize the situation through peer interaction and self-assessment. Small-group work can transform the assessment into a supportive team exercise, keeping students motivated and energized. The benefits of cooperative, small-group work are well established. Seminal studies and meta-analyses published by Johnson and Johnson (1989, 1999, 2004) indicate that cooperative learning raises the achievement of all students, builds positive relationships among them, and gives them experiences necessary for healthy social, psychological, and cognitive development. In an extensive study involving over a thousand students, Stevens and Slavin (1995) likewise found that cooperative learning had positive effects on academic achievement and social relations.

Aware that the traditional method had flaws and knowing the benefits of cooperative learning, I revised my performance assessments in the hope of engaging deeper levels of learning and creating a more positive student experience. Changing the assessment into small-group exercises enables a student to undertake multiple roles (performer, listener, assessor) while benefitting from the more relaxed atmosphere of peer-to-peer learning. For those in teaching situations where individual hearings are mandated (for example, large departments with shared grading schemas), or for those not keen to jettison the one-on-one format, group assessments could also serve as an alternate, ancillary method.

Group Assessments

In my revised assessments, which I call practicums, students perform tasks similar to those in traditional hearings, yet instead of performing for the teacher, they perform for their peers by following a guided script of activities. At the beginning of the semester, students choose two peers to form an assessment group, and the group sets a 45-minute time when all three can meet outside of class during the weeks an assessment is due, usually three to four times a semester. In the group, each student performs for 10 minutes, just as in the traditional format, while the other two listen. When in the listener’s role, students have specific objectives of their own, so they are not listening passively. For instance, while the performer improvises a tune in a particular mode, the listeners try to identify the mode. In addition, the listeners provide feedback to the performer about what went well and what didn’t. Once areas for improvement are identified, the group brainstorms ways to improve the performance. Appendix 1 provides a sample group assessment that I’ve used in a first-year seminar for non-music majors.

In these practicums, students undertake multiple roles: performer, listener, and assessor. Through group interaction, students gain insights into different learning styles and develop skills for explaining their perceptions. Compared to traditional hearings, students participate in a much fuller learning experience, traversing all stages of Rifkin and Stoecker’s taxonomy for music learning: recognize, imitate, conceptualize, apply, improvise, and evaluate (Rifkin & Stoecker 2011).

After the practicum, students assess their own work individually by completing an online form with short-answer questions about their experience. Following recommendations by Diane Hart (1999), I use evaluative questions that force students to think about their work and to consider the extent to which they achieved the learning goals of the activity. The first question lists the learning objectives of the practicum and asks the student to explain how those objectives were (or were not) met. The second question asks which of the activities was the student’s favorite and why. In answering it, students evaluate not only their own learning but also their interactions and proclivities. The last question is open-ended and provides a space for the student to share confidential concerns or observations.

I grade the surveys pass/fail: if a student responds honestly and meaningfully to the questions, he or she passes. For the first practicum, I help students understand what I consider meaningful by providing models of both good and poor responses. In addition, after each practicum I anonymously share notable insights students wrote on their surveys, which not only reveals how the activities benefitted their colleagues (or did not), but also models what a meaningful response looks like. Understandably, some may be wary of assigning a grade based upon self-assessment rather than outcome, especially for an aural skills course. I share this concern and will address it later. An ancillary benefit is that grading these surveys takes me under a minute each, considerably less time than administering one-on-one hearings.

Outcomes

Nearly all students who have experienced both traditional hearings and my revised practicums report that the practicums are less stressful and more fun. In response to the third question on the reporting form, which asks students if there is anything else they’d like to share with the teacher, students often describe how much more they like these group assessments than the traditional hearings. Here are a few typical student statements:

“Thanks for making us do these this way. . . . It is a much more relaxed environment.”

“I think these practicums were very helpful. I could sit in a practice room for 3 hours and memorize everything I had to play or sing, [for a hearing,] but it does not have the same effect. I learn more by my mistakes in a comfortable learning environment with my peers than I do in a formal assessment setting with an instructor. I like learning from my mistakes and my peer’s mistakes. It really makes the practicum interesting, fun, and informative!”

“Being a student with learning disabilities I am focused on how I learn and what styles of learning work best for me. One thing that stuck out with the practicums is that I retained the material much more than I would with material for hearings. I believe that because the high level of stress and anxiety that hearings often produce was missing during the practicums that material was retained. There often would be so much stress surrounding hearings that once the hearing process was over the material that was assessed was quickly forgotten.”

Importantly, students not only describe a less stressful environment; they are also specific about the higher quality of learning, the diversity of feedback, and the greater retention of material.

From a student’s perspective, the group assessments provide an improved experience; from a teacher’s perspective, however, there are some potential drawbacks. First, students’ self-reporting of skills acquisition may not be as informed or reliable as a teacher’s evaluation. For students who did not get overly anxious, I had a better grasp of their skill level when I heard them perform for me individually. Second, individual appointments yielded a finer calibration of grades (A-F) compared to the pass/fail ratings of group assessments. Third, there is the aforementioned problem of evaluating self-assessments rather than skills acquisition. Because practicums are only one of several kinds of assessment used in my class (exams, quizzes, papers, in-class performances, compositions), I remain comfortable forgoing some evaluative capacity in favor of a better student experience, with deeper learning and richer feedback from peers. Certainly, the grading rubric for the course needs to be planned carefully so that passing grades on group assessments cannot allow a student to advance to the next course without the requisite skills. I’ve found that even when the group practicums are worth relatively little (say, 10% of the course grade), students remain intensely invested in the outcomes. Interestingly, even students who tend toward apathy on traditional assessments become engaged in group work because of the peer interaction.

To mitigate these disadvantages for the teacher, I have considered incorporating more formal peer-review processes. Specifically, I will pursue suggestions by Michaelsen and Fink (2004), who recommend that peer evaluations be included in the grade calculation for all group work. Their peer evaluation procedures allow students to assess the overall contribution of the other members of their group, yielding a number (or percentage multiplier) to be included in a student’s course grade. Although introducing peer review won’t address the lack of expert evaluation of performances, the confidential, crowd-sourced element of Michaelsen and Fink’s peer-review procedures will enable a more granular grade for group assessments than pass/fail.
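To illustrate with a hypothetical three-person group (the numbers and procedure here are my own sketch, not Michaelsen and Fink’s prescribed formula): suppose each member confidentially distributes 100 points between the two other members, so the average member receives 100 points, and each student’s multiplier is the total points received divided by that 100-point average. A student awarded 55 and 50 points by her two peers would earn a multiplier of 1.05, raising her group-assessment grade by five percent, while a teammate awarded 45 and 50 points would receive 0.95 of the group grade.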

Creative Commons License
This work is copyright 2013 Deborah Rifkin and licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.