
Here in Pennsylvania, we are currently mired in educator effectiveness. Before I left the elementary music classroom in 2007, my effectiveness as a teacher was measured by variations on these steps:

1. Around May 1, I would run into my principal by chance in the hall. He or she would inform me that they had forgotten to observe my class that year and that our spring performance would therefore serve as my evaluation.

2. In mid-May, I would herd approximately 100 kindergarten students into our gymatorium. In between tears, loud exclamations of “Hi, Mommy!” accompanied by violent waving, dresses pulled over faces to hide from the audience, and other manifestations of 5-year-olds’ stage fright, we managed to sing, play instruments, and move. I may or may not have noticed my principal standing in the back of the room.

3. A few days later, I was called into the office, told everything was great, and asked to sign a paper saying just that. Then I went back to my classroom.

Two significant events in the accountability landscape have occurred in Pennsylvania since then. In 2010, the Bill & Melinda Gates Foundation awarded Pennsylvania an $800,000 Momentum Grant. The purpose of the grant was to develop an evaluation system that included student achievement as one significant part. The Pennsylvania Department of Education (PDE), working with other stakeholders, closely examined Charlotte Danielson’s revised 2011 Framework for Teaching Evaluation Instrument and piloted it in 2010-2011 with three school districts and one intermediate unit. This measurement tool included four domains on which teachers would assess themselves and also be assessed by their supervisor:

• Planning and Preparation
• Classroom Environment
• Instruction
• Professional Responsibilities

In the tool, each domain included components describing specific behaviors at four levels: Distinguished, Proficient, Needs Improvement, and Failing.

Then, in December 2011, Pennsylvania was awarded a Race to the Top grant, along with Arizona, Colorado, Illinois, Kentucky, Louisiana, and New Jersey. As part of that application, PDE outlined plans for a statewide educator effectiveness system. The goal stated in the application was to “implement new teacher and principal evaluation tools and processes to ensure effective educators in every classroom and building” (p. 4).

While PDE and other entities were working to develop operational systems to measure educator effectiveness, the Pennsylvania state legislature was working to craft and pass policies to put those systems in place. In 2012, the legislature passed Act 82, which required the following breakdown (illustrated in the sketch after this list) to be fully implemented by 2015-2016:

  • 50% of a teacher’s rating is based on his/her evaluation using the revised Danielson measurement tool.
  • 15% is based on building-level data, which include test scores, value-added assessment calculations, graduation and promotion rates, and participation in AP courses.
  • For those who teach content areas or grade levels included on state tests, 15% is based on teacher-specific data.
  • 20% is based on teacher-specific elective data, which can include tests, projects, and portfolios. For those who teach content areas or grade levels not included on state tests, the entire 35% of teacher-specific data falls into this elective data category.

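To make the arithmetic concrete, here is a minimal sketch of how those weights might roll up into a single score. The 0-3 point scale (Failing through Distinguished), the cutoffs, and the function itself are illustrative assumptions on my part, not PDE's published calculation:

```python
# Illustrative only: how Act 82's weights might combine into one score.
# The 0-3 scale (0 = Failing, 1 = Needs Improvement, 2 = Proficient,
# 3 = Distinguished) is an assumption, not PDE's published formula.

def composite_rating(observation, building, elective, teacher_specific=None):
    """Weighted average of the Act 82 components on an assumed 0-3 scale."""
    if teacher_specific is None:
        # Non-tested content areas (e.g., the arts): the full 35% of
        # teacher-specific data is elective (SLO) data.
        return 0.50 * observation + 0.15 * building + 0.35 * elective
    # Tested content areas: 15% teacher-specific plus 20% elective.
    return (0.50 * observation + 0.15 * building
            + 0.15 * teacher_specific + 0.20 * elective)

# A music teacher rated Proficient (2) on observation and building-level
# data but Distinguished (3) on SLO-based elective data:
print(round(composite_rating(observation=2, building=2, elective=3), 2))  # 2.35
```

Notice that for a teacher in a non-tested area, the elective data carries more weight than any other component except classroom observation, so a strong SLO result moves the composite noticeably.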

As teachers responsible for content not assessed by state-mandated tests, arts educators have 35% of their educator effectiveness rating based on elective data. To collect those elective data, PDE is requiring districts to adopt Student Learning Objectives (SLOs). The purpose of SLOs is to document student learning over a defined period of time; they follow the template contained in this document (pp. 32-34). These SLOs must include the percentage of students required to attain performance indicator targets in order for the teacher to be considered Distinguished, Proficient, Needs Improvement, or Failing. For instance, an SLO might set a target such as "a given percentage of students will meet the performance indicator" and tie the teacher's rating on that objective to how many actually do.

In 2009, a year before the state received the Momentum grant, Pittsburgh Public Schools (PPS) received a separate $40 million grant from the Bill & Melinda Gates Foundation to develop an evaluation system in the city’s schools. The resulting model, Empowering Effective Teachers, was piloted in 2012-2013: 85.4% of the district’s teachers were rated Proficient or Distinguished, 5.3% Needs Improvement, and 9.3% Failing. Beginning in 2013-2014, PPS teachers identified as Distinguished, Proficient, or Needs Improvement will be labeled satisfactory. Those identified as Failing will be placed on a performance improvement plan and will have two years to improve before possibly facing dismissal. It is important to note that Pennsylvania chose not to link educator effectiveness to compensation, promotion, tenure, or retention at the state level in its Race to the Top application (pp. 53-54). However, local education agencies like PPS have the power to decide differently.

Educator effectiveness is one of the hottest topics in education right now. While it may be daunting to think about receiving a rating based on our teaching, I wish I could have received the type of feedback about my teaching that this model has the potential to foster. Now that I think about it, though, I’m not sure what the measurement tool would have to say about the stage fright of kindergartners.
