Evaluating Public Art: A Webinar Recap

by Katherine Gressel

On August 16, 2012, I participated in the webinar Public Art Evaluation: Principles & Methodology for Measuring Social Impact, organized through the Public Art Network (PAN) of Americans for the Arts. Also participating were Dr. Elizabeth Morton, Professor in Practice, Urban Affairs and Planning, Virginia Tech, Alexandria campus; Angela Adams, Public Art Administrator, Arlington Cultural Affairs, Arlington, VA; and Pam Korza, Co-Director, Animating Democracy, Americans for the Arts. The presenters discussed their experiences applying different evaluation frameworks and tools in three very different public art settings. PAN director Liesel Fenner opened the discussion by acknowledging the challenge of the task: "How do we measure an art form that is elusive to traditional measurement tools?" Because there is no single definition of public art, it is also important to define "what exactly [we are] measuring and why [we are] measuring it," and to use methods specific to the unique "context" of each project or site.

As a participant, I addressed the challenge of collecting audience responses to permanent public art. I discussed my 2007 arts administration graduate thesis research, in which I observed and interviewed passersby, on several occasions, at a series of community murals in two public housing developments in Crown Heights, Brooklyn. I based my process on the "public art watch" method pioneered by Harriet Senie, in which her graduate students intercept people at public art sites and ask about their opinions and knowledge of the work. I also discussed the Association for Public Art's Museum Without Walls program, which tracks how many people engage with public art and which sites they visit the most (visitors are encouraged to leave comments and photos).

Adams' and Morton's presentation focused on two pilot evaluations of different aspects of Arlington's public art program, designed and implemented by Morton's Virginia Tech students (and also summarized on the Public Art Network blog). These speakers emphasized the importance of integrating evaluation into all stages of a project, including establishing "baseline data" such as "people's attitude right now" about a future public art site, which can be compared to attitudes after an artwork is completed. The presenters also emphasized identifying a "variety of stakeholders" who may be impacted by a public art project. In their analysis of Arlington's Water Pollution Control Plant fence enhancement site (slated to be a future public art site), students surveyed the official advisory committee, plant workers, and park users. Adams and Morton introduced a word-cloud-generating tool to compare and contrast the words most commonly used by these three stakeholder groups to describe their opinions about the site and their expectations for public art. Another group of Arlington students discovered, based on artist surveys, that artists did very little site analysis before beginning a project. This finding led to the art program's development of new technical training for artists and new outreach expectations in their contracts. It was reassuring to hear examples of an evaluation effort so heavily influencing the future practices of a public art organization. However, I am curious how ongoing evaluation can continue without the partnership of the Virginia Tech students.
Pam Korza, in contrast to the other presenters, focused on temporary and participatory work that is "intentional about contributing to community, civic, or social change in some way." She cited projects that related to each of the six "impacts" in Animating Democracy's "Continuum of Impact" framework, which is used to collect data on changes in knowledge, discourse, attitudes, capacity, action, and policies resulting from art initiatives. Korza addressed the danger of making unsupported claims about what art can do: concrete policy change, for example, can hardly ever be attributed to a single factor or project. Instead, it is important to show evidence such as what people know or believe after a project, or how many people participated. When presenting data, Korza believes "there should be no numbers without stories, and no stories without numbers." Yet data collection tools do not necessarily need to be high-tech: Korza cited one program leader's use of manila folders, labeled with different "impacts," to collect and categorize every anecdote of success she heard.

Reflecting on all the presentations, Adams mentioned being particularly struck by "the at times fuzzy line between evaluation and participation." My interviews with public housing residents, for example, demonstrated that the public art on its own was not really educating and engaging onlookers in the way the presenting organizations had hoped, even though most people liked and identified with the murals. However, both Morton's students and I found that interviewing people about their existing opinions and knowledge of public art made them want to know more and get more involved. Similarly, when people engage with public art tours or programs, they become better equipped to discuss the artwork's meaning and impact. In such circumstances, how can we isolate the impact of the artwork versus the impact of our engagement efforts? How can we assess the lasting impact of arts and civic engagement projects (like the ones described by Korza) and pinpoint the specific role of art? Perhaps the most important takeaways from the discussion are that evaluation will be most feasible and most beneficial in projects and programs with built-in community engagement components and well-defined community stakeholders and goals, and that engagement and evaluation can and should take place at all stages of a project.
Fall 2012 | Vol 4, Issue 2