In part 1 of this story, I examined the complex and often contradictory relationship between Arts Council England and the UK government. I suggested that the Arts Council’s widely panned cuts to the classical music sector were partially enabled by this “arm’s-length” relationship and the ambiguities of control that it creates.

In part 2 of this story, I examine the implications of this leadership structure for the Arts Council’s treatment of the organizations that it funds, showing how the ambiguities of “arm’s-length” arts funding resulted in the latest brouhaha over opera audience statistics.


The Arts Council keeps the organizations that it funds in a near-permanent state of precarity.

In order to get funding, arts organizations apply to the Arts Council to become “National Portfolio Organizations” or NPOs.

Every National Portfolio Organization must reapply for its status every three to five years. During this reapplication process, the Arts Council can cut or increase the organization’s funding, or drop the organization from its National Portfolio altogether.

Thus, even England’s most treasured artistic institutions must apply over and over again to ensure their funding is maintained: neither the Royal Opera House nor the National Theater is immune from losing National Portfolio status.

The November 4 announcement that heralded cuts to many of England’s most important classical music organizations came as the Arts Council revealed which arts organizations had been chosen as National Portfolio Organizations for the next three years and which had not.

The Arts Council views this precarity as a net positive. “You sometimes feel that people think that we’re actually there to fund certain organizations,” Claire Mera-Nelson, the Arts Council’s director of music, quips: “They have received public funding, and we [the Arts Council] have been the vehicle through which they’ve received it, for decades.”

Fixed-term funding provision allows the Arts Council to ensure that arts organizations that receive taxpayer money are operating at their best: “ultimately, our job is to safeguard public investment: we don’t want to keep putting money into an organization that is causing problems.”

The Arts Council likes to position itself as a “development agency.” It doesn’t see itself as merely funding arts organizations; it also aims to nudge these organizations towards best industry practices through its funding decisions. Mera-Nelson cites, as an example, the Arts Council’s work in promoting environmental sustainability in the arts.

It is implicit in the way the Arts Council operates that the constant state of financial precarity perpetuated by its NPO program allows it to compel the organizations it funds to comply with its developmental agenda.

An unfortunate side effect of the Arts Council’s focus on “development” is its peculiar preoccupation with measuring the performance of the organizations that it funds. In an annual survey, National Portfolio Organizations must report against a series of 147 metrics in order to demonstrate compliance with the Arts Council’s developmental goals.

Some of these metrics do seem like important data to monitor (number of tickets sold; diversity statistics). But others appear rather frivolous (e.g., whether or not an arts organization has a TikTok account; whether the organization’s website has interactive games).

One can only imagine the enormous amount of time that it must take for arts organizations to fill out these surveys and the vast infrastructure that must be constructed in order to collect this data.

And one also wonders whether an obsession with metrics and measurement has the potential to create arts organizations that are more preoccupied with finding systems that quickly and efficiently tick the Arts Council’s boxes than with creating meaningful, impactful art.

The main problem with these surveys is that they reduce art to a series of discrete, measurable components.

However, art is a complex and ephemeral cultural construction: it cannot be reduced to the number of bums on seats in an opera house or the number of social media followers an orchestra has. All the alchemy, the magic—indeed, the creativity—of art can so easily be lost when it is reduced to raw data.

The Centre for Cultural Value, a cultural policy think tank based at the University of Leeds, explicitly recommends against this quantitative approach to artistic evaluation. They argue that the evaluation of the cultural sector should “[prioritize] the learning (and learning what to do differently) over monitoring goals of evaluation” and should “explore why and how things happen, as well as what.”

This more qualitative approach makes sense: a life-changing performance that plays to just twenty people is infinitely more valuable than a completely forgettable performance that attracts hundreds of people from the Arts Council’s target demographics but leaves them bored and uninterested by the end.

Indeed, it is precisely the Arts Council’s reluctance to consider the human impact of art that has led to this debacle over opera funding. Opera means a great deal to a great many people. And opera fans are passionate—even fanatic—in their love for the artform.

The Arts Council has no way of accounting for this passion in its evaluative mechanisms. Its measurement systems—by design—value raw figures of participation over the individual experiences of audience members. Simply put, an overreliance on data and metrics risks tearing the heart and soul out of artistic engagement: how people actually think and feel about the art.

The Arts Council does measure artistic impact. But it has outsourced this job to a private company called Counting What Counts Ltd., seemingly run not by artistic experts, but by economists and consultants.

This company created an “impact and insight toolkit” that uses a series of audience surveys to measure arts organizations, once again, against a set of discrete “quality metrics.” These metrics include such hazy categories as “inquisitiveness,” “growth,” “authenticity,” and “contribution.”

As any seasoned arts lover will tell you, these categories are so context-dependent as to be nonsensical. “Authenticity” means something completely different in a concert performance of Monteverdi’s Orfeo with a period orchestra than in a new promenade staging of Glass’s Einstein on the Beach. Its meaning is culturally and aesthetically contingent.

When applied universally across all forms of art (even just across all forms of opera), these terms become so multivalent as to be completely ambiguous. I am skeptical that hazy terms like “growth” can be used to compare one performance to the next, let alone to compare one arts organization to another. If these terms mean different things to different people, how are they useful metrics?

The “impact and insight toolkit,” to me, seems like a crude attempt to turn something as deeply qualitative and subjective as personal reactions to art into quantitative data. It barely scratches the surface of the complexity of thought and feeling that constitutes aesthetic impact.

But there is a cruel twist to the Arts Council’s obsession with measurement and evaluation: an NPO can work hard to measure up to the Arts Council’s various metrics and still be denied funding in the next funding round.

This is precisely what happened to organizations like ENO.

In this most recent round of funding decisions, the Arts Council opted to ignore applicants’ previous evaluation history with the Arts Council—a policy the Arts Council calls “NPO sensitivity.”

According to the Arts Council, this meant that existing NPOs received limited-to-no feedback on their performance as they prepared their applications for the 2023-2026 portfolio. Where previously the Arts Council (in its “developmental” role) had met with NPOs who were reapplying for funding and reviewed their applications in light of previous performance, the Arts Council opted to maintain radio silence with the organizations it was already funding.

Mera-Nelson argues that “NPO sensitivity” makes the application process more equitable, citing the UK government’s Nolan Principles (which govern fairness and integrity in British public life) as the underpinning for this approach: “We wanted everybody—no matter where they were starting from—to come into the conversation with the same level of advantage and disadvantage.”

However, this immediately poses the question: What is the relationship between the data that the Arts Council collects from the organizations that it funds and the Arts Council’s funding decisions? And if the Arts Council can choose to sideline its evaluation metrics when making funding allocations, what is the point of forcing NPOs to comply with these metrics in the first place?

The issue is made all the murkier by the then Secretary of State for Digital, Culture, Media and Sport, Nadine Dorries, who stated prior to the NPO application process that she had “asked Arts Council to consider the track record of organizations in access closely when considering applications for funding.” The Arts Council’s “blank-slate” NPO sensitivity approach is surely in direct conflict with this directive.

And surely the main point of collecting so much data from NPOs is so that the Arts Council can help these organizations to improve their operations and become more competitive in future funding rounds? Surely the Arts Council, as a development agency, would want to do its utmost to ensure that the organizations that it funds tick all the boxes required to retain their funding?

But the goal of the Arts Council, with its pervasive culture of precarity, doesn’t seem to be to ensure sustained funding for England’s most treasured cultural institutions, but rather to remind its NPOs that their continued funding is never guaranteed.

“NPO sensitivity”—that is to say, withholding feedback from NPOs about their performance and prospects for receiving sustained funding—may help to make the application process fairer for arts organizations that have never applied for public funding before, but it also seems like a grand betrayal of the Arts Council’s own development efforts.

So much money and effort is put into monitoring and improving the operations of the Arts Council’s NPOs, only for the Arts Council to suddenly decide that all this progress should not play a role in informing future funding.

Indeed, it seems to me that the Arts Council uses the data it collects from its NPOs very selectively: When making large-scale ideological changes to funding structures, the Arts Council easily brushes these metrics aside; but when trying to get arts organizations to conform to its developmental agenda, the Arts Council employs these metrics as both carrot and stick.

One could easily say that the Arts Council’s obsession with quantifiability essentially allows it to slash funding to any organization of its choosing based on arbitrary metrics, regardless of the organization’s artistic achievements. In other words, it gives the Arts Council seemingly “objective” numbers to point to in order to shruggingly vindicate any act of artistic vandalism.

The Arts Council’s use of previous performance data to inform its funding decisions has been the cause of some controversy following its decision to defund the ENO. In an interview with the Guardian, Stuart Murphy, chief executive of the ENO, claims that the company had received “glowing” feedback from the Arts Council in the lead-up to the funding cuts, garnering praise for its audience-widening and diversity efforts.

Indeed, Murphy reports that the Arts Council told them that they were “on track” based on their ongoing performance and had expressed no concerns to ENO about their audience numbers. Based on this feedback, the announcement that ENO would have its funding pulled came as a shock to the organization (Murphy characterized the decision as “baffling”).

If this is true, it would suggest not only that the Arts Council had been inconsistent in the ways in which it applied its “NPO sensitivity” policy (telling an arts organization that it is “on track” could certainly be seen as application feedback), but that the Arts Council may even have been duplicitous in its treatment of NPOs in the lead-up to its funding decision.

Of course, without being privy to these communications, the general public may never know for sure. But one thing is certain: the ambiguity about the relationship between NPO monitoring and evaluation and the Arts Council’s funding decisions creates fertile ground for these kinds of misunderstandings.

At the center of this row over statistics is the Arts Council’s claim that they had seen “almost no growth in audience demand for traditionally staged ‘grand’ or large-scale opera.” Mera-Nelson framed this audience decline as inherent to the size and scale of opera companies like ENO: “My question would be, realistically, how many people, honestly, can have their first opera experience by coming to a ‘grand’ theater?”

(A so-called “grand” theater like the Coliseum, one might argue, is the ideal means of providing folks with their first operatic experience: There are plenty of seats for newcomers, the cheaper seats are often subsidized by more expensive ones, and the sheer scale of the spectacle is great for “hooking in” new opera-lovers.)

Murphy, however, suggests that the Arts Council’s statistics on audience decline are false, or at least misinterpreted: “It quickly became apparent that no audience analysis had been conducted. […] If the data exists, I would love to see it—or perhaps it was collected during the lockdowns of recent years?”

So, I looked at the Arts Council’s own data. I found it distinctly limited: the Arts Council has publicly available data on the ENO for only two seasons. In the 2017/18 season, ENO received 127,641 attendances across 85 performances with a total ticket capacity of 181,008. In the 2018/19 season (the last complete pre-Covid season), ENO received 164,804 attendances over 95 performances with a total ticket capacity of 184,607.

This limited data certainly doesn’t preclude the potential for audience growth at ENO. ENO grew its audiences from roughly 70% of capacity in the 2017/18 season (an average of 1,502 attendances per performance) to 89% in the 2018/19 season (an average of 1,735 attendances per performance).
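For transparency’s sake, here is the arithmetic behind those percentages and per-performance averages, as a minimal Python sketch using only the season totals quoted above (the variable names are mine, not the Arts Council’s):

```python
# Recomputing the ENO occupancy figures from the Arts Council's
# published season totals quoted above.

seasons = {
    "2017/18": {"attendances": 127_641, "capacity": 181_008, "performances": 85},
    "2018/19": {"attendances": 164_804, "capacity": 184_607, "performances": 95},
}

for name, s in seasons.items():
    occupancy = s["attendances"] / s["capacity"]      # share of available tickets sold
    per_show = s["attendances"] / s["performances"]   # average attendance per performance
    print(f"{name}: {occupancy:.1%} of capacity, {per_show:,.0f} per performance")

# Prints:
# 2017/18: 70.5% of capacity, 1,502 per performance
# 2018/19: 89.3% of capacity, 1,735 per performance
```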

I also looked at the broader audience trends for large music organizations in London, as published in the Arts Council’s data (an anonymized 17-organization sample that no doubt includes many organizations comparable to the ENO).

Their data shows that, between 2015 and 2018, these organizations increased their earned income (i.e., money from ticket sales) by 12%, even as public subsidy and private donations remained roughly the same. (Whereas in 2015/16 earned income constituted only 53% of their total income, by 2017/18 it had risen to 56%.)
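Those two figures hang together: if earned income grew by 12% while everything else stayed flat, the earned share of total income should indeed rise from 53% to about 56%. A quick back-of-the-envelope check, assuming (per the data above) that subsidy and donations were unchanged:

```python
# Back-of-the-envelope consistency check on the income-share figures.
# Assume a total income of 100 units in 2015/16, split per the Arts Council data.

earned_2015 = 53.0   # earned income: 53% of total in 2015/16
other_2015 = 47.0    # public subsidy + private donations, assumed flat

earned_2017 = earned_2015 * 1.12                       # 12% growth in earned income
share_2017 = earned_2017 / (earned_2017 + other_2015)  # new earned share of total

print(f"{share_2017:.0%}")  # prints 56%, matching the 2017/18 figure above
```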

During this same period, audience statistics fluctuated. The total number of attendances for these organizations dropped from 3.95 million to 2.61 million between the 2015/16 season and the 2016/17 season, but rose again to 3.61 million in the 2017/18 season. This broadly follows attendance trends across all Arts Council-funded organizations over the same period: there was a notable drop in attendance in the 2016/17 season across all NPOs (perhaps owing to economic insecurity surrounding the Brexit referendum and/or the general election?).

At the same time, the number of activities mounted by these organizations increased from 3,724 to 4,123 between the 2015/16 and 2016/17 seasons, then fell to 4,054 in 2017/18. These headline figures show only part of the picture, though: over the same period, attendance at events for children and young people put on by these organizations increased by almost 23%.

And so, I looked at data for the Royal Opera House specifically for this period (if one can draw any conclusions about the state of “grand” opera in London, it is surely by looking at this prestigious venue). Between 2015 and 2018, rates of attendance per performance at the ROH rose by 32%, even as the number of performances and net number of attendances fluctuated.

I also looked at statistics from other opera companies of a similar size around England for this pre-Covid period; the results varied widely. Glyndebourne’s touring operation, for example, decreased its number of performances during this period, but its net attendance remained fairly constant and attendance rates per performance actually increased by 10%.

Opera North’s net attendance, meanwhile, grew by 56% during this period, but it also nearly doubled its total number of performances, so attendance rates per performance fell somewhat. But because these are touring companies, fluctuations in attendance rates per performance can very easily be explained by changes in performance venues.

In other words, I could find little in the Arts Council’s publicly available data to suggest that there was “almost no growth in audience demand for traditionally staged ‘grand’ or large-scale opera.” The data is so fractured and piecemeal, with organizations dropping in and out of the Arts Council’s public data records, that it is hard to see any sweeping, long-term trends.

What I see in the open data is waves of growth and decline in large, London-based music organizations from year to year that don’t seem to be out of sync with broader trends. And with such dramatic political events rocking the English arts sector over the past six or seven years, it would be rash to extrapolate any broad audience trends from this data.

One certainly couldn’t justify a blanket statement that there is “almost no growth” in audience demand for large-scale opera. I don’t know where the Arts Council drew this conclusion from, but it doesn’t seem to be from its own publicly available data.

But I am not a statistician. Actual statisticians at the Times, provided with more complete data by the Arts Council, found that the biggest factor in slow audience growth for opera was the Arts Council’s own funding cuts.

Neil Fisher, the journalist who broke this story, describes the statistics as “dodgy,” built on a “self-fulfilling prophecy” whereby the Arts Council slashes opera funding, audience numbers decline as a result, and then the Arts Council uses that decline to justify further cuts.

He also revealed that the Arts Council had unfairly targeted the opera sector for cuts, misleading the public over its purported state of decline: non-operatic musical performances had, in fact, seen a much steeper drop in audience numbers.

But, in many ways, these statistics are beside the point. All this discussion of the relative merits of scale might prompt us to wonder whether a system in which small, newly founded arts organizations must compete with large, storied arts organizations for the same pot of arts funding can ever really be fair to anyone involved.

The system would seem unfair to those small, understaffed, emerging organizations that are at a disadvantage when competing against well-staffed, established companies. But it is surely also unfair to iconic, beloved cultural institutions that employ hundreds of creatives and play to thousands of people, yet are constantly at risk of losing long-standing funding to newcomers. Ultimately, no one winds up happy.

The Arts Council’s music portfolio includes such esteemed names as Glyndebourne, Wigmore Hall, the Hallé, and English Touring Opera. But these organizations are pitted against a range of smaller organizations, including arts education charities, local festivals, and (in one case) a commercial gym.

Naturally, all of these organizations are important to a vibrant arts ecology. But Arts Council England does little to safeguard the funding of either large opera companies like ENO or emerging groups like OperaUpClose, instead finding increasingly gladiatorial ways of making these (radically different) organizations battle it out for the same funding pool.

Thus, Arts Council England has created a bizarre system in which arts organizations of wildly varying sizes, with wildly varying agendas, and with wildly varying aesthetic, geographic, and financial scopes must compete to prove their relative worth. And so, the unfortunate situation arises in which the ENO is implicitly viewed as less valuable or less worthy of funding than smaller musical ventures with entirely different remits.

The Arts Council, however, seems to appreciate the flexibility of being able to defund large organizations in order to make way for smaller ones. Mera-Nelson explicitly frames the funding of opera companies like the ENO as a barrier to smaller companies receiving funding:

But if you look at the way that the Arts Council investments in opera were configured, they’re heavily focused on a grand scale. And because so much of our investment was there, it makes for very little room for everything else. And not just other genres of music, but actually other types of opera—so, smaller, more flexible companies.

She informed me that, as a result of ENO’s removal from its music portfolio, the Arts Council has, indeed, been able to increase the amount of funding to smaller organizations: “that has created a big shift, because now there is more resource for some—but not necessarily on the scale of ENO.”

But the question remains: why must this be an “either/or” situation? An ideal model of public arts funding would safeguard the subsidies of long-established cultural institutions while also providing opportunities for industry-changing newcomers. It would certainly not force these groups to constantly justify their existence to unelected bureaucrats with unchecked latitude to entirely overturn funding conventions.

***

In the third and final installment of this story, I examine the political ideologies behind the Arts Council’s decision to cut funding to large classical music organizations and question whether its new funding plan will actually result in a more equitable allocation of arts funding.

Photo of ENO Yeomen of the Guard © Tristram Kenton
