*sidenote* Before you read this post, please bear in mind that it contains a lot of numbers and data. If you stick with it, I hope it all makes sense and that you’ll find the results as interesting as I do!
Following a recent debate on social media between theatre critic West End Wilma and performer Carrie Hope Fletcher over what theatre reviews should focus on, I thought I’d sink my teeth into this interesting topic.
West End Wilma criticised the show not for the talent of the actors or for production values such as the lighting, set design, direction or choreography, but instead for the content and narrative of the show itself.
While she is entirely within her rights to criticise the show in whatever way she sees fit, I was unsure whether West End Wilma’s review was a fair one. I tended to agree with Carrie Hope Fletcher that, by this logic, shows such as Les Misérables and Miss Saigon, which have flawed characters and hard-hitting plots, should receive the same treatment and be given negative reviews.
To examine whether West End Wilma was fair in reviewing Dogfight on the basis of its misogynistic plot, I decided to call into question the role of theatre reviewers. Through a Survey Monkey survey, 33 volunteer participants, all of whom considered themselves theatre reviewers in one way or another, answered questions on their views of theatre reviews.
The common phrase “everyone’s a critic” springs to mind, because quite literally anyone who sees a show and has an internet connection can be a theatre critic, whether through a single tweet expressing in one sentence how much they loved or hated a show, or an in-depth review in The Stage, The Guardian or their own blog.
However, one thing that has never been clarified is what theatre reviews should focus on. Should it be the moral values of the story itself? Should a show such as Dogfight be critiqued for being misogynistic, and, by extension, as Carrie Hope Fletcher says, should Les Misérables also be critiqued for having a misogynistic plot and characters who, for want of a better term, “use and abuse” women in the way that Fantine is treated? Or should theatre reviews focus on the music of the show? The narrative and characters? The talent of the actors? How accessible the theatre is?
I opened these questions up to the reviewers to see how they focused their attention in their reviews.
There were ten factors that reviewers had to rank from most important (10) to least important (1). The maximum score a factor could receive was therefore 330, if all 33 reviewers gave it a 10, and the minimum was 33, if all of them gave it a 1.
The ten factors to be ranked were:
- The actors themselves (did you know and/or like them before?)
- The talent of the actors (particularly in all three disciplines for musicals)
- The production values (direction, choreography, sound, set design, costumes, lighting)
- Whether it was a good or bad story (narrative, plot, development of characters, relationships)
- Whether it was a good or bad story morally (were characters and was the plot flawed by being racist, sexist, homophobic)
- Accessibility (ease of getting to the theatre, ticket price – even if it was gifted, how much would it have been, price of merch, potential disability needs)
- Music (how songs or other music fit with the style of the show, lyrics, catchiness)
- Whether the show promoted something good in society or sparked you in some way (was it socio and politically beneficial and relevant)
- Whether the show made you emotional in any way (sad, happy, angry, i.e. did it move you, did you connect with the characters)
- Whether you generally enjoyed it and was it to your personal taste
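To make the scoring concrete, here is a minimal sketch in Python of how each factor’s total out of 330, its average grade out of 10, and its counts of 10/10s and 1/10s follow from the individual ranks. The rankings below are made up for illustration, not the actual survey responses:

```python
# Illustrative sketch only: the rankings here are invented, not the
# real survey data. Each reviewer ranks a factor from 1 (least
# important) to 10 (most important), so a factor's overall score is
# simply the sum of the ranks it received across all reviewers.

rankings = {
    # hypothetical ranks from five reviewers
    "Emotion and Connection": [10, 8, 9, 7, 10],
    "Actors": [1, 2, 1, 3, 1],
}

for factor, ranks in rankings.items():
    total = sum(ranks)              # out of 10 x number of reviewers
    average = total / len(ranks)    # the average grade out of 10
    tens = ranks.count(10)          # reviewers who marked it 10/10
    ones = ranks.count(1)           # reviewers who marked it 1/10
    print(f"{factor}: total={total}, average={average:.1f}, "
          f"10s={tens}, 1s={ones}")
```

With 33 reviewers, a total of 330 means every single reviewer ranked the factor 10, and a total of 33 means every reviewer ranked it 1, which is exactly the range described above.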
With an impressive score of 234, whether the show made reviewers emotional in some way (happy, sad, moved, or connected to the characters) was the top-ranked factor when writing theatre reviews and deciding whether to regard a show as positive, negative or somewhere in the middle.
The lowest-ranked factor was the actors themselves: whether the reviewers knew or liked them beforehand.
In order from top to bottom, the factors and their scores out of 330 were:
The above chart can also be broken down into the average grade that each of the 33 reviewers gave each of the factors out of 10:
Number of 1s and 10s Given
Based on the overall scores, as expected, the factor that received the most 1s and was ranked least important across the board was “Actors”, with 30.3% of reviewers regarding it as their least important consideration when writing reviews.
However, despite having the top spot in the overall score, the factor that was given the most 10s by reviewers was not “Emotion and Connection”, but instead was “General Enjoyment”, with 21.2% of reviewers considering it their most important factor.
Although it received the most 10s, “General Enjoyment” proved too controversial to take the top spot overall, or even second or third place. What became clear in this analysis is that “one man’s trash is another man’s treasure”: some reviewers ranked a given factor as their most important, while others ranked the same factor as their least important. “General Enjoyment” was among the most controversial factors: although 21.2% of reviewers ranked it highest, 9.1% ranked it lowest, pulling its overall score down.
In comparison to the three factors that beat it, this is the breakdown of “Emotion and Connection”, “Music” and “Talent”:
| Factor | Reviewers who marked it 10/10 | Reviewers who marked it 1/10 |
| --- | --- | --- |
| Emotion and Connection | 15.2% | 0% |
As you can see, in comparison to the three factors above it, despite having the highest number of 10/10s, “General Enjoyment” received a considerably higher number of 1/10s, bringing its importance down in the main graph.
Other controversial factors were:
| Factor | Reviewers who marked it 10/10 | Reviewers who marked it 1/10 |
| --- | --- | --- |
| Good Story Narrative | 12.1% | 9.1% |
| Good Social Values | 3.0% | 9.1% |
This can be more clearly seen in these two graphs:
It is even clearer to see here that “Actors” was consistently the least important factor, receiving the fewest 10/10s and the most 1/10s.
In contrast, “Accessibility” is a really interesting factor to consider. Looking solely at the number of reviewers who gave “Accessibility” 10/10, it is the fifth-highest factor. However, it was also the second most likely to receive 1/10, with 27.3% of reviewers not considering it particularly noteworthy when writing theatre reviews, which drags down its overall ranking.
“Production Values” is another interesting one: it didn’t receive many 1/10s, and it received a respectable number of 10/10s, yet in the overall ranking it didn’t come out as a hugely important factor.
A lack of consistency
Beyond this analysis, which I personally find very interesting, something became glaringly obvious in this research. Looking at the main graph out of 330, although there was quite a considerable difference between the top two and bottom two results, many of the factors were close in score. There are talking points around the number of 10s and 1s given, as discussed above, but between the second- and sixth-highest factors (Music, Talent, General Enjoyment, Good Story Narrative and Production Values) there was a gap of only 20 points, from Production Values on 188 to Music on 208, with differences of only three or four between each one. That says to me that the reviewers all think quite differently.
Had reviewers all thought in the same way, there would be a clearer distinction between each factor, and the graph might look more like the one below, with a definite trend and a unanimous ordering from most to least important factor:
However, this was definitely not the case. As we have already seen from the controversial split between the 10s and 1s given, what one reviewer considers highly important, another disregards.
The spread of results and most common responses
A final graph shows all the results from the 33 respondents and the spread of their answers.
As with the graphs showing the number of respondents who gave certain factors 1/10 or 10/10, this graph lets us easily see the most common results, look at the whole spread of answers, and pick out the most common ranking for each factor.
Again, the fact that “Actors” was ranked the least important factor is unsurprising given the large amount of red and brown in its column; similarly, the fact that “Emotion and Connection” came out on top is reflected in the healthy amount of green in its column.
The most common ranking given to each factor was:
| Factor | Most common ranking | Times it was given |
| --- | --- | --- |
| Production Values | 5 & 6 | 6 |
| Good Story Narrative | 8 | 6 |
| Good Story Morally | 4 | 9 |
| Emotion and Connection | 9 | 7 |
Looking at the factors that received the most 1s and 2s (“Actors”, “Accessibility” and “Societal Values”), it is clear why these came out lowest overall. Similarly, “Emotion and Connection”, “Talent” and “Good Story Narrative” all received the highest numbers of 8s and 9s, putting them at the top for reviewers.
So, while I would personally argue that West End Wilma’s review in question focused on an aspect of the show that the majority of theatre reviewers wouldn’t consider their most important factor when writing reviews, there is definitely a case for reviewing shows from that angle, should you wish.
Given that reviewers all seem to focus on different aspects in their reviews, do you think this is right? Should reviews from all reviewers be more consistent and focus on the same things? Three reviewers could see the same show (one focusing on the talent of the actors, one on the production values and one on the social morals of the show) and all three could come away with different reviews and perspectives. Do you think this creates a fair depiction of the show? Or should there be a guideline that reviewers adhere to so that all focus on the same things? I’d love to hear your thoughts in the comments below on whether reviews should be more standardised or left open to interpretation.
Of course, everyone’s a critic, but now we know just how different all the reviewers are!
I would like to thank the volunteers who took my survey; their responses made this blog post and its data possible.