The other day at the hospital, I was asked to fill in a feedback form. “How likely are you to recommend this hospital to friends and family if they needed similar care or treatment?” it asked. I had to place a tick beside the most appropriate response (ranging from ‘extremely likely’ to ‘extremely unlikely’).

Thinking back to my Heritage MA Visitor Studies lectures, I identified this as a ‘Likert’ scale. (So named after its inventor, the psychologist Rensis Likert – and sadly not because it asks you how much you Like-rt something.)
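
(For anyone curious about how such responses are analysed: they are usually coded as numbers. Here’s a minimal sketch in Python – the two end labels come from the form above, but the intermediate labels and the individual responses are my own invention.)

```python
# Minimal sketch: coding Likert responses as numbers and summarising them.
# Only the two end labels come from the hospital form; the middle labels
# and the responses themselves are invented for illustration.
LIKERT = {
    "extremely unlikely": 1,
    "unlikely": 2,
    "neither likely nor unlikely": 3,
    "likely": 4,
    "extremely likely": 5,
}

responses = ["likely", "extremely likely", "unlikely",
             "extremely likely", "neither likely nor unlikely"]

scores = [LIKERT[r] for r in responses]
print(f"mean rating: {sum(scores) / len(scores):.1f} out of 5")
```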

It also called to mind some of the other Visitor Studies topics I’d learned about:

  • Audience Research
  • Exhibit Evaluation
  • Quantitative vs. Qualitative research methods
  • Statistical Analysis
  • Front-end, Formative, Summative and Remedial Evaluation

Hardly a day goes by now, it seems, without being asked to give some sort of feedback on goods or services. The world’s gone rating mad. If I buy an item online, not only am I sent an email asking me to rate the item itself, but I’m usually asked to rate the service provided by the seller as well. Did the item arrive on time? Was it as described? If I contacted the seller, did they respond quickly and courteously? Was the amount of packaging appropriate? Sellers are desperate for you to give them a five-star rating because it’s now so important to their business.

In fact, these days, I’m more surprised if I’m not asked to fill in some kind of feedback form than if I am. This is particularly true at museums and heritage attractions.

Museums once presented the visitor with exhibits that were labelled using academic language that only a fellow academic would understand. What did it matter if ‘ordinary’ visitors might have liked further explanation (or interpretation) of an exhibit to make their visit more interesting? Or even – perish the thought – enjoyable? That wasn’t what museums were there for…

Feedback – Visitor Studies – Evaluation: whichever term you use, in the case of a publicly funded service (including lottery-funded ones), it’s all about accountability. About showing that the public’s hard-earned cash has been (or will be) well spent, that you are doing a good job, and that you are achieving the desired effect – whether that’s patient satisfaction, visitor satisfaction, or something else.

Today, digital technology makes the rating game particularly easy. Online forms are quick and easy to fill in and submit, and your responses are instantly number-crunched by the web software – with your review appearing minutes later for all to see, along with all the others. (You can look up any hospital on the NHS Choices website and read its reviews and ratings, just as you would a hotel or visitor attraction.)

Quantitative vs. Qualitative

Some museums and visitor attractions have attempted to design quantitative research to measure ‘learning outcomes’ – for example by testing visitors’ knowledge before and after visiting an exhibition or site.
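
To make that concrete, here is a minimal sketch of how such before-and-after quiz scores might be compared. The scores are invented, and a paired t-test (via SciPy) is just one of several reasonable ways to analyse them:

```python
# Hypothetical sketch: did quiz scores improve after the visit?
# The scores are invented; each pair of marks belongs to one visitor.
from scipy import stats

pre  = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4]  # marks out of 10 at the entrance
post = [6, 7, 4, 8, 5, 7, 4, 8, 6, 6]  # the same visitors' marks at the exit

mean_gain = sum(post) / len(post) - sum(pre) / len(pre)
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"mean gain: {mean_gain:.1f} marks")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```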

However, we must ask ourselves whether helping people to learn ‘facts’ is really the main aim of the interpretation we have produced. If, instead, the intended effect is a more general ‘enhancement’ of their visit, then techniques such as quiz-questioning visitors at the exhibition’s entrance and exit will not tell you how effective it was.

We might do better to concentrate, instead, on collecting qualitative data – characterised by so-called ‘rich’ responses. This sort of research is far more about feelings than numbers. There are many ways to collect qualitative data – and that’s all I’m going to say here because it’s far beyond the scope of this blog to start describing them!

Vital statistics?

You may feel that presenting your results as numerical values (percentages, for example) will lend them more credibility. However, there are several big pitfalls to think about.

In statistical analysis, sample size is a fundamental issue. A large museum or visitor attraction may be able to get thousands of visitors to agree to be interviewed or to complete a survey questionnaire. However, for most purposes, the respondents must also be selected at random to ensure the integrity of the results. Random sampling is trickier than it sounds. A common pitfall is the ‘self-administered questionnaire’ left in a pile at the front desk: the results will merely tell you what people who like filling in questionnaires think of your visitor attraction. (And they are probably not representative of the population as a whole!)
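
To give a feel for why sample size matters so much, here is a rough back-of-the-envelope sketch using the standard margin-of-error formula for a proportion – which, crucially, only holds if the sample really is random:

```python
# Rough sketch: 95% margin of error for a survey percentage.
# Standard formula z * sqrt(p * (1 - p) / n); p = 0.5 is the worst case.
# The result is only meaningful if the sample is genuinely random.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval, as a fraction."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 400, 2000):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```

Fifty front-desk questionnaires give you a margin of roughly ±14 percentage points even before self-selection bias is considered – and no sample size, however large, will rescue a biased sample.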

So, unless you are a large institution with access to hundreds (if not thousands) of visitors and some market research expertise (to guide you through the twin minefields of questionnaire design and sampling strategy), it may be better not to go down the quantitative data/statistics route at all.

Dead-end or front-end?

According to one definition, “Exhibit Evaluation is discovering the extent to which something has succeeded in achieving its purpose”. This would suggest that it’s something you do right at the end of a project. Indeed, once upon a time, this so-called summative evaluation was the only type of evaluation carried out in museums.

But evaluation doesn’t have to be a dead-end process, where the only interest lies in finding out how well a piece of interpretation met its objectives – at a point when there’s no time or money left to improve the outcome based on your findings. (Though one would hope that someone, somewhere, embarking on a similar project could still use your results.) Evaluation shouldn’t just be a box-ticking exercise. It can do so much more.

For example, Front-end evaluation takes place at the planning stage, with a focus on the target audience. So… even if you were dead keen at the outset to produce a panel on the identification of bryophytes, a bit of front-end evaluation (e.g. on-site interviews) might reveal that most of the visitors to the site are actually mothers with young toddlers – who would much prefer to read a fun panel about mini-beasts.

Formative evaluation is done during the design stage to improve the interpretation prior to installation. It’s therefore usually carried out on mock-ups or prototypes. The aim is to improve the interpretation by trial and error.

Finally, Remedial evaluation does exactly what it says on the tin: it identifies and puts right any problems once the interpretation is in place.


Putting it all into practice

Back at the hospital, they’re building a new wing, complete with a new ‘way-finding’ system – to be implemented, eventually, across the whole site. The interior designers acknowledge that signage can be very expensive and very hard to get right first time (even after lots of trial ‘walk-throughs’), and they have hit upon a really sensible solution. All signs will be paper-based, designed to fit inside a large, smart-looking frame. So, if people are found to be getting confused, or the ward numbering system changes, the signs can be amended and replaced cheaply and quickly using digital print.

This is similar to an approach I like in interpretation panel design: a sort of combination of front-end, formative, summative and remedial evaluation all rolled into one – made possible by new technology.

Today, digital print makes it much easier and cheaper to delay ‘setting your interpretation in stone’ (sometimes quite literally) until you know that you’ve got it just right and it really works. Let’s say you’re planning an outdoor panel or set of panels. First, get out of the office for a few hours to interview visitors to the site. Take along some preliminary artwork to show them and ask them what they think of your proposals. People generally like being asked for their opinions. (But you must then be prepared to act on their suggestions!)

After doing this front-end evaluation, you are already much better informed and can produce a ‘temporary’ graphic panel in a relatively inexpensive medium, such as Foamex or polycarbonate (request a sample).

This can be installed in a sign that takes a removable graphic panel, such as the Cavalier™, Musketeer™ and Bowman™ range of interpretation displays.

Your panel can then be further evaluated in situ through more interviews – and/or from visitor feedback via a QR code, for example – and any changes made to the artwork before installing an amended version. This could be in a more permanent medium such as n-viro™ (request a sample), if you are confident that you’ve got it right. Or, depending upon the situation, you might want to stick with the more ‘temporary’ panel type, which would allow you to review and update the information each time the panel needs replacing.
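
(As an aside, generating the QR code itself is trivial these days. Here’s a minimal sketch using the third-party Python ‘qrcode’ library – the feedback URL is a placeholder, not a real survey.)

```python
# Minimal sketch: a QR code linking to a feedback form, using the
# third-party 'qrcode' library (pip install qrcode[pil]).
# The URL is a placeholder, not a real survey address.
import qrcode

img = qrcode.make("https://example.org/panel-feedback")
img.save("panel_feedback_qr.png")  # ready to drop into the panel artwork
```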

Evaluation – you know it makes sense! But it can seem like a complete pain. It isn’t an easy thing to do and it can be time-consuming. So, when you’re grappling with those deadlines, it can easily fall off the bottom of the to-do list!

I’ve been scratching my head to find that ‘killer argument’ for doing some (front-end/formative) evaluation on even a small project such as a single ‘welcome’ panel in a nature reserve car park. I know that I won’t get far by citing personal satisfaction as a reason (though it is one). So, how about ending with the following list of ‘returns on investment in evaluation’ that I wrote in my Visitor Analysis and Evaluation lecture notes, way back in 1999?

  • Success of communication with non-specialist audiences
  • Development time saved (evidence to support decisions taken and reduced persuasion time)
  • Money saved by getting messages and interactives right first time
  • Good PR from consulting visitors

Related website:

Visitor Studies Group

For more info visit the Fitzpatrick Woolmer website.
