Over the last eight months, a small team of colleagues from the Oxford University Museums and the university’s IT Services Mobile Development Team have been working on a project to look at best practice for engaging visitors in gallery spaces using mobile devices. With the working title of ‘Hidden Museum’, we worked with Mia Ridge, as a sector expert in evaluating audience engagement with digital technology, to conduct a series of prototyping and user testing rounds to investigate a variety of elements of mobile delivery. We are now gathering our learning from that process to build a prototype app for the Museum of the History of Science that we plan to complete and test with the public early in 2016.
This opportunity came about thanks to the Oxford University IT Innovation Seed Fund. This fund, launched by the university in late 2014, seeks digital projects that will enhance the staff or student experience. Open to all members of the university, it is designed to encourage small start-up digital projects, which may go on to become full services or even business opportunities through the university’s spin out scheme.
The fund allowed us to buy out the time of a project manager, content specialist and two mobile developers to work on the project, as well as buy in time and expertise to help with user testing and developing bespoke content such as films and animations.
The project was broken into three phases:
Phase One: Research – desk research looking at current mobile engagement research and initiatives in order to identify the gaps in existing knowledge and frame our research questions.
Phase Two: Iterative Testing – having identified our research questions we adapted content for user testing across the museums, making iterative changes to fix problems and address new questions raised.
Phase Three: Prototype Development – development of a prototype app for one of the museums built on all the learning gathered in the first two phases, and to allow for user testing of a complete experience (as opposed to component parts).
In phase two, we focussed on four core areas of research. I wanted to share a few key things we learnt as part of this process in the hope it is interesting to others working on mobile.
Questions: What is the best approach to in-gallery navigation in order to provide location-specific content? How accurately do users expect to be located in the gallery space? Do users like being ‘pushed’ content based on their location, or do they prefer the agency of ‘pull’?
Locating the user on the floor plan, while it has its uses for wayfinding, appears to be a distracting feature, encouraging users to follow their progress on the map rather than engage with the space and displays around them.
Users appreciated the ‘magic’ of having content trigger at just the right moment, but their delight did not offset the frustration when the technology failed, suggesting that this should not be the only method for triggering content.
Users had little trouble locating and triggering content based on a static floor plan, pin and an image, and having to trigger the content themselves did not diminish their experience. Users generally used the floor plan to find the general location, and then used an object image to identify the precise spot. They often used details from the object, such as features of the stand, to identify it. Consequently, it is important that object images reflect what the objects look like in the gallery. Of particular importance is scale – objects that appeared a similar size on screen but differed greatly in scale on display caused confusion.
Users enjoyed using image recognition to trigger content, and it did make them feel more connected with it and pay closer attention (this was not as obvious with QR codes). Creating image recognition experiences that worked well across the board was difficult – while we could usually determine from where a user would try to recognise the object, there was a great deal of variety in how they held their device, which caused discrepancies. We found that giving the user an outline of the object to line up made this easier. Users liked the use of a vibration or similar feature to signal that they had successfully captured the image.
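To illustrate the kind of trigger logic this finding points to, here is a minimal sketch (not the project's actual code; all names and thresholds are hypothetical) of how an app might debounce recognition results, only firing the success signal, the point at which a vibration would play and content would load, once several consecutive camera frames match the same object with high confidence:

```typescript
// Hypothetical sketch: debounce image-recognition matches so content only
// triggers after several consecutive confident frames on the same object.

interface FrameMatch {
  objectId: string; // which gallery object the recogniser thinks it sees
  score: number;    // match confidence, 0..1
}

class RecognitionTrigger {
  private streak = 0;
  private lastObject: string | null = null;

  constructor(
    private threshold = 0.8,  // minimum confidence per frame
    private framesNeeded = 5, // consecutive matching frames before firing
  ) {}

  // Feed one camera frame's best match; returns the object id when the
  // trigger fires (the app would then vibrate and present the content).
  onFrame(match: FrameMatch | null): string | null {
    if (!match || match.score < this.threshold) {
      this.streak = 0;
      this.lastObject = null;
      return null;
    }
    this.streak = match.objectId === this.lastObject ? this.streak + 1 : 1;
    this.lastObject = match.objectId;
    if (this.streak >= this.framesNeeded) {
      this.streak = 0; // fire once per capture, not on every later frame
      return match.objectId;
    }
    return null;
  }
}
```

Requiring a short streak of confident frames, rather than triggering on a single frame, is one way to absorb the variability in how users hold their devices that the testing surfaced.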
Questions: How playful can we be with the tone of the content before we undermine the authoritative voice of the museum? How closely does content need to be linked to the objects and displays and can audiences follow links to more abstract ideas?
Traditional audioguide content was considered highly appealing, and users appreciated getting the ‘facts’ about an object – who owned it, where it was from, and why it is significant.
Users did not have trouble linking from objects to conceptual ideas as long as the connection was made clear.
There was a limit to how many ideas could be communicated in a single clip without the attention of users wandering.
Once clip length got to about 45 seconds the attention of users often seemed to wander, with many wanting to check the remaining duration of the clip.
Questions: What are the requirements for delivering video in gallery in a way that encourages users to look at the objects while engaging with their screen? How can video be used to enhance engagement with the objects and provide something more than what you can watch on a screen at home? Does video offer benefits over audio? In what circumstances? How engaging is audio content for modern audiences? Is there a way this can be improved?
Users preferred audio to talking head video – they would prefer to listen to audio while looking at an object than to watch a video of a person.
Users found it difficult to switch their attention between the screen and the object.
Video demonstrations of objects in use were popular with users who usually said that it successfully increased their understanding of the object, but length was an issue – it was difficult to keep videos short when the objects being explained are complex, and difficult to provide sufficient interesting footage for the duration of the description.
Inserting prompts into videos had a mixed response in all cases, with some users noticing the prompts and choosing to respond, but many not noticing or ignoring them. This made the pauses problematic as they were not relevant to those who did not follow the prompts, and all users found them confusing.
Users got distracted when video or audio content went on for too long (especially beyond 45 seconds) and would look to check the time left. As seen in the Tone/Relation section, they also got distracted if a single clip dealt with too many concepts. Clips should therefore be kept short and focused on a single concept. Deeper content could be managed by providing multiple short clips for the same object.
Questions: How can interactives be implemented in gallery in a way that enhances engagement with the objects and displays rather than the mobile device, and in a way that offers enhanced understanding of, and engagement with, the object rather than something gimmicky?
Users unanimously said that the interactive enhanced their understanding of the object and that it was an enjoyable experience.
Users enjoyed physical engagement triggered by their phone, rather than just completing tasks on a screen.
Users found it difficult to orient themselves at the start of an interactive without clear instructions, but delivering these on screen was problematic: many users chose not to read the information before hitting start, and needed to return to it later.
For complex interactives it was important to provide step-by-step instructions rather than giving them all up front, and to make these obvious on screen, as users could often miss them when immersed in the interactive.
Based on this learning we are currently developing a prototype app for the Museum of the History of Science with the working name ‘Pocket Curator’ – we hope to complete the app in January 2016 and conduct user testing and upgrades in February 2016.
We picked the Museum of the History of Science because its displays lend themselves to the kind of interactive engagement visitors responded to – it is full of scientific instruments that are meant to do something, but sit still on display behind glass, which can make them difficult for visitors to understand.
For the app we have selected seven objects to feature, distributed over the three floors of the museum. Each object has a number of pieces of audio content, broken down into very short 20-45 second sound bites dealing with different themes, allowing us to keep clips short and content clear while still providing a significant amount of information for users who are interested. Each object also has a piece of interactive content. We started with the concept of ‘try it’, looking for ways to let the user try the scientific instrument on display for themselves, but expanded this to a number of content types with a focus on illuminating how the objects and devices work, and allowing the user to actively do something where possible. So users are invited to measure their latitude with a sextant, experiment with their own digital lodestone, learn to tell time in decimal and recreate Marconi’s wireless demonstration.
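As a minimal sketch of how this per-object content structure could be modelled (illustrative only; the field names, example data and the 20-45 second check are our own assumptions, not the app's actual schema), each featured object carries several short, single-theme clips plus one interactive:

```typescript
// Hypothetical content model: one entry per featured object, with short
// single-theme audio clips and one interactive activity.

interface AudioClip {
  theme: string;
  durationSeconds: number;
}

interface FeaturedObject {
  name: string;
  floor: number;
  clips: AudioClip[];
  interactive: string; // e.g. "measure your latitude with a sextant"
}

// Flag any clips that fall outside the 20-45 second guideline
// suggested by the user testing.
function clipsOutsideGuideline(obj: FeaturedObject): AudioClip[] {
  return obj.clips.filter(
    (c) => c.durationSeconds < 20 || c.durationSeconds > 45,
  );
}

// Illustrative example entry (invented durations and themes).
const sextant: FeaturedObject = {
  name: "Sextant",
  floor: 1,
  clips: [
    { theme: "Who owned it", durationSeconds: 30 },
    { theme: "How it works", durationSeconds: 42 },
    { theme: "Why it matters", durationSeconds: 55 }, // over the guideline
  ],
  interactive: "measure your latitude with a sextant",
};
```

Checking clip durations against the guideline at authoring time is one simple way to keep the "short, single-concept clip" finding baked into the content pipeline.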
We look forward to sharing more details about the app and all its elements in a future post!