A seat at the table: Giving visitors a voice in exhibition development through user testing

Emily Hellmuth, Indianapolis Museum of Art, USA, Silvia Filippini Fantoni, North Carolina Museum of Art, USA, Tiffany Leason, Indianapolis Museum of Art, USA, Jen Mayhill, Indianapolis Museum of Art, USA

Abstract

Since the Indianapolis Museum of Art (IMA) introduced a visitor-centered exhibition development process in 2013, user testing of analog and digital interpretive tools in both the conceptual and implementation stages has been a fundamental part of the process. User testing is an important tool in gauging visitors’ interest in a specific activity. It also helps to determine if interpretive tools are intuitive, engaging, and easy to use, so that changes can be made before installing them in the galleries. The systematic incorporation of user testing in the exhibition development process has resulted in the implementation of a number of successful interactives, which have realized high take-up rates and have received positive feedback from visitors of all ages, thus contributing to the overall increase in satisfaction with our exhibitions. This paper reaffirms the importance of user testing, particularly for technology-based projects, and highlights some of the lessons learned over the past few years from applying this strategy in both the front-end and formative stages. The authors also discuss different approaches to user testing and share takeaways from experiences with mobile testing stations and Test It Lab, including the advantages and disadvantages of each model. At a time when many museums, and art museums in particular, are considering introducing user testing into their practice in a more methodical way, the IMA’s experience may help these institutions make informed decisions about how to move forward.

Keywords: testing, user testing, visitors, prototypes, visitor-centered

1. Toward a visitor-centered exhibition development process

Recent reports from the National Endowment for the Arts and the Association of Art Museum Directors indicate that attendance at art museums in the United States is declining, particularly among Millennials (National Endowment for the Arts, 2015a, 2015b; Association of Art Museum Directors, 2015). In an effort to reverse this trend, the Indianapolis Museum of Art (IMA) has recently implemented a more visitor-centered approach to exhibition and programming development, with the objective of making the institution more accessible, engaging, and fun, especially for those with little knowledge of art history, and of stimulating social interaction (Filippini Fantoni et al., 2014). This change responds to recent trends indicating that learning is no longer the primary reason people visit museums (fig. 1).

Figure 1: LaPlaca Cohen’s 2014 Culture Track study indicates that enjoyment and social interaction are surpassing learning as the main reasons for visiting cultural institutions (LaPlaca Cohen, 2014)

At the core of this visitor-centered approach is the implementation of a new exhibition development model, inspired by pioneering institutions such as the Detroit Institute of Arts, the Brooklyn Museum, and the Oakland Museum of California. The new process is led by a multidisciplinary team consisting of a curator, designer, interpretive planner, exhibition manager, and evaluator, with additional specialists rotating in and out depending on the topic (Filippini Fantoni, 2014). In addition to having members with varied competencies and viewpoints, the team works in ways that encourage collaboration and invite all members to weigh in on all topics so that decisions are made by consensus. This approach has brought interpretation and evaluation specialists, who advocate for visitors’ interests and needs, into the decision-making process.

2. Involving the visitor in the exhibition development process through evaluation

Because the team-based process is by nature more collaborative, it also invites the visitor’s voice directly into all stages of evaluation (i.e., front-end, formative, remedial, and summative). Asking for visitor input begins during front-end evaluation to gauge, among other things, level of interest in an exhibition topic, likelihood to visit, and likelihood to bring children for a visit. This stage of evaluation can be conducted either on site or online to reach a wider demographic and gather more responses. IMA researchers routinely survey Indiana residents and museum members for feedback on their interest in upcoming exhibitions. These findings help with planning the exhibition calendar as well as with sequencing the types of exhibition experiences offered.

Once the topic of an exhibition has been defined, further research is carried out in the formative stages to help fine-tune the messages to communicate, as well as the tools used to communicate them. For example, for the Gustave Baumann, German Craftsman – American Artist exhibition (October 2015–February 2016), findings showed that respondents, after seeing images of some of the works in the show and briefly reading about it, had many questions about the technique and process the artist used to make the prints. As a result, the team decided to include an additional outcome related to exploring the artist’s process, which has turned out to be one of the most successful aspects of the exhibition. Formative research is also carried out in the development stages to help inform the title and visual branding of an exhibition, as well as various analog and digital interpretive tools, which are tested multiple times to ensure that they are intuitive and easy to use (see section 3 for more information).

Once a new exhibition opens, the team keeps a particularly close eye on how visitors interact with the exhibition and what they say, in case adjustments need to be made. A recent example of remedial evaluation occurred during the Dream Cars exhibition. One of the cars in the exhibition, the BMW GINA, was covered in fabric, with a video to the left of the car showing how its mechanics moved under the fabric exterior and a touchable fabric sample mounted on the wall to the right. Early on, it was clear that visitors were missing this important feature of the car’s design; thus, larger and more prominent text was added to the label, with an arrow drawing attention to the fabric sample.

Finally, researchers conduct a substantial amount of summative evaluation for exhibitions and activity spaces, typically gathering visitor feedback and tracking behaviors through surveys, interviews, and observations. Surveys provide a quantitative understanding of the demographics and psychographics of exhibition visitors, their level of satisfaction, and which interpretation tools they use. Interviews allow staff to have in-depth conversations with visitors about their exhibition experience and give visitors a chance to articulate what they took away, in a way they might not have done unprompted. This helps staff determine whether key messages (learning outcomes) have been communicated effectively. The findings from these studies not only document what visitors learned, but also carry forward to inform future exhibitions.

While these numerous tests and evaluations tend to increase the overall exhibition development time, the benefits associated with the inclusion of visitors in the process have so far outweighed the problems. Positive outcomes include higher visitor satisfaction (fig. 2), better communication of the key messages related to the exhibition (as mentioned in post-visit interviews), and higher attendance (from 44,000 in 2013 to 81,000 in 2014 and finally 90,000 in 2015).

Figure 2: overall exhibition satisfaction over the past nine featured exhibitions

Another benefit brought about by the new model is the integration of analog and digital interpretive tools in exhibitions. In addition to traditional labels and wall text, the visitor experience is now enhanced with mobile guides, visual didactics, videos, immersive environments, iPad apps, interactive tables, and hands-on experiences. Not only has the number of these tools increased, but also, thanks to this iterative and user-centered approach, they are easier to use and better incorporated into the exhibition (both conceptually and physically), thus resulting in higher take-up rates (Collerd Sternbergh et al., 2015).

3. The importance of testing digital tools

While testing interpretive tools is fundamental to the success of any application, it is particularly important for digital projects for a number of reasons. First of all, the costs associated with developing technology-based interpretation are generally higher than for its analog counterparts. Martin and McClure (1983) indicate that the main reasons digital projects exceed their budgets are requests for changes by users and overlooked tasks. Furthermore, research shows that fixing problems after launch is far more expensive than making changes during the conceptual, design, and development stages (fig. 3). This is why user testing and an iterative approach are fundamental, particularly in the early phases of these projects.

Figure 3: costs normally associated with making changes to a digital project (Pressman, 1987)

When it comes to technology-based interpretation, it is also important that the proposed tools be usable not only by the younger generations for whom they are often intended, but also by the museum’s core audience (ages forty-five and older). Involving users of various age groups in the development process is therefore essential to the success of an application. An example is Pointillize Yourself (Collerd Sternbergh et al., 2015), an application developed for an exhibition on Neo-Impressionist portraiture, which allowed visitors to take a “selfie” and turn it into a Neo-Impressionist portrait. The app, which was developed using an iterative approach and user testing, was used by over 60 percent of the exhibition’s visitors, 59 percent of whom were older than forty-five.

Another important factor to consider is that visitors generally do not read instructions. Making an application as simple and intuitive as possible is therefore key, particularly for those who are not familiar with technology. At the conceptual stage, interpretation specialists, designers, and developers tend to include more “bells and whistles” than users really need. Testing applications multiple times during the development process has helped us not only determine whether people intuitively understand what to do, but also simplify applications by refining design elements and eliminating complex features that do not respond to real user needs.

Generally when it comes to technology-based interpretation, IMA staff carry out at least three types of user testing, each with ten to thirty participants:

  • Paper prototype: even with a digital project, the first stages of testing often happen on paper in order to get feedback on the concept, basic functionality, and what people take away from the experience, without putting resources toward actual development. This also encourages open input from visitors, since the tool or interactive does not yet look refined or nearly finished.
  • Wireframe or design testing: this stage may be interactive and aims to test specific aspects of the interface (e.g., Is the interface intuitive? Do users know what to do? Do the various elements of the interface respond in an expected way? Are any instructions or content clear?).
  • Beta testing: interactive elements that could not be evaluated in the earlier prototype stages are tested, along with responsiveness, full functionality, and any remaining bugs.

Occasionally, depending on the nature of the application and the results from the early prototype testing, certain steps can be repeated or skipped. Below is an example of how user testing has been useful in refining and eventually determining the success of a recently developed digital application.

4. User testing and development of the Make Your Mark app

The Make Your Mark app, which was created as part of the Gustave Baumann, German Craftsman – American Artist exhibition (October 2015–February 2016), involved multiple rounds of user testing. The objective of the app was to allow visitors to create their own signature mark, much like the hand-in-heart symbol that artist Gustave Baumann used to sign his prints (fig. 4). Since it would be part of the exhibition experience and thus used by everyone from families to older visitors, Make Your Mark needed to be appealing, intuitive, and engaging for all ages.

Figure 4: Baumann signed his prints with this hand-in-heart symbol, which represented the tenet that “whatsoever the hand finds to do, the heart should go forth in unison.” Point Lobos (detail), about 1934, color woodblock print, 8 x 8-1/4 in. Indianapolis Museum of Art, gift of Stephen W. Fess and Elaine Ewing Fess, 1998.90.

The initial brainstorming session resulted in two main ideas, for which two different prototypes were created. Both prototypes had five steps: (1) choose a term to describe you; (2) choose an activity you enjoy; (3) choose a color; (4) add your signature; and (5) choose whether to email your mark. The prototypes were the same for steps 3 through 5, but differed in steps 1 and 2. In the first prototype (fig. 5), users chose from a selection of words in steps 1 and 2 (e.g., strong, wise, gardening, cooking). In the second prototype (fig. 6), steps 1 and 2 offered users a selection of icons that the creative team felt represented these words. Staff debated whether words or icons would be the better option for users. Words could articulate these concepts more clearly than icons (e.g., selecting the word “strong” communicates that you are identifying yourself as strong more clearly than selecting the sun icon, which users may or may not interpret to mean strong). However, this was an app to be used by exhibition visitors ages six and older, so the team needed to be cognizant of different reading levels.

Figure 5: in the first paper prototype, users chose from a selection of words in steps 1 and 2

Figure 6: in the second paper prototype, users chose from a selection of icons in steps 1 and 2
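Because only steps 1 and 2 differed between the two prototypes, the variants were cheap to produce and compare. As a minimal, hypothetical sketch (not the IMA’s actual implementation), the five-step flow could be modeled as data so that the word-based and icon-based variants are swapped without touching the shared steps; the specific option values shown here are illustrative only.

from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class Step:
    number: int
    prompt: str
    style: str           # "words", "icons", or "words_and_icons"
    options: List[str]   # words or icon identifiers (illustrative values)

# Steps 3-5 were identical in both prototypes (color, signature, email).
SHARED_STEPS = [
    Step(3, "Choose a color", "icons", ["red", "teal", "gold"]),
    Step(4, "Add your signature", "words", []),
    Step(5, "Email your mark (optional)", "words", []),
]

# Prototype 1 presented words in steps 1 and 2 (e.g., strong, wise,
# gardening, cooking); prototype 2 presented icons for the same concepts.
prototype_words = [
    Step(1, "Choose a term to describe you", "words", ["strong", "wise"]),
    Step(2, "Choose an activity you enjoy", "words", ["gardening", "cooking"]),
] + SHARED_STEPS

prototype_icons = [
    replace(step, style="icons") if step.number <= 2 else step
    for step in prototype_words
]

Keeping the step content separate from the interface in this way mirrors what the paper prototypes accomplished: each variant could be printed or mocked up from the same underlying content and put in front of visitors quickly.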

Users were asked to explore each of the prototypes, answer a series of questions about their expectations at each step, identify confusing elements or content and any changes they wanted to see, and tell us what they took away from the experience. From this first round of testing, the team learned that the app was simple and straightforward, but that the words and their associated icons needed to be refined (some were confusing, had weak associations, or were not applicable to younger age groups). Staff also learned that some guests were confused about where to add their signature and that users were divided in their preference for icons versus words.

The solution the creative team came to in the subsequent prototype was to offer both words and icons in step 1 and icons in step 2 (fig. 7). Step 1 asked users to choose a word to describe themselves and then select one of three icons they think best fits that word; step 2 asked users to choose an activity they enjoy, represented only by icons. From this round of testing, staff learned: (1) users liked that step 1 offered them first a word to choose from and then the opportunity to choose which icon best fit that word for them; (2) some users wanted to see both a word and an icon for step 2; (3) there were still some problematic word and shape options; (4) users would want to see a preview of their mark throughout the experience; (5) splitting their signature into first name and last name was problematic for users; and (6) the tentative title Make Your Mark conveyed the desired message to users.

Figure 7: steps 1 and 2 from the paper prototype tested in the second round

Staff made changes to the app based on this feedback (fig. 8) and then, when ready, further tested the beta version with users on an iPad. Beta testing confirmed that there were no major issues with the app’s concept, structure, or interface, but that some development bugs still needed to be fixed.

Figure 8: screenshots from steps 1, 2, 3, and 4 in the final Make Your Mark app

Thanks to this iterative approach and the various types of user tests conducted over the course of its development, the Make Your Mark app turned out to be a straightforward and easy-to-use experience for all ages (fig. 9). In the app’s first three months, 48 percent of exhibition visitors reported using it, and nearly nine thousand unique marks were created through January 2016.

Figure 9: age distribution of visitors using the Make Your Mark app

5. Prototype testing through mobile stations and pop-up spaces

The development of the Make Your Mark app is an example of how prototypes and user testing can contribute to the development of a successful interpretive tool suitable for different age groups. Testing for Make Your Mark was carried out in the galleries using a movable cart to hold the prototype materials. This mobile station approach has been used by audience research and evaluation staff, as well as members of the interpretation team and occasionally the exhibition core team, to test various analog and digital interpretive experiences with visitors on site. The stations are moved to highly trafficked locations throughout the museum, or to places where a specific target audience can be found (e.g., visitors with children, teens). With this approach, staff can go straight to the desired users or to whichever part of the museum is busiest, but are limited to testing one prototype at a time.

Test It Lab pop-up spaces

In order to overcome this limitation, in the past six months the research and evaluation team has also experimented with pop-up spaces called “Test It Lab.” The Lab has gone through two iterations, held in September and December 2015, based on the availability of a suitable space and the timing of projects to test. The objectives of Test It Lab are to: (1) test multiple experiences at once and (2) get quick feedback from stakeholders in a way that is both helpful to the museum and rewarding for the visitor (i.e., as a form of engagement for our audiences). The two iterations of the Lab were held in different strategic locations: one in a first-floor gallery space during a period between exhibitions (figs. 10–13) and another in a lounge area near the entrance to the second-floor galleries (figs. 14–16). In both instances, the Lab was staffed by one or more researchers over a four-day period.

Figures 10–13: Test It Lab in its first iteration at the IMA in September 2015

Figures 14–16: Test It Lab in its second iteration at the IMA in December 2015

A number of different prototypes, both analog and digital, were tested in the two iterations of Test It Lab, but the focus here is on the digital ones. Most prototypes were designed to be facilitated using a testing protocol, but a few were self-directed. Self-directed prototypes included signage or a question prompting visitors to respond by completing a survey, voting with stickers, or leaving Post-it notes.

One of the interpretive tools tested in both iterations of Test It Lab was a digital version of the artist’s scrapbook for the exhibition A Joy Forever: Marie Webster Quilts (March 2016–January 2017). The scrapbook appeared in the first Test It Lab as static pages on a touchscreen computer, then in the second iteration as a beta version (fig. 17). Results from the testing indicated that some of the icons were confusing to users, while the color, size, and location of the hotspots either did not draw enough attention or impeded viewing of the elements on the page.

Figure 17: interface design of digital scrapbook tested with visitors in Test It Lab during the second iteration

Other interactives tested included two different applications for the exhibition 19 Stars of Indiana Art: A Bicentennial Celebration (May 2016–January 2017). The first was an interactive prototype of a “BuzzFeed”-style profile quiz, developed using quiz-making software called Interact (tryinteract.com). Testing provided useful feedback about the title of the activity (Whoosier Are You? vs. Hoosi-are You?), as well as issues related to some of the questions and answer choices. For this exhibition, staff also tested a paper prototype of a digital map of Indiana with information connecting the artists featured in the exhibition to the state. From testing a 2.5-foot by 4-foot paper version of the map mounted on foam core (fig. 18), the team gathered valuable information about visitors’ interest in the overall concept, how they chose to access content (by artist, by location, or both), and the type of content they expected to see. This information, along with other user input gathered during testing, will help the team develop a tool that is more intuitive and easier to use.

Figure 18: life-size paper prototype, approximately 2.5 feet by 4 feet, of the digital map for 19 Stars of Indiana Art, tested in the second iteration of Test It Lab

Test It Lab pros and cons

The team has realized several benefits from hosting Test It Lab as a pop-up space. First, having a dedicated space and more resources available allows staff to test larger prototypes and ones that require power outlets or tables and seating. Second, showcasing multiple prototypes at different stages of development makes the process more transparent and gives visitors insight into how museum staff develop these types of tools; furthermore, having a space that looks like an event of sorts piques visitors’ curiosity and prompts them to participate. Third, offering multiple prototypes allows the researcher to tailor the participatory experience to the guest and offer a choice of which interpretive tools or activities to try. Fourth, dedicating a space to gathering feedback in this way demonstrates that the organization supports this visitor-centered approach and values giving guests a chance to try out tools in development and share their thoughts.

Despite the benefits, there are also some challenges associated with this approach. First, testing multiple prototypes at once means that staff are under a time crunch to define ideas; prepare prototypes, protocols, supplies, and equipment for each tool; and schedule time for the space to be facilitated by multiple researchers. Second, staff need to limit the number of interactives or tools being tested so that facilitation remains manageable. Third, scheduling a time for Test It Lab can also be a challenge. Not only does a space need to be available, but the time of year and days of the week also need to be taken into account: the museum does not always have a large number of visitors on weekdays, and testing with certain target audiences needs to occur on particular days (e.g., Family Days) or on weekends when visitation is higher. Fourth, getting feedback on multiple tools at once means analyzing the data collected from these brief bursts of testing and puts pressure on research staff to produce written analysis in short order; researchers have realized they need to become equally efficient in managing analysis and reporting.

Finally, the success of these pop-up spaces depends very much on the characteristics of the place in which they are installed. For instance, Test It Lab II, which was hosted in the more open space immediately outside the galleries on the museum’s second floor, was more successful in terms of the number of visitors who participated. While actual attendance was not counted in the first iteration, 228 visitors entered Test It Lab II, which was noticeably busier. This higher number can be attributed to people being able to check out what was going on without having to enter a separate room. Staff were also able to approach visitors more easily, since they did not have to leave an enclosed space, and because the area had previously served as a lounge, guests could sit down while waiting for their companions to finish giving feedback.

6. Conclusions

As mentioned above, mobile stations and pop-up spaces each had advantages and disadvantages, but overall yielded positive results in terms of both the usability of the products created and the process itself. Thanks to these approaches, staff from across the museum have become more comfortable with testing paper or cardboard prototypes that may look a bit crude but adequately convey the activity to participants. This has also accelerated the process of getting feedback, since a polished product does not have to be presented. A rough-looking prototype in the early stages also conveys to participants that the activity is not yet fully developed and can still be changed with their input. Another benefit is that it encourages visitors to return to the museum for the experiences they have helped to shape.

Given the overall positive results that user testing has yielded so far, as well as the advantages and disadvantages of each testing approach outlined above, the IMA will continue to use a combination of mobile stations and Test It Lab. Realistically, schedules and facilities do not always allow prototype testing to wait until a space for Test It Lab becomes available; thus, Test It Lab will be used when it is suitable and available, and mobile stations in the interim. The IMA will also continue to experiment with the best locations for Test It Lab depending on what projects are being tested and the target audience.

While there continue to be factors to experiment with regarding both the Test It Lab and mobile station approaches, it is clear that offering visitors a seat at the table through user testing and other types of evaluation has allowed us to better incorporate their voices and thus create exhibitions and interpretive tools that are more effective, engaging, and easy to use.

References

Association of Art Museum Directors. (2015). Art Museums by the Numbers 2014. Consulted January 25, 2016. Available https://aamd.org/sites/default/files/document/Art%20Museums%20By%20The%20Numbers%202014_0.pdf

Collerd Sternbergh, M., S. Filippini Fantoni, & V. Djen. (2015). “What’s the point? Two case studies of introducing digital in-gallery experiences.” Museums and the Web 2015. Consulted January 25, 2016. Available http://mw2015.museumsandtheweb.com/paper/whats-the-point-two-case-studies-of-introducing-digital-in-gallery-experiences/

Filippini Fantoni, S. (2014, October). Participatory exhibition design: inviting visitors to be part of the process. MuseumID Conference presentation, Museum of London, London, UK. Consulted January 25, 2016. Available http://www.slideshare.net/SilviaFantoni/presentation-museum-id-conference-v2

Filippini Fantoni, S., K. Jaebker, & T. Leason. (2014). “Participatory Experiences in Art Museums: Lessons from Two Years of Practice.” MW2014: Museums and the Web 2014. Consulted January 25, 2016. Available http://mw2014.museumsandtheweb.com/paper/participatory-experiences-in-art-museums-lessons-from-two-years-of-practice/

LaPlaca Cohen. (2014). Culture Track 2014. Consulted January 25, 2016. Available http://www.laplacacohen.com/culturetrack/

Martin, J., & C. McClure. (1983). Software Maintenance: The Problem and Its Solution. Prentice Hall.

National Endowment for the Arts. (2015a). A Decade of Arts Engagement: Findings From the Survey of Public Participation in the Arts, 2002–2012. Consulted January 25, 2016. Available https://www.arts.gov/publications/decade-arts-engagement-findings-survey-public-participation-arts-2002-2012

National Endowment for the Arts. (2015b). When Going Gets Tough: Barriers and Motivations Affecting Arts Attendance. Consulted January 25, 2016. Available https://www.arts.gov/publications/when-going-gets-tough-barriers-and-motivations-affecting-arts-attendance

Pressman, R. (1987). Software Engineering: A Practitioner’s Approach. New York: McGraw-Hill.


Cite as:
Hellmuth, Emily, Silvia Filippini Fantoni, Tiffany Leason and Jen Mayhill. "A seat at the table: Giving visitors a voice in exhibition development through user testing." MW2016: Museums and the Web 2016. Published February 2, 2016.
https://mw2016.museumsandtheweb.com/paper/a-seat-at-the-table-giving-visitors-a-voice-in-exhibition-development-through-user-testing/