Winning (and losing) hearts and minds of museum staff: Administrative interfaces at Cooper Hewitt

Sam Brenner, Cooper Hewitt, Smithsonian Design Museum, USA, Lisa Adang, Cooper Hewitt, Smithsonian Design Museum, USA

Abstract

As museums continue to produce interactive digital experiences, a large amount of attention is paid to the front end—the part of the experience that is seen by visitors. However, as the scope and complexity of these bespoke experiences grow, so does the need for administrative tools that facilitate their control, monitoring, and content production. The design of these interfaces presents its own set of challenges. This paper will survey multiple administrative interfaces produced at Cooper Hewitt and analyze their design, development, and use. It will assess the role of such interfaces in the larger context of the institution’s transformation to place digital technologies at the core of the museum. Ultimately, it is the aim of this paper to share observations on the development of administrative interfaces at Cooper Hewitt in order to emphasize their growing importance in museums and present strategies for handling their production.

1. Introduction

As museums increasingly invest in producing unique interactive digital experiences, a large amount of attention is paid to the facet of the experience designed for visitor interaction. Less discussed is the fact that complex visitor experiences demand sophisticated internal museum tools to help staff manage the data and the tasks associated with gallery interactives. There are many solutions that museums should consider during planning and implementation of digital gallery experiences, ranging from out-of-the-box to customized to fully bespoke software. This paper shares two case studies from Cooper Hewitt, Smithsonian Design Museum (CHSDM) that focus on internally designed, built, and maintained administrative tools with the intention of bringing increased attention to the considerations and resources that these types of staff-facing interfaces require.

The interface design of administrative tools presents a unique set of challenges which can greatly impact the outcome of visitor-facing experiences. These tools deserve careful consideration because they can affect staff members’ willingness to adopt and engage with new technologies. By producing administrative tool interfaces that speak directly to user needs, museum technologists can help staff gain ownership of digital infrastructures. The result is a museum more involved with and invested in the work of producing and maintaining rich, digitally enhanced visitor experiences.

The case studies presented here discuss Tagatron, a Web-based application used largely by Curatorial and Education museum staff, and the Pen Pairing Station, a tool comprising a hardware interface and tablet-based application used at the museum’s front desk by Visitor Experience associates. Both of these administrative tools are integral parts of the recently launched new experience at CHSDM, and they respond to the new workflows around data creation and management that came along with the suite of visitor-facing digital experiences the museum now offers. (For more on the new experience at Cooper Hewitt, see Chan & Cope, 2015.) The Tagatron and Pen Pairing Station case studies recount the process of designing, building, and maintaining the tools in the following stages of development: Identifying a Need, Concepting, Developing a Minimum Viable Product, Onboarding Users, and Gathering Feedback. Both sections also present reflections on Lessons Learned in the tool development process. The case studies conclude with proposed questions to ask of administrative tools, intended to clarify unknowns about the design of both the tool itself and the system in which it resides.

2. Tagatron

Figure 1: Tagatron v1, objects list page

Figure 2: Tagatron v2, objects list page

  • Function(s): to allow curatorial staff to create tag and relation data, linking it to collection object records that visitors can explore in the in-gallery Collections Browser interactive table application
  • Language: back end built with NodeJS (v1) and PHP (v2); front end built with HTML, CSS, and JavaScript (both versions)
  • Intended Platform: desktop computer
  • Users: more than twenty curators, curatorial assistants, and interns
  • Time in Production: forty hours (v1); eighty hours (v2)
  • Time in Use to Date: twelve months (v1); six months (ongoing) (v2)

Tool overview

Tagatron is a tool that allows curatorial staff to associate collection objects with metadata that is ill-suited for storage in The Museum System (TMS), the museum’s main collections management software package. Tagatron handles two types of metadata: tags and relations. Tags, in this context, follow Trant’s (2009) definition as one or two words that “identify and categorize” objects. A relation, as defined by CHSDM, is a link between two objects from the museum’s collection (one on display and one in storage) that share, at a curator’s discretion, formal or conceptual properties.
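To make the two metadata types concrete, here is a minimal sketch of how a tagged and related object might be represented. The field names and values are illustrative assumptions, not Tagatron’s actual schema.

```python
# A minimal, illustrative representation of Tagatron's two metadata
# types; field names are assumptions, not the tool's actual schema.
primary_object = {
    "tms_id": 18446435,  # hypothetical TMS object identifier
    "tags": [
        {"text": "floral", "lens": "motif"},   # formal visual quality
        {"text": "children", "lens": "user"},  # application/audience
    ],
    # relations: links to objects in storage that share formal or
    # conceptual properties with this object on display
    "relations": [18446501, 18446502],
}
```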

At the time of CHSDM’s reopening in December 2014, newly installed interactive tables featured the Collections Browser application to provide visitors “a highly visible way [to explore] the breadth of the collection” (Chan & Cope, 2015). Defining and applying tags and relations to objects on display was a critical task in advance of the reopening, because these metadata power core Collections Browser functionality.

Identifying a need

Six months before discussions on Tagatron began, Local Projects, the design firm contracted to build the Collections Browser, worked with curators to record tag and relation information in Excel spreadsheets. In order to populate the application with sufficient data, curatorial staff and Local Projects agreed on the number of tags and relations for each object, outlining the following requirements: every object on display (i.e., primary object) should relate to eight objects not on display (i.e., related objects); all primary and related objects should have six tags each; and tags should divide evenly between two tag lens categories. Lenses were devised as a taxonomic guide for staff applying tags to objects. For example, using the “motif” lens suggests tags should describe the formal visual qualities of an object, while the “user” lens suggests tags should relate to the object’s applications and intended audience. The aim of the lens categories was to reduce the number of “redundant tags,” which provide no new information outside of existing museum documentation (Trant, 2009).
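As a rough illustration, the requirements above reduce to a simple completeness check. The sketch below assumes the object representation shown earlier; it is not Tagatron’s actual validation code.

```python
def meets_reopening_requirements(obj):
    """Check one primary object against the agreed goals: eight
    related objects, six tags, and tags divided evenly between the
    two lens categories. Field names are illustrative assumptions."""
    if len(obj["relations"]) != 8 or len(obj["tags"]) != 6:
        return False
    per_lens = {}
    for tag in obj["tags"]:
        per_lens[tag["lens"]] = per_lens.get(tag["lens"], 0) + 1
    # six tags split evenly across two lenses means three per lens
    return sorted(per_lens.values()) == [3, 3]
```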

The spreadsheet system that museum staff and Local Projects were using to organize tags and relations was, as one Curatorial staff member described it in a post-project interview, “a pain” (Tagatron, 2016a). Another curator thought in retrospect it was the most “efficient” way to “see and manage [tag and relation] information, although no one did those [things] consistently” (Tagatron, 2016b). From a workflow perspective, Excel may have been an accessible and manageable tool for the job; however, from a data perspective, the spreadsheets were rife with inconsistencies in formatting and language. By May 2014, after six months of tagging and relating with the spreadsheet-based system, museum staff managed to record 164 primary objects (22 percent complete), 1,439 related objects (24 percent complete), and 4,071 tags (10 percent complete). (Percentages are in relation to the museum’s reopening goals.) While these metrics show admirable effort on the staff’s part, it was clear that in order to reach the museum’s data creation goals in time for the reopening, encourage consistency of data, and create a structure that would be scalable for the future, CHSDM needed to implement a different system.

Concepting

In reassessing the data preparation strategy for the Collections Browser, discussions between Cooper Hewitt’s Digital and Emerging Media team (D&EM) and Local Projects settled on a standalone administrative Web application, later called Tagatron. These discussions yielded wireframe sketches of the interface, produced by Local Projects, illustrating two main views: an objects list page (figure 3) and object detail pages (figure 4). The objects list page consisted of a column of object information (both primary and related objects) that included TMS-derived metadata such as designer, department, and a thumbnail image. The design for this page also incorporated filtering options based on data completion to allow users to see which records met established tagging and relating requirements. The object detail page wireframes depicted an object image with associated TMS metadata, as well as tag and relation information. This page also allowed users to input new tag and relation metadata.

Figure 3: Local Projects’ “Tagging Interface” object list page wireframe sketch

Figure 4: Local Projects’ “Tagging Interface” object detail page wireframe sketch

Developing a minimum viable product

After rapidly moving through the concepting phase with Local Projects, D&EM staff commenced development of the application. One full-time software developer took on Tagatron, setting the goal of delivering a minimum viable product (MVP) within three days. The time constraint was intended to create an efficient route to building and deploying a working Tagatron prototype, maximizing the time curatorial staff would have to learn and use the new interface and leaving room for iteration. The developer selected NodeJS for the back-end environment, Express as the application framework, and MongoDB as the database, recognizing that the project would be contained in scope and potentially a good opportunity to experiment with these technologies.

The initial release of Tagatron was extremely streamlined. Boiling the feature set down to core MVP requirements, the developer omitted several elements suggested in the early wireframe sketches: an individualized login system was replaced with a single shared username and password; tag auto-completion was forgone; and an additional view that grouped objects by shared tags was eliminated.

3. Onboarding users and gathering feedback

After the development sprint, it was time to introduce CHSDM staff to the first version of the Tagatron interface. The D&EM team introduced the first group of users, largely from the Education and Curatorial departments, to Tagatron. In a live demonstration, they learned how the interface could be used for filtering, tagging, and relating objects. On her initial impression of the tool, one curator expressed relief, saying, “suddenly there was finally a way that we were going to be able to get this [work of tagging and relating] to where we needed it to go.” The same curator also mentioned, however, that her experience of the first version of Tagatron was that it contained “all these [interface] problems” (Tagatron, 2016b). Responding to these issues became the overwhelming focus of the developer’s time over the following weeks. D&EM staff took on a support role to respond to user requests for assistance as Curatorial and Education staff began using Tagatron on a regular basis. These help sessions provided the opportunity to collect casual feedback from users, helping D&EM staff identify various issues with Tagatron, as well as the values and understandings that informed users’ experiences with the application.

Lesson learned from user feedback: Filtering and task management

By observing curatorial staff members’ workflow in the early stages of the Collections Browser project, the Tagatron developer knew that division of labor was key to facilitating adoption of the tool. Since staff were using Tagatron to collaborate across curatorial departments on a common task, the developer added a filtering layer to the object database that allowed users to sort objects by the department that created the record, mirroring how objects were originally organized in the curators’ spreadsheets (figure 5). In this way, staff members from the Product Design and Decorative Arts curatorial department, for example, could see all of the object records created by their own department. They could also add a second layer of filtering—complete/incomplete—to see if all of their department’s records met minimum metadata requirements; the initial logic held that they could therefore assess the amount of work left to be done.

The developer, however, was surprised to observe a strong reaction from users against this organization of tasks in Tagatron. Because this issue was so central to the practical implementation of the tool within the workflow of curatorial staff, it negatively affected users’ view of Tagatron. According to one staff member, “the curators discovered it was much more challenging [to tag other curatorial departments’ objects] and everyone agreed that the first round of tagging should be done by the people who know the most about it” (Tagatron, 2016b). In response, the Tagatron developer added the ability to view records according to museum department, a filter that identified the department to which the collection object belonged (figure 6). Using this filter, curators could return to tagging records only for objects in their own segments of the collection.

Figure 5: The Tagatron homepage when it was initially launched

Figure 6: The Tagatron homepage after further iterations, showing expanded filters and search functionality

Lesson learned from user feedback: Interface feedback and trust

In fielding requests for assistance with the first version of Tagatron, museum technology staff observed that the complete/incomplete filter caused users significant frustration. Users applied the filter intermittently or at the end of a Tagatron work session to check their work and ensure that they had filled in all tags and relations. When they did, however, objects for which they had just entered metadata still showed up as incomplete. The problem was technically simple: the Web browser’s back button, the most convenient way to return to the list of objects, loaded a cached version of the page without the recently completed work. The user experience issue it created, though, was immense: it propagated user distrust in Tagatron, leading users to think that their work was not being saved and to do needless work creating duplicate records. One curator reflected, “What I remember being the biggest issue really quickly, which was sort of an anxiety initially, was about doing all this work in something that we didn’t know about its longevity. […] Tagatron is not something that any of us trusted at the time to keep the information, take it, and put it anywhere that the rest of us could ever go back to again” (Tagatron, 2016b).

This anxiety was twofold: users were uncertain about how their work would manifest in the finished Collections Browser interface, and they lacked clear and immediate feedback from within the Tagatron interface to show their input was being saved (Tagatron, 2016b). Confirmation that their work would be put to use in the Collections Browser application would have to come later, with the installation of the interactive tables, but a technical adjustment to Tagatron could provide users reassurance that tags were being saved. The developer updated the website to force the page to reload itself from the server even when accessed through the back button. The result of the change was increased trust; as one curator explained after the update, “I trust [Tagatron] a lot more now and trust that it remembers, or it acknowledges that it’s remembering, what has been done in the past in a way that I can see” (Tagatron, 2016b).
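The paper does not detail the mechanism, and Tagatron’s back ends were NodeJS (v1) and PHP (v2), but the class of fix is simple to sketch. Here it is in Python with Flask, using HTTP cache headers to keep the browser from serving a stale objects list on back-button navigation.

```python
# Sketch only: Tagatron itself ran on NodeJS and later PHP. The idea
# is to send cache headers that force the browser to re-request the
# page from the server, even when it is reached via the back button.
from flask import Flask

app = Flask(__name__)

@app.after_request
def never_cache(response):
    # "no-store" disallows serving this response from the browser
    # cache, so recently saved tags and relations always appear.
    response.headers["Cache-Control"] = "no-store, must-revalidate"
    return response
```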

4. Recognizing larger problems with database logic

Although the first iteration of Tagatron resolved some issues, it also revealed new ones. The change in filtering rules, made so users could see objects by curatorial department of origin, proved incompatible with the way data was being stored in MongoDB. As a result, curators began noticing duplicate records where two different departments had drawn relations to the same object. This was problematic; as one curator recounted, “You were never finished, it felt like. [The work] was supposed to disappear from your plate when it was done, but it never did because there would always be a duplicate record” (Tagatron, 2016a). While user interface tweaks could mitigate some of this confusion, these problems affirmed the developer’s larger feeling that MongoDB was no longer the appropriate choice of database for Tagatron. The database’s dynamic schema was inhibiting the ability to quickly address issues with the application. The developer was generally inexperienced with MongoDB’s paradigm for filtering and reorganizing elements (i.e., map-reduce) and felt that any idea he wanted to test in the application would first have required a total reorganization of the data.

Tagatron’s database was also causing problems for users because it did not reflect up-to-date object information from TMS. Because Tagatron copied object data into its own database at the time of import, subsequent changes in TMS, such as new photographs or cataloging information, were not carried over. This created a complication for users, who now needed to verify Tagatron information against TMS (Tagatron, 2016a). More generally, Tagatron was unsustainable as a source of truth; its data needed to be brought back into CHSDM’s main MySQL database, which powers the collections website and the API. As a solution, the developer established the database schema in MySQL and wrote import scripts to transfer the data.
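The import scripts themselves are not described in the paper; a hypothetical sketch of one, with assumed collection, table, and column names, might look like this.

```python
# Hypothetical one-time import: copy tag data out of Tagatron's
# MongoDB store into the main MySQL collections database. All
# collection, table, and column names here are assumptions.
import pymongo
import pymysql

mongo_db = pymongo.MongoClient()["tagatron"]
mysql_db = pymysql.connect(host="localhost", user="tagatron",
                           password="secret", db="collection")

with mysql_db.cursor() as cursor:
    for obj in mongo_db["objects"].find():
        for tag in obj.get("tags", []):
            cursor.execute(
                "INSERT INTO object_tags (object_id, tag, lens) "
                "VALUES (%s, %s, %s)",
                (obj["tms_id"], tag["text"], tag["lens"]),
            )
mysql_db.commit()
```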

Due to time constraints before the museum’s reopening, from this point forward iterations on Tagatron were limited to minor tweaks. Larger flaws in the application, such as synchronization issues or frustrations with MongoDB as a database, would have to wait. Despite the remaining concerns, measurements of the number of tags and relations over time show that Tagatron collected data at a greater pace than the Excel spreadsheets (figures 7 and 8). It is unclear how much this trend was driven by efficiencies in Tagatron itself versus the general ramp-up to the museum’s reopening, but according to CHSDM metrics, the introduction of Tagatron clearly correlated with an increased tagging pace.

Figure 7: Non-unique tags over time

Figure 8: Related objects over time

5. Rebuilding Tagatron

Using MongoDB for the Tagatron database caused a number of issues on the front and back ends, so the developer initiated a complete rebuild of the application after the museum had reopened, when minimal tagging work was scheduled. On the back end, this meant removing a layer of separation between Tagatron’s database and the main MySQL collections database. The rebuild also entailed eliminating MongoDB and NodeJS, as well as adding methods to the collection API that allow tags and relations to be added to and removed from the main database directly (figures 9 and 10).

Figure 9: Application architecture of Tagatron v1

Figure 10: Application architecture of Tagatron v2
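The paper does not name the API methods added in the rebuild, so the call below is purely illustrative: the REST-style endpoint mirrors Cooper Hewitt’s public collection API, but the method name and parameters are assumptions.

```python
# Illustrative only: the method name and parameters are invented for
# this sketch; the paper does not document the actual API additions.
import requests

API_ROOT = "https://api.collection.cooperhewitt.org/rest/"

def add_tag(object_id, tag, lens, access_token):
    response = requests.post(API_ROOT, data={
        "method": "cooperhewitt.objects.tags.add",  # hypothetical method
        "access_token": access_token,
        "object_id": object_id,
        "tag": tag,
        "lens": lens,
    })
    return response.json()
```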

Updating the front end to address filtering and feedback concerns involved eliminating “tagging department” and “museum department” as primary forms of filtering and replacing them with “exhibitions” and “object packages” (figure 2). This structure anticipated future tagging endeavors linked to exhibitions and took advantage of Curatorial staff’s use of exhibition- and object package-based organization of objects in TMS. The elimination of “tagging departments” also allowed duplicate objects to be merged and ensured that Tagatron users could clearly see whether an object had already been tagged or related.

Removing the layer of separation between Tagatron and the main collection database eliminated the need for objects to be manually imported into Tagatron, which relieved the D&EM team of a great deal of maintenance work. It also enabled the developer to add “accession number” as a third method of primary filtering. Users responded positively to these changes, noting that the absence of duplicate records and the more familiar filtering tools made the tagging much easier (Tagatron, 2016a).

Reflections on building, iterating, and maintaining Tagatron

The creation of Tagatron coincided with the implementation of a suite of new visitor experiences at Cooper Hewitt. It fit into a system of solutions designed to help staff manage the new tasks associated with producing the new experience. Tagatron is a small but important part of this system, and its function is highly targeted: it allows Curatorial and Education staff to create and manage tag and relation data for the Collections Browser interactive table application.

The D&EM staff who built and maintain Tagatron learned from user feedback that in addition to the basic functionality of the tool, it needed to have features to help users parse tasks and divide responsibilities. In turn, they found that the functions Tagatron needed to provide for managing responsibility should reflect curators’ sense of ownership and expertise in particular facets of the collection. More thorough user research up front might have surfaced these needs and values sooner, but D&EM staff appreciate that working so closely with users—fielding ongoing feedback, providing assistance, and iterating—supplied much of the insight needed to understand the relationship users had to the tool so that they could work toward improving Tagatron.

In the context of the Tagatron database, the experiment of using MongoDB and NodeJS was a failure. The D&EM staff’s limited comfort with and knowledge of these technologies kept them from keeping pace with Tagatron’s growing number of users and issues. While experimenting with technology is an essential responsibility of the D&EM team at Cooper Hewitt, in retrospect it is clear that the number of workflows dependent on this newly centralized museum tool made it a poor candidate for unfamiliar and untested back-end architecture. Ultimately, this decision necessitated a full rebuild of Tagatron’s back end, which helped address a number of usability concerns but delayed the technology staff’s response to them.

6. Pen Pairing Station

Figure 11: Pen Pairing Station interface, v1

Figure 12: Pen Pairing Station interface, v2

  • Function(s): to allow Visitor Experience associates (VEAs) at the museum’s front desk to upload a unique shortcode to a visitor’s Pen so their visit can be retrieved later
  • Language: Python back end, HTML/CSS/JavaScript front end
  • Intended Platform: Raspberry Pi running the application; Nexus 9 tablet displaying the user interface; Pen gateway board
  • Users: fifteen to twenty VEAs
  • Time in Production: sixty hours (v1); sixty hours (v2)
  • Time in Use to Date: three months (v1); seven months (ongoing) (v2)

Tool overview

The interactive Pen was a significant part of the new experience unveiled at Cooper Hewitt’s reopening. The Pen is a tool that allows visitors to collect data throughout the museum; the museum stores that data and associates it with an identifying visit code (i.e., shortcode) so that visitors can retrieve their visit by logging into the Cooper Hewitt website (https://cooperhewitt.org/you) with the code. The Pen Pairing Station is a tool designed to allow staff to do the work of associating a Pen with a particular visit shortcode, as well as to upload and clear the data from returned Pens.

The Pen Pairing Station (figure 13) comprises a Raspberry Pi computer (RPi) running a Python Web application and a Nexus 9 tablet computer running the user interface. The two communicate with each other using WebSockets. The RPi communicates with the Pen using Near-Field Communication (NFC) via a custom-fabricated circuit board (the Pen gateway board); it connects to Cooper Hewitt’s API over HTTPS. A barcode scanner connected to the Nexus 9 tablet allows shortcodes to be transferred from a visitor’s admission ticket to the interface of a Pen Pairing Station.

Figure 13: The Pen Pairing Station in situ; the Raspberry Pi and Pen gateway board are inside the lower enclosure
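As a sketch of the RPi-to-tablet link described above, the server below pushes pairing status to the tablet’s browser over a WebSocket. Whether the production tool used the Python websockets library, and these JSON message shapes, are assumptions; only the RPi/tablet/WebSockets split comes from the paper.

```python
# Sketch of the RPi-side WebSocket server that the tablet UI connects
# to. The "websockets" library and the JSON message shapes are
# assumptions; only the architectural split is from the paper.
import asyncio
import json
import websockets

async def handler(ws):  # recent websockets versions; older ones also pass a path
    async for raw in ws:
        request = json.loads(raw)
        if request.get("action") == "pair":
            await ws.send(json.dumps({"status": "waiting_for_pen_press"}))
            # ... NFC write via the Pen gateway board would happen here ...
            await ws.send(json.dumps({"status": "success"}))

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # serve until interrupted

asyncio.run(main())
```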

Identifying a need

The primary function of the Pen Pairing Station is to write a four- to five-character alphanumeric shortcode to the onboard memory of a Pen. This pen pairing process involves a Visitor Experience associate (VEA) pressing and holding the flat end of a Pen against a target on the Pen gateway board for ten to fifteen seconds. Pen pairing is integrated into the overall ticketing flow as staff vend or collect tickets, distribute Pens, and orient visitors to the museum building, exhibitions, and new experience. For this reason, the Pen Pairing Station was created to be fast and stable, and to communicate to multitasking staff at a glance. Beyond the user interface, the pairing process was designed to encompass a number of tasks outside a typical user’s view. The Pairing Station was built to communicate with the CHSDM API so that the museum’s databases record and timestamp the link between a visit and a Pen. It also uploads Pen data to the Cooper Hewitt API and runs a number of checks to ensure data integrity along the way.
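In outline, that behind-the-scenes work reads as a short sequence of steps. Every helper in this sketch is a stand-in, since the paper describes the tasks but not the code.

```python
# High-level outline of the pairing flow described above; every helper
# here is a stand-in for hardware or API code not shown in the paper.
def pair_pen(shortcode, pen, api, nfc):
    visit = api.register_visit(shortcode)   # record and timestamp the visit/Pen link
    nfc.write_shortcode(pen, shortcode)     # the 10-15 second press against the board
    leftovers = nfc.read_collected_items(pen)
    if leftovers:                           # data remaining from a previous visit
        api.upload_pen_data(pen, leftovers)
    assert nfc.read_shortcode(pen) == shortcode  # one of several integrity checks
    return visit
```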

Concepting and developing a minimum viable product

The design studio Tellart built a command-line version of the Pen Pairing Station to demonstrate functionality, and CHSDM soon joined to collaborate on the design of a graphical user interface. Through the design process, Tellart and CHSDM narrowed the focus of the MVP down to four main actions nested into tabs: Pen pair (“Pen check-out”), Pen return (“Pen check-in”), pair multiple Pens (“batch”), and review completed operations and debug (“tools”) (figure 11). The pairing tab had a text-entry field for VEAs to input shortcodes, a button to initiate pairing, and a status field for visual feedback. The Pen return tab had buttons for users to initiate reading and clearing Pen data. After building the interface to these specifications, Tellart handed the code over to CHSDM to handle iterating on and supporting the tool. After final interface tweaks, and shortly before the Pen’s March 10, 2015, launch, D&EM staff introduced the Pen Pairing Station to VEAs, demonstrating its capabilities for writing and reading Pen data.

7. Lesson learned from user feedback: Context, context, context

Perhaps most critical to the Pen Pairing Station’s successful implementation was D&EM staffer Katie Shelly’s observation that VEAs used the Pen Pairing Station as just one small component of the ticket vending and visitor orienting process (Shelly, 2015). Originally, the cluttered interface of the Pen Pairing Station required too much time and attention from users; it needed to be redesigned to better function in the periphery. As one VEA recently reminded the D&EM staff, “In terms of the role of my job… this [is] just a tool and this is just a small part of what I actually do. I’m not focusing that much on [all the details of the interface]. I’m more focused on my relationship to the visitor and what I’m saying and how that exchange is going” (Pen Pairing Station, 2016b).

To gather feedback on the usability of the interface, Shelly began observing VEAs using the Pairing Stations at the front desk two months after their deployment (Shelly, 2015). In three sessions covering both busy and quiet periods, she identified a number of issues with the Pairing Stations. VEAs indicated that they found some of the labeling on the interface incompatible with their everyday language. The original wording used on the tool’s interface grew out of the D&EM staff’s understanding of the Pen pairing processes, but to fit into the workflow and culture at the museum’s front desk, the language needed to shift toward the VEAs’ fast-paced, visitor-facing role.

In her early observation sessions, Shelly also found that VEAs lost time reloading the application interface after every Pen pairing. The behavior seemed to develop because of a prominently placed “refresh” button, but reloading the page was not required—the interface reset itself between pairings automatically (figure 14). The button existed for use in the rare case of an unrecoverable error. Beyond the user interface design issue, the D&EM staff interpreted this behavior as signaling a need for visual feedback that the interface was ready for the next Pen pairing, and they determined to address this in the next set of interface iterations.

Figure 14: v1 interface during pen pairing: waiting to begin (left), waiting for pen press (middle), success (right)

Iterating the front-end interface in response to users

Figure 15: Wireframe sketches for Pen Pairing Station v2

Based on her observations at the front desk Pairing Stations, Shelly produced sketches that became a template for the first front-end interface iteration (figure 15). The redesign removed the multiple text-feedback fields from the first version and used the space instead to enlarge important call-to-action buttons and provide a single point for feedback. Errors and warnings also took more prominence in the redesign. D&EM staff reoriented the language used in the interface to be more specific to the tasks of VEAs. As one staff member later remarked, “‘Check out/check in’ was confusing so now the words are ‘pair’ and ‘return.’ […] We tell [visitors] to ‘return the Pen’ and that correlates with returning the Pen on the tablet [interface]” (Pen Pairing Station, 2016a). To give users feedback that the Station was ready for the next pairing, the D&EM staff included a reset countdown, a five-second animation of a growing circle, in the redesign (figure 16, right).

Figure 16: v2 interface during pen pairing: waiting to begin (left), waiting for pen press (middle), success (right)

8. Improving fault tolerance

Some of the earliest changes to the Pen Pairing Station also addressed its back end. One addition to the application’s functionality allowed the Pen Pairing Station to confirm input of a valid shortcode by checking it against the Cooper Hewitt database; this prevented an error from occurring when a shortcode was incorrectly scanned or input into the tool. This addition, combined with preexisting logging and visit registration API calls, shifted responsibilities to remote servers (figure 17). Each server introduced a new point of failure, and when an outage occurred, the software was not capable of handling it. This meant that writing a shortcode to a Pen—the primary function of the Station and a function that is independent of network connection—could not take place, and therefore the distribution of Pens to visitors would have to temporarily cease.

Figure 17: Diagram of the Pen Pairing Station’s network dependencies; each connection symbolizes a point of failure
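The shortcode validity check might look something like the following. The endpoint is the museum’s public API root, but the method name is an assumption; only the check-before-pairing idea comes from the paper.

```python
# Sketch of the shortcode validity check; the method name is an
# assumption. Failing fast on a mis-scanned or mistyped shortcode
# avoids wasting a 10-15 second NFC write on a bad code.
import requests

API_ROOT = "https://api.collection.cooperhewitt.org/rest/"

def shortcode_exists(shortcode, access_token):
    response = requests.post(API_ROOT, data={
        "method": "cooperhewitt.shortcodes.getInfo",  # hypothetical method
        "access_token": access_token,
        "shortcode": shortcode,
    })
    return response.ok and "error" not in response.json()
```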

The most significant iteration to the back end of the Pen Pairing Station was to make it more tolerant of faults in network connectivity. Following computer scientist Brian Randell’s research, the new design created an application that could “switch to the use of [a] spare component,” where the spare component is “not merely a copy of the main component,” but “of independent design, so that… it [could] cope with the circumstances that caused the main component to fail” (Randell, 1975). The implementation of a logging system had provided a degree of fault tolerance to the original tool, but its shortcoming was that it offered a long-term remedy for a problem that, from the perspective of the VEAs, needed solving immediately. The pairing failures concerned VEAs because of the impression they made on visitors. As one associate explained, “A lot of it beyond the pairing piece is how people react to experiencing new technologies. So, if I am explaining and I’m trying to make it look all seamless and effortless, if it’s not working […] that gives them a little bit of lack of confidence” (Pen Pairing Station, 2016b).

In response, the developer modified the back end to ignore network errors and attempt to complete the Pen pairing process anyway. This ensured that the process of writing the shortcode to the Pen, an operation less prone to errors, completed successfully. Because the role of VEAs and their view of Pen pairing are independent of network connection, this decision helped staff operate the front end of the interface with less interruption. Warnings (adapted from error messages in an earlier interface iteration) moved feedback on network connectivity to the sidelines while still making it available to VEAs. The remote components, specifically the Cooper Hewitt API, were similarly updated to be more tolerant of actions happening out of order. For example, if a Pen called the API to report that it had collected an object and the API had no record of that Pen being paired (due to a network problem), the API could perform the processes that had previously been skipped so that no data was lost.
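A minimal sketch of that control flow shows the shape of the change: network failures downgrade to warnings and are queued for later reconciliation rather than aborting the local NFC write. NetworkError, the api and nfc objects, and the retry queue are all stand-ins for code the paper does not show.

```python
# Sketch of the fault-tolerant pairing flow; every name below except
# the control flow itself is a stand-in.
import queue

class NetworkError(Exception):
    """Stand-in for connection errors raised by the API client."""

retry_queue = queue.Queue()  # operations to replay once the network returns

def pair_pen_fault_tolerant(shortcode, pen, api, nfc):
    warnings = []
    try:
        api.register_visit(shortcode)            # remote call, may fail
    except NetworkError as err:
        # Don't abort: the NFC write below is independent of the network.
        warnings.append(f"Visit not yet registered upstream ({err}).")
        retry_queue.put(("register_visit", shortcode))
    nfc.write_shortcode(pen, shortcode)          # the Station's core duty
    return warnings                              # surfaced to VEAs as warnings
```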

9. Reflections on building, iterating, and maintaining the Pen Pairing Station

While the improved interface and the increased fault tolerance of the Pen Pairing Station created a simpler, more functional tool, subsequently gathered feedback provides a clear direction for future iterations. In a recent interview, one VEA explained his view that filtering appropriate information according to user needs is an even more critical adjustment than language alone. In particular, he found the added warning messages to be “almost incomprehensible because they are talking about stuff [VEAs] don’t really know about.” He continued, “For us it’s more [important if] it worked or it didn’t work. So sometimes we get different error messages and we are down there going, ‘What does that mean?’” (Pen Pairing Station, 2016b). Further iterations of the interface will work to separate information according to user needs and to optimize the Pen Pairing Station to perform its core duty quickly and efficiently on both the back and front ends.

The Pen Pairing Station gave Cooper Hewitt an opportunity to explore how an administrative tool can both allocate and shift responsibilities. D&EM staff included the network connection warning messages in the front-end interface under the initial assumption that the tool would empower VEAs to perform triage on pairing problems. However, the reality of VEAs’ responsibilities and expertise at the museum’s front desk demands a tool streamlined for maximum efficiency and clarity. Future iterations will act on the main finding of user research around the Pen Pairing Station: the tool must function smoothly as part of the complex choreography of the museum’s front desk, where a multitasking VEA operates it in front of museum visitors.

10. Administrative tools: The big picture

In designing, building, and maintaining Tagatron and the Pen Pairing Station, Cooper Hewitt uncovered a number of challenges. Through the process of developing administrative tools, D&EM staff learned important lessons about responding to user needs, values, and workflows with considered interface design and back-end architecture. They also came to understand the scope of resources required in such an undertaking, as well as the rewards of designing and implementing staff-facing software in-house. Specifically, CHSDM benefited from the cross-departmental conversation that these tools enabled—in taking on the role of building and maintaining Tagatron and the Pen Pairing Station, the D&EM team engaged users from departments across the museum and observed closely how the tools fit into staff members’ larger roles.

The process of fine-tuning the administrative tools’ interfaces based on feedback caused D&EM staff to consider users’ relationships to the applications and the work they facilitate, bringing awareness to the fact that these tools function as important intermediaries between staff and new technologies introduced into museum galleries. In this position at the locus of change, administrative tools take on a mitigating role; therefore, their interfaces should be as accommodating as possible to user needs. Technology staff who implement these tools should be prepared to field users’ frustrations with the tool interfaces as they iterate designs over time, and they should also understand that they will need to answer to users’ more general reactions to new technologies and new responsibilities. In this way, taking on the challenge of designing and implementing administrative tools is a fantastic way for museum technology staff to become better attuned to the systemic ripple effect new visitor-facing technologies can catalyze, and to take their position as both implementers of new technologies and ambassadors of progress.

11. Addendum: Questions to ask when implementing administrative tools at your museum

These questions, informed by our experience implementing Tagatron and the Pen Pairing Station, are intended to give shape to your museum’s strategy for design and production of administrative tools.

Question 1: To what degree should the tool fit with preexisting notions?

One of the main issues overall with Tagatron was how it filtered objects. While it was functionally capable of filtering, the way it did so did not fit the mental models its users held for how objects are organized in a museum. With the Pen Pairing Station, we saw how the tool, which necessarily added responsibilities to the role of the VEA, was simplified so that those added responsibilities remained manageable. By better anticipating preexisting expectations and mental models, we might have been able to create a more comfortable environment for users.

Question 2: How much of the underlying technology should come through to the interface?

A different way to consider the previous question is to frame it around the underlying technology, since the best mental model for a tool’s interface might not reflect the best technical model for its back end. The initial filtering functionality of Tagatron, for example, was a simple reflection of how the data was stored in the database. When the interface had to change to provide a different form of filtering, however, the back end proved to be inflexible. The initial Pen Pairing Station displayed complex error messages that came through unmodified from the piece of infrastructure that raised the error, which confused users. When building an interface for any underlying technical process, there is the option of adding a layer of abstraction so that the best designs for both front and back ends can be realized.
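One lightweight form of that abstraction layer is a translation table from raw infrastructure errors to messages in the users’ own vocabulary. The codes and wording below are invented for illustration.

```python
# Illustrative abstraction layer: raw error codes (invented here) are
# mapped to messages phrased for the people operating the tool.
USER_FACING = {
    "ECONNREFUSED": "Couldn't reach the museum server. Pairing will still finish.",
    "ETIMEDOUT": "The network is slow right now. Pairing will still finish.",
}

def translate_error(raw_code):
    return USER_FACING.get(raw_code,
                           "Something went wrong. Please try this Pen again.")
```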

Question 3: What kinds of feedback does the tool provide?

With Tagatron, we saw how missing feedback created distrust in the tool’s ability to do its job, whereas with the Pen Pairing Station, we saw how too much feedback created confusion. This underscores the importance of considering the user’s perspective when designing a tool. While a rapidly iterative design and development process allowed us to identify and address these issues once they were introduced, it was unable to anticipate them. Holistic design approaches, such as design thinking (Mitroff Silvers, Wilson, & Rogers, 2013), might have allowed us to anticipate preferences for feedback so we could organize our larger systems accordingly.

Question 4: Is it an appropriate time for experimentation?

We used Tagatron as an opportunity to experiment with NodeJS and MongoDB, two technologies with which we lacked experience. We learned from our attempts to succeed with those technologies. But by following those initial architectural decisions through to how they negatively impacted both our own ability to maintain the project and the users’ ability to filter and organize their work, we can see that in this case it would have been better to stick to familiar ground, or at least to constrain the scope of our experimentation to less-critical components of the project. Indeed, our most technically successful experiments with less-familiar technologies have been on projects with a much smaller scope of work (Brenner, 2015).

References

Brenner, S. (2015). “Label Writer: Connecting NFC tags to collection objects.” Consulted January 13, 2016. Available http://labs.cooperhewitt.org/2015/label-writer-connecting-nfc-tags-to-collection-objects/

Chan, S., & A. Cope. (2015). “Strategies against architecture: interactive media and transformative technology at Cooper Hewitt.” MW2015: Museums and the Web 2015. Consulted January 13, 2016. Available http://mw2015.museumsandtheweb.com/paper/strategies-against-architecture-interactive-media-and-transformative-technology-at-cooper-hewitt/

Mitroff Silvers, D., M. Wilson, & M. Rogers. (2013). “Design Thinking for Visitor Engagement: Tackling One Museum’s Big Challenge through Human-centered Design.” In N. Proctor & R. Cherry (eds.). Museums and the Web 2013. Silver Spring, MD: Museums and the Web. Consulted January 13, 2016. Available http://mw2013.museumsandtheweb.com/paper/design-thinking/

Pen Pairing Station. (2016a). User 1, personal interview. Conducted January 12, 2016.

Pen Pairing Station. (2016b). User 2, personal interview. Conducted January 13, 2016.

Randell, B. (1975). “System Structure for Software Fault Tolerance.” IEEE Transactions on Software Engineering SE-1(2), 220–232.

Shelly, K. (2015). “Happy Staff = Happy Visitors: Improving Back-of-House Interfaces.” Consulted January 13, 2016. Available http://labs.cooperhewitt.org/2015/happy-staff-happy-visitors-improving-the-interfaces-of-back-of-house-pen-ticketing-tools/

Tagatron. (2016a). User 1, personal interview. Conducted January 5, 2016.

Tagatron. (2016b). User 2, personal interview. Conducted January 5, 2016.

Trant, J. (2009). “Tagging, Folksonomy and Art Museums: Early Experiments and Ongoing Research.” Journal of Digital Information 10(1). Consulted January 13, 2016. Available http://journals.tdl.org/jodi/article/view/270/277

