Changing gears: Fast-lane design for accelerated innovation in memory organisations

Johan Oomen, Netherlands Institute for Sound and Vision, The Netherlands, Maarten Brinkerink, Netherlands Institute for Sound and Vision, The Netherlands, Bouke Huurnink, Netherlands Institute for Sound and Vision, The Netherlands, Josefien Schuurman, Netherlands Institute for Sound and Vision, The Netherlands


Audiovisual archives are embracing the opportunities offered by digitisation for managing their work processes and offering new services to a wide array of user groups. Organisation strategy, working processes, and software development need to be able to support a culture where innovation can flourish. Some institutions are beginning to adopt the concept of “two-speed IT.” The core strategy aims to accommodate two tracks simultaneously: one foundational but “slow,” the other innovative, flexible, and “fast.” This paper outlines the rationale behind the two-speed IT strategy. It highlights a specific implementation at the Netherlands Institute for Sound and Vision, a large audiovisual archive and museum. Two-speed IT is enabling Sound and Vision to reach its business objectives.

Keywords: strategy, participatory culture, infrastructure, innovation, asset management, software development

1. Introduction

Museums benefit from fostering a “culture of innovation” as a way to effectively manage ever-changing expectations of user groups, and at the same time make the most of new opportunities offered by technology (Simon, 2011). The fundamental challenge is how to achieve the public missions (i.e., supporting a myriad of users to utilize heritage collections so that they can actively learn, experience, and create). As Douglas Rushkoff (2014) notes, “It’s not about how digital technology changes us, but how we change ourselves and one another now that we live so digitally.” For this, it is essential for museums to have access to technical infrastructure that allows not only for digital asset management but also to “pursue contemporary objectives” (Johnson et al., 2015)—for instance, using new channels for content distribution (e.g., YouTube, Instagram) to engage with new user groups, or using technologies (e.g., linked open data, NLP) to enrich and optimize work processes or allow for creative ways to access collections (Gorgels, 2013).

In this paper, we propose the fostering of innovation for heritage organisations through “two-speed IT” (Bossert, 2015) and the accompanying organisational structure to realise it. The paper is structured as follows: in Section 2, we zoom in on some of the most urgent challenges audiovisual archives face today. In Section 3, we highlight the necessity for audiovisual archives to invest in innovative capital to ensure long-term impact. In Section 4, we introduce the concept of two-speed IT as a way to ensure uptake of innovative IT solutions in a production environment. In Section 5, we illustrate how two-speed IT can function in practice.

2. Audiovisual archives and the future

Audiovisual materials will be as much a part of the future fabric of information as text-based materials are today. As creation will continue to expand, archives will be storing and managing increasingly large collections of assets. Archives operate within a dynamic and multifaceted context. They will grow to become nodes in a network of communities along with other content providers and a variety of stakeholders from industries ranging across education and research, creative industries (publishing, broadcasting, game industry), tourism, journalism, and so on. Recent studies indicate that by around 2025, analogue carriers will need to have been digitised. After that date, it will be impossible to transfer the carriers, due to either technical obsolescence of the playback devices or the state of the physical carriers (Casey, 2015; National Film and Sound Archive, 2015; Wright, 2012).

For many archives, managing “born digital” is already the norm, with analogue collections only growing through donations or acquisitions. So, the future of audiovisual archives is digital. Multiple formats will need to be supported, from the highest industry standards to emerging open-video formats and wrappers. Content, in various formats, will continue to be managed through specialised asset management systems. Metadata will be fine-grained, allowing access at shot or scene level (W3C, 2011). Standards will be adopted to allow interchange between collections (Resource Description Framework, Simple Knowledge Organisation System, Persistent Identifiers, etc.) and to maintain a record of provenance of metadata records as content is distributed online. Navigation across the combination of semantic data and a diverse range of media types is essential. In terms of the value chain of media consumption and production, the position of archives and the roles of archive staff will evolve. Already today, we see the transformation of the traditional role of archivists/cataloguers. The future archivist acts as a media manager, managing assets from their inception all the way through to distribution and long-term storage (Lydon & Kleinert, 2011).

In summary, we envision the future audiovisual archives as being smart, connected, and open, using smart technologies to optimise workflows for annotation and content distribution. Audiovisual archives can do so by:

  1. Collaborating with third parties to codesign and codevelop new technologies in order to manifest themselves as front-runners rather than followers (Brynjolfsson & McAfee, 2014)
  2. Connecting to other sources of information (other collections, contextual sources) and to a variety of often niche user communities, researchers, and the creative industries
  3. Embracing the use of standards defined by external bodies rather than by the cultural heritage communities themselves
  4. Fully adopting “open” as the default to have maximum impact on society by applying open licenses for content delivery, using open source software and open standards wherever possible, promoting open access to publications, and so on

Asset management systems will need to be able to manage various streams of metadata: (1) metadata exported from production systems, (2) expert annotations, (3) machine-generated metadata, (4) crowdsourced annotations and other sources, and (5) knowledge extracted from secondary sources related to content.
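To make this stream model concrete, the following sketch shows how annotations could carry provenance for each of the five streams, so that enrichments remain traceable to their origin. All identifiers and field names are illustrative assumptions, not the data model of an actual asset management system:

```python
from dataclasses import dataclass, field
from typing import List

# The five metadata streams named in the text, used as provenance labels.
STREAMS = {"production", "expert", "machine", "crowd", "secondary"}

@dataclass
class Annotation:
    field_name: str      # e.g. "speaker", "location"
    value: str
    stream: str          # provenance: one of STREAMS
    confidence: float = 1.0

@dataclass
class AssetRecord:
    asset_id: str
    annotations: List[Annotation] = field(default_factory=list)

    def add(self, annotation: Annotation) -> None:
        if annotation.stream not in STREAMS:
            raise ValueError(f"unknown metadata stream: {annotation.stream}")
        self.annotations.append(annotation)

    def by_stream(self, stream: str) -> List[Annotation]:
        return [a for a in self.annotations if a.stream == stream]

record = AssetRecord("asset:12345")
record.add(Annotation("title", "Journaal 1956-11-04", "production"))
record.add(Annotation("speaker", "W. Drees", "machine", confidence=0.87))
print(len(record.by_stream("machine")))  # 1
```

Keeping the stream label on every annotation allows, for example, machine-generated metadata to be filtered out or re-ranked separately from expert cataloguing when search indexes are built.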

With respect to ensuring long-term storage, archives need to make fundamental choices between storing content on servers they own, using cloud storage, or opting for mixed models. Other choices relate to the type of storage media (tape, optical, solid state, hard drives) and the adaptation of standardised working processes to ensure digital durability.

The dynamics between the creative industries and archives will change. Archive staff and “creatives” will be working more closely together than ever before (Eskevich et al., 2013). This creates ample opportunities, for instance to play a more proactive role in the production process and to suggest topics for new programmes based on gems from the archive. This relates to the future role of archives as curators of vast amounts of content. Filters need to be applied to provide meaningful access to vast collections. These filters can be created by machines (recommender systems), by experts, or by a smart combination of both.

3. Fostering a “culture of innovation”

The legal foundation of archives differs from organisation to organisation. Some organisations are established by law as separate entities (legal deposits); others are part of larger organisations like museums, libraries, universities, or broadcasters. In many cases, audiovisual collections are maintained by public bodies and in effect serve public missions, but not exclusively. Commercial footage libraries and other commercial entities (e.g., search engines and video platforms) are also looking after growing bodies of audiovisual heritage, albeit with other primary motivations than providing access to gain knowledge or support creative processes. Private archives are a growing area of importance, notably those created by the billions of people carrying smartphones that allow for high-quality multimedia recording. Personal archiving is starting to be addressed (Redwine, 2015) but is still a huge area of research. Established archives are investigating to what extent they can help ensure long-term access to these collections. Many commercial players are active in this domain, from social networks to cloud storage providers. Given this context, it is key for “traditional” archives to educate their constituents about the value they bring to society through securing the sharing of knowledge, a prerequisite for democracies to function; but also, perhaps more down to earth, to educate and entertain communities and individuals and to facilitate the exchange of ideas between various stakeholders.

Over the past years, we have participated in many online and offline discussions in the audiovisual archive domain. Below are some of the main subjects that will impact their future position given the context in which they operate.

Foremost, audiovisual archives are in a challenging position, operating as custodians of (mostly) in-copyright works whilst also managing the public’s expectations in providing online access. Copyright rules need to be modified to allow memory organisations to provide access to their collections. A balance needs to be found between giving creators remuneration for the use of their works and allowing the guardians of those works to provide public access for various user groups. As a fundamental rule, content added to the public domain should stay in the public domain (Communia, 2011). Also, memory organisations should consider adopting an “open by default” access policy, so as to lead by example. Archives could also consider liaising with rights owners and studying the possibility of providing access to commercially unviable (i.e., out-of-commerce) content with few restrictions (European Commission, 2012). Modernisation of copyright regulations should look into collective licensing and other ways to decrease the burden of obtaining copyright permissions. With respect to newly created material, creators should be encouraged to use Creative Commons licenses to foster a culture of innovation and creativity. For works commissioned by public institutions, the use of open licenses could be made compulsory (European Commission, 2015).

Impact needs to be measurable and measured wherever and whenever possible, not only because archives are asked to be accountable for how resources are spent, but also to build solid business cases that will enable future investments, be they in services or supporting infrastructures. Following the Balanced Value Impact Model, we can distinguish between Internal, Innovation, Economic, and Social Impact (Tanner, 2012). Impact metrics also need to take into account new types of use. Already, material from archives is shared using open licenses (e.g., on platforms such as Wikipedia) (Brinkerink, 2015). Use on these third-party platforms needs to be monitored if possible; alternatively, qualitative evidence needs to be gathered. Audio- and video-fingerprinting can be used to track content usage over various platforms.

As a result of digitisation, archives and their users are sharing the same information space. To fully realise their potential, archives need to ensure that their collections are available where users reside. A practical implication of this truism is that institutionally maintained access points such as searchable archive catalogues should not be the only way to access collections. On the Web, content likes to travel, and archives must embrace this fact, at the least by making their catalogues findable for online search engines and shareable on social media platforms, but more fundamentally by providing developers with application programming interface (API) access to the catalogue and content, and by adopting machine-readable copyright labels to facilitate access (Chan & Cope, 2015). In this way, third parties can “build upon” online collections (e.g., publishers that integrate resources in learning environments). Following this “liberalisation” of content, a new paradigm emerges that allows archives to focus their efforts on “super serving” niche communities such as filmmakers, media scholars, and amateur historians.
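One minimal way to realise machine-readable copyright labels is embedding a schema.org description with a license URI in each catalogue page, so that search engines and harvesters can interpret the rights status. The sketch below is illustrative; the identifiers and titles are invented and do not reflect any institution’s actual catalogue:

```python
import json

def describe(asset_id: str, name: str, license_url: str) -> str:
    """Build a schema.org VideoObject record (JSON-LD) with a
    machine-readable license, suitable for embedding in a catalogue page."""
    record = {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "identifier": asset_id,
        "name": name,
        "license": license_url,
    }
    return json.dumps(record, indent=2)

print(describe("asset:12345", "Polygoon newsreel, 1956",
               "https://creativecommons.org/licenses/by-sa/4.0/"))
```

Because the license is expressed as a resolvable URI rather than free text, downstream platforms can filter for reusable material automatically.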

Archives benefit from fostering a “culture of innovation” as a way to effectively manage ever-changing expectations of user groups, and at the same time make the most of new opportunities offered by technology (Mckeown, 2012). For this, it is essential for archives to have access to technical infrastructure that allows not only management of digital assets, but also the pursuit of contemporary objectives in line with user expectations. For instance: using new channels for content distribution such as YouTube and Instagram to engage with new user groups; using technologies such as linked open data and natural language processing to augment and optimize work processes; or allowing for creative ways to access collections. A “culture of innovation” will also open possibilities to increase the level of cooperation with academia in areas ranging from digital humanities to computer science.

4. Introducing two-speed IT

Bossert (2014) outlines how organisations need to have capabilities in four distinct areas in order for them to remain successful as their operations and services are increasingly digitised:

First, because the digital business model allows the creation of digital products and services, companies need to become skilled at digital-product innovation that meets changing customer expectations. […] Second, companies need to provide a seamless multichannel (digital and physical) experience so consumers can move effortlessly from one channel to another. For example, many shoppers use smartphones to reserve a product online and pick it up in a store. Third, companies should use big data and advanced analytics to better understand customer behavior. For example, gaining insight into customers’ buying habits—with their consent, of course—can lead to an improved customer experience and increased sales through more effective cross-selling. Fourth, companies need to improve their capabilities in automating operations and digitizing business processes. This is important because it enables quicker response times to customers while cutting operating waste and costs.

In order to deliver on a timely basis, a software development practice of “testing, failing, learning, adapting, and iterating rapidly” (Bossert, 2014) needs to be in place. However, applying an “experimental” development approach in an operational context that includes critical back-end (legacy) systems is hardly possible, nor is it appropriate. As a way to cope with this fundamental incompatibility, organisations can choose to adopt a digital product management model, coined “two-speed IT.” This accommodates two tracks, or “speeds,” simultaneously: a “slow” foundational speed and a “fast” innovative speed. Below, we introduce the concept as it may be used in the heritage domain.

4.1 Two-speed IT in the heritage domain

In the heritage domain, managing digital assets and embracing innovation are characterised by very different dimensions—in terms of standards used, in terms of partnerships, in terms of managing investments over time, in terms of accountability, in terms of staff expertise, and so on.

For the “slow” speed, standardised and off-the-shelf solutions are used to secure 24/7 service. The solutions are updated following service-level agreements with suppliers. In the heritage domain, good examples are systems for managing storage, cataloguing, play-out, and ordering. Given the impact, the frequency of updating applications in the “slow” ecosystem is not high and is measured in months or years rather than weeks.

The “fast” speed features mostly tailor-made solutions that cater to very specific user requirements and are used to experiment with new technologies. The applications do not have very stringent requirements regarding stability and minimum “uptime” (and are in some cases maintained by the developers themselves). For instance: experimental visualisations of datasets, automatic metadata extraction services, and online magazines linked to current exhibits. This is the “speed” most closely connected to creating highly personalised experiences (Rodney, 2016).

Both ecosystems have their specific infrastructures, applications, development, and staging environments, as well as suppliers. As highlighted in figure 1, they overlap partly, for instance when ecosystems make use of similar underlying streams of data. In practice, the “conversion” from slow to fast is a process driven by business requirements. What is key is to optimise the systems and processes of each ecosystem for its own purpose: stability for the “slow” speed, flexibility for the “fast.”

Figure 1: the two-speed IT ecosystems

Our illustrative use case is the Netherlands Institute for Sound and Vision (hereafter also referred to as “Sound and Vision” and “the institute”). Sound and Vision is a leading audiovisual archive with a growing digitised collection of 1.9 million objects (ranging from film, television, and radio broadcasts to music recordings and Web videos) and a museum that attracts approximately 250,000 visitors annually. Born-digital assets are ingested in a state-of-the-art digital repository accessible both online and in the museum.

While putting two-speed IT into practice, Sound and Vision has been inspired by leading examples in the cultural field, which have been putting a similar “culture of innovation” into practice, also bringing the two speeds together. For instance, the Rijksmuseum has built a state-of-the-art, high-quality, and immense online collection on top of the data from its internal ADLIB catalogue, while taking the idea of an active audience more than seriously with their Rijksstudio (Gorgels, 2013). Also, the award-winning and Webby-nominated Walker Art Center website couldn’t have been built without a strong emphasis on in-house and iterative Web development (Simon, 2011).

Another source of inspiration is the API-driven technology “stack” of the Cooper Hewitt, enabling innovative ways to unlock the TMS collection database, both online and on site (Chan & Cope, 2015). The stack of the Cooper Hewitt connects two proprietary servers: the collection database (TMS) and the database that knows about the visitors. In the context of this paper, these servers are positioned in the “slow” speed ecosystem. An API allows the creation of a range of software applications, including the website and the interactives in the exhibits. As Meyer (2015) notes, “the [Cooper Hewitt] museum made a piece of infrastructure for the public. But the museum will benefit in the long term, because the infrastructure will permit them to plan for the near future.”

The API is also connected to third-party sources including social media platforms such as Flickr and Instagram, creating richer and more personalised experiences for visitors.

5. Two-speed IT in practice

Sound and Vision has ensured the successful transition to the digital domain after completing a seven-year, 90 million euro programme to digitise its analogue assets. Today, it has one of the largest collections of digital heritage assets in the world, totalling over 15 petabytes. Recently, a multiannual innovation agenda was adopted, consisting of five research themes: (1) automatic metadata extraction and big data analysis, (2) new access paradigms, (3) understanding users, (4) digital durability, and (5) the impact of media. As an integral part of the transition to the digital domain, a new mission statement, a new strategic plan (covering 2016 to 2020), and a new organisational structure were defined and implemented. A guiding principle was the conviction that the success of memory organisations lies in their ability to make the abovementioned notions of “smart,” “connected,” and “open” an integral part of their strategies (Oomen & Aroyo, 2011; Ridge, 2014). Sound and Vision adopted two-speed IT as one of the key design principles.

Before implementing two-speed IT, software development had been mainly executed by third-party software developers. Also, there were few formalised connections between research and development (R&D) and the rest of the organisation, making it hardly possible to implement results from R&D in daily operations.

Hence, two-speed IT needed to be grounded also in the organisational structure. Today, three departments are jointly responsible for delivering successful IT solutions: R&D, Development, and Production and Maintenance (figure 2).

Figure 2: departments working on two-speed IT at the institute

The departments have the following responsibilities:

  1. R&D (Research and Development): implementing the research agenda through participation in research projects. Software development by scientific programmers.
  2. Development: translating business requirements into functional requirements. Development following Scrum. Evaluating output of R&D projects. Creating services and applications that foster and adopt innovation.
  3. Production and Maintenance: ensuring the uptime of applications, installing new versions and patches from third-party suppliers according to set service-level agreements.

Small and medium enterprises (SMEs) also play a key role, as they are involved in the successive stages: they are partners in R&D programmes and later help bring the resulting solutions into production and maintain them.

Note that the three departments are not responsible for the business requirements and product ownership of the services developed. The business units, Archive (responsible for the collection management and access) and Museum (operating the Sound and Vision museum and online presentation), are responsible for this.

Handling of the flow and resources between the teams will be addressed by adopting the concept of (living) labs that allow Sound and Vision to experiment and explore innovative concepts in a near-production environment, in close-to-real-life scenarios with realistic data sets. As we require the labs to be based upon production system infrastructure and protocols, the uptake of successful concepts in the production environment can be much smoother. Obviously this requires close tuning and synchronisation between the three departments involved.

5.1 First adoptions: Speaker labeling and entity extraction

At Sound and Vision, an off-the-shelf asset management system (DAAN, Digital Audiovisual Archive Netherlands) from supplier Vizrt forms the foundation of the “slow” ecosystem (Vizrt, 2015). The asset management is the “core” of the archive and includes services to search, view, select, license, and order digital media assets. Next to DAAN is a more agile “fast” ecosystem of tailor-made solutions for distinct functionalities, notably open source search and automatic metadata extraction. This is the layer where output of research can be implemented in production workflows.

Following the two-speed IT, Sound and Vision successfully deployed automatic speaker labelling (a result from a research project with Radboud University) in 2014, speeding up the annotation process and offering a new access point to the collections. In 2015, technology to extract names of people, places, events, and organisations from subtitle files was introduced. This was originally developed in collaboration with the University of Amsterdam. In both cases, spin-off companies from universities are playing an important role in the process, as they were involved in developing the initial demonstrators with academics and currently are working under a service-level agreement with the Production and Maintenance department.
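The entity-extraction workflow can be illustrated in miniature: subtitle files are stripped of their timing information, and the remaining text is scanned for known names. The sketch below uses a tiny hard-coded gazetteer purely for illustration; the production systems mentioned above use trained natural language processing models, and all names and labels here are invented:

```python
import re

# Toy gazetteer standing in for a trained NER model (illustrative only).
GAZETTEER = {
    "Amsterdam": "PLACE",
    "Willem Drees": "PERSON",
    "Tweede Kamer": "ORGANISATION",
}

# Matches SRT cue numbers ("1") and timing lines ("00:00:01,000 --> ...").
SRT_TIMING = re.compile(r"^\d+$|^[\d:,]+ --> [\d:,]+$")

def extract_entities(srt_text: str) -> list:
    """Strip SRT numbering/timings, then match gazetteer entries in the text."""
    lines = [l for l in srt_text.splitlines() if l and not SRT_TIMING.match(l)]
    text = " ".join(lines)
    return [(name, label) for name, label in GAZETTEER.items() if name in text]

sample = """1
00:00:01,000 --> 00:00:04,000
Willem Drees spreekt vandaag in de Tweede Kamer.
"""
print(extract_entities(sample))
# [('Willem Drees', 'PERSON'), ('Tweede Kamer', 'ORGANISATION')]
```

The extracted (name, label) pairs correspond to the annotations that are written back into the catalogue as new access points.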

5.2 Scaling up two-speed IT for online availability

Plans are underway for a major revision of the access strategy. Using the off-the-shelf asset management system as the back end, distinct access scenarios will be supported. This will be the ultimate test case for the “two-speed IT” approach, as it impacts multiple departments and requires significant investments in the infrastructure.

Revising the access strategy started with identifying three ways to make collections available online that are in line with the strategic plan for 2016 to 2020:

  1. Public accessibility: providing access to the collections for everyone. This is one of the core missions of the institute. It ensures everyone can search through the entire catalogue online and—if possible—also directly consult the material. If the material isn’t digitized yet, or if rights are not cleared for online consultation, alternative options for accessing the material will be presented to the user.
  2. Online presentation: highlighting core parts and specific items from the collection, in high quality. This allows the institute to enact its role as a museum in an online environment. It involves interpretation and contextualisation of the collection through singling out and featuring individual objects. Being able to experiment with different online forms of presentation is key here. Active participation of the online audience will play an important role in ensuring that production of meaning is not limited to the perspective of the museum curator.
  3. Open reusability: enabling reuse of the “open” metadata and content by individuals and by third-party applications (through APIs). This supports the “open by default” access policy that the institute adheres to, allowing third parties to reuse collections that fall in the public domain or for which, where relevant, the IPR is held by Sound and Vision.

Figure 3: schematic overview of the three forms of online availability

These three forms (figure 3) together clarify what online availability constitutes for the institute. Subsequently, they provide a starting point for the formulation of functional and technical requirements for the infrastructure that is required to support them.

The three forms of online availability are not tied to a particular service or organisational department, as they transcend the business units Museum, Archive, and R&D. Furthermore, they provide insight into the different ways in which the institute achieves its tasks and goals in relation to online availability of its collections. The different forms of online availability are complementary to, and interdependent on, each other. The realisation of the three forms of online availability is an important step towards a holistic online strategy for Sound and Vision. Therefore, it is crucial to ensure that these forms are realised simultaneously. This averts fragmentation of the online services provided by the institute and reflects the complementary nature and mutual reinforcement of the three forms.

The next step in defining the online strategy was making an inventory of the functional and technical requirements for the three forms of online availability:

For Public Accessibility, it is of prime importance that the entire collection is found easily. This requires proper integration into popular search engines, detailed metadata, and search functionality that meets the needs of the user. Navigating the collection must be intuitive and insightful (e.g., supported with automated recommendations and contextualised with links to internal and external collections). When material can be made available online, it is important that the quality and user experience meet the expectations of everyday consumers of online media services. These are constantly evolving. When only metadata is available for online consultation, alternative viewing options (in situ consultation or ordering a physical copy) need to be offered. All of this relies heavily on sound administration of the relevant features and information in the catalogue.

Online Presentation is about highlighting specific items from the collection and placing them in a rich, meaningful context. This is managed through curation, research, and editorial work and is supported by automatic, data-driven curation mechanisms. It is the ambition of the Museum unit to experiment and innovate in the online presentation forms. The user’s own voice is of great importance here. An important requirement is therefore to support public participation, enable a dialogue with the user, and give the audience an active role in the production of meaning.

The Open Reusability of the collection is primarily supported by capturing specific (IPR) information about the collection in the catalogue, particularly the application of open licenses or the fact that an item is part of the public domain. This information will then be made explicit in the appropriate form for both humans and (search) machines. To support various types of use of the content, it should be available in high-quality and open-media formats (both consultation copies and source files). For automated forms of reuse by third parties, programmatic access to the collection (metadata and content) through an open API—without login—is a requirement.

The three forms of online availability provide opportunities to strengthen each other. For example, the contextualisation (automated or curated) that results from the Online Presentation could greatly enhance the intuitive navigation of the online catalogue that is provided under Public Accessibility, and vice versa.

To meet the functional and technical requirements for the three forms of online availability summarised above, several new components need to be introduced into the digital infrastructure of Sound and Vision. The three most important ones are listed below:

Transcoding and online consultation copies

Online consultation copies are intended as end products that enable the user to consult the archival material online (and also to reuse the material in the case of Open Reusability). Playback quality and the file formats that are utilized must be in line with the expectations of everyday consumers of online media services (like YouTube and Netflix), which are constantly evolving. Important to note here is that the online consultation copies must be generated for only a small part of the entire archive, namely where the IPR status permits online consultation. In the current situation, this is an estimated 0.180 percent of the entire collection (and 0.600 percent of the digital collection). The proportion available for reuse is estimated to be 0.032 percent of the entire collection. The nature of the online consultation copies comes with higher demands on the playback quality (for an optimal user experience) and the variety of file formats (for different purposes and devices, in a rapidly changing technical environment). This differs from the so-called low-res preview proxies that have already been generated for the entire digital archive, for use in the media asset management (MAM) system.
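The generation of consultation copies can be pictured as deriving a small ladder of Web-friendly renditions from each rights-cleared source file. The sketch below only builds FFmpeg command strings; the renditions, bitrates, and paths are illustrative assumptions and not Sound and Vision’s actual transcoding profile:

```python
# Illustrative rendition ladder for online consultation copies.
RENDITIONS = [
    {"name": "720p", "height": 720, "v_bitrate": "2500k"},
    {"name": "360p", "height": 360, "v_bitrate": "800k"},
]

def transcode_commands(source: str, stem: str) -> list:
    """Build one ffmpeg command per rendition (commands are not executed here).
    H.264/AAC in MP4 with 'faststart' is a common choice for Web playback."""
    cmds = []
    for r in RENDITIONS:
        cmds.append(
            f"ffmpeg -i {source} "
            f"-c:v libx264 -b:v {r['v_bitrate']} -vf scale=-2:{r['height']} "
            f"-c:a aac -b:a 128k -movflags +faststart "
            f"{stem}_{r['name']}.mp4"
        )
    return cmds

for cmd in transcode_commands("masters/12345.mxf", "proxies/12345"):
    print(cmd)
```

Because expectations of playback quality keep evolving, keeping the ladder in data (rather than hard-coded pipelines) makes it cheap to add or retire renditions later.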

Data processing and enrichment

Online availability of the Sound and Vision archive is reliant on four sources. Source files are stored in the Digital Archive, which can be accessed via the MAM. The MAM also serves as a catalogue and contains the metadata of all objects. The Thesaurus ensures consistent use of descriptive terms and serves as a bridge linking the various collections in the archive to (external) context sources. To assist navigation and automated contextualisation, a media and data analysis component is required that builds on the abovementioned sources. Automatic annotation and linking is performed by various algorithms. The resulting annotations are, if possible, put back into the MAM catalogue (as metadata enrichments).
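The enrichment loop described above can be sketched as: read a record from the catalogue, run an analysis step, normalise its output against the thesaurus, and write the enrichments back. In this minimal sketch, the in-memory dictionaries stand in for the real MAM and thesaurus, and the analyser is a toy word matcher; the record content is invented:

```python
# Stand-ins for the real systems (illustrative data only).
MAM = {"asset:12345": {"description": "Opening van de Deltawerken",
                       "enrichments": []}}
THESAURUS = {"deltawerken": "THES:Deltawerken"}  # free text -> controlled term

def analyse(text: str) -> list:
    """Toy analyser: map description words onto thesaurus concepts."""
    return [THESAURUS[w] for w in text.lower().split() if w in THESAURUS]

def enrich(asset_id: str) -> None:
    """Run analysis on a catalogue record and write enrichments back."""
    record = MAM[asset_id]
    for term in analyse(record["description"]):
        if term not in record["enrichments"]:   # idempotent write-back
            record["enrichments"].append(term)

enrich("asset:12345")
print(MAM["asset:12345"]["enrichments"])  # ['THES:Deltawerken']
```

Making the write-back idempotent matters in practice: algorithms are rerun as they improve, and repeated runs should not duplicate enrichments in the catalogue.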

Flexible back end

The flexible back end is an API service architecture that combines data from multiple internal and external collections and context sources. It functions as an intermediate layer in the digital infrastructure, in order to extend the standard functionality provided by the MAM with specific functionality required for the three forms of online availability. The resulting combined data is exposed via various service APIs to which online platforms and third parties can connect (depending on the goal).
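As a minimal sketch of this intermediate layer, a single endpoint can combine a catalogue record with context from other sources before exposing it to a front end. The lookup functions and field names below are hypothetical placeholders for the MAM and external context sources, not an actual API of the institute:

```python
def mam_lookup(asset_id: str) -> dict:
    """Placeholder for a metadata query against the MAM."""
    return {"id": asset_id, "title": "Journaal 1956-11-04"}

def context_lookup(asset_id: str) -> dict:
    """Placeholder for an external context source (e.g. linked data)."""
    return {"related_articles": ["Hongaarse Opstand"]}

def asset_endpoint(asset_id: str, include_context: bool = True) -> dict:
    """Combine the MAM record with context sources into one API response,
    leaving the MAM itself untouched."""
    response = dict(mam_lookup(asset_id))
    if include_context:
        response["context"] = context_lookup(asset_id)
    return response

print(asset_endpoint("asset:12345"))
```

The design point is that the combination happens in this layer rather than in the MAM, so new context sources or response shapes can be added at the “fast” speed without touching the “slow” core system.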

The abovementioned functional and technical requirements and proposed components also offer new possibilities beyond the three forms of online availability. First, the flexible back end can support experimentation, innovation, and incubation based on the Sound and Vision archive by combining direct access to the digital archive and collection metadata with new forms of media and data analysis and/or audience participation. This is relevant for internal R&D, but also for experimentation and innovation on the basis of the archive by (research) partners and startups. It allows the various parties to experiment (casually, exploratively, or in a defined lab setting) with the enrichment of the archive. This corresponds with the desirable behavior in ecosystem “fast” of two-speed IT. Second, the infrastructure described above offers opportunities for new ways to provide access to and/or present the digital collection within the walls of the institution. Besides exploiting the digital archive and collection metadata, the (automated) enrichment, contextualisation, and linking of the items is reused in the museum context. This provides opportunities for better integration of online and on-site museum experiences.

On the one hand, continuity of the museum exhibitions during opening hours is of high importance and should therefore reside in ecosystem “slow” of two-speed IT. On the other hand, to provide a more contextualised, dynamic, and personalised experience for the visitor, it is also important to be able to expose the museum visitor to the innovations happening in ecosystem “fast” from the outset. This allows for incremental development of new museum experiences and also provides opportunities for audience engagement beyond what ecosystem “slow” can offer.

6. Conclusions

In this paper, we proposed that heritage institutions consider adopting “two-speed IT” to enable innovation in parallel with maintaining a stable core digital infrastructure. In presenting our case, we sketched our vision of the potential future role of archives and some of the challenges they will face in fulfilling that potential. We also gave a brief outline of the concept of two-speed IT and proposed embedding it as a dual pair of ecosystems for cultural heritage organisations.

Our proposal is based on the successful implementation of two-speed IT at the Netherlands Institute for Sound and Vision, which has adapted the approach to innovate with new technology while maintaining a stable, standardised IT infrastructure. This has had an impact on the strategy and organisational structure of the institute. Following the implementation of this approach, cutting-edge speaker identification and automatic entity-extraction techniques were successfully implemented in production systems in 2014 and 2015. This year, an ambitious programme on online access has been initiated (see Section 5).

Over the past two years, we have learned a great deal about two-speed IT and have found it to be a well-suited strategy for ensuring that the outcomes of research and innovation projects find their way into production systems. With the experience gained, we look forward to implementing the new access strategy and further developing the two-speed IT model within our own organisation. We hope other museums and memory organisations will also consider creating and maintaining the technical prerequisites to flourish in an online networked environment.

In future work, we will compare how different organisations are adopting similar or alternative approaches to the two-speed challenge, with the aim of substantiating our initial findings further.


Besser, Howard. (2015). “Personal Digital Archiving: Issues for Libraries and a Summary of the PDA Conference.” Cape Town, South Africa. Available

Bossert, Oliver. (2015). “Organizing for digital acceleration: Making a two-speed IT operating model work.” McKinsey & Company. Consulted January 12, 2016. Available

Brinkerink, Maarten. (2015). “Dutch Cultural Heritage Reaches Millions Every Month.” Netherlands Institute for Sound and Vision. Published June 17, 2015. Consulted February 16, 2016. Available

Brynjolfsson, E., & A. McAfee. (2014). “The second machine age: Work, progress, and prosperity in a time of brilliant technologies.”

Casey, Mike. (2015). “Why Media Preservation Can’t Wait: The Gathering Storm.” IASA Journal 44 (January): 14–22.

Chan, Sebastian, & Aaron Cope. (2015). “Strategies against architecture: interactive media and transformative technology at Cooper Hewitt.” MW2015: Museums and the Web 2015. Published April 6, 2015. Consulted February 16, 2016. Available

Communia. (2011). “Policy Recommendations.” Published June 2011. Available

Eskevich, Maria, Gareth J.F. Jones, Robin Aly, Roeland Ordelman, Shu Chen, Danish Nadeem, Camille Guinaudeau, Guillaume Gravier, Pascale Sébillot, Tom De Nies, Pedro Debevere, Rik Van de Walle, Petra Galuscáková, Pavel Pecina, & Martha Larson. (2013). “Multimedia information seeking through search and hyperlinking.” Proceedings of the 3rd ACM conference on International conference on multimedia retrieval. Dallas, Texas. Available http://hal.archives-ouvertes.fr/icmr106-eskevich.pdf

European Commission. (2012). “Memorandum of Understanding (MoU) on Key Principles on the Digitisation and Making Available of Out-of-Commerce Works.” Published September 20, 2012. Consulted February 16, 2016. Available

European Commission. (2015). “European legislation on reuse of public sector information.” Published July 23, 2015. Consulted February 16, 2016. Available

Gorgels, P. (2013). “Rijksstudio: Make Your Own Masterpiece!” In N. Proctor & R. Cherry (eds.). Museums and the Web 2013. Silver Spring, MD: Museums and the Web. Published January 28, 2013. Consulted February 16, 2016. Available

Johnson, L., S. Adams Becker, V. Estrada, & A. Freeman. (2015). “NMC Horizon Report: 2015 Museum Edition.” Austin, Texas: The New Media Consortium.

Lydon, Maggie, & Anna-Lena Kleinert. (2011) “BBC Roles for Archivists: Developments of archivists roles and competencies.”

Mckeown, M. (2012). “Adaptability: The art of winning in an age of uncertainty.” London: Kogan Page.

Meyer, Robinson. (2015). “The Museum of the Future Is Here.” Published January 20, 2015. Consulted February 16, 2016. Available

National Film and Sound Archive. (2015). “Deadline 2025: Collections at Risk: Discussion paper.” Consulted February 16, 2016. Available

Oomen, J., & L. Aroyo. (2011). “Crowdsourcing in the Cultural Heritage Domain: Opportunities and Challenges.” In C&T ’11, Proceedings of the 5th International Conference on Communities and Technologies. Brisbane, ACM.

Redwine, Gabriela. (2015). “Personal Digital Archiving.” Technology Watch Report. Digital Preservation Coalition. December.

Ridge, M. (2014). “Crowdsourcing our cultural heritage.” London: Ashgate.

Rodney, Seph. (2016). “The Evolution of the Museum Visit, from Privilege to Personalized Experience.” Published January 22, 2016. Consulted February 16, 2016.

Rushkoff, Douglas. (2013). “Present shock: when everything happens now.” New York, NY: Current.

Simon, Nina. (2011). “Digital Museums Reconsidered: Exploring the Walker Art Center Website Redesign.” Museum 2.0. Published December 7, 2011. Consulted February 16, 2016. Available

Tanner, S. (2012). “Measuring the Impact of Digital Resources: The Balanced Value Impact Model.” Report, Open Content Exchange Platform.

Vizrt. (2015). “Dutch national TV and film archive choose Viz One for MAM.” Published March 18, 2015. Consulted February 16, 2016. Available

W3C. (2011). “Media Fragments URI 1.0.” Media Fragments Working Group. Published December 2011. Consulted February 16, 2016. Available

Wright, Richard. (2012). “Preserving Moving Pictures and Sound.” DPC Technology Watch Report 12-01. Digital Preservation Coalition. March 2012. Available

Cite as:
Oomen, Johan, Maarten Brinkerink, Bouke Huurnink and Josefien Schuurman. "Changing gears: Fast-lane design for accelerated innovation in memory organisations." MW2016: Museums and the Web 2016. Published January 19, 2016.