Finding value beyond the dashboard
Alex Green, Wellcome Library, UK, Annelise Andersen, Wellcome Library, UK
Abstract
In evaluating digital projects, the focus for many cultural institutions has been on quantitative methods, using defined metrics to identify and measure impact, value, and success. Continued efforts to develop standardised frameworks and tools, such as aggregate influence scores and dashboard templates, have meant that this is increasingly viewed as quick, broadly applicable, and easily understood. However, there remains dispute over whether this form of measurement, which places numbers and statistics above more subjective, qualitative response, is too reductive. This, in turn, prompts questions for us when evaluating our digital projects:
- What are we actually measuring, and what are the numbers really telling us?
- Why might these results be significant and privileged over other forms of insight?
- Are we at risk of oversimplifying and narrowing evaluation processes for digital projects because of the abundance and relative ease of digital evaluation tools?
While recognising the value of Web analytics, we have sought to engage with the above questions when designing evaluation of digital projects at Wellcome Library. Drawing from Simon Tanner's Balanced Value Impact model, we explore iterative approaches to evaluation through agile frameworks, which can be adapted to allow for changing priorities and iterated for the needs of different digital projects. Through a multimodal approach, we aim to combine rigorous data analysis with qualitative methods to reveal emergent patterns and behaviour, in order to better understand the often complex relationships our users have with our digital offer. This approach has generated rich, nuanced data, giving greater understanding of our audiences and clear pathways to improve our digital services. Though it remains in tension with many institutions' reporting requirements, it is an approach that is lively with possibility and accommodates the continual evolution and range of response in users' experiences.
Keywords: evaluation, analytics, Google Analytics, audience research, usability, visitor studies
“21st century museums are places of transition. The populations which they serve are increasingly diverse. Technology is altering the ways in which information is accessed and the processes by which culture is formed. As public institutions, museums are expected to be responsive and adaptive to these changes at a time when public investment is decreasing. Museums must compete for public resources by justifying the value they contribute to individuals and to society.” (Scott et al., 2014: 7)
1. Introduction
Over the past decade the cultural sector has seen a proliferation of digital work, and today millions of people worldwide engage with the arts and cultural sector through digital channels. Digital schemes, projects, festivals, strategies, and other kinds of work now sit comfortably alongside more traditional modes of production in the cultural sector (Harley et al., 2006). Today's continuously evolving digital landscape offers new creative opportunities to those working in and with culture. Digital technologies have demonstrated the ability to significantly change the interactions and behaviours, and the roles and boundaries, between cultural producers and consumers in both expected and unexpected ways (Borgman, 2003; Blandford et al., 2004). As Bakhshi et al. (2010: 6) confirm, "our research shows that not only are new digital technologies bringing new audiences to arts and cultural organisations, they are creating new sources of cultural and economic value, and in some cases taking the art form itself in new directions."
Despite the continuing changes that digital programmes have brought to engagement with museum and library collections, there remains limited understanding of the impact they have on audiences and institutions. Improving this understanding is essential for institutions seeking to realise the potential of their digital collections, reach new audiences, or reorient their practices in a rapidly changing context. This paper will explore the difficulties of evaluating the impact of digital collections, situating this within the broader context of the UK cultural sector. Through a case study of the Wellcome Library Transformation Programme, it will consider possibilities for how new evaluative practices can complement and enhance existing metrics-based approaches and raise questions about the value of findings and how they can be utilised in practice.
2. The difficulty of evaluating digital
Digital projects inevitably bring new challenges for evaluation, as their potential is inextricably linked to their ability to enable new forms of interaction and behaviour. However, they don’t exist in an independent sphere of activity unlinked to the rest of the institutions and contexts in which they operate. As with any activity, digital projects operate within a complex ecosystem of interaction, effecting numerous impacts both within and outside of institutions. This is further complicated by the many ways cultural institutions have utilised digital technologies, leading to a multiplicity of forms, functions, and expectations that do not necessarily lend themselves well to standardised forms of evaluation.
Audiences of digital services or products are frequently remote from the institutions in which they exist. Unlike audiences that are physically present in a location, they cannot be easily sampled for traditional research, and opportunities to collect feedback, demographic, or motivational data are therefore limited. While online surveys are often deployed to collect these forms of data, samples inevitably suffer from selection bias, as respondents must proactively choose to participate, and, in the absence of population demographics, samples cannot be weighted. In addition, the expected benefits of digital projects are often loosely defined: for example, "reaching new audiences." Others are by their nature both intangible and unknowable until they occur: for example, "enabling potential for new kinds of engagement." Other seemingly simple benefits require a complex confluence of factors to be realised. For example, "providing economic benefit to users engaging with collections" requires prospective users to be aware of collections, to use them for an economically beneficial activity (i.e., not for procrastination at work), and to save money in comparison to alternative options: a causal chain impossible to evidence from Web metrics. While digital projects can afford these kinds of benefits, we need to define evidence expectations more tightly for claims about them to be robust (see Finnis et al., 2011), and data collection needs to move beyond what can be inferred from Web metrics.
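To make concrete why remote samples resist weighting: post-stratification, of the kind possible for surveys of physical visitors, divides a segment's population share by its sample share, and for an online audience the population shares are precisely what is unknown. The following is a minimal sketch with entirely invented segments and figures:

```typescript
// Hypothetical sketch of post-stratification weighting. Segment names and
// all figures are invented; for a remote online audience the population
// shares below are unknown, so these weights cannot actually be computed.
interface Stratum {
  name: string;
  populationShare: number; // share of the true audience
  sampleShare: number;     // share of survey respondents
}

// weight = population share / sample share, applied to each respondent
function weights(strata: Stratum[]): Map<string, number> {
  return new Map(
    strata.map((s): [string, number] => [s.name, s.populationShare / s.sampleShare])
  );
}

// Students are over-represented among respondents, so their responses would
// be down-weighted (0.2 / 0.4 = 0.5) if the true shares were known.
console.log(weights([
  { name: 'students', populationShare: 0.2, sampleShare: 0.4 },
  { name: 'academics', populationShare: 0.8, sampleShare: 0.6 },
]));
```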
The continuing trend towards iterative and incremental modes of digital production, rather than the creation of linear series of discrete products, can make traditional project evaluation approaches problematic. How do we know when a project is finished? More challengingly, how can we set success criteria upfront when the aims and outputs of a project are redefined multiple times throughout its development? However, despite the challenges, learning from the process is essential to improve the management and execution of digital projects. For cultural institutions seeking financial return on digital investment, there may be further pressures for reliable evidence, as is highlighted in a 2010 report commissioned by Arts Council England, Digital audiences: Engagement with arts and culture online: "[cultural institutions] will need to strike a balance between ambition and pragmatism when deciding where to invest their money in digital media, especially as the current business models do not guarantee additional direct revenue" (MTM London, 2010: 7).
Adapting to the challenges inherent in implementing digital programmes also affects institutions themselves, often requiring changes to organisational structures, management practices, and ways of working. This is shown, for example, in the Smithsonian's five-year strategic plan, "Creating a Digital Smithsonian," which, alongside goals to create, manage, and make available large volumes of digital collections, explicitly describes the need to "[e]stablish business goals and implement business strategies that support a digital Smithsonian" and develop "organizational capability to efficiently implement the Smithsonian Digitization Program" (Smithsonian Institution, 2010: 13). Approached in this way, digital functions not only as another channel to connect with users or provide services, but also as an integral component of evolving institutional strategy and vision. Understanding the less tangible forms of value this may bring to the organisation adds a further layer of reflexive complexity to the evaluation of digital programmes. In the light of this complexity, measuring the value of a digital programme might be better considered through a holistic model that focuses upon its total affordance, asking what it is able to do, for whom, and in what kinds of ways, taking perspectives from both consumers and producers of digital cultural content.
With the difficulty of evaluating digital, individual approaches to projects and programmes have remained at best diverse and at worst underdeveloped and unsystematic (Tanner, 2012; Finnis et al., 2011; Selwood, 2010). As Tanner (2012: 9) notes: “problems of obtaining evidence of impact to support sustainability relate in part to one simple aspect of digital resources: they have not existed for a very long time in the main part.” However, there is an expanding area of scholarship regarding the evaluation of digital cultural work (Malde et al., 2014; Tanner, 2012; Arts Council England, 2013; Culture Counts, 2015). Villaespesa, for example, has proposed the development of standard metrics for digital evaluation to aid comparability and benchmarking (Villaespesa, 2015; Stack & Villaespesa, 2015).
3. Tensions of evaluation in the UK cultural sector
The relative youth of digital and its status as an emerging tool can partly explain the immaturity of digital programme evaluation in cultural institutions. However, sporadic and non-standardised forms of evaluation are far from exclusive to digital projects; they are prevalent throughout the cultural sector. The wide, and sometimes conflicting, array of ideas about the value of the work the cultural sector produces, and how that value should be measured, is in part responsible for this. As a significant proportion of cultural institutions within the UK are publicly funded, much of the literature written about these evaluative processes studies their ability to create "direct causal links between policy interventions and intended outcomes" (Scott et al., 2014: 10). Since the 1980s, the dominant trend within this literature has been to premise value on financial valuations, or extrapolations, of socioeconomic impact. To some extent, this reflects changing public policy discourses and political priorities, which determine the criteria by which government agencies and departments evaluate and prioritise the UK cultural sector. The need for institutions to produce funding cases premised on the forms and language of funders privileges some types of "value" over others, which can in turn become part of the normative "common sense" of the organisation.
In order to satisfy the needs of the funding body, evaluation methods are therefore often, as Selwood (2010: 5) notes, "most apposite to the requirements of the framework being used" rather than necessarily being the most appropriate or useful in generating knowledge. For example, the generic frameworks cluster ("Generic Learning Outcomes," "Generic Social Outcomes," and "Generic Wellbeing Outcomes") developed by Arts Council England in 2008 (www.artscouncil.org.uk/what-we-do/supporting-museums/ilfa/meausring-outcomes/generic-learning-outcomes/) and the UK Department for Culture Media and Sport's "Culture and Sport Evidence" programme (DCMS, 2013) both privilege quantitative forms of evaluation based on presumed financial and economic benefit. In this atmosphere, evaluation is often reduced to decontextualised numeric scoring, providing limited understanding and obscuring the reality of user interactions. Though quantitative measures do provide information about how projects perform, they risk failing to capture actual value (Selwood, 2002; O'Brien, 2010). For example, if a Facebook post is liked many times, these numbers might tell us something about the reach of a project, but are relatively opaque in terms of explaining user motivation or sentiment. Similarly, sentiment analysis of tweets may give broad indications of user feeling, but reveals little of the depth of interaction or the degree of importance it holds for individuals.
Within the political and financial contexts of the cultural sector, institutions will always have to demonstrate value in ways that meet funders’ expectations. Furthermore, as the UK museums sector continues to be subject to state funding cuts (Museums Association, 2015), the preoccupation with funding and finding ways to communicate value to funders will continue. As Anderson (2004: 4) notes: “[w]ithout generally accepted metrics, arts organisations will have more and more trouble making a case for themselves.” However, increasing numbers of studies are looking at how value might be demonstrated in terms other than quantitative (e.g., Jensen, 2014; Arts Council England, 2013; Culture Counts, 2015). The following discussion describes the evolution of alternative evaluation approaches to digital projects in the Wellcome Library Transformation Programme in response to shifting stakeholder requirements and organisational needs. This was specifically driven by an increasing need to understand user interactions and motivations, while also enabling the iterative development of digital content and platforms.
4. Wellcome Library Transformation Programme
The Wellcome Library is one of the world’s great cultural treasures: a unique and ever-growing collection of archives, manuscripts, books, journals, art, and ephemera that document the place of health and medicine across cultures and over time. It is also an integral part of Wellcome Collection, one of London’s fastest-growing and most innovative museums. Focused on exploring the connections between medicine, life, and art in the past, present, and future, Wellcome Collection offers visitors contemporary and historic exhibitions, lively public events, a café, a shop, a restaurant, and conference facilities as well as publications, tours, a book prize, and international and digital projects. The Library’s collections range from medieval manuscripts and archives to internationally significant monograph collections via packaging ephemera, video and audio, musical scores, and over 250,000 artworks. It spans subject areas from the history of surgical treatment, to public health and infectious diseases, to cookery, demonology, and mind control.
Like many other cultural heritage institutions, Wellcome Library faces the challenge of moving from an environment dominated by physical content to one in which we also serve a global audience for online content. With an ambitious vision to create the world’s largest free and unrestricted digital library focused on cultural contexts of health and provide a first-class physical research environment focused on our unique collections, the Wellcome Trust Board of Governors approved a £20 million Transformation Programme with a broad remit to develop the library’s organisation, activities, and ways of working. The digital ambition of this programme, as summarised by Christy Henshaw, Digitisation Programme manager, and Robert Kiley, Head of Digital Services, was to “permanently break the bonds imposed by a physical library and provide full access to our collections in new and innovative ways. We aimed to create an entirely new digital presence based on the Wellcome Library’s historic foundations and modern personality” (Henshaw & Kiley, 2013).
However, the Transformation Programme was not defined and implemented prescriptively; rather, it developed iteratively and self-reflexively in response to the changing needs of the organisation and the knowledge gained in developing systems and processes, creating digital collections, and learning about users’ needs. The first phase of the programme initially took a thematic approach, focusing on developing digital collections, resources, and interpretation around the history of modern genetics (http://wellcomelibrary.org/collections/digital-collections/makers-of-modern-genetics/). This had been identified as a strategic priority for collection development due to the relative scarcity of research collections and the significant investment of the Wellcome Trust in sequencing the human genome, along with subsequent genomic research. Additional projects were included in the programme in response to emerging partnership opportunities. These included a partnership with ProQuest, a global information-content and technology company, to digitise books for their Early European Books initiative; and a project part-funded by Jisc, a UK not-for-profit championing digital technologies in education and research, to digitise the Medical Officer of Health reports for Greater London.
Building from this, the second phase sought to massively upscale collection digitisation, partnering with Jisc and Internet Archive to digitise Wellcome Library’s entire nineteenth-century book collection along with health-related collections from ten other research libraries to create in excess of fifteen million images (http://wellcomelibrary.org/collections/digital-collections/uk-medical-heritage-library/). Digital content creation also became more responsive to user demand, producing collections to accompany public exhibitions and incorporating items digitised by public request. However, digitisation of archive and special collections material remained thematic, with a focus on mental health and asylum records. This phase also became more attuned to the experience of users, with a greater focus on the interpretation and communication of collections. For example, Digital Stories (http://digitalstories.wellcomecollection.org/) was developed as an experimental long-form digital platform intended to engage non-specialist and non-researcher audiences with the digital collections.
Wellcome Library is now entering the third phase of the Transformation Programme. The earlier thematic focus and strongly planned approach has evolved into a framework based on three key strands: building digital collections, sharing them with users, and actively engaging users with content and themes. As the programme reaches maturity, and with a critical mass of digital collections, focus is now shifting from production of digital content to the use and reuse of content by audiences. Key projects include exploring redevelopment of search and browse (see https://alpha.wellcomelibrary.org/ for work in progress), movement to cloud-based infrastructure, expansion of artwork and special collection digitisation, and development of crowdsourcing projects on recipe manuscripts.
5. Evaluating the Wellcome Library Transformation Programme
As previously discussed, the nature of all evaluation activity is fundamentally situational, dependent on external and internal pressures and contexts. Evaluation rests in an interlinked web of project nature, stakeholder response, management requirement, and strategic direction. Malde et al. (2014: 5) assert, “[m]easuring value is subjective and must always be personal … To better understand digital engagement, cultural organisations need to explore what and who they value.” Every project is unique, seeking to reach a different group or enable a different form of interaction, and every project evaluation is subject to a different interlocking set of internal and external forces, spanning staff satisfaction to funder requirements. In shaping evaluation for the Transformation Programme, we have aimed to reflect and draw from this in tailoring multimodal approaches that, while drawing from developments in standardised methods and metrics, seek to explore and pilot new, more creative forms of investigation.
In the first phase of the programme, recognising the value of standardised approaches in communicating value to external and internal stakeholders led to the development of dashboards based on Web metrics. These have subsequently continued to evolve, drawing from resources developed by institutions including Tate (http://www.tate.org.uk/about/our-work/digital/digital-metrics) and Carnegie Museums (http://studio.carnegiemuseums.org/projects/digital-metrics-dashboard/). This required significant work in reimplementing Google Analytics across our Web domains and building a coherent event-tracking schema to collect data on how each digitised item is used, including page turns, pan and zoom interactions, downloads, shares, and bookmarking. While this allowed us to develop basic measures for the volume of engagement with digital collections, along with indicators for audience reach, geographic location, and limited demographics by coding visiting organisations (Duin et al., 2012), it didn't help in understanding the types of engagement or their quality. We were able to award notional scores when benchmarking against other projects, but not understand what these really meant, whether they actually fulfilled our particular business needs, or, crucially, how best to improve them.
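For illustration, item-level event tracking of this kind can be built on the classic analytics.js API, which exposes a global ga function once Google Analytics' tracking snippet has loaded. This is a minimal sketch only; the category, action names, and helper function below are our illustrative assumptions, not the schema actually deployed on wellcomelibrary.org:

```typescript
// analytics.js exposes a global `ga` command queue once its snippet loads.
declare function ga(
  command: 'send',
  hitType: 'event',
  eventCategory: string,
  eventAction: string,
  eventLabel?: string
): void;

// The interaction types listed in the text above.
type ViewerAction = 'pageTurn' | 'pan' | 'zoom' | 'download' | 'share' | 'bookmark';

// Hypothetical helper: reports one viewer interaction against a digitised
// item, using the item identifier as the event label so that use can later
// be analysed per item.
function trackItemEvent(action: ViewerAction, itemId: string): void {
  ga('send', 'event', 'digitised-item', action, itemId);
}

// Example: a user turns a page in the online viewer.
trackItemEvent('pageTurn', 'example-item-id');
```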
6. Enhancing evaluation: Flexibility and multiple methods
Seeking to enrich our approaches to gathering and analysing data, we encountered Tanner's (2012) "Balanced Value Impact Model" (BVI). Here, he proposes that impact be reframed to mean "[t]he measureable outcomes arising from the existence of a digital resource that demonstrate a change in the life or life opportunities of the community for which the resources is intended" (Tanner, 2012: 9). The BVI model aims to help those engaged in evaluation by drawing together different models of economic, social, and innovation Impact Assessment (IA) from areas outside the cultural sector, including international development, health research, and policy analysis, and reformatting them into a new, cohesive, and logical process using a tailored logical framework linked to a balanced scorecard.
The BVI model emphasises the potential of the combination of qualitative and quantitative approaches to produce more comprehensive understandings of value than one or the other applied alone. Bamberger et al. (2010) note that there has been a drive towards this for some time by those working with impact. Leeuw and Vaessen (2009) argued for the value of quantitative measures for the purposes of causal attribution, but suggested that this be complemented with qualitative methods. Khandker et al. (2010: 4) expand on the value of qualitative approaches in saying that:
Qualitative analysis, as compared with the quantitative approach, seeks to gauge potential impacts that the program may generate, the mechanisms of such impacts, and the extent of benefits to recipients from in-depth and group-based interviews. Whereas quantitative results can be generalizable, the qualitative results may not be. Nonetheless, qualitative methods generate information that may be critical for understanding the mechanism through which the program helps beneficiaries.
For significant projects in the Transformation Programme, we developed tailored logical frameworks, considering the types of value we sought to achieve (e.g., increasing use of collections in undergraduate teaching), the perspectives from which we would consider value (including those of users, staff, collaborators, depositors, and funders), and the methods, or indicators, we would use to measure them. We didn't, however, define what success would be for these indicators. Taking a prematurely prescriptive approach to value, by defining fixed targets or performance indicators up front, would have run the risk of inscribing rigid conceptions of success for the programme, when in truth we were still in the process of understanding what a successful digital collection service offers and what success looks like. The frameworks were then developed into work packages and delivered by a combination of external consultants, research agencies, and internal staff as appropriate. The range of research activity included focus groups, telephone and in-person interviews, rapid prototyping with users, surveys, user testing, and staff workshops, along with statistical analysis, journey mapping, and coding of social media comments. We tried to maintain a flexible and open approach to the evaluation we did for this programme throughout its duration. For example, as strategic audience focus evolved from "everyone," through demographic categories (such as undergraduate student or media researcher), to a set of motivationally defined researcher segments (Stack & Villaespesa, 2015), we updated our survey and data collection processes to gain a better understanding of our audiences and their interactions with us.
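As an illustration of the shape such a framework entry might take, the sketch below is hypothetical: the field names and values are invented rather than drawn from our actual frameworks, and, reflecting the decision described above, it deliberately records indicators without target values.

```typescript
// A hypothetical sketch of one logical-framework entry, loosely following
// the structure described above; field names and values are illustrative
// only and do not reproduce the Library's actual frameworks.
interface FrameworkEntry {
  valueSought: string;    // the change the project hopes to effect
  perspectives: string[]; // whose points of view value is judged from
  indicators: string[];   // evidence to be collected (no fixed targets)
  methods: string[];      // how that evidence will be gathered
}

const teachingUse: FrameworkEntry = {
  valueSought: 'Increased use of digital collections in undergraduate teaching',
  perspectives: ['Users (teaching staff and students)', 'Library staff'],
  indicators: [
    'Digital collection items appearing in course reading lists',
    'Reported ease of finding suitable teaching material',
  ],
  methods: ['Interviews with lecturers', 'Online survey', 'Web metrics'],
};
```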
Beyond the specific, granular learning relating to individual projects, we found a set of common threads in relation to engagement with Wellcome Library digital collections. Some of these were indicated by our Web metrics, but none were entirely encapsulated by them.
- Users found it hard to conceptualise the extent and nature of the collections, and so a discovery model based solely on search often left them missing relevant material or feeling frustrated at not finding something they imagined should exist. In particular, less-familiar users would imagine the Wellcome Library as tightly focused on contemporary scientific literature rather than holding collections broadly related to the cultural contexts of health, leading to a mismatch between expectations and offer.
- Features that had been considered "add-ons" to the core value proposition of the digitised collections (for example, full-text search of OCRed text) were seen by users as essential. This in particular poses challenging questions for engagement with archival and manuscript collections.
- Many users, including experienced academic historians, found difficulty in navigating hierarchical archive catalogues and locating digitised material within them.
- Potential for innovative research or creative reuse of collections was hampered by the difficulty of accessing the underlying data and the lack of clearly documented programming interfaces, but also by a lack of inspiration or examples of what might be possible.
- User motivations and behavioural modes could be complex and not easily reduced to simple categories: individuals might organically move from satisfying personal interest to conducting academic research within the same journey arc and draw no distinction between these behaviours.
7. Embedding into practice
As we embark on the third phase of the programme, we have an evaluation approach that combines broader programme-level metrics with specific research targeted to individual projects. Beyond learning how to improve engagement with our audiences, we have also learned how to better integrate and manage evaluation to provide timely and actionable learning. As the programme developed from its pilot phase, project management and specification evolved from traditional waterfall modes to Agile-influenced approaches. This was driven not only by the need to adapt to learning from evaluation, but also as a reflection of the increasing uptake of these methodologies within professional communities, and has afforded an opportunity to embed impact and evaluative measures throughout the project life cycle.
Project proposals within the programme now develop estimations of the likely reach, value to users, fit with the programme, and value in developing the Wellcome Library as an organisation through an interactive calculator. Developers of projects can work with this calculator to refine proposals and address potentially weak areas prior to pitching to internal stakeholders. We are working with project managers to extend the concept of project wash-ups and process reviews, to multiple review points throughout project execution. We are also seeking to better involve user feedback, using the expertise and perceptions of our audiences as a constant challenge and formative critique rather than final, summative peer review. Recognising the challenges inherent in shifting the balance of power from internal to audience need, we have also sought to involve senior leadership in this approach, reconfiguring the formal “Programme Board” as a discussant of progress and developer of ideas, rather than an arbiter of success. Privileging audience interaction to the extent that it could influence the direction of projects within cultural institutions was a trend first predicted by Stephen Weil (1997: 4): “[i]n the museum of the near future, it will be primarily the public, and not those inside the museum, who will make…decisions.” While direct involvement of consumers or potential customers is increasingly common in the creation of commercial digital projects, considering users as agents of change or co-interpreters in the formation and implementation of digital programmes is a new form of practice for Wellcome Library and has required a significant culture shift to enable it.
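The interactive calculator described above can be imagined, in spirit, as a weighted scoring function over the proposal dimensions. The sketch below is a hypothetical reconstruction: the dimensions follow the text, but the 1-5 scale, the weights, and the scoring logic are our assumptions rather than the actual tool:

```typescript
// Hypothetical sketch of a proposal-scoring calculator of the kind
// described above; scale, weights, and logic are assumed for illustration.
interface ProposalScores {
  likelyReach: number;         // 1 (niche) to 5 (broad)
  valueToUsers: number;        // 1 (marginal) to 5 (transformative)
  programmeFit: number;        // 1 (peripheral) to 5 (core)
  organisationalValue: number; // 1 (none) to 5 (builds key capability)
}

const WEIGHTS: Record<keyof ProposalScores, number> = {
  likelyReach: 0.25,
  valueToUsers: 0.35,
  programmeFit: 0.2,
  organisationalValue: 0.2,
};

// Returns a weighted score between 1 and 5, plus the weakest dimension so
// that proposers can see where to strengthen a pitch before presenting it.
function assessProposal(s: ProposalScores): { score: number; weakest: string } {
  const dims = Object.keys(WEIGHTS) as (keyof ProposalScores)[];
  const score = dims.reduce((sum, d) => sum + s[d] * WEIGHTS[d], 0);
  const weakest = dims.reduce((a, b) => (s[a] <= s[b] ? a : b));
  return { score: Number(score.toFixed(2)), weakest };
}

// Example: strong programme fit, but reach is the area to address first.
console.log(assessProposal({
  likelyReach: 2, valueToUsers: 4, programmeFit: 5, organisationalValue: 3,
}));
```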
8. Where next?
Through this case study, we have discussed how value can be added to standardised or "templated" forms of evaluation to enable deeper and more actionable learning. In the context of the Wellcome Library Transformation Programme, evaluation has both reflected and influenced the evolution of the programme's objectives and values, transforming not just practice in the Programme, but itself in the process. While the context this case study inhabits is particular to both the UK cultural sector and the Wellcome Trust's position as an independently funded institution, the need to develop evaluative models that interpret and present the value of work within the cultural sector according to specific contexts and requirements is common to settings across the world. That these contexts are variable is in fact core to the contention that evaluation cannot be solely composed of "templated" approaches but needs to be tailored to project, audience, and context. As the Wellcome Library Transformation Programme moves forward, developing measurements of value and success away from standardised performance indicators and "digestible" dashboards is an experimental and potentially riskier approach, but one we hope will create the space for learning rather than measuring, and enable the generation of new knowledge rather than the tallying of scores. We have sought to mitigate some of the potential risks this might bring by retaining some standardised measures and management data, albeit slimmed down; however, there are still further challenges to meet, including:
- How to best communicate value to our stakeholders, making more complex conceptions of impact both tangible and intelligible
- How to set challenging targets while avoiding fixing a rigid trajectory
- How to recognise the subjectivities of value while retaining the ability to compare and learn from others
- How to effectively embed evaluative processes in individual project management practice but retain coherence across a programme
- Finally, the creation of Wellcome Library’s digital collections is still relatively recent, and we are only just beginning to see the evidence of published research drawing from them; gaining any real understanding of the long-term impacts of digital collections feels a long way off at the moment.
References
Anderson, M.L. (2004). Metrics of Success in Art Museums. Los Angeles, CA: The Getty Leadership Institute. Consulted January 8, 2016. Available http://cgu.edu/pdffiles/gli/metrics.pdf
Arts Council England. (2013). Great Art and Culture for Everyone: 10 Year Strategic Framework (2nd edition), 2010–2020. London: Arts Council England. Consulted January 11, 2016. Available http://www.artscouncil.org.uk/media/uploads/Great_art_and_culture_for_everyone.pdf
Bakhshi, H., A. Mateos-Garcia, & D. Throsby. (2010). Beyond Live: Digital innovation in the performing arts. London: NESTA. Consulted January 10, 2016. Available http://www.nesta.org.uk/sites/default/files/beyond_live.pdf
Bamberger, M., K. Mackay, & E. Ooi. (2005). Influential Evaluations. Detailed Case Studies. Operations Evaluation Department. Washington, DC: World Bank. Consulted January 2, 2016. Available http://lnweb90.worldbank.org/oed/oeddoclib.nsf/DocUNIDViewForJavaSearch/920F6ECD297978D785256F650080BB9E/$file/influential_evaluation_case_studies.pdf
Bamberger, M., V. Rao & M. Woolcock. (2010). “Using mixed methods in monitoring and evaluation: experiences from international development.” In A. Tashakkori and C. Teddlie (eds.). Handbook of Mixed Methods in Social and Behavioural Research (second revised edition). Thousand Oaks, CA: SAGE, 613–41.
Blandford, A., G. Buchanan, & M. Jones. (2004). “Usability of Digital Libraries” (editorial). Journal of Digital Libraries 4(2): 69–70.
Borgman, C.L. (2003). “Designing digital libraries for usability.” In A. Peterson Bishop, N. Van House, & B.P. Buttenfield (eds.). Digital Library Use: Social Practice in Design and Evaluation. Cambridge, MA: MIT Press, 85–117.
Culture Counts. (2015). About. Consulted January 11, 2016. Available https://culturecounts.cc/about/
Culture Metrics. (2015). Culture Metrics Blog: Policy Week Event: Using Digital Technology to Assess Quality in the Arts. Last updated November 26, 2015. Consulted January 11, 2016. Available http://www.culturemetricsresearch.com/
DCMS (UK Department for Culture, Media and Sport). (2013). CASE Programme – Government Guidance. London: DCMS. Last updated February 23, 2013. Consulted January 31, 2016. Available https://www.gov.uk/guidance/case-programme
Duin, D., D. King, & P. Van Den Besselaar. (2012). “Identifying Audiences of E-Infrastructures – Tools for Measuring Impact.” PLoS ONE 7(12). Consulted January 10, 2016. Available http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0050943
Finnis, J., S. Chan, & R. Clements. (2011). Let’s Get Real: How to evaluate success online? Brighton: Culture24. Consulted January 11, 2016. Available http://weareculture24.org.uk/projects/action-research/how-to-evaluate-success-online/
Harley, D., J. Henke, & S. Lawrence. (2006). Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences. Berkeley, CA: Center for Studies in Higher Education. Consulted January 2, 2016. Available http://www.cshe.berkeley.edu/sites/default/files/shared/research/digitalresourcestudy/report/digitalresourcestudy_final_report_goal1.pdf
Henshaw, C., & R. Kiley. (2013). "The Wellcome Library, Digital." Ariadne 71. Available http://www.ariadne.ac.uk/issue71/henshaw-kiley
Jensen, E. (2014). “Critical Review’s Conclusions.” The Role of Technology in Evaluating Cultural Value. Warwick: University of Warwick. Updated July 2014. Consulted January 12, 2016. Available http://www2.warwick.ac.uk/fac/soc/sociology/staff/jensen/culturalvalue/#conclusion
Khandker, S.R., G.B. Koolwal, & H.A. Samad. (2010). Handbook on Impact Evaluation: Quantitative Methods and Practices. Washington, DC: World Bank.
Leeuw, F., & J. Vaessen. (2009). Impact Evaluations and Development: NoNIE guidance on impact evaluation. Washington, DC: NoNIE, World Bank. Consulted January 11, 2016. Available http://siteresources.worldbank.org/EXTOED/Resources/nonie_guidance.pdf
Malde, S., J. Finnis, A. Kennedy, M. Ridge, E. Villaespesa, & S. Chan. (2014). Let’s Get Real: A journey towards understanding and measuring digital engagement. Brighton: Culture24. Consulted January 11, 2016. Available http://weareculture24.org.uk/projects/action-research/phase-2-digital-engagement/
MTM London. (2010). Digital Audiences: Engagement with arts and culture online. Arts Council England, Museums, Libraries and Archives Council, Arts & Business. London: Arts Council England. Consulted January 12, 2016. Available http://www.artscouncil.org.uk/media/uploads/doc/Digital_audiences_final.pdf
Museums Association. (2015). Cuts Survey 2015: Museums Association. Consulted January 11, 2016. Available http://www.museumsassociation.org/campaigns/funding-cuts/cuts-survey
O’Brien, D. (2010). Measuring the Value of Culture: A Report to the Department for Culture, Media and Sport. Project Report. London: Department for Culture, Media and Sport. Consulted January 10, 2016. Available https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/77933/measuring-the-value-culture-report.pdf
Scott, C., J. Dodd, & R. Sandell. (2014). Cultural Value: User value of museums and galleries: a critical view of the literature. Project report. Leicester: Arts and Humanities Research Council. Consulted January 11, 2016. Available https://www2.le.ac.uk/departments/museumstudies/rcmg/publications/cultural-value-of-museums
Selwood, S. (2002). "Measuring Culture." Spiked Culture. Consulted January 12, 2016. Available http://www.spiked-online.com/articles/00000006DBAF.htm
Selwood, S. (2010). Making a difference: the cultural impact of museums. National Museum Directors’ Council, An Essay for NMDC. London: Sara Selwood Associates. Consulted January 12, 2016. Available http://www.nationalmuseums.org.uk/media/documents/publications/cultural_impact_final.pdf
Smithsonian Institution. (2010). Creating a Digital Smithsonian: Digitization Strategic Plan. Washington, DC: Smithsonian Institution. Consulted January 11, 2016. Available https://www.si.edu/Content/Pdf/About/2010_SI_Digitization_Plan.pdf
Stack, J., & E. Villaespesa. (2015). “Finding the motivation behind a click: Definition and implementation of a website audience segmentation.” Museums and the Web 2015. Consulted January 11, 2016. Available http://mw2015.museumsandtheweb.com/paper/finding-the-motivation-behind-a-click-definition-and-implementation-of-a-website-audience-segmentation/
Tanner, S. (2012). Measuring the Impact of Digital Resources: The Balanced Value Impact Model. London: Arcadia and King's College London. Consulted January 11, 2016. Available http://www.kdcs.kcl.ac.uk/fileadmin/documents/pubs/BalancedValueImpactModel_SimonTanner_October2012.pdf
Villaespesa, E. (2015). “An evaluation framework for success: Capture and measure your social-media strategy using the Balanced Scorecard.” Museums and the Web 2015. Consulted January 11, 2016. Available http://mw2015.museumsandtheweb.com/paper/an-evaluation-framework-for-success-capture-and-measure-your-social-media-strategy-using-the-balanced-scorecard/
Weil, S. (1997). “The Museum and the Public.” Museum Management and Curatorship 16(3): 257–271.
Cite as:
Green, Alex and Annelise Andersen. "Finding value beyond the dashboard." MW2016: Museums and the Web 2016. Published February 1, 2016.
https://mw2016.museumsandtheweb.com/paper/finding-value-beyond-the-dashboard/