Making evidence useful via visualisation, 20th June 2013, NESTA, 1 Plough Place, London

Research Assistant in GeoInformatics, David Alderson, attended the “Making evidence useful via visualisation” event, organised by the Reuters Institute for the Study of Journalism, Oxford, and held at NESTA in London on 20th June. The event brought together members of the academic research community in the fields of computer vision, visualisation and computing science, alongside a wealth of representation from media outlets, designers, artists, and government agencies and departments. It focussed on the need for effective visualisation of data and the challenges faced in delivering tools and techniques to help a wide audience wade through the growing quantity of collected data.

Keynote speaker Dr Luciano Floridi (Director of Research, Oxford Internet Institute, University of Oxford) and Alan Smith OBE (Principal Methodologist, Office for National Statistics), amongst others, highlighted the need for transparency and provenance in the visualisation process, so that users can understand the processes and transformations data may have undergone to arrive at a particular visual representation. Providing access to the underlying data, as well as to the derived output or tool itself, can help give users that lineage, potentially allowing them to recreate the process used to build the visualisation, or at least to use the same data to create visualisations of their own. A presentation by Aleks Collingwood, Programme Manager and Statistics Specialist at the Joseph Rowntree Foundation (JRF), gave an overview of the new JRF Data portal and echoed the sentiment of letting users create their own versions and interpretations through links to the underlying data.

Finally, a discussion session chaired by Geoff Mulgan, Chief Executive of NESTA, encouraged the audience to highlight some of the key challenges to effective data visualisation and where they would like to see change in how visualisations are delivered. The key points are summarised below:

  • Improved graphical literacy, both among general audiences for data visualisation and among its producers;
  • Increased levels of interactivity and personalisation of the data visualisation process, in much the same way as Google Maps has allowed everyone to become a cartographer;
  • More approaches and tools enabling user-generated content, in this instance, user-generated visualisations;
  • MORE FUN!

However, the proliferation of data visualisation techniques, tools and technologies raises the question: at what point do we replace “too much data” with “too many visualisations”? Will there come a point where a decision-maker requires sophisticated tools and techniques to search through a myriad of visualisations of data, rather than the data itself?
