About Me

I am Professor of Digital Humanities at the University of Glasgow and Theme Leader Fellow for the 'Digital Transformations' strategic theme of the Arts and Humanities Research Council. I tweet as @ajprescott.

This blog is a riff on digital humanities. A riff is a repeated phrase in music, used by analogy to describe an improvisation or commentary. In the 16th century, the word 'riff' meant a rift; Speed describes riffs in the earth shooting out flames. The poet Jeffrey Robinson points out that riff perhaps derives from riffle, to make rough.

Maybe we need to explore these other meanings of riff in thinking about digital humanities, and seek out rough and broken ground in the digital terrain.

24 September 2015

Acts of Reading, Redux

Contribution to a panel at a British Library 'Digital Conversation', 24 September 2015

Six months ago, at Bronwen Thomas’s suggestion, I submitted a guest entry to the blog of the Digital Reading Network, which I called ‘My Acts of Reading’. In the entry, I tried to describe my relationship with reading at different points in my life. The importance of reading as a social and cultural activity is illustrated by the way in which so many of our memories are bound up with it. I am sure many of us remember our parents reading to us or the excitement of first gradually mastering how to decipher a book. One of my most vivid memories of Christmas is of a bitterly cold day in the 1960s - opening my presents, finding a book there, and going back to the warmth of my bed to read the book. For most of us, reading is bound up with our very personality. By changing the nature of our engagement with reading and writing, digital technologies are transforming some of the most fundamental and distinctive features of human behaviour.

Another vivid reading memory is from 1993, when I was shown the World Wide Web for the first time by Tim Hadlow, the remarkable Systems Administrator here in the British Library. The way in which the web combined text and image in new configurations made it obvious that here was something that was going to change much of my intellectual and cultural world. I was very pleased to be part of the team which, under the leadership of another remarkable librarian, Graham Jefcoate, helped put together the British Library’s first website ‘Portico’. I became involved in a number of projects for digitisation of manuscripts, particularly the Electronic Beowulf. In those early years, digital images and text were a specialist tool that one used at work for formal research. This began to change as JSTOR arrived and more academic journals became available online. From about 2000, I began to notice how more and more of my reading was taking place online, and increasingly this online activity was not simply at work. The point, in about 2006, when printed newspapers became a weekend indulgence, was a significant landmark.

However, I held out for a long time against reading books online. Although one of the strategies I adopted as a librarian at Lampeter in west Wales to deal with high levels of student demand for particular books was to buy e-books, for my own reading, both academic and leisure, I tended to prefer an old-fashioned printed book, notwithstanding many friends urging the virtues of a Kindle upon me. The change occurred for me last year when I was reading Mark Ormrod’s monumental biography of Edward III. Mark’s book is a wonderful piece of historical writing, but Edward III lived for a very long time and this biography is a very big book. Carrying the large book around was beginning to give me a backache. I really couldn’t face carrying the printed volume around any more, and purchased the e-book to read on my iPad. It was a revelation. I found I got much more pleasure from reading the e-book than from reading the printed volume. This wasn’t simply due to the convenience and simplicity of the e-book - something about the backlighting of the text and the physical nature of the tablet seemed to encourage me to read more. My reading was rebooted.

So, my blog entry for Bronwen describing my late conversion to e-books was six months ago. How have things changed?  Do I still feel that the e-book has transformed my acts of reading? 

Certainly, my e-reading enthusiasm continues unabated. My home is in a remote part of west Wales, but the rural buses have recently installed wi-fi, and it is wonderful that, if I finish my book during a bus ride, I can download another. But I suppose, after six months, I do have other more critical reflections on my e-reading.

1. There are limits to my e-reading. I described in my original blog how I find I still need to transcribe medieval documents if I am to use them effectively in my research. For my leisure reading, I find that poetry still seems to work better for me in print - maybe because, like whisky, it needs to be sipped.

2. Although most of my everyday reading is now electronic, my academic research is mixed medium. Some of the books I use aren’t available digitally, but the biggest barrier to my more extended use of e-books in research is that academic books are so tremendously expensive, no matter whether they are printed or electronic, so any extended research means consulting books in a library. Although librarians have been experimenting with e-books for many years now, we still have not worked out how best to make e-books available. Our e-book collection at Lampeter was always very cumbersome to use, and the digital rights management in packages like Adobe Digital Editions makes the borrowing of e-books very fiddly. We need an academic book equivalent of iPlayer, which would enable me to borrow academic books as easily as I can download a biography from Amazon on my Welsh bus.

3. Likewise, I find it frustrating that I can’t so easily share books I am enthusiastic about with my friends if the books are in electronic form. Amazon has a mechanism for lending e-books to friends, but I must admit I haven’t yet used it, and not all my friends are as keen on e-reading as I am. 

4. Although older books are readily available in electronic form through the Internet Archive or Project Gutenberg, I must say that since I became an e-reading convert, I find I read many more recently published books - not necessarily a good thing, I think.

However, much as I enjoy my e-reading, I am increasingly struck by how untransformative it is. I use my phone or tablet very happily for activities I did before using a variety of devices: listening to the radio, checking e-mail, watching movies, hearing music, taking photographs, reading. But I don’t use my phone for anything that I didn’t do before.

This leads to my final reflection. Pleased though I am with e-books, they are very boring products. You just get an HTML presentation of the book’s text. The illustrations are tucked away at the end - there aren’t even any hyperlinks from the text to the illustrations. I recently bought an e-book in which the illustrations are left out. But part of the exciting thing about the digital medium is the opportunity to have much more richly illustrated texts. When I read about medieval manuscripts I want lots of pictures of them. But, all too often, my e-book is just bare text - the Thames and Hudson paperback on The Medieval Papacy that I bought in 1974 was much more richly illustrated than my e-book.

One of my dystopian fears for our digital future is that it will turn out to be populated by pdfs of journal articles. All the potential richness of the digital medium will have been ignored in favour of producing homogenised factory production line scholarship. And, while I retain my e-book enthusiasm, what I want is more books like the one currently being produced by Tim Hitchcock and Bob Shoemaker, based on their work on the Old Bailey Proceedings, which will allow the reader access to the records and documents on which the book is based. I understand that the production process for this book has been difficult and protracted, but it is precisely the possibility of presenting books in a different way that makes a digital environment exciting, and I hope that the future of e-books will be more media rich and varied than the plain and frankly crude HTML wrappers with which we are presented at the moment.


6 September 2015

Big Data: Some Historical Perspectives

This was a contribution to a plenary panel at the European Policy on Intellectual Property conference organised by CREATe at the University of Glasgow in September 2015. In the nature of a short contribution to a panel on a wider theme, it barely scratches the surface of the possibilities implied in the title, but here it is for the record.

It is already very evident from our discussions here that a distinctive feature of research into intellectual property is the emphasis on historical understanding. Petra Moser’s keynote yesterday was a wonderful illustration of how intellectual property researchers find historical data which has a wider cultural significance and is more than simply a lab for exploring different models of access. It may seem that, since big data is meant to present issues of scale and potential that we haven’t encountered before, historical perspectives won’t be particularly helpful. What I want to suggest here is that we perhaps need to widen our historical terms of reference, and not restrict ourselves to precise historical precedents and analogies.

Sometimes, big data requires big history (although we should also be aware of the caveat of Tim Hitchcock about the dangers of thinking exclusively on a large scale — we need both microscope and macroscope).

Let’s go back a long way, to 1086, when William the Conqueror, who had won the English crown at the Battle of Hastings twenty years previously, gave orders that a detailed survey should be undertaken of his English dominions. The Anglo-Saxon chronicle described how William sent his men to every county to enquire who held what land. The chronicler was horrified by the amount of information William collected: ‘So very thoroughly did he have the inquiry carried out that there was not a single hide, not one virgate of land, not even — it is shameful to record it, but it did not seem shameful for him to do — not even one ox, nor one cow, nor one pig which escaped notice in his survey’. Collating all this information required William’s clerks to develop innovative data processing techniques, as they prepared a series of summaries of the data, eventually reducing it to two stout volumes. The motives of William in collecting this information are still debated by historians, but the data was immediately put to use in royal courts and tax collection. Within a short period of time, this eleventh-century experiment in big data had become known as Domesday Book — the book of the day of judgement, from which there is no appeal — just as there can today be no appeal from the algorithms that might be used to set our insurance policy or credit rating.

Domesday Book was the first English public record and is a forcible reminder that anxieties about government data collection are nothing new. 2015 marks the 800th anniversary of the grant of another celebrated English public document, Magna Carta. King John is remembered as the tyrant forced to grant Magna Carta at Runnymede, but his reign was also important because many of the major series of records recording government business began in his reign. John’s reign saw an upsurge in the use of technologies of writing and mathematics in the business of government. One important thread in the Magna Carta story is that it was both a reaction to, and at the same time an expression of, this growth in new technologies of government. It’s intriguing that Tim Berners-Lee and others have called for a Magna Carta for the World Wide Web to address issues of privacy and openness. There are a number of problems with this. One is, of course, that Magna Carta is linked to a common law system — it hasn’t even been adopted by the whole of Britain, as Scotland with its roman law system has always had a semi-detached relationship with Magna Carta. The other is that the embedding of Magna Carta in English political life was a complex process, spread over several centuries and involving two civil wars.

In considering the issues of governance, ethics and identity posed by big data, this kind of longue durée approach can be very helpful. Jon Agar’s wonderful book, The Government Machine: A Revolutionary History of the Computer describes how the conception of the modern computer was influenced by the type of administrative processes developed by government bureaucracies in the nineteenth century which sought to distinguish between high level analytical policy work and routine mechanical clerical labour. Charles Babbage’s work was a sophisticated expression of this nineteenth-century urge to identify and mechanise the routine. Closely linked to this urge to mechanise government was a concern, in the wake of the industrial revolution and the growth of population, to gather as much statistical information as possible about the enormous changes taking place. In a way, data can be seen as an expression of modernity. Another key big data moment was the 1890 United States census, when the huge quantity of data necessitated the use of automatically sorted punch cards to analyse the information. Jon Agar vividly describes the achievements of this analogue computing and the rise of IBM. His account of the debates surrounding the national registration schemes introduced in wartime, and of the anxieties about linking these to, for example, employment or health records, illustrates how our current concerns have long antecedents.

However, I think looking at big data concerns in this way does more than simply remind us that there is nothing new under the sun. It is also helpful in clarifying what is distinctive about recent developments and in identifying areas which should be policy priorities. First is the ubiquity of data. 
For governments from the eleventh to the twentieth century, data was something gathered with enormous clerical and administrative effort which had to be carefully curated and safeguarded. Data like that recorded in Domesday Book or records of land grants was one of the primary assets of pre-modern governments. Only large organisations such as governments or railroad companies had the resources to process this precious data — indeed one of the changes that is very evident is the shift in processing power, and perhaps we should be talking more about big processing rather than big data. Data was used in order to govern and was integral to the political compact. Now that data is ubiquitous and comparatively cheap to acquire and process, this framework of trust no longer applies. Moreover, the types of organisations deploying data have changed. In particular, it is noticeable that the driving forces behind the development of big data methods have frequently been commercial and retail organisations: not only Google and Amazon, but also large insurance, financial and healthcare corporations. This is a contrast to earlier developments, both analogue and digital, where governments have been prominent and private sector involvement more limited.

The Oxford English Dictionary draws a distinction between the term big data as applied to the size of datasets and big data referring to particular computational methods, most notably predictive analytics. Predictive analytics poses very powerful social and cultural challenges, especially as more and more personal data such as whole genome sequences becomes cheaper and more widely available. How far can your body be covered by existing concepts of privacy? And is the likely future path of your health, career and life a matter of purely personal concern? In many ways, it is this idea of prediction which most forcibly challenges many of our most cherished social and cultural assumptions. Predictive policing — an early contact by the police with people considered likely to commit crimes — is already being tested in some American cities. Predictivity almost dissolves privacy because it shifts the way in which we look at freedom of choice. What my reading or music choices are starts to become irrelevant if they can be readily predicted from publicly available data. How we cope with a society in which many of our actions can be predicted is one of the chief challenges posed by big data. As my colleague Barry Smith, from the AHRC’s Science in Culture theme, has emphasised, the neuroscience surrounding predictivity — the way in which the brain copes with this predictivity — will become a fundamental area of research. As predictive analytics shades into machine learning, these questions will become even more complex, since we will start to see the distinction described by Agar between analytic work and routine labour breaking down in large organisations, posing major social and cultural challenges.

Finally, it is worth noting that generally the most important large data sets (censuses, tax records) have been about people, but increasingly big data will become about things. For example, machine tools frequently have sensors attached to them which enable the state of the tools to be monitored remotely by the manufacturer. This might encourage the manufacturer to monitor use of their products by clients in ways that could have commercial implications. The monitoring of medical implants will raise even more complex issues. A hint of the kind of complications that these developments might raise was given in the concurrence of Justice Alito in the US Supreme Court judgement in US v Jones (2012), which concerned the use of GPS tracking devices by police. Struggling to imagine how the framers of the US constitution would have viewed such devices, he imagined the analogy of ‘a case in which a constable secreted himself somewhere in a coach and remained there for a period of time in order to monitor the movements of the coach’s owner’.
For the Anglo-Saxon chronicle complaining about Domesday Book, the objects of the king’s greed were evident: land and animals; our future anxieties may be very different because our chief anxiety may be about objects linked to us in much more distant and complex ways.


26 July 2015

Digital Humanities and the Future

This was a talk I gave at the University of Sussex on 20 November 2013. Parts of it are now out of date (for example, there is now a lot more to say about the REF as far as the intellectual direction of DH in the UK is concerned), but other sections are perhaps useful, so it may be worth sharing by means of this late blogging. The illustration shows the Banksy mural 'No Future Girl Balloon' which appeared on a house in Southampton in 2010 but was painted over shortly afterwards.

Talking about the future is always a rash endeavour. Charles Henry has described how in 1876 an article in the journal Nature envisaged the value of the telephone chiefly as a new form of home entertainment. It was anticipated that Alexander Graham Bell’s invention would ‘at a distance, repeat on one or more pianos the air played by a similar instrument at the point of departure. There is a possibility here...of a curious use of electricity. When we are going to have a dancing party, there will be no need to provide a musician. By paying a subscription to an enterprising individual who will, no doubt, come forward to work this vein, we can have from him a waltz, a quadrille, or a gallop, just as we desire. Simply turn a bell handle, as we do the cock of a water or gas pipe and we shall be supplied with what we want. Perhaps our children may find the thing simple enough’. While this is interesting as an anticipation of streamed music, as a discussion of future of the telephone, it was wide of the mark.

Dreams of the future frequently drive the way technology develops. H.G. Wells’s dream of a ‘World Brain’, described by him in a lecture in 1936, reflected his own intellectual preoccupation with synthesis and the search for grand narratives rather than any technical possibilities. Yet Wells’s interest in whether microfilm could be used to develop such a world brain inspired subsequent researchers to experiment with new technologies as they appeared, and influenced Arthur C Clarke when he proposed in 1962 a world library powered by supercomputers. At the recent Digital Economy conference at Media City in Salford, a BBC speaker showed a video describing a vision of future communications technology enunciated by Captain Peter Eckersley, the first Chief Engineer of the BBC, in 1926. The vision described by Eckersley in 1926 for television and pervasive media eerily prefigured the kind of technologies which are only just now, nearly a century later, appearing in a domestic context. When this video was shown, a member of the audience remarked that in a way the video was a condemnation of the BBC, since it suggested that it had not developed its engineering vision since 1926, and had for nearly a hundred years been relentlessly pursuing the realization of the dreams of its first chief engineer. Regardless of how we view this criticism, the examples of Eckersley’s 1926 vision and of Wells’s dream of a world brain illustrate forcefully how the most important driver in technological development can be the human imagination and dreams of a future state.

For the digital humanities, part of its promise is always the claim that it is on the side of the future. The digital native will effortlessly succeed the clumsy digital immigrant, and so technology will pervade all aspects of humanities research. This assumption of the inevitable triumph of digital technology underpins some of the most strident claims made on behalf of digital humanities in recent years. Digital humanities has been claimed as ‘the next big thing’ on the intellectual landscape, the successor to the critical theory which has dominated since the 1950s. In 2009, William Pannapacker wrote, after the MLA Convention, that ‘Among all the contending subfields, the digital humanities seem like the first "next big thing" in a long time, because the implications of digital technology affect every field’. Pannapacker continued: ‘I think we are now realizing that resistance is futile. One convention attendee complained that this MLA seems more like a conference on technology than one on literature’. These assumptions of the inevitable triumph of the digital humanities have fed into a visionary discourse of DH which, stressing its interdisciplinary and collaborative aspirations, sees it as a means of renewing and transforming the academic practice of the arts and humanities. Mark Sample has famously commented that ‘The digital humanities should not be about the digital at all. It’s all about innovation and disruption. The digital humanities is really an insurgent humanities’. Likewise the Digital Humanities Manifesto declared that: ‘the Digital Humanities revolution promotes a fundamental reshaping of the research and teaching landscape’.

This visionary discourse around DH has been immaculately documented and analysed by Patrik Svensson. The way in which the rhetoric of DH frequently becomes suffused with the ‘technological sublime’ has also been emphasized by Paul Gooding, Melissa Terras and Claire Warwick in a recent article. As Patrik Svensson stresses, much of this rhetoric is not so much a comment on the possibilities of digital technologies but rather using the idea of a digital humanities as a springboard for a debate about the nature of the humanities. Digital humanities has become for some scholars a field in which we can reimagine the humanities, perhaps without reference to the digital at all. Yet there still remains a strong techno-optimistic thread within the digital humanities and an assumption that its time will inevitably come. Patrik Svensson points out how these assumptions echo the theme of the ‘proximate future’ discussed by Paul Dourish and Genevieve Bell in their remarkable book, Divining a Digital Future: Mess and Mythology in Ubiquitous Computing. Dourish and Bell emphasise how governments, corporations and institutions portray the future as a technological utopia which is always just around the corner, and never here.

The commercial and political benefits of this constant claim that we are on the verge of a technological utopia are obvious. A good example of the power of the idea of the proximate future is Singapore, where the government seeks to create ‘a global city, universally recognized as an enviable synthesis of technology, infrastructure, enterprise and manpower [with a] new freedom to connect, innovate, personalize and create’. Dourish and Bell emphasise the disconnect between this digital freedom and restrictions on human rights in Singapore, and suggest that this promise of jam tomorrow helps bolster these restrictions. Rhetoric of the proximate future, in the view of Dourish and Bell, has obscured the fact that the future is already here; technological trends identified and developed in units like the Xerox Palo Alto Research Centre twenty or thirty years ago have moved into everyday life and have effected profound transformations on every aspect of our existence. No doubt changes will continue and we will still see many remarkable innovations, but the digital future arrived some time ago, and it would be better for us to start examining and using more closely what is around us. In talking about digital transformations, we are talking about a process which is current and all around us, not about the future.

I am a child of Harold Wilson’s white heat of technological revolution. I must admit that listening for fifty years to speeches advising me that technology is about to unleash a revolution unprecedented in human history is a little wearing and jangling on the nerves. In expectation of the coming technological revolution, I was taught in the 1960s a new type of mathematics which required me to learn to use a slide rule and to perform arithmetic with binary numbers. Although I am now a professor of digital humanities, and have had quite a bit to do with computing, I have never since had to perform calculations with binary numbers. However, the new mathematics somehow left me without a grasp of a number of fundamental mathematical concepts (although I scraped an O level pass), and this has left me feeling disadvantaged as we start to think about new quantitative techniques in various humanities subjects. I fear that the myth of the proximate future has damaged me. If we see the aim of digital humanities as simply being to promote the use of technology in studying arts and humanities subjects, then I suspect that the claim that we are constantly moving towards a new technological revolution has also been unhelpful. The way in which digital humanities is engaged with promulgating this myth of a proximate utopia is apparent from the way in which the subject constantly reinvents and renames itself: from humanities computing to digital humanities, and now e-science, e-research, web science, digital studies, digital culture.

At one level, in accordance with Alan Liu’s Laws of Cool, it is perhaps necessary and unavoidable for digital humanities to propagate the myth of the proximate future. At another, this vacuous myth-making may do digital humanities a disservice. A colleague in America recently forwarded to me a remark by a history undergraduate writing a long essay on ‘digital history’, who wrote that: ‘The digital humanities, of which digital history is a subset, is scary because there is no definition of what is meant by the term. Real historians fear its lack of cohesion’. I’m not sure that is necessarily an argument for a tight definition of DH, but it does suggest that the rhetoric might obscure the substance, and be off-putting to precisely the audiences we should be seeking to enthuse.

Are we overcomplicating DH? I fear so. Let’s return to our roots. In Britain, a key moment in the development of digital humanities took place on the banks of Loch Lomond in September 1996. A meeting was held at the Buchanan Arms Hotel entitled ‘Defining Humanities Computing’. Attending the meeting were representatives of three leading universities which had been involved in the Computers in Teaching Initiative established in Britain in the early 1990s. Many of the names are familiar still: from King’s, Harold Short, Willard McCarty and Marilyn Deegan; from Glasgow, Christian Kay, Jean Anderson and Ann Gow; from Oxford, Stuart Lee, Mike Popham and Mike Fraser. It’s perhaps the nearest thing to a digital humanities summit meeting that has ever taken place in Britain. Among the questions debated were:

  • How should we define Humanities Computing theoretically or pragmatically in terms of current practice? 
  • Where does humanities computing fit within institutions of higher education? How will computing integrate into standard humanities courses? 
  • What should research in humanities computing be about? 

These are questions that are still as pressing as they were twenty years ago, and I fear we still lack cogent answers. It is fair to say that the deliberations on the banks of Loch Lomond were even then heated. For some, computing was something which facilitated and supported academic research, and the role of humanities computing specialists was analogous to that of lab technicians. For others, particularly Willard McCarty, who has been the most persistent and forceful advocate of this view in Britain, humanities computing is a field of intellectual endeavour and investigation on a par with more widely recognized academic disciplines such as history, classics or media studies.

In the course of the discussions in Scotland, Willard drafted the following definition of the field as he saw it then:

‘HUMANITIES COMPUTING is an academic field concerned with the application of computing tools to humanities and arts data or their use in the creation of these data. It is methodological in nature and interdisciplinary in scope. It works at the intersection of computing with the other disciplines and focuses both on the pragmatic issues of how computing assists scholarship and teaching in these disciplines, and on the theoretical problems of shift in perspective brought about by computing. It seeks to define the common ground of techniques and approaches to data, and how scholarly processes may be understood and mechanised. It studies the sociology of knowledge as this is affected by computing as well as the fundamental cognitive problem of how we know what we know.'

'Within the institution, humanities computing is manifested in teaching, research, and service. The subject itself is taught, as well as its particular application to another discipline at the invitation of the home department. Practitioners of humanities computing conduct their own research as well as participate by invitation in the projects of others. They take as a basic responsibility collegial service, assisting colleagues in their work and collaborating with them in the training of students.'  

This is a beautifully crafted working definition, which would apply as much to the digital humanities today as to the humanities computing of 1996 (an updated version, supplied by Willard, is available here). The clarity of the definition, however, brings to the forefront a number of issues. The simplicity of the insistence that humanities computing is about using technology in humanities scholarship is important. But in 1996, there was still an air of reticence and passivity about this activity. Could computers model and mechanise what scholars did? The focus is on replicating existing scholarly practice in a digital environment. The idea that computers might create new types of scholarship is implicit here, but not actually stated. Likewise, it is assumed that intellectual disciplines are equated to the administrative structures of universities. Disciplines equal departments, it is suggested, and humanities computing only intervenes (in a collegial fashion) at the request of the home department.

Most of those attending the Loch Lomond event were not members of the academic staff of their respective universities. Most worked in information services or in libraries, in what were in those days in Britain called ‘academic related’ posts. Intellectually and in terms of their academic expertise, these pioneers of humanities computing were without doubt the equals of those in full academic posts. Part of the reason for the meeting at Loch Lomond was to try and create a co-ordinated approach to the anomalous position created by the fact that many of those who were pioneering the use of humanities computing were not themselves academics. Curiously, as far as the UK is concerned, the position of scholars and researchers who do not hold formal academic posts has got worse rather than better. The category of ‘academic-related’ post has been abolished, and Britain has misguidedly emulated North America in insisting on a distinction between academics and professional services staff, who often have significantly poorer career conditions than academic staff. Too often in this process, digital humanities work has been regarded as more appropriate to the professional services. We may trace this diminution in the status of digital humanities practitioners to that very reticence which states that we model the practices and requirements of academics. We shouldn’t. We should be challenging the way in which academic research is conducted, and disrupting cosy disciplinary assumptions. Instead of documenting and modelling what historians have done for generations, we need to show how it could be done differently.

In essence, we use computers at present to undertake humanities research more quickly, conveniently and cheaply. This reflects the way in which all those engaged in developing the infrastructure underpinning humanities research have sought to replicate existing scholarly practice in a more mechanized environment. Very few scholars have tried to break out of these existing models – one such is with us here this evening, Tim Hitchcock. But one Old Bailey exemplar cannot a revolution make. The way in which our digital landscape replicates the older print scholarship reflects the lack of confidence among practitioners of digital humanities in challenging older structures of scholarship and their unwillingness to build really new structures. It is striking how digital projects are often bound by the very old-fashioned structure of the edition. While I was working at King’s College London, much of the Department of Digital Humanities research was about building digital editions of canonical materials (rarely something unfamiliar) for individual scholars, ranging from Ben Jonson and Jane Austen to calendars of historical documents. Even in the major prosopographical datasets produced at King’s – some of the most intriguing and potentially transformational work undertaken within the digital humanities – the data is safely locked away behind a web interface which makes the data almost as intractable as if it were printed.

It is difficult to escape the impression that digital methods have hitherto chiefly been used as a means of trying to restore dying and endangered forms of editorial scholarship. A good illustration of this is the calendar. From the nineteenth century, this was a major means of publishing archival records for historians. Printed volumes contained short and thoroughly indexed summaries of historical record series. The vast size of the record series justified the production of summaries – even in abridged form, the printed volumes represented a huge series. For many areas of historical research, the calendar was the essential tool and the first step in primary research. But calendars were enormously expensive to produce, and printing costs became increasingly prohibitive. In a desperate attempt to keep the small trickle of calendars flowing, Roy Hunnisett of the Public Record Office produced in 1977 a guide to record publication which gave rules for the preparation of calendars. This is fascinating as a document of late print culture. Hunnisett’s rules are dominated by the need to reduce printing costs and at almost every point are shaped by what proved to be a doomed method of publishing records.

As a historian whose research has been facilitated by series such as the Calendar of Patent Rolls or the Calendar of Close Rolls, I enthusiastically applaud the digital revival of this movement for giving access to archival records. But the historians who have led these projects have generally found it difficult to re-imagine how a calendar might operate in a digital environment. What we have is what Her Majesty’s Stationery Office were doing in 1910, with the additional facility of some images of the records. This problem is exemplified by the way in which Hunnisett’s rules, formulated for print, are still used as the editorial basis of the online calendars, although many of the compromises Hunnisett was forced to make were intended solely to reduce printing costs, and thus do not apply in a digital environment. So, how might we imagine a calendar in an online environment? The concept of a calendar assumes that summaries are the only way to explore the vast quantities of information in archival record series. If we accept that assumption – that extracting and abridging historical records is a reasonable way of proceeding – then we could think about different strategies and structures for summarizing these records. We could start to produce a variety of summary tables of information in particular records which could then be displayed and linked in different configurations.

Instead of the standard and restricted chronological structure of the calendar, we could establish open data repositories containing tables summarizing different aspects of the records, linked to images to facilitate verification. I have for many years worked on the records of the Peasants’ Revolt of 1381, and it was an interest in editing these that first really drew me into digital humanities work. I have recently started to experiment with preparing and sharing data relating to the revolt in this way, and I think it has some exciting possibilities. The concept of nanopublications – scholarly statements reduced to their smallest possible component and expressed as RDF triples – might be relevant here, with archival resources being represented by vast linked groups of nanopublications. But this poses many challenges. I would regard my work on the Peasants’ Revolt as my most important scholarly work. I think I have now reached the stage where I would be happy for it to become a large number of digital tables which I share with whoever is interested – losing in the process a lot of the traditional sense of authorship, ownership and acknowledgement – but it’s taken me a long time to reach that stage, and for many younger scholars this poses profound challenges in terms of careers and academic profile.
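To make the nanopublication idea concrete, a single assertion from an archival record can be sketched as a subject–predicate–object triple, with provenance kept separately so that authorship survives when the data is recombined. The identifiers and names below are purely illustrative inventions, not drawn from any real dataset or project:

```python
# A minimal sketch of nanopublication-style triples for an archival record.
# Every identifier here (record:..., person:..., editor:...) is hypothetical.

from collections import namedtuple

Triple = namedtuple("Triple", ["subject", "predicate", "obj"])

# Hypothetical assertions summarizing one entry from a record series.
assertions = [
    Triple("record:roll-482-m4", "names", "person:JohnSmith"),
    Triple("person:JohnSmith", "chargedWith", "offence:insurrection"),
    Triple("record:roll-482-m4", "hasImage", "image:roll-482-m4.jpg"),
]

# Provenance is recorded separately, as in the nanopublication model:
# the assertion and the statement of who made it are distinct objects.
provenance = {"assertedBy": "editor:AP", "assertedOn": "2013-11-24"}

for t in assertions:
    print(t.subject, t.predicate, t.obj)
```

The point of the separation is that the three assertions can be freely extracted, linked and re-displayed in other configurations, while the provenance block still travels with them to acknowledge the editor’s work.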

The online calendar stands as an indictment of our timorous approach to existing scholarship in developing the digital humanities. I think it will be clear that, while I enthusiastically subscribe to the view that arts and humanities scholarship should deeply engage with the new technological possibilities and facilities which are all around us, I don’t take the view that the triumph of digital humanities is inevitable. In my most dystopian moments, I fear that the kind of creative engagement humanities scholars have had in recent years with digital technology will in future become more difficult as the digital world becomes increasingly commercialized and locked down. In the UK, it’s worth looking at that awful thing, the Research Excellence Framework (probably the most striking example of academic newspeak I have yet encountered – even worse than examples from Soviet bloc universities in the Stalinist era). The REF defines the status of particular types of academic activity in the UK as strongly as the tenure process in North America. Unlike the tenure process, research assessment in the UK has always gone out of its way to accommodate interdisciplinary research and new forms of electronic communication. In the 2008 Research Assessment Exercise, digital humanities formed part of the panel dealing with Library and Information Management, and DH units did very well. King’s College London, although it was only its first time in the exercise in this subject, came joint top of the unit of assessment, and in Glasgow the Humanities Advanced Technology and Information Institute was the leading Scottish institution. In order to reduce the breathtaking and grotesque costs of the REF, it was decided to create larger panels this time, so library and information science has been joined with cultural and media studies to form one large panel.
Although the rubric for this panel mentions DH, there is no recognized DH specialist on the panel, although organisations like ADHO made nominations. The rules of the exercise have been changed to exclude many research staff as well as working librarians, archivists and information specialists. In some cases, joint DH-Cultural Studies submissions have been necessary. Of course, we don’t know yet what the outcome of the REF will be (true in November 2013, but of course we now have the results, and I have offered some preliminary reflections on them here), but I think we can already say that, if REF defines the research landscape in the UK, digital humanities does not figure very prominently on it.

Many of the issues about the future of the digital humanities can be traced back to concerns evident in Willard’s definition from Loch Lomond. The Loch Lomond meeting was very much of its time, in the assumption that a small group of enthusiasts from just three universities could shape approaches as to how digital technology would be integrated into arts and humanities provision of British higher education. The 1990s was characterized by a kind of gold rush, in which individuals and groups felt that they could annex parts of the digital future. A couple of medievalists might hope to shape the digital future of medieval studies by establishing a portal; others sought to control future editorial practice by developing appropriate guidelines. This was analogue thinking par excellence, but this mentality of seeking to become recognized as the ‘Mr Digits’ of certain aspects of scholarly activity is still, I think, evident. And this is true of the digital humanities. Bodies like the Alliance of Digital Humanities Organisations make digital technologies seem safe, familiar, comfortable and (above all) controllable. Much of our literature (such as Melissa Terras’s remarkable and compelling keynote at DH 2010) assumes that, in the arts and humanities, the digital equates to the formally constituted bodies in ADHO. This is clearly wrong, and dangerous. One need only look to HASTAC – which has been far more successful than ADHO in attracting young and digitally committed faculty across a variety of disciplines and interests – to see the danger in clinging to the structures of forty years ago. But it goes much, much further. As humanities computing pursued research funding, and sought to model itself on scientific research institutes, it forgot about pedagogy. As a result, the Association of Learning Technologists sprang up, a body just as large and active as ADHO, but there appears to be little contact between them.
Likewise, other areas, such as museums and archives, have pursued their own digital paths, with only patchy contact with DH. As a community DH is singularly ill prepared to deal with the digital becoming mainstream. Having spent many years predicting that everyone would absorb digital techniques, we are very uncertain what to do when that actually happens, and we become very small cogs in a huge machine. The growth of areas of academic study like digital culture, web science and digital studies illustrates the issue – these are the digital achieving recognition from mainstream academia, and those in the DH community aren’t sure how to accommodate this, no matter how wide we make the tent.

This leads to the argument which was my starting point in thinking about this talk, namely that the digital humanities are inherently time-limited and must inevitably disappear. This assumes that, once the tools developed by DH have passed into common use, DH will have done its job, and ceases to have a purpose. Once the humanities become digital, there is no further use for the digital humanities. This argument has recently been clearly expressed by Peter Webster of the British Library in a post on ‘Where Should the Digital Humanities Live?’ Peter wrote: ‘The end game for a Faculty of DH should be that the use of the tools becomes so integrated within Classics, French and Theology that it can be disbanded, having done its job. DH isn’t a discipline; it’s a cluster of new techniques that give rise to new questions; but they are still questions of History, or Philosophy, or Classics; and it is in those spaces that the integration needs eventually to take place’. At one level, this might be an argument that DH should become primarily critical, but I think it ignores the extent to which our engagement with digital technology is a continuum. John Naughton has noted how the humanities is the only area which refers to ‘the digital’ in this way. At one level, it reflects an assumption that ‘the digital’ is in some way alien; at another, it assumes that ‘the digital’ represents a series of techniques which came to maturity with the appearance of the World Wide Web in the mid-1990s (it is this that has led David Berry and others to suggest that we can talk of the ‘post-Digital’). I think it is an oversimplification, however, to see that apotheosis of the 1990s as a single transformational moment which we are in the process of coming to terms with. The technologies of the 1990s were part of a continuum of transformation which in my view reaches back to the Industrial Revolution.
We know how to make digital editions of classical texts, but how can the new technologies of making help us study the classical period? What use is the internet of things to classicists? (A lot, I would say.) What about born-digital data – something which could be fitted into Willard’s Loch Lomond definition, but which apparently wasn’t at the forefront of thinking at that time? In short, it is clear that there are many new technologies and new sciences coming along which will also offer manifold opportunities and challenges to the humanities. The role of the digital humanities is not to continue to crank up the digital photocopier, but rather to explore these innovations and consider how they enable us to engage with the subject areas of the humanities in fresh ways. In order to achieve this – and ensure their own future – digital humanities practitioners need to take more of an intellectual lead in creating projects.
