I recently gave a talk at the English Institute at Harvard on the history of the relationship between tactility and reading. My main aim was to think through new digital reading practices within a longer history of the handedness of book reading. There were many challenging questions at the end of the talk, most of which had to do with the map my collaborator, Mark Algee-Hewitt, had made of Goethe’s corpus of works (included below). We are using statistical algorithms to generate literary topologies, beginning with Goethe’s corpus and then moving into German eighteenth-century literature more generally.
The one question that really stuck in my mind — as a problem — was the suggestion that what everyone is waiting for from digital humanities is the “killer app.” By this rather inelegant formulation I believe the questioner meant a project that uses statistical techniques to tell a new literary or cultural history. Not only did the metaphor strike me as particularly unapt (unapp? inept?) — I for one don’t want to bring this mentality of computing culture into the humanities. More importantly, it seemed to overlook the principles of inquiry behind these types of methodologies, which are both deeply exploratory (what can we explain through the quantitative?) and deeply aggregative. There is no one way to translate the meaning of texts into quantifiable form. For many, of course, there is no way period. But if there is, it won’t be a single model; it will be many models working simultaneously in concert with one another.
The successful researcher within this kind of “digital humanities” will be someone who can aggregate a variety of statistical approaches to arrive at a new sense of the meaning of some aspect of our textual past. Time to put to rest (as in r.i.p.) the killer app.