We love the ever-expanding use of digital tools, especially in the arts and humanities. Not only for what we learn about specific artists, aesthetics and audiences, but because using computers to analyze arts and letters helps us bridge the (artificial) divide between the Arts and the Sciences.
Bits and bytes are helping us reunite the Two Cultures of the 20th century, bringing us back to the integrated, holistic approach that existed at least until the early 19th century (yes, speaking here of Western learning and the so-called common era; Lobster & Canary is always curious to know more about how other traditions of inquiry have contemplated the issues discussed here). Echoes of the days when a Goethe focused on optics and color theory as much as poetry, when a Humphry Davy and an Erasmus Darwin expressed their chemical and biological findings in poetry, when a Shelley and his Romantic peers made technology and science the serious object of their poetry.
Back to the Lunar Society of Birmingham, and to Diderot and the Encyclopédistes in the 18th century!
Back, back to the Lincean Academy in 17th-century Rome, and the Dutch and German polymaths of that same era!
Back, back, back to Da Vinci, Alberti, and Aldus Manutius!
Two areas of computer-aided inquiry particularly intrigue us:
* Image analysis: Taking the venerable techniques of connoisseurship to new levels, computer scientists have recently put forth interesting hypotheses on image identification and artistic affinities.
For instance (chosen more or less at random from among many possible examples), here is the opening to "Image Processing for Artist Identification: Computerized Analysis of Vincent van Gogh's Brushstrokes" by C. Richard Johnson Jr. et al. (IEEE Signal Processing Magazine, July 2008):
"As image data acquisition technology has advanced in the past decade, museums have routinely begun to assemble vast digital libraries of images of their collections. The cross-disciplinary interaction of image analysis researchers and art historians has reached a stage where technology developers can focus on image analysis tasks supportive of the art historian's mission of painting analysis in addition to activities in image acquisition, storage, and database search. In particular, the problem of artist identification seems ripe for the use of image processing tools. In making an attribution, experts often use not only current knowledge of the artist's common practices, in combination with meticulous comparisons of a variety of technical data (acquired, e.g., through ultraviolet fluorescence, infrared reflectography, x-radiography, paint sampling, and/or canvas weave count), but they also include a visual assessment of the presence of the artist's "handwriting" in the brushwork. This suggests that mathematical analysis of a painting's digital representation could assist the art expert in the process of attribution."
(For more, click here.)
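To give a flavor of what "mathematical analysis of a painting's digital representation" can mean in practice, here is a deliberately toy sketch in Python. It is not the authors' method; it merely illustrates the general idea of turning brushwork texture into numbers, by histogramming local gradient orientations in an image and comparing two synthetic "canvases" (all data below is invented for illustration):

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """A crude texture fingerprint: histogram of local gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # orientations folded into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Two synthetic "canvases": horizontal vs. vertical stripe textures,
# standing in (very loosely) for two different brushstroke habits.
rng = np.random.default_rng(0)
horiz = np.sin(np.linspace(0, 20 * np.pi, 64))[:, None] * np.ones((64, 64))
vert = horiz.T
noise = rng.normal(0.0, 0.05, (64, 64))

h1 = orientation_histogram(horiz + noise)
h2 = orientation_histogram(vert + noise)

# Chi-squared-style distance between the two orientation profiles:
# near 0 for similar textures, near 1 for very different ones.
dist = 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))
print(f"orientation-profile distance: {dist:.3f}")
```

Real attribution work layers far more sophisticated features (wavelet decompositions, stroke segmentation, statistical modeling) on top of this basic move: reduce a digitized painting to quantitative descriptors, then compare those descriptors across works.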
* Textual analysis: Humanists of all stripes were among the first to see the benefits of digital tools, for everything from raw set-construction and recurrence compilation to sophisticated pattern recognition.
The journal Literary & Linguistic Computing is a good gateway. Here you find articles like these:
"Constructing readings of damaged and abraded ancient documents is a difficult, complex, and time-consuming task. It frequently involves reference to a variety of linguistic and archaeological datasets and the integration of previous knowledge of similar documentary material. Due to the involved and lengthy reading process, it is often difficult to record and recall how the final interpretation of the document was reached and which competing hypotheses were presented, adopted, or discarded in the process of reading. This article discusses the development of the application called DUGA, which uses Decision Support System (DSS) technology to aid the day-to-day reading of damaged documents. Such an application will facilitate the process of transcribing texts by providing a framework in which scholars can record, track, and trace their progress. DUGA will include a word search facility of external resources such as the Vindolanda ink tablets through the knowledge base Web Service called APPELLO. This functionality will support the scholars through their reading process by suggesting words, which may confirm current interpretations or inspire new ones. Furthermore, DUGA will allow continuity between working sessions, and the complete documentation of the reading process, that has hitherto been implicit in published editions." [Abstract for "Towards a decision support system for reading ancient documents," by Henriette Roued-Cunliffe, in the December 2010 issue.]
"The use of corpus material and methods represents a major methodological innovation in Chinese historical linguistics. The very exciting findings uncovered in this article may be seen as the first systematic large-scale investigation of the various morpho-syntactic patterns underpinning the evolution of Chinese lexis. In this article, we have made a ground-breaking investigation into the diverse lexical modes and patterns which have emerged and developed in each major period in Chinese history, in which the generation of corpus linguistic data and the subsequent computational statistical modelling have been essential." [Abstract of "A corpus-based study of lexical periodization in historical Chinese," by Meng Ji, in the June 2010 issue.]
"This article provides quantitative evidence for a hypothesis concerning fourth-century translations of Indian Buddhist texts from Prakrit and Sanskrit into Chinese. Using a Variable Length n-Gram Feature Extraction Algorithm, principal component analysis and average linkage clustering we are able to show that 24 sutras, attributed by the tradition to different translators, were in fact translated by the same translator or group of translators. Since part of our method is based on assigning weight to n-grams, the analysis is capable of yielding distinctive features, i.e. strings of Chinese characters, that are characteristic of the translator(s). This is the first time that these techniques have successfully been applied to medieval Chinese texts. The results of this study open up a number of new directions for the lexicographic and syntactic study of early Chinese translations of Buddhist texts." [Abstract for "Quantitative evidence for a hypothesis regarding the attribution of early Buddhist translations," by Jen-Jou Hung, Marcus Bingenheimer and Simon Wiles, in the April 2010 issue.]
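The core intuition behind n-gram attribution studies like the one just quoted can be sketched in a few lines of Python. This is purely illustrative, with invented English sentences standing in for the sutras, and simple fixed-length character bigrams plus cosine similarity standing in for the paper's much richer variable-length n-gram and clustering machinery:

```python
from collections import Counter
from itertools import combinations
import math

def ngram_profile(text, n=2):
    """Relative frequencies of character n-grams: a crude
    stylistic fingerprint of a text."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(v * q.get(g, 0.0) for g, v in p.items())
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

# Toy corpus: texts a1 and a2 share verbal habits; b1 does not.
texts = {
    "a1": "the way of the dharma is the way of stillness and the way of calm",
    "a2": "the way of the mind is the way of calm and the way of stillness",
    "b1": "emptiness pervades form and form pervades emptiness without end",
}
profiles = {name: ngram_profile(t) for name, t in texts.items()}

# Texts by the same hand should show higher pairwise similarity.
for x, y in combinations(texts, 2):
    print(f"{x}-{y}: {cosine(profiles[x], profiles[y]):.2f}")
```

The quoted study goes much further, weighting n-grams so that the analysis also surfaces the specific character strings that distinguish a translator, but the underlying move is the same: reduce each text to a frequency profile, then group texts by profile similarity.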
For more from Literary & Linguistic Computing, click here.