The Center for Visualization at the University of Kentucky and the Carolina Digital Library and Archives (CDLA) at UNC-Chapel Hill are beginning a collaboration to develop a new software system focused on incorporating images into the digital editing process. The system will specifically address:

  • Image manipulation and comparison
  • Image annotation
  • Linking images and text (automated, semi-automated, and by-hand)
  • Building the metadata framework(s) that support the operations above (one possible shape for such a link record is sketched after this list).
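
To make the metadata item concrete, here is a hypothetical Python sketch of the kind of record such a framework might store: one link between a span of edited text and a rectangular region of a source image, together with an editorial annotation. All names and fields are illustrative assumptions, not the project's actual schema, which is still to be designed.

```python
from dataclasses import dataclass

@dataclass
class ImageTextLink:
    """One hypothetical link between edited text and an image region."""
    image_uri: str           # source image, e.g. a manuscript page scan
    region: tuple            # (ulx, uly, lrx, lry) pixel coordinates
    text_id: str             # identifier of the linked text span
    annotation: str = ""     # editorial note attached to the link
    method: str = "by-hand"  # "by-hand", "semi-automated", or "automated"

# Example: a hand-made link recording a doubtful reading.
link = ImageTextLink(
    image_uri="page_042.tiff",
    region=(120, 340, 410, 395),
    text_id="line-7-word-3",
    annotation="ink smudged; reading uncertain",
)
```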

This project will address the importance of information beyond text for the humanistic endeavor, focusing not only on the text but on the context of the text. Tools (and the editors using them) need to recognize that edited text, which is inherently flat and two-dimensional, has its origins in physical objects (manuscripts, inscriptions in stone, wax tablets, maps, paint on fabric, etc.), which are by nature three-dimensional, complicated, and heterogeneous, and which exist within their own space and time. A traditional critical edition commonly consists of a main text plus variant readings and editorial input. But what of the objects from which those readings are taken? What of their origins and their existence through time? The key to our thinking is that text is not simply flat data derived from its context: the text is the context. A text consists of everything that makes it up, and we need tools that take the objects and their contexts into account as well as take advantage of them.

The current center of gravity for editing tools is in the one-dimensional space of linear text, and we want to shift it into the second dimension of images. Current tools are built to focus on either the text or the image. Tools for encoding text with reference to an image, or for annotating an image with text, such as the EPPT (Edition Production and Presentation Technology) or the IMT (Image Markup Tool), show an image in one pane and text in another and provide a method for “linking” them by hand, but extensive linking and annotation is both tiring and time-consuming. In addition, there is no relation, expressed or otherwise, between the linking interface and the semantics of the text. We want to explore possibilities for automated and semi-automated text-to-image linking, and for developing a central metaphor for image-text editing tools, through a careful front-end analysis using human factors and usability engineering techniques. We are also committed to a participatory design philosophy in which a representative set of end users will help guide design decisions throughout the process. We have set up a project wiki at imagetool.pbwiki.com; interested people are invited to go there to find out more about the project and to describe their own projects and use cases (for the wiki invite key, contact Dot Porter at dporter [at] uky.edu).
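
As an illustration of what “semi-automated” linking could mean in practice, the following Python sketch uses a standard projection-profile method to segment a page image into text-line zones, then pairs those zones, in order, with the lines of a transcription, leaving the editor to confirm or correct each proposed pair. This is a minimal sketch under simplifying assumptions (clean horizontal lines, roughly uniform lighting); it is not the project's implementation, and all function names are hypothetical.

```python
from PIL import Image
import numpy as np

def find_line_zones(page_path, ink_threshold=128, min_rows=3):
    """Return (top, bottom) row ranges that likely contain text lines."""
    gray = np.asarray(Image.open(page_path).convert("L"))
    ink = gray < ink_threshold                 # dark pixels count as ink
    profile = ink.sum(axis=1)                  # ink pixels in each image row
    has_text = profile > 0.2 * profile.mean()  # crude noise floor
    zones, start = [], None
    for row, flag in enumerate(has_text):
        if flag and start is None:
            start = row                        # a text band begins
        elif not flag and start is not None:
            if row - start >= min_rows:        # ignore stray specks
                zones.append((start, row))
            start = None
    if start is not None:
        zones.append((start, len(has_text)))
    return zones

def propose_links(page_path, transcription_lines):
    """Pair transcription lines with detected zones, top to bottom.
    The pairs are proposals only; an editor confirms or corrects them."""
    zones = find_line_zones(page_path)
    return list(zip(transcription_lines, zones))
```

In a tool built along these lines, the confirmed pairs would then be written back into records like the ImageTextLink sketched above, so that by-hand, semi-automated, and automated links could share a single metadata framework.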

This presentation will report on the progress of the project, describe the specific use cases to be implemented, and invite discussion on incorporating new use cases into the project.