TILE Blog

Layers 3 and 4

In my last blog entry I described the first two layers of a four-layer model for electronic editions and archives. The final two layers are detailed below:

Level 3: Interface layer

While stacks of multimedia files and transcripts in open repositories would, in some ways, improve the current state of digital libraries, interfaces are required if users are to do anything but simply access content a file at a time. Of course, interfaces can be very expensive to develop and tend to become obsolete very quickly. Unfortunately, funding for interface development rarely lasts longer than a year or two, so the cost of maintaining a large code base usually falls to the hosting institution, which seldom has the resources to do so adequately. A new system and standard are required if interfaces are to be developed sustainably.

Code modularization and reusability have long been ideals in software development, but they have been realized in only limited ways in the digital humanities. Several large infrastructure projects, most notably SEASR, seek to provide a sustainable model for interoperable digital humanities tools, but have yet to achieve wide-scale adoption. Our model follows the example of SEASR, but because its scope is limited to web-based editions and archives, we can impose some code restrictions that more broadly intentioned projects could not (and should not).

We propose a code framework for web-based editions, first implemented in JavaScript using the popular jQuery library, but adaptable to other languages when the prevailing winds of web development change. An instance of this framework is composed of a manifest file (probably in XML or JSON format) that identifies the locations of the relevant content and any associated metadata, and a core file (similar to, but considerably leaner than, the core jQuery.js file at the heart of the popular JavaScript library) with a system of “hooks” onto which developers might hang widgets they develop for their own editions. A widget, in this context, is a program with limited functionality that provides well-defined responses to specific inputs. For example, one widget might accept as input a set of manuscript images and return a visualization of data about the handwriting present in the document. Another might simply adapt a deep zooming application, such as OpenLayers, for viewing high resolution images and linking them to a textual transcript. Each widget should depend only on the core file and, if applicable, the content and other input data; no widget should depend directly on any other. If data must be passed from one widget to the next, the first widget should communicate with the core file, which can then call an instance of the second.
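
What follows is a minimal sketch of this manifest/core/widget pattern, written against jQuery. Every name in it (tileCore, register, start, the manifest fields, the #viewer element) is invented for illustration and is not the actual TILE API.

    // A minimal sketch of the proposed core-and-widgets pattern (hypothetical names throughout).
    // Assumes jQuery is loaded and the page contains an element with id="viewer".
    var tileCore = (function () {
      var widgets = {};     // registered widget constructors, keyed by name
      var manifest = null;  // parsed manifest describing content and metadata

      return {
        // Fetch and parse the JSON manifest that points at images, transcripts, and metadata.
        load: function (manifestUrl, callback) {
          jQuery.getJSON(manifestUrl, function (data) {
            manifest = data;
            if (callback) { callback(manifest); }
          });
        },
        // Widgets attach themselves to the core; they never call one another directly.
        register: function (name, constructor) {
          widgets[name] = constructor;
        },
        // The core instantiates a widget, handing it the manifest and any input data.
        start: function (name, input) {
          return widgets[name]({ core: this, manifest: manifest, input: input });
        }
      };
    }());

    // Example widget: a bare-bones image lister that depends only on the core.
    tileCore.register("imageViewer", function (env) {
      var images = (env.manifest && env.manifest.images) || [];
      jQuery("#viewer").empty();
      jQuery.each(images, function (i, url) {
        jQuery("<img>").attr("src", url).appendTo("#viewer");
      });
      return { imageCount: images.length };
    });

A page using the framework would then call something like tileCore.load("manifest.json", function () { tileCore.start("imageViewer"); }); the point of the sketch is only that widgets talk to the core, never to each other.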

It should be noted that we are, in fact, proposing to build something like a content management system at a time when the market for such systems is very crowded.  Nonetheless, experience with the major systems (Omeka, Drupal, Joomla, etc.) has convinced us that while a few provide some of the functionality we require, none are suited for managing multimedia scholarly editions.  Just as Omeka clearly serves a different purpose and audience than Drupal, so will our system meet the similar yet nonetheless distinct needs of critical editors.

Level 4: User-generated data layer

Many recent web-based editions have made use of “web 2.0” technologies which allow users to generate data connected to the content. In many ways, this is the most volatile data in current digital humanities scholarship, often stored in hurriedly constructed databases on servers where scale and long-term data storage have been considered in only the most cursory fashion. Further, the open nature of these sites means that it is often difficult to separate data generated by inexperienced scholars completing a course assignment from that of experts whose contributions represent real advances in scholarship. Our framework proposes the development of repositories of user-generated content, stored in a standard format, which will be maintained and archived. Of course, storing the data of every user who ever used any of the collections in the framework is impossible. We therefore propose that projects launch “sandbox” databases, out of which the best user-generated content may be selected for inclusion and “publication” in larger repositories. In some cases, these repositories may also store scholarly monographs that include content from a set of archives. Subscription fees may be charged for access to these collections to ensure their sustainability.
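
As a purely illustrative example of what a standard format for such contributions might look like, a single sandbox record could be as simple as the sketch below; the field names and the example.org URI are invented here and do not describe any existing schema.

    // Hypothetical record for one piece of user-generated content in a sandbox repository.
    // All field names and the example.org URI are invented for illustration.
    var userAnnotation = {
      id: "anno-0001",                                                  // identifier assigned by the sandbox
      target: "http://example.org/images/Hamlet_Q1_bodley_co1_001.tif", // the content item being annotated
      region: { x: 120, y: 340, width: 200, height: 60 },               // area of the image in pixels
      body: "Possible compositor error in this line.",
      creator: "example-user",
      created: "2009-11-01",
      status: "sandbox"                                                 // could become "published" after editorial review
    };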

Conclusion

It should be noted that much in the above model is already practiced by some of the best electronic editing projects.  However, the best practices have not been articulated in a generalized way.  Although we feel confident our model is a good one, it would be the height of hubris to call it “best practice” without further vetting from the community.  That, dear reader, is where you come in.  The comments are open.


A four-layer model for image-based editions

Perhaps the most iconic sort of project in the literary digital humanities is the electronic edition. Unfortunately, these projects, which seek to preserve and provide access to important and endangered cultural artifacts, are themselves endangered. Centuries of experimentation with the production and preservation of paper have generated physical artifacts that, although fragile, can be placed in specially controlled environments and more or less ignored until a researcher wants to see them. On the other hand, only the most rudimentary procedures exist for preserving digital artifacts, and most require regular care by specialists who must convert, transfer, and update file formats so that they remain readable by new technologies, which are not usually backwards compatible. A new model is required. The multi-layered model outlined below will, we believe, be attractive to the community of digital librarians and scholars, because it clearly defines the responsibilities of each party and requires each to do only what they do best.

Level 1: Digitization of source materials

[Figure: Four-layered model for image-based editions]

The creation of an electronic edition often begins with the transfer of analog objects to binary, computer-readable files. Over the last ten years, these content files (particularly image files) have proven to be among the most stable in digital collections. While interface code must regularly be updated to conform to the requirements of new operating systems and browser specifications, text and image file formats remain relatively unchanged, and even 20-year-old GIFs can be viewed on most modern computers. The problem, then, lies not so much in the maintenance of these files as in their curation and distribution. For various reasons (mostly bureaucratic and pecuniary rather than technical), libraries have often attempted to limit access to digital content to paths that pass through proprietary interfaces. This protectionist approach to content prevents scholars from using the material in unexpected (though perhaps welcome) ways, and also endangers the continued availability of the content as the software that controls the proprietary gateways becomes obsolete. Moreover, these limitations are rarely able to prevent those with technical expertise (sometimes only the ability to read JavaScript code) from accessing the content in any case, so nothing is gained, and (potentially) everything is lost, by this approach.

More recently, projects like the Homer Multitext Project, the Archimedes Palimpsest, and the Shakespeare Quartos Archive have taken a more liberal approach to the distribution of their content. While each provides an interface specially designed for the needs of its audience, the content providers have also made their images available under a Creative Commons license at stable and open URIs. Granting agencies could require that content providers commit to maintaining their assets at stable URIs for a specified period of time (perhaps 10-15 years). At the end of this period, the content provider would have the opportunity either to renew the agreement or to move the images to a different location. The formats used should be as open and as commonly used as possible. Ideally, the library should also provide several formats for each item in the collection. A library might, for instance, choose to provide a full-size 300 MB uncompressed TIFF image, a slightly smaller JPEG2000 image served via a Djatoka installation, or a set of tiles for use by “deep zooming” image viewers such as OpenLayers.
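
To make the idea of several formats at stable URIs concrete, a content provider might publish a small record like the one below for each item; the example.org URIs and field names are invented for this sketch and are not drawn from any of the projects named above.

    // Hypothetical list of derivatives for a single digitized page, all at stable URIs.
    // The URIs and field names are placeholders, not real endpoints.
    var itemDerivatives = {
      item: "Hamlet_Q1_bodley_co1_001",
      master: "http://example.org/masters/Hamlet_Q1_bodley_co1_001.tif", // full-size uncompressed TIFF (~300 MB)
      jp2: "http://example.org/jp2/Hamlet_Q1_bodley_co1_001.jp2",        // JPEG2000, e.g. served via a Djatoka installation
      tiles: "http://example.org/tiles/Hamlet_Q1_bodley_co1_001/"        // tile pyramid for deep zooming viewers such as OpenLayers
    };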

Level 2:  Metadata

The names of the files and directories in level 1 should be as descriptive as possible and should follow a regular, easily identifiable progression (e.g. “Hamlet_Q1_bodley_co1_001.tif”); however, all metadata external to the file itself should be considered part of level 2. Following Greene and Meissner’s now famous principle of “More Product, Less Process,” we propose that all but the most basic work of identifying content should be located in the second level of the model, and possibly performed by institutions or individuals not associated with the content provider at level 1. The equipment for digitizing most analog material is now widely available and many libraries have developed relatively inexpensive and efficient procedures for the work, but in many cases there is considerable lag time between the moment digital surrogates are generated and the moment they are made publicly available. Many content providers feel an obligation to ensure that their assets are properly cataloged and labeled before making them available to their users. While the impulse towards quality assurance and thorough work is laudable, a perfectionist policy that delays publication of preliminary work is better suited to immutable print media than to an extensible digital archive. In our model, content providers need not wait to provide content until it has been processed and catalogued.

Note also that debates about the proper choice or use of metadata may be contained at this level without delaying at least basic access to the content. By entirely separating metadata from content, we permit multiple transcriptions and metadata sets (perhaps with conflicting interpretations) to point to the same item’s URI. Rather than providing, for example, a single transcription of an image (inevitably the work of the original project team, reflecting that team’s scholarly presuppositions and biases), this model allows those with objections to a particular transcription to generate another, competing one. Each metadata set is equally privileged by the technology, allowing users, rather than content providers, to decide which metadata set is most trustworthy or usable.
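
As a concrete (and entirely hypothetical) illustration, two competing metadata records might point at the same image URI like this; the field names, URI, and readings are invented for the example.

    // Two hypothetical, competing transcription records pointing at the same image URI.
    // Field names, readings, and the example.org URI are invented for illustration.
    var transcriptions = [
      {
        target: "http://example.org/images/Hamlet_Q1_bodley_co1_001.tif",
        creator: "Original project team",
        reading: "Transcription A of the first line"
      },
      {
        target: "http://example.org/images/Hamlet_Q1_bodley_co1_001.tif", // same item, different interpretation
        creator: "Dissenting editor",
        reading: "Transcription B of the first line"
      }
    ];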

In my next blog entry I will discuss the next (and final) two layers of this model:  interfaces and user-generated data.


TILE directors begin blogging

Last week, the TILE team held its six-month project meeting in Bloomington, Indiana. At this meeting we further refined the scope of the project and agreed to deliver the following tools by July of 2010:

  • An extension of the image markup features of the Ajax XML Encoder (AXE). The extension will feature a newly designed, more user-friendly web interface and will permit editors to link regions of any shape to tags selected from a metadata schema supplied by the editor. Additionally, editors will be able to link non-contiguous regions and specify the relationship between them.
  • An automated region-recognizing plugin for AXE that can be modified to recognize regions of any type but which will initially be designed to identify all of the text lines in an image of horizontally-oriented text.
  • A jQuery plugin that permits text annotation of an HTML document (a rough sketch of what such a plugin might look like follows this list).
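
As a rough sketch of the third deliverable, the fragment below defines a toy jQuery plugin that tags the user’s text selection with a term from a controlled vocabulary. Every name in it (annotate, vocabulary) is invented for illustration; this is not the planned TILE API, and a real implementation would store the annotation rather than log it.

    // A toy sketch of a text-annotation jQuery plugin (hypothetical names throughout).
    (function ($) {
      $.fn.annotate = function (options) {
        var settings = $.extend({ vocabulary: ["damaged", "illegible"] }, options);
        return this.on("mouseup", function () {
          var selection = window.getSelection().toString();
          if (!selection) { return; }
          // A real tool would present a picker; this sketch just takes the first term.
          var tag = settings.vocabulary[0];
          console.log("Annotated '" + selection + "' with tag: " + tag);
        });
      };
    }(jQuery));

    // Usage: jQuery("#transcript").annotate({ vocabulary: ["damaged", "illegible"] });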

Also, in order to better communicate the work of the project to our partners as well as to the larger digital humanities community, we have decided to blog weekly about an important issue relating to the project, or to text and image linking in particular. This week, I (Doug Reside) will post a series of articles about a new, structural model for multimodal editions. We welcome your feedback.


Announcing tilegen!

The TILE team is pleased to announce tilegen, a Firefox extension for automatically tiling large images for use in deep zoom programs such as OpenLayers.

The extension creates tiles following the Tile Map Service (TMS) layout. For more on the TMS specification, we recommend the OSGeo wiki page at http://wiki.osgeo.org/wiki/Tile_Map_Service_Specification.

In short, the PNG file for each tile is stored at a path of the form “{base path}/1.0.0/{tile set}/{zoom level}/{x index}/{y index}.png”, where zoom level 0 is the LOWEST resolution and the tile at x=0, y=0 is the top-left corner of the image.
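
A small helper that follows this layout might look like the sketch below; the base URL and tile-set name in the example are placeholders, not real endpoints.

    // Hypothetical helper that builds a tile URL following the TMS-style layout above.
    function tileUrl(base, tileSet, zoom, x, y) {
      return base + "/1.0.0/" + tileSet + "/" + zoom + "/" + x + "/" + y + ".png";
    }

    // e.g. tileUrl("http://example.org/tiles", "hamlet_q1_001", 0, 0, 0)
    //   returns "http://example.org/tiles/1.0.0/hamlet_q1_001/0/0/0.png"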

You can install this plugin by double-clicking the link below, or by dragging it onto the Firefox browser. The source code is available by renaming the .xpi extension to .zip and then unzipping the file.

Download tilegen.

Disclaimer: MITH is not responsible for the irresponsible use of this code, or for any injuries to people or properties which result from its use.


Welcome to the TILE project blog!

Here you’ll find the latest TILE news, as well as information about our project team, partner projects, and prototype and related tools. Be sure to visit regularly for project updates, or subscribe to the RSS Feed to have news sent directly to you.

What exactly is TILE? TILE stands for Text-Image Linking Environment, and it’s a web-based tool (or more properly a collection of tools) that will enable scholars to annotate images, and to incorporate them into their digital editions. TILE will be based primarily on the Ajax XML Encoder (AXE) developed by project co-PI Douglas Reside and funded through an NEH Digital Humanities Start-up grant. During the course of this project we will extend the functionality of AXE to allow the following:

  • Semi-automated creation of links between transcriptions and images of the materials from which the transcriptions were made. Using a form of optical character recognition, our software will recognize words in a page image and link them to a pre-existing textual transcription. These links can then be checked and, if need be, adjusted by a human.
  • Annotation of any user-selected area of an image with terms from a controlled vocabulary (for example, the tool can be adjusted to allow only the annotations “damaged” or “illegible”).
  • Application of editorial annotations to any area of an image.
  • Support for linking non-horizontal, non-rectangular areas of source images.
  • Creation of links between different, non-contiguous areas of primary source images (a sketch of one possible link record follows this list). For example:
    • captions and illustrations;
    • illustrations and textual descriptions;
    • analogous texts across different manuscripts.
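
The sketch below shows one hypothetical shape such a link record might take, connecting two regions of a page image to each other and to a span of transcription; the format, field names, and URIs are invented for illustration and are not a TILE or TEI schema.

    // Hypothetical link record connecting two image regions and a passage of transcription.
    // All URIs, coordinates, and field names are invented for illustration.
    var link = {
      relation: "caption-of",
      regions: [
        { image: "http://example.org/images/page_042.tif", shape: "rect", x: 100, y: 80,  width: 300, height: 40 },
        { image: "http://example.org/images/page_042.tif", shape: "rect", x: 120, y: 400, width: 280, height: 350 }
      ],
      text: { transcript: "http://example.org/transcripts/page_042.xml", from: 1204, to: 1260 }
    };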

We are especially concerned with making our tool available for integration into many different types of project environments, and we will therefore work to make the system requirements for TILE as minimal and as generic as possible.

The TILE development project is collaborative, involving scholars from across the United States and Europe who are working with a wide variety of materials – ancient and modern, hand-written and printed, illustrated, illuminated, and not. This project has the potential to help change not just digital editing, but the way software in the humanities is developed and considered. Many tools created for humanists are built within the context of a single project, focusing either on a single set of materials or on materials from a single time period, and this limits their ability to be adapted for use by other projects. By design, our project cuts across subjects and materials. Because it will be simple, with focused functionality, our tool will be usable by a wide variety of scholars from different areas and working with a variety of materials – illustrations and photographs as well as images of text. Therefore we have brought together several collaborators from different projects with different needs to provide advice and testing for our work: The Swinburne Project and The Chymistry of Isaac Newton at Indiana University-Bloomington, the Homer Multitext Project at Harvard’s Center for Hellenic Studies, the Mapas Project at the University of Oregon, and various projects supported through the Digital Humanities Observatory at the Royal Irish Academy, Dublin. As TILE becomes available, we will be seeking additional projects and individuals to test its usability. Watch the TILE blog for announcements!

TILE is a two-year project, scheduled to run from May 2009 through May 2011. Funding for TILE is provided by the National Endowment for the Humanities, through the Preservation and Access (Research and Development) program.

If you have any questions please leave a comment below or write to us at TILEPROJECT [at] listserv [dot] heanet [dot] ie. Thanks for visiting!
