Textual Analysis – Maryland Institute for Technology in the Humanities
https://mith.umd.edu
Fri, 14 Aug 2020 19:10:06 +0000

Chris Mustazza Digital Dialogue
https://mith.umd.edu/dialogues/dd-spring-2018-chris-mustazza/
Tue, 27 Mar 2018 18:22:11 +0000

How can we construct a literary history of recorded poetry that recognizes media as an intrinsic dimension of the poems’ forms? Given the longtime understanding of the recorded poem as, at best, a simulacrum of a primary, written text (if not of the live performance, too, a copy of a copy), poetry recordings have not received the same kind of material and media-archaeological study as their textual counterparts. Through a précis of the PennSound archive, the world’s largest archive of recorded poetry and the archival response to Charles Bernstein’s call for scholarly attention to the performed poem, this talk suggests what such a literary history of poetry audio might look like. We will examine various periods of sound recording history, starting with late nineteenth-century European work to create Stimmporträts, or portraits of voices, through American record companies’ attempts to grapple with the political economy of sounded verse. If there is a center and a periphery of our sonic memory, attention will need to be paid to oppositional archives and their use of media to subvert systems of dominance by seizing the media of production. Examples of the various kinds of archives under consideration will include: experimental French phonetics labs of the early 20th century; Columbia University’s Speech Lab Recordings, scored to aluminum records in the 1930s and ‘40s; and so-called Soviet bone records, samizdat records cut into discarded x-ray films. We will conclude with the question of what new affordances become possible when grooves become bits and the temporality of discs gives way to the logic of disks. By looking at the newest digital humanities research in distant listening and media archaeology, we will see that the digitized forms of previously recorded poems are not just copies of copies, but generative of new possibilities.

See below for a Sutori recap of this Digital Dialogue, including live tweets and select resources referenced by Mustazza during his talk.

Elisa Beshero-Bondar Digital Dialogue
https://mith.umd.edu/dialogues/dd-fall-2017-elisa-beshero-bondar/
Tue, 17 Oct 2017 16:30:15 +0000

In this talk, I will introduce the collaboration of the Pittsburgh Bicentennial Frankenstein team with MITH to produce a new and authoritative digital edition of the 1818, 1823, and 1831 published texts of Frankenstein linked with the Shelley-Godwin Archive edition of Mary Shelley’s manuscript notebooks. We have been hard at work on the project since fall, and aim to complete it by May 2018, the bicentennial of the novel’s first publication.

Preparing the edition has given us a fascinating vantage point on early work with 1990s hypertext, as we began our work by up-converting hundreds of HyperCard files in Stuart Curran and Jack Lynch’s Pennsylvania Electronic Edition of Frankenstein. That hypertext edition represented groundbreaking digital scholarship in the era of Web 1.0, deploying an interface for reading the 1818 and 1831 texts in juxtaposed parallel texts. Our work on the project has involved polishing and repurposing the code of Curran and Lynch’s electronic editions of the 1818 and 1831 texts. With help from Rikk Mulligan, Digital Scholarship Librarian at Carnegie Mellon University, we have been correcting our restored text against photo facsimiles of the originals, and we have prepared plain-text and simple XML editions from OCR of the 1823 edition, derived via ABBYY FineReader and formatted like our editions of the 1818 and 1831 texts. We have been preparing a new edition in TEI by first processing these documents with CollateX, which computationally locates the points of variance (or “deltas”) among the editions and outputs them as a single critical edition with TEI XML critical apparatus markup.
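The collation step described above can be pictured with a toy sketch. This is not our actual pipeline (CollateX performs genuine multi-witness alignment); it is a minimal illustration, using only Python’s standard library, of what “locating deltas and emitting apparatus markup” means, and the sample readings and sigla are invented:

```python
# Toy sketch of a collation step: locate points of variance ("deltas")
# between two witnesses and wrap them in TEI-style <app>/<rdg> markup.
# Illustration only -- the real project uses CollateX for this.
from difflib import SequenceMatcher

def collate_pair(wit_a, wit_b, sig_a="f1818", sig_b="f1831"):
    """Token-level collation of two witnesses into a TEI-flavored string."""
    a, b = wit_a.split(), wit_b.split()
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op == "equal":
            out.append(" ".join(a[i1:i2]))
        else:  # insert / delete / replace -> an apparatus entry
            out.append(
                f'<app><rdg wit="#{sig_a}">{" ".join(a[i1:i2])}</rdg>'
                f'<rdg wit="#{sig_b}">{" ".join(b[j1:j2])}</rdg></app>'
            )
    return " ".join(out)

# Invented sample readings, purely for illustration:
print(collate_pair("I am by birth a Genevese", "I am by birth a Genevan"))
```

The real gain of doing this computationally is scale: CollateX aligns all three print editions at once and emits the apparatus as TEI XML rather than a flat string.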

Collating the print editions establishes a basis for one last and especially challenging stage of our project. We are now working with Raffaele Viglianti to integrate the Shelley-Godwin Archive’s manuscript notebook drafts of Frankenstein with our critical edition of the published novels. For this we are planning a new implementation of TEI critical apparatus markup that points to specific locations in the manuscript notebooks. This will support a reading interface of the novel that highlights “hotspots” of variance across the print editions and links into the relevant passages in the notebooks.
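The kind of pointing described above might look roughly like the following TEI fragment. This is a hypothetical sketch, not the project’s actual encoding; the sigla, readings, and pointer target are all invented for illustration:

```xml
<!-- Hypothetical sketch: an apparatus entry whose manuscript
     reading points into a notebook transcription elsewhere. -->
<app>
  <rdg wit="#f1818">creature</rdg>
  <rdg wit="#f1831">miserable wretch</rdg>
  <rdg wit="#msNotebook" type="manuscript">
    <ptr target="#notebook-a-page-21-zone-3"/>
  </rdg>
</app>
```

The design question is what the `target` should resolve to: a zone in the Shelley-Godwin Archive’s page transcriptions would let a reading interface jump from a print “hotspot” straight to the corresponding stretch of manuscript.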

We will be offering our bicentennial edition to update the one currently hosted by Romantic Circles. Our new edition’s reading interface should invite readers to learn the interesting story of how the events and characters of the novel changed over the first decades of its life, from the time of its first drafts by its 18-year-old author to the changes imposed by authors and editors over the three published editions of 1818, 1823, and 1831. We hope our edition will inspire fresh investigations of longstanding questions about Frankenstein’s transformations, such as the extent of Godwin’s interventions in the text in 1823 and how many of these persist in the 1831 text. This dialogue offers a chance to share views of the new TEI edition underway, and invites reflection and discussion of our textual methods in stitching together our new textual “monster.”

See below for a Storify recap of this Digital Dialogue, including live tweets and select resources referenced by Beshero-Bondar during her talk.

Maxim Romanov Digital Dialogue
https://mith.umd.edu/dialogues/dd-spring-2016-maxim-romanov/
Tue, 23 Feb 2016 01:30:57 +0000

In the course of 14 centuries, Muslim authors wrote, compiled and recompiled a great number of multivolume collections that often include tens of thousands of biographical, bibliographical and historical records. Over the past decade, many of these texts (predominantly in Arabic) have become available in full-text format through a number of digital libraries. The overall number of texts in these libraries runs into the thousands, with their combined volume exceeding 1.5 billion words. Scholars of Islam and Islamic history have already realized the value of the newly available resources, but it is the new digital methods of engaging with these texts that offer a qualitative change to the field, opening research opportunities that were unthinkable a mere decade ago. In particular, various text-mining techniques allow us to study such multivolume collections in their entirety, making it possible to engage in data-driven exploration of the “longue durée” of Islamic history. Following a general introduction to the digital Islamic humanities, the lecture will zoom in on the results of the computational analysis of the largest premodern biographical collection, “The History of Islam” of the Damascene scholar al-Ḏahabī (d. 1348 CE), who covered 700 years of Islamic history through over 30,000 biographical records. The riches of this collection hold the keys to understanding both the Islamic written tradition and the history of Islamic society. What was Islamic society like over the course of those 700 years? How did it change in time and space? What were the major cultural centers? Did they remain the same, or did they pass the baton from one to another? Focusing on these and other questions, the lecture will showcase a variety of analyses possible through text mining and algorithmic reading, pondering the implications of novel digital methods for the field of Islamic studies.
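One deliberately simplified illustration of the kind of question posed above: tally biographical records by century and place to watch cultural centers shift. The data here are invented, and working with al-Ḏahabī’s actual records requires far more careful date and toponym extraction than this sketch suggests:

```python
# Illustrative sketch (invented sample data, not al-Dhahabi's records):
# distant reading of a biographical collection as a tally of records
# per (century, place), the raw material for tracing cultural centers.
from collections import Counter

# Each record: (death year AH, toponymic nisba) -- invented for illustration.
records = [
    (205, "Baghdad"), (210, "Baghdad"), (240, "Baghdad"),
    (450, "Baghdad"), (460, "Nishapur"), (470, "Nishapur"),
    (650, "Damascus"), (660, "Damascus"), (680, "Cairo"),
]

def centers_by_century(recs):
    """Count biographies per (century AH, place) pair."""
    return Counter(((year // 100) + 1, place) for year, place in recs)

tally = centers_by_century(records)
print(tally.most_common(3))
```

Plotted over a map and a timeline, tallies like this are what make it possible to ask whether the major centers “passed the baton” across the longue durée.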

Julia Flanders: “Small TEI Projects on a Large Scale: TAPAS”
https://mith.umd.edu/dialogues/julia-flanders-small-tei-projects-on-a-large-scale-tapas/
Wed, 12 Sep 2012 12:00:45 +0000

The TEI Archiving, Publishing, and Access Service (TAPAS) is tackling one of the trickiest problems of scholarly text encoding. How can we provide robust, large-scale TEI publication services, while accommodating the detailed scholarly insight that makes TEI such a valuable tool for the digital humanities? What level of customization and variation can we support without compromising on interoperability, and what are the mechanisms by which we can achieve the optimal balance? And who needs variation anyway—what kinds of scholarly insight are at stake, or at risk?

TAPAS seeks to offer long-term TEI repository and publishing services, with special focus on supporting scholars who lack access to XML publishing infrastructure or expertise at their own institutions. Supported by a planning grant from the IMLS and now by a two-year IMLS National Leadership Grant and an NEH Digital Humanities Startup Grant, the TAPAS service will make it possible for scholars to use TEI in their teaching and research without mastering the full suite of XML technologies. The service will also provide access to consulting, training, documentation, and community-developed tools. This talk will explore the conceptual and strategic challenges in developing TAPAS, and in particular the problem of how to harmonize—or transcend—divergent approaches to TEI encoding.

Digital Humanities at Scale: the HathiTrust Research Center
https://mith.umd.edu/dialogues/digital-humanities-at-scale-the-hathitrust-research-center/
Wed, 29 Feb 2012 05:00:03 +0000

The recently formed HathiTrust Research Center (HTRC) is dedicated to providing computational access to the HathiTrust repository. The center’s mission is to provide a persistent and sustainable structure for original, cutting-edge research and for tools that enable new discoveries across the text corpus of the HathiTrust repository. In this talk, I will discuss the functionality that HTRC will provide, the research questions that arise from providing a facility for large-scale text analysis, and the research and modes of use we hope to stimulate within the digital humanities community through the center.

The Videogame Text
https://mith.umd.edu/dialogues/the-videogame-text/
Tue, 14 Oct 2008 04:00:26 +0000

The word ‘text’ in this title does double duty. First, it identifies the videogame itself as a text in the general sense: the object of study, the type of artifact which is here subjected to analysis. Second, the specific textual phenomenon which will be the focus of this presentation is, literally, videogame text—that is, the design, appearance, and uses of alphanumeric characters within videogames. By situating videogame typography in an appropriate historical, cultural, and technological context, an analysis of letter and number forms and their uses on the videogame screen can yield insights into the design history and dissemination of videogame texts. Further, the aesthetic properties of videogame text are shown to be one means by which specific videogame platforms express their influence over videogame discourse. This presentation, which summarizes the major research of my dissertation, will focus on typography in early videogame systems. It will also include a demonstration of a data-mining tool developed for this purpose.

Using Digital Tools to Not-Read Gertrude Stein’s “The Making of Americans”
https://mith.umd.edu/dialogues/using-digital-tools-to-not-read-gertrude-steins/
Tue, 11 Sep 2007 04:00:08 +0000

The difficulties engendered by the complicated patterns of repetition in Gertrude Stein’s 900-page novel _The Making of Americans_ make it almost impossible to read this modernist tome in a traditional, linear manner, as any page (most are startlingly similar) will show. However, by visualizing certain of its patterns, by looking at the text “from a distance” through textual analytics and visualizations, one can read the novel in ways formerly impossible and re-evaluate whether there is or is not “a there there.” This talk will focus on how various analytic methods (such as text mining and frequent pattern recognition) and visualization tools (such as FeatureLens and Spotfire) under research in the MONK project have been used to achieve a new *non*-reading of the text which Stein called her “masterpiece” and critics called “linguistic murder.”
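A minimal sketch of the frequent-pattern side of such “non-reading”: tally every word n-gram and keep those that recur. This is an illustration only, not FeatureLens or the MONK tooling, and the sample line merely imitates Stein’s repetitive style rather than quoting her:

```python
# Minimal sketch of frequent-pattern counting for "distant" reading:
# tally every word n-gram in a text and keep the ones that repeat.
from collections import Counter

def repeated_ngrams(text, n=3, min_count=2):
    """Return word n-grams occurring at least min_count times."""
    words = text.lower().split()
    grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

# Invented sample in a Stein-like repetitive register (not a quotation):
sample = ("any one being one is one being living "
          "any one being one is one being one")
print(repeated_ngrams(sample))
```

At the scale of a 900-page novel, visualizing where such repeated patterns cluster is what lets a reader see structure that linear reading cannot.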
