DH Organizations – Maryland Institute for Technology in the Humanities
https://mith.umd.edu
Thu, 08 Oct 2020 19:59:42 +0000

Measuring Impact of Digital Repositories – Simon Tanner
https://mith.umd.edu/measuring-impact-of-digital-repositories-simon-tanner/
Tue, 23 Apr 2019 13:03:12 +0000

The post Measuring Impact of Digital Repositories – Simon Tanner appeared first on Maryland Institute for Technology in the Humanities.

Measuring Impact of Digital Repositories
Open, Collaborative Research: Developing the Balanced Value Impact Model to Assess the Impact of Digital Repositories
Thursday, April 25, 11 AM, MITH (0301 Hornbake Library)

Simon Tanner will offer a sneak peek at the Balanced Value Impact Model 2.0 (BVI Model). Tanner will introduce the Digital Humanities at King’s College London, and link this to his open and collaborative research practices to tell the story of the intellectual development of the BVI Model. He will detail the BVI Model 2.0 to highlight what’s new and how it works. Tanner will relate these changes to his collaboration with Europeana to develop their Impact Playbook and look to the future of that tool.

The session will include time for questions and discussion.

Simon Tanner is Professor of Digital Cultural Heritage in the Department of Digital Humanities at King’s College London. He is a Digital Humanities scholar with a wide-ranging interest in cross-disciplinary thinking and collaborative approaches, reflecting a fascination with the interactions between the collections of memory organizations (libraries, museums, archives, media, and publishing) and the digital domain.

As an information professional, consultant, digitization expert, and academic, he works with major cultural institutions across the world to assist them in transforming their impact, collections, and online presence. He has consulted for or managed over 500 digital projects, including digitization of the Dead Sea Scrolls, and has built strategy with a wide range of organizations, including the US National Gallery of Art and many other museums and national libraries in Europe, Africa, America, and the Middle East. Tanner has had work commissioned by UNESCO, the Danish government, the Arcadia Fund, and the Andrew W. Mellon Foundation. He founded the Digital Futures Academy, which has run in the UK, Australia, South Africa, and Ghana with participants from over 40 countries.

Tanner’s research into image use and sales in American art museums has had a significant effect on opening up collections access and OpenGLAM in the museum sector. He is a strong advocate for Open Access, open research, and the digital humanities. Tanner chaired the Web Archiving sub-committee as an independent member of the UK Government-appointed Legal Deposit Advisory Panel. He is a member of the Europeana Impact Taskforce, which developed the Impact Playbook based upon his Balanced Value Impact Model, and is part of the AHRC-funded Academic Book of the Future research team.

Reckoning with Digital Projects: MITH Makes a Roadmap
https://mith.umd.edu/reckoning-with-digital-projects-mith-makes-a-roadmap/
Thu, 04 Oct 2018 20:20:34 +0000

The post Reckoning with Digital Projects: MITH Makes a Roadmap appeared first on Maryland Institute for Technology in the Humanities.


In February of 2018, MITH spent dedicated time talking about sustainability of digital projects with a team from the University of Pittsburgh’s Visual Media Workshop (VMW) as part of a focused user testing session for The Socio-Technical Sustainability Roadmap. The research project that produced the Roadmap was led by Alison Langmead, with Project Managers Aisling Quigley (2016-17) and Chelsea Gunn (2017-18). The final goal of that project was to create a digital sustainability roadmap for developers and curators of digital projects to follow. The work was initially based on what the project team discovered during its NEH-funded project, “Sustaining MedArt.” In this blog post, which is a late entry in MITH’s Digital Stewardship Series from 2016, I’m going to talk a bit about what I discovered during the process of using the roadmap for one of MITH’s projects, how I synthesized our discoveries in the form of a concrete tool for MITH to utilize the roadmap afterward, and how this has changed some of my conceptions about digital sustainability practices.

The process of walking a future digital project through the roadmap can be completed either in a single eight-hour session or in two four-hour sessions; we chose the latter, with each attending member focusing on a different MITH project they were developing or working on. During the process, you work through three sections, each with different modules pertaining to aspects of a project’s future sustainability prospects. I opted to use a project for which we were awaiting funding at the time, Unlocking the Airwaves: Revitalizing an Early Public and Educational Radio Collection. Although significant time and effort went into developing the grant proposal for Airwaves, which included a section on sustainability, the Roadmap process cemented how much more concretely we could have been thinking through these issues, and how better planning for those components from the start would lead to better management of the project. In fact, one finding Langmead and her team made as they developed and tested the roadmap is that thinking through the project management aspects of a digital project is a necessary first step toward working effectively through the remaining roadmap exercises. So as they went along, they added several elements and exercises to Sections A and B that push users to pinpoint the structural elements of their project: access points, deliverables, workflows, intellectual goals, data flow, and anticipated digital lifespan. This kind of work is essentially an extension of a project charter, which often covers many of these same basic concepts. In fact, Module B1 of the roadmap encourages users to create or reference existing charters, and stresses that using the roadmap in conjunction with a charter enhances the usefulness of both tools.

The lifespan questions in Section A were eye-opening, because although the need to ask them seems obvious – How long do you want your project to last? Why have you chosen this lifespan? – I think we as stewards of digital information feel compelled to predict unrealistically long lifespans, which Langmead and her collaborators define as “BookTime:”

“BookTime” is a term we have coined to denote a project lifespan equivalent to, “As long as a paper-based codex would last in the controlled, professional conditions of a library.” It may often be assumed that this is coterminous with “Forever,” but that belief relies heavily on a number of latent expectations about the nature of libraries, the inherent affordances of paper and glue, and other infrastructural dependencies.

The module asks us to acknowledge that not every digital project can realistically span decades into the future, and that sometimes this honesty is better for both the project and your team. The module also leverages concepts such as ‘graceful degradation’ and ‘Bloom-and-Fade,’ both of which, in moments of dark humor, felt similar to planning for a project’s hospice care or estate. “It’s okay, everything dies, let’s just be open in talking about it and how we’ll get through it together.” Humor aside, it was a useful exercise for me to acknowledge that time, change, and entropy will stand in the way of a project achieving BookTime, and that that IS, in fact, okay.

The other two sections and exercises that I felt were the most useful, and that provided the core structural materials on which to base a sustainability plan, were Sustainability Priorities (Section A4) and Technological Infrastructure (Sections B2 and B3). In the former, we were asked to list the core structural components of a project “without which your project simply would not be your project,” and to rank them in order of priority. These could include things such as, but not limited to, authority records, curated access points, facets, geo-spatial data, or digitized materials. We were also asked to define the communities that each property served. In the latter, we were asked to list every single technological component of the project, from Google Drive, to Trello, to IIIF servers, to the university’s digital repository; define the function(s) of each; and assign the project team members responsible for each. Then we were asked to realistically assess how long each technology was guaranteed to be funded, as well as “how the duration of the funding for members of your project team compares with the duration of the funding for technologies they maintain, keeping in mind that funding discrepancies may require special considerations and/or contingency plans to ensure uninterrupted attention.” Again, at first glance, much of this may seem very logical and obvious, but actually doing these exercises is illuminating (and sometimes sobering).
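To make that inventory exercise concrete, here is a minimal sketch in plain Python; the technologies, owners, funding durations, and lifespan are invented for illustration and are not from MITH's actual Roadmap answers:

```python
# Hypothetical Module B2/B3-style inventory: each technology, its function,
# who maintains it, and how long its funding is assured (in years).
infrastructure = [
    {"tech": "Google Drive", "function": "working documents", "owner": "PI", "funded_years": 3},
    {"tech": "IIIF server", "function": "image delivery", "owner": "developer", "funded_years": 1},
    {"tech": "University repository", "function": "preservation storage", "owner": "libraries", "funded_years": 10},
]

# Flag technologies whose guaranteed funding ends before the project's
# intended lifespan -- the mismatch the roadmap asks you to confront.
project_lifespan_years = 5
at_risk = [row["tech"] for row in infrastructure
           if row["funded_years"] < project_lifespan_years]
print(at_risk)  # ['Google Drive', 'IIIF server']
```

Even in this toy form, the value of the exercise is visible: the gap between a component's funding horizon and the project's intended lifespan falls out mechanically once the inventory is written down.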

After Sections A and B force you to have a reckoning with the deep dark potential (good and bad) of your project, Section C focuses on applying the National Digital Stewardship Alliance (NDSA)’s Levels of Preservation to your identified structural components. The Levels of Preservation are a set of recommendations that span the digital preservation spectrum in six core areas: Access, Backing up Work, Permissions, Metadata, File Formats, and Data Integrity. For each area, the roadmap defines four ‘levels’ of commitment and what each level really means. For example, Level 1 for Data Integrity involves designating which project members have credentials for certain accounts and services, and who has read/write/move/delete authorization. Levels 2-3 require the ability to repair data and create fixity information for stable files, and Level 4 specifies checking that fixity data in response to specific events or activities. After defining your current and anticipated levels in each area, you’re asked to define concrete actions your team would need to undertake in order to achieve your desired level. Once again, these exercises encourage expectation management, with comments like “Please note! Reaching Level 4 sustainability practices is not the goal. Your work here is to balance what your project needs with the resources (both in terms of technology and staff) that you have.” It also notes that it is “absolutely okay” to decide that your project will choose Level 0 for any one of these areas, consciously choosing not to engage with that area and using the resources you have to focus on what your team wants to prioritize.

Module A3 in written form

After the two four-hour meetings, my brain was full and I was full of new ideas about my project that probably should have already occurred to me, but that only coalesced in any meaningful way by walking through the roadmap process. I’ve also been around long enough to know that the giddy enthusiasm that comes after a meeting like this can die on the vine if those ideas aren’t transformed into actionable items and documented somewhere. I did have the printed roadmap modules and exercises with my written answers on them, and Langmead and her team were clear that if we wanted to merely file (or scan) those written documents and stop there, that was fine. But written in the final module of the roadmap is the recommendation that after its completion, “make sure that you store the documentation for this, and all other, STSR modules in one of your reliable sites of project documentation.” So after several months of contemplation, I finally determined that MITH’s most reliable current site of project documentation is Airtable, which we’ve been using more and more to track aspects of different projects.

Airtable is an online relational database application that looks and functions like a spreadsheet in its default ‘Grid’ UI, but which also has more robust relational functions allowing you to meaningfully connect data between different tables/worksheets. Rather than merely entering my answers to each module/exercise, I opted to begin by moving references and links to all the roadmap’s sections and modules into two tables in Airtable, so that the full text of each module was easily at hand for reference. I also included base, table, and column descriptions at all levels (the rough equivalent of Excel comments), which explain how information should be entered or give sample entries. The base description also provides an overview of this whole exercise, and gives attribution to the project in the format requested by Langmead and her team.

There are descriptions throughout with details on how to utilize each table or field. Click on the ‘i’ Info button to display them.

The Roadmap’s project team provided actual spreadsheets for certain exercises, and I uploaded those as new tables in Airtable, modifying them as needed to connect/link with other tables. For example, in the Technological Infrastructure table (which includes all the various technologies used by your project), the ‘Project Member Responsible’ column is linked to the Project Team table. So after you’ve entered the data for each, you can go back to the Project Team table and see all the tech components each member is responsible for, rolled up in a linked record field. There’s also a reference table listing the definitions of Levels 1-4 for each of the six NDSA areas, so when you’re deciding what to enter in the Sustainability Levels table, you can instantly reference that table and choose an appropriate level for each area. After crafting the ‘template,’ I tested its usability by entering all the data from Unlocking the Airwaves that I’d written down. Doing that revealed a few tweaks and bottlenecks that needed ironing out, so I went back and modified the template. See below for a few more screenshots of the completed template.
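A rough Python analogue of that linked-record rollup may help make the relational idea concrete; the table contents and member names here approximate, rather than reproduce, the actual Airtable base:

```python
# Two "tables": project members, and technologies linked to a responsible member.
team = ["Archivist", "Developer"]
tech_infrastructure = [
    {"tech": "Vimeo", "responsible": "Archivist"},
    {"tech": "IIIF server", "responsible": "Developer"},
    {"tech": "Airtable", "responsible": "Archivist"},
]

# The "rollup": for each member, collect every component they maintain,
# which is what Airtable's linked record field shows automatically.
rollup = {member: [row["tech"] for row in tech_infrastructure
                   if row["responsible"] == member]
          for member in team}
print(rollup["Archivist"])  # ['Vimeo', 'Airtable']
```

The point of the relational structure is exactly this: enter each link once, in one direction, and read the aggregate view from the other side without re-typing anything.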

So now we’ve got the roadmap data for Unlocking the Airwaves saved in a reliable site of project documentation. MITH team members are now encouraged (but not required) to use the template as we develop new projects, and it’s available to anyone else who’d like to request a blank duplicated copy. Dr. Langmead also provided a gentle but useful reminder that there is inherent risk in picking and using any such technology for this purpose, since platforms like Airtable may not always remain available. She suggested that we include a mention along the lines of “The inclusion of Airtable in your project’s suite of technologies should be considered carefully (in line with the work done in Modules A5 and B2)” in the intro description text for the base, which we did.

In a way this was also a sense-making exercise: by taking all the roadmap data and turning it into structured data, I would not only be able to sync up all these components in my head and turn them into actionable tasks, but also better retain the information. Anyone who has transformed, mapped, or structured previously unstructured data knows that by doing these tasks, you become much more intimately connected to your data. But what really appeals to me about the roadmap process is the mindfulness aspect. It encourages participants to think beyond the theoretical concepts of sustainability and actually apply them, write them down, look at them, consider their implications, and be honest about project expectations as aligned with available resources. In a world of overtapped resources and academic and bureaucratic hurdles, that’s an incredibly valuable skill to have.

Dana Williams and Kenton Rambsy Digital Dialogue
https://mith.umd.edu/dialogues/dd-fall-2016-dana-williams-kenton-rambsy/
Wed, 09 Nov 2016 14:30:32 +0000

The post Dana Williams and Kenton Rambsy Digital Dialogue appeared first on Maryland Institute for Technology in the Humanities.


Patterns in literary scholarship suggest that serious considerations of a literary period do not fully begin until at least a generation after its emergence. Accordingly, meaningful scholarship on African American literature since 1970 is only now beginning to emerge. Scholars interested in this period face two significant challenges. First, the sheer volume of literature published after 1970 can be overwhelming, so identifying a specialty area around which to acquire deep expertise is at once critical and limiting. Second, since literary periods are themselves often nebulously constructed, developing literary histories for a contemporary period can quickly dissolve into competing contrivances, particularly if/when primary source material documenting many of its ideals and common threads proves elusive.

Arguably, the clash between too much written material to claim mastery of and too little awareness of primary resources related to the desired specialty area has resulted in an unnecessary muting of key discourses that shaped this highly influential period. Digital Humanities practices, however, can help manage this challenge, thereby giving voice to these key discourses. Ultimately, Williams and Rambsy contend that data management (technology) can be an essential tool for constructing a substantive literary history with a texture reflective of the period’s ripe content and contexts. As a case in point, the presentation focuses specifically on the texts Toni Morrison brought into print as Senior Editor at Random House Publishing Company, a specialty area in which the presenters have significant expertise and for which a singular literary history can be constructed.

See below for a Storify recap of this Digital Dialogue, including live tweets and select resources referenced by Williams and Rambsy during their talk.

Purdom Lindblad Digital Dialogue
https://mith.umd.edu/dialogues/dd-fall-2016-purdom-lindblad/
Tue, 20 Sep 2016 13:30:45 +0000

The post Purdom Lindblad Digital Dialogue appeared first on Maryland Institute for Technology in the Humanities.


In the Republic of the Imagination, Azar Nafisi champions reading as a way to open ourselves, deepening our empathy and enticing our curiosity. Inspired, I am developing ways of documenting and visualizing not only what I read, but also what caused me to read, using linked open data. Through a custom Jekyll plugin, RDFa triples are extracted from my reading-notes text files. The plugin writes JSON-LD triples, which are then used as input for a variety of visualizations.
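The plugin itself runs inside Jekyll (so it is Ruby), but the shape of its output stage is easy to sketch. The following is a rough Python analogue with invented file names and predicates, not the plugin's actual vocabulary:

```python
import json

# Hypothetical triples as a plugin might extract them from RDFa in
# reading-notes files: (subject, predicate, object).
triples = [
    ("notes/republic-of-imagination.md", "dc:references", "notes/lolita.md"),
    ("notes/republic-of-imagination.md", "dc:subject", "reading"),
]

# Serialize as minimal JSON-LD, ready to feed a visualization as a graph.
doc = {
    "@context": {"dc": "http://purl.org/dc/terms/"},
    "@graph": [{"@id": s, p: o} for s, p, o in triples],
}
print(json.dumps(doc, indent=2))
```

Because each triple records which note prompted which reading, the resulting JSON-LD graph is precisely the "what caused me to read" structure the visualizations traverse.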

Tracing reading reveals the breadth and depth of interconnected themes among works that, initially, are connected simply because I noticed one influenced me to read the other. The visualizations enable movement between big-picture views of the corpus and close readings of individual books; they can reveal adjacent possibilities in themes and readings, and shape questions that may currently be unarticulated.

Influenced by feminist interface design, this talk will focus on the design and creation of visualizations – as finding aids, as maps into the landscape of a personal corpus.

See below for a Storify recap of this Digital Dialogue, including live tweets and select resources referenced by Lindblad during her talk.

The Digital Dialogues Collection, chronicling a slice of the digital humanities since 2005
https://mith.umd.edu/the-digital-dialogues-collection-chronicling/
Mon, 08 Aug 2016 13:30:29 +0000

The post The Digital Dialogues Collection, chronicling a slice of the digital humanities since 2005 appeared first on Maryland Institute for Technology in the Humanities.


This is the 6th post in MITH’s Digital Stewardship Series. In this post, MITH’s summer intern David Durden discusses his work on MITH’s audiovisual collection of historic Digital Dialogues events.

The Digital Dialogues series showcases many prominent figures from the digital humanities community (e.g., Tara McPherson, Mark Sample, Trevor Owens, Julia Flanders, and MITH’s own Matthew Kirschenbaum) speaking about their research on digital culture, tools and methodologies, and the interlocking concerns of the humanities and computing.

As mentioned in my earlier post, the nature of this collection presents several challenges to preservation and access as the series continues into the future. As with many collections that are the focus of digital curation, the topics and subject matter covered in the Digital Dialogues continuously evolve over the course of the series. The collection itself is a record of the evolution of the digital humanities, the growth of MITH, and the rapid development of digital technologies, e.g., audio podcasts, multimedia podcasts, and web-hosted HD video.

My project was intended to help MITH balance the challenges of proper storage of existing content with the challenges of developing sustainable workflows for the dissemination of current and future content. Prior to this project, the Digital Dialogues collection was dispersed among several locations, representing different workflows, available technologies, and access platforms over time. There have been 193 Digital Dialogues since September of 2005. There are recordings of 129 of these—78 recorded on video, and 51 recorded on audio only. Access copies for videos and audio tracks were hosted in a variety of locations, such as Vimeo, the Internet Archive, or an Amazon S3 server instance. Source and project files were located on a combination of the internal drive of MITH’s iMac video editing station, an external hard drive, and a separate local server. After the completion of this project, the preservation, storage, and accessibility of all Digital Dialogues content have been streamlined. Source and project files are now organized in a fixed directory structure and stored redundantly on two separate local drives, and all access copies are available through a single source—Vimeo—making it easier for users to access the entire collection. Due to weekly upload limits imposed by Vimeo, there are currently 71 videos uploaded; 45 more are in the upload queue and will be available soon.

Over the course of this project, I was involved in editing and exporting videos, updating the MITH site, and preparing digital content for long-term storage, but I did manage to find some time to actively engage with the sheer volume of content that exists within the collection. Several Digital Dialogues were in line with my own research interests and hobbies, so I was able to engage with the collection as both a curator and a researcher, and watched these videos in their entirety.

Here are a few (only a tiny sample) of my favorites:

Spectacular Stunts and Digital Detachment: Connecting Effects to Affects in US Car Movies, by Caetlin Benson-Allott

These three are of personal interest to me, but each video also represents the variety of content that the Digital Dialogues has to offer. Additionally, the Donahue and Freedman pieces represent other ways that MITH is distributing content associated with each Digital Dialogue. Rachel Donahue’s Digital Dialogue page, in addition to the video of her presentation, features her slide deck available for download in PDF format. Richard Freedman’s Digital Dialogue page features a Storify recap that features links to resources referenced in his presentation that are inaccessible from the video alone.

Featured video: “It’s too Dangerous to Go Alone! Take This.” Powering Up for Videogame Preservation


Title slide from Rachel Donahue’s Digital Dialogue

I am an avid fan and player of videogames, which is why I chose to highlight the talk in this video. Rachel Donahue worked on a Library of Congress-sponsored project, Preserving Virtual Worlds (PVW), which focused on the complexities of preserving the digital content of videogames (the Preserving Virtual Worlds website can only be viewed through the Internet Archive, but the project report is available here).

Donahue’s talk explains the methodology devised by PVW to determine the ‘how’ and ‘why’ of videogame preservation, which isn’t as straightforward as I originally thought. She begins with a simple explanation of exactly what PVW’s videogame preservation focused on: videogames originally made for computers or dedicated consoles, such as the Super Nintendo Entertainment System. The talk surveys a wide range of preservation activities and approaches at a high level. Donahue proceeds to explain that the problems inherent in videogame preservation stem from the different preservation priorities of different members of the gaming community, e.g., developers, players, and archivists. These sub-groups often overlap and further complicate the process. The player and developer communities may disagree about what the most important aspects of the game are, and in reference to the game Oregon Trail, Donahue states,

“if you talk to a lot of people about the Oregon Trail, and ask them ‘what do you most remember about the Oregon Trail, what do you think is most important to the Oregon Trail?’, and they’re going to say things like, dysentery, trying to shoot squirrels, making it to Independence Rock before July 4th, fording the river, having enough axles in your pack, having enough stuff in general without weighing down your oxen so much that they can’t move; maybe if you’re a little bit more observant you might think, ‘problematic portrayal of Native Americans,’ but you’re not going to say, ‘data model.’ I don’t think anybody thinks about the data model, but if you talk to the creators of the Oregon Trail, they are in fact going to say, ‘the data model, the statistics, those are the most important parts of the game.’”

Oregon Trail (photo credit: mygeekwisdom.com)

Videogames often have a multiplayer component that is a source of nostalgia for players. When comparing the gameplay of two-player Super Mario Bros., which can be preserved through software emulation or preservation of original hardware, with online play in Halo 3, which required servers operated by Microsoft in addition to the hardware and software components, one can quickly see how the ‘what’ of videogame preservation can mean drastically different things to groups within the community. Donahue also mentions that there are often unique trends and quirks for specific games within the player community which are not always preservable (such as ‘bunny hopping’ in Quake).

A variety of questions must be answered before preservation activities can move forward. The most important question is: “what exactly are we preserving?” Aside from content, videogames are data, software, hardware, unique storage media, and peripherals such as controllers. Each element of a videogame system may require a specific skillset in order to achieve any sort of reliable preservation. In the case of hardware and circuit boards, basic knowledge of electronics and computer repair may be required; when using emulation, scripting skills are inevitably needed. Videogame preservation also demands a distinction between original hardware preservation and software emulation: what is the minimum level of preservation for a videogame? The question of what to save is most certainly a philosophical one: is it the aesthetic of the original object and the experience of playing the game in its original state, or will any experience involving the loose entity of the game be acceptable?


The Retrode (retrode.org) is a device that allows for hardware emulation using original videogame cartridges.

Donahue exhibits several surveys created to gauge the focus of preservation activities. For the curator or archivist, survey questions were more technical, and a few examples are ‘can the game be played’, ‘do you have the equipment to emulate’, and ‘will you provide a complete videogame experience, or will you just preserve the artifacts?’ For players, the questions are more rooted in videogame culture, for example, ‘what is the core of the game and what does it mean’, ‘what contributes to the success of a franchise’, ‘what is the importance of multiplayer’, and ‘is this a good game or a milestone game’?

Donahue and the PVW project made great strides in articulating the specific needs of videogame preservation as well as providing the groundwork for establishing preservation standards for an often overlooked and misunderstood part of our culture. This is just one of many interesting and unique Digital Dialogues within the collection – to view more, visit the Past Digital Dialogue Schedules page, where you can browse through all previous seasons and explore.

A Decade of Digital Dialogues Event Recordings and the Challenges of Implementing a Retroactive Digital Asset Management Plan
https://mith.umd.edu/decade-digital-dialogues-event-recordings-challenges-implementing-retroactive-digital-asset-management-plan/
Thu, 14 Jul 2016 20:39:00 +0000

The post A Decade of Digital Dialogues Event Recordings and the Challenges of Implementing a Retroactive Digital Asset Management Plan appeared first on Maryland Institute for Technology in the Humanities.

This is the 5th post in MITH’s Digital Stewardship Series. In this post, MITH’s summer intern David Durden discusses his work on MITH’s audiovisual collection of historic Digital Dialogues events.

I was brought on as a summer intern at MITH to work on a digital curation project involving Digital Dialogues, MITH’s signature events program featuring speakers from around the U.S., and occasionally beyond, which has been running for eleven years. The Digital Dialogues events program has documented the development of the digital humanities as well as the ideas and work of several of the pioneers of the field. However, as the digital humanities grew and developed, so did the technology used to record and edit the Digital Dialogues. This digital record must be curated and preserved in order to ensure that the Digital Dialogues events are accessible for many years to come.

Staying current with changes in digital audio and video recording and editing resulted in a variety of media sources, file types, storage locations, and web-hosting services. MITH currently has a workflow for recent and future Digital Dialogues that ensures proper storage of raw video, systematic file-naming conventions, standards for video editing and the creation of web content, and redundant storage. This plan, in some form, must be retroactively applied to almost a decade of content.

Since I was dealing with a variety of locations for content, the first task at hand was to consolidate media from all storage locations and resolve discrepancies and duplications. This meant aggregating all available content from an editing workstation, an external drive, an AWS server, and a local server. Once all the content was funneled into a single location, I began the slow and tedious process of comparing files and folders. I separated usable media from everything else and began moving content into a well-organized master directory that will be cloned to redundant storage for preservation. Future workflows will prevent discrepancies by having content imported, named, organized, and edited on the local workstation and then copied to external storage, preventing duplication and accidental changes to archived content.
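The file-and-folder comparison step described above can be partly automated with checksums. Here is a minimal Python sketch, assuming duplicates are exact byte-for-byte copies; the function name is illustrative and not part of MITH's actual workflow:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root):
    """Group files under `root` by SHA-256 checksum so exact
    duplicates can be reviewed before consolidation.
    (Reads whole files into memory; fine for a survey pass,
    but large video files would warrant chunked hashing.)"""
    by_hash = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    # Only groups with more than one file are true duplicates.
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

A report like this only flags byte-identical copies; re-encoded derivatives of the same talk would still need human review.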

An example of the future data flow for Digital Dialogues videos

MITH had been successfully saving multiple copies of files across different storage devices, but many of these files reflected outdated workflows, and there were often several versions of the same file. The recording of Digital Dialogues went through several technological evolutions and left behind a messy file structure. Some source files were saved; others are missing. Some final videos and recordings were duplicated across local storage devices; others exist solely in the Internet Archive and other web-hosting services. MITH's early Digital Dialogues illustrate the danger inherent in relying on single storage locations and web-hosting services to archive digital assets. The file compression used by many services, as well as the possibility of service interruption, makes web-hosting a 'front-end, access-only' form of digital storage. The important point is that once digital source media is lost, it is usually lost forever, which is why it is always necessary to have a data management plan ready at the outset of any digital project.

Data storage isn’t the only challenge that the Digital Dialogues collection presents as the collection has moved through different A/V editing workflows and standards. The Digital Dialogues transitioned from audio recording to video recording, as well as from using iMovie to Adobe Premiere to edit video, a transition that has left a considerable number of useless project files lingering about. The differences between the two video editing software suites are considerable and present several challenges to long term functionality. Adobe Premiere and iMovie handle the import of source media very differently. Premiere doesn’t actually import the source media, but instead creates a link to the file using a system path, which results in project files that are only a few hundred kilobytes in size. IMovie, however, stores a copy of the original media as well as a variety of program specific data, which greatly increases the size of the project folder. Additionally, Adobe Premiere allows for backwards compatibility to some degree, whereas iMovie does not, making Premiere a better choice for long term functionality of project files.

The links that Adobe Premiere creates to source media are problematic because, if the source media changes location or filename, the links are effectively broken and the media must be relocated before any editing can occur. However, as long as the source media is preserved and identifiable, it is a simple task to point Premiere to the correct location of the source. To ensure MITH's future access to working project files (which is important if a derivative is lost and needs to be regenerated, or video formatting needs to be updated for a website), I created a well-organized and descriptively named directory containing all project files and associated linked media. Under the current editing and curation plan, each Digital Dialogue event is stored in a folder containing the source media and the edited derivative. Before any source media is transferred, an appropriate directory is created to store the files. Files are then transferred from an external storage device or camera to the video editing iMac workstation and stored in the appropriate event folder. The event folders are named using the following convention:

‘YYYYMMDD_SpeakerNameInCamelCase_AdditionalSpeakersSeparatedByUnderscores’.

Events are organized by season (e.g., Spring 2016) and stored in a season folder using the following convention:

‘YYYY-Season-Semester’.

All events for a season will be edited in a single Adobe Premiere project file that is located within the season folder. This reduces the amount of project files to manage and also streamlines the video editing process.
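A naming convention like the one above is easiest to apply consistently when the names are generated rather than typed. A small Python sketch, assuming speaker names arrive as ordinary 'First Last' strings; the helper name is hypothetical and not part of MITH's workflow:

```python
from datetime import date

def event_folder_name(event_date, speakers):
    """Build an event folder name following the convention
    'YYYYMMDD_SpeakerNameInCamelCase_AdditionalSpeakersSeparatedByUnderscores'."""
    camel_names = [
        "".join(part.capitalize() for part in name.split())
        for name in speakers
    ]
    return event_date.strftime("%Y%m%d") + "_" + "_".join(camel_names)
```

For example, `event_folder_name(date(2016, 4, 7), ["Jane Doe"])` yields `"20160407_JaneDoe"`. Note that a naive `capitalize()` would mangle names like "McGann", so generated names still deserve a quick visual check.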

Example of a well-organized Digital Dialogue season folder

Another part of this project consisted of editing previous content to conform to current standards. Given the variety of file formats and duplicates, I decided to prioritize raw footage (or the highest-quality derivative I could find) for archiving and for the creation of new videos. Where usable media was accessible, videos currently on the MITH website are being updated to reflect proper MITH logos and branding, as well as title slates with appropriate attributions to speakers, dates, and talk titles. There are also many years of Digital Dialogues recorded as audio, which are in the process of being exported to a standardized video format so that the majority of Digital Dialogues will be accessible through one hosting service (Vimeo). By the end of the project, I will have created or recreated around 105 videos, streamlined and documented changes to MITH's audiovisual workflows, and ensured proper digital stewardship of an important collection of digital humanities scholarship. My second and final blog post in this series will highlight some of the more interesting content in this collection.
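Exporting an audio-only recording to a standard video format typically means pairing the audio with a static title slate, a job commonly handled with ffmpeg. A hedged Python sketch that assembles such a command (the helper and file names are hypothetical; the flags shown are standard ffmpeg options, but MITH's actual export settings may differ):

```python
def audio_to_video_cmd(audio_path, slate_image, out_path):
    """Assemble an ffmpeg command that pairs an audio recording
    with a static title slate to produce a standard video file."""
    return [
        "ffmpeg",
        "-loop", "1", "-i", slate_image,   # repeat the slate for the full duration
        "-i", audio_path,                  # the original audio recording
        "-c:v", "libx264",                 # widely compatible video codec
        "-tune", "stillimage",             # optimize encoding for a static picture
        "-c:a", "aac",                     # standard audio codec for .mp4
        "-shortest",                       # stop when the audio ends
        out_path,
    ]
```

The resulting list can be handed to `subprocess.run(cmd, check=True)` on a machine with ffmpeg installed; building the command as a list avoids shell-quoting problems with filenames containing spaces.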



Digital Humanities at Scale: the HathiTrust Research Center https://mith.umd.edu/dialogues/digital-humanities-at-scale-the-hathitrust-research-center/ Wed, 29 Feb 2012 05:00:03 +0000 http://mith.umd.edu/?post_type=mith_dialogue&p=6931 The recently formed HathiTrust Research Center (HTRC) is dedicated to the provision of computational access to the HathiTrust repository. The center’s mission is to provide a persistent and sustainable structure to enable original and cutting edge research in tools to enable new discoveries on the text corpus of the HathiTrust repository.  In this talk, I will [...]

The post Digital Humanities at Scale: the HathiTrust Research Center appeared first on Maryland Institute for Technology in the Humanities.

The recently formed HathiTrust Research Center (HTRC) is dedicated to the provision of computational access to the HathiTrust repository. The center’s mission is to provide a persistent and sustainable structure for original, cutting-edge research on tools that enable new discoveries in the text corpus of the HathiTrust repository. In this talk, I will discuss the functionality that HTRC will provide, the research questions that come out of providing a facility for large-scale text analysis, and the research and modes of use we hope to stimulate within the digital humanities community through the center.


Digital Humanities 3.0: Where We Have Come From and Where We Are Now? https://mith.umd.edu/dialogues/digital-humanities-3-0-where-we-have-come-from-and-where-we-are-now/ Tue, 16 Sep 2008 04:00:31 +0000 http://mith.umd.edu/?post_type=mith_dialogue&p=4186 I want to ask the related questions of where we are and where we are headed in the digital humanities. Since I am an historian, I approach these questions by asking where we have been and how we got here. Digital Humanities 1.0 was the use of information technology and computing to produce forms of [...]

The post Digital Humanities 3.0: Where We Have Come From and Where We Are Now? appeared first on Maryland Institute for Technology in the Humanities.

I want to ask the related questions of where we are and where we are headed in the digital humanities. Since I am an historian, I approach these questions by asking where we have been and how we got here. Digital Humanities 1.0 was the use of information technology and computing to produce forms of scholarship that literally could not be done in the analog age: the encoding of text (TEI), and computational linguistics and the creation of humanities databases (Perseus), were the two most prominent examples. DH 2.0 was the era inaugurated by IATH and scholars such as Jerry McGann, Ed Ayers, and others in reconceptualizing traditional humanities questions through the intellectual power of technology — relating text to image, creating complexly interrelated databases, using large-scale digitization. DH 3.0 is where I hope we are, searching for a new order of technical possibilities that will change modes of thought. DH 3.0 assumes the technology and moves on to give primary consideration to intellectual problems that were inconceivable in either the analog or early digital eras. The challenge is no longer the use of technology or the linking of technology to more or less traditional humanities problems, but the reconceptualizing of the problems themselves. But of course all of this assumes what is not true — that we have put a digital humanities infrastructure into place.


The Internet Archive and the Digital Humanities https://mith.umd.edu/dialogues/the-internet-archive-and-the-digital-humanities/ Tue, 27 Nov 2007 05:00:25 +0000 http://mith.umd.edu/?post_type=mith_dialogue&p=4210 The Internet Archive was founded eleven years ago by Brewster Kahle to build the world's first 'Internet Library.' Since 1996, the Archive has been collecting bi-monthly snapshots of the World Wide Web--the entire Web--resulting in a cumulative collection of approximately 100 billion Web pages. This cumulative historical record can be browsed and viewed using the [...]

The post The Internet Archive and the Digital Humanities appeared first on Maryland Institute for Technology in the Humanities.

The Internet Archive was founded eleven years ago by Brewster Kahle to build the world’s first ‘Internet Library.’ Since 1996, the Archive has been collecting bi-monthly snapshots of the World Wide Web–the entire Web–resulting in a cumulative collection of approximately 100 billion Web pages. This cumulative historical record can be browsed and viewed using the Wayback Machine, an access interface developed by the Internet Archive (www.archive.org). The Archive has since expanded its activities to include book scanning, audio collections, video and still image collections, and Open Education Resources (videotaped lectures for entire college-level courses and their supporting materials). These collections comprise over two petabytes of data, stored in the Archive’s Digital Repository in San Francisco. The Internet Archive is active in the open-source software community and has developed several widely used tools for web harvesting, search, and management of clustered storage environments. The Archive is dedicated to open-source principles; accordingly, all software used and developed by the Internet Archive is open source and open access. As an active technology partner in the academic, library, and research communities, the Archive has become both a storage partner and a content source for educators and researchers. Its role as a unifier of open-access content sources is growing through collaborative projects in book scanning and access, large collections of imagery and other content formats, as well as its roles as administrator of the Open Content Alliance (www.opencontentalliance.org) and co-founder of the International Internet Preservation Consortium (www.iipc.org).

The Archive is engaged in several projects that can directly serve educators and researchers in the Digital Humanities community. In this talk, Linda Frueh will describe the collections, projects, and capabilities of the Internet Archive, and hopes to generate lively discussion of how the Archive can better support the conduct of Digital Humanities studies.


Agora.Techno.Phobia.Philia: Gender (and other messy matters), Knowledge Building, and Digital Media https://mith.umd.edu/dialogues/agora-techno-phobia-philia-gender-and-other-messy-matters-knowledge-building-and-digital-media/ Tue, 23 Oct 2007 04:00:03 +0000 http://mith.umd.edu/?post_type=mith_dialogue&p=4215 "The degree to which American society has embraced and absorbed computer technologies is astonishing. The degree to which the changes provoked by computers leave prevailing inequalities is troubling." --Special Issue, "From Hard Drive to Software: Gender, Computers, and Difference," Signs: Journal of Women in Culture and Society (August 1990--yes, you read the date correctly). In [...]

The post Agora.Techno.Phobia.Philia: Gender (and other messy matters), Knowledge Building, and Digital Media appeared first on Maryland Institute for Technology in the Humanities.

“The degree to which American society has embraced and absorbed computer technologies is astonishing. The degree to which the changes provoked by computers leave prevailing inequalities is troubling.” –Special Issue, “From Hard Drive to Software: Gender, Computers, and Difference,” Signs: Journal of Women in Culture and Society (August 1990–yes, you read the date correctly).

In the wake of the sixties, the humanities in general and their standings in particular had suffered, according to some, from being feminized by the messy considerations of gender, race, sexuality, class. For some, humanities computing and digital humanities seemed to offer a space free from all this messiness and a return to “objective” questions of representation. In 2007, asking some obvious, basic questions seems more than in order: Are digital humanities and new media important for feminist cultural, social, and intellectual work? Concomitantly, can feminism enhance and improve the world and work of computer science, of humanities computing, of digital humanities? Questions basic to feminist critical inquiry are certainly worth asking of our digital work: How do items of knowledge, organizations, working groups come into being? Who made them? For what purposes? Whose work is visible, what is happening when only certain actors and associated achievements come into public view? What happens when social order is assumed to be an objective feature of social life (i.e., uninformed by ethnomethodology)? What counts as innovation: why are tools valorized and whose work in their development and in their application is recognized? These and other questions posed by the group will be examined in this collaborative exchange.

