Raffaele Viglianti – Maryland Institute for Technology in the Humanities
https://mith.umd.edu

Launch of Early Modern Songscapes Beta Site: Encoding and Publishing strategies
https://mith.umd.edu/launch-of-early-modern-songscapes-beta-site-encoding-and-publishing-strategies/
Wed, 13 Feb 2019

The post Launch of Early Modern Songscapes Beta Site: Encoding and Publishing strategies appeared first on Maryland Institute for Technology in the Humanities.

]]>
Early Modern Songscapes is a project exploring the circulation and performance of English Renaissance poetry. The recently released beta version of the project’s site includes a digital exploration of Henry Lawes’s 1653 songbook Ayres and Dialogues. The project is a collaboration between the University of Toronto (UoT), the University of Maryland (UMD), and the University of South Carolina (USC). My role (Raff Viglianti) at MITH for this first exploratory phase has focused on designing a data model and an online viewer for the text and musical score of the songs. Prof. Scott Trudell (UMD) and Prof. Sarah Williams (USC) have contributed to shaping the data model and have carried out the encoding work so far.

Fig. 1 Schematic representation of the encoding data model for a song, with TEI including MEI data. The song shown is When on the Altar of my hand. Facsimile from Early English Books Online.

The scholarship surrounding Lawes’s book and Early Modern song sits at the nexus of literature and music and pays careful attention to both the literary and musical aspects of the songs. To reflect this duality in the data model of a digital edition, we use the Text Encoding Initiative (TEI) format for the verse and the Music Encoding Initiative (MEI) format for the notated music. You can find our encoded files on GitHub. Combining the two formats is becoming a fairly established practice (see for example the Thesaurus Musicarum Latinarum), but it is not without challenges, as existing tools and workflows usually focus on either TEI or MEI. The hierarchical nature of these formats also requires one of the two to contain the other or, in other words, to take a primary position. We have decided to prioritize TEI, partly because it has a well-established metadata header in which we store bibliographical information. The MEI representing the music notation is then embedded within the TEI (see Fig. 1). We have also decided to reproduce the underlying lyrics as a TEI-encoded stanza, to offer our interpretation of how they might appear if formatted like the subsequent stanzas often printed after the music.
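As a rough sketch of this arrangement (simplified and illustrative rather than our actual encoding; embedding MEI directly within TEI typically requires a schema customization), a song document looks something like this:

```xml
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <!-- bibliographical information about the song and its sources -->
  </teiHeader>
  <text>
    <body>
      <!-- the notated music, embedded as MEI in its own namespace -->
      <notatedMusic>
        <music xmlns="http://www.music-encoding.org/ns/mei">
          <!-- MEI score, including the lyrics underlying the notation -->
        </music>
      </notatedMusic>
      <!-- the underlying lyrics reproduced as a TEI-encoded stanza -->
      <lg type="stanza">
        <l><!-- first line of the song text --></l>
      </lg>
    </body>
  </text>
</TEI>
```

The TEI document thus holds both the metadata and the verse, while the MEI island carries the notation, matching the primary position given to TEI described above.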

For some songs, we are also dealing with multiple versions from other sources, with or without music. In these cases, we produce a variorum edition: a presentation of the text that showcases differences across the sources without privileging one over the other. Both TEI and MEI are well equipped to model textual variance, but both assume that one text will serve as the main reading text, with only variant text encoded from the other sources. To overcome this apparent limitation, we create a separate TEI/MEI document that represents only a collation; in other words, a document that lists where the differences between the sources of one song are located. This allows us to encode each source separately, and to the degree of detail that we deem appropriate, without worrying about tessellating multiple sources in one place (see Fig. 2). This approach has proven quite effective, and I have had the opportunity to apply it to other projects at MITH and beyond, such as Digital Mishnah and the Frankenstein Variorum edition, where, together with colleagues at the University of Pittsburgh and Carnegie Mellon University, particularly Prof. Elisa Beshero-Bondar, we have begun to further develop, contextualize, and generalize this approach.

Fig. 2 Diagram of the data model of a hypothetical song with variants, showing three sources (A, B, and C) and a collation containing two variants that identify and connect diverging parts of the sources.
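A hypothetical, much-simplified serialization of one collation entry in the model shown in Fig. 2 (the witness identifiers and pointer targets here are illustrative, not our actual markup) might read:

```xml
<!-- collation document: one apparatus entry per point of variance,
     pointing into the separately encoded sources -->
<app xmlns="http://www.tei-c.org/ns/1.0">
  <rdg wit="#sourceA"><ptr target="sourceA.xml#variant1"/></rdg>
  <rdg wit="#sourceB"><ptr target="sourceB.xml#variant1"/></rdg>
  <rdg wit="#sourceC"><ptr target="sourceC.xml#variant1"/></rdg>
</app>
```

Because each reading is only a pointer, the sources themselves remain independent documents, encoded at whatever level of detail each warrants.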

One goal of the Early Modern Songscapes project is to capture song as a multidimensional form, so we are complementing the edition with recorded performances of the songs, including variant versions, under the direction of Prof. Katherine Larson (UoT). The musicians are Rebecca Claborn (mezzo-soprano), Lawrence Wiliford (tenor), and Lucas Harris (lute).

The UoT Scarborough Digital Scholarship Unit, under the direction of Marcus Barnes, has provided the backbone for the project through a robust implementation of Fedora for storing the Songscapes data and Islandora for the project website. My focus has been on providing a lightweight viewer that displays the TEI and MEI and adds interactivity for exploring variant readings and sources. The viewer is written in React/Redux and uses CETEIcean for rendering the TEI and Verovio for rendering the MEI. Both tools render these data directly in a user’s browser, thus reducing the need for server-side infrastructure for TEI and MEI publications. They also provide isomorphic (that is, one-to-one) renderings of the data, which allows us to manipulate the rendering as if it were the actual underlying data. This, for example, makes it fairly simple to write code that follows references from collation documents to the sources, according to the variorum edition model described above. You can read more on CETEIcean in Cayless & Viglianti 2018 and on Verovio in Pugin 2016 (pages 617-631).

The first phase of Early Modern Songscapes culminated in a conference at the University of Toronto on February 8-9, 2019. As we plan the next phase, we are gathering user feedback on the site: we invite you to visit songscapes.org and fill in our survey!

Fig. 3 A screenshot of the current prototype showing a variant for the song Venus, redress a wrong that’s done (A Complaint Against Cupid).

Report: Music Encoding Conference 2018
https://mith.umd.edu/report-music-encoding-conference-2018/
Wed, 30 May 2018

Raffaele Viglianti (MITH) and Stephen Henry (Michelle Smith Performing Arts Library) hosted the Music Encoding Conference last week (22 – 25 May 2018).

For the first time, the conference had a theme: “Encoding and Performance,” which was well represented throughout the program. We are especially grateful to John Rink for his keynote lecture-recital “(Not) Beyond the Score: Decoding Musical Performance,” which highlighted the challenges of encoding/decoding music notation through the lens of performance research and practice.

We are also particularly grateful to Anna Kijas who, in her keynote speech, “What does the data tell us?: Representation, Canon, and Music Encoding,” highlighted critical topics that are too often neglected in the music encoding community. Her talk made the fundamental point that our acts of building digital representations of notated music can (and currently do) reinforce traditional canons of music history that overlook contributions by women and people of color. In establishing a “digital canon” we have an unprecedented opportunity to change this. Read the full text of her keynote on Medium.

We closed MEC with a productive unconference day in the MITH offices, and we are happy to already see some activity in the Music Encoding Initiative community as a result!

Music Encoding Conference reception and performance with Brad Cohen and Tory Wood

Many thanks were given throughout the conference days; however, we would be remiss not to acknowledge again the support provided by the University of Maryland College of Arts and Humanities and the MEI Board, which sponsored bursaries for students. This was especially important in allowing students to attend the conference in a place that is currently geographically distant from the core constituencies of the MEI community. We are also thankful to Tido for sponsoring the Wednesday reception, and particularly to soprano Tory Wood and Tido’s founder and director Brad Cohen for a wonderful live performance.

We enjoyed hosting our attendees at the beautiful Clarice Smith Performing Arts Center and are grateful to the wonderful team there: Leighann Yarwood, Amanda Lee Barber, Kara Warton, and their technical staff. Special thanks also to Lori Owen from the College of Arts and Humanities. We are also thankful to the students from the Performing Arts Library who staffed the registration desk and helped with all the odds and ends of the conference: Jennifer Bonilla, Peter Franklin, Will Gray, Kimia Hesabi, Amarti Tasissa, Zachary Tumlin, Terriq White, and Barrett Wilbur.

Finally, we are thankful to all who submitted contributions to the conference and to the Program Committee: Karen Desmond (chair), Johanna Devaney, David Fiala, Andrew Hankinson, and Maja Hartwig.

Announcing the Music Encoding Conference 2018 Call for Proposals
https://mith.umd.edu/announcing-music-encoding-conference-2018-call-proposals/
Wed, 27 Sep 2017


** Deadline extended until November 15 11:59pm EST ** 
Submit at https://www.conftool.net/music-encoding2018

The Maryland Institute for Technology in the Humanities and the Michelle Smith Performing Arts Library invite you to participate in the 2018 Music Encoding Conference with the theme: “Encoding and Performance”.

Date: 23 – 24 May 2018 (with pre-conference workshops on 22 May and an ‘un-conference’ day on 25 May)
Location: University of Maryland, College Park, Maryland, USA
Deadline for Proposals: 15 November 2017 (11:59pm EST)
Notification of Acceptance: 4 December 2017
Keynote speakers: Anna Kijas (Boston College) and John Rink (University of Cambridge)

Music encoding is a critical component of the emerging fields of digital musicology, digital editions, symbolic music information retrieval, and others. At the centre of these fields, the Music Encoding Conference has emerged as an important cross-disciplinary venue for theorists, musicologists, librarians, and technologists to meet and discuss new advances in their fields.

The Music Encoding Conference is the annual focal point for the Music Encoding Initiative community (http://music-encoding.org), but members from all encoding and analysis communities are welcome to participate.

For the first time, the annual conference will have a theme: “Encoding and Performance”. We particularly welcome submissions that theorize the relationship between music encoding and performance practice, describe experiments (failed or successful) in creating digital dynamic scores, propose ways of using encoded music for pedagogical purposes related to performance, or imagine future interconnections. The conference will be held at the Clarice Smith Performing Arts Center, and we therefore encourage presentations that include a performance component or demonstration.

As always, other topics are welcome. Suggested topics include, but are not limited to:

  • music encoding for performance research and practice
  • music encoding as a theoretical approach for research
  • methodologies for encoding, music editing, description and analysis
  • rendering of symbolic music data in audio and graphical forms
  • relationships between symbolic music data, encoded text, and facsimile images
  • capture, interchange, and re-purposing of music data and metadata
  • evaluation and control of quality of music data and metadata
  • ontologies, authority files, and linked data in music encoding and description
  • music encoding and symbolic music information retrieval
  • additional topics relevant to music encoding, editing, and description

Authors are invited to upload their submission for review to our Conftool website: https://www.conftool.net/music-encoding2018. The deadline for all submissions is 15 November 2017 (11:59pm EST).

Abstracts (in PDF format only) should be submitted through ConfTool, and the submitted PDF must anonymize the authors’ details.

Types of proposals

Paper and poster proposals. Provide an abstract of no more than 1000 words, excluding relevant bibliographic references (no more than ten). Please also include information about presentation needs, particularly if you are planning a performance demonstration.

Panel discussion proposals, describing the topic and nature of the discussion and including short biographies of the participants, must be no longer than 2000 words. Panel discussions are not expected to be a set of papers which could otherwise be submitted as individual papers.

Proposals for half- or full-day pre-conference workshops, to be held on May 22nd, should include the workshop’s proposed duration, as well as its logistical and technical requirements.

Friday May 25th is planned as an un-conference day, self-organized by the participants and open for anyone who wants to initiate a discussion on a topic mentioned above.

Additional details regarding registration, accommodation, etc. will be announced on the conference web page (http://music-encoding.org/community/conference).

If you have any questions, please e-mail conference2018@music-encoding.org.

Program Committee

  • Karen Desmond, chair (Brandeis University)
  • Johanna Devaney (Ohio State University)
  • David Fiala (Centre d’Études Supérieures de la Renaissance, Tours)
  • Andrew Hankinson (Bodleian Libraries, University of Oxford)
  • Maja Hartwig (University of Paderborn)

Organizing Committee

  • Amanda Lee-Barber (The Clarice Smith Performing Arts Center)
  • Stephen Henry, co-chair (Michelle Smith Performing Arts Library)
  • Raffaele Viglianti, co-chair (Maryland Institute for Technology in the Humanities)
  • Leighann Yarwood (The Clarice Smith Performing Arts Center)

Dramatic Reading of Percy Shelley’s Prometheus Unbound
https://mith.umd.edu/dramatic-reading-percy-shelleys-prometheus-unbound/
Wed, 06 Sep 2017

A dramatic reading of Percy Shelley’s Prometheus Unbound will take place on Wednesday, October 25, 3:00-5:00 pm at the Cafritz Foundation Theatre. The show is directed by MITH’s intern Victoria Scrimer, a graduate student in Theater, Dance, and Performance Studies. Victoria has been working closely with extant draft and fair copy manuscripts of Prometheus Unbound, encoding them for the Shelley-Godwin Archive. The show will feature digital images and transcriptions from the Archive to highlight poignant passages and unused textual variants.

From the Theater, Dance, and Performance Studies department blog:

In collaboration with the Maryland Institute of Technology in the Humanities (MITH), TDPS presents a dramatic reading of Percy Shelley’s rarely-performed lyric drama, Prometheus Unbound featuring digital imagery of Shelley’s original manuscripts.

William Godwin, Percy Shelley’s father-in-law, is famously quoted as saying that “God himself has no right to be a tyrant,” and in many ways that is what Prometheus Unbound movingly professes. A condemnation of slavery in all forms, Prometheus Unbound is perhaps Shelley’s best-known and most beloved work. Inspired by Aeschylus’ classical Prometheia trilogy, Shelley adapts the mythological story of the Titan who gave man fire, refiguring the fate of mankind’s champion and the fall of the tyrant, Jupiter. It is a drama of Romantic ideals (suffering, endurance, and freedom) that is as relevant in today’s socio-political climate of resistance as it was two centuries ago.

Wednesday, October 25, 3-5PM, Cafritz Foundation Theatre

Music Addressability API
https://mith.umd.edu/music-addressability-api/
Mon, 24 Nov 2014

The Enhancing Music Notation Addressability project (EMA) is creating a system to address specific parts of a music document available online. By addressing we mean being able to talk about a specific music passage (cf. Michael Witmore’s blog post on textual addressability).

On paper, something equivalent could be done by circling or highlighting a part of a score. But how could this be done on a music document on the web? Would it be possible to link to a part of a score like I can link to a paragraph of a wikipedia page? How precise can I be?

Enhancing this kind of addressability could be useful to quote passages, express analytical statements and annotations, or pass a selection of music notation on to another process for rendering, computational analysis, etc.

Project Progress as of November 2014

Most of our efforts have been focused on creating a URI syntax to address common western music notation regardless of the format of a music notation document. Music notation is represented in a variety of digital formats and there isn’t an equivalent of a “plain text” music document. Even the simplest music note is represented differently across systems. Nonetheless, there are certain primitives that are common to most music notation representation systems. These are the ones that we are considering now:

beat: music notation relies on beat to structure events, such as notes and rests, in time.

measures: typically delimited by bar lines, measures mark a segment corresponding to a set number of beats.

staves: staves in scores separate music notation played by different instruments or groups of instruments.

Consider the following example (from The Lost Voices project), where we want to address the notation highlighted in red:

DC0519 L’huillier, Si je te voy

We can say that it occurs between measure 38 and 39, on the first and third staves (labelled Superius and Tenor — this is a renaissance choral piece). Measure 38, however, is not considered in full, but only starting from the third beat (there are four beats per measure in this example).

According to our syntax, this selection could be expressed as follows:

document/measures/staves/beats/
dc0519.mei/38-39/1,3/3-3

The selection of measures is expressed as a range (38-39); staves can be selected through a range or separately with a comma (1,3); and beats are always relative to their measure, so 3-3 means from the third beat of the starting measure to the third beat of the ending measure. Things can get more complicated, but for that we defer to the Music Addressability API documentation that we’ve been writing (beware: it’s still a work in progress, so feel free to contribute on GitHub!).
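As an informal sketch (not the project’s implementation; the function and class names here are our own, and the real API defines additional features beyond plain ranges), the example selection above could be parsed along these lines:

```python
# Minimal sketch of parsing the EMA selection syntax:
#   document/measures/staves/beats
#   dc0519.mei/38-39/1,3/3-3
from dataclasses import dataclass
from typing import List, Tuple

def parse_numbers(token: str) -> List[int]:
    """Expand a token like '38-39' or '1,3' into a list of integers."""
    result = []
    for part in token.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            result.extend(range(int(lo), int(hi) + 1))
        else:
            result.append(int(part))
    return result

@dataclass
class Selection:
    document: str
    measures: List[int]
    staves: List[int]
    # (beat in the starting measure, beat in the ending measure)
    beat_range: Tuple[int, int]

def parse_ema(expression: str) -> Selection:
    document, measures, staves, beats = expression.split("/")
    start_beat, end_beat = beats.split("-")
    return Selection(
        document=document,
        measures=parse_numbers(measures),
        staves=parse_numbers(staves),
        beat_range=(int(start_beat), int(end_beat)),
    )

sel = parse_ema("dc0519.mei/38-39/1,3/3-3")
# sel.measures == [38, 39]; sel.staves == [1, 3]; sel.beat_range == (3, 3)
```

Note how the beat range stays relative to the first and last selected measures, mirroring the description above.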

One important aspect worth noting is that the beat is the primary driver of the selection: only selections that are contiguous in beat can be expressed with this system. For now, this seems to be a sufficiently flexible way of addressing music notation and we’re working on a way to group several selections together in case the addressing act needs to be more complex — more on this next time.

Upcoming goals for the project

Defining a Music Addressability API is fun, but it’s useless without an implementation. So we’re working on a web service able to parse the URL syntax described in the API and to retrieve the addressed music notation from a file encoded according to the Music Encoding Initiative format (MEI). Unlike the URL syntax, the implementation has to be format-specific, because it needs to know how measures, staves, and beats are represented in order to parse and retrieve them.
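To illustrate the retrieval step, here is a toy sketch: the element and attribute names (`measure`, `@n`) and the MEI namespace are real, but the inline document and the function are ours, not the web service’s actual code, which must also filter staves and clip partial measures by beat:

```python
# Toy example: select <measure> elements from MEI by measure number.
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"

# A tiny stand-in document, not real Songscapes/Lost Voices data.
mei = """<music xmlns="http://www.music-encoding.org/ns/mei">
  <measure n="38"><staff n="1"/><staff n="3"/></measure>
  <measure n="39"><staff n="1"/><staff n="3"/></measure>
  <measure n="40"><staff n="1"/><staff n="3"/></measure>
</music>"""

def select_measures(mei_source: str, start: int, end: int):
    """Return the measures whose @n falls within [start, end]."""
    root = ET.fromstring(mei_source)
    return [m for m in root.iter(f"{{{MEI_NS}}}measure")
            if start <= int(m.get("n")) <= end]

selected = select_measures(mei, 38, 39)
# [m.get("n") for m in selected] == ["38", "39"]
```

Because MEI is namespaced XML, even this simple lookup has to be namespace-aware, which is one reason the implementation cannot be format-agnostic.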

We’re using MEI because our next step in the new year will be to focus on our case-study data: a corpus of Renaissance songs edited and published by the Lost Voices project. Students involved in the project have created a number of micro-analyses of different parts of the scores; we’ll re-model them using the URL syntax specified by the Music Addressability API to test its effectiveness.

Challenges still ahead

After collecting feedback from the MEI community, we were able to identify some aspects of the API that still need to be ironed out. Relying on beat works well because music typically has beat. Music notation, however, often breaks rules in favor of flexibility. Cadenzas, for example, are ornamental passages of an improvisational nature that can be written out with notation that disregards a measure’s beat. How could we address only part of a cadenza if beat is not available? This is one of a few questions that are drawing us back to the whiteboard, and we look forward to developing solutions.

If you’re interested in what EMA is setting out to do, please do get in touch and make sure to keep an eye on our GitHub repository where we’ll keep updating the API and release tools.

EMA is a one-year project funded by the NEH DH Start-Up Grants program.

The Walt Whitman Archive
https://mith.umd.edu/walt-whitman-archive/
Thu, 20 Mar 2014

In the past few months, MITH has been developing software for a project related to the Walt Whitman Archive. The Walt Whitman Archive is an electronic research and teaching tool that sets out to make Whitman’s vast work, for the first time, easily and conveniently accessible to scholars, students, and general readers. Working in collaboration with the University of Texas at Austin, as well as the Center for Digital Research in the Humanities at the University of Nebraska–Lincoln, the project team is focusing on Walt Whitman’s annotations and commentary about the history, science, theology, and art discussed in his time. These annotations survive in many forms: as marginalia and underlinings in books, as collages of newspaper clippings, and as separate handwritten notes. Studying this material can further our understanding of the poet’s self-education and his compositional methods.

The documents containing Whitman’s annotations have been transcribed and encoded by the project team according to the Text Encoding Initiative (TEI) standard, which allows scholars to precisely describe complex texts with paste-downs, doodles, mixed printed and handwritten content, etc. In the case of Whitman’s marginalia, it has also been possible to encode the editors’ understanding of what exactly the poet highlighted, annotated and commented on.

MITH is now building a digital publication around these precisely encoded data. To do so, we are adapting the tools developed for our own Shelley-Godwin Archive. The Shelley-Godwin Archive provides the digitized manuscripts of Percy Bysshe Shelley, Mary Wollstonecraft Shelley, William Godwin, and Mary Wollstonecraft, bringing together online for the first time ever the widely dispersed handwritten legacy of this uniquely gifted family of writers. The archive makes use of Linked Open Data principles and emerging standards such as the Open Annotation and Shared Canvas data models in order to open its contents to widespread use and reuse. The Linked Open Data drives our rendering engines and is generated directly from the TEI files. By adapting our Shelley-Godwin tools for Whitman, we found that Open Annotation was particularly suited for modeling Whitman’s own annotations, as the data model offers a basic and open system for representing generic annotation acts (for example, by relating a piece of Whitman’s commentary to the specific portion of text that it annotates).

The final digital publication will allow users to read semi-diplomatic transcriptions of the texts alongside facsimile images, and to visually distinguish regions of text annotated by Whitman. We are also building a search system that indexes the text according to several categories, so that users may, for example, search text annotated by Whitman separately from text written by the author.

We are busy finalizing this work – watch our blog for a project launch announcement in a few weeks. Meanwhile, feel free to write to Raffaele Viglianti if you have any questions.

 
