Digital Musicology – Maryland Institute for Technology in the Humanities
https://mith.umd.edu

MEI for All! or Lowering the Barrier to Music Encoding through Digital Pedagogy (18 February 2020)
https://mith.umd.edu/dialogues/dd-spring-2020-anna-kijas/

Over roughly the last decade, the Music Encoding Initiative (MEI) has become a recognized, international, community-driven effort that develops and maintains the MEI schema, standards, and shared documentation. The potential of machine-readable music data that can be reused, rendered, shared, or analyzed with a computer is quite appealing; the reality, however, is that various barriers exist for people who may be interested in creating or using encoded music data for the first time.

One approach to lowering these barriers is digital pedagogy, which focuses “specifically on the use of technology to break down learning barriers and enhance students’ learning experiences.”(1) In addition to teaching MEI via online tutorials or workshops, we should encourage students and scholars* to approach MEI through the lens of digital pedagogy, or more specifically critical pedagogy, which emphasizes and overlaps with many of the tenets of the ACRL Framework for Information Literacy for Higher Education.(2) Critical pedagogy encourages questions about authority and power structures, for instance: why was MEI created and for whom, whose music is being encoded, who has access to the data, when and why should we use MEI, and what type of infrastructure is necessary for MEI work? Encouraging and engaging in conversations with students and scholars about the affordances of MEI is as valuable as the act of creating encoded music data or full-on MEI projects.

In this talk, I will explore some of the barriers that students and scholars new to MEI often experience and discuss models drawn from my own work as a librarian and digital humanities practitioner, focusing in particular on the “Introduction to the Music Encoding Initiative,” co-written with Raffaele Viglianti and recently published in the DLFteach Toolkit (https://dlfteach.pubpub.org/toolkit), in which we aim to present music encoding through a low-barrier approach that utilizes open-source tools.(3) I will also present examples (such as minimal computing efforts) from the broader digital humanities community that we might borrow from, which embrace the ethos and approaches of critical and digital pedagogy.(4)

Notes
(1) Reed Garber-Pearson and Robin Chin Roemer, “Keeping up with digital pedagogy”
(2) ACRL, “Framework for Information Literacy for Higher Education”
(3) See also Rebalancing the Music Canon
(4) TEI By Example; Minimal computing; Programming Historian.
*By “scholar” I mean any person engaged in research or scholarly activity; the term is not limited to faculty.

Launch of Early Modern Songscapes Beta Site: Encoding and Publishing Strategies (13 February 2019)
https://mith.umd.edu/launch-of-early-modern-songscapes-beta-site-encoding-and-publishing-strategies/

Early Modern Songscapes is a project exploring the circulation and performance of English Renaissance poetry. The recently released beta version of the project’s site includes a digital exploration of Henry Lawes’s 1653 songbook Ayres and Dialogues. The project is a collaboration among the University of Toronto (UoT), the University of Maryland (UMD), and the University of South Carolina (USC). My role at MITH (I am Raff Viglianti) in this first exploratory phase has been to design a data model and an online viewer for the text and musical score of the songs. Prof. Scott Trudell (UMD) and Prof. Sarah Williams (USC) have contributed to shaping the data model and have carried out the encoding work so far.

Fig. 1 Schematic representation of the encoding data model for a song, with TEI including MEI data. The song shown is When on the Altar of my hand. Facsimile from Early English Books Online.

The scholarship surrounding Lawes’s book and Early Modern song sits at the nexus of literature and music and pays careful attention to both the literary and musical aspects of the songs. To reflect this duality in the data model of a digital edition, we use the Text Encoding Initiative (TEI) format for the verse and the Music Encoding Initiative (MEI) format for the notated music. You can find our encoded files on GitHub. Combining the two formats is becoming a fairly established practice (see for example the Thesaurus Musicarum Latinarum), but it is not without challenges, as existing tools and workflows usually focus on either TEI or MEI. The hierarchical nature of these formats also requires one of the two to contain the other or, in other words, to take a primary position. We have decided to prioritize TEI, partly because it has a well-established metadata header in which we store bibliographical information. The MEI representing the music notation is then embedded within the TEI (see Fig. 1). We have also decided to reproduce the lyrics underlaid beneath the notation as a TEI-encoded stanza, to offer our interpretation of how they might appear if formatted like the subsequent stanzas often printed after the music.
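
To give a sense of the shape of these documents, here is a minimal, partly hypothetical sketch (the element choices are simplified; our actual encodings on GitHub are more detailed):

<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <!-- bibliographical information about the song and its sources -->
  </teiHeader>
  <text>
    <body>
      <div type="song">
        <notatedMusic>
          <!-- the notated music, embedded in the MEI namespace -->
          <mei xmlns="http://www.music-encoding.org/ns/mei">
            <!-- staves, measures, notes, and the underlaid lyrics -->
          </mei>
        </notatedMusic>
        <!-- the underlaid lyrics reproduced as a TEI-encoded stanza -->
        <lg type="stanza">
          <l>When on the Altar of my hand</l>
          <!-- further lines -->
        </lg>
      </div>
    </body>
  </text>
</TEI>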

For some songs, we are also dealing with multiple versions from other sources, with or without music. In these cases, we produce a variorum edition, a presentation of the text that showcases differences across the sources without privileging one over the other. Both TEI and MEI are well equipped to model textual variance, but both assume that one text will be the main reading text and that only variant readings will be encoded from other sources. To overcome this apparent limitation, we create a separate TEI/MEI document that represents only a collation; in other words, a document that lists where the differences between the sources of a song are located. This allows us to encode each source separately, and to whatever degree of detail we deem appropriate, without worrying about tessellating multiple sources in one place (see Fig. 2). This approach has proven quite effective, and I have had the opportunity to apply it to other projects at MITH and beyond, such as Digital Mishnah and the Frankenstein Variorum edition, where, together with colleagues at the University of Pittsburgh and Carnegie Mellon University, particularly Prof. Elisa Beshero-Bondar, we have begun to further develop, contextualize, and generalize this approach.
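
In sketch form (again hypothetical and simplified, using TEI’s critical apparatus elements as one possible vehicle), a collation entry for the situation in Fig. 2 might point into the separately encoded sources like this:

<listApp xmlns="http://www.tei-c.org/ns/1.0">
  <app n="variant1">
    <!-- each reading locates the diverging passage in one source -->
    <rdg wit="#sourceA"><ptr target="sourceA.xml#v1"/></rdg>
    <rdg wit="#sourceB"><ptr target="sourceB.xml#v1"/></rdg>
    <rdg wit="#sourceC"><ptr target="sourceC.xml#v1"/></rdg>
  </app>
</listApp>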

Fig. 2 Diagram of the data model of a hypothetical song with variants, showing three sources (A, B, and C) and a collation containing two variants that identify and connect diverging parts of the sources.

One goal of the Early Modern Songscapes project is to capture song as a multidimensional form, so we are complementing the edition with recorded performances of the songs, including variant versions, under the direction of Prof. Katherine Larson (UoT). The musicians are Rebecca Claborn (mezzo-soprano), Lawrence Wiliford (tenor), and Lucas Harris (lute).

The UoT Scarborough Digital Scholarship Unit, under the direction of Marcus Barnes, has provided the backbone for the project through a robust implementation of Fedora for storing the Songscapes data and Islandora for the project website. My focus has been on providing a lightweight viewer for displaying the TEI and MEI and on adding interactivity for exploring variant readings and sources. The viewer is written in React/Redux and uses CETEIcean for rendering the TEI and Verovio for rendering the MEI. Both of these tools render the data directly in a user’s browser, thus reducing the need for server-side infrastructure for TEI and MEI publications. They also provide isomorphic (that is, one-to-one) renderings of the data, which makes it possible to manipulate the rendering as if it were the actual underlying data. This, for example, makes it fairly simple to write code that follows references from collation documents to the sources, according to the variorum edition model described above. You can read more on CETEIcean in Cayless & Viglianti 2018 and on Verovio in Pugin 2016 (pages 617-631).
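
As a minimal sketch of this client-side approach (assuming the browser builds of CETEIcean and Verovio are loaded on the page; the project’s actual React/Redux viewer is considerably more involved):

// CETEI and verovio are globals provided by the two libraries' browser builds
declare const CETEI: any;
declare const verovio: any;
// stand-in for the MEI extracted from the TEI document
declare const meiString: string;

// CETEIcean turns the TEI elements into HTML5 custom elements in the browser
const ceteicean = new CETEI();
ceteicean.getHTML5("song.xml", (teiDom: HTMLElement) => {
  document.getElementById("text")!.appendChild(teiDom);
});

// Verovio renders the MEI to SVG, also entirely in the browser
const tk = new verovio.toolkit();
tk.loadData(meiString);
document.getElementById("score")!.innerHTML = tk.renderToSVG(1);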

The first phase of Early Modern Songscapes culminated in a conference at the University of Toronto on February 8-9, 2019. As we plan the next phase, we are gathering user feedback on the site: we invite you to visit songscapes.org and fill in our survey!

Fig. 3 A screenshot of the current prototype showing a variant for the song Venus, redress a wrong that’s done (A Complaint Against Cupid).

Report: Music Encoding Conference 2018 (30 May 2018)
https://mith.umd.edu/report-music-encoding-conference-2018/

Raffaele Viglianti (MITH) and Stephen Henry (Michelle Smith Performing Arts Library) hosted the Music Encoding Conference last week (22 – 25 May 2018).

For the first time, the conference had a theme: “Encoding and Performance,” which was well represented throughout the program. We are especially grateful to John Rink for his keynote lecture-recital “(Not) Beyond the Score: Decoding Musical Performance,” which highlighted the challenges of encoding/decoding music notation through the lens of performance research and practice.

We are also particularly grateful to Anna Kijas who, in her keynote speech, “What does the data tell us?: Representation, Canon, and Music Encoding,” highlighted critical topics that are too often neglected in the music encoding community. Her talk made the fundamental point that our acts of building digital representations of notated music can (and currently do) reinforce traditional canons of music history that overlook contributions by women and people of color. In establishing a “digital canon” we have an unprecedented opportunity to change this. Read the full text of her keynote on Medium.

We closed MEC with a productive unconference day in the MITH offices, and we are happy to see activity already emerging in the Music Encoding Initiative community as a result!

Music Encoding Conference reception and performance with Brad Cohen and Tory Wood

Many thanks were given throughout the conference days; however, we would be remiss not to acknowledge again the support provided by the University of Maryland College of Arts and Humanities and the MEI Board, which sponsored bursaries for students. This was especially important in enabling students to attend a conference held, for now, at a geographic distance from the core constituencies of the MEI community. We are also thankful to Tido for sponsoring the Wednesday reception, and particularly to soprano Tory Wood and Tido’s founder and director Brad Cohen for a wonderful live performance.

We enjoyed hosting our attendees at the beautiful Clarice Smith Performing Arts Center and are grateful to the wonderful team there: Leighann Yarwood, Amanda Lee Barber, Kara Warton, and their technical staff. Special thanks also to Lori Owen from the College of Arts and Humanities. We are likewise thankful to the students from the Performing Arts Library who staffed the registration desk and helped with all the odds and ends of the conference: Jennifer Bonilla, Peter Franklin, Will Gray, Kimia Hesabi, Amarti Tasissa, Zachary Tumlin, Terriq White, and Barrett Wilbur.

Finally, we are thankful to all who submitted contributions to the conference and to the Program Committee: Karen Desmond (chair), Johanna Devaney, David Fiala, Andrew Hankinson, and Maja Hartwig.

Raffaele Viglianti Digital Dialogue (17 March 2015)
https://mith.umd.edu/dialogues/dd_spring-2015-raffaele-viglianti/

What is the future of sheet music? The flexibility of the digital medium, as opposed to the rigidity of the printed form, calls for a more modern concept of the music score.

Even digital sheet music, in most cases, is designed to be printed: it is either produced with typesetting software or made of images scanned from a printed source. This type of score exists in digital form almost exclusively for distribution. The difference between print and digital distribution is access: scores can be downloaded and printed at home.

Digital consumption, on the other hand, entails reading and performing the score directly from its digital manifestation. Small businesses are already investing in technologies that make the score follow the performer while playing and that support the writing and display of annotations by the performer, a teacher, or other peers.

In this talk, I’ll address the current state of digital sheet music publication and ask: can the digital consumption of a changeable, customizable publication influence a performer’s advocacy of a work? Textual scholarship and the preparation of critical editions are fundamental components of this discussion, and I’ll present editorial transparency as a vital function of digital consumption.

Music Addressability API (24 November 2014)
https://mith.umd.edu/music-addressability-api/

The Enhancing Music Notation Addressability project (EMA) is creating a system to address specific parts of a music document available online. By addressing we mean being able to talk about a specific music passage (cf. Michael Witmore’s blog post on textual addressability).

On paper, something equivalent could be done by circling or highlighting a part of a score. But how could this be done with a music document on the web? Would it be possible to link to a part of a score the way I can link to a paragraph of a Wikipedia page? How precise can I be?

Enhancing this kind of addressability could be useful for quoting passages, expressing analytical statements and annotations, or passing a selection of music notation on to another process for rendering, computational analysis, and so on.

Project Progress as of November 2014

Most of our efforts have been focused on creating a URI syntax to address common Western music notation regardless of the format of the music notation document. Music notation is represented in a variety of digital formats, and there isn’t an equivalent of a “plain text” music document: even the simplest music note is represented differently across systems. Nonetheless, there are certain primitives common to most music notation representation systems. These are the ones we are considering now:

beat: music notation relies on beat to structure events, such as notes and rests, in time.

measures: typically delimited by bar lines, measures mark segments corresponding to a number of beats.

staves: staves in a score separate music notation played by different instruments or groups of instruments.

Consider the following example (from The Lost Voices project), where we want to address the notation highlighted in red:

DC0519 L’huillier, Si je te voy

We can say that it occurs between measures 38 and 39, on the first and third staves (labelled Superius and Tenor — this is a Renaissance choral piece). Measure 38, however, is not considered in full, but only starting from the third beat (there are four beats per measure in this example).

According to our syntax, this selection could be expressed as follows:

document/measures/staves/beats/
dc0519.mei/38-39/1,3/3-3

The selection of measures is expressed as a range (38-39); staves can be selected through a range or separately with a comma (1,3); and beats are always relative to their measure, so 3-3 means from the third beat of the starting measure to the third beat of the ending measure. Things can get more complicated, but for that we defer to the Music Addressability API documentation that we’ve been writing (beware: it’s still a work in progress; feel free to contribute on GitHub!).
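
As a toy illustration of how such an expression decomposes (a hypothetical sketch of the general idea, handling only simple ranges and comma lists, not the project’s actual implementation):

// Parse an EMA expression such as "dc0519.mei/38-39/1,3/3-3"
interface EmaSelection {
  document: string;
  measures: { start: number; end: number };
  staves: number[];
  beats: { start: number; end: number };
}

// "38-39" -> {38, 39}; a single value such as "38" means start === end
function parseRange(expr: string): { start: number; end: number } {
  const [start, end] = expr.split("-").map(Number);
  return { start, end: end ?? start };
}

function parseEma(expr: string): EmaSelection {
  const [doc, measures, staves, beats] = expr.split("/");
  return {
    document: doc,
    measures: parseRange(measures),
    staves: staves.split(",").map(Number), // ranges within staves omitted here
    beats: parseRange(beats),
  };
}

console.log(parseEma("dc0519.mei/38-39/1,3/3-3"));
// -> { document: "dc0519.mei", measures: { start: 38, end: 39 },
//      staves: [1, 3], beats: { start: 3, end: 3 } }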

One important aspect worth noting is that the beat is the primary driver of the selection: only selections that are contiguous in beat can be expressed with this system. For now, this seems to be a sufficiently flexible way of addressing music notation and we’re working on a way to group several selections together in case the addressing act needs to be more complex — more on this next time.

Upcoming goals for the project

Defining a Music Addressability API is fun, but it’s useless without an implementation. So we’re working on a web service able to parse the URL syntax described in the API and retrieve the addressed music notation from a file encoded according to the Music Encoding Initiative format (MEI). Unlike the URL syntax, the implementation has to be format-specific, because it needs to know how measures, staves, and beats are represented in order to parse and retrieve them.
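
For example (a hypothetical sketch only, assuming that MEI measures and staves carry @n attributes; beat-level filtering within a measure is left out for brevity):

const MEI_NS = "http://www.music-encoding.org/ns/mei";

// Retrieve the <staff> elements addressed by a measure range and a staff list
function selectStaves(mei: string, measures: { start: number; end: number },
                      staves: number[]): Element[] {
  const doc = new DOMParser().parseFromString(mei, "application/xml");
  const selected: Element[] = [];
  for (const measure of Array.from(doc.getElementsByTagNameNS(MEI_NS, "measure"))) {
    const n = Number(measure.getAttribute("n"));
    if (n < measures.start || n > measures.end) continue;
    for (const staff of Array.from(measure.getElementsByTagNameNS(MEI_NS, "staff"))) {
      if (staves.includes(Number(staff.getAttribute("n")))) selected.push(staff);
    }
  }
  return selected;
}

// e.g. selectStaves(meiString, { start: 38, end: 39 }, [1, 3])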

We’re using MEI because our next step in the new year will be to focus on our case-study data: a corpus of Renaissance songs edited and published by the Lost Voices project. Students involved in the project have created a number of micro-analyses of different parts of the scores; we’ll re-model these using the URL syntax specified by the Music Addressability API to test its effectiveness.

Challenges still ahead

After collecting feedback from the MEI community, we were able to identify some aspects of the API that still need to be ironed out. Relying on beat works well because music typically has a beat. Music notation, however, often breaks its own rules in favor of flexibility. Cadenzas, for example, are ornamental passages of an improvisational nature that can be written out with notation that disregards a measure’s beat. How could we address only part of a cadenza if beat is not available? This is one of a few issues drawing us back to the whiteboard, and we look forward to developing solutions.

If you’re interested in what EMA is setting out to do, please do get in touch, and keep an eye on our GitHub repository, where we’ll continue updating the API and releasing tools.

EMA is a one-year project funded by the NEH DH Start-Up Grants program.
