Performing Arts – Maryland Institute for Technology in the Humanities
https://mith.umd.edu

Launch of Early Modern Songscapes Beta Site: Encoding and Publishing strategies
https://mith.umd.edu/launch-of-early-modern-songscapes-beta-site-encoding-and-publishing-strategies/
Wed, 13 Feb 2019 15:50:55 +0000

The post Launch of Early Modern Songscapes Beta Site: Encoding and Publishing strategies appeared first on Maryland Institute for Technology in the Humanities.

Early Modern Songscapes is a project exploring the circulation and performance of English Renaissance poetry. The recently released beta version of the project’s site includes a digital exploration of Henry Lawes’s 1653 songbook Ayres and Dialogues. The project is a collaboration between the University of Toronto (UoT), the University of Maryland (UMD), and the University of South Carolina (USC). During this first exploratory phase, my role (Raff Viglianti) at MITH has focused on designing a data model and an online viewer for the text and musical score of the songs. Prof. Scott Trudell (UMD) and Prof. Sarah Williams (USC) have contributed to shaping the data model and have carried out the encoding work so far.

Fig. 1 Schematic representation of the encoding data model for a song, with TEI including MEI data. The song shown is When on the Altar of my hand. Facsimile from Early English Books Online.

The scholarship surrounding Lawes’s book and Early Modern song sits at the nexus of literature and music, paying careful attention to both the literary and musical aspects of the songs. To reflect this duality in the data model of a digital edition, we use the Text Encoding Initiative (TEI) format for the verse and the Music Encoding Initiative (MEI) format for the notated music. You can find our encoded files on GitHub. Combining the two formats is becoming a fairly established practice (see for example the Thesaurus Musicarum Latinarum), but it is not without challenges, as existing tools and workflows are usually focused on either TEI or MEI. The hierarchical nature of these formats also requires one of the two to contain the other or, in other words, to take a primary position. We have decided to prioritize TEI, partly because it has a well-established metadata header in which we store bibliographical information. The MEI representing the music notation is then embedded within the TEI (see Fig. 1). We have also decided to reproduce the lyrics underlaid to the music as a TEI-encoded stanza, in order to provide our interpretation of how they might appear if formatted like the subsequent stanzas often printed after the music.
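As a rough sketch, this nesting might look like the following (the element contents shown here are illustrative, not drawn from the project’s actual files; TEI’s notatedMusic element provides the bridge, with the MEI content distinguished by its own namespace):

```xml
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <!-- TEI's well-established metadata header: bibliographical information -->
  </teiHeader>
  <text>
    <body>
      <div type="song">
        <!-- the notated music, embedded as MEI in its own namespace -->
        <notatedMusic>
          <mei xmlns="http://www.music-encoding.org/ns/mei">
            <!-- MEI encoding of the score, with underlaid lyrics -->
          </mei>
        </notatedMusic>
        <!-- the underlaid lyrics reproduced as a TEI stanza -->
        <lg type="stanza">
          <l>When on the Altar of my hand</l>
        </lg>
      </div>
    </body>
  </text>
</TEI>
```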

For some songs, we are also dealing with multiple versions from other sources, with or without music. In these cases, we produce a variorum edition, or a presentation of the text that showcases differences across the sources without privileging one over the other. Both TEI and MEI are well equipped for modeling textual variance, but both assume that one text will be the main reading text and that only variant passages will be encoded from other sources. To overcome this apparent limitation, we create a separate TEI/MEI document that only represents a collation; in other words, a document that lists where the differences between the sources of one song are to be located. This allows us to encode each source separately, and to the degree of detail that we deem appropriate, without worrying about tessellating multiple sources in one place (see Fig. 2). This approach has proven quite effective and I have had the opportunity to apply it to other projects at MITH and beyond, such as Digital Mishnah and the Frankenstein Variorum edition where, together with colleagues at the University of Pittsburgh and Carnegie Mellon University, particularly Prof. Elisa Beshero-Bondar, we have begun to further develop, contextualize, and generalize this approach.
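Schematically, such a collation document might be sketched as follows, loosely based on the TEI critical apparatus (the witness sigla, file names, and pointer targets are hypothetical, purely for illustration; the project’s actual structure may differ):

```xml
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader><!-- metadata for the collation itself --></teiHeader>
  <text>
    <body>
      <!-- each <app> records one point of variance, pointing into
           the separately encoded sources rather than quoting them -->
      <app n="1">
        <rdg wit="#A"><ptr target="sourceA.xml#var1"/></rdg>
        <rdg wit="#B"><ptr target="sourceB.xml#var1"/></rdg>
        <rdg wit="#C"><ptr target="sourceC.xml#var1"/></rdg>
      </app>
      <app n="2">
        <!-- a variant attested in only two of the three sources -->
        <rdg wit="#A"><ptr target="sourceA.xml#var2"/></rdg>
        <rdg wit="#C"><ptr target="sourceC.xml#var2"/></rdg>
      </app>
    </body>
  </text>
</TEI>
```

Because each source is encoded in its own file, the collation stays small: it identifies and connects diverging spans without having to reconcile the sources’ differing levels of detail.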

Fig. 2 Diagram of the data model of a hypothetical song with variants, showing three sources (A, B, and C) and a collation containing two variants that identify and connect diverging parts of the sources.

One goal of the Early Modern Songscapes project is to capture song as a multidimensional form, so we are complementing the edition with recorded performances of the songs, including variant versions, under the direction of Prof. Katherine Larson (UoT). The musicians are Rebecca Claborn (mezzo-soprano), Lawrence Wiliford (tenor), and Lucas Harris (lute).

The UoT Scarborough Digital Scholarship Unit, under the direction of Marcus Barnes, has provided the backbone for the project through a robust implementation of Fedora for storing the Songscapes data and Islandora for the project website. My focus has been on providing a lightweight viewer for displaying the TEI and MEI, and on adding interactivity for exploring variant readings and sources. The viewer is written in React/Redux and uses CETEIcean for rendering the TEI and Verovio for rendering the MEI. Both of these tools render the data directly in a user’s browser, thus reducing the need for server-side infrastructure for TEI and MEI publications. They also provide isomorphic (that is, one-to-one) renderings of the data, which allows us to manipulate the rendering as if it were the actual underlying data. This, for example, makes it fairly simple to write code that follows references from collation documents to the sources, according to the variorum edition model described above. You can read more on CETEIcean in Cayless & Viglianti 2018 and on Verovio in Pugin 2016 (pp. 617–631).

The first phase of Early Modern Songscapes culminated in a conference at the University of Toronto on February 8–9, 2019. As we plan the next phase, we are gathering user feedback on the site: we invite you to visit songscapes.org and fill in our survey!

Fig. 3 A screenshot of the current prototype showing a variant for the song Venus, redress a wrong that’s done (A Complaint Against Cupid).

Report: Music Encoding Conference 2018
https://mith.umd.edu/report-music-encoding-conference-2018/
Wed, 30 May 2018 19:42:05 +0000

The post Report: Music Encoding Conference 2018 appeared first on Maryland Institute for Technology in the Humanities.


Raffaele Viglianti (MITH) and Stephen Henry (Michelle Smith Performing Arts Library) hosted the Music Encoding Conference last week (22 – 25 May 2018).

For the first time, the conference had a theme: “Encoding and Performance,” which was well represented throughout the program. We are especially grateful to John Rink for his keynote lecture-recital “(Not) Beyond the Score: Decoding Musical Performance,” which highlighted the challenges of encoding/decoding music notation through the lens of performance research and practice.

We are also particularly grateful to Anna Kijas who, in her keynote speech, “What does the data tell us?: Representation, Canon, and Music Encoding,” highlighted critical topics that are too often neglected in the music encoding community. Her talk made the fundamental point that our acts of building digital representations of notated music can (and currently do) reinforce traditional canons of music history that overlook contributions by women and people of color. In establishing a “digital canon” we have an unprecedented opportunity to change this. Read the full text of her keynote on Medium.

We closed MEC with a productive unconference day in the MITH offices, and we are happy to see some activity already in the Music Encoding Initiative community as a result!

Music Encoding Conference reception and performance with Brad Cohen and Tory Wood

Many thanks were given throughout the conference days; however, we would be remiss not to acknowledge again the support provided by the University of Maryland College of Arts and Humanities and the MEI Board, which sponsored bursaries for students. This was especially important in allowing students to attend the conference in a place that is currently geographically distant from the core constituencies of the MEI community. We are also thankful to Tido for sponsoring the Wednesday reception, and particularly to soprano Tory Wood and Tido’s founder and director Brad Cohen for a wonderful live performance.

We enjoyed hosting our attendees at the beautiful Clarice Smith Performing Arts Center and are grateful to the wonderful team there: Leighann Yarwood, Amanda Lee Barber, Kara Warton, and their technical staff. Special thanks also to Lori Owen from the College of Arts and Humanities. We are also thankful to the students from the Performing Arts Library who staffed the registration desk and helped with all the odds and ends of the conference: Jennifer Bonilla, Peter Franklin, Will Gray, Kimia Hesabi, Amarti Tasissa, Zachary Tumlin, Terriq White, and Barrett Wilbur.

Finally, we are thankful to all who submitted contributions to the conference and to the Program Committee: Karen Desmond (chair), Johanna Devaney, David Fiala, Andrew Hankinson, and Maja Hartwig.

Announcing the Music Encoding Conference 2018 Call for Proposals
https://mith.umd.edu/announcing-music-encoding-conference-2018-call-proposals/
Wed, 27 Sep 2017 19:30:50 +0000

The post Announcing the Music Encoding Conference 2018 Call for Proposals appeared first on Maryland Institute for Technology in the Humanities.


** Deadline extended until November 15 11:59pm EST ** 
Submit at https://www.conftool.net/music-encoding2018

The Maryland Institute for Technology in the Humanities and the Michelle Smith Performing Arts Library invite you to participate in the 2018 Music Encoding Conference with the theme: “Encoding and Performance”.

Date: 23 – 24 May 2018 (with pre-conference workshops on 22 May and an ‘un-conference’ day on 25 May)
Location: University of Maryland, College Park, Maryland, USA
Deadline for Proposals: 15 November 2017 (11:59pm EST)
Notification of Acceptance: 4 December 2017
Keynote speakers: Anna Kijas (Boston College) and John Rink (University of Cambridge)

Music encoding is a critical component of the emerging fields of digital musicology, digital editions, symbolic music information retrieval, and others. At the centre of these fields, the Music Encoding Conference has emerged as an important cross-disciplinary venue for theorists, musicologists, librarians, and technologists to meet and discuss new advances in their fields.

The Music Encoding Conference is the annual focal point for the Music Encoding Initiative community (http://music-encoding.org), but members from all encoding and analysis communities are welcome to participate.

For the first time, the annual conference will have a theme: “Encoding and Performance”. We welcome in particular submissions that theorize the relationship between music encoding and performance practice, describe experiments (failed or successful) in creating digital dynamic scores, propose ways of using encoded music for pedagogical purposes related to performance, or imagine future interconnections. The conference will be held at the Clarice Smith Performing Arts Center, and therefore, we encourage presentations that include a performance component or demonstration.

As always, other topics are welcome. Suggested topics include, but are not limited to:

  • music encoding for performance research and practice
  • music encoding as a theoretical approach for research
  • methodologies for encoding, music editing, description and analysis
  • rendering of symbolic music data in audio and graphical forms
  • relationships between symbolic music data, encoded text, and facsimile images
  • capture, interchange, and re-purposing of music data and metadata
  • evaluation and control of quality of music data and metadata
  • ontologies, authority files, and linked data in music encoding and description
  • music encoding and symbolic music information retrieval
  • additional topics relevant to music encoding, editing, and description

Authors are invited to upload their submission for review to our Conftool website: https://www.conftool.net/music-encoding2018. The deadline for all submissions is 15 November 2017 (11:59pm EST).

Abstracts (in PDF format only) should be submitted through ConfTool, and the submitted PDF must anonymize the authors’ details.

Types of proposals

Paper and poster proposals. Provide an abstract of no more than 1000 words, excluding relevant bibliographic references (no more than ten). Please also include information about presentation needs, particularly if you are planning a performance demonstration.

Panel discussion proposals, describing the topic and nature of the discussion and including short biographies of the participants, must be no longer than 2000 words. Panel discussions are not expected to be a set of papers which could otherwise be submitted as individual papers.

Proposals for half- or full-day pre-conference workshops, to be held on May 22nd, should include the workshop’s proposed duration, as well as its logistical and technical requirements.

Friday, May 25th, is planned as an un-conference day, self-organized by the participants and open to anyone who wants to initiate a discussion on a topic mentioned above.

Additional details regarding registration, accommodation, etc. will be announced on the conference web page (http://music-encoding.org/community/conference).

If you have any questions, please e-mail conference2018@music-encoding.org.

Program Committee

  • Karen Desmond, chair (Brandeis University)
  • Johanna Devaney (Ohio State University)
  • David Fiala (Centre d’Études Supérieures de la Renaissance, Tours)
  • Andrew Hankinson (Bodleian Libraries, University of Oxford)
  • Maja Hartwig (University of Paderborn)

Organizing Committee

  • Amanda Lee-Barber (The Clarice Smith Performing Arts Center)
  • Stephen Henry, co-chair (Michelle Smith Performing Arts Library)
  • Raffaele Viglianti, co-chair (Maryland Institute for Technology in the Humanities)
  • Leighann Yarwood (The Clarice Smith Performing Arts Center)

Joanna Swafford Digital Dialogue
https://mith.umd.edu/dialogues/dd-spring-2017-joanna-swafford/
Tue, 28 Mar 2017 05:30:28 +0000

The post Joanna Swafford Digital Dialogue appeared first on Maryland Institute for Technology in the Humanities.


Although poetry is often treated as silent print on the page, this talk details how digital tools can augment poetry’s aural and performed dimensions. The talk presents three such digital projects: Songs of the Victorians, an archive and analysis of musical settings of famous Victorian poems, Augmented Notes, a tool for creating digital scores synched with audio, and Sounding Poetry, a visualization tool for analyzing poetry recitations.

See below for a Storify recap of this Digital Dialogue, including live tweets and select resources referenced by Swafford during her talk.

Raffaele Viglianti Digital Dialogue
https://mith.umd.edu/dialogues/dd_spring-2015-raffaele-viglianti/
Tue, 17 Mar 2015 12:00:52 +0000

The post Raffaele Viglianti Digital Dialogue appeared first on Maryland Institute for Technology in the Humanities.

What is the future of sheet music? The flexibility of the digital medium, as opposed to the rigidity of the printed form, calls for a more modern concept of the music score.

Even digital sheet music, in most cases, is designed to be printed; it is either produced with typesetting software, or made of images scanned from a printed source. This type of digital score exists in digital form almost exclusively for distribution. The difference between print and digital distribution is access: scores can be downloaded and printed at home.

Digital consumption, on the other hand, entails reading and performing the score directly from its digital manifestation. Small businesses are already investing in technologies to make the score follow the performer while playing, to support writing and displaying annotations by the performer, a teacher, other peers, etc.

In this talk, I’ll address the current status of digital sheet music publication and ask: can the digital consumption of a changeable, customizable publication influence a performer’s advocacy of a work? Textual scholarship and the preparation of critical editions are a fundamental component of this discussion, in which I’ll present editorial transparency as a vital function of digital consumption.

Improving MEI to VexFlow
https://mith.umd.edu/improving-mei-to-vexflow/
Wed, 07 Aug 2013 17:36:07 +0000

The post Improving MEI to VexFlow appeared first on Maryland Institute for Technology in the Humanities.

MEI to VexFlow is a JavaScript library used to render music notation encoded according to the Music Encoding Initiative (MEI) format. MEI is a community-driven effort to create a commonly accepted, digital, symbolic representation of music notation documents, while keeping in mind the needs of textual scholars and the Digital Humanities.

The three-month project, supported by Google Summer of Code, is at its mid-term milestone, and on this occasion it is worth taking a look at what has been achieved and what is coming up for the rest of the project.

During the first month and a half, I have been busy revising the existing code and adding new features. The main objective was to extend the repertoire and the range of MEI features that MEI to VexFlow can support. To achieve that, I first focused on improving the handling of “lines” such as slurs, ties, and hairpins. This work is very much an under-the-hood improvement, but it is important, because it paves the way for dealing with the quite common phenomenon of hairpins and slurs that do not begin and end precisely with a note on the staff.

 

Figure 1: First four measures from Introduction and Allegro by Maurice Ravel, flute and clarinet parts only (scan)

MEI provides two ways of describing where a line begins and where it ends. The most straightforward way is to specify that a crescendo, for instance, ‘starts at note one in measure two and ends at note two in measure two’, as in the example below:
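An encoding along these lines might look like the following sketch, where the pitches, durations, and xml:id values are invented for illustration:

```xml
<!-- measure 2: a crescendo attached to its starting and ending notes -->
<measure n="2">
  <staff n="1">
    <layer n="1">
      <note xml:id="m2_n1" pname="g" oct="4" dur="4"/>
      <note xml:id="m2_n2" pname="a" oct="4" dur="4"/>
    </layer>
  </staff>
  <!-- 'starts at note one in measure two and ends at note two in measure two' -->
  <hairpin form="cres" startid="#m2_n1" endid="#m2_n2"/>
</measure>
```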

On the other hand, there are instances when the line does not exactly correspond to the start of notes. For these cases, MEI allows the encoder to specify two points in time expressed as beats. For example, the diminuendo in the second bar of the Ravel example (Figure 1) starts roughly on the third beat, hence one can describe it as a ‘diminuendo starting at beat three and ending at beat “five”, that is, at the very end of the measure’. Or indeed, in the fourth measure the crescendo ‘starts at beat one and ends at beat three’.
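Encoded with beats, the Figure 1 diminuendo might look something like this sketch (attribute values are illustrative; in MEI, a tstamp2 value such as "0m+5" means zero measures forward, beat five):

```xml
<!-- measure 2 of the Ravel example: a diminuendo located by beats -->
<measure n="2">
  <!-- notes omitted -->
  <!-- starts at beat three; "0m+5" ends at beat five of the same
       measure, i.e. the very end of a measure of four beats -->
  <hairpin form="dim" tstamp="3" tstamp2="0m+5"/>
</measure>
```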

I’ve made MEI to VexFlow capable of dealing with such encodings; in other words, it is now possible to display lines that are described with beats in the MEI file. However, VexFlow is only capable of drawing lines that start and end on a note, so my solution was to find the notes closest to the two given beats and to draw the lines as if they were attached to those notes. This is a very important step towards more advanced support for these objects. In the future it will be possible to render the lines between the correct locations by calculating offsets from the closest notes.

 

Figure 2: Second measure from Introduction and Allegro by Maurice Ravel, flute and clarinet parts only (as rendered by MEI to VexFlow). MEI to VexFlow displays the diminuendo, though its start is drawn at the notehead of the third note in the measure.

Several other features have been introduced. It is now possible to display changes of time signature, key signature, or clef. For instance, in the harp part of the Sonata for Flute Viola and Harp by Debussy all three occur at once:

 

Figure 3: Excerpt from Sonata for Flute Viola and Harp by Debussy (as rendered by MEI to VexFlow)

System breaks are also now supported (Figure 4), and staff connectors are now rendered on the left of each system. Different staff connectors are supported in order to render different symbols connecting different groups of staves (for example, a brace for the piano’s right and left hands, or a bracket for a group of instruments in an orchestral score).

 

Figure 4: First two lines of the C Major Prelude from The Well Tempered Clavier by J. S. Bach (as rendered by MEI to VexFlow)

Figure 5: Multi-staff system with instrument groups (as rendered by MEI to VexFlow)

These are important steps towards the usability of the scores in real-life use cases such as performance or practice.

For more information about the project, visit our project page and, to see MEI to VexFlow in action (scores rendered directly in your browser), try out the demo page!

In our next milestone we are turning our attention to variant handling. Historical musical pieces make their way to us through multiple documents, and it often happens that these sources introduce differences and variants in the music. We are designing a sample web application that will be able to display 15th- and 16th-century music and provide a dynamic mechanism for the user to select which variant they want to see.

To achieve this, we are adding a new parsing interface to MEI to VexFlow in order to expose the set of variants encoded in the input file. A web application will then be able to display this information to the user, allow them to make their choices before rendering, and show multiple variants at the same time.

Simulating Liveness: From Virtual Vaudeville to Second Life
https://mith.umd.edu/dialogues/simulating-liveness-from-virtual-vaudeville-to-second-life/
Tue, 02 Oct 2007 04:00:02 +0000

The post Simulating Liveness: From Virtual Vaudeville to Second Life appeared first on Maryland Institute for Technology in the Humanities.

Manuscripts, paintings, sculptures, buildings, and recordings can be preserved and archived for future generations. Live theatre, however, is ephemeral. This simple fact creates a tremendous challenge for theatre scholarship and pedagogy. In an effort to compensate for theatre’s evanescence, scholars and theatre artists have been exploring a variety of techniques to simulate historical theatre events. The key challenge is to reproduce the viewer’s immersion in the world of the theatre, and the crucial role that the community of spectators plays in constituting a performance event.

I will examine two approaches to simulating live performance using 3D computer animation over the internet. The first is to use pre-rendered animations to simulate the experience of watching a performance. The Virtual Vaudeville project exemplifies this approach, letting the viewer switch at will among multiple perspectives on a single nineteenth-century performance. The project also provides a series of hypermedia notes and a real-time flythrough of the theatre. The second approach is to create fully interactive real-time performances online. I will offer a brief historical overview of such efforts, showing how advances in technology are rapidly making online performance feasible. In particular, I will focus on the tremendous potential — and serious limitations — of Second Life as a venue for virtual performance. Moreover, I will argue that the phenomenon of live performance in Second Life raises fundamental questions about the very notion of liveness.
