Music Studies – Maryland Institute for Technology in the Humanities (https://mith.umd.edu)

Report: Music Encoding Conference 2018
https://mith.umd.edu/report-music-encoding-conference-2018/ – 30 May 2018

The post Report: Music Encoding Conference 2018 appeared first on Maryland Institute for Technology in the Humanities.


Raffaele Viglianti (MITH) and Stephen Henry (Michelle Smith Performing Arts Library) hosted the Music Encoding Conference last week (22 – 25 May 2018).

For the first time, the conference had a theme: “Encoding and Performance,” which was well represented throughout the program. We are especially grateful to John Rink for his keynote lecture-recital “(Not) Beyond the Score: Decoding Musical Performance,” which highlighted the challenges of encoding/decoding music notation through the lens of performance research and practice.

We are also particularly grateful to Anna Kijas who, in her keynote speech, “What does the data tell us?: Representation, Canon, and Music Encoding,” highlighted critical topics that are too often neglected in the music encoding community. Her talk made the fundamental point that our acts of building digital representations of notated music can (and currently do) reinforce traditional canons of music history that overlook contributions by women and people of color. In establishing a “digital canon” we have an unprecedented opportunity to change this. Read the full text of her keynote on Medium.

We closed MEC with a productive unconference day in the MITH offices, and we are happy to already see some activity in the Music Encoding Initiative community as a result!

Music Encoding Conference reception and performance with Brad Cohen and Tory Wood

Many thanks were given throughout the conference days; however, we would be remiss not to acknowledge again the support provided by the University of Maryland College of Arts and Humanities and the MEI Board, which sponsored bursaries for students. This support was especially important in enabling students to attend the conference at a location that is currently geographically distant from the core constituencies of the MEI community. We are also thankful to Tido for sponsoring the Wednesday reception, and particularly to soprano Tory Wood and Tido’s founder and director Brad Cohen for a wonderful live performance.

We enjoyed hosting our attendees at the beautiful Clarice Smith Performing Arts Center and are grateful to the wonderful team there: Leighann Yarwood, Amanda Lee Barber, Kara Warton, and their technical staff. Special thanks also to Lori Owen from the College of Arts and Humanities. We are also thankful to the students from the Performing Arts Library who staffed the registration desk and helped with the many odds and ends of the conference: Jennifer Bonilla, Peter Franklin, Will Gray, Kimia Hesabi, Amarti Tasissa, Zachary Tumlin, Terriq White, and Barrett Wilbur.

Finally, we are thankful to all who submitted contributions to the conference and to the Program Committee: Karen Desmond (chair), Johanna Devaney, David Fiala, Andrew Hankinson, and Maja Hartwig.

Announcing the Music Encoding Conference 2018 Call for Proposals
https://mith.umd.edu/announcing-music-encoding-conference-2018-call-proposals/ – 27 September 2017

The post Announcing the Music Encoding Conference 2018 Call for Proposals appeared first on Maryland Institute for Technology in the Humanities.


** Deadline extended until November 15 11:59pm EST ** 
Submit at https://www.conftool.net/music-encoding2018

The Maryland Institute for Technology in the Humanities and the Michelle Smith Performing Arts Library invite you to participate in the 2018 Music Encoding Conference with the theme: “Encoding and Performance”.

Date: 23 – 24 May 2018 (with pre-conference workshops on 22 May and an ‘un-conference’ day on 25 May)
Location: University of Maryland, College Park, Maryland, USA
Deadline for Proposals: 15 November 2017 (11:59pm EST)
Notification of Acceptance: 4 December 2017
Keynote speakers: Anna Kijas (Boston College) and John Rink (University of Cambridge)

Music encoding is a critical component of the emerging fields of digital musicology, digital editions, symbolic music information retrieval, and others. At the centre of these fields, the Music Encoding Conference has emerged as an important cross-disciplinary venue for theorists, musicologists, librarians, and technologists to meet and discuss new advances in their fields.

The Music Encoding Conference is the annual focal point for the Music Encoding Initiative community (http://music-encoding.org), but members from all encoding and analysis communities are welcome to participate.

For the first time, the annual conference will have a theme: “Encoding and Performance”. We welcome in particular submissions that theorize the relationship between music encoding and performance practice, describe experiments (failed or successful) in creating digital dynamic scores, propose ways of using encoded music for pedagogical purposes related to performance, or imagine future interconnections. The conference will be held at the Clarice Smith Performing Arts Center, and therefore, we encourage presentations that include a performance component or demonstration.

As always, other topics are welcome. Suggested topics include, but are not limited to:

  • music encoding for performance research and practice
  • music encoding as a theoretical approach for research
  • methodologies for encoding, music editing, description and analysis
  • rendering of symbolic music data in audio and graphical forms
  • relationships between symbolic music data, encoded text, and facsimile images
  • capture, interchange, and re-purposing of music data and metadata
  • evaluation and control of quality of music data and metadata
  • ontologies, authority files, and linked data in music encoding and description
  • music encoding and symbolic music information retrieval
  • additional topics relevant to music encoding, editing, and description

Authors are invited to upload their submission for review to our Conftool website: https://www.conftool.net/music-encoding2018. The deadline for all submissions is 15 November 2017 (11:59pm EST).

Abstracts (in PDF format only) should be submitted through ConfTool, and the submitted PDF must anonymize the authors’ details.

Types of proposals

Paper and poster proposals. Provide an abstract of no more than 1000 words, excluding relevant bibliographic references (no more than ten). Please also include information about presentation needs, particularly if you are planning a performance demonstration.

Panel discussion proposals, describing the topic and nature of the discussion and including short biographies of the participants, must be no longer than 2000 words. A panel discussion should not simply be a set of papers that could otherwise be submitted as individual paper proposals.

Proposals for half- or full-day pre-conference workshops, to be held on May 22nd, should include the workshop’s proposed duration, as well as its logistical and technical requirements.

Friday May 25th is planned as an un-conference day, self-organized by the participants and open to anyone who wants to initiate a discussion on a topic mentioned above.

Additional details regarding registration, accommodation, etc. will be announced on the conference web page (http://music-encoding.org/community/conference).

If you have any questions, please e-mail conference2018@music-encoding.org.

Program Committee

  • Karen Desmond, chair (Brandeis University)
  • Johanna Devaney (Ohio State University)
  • David Fiala (Centre d’Études Supérieures de la Renaissance, Tours)
  • Andrew Hankinson (Bodleian Libraries, University of Oxford)
  • Maja Hartwig (University of Paderborn)

Organizing Committee

  • Amanda Lee-Barber (The Clarice Smith Performing Arts Center)
  • Stephen Henry, co-chair (Michelle Smith Performing Arts Library)
  • Raffaele Viglianti, co-chair (Maryland Institute for Technology in the Humanities)
  • Leighann Yarwood (The Clarice Smith Performing Arts Center)

Joanna Swafford Digital Dialogue
https://mith.umd.edu/dialogues/dd-spring-2017-joanna-swafford/ – 28 March 2017

The post Joanna Swafford Digital Dialogue appeared first on Maryland Institute for Technology in the Humanities.


Although poetry is often treated as silent print on the page, this talk details how digital tools can augment poetry’s aural and performed dimensions. The talk presents three such digital projects: Songs of the Victorians, an archive and analysis of musical settings of famous Victorian poems; Augmented Notes, a tool for creating digital scores synched with audio; and Sounding Poetry, a visualization tool for analyzing poetry recitations.

See below for a Storify recap of this Digital Dialogue, including live tweets and select resources referenced by Swafford during her talk.

Raffaele Viglianti Digital Dialogue
https://mith.umd.edu/dialogues/dd_spring-2015-raffaele-viglianti/ – 17 March 2015

The post Raffaele Viglianti Digital Dialogue appeared first on Maryland Institute for Technology in the Humanities.

What is the future of sheet music? The flexibility of the digital medium, as opposed to the rigidity of the printed form, calls for a more modern concept of the music score.

Even digital sheet music, in most cases, is designed to be printed: it is either produced with typesetting software or made of images scanned from a printed source. This type of score exists in digital form almost exclusively for distribution. The difference between print and digital distribution is access: scores can be downloaded and printed at home.

Digital consumption, on the other hand, entails reading and performing the score directly from its digital manifestation. Small businesses are already investing in technologies that make the score follow the performer while playing, and that support writing and displaying annotations by the performer, a teacher, or other peers.

In this talk, I’ll address the current state of digital sheet music publication and ask: can the digital consumption of a changeable, customizable publication influence a performer’s advocacy of a work? Textual scholarship and the preparation of critical editions are a fundamental component of this discussion, in which I’ll present editorial transparency as a vital function of digital consumption.

Improving MEI to VexFlow
https://mith.umd.edu/improving-mei-to-vexflow/ – 7 August 2013

The post Improving MEI to VexFlow appeared first on Maryland Institute for Technology in the Humanities.

MEI to VexFlow is a JavaScript library used to render music notation encoded in the Music Encoding Initiative (MEI) format. MEI is a community-driven effort to create a commonly accepted, digital, symbolic representation of music notation documents, while keeping in mind the needs of textual scholars and the Digital Humanities.

The three-month project, supported by Google Summer of Code, is at its mid-term milestone, and on this occasion it is worth having a look at what has been achieved and what is coming up for the rest of the project.

During our first month and a half, I’ve been busy revising the existing code and adding new features. The main objective was extending the repertoire and the range of MEI features that MEI to VexFlow can support. In order to achieve that, I first focused on improving the handling of “lines” such as slurs, ties and hairpins. This work is very much an under-the-hood improvement, but it is important, because it paves the way for dealing with the quite common phenomenon of hairpins and slurs that do not begin and end precisely with a note on the staff.

 

Figure 1: First four measures from Introduction and Allegro by Maurice Ravel, flute and clarinet parts only (scan)

 

MEI provides two ways of describing where a line begins and where it ends. The most straightforward way is to specify that a crescendo, for instance, ‘starts at note one in measure two and ends at note two in measure two’, as in the example below:
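The inline encoding example from the original post has not survived; the following is a minimal sketch of such a note-anchored encoding in MEI (the element and attribute names follow the MEI schema, but the xml:ids and pitches are hypothetical):

```xml
<!-- Sketch: a crescendo anchored to notes via @startid/@endid. -->
<measure n="2">
  <staff n="1">
    <layer>
      <note xml:id="m2n1" pname="g" oct="4" dur="4"/>
      <note xml:id="m2n2" pname="a" oct="4" dur="4"/>
    </layer>
  </staff>
  <hairpin form="cres" staff="1" startid="#m2n1" endid="#m2n2"/>
</measure>
```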

On the other hand, there are instances when the line does not exactly correspond to the start of notes. For these cases MEI allows the encoder to specify two points in time expressed as beats. For example, the diminuendo in the second bar of the Ravel example (Figure 1) starts roughly on the third beat, hence one can describe it as a ‘diminuendo starting at beat three and ending at beat “five”, that is, at the very end of the measure’. Or indeed, in the fourth measure the crescendo ‘starts at beat one and ends at beat three’.
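A beat-anchored encoding of that diminuendo might look as follows (a sketch, not taken from the project’s test data; in MEI’s @tstamp2 notation, “0m+5” means beat five of the same measure):

```xml
<!-- Sketch: a diminuendo positioned by beats rather than by notes. -->
<measure n="2">
  <hairpin form="dim" staff="1" tstamp="3" tstamp2="0m+5"/>
</measure>
```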

I’ve made MEI to VexFlow capable of dealing with such encodings; in other words, it is now possible to display lines that are described with beats in the MEI file. However, VexFlow can only draw lines that start and end on a note, so my solution was to find the notes closest to the two given beats and to draw the lines as if they were attached to those notes. This is a very important step towards more advanced support for these objects. In the future it will be possible to render the lines at the correct locations by calculating offsets from the closest notes.
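The snapping strategy just described can be sketched in a few lines of JavaScript (an illustrative reimplementation, not the library’s actual code; the note objects and their beat property are hypothetical):

```javascript
// Find the note whose onset beat is closest to a requested beat position,
// so that a hairpin described by beats can be drawn as if attached to it.
function closestNote(notes, beat) {
  return notes.reduce(function (best, note) {
    return Math.abs(note.beat - beat) < Math.abs(best.beat - beat)
      ? note
      : best;
  });
}

// Example: quarter notes in 4/4. A diminuendo from beat 3 to the end of
// the measure (beat "5") snaps to the third and fourth notes.
var notes = [{ beat: 1 }, { beat: 2 }, { beat: 3 }, { beat: 4 }];
var start = closestNote(notes, 3); // note at beat 3
var end = closestNote(notes, 5);   // note at beat 4, the closest to "5"
```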

 

Figure 2: Second measure from Introduction and Allegro by Maurice Ravel, flute and clarinet parts only (as rendered by MEI to VexFlow). MEI to VexFlow displays the diminuendo, drawing it as if it started at the notehead of the third note in the measure.

 

Several other features have been introduced. It is now possible to display changes of key signature, time signature, or clef. For instance, in the harp part of the Sonata for Flute, Viola and Harp by Debussy all three occur at once:

 

Figure 3: Excerpt from Sonata for Flute, Viola and Harp by Debussy (as rendered by MEI to VexFlow)
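In MEI, a simultaneous change of this kind is typically encoded with a new <scoreDef> between measures, plus a <clef> element inside the affected staff. The sketch below is illustrative, with hypothetical values rather than a transcription of the Debussy passage:

```xml
<!-- Sketch: a mid-score change of key signature and time signature,
     with a clef change inside the staff. -->
<scoreDef key.sig="2f" meter.count="3" meter.unit="4"/>
<measure n="10">
  <staff n="1">
    <layer>
      <clef shape="F" line="4"/>
      <note pname="c" oct="3" dur="2"/>
    </layer>
  </staff>
</measure>
```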

 

System breaks are also now supported (Figure 4), and staff connectors are now rendered on the left of each system. Different staff connectors are supported in order to render different symbols connecting different groups of staves (for example, a brace for the piano’s right- and left-hand staves, or a bracket for a group of instruments in an orchestral score).
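These groupings correspond to MEI’s <staffGrp> element, whose @symbol attribute selects the connector to draw. A minimal sketch (staff numbers and clefs hypothetical):

```xml
<!-- Sketch: a bracketed instrument group followed by a braced grand staff. -->
<scoreDef>
  <staffGrp symbol="bracket">
    <staffDef n="1" lines="5" clef.shape="G" clef.line="2"/>
    <staffDef n="2" lines="5" clef.shape="G" clef.line="2"/>
  </staffGrp>
  <staffGrp symbol="brace">
    <staffDef n="3" lines="5" clef.shape="G" clef.line="2"/>
    <staffDef n="4" lines="5" clef.shape="F" clef.line="4"/>
  </staffGrp>
</scoreDef>
```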

 

Figure 4: First two lines of the C Major Prelude from The Well-Tempered Clavier by J. S. Bach (as rendered by MEI to VexFlow)

 

Figure 5: Multi-staff system with instrument groups (as rendered by MEI to VexFlow)

 

These are important steps towards the usability of the scores in real-life use cases such as performance or practice.

For more information about the project, visit our project page, and to see MEI to VexFlow in action (scores rendered directly in your browser) try out the demo page!

In our next milestone we are turning our attention towards variant handling. Historical musical pieces reach us through multiple documents, and these sources often introduce differences and variants into the music. We are designing a sample web application that will be able to display 15th- and 16th-century music and provide a dynamic mechanism for the user to select which variant they want to see.

To achieve this, we are adding a new parsing interface to MEI to VexFlow in order to expose the set of variants encoded in the input file. A web application will then be able to display this information to the user, allow them to make their choices before rendering, and show multiple variants at the same time.
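In MEI, such variants are encoded with the critical-apparatus elements <app>, <lem> and <rdg>, which is the kind of structure the new parsing interface would need to expose. A minimal sketch (source IDs and pitches hypothetical):

```xml
<!-- Sketch: two sources disagree on a note; an application can let the
     user choose which reading to render. -->
<app>
  <lem source="#sourceA">
    <note pname="g" oct="4" dur="4"/>
  </lem>
  <rdg source="#sourceB">
    <note pname="a" oct="4" dur="4"/>
  </rdg>
</app>
```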
