{"id":13428,"date":"2014-11-24T14:22:04","date_gmt":"2014-11-24T19:22:04","guid":{"rendered":"http:\/\/mith.umd.edu\/?p=13428"},"modified":"2020-10-08T16:00:25","modified_gmt":"2020-10-08T20:00:25","slug":"music-addressability-api","status":"publish","type":"post","link":"https:\/\/mith.umd.edu\/music-addressability-api\/","title":{"rendered":"Music Addressability API"},"content":{"rendered":"<p>The <a title=\"Enhancing Music Notation Addressability\" href=\"http:\/\/mith.umd.edu\/research\/enhancing-music-notation-addressability\/\">Enhancing Music Notation Addressability<\/a> project (EMA) is creating a system to address specific parts of a music document available online. By addressing we mean being able to talk about a specific music passage (cf. Michael Witmore\u2019s <a href=\"http:\/\/winedarksea.org\/?p=926\" target=\"_blank\" rel=\"noopener noreferrer\">blog post<\/a> on textual addressability).<\/p>\n<p>On paper, something equivalent could be done by circling or highlighting a part of a score. But how could this be done on a music document on the web? Would it be possible to link to a part of a score the way I can link to a paragraph of a Wikipedia page? How precise can I be?<\/p>\n<p>Enhancing this kind of addressability could be useful for quoting passages, expressing analytical statements and annotations, or passing a selection of music notation on to another process for rendering, computational analysis, etc.<\/p>\n<h3>Project Progress as of November 2014<\/h3>\n<p>Most of our efforts have been focused on creating a URI syntax to address common Western music notation regardless of the format of a music notation document. Music notation is represented in a variety of digital formats and there isn\u2019t an equivalent of a \u201cplain text\u201d music document. Even the simplest music note is represented differently across systems. Nonetheless, there are certain primitives that are common to most music notation representation systems. 
These are the ones that we are considering now:<\/p>\n<p><i>beat<\/i>: music notation relies on beat to structure events, such as notes and rests, in time.<\/p>\n<p><i>measures<\/i>: typically indicated by bar lines, measures mark a segment corresponding to a number of beats.<\/p>\n<p><i>staves<\/i>: staves in scores separate music notation played by different instruments or groups of instruments.<\/p>\n<p>Consider the following example (from <a href=\"http:\/\/digitalduchemin.org\/piece\/DC0519\/\" target=\"_blank\" rel=\"noopener noreferrer\"><i>The Lost Voices<\/i><\/a> project), where we want to address the notation highlighted in red:<\/p>\n<p><img class=\"aligncenter wp-image-13429 size-large\" src=\"http:\/\/mith.umd.edu\/wp-content\/uploads\/2014\/11\/EMA_ex.png\" alt=\"DC0519 L\u2019huillier, Si je te voy\" width=\"410\" height=\"206\" srcset=\"https:\/\/mith.umd.edu\/wp-content\/uploads\/2014\/11\/EMA_ex-200x101.png 200w, https:\/\/mith.umd.edu\/wp-content\/uploads\/2014\/11\/EMA_ex-540x272.png 540w, https:\/\/mith.umd.edu\/wp-content\/uploads\/2014\/11\/EMA_ex-980x493.png 980w, https:\/\/mith.umd.edu\/wp-content\/uploads\/2014\/11\/EMA_ex.png 988w\" sizes=\"(max-width: 410px) 100vw, 410px\" \/><\/p>\n<p>We can say that it occurs between measures 38 and 39, on the first and third staves (labelled <i>Superius<\/i> and <i>Tenor<\/i> \u2014 this is a renaissance choral piece). 
Measure 38, however, is not considered in full, but only starting from the third beat (there are four beats per measure in this example).<\/p>\n<p>According to our syntax, this selection could be expressed as follows:<\/p>\n<p><code>document\/measures\/staves\/beats\/<\/code><br \/>\n<code>dc0519.mei\/38-39\/1,3\/3-3<\/code><\/p>\n<p>The selection of measures is expressed as a range (38-39), staves can be selected through a range or separately with a comma (1,3), and beats are always relative to their measure, so 3-3 means from the third beat of the starting measure to the third beat of the ending measure. Things can get more complicated, but for that we defer to the <a href=\"https:\/\/github.com\/umd-mith\/ema\/blob\/master\/docs\/api.md\" target=\"_blank\" rel=\"noopener noreferrer\">Music Addressability API<\/a> documentation that we&#8217;ve been writing (beware: it\u2019s still a work in progress; feel free to contribute on GitHub!)<\/p>\n<p>One important aspect worth noting is that the beat is the primary driver of the selection: only selections that are contiguous in beat can be expressed with this system. For now, this seems to be a sufficiently flexible way of addressing music notation, and we\u2019re working on a way to group several selections together in case the addressing act needs to be more complex \u2014 more on this next time.<\/p>\n<h3>Upcoming goals for the project<\/h3>\n<p>Defining a Music Addressability API is fun, but it\u2019s useless without an implementation. So we\u2019re working on a web service able to parse the URL syntax described in the API and to retrieve the addressed music notation from a file encoded according to the Music Encoding Initiative format (MEI). 
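As a rough illustration of how such a web service might interpret the syntax, here is a minimal sketch that parses an EMA-style expression into its component selections. This is a hypothetical reading of the example above, not the project's actual implementation, and the function names are invented for illustration:

```python
# Hypothetical sketch: split an EMA-style expression of the form
#   document/measures/staves/beats
# (e.g. "dc0519.mei/38-39/1,3/3-3") into its component selections.
# Function names are invented; this is not the EMA project's code.

def parse_range(token):
    """Expand a token like '38-39' or '1,3' into a list of ints."""
    values = []
    for part in token.split(","):
        if "-" in part:
            start, end = part.split("-")
            values.extend(range(int(start), int(end) + 1))
        else:
            values.append(int(part))
    return values

def parse_ema_expression(expr):
    """Break an expression into document, measures, staves, and beats."""
    document, measures, staves, beats = expr.split("/")
    start_beat, end_beat = beats.split("-")
    return {
        "document": document,
        "measures": parse_range(measures),
        "staves": parse_range(staves),
        # beats are relative to the starting and ending measures
        "beats": (int(start_beat), int(end_beat)),
    }

selection = parse_ema_expression("dc0519.mei/38-39/1,3/3-3")
# selection["measures"] is [38, 39], selection["staves"] is [1, 3],
# and selection["beats"] is (3, 3): from beat 3 of measure 38
# to beat 3 of measure 39.
```

A real resolver would then use such a selection to walk the measure, staff, and beat structure of the underlying encoding, which is where the format-specific work begins.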
Unlike the URL syntax, the implementation has to be format-specific, because it needs to know how measures, staves, and beats are represented in order to parse and retrieve them.<\/p>\n<p>We\u2019re using MEI because our next step in the new year will be focusing on our case-study data: a corpus of renaissance songs edited and published by the <a title=\"Lost Voices project\" href=\"http:\/\/digitalduchemin.org\" target=\"_blank\" rel=\"noopener noreferrer\"><i>Lost Voices<\/i><\/a> project. Students involved in the project have created a number of micro-analyses of different parts of the scores; we\u2019ll re-model them using the URL syntax specified by the Music Addressability API to test its effectiveness.<\/p>\n<h3>Challenges still ahead<\/h3>\n<p>After collecting feedback from the MEI community, we were able to identify some aspects of the API that still need to be ironed out. Relying on beat works well because music typically has beat. Music notation, however, often breaks rules in favor of flexibility. Cadenzas, for example, are ornamental passages of an improvisational nature that can be written out with notation that disregards a measure\u2019s beat. How could we address only part of a cadenza if beat is not available? This is one of a few questions that are drawing us back to the whiteboard, and we look forward to developing solutions.<\/p>\n<p>If you\u2019re interested in what EMA is setting out to do, please do get in touch and make sure to keep an eye on our GitHub repository, where we\u2019ll keep updating the API and releasing tools.<\/p>\n<p><em>EMA is a one-year project funded by the NEH DH Start-Up Grants program.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Enhancing Music Notation Addressability project (EMA) is creating a system to address specific parts of a music document available online. 
By addressing we mean [&hellip;]<\/p>\n","protected":false},"author":37,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[65,74,77],"tags":[150],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v15.0 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Music Addressability API &ndash; Maryland Institute for Technology in the Humanities<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/mith.umd.edu\/music-addressability-api\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Music Addressability API &ndash; Maryland Institute for Technology in the Humanities\" \/>\n<meta property=\"og:description\" content=\"The Enhancing Music Notation Addressability project (EMA) is creating a system to address specific parts of a music document available online. 
By addressing we mean [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/mith.umd.edu\/music-addressability-api\/\" \/>\n<meta property=\"og:site_name\" content=\"Maryland Institute for Technology in the Humanities\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/UMD.MITH\" \/>\n<meta property=\"article:published_time\" content=\"2014-11-24T19:22:04+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2020-10-08T20:00:25+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/mith.umd.edu\/wp-content\/uploads\/2014\/11\/EMA_ex.png\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/mith.umd.edu\/#website\",\"url\":\"https:\/\/mith.umd.edu\/\",\"name\":\"Maryland Institute for Technology in the Humanities\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/mith.umd.edu\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/mith.umd.edu\/music-addressability-api\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"http:\/\/mith.umd.edu\/wp-content\/uploads\/2014\/11\/EMA_ex.png\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/mith.umd.edu\/music-addressability-api\/#webpage\",\"url\":\"https:\/\/mith.umd.edu\/music-addressability-api\/\",\"name\":\"Music Addressability API &ndash; Maryland Institute for Technology in the 
Humanities\",\"isPartOf\":{\"@id\":\"https:\/\/mith.umd.edu\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/mith.umd.edu\/music-addressability-api\/#primaryimage\"},\"datePublished\":\"2014-11-24T19:22:04+00:00\",\"dateModified\":\"2020-10-08T20:00:25+00:00\",\"author\":{\"@id\":\"https:\/\/mith.umd.edu\/#\/schema\/person\/f8d294b8813676c5af5057487764ff9e\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/mith.umd.edu\/music-addressability-api\/\"]}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/mith.umd.edu\/#\/schema\/person\/f8d294b8813676c5af5057487764ff9e\",\"name\":\"Raffaele Viglianti\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/mith.umd.edu\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/a0b8791a1837c64fecce13a1effbe22e?s=96&d=mm&r=g\",\"caption\":\"Raffaele Viglianti\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","_links":{"self":[{"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/posts\/13428"}],"collection":[{"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/users\/37"}],"replies":[{"embeddable":true,"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/comments?post=13428"}],"version-history":[{"count":1,"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/posts\/13428\/revisions"}],"predecessor-version":[{"id":21107,"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/posts\/13428\/revisions\/21107"}],"wp:attachment":[{"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/media?parent=13428"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/categories?post=13428"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mith.umd.edu\/wp-json\/wp\/v2\/tags?post=13428"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}