Humbly Report: Sean Bechhofer

Semantics 'n' stuff

Archive for the ‘music’ Category

All the World’s a Stage


[Photo: Jason Groth Wigs Out]

Anyone who knows me is probably aware that I'm a keen amateur* musician. So I was very pleased to be able to work on a musical dataset while spending some sabbatical time at OeRC with Dave De Roure. The project has focused on the Internet Archive's Live Music Archive. The Internet Archive is a "non-profit organisation building a library of internet sites and other cultural artifacts in digital form". They're the folks responsible for the Wayback Machine, the service that lets you see historical snapshots of websites.

The Live Music Archive is a community-contributed collection of live recordings, with over 100,000 performances by nearly 4,000 artists. These aren't just crappy bootlegs made by someone with a tape deck and a mic down their sleeve either: many are taken from direct feeds off the desk or recorded with state-of-the-art equipment. It's all legal too, as the material in the collection has been sanctioned by the artists. I first came across the archive several years ago; it contains recordings by a number of my current favourites, including Mogwai, Calexico and Andrew Bird.

Our task was to take the collection metadata and republish it as Linked Data. This involves a couple of stages. The first is simply to massage the data into an RDF-based form. The second is to provide links to existing resources in other data sources. There are two "obvious" sources to target here: MusicBrainz, which provides information about music artists, and GeoNames, which provides information about geographical locations. Using some simple techniques, we've identified mappings between the entities in our collection and external resources, placing the dataset firmly into the Linked Data Cloud. The exercise also raised some interesting questions about how we expose the fact that there is an underlying dataset (the source data from the archive) along with some additional interpretations of that data (the mappings to other sources). There are certainly going to be glitches in the alignment process (with a corpus of this size, automated alignment is the only viable solution), so it's important that data consumers are aware of what they're getting. This also relates to other strands of work on preserving scientific processes and new models of publication that we're pursuing in projects like Wf4Ever. I'll try to return to some of these questions in a later post.
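To make the two stages concrete, here's a minimal sketch in Python using rdflib. It is illustrative only: the `etree` vocabulary terms, the record fields and the MusicBrainz identifier are all placeholders I've invented, not the dataset's actual schema.

```python
# Sketch of the two-stage pipeline: (1) massage a record into RDF,
# (2) link entities to external resources. Vocabulary is hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

ETREE = Namespace("http://etree.linkedmusic.org/vocab/")  # hypothetical vocabulary
BASE = "http://etree.linkedmusic.org/"

g = Graph()
g.bind("etree", ETREE)

# A single archive record, as it might arrive in the collection metadata.
record = {
    "id": "mogwai2009-06-18",
    "artist": "Mogwai",
    "venue": "Koko, London",
    "date": "2009-06-18",
}

# Stage one: mint a URI for the performance and assert some triples.
perf = URIRef(BASE + "performance/" + record["id"])
g.add((perf, RDF.type, ETREE.Performance))
g.add((perf, RDFS.label, Literal(record["artist"] + " at " + record["venue"])))
g.add((perf, ETREE.date, Literal(record["date"])))

# Stage two: link the artist to an external resource. The MBID here is a
# placeholder; in practice the mapping comes from automated alignment.
g.add((perf, ETREE.performer,
       URIRef("http://musicbrainz.org/artist/placeholder-mbid")))

print(g.serialize(format="turtle"))
```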

So what? Why is this interesting? For a start, it's a fun corpus to play with, and one shouldn't underestimate the importance of having fun at work! On a more serious note, the corpus provides a useful resource for computational musicology, as exemplified by activities such as MIREX. Not only is there metadata about a large number of live performances, with links to related resources, but there are also links to the underlying audio files from those performances, often in high-quality audio formats. So there is an opportunity here to combine analysis of both the metadata and the audio. We can potentially compare live performances by individual artists across different geographical locations. This could be in terms of metadata: which artists have played in which locations (see the network below), and does artist X play the same setlist every night? Such a query could also potentially be answered by similar resources such as http://www.setlist.fm. The presence of the audio, however, also offers the possibility of combining metadata queries with computational analysis of the performance audio (see the sketch after the figure): does artist X play the same songs at the same tempo every night, and does that change with geographical location? Of course, this corpus is made up of a particular collection of events, so we must be circumspect about deriving any kind of general conclusions about live performances or artist behaviour.

[Figure: Who Played Where?]
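As a rough sketch of what the audio side of such an analysis might look like, the snippet below uses librosa's beat tracker to estimate the tempo of the same song across two performances. The file names are hypothetical, and a real study would need to worry about things like tempo octave errors and picking comparable segments.

```python
# Compare a global tempo estimate for recordings of the same song taken
# from two different shows. File names are hypothetical placeholders.
import librosa

def estimated_tempo(path: str) -> float:
    """Return librosa's global tempo estimate (in BPM) for an audio file."""
    y, sr = librosa.load(path)
    tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)
    return float(tempo)

for show in ["artistX_glasgow_2009.flac", "artistX_tokyo_2010.flac"]:
    print(show, "~", round(estimated_tempo(show), 1), "BPM")
```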

The dataset is accessible from http://etree.linkedmusic.org. There is a SPARQL endpoint, along with browsable pages delivering HTML/RDF representations via content negotiation. Let us know if you find the data useful or interesting, or if you have any ideas for improvement. There is also a short paper [1] describing the dataset, submitted to the Semantic Web Journal. The SWJ has an open review process, so feel free to comment!
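By way of example, here is one way to query the endpoint from Python with SPARQLWrapper. The endpoint path and the reliance on rdfs:label are my assumptions for the sake of illustration; check the site for the actual endpoint URL and vocabulary.

```python
# Sketch of querying the dataset's SPARQL endpoint. The endpoint path is an
# assumption, as is the use of rdfs:label for performance names.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://etree.linkedmusic.org/sparql")  # assumed path
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?performance ?label WHERE {
        ?performance rdfs:label ?label .
        FILTER(CONTAINS(LCASE(STR(?label)), "mogwai"))
    }
    LIMIT 10
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["performance"]["value"], "-", row["label"]["value"])
```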

REFERENCES

  1. Sean Bechhofer, David De Roure and Kevin Page. Hello Cleveland! Linked Data Publication of Live Music Archives. Submitted to the Semantic Web Journal Special Call for Linked Dataset Descriptions.

*Amateur in a positive way, in that I do it for the love of it and it's not how I pay the bills.

Written by Sean Bechhofer

May 23, 2012 at 1:23 pm

Posted in linked data, music, rdf


Sparky’s Magic Piano


[Photo: Not actually the Magic Piano…]

In the week before Christmas, I attended the Digital Music Research Network meeting at Queen Mary, University of London. Digital music research is not an area I'm currently involved with, but I went to the meeting at the suggestion of Dave De Roure. I'll be spending some sabbatical time with Dave in Oxford this year, and one of the things we're going to look at is whether we can apply the technologies and approaches being developed in other projects (in particular the Research Objects of Wf4Ever) to tasks like Music Information Retrieval. I'm also excited about this as it fits with some of my extra-curricular interests in music. The mix of the technical and the artistic (in terms of both content and people) reminded me of the Hypertext conferences I went to back in '99 and '00.

Although some of the talks were a long way from my expertise, I found a few of particular interest. The opening keynote from Elaine Chew discussed some of the issues involved in conducting research: for example, ensuring that work leads to publication (and publications that "count"), that credit is given to the researchers involved in the work, and that the work is sustainable. This was illustrated with some fascinating video footage of experiments with a piano duo, investigating how the introduction of delay affects the interaction and interplay between performers.

György Fazekas presented the Studio Ontology, a model that builds on earlier work on the Music Ontology by Yves Raimond. At first sight, the ontology seems fairly lightweight (largely an asserted taxonomy), but given my own interests in Semantic Web technologies, this is clearly an area for further investigation.

The jewel in the crown, however, was Andrew McPherson‘s work on Electronic augmentation of the acoustic grand piano. The magnetic resonator piano uses electromagnets to induce string vibrations. For those of you familiar with the EBow, used by guitarists including Bill Nelson and Robert Fripp, it’s like a piano with 88 EBows bolted on to it. A keyboard sensor (I believe using a Moog Piano Bar) captures data from the keys and drives the system. The whole thing requires no alteration to the instrument, and can be set up in a few hours. It’s an electronic instrument, but all the sound is produced using the physical soundboard and strings of the instrument itself (i.e. no amplifier/speakers).

The overall effect is a little like an organ, with infinite sustain of notes, but many more subtle effects can be obtained including string “bending” and the introduction of additional harmonic tones. Andrew gave a demonstration of the instrument over lunch. One regret I have is that performance anxiety kicked in here (I’m a fairly rudimentary pianist) and I didn’t rush forward to have a go when he offered it to the floor! And I hadn’t brought a camera. Videos on Andrew’s site show the instrument in action.

One aspect here is the use of various gestures. Electronic keyboards have facilities like aftertouch, which allows the player to apply additional pressure to the keys to control extra tones and effects. This is possible here too, with other gestures, such as sliding the fingers along or up and down the keys, being used to "play" the instrument. In the talk, Andrew described some additional work he was doing on enhanced keyboard controllers to support these gestures. The piano keyboard is a ubiquitous controller/interface to a musical instrument; it will be interesting to see how these additional gestures and controls fit with players' established practices, and which gestures are "right" for which effects.

Of course, the obvious question we all then asked was what other instruments one could apply this approach to. Answers on a postcard…

Written by Sean Bechhofer

January 6, 2012 at 1:12 pm

Posted in music, workshop

