Humbly Report: Sean Bechhofer

Semantics 'n' stuff

Archive for March 2011

The Eurovision Workflow Contest


Ever wondered where the workflows that are most downloaded or viewed in myExperiment come from? Wonder no longer! Here’s a nifty visualisation using Google’s Public Data Explorer:

myExperiment Statistics

What’s it doing?

myExperiment is a Virtual Research Environment that supports users in the sharing of digital items associated with their research. Initially targeted at scientific workflows, it’s now being used to share a number of different contribution types. For this particular example, however, I focused on workflows. Workflows are associated with an owner, and owners may also provide information about themselves, for example which country they’re in. The site also keeps statistics about views and downloads of the items. This dataset allows exploration of the relationship between these various factors.

How does it work?

The myExperiment data is made available as a SPARQL endpoint, supporting the construction of client applications that can consume the metadata provided by myExperiment. A few simple SPARQL queries (thanks to David Newman for SPARQLing support) allowed me to grab summary information about the numbers of workflows in the repository, their formats, and which countries the users came from. The myExperiment endpoint will deliver this information as CSV files, so it’s then just a case of packaging these results up with some metadata and uploading them to the Public Data Explorer. Hey presto, pretty pictures!
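For flavour, here's a sketch of the query-and-fetch step. The endpoint URL is the public myExperiment SPARQL endpoint, but the prefixes and predicate names are illustrative placeholders rather than the actual myExperiment ontology terms, and the name of the CSV format parameter varies between endpoints:

<?php
// Sketch only: count workflows per country of owner and save the
// results as CSV. The ex: predicate names are placeholders, not the
// real myExperiment ontology terms.
$endpoint = 'http://rdf.myexperiment.org/sparql';

$query = <<<SPARQL
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ex:  <http://example.org/myexp#>  # placeholder vocabulary
SELECT ?country (COUNT(?workflow) AS ?workflows)
WHERE {
  ?workflow rdf:type   ex:Workflow ;
            ex:owner   ?user .
  ?user     ex:country ?country .
}
GROUP BY ?country
SPARQL;

// Ask the endpoint for CSV results (the exact parameter name varies).
$url = $endpoint . '?' . http_build_query(array(
    'query'  => $query,
    'format' => 'csv',
));
file_put_contents('workflows-by-country.csv', file_get_contents($url));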

The Explorer expects time series data in order to do its visualisations. The data I’m displaying is “snapshot” data, so there’s only one timepoint in the time series — 2011. We can still get some useful visualisations out though, allowing us to explore the relationships between country of origin, formats, and numbers of downloads and views.
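Concretely, that just means stamping every row of each snapshot CSV with the same year before uploading. A minimal sketch, with hypothetical file names:

<?php
// Turn a snapshot CSV into the one-point "time series" the Explorer
// expects by prepending a constant year column (2011) to every row.
$in  = fopen('workflows-by-country.csv', 'r');
$out = fopen('workflows-by-country-2011.csv', 'w');

$header = fgetcsv($in);
fputcsv($out, array_merge(array('year'), $header));

while (($row = fgetcsv($in)) !== false) {
    fputcsv($out, array_merge(array('2011'), $row));
}
fclose($in);
fclose($out);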

This is, to quote Peter Snow on Election Night, “just a bit of fun” — I’ve made little attempt to clean the data, and there will be users who have not supplied a country, so the data is incomplete and shouldn’t necessarily be taken as representative of the productivity of the countries involved! In addition, the data doesn’t use country identifiers that the data visualiser knows about, and has no lat/long information, so mapping isn’t available (there are also some interesting potential issues with the use of England versus United Kingdom). However, it’s a nice example of plumbing existing pieces of infrastructure together in a lightweight way. Although this was produced in batch mode, in principle it should be easy to do dynamically.

So, Terry, the results please…

“Royaume Uni, douze points”.

Boom Bang-a-Bang!

Written by Sean Bechhofer

March 16, 2011 at 5:30 pm

Posted in rdf, visualisation


Whale Shark 2.0


The fishDelish project is a JISC-funded collaboration between the University of Manchester, Hedtek Ltd and the FishBase Information and Research Group Inc. FishBase is “a global information system with all you ever wanted to know about fishes”, available as a relational database, and the project is about taking that data and republishing it as RDF/Linked Data. The project is nearing its end, and we now have the FishBase data in a triple store, so I took a look at how we could generate some nice looking species pages. FishBase currently offers pages presenting information about species (for example the Whale Shark).

Whale Shark on FishBase

I wanted to try and replicate (some of) this presentation in as simple/lightweight a way as possible. The solution I adopted involves a single SPARQL query that pulls out relevant information about a species, and an XSL stylesheet that transforms the results of that query into an HTML page. The whole thing is tied together with a simple bit of PHP code that executes the SPARQL query (using RAP — a bit long in the tooth, but it does this job), requesting the results as XML. It then uses PHP’s DOMDocument to add a link to the XSL stylesheet into the results. The HTML rendering is then actually handled by the web browser applying the style sheet. The resulting species pages (e.g. the Whale Shark again) are not — to use the words of David Flanders, our JISC Programme Manager — as information rich as the original FishBase pages, but they are sexier.

Whale Shark on fishDelish
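In outline, the glue looks something like the sketch below. Unlike the real code it queries the endpoint over plain HTTP rather than via RAP, and the endpoint URL, query file and stylesheet names are placeholders:

<?php
// Run a canned SPARQL query, get SPARQL Results XML back, and attach
// an xml-stylesheet processing instruction so that the *browser* does
// the HTML rendering by applying species.xsl to the raw results.
$endpoint = 'http://example.org/fishdelish/sparql'; // placeholder
$query    = file_get_contents('species-query.rq');  // canned query

$url = $endpoint . '?' . http_build_query(array(
    'query'  => $query,
    'format' => 'application/sparql-results+xml',
));

$doc = new DOMDocument();
$doc->load($url); // needs allow_url_fopen for http:// URLs

$pi = $doc->createProcessingInstruction(
    'xml-stylesheet', 'type="text/xsl" href="species.xsl"'
);
$doc->insertBefore($pi, $doc->documentElement);

header('Content-Type: application/xml');
echo $doc->saveXML();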

To a certain extent, the extra sexiness is simply down to styling (I’m a big fan of Georgia), but the exercise did help to explore the use of SPARQL and XSL on the FishBase dataset. The SPARQL queries and stylesheets developed will also be useful in conjunction with mySparql, a service developed in fishDelish that allows SPARQL queries to be embedded in pages.

The first problem I faced was trying to understand the structure of the data in the triplestore. The property names produced by D2R are not always entirely, ermm, readable. As my colleague Bijan Parsia discussed in a blog post describing his fishing expeditions, the state of linked data “browsers” is mixed. I ended up using Chris Gutteridge’s Graphite “Quick and Dirty” RDF Browser to help navigate around the data set.

A second question was how to approach the queries. The species pages have a simple structure. They have a single “topic” (i.e. the species), and then display characteristics of that species. So constructing a species page can be seen as a form-filling process where the attributes are predetermined. It’s possible to write a SPARQL query that returns the information about a species as a single row of results. The stylesheet (e.g. for species) can grab the values out of those results and “fill in the blanks” as required. An alternative would be to use some kind of generic s-p-o pattern in the query and pull out all the information about a particular URI (i.e. the species). In the species case though, we already know what information we’re interested in getting out, so the “canned” approach is fine.
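For example, the canned species query has roughly the following shape; the species URI and the fd: predicate names are illustrative stand-ins for the much less readable D2R-generated properties:

<?php
// One-row species query: fetch the predetermined attributes of a
// single species URI. All names below are placeholders.
$species = 'http://example.org/fishdelish/species/Rhincodon_typus';

$query = <<<SPARQL
PREFIX fd: <http://example.org/fishdelish/vocab#>  # placeholder
SELECT ?name ?family ?maxLength ?distribution
WHERE {
  <$species> fd:commonName ?name ;
             fd:family     ?family .
  OPTIONAL { <$species> fd:maxLength    ?maxLength }
  OPTIONAL { <$species> fd:distribution ?distribution }
}
SPARQL;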

I also produced some pages for Orders and Families (e.g. Rhincodontidae or Rajiformes). The SPARQL query here returns a number of rows, as the query asks for all the families in an order or species in a family. There is redundancy in the query result as the first few columns in each row are identical. A cleaner solution here might be to use more than one SPARQL query — one pulling out the family information, one requesting the family/species list. That would require more sophisticated processing though, rather than my lightweight SPARQL query + XSL approach. Again, this is something that the mysparql service would help with.
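For contrast, the family query is shaped roughly like this, returning one row per species with the family-level values repeated in each row (again, all names are placeholders):

<?php
// Multi-row family query: the family-level columns (?familyName,
// ?order) are identical in every row of the results.
$family = 'http://example.org/fishdelish/family/Rhincodontidae';

$query = <<<SPARQL
PREFIX fd: <http://example.org/fishdelish/vocab#>  # placeholder
SELECT ?familyName ?order ?species ?speciesName
WHERE {
  <$family> fd:name  ?familyName ;
            fd:order ?order .
  ?species  fd:family     <$family> ;
            fd:commonName ?speciesName .
}
SPARQL;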

Overall, this was an interesting experiment and exercise in understanding the FishBase RDF data. Harking back to an earlier blog post from Bijan, as I’m already familiar with SPARQL and XSL, it was probably easier for me to produce these pages using the converted data, but it’s not clear whether that would be true in general. There’s actually very little in here that’s about Linked Data. This could have been done (as the current FishBase pages are done) using the existing relational db plus some queries and a bit of scripting. There was some benefit here in using the standardised protocols and infrastructure offered by SPARQL, RDF, XML and XSL though. It was also very easy for me to do all of this on the client side — all I needed was access to the SPARQL endpoint and some XML tooling. So the real benefit for this particular application is gained from the data publication.

It did help to illustrate the kinds of things we can now begin to do with the RDF data though, and puts us in a situation where we can look at further integration of the data with other data sets. For example it would be nice to hook into resources like the BBC Wildlife Finder pages, which are also packed with semantic goodness.

It was also fun, which is always a good thing! If only the Whale Sharks themselves were as easy to find…

(This is an edited version of a fishDelish project blog post)

Written by Sean Bechhofer

March 10, 2011 at 10:48 am

Posted in linked data, rdf
