Humbly Report: Sean Bechhofer

Semantics 'n' stuff

Whale Shark 2.0


The fishDelish project is a JISC-funded collaboration between the University of Manchester, Hedtek Ltd and the FishBase Information and Research Group Inc. FishBase is “a global information system with all you ever wanted to know about fishes.” FishBase is available as a relational database, and the project is about taking that data and republishing it as RDF/Linked Data. The project is nearing its end, and we now have the FishBase data in a triple store. I took a look at how we could generate some nice-looking species pages. FishBase currently offers pages presenting information about species (for example the Whale Shark).

Whale Shark on FishBase

I wanted to try and replicate (some of) this presentation in as simple and lightweight a way as possible. The solution I adopted involves a single SPARQL query that pulls out relevant information about a species, and an XSL stylesheet that transforms the results of that query into an HTML page. The whole thing is tied together with a simple bit of PHP code that executes the SPARQL query (using RAP, which is a bit long in the tooth, but does the job), requesting the results as XML. It then uses PHP’s DOMDocument to add a link to the XSL stylesheet into the results. The HTML rendering itself is then handled by the web browser applying the stylesheet. The resulting species pages (e.g. the Whale Shark again) are not, to use the words of David Flanders, our JISC Programme Manager, as information-rich as the original FishBase pages, but they are sexier.
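
For the curious, here’s a rough sketch of that glue (not the project code itself): the endpoint URL, query file and stylesheet name are placeholders, and a plain HTTP GET stands in for the RAP client.

    <?php
    // Sketch only: placeholder endpoint, query file and stylesheet; a plain
    // HTTP GET stands in for the RAP client used in the project.
    $endpoint   = 'http://example.org/fishdelish/sparql';  // placeholder endpoint
    $stylesheet = 'species.xsl';                           // placeholder stylesheet

    // Run the canned species query, asking for SPARQL results as XML.
    $query   = file_get_contents('species.rq');
    $context = stream_context_create(array(
        'http' => array('header' => "Accept: application/sparql-results+xml\r\n")
    ));
    $results = file_get_contents($endpoint . '?query=' . urlencode($query),
                                 false, $context);

    // Add a processing instruction pointing at the XSL stylesheet, so that
    // the browser does the HTML rendering.
    $doc = new DOMDocument();
    $doc->loadXML($results);
    $pi = $doc->createProcessingInstruction('xml-stylesheet',
        'type="text/xsl" href="' . $stylesheet . '"');
    $doc->insertBefore($pi, $doc->documentElement);

    // Hand the decorated result document back to the browser.
    header('Content-Type: application/xml');
    echo $doc->saveXML();
    ?>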

Whale Shark on fishDelish

To a certain extent, that’s simply down to styling (I’m a big fan of Georgia), but the exercise did help to explore the use of SPARQL and XSL on the FishBase dataset. The SPARQL queries and stylesheets developed here will also be useful in conjunction with mySparql, a service developed in fishDelish that allows SPARQL queries to be embedded in pages.

The first problem I faced was trying to understand the structure of the data in the triplestore. The property names produced by D2R are not always entirely, ermm, readable. As my colleague Bijan Parsia discussed in a blog post describing his fishing expeditions, the state of linked data “browsers” is mixed. I ended up using Chris Gutteridge’s Graphite “Quick and Dirty” RDF Browser to help navigate around the data set.

A second question was how to approach the queries. The species pages have a simple structure. They have a single “topic” (i.e. the species), and then display characteristics of that species. So constructing a species page can be seen as a form-filling process where the attributes are predetermined. It’s possible to write a SPARQL query that returns the information about a species as a single row in the results. The stylesheet (e.g. for species) can then grab the values out of those results and “fill in the blanks” as required. An alternative would be to use some kind of generic s-p-o pattern in the query and pull out all the information about a particular URI (i.e. the species). In the species case though, we already know what information we’re interested in getting out, so the “canned” approach is fine.
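
To give a flavour of the canned approach, here’s a sketch of a species query. It’s illustrative only: the property URIs that D2R actually generates look nothing like this, so the fd: names and the species URI below are made-up stand-ins.

    # Illustrative only: the fd: properties and the species URI are hypothetical
    # stand-ins for the (rather less readable) D2R-generated vocabulary.
    PREFIX fd: <http://example.org/fishdelish/vocab/>

    SELECT ?genus ?species ?author ?family ?maxLength ?environment
    WHERE {
      <http://example.org/fishdelish/species/whale-shark>
          fd:genus       ?genus ;
          fd:species     ?species ;
          fd:author      ?author ;
          fd:family      ?family ;
          fd:maxLength   ?maxLength ;
          fd:environment ?environment .
    }

A single row comes back, and the stylesheet drops each binding into the appropriate slot in the page.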

I also produced some pages for Orders and Families (e.g. Rhincodontidae or Rajiformes). The SPARQL query here returns a number of rows, as the query asks for all the families in an order or species in a family. There is redundancy in the query result, as the first few columns in each row are identical. A cleaner solution here might be to use more than one SPARQL query: one pulling out the family information, one requesting the family/species list. That would require more sophisticated processing, though, rather than my lightweight SPARQL query + XSL approach. Again, this is something that the mySparql service would help with.
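
For illustration again (with the same caveat that the fd: names are made up), a family page query might look roughly like this, with the family-level bindings repeated in every row:

    # Illustrative only: the fd: properties are hypothetical stand-ins.
    PREFIX fd: <http://example.org/fishdelish/vocab/>

    SELECT ?familyName ?order ?genus ?species
    WHERE {
      ?family fd:familyName ?familyName ;
              fd:order      ?order .
      ?sp     fd:family     ?family ;
              fd:genus      ?genus ;
              fd:species    ?species .
      FILTER (?familyName = "Rhincodontidae")
    }
    ORDER BY ?genus ?species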

Overall, this was an interesting experiment and exercise in understanding the FishBase RDF data. Harking back to an earlier blog post from Bijan, as I’m already familiar with SPARQL and XSL, it was probably easier for me to produce these pages using the converted data, but it’s not clear whether that would be true in general. There’s actually very little in here that’s specifically about Linked Data. This could have been done (as the current FishBase pages are done) using the existing relational database plus some queries and a bit of scripting. There was some benefit here in using the standardised protocols and infrastructure offered by SPARQL, RDF, XML and XSL, though. It was also very easy for me to do all of this on the client side: all I needed was access to the SPARQL endpoint and some XML tooling. So the real benefit for this particular application is gained from the data publication.

It did help to illustrate the kinds of things we can now begin to do with the RDF data though, and puts us in a situation where we can look at further integration of the data with other data sets. For example it would be nice to hook into resources like the BBC Wildlife Finder pages, which are also packed with semantic goodness.

It was also fun, which is always a good thing! If only the Whale Sharks themselves were as easy to find…

(This is an edited version of a fishDelish project blog post)


Written by Sean Bechhofer

March 10, 2011 at 10:48 am

Posted in linked data, rdf


One Response


  1. hi, are there any news on the availability of fishbase as linked data / rdf? need help?
    wkr http://www.turnguard.com/turnguard

    Jakobitsch Jürgen

    October 16, 2012 at 7:45 am

