
Schema.org 2.0
About a month ago Version 2.0 of the Schema.org vocabulary hit the streets. But does this warrant the version number clicking over from 1.xx to 2.0?
I am pleased to share with you a small but significant step on the Linked Data journey for WorldCat and the exposure of data from OCLC. Content-negotiation has been implemented for the publication of Linked Data for WorldCat resources. For those immersed in the publication and consumption of Linked Data, there is little more to say. However, I suspect there are a significant number of folks reading this who are wondering what the heck I am going on about. It is a little bit techie, but I will try to keep it as simple as possible. Back last year, a …
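By way of illustration, here is a minimal sketch of what content-negotiation looks like from the consumer's side, using Python's requests library. The resource URI and the serialisations on offer are assumptions for the example, not a statement of OCLC's actual service:

```python
import requests

# An example WorldCat resource URI (assumed for illustration).
uri = "http://www.worldcat.org/oclc/41266045"

# Ask for HTML, as a browser would.
html = requests.get(uri, headers={"Accept": "text/html"})

# Ask the *same* URI for RDF instead; a content-negotiating server
# returns a machine-readable description rather than a web page.
rdf = requests.get(uri, headers={"Accept": "text/turtle"})

print(html.headers.get("Content-Type"))  # e.g. text/html
print(rdf.headers.get("Content-Type"))   # e.g. text/turtle
```

The point being that a single identifier can serve both humans and Linked Data consumers, with the server choosing the representation.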
As is often the way, you start a post without realising that it is part of a series of posts – as with the first in this series. That one – Entification – the following one – Hubs of Authority – and this one together map out a journey that I believe the library community is undertaking as it evolves from a record-based system of cataloguing items towards embracing distributed open linked data principles to connect users with the resources they seek. Although grounded in much of the theory and practice I promote and engage with, in my role as Technology Evangelist …
As is often the way, you start a post without realising that it is part of a series of posts – as with the first in this series. That one – Entification – and the next in the series – Beacons of Availability – together map out a journey that I believe the library community is undertaking as it evolves from a record-based system of cataloguing items towards embracing distributed open linked data principles to connect users with the resources they seek. Although grounded in much of the theory and practice I promote and engage with, in my role as Technology …
The phrase ‘getting library data into a linked data form’ hides a multitude of issues. There are some obvious steps, such as holding and/or outputting the data in RDF, providing resources with permanent URIs, etc. However, deriving useful library linked data from a source such as a MARC record requires far more than giving it a URI and encoding what you know, unchanged, as RDF triples.
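To make the distinction concrete, here is a hedged sketch using Python's rdflib. The first triple merely carries a MARC author string over as a literal; the second links the resource to an identified entity in its own right. All the URIs, including the VIAF-style identifier, are invented for the example:

```python
from rdflib import Graph, Literal, Namespace, URIRef

SCHEMA = Namespace("http://schema.org/")
g = Graph()

work = URIRef("http://example.org/resource/123")  # invented item URI

# Unhelpful: the MARC author field carried over unchanged as a string.
g.add((work, SCHEMA.author, Literal("Pirsig, Robert M.")))

# More useful: the author 'entified' as a thing with its own URI
# (a made-up VIAF-style identifier, purely for illustration).
author = URIRef("http://viaf.org/viaf/00000000")
g.add((work, SCHEMA.author, author))
g.add((author, SCHEMA.name, Literal("Robert M. Pirsig")))

print(g.serialize(format="turtle"))
```

Only the second form gives other datasets something to link to.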
I cannot really get away with making a statement like “Better still, download and install a triplestore [such as 4Store], load up the approximately 80 million triples and practice some SPARQL on them” and then not following it up. I made it in my previous post Get Yourself a Linked Data Piece of WorldCat to Play With, in which I was highlighting the release of a download file containing RDF descriptions of the 1.2 million most highly held resources in WorldCat.org – to make the cut, a resource had to be held by more than 250 libraries. So here …
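As a taster of that follow-up, here is a sketch of querying such a store from Python with SPARQLWrapper, assuming a local 4Store (or any SPARQL endpoint) loaded with the dump; the endpoint URL is an assumption to adjust for your setup, and the query leans on WorldCat descriptions using the schema.org vocabulary:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Endpoint URL is an assumption; point it at wherever your store listens.
endpoint = SPARQLWrapper("http://localhost:8000/sparql/")
endpoint.setReturnFormat(JSON)

# Pull ten resource names from the loaded WorldCat dump.
endpoint.setQuery("""
    SELECT ?name
    WHERE { ?resource <http://schema.org/name> ?name }
    LIMIT 10
""")

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["name"]["value"])
```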
Why is it that those sceptical about a new technology resort, within a very few sentences, to the ‘show me the Killer App’ line? As if the appearance of a gold-star, bloggerati-approved example of something useful implemented in said technology is going to change their mind.
I have watched many flounder when they first try to get their head around describing the things they already know in this new Linked Data format, RDF. Just like moving house, we initially grasp for the familiar, and that might not always be helpful. This is where stepping back from the XML is a good idea. XML is only one encoding/transmission format for RDF.
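To see that the encoding really is separable from the data, here is a small sketch with rdflib serialising one and the same triple two ways; the triple itself is invented for the example:

```python
from rdflib import Graph, Literal, URIRef

g = Graph()
g.add((
    URIRef("http://example.org/book/1"),
    URIRef("http://schema.org/name"),
    Literal("An Example Title"),
))

# The same graph, two different encodings:
print(g.serialize(format="xml"))     # RDF/XML
print(g.serialize(format="turtle"))  # Turtle: same triple, friendlier syntax
```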
Ookaboo, "free pictures of everything on earth", have released nearly a million public domain and Creative Commons licensed stock images, mapped with precision to concepts instead of just words.
But there is more… They have released an RDF dump of the metadata behind the images, concept mappings, and links to concepts in Freebase and DBpedia.
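The shape of such mapping data might look something like the following sketch (all the Ookaboo URIs here are invented for illustration): an image linked to the concept it depicts, and the concept linked on to its DBpedia counterpart.

```python
from rdflib import Graph, Namespace, URIRef

OWL = Namespace("http://www.w3.org/2002/07/owl#")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
image = URIRef("http://example.org/ookaboo/image/42")         # invented
concept = URIRef("http://example.org/ookaboo/concept/paris")  # invented

# The image depicts a concept, not merely a keyword...
g.add((image, FOAF.depicts, concept))
# ...and the concept is mapped to its DBpedia equivalent.
g.add((concept, OWL.sameAs, URIRef("http://dbpedia.org/resource/Paris")))

print(g.serialize(format="turtle"))
```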
One phrase in particular leapt out at me when reading Karen Coyle’s Bibliographic Framework: RDF and Linked Data post a few days ago.
My message here is that we need to be creating data, not records, and that we need to create the data first, then build records with it for those applications where records are needed.
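One way to read that, sketched below with invented data: hold the statements themselves as the primary asset, then assemble a ‘record’ on demand as a projection over whatever a given application needs.

```python
from rdflib import Graph, Literal, Namespace, URIRef

SCHEMA = Namespace("http://schema.org/")
g = Graph()

book = URIRef("http://example.org/book/1")  # invented identifier
g.add((book, SCHEMA.name, Literal("An Example Title")))
g.add((book, SCHEMA.author, Literal("A. N. Author")))
g.add((book, SCHEMA.datePublished, Literal("1974")))

# A 'record' built from the data, only when and where one is needed.
record = {
    str(p).rsplit("/", 1)[-1]: str(o)
    for _, p, o in g.triples((book, None, None))
}
print(record)  # e.g. {'name': 'An Example Title', 'author': 'A. N. Author', ...}
```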