Ivan’s private site

March 1, 2013

RDFa 1.1, microdata, and turtle-in-HTML now in the core distribution of RDFLib

This has been in the works for a while, but it is done now: the latest version (3.4.0) of the Python RDFLib library has just been released, and it includes RDFa 1.1, microdata, and turtle-in-HTML parsers. In other words, the user can add structured data to an HTML file, and that will be parsed into RDF and added to an RDFLib Graph structure. This is a significant step, and thanks are due to Gunnar Aastrand Grimnes, who helped me add those parsers to the main distribution.

I wrote a blog post last summer on some of the technical details of those parsers; although there have been updates since then, essentially following the minor changes that the RDFa Working Group has defined for RDFa, as well as changes/updates of the microdata->RDF algorithm, the general approach described in that blog remains valid, and it is not necessary to repeat it here. For further details on these different formats, some of the useful links are:

Enjoy!

August 31, 2012

RDFa, microdata, turtle-in-HTML, and RDFLib

For those of us programming in Python, RDFLib is certainly one of the RDF packages of choice. Several years ago, when I developed a distiller for RDFa 1.0, some good souls picked the code up and added it to RDFLib as one of the parser formats. However, years have gone by, and they have seen the development of RDFa 1.1, of microdata, and also the specification of directly embedding Turtle into HTML. It is time to bring all these into RDFLib…

Some time ago I developed both a new version of the RDFa distiller, adapted to the RDFa 1.1 standard, and a microdata to RDF distiller, based on the Interest Group Note on converting microdata to RDF. Both of these were packages and applications on top of RDFLib, which is fine because they can be used with the deployed RDFLib installations out there. But, ideally, these should be retrofitted into the core of RDFLib; I have used the last few quiet days of the vacation period in August to do just that (thanks to Niklas Lindström and Gunnar Grimnes for some email discussions and for helping me through the hoops of RDFLib-on-GitHub). The results are in a separate branch of the RDFLib GitHub repository, under the name structured_data_parsers. Using these parsers, here is what one can do:

from rdflib import Graph

g = Graph()
# parse an SVG+RDFa 1.1 file and store the results in 'g':
g.parse(URI_of_SVG_file, format="rdfa1.1")
# parse an HTML+microdata file and store the results in 'g':
g.parse(URI_of_HTML_file, format="microdata")
# parse an HTML file for any structured content and store the results in 'g':
g.parse(URI_of_HTML_file, format="html")

The third option is interesting (thanks to Dan Brickley, who suggested it): this will parse an HTML file for any structured data, be it in microdata, RDFa 1.1, or in Turtle embedded in a <script type="text/turtle">...</script> tag.
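As a sketch of what that universal extractor would pick up, here is a made-up HTML file embedding Turtle directly (the document URI and title are invented for illustration):

```html
<html>
  <body>
    <h1>My document</h1>
    <!-- structured data embedded as Turtle; picked up by format="html" -->
    <script type="text/turtle">
      @prefix dc: <http://purl.org/dc/terms/> .
      <http://example.org/doc> dc:title "My document" .
    </script>
  </body>
</html>
```

Parsing this file with format="html" would add the dc:title triple to the graph, just as if it had been expressed in RDFa or microdata.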

The core of the RDFa 1.1 parser has gone through very thorough testing, using the extensive test suite on rdfa.info. This is less true for microdata, because there is no extensive test suite for that one yet (but the code is also simpler). On the other hand, any restructuring like this may introduce some extra bugs. I would very much appreciate it if interested geeks in the community could install and test it, and forward me the bugs that are still undeniably there… Note that the microdata->RDF mapping specification may still undergo some changes in the coming weeks/months (primarily catching up with some developments around schema.org); I hope to adapt the code to the changes quickly.

I have also made some arbitrary decisions here, which are minor, but arbitrary nevertheless. Any feedback on those is welcome:

  • I decided not to remove the old, 1.0 parser from this branch. Although the new version of the RDFa parser can switch into 1.0 mode if the necessary switches are in the content (e.g., @version or an RDFa 1.0 specific DTD), in the absence of those 1.1 will be used. As, unfortunately, 1.1 is not 100% backward compatible with 1.0, this may create some issues with deployed applications. This also means that the format="rdfa" argument will refer to 1.0 and not to 1.1. Am I too cautious here?
  • The format argument in parse can also hold media types. Some of those are fairly obvious: application/svg+xml, for example, will map to the new parser with RDFa 1.1. But what should be the default mapping for text/html? At present, it maps to the “universal” extractor (i.e., extracting everything).

Of course, at some point, this branch will be merged with the main branch of RDFLib, meaning that, eventually, this will be part of the core distribution. I cannot say at this point when this will happen, as I am not involved in the day-to-day management of the RDFLib development.

I hope this will be useful…

June 7, 2012

The long journey to RDFa 1.1

RDFa 1.1 Core, RDFa 1.1 Lite, and XHTML+RDFa 1.1 have just been published as Web Standards, i.e., W3C Recommendations, accompanied by a new edition of the RDFa Primer. Although it is “merely” an update of the previous RDFa 1.0 standard (published in 2008), it is a significant milestone nevertheless. RDFa 1.1 has restructured RDFa 1.0 in terms of the host languages it can be used with, and has also added some important features.

It has been a long journey. The development of RDFa (and I include RDFa 1.0 in this) was slowed down more by “social” than by technical issues. Indeed, RDFa is at the crossroads of two different communities which, alas!, had very little interaction before. As its name suggests, RDFa is of course closely related to RDF, i.e., to the communities related to the Semantic Web, Linked Data, RDF, etc. On the other hand, the very goal of RDFa is to add structured data to markup languages (primarily the HTML family, of course, but also SVG, Atom, etc.). This means that RDFa is also relevant to all these communities, often loosely referred to as the “Web Application” community. The interaction between these communities was not always easy, and was often characterized by misunderstandings, different engineering patterns, different concerns. To make things even more difficult, RDFa was also caught in the middle of the XHTML2 vs. HTML5 controversy: after all, the first drafts of RDFa were developed alongside XHTML2 and, although the current RDFa has long moved away from this heritage, the image of being part of XHTML2 stayed.

But all this is behind us now, and should be relegated to history. In my view the result, RDFa 1.1, reflects a good balance between the concerns and usage patterns of these communities; and that is what really counts. RDFa 1.1 allows the usage of prefixed abbreviations for URIs (so-called CURIEs) that the RDF community has been using and got used to for many years, but (in contrast to RDFa 1.0) their usage is now optional: authors may choose to use full URIs wherever and whenever they wish. By the way, prefixes for CURIEs are no longer defined through the @xmlns mechanism inherited from XML (this was probably the single biggest stumbling block around RDFa 1.0): instead, the usage of @xmlns is deprecated in favour of a dedicated @prefix attribute. Moreover, a number of well-known vocabularies have predefined prefixes; authors are not required to define prefixes for, say, the Dublin Core, FOAF, Schema.org, or Facebook’s Open Graph Protocol terms; they are automatically recognized. Finally, beyond these facilities with prefixed terms, RDFa 1.1 authors also have the possibility to define a vocabulary for a markup fragment (via the @vocab attribute) and forget about URIs and prefixes altogether: simple terms in property names or types will automatically be assigned URIs in that vocabulary. This is particularly important when RDFa is used with a single vocabulary (Schema.org or OGP usage comes to mind again).
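To illustrate (a made-up fragment; the subject URI and name are invented), the same triple can be written either with an explicit @prefix declaration or with @vocab alone:

```html
<!-- with a @prefix declaration (no @xmlns needed) -->
<div prefix="foaf: http://xmlns.com/foaf/0.1/">
  <p about="http://example.org/#me" property="foaf:name">Ivan Herman</p>
</div>

<!-- with @vocab: the plain term 'name' is resolved in the FOAF vocabulary -->
<div vocab="http://xmlns.com/foaf/0.1/">
  <p about="http://example.org/#me" property="name">Ivan Herman</p>
</div>
```

Both fragments yield the same foaf:name triple; the second one never mentions a prefix at all.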

The behaviour of @property has been made richer, which means that in many (most?) situations the structured data can be expressed with @property alone, without the usage of @rel or @rev (although using the latter is still possible). This increased simplicity is important for authors who are new to this world and may not, initially, grasp the difference between the classical usage of @property (i.e., literal objects) and @rel (i.e., URI references as objects). (Unfortunately, this change has created some corner-case backward incompatibilities with RDFa 1.0.)

There are also some other, though maybe less significant, improvements. For example, authors can also express (RDF) lists succinctly; this means that RDFa 1.1 can be used to describe, e.g., author lists for an article (where order counts a lot) or an OWL vocabulary. Also, an awkwardness in RDFa 1.0, related to XML Literals, has been removed.
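As a hypothetical example of the list feature (the article URI and names are invented), an ordered author list can be written with the @inlist attribute:

```html
<p about="http://example.org/article" prefix="dc: http://purl.org/dc/terms/">
  Authors, in order:
  <span property="dc:creator" inlist="">Alice Example</span> and
  <span property="dc:creator" inlist="">Bob Example</span>.
</p>
```

Instead of two separate dc:creator triples, this produces a single RDF list as the object of dc:creator, so the order of the authors is preserved.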

The structure of RDFa has also changed. Whereas the definition of RDFa 1.0 was closely intertwined with XHTML, RDFa 1.1 separates the core definition from what it calls “Host Languages”. This means that RDFa is defined in a way that it can be adapted to all types of XML languages as well as HTML5. There are separate specifications on how RDFa 1.1 applies to XHTML1 and for HTML5, as well as for XML in general; this means that RDFa 1.1 can also be used with SVG, Atom, or MathML, because those languages automatically inherit from the XML definitions.

Last but not least: the Working Group has also defined a separate “subset” language, called RDFa 1.1 Lite. This is not a separate RDFa 1.1 dialect, just an authoring subset of RDFa 1.1: one that makes it easy for authors to step into this world, without being forced to use all the possibilities of RDFa 1.1 (i.e., RDF). It can be expected that a large percentage of RDFa usage can be covered by this subset, and it also provides a good stepping stone when more complex structures (mixture of many different vocabularies, datatypes, more complex graph structures, etc.) are required.
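For illustration, a minimal RDFa 1.1 Lite fragment (the person and URL are invented) needs only a handful of attributes:

```html
<div vocab="http://schema.org/" typeof="Person" resource="#alice">
  <span property="name">Alice Example</span>,
  <span property="jobTitle">researcher</span>; see her
  <a property="url" href="http://example.org/alice">home page</a>.
</div>
```

Everything here (@vocab, @typeof, @resource, @property) is within the Lite subset, yet the fragment is also plain RDFa 1.1, so it can later be extended with prefixes, datatypes, etc., without rewriting.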

As I said, it has been a long journey. Many people were involved in the work, both in the Working Group and through comments coming from the public and from major potential users. But now that the result is there, I can safely say: it was worth the effort. Recent figures on the adoption of structured data on the Web (see, for example, the reports published at the LDOW 2012 Workshop recently by Peter Mika and Tim Potter, as well as by Hannes Mühleisen and Christian Bizer) can be summarized by a simple statement: structured data in Web pages is now mainstream, thanks to its adoption by search engines (i.e., Schema.org) and companies like Facebook. And RDFa 1.1 has a major role to play in this evolution.

If you are new to RDFa: the RDFa Primer is of course a good starting point, but it is well worth checking out (and possibly contributing to!) the rdfa.info web site, which contains references to tools and documents; you can also try out small RDFa snippets there. Enjoy!

April 24, 2012

Moved my RDFa/microdata python modules to github

Filed under: Code,Python,Work Related — Ivan Herman @ 12:37

In case you were using/downloading my python module for RDFa 1.1 or for Microdata: I have now moved away from the old CVS repository site, and moved to GitHub. The two modules are on the RDFLib/pyrdfa3 and RDFLib/pymicrodata repositories, respectively. Both of these modules are more or less final (there are still some testings happening for RDFa, but not much left) and I am just happy if others chime in in the future of these modules.

Although part of the RDFLib project on GitHub, the two modules are pretty much independent of the core RDFLib library, although they are built on top of it. I hope that, with the help of people who know the RDFLib internal structures better, both modules can become, eventually, part of the core. But this may take some time…

April 18, 2012

Structured Data in HTML in the mainstream

Filed under: Semantic Web,Work Related — Ivan Herman @ 8:31

As referred to in my previous blog on LDOW2012, Hannes Mühleisen and Chris Bizer, but also Peter Mika and Tim Potter, published some findings on structured data in HTML based on Web crawl results and analysis. Both Hannes’ and Peter’s papers are now online. Hannes and Chris based their results on CommonCrawl, whereas Peter and Tim rely on Bing.

Although there are some controversies as to the usability of these crawls as well as the interpretation of their results (see Martin Hepp’s mail, the answer by Peter Mika, as well as the resulting thread on the mailing list), I think what is really important is the big picture which emerges from both sets of results: no one can reasonably dispute the importance of structured data in HTML any more. Although I vividly remember a time when this was a matter of bitter discussions, I think we can put this issue behind us now. I do not think I can summarize it better than Peter did in another of his emails:

…both studies confirm that the Semantic Web, and in particular metadata in HTML, is taking on in major ways thanks to the efforts of Facebook, the sponsors of schema.org and many other individuals and organizations. Comparing to our previous numbers, for example we see a five-fold increase in RDFa usage with 25% of webpages containing RDFa data (including OGP), and over 7% of web pages containing microdata. These are incredibly impressive numbers, which illustrate that this part of the Semantic Web has gone mainstream.

December 16, 2011

Where we are with RDFa 1.1?


There has been a flurry of activities around RDFa 1.1 in the past few months. Although a number of blogs and news items have been published on the changes, all those have become “officialized” only in the past few days with the publication of the latest drafts, as well as with the publication of RDFa 1.1 Lite. It may be worth looking back at the past few months to have a clearer idea of what happened. I make references to a number of other blogs that were published in the past few months; the interested reader should consult those for details.

The latest official drafts for RDFa 1.1 were published in Spring 2011. However, a lot has happened since. First of all, the RDFWA Working Group, working on this specification, has received a significant amount of comments. Some of those were rooted in implementations and the difficulties encountered therein; some came from potential authors who asked for further simplifications. Also, the announcement of schema.org had an important effect: indeed, this initiative drew attention to the importance of structured data in Web pages, which also raised further questions on the usability of RDFa for that usage pattern. This came to the fore even more forcefully at the workshop organized by the stakeholders of schema.org in Mountain View. A new task force on the relationships of RDFa and microdata has been set up at W3C; beyond looking at the relationship of these two syntaxes, that task force also raised a number of issues on RDFa 1.1. These issues have been, by and large, accepted and handled by the Working Group (and reflected in the new drafts).

What does this mean for the new drafts? The bottom line: there have been some fundamental changes in RDFa 1.1. For example, profiles, introduced in earlier releases of RDFa 1.1, have been removed due to implementation challenges; however, the management of vocabularies has acquired an optional feature that helps vocabulary authors “bind” their vocabularies to other vocabularies, without introducing an extra burden on authors (see another blog for more details). Another long-standing issue was whether RDFa should include a syntax for ordered lists; this has been done now (see the same blog for further details).

A more recent important change concerns the usage of @property and @rel. Although the usage of these attributes was never a real problem for RDF-savvy authors (the former is for the creation of literal objects, whereas the latter is for URI references), they have proven to be a major obstacle for ‘lambda’ HTML authors. This issue came up quite forcefully at the schema.org workshop in Mountain View, too. After a long technical discussion in the group, the new version reduces the usage difference between the two significantly. Essentially, if, on the same element, @property is present together with, say, @href or @resource, and @rel or @rev is not present, a URI reference is generated as the object of the triple. I.e., when used on, say, a <link> or <a> element, @property behaves exactly like @rel. It turns out that this usage pattern is so widespread that it covers most of the important use cases for authors. The new version of the RDFa 1.1 Primer (as well as the RDFa 1.1 Core, actually) has a number of examples that show these. There are also some other changes related to the behaviour of @typeof in relation to @property; please consult the specification for these.
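A small made-up example of the new behaviour (prefix declarations and surrounding markup omitted for brevity):

```html
<!-- RDFa 1.1: @property next to @href, with no @rel present,
     generates a URI reference as the object of the triple... -->
<a about="#article" property="dc:source"
   href="http://example.org/original">the original article</a>

<!-- ...i.e., on such an element it behaves exactly as @rel did: -->
<a about="#article" rel="dc:source"
   href="http://example.org/original">the original article</a>
```

Both fragments produce the same triple, with the @href value as a URI object rather than a literal.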

The publication of RDFa 1.1 Lite was also a very important step. This defines a “subset” of the RDFa attributes that can serve as a guideline for HTML authors to express simple structured data in HTML without bothering about more complex features. This is the subset of RDFa that schema.org will “accept”, as an alternative to microdata, as a possible syntax for schema.org vocabularies. (There are some examples of what some schema.org examples look like in RDFa 1.1 Lite on a different blog.) In some sense, RDFa 1.1 Lite can be considered the equivalent of microdata, except that it leaves the door open for more complex vocabulary usage, mixture of different vocabularies, etc. (The HTML Task Force will soon publish a more detailed comparison of the different syntaxes.)

So here is, roughly, where we are today. The recent publications by the W3C RDFWA Working Group have, as I said, “officialized” all the changes that have been discussed since spring. The group decided not to publish a Last Call Working Draft, because the last few weeks of work in the HTML Task Force may reveal some new requirements; if not, the last round of publications will follow soon.

And what about implementations? Well, my “shadow” implementation of the RDFa distiller (which also includes a separate “validator” service) incorporates all the latest changes. I also added a new feature a few weeks ago, namely the possibility to serialize the output in JSON-LD (although this has become outdated a few days ago, due to some changes in JSON-LD…). I am not sure of the exact status of Gregg Kellogg’s RDF Distiller, but, knowing him, it is either already in line with the latest drafts or it is only a matter of a few days to be so. And there are surely more around that I do not know about.

This last series of publications have provided a nice closure for a busy RDFa year. I guess the only thing now is to wish everyone a Merry Christmas, a peaceful and happy Hanukkah, or other festivities you honor at this time of the year.  In any case, a very happy New Year!


April 1, 2011

2nd Last Call for RDFa 1.1

Filed under: Semantic Web,Work Related — Ivan Herman @ 2:58

The W3C RDFa Working Group published a “Last Call” for RDFa 1.1 back at the end of October last year. This was meant to be a “feature freeze” version and was asking for public comments. Well, the group received quite a number of those: lots of small things, requiring changes to the documents in many places to make them more precise even in various corner cases, and some more significant ones. In some ways, it shows that the W3C process works, ensuring quite an influence of the community on the final shape of the documents. Because of the many changes the group decided to re-issue a Last Call (yes, the jargon is a bit misleading here…), aimed at a last check before the document goes to its next phase on the road to becoming a standard. Almost all the changes are minor for users, though important for, e.g., implementers to ensure interoperability. “Almost all”, because there is one new and, I believe, very important though controversial feature, namely the so-called default profiles.

I have already blogged about profiles when they were first published back in April last year. In short, profile documents provide an indirection mechanism to define prefixes and terms for an RDFa source: publishers may collect all the prefixes they deem important for a specific application, and authors, instead of being required to define a whole set of prefixes in the RDFa file itself, can just refer to the profile file to have them all at their disposal. I think the profile feature was the one stirring the biggest interest in the RDFa 1.1 work: profiles are undeniably useful, and undeniably controversial… Indeed, in theory at least, profiles represent yet another HTTP round trip when extracting RDF from an RDFa file, which is never a good thing. But a good caching mechanism or other implementation tricks can greatly alleviate the pain… (By the way, the group has also created some guidelines for profile publishers to help implementers.)

This draft goes one step further by introducing default profiles. These are profiles just like any other, but they are defined with fixed URI-s (namely http://www.w3.org/profile/rdfa-1.1 for RDFa 1.1 in general, and, additionally, http://www.w3.org/profile/html-rdfa-1.1 for the various HTML variants) and the user does not have to declare them in an RDFa source. Which means that a very simple HTML+RDFa file of the sort:

<html>
  <body>
    <p about="xsd:maxExclusive" rel="rdf:type" resource="owl:DatatypeProperty">
      An OWL Axiom: "xsd:maxExclusive" is a Datatype Property in OWL.
    </p>
  </body>
</html>

(note the missing prefix declarations!) will produce the RDF triple that you might expect. Can’t be simpler, can it?

Why? Why was it necessary to introduce this? Well, experience shows that many HTML+RDFa authors forget to declare the prefixes. One can look, for example, at the pages that include Facebook’s Open Graph Protocol RDFa statements: although I do not have exact numbers, I would suspect that around 50% of these pages do not have them. That means that, strictly speaking, those statements cannot be interpreted as RDF triples. The Semantic Web community may ask, try to convince, beg, etc., the HTML authors (or the underlying tools) to do “the right thing”, and we certainly should continue doing so, but we also have to accept this reality. A default profile mechanism can alleviate that, thereby greatly extending the amount of triples that can become part of a Web of Data. And even for seasoned RDF(a) users, not having to declare anything for some of the common prefixes is a plus.

Of course, the big, nay, the BIG issue is: what prefixes and terms would those default profiles declare? What is the decision procedure? At this time, we do not have a final answer yet. It is quite obvious that all the vocabularies defined by W3C Recommendations and official Notes and that have a fixed prefix (most of them do) should be part of the list. We may want to add Member Submissions to this list. If you look at the default profile, these are already there in the first table (i.e., the code example above is safe). The HTML variant would add all the traditional @rel values, like license, next, previous, etc.

But what else? At the moment, the profiles include a set of prefixes and terms that are just there for testing purposes (although they do indicate a tendency), so do not take the default profile as the final content. For the HTML @rel values, we would, most probably, rely on any policy that the HTML5 Working Group will define eventually; the role of the HTML default profile will simply be to reflect those. That seems quite straightforward. However, the issue of default prefixes is clearly different. For those, the Working Group is contemplating two different approaches:

  1. Set up some sort of a registration mechanism, not unlike the xpointer registry. This would also include some accompanying mailing lists where objections can be raised against the inclusion of a specific prefix, etc.
  2. Try to get some information from search engines on the Semantic Web (Sindice, Yahoo!, anyone else?) that may provide a list of, say, the top 20 prefixes as used on the Semantic Web. Such a list would reflect the real usage of vocabularies and prefixes. (We still have to see whether this is information these engines can provide or not.)

At this moment it is not yet clear which way is realistic. Personally, I am more in favour of the second approach (if technically feasible), but the end result may be different; this is a policy that W3C will have to set up.

Apart from the content, another issue is the change mode and frequency of the default profiles. First of all, the set of default prefixes can only grow. I.e., once a prefix has made it onto the default profile, it has to stay there with an unchanged URI. That is obviously important to ensure stability. I.e., new prefixes coming to the fore by virtue of being used by the community can be added to the set, but no prefix can be removed. As for the frequency: a balance has to be found between stability, i.e., that RDFa processors can rely (e.g., for caching) on a not-too-frequent change of the default profiles, and relevance, i.e., that new vocabularies can find their way into the set of default prefixes. Again, my personal feeling is that an update of the profiles once every 6 months, or even once a year, might strike a good balance here. To be decided.

As before, all comments are welcome but, again as before, I would prefer if you sent those comments to the RDFa WG’s mailing list rather than commenting on this blog: public-rdfa-wg@w3.org (see also the archives).

Finally: I have worked on a new version of my RDFa distiller to include all the 1.1 features. This version of the distiller is now public, so you can try out the different new features. Of course, it is still not a final release, there are bugs, so…


October 27, 2010

Publication of the Last Call for RDFa Core 1.1

The W3C RDFa Working Group has just published the “Last Call Working Draft” for RDFa Core 1.1. As Manu Sporny, the co-chair of the group, said in his tweet, this W3C jargon is equivalent to a “feature freeze”. I.e., the group does not know of any outstanding technical issues or of missing features that it would reasonably plan to add. Put another way, this is the last round of commenting before proceeding to final implementation testing and, hopefully, to a final W3C Standard. I.e., Last Call does not mean that the group takes no more comments; on the contrary, technical comments are very welcome and necessary to make sure that the final outcome is correct. Please send your comments to the group’s mailing list: public-rdfa-wg@w3.org (there is also a public archive).

Although lots of things have been discussed in the past few months (i.e., since the last draft published in August), not many things have significantly changed, in fact. Most of the changes are editorial, making the text clearer, more precise, etc. (You can look at the “diff” file, if you are interested.) This document is for the Core, i.e., the generic RDFa processing that can be used for any DOM. A similar document for XHTML+RDFa 1.1 is expected to be published in a few days by the same Working Group, and one for HTML5+RDFa 1.1 by the HTML Working Group.

I have also worked, in parallel to the specification work, on a modified version of the RDFa distiller. While the “official” service remains unchanged and relies on the current RDFa Recommendation, there is now a “shadow” version that relies on RDFa 1.1. The underlying code has undergone some cleanups beyond the adaptation to RDFa 1.1, so I am sure there are bugs…

Finally, a blatant self-promotion: Stéphane Corlosquet, Lin Clark and I will give a tutorial at the upcoming ISWC conference in Shanghai on RDFa and Drupal. The RDFa part relies on 1.1… (There are links to the slides on the page but you do not expect us not to touch them any more before the tutorial itself, do you? So make sure you look at them again after the event…)

August 3, 2010

New RDFa Core 1.1 and XHTML+RDFa 1.1 drafts

Filed under: Semantic Web,Work Related — Ivan Herman @ 20:49

W3C has just published updated drafts of RDFa Core 1.1 and XHTML+RDFa 1.1. These are “just” new heartbeat documents, meaning that they are not fundamentally new (the first drafts of these documents were published last April) but not yet “Last Call” documents, i.e., the group does not yet consider the specification work finished. Although… in fact, it is not far from that point. The WG has spent the last few weeks getting through open issues, and not many are left open at this moment.

So what has changed since my last blog on the subject, where I introduced the new features compared to RDFa 1.0? In fact, nothing spectacular. Lots of minor clarifications to make things more precise. There has been a change in the treatment of XML Literals: whereas, in RDFa 1.0, XML Literals are automatically generated any time XML markup appears in the text, RDFa 1.1 explicitly requires a corresponding datatype specification; otherwise a plain literal is created in RDF. (This is the only backward incompatibility with RDFa 1.0, as foreseen by the charter.)
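A sketch of the difference (the ex: prefix and property are invented; prefix declarations omitted):

```html
<!-- RDFa 1.1: an explicit datatype is needed to get an XML Literal -->
<p property="ex:note" datatype="rdf:XMLLiteral">
  A note with <em>embedded</em> markup.
</p>

<!-- without the datatype, RDFa 1.1 produces a plain literal,
     "A note with embedded markup." (RDFa 1.0 would have produced
     an XML Literal here, markup included) -->
<p property="ex:note">
  A note with <em>embedded</em> markup.
</p>
```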

Probably the most important addition to RDFa Core was triggered by a comment of Jeni Tennison (though the problem was raised by others, too). Jeni emphasized a slightly dangerous aspect of the profile mechanism in RDFa 1.1. To remind the reader: using the @profile attribute, the author of an RDFa 1.1 file can refer to another file somewhere on the Web; that “profile file” may include, in one place, prefix declarations, term specifications, and (this is also new in this version!) a default term URI (see again my earlier blog for the details). The question is: what happens if the profile file is unreachable? The danger is that an RDFa 1.1 processor would possibly generate wrong triples, which is actually worse than not generating triples at all. The decision of the group (as Jeni actually proposed) was that the whole DOM subtree, i.e., all triples, would be dropped starting with the element with the unreachable profile.

The profile mechanism has stirred quite some interest both among users of RDFa and elsewhere. Martin Hepp was probably the first to publish an RDFa 1.1 profile for GoodRelations and related vocabulary prefixes at http://www.heppnetz.de/grprofile/. To use, essentially, his example, this means that one can write:

<div profile="http://www.heppnetz.de/grprofile/">
  <span about="#company" typeof="gr:BusinessEntity">
    <span property="gr:legalName">Hepp's bakery</span>,
    see also the <a rel="rdfs:seeAlso" href="http://example.org/bakery">
    home page of the bakery.</a>
  </span>
</div>

Because Martin’s profile includes a prefix definition for rdfs, too (alongside a number of other prefixes), the profile definition replaces a whole series of namespace declarations that were necessary in RDFa 1.0. I would guess that similar profile files, with term or prefix definitions, will be defined for foaf or for Dublin Core, too. Other obvious candidates for such profile definitions are the “big” users of RDFa information like Facebook or Google, who can specify the vocabularies they understand, i.e., index. (This did come up at the W3C camp in Raleigh, during the exciting discussion on the Facebook vocabulary.) Finally, another interesting discussion generated by RDFa’s profile mechanism occurred at the “RDF Next” Workshop in Palo Alto a few weeks ago: some participants proposed to consider a similar mechanism in a next version of Turtle (I must admit this came as a surprise, although it does make sense…)

As for implementations of profiles: profiles are defined in such a way that an RDFa processor can recursively invoke itself to extract the necessary information for processing; indeed, RDFa is also used to encode the prefix, term, etc., definitions (Turtle or RDF/XML can also be used, but RDFa is the only required format). This means that an RDFa processor does not have to implement a different parser to handle the profile information. My “shadow” RDFa distiller implements this (as well as all RDFa 1.1 features), and it was not complicated. It actually implements a caching mechanism, too: some well-known and well-published profiles can be stored locally so that the distiller does not go through an extra HTTP request all the time (yes, I know, this may lead to inconsistencies in theory, but if such a cache is refreshed regularly via, say, a crontab job, it should be o.k. in practice). At the moment the content of that cache is of course curated by hand. (The usual caveat applies: this is code in development, with bugs, with possibly frequent and unannounced changes…) You are all welcome to try the shadow distiller to see what RDFa is capable of. Of course, other RDFa 1.1 implementations are in the making. If you have one, it would be good to know about it; the Working Group is constantly looking for implementation experiences…
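The caching idea can be sketched in a few lines of Python (this is not the distiller's actual code; the fetch function and the time-to-live policy are invented for illustration):

```python
import time

class ProfileCache:
    """A minimal profile cache sketch: fetched profile data (prefix/term
    mappings) is kept for a fixed time-to-live, so the processor does not
    issue an HTTP request for every single document it processes."""

    def __init__(self, fetch, ttl=24 * 3600):
        self.fetch = fetch    # function mapping a profile URI to its mappings
        self.ttl = ttl        # refresh interval, in seconds
        self._store = {}      # profile URI -> (timestamp, mappings)

    def get(self, uri):
        entry = self._store.get(uri)
        if entry is not None and time.time() - entry[0] < self.ttl:
            return entry[1]                 # still fresh: no network access
        mappings = self.fetch(uri)          # e.g., run the RDFa parser itself
        self._store[uri] = (time.time(), mappings)
        return mappings
```

A crontab-style periodic refresh then amounts to simply clearing `_store` (or choosing a suitable `ttl`) at regular intervals.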

June 8, 2010

RDFa API draft

A few weeks ago I blogged about the publication of the RDFa 1.1 drafts. An essential additional piece of the RDFa puzzle has now been published (as a First Public Working Draft): the RDFa API. (Note that there is no reference to “1.1” in the title: the API is equally valid for versions 1.0 and 1.1 of RDFa. More about this below.)

Defining an API for RDFa was a slightly complex exercise. Indeed, this API has very different constituencies, i.e., possible user communities, and these communities have different needs and background knowledge. On the one hand, you have the (for lack of a better word) “RDF” community, i.e., people who are familiar and comfortable with RDF concepts and are used to handling triples, triple stores, iterating through triples, etc. Essentially, people who either already have a background in using, say, Jena or RDFLib, or who can grasp these concepts easily thanks to that background. On the other hand, there are also people coming from the more traditional Web application programmers’ community, who may not be that familiar with RDF, and who are looking for an easy way to get to the additional data that RDFa provides in the (X)HTML content. Providing a suitable level of abstraction for both of these communities took quite a while, and this is the reason why the RDFa API FPWD could not be published together with RDFa 1.1.

But we have it now. I will give a few examples below of how this API can be used; look at the draft for more details (there are more examples there; the examples in this blog actually use some of the examples from the document!). The usual caveat applies: this is a working draft, with new releases to come; comments are more than welcome! (Please send them to public-rdfa-wg@w3.org, rather than replying to this blog.)

The “non-RDF” user

Let me use the same RDFa example as in the previous blog:

<div prefix="relation: http://vocab.org/relationship/ foaf: http://xmlns.com/foaf/0.1/">
   <p about="http://www.ivan-herman.net/foaf#me"
      rel="relation:spouse" resource="http://www.ivan-herman.net/Eva_Boka"
      property="foaf:name">Ivan Herman</p>
</div>

Encoding the (RDF) triples:

<http://www.ivan-herman.net/foaf#me> <http://vocab.org/relationship/spouse> <http://www.ivan-herman.net/Eva_Boka> .
<http://www.ivan-herman.net/foaf#me> <http://xmlns.com/foaf/0.1/name> "Ivan Herman" .

So how can one get to these using the API? The simplest approach is to get the collection of property-value pairs assigned to a specific subject. Using the API, one can say:

>> var ivan = document.getItemBySubject("http://www.ivan-herman.net/foaf#me")

yielding an instance of what the document calls a “Property Group”. This object contains all the property-value pairs (well, in RDF terms, the predicate-object pairs) that are associated with http://www.ivan-herman.net/foaf#me. It is then possible to find out what properties are defined for a subject:

>> print(ivan.properties)
<http://vocab.org/relationship/spouse>
<http://xmlns.com/foaf/0.1/name>

It is also possible to relate back to the DOM node that holds the subject via ivan.origin, and to use the get method to get the values of a specific property, i.e.:

>> print(ivan.get("http://xmlns.com/foaf/0.1/name"))
Ivan Herman

Note that, at this level, the user does not really have to understand the details of RDF, predicates, etc.; what one gets back is, essentially, a literal or a representation of an IRI, and that is about it. Simple, isn’t it?
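To make the idea concrete, here is a minimal, self-contained mock of such a property group in plain Javascript. To be clear: the names and shapes below are illustrative only (the draft, e.g., exposes properties as an attribute rather than a method); this is a sketch of the concept, not the API itself.

```javascript
// Illustrative mock of a "Property Group": all predicate-object pairs
// that share one subject, with simple accessors.
function PropertyGroup(subject, pairs) {
  this.subject = subject;
  this.pairs = pairs; // { predicateIRI: [values] }
}
PropertyGroup.prototype.properties = function () {
  return Object.keys(this.pairs);
};
PropertyGroup.prototype.get = function (property) {
  var values = this.pairs[property];
  return values ? values[0] : null; // first value, or null if absent
};

// The pairs correspond to the two triples extracted from the example above.
var ivan = new PropertyGroup("http://www.ivan-herman.net/foaf#me", {
  "http://vocab.org/relationship/spouse": ["http://www.ivan-herman.net/Eva_Boka"],
  "http://xmlns.com/foaf/0.1/name": ["Ivan Herman"]
});
console.log(ivan.get("http://xmlns.com/foaf/0.1/name")); // prints: Ivan Herman
```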

Of course, there is slightly more to it. First of all, finding the property groups only through their subjects may be too restrictive. The API therefore includes similar methods to search through the content via properties (“return all the property groups whose subject has a specific property”) or via type (i.e., rdf:type in RDF terms, @typeof in RDFa terms). One can also search for DOM Nodes rather than for Property Groups. E.g., using

>> document.getElementsByType("http://xmlns.com/foaf/0.1/Person")

one can get hold of the elements that are used for persons (e.g., to highlight these nodes or their corresponding subtrees on the screen by manipulating their CSS style). I used full URIs everywhere in the examples; it is also possible to set CURIE-like prefixes to make the coding a bit simpler and less error-prone.
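The prefix mechanism itself is easy to picture: a small helper that maps a prefixed name to its full IRI. The following is a sketch only; the prefix table and the function name are made up for illustration, not taken from the draft.

```javascript
// Illustrative CURIE-like expansion: "foaf:Person" becomes the full IRI,
// while strings with an unknown prefix (including full IRIs) pass through.
var prefixes = { foaf: "http://xmlns.com/foaf/0.1/" };
function expand(curie) {
  var parts = curie.split(":");
  var ns = prefixes[parts[0]];
  return ns ? ns + parts.slice(1).join(":") : curie;
}
console.log(expand("foaf:Person")); // prints: http://xmlns.com/foaf/0.1/Person
```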

And that is really it for simple applications. Note that much RDFa content (e.g., Facebook‘s Open Graph protocol, or Google‘s snippets) includes only a few simple statements, whose management is perfectly possible with these few methods.

The RDF user

Of course, RDF users may want more and, sometimes, the complexity of the RDF content does require more complex methods. The RDFa API spec does indeed provide a more complex set of possibilities.

The underlying infrastructure for the API is based on the abstract concept of a store. One can create such a store, or can get hold of the default one via:

>> var store = document.data.store

The default store contains the triples retrieved from the current page. It is then possible to iterate through the triples, possibly via an extra, user-specified filter. Furthermore, it is possible to create new (RDF) triples and add them to the store. One can add converter methods that control how typed literals are mapped onto, say, Javascript objects (although an implementation will provide a default set for you). In some ways, this is fairly standard RDF programming stuff, yielding, for example, the following code:

>> triples = document.data.store.filter(); // get all the triples
>> for( var i=0; i < triples.size; i++ ) {
>>   print(triples[i]);
>> }
<http://www.ivan-herman.net/foaf#me> <http://vocab.org/relationship/spouse> <http://www.ivan-herman.net/Eva_Boka>
<http://www.ivan-herman.net/foaf#me> <http://xmlns.com/foaf/0.1/name> "Ivan Herman"
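For readers without an implementation at hand, the store-and-filter idea can be mimicked in a few lines of self-contained Javascript. Again, this is a sketch with invented names, not the draft API (which, among other things, defines its own triple and filter interfaces):

```javascript
// Illustrative triple store: filter() with no argument returns everything,
// filter(test) returns only the triples matching a user-supplied predicate.
function Triple(s, p, o) { this.s = s; this.p = p; this.o = o; }
Triple.prototype.toString = function () {
  return "<" + this.s + "> <" + this.p + "> " + this.o;
};

function Store() { this.triples = []; }
Store.prototype.add = function (t) { this.triples.push(t); };
Store.prototype.filter = function (test) {
  return test ? this.triples.filter(test) : this.triples.slice();
};

// The two triples extracted from the running example.
var store = new Store();
store.add(new Triple("http://www.ivan-herman.net/foaf#me",
                     "http://vocab.org/relationship/spouse",
                     "<http://www.ivan-herman.net/Eva_Boka>"));
store.add(new Triple("http://www.ivan-herman.net/foaf#me",
                     "http://xmlns.com/foaf/0.1/name",
                     "\"Ivan Herman\""));

// A user-specified filter: keep only the foaf:name triples.
var names = store.filter(function (t) {
  return t.p === "http://xmlns.com/foaf/0.1/name";
});
console.log(names.length); // prints: 1
```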

There is one aspect that is very well worth emphasizing. The parser is conceptually separated from the store it feeds (similarly to, say, RDFLib). What this means is that, though an implementation provides a default parser for the document, it is perfectly possible to add another parser that parses the same document and puts its triples into the same store. The RDFa API document does not require the parser to be an RDFa 1.1 parser; one can create a separate store and use, for example, an RDFa 1.0 parser. But much more interesting is the fact that one can also add, say, an hCard microformat parser that produces RDF triples. The triples may be added to the same store, essentially merging the underlying RDF graphs. E.g., one can have the following:

>> store = document.data.store; // by default, this store has all the RDFa 1.1 content
>> document.data.createParser("hCard",store).parse();

merging the RDFa and the hCard terms in one triple store. I think the possibility of using the RDFa API to, at last, merge the different RDF-interpretable syntaxes in this area in one place is very powerful. (It must be noted that the details of the parser interface are still changing; e.g., it is not yet clear how the various parsers should be identified. This is for one of the next releases…)
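The merge semantics are easy to simulate. In the sketch below (invented names again; the real parser interface is precisely the part still in flux, and the sample triples are made up), two independent “parsers” write into one shared store, and re-adding an identical triple is a no-op, which is exactly graph merging:

```javascript
// Illustrative sketch of parser/store separation: any number of parsers
// can push triples into one store, merging the underlying graphs.
function Store() { this.triples = {}; }
Store.prototype.add = function (s, p, o) {
  // keying on the full triple makes duplicates collapse on merge
  this.triples[s + " " + p + " " + o] = true;
};
Store.prototype.size = function () { return Object.keys(this.triples).length; };

// A "parser" here is just anything that turns its own input syntax
// into triples and pushes them into a store.
function rdfaParser(store) {
  store.add("http://www.ivan-herman.net/foaf#me",
            "http://xmlns.com/foaf/0.1/name", '"Ivan Herman"');
}
function hcardParser(store) {
  store.add("http://www.ivan-herman.net/foaf#me",
            "http://xmlns.com/foaf/0.1/name", '"Ivan Herman"'); // duplicate
  store.add("http://www.ivan-herman.net/foaf#me",
            "http://xmlns.com/foaf/0.1/nick", '"ivan"');        // new triple
}

var store = new Store();
rdfaParser(store);  // the default RDFa content
hcardParser(store); // extra microformat content, merged into the same graph
console.log(store.size()); // prints: 2
```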

As I said, comments are more than welcome, there is still work to do, but the first, extremely important step has been made!
