Ivan’s private site

March 1, 2013

RDFa 1.1, microdata, and turtle-in-HTML now in the core distribution of RDFLib

This has been in the works for a while, but it is done now: the latest version (3.4.0) of the Python RDFLib library has just been released, and it includes RDFa 1.1, microdata, and turtle-in-HTML parsers. In other words, the user can add structured data to an HTML file, and that will be parsed into RDF and added to an RDFLib Graph structure. This is a significant step, and thanks are due to Gunnar Aastrand Grimnes, who helped me add those parsers to the main distribution.

I wrote a blog last summer on some of the technical details of those parsers; although there have been updates since then, essentially following the minor changes that the RDFa Working Group has defined for RDFa, as well as changes/updates to the microdata->RDF algorithm, the general approach described in that blog remains valid, and there is no need to repeat it here. For further details on these different formats, see the relevant specifications.


November 6, 2012

RDFa 1.1 and microdata now part of the main branch of RDFLib

Filed under: Code,Python,Semantic Web,Work Related — Ivan Herman @ 21:34

A while ago I wrote about the fact that I have adapted my RDFa and microdata parsers to RDFLib. Although I have done some work on them since then, nothing really spectacular happened (e.g., I have updated the microdata part to the latest version of the microdata->RDF conversion note, and I have also gone through the tedious exercise of making the modules usable for Python3).

Nevertheless, a significant milestone has now been reached, though not by me but rather by Gunnar Aastrand Grimnes, who “maintains” RDFLib: the separate branch for RDFa and microdata has been merged into the master branch of RDFLib on GitHub. So here we are; whenever the next official release of RDFLib comes, these parsers will be part of it…

August 31, 2012

RDFa, microdata, turtle-in-HTML, and RDFLib

For those of us programming in Python, RDFLib is certainly one of the RDF packages of choice. Several years ago, when I developed a distiller for RDFa 1.0, some good souls picked the code up and added it to RDFLib as one of the parser formats. However, years have gone by, and they have seen the development of RDFa 1.1 and microdata, and also the specification of directly embedding Turtle into HTML. It is time to bring all these into RDFLib…

Some time ago I developed both a new version of the RDFa distiller, adapted to the RDFa 1.1 standard, and a microdata-to-RDF distiller, based on the Interest Group note on converting microdata to RDF. Both of these were packages and applications on top of RDFLib, which is fine, because they can be used with the deployed RDFLib installations out there. But, ideally, these should be retrofitted into the core of RDFLib; I used the last few quiet days of the vacation period in August to do just that (thanks to Niklas Lindström and Gunnar Grimnes for some email discussions and for helping me through the hoops of RDFLib-on-GitHub). The results are in a separate branch of the RDFLib GitHub repository, under the name structured_data_parsers. Using these parsers, here is what one can do:

from rdflib import Graph

g = Graph()
# parse an SVG+RDFa 1.1 file and store the results in 'g':
g.parse(URI_of_SVG_file, format="rdfa1.1")
# parse an HTML+microdata file and store the results in 'g':
g.parse(URI_of_HTML_file, format="microdata")
# parse an HTML file for any structured content and store the results in 'g':
g.parse(URI_of_HTML_file, format="html")

The third option is interesting (thanks to Dan Brickley who suggested it): this will parse an HTML file for any structured data, let that be in microdata, RDFa 1.1, or in Turtle embedded in a <script type="text/turtle">...</script> tag.
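To give a feel for what the turtle-in-HTML part of that “universal” extraction involves, here is a minimal, hypothetical sketch (not the actual RDFLib code) of pulling the content of <script type="text/turtle"> elements out of an HTML page, using only Python’s standard library; a real parser would then feed each extracted block to a Turtle parser:

```python
from html.parser import HTMLParser

class TurtleScriptExtractor(HTMLParser):
    """Collect the textual content of <script type="text/turtle"> elements."""
    def __init__(self):
        super().__init__()
        self.in_turtle = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "text/turtle":
            self.in_turtle = True
            self.blocks.append("")

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_turtle = False

    def handle_data(self, data):
        if self.in_turtle:
            # accumulate the raw Turtle text of the current script element
            self.blocks[-1] += data

html_doc = """<html><body>
<script type="text/turtle">
@prefix ex: <http://example.org/> .
ex:a ex:b ex:c .
</script>
</body></html>"""

extractor = TurtleScriptExtractor()
extractor.feed(html_doc)
print(extractor.blocks[0])
```

Each collected block could then be handed to g.parse(..., format="turtle") to merge the embedded triples into the Graph.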

The core of the RDFa 1.1 parser has gone through very thorough testing, using the extensive test suite on rdfa.info. This is less true for microdata, because there is no extensive test suite for that one yet (but the code is also simpler). On the other hand, any restructuring like this may introduce some extra bugs. I would very much appreciate it if interested geeks in the community could install and test it, and forward me the bugs that are undeniably still there… Note that the microdata->RDF mapping specification may still undergo some changes in the coming few weeks/months (primarily catching up with some developments around schema.org); I hope to adapt the code to such changes quickly.

I have also made some arbitrary decisions here, which are minor, but arbitrary nevertheless. Any feedback on those is welcome:

  • I decided not to remove the old, 1.0 parser from this branch. Although the new version of the RDFa parser can switch into 1.0 mode if the necessary switches are in the content (e.g., @version or an RDFa 1.0 specific DTD), in the absence of those, 1.1 will be used. As, unfortunately, 1.1 is not 100% backward compatible with 1.0, this may create some issues with deployed applications. This also means that the format="rdfa" argument will refer to 1.0 and not to 1.1. Am I too cautious here?
  • The format argument in parse can also hold media types. Some of those are fairly obvious: application/svg+xml, for example, will map to the new parser with RDFa 1.1. But what should the default mapping for text/html be? At present, it maps to the “universal” extractor (i.e., extracting everything).
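The media-type question above boils down to a small lookup table. Here is a hypothetical sketch of that kind of mapping; the table below is illustrative, not RDFLib’s actual registry:

```python
# Hypothetical media-type -> parser-format mapping, mirroring the choices
# discussed above (these names are illustrative assumptions).
MEDIA_TYPE_TO_FORMAT = {
    "application/svg+xml":   "rdfa1.1",
    "application/xhtml+xml": "rdfa1.1",
    "text/html":             "html",      # the "universal" extractor
}

def format_for(media_type, default="html"):
    """Return the parser format for a media type, defaulting to the universal one."""
    return MEDIA_TYPE_TO_FORMAT.get(media_type, default)

print(format_for("application/svg+xml"))   # rdfa1.1
print(format_for("text/html"))             # html
```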

Of course, at some point this branch will be merged with the main branch of RDFLib, meaning that, eventually, this will be part of the core distribution. I cannot say when this will happen, as I am not involved in the day-to-day management of RDFLib development.

I hope this will be useful…

April 24, 2012

Moved my RDFa/microdata python modules to github

Filed under: Code,Python,Work Related — Ivan Herman @ 12:37

In case you were using/downloading my Python modules for RDFa 1.1 or for microdata: I have now moved away from the old CVS repository site to GitHub. The two modules are in the RDFLib/pyrdfa3 and RDFLib/pymicrodata repositories, respectively. Both of these modules are more or less final (there is still some testing happening for RDFa, but not much left), and I am just happy if others chime in on the future of these modules.

Though part of the RDFLib project on GitHub, the two modules are pretty much independent of the core RDFLib library, although they are built on top of it. I hope that, with the help of people who know the RDFLib internal structures better, both modules can eventually become part of the core. But this may take some time…

September 29, 2009

OWL 2 RL closure

OWL 2 has just been published as a Proposed Recommendation (yay!), which means, in layman’s terms, that the technical work is done, and it is up to the membership of W3C to accept it as a full-blown Recommendation.

As I already blogged before, I did some implementation work on a specific piece of OWL 2, namely the OWL 2 RL Profile. (I have also blogged about OWL 2 RL and its importance before; nothing to repeat here.) The implementation itself is not really optimized, and it would probably not stand a chance in any large-scale deployment (the reader may want to look at the OWL 2 implementation report for other alternatives). But I can hope that the resulting service can be useful in getting a feel for what OWL 2 RL can give you: by just adding a few triples into the text box you can see what OWL 2 RL means. This is, by the way, an implementation of the OWL 2 RL rule set, which means that it can also accept triples that are not mandated by the Direct Semantics of OWL 2 (a.k.a. OWL 2 DL). Put another way, it is an implementation of a small portion of OWL 2 Full.

The core of my implementation turned out to be really straightforward: a forward chaining structure directly encoded in Python. I use RDFLib to handle the RDF triples and the triple store. Each triple in the RDF Graph is considered and compared to the premises of the rules; if there is a match, then new triples are added to the Graph. (Well, most of the rules contain several triples to match, and the usual approach is to pick one and explore the Graph deeper to check for additional matches. Which one to pick is important, though; it may affect the overall speed.) If, through such a cycle, no additional triples are added to the Graph, then we are done: the “deductive closure” of the Graph has been calculated. The rules of OWL 2 RL have been carefully chosen so that no new resources are added to the Graph (only new triples), ie, this process eventually stops.
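The forward-chaining loop above can be sketched in a few lines of plain Python. This is a hypothetical toy, not my actual implementation: triples are plain string tuples rather than RDFLib terms, and only two of the OWL 2 RL rules are encoded (subclass membership propagation, cf. cax-sco, and subclass transitivity, cf. scm-sco):

```python
def closure(triples):
    """Iterate the rules until no new triple is produced (the deductive closure)."""
    graph = set(triples)
    while True:
        new = set()
        for (s, p, o) in graph:
            if p == "rdfs:subClassOf":
                for (s2, p2, o2) in graph:
                    # cax-sco: ?c1 subClassOf ?c2, ?x type ?c1 => ?x type ?c2
                    if p2 == "rdf:type" and o2 == s:
                        new.add((s2, "rdf:type", o))
                    # scm-sco: subClassOf is transitive
                    if p2 == "rdfs:subClassOf" and s2 == o:
                        new.add((s, "rdfs:subClassOf", o2))
        if new <= graph:        # nothing added in this cycle: fixpoint reached
            return graph
        graph |= new

g = closure({
    ("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:rex", "rdf:type", "ex:Dog"),
})
print(("ex:rex", "rdf:type", "ex:Animal") in g)   # True
```

Note how termination falls out of the rule design: the rules only ever combine resources already in the Graph, so the set of derivable triples is finite.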

The rules themselves are usually simple. Although it is possible, and probably more efficient, to encode the whole process using some sort of rule engine (I know of implementations based on, eg, Jena’s rules or Jess), one can simply encode the rules using the usual conditional constructs of the programming language. The number of rules is relatively high, but nothing that a good screen editor would not manage with copy-paste. There were only a few rules that required somewhat more careful coding (usually to take care of lists) or many searches through the graph, like, for example, the rule for property chains (see rule prp-spo2 in the rule set). It is also important to note that the high number of rules does not really affect the efficiency of the final system; if no triple matches a rule then, well, it just does not fire. There is no side effect from the mere existence of an unused rule.

So is it all easy and rosy? Not quite. First of all, this implementation is of course simplistic insofar as it generates all possible deduced triples, which include a number of trivial triples (like ?x owl:sameAs ?x for all possible resources). That means that the resulting graph becomes fairly big even if the (optional) axiomatic triples are not added. If the OWL 2 RL process is bound to a query engine (eg, the new version of SPARQL will, hopefully, give a precise specification of what it means to have OWL 2 RL reasoning on the data set prior to a SPARQL query), then many of these trivial triples could be generated at query time only, thereby avoiding an extra load on the database. Well, that is one place where a proof-of-concept and simple implementation like mine loses against a more professional one :-)

The second issue was the contrast between RDF triples and “generalized” RDF triples, ie, triples where literals can appear in subject positions and bnodes can appear as properties. OWL 2 explicitly says that it works with generalized triples and the OWL 2 RL rule set also shows why that is necessary. Indeed, consider the following set of triples:

ex:X rdfs:subClassOf [
  a owl:Restriction;
  owl:onProperty [ owl:inverseOf ex:p ];
  owl:allValuesFrom ex:A
] .
This is a fairly standard “idiom” even for simple ontologies: one wants to restrict, so to speak, the subjects instead of the objects using an OWL property restriction. In other words, that restriction, combined with

ex:x rdf:type ex:X .
ex:y ex:p ex:x .

should yield

ex:y rdf:type ex:A .

Well, this deduction would not occur through the rule set if non-generalized RDF triples were used. Indeed, the inverse of ex:p is a blank node, ie, using it as the property of a triple is not legal; but using that blank node to denote a property is necessary for the full chain of deductions. In other words, to get that deduction to work properly using RDF and rules, the author of the vocabulary would have to give an explicit URI to the inverse of ex:p. Possible, but slightly unnatural. If generalized triples are used, then the OWL 2 RL rules yield the proper result.

It turns out that, in my case, having bnodes as properties was not really an issue, because RDFLib could handle that directly (is that a bug in RDFLib?). But similar, though slightly more complex or even pathological, examples can be constructed involving literals in subject positions, and that was a problem, because RDFLib refused to handle those triples. What I had to do was exchange each literal in the graph for a new bnode, perform all the deductions using those, and exchange the bnodes “back” for their original literals at the end. (This mechanism is not my invention; it is actually described in the RDF Semantics document, in the section on datatype entailment rules.) B.t.w., the triples returned by the system are all “legal” triples; generalized triples play a role during the deduction only (and illegal triples are filtered out at output).
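The literal-to-bnode exchange can be sketched as a simple round trip. This is a hypothetical illustration of the idea only (literals are crudely modeled as strings starting with a quote, rather than as RDFLib Literal objects):

```python
import itertools

_counter = itertools.count()

def swap_out(triples):
    """Replace each distinct literal with a fresh bnode; return the mapping too."""
    lit2bnode = {}
    def sub(term):
        if isinstance(term, str) and term.startswith('"'):   # crude "is a literal" test
            if term not in lit2bnode:
                lit2bnode[term] = "_:lit%d" % next(_counter)
            return lit2bnode[term]
        return term
    swapped = {(sub(s), sub(p), sub(o)) for (s, p, o) in triples}
    return swapped, lit2bnode

def swap_back(triples, lit2bnode):
    """Exchange the bnodes back for their original literals."""
    bnode2lit = {b: l for l, b in lit2bnode.items()}
    return {(bnode2lit.get(s, s), p, bnode2lit.get(o, o)) for (s, p, o) in triples}

orig = {("ex:a", "ex:p", '"42"')}
swapped, mapping = swap_out(orig)
# the reasoner can now freely put the bnode in subject position;
# at the end, the round trip restores the original literals:
print(swap_back(swapped, mapping) == orig)   # True
```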

Literals with datatypes were also a source of problems. This is probably where I spent most of my implementation time (I must thank Michael Schneider who, while developing the test cases for the OWL 2 RDF-Based Semantics, was constantly pushing me to handle those damn datatypes properly…). Indeed, the underlying RDFLib system is fairly lax in checking typed literals against their definitions in the XSD specification (eg, issues like minimum or maximum values were not checked…). As a consequence, I had to re-implement the lexical-to-value conversion for all datatypes. Once I found out how to do that (I had to dive a bit into the internals of RDFLib but, luckily, Python is an interpreted language…), it became relatively straightforward, repetitive, and slightly time-consuming work. Actually, using bnodes instead of “real” literals made it easier to implement datatype subsumptions, too (eg, the fact that, say, an xsd:byte is also an xsd:integer). This became important so that the rules would work properly on property restrictions involving datatypes.
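A minimal sketch of what that lexical-to-value checking involves, assuming just a couple of XSD integer types (the table and function names are hypothetical, not RDFLib’s):

```python
# Value ranges for a few XSD numeric types; None means unbounded.
XSD_RANGES = {
    "xsd:byte":    (-128, 127),
    "xsd:short":   (-32768, 32767),
    "xsd:integer": (None, None),
}

def to_value(lexical, datatype):
    """Convert a lexical form to its value, enforcing the datatype's range."""
    value = int(lexical)            # raises ValueError on a bad lexical form
    lo, hi = XSD_RANGES[datatype]
    if lo is not None and not (lo <= value <= hi):
        raise ValueError("%s is out of range for %s" % (lexical, datatype))
    return value

print(to_value("100", "xsd:byte"))      # 100
# to_value("300", "xsd:byte") would raise ValueError: 300 is out of range
```

This is the kind of check that had to be done uniformly for every datatype; the subsumption mentioned above (an xsd:byte is also an xsd:integer) then amounts to the ranges nesting inside one another.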

Bottom line: even for a simple implementation, literals, mainly literals with datatypes, are the biggest headache. The rest is really easy. (This is hardly the discovery of the year, but it is nevertheless good to remember…)

I was, actually, carried away a bit once I got a hold on how to handle datatypes, so I also implemented a small “extension” to OWL 2 RL by adding datatype restrictions (one of the really nice new features of OWL 2, but one which is not mandated for OWL 2 RL). Imagine you have the following vocabulary item:

ex:RE a owl:Restriction ;
    owl:onProperty ex:p ;
    owl:someValuesFrom [
      a rdfs:Datatype ;
      owl:onDatatype xsd:integer ;
      owl:withRestrictions (
          [ xsd:minInclusive "1"^^xsd:integer ]
          [ xsd:maxInclusive "6"^^xsd:integer ]
      )
    ] .

which defines a restriction on the property ex:p so that some of its values should be integers in the [1,6] interval. This means that

ex:q ex:p "2"^^xsd:integer .

implies

ex:q rdf:type ex:RE .

And this could be done by a slight extension of OWL 2 RL; no new rules, just adding the datatype restrictions to the datatypes. Nifty…
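In code terms, “adding the datatype restrictions to the datatypes” amounts to checking a value against the facet list carried by the rdfs:Datatype. A hypothetical sketch of that check, for the two facets used above:

```python
# Facet checks for an owl:withRestrictions list; this is an illustrative
# sketch, not the actual implementation.
FACET_CHECKS = {
    "xsd:minInclusive": lambda value, bound: value >= bound,
    "xsd:maxInclusive": lambda value, bound: value <= bound,
}

def satisfies(value, facets):
    """True if the value passes every (facet, bound) pair of the restriction."""
    return all(FACET_CHECKS[facet](value, bound) for facet, bound in facets)

# the restriction above: integers in the [1,6] interval
facets = [("xsd:minInclusive", 1), ("xsd:maxInclusive", 6)]
print(satisfies(2, facets))   # True:  "2"^^xsd:integer matches, so ex:q rdf:type ex:RE
print(satisfies(9, facets))   # False: 9 falls outside the interval
```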

That is it. I had fun, and maybe it will be useful to others. The package can also be downloaded and used with RDFLib, by the way…

September 3, 2007

Yet another RDFa processor…

Filed under: Code,Python,Semantic Web,Work Related — Ivan Herman @ 17:30

The summer months were quite relaxed, so at some point I decided to write an RDFa processor (in Python). I know, I could have used Elias Torres’ parser (also included in RDFLib), but my goal was a bit different. It was at a time when the RDFa task force had long technical discussions on the details of the main RDFa parsing/processing rules, and I wanted to test whether those rules, as described at that moment, were correct and implementable (they were). And, while I was at it, I then decided to properly finish up the implementation to make it generally usable.

The result is a Python package (it can also be downloaded as a compressed tar file) which uses RDFLib to build up the graph as well as for final serialization. To the best of my knowledge the parser follows the latest (not yet published:-( ) version of RDFa, and I definitely plan to keep it that way in the future. There is also a “distiller” that can be used online. The implementation (mainly of the distiller) is not complete: indeed, I should work on proper error handling rather than relying on Python’s xml minidom package simply throwing an exception in the user’s face for, say, invalid XHTML…

I also decided to test it on something more complicated, so I created an RDFa version of my foaf data. I have now an XHTML file with my foaf data that can be used (either via the distiller or directly using Python) to generate my RDF/XML foaf file. It shows one of the real advantages of RDFa: the foaf data mixes quite a number of various vocabularies, but that is absolutely no problem for something like RDFa. In any case, I do not intend to edit my foaf data in RDF/XML any more…

July 22, 2007

Yet another RDFa converter

Filed under: Code,Semantic Web,Work Related — Ivan Herman @ 9:31

I realized a week ago that Dave Beckett’s triplr tool (“Stuff in, triples out”) now also includes an RDFa converter; see his news item of 2007-07-17. Ie, I can now use the URI http://triplr.org/rdfa-rdf/http://rdfa.info/ to extract or refer to the RDF content of the RDFa info page’s RDFa statements. Of course (after all, this is Dave’s tool!), I could also put “turtle” in the URI instead of “rdf” to yield, well, Turtle.

The converter, of course, is still based on the latest public release of the RDFa syntax, and many things will change as a result of the current work in the RDFa group (which has become really active in the last few months, so I think a new and significantly better release of the spec will come soon!). But I am sure an update of triplr will follow soon afterwards…

July 6, 2007

SPARQL Endpoint interface to Python

Filed under: Code,Python,Semantic Web,Work Related — Ivan Herman @ 12:43

I played with SPARQL on my local machine, and I also got inspired by Lee’s SPARQL library for Javascript. But, well, I prefer Python… So I made a set of utility classes, first for myself, but then I decided to package it more properly. Maybe others can find it useful, too.

The goal is to give some help in turning a SPARQL query into the corresponding HTTP GET protocol request, sending it to a SPARQL endpoint somewhere on the Web, and doing something with the results. The simplest usage is something like:

from SPARQL import SPARQLWrapper
queryString = "SELECT * WHERE { ?s ?p ?o. }"
sparql = SPARQLWrapper("http://localhost:2020/sparql")
# add a default graph, though that can also be done in the query string
sparql.addDefaultGraph(URI_of_default_graph)
sparql.setQuery(queryString)
try :
    ret = sparql.query() # ret is a stream with the results in XML, it is a file-like object
except :
    deal_with_the_exception() # eg, syntax error

To make it even easier to use, conversions to more Python-friendly formats can also be done on the results: eg, turn the result into a proper DOM tree if it is XML, use Bob Ippolito’s simplejson module to convert a JSON return format into a Python dictionary, or parse it with RDFLib and return an RDFLib Graph in case the return is in RDF/XML. Ie, one could have done:

try :
    ret = sparql.query()
    dict = ret.convert()
except :
    deal_with_the_exception()
where “dict” is a Python dictionary. There are some more tricks in the library, but that is essentially it…
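The “HTTP GET protocol” part mentioned above is itself quite simple: the query text just becomes a URL-encoded query parameter on the endpoint URI (the SPARQL Protocol’s query parameter). A hypothetical stdlib-only sketch of what the wrapper does under the hood; the actual fetch, omitted here, would then read the URL with urllib:

```python
import urllib.parse

endpoint = "http://localhost:2020/sparql"
query = "SELECT * WHERE { ?s ?p ?o. }"

# encode the query as the "query" GET parameter of the endpoint URI
url = endpoint + "?" + urllib.parse.urlencode({"query": query})
print(url)
```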

The code is available from my site; the API documentation is included in the distribution (and is also available online).

It is an early release. There are some problems, and I expect some more. I have primarily tested it with two different SPARQL endpoints running on my local machine (joseki3 and virtuoso) and also with some public SPARQL endpoints. There are some differences in the return media type for, eg, JSON or N3; the non-standard arguments (eg, setting the return format) still diverge a bit; etc. But I would expect these to converge over time. However, I am sure that my code will have problems with some of the endpoints, at least on those grounds (or others)…
