Ivan’s private site

April 17, 2012

Linked Data on the Web Workshop, Lyon

(See the Workshop’s home page for details.)

The LDOW20** series has become more than a workshop series; each edition is really a small conference in its own right. I did not count the number of participants (the meeting room had a fairly odd shape, which made that a bit difficult) but I think it was well over a hundred. Nice to see…

The usual caveat applies to my notes below: I am selective here, and picking out some papers is no judgement on any other paper at the workshop. These are just some of my thoughts jotted down…

Giuseppe Rizzo made a presentation related to the tools we now have to tag texts and thereby make these resources usable as linked data (“NERD meets NIF: Lifting NLP Extraction Results to the Linked Data Cloud”), i.e., the Zemanta or Open Calais services of this world. As these services become more and more important, having a clear view of what they can do and how one can use them, individually or together, is essential. Their project, called NERD, will become an important resource for this community; bookmark that page:-)

Jun Zhao made a presentation (“Towards Interoperable Provenance Publication on the Linked Data Web”) essentially on the work of the W3C Provenance Working Group. I was pleased to see and listen to this presentation: I believe the outcome of that group is very important for this community and, having played a role in the creation of that group, I am eager to see it succeed. B.t.w., a new round of publications from that group should appear very soon, watch the news…

Another presentation, namely Arnaud Le Hors’ on “Using read/write Linked Data for Application Integration — Towards a Linked Data Basic Profile”, was also closely related to W3C work. Arnaud and his colleagues (at IBM) came to this community after a long journey working on application integration; think, e.g., of systems managing software updates and error management. These systems are fundamentally data oriented, and IBM has embarked on a Linked Data based approach (after having tried others). The particularity of this approach is that it stays very “low” level, insofar as it uses only the basic HTTP protocol for reading and writing RDF data. This approach seems to strike a chord with a number of other companies (Elsevier, EMC, Oracle, Nokia), and their work forms the basis of a new W3C Working Group that should start this coming summer. This work may become a significant element of the palette of technologies around Linked Data.

Luca Costabello talked about Access Control, Linked Data, and Mobile (“Linked Data Access Goes Mobile: Context-Aware Authorization for Graph Stores”). Although Luca emphasized that theirs is not a complete solution for Linked Data access control in general, it may become an important contribution in that area nevertheless. Their approach is to modify SPARQL queries “on-the-fly” by including access control clauses; for that purpose, an access control ontology (S4AC) has been developed and used. One issue is: how would that work with a purely HTTP level read/write Linked Data Web, like the one Arnaud was talking about? Answer: we do not know yet:-)
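To make the idea of “on-the-fly” modification a bit more tangible, here is a minimal sketch in Python of what such a rewriting step might look like: the user’s graph pattern is wrapped so that it can only match in named graphs that the access control decision allows. All names and URIs below are illustrative; the real S4AC machinery (contexts, policies evaluated as SPARQL ASK queries, etc.) is considerably more involved.

# Illustrative sketch only: restrict a query to the graphs a user may read.
# The graph URIs and the wrapping strategy are assumptions, not S4AC itself.

ALLOWED_GRAPHS = [
    "http://example.org/graphs/public",
    "http://example.org/graphs/team-internal",
]

def restrict_query(user_pattern, allowed_graphs=ALLOWED_GRAPHS):
    """Wrap a basic graph pattern so it only matches in permitted graphs."""
    from_clauses = "\n".join("FROM NAMED <%s>" % g for g in allowed_graphs)
    return "SELECT *\n%s\nWHERE {\n  GRAPH ?g { %s }\n}" % (from_clauses, user_pattern)

print(restrict_query("?s a <http://xmlns.com/foaf/0.1/Person> ."))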

Igor Popov concentrated on user interface issues (“Interacting with the Web of Data through a Web of Inter-connected Lenses”): how to develop a framework whereby data-oriented applications can cooperate quickly, so that lambda users can explore data, switching easily to applications that are well adapted to a particular dataset, without being forced to do complicated programming or use overly “geeky” tools. This is still alpha-level work, but their site-in-development, called Mashpoint, is a place to watch. There is (still) not enough work on user-facing data exploration tools, so I was pleased to see this one…

What are the dynamics of Linked Data? How does it change? This is the question Tobias Käfer and his friends are trying to answer (“Towards a Dynamic Linked Data Observatory”). For that, data is necessary, and Tobias’ presentation was on how to determine which collection of resources to watch and measure regularly. The plan is to produce a snapshot of the data once a week for a year; the hope is that, based on this collected data, we will learn more about the overall evolution of linked data. I am really curious to see the results. One more reason to be at LDOW2013:-)

Tobias’ presentation has an important connection to the last presentation of the day, made by Axel Polleres (“OWL: Yet to arrive on the Web of Data?”), insofar as what Axel presented was also based on an analysis of the Linked Data out there. The issue has been around, with lots of controversy, for a while: what level of OWL should/could be used for Linked Data? OWL 2 as a whole seems to be too complex for the amount of data we are talking about, both in terms of program efficiency and in terms of conceptual complexity for end users. OWL 2 has defined a much simpler profile, called OWL 2 RL, which does have some traction but may still be too complex, e.g., for implementations. Axel and his friends analyzed the usage of OWL statements out there, and also established some criteria on what type of rules should be used to make OWL processing really efficient; their result is another profile, called OWL LD. It is largely a subset of OWL 2 RL, though it does adopt some datatypes that OWL 2 RL does not have.

I am not fully convinced about some of the features that were left out; after all, their measurements were based on data from 2011, and it is difficult to say how much time it takes for new OWL 2 features to really catch on. I think that keys and property chains could be really useful on the Linked Data Web, and they can be managed by rule engines, too. So the jury is still out on this, but it would be good to find a way to stabilize this at some point and see the LD crowd look at OWL (i.e., at a subset of OWL) more positively. Of course, another approach would be to concentrate on an easy way to encode rules in RDF, which might make this discussion moot in a certain sense; that is one of the things we have not yet succeeded in doing:-(

The day ended with a panel, in which I also participated; I will let others judge whether the panel was good or not. The panel was preceded by a presentation by Chris on the current deployment of RDFa and microdata, which was really interesting. (His slides will be on the workshop’s page soon.) The deployment of RDFa, microdata, and microformats has become really strong; structured data in HTML is now a well established approach out there. RDFa and microdata cover half of the cases, the other half being microformats, which seems to indicate a clear shift towards RDFa/microdata, ie, a more syntax oriented approach (with a clear mapping to RDF). Microdata is used almost exclusively with schema.org vocabularies (which is to be expected), whereas RDFa makes use of a larger palette of other vocabularies. All this was to be expected, but it is nice to see it reflected in collected data.

It was a great event. Chris, Tim, and Tom: thanks!

January 24, 2012

Nice reading on Semantic Search

I had a great time reading a paper on Semantic Search[1]. Although the paper is on the details of a specific Semantic Web search engine (DERI’s SWSE), I was reading it as somebody not really familiar with all the intricate details of such a search engine’s setup and operation (i.e., I would not dare give an opinion on whether the choices taken by this group are better or worse than those taken by the developers of other engines) and wanting to gain a good picture of what is happening in general. For that purpose, this paper was really interesting and instructive. It is long (ca. 50 pages), so I did not even try to understand everything on my first reading, but it did give a great overall impression of what is going on.

One of the “associations” I had, maybe somewhat surprisingly, is with another paper I read lately, namely a report on basic profiles for Linked Data[2]. In that paper Nally et al. look at what “subsets” of current Semantic Web specifications could be defined, as “profiles”, for the purpose of publishing and using Linked Data. This was also a general topic at a W3C Workshop on Linked Data Patterns at the end of last year (see also the final report of the event), and it is not a secret that W3C is considering setting up a relevant Working Group in the near future. Well, the experience of an engine like SWSE might come in very handy here. For example, SWSE uses a subset of the OWL 2 RL profile for inferencing; that may be a good input for a possible Linked Data profile (although the differences are really minor, if one looks at the appendix of the paper that lists the rule set the engine uses). The idea of “Authoritative Reasoning” is also interesting and possibly relevant; that approach makes a lot of pragmatic sense, and I wonder whether it should not, somehow, be documented for general use. And I am sure there is more: in general, analyzing the experiences of major Semantic Web search engines in handling Linked Data might provide great input for such pragmatic work.

I was also wondering about a very different issue. A great deal of work had to be done in SWSE on the proper handling of owl:sameAs. On the other hand, one of the recurring discussions on various mailing lists and elsewhere is whether the usage of this property is semantically o.k. or not (see, e.g., [3]). A possible alternative would be to define (beyond owl:sameAs) a set of properties borrowed from the SKOS Recommendation, like closeMatch, exactMatch, broadMatch, etc. It is almost trivial to generalize these SKOS properties to the general case but, reading this paper, I was wondering: what effect would such predicates have on search? Would they make it more complicated or, in fact, would such predicates make the life of search engines easier by providing “hints” that could be used for the user interface? Or both? Or is it already too late, because the usage of owl:sameAs is so prevalent that it is not worth touching that stuff? I do not have a clear answer at this moment…

Thanks to the authors!

  1. A. Hogan, et al., “Searching and Browsing Linked Data with SWSE: the Semantic Web Search Engine”, Journal of Web Semantics, vol. 9, no. 4, pp. 365-401, 2011.
  2. M. Nally and S. Speicher, “Toward a Basic Profile for Linked Data”, IBM developerWorks, 2011.
  3. H. Halpin, et al., “When owl:sameAs Isn’t the Same: An Analysis of Identity in Linked Data”, Proceedings of the International Semantic Web Conference, pp. 305-320, 2010.

September 29, 2009

OWL 2 RL closure

OWL 2 has just been published as a Proposed Recommendation (yay!) which means, in layman’s terms, that the technical work is done, and it is up to the membership of W3C to accept it as a full-blown Recommendation.

As I already blogged before, I did some implementation work on a specific piece of OWL 2, namely the OWL 2 RL profile. (I have also blogged about OWL 2 RL and its importance before, so nothing to repeat here.) The implementation itself is not really optimized, and it would probably not stand a chance in any large scale deployment (the reader may want to look at the OWL 2 implementation report for other alternatives). But I do hope that the resulting service can be useful in getting a feel for what OWL 2 RL can give you: by just adding a few triples into the text box you can see what OWL 2 RL means. This is, by the way, an implementation of the OWL 2 RL rule set, which means that it also accepts triples that are not mandated by the Direct Semantics of OWL 2 (a.k.a. OWL 2 DL). Put another way, it is an implementation of a small portion of OWL 2 Full.

The core of my implementation turned out to be really straightforward: a forward chaining structure directly encoded in Python. I use RDFLib to handle the RDF triples and the triple store. Each triple in the RDF Graph is considered and compared to the premises of the rules; if there is a match, new triples are added to the Graph. (Well, most of the rules contain several triples to match, and the usual approach is to pick one and then explore the Graph further to check the additional patterns. Which one to pick is important, though; it may affect the overall speed.) If, through such a cycle, no additional triples are added to the Graph, then we are done: the “deductive closure” of the Graph has been calculated. The rules of OWL 2 RL have been carefully chosen so that no new resources are added to the Graph (only new triples), ie, this process eventually stops.
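To make the shape of that cycle concrete, here is a minimal sketch (not the actual code of the package) of the naive forward chaining loop in RDFLib terms; only one rule is shown, the others would be checked in exactly the same way.

from rdflib.namespace import RDF, RDFS

def apply_rules(graph):
    """Run each rule once; return the triples the rules want to add."""
    new = set()
    # cax-sco: ?c1 rdfs:subClassOf ?c2 . ?x rdf:type ?c1  =>  ?x rdf:type ?c2
    for c1, _, c2 in graph.triples((None, RDFS.subClassOf, None)):
        for x in graph.subjects(RDF.type, c1):
            new.add((x, RDF.type, c2))
    # ... the other OWL 2 RL rules would be handled here in the same manner ...
    return new

def closure(graph):
    """Keep applying the rules until no new triple is generated."""
    while True:
        new = {t for t in apply_rules(graph) if t not in graph}
        if not new:
            return graph            # fixpoint reached: the deductive closure
        for t in new:
            graph.add(t)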

The rules themselves are usually simple. Although it is possible, and probably more efficient, to encode the whole process using some sort of rule engine (I know of implementations based on, eg, Jena’s rules or Jess), one can simply encode the rules using the usual conditional constructs of the programming language. The number of rules is relatively high, but nothing that a good screen editor could not manage with copy-paste. There were only a few rules that required somewhat more careful coding (usually to take care of lists) or many searches through the graph, like, for example, the rule for property chains (see rule prp-spo2 in the rule set). It is also important to note that the higher number of rules does not really affect the efficiency of the final system; if no triple matches a rule then, well, it just does not fire. There is no side effect from the mere existence of an unused rule.
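As an example of what encoding a rule with ordinary conditional constructs means in practice, here is a sketch of one of the simple rules, prp-trp (transitive properties), in the same RDFLib style as above; prp-spo2 needs extra RDF list walking on top of this pattern.

from rdflib.namespace import OWL, RDF

def rule_prp_trp(graph):
    """prp-trp: ?p a owl:TransitiveProperty . ?x ?p ?y . ?y ?p ?z  =>  ?x ?p ?z"""
    new = set()
    for p in graph.subjects(RDF.type, OWL.TransitiveProperty):
        for x, _, y in graph.triples((None, p, None)):
            for _, _, z in graph.triples((y, p, None)):
                new.add((x, p, z))
    return new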

So is it all easy and rosy? Not quite. First of all, this implementation is of course simplistic insofar as it generates all possible deduced triples, including a number of trivial ones (like ?x owl:sameAs ?x for all possible resources). That means that the resulting graph becomes fairly big, even if the (optional) axiomatic triples are not added. If the OWL 2 RL process is bound to a query engine (eg, the new version of SPARQL will, hopefully, give a precise specification of what it means to have OWL 2 RL reasoning on the data set prior to a SPARQL query), then many of these trivial triples could be generated at query time only, thereby avoiding an extra load on the database. Well, that is one place where a proof-of-concept, simple implementation like mine loses against a more professional one:-)

The second issue was the contrast between RDF triples and “generalized” RDF triples, ie, triples where literals can appear in subject positions and bnodes can appear as properties. OWL 2 explicitly says that it works with generalized triples and the OWL 2 RL rule set also shows why that is necessary. Indeed, consider the following set of triples:

ex:X rdfs:subClassOf [
  a owl:Restriction;
  owl:onProperty [ owl:inverseOf ex:p ];
  owl:allValuesFrom ex:A
].

This is a fairly standard “idiom” even for simple ontologies; one wants to restrict, so to speak, the subjects instead of the objects using an OWL property restriction. In other words, that restriction, combined with

ex:x rdf:type ex:X .
ex:y ex:p ex:x .

should yield

ex:y rdf:type ex:A .

Well, this deduction would not occur through the rule set if non-generalized RDF triples were used. Indeed, the inverse of ex:p is a blank node, ie, using it as a predicate in a triple is not legal in standard RDF; but using that blank node to denote a property is necessary for the full chain of deductions. In other words, to get that deduction to work properly using standard RDF and rules, the author of the vocabulary would have to give an explicit URI to the inverse of ex:p. Possible, but slightly unnatural. If generalized triples are used, the OWL 2 RL rules yield the proper result.

It turns out that, in my case, having bnodes as properties was not really an issue, because RDFLib could handle that directly (is that a bug in RDFLib?). But similar, though slightly more complex or even pathological, examples can be constructed involving literals in subject positions, and that was a problem, because RDFLib refused to handle those triples. What I had to do was to exchange each literal in the graph for a new bnode, perform all the deductions using those, and exchange the bnodes “back” for their original literals at the end. (This mechanism is not my invention; it is actually described in the RDF Semantics document, in the section on datatype entailment rules.) B.t.w., the triples returned by the system are all “legal” triples; generalized triples play a role during the deduction only (and illegal triples are filtered out on output).
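A minimal sketch of that exchange, again in RDFLib terms (not the real code; the real thing has to be a bit more careful about where the surrogate bnodes end up):

from rdflib import BNode, Literal

def literals_to_bnodes(graph):
    """Replace literal objects by surrogate bnodes; return the bnode -> literal map."""
    lit_to_bnode, bnode_to_lit = {}, {}
    for s, p, o in list(graph):
        if isinstance(o, Literal):
            b = lit_to_bnode.setdefault(o, BNode())
            bnode_to_lit[b] = o
            graph.remove((s, p, o))
            graph.add((s, p, b))
    return bnode_to_lit

def bnodes_to_literals(graph, bnode_to_lit):
    """Swap the surrogates back; drop triples that would get a literal subject."""
    for s, p, o in list(graph):
        if s in bnode_to_lit:               # would become an illegal triple anyway
            graph.remove((s, p, o))
        elif o in bnode_to_lit:
            graph.remove((s, p, o))
            graph.add((s, p, bnode_to_lit[o]))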

Literals with datatypes were also a source of problems. This is probably where I spent most of my implementation time (I must thank Michael Schneider who, while developing the test cases for the OWL 2 RDF-Based Semantics, was constantly pushing me to handle those damn datatypes properly…). Indeed, the underlying RDFLib system is fairly lax in checking typed literals against their definitions in the XSD specification (eg, things like minimum or maximum values were not checked…). As a consequence, I had to re-implement the lexical-to-value conversion for all datatypes. Once I found out how to do that (I had to dive a bit into the internals of RDFLib but, luckily, Python is an interpreted language…) it became relatively straightforward, repetitive, and slightly time consuming work. Actually, using bnodes instead of “real” literals made it easier to implement datatype subsumption, too (eg, the fact that, say, an xsd:byte is also an xsd:integer). This became important so that the rules would work properly on property restrictions involving datatypes.
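To give an idea of the kind of checks involved, here is an illustrative sketch of a lexical-to-value conversion for a single datatype, plus a toy subsumption table; the real code has to do this for every datatype in the OWL 2 RL list.

XSD = "http://www.w3.org/2001/XMLSchema#"

def byte_value(lexical):
    """Map a lexical form to an xsd:byte value; raise if it is ill-formed."""
    value = int(lexical)               # raises ValueError on a bad lexical form
    if not -128 <= value <= 127:       # the range facet RDFLib did not check
        raise ValueError("%r is out of range for xsd:byte" % lexical)
    return value

# Datatype subsumption: every xsd:byte value is also a value of these datatypes,
# which matters when a property restriction is stated on a "larger" datatype.
SUPERTYPES = {
    XSD + "byte": [XSD + "short", XSD + "int", XSD + "long",
                   XSD + "integer", XSD + "decimal"],
}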

Bottom line: even for a simple implementation, literals, mainly literals with datatypes, are the biggest headache. The rest is really easy. (This is hardly the discovery of the year, but it is nevertheless good to remember…)

I was, actually, carried away a bit once I got the hang of handling datatypes, so I also implemented a small “extension” to OWL 2 RL by adding datatype restrictions (one of the really nice new features of OWL 2, though not mandated for OWL 2 RL). Imagine you have the following vocabulary item:

ex:RE a owl:Restriction ;
    owl:onProperty ex:p ;
    owl:someValuesFrom [
      a rdfs:Datatype ;
      owl:onDatatype xsd:integer ;
      owl:withRestrictions (
          [ xsd:minInclusive "1"^^xsd:integer ]
          [ xsd:maxInclusive "6"^^xsd:integer ]
      )
   ] .

which defines a restriction on the property ex:p so that some of its values should be integers in the [1,6] interval. This means that

ex:q ex:p "2"^^xsd:integer.

yields

ex:q rdf:type ex:RE .

And this could be done by a slight extension of OWL 2 RL; no new rules, just adding the datatype restrictions to the datatypes. Nifty…
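For the curious, the extension really amounts to checks of this kind; a minimal, illustrative sketch (the helper name is made up for the example):

def in_restricted_range(value, min_inclusive=1, max_inclusive=6):
    """Does an integer value satisfy the facets of the restricted datatype above?"""
    return min_inclusive <= value <= max_inclusive

# "2"^^xsd:integer converts to the value 2, the facet check succeeds, and the
# usual owl:someValuesFrom rule (cls-svf1 in the rule set) can then add:
#     ex:q rdf:type ex:RE .
assert in_restricted_range(int("2"))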

That is it. I had fun, and maybe it will be useful to others. The package can also be downloaded and used with RDFLib, by the way…

July 4, 2009

Dagstuhl Workshop on Semantic Web

I have just come back from the Workshop “Semantic Web: Reflections and Future Directions”, held in Dagstuhl, Germany. Organized by John Domingue, Rudi Studer, Jim Hendler, and Dieter Fensel, the workshop positioned itself as the “second release” of a similar workshop that was held at the same place 10 years ago.

The first two days of the workshop were more traditional, in the sense that they consisted of a series of presentations and panels. This was the “reflection” part of the workshop: looking back at 10 years of history as well as a peek into the current state of the art. It was interesting but, for my taste, a bit too long; the programme of the two days could have been compressed into one or, say, one and a half days. That would have given more time to the “future directions” part, ie, discussions in breakout groups on various topics. I enjoyed those a lot: free flowing discussions on various topics, helping to exchange ideas, experiences, and pointers to other work and results, and to crystallize possible future R&D issues. These discussions took place in a very pleasant, relaxed atmosphere among people who mostly knew one another already, ie, we could really concentrate on the issues. Each group formulated a number of research goals for the years to come; some groups also came up with more practical steps and goals.

As far as I know, the workshop organizers plan to collect all those research issues in a more coherent form, so we should watch this space. In what follows I just collect some issues that I took away from the workshop, without any claim to being exhaustive; indeed, there were 6-7 parallel breakout groups.

Issues around Web scale. This is clearly one of the major topics of the day. What happens when one has to deal with data containing billions of triples, when the data (ie, the triples) are “dirty”, ie, inconsistent, faulty, etc.? Think of the Linked Open Data cloud, or of data coming from sensor networks, mobiles, etc. Do we have to re-think all the notions that the Semantic Web inherited from the logic world, ie, completeness, the meaning and consequences of consistency, what it means to get results for a query, etc.? This is one area where opinions tend to diverge a lot. Some would prefer to completely put aside the traditional logic approaches (rules, description logics, ontologies, OWL, etc), while others argue that the advances in computing and in reasoning engines and methods are (and are expected to be) such that these methods should still be just as usable as before. As always, I hate black-and-white statements… I do not think dismissing an area of technology is the right way but, also, other avenues and new viewpoints should be explored, too (e.g., how to react to inconsistencies, or trying to get possibly incomplete results, whatever can be obtained within, say, 2 minutes, that sort of thing). Which approach is used depends very much on the application. Anyway… Web scale is a major issue, everybody agrees on that!

Interaction. This is one of the breakout groups that I did not attend, unfortunately. And it is obviously a hugely important direction of future R&D. Many Semantic Web applications today are such that their user interface is just standard, because all the Semantic Web related work happens behind the scenes, usually on the server side. However, in the long term there is a clear need for programs that can somehow directly show the data in a friendly way, programs that adapt themselves to the nature of the data. Not only for experts, but also for laypeople. Such environments may include not only extensions of current browsers but, eg, full desktop environments. Sort of intelligent, data-oriented user interfaces. A major research problem (user interface methodology is always a major problem, whether related to the Semantic Web or not…), but also a hugely exciting research and development opportunity!

Vocabularies. There was a separate group on the management of vocabularies, which identified a number of R&D issues: how does one describe a vocabulary and its interdependence with other vocabularies, how does one rank vocabularies… These are all fundamental questions to solve in order to find vocabularies for a specific purpose or to do specialized search. There are also issues around archiving and providing stable URIs; last but not least (and this goes way beyond vocabularies) there are major legal issues on what type of attribution, copyright, or other legal machinery is to be used with vocabularies (it was good to have Tom Heath there, who could tell us a bit about the datacommons approach). As an example of the many technological problems arising, the breakout group coined the term “cherry picking of terms”. Although OWL has a mechanism for imports, the practice of the RDF world is to use (ie, “cherry pick”) vocabulary terms (predicates, classes, etc) from various vocabularies without necessarily taking the whole vocabulary, and certainly without using the owl:imports predicate (think of the routine usage of dc:title without importing the full Dublin Core vocabulary). How should a reasoner treat those? It may be a little easier to use a more rule based approach (like OWL RL), although it is not obvious how to cherry pick just the right amount of information on, say, a predicate. But Ian Horrocks also drew my attention to formal work on ontology modularization that might be very relevant here; item added to my “to-be-read” list…

Provenance (and trust). This is one of the issues that popped up in all the other breakout groups; consequently, a separate group was formed on the second day of discussions. It is indeed one of the questions that anyone who talks about the Semantic Web gets; in my personal view, having a clear “story” to tell about provenance is essential for the further deployment of this technology. The discussion in the group was really interesting, because this issue raises a number of other questions, like the overall relationship between cryptographic techniques and the Semantic Web, what it means to have trust in context, what the relationships to temporal or uncertainty reasoning are, etc, etc. It was also interesting for me to hear about other work, like the Open Provenance Model, albeit some of it was not necessarily done by Semantic Web people (eg, by the database community). We agreed that a Wiki page will be created (probably at RPI, set up by Deb McGuinnis) to collect information on this subject, and forming a W3C Incubator Group might also be in the cards to provide a more thorough state of the art. A long list of additional items for my “to-be-read” pile is coming…

And, of course, it was also good to meet a bunch of people and discuss things over lunch or dinner. This type of interaction is really fruitful. And there was also intensive twittering going on (using the #swdag2009 tag, pointing to a bunch of other resources), although this time I did not twitter too much because I had problems with my wireless card:-(

It was a good meeting; thanks to the organizers. It would be good not to wait another 10 years for the next incarnation of this event…

June 19, 2009

SemTech2009 impressions

The first and possibly most important aspect of SemTech 2009 is that… it happened! I must admit that back in April-May, when the conference’s Web site did not yet include any news of the program, I was a bit concerned that the general economic malaise would kill this year’s conference. O.k., I might have been paranoid, but I think some level of concern was indeed legitimate. And… not only did the conference happen as planned, but the numbers were essentially the same as last year’s (over 1000). I think that by itself is an important sign of the interest in Semantic Technologies. Kudos to the organizers!

A general trend was reaffirmed this year: by now, Semantic Web technologies are the obvious reference points for almost all presentations, products, etc, that were presented at the event. RDF(S), RDFa, OWL, SPARQL, etc, have become household names; newer specs like SKOS or POWDER may not be as widely referred to yet, but I am sure that will come, too. Linked Data (and, more specifically, the Linked Open Data cloud) was almost ubiquitous this year, while I do not believe it was even mentioned last year. That is a huge change (although I still miss real “user facing” applications of LOD; some, like Talis’ system deployed at UK universities, were presented, but not as part of the regular conference). All that being said, I somehow seem to have missed more sessions than last year, which makes my impressions more patchy. There were several journal interviews that I could not escape, hallway discussions that were great but made me miss a presentation here and there… I guess this is what happens when you have such a number of people around!

Tom Tague (from Open Calais) gave a very nice opening keynote. His talk was actually not about Open Calais (he did that in 2008), but rather about his experience in talking to different people who tried to start new ventures in the Semantic Web area (a quote from his talk: “in 80% of the discussions I did not understand what the vendors wanted, and I walked away with my cheque book intact… Simplify!”). The main areas he looked at were tools, social, advertising, search, publishing, and user interfaces. One of the remarks I liked was on search: in his view (and I think I agree with it), Semantic Technologies may not be really interesting for general search (where the statistical, i.e., brute force, methods work well) but rather for specialized, area-specific search tools (things like GoPubMed, or applications deployed at, eg, Eli Lilly or experimented with at Elsevier, come to mind as good examples). Similarly, these technologies are not necessarily of interest for general, “robotic” publication tools like Google News, but rather for high quality publishing with possible editorial oversight (reducing costs and difficulties).

(He also had a nice text on one of his slides: “Web2.0: Take Web 1.0, add a liberal dash of social, generous amounts of user generated content, atomize your content assets and stir until fully confused”:-)

Tom Gruber talked about his newest project: SIRI. A super-duper personal assistant running on an iPhone with a conversational (voice directed) interface. The group behind it integrates a bunch of information on the Web (the “usual” stuff like restaurants and travel sites), categorizes it, and hides the complexities behind a sexy user interface. The problem I have is that I just do not see how this would scale. I see one of the major promises of the Semantic Web as getting data out there in RDF so that such essentially mash-up applications become much easier to create and maintain. Until then, it is really tedious… On a more personal note, I am not sure I would like the voice conversational interface. I know that I have never used the voice commands on my phone, for example; I do not feel comfortable with them. But, well, that is probably only me…

Chime Ogbuji made a really nice presentation on the system they have developed at the Cleveland Clinic. A great combination of RDF, OWL, and SPARQL. The interesting aspect (for me) was the usage of a medical expert system called Cyc, which is used to convert the doctor’s question in natural language (insofar as a question full of medical jargon can be considered “natural”:-) into, essentially, a SPARQL query. The medical ontologies are used to direct this conversion process, and then the triple store can be queried through the generated query. Impressive work. (Part of it was documented in a W3C use case, but this presentation had a different emphasis.)

Unfortunately, I had to skip Peter Mika’s presentation on the SearchMonkey experience; I will have to look at his slides… But, as a last minute addition to the program, the organizers succeeded in getting Othar Hansson and Kavi Goel to talk about Google’s rich snippets. I already blogged about this a few weeks ago, but this presentation made the goal of the project much more understandable. Essentially, by recognizing specific microformat or RDFa vocabularies, they can improve the user experience by adding extra information to the search results. It is interesting to observe the difference between Yahoo! and Google in this respect: both of them use microformats/RDFa for the same general goal but, whereas Yahoo! relies on the community providing applications and on users personalizing their own search result pages, Google controls the output in a generic way that does not require further user action. It will be interesting to see how these differences influence people’s usage patterns. There was some discussion of Google’s choice of vocabularies; the presenters made it quite clear that they are perfectly happy to use other vocabularies (eg, vCard or FOAF) if they become pervasive, and this is a discussion that Google plans to have with the community. There is of course a chicken-and-egg issue there (if a vocabulary is known by Google, then it will be more widely used, too), and this is clearly an area to discuss further. But these are details. The very fact that both Yahoo! and Google look at microformats and RDFa is what counts! Who would have thought so just about a year ago?

I was not particularly impressed by the Semantic Search panel. I had the impression that the participants did not really know what they should say and talk about:-(

Nice presentation by Jeffrey Smitz from Boeing on a system called SPARQL Server Pages. Essentially, the user can use structures similar to, say, a PHP page, ie, a mixture of HTML tags and server “calls”, except that these “calls” refer to SPARQL queries against a triple store on the server. Their system also includes some rule based OWL reasoning on the server side, although I am not sure I got all the details. All in all, the system seemed a bit complex, but the general approach is interesting! And it is nice to see that a company like Boeing seems to make good use of RDF+OWL+SPARQL; it would be good to know more…

I missed Zepheira’s presentation on Freemix, which is a shame, but, well, it happens. I did play with Freemix before travelling to San Jose; I called it “Exhibit for the masses”, and I think that is a fair characterization. David Huynh’s Exhibit is a really nice tool, but it is not easy to use. On the other hand, it took me about 2 minutes to make a visualization of a JSON data set I had used for an Exhibit page elsewhere…

Andraz Tori talked about Common Tag, a small vocabulary that, for example, can be used when marking up texts with tags (something that engines like Zemanta or Open Calais do). Bringing the RDF and tagging worlds together is really important; I am very curious how successful this initiative will be…

The keynote on the last day was from the New York Times (by Evan Sandhaus and Robert Larson). It was quite interesting to see how a reputable newspaper like the NYT has developed a tradition of indexing, abstracting, and cataloging articles, and how these are archived and searched. Impressive. It is also great that the NYT Annotated Corpus has been released to the research community. I did not know about that and, I presume, this must be a great resource for a lot of people active in the area of, say, natural language processing. Finally, they announced their intention to release their thesaurus in a Semantic Web format, to add a “blob” to the Linked Data Cloud. They still have to work out the details (and expect feedback from the community), and I would hope they publish a SKOS thesaurus and might even annotate the news items on their web site using this thesaurus in RDFa. But something in this space will happen, that is for sure! Other reputable newspapers, like Le Monde, the Guardian, NRC Handelsblad, El País, will you follow?

I also had my share of talking: I gave an intro tutorial on the SW, gave an overview of what is happening at W3C (quite a lot this year, including the finalization of POWDER, OWL 2, and SKOS!), and participated in an OWL 2 panel (with Mike Smith, Zhe Wu, Deb McGuinnis, and Ian Horrocks). I was quite happy with the tutorial and the way the panel went; the audience for the talk could have been a bit larger. But, well…

It was a long week, long trips, not much sleep… but well worth it!


April 27, 2009

Simple OWL 2 RL service

The W3C OWL Working Group published a number of OWL 2 documents last week. This included an updated version of the OWL 2 RL profile. I have already blogged about this profile (“Bridge Between SW communities: OWL RL”) when the previous release was published; there are no radical changes in this release, so there is no reason to repeat what was said there.

I have been playing with a simple and naive implementation of OWL 2 RL for a while; I have now decided to live dangerously;-) and release the software and the corresponding service. So… you can go to the OWL 2 RL generator service, give it an RDF graph, and see what RDF triples an OWL 2 RL system should generate. It should give you some idea of what OWL 2 RL is all about.

I cannot emphasize enough that this is not a production level tool. Beyond the bugs that I have not yet found, a proper implementation would, for example, optimize the owl:sameAs triples and, instead of storing them in the graph, generate them on the fly when, say, a SPARQL request is issued. But my goal was not to produce something optimal; instead, I wanted to see whether OWL 2 RL can be implemented without any sophisticated tools. The answer is: yes, it can. This also means that if I could do it, anybody with a basic knowledge of the underlying RDF environment and programming language (RDFLib and Python in this case) can do it, too. No need to be familiar with complex algorithms, rule language implementation tricks, complicated external tools, description logic concepts, whatever…

January 15, 2009

OWL panel, documents, comments…


Right before Christmas the videos from the ISWC2008 conference were released. (Nice Christmas present from Tim Finin and his colleagues…) So I re-listened to the panel discussion “An OWL 2 Far?”. It is always good to listen to such discussions after a few months and, shall we say, with a slightly cooler head… It is clear from the discussion (and I guess all parties can agree on that) that the different views on OWL still generate passionate feelings and discussions…

A few weeks after the conference a bunch of OWL documents were published; I also had a blog post on a particular profile, namely OWL RL (one of the issues discussed quite vehemently at that panel, including by yours truly…). What is slightly surprising is that, in contrast to the passions raised there, not many comments have been submitted to the Working Group yet (look at the public archive of the mailing list). This is the time to do it, though: the OWL 2 documents are in Last Call in the W3C process, ie, this is when the technical design gets finalized. The Last Call period ends on the 23rd of January, which is not that far away… So, please, if you have concerns or issues, or even if you just want to say what a wonderful piece of work it is, speak up now!

December 3, 2008

Bridge between SW communities: OWL RL

The W3C OWL Working Group has just published a series of documents for the new version of OWL, most of them being so-called Last Call Working Drafts (which, in the W3C jargon, means that the design is done; after this, it will only change in response to new problems showing up).

There are many aspects of the new OWL 2 that are of great interest; I will concentrate here on only one of them, namely the so-called OWL RL profile. OWL 2 defines several “profiles”, which are subsets of the full OWL 2; subsets that have some good properties, e.g., in terms of implementability. OWL RL is one of those. “RL” stands for “Rule Language”, and what this means is that OWL RL is simple enough to be implemented by a traditional (say, Prolog-like) rule engine, or can easily be programmed directly in just about any programming language. There is of course a price: the possibilities offered by OWL RL are restricted in terms of building a vocabulary, so there is a delicate balance here. Such rule oriented versions of OWL also have precedents: Herman ter Horst published, some years ago, a profile called pD*; a number of triple store vendors already have similar, restricted versions of OWL implemented in their systems, referred to as RDFS++, OWLPrime, or OWLIM; and there has been some more theoretical work done by the research community in this direction, too, usually referred to as “DLP”. The goal was common to all of these: find a subset of OWL that is helpful for building simple vocabularies and that can be implemented (relatively) easily. Such subsets are also widely seen as more easily understandable and usable by communities that work with RDF(S) and need only a “little bit of OWL” for their applications (instead of building more rigorous and complex ontologies, which requires extra skills they may not have). Well, this is the niche of OWL RL.

OWL RL is defined in terms of a functional, abstract syntax (defining a subset of OWL DL) as well as a set of rules of the sort “if these and these triple patterns exist in the RDF Graph, then add these and these triples”. The rule set itself is oblivious to the DL restrictions in the sense that it can be used on any RDF graph, albeit with a possible loss of completeness. (There is a theorem in the document that describes the exact situation, if you are interested.)

The number of rules is fairly high (74 in total), which seems to defeat the goal of simplicity. But this is misleading. Indeed, one has to realize that, for example, these rules subsume most of RDFS (e.g., what the meaning of domain, range, or subproperty is). Around 50 of the 74 rules simply codify such RDFS definitions or their close equivalents in OWL (what it means to be “same as”, to have equivalent/disjoint properties or classes, that sort of thing). All of these are simple, obvious, albeit necessary rules. There are only around 20 rules that bring real extra functionality, compared to RDFS, for building simple vocabularies. Some of these functionalities are:

  • Characterization of properties as being (a)symmetric, functional, inverse functional, inverse, transitive,…
  • Property chains, ie, defining the composition of two or more properties as being a subproperty of another one. (Remember the classic “uncle” relationship that cannot be expressed in terms of OWL 1? Well, by chaining “brother” and “parent” one can say that the chain is a subproperty of “uncle”, and that is it… see the sketch after this list.)
  • Intersection and union of classes
  • Limited form of cardinality (only maximum cardinality and only with values 0 and 1) and  property restrictions
  • An “easy key” functionality, i.e., deducing the equivalence of two resources if a list of predefined properties have identical values for them (e.g., if two persons have the same name, same email address, and the same home page URI, then the two persons should be regarded as identical)
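A minimal, illustrative sketch of the “uncle” chain mentioned above, with made-up example URIs: the chain axiom is stated in Turtle, and the corresponding rule (in the spirit of the OWL RL rule set) is then applied by hand with RDFLib.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.parse(format="turtle", data="""
@prefix ex:  <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

ex:hasUncle owl:propertyChainAxiom ( ex:hasParent ex:hasBrother ) .

ex:Joe ex:hasParent ex:Ann .
ex:Ann ex:hasBrother ex:Tom .
""")

# The chain rule: ?x ex:hasParent ?y . ?y ex:hasBrother ?z  =>  ?x ex:hasUncle ?z
for x, _, y in g.triples((None, EX.hasParent, None)):
    for _, _, z in g.triples((y, EX.hasBrother, None)):
        g.add((x, EX.hasUncle, z))

print((EX.Joe, EX.hasUncle, EX.Tom) in g)   # True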

Some of these features are new in OWL 2 (property chains, easy keys); others were already present in OWL 1.

Quick and dirty implementations of OWL RL can be done fairly easily. Either one uses an existing rule engine (say, Jena rules) and lets the rule engine take its course, or one encodes the rules directly on top of an RDF environment like Sesame, RDFLib, or Redland and uses a simple forward chaining cycle. Of course, this is quick and dirty, i.e., not necessarily efficient, because it will generate many extra triples. But if the rule engine can be combined with the query system (SPARQL or other), which is the case for most triple store implementations, the actual generation of some of those extra triples (e.g., <r owl:sameAs r> for all resources) may be avoided. Actually, some of the current triple stores already do such tricks with the OWL profiles they implement. (And, well, when I see the incredible evolution in the size and efficiency of triple stores these days, I wonder whether this is really a long term issue for a large family of applications.) I actually did such a quick and dirty implementation in Python; if you are curious what triples are generated via OWL RL for a specific graph, you can try out a small service I’ve set up. (Caveat: it has not been thoroughly tested yet, i.e., there are bugs. Nor is it particularly efficient. Do not use it for anything remotely serious!)

So what is the possible role of OWL RL in developing SW applications? I think it will become very important. I usually look at OWL RL as some sort of a “bridge” that allows some RDF/SW applications to evolve in different directions. Such as:

  • Some applications may be perfectly happy with OWL RL as is (usually combined with a SPARQL engine to query the resulting, expanded graph), and they do not really need more in terms of vocabulary expressiveness. I actually foresee a very large family of applications in this category.
  • Some applications may want to combine OWL RL with some extra, application specific rules. They can rely on a rule engine fed with the OWL RL rules plus the extra application rules. B.t.w., although the details are still to be fleshed out, the goal is that a RIF implementation would accept the OWL RL rules and produce what has to be produced, i.e., a RIF compatible implementation would provide a nice environment for these types of applications.
  • Some applications may hit, during their evolution, the limitations of OWL RL in terms of vocabulary building (e.g., they might need more precise cardinality restrictions or the full power of property restrictions). In that case they can try to expand their vocabulary towards more complex and formal ontologies using, e.g., OWL DL. They may have to accept some more restrictions because they enter the world of DL, and they will require more complex reasoning engines, but that is a price they might be willing to pay. While developers of applications in the other categories would not necessarily care about it, the fact that the language is also defined in terms of a functional syntax (i.e., that this version of OWL RL is an integral part of OWL 2) makes this evolution path easier.

Of course, at the moment, OWL RL is still a draft, albeit in Last Call. Feedback and comments from the community, as well as the experience of implementers, are vital to finalize it. Comments to the Working Group can be sent to public-owl-comments@w3.org (with public archives).

October 31, 2008

ISWC2008, Karlsruhe

ISWC2008 has just finished (I am still at the hotel, leaving for home in a few hours). As usual, it is very difficult to give an exhaustive overview of the whole conference, not only because there were way too many parallel things going on, but also because everyone’s interests are different… These are just a few impressions. I still have to find the time to read through some of the papers in more detail.

Great keynote by John Giannandrea from Metaweb, ie, freebase. Freebase has always been an exciting project, but the great news from the Semantic Web community’s point of view is that freebase has opened its database to the rest of the world in RDF, too. As such, freebase will soon become part of the Linking Open Data cloud (I guess there are still some details to be ironed out, and I saw John and Chris Bizer starting to discuss these). Actually, it was also interesting to hear again and again from John that the internal structure of freebase is based on a directed, labeled graph model, because that was the only viable option for them to build up what they needed. Sounds familiar?

An interesting point of the keynote came when John wondered whether Metaweb is therefore a Semantic Web company or not. He thought that yes, it is, because the internal structure is compatible with RDF, it relies on identifiers with URIs, and it is Web based. But he also thought that, well, it is not, because… no description logic is in use, nor ontologies. Sigh… This still reflects the erroneous view that one must use description logic to be on the Semantic Web. Wrong! So I went up to the mike and welcomed Metaweb into the growing club of Semantic Web companies…

Among the many papers I was interested in, let me refer to the one by Eyal Oren et al., “Anytime Query Answering in RDF through Evolutionary Algorithms” and, actually, a related submission from the same research group to the Billion Triple Challenge, called MaRVIN. In both cases the issue is that, while handling very large datasets, one might not necessarily want or be interested in _all_ solutions to a given query (or all inferences, in the case of MaRVIN) but, rather, in whatever can be reached within a reasonable time. Ie, essentially, trading completeness for responsiveness. Whether genetic algorithms are the answer, as explored by Eyal and friends, or some other techniques, nobody knows; as Eyal clearly acknowledged, these are first attempts and we have to wait a few more years and further results to get a feeling for where it will lead. But the direction is really interesting.

This actually leads to what was, for me, the highlight of the conference, namely the SW Challenge, both the traditional Open Call and the new Billion Triple Challenge (there are more details on both on the challenge’s web site). The entries were really impressive. As Peter Mika said in his closing comments on the challenge, long gone are the days when a challenge entry was some techie keyboard manipulation; the entries all had great user interface design, with real regard for non-expert end users who may or may not know (and probably do not care) that the underlying technology is Semantic Web.

Among the finalists in the open call, Chris Bizer presented DBpedia Mobile (see also their site), ie, a system to access the full power of DBpedia (and, actually, the LOD cloud in general) from an iPhone via a proxy somewhere on the Web. The proxy is actually a hugely powerful environment, making use of Falcon and Sindice, and a bunch of query engines distributed over the network, all peeking into the LOD cloud and, actually, adding items to it, eg, photos taken on the iPhone. A few years ago all this would have had a SciFi edge to it, and now it was running at the conference…

Eero Hyvönen showed the HealthFinland portal (see also their site), soon to be deployed by the Finnish health authorities. Half of the system is, shall we say, more “traditional” (hm, well, what this means is that it would have been revolutionary two years ago:-): a number of serious ontologies governing health related data integration and search over the data. However, what I found exciting is the other half. Indeed, Eero and friends realized that search facets derived from serious ontologies are not really ideal for everyday end users. Therefore, they made a survey among users, derived a number of terms to be used at the user interface level, and bound these terms internally to the ontology. The result is a much friendlier system that still has the power offered by ontology directed search.

Actually, having Eero’s and Chris’ systems presented side by side was also interesting from another point of view, namely to show that there are cases when using serious ontologies is important and cases when it is not. When I use an iPhone to navigate in a city and get information about, say, historical buildings, then a bit of scruffiness is really not a problem. Speed, interaction, and richness of data are more important. However, when it comes to, e.g., health issues, I must admit that I am prepared to wait a bit if I am sure that the results go through the rigorous inference and checking processes that one can achieve through the usage of formal ontologies. This is not a place where one should tolerate scruffiness. The stack (or, to quote Eric Miller, the “menu”) of Semantic Web technologies is rich enough to allow for both; choose what you need! All those discussions of description logic vs. the Semantic Web in general are futile, in my view…

And then came benji’s paggr system (which actually won the Challenge in the Open Call track). Are you a user of Netvibes, iGoogle, or the new Yahoo! user interface? Then you know what it means to quickly build up a Web page using small widgets accessing RSS feeds, stock quotes, clocks, etc. Now imagine that each of these widgets is in fact a small SPARQL query with some wrapper to present the result properly. Package that into a nice user interface, which benji has always been a master of, and you get paggr. It is not yet public, but I have already signed up to play with it as soon as it is… This will really be cool!

As for the Billion Triple Challenge: I already referred to MaRVIN, but there were a bunch of others, like SearchWebDB, SemaPlorer, or SAOR. In some cases these were massively parallel storage approaches, not only offering near real time (federated) SPARQL query possibilities but, in some cases, also preprocessing the data with lower level RDFS or OWL fragment inferencing. All that done starting with millions of triples integrating all kinds of public datasets, yielding stores going beyond the 1 billion triple mark. And let us not forget that this mark had already been reached by companies such as Talis or OpenLink, so these new architectures just add to the lot… These were also particularly interesting with an eye to the new OWL RL profile that is being defined in the W3C OWL Working Group and which aims at exactly such setups.

Let me finish with another remarkable entry, although this one did not win a prize. i-MoCo created a small navigation system over a triple store containing “only” 250 million triples. So what is the big deal, you might say? Well, all the triples were stored on… an iPhone! So the next challenge will probably be to get, say, 10 billion triples or more onto your phone. Just wait a few years…

October 15, 2008

Semantic Web and uncertainty

The issue of uncertainty on the Semantic Web has been around for a while now, although it is still largely a research issue. (Though not only: C&P has an extension of their Pellet tool to handle a particular probabilistic extension of OWL; but I am not aware of any other commercial system of the kind.) Ken and Kathryn Laskey, and Paulo Costa, have been organizing a series of workshops (the URSW series) on the subject for several years now (there will be one at the coming ISWC2008 conference, too!), and there was also a W3C Incubator Group on the subject that issued a report not long ago. But there is still a lot to be done…

The reason I remembered all that is that I found a survey by Thomas Lukasiewicz and Umberto Straccia[1] that is worth reading if you are interested in the subject. The survey gives a separate description of the probabilistic, possibilistic, and fuzzy extensions of the DL dialect that is at the basis of OWL DL, together with further references if one wants to dig deeper (156 of them!). It is not an easy read at all, and I could not say I understood all the details described there, far from it… But, as all good surveys do, it gives you an idea of, or refreshes your memory on, what is happening in the area. And that is always incredibly useful.

The approaches described in the paper are fairly high level in the sense that (as the authors emphasize, too) the extensions are all on top of SHOIN(D), ie, OWL DL. That makes the constructions sometimes quite complex (mainly in the probabilistic case), and they are probably difficult to use for a lambda user. (Although, who knows. They are certainly complex to implement, but maybe the usage is not that bad. I am not sure.) However, the authors themselves refer to alternative approaches on top of simpler DL dialects (without giving too many details). It would be nice to have a survey of extensions at a level corresponding to the simpler OWL profiles, like the OWL RL profile that the OWL WG at W3C is also working on now. Just as OWL RL might be a good “entry point” into the world of OWL for a large family of users, an uncertainty extension at that level might be of great interest, too…

Reading this survey also reminded me of a short paper by Fensel and van Harmelen[2], “Unifying Reasoning and Search to Web Scale”, on which I had a very short blog post a while ago. I just wonder whether fuzzy or probabilistic reasoning might not be a good approach to the problems they describe there… Although this is clearly still a long way off.

Anyway. I learned something today…

  1. T. Lukasiewicz and U. Straccia, “Managing Uncertainty and Vagueness in Description Logics for the Semantic Web”, Journal of Web Semantics: Science, Services and Agents on the World Wide Web, vol. 6, no. 4, 2008. Available online as a preprint.
  2. D. Fensel and F. van Harmelen, “Unifying Reasoning and Search to Web Scale”, IEEE Internet Computing, vol. 11, no. 2, March/April 2007.