Ivan’s private site

January 18, 2014

Some W3C Documents in EPUB3

Filed under: Code,Digital Publishing,Python,Work Related — Ivan Herman @ 13:04

I have been having fun the past few months, whenever I had some time, with a tool to convert official W3C publications (primarily Recommendations) into EPUB3. Apart from the fact that this helped me to dive into some details of the EPUB3 Specification, I think the result might actually be useful. Indeed, it often happens that a W3C Recommendation consists, in fact, of several different publications. This means that just archiving one single file is not enough if, for example, you want to have those documents offline. On the other hand, EPUB3 is perfect for this; one creates an eBook containing all the constituent publications as “chapters”. Yep, EPUB3 as a complex archiving tool :-)
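
To give a rough idea of what such a tool has to produce, here is a minimal sketch (not the actual tool; the file names are made up, and a real package also needs a navigation document and proper metadata) of wrapping a set of XHTML “chapters” into an EPUB3 container with nothing but Python's standard zipfile module:

import zipfile

def make_epub(epub_path, opf_xml, chapter_files):
    # An EPUB is a ZIP archive whose first entry must be an *uncompressed*
    # file called "mimetype" containing exactly "application/epub+zip".
    with zipfile.ZipFile(epub_path, "w") as z:
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        # META-INF/container.xml points the reading system at the package document.
        z.writestr("META-INF/container.xml",
                   '<?xml version="1.0"?>'
                   '<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
                   '<rootfiles><rootfile full-path="OEBPS/package.opf" '
                   'media-type="application/oebps-package+xml"/></rootfiles></container>')
        # The package document (metadata, manifest, spine) plus the "chapters".
        z.writestr("OEBPS/package.opf", opf_xml)
        for name in chapter_files:   # e.g. ["Overview.xhtml", "rdfa-core.xhtml"]
            z.write(name, "OEBPS/" + name, compress_type=zipfile.ZIP_DEFLATED)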

The Python tool (which is available on GitHub) has now reached a fairly stable state, and it works well for documents that have been produced by Robin Berjon’s great respec tool. I have generated, and put up on the Web, two books for now:

  1. RDFa 1.1, a Recommendation that was published last August (in fact, there was an earlier version of an RDFa 1.1 EPUB book, but that one was produced largely by hand; this one is much better).
  2. JSON-LD, a Recommendation published this week (i.e., 16th of January).

(Needless to say, these books have no formal standing; the authoritative versions are the official documents published as a W3C Technical Report.)

There is also a draft version of a much larger book on RDF 1.1, consisting of all the RDF 1.1 specifications to come, including all the various serializations (among them RDFa and JSON-LD). I say “draft”, because those documents are not yet final (i.e., not yet Recommendations); a final version (with, for example, all the cross-links properly set) will be at that URI when RDF 1.1 becomes a Recommendation (probably in February).

April 20, 2011

RDFa 1.1 Primer (draft)

Filed under: Semantic Web,Work Related — Ivan Herman @ 10:21

I have had several posts in the past on the new features of RDFa 1.1 and where it adds functionality to RDFa 1.0. The Working Group has just published a first draft of an RDFa 1.1 Primer, which gives an introduction to RDFa. We already had such a primer for RDFa 1.0, but the new version has been updated in the spirit of RDFa 1.1… Check it out if you are interested in RDFa!

August 3, 2010

New RDFa Core 1.1 and XHTML+RDFa 1.1 drafts

Filed under: Semantic Web,Work Related — Ivan Herman @ 20:49

W3C has just published updated drafts of RDFa Core 1.1 and XHTML+RDFa 1.1. These are “just” new heartbeat documents, meaning that they are not fundamentally new (the first drafts of these documents were published last April) but are not yet “Last Call” documents, i.e., the group does not yet consider the specification work finished. Although… in fact, it is not far from that point. The WG has spent the last few weeks working through open issues, and not many are left open at this moment.

So what has changed since my last blog on the subject, where I introduced the new features compared to RDFa 1.0? In fact, nothing spectacular. Lots of minor clarifications to make things more precise. There has been a change in the treatment of XML Literals: whereas, in RDFa 1.0, XML Literals are automatically generated any time XML markup appears in the text, RDFa 1.1 explicitly requires a corresponding datatype specification; otherwise, a plain literal is created in RDF. (This is the only backward incompatibility with RDFa 1.0, as foreseen by the charter.)

Probably the most important addition to RDFa Core was triggered by a comment of Jeni Tennison (though the problem was raised by others, too). Jeni emphasized a slightly dangerous aspect of the profile mechanism in RDFa 1.1. To remind the reader: using the @profile attribute, the author of an RDFa 1.1 file can refer to another file somewhere on the Web; that “profile file” may include, in one place, prefix declarations, term specifications, and (this is also new in this version!) a default term URI (see again my earlier blog for the details). The question is: what happens if the profile file is unreachable? The danger is that an RDFa 1.1 processor would possibly generate wrong triples, which is actually worse than not generating triples at all. The decision of the group (as Jeni actually proposed) was that the whole DOM subtree, i.e., all triples generated from the element carrying the unreachable profile downwards, would be dropped.

The profile mechanism has stirred quite some interest both among users of RDFa and elsewhere. Martin Hepp was probably the first to publish an RDFa 1.1 profile for GoodRelations and related vocabulary prefixes at http://www.heppnetz.de/grprofile/. To use, essentially, his example, this means that one can write

<div profile="http://www.heppnetz.de/grprofile/">
  <span about="#company" typeof="gr:BusinessEntity">
    <span property="gr:legalName">Hepp's bakery</span>,
    see also the <a rel="rdfs:seeAlso" href="http://example.org/bakery">
    home page of the bakery</a>.
  </span>
</div>

Because Martin’s profile includes a prefix definition for rdfs, too (alongside a number of other prefixes), the profile definition replaces a whole series of namespace declarations that were necessary in RDFa 1.0. I would guess that similar profile files, with term or prefix definitions, will be defined for foaf or Dublin Core, too. Other obvious candidates for such profile definitions are the “big” users of RDFa information like Facebook or Google, who can specify the vocabularies they understand, i.e., index. (This did come up at the W3C camp in Raleigh, during the exciting discussion on the Facebook vocabulary.) Finally, another interesting discussion generated by RDFa’s profile mechanism occurred at the “RDF Next” Workshop in Palo Alto a few weeks ago: some participants proposed considering a similar mechanism for a next version of Turtle. (I must admit this came as a surprise, although it does make sense…)

As for implementations of profiles? Profiles are defined in such a way that an RDFa processor can recursively invoke itself to extract the necessary information for processing; indeed, RDFa is also used to encode the prefix, term, etc., definitions (Turtle or RDF/XML can also be used, but RDFa is the only required format). This means that an RDFa processor does not have to implement a different parser to handle the profile information. My “shadow” RDFa distiller implements this (as well as all RDFa 1.1 features), and it was not complicated. It actually implements a caching mechanism, too: some well-known and well-published profiles can be stored locally so that the distiller does not go through an extra HTTP request all the time (yes, I know, this may lead to inconsistencies in theory, but if such a cache is refreshed regularly via, say, a crontab job, it should be o.k. in practice). At the moment the content of that cache is, of course, curated by hand. (The usual caveat applies: this is code in development, with bugs, with possibly frequent and unannounced changes…) You are all welcome to try the shadow distiller to see what RDFa is capable of. Of course, other RDFa 1.1 implementations are in the making. If you have one, it would be good to know about it; the Working Group is constantly looking for implementation experiences…
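
For illustration only, this is roughly what such a profile cache boils down to in present-day Python (this is not the distiller’s actual code; the cache directory and the refresh period are arbitrary choices):

import hashlib, os, time, urllib.request

CACHE_DIR = "profile-cache"   # hypothetical location of the local cache
MAX_AGE   = 24 * 60 * 60      # refresh entries that are older than a day

def get_profile(uri):
    # Return the profile document, using a locally cached copy when it is
    # fresh enough; otherwise fetch it over HTTP and update the cache.
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, hashlib.sha1(uri.encode()).hexdigest())
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < MAX_AGE:
        with open(path, "rb") as f:
            return f.read()
    data = urllib.request.urlopen(uri).read()
    with open(path, "wb") as f:
        f.write(data)
    return data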

September 29, 2009

OWL 2 RL closure

OWL 2 has just been published as a Proposed Recommendation (yay!), which means, in layman's terms, that the technical work is done, and it is up to the membership of W3C to accept it as a full-blown Recommendation.

As I already blogged before, I did some implementation work on a specific piece of OWL 2, namely the OWL 2 RL Profile. (I have also blogged about OWL 2 RL and its importance before; nothing to repeat here.) The implementation itself is not really optimized, and it would probably not stand a chance for any large-scale deployment (the reader may want to look at the OWL 2 implementation report for other alternatives). But I can hope that the resulting service can be useful in getting a feel for what OWL 2 RL can give you: by just adding a few triples into the text box, you can see what OWL 2 RL means. This is, by the way, an implementation of the OWL 2 RL rule set, which means that it can also accept triples that are not mandated by the Direct Semantics of OWL 2 (a.k.a. OWL 2 DL). Put another way, it is an implementation of a small portion of OWL 2 Full.

The core of my implementation turned out to be really straightforward: a forward chaining structure directly encoded in Python. I use RDFLib to handle the RDF triples and the triple store. Each triple in the RDF Graph is considered and compared against the premises of the rules; if there is a match, then new triples are added to the Graph. (Well, most of the rules contain several triples to match against, and the usual approach is to pick one and explore the Graph deeper to check for additional matches. Which one to pick is important, though; it may affect the overall speed.) If, through such a cycle, no additional triples are added to the Graph, then we are done: the “deductive closure” of the Graph has been calculated. The rules of OWL 2 RL have been carefully chosen so that no new resources are added to the Graph (only new triples), i.e., this process eventually stops.
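
In schematic terms (a minimal sketch, not my actual code; apply_rules stands for the full OWL 2 RL rule set, of which only one representative rule, cax-sco, is shown), the loop looks something like this:

from rdflib.namespace import RDF, RDFS

def rl_closure(graph):
    # Naive forward chaining: keep applying the rules until a full pass
    # adds no new triple; at that point the deductive closure is computed.
    while True:
        before = len(graph)
        apply_rules(graph)
        if len(graph) == before:
            return graph

def apply_rules(graph):
    # Rule cax-sco: ?c1 rdfs:subClassOf ?c2 and ?x rdf:type ?c1
    # imply ?x rdf:type ?c2.  (The real rule set has many more rules.)
    new = []
    for c1, _, c2 in graph.triples((None, RDFS.subClassOf, None)):
        for x in graph.subjects(RDF.type, c1):
            new.append((x, RDF.type, c2))
    for t in new:
        graph.add(t)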

The rules themselves are usually simple. Although it is possible, and probably more efficient, to encode the whole process using some sort of a rule engine (I know of implementations based on, e.g., Jena's rules or Jess), one can simply encode the rules using the usual conditional constructs of the programming language. The number of rules is relatively high, but nothing that a good screen editor would not manage with copy-paste. There were only a few rules that required somewhat more careful coding (usually to take care of lists) or many searches through the graph, like, for example, the rule for property chains (see rule prp-spo2 in the rule set). It is also important to note that the higher number of rules does not really affect the efficiency of the final system; if no triple matches a rule then, well, it just does not fire. There is no side effect of the mere existence of an unused rule.
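
To give a feel for the kind of graph searching a rule like prp-spo2 involves, here is a rough sketch with RDFLib (again, not my actual code; the list-walking helper is just for illustration):

from rdflib import Namespace
from rdflib.namespace import RDF

OWL = Namespace("http://www.w3.org/2002/07/owl#")

def rdf_list(graph, node):
    # Walk an RDF collection (rdf:first / rdf:rest) into a Python list.
    items = []
    while node != RDF.nil:
        items.append(graph.value(node, RDF.first))
        node = graph.value(node, RDF.rest)
    return items

def prp_spo2(graph):
    # prp-spo2: if ?p owl:propertyChainAxiom (?p1 ... ?pn) and
    # ?u0 ?p1 ?u1, ?u1 ?p2 ?u2, ..., ?u(n-1) ?pn ?un, then ?u0 ?p ?un.
    new = []
    for p, _, chain in graph.triples((None, OWL.propertyChainAxiom, None)):
        props = rdf_list(graph, chain)
        for u0 in set(graph.subjects(props[0], None)):
            frontier = {u0}
            for prop in props:   # follow the chain step by step
                frontier = {o for u in frontier for o in graph.objects(u, prop)}
            for un in frontier:
                new.append((u0, p, un))
    for t in new:
        graph.add(t)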

So is it all easy and rosy? Not quite. First of all, this implementation is of course simplistic insofar as it generates all possible deduced triples, which include a number of trivial triples (like ?x owl:sameAs ?x for all possible resources). That means that the resulting graph becomes fairly big even if the (optional) axiomatic triples are not added. If the OWL 2 RL process is bound to a query engine (e.g., the new version of SPARQL will, hopefully, give a precise specification of what it means to have OWL 2 RL reasoning on the data set prior to a SPARQL query), then many of these trivial triples could be generated at query time only, thereby avoiding an extra load on the database. Well, that is one place where a proof-of-concept and simple implementation like mine loses against a more professional one :-)

The second issue was the contrast between RDF triples and “generalized” RDF triples, i.e., triples where literals can appear in subject positions and bnodes can appear as properties. OWL 2 explicitly says that it works with generalized triples, and the OWL 2 RL rule set also shows why that is necessary. Indeed, consider the following set of triples:

ex:X rdfs:subClassOf [
  a owl:Restriction;
  owl:onProperty [ owl:inverseOf ex:p ];
  owl:allValuesFrom ex:A
].

This is a fairly standard “idiom” even for simple ontologies; one wants to restrict, so to say, the subjects instead of the objects using an OWL property restriction. In other words, that restriction, combined with

ex:x rdf:type ex:X .
ex:y ex:p ex:x .

should yield

ex:y rdf:type ex:A .

Well, this deduction would not occur through the rule set if non-generalized RDF triples were used. Indeed, the inverse of ex:p is a blank node, i.e., using it as a property in a triple is not legal; but using that blank node to denote a property is necessary for the full chain of deductions. In other words, to get that deduction to work properly using plain RDF and the rules, the author of the vocabulary would have to give an explicit URI to the inverse of ex:p. Possible, but slightly unnatural. If generalized triples are used, then the OWL 2 RL rules yield the proper result.

It turns out that, in my case, having bnodes as properties was not really an issue, because RDFLib could handle that directly (is that a bug in RDFLib?). But similar, though slightly more complex or even pathological, examples can be constructed involving literals in subject positions, and that was a problem because RDFLib refused to handle those triples. What I had to do was to exchange each literal in the graph for a new bnode, perform all the deductions using those, and exchange the bnodes “back” to their original literals at the end. (This mechanism is not my invention; it is actually described by the RDF Semantics document, in the section on Datatype entailment rules.) B.t.w., the triples returned by the system are all “legal” triples; generalized triples play a role during the deduction only (and illegal triples are filtered out at output).
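
A rough sketch of that exchange (not the real code, and ignoring efficiency) could look like this:

from rdflib import Graph, BNode, Literal

def swap_literals(graph):
    # Replace every literal object with a bnode standing in for it, so the
    # reasoner can use it in subject position; remember the mapping.
    lit_to_bnode, out = {}, Graph()
    for s, p, o in graph:
        if isinstance(o, Literal):
            if o not in lit_to_bnode:
                lit_to_bnode[o] = BNode()
            o = lit_to_bnode[o]
        out.add((s, p, o))
    return out, lit_to_bnode

def restore_literals(graph, lit_to_bnode):
    # Exchange the stand-in bnodes back to their literals; drop any triple
    # that would put a literal into subject or predicate position.
    bnode_to_lit = {b: l for l, b in lit_to_bnode.items()}
    out = Graph()
    for s, p, o in graph:
        if s in bnode_to_lit or p in bnode_to_lit:
            continue
        out.add((s, p, bnode_to_lit.get(o, o)))
    return out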

Literals with datatypes were also a source of problems. This is probably where I spent most of my implementation time (I must thank Michael Schneider who, while developing the test cases for the OWL 2 RDF-Based Semantics, was constantly pushing me to handle those damn datatypes properly…). Indeed, the underlying RDFLib system is fairly lax in checking typed literals against their definition in the XSD specification (e.g., issues like minimum or maximum values were not checked…). As a consequence, I had to re-implement the lexical-to-value conversion for all datatypes. Once I found out how to do that (I had to dive a bit into the internals of RDFLib but, luckily, Python is an interpreted language…) it became relatively straightforward, repetitive, and slightly time-consuming work. Actually, using bnodes instead of “real” literals made it easier to implement datatype subsumptions, too (e.g., the fact that, say, an xsd:byte is also an xsd:integer). This became important so that the rules would work properly on property restrictions involving datatypes.
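
Schematically (a made-up fragment covering just two datatypes; the real code has to handle the whole XSD hierarchy), the lexical-to-value checks and the subsumption table amount to something like:

XSD = "http://www.w3.org/2001/XMLSchema#"

def to_value(lexical, datatype):
    # Convert a lexical form to a value, rejecting out-of-range literals.
    if datatype == XSD + "byte":
        v = int(lexical)
        if not -128 <= v <= 127:
            raise ValueError("out of range for xsd:byte")
        return v
    if datatype == XSD + "integer":
        return int(lexical)
    raise ValueError("datatype not covered by this sketch")

# Each datatype is also an instance of its super-datatypes.
SUBSUMED_BY = {
    XSD + "byte": [XSD + "short", XSD + "int", XSD + "long",
                   XSD + "integer", XSD + "decimal"],
}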

Bottom line: even for a simple implementation, literals, mainly literals with datatypes, are the biggest headache. The rest is really easy. (This is hardly the discovery of the year, but it is nevertheless good to remember…)

I was, actually, carried away a bit once I got a hold of how to handle datatypes, so I also implemented a small “extension” to OWL 2 RL by adding datatype restrictions (one of the really nice new features of OWL 2, though not mandated for OWL 2 RL). Imagine you have the following vocabulary item:

ex:RE a owl:Restriction ;
    owl:onProperty ex:p ;
    owl:someValuesFrom [
      a rdfs:Datatype ;
      owl:onDatatype xsd:integer ;
      owl:withRestrictions (
          [ xsd:minInclusive "1"^^xsd:integer ]
          [ xsd:maxInclusive "6"^^xsd:integer ]
      )
   ] .

which defines a restriction on the property ex:p so that some of its values should be integers in the [1,6] interval. This means that

ex:q ex:p "2"^^xsd:integer.

yields

ex:q rdf:type ex:RE .

And this could be done by a slight extension of OWL 2 RL; no new rules, just adding the datatype restrictions to the datatypes. Nifty…
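
The extra machinery is essentially a facet check on typed literal values; as a minimal sketch (hypothetical names, and only the two facets used in the example above):

FACET_CHECKS = {
    "minInclusive": lambda value, bound: value >= bound,
    "maxInclusive": lambda value, bound: value <= bound,
}

def satisfies_restriction(value, facets):
    # facets is a list of (facet name, bound) pairs taken from owl:withRestrictions,
    # e.g. [("minInclusive", 1), ("maxInclusive", 6)] for the example above.
    return all(FACET_CHECKS[name](value, bound) for name, bound in facets)

# satisfies_restriction(2, [("minInclusive", 1), ("maxInclusive", 6)])  ->  True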

That is it. I had fun, and maybe it will be useful to others. The package can also be downloaded and used with RDFLib, by the way…

April 26, 2009

WWW2009 Impressions

As usual, when making notes of a conference like WWW2009, in Madrid, one has only a partial view. This is all the more true for a conference of the size of WWW2009, with around 1000 attendees and 5-6 parallel tracks. I must admit that I usually have difficulties with so many tracks at the same time; I obviously lose some of the events happening, which is a source of unavoidable frustration. With this caveat, here are just some of the topics that I will probably remember…

The power of Twitter. Although this was not a “topic” of the conference, this was the first WWW conference where Twitter was king. Twitter was everywhere, the #www2009 topic was getting several new entries per second (it even got spammed :-( ), and other Twitter tags were used for some of the specialized events (like #w3ctrack or #ldow2009). One could get a glimpse of what was happening elsewhere just by following these topics. In fact, this report is much more sketchy than usual simply because my own tweets from the conference or, of course, all tweets of the #www2009 topic can very well replace some of the notes I wrote in blogs in earlier years.

Social networks. Going beyond Twitter, the ubiquitous presence of social networks and their effect on just about anything is still a major topic, like the continuous flow of papers trying, e.g., to extract semantics from tag clouds (e.g., the paper of Benjamin Markines et al) or the Googles and Yahoo!-s of this world trying to exploit these tags to improve their search results. (Yahoo’s experimental tag explorer is a good example trying to exploit these further.) Nothing radically new here, but progress is reported at all conferences, and this one was no exception. One of the keynotes, by Pablo Rodriguez from Telefonica, actually claimed that the needs of social networks in terms of network infrastructure are so different that they are bound to require changes at the hardware/firmware level of networks. Posting, for example, a video on a social site may create a sudden peak of high-volume access (for example, if posted by a “celebrity”) that makes it very different from the more steady flow of data that more traditional sites provide and require. For example, local caching in routers might be needed. I am no expert in this at all (anything that is close to hardware is sort of a black box to me), so I cannot judge these statements, but it was interesting to hear. Another interesting point he made was that the “celebrities” of a specific network may (not necessarily intentionally) start a DoS attack against a site: think of the amount of HTTP requests flowing to a site mentioned by one of these social network stars!

Web Science. There was a panel (organized by Nigel Shadbolt, with Tim Berners-Lee, Ricardo Baeza-Yates, and Mike Brodie). The whole topic is still fairly open (at least for me): what exactly is Web Science and where are the boundaries? What types of research belong to WS, and what is better kept outside, to be handled by other disciplines? What type of abstractions would be necessary to study the Web as a whole (just as chemistry can be seen as a set of abstractions on top of physics)? What type of interdisciplinary research groups should be established? As far as I am concerned, I do not have a response to any of these questions :-( What I could see happening is that under the banner “Web Science” many different sub-disciplines will appear very soon and gain independent life without much relationship among themselves. As far as I am concerned, I would be more interested in the relationship between the Web and society at large than in the technical aspects, but that is only me. An interesting practical point for the future is that there are plans to combine (e.g., co-locate) future WWW conferences with Web Science events; that would really be a gain for both event series in my view.

Computing cloud. Yep, this comes up more and more often. Obviously a big deal in the keynote of Alfred Spector, from Google, but it came up elsewhere, too. The mini-tutorial on Hadoop, MapReduce, and Hive, given by Tom White as part of the Developers’ track, was really interesting and instructive for me. We know that the computing cloud is of great interest to the Semantic Web community; it may indeed be a tool to handle the significant amount of data out there. The LOD data is already available on the Amazon services (thanks to OpenLink), Chris Bizer and friends’ Mobile DBpedia makes use of cloud facilities, and the LarKC project also makes use of massively parallel computing (I am not sure they use the cloud). Something to keep an eye on, that is for sure; I am sure the topic will gain more importance in future conferences. (And one more technology I should familiarize myself with…)

Power of data. Issues around search have become the dominating theme of the WWW conferences, and this one was no exception. Many research efforts try to exploit the sheer amount and variety of data that has been accumulated by the big search engines, for example. I have heard several talks over the years coming from Google’s R&D lab (including a keynote at this conference). I must admit the overall impression I get from these is that a more or less straightforward exploitation of a huge amount of data is used like a sledgehammer for all problems. (I am probably unfair.) Ricardo Baeza-Yates (from Yahoo!) also reported some work in his keynote on, e.g., analyzing the search queries themselves, i.e., the paths of different searches performed by users between the time they begin some search and the time they find what they were looking for. (Interesting stuff! By the way, there is also a conference on weblogs and social media, ICWSM; one more conference coming up around Web technologies.) I also listened to a presentation on Yahoo!’s Boss by Ted Drake (again on the Developers’ track): what is interesting is that one can access (a part of) Yahoo!’s accumulated indexes to build, e.g., one’s own search engine, but, I presume, one could also use this data for other types of research exploiting the data. Power of data for the masses? (I had heard of Boss before, and I would have welcomed more technical details at the presentation but, well…)

Web of data, a.k.a. Semantic Web. The conference started with a great workshop on Linked Data. I again rely on my twitter notes and the general twitter stream for more details; no need to repeat them here. Suffice it to say that, beyond the individual papers, there was a general “buzz” in the air, a general enthusiasm that was reflected by the high number of participants (over 100). For anybody interested, it is worth looking at all the papers; they were good! Having said that, what I am really waiting for is to see many real applications of the LOD data (and not only experimental, university usage), but that takes its time; there was no really breathtaking news on that at the workshop.

But, of course, the workshop was for the converted; what was more interesting was to see that the Linked Data concept, and the Semantic Web in general, created more and more interest at the conference proper, and not only among the long-time Semantic Web adepts. Jim Hendler did a surprise presentation at the Developers’ track (surprise, because an announced speaker could not come, so he took his place), talking to non-Semantic Web developers about what can be done already today with this technology, about the excitement that is out there, about the companies that have already picked up this technology. It was good to get these messages out there again and again. Georgi Kobilarov also did a great presentation on DBpedia at the track; there were several people I talked to afterward who were really carried away by the possibilities opened up by having access to a huge amount of data through the unifying abstraction of RDF, RDFS, and possibly (a little bit of :-) OWL.

I also went to the Semantic Web refereed paper track, obviously. I must admit I was a little bit disappointed, because lots of colleagues that I would typically see at such an event were not around. I presume ISWC has now become a major competitor to WWW in this area and, when money is tight, people have to make a choice. In earlier years ISWC was considered to be much more theoretical while WWW had more practical papers, but the last few ISWCs I attended seemed to indicate that this is changing. I think any of the WWW papers could have been presented at ISWC without any problem. As a consequence, I guess many people decided that ISWC is a better place to be. It will be interesting to see how things will evolve in the future; it is not impossible that the Semantic Web, as a topic, will gradually move away from WWW to ISWC. (I would expect specifically Linked Data papers to appear at ISWC very soon!)

That being said: it was nice to see a paper on DERI Pipes (by Danh Le-Phuoc et al) or on Triplify (by Sören Auer et al). This is not the first time I heard about these, but it is good to have them more widely published. There was a paper on a rule system benchmark (by Senlin Liang et al); although I am no expert on this, with the advancement of RIF it will be good to have such benchmarks put forward. The paper of Philippe Cudré-Mauroux et al on the disambiguation of IDs in linked data caught my attention: with the advancement of linked data we enter (as the presenter put it) an “ID Jungle”, with tons of URIs referring, more or less, to the same concept (e.g., a specific person), and a simple owl:sameAs is not an ideal solution to handle this. The idMesh system provides a means to analyze relationships among those IDs. I must admit I did not follow all the details of the paper, but it is certainly one of the papers I will have to study in more detail when I get to it!

W3C’s “camps”. W3C tried another model this year, replacing the more traditional W3C tracks by two ‘camps’ on the mobile web and the social web. But… this is where the large number of parallel tracks backfired: I could not go to either of them :-( There were all kinds of overlaps with other presentations (e.g., the social web camp fully coincided with the Semantic Web paper track). Pity, because the feedback I heard from participants was very positive. Sigh. Well, actually, courtesy of Fabien Gandon, I was virtually present at the social web camp, witness this slide.

It was a slightly exhausting but good week!

March 14, 2009

The art of consensus… in standards and in politics?

When you work at, or with, W3C (or any other standard-setting organization, for that matter) there is always a discussion on the pros and cons of consensus building. It is hard to achieve, not always pretty, and it is certainly one of the reasons why the process slows down. But most of the participants recognize the benefits, too. It is nevertheless not always easy to strike the right balance between consensus and speed, that is for sure.

I realized the other day that some of the political and economic discussions these days provide nice analogies. As we all know, the economic turmoil of the past few months forces all governments around the globe to do something. But there is no agreement on what this “something” is; all governments are frenetically trying to give it some shape. This is also true in the country where I happen to live, namely the Netherlands. A few weeks ago the government made some dramatic announcements on the possible effects of the crisis, and also declared that major changes have to be made in the economic and social fabric of the country. And since then? Well, enter the typical Dutch approach: consensus building.

One has to know a little bit about how the political system works in this country. There are elections, of course, and the various parties make all kinds of promises before them. But after those elections comes the next phase: building a coalition (no single party ever gains an absolute majority). This coalition is based on building consensus. The future coalition parties come together, and they shape what is called a “government contract” (after all, this country built its wealth on trade!). This is a real contract, which all parties sign, and which reflects the consensus among the parties on what they can achieve together and what they cannot. Each party has to give up some of its electoral promises, but the whole country understands that and, as long as it is clearly stated in that contract, it is perfectly all right. From that point on, the government’s job is, essentially, to fulfil that contract. Of course, creating such a contract is a long process (last time around it took 5-6 months to build a government!). However, the result is that, comparatively, the stability of Dutch governments, and indeed of society as a whole, is quite remarkable when compared to many countries around. In the 20 years that I have been here I have seen only a few minor strikes (nothing compared to France or Italy…), no major social unrest, and all this coupled with a relatively high living standard.

So, to come back to the current state of affairs: the economic turmoil means that, in fact, a new contract has to be signed, because the old one has been rendered essentially moot by the banking crisis. So the government parties and the major trade unions are now fiercely negotiating to find a new consensus. This has been going on for weeks, and nobody knows what the outcome will be. Maybe I will have to work longer for my pension, maybe I will have tax reductions, maybe I will have to pay a higher tuition fee for my son… all of these are on the negotiating table. What is interesting is to see the sharp contrast between this process and the way the crisis is handled in some other countries (like those that I follow more closely, i.e., France or Hungary, where the governments seem to take fairly one-sided steps without much consultation with the rest of society). The Dutch way is certainly way slower and, well, maybe more boring (it is more fun seeing strikes paralysing a whole country like France than just waiting for these merchants to finish their negotiations :-) but, maybe, more beneficial in the long term. We shall see, of course; I may be wrong. But consensus building may prove beneficial again in the long run.

B.t.w., this Dutch way (which is also used in Belgium, actually) has even gained a name: the “polder model”, and it even has a Wikipedia page!

January 14, 2009

A different usage of RDFa…

Talis and the University of Plymouth have just published a new SW Case Study at W3C. It is really a nice system, which helps university students and instructors alike. There is no need for me to go into the details of the case study (I would probably not do it justice anyway); it is much better if you read it right at the source.

However, there is a small detail (compared to the rest of the study) that caught my attention because it describes a possible usage of RDFa that, I must admit, I did not consider before. Indeed, here is what the text says:

The interface to build or edit lists uses a WYSIWYG metaphor implemented in Javascript operating over RDFa markup, allowing the user to drag and drop resources and edit data quickly, without the need to round trip back to the server on completion of each operation. The user’s actions of moving, adding, grouping or editing resources directly manipulate the RDFa model within the page. When the user has finished editing, they hit a save button which serialises the RDFa model in the page into an RDF/XML model which is submitted back to the server. The server then performs a delta on the incoming model with that in the persistent store. Any changes identified are applied to the store, and the next view of the list will reflect the user’s updates.
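
For the RDF-minded reader, the “delta” at the end of that workflow boils down to a set difference between two graphs; a naive sketch with RDFLib (ignoring the complications that blank nodes introduce in practice) would be:

def delta(incoming, stored):
    # incoming and stored are RDFLib Graph instances.
    # Triples present in the incoming model but not in the store are additions;
    # triples in the store but missing from the incoming model are removals.
    additions = set(incoming) - set(stored)
    removals  = set(stored) - set(incoming)
    return additions, removals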

April 24, 2008

Semantic Web W3C Track at WWW2008

Filed under: Semantic Web,Work Related — Ivan Herman @ 3:51

Yesterday I chaired a Semantic Web session at the W3C Track at WWW2008. Nice turnout (about 100 people), and I had to cut the discussions to keep within schedule, which is always a good sign…

Three presentations, fairly different from one another. Tom Heath and Chris Bizer made a presentation (co-authored with Tim Berners-Lee) on the Linking Open Data project. Really good stuff. Maybe the most impressive part was when Chris flipped through the figures on the “current” status of the linked dataset, starting from a year ago at WWW2007 up to April 2008. And the fact that, actually, we have essentially lost track of how many triples are out there; there are simply too many of them! I also did not know that Tom worked on Revyu by automatically adding information coming from DBpedia to an entry. I really hope that the coming year will see lots of user applications that rely on this huge amount of public RDF data out there…

Raphaël Troncy made a presentation on managing multimedia content on the Semantic Web. The situation today is really a maze, with all kinds of standards, semi-standards, etc., on how to describe, annotate, and reason about, say, video. Lots of work ahead, both in the Semantic Web area and in others. Think of the fact that we still do not have a generally accepted URI to describe something like an area in an image, or a specific point in time in a video. (There was, actually, a short discussion after the presentation on how some of the current URI schemes fit, or do not fit, general Web Architecture…)

Huajun Chen gave an overview of what is happening in the Semantic Web area in China. In two words: a lot. Some of the technologies developed in China are now well-known all around, some of them less so. We should realize that there are more Semantic Web related blogs and subscribers to local mailing lists there than anywhere else… I think one of the challenges is to bind the various SW communities across the boundaries of languages, where Chinese is probably the largest “local” community. I do not have any magic bullet here, but presentations like Huajun’s are important to have…

February 21, 2008

SW for Health Care and Life Sciences Workshop, W3C Track

Filed under: Semantic Web,Work Related — Ivan Herman @ 11:10

The program for WWW2008 is really shaping up. I already blogged a while ago on the SW-related stuff at the conference, and on the LOD workshop program yesterday. Well, the program of the Health Care and Life Sciences Workshop is also public now. Again, lots of great stuff there. Last but not least: the program of the W3C Track is also public with, as usual, a SW session (and others!).

It will be an interesting week (and an interesting place).
