Akoma Ntoso, Standards, W3C

Automating Legal References in Legislation

This is a blog I have wanted to write for quite some time. It addresses what I believe to be the single most important issue when modeling information for legal informatics. It is also, I believe, the most urgent aspect that we need to agree upon in order to promote legal informatics as a real emerging industry. Today, most jurisdictions are simply cobbling together short-term solutions without much consideration for the big picture. With something this important, we need to look at the big picture first and come up with a lasting solution.

Citations, references, or links are a very important aspect of the law. Laws are inherently a web of interconnections and interdependencies. Correctly resolving those connections allows us to correctly interpret the law. Mistakes or ambiguities in how those connections are made are completely unacceptable.

I work on projects around the world, in addition to my work on the OASIS LegalDocumentML technical committee. As I travel to the four corners of the Earth, I am starting to see more clearly how this problem can be solved in a clean and extensible manner.

There are, of course, already many proposals to address this. The two I have looked at the most are both from Italy:
A Uniform Resource Name (URN) Namespace for Sources of Law (LEX)
Akoma Ntoso References (in the process of being standardized by OASIS)

My thoughts derive from these two approaches, both of which I have implemented in one way or another, with varying degrees of success. My earliest ideas were quite similar to the LEX-URN proposal in being based around URNs. However, with time Fabio Vitali at the University of Bologna has convinced me that the approach he and Monica Palmirani put forth with Akoma Ntoso, using URLs, is more practical. While URNs have their appeal, they have never achieved the critical mass of adoption needed to be practical. Also, the general reaction I have gotten to LEX-URN encoded references has not been positive. There is just too much special encoding going on within them for them to be readable by the uninitiated.

Requirements

Before diving too deeply into this subject, let’s define some basic requirements. In order to be effective, a reference must:
• Be unambiguous.
• Be predictable.
• Be adaptable to all jurisdictions, legal systems, and all the quirks that arise.
• Be universal in application and reach.
• Be implementable with current tools and technologies.
• Be long-lasting and not tied to any specific implementation.
• Be understandable to mere mortals like myself.

URI/IRI

URIs (Uniform Resource Identifiers) give us a way to identify resources in a computing system. We’re all familiar with URLs that allow us to retrieve pages across the web using hierarchical locations. Less well known are URNs which allow us to identify resources using a structured name which presumably will then be located using some form of a service to map the name to a location. The problem is, a well-established locating service has never come about. As a result, URNs have languished as an idea more than a tool. Both URLs and URNs are forms of URIs.

IRIs are a generalization of URIs that allow characters outside of the ASCII character set supported by normal URIs. This is important in jurisdictions that use characters beyond what ASCII supports.

Given the current state of the art in software technology, basing references on URIs/IRIs makes a lot of sense. Using the locator variant – URLs, or their internationalized IRI equivalents – is the safer and more universally accepted approach.

FRBR

FRBR is the Functional Requirements for Bibliographic Records. It is a conceptual entity-relationship model developed by librarians for modeling bibliographic information in databases. In recent years it has received a fair amount of attention as the basis for legal references. In fact, both the LEX-URN and the Akoma Ntoso models are based, somewhat loosely, on it. At times, there is some controversy as to whether this model is appropriate or not. My intent is not to debate the merits of FRBR. Instead, I simply want to acknowledge that it provides a good overall model for thinking about how a legal reference should be constructed. In FRBR, there are four main entities:
1. Work – The work is the “what”, allowing us to specify what it is that we are referring to, independent of which version or format we are interested in.
2. Expression – The expression answers the “from when” question, allowing us to specify, in some manner, which version, variant, or time frame we are interested in.
3. Manifestation – The manifestation is the “which format” part, where we specify the format that we would like the information returned as.
4. Item – The item, finally, answers the “from where” question, allowing us to specify which source the information should come from when multiple sources are available.

That’s all I want to mention about FRBR. I want to pick up the four concepts and work from them.
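
To keep these four concepts straight through the rest of this discussion, here is a minimal sketch of how they might map onto the parts of a reference. The type and field names are mine, purely for illustration, and not part of either proposal:

```typescript
// A hypothetical decomposition of a legal reference into FRBR-like parts.
// Only the "work" is mandatory; the rest refine the request.
interface LegalReference {
  work: string;           // the "what", e.g. "/us-ca/codes/gov/sec500"
  expression?: string;    // the "from when", e.g. "2012-01-01" or a version label
  manifestation?: string; // the "which format", e.g. "pdf", "xml", "html"
  item?: string;          // the "from where", e.g. "leginfo.ca.gov"
}

// Example: section 500 of the California Government Code, as of
// January 1, 2012, as PDF, from the official California source.
const example: LegalReference = {
  work: "/us-ca/codes/gov/sec500",
  expression: "2012-01-01",
  manifestation: "pdf",
  item: "leginfo.ca.gov",
};
```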

What do we want?

Picking up the Akoma Ntoso approach of specifying a reference as a URL, and mindful of our basic requirements, a useful way to reference a resource is as a hierarchical URL: start by specifying the jurisdiction and then work down, level by level, to the item in question.

This brings me to the biggest hurdle I have come across when working with the existing proposals. It’s not terribly clear what a reference should look like when the item being referenced is a sub-part of a resource modeled as an XML document. For instance, how would I refer to section 500 of the California Government Code? Without putting in too much thought, the answer might be something like /us-ca/codes/gov.xml#sec500 – a URL identifying the Government Code followed by a fragment identifier specifying section 500. The LEX-URN proposal actually suggests using the # fragment identifier, referring to the fragment as a partition.

There are two problems with this solution though. First, any browser will interpret such a reference in two parts – the part before the # as the resource to be retrieved from the server, and the part after it as an “id” to the item to scroll to. Retrieving the entire Government Code when all we want is the one sentence in Section 500 is a terrible solution. Second, it captures, possibly for all time, how a large document happened to be constructed out of sub-documents. For example, is the US Code one very large document, does it consist of documents made out of the Titles, or, as it is quite often modeled, is every section a different document? It would be better if references did not capture any part of this implementation decision.

A better approach is to allow the “what” part of a reference to be specified as a virtual URL all the way down to whatever is wanted, even when the “what” is found deep inside an XML document in a current implementation. For example, the reference would better be specified as /us-ca/codes/gov/sec500. We’re not exposing in the reference where the document boundaries currently exist.
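
To make the point concrete, here is a minimal sketch of what the resolver side might look like, assuming a purely hypothetical lookup table. The virtual reference stays the same no matter how the physical XML documents are carved up:

```typescript
// Hypothetical mapping from a virtual "what" path to wherever the content
// physically lives today. Callers never see these document boundaries.
interface PhysicalLocation {
  file: string;  // the XML document that currently holds the content
  xpath: string; // where the requested fragment sits within that document
}

function locate(workPath: string): PhysicalLocation {
  // Today the Government Code might be stored one section per file...
  if (workPath === "/us-ca/codes/gov/sec500") {
    return { file: "gov/sections/sec500.xml", xpath: "/section[@id='sec500']" };
  }
  // ...tomorrow it could be one large file per code; only this table changes.
  throw new Error(`No mapping for ${workPath}`);
}
```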

On to the next issue, what happens when there is more than one possible way to reference the same item? For example, the sections in California’s codes, as is usually the case, are numbered sequentially with little regard to the heading hierarchy above the sections. So a reference specified as /us-ca/codes/gov/sec500 is clear, concise, and unambiguous. It follows the manner in which sections are cited in the text. But /us-ca/codes/gov/title1/div3/chap6/sec500 is simply another way to identify the exact same section. This happens in other places too. /us-ca/statutes/2012/chap5 is the same document as /us-ca/bills/2011/sb730. So two paths identify the same document. Do we allow two identities? Do we declare one as the canonical reference and the other as an alternate? It’s not clear to me.

What about ambiguity? Mistakes happen and odd situations arise. Take a look at both Chapter 14s that exist in Division 6 of Title 1 of the California Government Code. There are many reasons why this happens. Sometimes it’s just a mistake and sometimes it’s quite deliberate. We have to be able to support this. In California, we disambiguate by using “qualifying language” which we embed somehow into the reference. The qualifying language specifies the last statute to create or amend the item needing disambiguation.

The From When do we want it?

A hierarchical path identifies, with some disambiguation, what it is we want. But chances are that what we want has varied over time. We need a way to specify the version we’re looking for, or to ask for the version that was valid at a specific point in time. Both the LEX-URN and the Akoma Ntoso proposals suggest using an “@” sign followed by some nomenclature that identifies a version or date. (The Akoma Ntoso proposal adds the “:” sign as well.)
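
Setting aside, for a moment, the complication I describe next, the basic point-in-time lookup is easy enough to sketch. Assume, purely for illustration, that each stored expression records the date it took effect:

```typescript
// Hypothetical version index for a single work. Pick the most recent
// expression that took effect on or before the requested date.
interface ExpressionRecord {
  effectiveDate: string; // ISO date, e.g. "2011-10-09"
  contentRef: string;    // pointer to the stored XML for this version
}

function expressionAsOf(
  versions: ExpressionRecord[],
  date: string // e.g. "2012-01-01" from .../sec500@2012-01-01
): ExpressionRecord | undefined {
  return versions
    .filter(v => v.effectiveDate <= date) // ISO dates compare lexicographically
    .sort((a, b) => b.effectiveDate.localeCompare(a.effectiveDate))[0];
}
```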

A problem does arise with this approach though. Sometimes we find that multiple versions exist at a particular date. These versions are all in effect, but based on some conditional logic, only one might be operational at a particular time. How one deals with operational logic can be a bit tricky at times. That remains an open issue for me.

Which Format do we want?

I find specifying the format to be relatively uncontroversial. The question is whether we specify the format using well-established file extensions such as .pdf, .odt, .docx, .xml, and .html, or whether we instead try to be more precise by embedding or encoding the MIME type into the reference. Personally, I think that simple extensions, while less rigorous and subject to unfortunate variations and overlaps, are far more likely to be adopted than any scheme built around MIME types. Simple generally wins over rigorous but more complex solutions.
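
If the simple-extension approach wins out, the resolver’s side of the bargain is just a small lookup from extension to MIME type. A sketch, with a deliberately incomplete table:

```typescript
// Hypothetical mapping from a reference's extension to the MIME type the
// resolver should use when it builds the response.
const manifestations: Record<string, string> = {
  pdf: "application/pdf",
  xml: "application/xml",
  html: "text/html",
  odt: "application/vnd.oasis.opendocument.text",
  docx: "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
};

function mimeTypeFor(reference: string): string {
  const match = reference.match(/\.([a-z0-9]+)$/i);
  const ext = match ? match[1].toLowerCase() : "html"; // default when no extension
  return manifestations[ext] ?? "text/html";
}
```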

The From Where should it come?

This last part, the “from where should it come” part, is something that is often omitted from the discussion. However, in a world where multiple libraries offering the same resource will quite likely exist, this is really important. Let’s take a look at the primary example once more. We want section 500 of the California Government Code. The reference is encoded as /us-ca/codes/gov/sec500. Where is this information to come from? Without a domain specified, our URL is a local URL, so the presumption is that it will be locally resolved – the local system will find it, somehow.

What if we don’t want to rely on a local resolution function? What if there are numerous sources of this data and we want to refer to one of them in particular? When we prepend the domain, aren’t we specifying where we want the information to come from? So if we say http://leginfo.ca.gov/us-ca/codes/gov/sec500, aren’t we now very precisely specifying the source of the information to be the official California source? Now, say the US Library of Congress decides to extend Thomas to offer state legislation. If we want to specify that copy, we would simply construct a reference as http://thomas.loc.gov/us-ca/codes/gov/sec500. It’s the same URL after the domain. If we leave the URL as simply /us-ca/codes/gov/sec500, we have a general reference and we leave it to the local system to provide the resolution service for retrieving and formatting the information. We probably want to save references in a general fashion without a domain, but we certainly will need to refer to specific copies within the tools that we build.
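
In code, the distinction is simply whether a domain has been prepended to the stored reference. A tiny sketch, reusing the domains from the example above:

```typescript
// A stored, general reference: no domain, so resolution is left to the local system.
const general = "/us-ca/codes/gov/sec500";

// Pin the same reference to a particular source when a tool needs to be specific.
function atSource(reference: string, source: string): string {
  return `http://${source}${reference}`;
}

atSource(general, "leginfo.ca.gov");  // the official California copy
atSource(general, "thomas.loc.gov");  // the hypothetical Library of Congress copy
```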

Resolvers

The key to making this all work is having resolvers that can interpret standardized references and find a way to provide the correct response. It is important to realize that these URLs are all virtual URLs. They do not necessarily resolve to files that exist. It is the job of the resolving service to either construct the valid response, possibly by digging into databases and files, or to negotiate with other resolvers that might do all or part of the job of providing a response. For example, imagine that Cornell University offers a resolver at http://lii.cornell.edu. It might, behind the scenes, work with the official data source at http://leginfo.ca.gov to source California legislation. Anyone around the world could use the Cornell resolver and be unaware of the work it is doing to source information from resolvers at the official sources around the world. So the local system would be pointed to the Cornell service, and when the reference /us-ca/codes/gov/sec500 arose, the local system would defer to the LII service for resolution, which in turn would defer to California’s official resolver. In this way, the resolvers would bear the burden of knowing where all the official data sources around the world are located.
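
Here is a rough sketch of the delegation I have in mind. The routing table and domain names are invented for illustration, and a real resolver would also have to handle the version and format parts discussed earlier:

```typescript
// A hypothetical resolver: answer locally when possible, otherwise delegate the
// same virtual reference to an upstream resolver that knows the jurisdiction.
const upstreamResolvers: Record<string, string> = {
  "us-ca": "http://leginfo.ca.gov", // official California source (illustrative)
  "us": "http://thomas.loc.gov",    // hypothetical federal source
};

declare function lookupLocally(reference: string): Promise<Response | undefined>;

async function resolve(reference: string): Promise<Response> {
  const local = await lookupLocally(reference); // e.g. a cached or mirrored copy
  if (local) return local;

  const jurisdiction = reference.split("/")[1]; // "/us-ca/codes/..." -> "us-ca"
  const upstream = upstreamResolvers[jurisdiction];
  if (!upstream) throw new Error(`No resolver known for ${jurisdiction}`);

  // Pass the same virtual reference through, unchanged, to the upstream resolver.
  return fetch(upstream + reference);
}
```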

Examples

So to end, I would like to sum up with some examples:

[Note that the links are proposals, using a modified and simplified form of the Akoma Ntoso proposal, rather than working links at this point]

/us-ca/codes/gov/sec500
– Get section 500 of the California Government Code. It’s up to the local service to decide where and how to resolve the reference.

http://leginfo.ca.gov/us-ca/codes/gov/sec500
– Get Section 500 of the California Government Code from the official source in California.

http://lii.cornell.edu/us-ca/codes/gov/sec500
– Get Section 500 of the California Government Code from Cornell’s LII and have them figure out where to get the data from.

/us-ca/codes/gov/sec500@2012-01-01
– Get Section 500 of the California Government Code as it existed on January 1, 2012.

/us-ca/codes/gov/sec500@2012-01-01.pdf
– Get Section 500 of the California Government Code as it existed on January 1, 2012, in PDF format.

/us-ca/codes/gov/title1/div3/chap6/sec500
– Get Section 500 of the California Government Code, but with the full hierarchy specified.
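
To tie these examples back to the four FRBR parts, here is a rough parser for this simplified syntax. It reflects my own simplified reading of the proposal, not anyone’s official grammar:

```typescript
// Split a reference like "http://leginfo.ca.gov/us-ca/codes/gov/sec500@2012-01-01.pdf"
// into the four FRBR-ish parts. Purely a sketch of the simplified syntax above.
function parseReference(ref: string) {
  const domainMatch = ref.match(/^https?:\/\/([^/]+)/);
  const item = domainMatch ? domainMatch[1] : undefined;          // from where
  let rest = domainMatch ? ref.slice(domainMatch[0].length) : ref;

  const formatMatch = rest.match(/\.([a-z0-9]+)$/i);
  const manifestation = formatMatch ? formatMatch[1] : undefined; // which format
  if (formatMatch) rest = rest.slice(0, -formatMatch[0].length);

  const [work, expression] = rest.split("@");                     // what / from when
  return { work, expression, manifestation, item };
}

parseReference("/us-ca/codes/gov/sec500@2012-01-01.pdf");
// => { work: "/us-ca/codes/gov/sec500", expression: "2012-01-01",
//      manifestation: "pdf", item: undefined }
```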

This post has gotten very long and I have only just started to scratch the surface. I haven’t addressed multilingual issues, alternate character sets, or a host of other issues at all. It should already be apparent that this is all simply a natural extension of the URLs we already use, but with sophisticated services underneath resolving to items other than simple files. Imagine for a moment how the field of legal informatics could advance if we could all agree on something this simple and comprehensive soon.

What do you think? Are there any other proposals, solutions, or prototypes out there that address this? How does the OASIS LegalDocumentML work factor into this?

Akoma Ntoso, Hackathon, HTML5, Standards, W3C

Update on our Web-based Legislative Editor

It’s been a while since my last blog post. I’ve been busy working on a number of activities. As a result, I have a lot of news to announce regarding the web-based editor, previously known as the AKN/Editor, that we originally built for the “Unhackathon” back in May.

As you might already have guessed, it has a new name. The new name is “LegisPro Web” which now more clearly identifies its role and relationship to Xcential’s XMetaL-based standalone editor “LegisPro”. Going forward, we will be migrating much of the functionality currently available in LegisPro into LegisPro Web.

Of course, there is now a new web address for the editor – http://legisproweb.com. As before, the editor prototype is freely available for you to explore at this address.

As I write this early Sunday morning, I am in Ravenna, Italy, where I just participated in the LEX Summer School 2012 put on by the University of Bologna. On Monday, the Akoma Ntoso Developer’s Workshop starts at the same venue. In addition to listening to the other developers present their work, I will be spending an afternoon presenting all the ins and outs of the LegisPro Web editor. I’m excited to have the opportunity to learn about the other developers’ experiences with Akoma Ntoso and to share my own experiences building a web-based XML editor.

Last month we demonstrated the LegisPro Web editor at the National Conference of State Legislatures’ (NCSL) annual summit in Chicago. It was quite well received. I remain surprised at how much interest there is in an editor that is targeted to a tagging role rather than an editing role.

Of course, there has been a lot of development of the editor going on behind the scenes. I have been able to substantially improve the editor’s overall stability and its compliance with Akoma Ntoso, as well as add significant new functionality. As I become more comfortable and experienced with the new HTML5 APIs, I am starting to build up a good knowledge base of how best to take advantage of these exciting new capabilities. Particularly challenging for me has been learning how to work intuitively with the range selection mechanism. The origins of this mechanism are loosely related to the similar mechanism that is available within XMetaL. While I have used XMetaL’s ranges for the past decade, the HTML5 mechanisms are somewhat more sophisticated, which makes them correspondingly harder to master.
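
For a flavor of what that mechanism involves, here is a tiny sketch using the standard Selection and Range APIs. The tag and class names are just placeholders:

```typescript
// Wrap the user's current selection in an inline element using the standard
// Selection/Range APIs. A real editor also has to cope with selections that
// cross element boundaries, which is where most of the difficulty lies.
function wrapSelection(tagName: string, className: string): void {
  const selection = window.getSelection();
  if (!selection || selection.rangeCount === 0 || selection.isCollapsed) return;

  const range = selection.getRangeAt(0);
  const wrapper = document.createElement(tagName);
  wrapper.className = className;

  // surroundContents() throws if the range only partially selects an element,
  // exactly the sort of case an editor must detect and handle itself.
  range.surroundContents(wrapper);
  selection.removeAllRanges();
}

// e.g. wrapSelection("span", "ref") to tag the selected text as a reference
```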

And perhaps the most exciting news of all is that the editor now has some customers. I’m not quite ready to announce who they are, but they do include a major government entity in a foreign country. As a result of this win, we will be further expanding our support of Akoma Ntoso to cover Debate and Debate Report documents in addition to the Bill and Act document types we currently support. In addition, we will be adding substantial new capabilities desired by our new customers. I should also mention that Ari Hershowitz (@arihersh) has joined our team and will be taking the lead in delivering the customized solution to one of our customers.

Alongside all this development, work continues at the OASIS LegalDocumentML Technical Committee. Look for me to add support for the Amendment document type to the editor in support of this activity in the not-too-distant future.

All in all, I think we’re making terrific progress bringing the LegisPro Web editor to life. I started work on the editor as a simple idea a little more than six months ago. We used the “Unhackathon” in May to bootstrap it to life. Since then, it’s taken off all on its own and promises to become a major part of our plans to build a legitimate legal informatics industry around an Akoma Ntoso based standard.

HTML5, Standards, W3C

Why Not Build a Legislative Editor out of Google Docs?

Ever since I started working on my legislative editor (http://legalhacks.org/editor), I’ve been asked over and over if I was using Google Docs, and if not, why not.

So to answer the first part of the question, the answer is a simple no. I don’t use Google Docs or anything like it.

There are a number of good reasons why I take a different path. The first reason is that Google simply doesn’t open Google Docs to that type of customization. Even if the customization capability were there, there are still plenty of reasons why choosing to build a legislative editor around Google Docs would not necessarily make sense. We can start with the privacy and security concerns of storing legislation in Google’s cloud. Let’s set those aside, though, to focus on the technical issues of the editor itself.

We’ll start by considering the difference between a word processor and an XML editor. When done right, an XML editor should superficially look just like a word processor. With a lot of effort, an XML editor can also be made to feel like a word processor. When I was implementing XMetaL for the California Legislature, the goal was very much to achieve the look and feel of a word processor. That is possible, but only to an extent.

There is a big difference between a word processor and an XML editor. Just because modern word processors now save their data in an XML format does not make them XML editors. If you take a look at their file formats, OOXML for Microsoft Word or ODF for Open Office, you’ll see very complex and very fixed schemas that are far more oriented around presentation than one typically desires in an XML document. In fact, we try to separate presentation from structure in XML documents, while OOXML and ODF blend them together.

That separation is at the heart of the difference between a word processor and an XML editor. In a word processor you worry about things like page breaks, margins, fonts, and the various other attributes of the document that make it “pretty”. In an XML document, typically all of that is added after the fact using a “formula” rather than being customized into each document. So while what you see in an XML editor might look WYSIWYG, it’s actually more like “somewhat WYSIWYG”, and your ability to customize the formatting is quite constrained. This approach focuses you on the content and structure of the document and allows the resulting information to be targeted to many different form factors for publication. By not dictating the formatting of the document, the publication engine is freer to choose the best layout for any particular publication form – be it web, paper, mobile, or whatever.

When I explain this, the next question is whether or not my implementation is similar to how Google Docs is implemented. The answer is, again, a simple no. Google Docs’ implementation is dramatically different from the approach that I take. My approach relies on the new APIs being added to the browsers, in a consistent, standardized way under the HTML5 umbrella, that allow the text content to be selected, edited, dragged about, and formatted. While these capabilities existed in earlier browsers to some extent, the manner in which they were supported was in no way consistent. This made supporting multiple browsers really difficult, and the resulting application would be a patchwork of workarounds and quirks. Even then, the browser variations would result in inconsistent behaviors between the browsers, and the support and maintenance task would be a nightmare. It is for this reason, along with a need for page-oriented layout features, that Google abandoned the approach I am taking – though at a time when the standards that help ensure consistency were lacking.

So how does Google Docs do it then? How come they didn’t get bogged down in a sea of browser incompatibilities amongst all the legacy browsers they support? They do it by creating their own editing surface entirely in JavaScript – something codenamed “kix”. Rather than relying on the browser for very much, they instead have their own JavaScript implementation of every feature they need for the editing surface. That is how they are able to implement rulers, pages, and drag boxes in ways you’ve just never seen in a browser before. It’s an amazing accomplishment and it allows them to support a wide range of browsers with a single implementation. That’s what you can do when you’re Google and have deep pockets. It’s a very expensive and very complex solution to the problem I am attempting to solve with modern standards. So while I can reasonably support the future alone with no baggage from the past, they’re able to support future and past browsers by skipping the baggage and instead having their own custom implementation of everything. While I’m amazed at Google Docs’ accomplishment, attempting a similar thing with an XML editor would be cost and time prohibitive. Keep in mind that a lot of the capabilities of Google Docs’ editing surface deal with presentation aspects of a document, something that is of less concern to the typical XML document.

I’ve been working with XML editors for over 10 years now. Over the years, I’ve spent a lot of time wondering how one might implement a real web-based XML editor. Every time my thoughts went beyond wondering and moved towards considering, I quickly discovered that the limitations resulting from divergent implementations of the base technology would make the project impractical. That’s partly what drove Google to spend the big bucks on a custom editing surface for their word processor. Now, however, HTML5 is beginning to make a web-based XML editor a practical reality. Don’t mistake me, it’s still very difficult. Figuring out how to keep an XML document and an HTML5 view synchronized is not a simple task. While the browsers have all come a long way, they all still have their own weaknesses. Drag and drop has been broken in Safari since 5.1.2. Opera’s selection mechanism breaks when you toggle the contentEditable attribute right now. But these are problems that will disappear with time. As the standards are implemented and the bugs are fixed, I can already see how much HTML5 is going to change the application landscape. I would think long and hard about returning to traditional application development given what I now know about HTML5.
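
To give a flavor of that synchronization problem, here is one simplified way to notice which parts of the editing view have changed, using the standard MutationObserver API. The real bookkeeping is far more involved, so treat this purely as a sketch:

```typescript
// Watch the contentEditable editing surface and record which parts changed,
// so the corresponding XML can be re-serialized later. Purely illustrative.
const dirty = new Set<Element>();

function observeEditingSurface(surface: HTMLElement): MutationObserver {
  const observer = new MutationObserver(mutations => {
    for (const mutation of mutations) {
      const node = mutation.target;
      const element = node instanceof Element ? node : node.parentElement;
      if (element) dirty.add(element);
    }
  });
  observer.observe(surface, {
    subtree: true,
    childList: true,
    characterData: true,
  });
  return observer;
}
```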

Akoma Ntoso, HTML5, Standards, W3C

A Pluggable XML Editor

Ever since I announced my HTML5-based XML editor, I’ve been getting all sorts of requests for a variety of implementations. While the focus has been, and continues to be, providing an Akoma Ntoso based legislative editor, I’ve realized that the interest in a web-based XML editor extends well beyond Akoma Ntoso and even legislative editors.

So… with that in mind I’ve started making some serious architectural changes to the base editor. From the get-go, my intent had been for the editor to be “pluggable” although I hadn’t totally thought it through. By “pluggable” I mean capable of allowing different information models to be used. I’m actually taking the model a bit further to allow modules to be built that can provide optional functionality to the base editor. What this means is that if you have a different document information model, and it is capable of being round-tripped in some way with an editing view, then I can probably adapt it to the editor.

Let’s talk about the round-tripping problem for a moment. In the other XML editors I have worked with, the XML model has had to quite closely match the editing view that one works with. So you’re literally authoring the document using that information model. Think about HTML (or XHTML, from an XML perspective). The arrangement of the tags pretty much exactly represents how you think of and deal with the components of the document. Paragraphs, headings, tables, images, etc., are all pretty much laid out how you would author them. This is the ideal situation, as it makes building the editor quite straightforward.

However, this isn’t always the case. How far this is from being the case determines how feasible building an editor is at all. Sometimes the issues are minor. For instance, in Akoma Ntoso, a section’s “num” element is out-of-line with the content block containing the paragraphs. So while it is quite common for the num to appear inline in the first paragraph of the section, that isn’t how Akoma Ntoso chooses to represent it. And it gets more difficult from there when you start dealing with subsections and sub-subsections.

To deal with these sorts of issues, a means of translating back and forth between what you’re editing and the information model you’re building is needed. I am using XSL transforms, designed specifically for round-tripping, to solve the problem. Not every XML model lends itself to document authoring, but by building a pluggable translating layer I’m able to adapt to more models than I have been able to in the past.
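
In the browser, applying a pair of transforms is straightforward with the standard XSLTProcessor API; the hard part is authoring two stylesheets that are true inverses of each other. The stylesheet file names below are placeholders rather than the ones I actually use:

```typescript
// Apply a pair of XSL transforms to move between the stored information model
// (e.g. Akoma Ntoso) and the HTML used by the editing view. The stylesheets
// themselves (toView.xsl / fromView.xsl) are hypothetical placeholders.
async function loadTransform(url: string): Promise<XSLTProcessor> {
  const response = await fetch(url);
  const text = await response.text();
  const stylesheet = new DOMParser().parseFromString(text, "application/xml");
  const processor = new XSLTProcessor();
  processor.importStylesheet(stylesheet);
  return processor;
}

// Model -> editing view
async function toEditingView(model: Document): Promise<Document> {
  const processor = await loadTransform("transforms/toView.xsl");
  return processor.transformToDocument(model);
}

// Editing view -> model (the inverse transform must round-trip cleanly)
async function fromEditingView(view: Document): Promise<Document> {
  const processor = await loadTransform("transforms/fromView.xsl");
  return processor.transformToDocument(view);
}
```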

Along with these mechanisms I am also allowing for pluggable command structures, CSS styling rules, and, of course, the schema validation. In fact, the latest release of the editor at legalhacks.org has been refactored and now somewhat follows this pluggable architecture.
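
Roughly speaking, a plug-in in this sense is just a bundle of the pieces that vary from one information model to the next. A hypothetical shape for such a bundle, with all the names invented for illustration:

```typescript
// A hypothetical descriptor for a pluggable information model: everything the
// base editor needs to know that is specific to one document type.
interface EditorPlugin {
  name: string;                  // e.g. "akoma-ntoso-bill"
  toViewStylesheet: string;      // XSLT: model -> editing view
  fromViewStylesheet: string;    // XSLT: editing view -> model
  schema: string;                // schema used for validation
  stylesheets: string[];         // CSS applied to the editing view
  commands: Record<string, (editor: unknown) => void>; // model-specific commands
}
```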

Next I plan to start working with modules like change tracking / redlining, metadata generation (including XBRL generation), and multilingual support following this pluggable architecture. I’m totally amazed at how much more capable HTML5 is turning out to be when compared to old-fashioned HTML4. I can finally build the XML editor I always wanted.

Akoma Ntoso, Hackathon, HTML5, Standards, W3C

An HTML5-Based XML Editor for Legislation!

UPDATE: (May 17, 2012) For all the people that asked for more editing capabilities, I have updated the editor to provide rudimentary cut/copy/paste capabilities via the normal shortcut keys. More to follow as I get the cycles to refine the capabilities.

I’ve just released my mini-tutorial for the HTML5-based XML editor I am developing for drafting legislation (actually it’s best for tagging existing legislation at this point).


Please keep in mind that this editor is still very much in development – so it’s not fully functional and bug-free at this point. But I do believe in being open and sharing what I am working on. We will be using this editor at our upcoming International Legislation Unhackathons (http://legalhacks.org) this coming weekend. The editor is available to experiment with at the legalhacks.org site.

There are three reasons I think this approach to building editors is important:

  1. The editor uses an open standard for storing legislative data. This is a huge development. The first couple of generations of legislative systems were built upon totally proprietary data formats. That meant all the data was locked into fully custom tools built years ago and could only be understood by those systems. Those systems were very closed. The last decade brought the XML era of legislative tools. This made it possible to use off-the-shelf editors, repositories, and publishing tools. But the XML schemas that everyone used were still largely proprietary, and that meant everyone still had to invest millions of dollars in semi-custom tools to produce a workable system. The cost and risk of this type of development still put the effort out of reach of many smaller legislative bodies.

    So now we’re moving to a new era, tools based on a common open standard. This makes it possible for an industry of plug-and-play tools to emerge, reducing the cost and risks for everyone. The editor I am showing uses Akoma Ntoso for its information model. While not yet a standard, it’s on a standards track at the OASIS Standards Consortium and has the best chance of emerging as the standard for legal documents.

  2. The editor is built upon open web standards. Today you have several choices when you build a legislative editor. First, you can build a fully custom editor. That’s a bad idea in this day and age when there are so many existing editors to build upon. So that leaves you with the choice of building your editor atop a customizable XML editor or customizing the heck out of a word processor. Most XML editors are built with this type of customization in mind. They intrinsically understand XML and are very customizable. But they’re not the easiest tools to master – for either the developer or the end user. Another approach is to use a word processor and bend and distort it into being an XML editor. This goes well beyond the original intent of the word processor, and the mismatch between a word processor’s mission and a legislative drafting tool’s leaves lots of room for issues in the resulting legislation.

    There is another problem as well with this approach. When you choose to customize an off-the-shelf application, you have to buy into the API that the tool vendor supplies. Chances are that API is proprietary and you have no guarantee that they won’t change it on a whim. So you end up with a large investment in software built on an application API that could become almost unrecognizable with the next major release. So while you hope your investment will be good for 10-12 years, you might be in for a nasty surprise at a most inopportune time well before that.

    The editor I have built has taken a different approach. It builds upon the W3C standards being developed around HTML5. These APIs are standards, so they won’t change on a whim – they will be very long lived. If you don’t like a vendor and want to change, doing so is trivial. I’m not just saying this. The proof is in the pudding. This editor works on all four major browsers today! This isn’t just something I am planning to support in the future; it is something I already support. Even while the standards are still being refined, this editor already works with all the major browsers. (Opera is lagging behind in support for some of the application APIs I am using.) Can you do that with an application built on top of Microsoft Office? Want to switch to Open Office and keep an application you built? You’re going to have to rewrite it.

  3. Cloud-based computing is the future. Sure, this trend has been obvious for years, but the W3C finally recognizes the web-based application as being more than just a sophisticated website. That recognition is going to change computing forever. Whether your cloud is public or private, the future lies in web-based applications. Add to that the looming demands for more transparent government and open systems that facilitate real public participation, and it becomes obvious that the era of the desktop application is over. The editor I am building anticipates this future.

  4. I’ve been giving a lot of thought to where this editor can go. As the standards mature, as I learn to tame the APIs, and as the browsers finish the work remaining for them, it seems that legislative drafting is only the tip of the iceberg for this approach to XML-based editing. Other XML models such as DITA and XBRL might well be worth exploring.

    What do you think? Let me know what ideas you have in this area.

Process, Standards, W3C

What is a Semantic Web?

Tim Berners-Lee, inventor of the World Wide Web, defines a semantic web quite simply as “a web of data that can be processed directly and indirectly by machines”. In my experience, that simple definition quickly becomes confusing as people add their own wants and desires to the definition. There are technologies like RDF, OWL, and SPARQL that are considered key components of semantic web technology. It seems though that these technologies add so much confusion through abstraction that non-academic people quickly steer as far away from the notion of a semantic web as they can get.

So let’s stick to the simple definition from Tim Berners-Lee. We will simply distinguish the semantic web from our existing web by saying that a semantic web is designed to be meaningful to machines as well as to people. So what does it mean for a web of information to be meaningful to machines? A simple answer is that there are two primary things a machine needs to understand about a web: first, what the pages are about, and second, what the relationships that connect the pages are about.

It turns out that making a machine capable of understanding even the most rudimentary aspects of pages and the links that connect them is quite challenging. Generally, you have to resort to fragile custom-built parsers or sophisticated algorithms that analyze the document pages and the references between them. Going from pages with lots of words connected somehow to other pages to a meaningful information model is quite a chore.

What we need to improve the situation are agreed upon information formats and referencing schemes in a semantic web that can more readily be interpreted by machines. Defining what those formats and schemes are is where the subject of semantic webs starts getting thorny. Before trying to tackle all of this, let’s first consider how this all applies to us.

What could benefit more from a semantic web than legal publishing? Understanding the law is a very complex subject which requires extensive analysis and know-how. This problem could be simplified substantially using a semantic web. Legal documents are an ideal fit to the notion of a semantic web. First of all, the documents are quite structured. Even though each jurisdiction might have their own presentation styles and procedural traditions, the underlying models are all quite similar around the world. Secondly, legal documents are rich with relationships or citations to other documents. Understanding these relationships and what they mean is quite important to understanding the meaning of the documents.

So let’s consider the current state of legal publishing – and from my perspective – legislative publishing. The good news is that the information is almost universally available online in a free and easily accessed format. We are, after all, subject to the law, and providing access to that law is the duty of the people who make the laws. However, providing readable access to the documents is often the only objective, and any means of accomplishing that objective is considered good enough. Documents are often published as PDFs, which are nice to read but really difficult for computers to understand. There is no uniformity between jurisdictions, minimal analysis capability (typically word search), and links connecting references and citations between documents are most often missing. This is a less than ideal situation.

We live in an era where our legal institutions are expected to provide more transparency into their functions. At the same time, we expect more from computers than merely allowing us to read documents online. It is becoming more and more important to have machines interpret and analyze the information within documents – and without error. Today, if you want to provide useful access to legal information by providing value-added analysis capabilities, you must first tackle the task of interpreting all the variations in which laws are published online. This is a monumental task which then subjects you to a barrage of changes as the manner in which the documents are released to the public evolves.

So what if there was a uniform semantic web for legal documents? What standards would be required? What services would be required? Would we need to have uniform standards or could existing fragmented standards be accommodated? Would it all need to come from a single provider, from a group of cooperating providers, or would there be a looser way to federate all the documents being provided by all the sources of law around the world? Should the legal entities that are sources of law assume responsibility for publishing legal documents or should this be left to third party providers? In my coming posts I want to explore these questions.
