Lawsuit, Process, technology, Transparency

Transparent legislation should be easy to read — part II

I have some good news to share. After almost two years under the cloud of litigation regarding a challenge to one of our patent applications, we have reached a settlement that concludes the issue. The Patent Trial and Appeal Board (PTAB) ruled in our favor by denying the patent derivation claim made against us. This was on top of earlier rulings in our favor. What is more, both our patent applications have now been allowed.

While the terms of the settlement remain confidential, this has been a costly exercise for us. For me personally, this was very difficult. Not only did I have to defend my honor and integrity, but I also had to spend half of my personal life savings in the defense of Xcential – with no guarantee that I will ever be able to recoup that expense. Using my own savings for a lot of the legal bills was the only way to ensure that Xcential would be able to go on. This has certainly affected my life.

If there is something good to come out of this exercise, it is validation that we’re onto something very valuable. With the litigation behind us and both patents in our pocket, we are now able to move forward with our plans to serve our markets – selectively, of course.

Over a decade ago, I wrote a blog questioning why federal bills are written the way they are. For someone experienced with state legislation, federal legislation is quite cryptic and difficult to understand. It turns out that the style of amending law found in federal legislation is the common form and can be found around the world. In the U.S. Congress, this style is known as cut and bite[1] amending. With this style, individual word changes are spelled out in narrative form. For comparison, here is a section from California Assembly Bill 2748 from the current session, shown in the restatement style the states use:

SECTION 1.  Section 21377.5 of the Water Code is amended to read:

21377.5.  (a) Notwithstanding Section 21377 of this code or Section 54954 of the Government Code or any other provision of law, the Board of Directors of the Tri-Dam Project, which is composed of the directors of the Oakdale Irrigation District and the South San Joaquin Irrigation District, may hold no more than four regular meetings annually at ~~the~~ _a_ Tri-Dam Project ~~offices. The Board of Directors of the Tri-Dam Project shall adopt a resolution that determines the location of the Tri-Dam Project offices.~~ _office that is located in Sonora, California, or Strawberry, California, or within 30 miles of either city._

(b) The notice and conduct of these meetings shall comply with the provisions of the Ralph M. Brown Act (Chapter 9 (commencing with Section 54950) of Part 1 of Division 2 of Title 5 of the Government Code).

You can clearly see what changes are being made. However, if written using cut and bite amending, this equivalent section would read something like:

SECTION 1.  Section 21377.5 of the Water Code is amended by:

(a) Deleting the sixth “the” and replacing it with “a”.

(b) After “Tri-Dam Project”, deleting to the end of the subsection and replacing with “office that is located in Sonora, California, or Strawberry, California, or within 30 miles of either city.”

This very terse form of amendment gives no context for the change being made. The change in subsection (a) is completely meaningless without any context. As a result, a politician tasked with approving these changes must do a significant amount of work to understand what the changes are about and why they are being made.

Back in 2013, I questioned why the form found in most U.S. states, including my state of California, wasn’t used. As seen in the example above, the U.S. states use a different approach to amending – known as amending in full. In this style of amending, the entire section containing the change is restated, and the change is shown (some of the time) as stricken and inserted text, as you would find with track changes in a word processor. This approach has the benefit of making the change much clearer by providing its complete context. In California, this approach is mandated by the State Constitution as amended by Proposition 1-a of 1966, which was overwhelmingly approved by the voters. The Speaker of the California Assembly at the time, Jesse Unruh, had pushed through this constitutional amendment to establish a professional legislature less beholden to the special interests and other pressures that were undermining the effectiveness of the legislature at the time. His reforms were quite sweeping. Among the many changes, one part of this initiative was to make legislation more transparent.

The specific provision that was added, Section 9 of Article IV of the California Constitution, reads:

“A statute shall embrace but one subject, which shall be expressed in its title. If a statute embraces a subject not expressed in its title, only the part not expressed is void. A statute may not be amended by reference to its title. A section of a statute may not be amended unless the section is re-enacted as amended.”

This section of the California Constitution contains two rules. The first is the single subject rule, which limits the scope of each statute. The second is the re-enactment rule, which mandates the amend-in-full approach by requiring that each amended section be re-enacted in full (essentially a repeal of the prior section and enactment of a new amended section as a single action). Most U.S. states have these same rules or very similar rules. These two rules go together. One worry with the re-enactment rule is that, by opening an entire section for re-enactment, unwelcome amendments might be added as part of the political process of winning votes. The single subject rule is a guard against that behavior.

At the time of my original blog, I learned that adopting these rules in Congress would be impossible. For one thing, the U.S. Code is less regular than state codes or revised statutes, especially in the non-positive titles, and re-enacting an entire section would be quite complex and could cause other difficulties. Re-enactment rules require consistently bite-sized sections. In addition, the House’s equivalent of the single subject rule, the germaneness rule adopted in 1789, didn’t have quite the same effectiveness as the single subject rule of California. Apparently, the Senate’s equivalent rules, found in Senate Standing Rule XVI and Rule XXII, had even more limited applicability.

As a result, I proposed in my blog that amendments in context be used. With amendments in context, a proposed bill is drafted using the amend-in-full style that U.S. states use, and a cut-and-bite style bill is then generated from it using automation. With this approach, you get the best of both worlds: the bill is drafted in a form that is easy to understand and easy to manage, while the worries of unleashing this amending style are circumvented by retaining the existing amending style for parliamentary procedures. However, at the time, the technology just wasn’t available. Under contract to the Law Revision Counsel, Xcential was just beginning down the path of converting the U.S. Code to processable XML that could feed the automation tools. While we had tools to offer that could do amendments in context, we were constrained by our agreement with California as to how much of our technology could be reused – they had been worried we could inadvertently undermine the successes of Proposition 1-a by empowering the special interests with our technology.
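To make the amendments-in-context idea concrete, here is a minimal sketch of how the generation step could work. The markup below is invented for illustration — it is not USLM, Akoma Ntoso, or LegisPro’s actual format — but it shows the essence: the drafter records the change in context as inserted and deleted text, and the cut-and-bite instructions fall out mechanically.

```xml
<!-- Sketch only: invented markup, not any real schema. The drafter works
     in the amend-in-full style, recording changes inline. -->
<section id="s21377.5">
  <subsection id="s21377.5-a">
    ... may hold no more than four regular meetings annually at
    <del>the</del><ins>a</ins> Tri-Dam Project
    <del>offices. The Board of Directors of the Tri-Dam Project shall adopt
    a resolution that determines the location of the Tri-Dam Project
    offices.</del><ins>office that is located in Sonora, California, or
    Strawberry, California, or within 30 miles of either city.</ins>
  </subsection>
</section>

<!-- From the <del> and <ins> markup above, a generator could emit the
     cut-and-bite form automatically:
       (a) Deleting the sixth "the" and replacing it with "a".
       (b) After "Tri-Dam Project", deleting to the end of the subsection
           and replacing with "office that is located in Sonora, California,
           or Strawberry, California, or within 30 miles of either city." -->
```

The narrative instructions are never written by hand; they are derived from the recorded differences, so the two forms cannot drift apart.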

Today, more than a decade later, much has changed. The U.S. Code is available in an XML format we designed, and a new, more modern LegisPro is available that is both web-based and much more powerful than what we had back then. But there have been other changes too. The Posey Rule has been adopted, requiring that a comparative print be provided alongside proposed law to show how the law will be affected. This comparative print is also generated by Xcential technology and alleviates much of the problem by allowing politicians to understand more easily what it is they are voting on. However, it still leaves the complexity of creating and managing cut-and-bite amendments to be addressed.

This problem isn’t limited to U.S. federal bills. It’s a common problem wherever cut and bite amending is employed, particularly in Commonwealth countries and others with Westminster-based legislative traditions, even if the term cut and bite isn’t used.

At Xcential, we’re going to return to our core mission – to make government processes better through technology. Our goal is improved efficiency, increased accuracy, and most importantly, better transparency for the benefit of the citizens.

  1. The term cut and bite is also sometimes used to refer to the form of amending used to propose amendments to bills themselves. Another term for these types of bill amendments is page and line amendments, as they are usually expressed as references to page and line numbers rather than to provisions.
Lawsuit, technology, Track Changes, Transparency

Lawsuit Update and a Tale about Bicycles

In the past couple of weeks, the first two court rulings have come out concerning our battle with the Akin Gump law firm. Both rulings have been in our favor. The first ruling denied Akin Gump’s motion to dismiss, instead allowing four of the five claims in our countersuit – with the judge calling them “plausible”. The second ruling denied Akin Gump’s attempt at a preliminary injunction to stop us from responding to actions from the U.S. Patent Office. The judge found that “there is not a substantial likelihood that (Akin Gump) will prevail.” She then added, “(T)o preclude Xcential from moving forward…would discourage invention, it would discourage innovation, it would discourage companies from investing their own resources to try to come up with workable solutions to commonly identified problems.”

After reading the transcript, I have an interesting analogy to make. It is based on one that the Akin Gump attorney chose to use – comparing our dispute to the invention of a bicycle. While not perfect, the analogy is still a very good one.

Imagine you have an idea for a two-wheeled mode of transportation that will help you do your job more effectively. You discover that this idea is called a bicycle and that there are several bicycle manufacturers already – so you approach one to see if their product can do the job. While this company did not invent the bicycle, they specialize in making them and have been doing so for many years. This company only makes bicycles. They are not the manufacturer of the commodity materials that make up a bicycle, such as the metal tubing, or even of the tools that bend the tubes to make handlebars. While not a household name, they are very well known among bicycle enthusiasts around the world.

However, you discover that there is a problem. In the highly regulated world of bicycles (maybe automobiles would have been a better analogy), this bicycle maker doesn’t have a bicycle that conforms to your local regulatory market. You have a quick 30-minute call with the bicycle maker, and they say they’re familiar with the regulations in your market, but that making the changes your local regulations require and getting those changes certified is a costly business and is only done for customers willing to help foot the bill. You indicate that you’re still interested and accept their suggestion that they build a prototype, at no charge, of a bicycle suitable for your regulatory market. You send them a hand-drawn map of the routes you want to ride the bicycle on so that they can understand the regulatory concerns that might apply.

With the suggestion of funding, the bicycle maker goes off and starts building a demonstrable prototype of a modified bicycle model that conforms to your local regulatory requirements. As a potential customer for this localized version of the bicycle, you get limited updates from the salesperson you were working with to ensure that you’re still interested and to indicate that the prototype hasn’t been forgotten. He makes a point of buttering you up, as salespeople are wont to do. You never interact with the engineers building the prototype at all.

It turns out that the regulations require mudguards over the wheels and reflectors on the front and rear. In the process of attaching these parts, the engineers at the bicycle company come up with some nifty brackets for attaching the mudguards and reflectors to the bicycle frame in accordance with the regulatory requirements. While the mudguards and reflectors are commodities, the brackets used to attach them are novel, so the engineers apply for patent protection for these brackets. You play no role in the design or manufacture of these brackets.

When the bicycle maker brings their modified bicycle to your office to show you what can be done, you’re wowed by the result, but don’t really have the budget to help cover the cost of getting the changes certified. The bicycle company shelves the project without a paying customer.

Later on, while pondering whether to patent your idea for a bicycle, you come across the bicycle maker’s patent application. Without much of an understanding of how bicycles are made, you conflate the general idea of a bicycle (it was invented decades earlier and is prior art at this point), a bicycle that is adapted to your regulatory market (not something that is patentable), and the nifty brackets that hold the reflectors and mudguards on the frame and are necessary to achieve regulatory compliance in your local market (which you played no part in designing or manufacturing – but which are patentable).

technology, Track Changes, Transparency, Uncategorized

Xcential in Litigation with Akin Gump Law Firm

As many of you know by now, Xcential and I have been placed in an unfortunate position of having to deal with litigation with a large Washington D.C. lobbying firm, Akin Gump. We have been getting a lot of press, and I feel it is my duty to explain the situation as best I can.

Back in late 2018, Xcential was approached by an attorney at Akin Gump interested in applying our bill drafting and amending application, LegisPro, to improve the process of drafting and amending federal legislation. Initially, he was interested in using LegisPro to generate a bill amendment. This eventually evolved into an investigation into whether LegisPro could generate a federal amending bill from an in-context marked up copy of the law itself. The Akin Gump attorney expressed frustration at having to type, in narrative format, the proposed changes to a federal bill, and sought a simpler solution.

Amending law in context was a use case for LegisPro’s amendment generating capabilities that we had long anticipated. I had even written a blog exploring the idea of amendments-in-context in 2013, which got a lot of coverage on Chris Dorobeck’s podcast and at govloop. I have long made a habit of reporting on developments in the legal informatics industry and the work I do at my Legix.info blog, including this blog post from April 2018 which describes LegisPro’s feature set at that time.

First, a little explanation about how LegisPro is configured to work. As there is no universal way to draft and amend legislation, we must build a custom document model that configures LegisPro for each jurisdiction we work with. Legislation, particularly at the federal level, is complex, so this is a substantial task. The customer usually pays for this effort. For the federal government, we did have document models for parts of LegisPro, but they were specific to different use cases and belonged to the federal government, not Xcential. We were not entitled to use these configurations, or any part of them, outside the federal government. This means that, out of the box, LegisPro was not tailored for federal legislation in a way that we could share with Akin Gump. We would have to build a new custom document model to configure LegisPro to draft federal legislation for them.

Once the attorney had had an opportunity to try out a trial version of LegisPro using an account we had provided, we had a meeting in May 2019. To provide clarity to the conversation, as our terminology and his were different, I introduced the terms amending-in-full, cut-and-bite amendments, and amendments-in-context to the Akin Gump attorney. This is the terminology we use in the legal informatics industry to describe these concepts. Seeing the attorney’s enthusiasm towards addressing the problem and being convinced this was a true sales opportunity, I said I would find the time to build a small proof of concept to show how LegisPro’s existing cut-and-bite amendment generator could be configured to generate federal style amendments. I had previously arranged to have a partial conversion of a part of the U.S. Code done by a contractor to support Akin Gump’s trial usage of LegisPro. In the months that followed, and using this U.S. Code data set, I set about configuring LegisPro to the task, much of it on my own personal time. Akin Gump did not cover the cost of any of this, including my time or the contractor’s fees.

I flew to Washington D.C. for an August 29th, 2019 meeting in the attorney’s office to deliver the demonstration of a working application in person. He was duly impressed and kept exclaiming “Holy S###” over and over. We explained that this was just a proof of concept and that we could build out a complete system using a custom document model for federal legislation. He had already explained that the cost for a custom document model was probably out of reach for Akin Gump, so we should consider the implementation as an Xcential product rather than a custom application for Akin Gump. We considered this approach, and our offer to implement a solution was a very small percentage of what the real cost would be. We would need to find many more customers on K Street to cover the development cost of a non-government federal document model. He said this approach would earn Xcential a “K Street parade,” a term we used to describe the potential project of building a federal document model to be sold on K Street.

As it turns out, even our modest offer of about $1,000 to $2,000 per seat (depending on various choices) and a range of $50,000 to $175,000 for a custom document model and other services was too much for Akin Gump, and they walked away. Our 2019 pricing sheet clearly mentioned that the per seat prices did not include the cost for document conversion or configuration/customization and that a custom document model might be required for an additional fee. Furthermore, we had discussed the need for this custom work on several occasions. Our offer was more than fair as building these systems typically runs into the millions of dollars. Despite considerable costs to us in terms of my time, the use of a contractor, and travel to Washington D.C., we were never under any obligation to deliver anything to Akin Gump, and Akin Gump never paid us anything.

In the process of configuring LegisPro to generate federal amending bills, I came up with some implementation changes to the core product which we felt were novel. They built on a mechanism we had already built for a different project and for which we had separately applied for a patent a year earlier. We went ahead and filed for a patent for those changes, describing the overall processing model using a term I coined called “bill synthesis,” echoing my experience with logic synthesis earlier in my career from which I had drawn inspiration.

Two and a half years later, we learned that Akin Gump had filed a complaint to assume ownership of our patent application. This made no sense at all, as our patent application is very implementation-specific to LegisPro’s inner workings. The Akin Gump attorney involved had played no part in the design or coding of these details. What he had done was describe his frustrations with the federal bill drafting use case to us – something we had long been aware of. He was simply asking us for a solution to a problem that was widely known in the industry and previously known to us.

When I got to see the assertions that Akin Gump makes in its court filings, I was astonished by their breadth. Rather than technical details, the assertions are all high-level ideas, insisting that an idea for an innovation the attorney had conceived of in the summer of 2018 is the “proverbial ‘holy grail’” to the legislative drafting industry. However, this innovative idea, as described in the complaint, is an application that closely resembles LegisPro as he would have experienced it during his trial usage in early 2019, including descriptions of key services and user interface features that have long existed.

What does not get any mention in Akin Gump’s filings are the details of our implementation on which we have based our patent claims — around document assembly and change sets. This is not surprising. The attorney is not a software developer, and Akin Gump is not a software firm; it is a law firm. Neither he nor the firm has any qualifications in the realm of complex software development.

I can appreciate the attorney’s enthusiasm. When I came across the subject back in 2001, I was so enthusiastic that I started a company around it.

But for me, this is very hurtful. I worked hard, much of it on my own time over the summer, to build the proof of concept for Akin Gump. Yet I am portrayed in a most unflattering way. While I recall a cordial working arrangement throughout the effort, that is not how Akin Gump’s complaint reads. There is no appreciation for the complexity of writing, or even configuring, software to draft and amend legislation. The attorney forgets to mention that I succeeded in demonstrating a working application in the form of a proof of concept on August 29th in his office. There is no understanding of the deep expertise that I brought to the table. The value of the time I had already spent on this project was likely worth more than the amount we asked from Akin Gump to deliver a solution.

I wake up many nights angry at what this has come to. We work hard to make our customers happy and to do so at a very fair price. We are a small company, based in San Diego, and having to defend ourselves against the accusations of a wealthy law firm is a costly and frustrating undertaking that distracts from our mission.

What is ironic is that, by taking a litigation route to claim a patent, Akin Gump has all but closed off any likelihood of ever having the capability. There are few software firms in the world capable of creating such a system. In the U.S., I only know of a couple, none of which have the experience and products that Xcential can bring to bear. For this solution to ever see the light of day at the federal level, it is going to take a substantial effort, with many trusted parties working collaboratively.

I once had a boss who would often say, “when all you have is a hammer, everything looks like a nail.” Just because you have something in your toolbox doesn’t mean you should reach for it; you should choose wisely whether using it is the right course of action. For a law firm, litigation is an easy tool to reach for. But is it the right tool?

Process, technology

Building an Agile Team

We’ve recently built our first true Agile development team. It’s been quite a learning experience, but now we’re seeing the results.

At Xcential, we have lots of waterfall process experience. Our backgrounds come from big waterfall companies like Boeing and Xerox. Over the years we’ve worked on very large projects in very traditional ways. In more recent years, we’ve also had a few Agile projects, largely initiated by customers, that have been good training grounds for us — for better or worse.

Like many companies, in recent years we’ve fallen victim to what the U.S. Department of Defense calls Agile BS — when you apply Agile terminology to your existing way of doing business. It’s a way to dilute Agile and turn it into nothing but a series of buzzwords. We’ve had sprints, standups, product owners, backlogs, and all the other bits of Agile — but we haven’t had the mindset that is necessary to make the Agile process work.

To build an Agile team, we have needed to make a few key changes. First, we had to assemble a team of developers who would gel together to become a performing team as fast as possible. Then, in order to overcome the inertia of the old way of doing things, we had to ensure that the team was trained to tackle the challenge in front of them. Finally, we have had to ensure that all the team members felt empowered to rise up and take ownership of their project.

An Agile team must be self-managing. This means that all the team members must feel the responsibility to deliver and have a commitment to do their part. Getting to that point has been a challenge — from getting management to let go and trust the team to getting the team members to step up and trust that their responsibilities are real.

I like to think of managing a team as a game of chess. In a traditional arrangement, the managers are the back row while the developers are the interchangeable pawns in the front row — to be assigned here, there, and everywhere.

In an Agile team, the roles are different. The team is self-managing. There is no front row and back row. Everyone has an important role in the team. This means that everyone should be challenged to step up to a bigger role than they would have had in a traditional team. While some team members are timid at first, having everyone feel empowered to play an important role is a key to the success of Agile.

We still have some challenges. Developers are still bouncing from one project to another. This discontinuity of effort shows as a reluctance to commit to the story points that will ultimately be necessary to complete the project in a timely way. It also distracts from our efforts to form team bonds. It’s hard to consider the team your home team when you’re feeling like a visitor to every team you work on.

Nonetheless, we’re starting to see real results from our prototype Agile team. Continuous integration procedures have been put in place, ensuring a “done” product increment at the end of each sprint. For various reasons, delivery of these iterations to customers has not yet started, but this will be rectified at the end of the next sprint. We have peer reviews, which are both improving the quality of the product and providing some degree of cross-training. The team’s velocity is improving, albeit at a slow rate. Over the next few sprints we will start integrating more and more with the other projects — and hopefully drawing them all into our new and more efficient way of building software.

Uncategorized

Mapping Amending Language to Akoma Ntoso Modifications

In my last blog, I talked about Xcential’s long history working with change management as it applies to legislation, and my personal history with the subject in other fields.

In this blog, I’m going to focus on change management as it is used in Akoma Ntoso. I’m going to use, as my example, a piece of legislation from the California Legislature. Having implemented the drafting system used in Sacramento (long before Akoma Ntoso), I have a somewhat unique ability to understand how change management is practiced there.

First of all, we need to introduce some Akoma Ntoso terminology. In Akoma Ntoso, a change is known as a modification. There are two primary types of modifications:

  1. Active modifications — modifications that one document makes to another document.
  2. Passive modifications — modifications being proposed within the same document.
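For the XML-inclined, here is a rough sketch of how Akoma Ntoso records these modifications as metadata in an analysis block. The element names come from the Akoma Ntoso vocabulary, but the identifiers and URIs are invented for illustration, and a real document would carry considerably more detail:

```xml
<!-- Sketch of Akoma Ntoso modification metadata; eIds and URIs invented. -->
<analysis source="#drafter">
  <activeModifications>
    <!-- An active modification: this bill substitutes new wording for a
         section of the Government Code. -->
    <textualMod type="substitution" eId="amod_1">
      <source href="#sec_1"/>
      <destination href="/akn/us-ca/act/code-gov/!main#sec_1029"/>
    </textualMod>
  </activeModifications>
  <passiveModifications>
    <!-- A passive modification: a house amendment proposes inserting a new
         section into this very bill. -->
    <textualMod type="insertion" eId="pmod_1">
      <source href="#amendment_1"/>
      <destination href="#sec_1"/>
    </textualMod>
  </passiveModifications>
</analysis>
```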

The snippet I am using as an example is a cropped section of AB17 from the current session.

In California, many changes are shown using what they call redlining — or what you may know as track changes. However, it would be a mistake to interpret them literally as you would in a word processor — part of the reason why it’s difficult to apply a word processor to the task of managing legislative changes.

In the snippet above, there are a number of things going on. Obviously, Section 1 of AB17 is amending Section 1029 of the Government Code. Because California, like most U.S. states, only allows its codes or statutes to be amended in full, the entire section must be restated with the amended language in the text. This is a transparency measure to make it clearer exactly how the law is being changed. The U.S. Congress does not have this requirement, and federal laws may use the cut-and-bite approach, where changes can be hidden in simple word modifications.

Another thing I can tell right away is that this is an amended bill — it is not the bill as it was introduced. I will explain how I can tell this in a bit.

From a markup standpoint, there are three types of changes in this document. Only two of these three types are handled by Akoma Ntoso:

  1. As I already stated, this bill is amending the Government Code by replacing Section 1029 with new wording. This is an active change in Akoma Ntoso of type substitution.
  2. Less obvious, but Section 1 of AB17 is an addition to the bill as originally introduced. I can tell this because the first line of Section 1, known in California as the action line, is shown in italic (and in blue, which is a convention I introduced). The oddity here is that while the section number and the action line are shown as an insertion, the quoted structure (an Akoma Ntoso term) is not shown as inserted. The addition of this section to the original bill is a passive change of type insertion.
  3. Within the text of the new proposed wording for Section 1029, you can also see various insertions and deletions. Here, you have to be very careful in interpreting the changes being shown. Because this is the first appearance of this amending section in a version of AB17, the insertions and deletions shown reflect proposed changes to the current wording of Section 1029. In this case, these changes are informational and are neither an active nor a passive change. Had these changes been shown in a section of the bill that had already appeared in a previous version of AB17, then these changes would be showing proposed changes to the wording in the bill (not necessarily to the law) and they would be considered to be passive changes.

The rules are even more complex. Had Section 1 been adding a section to the Government Code, then the quoted text being added would be shown as an insertion (but only in the first version of the bill that showed the addition). Even more complex, had Section 1 been repealing a section of the Government Code, then the quoted text being repealed would be shown as a deletion (and would be omitted from subsequent versions of the bill). This last case is particularly confusing to the uninitiated because the passive modification of type insertion is adding an active modification of type repeal. The redlining shows the insertion as an italic insertion of the action line while the repeal is shown as a stricken deletion of the quoted structure.
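To sketch that last case in markup (again with invented identifiers, and simplified well beyond what a real document model needs), the passive insertion wraps the action line while the actively repealed quoted structure is struck:

```xml
<!-- Sketch only: a section newly inserted into the bill (passive insertion)
     that repeals a section of existing law (active repeal). -->
<section eId="sec_2">
  <num><ins>SEC. 2.</ins></num>
  <content>
    <p><ins>Section 1030 of the Government Code is repealed.</ins></p>
    <!-- The repealed text is quoted and shown stricken; it would be omitted
         from subsequent versions of the bill. -->
    <quotedStructure>
      <del>1030. A peace officer shall ...</del>
    </quotedStructure>
  </content>
</section>
```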

The lesson here is that track changes in legislation aren’t as literal as the track changes we may have learned in a word processor. There is a lot of subtle meaning encoded into the representation of changes shown in the document. Being able to control track changes in very complex ways is one of the challenges of building a system for managing legislative changes.

Uncategorized

Xcential is a Change Management Company

At Xcential, we typically describe ourselves as a legislative technology company. While that is correct, the true answer is more nuanced than that. We purposefully don’t solve problems that are mainstream and relatively easily solved by other off-the-shelf software. Instead, we say that we focus on drafting but, in saying that, we understate what we do. In practice, we focus on a very complex and high-value problem called change management — as it relates to legislation. Few people truly know how to solve this problem.

Twenty years ago, the founders of Xcential worked at an XML database company that was a subsidiary of Xerox. We started Xcential because we thought legislation was one of the best applications for XML we had ever come across. It was the change management aspects that fascinated me in particular. While my knowledge of legislation was based on high school civics class, I had a lot of experience in the field of change management.

At the start of my career, I was an electronics design engineer at the Boeing Company. While there, I worked on a very sophisticated form of change management — concurrent fault simulation of behavioral representations of electronic systems. Fault simulation is a deliciously complex differencing problem. In legislation, we think of changes as amendments to the text and we record them as insertions and deletions. In fault simulation, the changes aren’t textual, they are behavioral. We record those changes as observable differences from expected results in something called a fault dictionary. With this dictionary of simulated faults, you are able to backtrack to predict which likely faults are causing the problem.

While managing amendments and managing faults in an electronic system might seem a world apart, algorithmically they are surprisingly similar. In an amended bill, the objective is to efficiently record changes to a document as deltas (differences) recorded inline within the original text. When simulating an electronic system, the objective is to record thousands of potential failures as shadow circuits (differences) against a single good simulation executing concurrently. The shadow circuits, while a dynamic part of a simulation run, are very analogous to the changes recorded in a document. It’s a very clever technique for efficiently simulating the behavior of thousands of things that might go wrong without having to run thousands of individual simulations.

Getting my head around the complexities of concurrent fault simulation taught me how to think in a world of asynchronous recursion — electronic systems are inherently asynchronous. Complex recursion in legislative documents is something I must frequently wrestle with, from parsing and responding to complex requests for documents or parts of documents in the URL Resolver to managing the layers of sets of changes that exist in the U.S. Code as laws are amended.

Change management has a lot of applications — not just in managing faults in an electronic circuit or amendments in legislation. Another Boeing project, one I was not directly involved with, allowed every airliner coming off the assembly line to have its own unique document configuration that would evolve through the thirty or so years the aircraft was in service. So many possibilities…

Process, technology, Uncategorized

GitHub Copilot — Is it the future?

Several months ago, I got admitted to the GitHub Copilot preview. For those of you who don’t know what Copilot is, it’s an AI-based plugin to Visual Studio Code that helps you by suggesting code for you to type. If you like the suggestion, you hit tab, and on you go.


It may sound like magic, and in some ways, it does seem like that. Apparently, it learns from the vast base of open-source code found in the GitHub repositories. This, of course, has led to the inevitable charges that it violates fair use of that code and even that it will ultimately replace developers’ jobs much as factory automation has replaced workers. From my experience, this is more about sensationalism than anything real to worry about.

In my recent posts, I’ve covered the DIKW pyramid. It seems we’ve been stuck in the information layer for a long time, only barely touching the knowledge layer in very rudimentary ways. Yes, there are tools like Siri and Alexa which claim to be AI-based virtual assistants, but they just feel like a whole bunch of complicated programming to achieve something that is more annoying than helpful. There is Tesla Autopilot for self-driving cars, but that just seems scary to me. (Full disclosure: I don’t even trust cruise control.) To me, GitHub Copilot is the first piece of software that truly seems to drive deep into the knowledge layer and even reach toward the wisdom layer. It’s truly simulating some sort of real smartness.

While the sensationalists love to make it seem that Copilot is lifting code from other people’s work and offering it up as a suggestion, I’ve seen nothing whatsoever that suggests that that is what it is doing. Instead, it truly seems to understand what I am doing. It makes suggestions that could only come from my code. It uses my naming conventions, coding standards, and even my coding style. It seems to have analyzed enough of the code base in my application to understand what local functions and libraries it could draw upon. The code it synthesizes is obviously built on templates that it has derived by learning. But those templates aren’t just copies of other people’s work. This is how synthesis works in the CAD world I come from (actually, it’s a bit more sophisticated than the synthesis I knew in CAD many years ago), and this is a natural next step in coding technologies.

I’ve been experimenting with what Copilot can do — how far-reaching its learning seems to be. It’s able to help me write JavaScript, and what it is able to suggest is remarkable. However, coding assistance is not its only trick. It even helps with writing comments — sometimes with a bit of an attitude too. Last week I was adding a TODO: comment into the loader part of LegisPro to note that it needed to be modernized. Copilot’s unsolicited suggestion for my comment was “Replace the loader with a real loader”. Thanks, Copilot. As Han Solo once said, “I’m not really interested in your opinion, 3PO”.

Of course, this all leads to the inevitable question. Can it be trained to write legislation? Much to my surprise, it seemingly can. How and why it knows this is completely unknown to me. It’s able to suggest basic amending language and seems to know enough that it can use fragments of quotes from Thomas Jefferson and Benjamin Franklin. I find it incredible that it can even understand the context of legislation and that I did not have to tell it what that context was.

So am I sold on this new technology? Well, yes and no.

It’s not the scary source-code-stealing and eavesdropping application some would make it out to be. The biggest drawback to it is the same reason I don’t even trust cruise control in my car. It’s not that I don’t trust the computer. It’s that I don’t trust myself to not become lazy and complacent and come to believe the computer is right. I’ve already come across a number of situations where I’ve accepted Copilot’s suggestion without too much thought, only to needlessly waste hours tracking down a problem that would never have existed if I had actually taken the time to write the code.

It’s an interesting technology, and I believe it’s going to be an important part of how software development evolves in the coming years. But as with all new technologies, it must be adopted with caution.

Akoma Ntoso, Standards, technology, Uncategorized

What is Good Legislative XML?

I’m often asked what makes one XML model better than another when it comes to representing laws and regulations. Just because a document is modeled in XML does not mean that it is useful in that form — the design of the schema matters in terms of what it enables or facilitates.

We have a few rules of thumb that we apply when either designing or adopting an XML schema:

  • Is it semantic?
    Reason: In order to process the information in a document, you have to understand what it is and what it means.
  • Is the presentation separated from the semantics as much as possible?
    Reason: We have moved beyond paper, and nowadays it’s important to present information in form factors that just don’t suit the legacy constraints imposed by printing on paper. (A sketch contrasting presentational and semantic markup follows this list.)
  • Is all the text (excluding any metadata section) in the natural reading order?
    Reason: The simplest way to present and process the text in a document is in the reading order of the text. This is particularly important if the presentation is to be added to the XML using simple CSS styling (as opposed to HTML transformation) and when the text is subject to complex amending instructions.
  • Does it, to the fullest extent possible, avoid the use of generated text?
    Reason: Similar to the last rule, it’s important that the text to be displayed or amended actually be represented in the document. Generating text opens up a can of worms and can require sophisticated additional processing. Also, since the historical record of the text is essential for enacted law, having part of the text be generated by an external algorithm requires that the algorithm itself become part of the permanent record.
  • Is every provision that needs data associated with it permanently identifiable?
    Reason: With modern automation comes the need to not only manage the text of a provision but also state information. For example, is the current status of the provision pending, effective, repealed, or spent? While some of the metadata might be stored with the XML representation of the provision itself, sometimes it is better to store that metadata in a separate part of the document or in an external database. In these cases, it’s important to be able to permanently associate this external metadata with the provision — and this usually requires an immutable (permanent) identifier.
  • Is every provision that is referred to easily locatable?
    Reason: Laws are full of references (or citations), either to provisions within the same document or to other documents or provisions within those documents. There needs to be a way to accurately and efficiently traverse and process these references. This usually requires a locating identifier that can unambiguously identify the provision being referred to.
  • If the XML schema is for general use, is there an extensible way to add missing constructs?
    Reason: It is easy to claim to support all the legal traditions in the world, but extremely difficult to do so. While legal traditions are remarkably similar around the world, it’s impossible to predict every single construct that will arise — especially with documents dating back hundreds of years. There has to be a way to implement constructs that don’t intrinsically exist within the base XML schema.
  • Is there an extensible metadata mechanism?
    Reason: A primary objective for representing a legislative or regulatory document in XML is for the processability it enables. This invariably means a need to record extensive metadata about the provisions found within the document. As the automation possibilities are endless, there needs to be a way to model and record the metadata that is generated.
  • Does it provide the facilities necessary to automate according to modern expectations?
    Reason: Some structures facilitate automation while others do not. For instance, flat structures can simplify the drafting process but make the automation process more difficult. It’s usually better to implement hierarchical structures and then hide the drafting complexity this creates with richer tools.
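To illustrate the first two rules of thumb, compare a presentational encoding of a code section with a semantic one. Both snippets are invented for illustration and are not drawn from any particular schema:

```xml
<!-- Presentational markup: only formatting is captured. Software cannot
     reliably tell that this is a section, find its number, or address
     the subsection. -->
<para><b>1029.</b> (a) Notwithstanding any other provision of law ...</para>

<!-- Semantic markup: structure and meaning are explicit, so software can
     locate, reference, amend, and track each provision. (Invented names.) -->
<section eId="sec_1029">
  <num>1029.</num>
  <subsection eId="sec_1029__subsec_a">
    <num>(a)</num>
    <content><p>Notwithstanding any other provision of law ...</p></content>
  </subsection>
</section>
```

A stylesheet can always derive the presentation from the semantic form; recovering the semantics from the presentational form is far harder, which is why the semantic form is the better representation for processing.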

Uncategorized

Twenty Years in Legal Informatics!

Today marks my twentieth year in the field of legal informatics. It was January 4th, 2002 that we officially started Xcential. The following week, Brad and I flew up to Sacramento to start our new project to replace California’s aging mainframe system with a modern XML-based drafting system. At the time, with a background in CAD automation, I was relying on what I remembered from high school civics class as my understanding of the field. We’ve come a long way in those twenty years.

When we arrived in Sacramento, our charter was to work closely with the Legislative Data Center to produce a legislative drafting, amending, and publishing solution. The accompanying workflow system would be developed in-house and the database-oriented history system was to be developed by another vendor. There were a few constraints — the system had to be XML-based, the middle tier had to be Enterprise JavaBeans (EJB) and use WebLogic, and the database had to be Oracle. This last constraint had been decided somewhat mysteriously by upper management in the wake of 9/11 and left us scrambling to figure out how XML and an SQL-based relational database would work together. Fortunately, we learned that Oracle was developing XDB and they were open to using us as a guinea pig, for better or worse.

At the time we didn’t realize it, but we were the replacement for an unsuccessful attempt to build a drafting system using Microsoft Word. Somewhat strangely, while that project was wrapping up the same month we were starting, we never got any wind of that project’s existence and, to this day, I’ve never heard anyone mention anything about that project in Sacramento. The only hint we got was that we were expressly forbidden from suggesting Microsoft Word as the drafting tool. It was only when we came across the owner of the company that had performed that project at a conference, and he bitterly suggested our project would meet the same fate as his, that we realized the project had existed at all. Thankfully, he was wrong, and we deployed our solution in late 2004 for the 2005-2006 session. It’s been in use ever since.

So what has changed in the twenty years I’ve been in this field? Well, a lot has changed — and a lot has not. In my last two blogs I’ve discussed the DIKW pyramid and written about how migration through the layers can be expected to take between ten and twenty years.

When we started in 2002, the majority of jurisdictions were still mired in the tail end of the “data” era — using data entry to enter documents into mainframe systems. Other than that, there was little automation. A number of jurisdictions were starting to move forward into the “information” era, and there were two distinctly different approaches being taken. Many jurisdictions, as California had done before us, were taking a half-step into the new era using office productivity tools. The reason I consider this a half-step is because, while clearly a more modern approach than data entry into a mainframe, the step did little to prepare for the steps to come — being able to add layers of automation to increase the speed, volume, and efficiency of processing legislation. This was the lesson California had learned with their earlier project, and others have learned since — without a robust semantic information model, you just can’t build robust automation tools. Many jurisdictions did understand this and were working towards a full step using XML-based tools. Although XML tools at the time were decidedly first generation, the benefits that automation promised outweighed the risks of being an early adopter.

So where are we today? While twenty years ago most jurisdictions were at the end of the “data” era and the start of the “information” era, there has been considerable, if slow, progress. Today most jurisdictions are somewhere between the midpoint of the “information” era (mostly the office productivity approach) and the early stages of the “knowledge” era (with the XML approach). Many of the systems deployed in the mid-2000s are now starting to age out, and jurisdictions are looking to replace them with systems that can meet the modern demands of the 2020s.

As for Xcential, over the last few years we’ve been progressing from a consulting company to a product company — where we rely on third-party integrators to do implementations. This way we can leverage our 20 years of experience far more effectively. We still do our own implementations, when it makes sense, but we now offer LegisPro as a product that can be implemented by one of our partner companies, by a local integrator, or even by a jurisdiction’s own internal development team. Xcential today is very different from what it was 20 years ago, and our growth over the past year or so has been amazing — and for me quite exhausting.

It will be interesting to see where we are in another twenty years — although I may have retired by then. (most people roll their eyes at this point suggesting they think I’ll never want to retire)

Process, technology, Track Changes

Moving on Up to Document Synthesis

In my last blog, I discussed the DIKW pyramid and how the CAD world has advanced through the layers while the legal profession was going much slower. I mentioned that design synthesis was my boss Jerry’s favorite topic. We would spend hours at his desk in the evening while he described his vision for design synthesis — which would become the norm in just a few years.

Jerry’s definition of design (or document) synthesis was quite simple — it was the processing of the information found in one document to produce or update another document where that processing was not simple translation. In the world of electronic design, this meant writing a document that described the intended behavior of a circuit and then having a program that would create a manufacturable design using transistors, capacitors, resistors, etc. from the behavioral description. In the software world, we’ve been using this same process for years, writing software in a high-level language and then compiling that description into machine code or bytecode. For hardware design, this was a huge change — moving away from the visual representation of a schematic to a language-based representation similar to a programming language.

In the field of legal informatics, we already see a lot of processes that touch on Jerry’s definition of document synthesis. Twenty years ago, it was seeing how automatable legislation could be, but wasn’t, that convinced me that this field was ready for my skills.

So what processes do we have that meet this definition of document synthesis:

  • In-context amending is the most obvious process: processing changes recorded in a marked-up proposed version of a bill to extract and produce a separate amending document.
  • Automated engrossing is the opposite process — taking the amending instructions found in one document to automatically update the target document (see the sketch following this list).
  • Code compilation or statute consolidation is another very similar process, applying amending language found in the language of a newly enacted law to update pre-existing law.
  • Bill synthesis is a new field we’ve been exploring, allowing categorized changes to the law to be made in context and then using those changes and related metadata to produce bills shaped by the categorization metadata provided.
  • Automated production of supporting documents from legislation or regulations. This includes producing documents such as proclamations which largely reflect the information found within newly enacted laws. As sections or regulations come into effect, proclamations are automatically published enumerating those changes.
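As a sketch of how automated engrossing works (the markup here is invented for illustration), an engrossing engine parses the amending instruction in one document and applies it as a tracked change to the target:

```xml
<!-- Sketch only: an amendment document carries the instruction ... -->
<amendment>
  <instruction>On page 4, line 12, strike "30 days" and insert "60 days".</instruction>
</amendment>

<!-- ... which the engrossing tool resolves against the target bill,
     recording the change inline as tracked text: -->
<p>... the agency shall respond within <del>30 days</del><ins>60 days</ins>
of receipt of the request ...</p>
```

The other processes in the list are variations on the same theme: parse a description of a change found in one place, then apply it or extract it in another.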

In the CAD world, the move to design synthesis required letting go of the visually rich but semantically poor schematic in favor of language-based techniques. Initially, there was a lot of resistance to the idea that there would no longer be a schematic. While at university, I had worked as a draftsman, and even my dad had started his career as a draftsman, so even I had a bit of a problem with that. But the benefits of having a rich semantic representation that could be processed quickly outweighed the loss of the schematic.

Now, the legislative field is wrestling with the same dilemma — separating the visual presentation of the law, whether on paper or in a PDF, from the semantic meaning found within it. Just as with CAD, it’s a necessary step. The ability to process the information automatically dramatically increases the speed, accuracy, and volume of documents that can be processed — allowing information to be produced and delivered in a timely manner. In our society where instant delivery has become the norm, this is now a requirement.

Process, technology, Uncategorized

The Knowledge Pyramid

At the very start of my career at the Boeing Company, my boss Jerry introduced the Knowledge Pyramid, also known as the DIKW Pyramid, to me one evening. I had an insatiable thirst for learning, and he would spend hours introducing me to ideas he thought I could benefit from. To me, this was a profound bit of learning that would somewhat shape my career.

At the time, I was working in CAD support, introducing automation technologies to the various engineering projects around the Boeing Aerospace division. The new CAD tools were running on expensive engineering workstations and were replacing largely homegrown minicomputer software from the 1970s.

Jerry explained to me that the legacy software, largely batch tools that crunched data manually input from drawings, represented the data layer. The CAD drawings our tools produced were actually a digital representation of the designs, with sufficient information for both detailed analysis and manufacturing. It would take a generation of new technologies to advance from one layer to the next in the DIKW pyramid — with each generation lasting from ten to twenty years. His interest was in accelerating that pace, and so we studied, as part of our R&D budget, artificial intelligence, expert systems, language-based design techniques, and design synthesis.

While data was all about crunching numbers, information was all about understanding the meaning of the data. Knowledge came from being able to use the information to synthesize (Jerry’s favorite topic) new information and to gain understanding. And finally, wisdom came from being able to work predictively based on that understanding.

When I was introduced to legal informatics in the year 2000, it was a bit of a time warp to me. While the CAD world had advanced considerably, and even design synthesis was now the norm, legal informatics was stuck in neutral in the data processing world of the late 1970s and early 1980s. Mainframe tools, green screen editors, and data entry were still the norm. It was seeing this that gave me the impetus to work to advance the legal field. The journey I had just taken in the CAD world of the prior 15 years was yet to be taken in the legal field. The transition into information processing was to start with the migration to XML — replacing the crude formatting-oriented markup used in the mainframe tools with modern semantic markup that provided for a much better understanding of the meaning of the text.

To say the migration to the future has gone slowly would be an understatement. There are many reasons why this has happened:

  • The legacy base of laws has to be carried along — unchanged in virtually every way. This would be like asking Boeing to advance their design tools while at the same time requiring that every other aircraft design ever produced by the company in the prior century also be supported. For law, it is a necessary constraint, but also a tremendous burden.
  • The processes of law are bound by hard-to-change traditions, sometimes enshrined in the constitution of that jurisdiction. This means the tools must adapt more to the existing process than the process can adapt to the tools. Not only does this constraint require incredibly adaptable tools, it is very costly and dampens the progress that can be made.
  • The legal profession, by and large, is not technology driven, and there is little vision into what can be. The pressure to keep things as they are is very strong. In the commercial world, companies simply have to advance or they won’t be competitive and will die. Jurisdictions aren’t in competition with one another, and so the need to change is somewhat absent.

For advancements to come, there needs to be pressure to change. Some of this does come naturally — the hardware the old tools run on won’t last forever. New legislators entering into their political careers will quickly be frustrated by the archaic paper-inspired approach to automation they find. For instance, viewing a PDF on a smartphone is not the best user experience. It is that smartphone generation that will drive the need to change.

Over the next few blogs, I’m going to explore where legal informatics is on the DIKW pyramid and what advancements on the horizon will move us up to higher levels. I’ll also take a look at new software technologies that point the way to the future — for better or worse.

Standard
Uncategorized

I’m Back!!!

After a long hiatus away from my blog, I decided to reinstate it and get back to regular blogging.

There are going to be a few changes. While the subject stays the same, I’m returning this blog to its original intent — a personal blog about the technologies, tools, and processes I encounter and the many events I participate in. It’s going to be less about Xcential and LegisPro and more about my experiences in the field of legislative technology.

It’s not that Xcential and LegisPro don’t remain an important part of my life — they remain the central focus. However, as my blog started to become more of a marketing tool and less of a personal blog, my interest started to wane.

Another change is that I’m going to focus on simpler, more frequent posts. They will cover a range of topics:

  • Observations and discoveries about legislative technologies
  • Experiences implementing Akoma Ntoso and other XML document models around the world
  • Modern technologies I learn and apply as part of my job
  • Software development processes and practices
  • Software tools and platforms
  • Events relating to legislative technology

If there is something you think I should cover, let me know in the comments.

Standard
Akoma Ntoso, HTML5, LegisPro Sunrise, Standards, technology, Track Changes, Uncategorized, W3C

LegisPro Sunrise!!!

LegisPro Sunrise is almost done! It has taken longer than we had hoped, but we are finally getting ready to begin limited distribution of LegisPro Sunrise, our productised implementation of our LegisPro drafting and amending tools for legislation and regulations. If you are interested in participating in our early release program, please contact us at info@xcential.com. If you already signed up, we will be contacting you shortly.

LegisPro Sunrise is a desktop implementation of our web-based drafting and amending products. It uses Electron from GitHub, built from Google’s Chromium project, to bundle all the features we offer, both the client and server sides, into a single easy-to-manage desktop application with an installer that provides auto-update facilities. Right now, the Windows platform is supported, but macOS and Linux support will be added if the demand is there. You may have already used other Electron applications – Slack, Microsoft’s Visual Studio Code, WordPress, some editions of Skype, and hundreds of other applications now use this innovative new application framework.

Other versions

In addition to the Sunrise edition, we offer LegisPro in customised FastTrack implementations, or as fully bespoke Enterprise implementations where the individual components can be mixed and matched in many different ways.

Akoma Ntoso model

LegisPro Sunrise comes with a default Akoma Ntoso-based document model that implements the basic constructs seen in many parliaments and legislatures around the world that derive from the Westminster parliamentary traditions.

Document models implementing other parliamentary or regulatory traditions such as those found in many of the U.S. states, in Europe, and in other parts of the world can also be developed using Akoma Ntoso, USLM, or any other well-designed XML legislative schema.

Drafting & Amending

Our focus is on the drafting and amending aspects of the parliamentary process. By taking a digital-first approach to the process, we are able to offer many innovative features that improve and automate the process. Included out of the box is what we call amendments in context, where amendment documents are extracted from changes recorded in a target document. Other features can be added through an extensive plugin mechanism.

Basic features

Ease-of-use

While offering sophisticated drafting capabilities for legislative and regulatory documents, LegisPro Sunrise is designed to provide the familiarity and ease-of-use of a word processor. Where it differs is in what happens under the covers. Rather than drafting using a general purpose document model and using styles and formatting to try to capture the semantics, we directly capture the semantic structure of the document in the XML. But don’t worry, as a drafter, you don’t need to know about the underlying XML – that is something for the software developers to worry about.

Templates

Templates allow the boilerplate structure of a document to be instantiated when creating a new document. Out of the box, we provide generic templates for bills, acts, amendments, amendment lists, and a few other document types.

In addition to document templates, component templates can be specified, or synthesized when necessary, to be used as parts when constructing a document.

For both document and component templates, placeholders are used to highlight areas where text needs to be provided.

Upload/Download

As a result of our digital-first focus, we manage legislation as information rather than as paper. This distinction is important – the information is held in XML repositories (a form of database) where we can query, extract, and update provisions at any level of granularity, not just at the document level. However, to allow for the migration from a paper-oriented to a digital-first world, we do provide upload and download facilities.

Undo/Redo

As with any good document editor, unlimited undo and redo is supported, going back to the start of the editing session.

Auto-Recovery

Should something go wrong during an editing session and the editor closes, an auto-recovery feature restores your document to, or close to, the state it was in when the editor closed.

Contextual Insert Lists

We provide a directed or “correct-by-construction” approach to drafting. What this means is that the edit commands are driven by an underlying document model that is defined to enforce the drafting conventions. Wherever the cursor is in the document, or whatever is selected, the editor knows what can be done and offers lists of available document components that can either be inserted at the cursor or around the current selection.
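
As a rough sketch of the idea (the model and names below are invented for illustration, not our actual implementation), the editor consults a content model keyed by the cursor’s context to decide what may be offered:

    // A minimal sketch, assuming a simplified content model: the insert
    // list is derived from the document model based on the element the
    // cursor is currently inside.
    const contentModel = {
      part:       ['chapter', 'section'],
      chapter:    ['section'],
      section:    ['subsection', 'content'],
      subsection: ['paragraph', 'content']
    };

    // Only components the model allows at this point are offered.
    function insertList(contextElement) {
      return contentModel[contextElement] || [];
    }

    console.log(insertList('section')); // => ['subsection', 'content']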

Hierarchy

Document hierarchies form an important part of any legislative or regulatory document. Sometimes the hierarchy is rigid and sometimes it can be quite flexible, but either way, we can support it. The Sunrise edition supports the hierarchy Title > Part > Chapter > Article > Section > Subsection > Paragraph > Subparagraph out of the box, where any level is optional. In addition, we provide support for cross headings, which act as dividers rather than levels in the document hierarchy. Customised versions of LegisPro can support whatever hierarchy you need — to any degree of enforcement. A configurable promote/demote mechanism allows any level to be morphed into other levels up and down the hierarchy.

Large document support

Rule-making documents can be very large, particularly when we are talking about codes. LegisPro supports large documents in a number of ways. First, the architecture is designed to take advantage of the inherent scalability of modern web browsing technology. Second, we support the portion mechanism of Akoma Ntoso to allow portions of documents, at any provision level, to be edited alone. A hierarchical locking mechanism allows different portions to be edited by different people simultaneously.
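
For the curious, a portion document wraps just the provision being edited. A sketch, with invented identifiers, might look like this:

    <akomaNtoso xmlns="http://docs.oasis-open.org/legaldocml/ns/akn/3.0">
      <portion includedIn="/akn/example/act/2017/1">
        <portionBody>
          <chapter eId="chp_2">
            <num>Chapter 2</num>
            <!-- only this chapter travels to the editor and back -->
          </chapter>
        </portionBody>
      </portion>
    </akomaNtoso>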

Spelling Checker

Checking spelling is an important part of any document editor, and we have a solution – a third-party service we have tightly integrated with to give a rich and comprehensive result. Familiar red underline markers show potential misspellings. A context menu provides alternative spellings, or you can add the word to a custom dictionary.

Tagging support

Beyond basic drafting, tagging of people, places, or things referred to in the document is something for which we have found a surprising amount of interest. Akoma Ntoso provides rich support for ontologies, and we build upon this to allow numerous items to be tagged. In our FastTrack and Enterprise solutions we also offer auto-tagging technologies to go with the manual tagging capabilities of LegisPro Sunrise.

Document Bar

The document bar at the top of the application provides access to a number of facilities of the editor including undo/redo, selectable breadcrumbs showing your location in the document, and various mode indicators which reflect the current editing state of the editor.

Command Ribbons and Context Menus

Command ribbons and context menus are how you access the various commands available in the editor. Some of the ribbons and menus are dynamic, changing to reflect the location of the cursor or selection in the editor. These dynamic elements show the insert lists and any editable attributes. Of course, there is also an extensive set of keyboard shortcuts; it has been our goal to ensure that the majority of commonly used drafting tasks can be accomplished from the keyboard alone.

Sidebar

A sidebar along the left side of the application provides access to the major components which make up LegisPro Sunrise. It is here that you can switch among documents, access onboard services such as the resolver and amendment generator as well as outboard services such as the document repository, and manage the primary settings.

Side Panels

Also on the left side are additional configurable side panels which provide additional views needed for drafting. The Resources view is where you look up documents, work with the hierarchy of the document being edited, and view provisions of other documents. The Change Control view allows the change sets defined by the advanced change control capabilities (described below) to be configured. Other panels can be added as needed.

Advanced features

In addition to the rich capabilities offered for basic document editing, we provide a number of advanced features as well.

Document Management

Document management allows documents to be stored in an XML document repository. The advantage of storing documents in an XML repository rather than in a simple file share or traditional content management system is that it allows us to granularise the provisions within the documents and use them as true referenceable information – this is a key part of moving away from paper document-centric thinking to a modern digital-first mindset. An import/export mechanism is provided to add external documents to the repository or get copies out. For LegisPro Sunrise we use the eXist-db XML database, but we can also provide customised implementations using other repositories.
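
To make that concrete, here is a hedged sketch (the server URL, document path, and identifiers are invented) of pulling a single provision out of an eXist-db repository through its REST interface, rather than downloading the whole act:

    // Hypothetical sketch, not production code: retrieve one provision
    // from eXist-db via its REST interface, selecting it by its Akoma
    // Ntoso eId identifier.
    const query = "doc('/db/acts/example/2017/1.xml')//*[@eId='sec_12']";
    const url = 'https://repo.example.org/exist/rest/db?_query=' +
      encodeURIComponent(query);

    fetch(url)
      .then(response => response.text())
      .then(xml => console.log(xml)); // just the requested provision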

Resolver

Our document management solution is built on the FRBR-based metadata defined by Akoma Ntoso and uses a configurable URI-based resolver technology to turn human-readable, permanent URIs into actual URLs pointing to locations within the XML repository or even to other data sources available on the Web.
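
A much-simplified sketch of the resolver idea (hard-wired, with invented URLs; the real resolver is configuration-driven):

    // Illustrative only: rewrite a permanent, human-readable Akoma Ntoso
    // URI into a concrete repository URL.
    const repositoryBase = 'https://repo.example.org/exist/rest/db';

    function resolve(aknUri) {
      // e.g. '/akn/uk/act/2017/1' => the URL of the stored XML document
      return repositoryBase + aknUri + '.xml';
    }

    console.log(resolve('/akn/uk/act/2017/1'));
    // => 'https://repo.example.org/exist/rest/db/akn/uk/act/2017/1.xml'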

Page & Line numbering

There are two ways to record where amendments are to be applied – either logically, by identifying the provision, or physically, by page and line numbers. Most jurisdictions use one or the other, and sometimes even both. The tricky part has always been the page and line numbers. While modern word processors usually offer page and line numbers, they are dynamic and change as the document is edited. This makes the feature of limited use in an amending system. What is preferred is static page and line numbers that reflect the document at the last point it was published for use in a committee or chamber. We accomplish this using a back-annotation technology within the publishing service. LegisPro Sunrise also offers a page and line numbering feature that can be run without the publishing service. Page and line numbers can be displayed in the left or right margin, or inline, depending on preference.

Amendment Generation

One of the real benefits of a digital-first solution is the many tasks that can be automated – not by simply computerising the way things have always been done, but by rethinking the approach altogether. Amending is one such area. LegisPro Sunrise incorporates an onboard service to automatically generate amendment documents from changes recorded in the target document. Using tracked changes, the document hierarchy, and annotated page and line numbers, we are able to very precisely record proposed changes as amendments. Of course, the amendment generator works with change sets, allowing different amendment sets to be generated by specifying a named set of changes.
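
In spirit, the mapping works something like the sketch below (the change-record shape and wording are invented for illustration; the real generator is driven by the tracked changes recorded in the XML):

    // Hypothetical sketch: one recorded change, plus its page and line
    // annotation, becomes one cut-and-bite amendment instruction.
    function toAmendment(change) {
      const where = 'Page ' + change.page + ', line ' + change.line;
      switch (change.type) {
        case 'delete':
          return where + ', leave out "' + change.oldText + '"';
        case 'insert':
          return where + ', insert "' + change.newText + '"';
        case 'replace':
          return where + ', leave out "' + change.oldText +
                 '" and insert "' + change.newText + '"';
      }
    }

    console.log(toAmendment({
      type: 'replace', page: 3, line: 14, oldText: 'the', newText: 'a'
    }));
    // => 'Page 3, line 14, leave out "the" and insert "a"'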

Plugin Support

LegisPro Sunrise is not the first incarnation of our LegisPro offering. We’ve been using the underlying technologies and precursors to those technologies for years with many different customers. One thing we have learned is that there is a vast variation in needs from one customer to another. In fact, even individual customers sometimes require very different variations of the same basic system to automate different tasks within their organisation. To that end, we’ve developed a powerful plugin approach which allows capabilities to be added as necessary without burdening the core editor with a huge range of features of limited applicability. The plugin architecture allows onboard and outboard services, individual commands, menus, menu items, side panels, mode indicators, JavaScript libraries, and text string libraries to be added. In the long term, we’re planning to foster a plugin development community.

Proprietary or Open Source?

There are two questions that always come up relating to our position on standards and open source software:

  1. Is it based on standards? Yes, absolutely – almost to a fault. We adhere to standards whenever and however we can. The model built into LegisPro Sunrise is based on the Akoma Ntoso standard that has been developed over the past few years by the OASIS LegalDocML technical committee. I have been a continual part of that effort since the very beginning. But beyond that, we always choose standards-based technologies for inclusion in our technology stack. This includes XML, XSLT, XQuery, CSS, HTML5, ECMAScript 2015, among others.
  2. Is it open source?
    • If you mean, is it free, then the answer is only yes for evaluation, educational, and non-production uses. That’s what the Sunrise edition is all about. However, we must fund the operation of our company somehow, and as we don’t sell advertising or customer profiles to anyone, we do charge for production use of our software. Please contact us at info@xcential.com or visit our website at xcential.com for further information on the products and services we offer.
    • If you mean, is the source code available, then the answer is also yes – but only to paying customers under a maintenance contract. We provide unfettered access to our GitHub repository to all our customers.
    • Finally, if you’re asking about the software we built upon, the answer is again yes, with a few exceptions where we chose a best-of-breed commercial alternative over any open source option we had. The core LegisPro Sunrise application is entirely built upon open source technologies – it is only in external services where we sometimes rely upon commercial third-party applications.

What does it cost?

As I already alluded to, we are making LegisPro Sunrise available to potential customers and partners, academic institutions, and other select individuals or organisations for free – so long as it is not used in production, including drafting, amending, or compiling legislation, regulations, or other forms of rule-making. If you would like a production system, either a FastTrack or Enterprise edition, please contact us at info@xcential.com.

Coming Soon

I will soon also be providing a pocket handbook on Akoma Ntoso. As a member of the OASIS LegalDocML Technical Committee (TC) that has standardised Akoma Ntoso, it has been important to get the handbook reviewed for accuracy by the other TC members. We are almost done with that process. Once the final edits are made, I will provide information on how you can obtain your own copy.

Standard
Akoma Ntoso, How To, Standards, Uncategorized

Using the <hcontainer> Element Properly

When I started my blog five years ago, I said I would try not to get too technical. Overall, I’ve stuck to that. However, with Akoma Ntoso now essentially standardised, I think it is time to start covering some areas of it in a little more technical detail. So, from time to time, I’m going to delve into a little technical mumbo jumbo to cover some subjects that come up frequently.

In this blog, I want to cover the proper use of the <hcontainer> element. Akoma Ntoso has rich support for hierarchical documents, as legal documents tend to be strongly hierarchical. Consequently, there is a large selection of element tags to choose from. During the standardisation effort, we tried to identify as many hierarchical constructs as we could find in legal documents, but it was impossible to identify every single construct in every single jurisdiction around the world. Indeed, we sometimes decided that some hierarchical levels were just too unique to a specific jurisdiction to warrant inclusion in a standard intended for worldwide adoption. Sometimes, having too many tags is worse than not having enough, especially when there is a way to handle the outlier cases.

So, what is the proper way to use the <hcontainer>? The <hcontainer>, or hierarchical container, is a generic element intended to be used to invent an element that is needed but not found among the existing Akoma Ntoso hierarchical elements. The @name attribute defines the name of the new element you’re inventing, so its value should be consistent with the element naming convention of Akoma Ntoso (see the example after this list):

  • The name should be lowerCamelCase.
  • The name should be in British English rather than another variant of English or another language. (Yes, we have two exceptions to this rule in Akoma Ntoso: one because the English form didn’t exist, and one because we didn’t notice a spelling variation.)
  • The name should not already exist in Akoma Ntoso.
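
For example, a jurisdiction with a level called a “pension schedule” (an invented name, purely for illustration) could define it like this:

    <hcontainer name="pensionSchedule">
      <num>1.</num>
      <heading>Benefits payable on retirement</heading>
      <content>
        <p>…</p>
      </content>
    </hcontainer>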

One question that comes up from time to time is whether an <hcontainer> can be used to define an element that already exists, but in another language. For instance, could I use <hcontainer name=”artículo”> for a Spanish article rather than <article>? While there is nothing that prevents this practice, it would not be in the spirit of Akoma Ntoso. A large part of the motivation of Akoma Ntoso is to promote both data and tool interoperability. Localising the element tags completely undermines Akoma Ntoso as a standard. You might as well simply use your own schema. Please consider the consumers of your data when facing this question, not just the producers.

[We use an alternate mechanism, provided by our tools, to present a localized term to the user rather than the element name.]

Another question I’ve been asked has to do with hierarchical levels that might not have a formalised name at all. I’ve come across this a number of times, in a number of ways. First, it’s often an issue with very old documents where the document hierarchy was either not formalised or not explicitly stated, and conversion involves some degree of guessing. Second, there are sometimes lower levels, for instance below the section level, where the level names have simply not been formalised or are used inconsistently. Third, I’ve come across a case where the upper levels, above the section, were not named because the corresponding concepts didn’t really exist in the language used in that jurisdiction. For these cases, we use the <level> element.
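
To illustrate (again with invented content and identifiers), an unnamed level below a section might be captured as:

    <section eId="sec_4">
      <num>4.</num>
      <level eId="sec_4__lvl_1">
        <num>(1)</num>
        <content>
          <p>…</p>
        </content>
      </level>
    </section>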

The <hcontainer> is a very useful element in Akoma Ntoso. It’s a key part of the design of the schema that allows it to be easily adapted to any legislative tradition. However, it should be used judiciously — only when there isn’t already an alternative.

 

Standard
Akoma Ntoso, LegisPro Sunrise, LEX Summer School, Standards

LEX Summer School 2017

For the past two weeks I’ve been in Italy attending the LEX Summer School and Akoma Ntoso Developer’s Workshop at the Ravenna campus of the University of Bologna. This is my eighth summer school in Ravenna and my tenth overall LEX Summer School including the two U.S. editions. It’s always one of the highlights of my year.

With Akoma Ntoso now all but completed as a standard, a product about to debut, and a couple of Akoma Ntoso projects to our name, I thought it would be a good time to reflect on how far we have come. Bill Gates once said, “Most people overestimate what they can do in one year and underestimate what they can do in 10 years.” This is a case in point. At times, the progress is frustratingly slow and arduous, but when you look back at how far we’ve come in 8 years, we’ve made pretty good progress.

When I arrived at the first summer school I attended back in 2010, I had never heard of Akoma Ntoso — let alone learned how to pronounce it. A lot of the discussion still revolved around whether purpose-built XML tools or re-purposed office productivity software was the way to go. Did the world really need Akoma Ntoso, or were Open Office’s XML formats adequate? What about Microsoft’s Office Open XML? Was it an alternative?

We don’t discuss that anymore — the answer is obvious. As Luca Cervone commented to me, all of a sudden the other approaches look so old-fashioned. In fact, the presentations that did still use that approach were apologetic that their decisions dated back to the early 2000s when the answer was less clear.

What we now see is the value of putting data first and paper second. Making paper take the back seat in order to take advantage of the inherent power of treating legislation as data is now clearly the way to go. We see this in all the innovative capabilities that were on display — from the advanced amending tools we’ve worked with the UK and Scottish Parliaments to develop, the rich ontology support tools being developed in several projects, to the various comparison and analysis capabilities that were on show. XML enables all of these capabilities, in ways that other approaches simply cannot.

Another change in the eight years is the extent to which Akoma Ntoso has been embraced, particularly in Europe:

  • In April of this year, the Chief Executive Board of the United Nations approved the use of Akoma Ntoso as the documentation standard throughout the entire system after a detailed analysis. (Akoma Ntoso began as a project of the UN Department of Economic and Social Affairs (UN/DESA) a decade ago).
  • Numerous projects at both the European Parliament and European Commission are now based on Akoma Ntoso, although perhaps in a bit of a disjointed manner.
  • The project I’ve devoted a lot of my life to over the past two years at the U.K. and Scottish Parliaments is committed to Akoma Ntoso. You can watch a video of an early version here.
  • The Italian Senate is adopting Akoma Ntoso to some extent, and the Italian Chamber of Deputies are considering following suit.
  • There are projects underway in Switzerland and South America to adopt Akoma Ntoso.
  • Even the U.S. House of Representatives has a prior commitment to support Akoma Ntoso in some way.

This is all very good progress and much more is simmering in the background.

One of my goals at this LEX Summer School was to start sowing the seeds for an open framework API that would allow interoperable plugins to be developed that work with all Akoma Ntoso-based platforms. Here, Luca surprised me by showing the new open source Akomando toolkit. This is a JavaScript toolkit, to be made available via NPM, GitHub, and other means shortly, that will provide the basic utilities one needs to easily process XML. As the LIME editor and Xcential’s LegisPro are largely technologically aligned on modern and open web technologies, this toolkit is a natural fit for both applications. I think this is a very exciting development and one we plan to take advantage of as soon as possible.

So, all in all, not bad. Now it’s time to start building on that momentum. We have lots of ideas percolating that will be revealed in the months to come. I’m looking forward to doing another retrospective at the ten year mark.

Standard
Akoma Ntoso, HTML5, LegisPro Sunrise, LEX Summer School, Standards, technology, Track Changes

The Sun is Rising on Akoma Ntoso — and LegisPro too!

Two great pieces of news this week! First, the documentation for Akoma Ntoso has now been officially released by OASIS. Second, we’re announcing the latest version of our LegisPro drafting platform for Akoma Ntoso, codenamed “Sunrise”.

After several years of hard work, we’ve made a giant step towards our goal of setting an international XML standard for legal documents. You can find the documents at the OASIS LegalDocML website. A special thanks to Monica Palmirani and Fabio Vitali at the University of Bologna for their leadership in this endeavour.

Later this week, Xcential will be announcing and showing the new “Sunrise” version of LegisPro, at both NALIT in Annapolis, Maryland and the LEX Summer School in Ravenna, Italy. This new version represents a long-planned change to Xcential’s business model. While we have a thriving enterprise business, we’re now focusing on also providing more affordable solutions for smaller governments.

Part of our plan is to foster an open community of providers around the Akoma Ntoso standard for legislative XML. With Akoma Ntoso now in place as a standard, we’re looking for ways to provide open interfaces such that cooperative tools and technologies can be developed. One of my goals at this year’s summer school in Ravenna is to begin outlining the open APIs that will enable this vision.

LegisPro

The new edition of LegisPro will be all about providing the very best options:

  1.  It will provide the word-processor-like drafting capability your drafters demand — along with the real capabilities you need:
    • We’re not talking about merely providing a way to style a word processing document to look like legislation.
    • We’re talking about providing easy ways to define the constructs you need for your legislative traditions, such as–
      • a configurable hierarchy,
      • configurable tagging of important information,
      • configurable numbering rules,
      • configurable metadata,
      • oh, and configurable styles too.
    • We’re talking about truly understanding your amending traditions and providing the mechanisms to support them, such as–
      • configurable track changes, because we understand that a word processor’s track changes are not enough,
      • as-published page and line markers, because we understand your real need for page and line numbers and that a word processor’s page and line numbering is not that,
      • robust typography, because we know there’s quite a difference between the casual correspondence a word processor is geared for and the precision demanded in documents that represent laws and regulations.
  2. It will be as capable as we can make it — for real-world use rather than just a good demo:
    • We’re not talking about trying to sell you a cobbled together suite of tools we built for other customers.
    • We’re talking about working with specialists in all the sub-fields of legal informatics to provide best-of-breed options that work with our tools.
    • We’re talking about making as many options available to you as we know there is no one-size-fits-all answer in this field.
    • We’re talking about an extensible architecture that will support on-board plug-ins as well as server-side web-services.
    • We’re talking about providing a platform of choices rather than a box of pieces.
  3. It will be as affordable as we can possibly make it:
    • We’re talking about developing technologies that have been designed to be easily configured to meet a wide variety of needs.
    • We’re talking about using a carefully chosen set of technologies to minimize both your upfront cost and downstream support challenges.
    • We’re talking about providing a range of purchasing options to meet your budgetary constraints as best we can.
    • We’re talking about finding a business model that allows us to remain profitable — and spreads the costs of developing the complex technologies required by this field as widely and fairly as possible.
  4. It is as future-proof as we can possibly make it:
    • We’re not talking about trying to sell you on a proprietary office suite.
    • We’re talking about using a carefully curated set of technologies that have been selected as they represent the future of application development — not the past — including:
      • GitHub’s Electron which allows us to provide both a desktop and a web-based option, (This is the same technology used by Slack, WordPress, Microsoft’s Visual Studio Code, and hundreds of other modern applications.)
      • Node.js which allows us to unify client-side and server-side application development with “isomorphic JavaScript,”
      • JavaScript 6 (ECMAScript 2015) which allows us to provide a truly modern, unified, and object-oriented programming environment,
      • Angular and other application frameworks that allow us to focus on the pieces and not how they will work together,
      • CSS3 and LESS, which allow us to provide state-of-the-art styling technologies for the presentation of XML documents,
      • the entire XML technology stack that is critical for enabling an information-centric rather than document-centric system as is appropriate for the 21st century,
      • and, of course, using the Akoma Ntoso schema for legislative XML to provide the best model for sharing data, information, tools, and other technologies. It’s truly a platform to build an industry on.
  5. It is as open as we can possibly make it:
    • We’re not talking about merely using an API published by a vendor attempting to create a perception of openness by publishing an API with “open” in the name.
    • We’re talking about building on a full suite of open source tools and technologies coming from vendors such as Google, GitHub, and even Microsoft.
    • We’re talking about using non-proprietary protocols such as HTTP and WebDAV.
    • We’re talking about providing an open API to our tools that will also work with tools of other vendors that support Akoma Ntoso.
    • And, while we must continue to be a profitable product vendor, we will still provide the option of open access to our GitHub repositories to our customers and partners. (We’ll even accept pull requests)

Our goal is to be the very best vendor in the legislative and regulatory space, providing modern software that helps make government more efficient, more transparent and more responsive. We want to provide you with options that are affordable, capable, and planned for the future. We want to do whatever we can to allay your fears of vendor lock-in by supporting open standards, open APIs, and open technologies. We want to foster an Akoma Ntoso-based industry of cooperative tools and technologies as we know that doing so will be in the best interests of everyone — customers, product vendors, service providers, and the people who support them. As someone once told me many years ago, if you focus on making the pie as large as you can, the crumbs left on the knife will be plenty enough for you.

Either come by our table at NALIT in Annapolis or join us for the Akoma Ntoso Developer’s Conference in Ravenna at the conclusion of the LEX Summer School to learn more. If neither of these options will work for you, you can always learn more at Xcential.com or by sending email to info@xcential.com.

Standard
LEX Summer School, Process, technology, Uncategorized

Escaping a Technology Eddy

Do you need to escape a technology eddy? In fluid dynamics, an eddy is the swirling of a fluid that causes a reverse current against a downstream flow. It often forms behind a major obstacle. The swirling motion of an eddy creates resistance to forward motion by creating a backward force. Eddies are also seen in air and electromagnetic systems.

I see a similar phenomenon in my work, for which I’m going to coin the term technology eddy. A technology eddy forms in organisations that are risk-averse, have restricted budgets, or are simply more focused on the maintenance of a major system than on software development. Large enterprises, in particular, often find their IT organisations trapped in a technology eddy. Rather than going with the flow of technological change, the organisation drifts into a comfortable period where change is largely restricted to the older technologies they are most familiar with.

As time goes by, an organisation trapped in a technology eddy adds to the problem by building more and more systems within the eddy — making it ever more difficult to escape the eddy when the need arises.

I sometimes buy my clothing at Macy’s. It’s no secret that Macy’s, like Sears, is currently struggling against the onslaught of technological change. Recently, when paying for an item, I noticed that their point-of-sale systems still run on Windows 7 (or was it Windows Vista?). Last week, on the way to the airport, I realised I had forgotten to pack a tie. So, I stopped in at Macy’s only to find that they had just experienced a 10-minute power outage. Their ancient system, what looked to be an old Visual Basic Active Directory app, was struggling to reboot. I ended up going to another store — for all the other stores in the mall were up and running quite quickly. The mall’s 10-minute power outage cost Macy’s an hour’s worth of sales because of old technology. The technology eddy Macy’s is trapped in is not only costing them sales in the short term, it’s killing them in the grand scheme of things. But I digress…

I come across organisations trapped in technology eddies all the time. IT organisations in government are particularly susceptible to this phenomenon. In fact, even Xcential got trapped in a technology eddy. With a small handful of customers and a focus on maintenance over development for a few years, we had become too comfortable with the technologies that we knew and the way in which we built software.

It was shocking to me when I came to realise just how out-of-date we had become. Not only were we unaware of the latest technologies, we were unaware of modern concepts in software development, modern tools, and even modern programming styles. We had become complacent, assuming that technology from the dawn of the Millennium was still relevant.

I hear a lot of excuses for staying in a technology eddy. “It works”, “all our systems are built on this technology”, “it’s what we know how to build”, “newer technologies are too risky”, and so on. But there is a downside. All technologies rise up, have a surprisingly brief heyday, and then slowly fade away. Choosing to continue within a technology eddy using increasingly dated technology ensures that sooner or later, an operating system change or a hardware failure of an irreplaceable part will create an urgent crisis to replace a not-all-that-old system with something more modern. At that point, escaping the eddy will be of paramount importance and you’ll have to paddle at double speed just to catch up. This struggle becomes the time when the price for earlier risk mitigation will be paid — for now the risks will compound.

So how do you avoid the traps of a technology eddy? For me, the need to escape our eddy became most apparent as we got exposed to people, technologies, and ideas that were beyond the comfort zone in which our company existed. Hearing new ideas from developers beyond our sphere of influence and being exposed to requirements from new customers made us quickly realize that we had become quite old-fashioned in our ways. To stay relevant you must get out and learn — constantly. Go to events that challenge your thinking rather than reinforce it.

Today we are once more a state-of-the-art company. We’ve adopted modern development techniques, upgraded our tools, upgraded our technologies, and upgraded our coding skills. These changes allow us to compete worldwide and build software for multiple customers in a fully distributed way that spans companies, continents, and time zones.

I hope we’ll remember this lesson and focus more on continuous improvement rather than having to endure a crash course of change every few years.

 

Standard
Standards, technology, Uncategorized, W3C

The many lives of JavaScript

I recently worked out that I’ve learned, on average, a new programming language every two to three years. These many languages have been part of my toolbox for somewhere between four and six years before falling away to make room for new technologies. However, there is one programming language that has been a major part of my programming repertoire for almost 22 years now – and that is JavaScript.

My JavaScript programming skills have recently undergone a major renaissance as I’ve adopted JavaScript 6 (a.k.a. ECMAScript 2015) for most of my coding. The way I write code today is nothing like the code I wrote just one year ago – and I’ve gone back and largely modernised all active code to be consistent. Today’s programming style uses modern frameworks and is far more object-oriented and asynchronous. There are many new features that have totally updated how I write code. Proper (while still limited) classes with mixins have replaced the ugly prototype mechanism I used to use for object orientation. Let and const declarations have caught latent bugs that were hidden in my code. Arrow functions (a.k.a. lambda expressions) and promises have streamlined code that once was quite clunky. The list goes on…
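
A small, contrived example pulling together a few of the features mentioned above:

    // A contrived snippet showing a proper class, const declarations,
    // arrow functions, template literals, and a promise.
    class Draft {
      constructor(title) {
        this.title = title;
      }
      save() {
        // Resolves asynchronously, standing in for a real persistence call.
        return new Promise(resolve =>
          setTimeout(() => resolve(`${this.title} saved`), 100));
      }
    }

    const drafts = ['Bill 1', 'Bill 2'].map(title => new Draft(title));
    drafts.forEach(draft => draft.save().then(msg => console.log(msg)));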

Even my tools have changed. Microsoft’s surprisingly excellent Visual Studio Code has replaced the hodgepodge of tools I once used. We’re in the process of integrating Jasmine and Karma into our workflow. JavaScript Semistandard Style (no, I still like semicolons) has ensured a very clean code base – as well as catching a multitude of errors and sins.

All this change got me thinking about the four lives of JavaScript that I have worked through. Way back when, JavaScript had an awkward birth at the hands of Netscape as the lesser stepchild of the new Java programming language from Sun that was taking away all the attention. JavaScript was just a way to glue Java applets together in the browser. The problem is, Java applets really sucked.

Microsoft quickly saw the value of JavaScript though, and launched their own effort to steal Netscape’s baby. And so, JavaScript was stolen, renamed JScript, and made to be the adopted sibling of Microsoft’s other scripting language, VBScript. One good bit of progress that Microsoft made was to sponsor the standardisation of the language, although the resulting name of ECMAScript was another in a long string of unfortunate names the language has had to endure.

As JScript, JavaScript was to become an integral part of Microsoft’s entire ActiveX strategy. A lot of really cool technologies (yes, really) came of this allowing JScript to go beyond the browser. As an application extension language, it found its way into the XMetaL XML editor as the customisation technology. We used it and many of the ActiveX technologies to great effect when we implemented California’s bill drafting system. However, it didn’t just end there. We were able to use it on the server-side through Classic ASP and as a shell scripting language through the Windows Script Host. For a Microsoft-centric programmer, this era of JavaScript was a glorious one.

However, ActiveX was seriously flawed. It was entirely proprietary and riddled with problems. Microsoft abandoned it almost as quickly as they had adopted it – moving on to .Net where JScript.net was a non-starter. As Microsoft’s interest in ActiveX and even Internet Explorer waned in the early 2000s, life as a JavaScript programmer became ever gloomier. While the capabilities were awesome, there was obviously no future.

At this point, we made the somewhat painful decision to move away from Microsoft’s outmoded view of the Internet and go back to the basics. While it meant giving up a lot of capability, in the end it was an excellent decision for it pointed to the future. One tiny aspect of Microsoft’s ActiveX vision, the XMLHttpRequest object, escaped from Microsoft and gave rise to a whole different way of programming – Asynchronous JavaScript and XML (AJAX). This development and the emergence of new browsers, first Firefox and then Google’s Chrome with its V8 JavaScript engine, breathed new life into JavaScript.

Freed from Microsoft’s grip, JavaScript has flourished. The past decade has seen a plethora of new technologies. Isomorphic JavaScript (or Universal JavaScript) blurs the distinction between coding for the server and the client. In fact, technologies like Electron turn web-based application development back to the desktop where you can get the best of both worlds.

When I look back on the code I wrote during the ActiveX era (yes, we still support it), it looks prehistoric. Modern JavaScript is so much more capable and flexible than the clunky rendition we had back when COM-based ActiveX was supposed to change the world. As I mentioned earlier, how I program now is completely different – asynchronous programming is a difficult but very worthwhile skill to acquire.

Looking to the future, I see three paths. On one side is a mature but polarising platform that is dominated by Oracle. Oracle’s dominance ensures stability but also deters innovation. Looking to the other side, one finds another mature but polarising platform that is dominated by Microsoft. Here too, Microsoft’s dominance ensures stability but also deters innovation. The result is that it seems both paths have now had their heyday. You don’t hear very much aspirational news from either technology path anymore — it must feel like programming a mainframe in COBOL did at the height of the C/C++ era.

The third path seems to be the path of the future – staking out a middle ground that neither technology giant can stomp on. Sure, Google is a technology giant that plays a strong role, but they’re still reasonably well regarded by the development community at large (for now). It is this middle ground that has been the most fertile for new technologies – and JavaScript is right in the thick of it. There are so many new technologies it’s hard to keep track of them all — AngularJS, Node.js, React, Express.js, to name but a few. While this third path can play well with both of the other two, for me it is the path that truly points to the future.

This brings us to the fourth life for JavaScript – building on the momentum of the past decade to mount a credible challenge for enterprise apps. While I initially dismissed many of the new features of the language as mere syntactic sugar, my experience with it has shown it to be more. I now write much better code. I believe we’re on the verge of an explosion in JavaScript-enabled applications that will blur the distinction between the platforms, between the desktop and the browser, and between the server and the client. This is truly an exciting time, once more, to be developing in JavaScript.

It goes without saying, but stay tuned for more…

Standard
Akoma Ntoso, Standards, Uncategorized

Implementing an Akoma Ntoso Editor

Yes, we’ve now built a full real-world legislative drafting editor using the final release of the new OASIS standard for legislative XML known as Akoma Ntoso. No, it wasn’t easy, but drafting tools never are. While our project is not yet a finished implementation, it shows that Akoma Ntoso is adaptable to some of the most challenging demands it will face as a world-wide standard for digital legislation.

Akoma Ntoso is a very ambitious standard. It strives to anticipate all the possible needs that jurisdictions around the world will have while also planning for a wide range of useful applications that can be built on top of the data. The result is a sophisticated schema with many more features than any one implementation will ever need.

The trick is being able to mould Akoma Ntoso to fit the unique needs of a jurisdiction while also providing a user experience that is natural and fits the problem space exactly. This was the challenge that led us to develop a custom web-based XML editor. After surveying the available market of web-based editors, we quickly found that none would be sufficiently adaptable to allow Akoma Ntoso to realize its true potential.

There are two aspects of building an Akoma Ntoso editor that have required particular attention:

  1. Adapting Akoma Ntoso to fit the jurisdiction’s documents
    If you’ve taken a look at Akoma Ntoso, you know that it’s jam-packed full of tags and features, far more than are ever necessary in a single implementation. Trying to create a single comprehensive implementation of it all, a one-size-fits-all approach, will only yield an overly complicated and unusable tool that will be suitable to nobody. At the same time, despite Akoma Ntoso’s efforts to cover all possible scenarios, there are still gaps in the schema where details specific to individual jurisdictions are not covered. Akoma Ntoso anticipates this shortcoming by providing a pattern-centric mechanism for extending a set of generic elements to fill in the gaps.
    An authoring tool needs to hide or omit the unused parts of Akoma Ntoso, adapt the parts that are being used to fit the specific requirements of a jurisdiction, and allow for extension of Akoma Ntoso using the generic mechanism in such a way that these extensions appear seamless. As it turns out, almost a third of the elements we’ve implemented are extension elements. The result is an editor that allows a fully compliant Akoma Ntoso document to be drafted (correct by construction), while at the same time ensuring that the document fully complies with the jurisdiction’s model for how that document is represented.
  2. Adapting the editor to fit the jurisdiction’s documents
    XML authoring tools don’t just work out of the box. Rather, they’re toolkits that allow documents that conform to a specific schema or model to be authored. How much flexibility this toolkit provides dictates the type of documents that can be authored. Sadly, it’s difficult for any editor to provide infinite flexibility in any dimension – so very careful consideration is necessary to understand whether or not the editor can be adapted to the need. When we at Xcential implemented California’s bill drafting system a decade ago, we used XMetaL because it provided an extensive customisation capability. Unfortunately, at the outset we failed to realize that XMetaL’s change tracking capabilities were limited and not customisable. When the full challenge of redlining became clear to us well into the project, we realized we were using an editor that couldn’t do the job. Thankfully, the project was able to get (and pay for) the necessary extensions to XMetaL without too much delay.

    One way to understand this problem is in the diagram below. On the left is the intrinsic capability offered by the authoring tool. On the right is a jurisdiction’s requirement. As XML authoring tools are toolkits, there is always a gap between the intrinsic capabilities on the left and the requirements on the right – and this gap must be closed one way or another. One way is to use any programming API offered to add customisations (shown as A). Another way is to limit the jurisdiction’s requirements (shown as B) to better suit the capabilities of the tool. Usually, it takes a combination of both to arrive at a suitable outcome. If the gap cannot be closed (shown as C), then the project is likely doomed to disappointment or even failure.

    One thing we learned early on is that, when it comes to legislative documents, there really isn’t a lot of wiggle room in the requirements. The form of the documents is often dictated by long-established traditions, and good luck trying to change that. This is one case where the expression “It will take an Act of Congress” can be quite literally true.

    This means that the gap will have to be closed through customization and the effort (and risk) to do so will be quite substantial. XMetaL, way back in 2002, provided an extensive set of programmatic APIs to work from, and that very nearly wasn’t enough. Unfortunately, the newer web-based editors haven’t, for many reasons, come close to matching XMetaL’s level of customisability.

Building our own authoring tool

Understanding the challenges of Akoma Ntoso, our customer’s demanding requirements, and the limitations of the state-of-the-art in web-based authoring tools, we embarked on a project several years ago to build our own XML authoring tool. The result is now used in a number of applications. It’s been quite a challenge – and that’s an understatement. Building a highly configurable web-based XML authoring tool that is truly a step ahead of the old desktop editors of twenty years ago has required us to truly harness every aspect of modern web technologies and methodologies.

The result is an XML authoring tool especially adapted to the needs of Akoma Ntoso. However, it’s not just an Akoma Ntoso editor. It’s an XML authoring tool, capable of adapting to any reasonable XML schema — for the legislative field, regulatory field, or any similar field where the demands of structured documents require a sophisticated level of customization.

If you want to see our tool in action in a bespoke implementation, here’s an early peek:

https://www.youtube.com/watch?v=CTAad2E-9Y4&feature=youtu.be

(This link shows a dated version at this point; it shows the editor as it was around December of 2016. We’ve advanced quite a bit since then — in both the intrinsic capabilities of the editor and in the capabilities built into the bespoke customisation.)

Standard
Process, Uncategorized

Becoming Agile

Lately we’ve become quite Agile. More and more, our government customers have started to impose Agile methodologies on us. While I’ve always thought of our existing methodologies as being quite nimble, adopting Agile and Scrum methodologies has required some adaptation on my part.

Early in the game, I started to find Agile to be more of a hindrance than a help. The drumbeat of each sprint was wearing me out – and I started to feel the inevitable effects of burnout creeping into my every thought.

But then a remarkable thing happened. I found myself not only defending Agile, but advocating it for our other projects. I was quite surprised to find myself having become such a big supporter. So what changed?

Early on, Agile was new for all of us. Our team was new, geographically distributed in three different parts of the world, all 8 hours apart. That team consisted of representatives from a set of customers and several partners all learning to work together to build a challenging solution. We adopted the Scrum methodology and planned out a long series of two week sprints. Each sprint had a set of stories assigned to it as we set off to build the most awesome bill drafting system of all time.

The problem was that the pace was too aggressive. In a software development project, you need to manage two different aspects – making forward progress by adding features while ensuring a sound implementation through refinement. Agile methodologies lean away from lots of up-front design. This makes it possible to show lots of forward momentum early, but the trade-off is that the design will need to be refactored often as new requirements are uncovered and added to the picture. We were too focused on the forward momentum and were leaving a trail of unfinished “programming debt” in our wake. This debt was causing me increasing anxiety as time marched on.

There is an important concept in Agile Scrum called the retrospective. It’s all about continuous improvement of the process. As we’ve grown as a team, we’ve become better at implementing retrospectives. These led to the most important change we’ve made – moving from a two-week to a three-week sprint. We didn’t just add time to our sprints; we fundamentally changed the structure of a sprint. We still schedule two weeks’ worth of tasks for each sprint, but rather than just assuming that everything will work out perfectly, we leave a week open for integration, testing, and development slack to be taken up by any refactoring that may have become necessary.

This third week, while arguably slowing us down, ends up helping by allowing us to emerge from each sprint in far better development shape to begin the next sprint. We just have to be disciplined enough not to try to squeeze regular development tasks into that third week. By working down programming debt continuously, subsequent sprints become more predictable. For various reasons, we temporarily returned to two-week sprints and the problem of accumulating programming debt returned. The lesson learned is that you can’t build a complex system on top of a rickety foundation – you must continuously work to ensure a robust base upon which you are building. Without this balance, Agile just becomes a way to expedite a project at the expense of good development practices.

Another key change has been in how we use tools that help to do our work. As I mentioned earlier, our development teams are very distributed – around the world. It’s important that we be able to communicate very effectively despite the distance. Daily stand-ups with the entire team are not possible although we do ensure at least two meetings each sprint with the whole team. We use four primary tools – GitHub as our source code repository, AWS for our development and test servers, Slack for casual day-to-day conversation, and JIRA for managing the stories and tasks. It is the use of JIRA that has taken the most adaptation. Our original methodology was quite clumsy, but with each sprint we refine our usage to the point that it has become a very effective tool. Now, a dashboard presents me with a very clear picture of each sprint’s goals and everyone can monitor the progress towards those goals as the sprint progresses – there are no surprises.

Agile and Scrum are allowing a disparate group of customers and vendors to become a very highly performing software development team. We’re far from perfect, but with every sprint we learn more, make changes, and emerge as a better team than before.

 

Standard
Process, Transparency

Changing the way the world is governed. Together.

I’ve recently been marveling at how software development has changed in recent years. Our development processes are increasingly integrated with both our government customers and our commercial partners — using modern Agile methodologies. This largely fulfills a grand vision I was a part of very early in my career.

I started my career at the Boeing Company working on Internal Methods and Processes Development (IMPD). Very soon, a vision came about: Concurrent Engineering, in which all aspects of the product development cycle, including all disciplines, all partners, and all customers, were tightly integrated in a harmonious flow of information. Of course, making the vision a reality at Boeing’s scale has taken some time. Early on, Boeing had great success on the B777 programme, where the slogan was “Working Together”. A bit later, with the B787 programme, where they went a few (or perhaps many) steps too far, they stumbled for a while. This was all Agile thinking — before there was anything called Agile.

Boeing’s concurrent engineering efforts quickly inspired one of Boeing’s primary CAD suppliers, Mentor Graphics. Mentor was hard at work on their second generation platform of software tools for designing electronic systems. Concurrent Engineering was a great customer-focused story to wrap around those efforts. Mentor’s perhaps arrogant tagline was “Changing the way the world designs. Together.” Inspired, I quickly joined Mentor Graphics as the product manager for data management. Soon I was to find that the magnitude of the development effort had actually turned the company sharply inward and the company had become anything but Agile. Mentor’s struggle to build a product line that marketed Concurrent Engineering became the very antithesis of the concept it touted. I eventually left Mentor Graphics in frustration and drifted away from process automation.

Now, two decades later, a remarkable thing has happened. All those concepts we struggled with way back when have finally come of age. It has become the way we naturally work — and it is something called Agile. Our development processes are increasingly integrated with both our customers and our partners around the world. Time zones, while still a nuisance, have become far less of a barrier than they once were. Our rapid development cycles are quite transparent, with our customers and partners having almost complete visibility into our repositories and databases. Tools and services like GitHub, AWS, Slack, JIRA, and Trello allow us to coordinate the development of products shared among our customers with bespoke layers built on top by ourselves and our partners.


It’s always fashionable for political rhetoric to bash the inefficiencies of big government, but down in the trenches where real work gets done, it’s quite amazing to see how modern Agile techniques are being adopted by governments and the benefits that are being reaped.

As we at Xcential strive to become great, it’s important for us to look to the future with open eyes so that we can understand how to excel. In the new world, as walls have crumbled, global integration of people and processes has become the norm. To stay relevant, we must continue to adapt to these rapidly evolving methodologies.

Our vision is to change the way the world is governed — through the application of modern automation technology. One thing is very clear: we’re going to do it together with our customers and our partners all around the world. This is how you work in the modern era.

In my next blog post, I will delve a little more into how we have been applying Agile/Scrum and the lessons we have learned.

Standard
Uncategorized

The (Supposed) Limitations of XML

It’s been a while since I updated my blog – a whole year in fact. The reason is that I’ve been hard at work finishing our web-based XML editor, LegisPro, supporting our projects with the U.S. House, while simultaneously developing an Akoma Ntoso-based implementation for the U.K. and Scottish Parliaments. The challenge has been all-consuming.

Next week I will be giving a couple of talks with Matt Lynch of the Scottish Parliament at the LEX Summer School 2016 in Ravenna, Italy and then, the following week, by myself at NALIT 2016 in Indianapolis. My company, Xcential, also intends to show glimpses at our booth at the Data Transparency Conference in Washington D.C. on September 28th. We’ve got a busy month ahead of us.

Recently the tired old question of whether a legislative drafting system is best built on a word processor or using true XML technology was raised yet again. (No, the Open Document Format (ODF) and Office Open XML don’t make word processors into XML editors.) To me, and to everyone I interact with, the answer is quite clear and was settled a decade ago – XML is the way to go. The reason is simple. XML provides a long-lasting data format that can be used to build a comprehensive solution enabling all of the required automation features for legislative drafting. On the other hand, shoe-horning a legislative drafting application into a word processor that was never designed for this type of application results in too many compromises.

In reliving the debates from 10 years ago, I stumbled across a competitor’s white paper on the subject. While they still promote the white paper, its content is quite dated. It makes the case for the word processor approach rather than XML. I read, with some amusement (or was it irritation), all the perceived shortfalls of XML.

I thought it would be fun to take a look at each of these supposed problems with XML, and provide a counterpoint to each of them. To be fair, this paper was written several years ago and technology doesn’t stand still.  So, here goes:

Point 1: Legislative content and presentation cannot be separated

There is a thread of truth to this. Because so much of the amending process is based around the page and line number paradigm for referring to locations, it is essential that there be a robust and precise means for referring to any part of the document, right down to a specific word. However, that is the entire requirement – there is no further need for the presentation to be tied directly to the content. Legislatures, including California and the U.S. Congress, have used markup technologies for many decades now, long before the advent of XML (or even SGML). If the requirements mandated that content and presentation not be separated, none of these solutions would have been viable.

So let’s consider the specific issue – how to tie page and line numbers to the content. Superficially, a word processor does this with an intrinsic page and line numbering capability. However, you quickly discover a problem – legislation requires page and line numbers fixed to locations in the last official publication, and the dynamic recalculating nature of a word processor’s intrinsic page and line numbers renders that capability useless. Instead, the classic workaround is to produce a separate rendition of the document using a hidden tabular format, with one row per published line, a column for line numbers, and a column for content. This, however, creates a huge problem – you now have two copies of the legislation, one organized by document structure and one organized by physical layout. Mapping precisely between the two representations becomes troubling. For XML, this was also a challenge until we came up with a very clean and workable solution almost a decade ago. Now, when we publish the document PDF, we back-annotate unobtrusive markers into the XML. These markers are used to arrange the editor presentation as well as to drive the amendment engine. This works out very nicely, and we have implemented the technique several times now with great success.
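
To make that concrete, here is a minimal sketch of how back-annotated markers can be used. The lineMarker element and its attributes are my invention for illustration; the markers in a real implementation are specific to each publication pipeline.

```xquery
(: A minimal sketch, assuming a hypothetical <lineMarker page="..." line="..."/>
   element is back-annotated into the bill XML when the official PDF is
   published. Given a page and line from an amendment instruction, locate
   the content that immediately follows that marker. :)
declare function local:content-at(
  $bill as element(),
  $page as xs:integer,
  $line as xs:integer
) as node()? {
  let $marker := ($bill//lineMarker[@page = $page and @line = $line])[1]
  return $marker/following-sibling::node()[1]
};
```

Because the markers live inside the structured document, the same lookup can serve both the editor’s page-and-line presentation and the amendment engine.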

Point 2: Temporal relationships must be preserved

This one made me laugh. For years, we’ve pointed to issues like this as reasons to go to XML rather than to avoid it. The argument made in the white paper is that XML provides no facilities to model the temporal relationships that are necessary when making citations or establishing other relationships that exist in legislation.  While this is true, it’s also quite misleading. To expect XML to intrinsically provide this facility is to completely fail to understand the role of XML. In fact, word processors have no intrinsic capability to solve this problem either – it’s something that has to be built.

We’ve been addressing this problem since our beginning in 2001 using web-based references or URIs and a clever middle-tier technology we call a resolver that interprets the temporal or versioning aspects of the citation or reference. This problem was solved long ago.
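
As a rough illustration of what the temporal part of resolution involves, here is a minimal sketch in XQuery. The /db/law collection path and the name and pointInTime attributes are invented for illustration; the real resolver works from the components of the citation itself.

```xquery
(: A minimal sketch of temporal resolution: given a document name and an
   "as of" date, return the latest stored version in force on that date.
   The collection path and attribute names are assumptions. :)
declare function local:resolve($name as xs:string, $asOf as xs:date) as element()? {
  let $versions :=
    for $v in collection('/db/law')/act[@name = $name]
    where xs:date($v/@pointInTime) le $asOf
    order by xs:date($v/@pointInTime)
    return $v
  return $versions[last()]
};
```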

Point 3: Permanence is required

The argument here is confusing. It is totally true that there is an unbendable requirement that legislation be preserved in a form known to last forever (or at least for many centuries). It’s also totally true that there is no digital technology with the proven permanence of paper or vellum. However, this is an argument about the medium used to preserve the content. It’s not an argument about the type of format that should be used to create the content – unless there is an argument that we should give up on digital technologies and return to paper, scissors, and glue. A physical document can be produced for archival purposes regardless of the technology used to draft it.

Personally, if you were to ask me, the logical document to archive would be a vellum printout of the XML. That way, you would have a much easier task of restoring the document at some future date should some catastrophe result in the loss of the digital record. However, I don’t think that’s a decision anyone is likely to make anytime soon. John? 😉

Point 4: Work-in-progress is structurally broken by XML’s rules

This has long been the primary argument against XML editors. Their rigidity in enforcing the rules often gets in the way, especially in the early part of the drafting cycle when the ideas are still fluid. Most XML editors just aren’t designed for this type of work.

This limitation is one that I’ve focused considerable effort on overcoming for years. Our most recent efforts have made tremendous strides. We’ve tackled this problem in two ways:

First, using the DOM Range constructs built into modern browsers, we’ve been able to loosen up the selection model considerably so that it closely matches selection in a word processor. Using sophisticated programming of the DOM and this range mechanism, we are able to match much of the loose editing offered by a word processor.

Second, we go beyond the word processor by allowing the structure to be removed entirely and allowing the words to be rearranged entirely unencumbered by any structure at all (think text editor). Once the words are rearranged and the user is ready to move on, we automatically recreate the structure for the user. It’s a great way for drafters to get their ideas down without worrying about detail. Turns out, it’s also a great way to import foreign content – a nice bonus.

Point 5: XML document validation is insufficient

This is another one that made me laugh. As before, it’s totally true and entirely misleading. XML validation is not intended to be the be-all and end-all of verifying a document. It would be quite remarkable if an XML schema could do that. Curiously, a word processor offers nothing whatsoever in this regard – it must all be custom created.

When it comes to the subject of verifying a document, we use two terms – validation and verification. Validation is the process of ensuring that the document’s XML content adheres to the content model prescribed by the schema. We call this the “outer envelope” of checks. The “inner envelope” is to verify that the document adheres to the jurisdiction’s internal business rules. While off-the-shelf technologies exist to perform the outer XML validation, this inner verification step requires custom software. We’ve built a configuration mechanism that allows us to configure a “model” that our existing software can use for verification rather than building this from scratch each time.
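
To give a flavor of the inner envelope, here is a minimal sketch of a single hypothetical business rule expressed in XQuery. A real verification model carries many such rules and, as described above, is driven by configuration rather than hand-written one-off code.

```xquery
(: A minimal sketch of one "inner envelope" business rule: every section
   must carry a unique @id. The outer envelope, schema validation, is
   handled separately by an off-the-shelf XML Schema validator. :)
declare function local:verify-unique-ids($doc as document-node()) as xs:string* {
  for $id in distinct-values($doc//section/@id)
  where count($doc//section[@id = $id]) gt 1
  return concat('Duplicate section id: ', $id)
};
```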

Point 6: There is no common language around which to develop a standard

This one is perhaps the most annoying. To be fair, the white paper is not current and could do with an update – although doing that might undermine its use as a marketing tool. Again, as before, there is some truth to the assertion, but to argue that the differences between jurisdictions disqualify XML is to see the glass as half empty rather than half full. This November I will have been in this field for 15 years, and I have worked on legislative systems on four continents. What has surprised me is how similar they are rather than how different. Fundamentally, the process of making law is the same almost everywhere. It’s in the details that the differences lie. Yes, in California a resolution is a type of document while in the UK a resolution is a type of section in a specific type of statutory instrument, but that’s a detail that doesn’t get in the way at all.

There is one point in this area that the white paper makes that is particularly annoying. XML is characterized as a generic model for representing data. That’s only half the story. Everybody in this field knows that there are two very different models that XML serves well – representing data and representing documents. XML, as a derivative of SGML, has stronger origins in the representation of documents than of data. So why are XML’s strengths in representing a document so casually ignored? Seems a little self-serving to me. JSON, on the other hand, is an excellent generic model for representing moderate amounts of data, but a terrible model for representing documents.

The entire argument here can now be refuted, as we now have Akoma Ntoso. It’s an XML schema that was initially designed by the University of Bologna at the request of the United Nations. Today, it’s on the verge of becoming an OASIS standard. Akoma Ntoso understands that there is no one-size-fits-all solution to legislative XML. It addresses this by providing a basic set of constructs that are generally found everywhere, a mechanism to create custom constructs, and an overarching design for how to model the hierarchy of legislation.
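
To give a feel for those basic constructs, here is a small invented fragment of the kind of hierarchy Akoma Ntoso defines, written as a literal XQuery element constructor. The element names come from the Akoma Ntoso vocabulary; the content itself is made up for illustration.

```xquery
(: A small, invented fragment showing Akoma Ntoso's basic hierarchy
   constructs: a section with its number, heading, and content. :)
<section eId="sec_1">
  <num>1.</num>
  <heading>Short title</heading>
  <content>
    <p>This Act may be cited as the Example Act of 2016.</p>
  </content>
</section>
```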

The implementation I will be showing in Ravenna, Indianapolis, and Washington D.C. in the coming weeks will demonstrate just how a general-purpose XML model such as Akoma Ntoso for legislation can be applied to the specific needs of a pair of jurisdictions – and a pretty challenging pair of jurisdictions at that.

For me, XML is the easy winner. With XML, you design the document model to exactly fit the needs of a jurisdiction and then shape the tools to work with that model. With a word processor, you shoe-horn the needs of the jurisdiction into the limited flexibility offered by the word processor’s intrinsic model and then spend all your time trying to handle the mismatches between what the word processor was designed to do and what the customer wants. Either way it’s challenging, but with a word processor, much more so.

Standard
Process, Standards, Transparency, W3C

Connected Information

As a proponent of XML for legislation, I’m often asked why an XML approach is better than a more traditional approach using a word processor. The answer is simple – it’s all about connected information.

The digital end point in a legislative system can no longer be publication of PDFs. PDFs are nothing but a kludgy way to digitize paper — a way to preserve the old traditions and avoid the future. Try reading a PDF on a cell phone and you see the problem. Try clicking on a citation in a PDF and you see the problem. Try scraping the information out of a PDF to make it computer-readable and you see the problem. The only useful function that PDFs serve is as a bridge to the past.

The future is all about connected information — breaking the physical bounds of what we think of as a document and allowing the nuggets of information found within them to be connected, interrelated, and acted upon. This is the real reason why the future lies with XML and its related technologies.

In my blog last week I provided a brief glimpse into how our future amending tools will work. I explored how legislation could be managed much as software is managed with GitHub. This is an example of how useful connected information becomes. Rather than producing bills and amendments as paper documents, the information is stored in a way that allows it to be efficiently and accurately automated — and made available to the public in a computer-readable way.

At Xcential, we’re building our new web-based authoring system — LegisPro. If you take a close look at it, you’ll see that it has two main components. Of course, there is a robust XML editor. However, at the system’s very heart is a linking system — something we call a resolver. It’s in this resolver that the true power lies. It’s an HTTP-based system for managing all the linkages that exist in the system. It connects XML repositories, external data sources, and even SQL databases together to form a seamless universe of connected information.

We’re working hard to transform how legislation, and indeed all government information, is viewed. It’s not just about connecting laws and legislation together through simple web links. We’re talking about providing rich connections between all government information — tying financial data to laws and legislation, connecting regulatory information together, associating people, places, and things with government data, and on and on. We have barely started to scratch the surface, but it’s clear that the future lies with connected information.

While today we position LegisPro as a bill authoring system, it’s much more than that. It provides some of the fundamental underpinnings necessary to transform the government documents of today into the connected information of tomorrow.

Standard
Process, Track Changes

Can GitHub be used to manage legislation?

Every so often, someone suggests that GitHub would be a great way to manage legislation. Usually, we roll our eyes at the naïve suggestion and that is that.

However, there are a good many similarities that do deserve consideration. What if the amending process was supported by a tool that, while maybe not GitHub, worked on the same principles?

My company, Xcential, built the amending solution for the California Legislature, using a process we like to call Amendments in Context. With this process, a proposed revision of a bill is drafted, and then the amendments necessary to produce that revision are extracted as an amendment document. That amendment document, which really becomes an enumeration of the proposed changes in a report, is then submitted to the committee for approval. If approved, the revised document that was drafted earlier becomes the next official version of the bill. This differs from the traditional process, in which an amendment document is drafted first, itemizing the changes to be made. When the committee approves the amendments, there is a mad rush, usually overnight, to implement (or execute) those amendments against the last version in order to produce the next version. Our automated Amendments in Context approach is more accurate and largely eliminates the overnight bottleneck of having to execute approved amendments before the start of business the following day.
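
To sketch the extraction step, the toy function below walks redlined sections and emits narrative amendment instructions. The ins and del element names are stand-ins for whatever redlining markup an implementation uses; a production generator does far more careful grouping, along with all the page-and-line work.

```xquery
(: A toy sketch of extracting amendments from a redlined revision, assuming
   hypothetical <ins> and <del> elements record the redlining. :)
declare function local:generate-amendments($bill as element()) as xs:string* {
  for $sec in $bill//section[.//ins or .//del]
  let $strikes := string-join($sec//del ! string(.), '"; "')
  let $inserts := string-join($sec//ins ! string(.), '"; "')
  return concat('In section ', string($sec/num), ', strike "', $strikes,
                '" and insert "', $inserts, '".')
};
```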

Since implementing this system for California, we’ve been involved in a number of other jurisdictions and efforts that deal with the amending process. This has given us quite a good perspective on the various ways in which bill amendments get handled.

As software developers ourselves, we’ve often been struck by how similar the bill amendment process is to the software development process — the very thing that invariably leads to the suggestion that GitHub could be a great repository for legislation. With this all in mind, let’s compare and contrast the bill amending process with the software development process using GitHub.

(We’ll make suitable procedural simplifications to keep the example clear)

The bill amending process compared with the software enhancement process:

Step 1: Begin a proposed amendment / Begin a proposed enhancement
  • Bill amending: Create a copy of the last version of the bill. In the U.S. and other parts of the world that still use page and line numbers, cleverly annotated page and line number information from the last publication must be included. This copy will be modified to reflect the proposed changes. Make the proposed changes using redlining, showing the changes as insertions and deletions, carefully crafting them to obey the drafting rules and any political sensitivities regarding how the changes are shown.
  • Software development: Create a new branch. This branch will be modified to implement the proposed enhancement. Make the proposed changes to the software — testing and debugging as needed.

Step 2: Generate the amendment / Prepare to commit
  • Bill amending: The amendment generator examines the redlining (insertions and deletions), carefully grouping changes together to produce a minimized set of amendments. These amendments are expressed in the familiar, at least in the U.S., “on page X, line Y, strike ‘this’ and replace with ‘that'” form, or something along those lines. (For jurisdictions that don’t use an amendment generator, a manually written amendment document, enumerating the amendments, is the starting point.)
  • Software development: A differencing engine compares the source code with the prior version, carefully grouping changes together to produce a minimized set of hunks. If you use a tool such as SourceTree by Atlassian, these hunks are shown as source code with lines to be removed and lines to be inserted.

Step 3: Save the amendment document alongside the redlined bill / Commit the changes to GitHub

Step 4: Vote on the amendments / Submit for review
  • Bill amending: The amendment document goes to committee, where it is proposed and then either adopted or rejected. The procedures here may differ, depending on the jurisdiction. In California, multiple competing amendment documents (known as instruction amendments) may be proposed at any one time, but only one can be adopted, and it is adopted in whole. Other jurisdictions allow multiple amendment documents to be adopted and individual amendments within any amendment document to be adopted or rejected.
  • Software development: The review board considers the proposed enhancement and decides whether or not to incorporate it into the next release. It may choose to adopt the entire enhancement or only certain aspects of it.

Step 5: Execute the amendment / Merge into mainline
  • Bill amending: In California, because only single whole amendments can be adopted, executing an adopted amendment is quite easy — the redlined version of the bill simply becomes the next version. In most jurisdictions, however, this isn’t so easy. Instead, each amendment must be applied to a new copy of the bill, destined to become the next version. Conflicts that arise must be resolved following a prescribed set of procedures.
  • Software development: Incorporating an enhancement into the mainline involves a merge of the enhancement branch into the mainline. If an enhancement is not adopted in whole, then approved changes may be cherry-picked. When conflicts between different sets of approved enhancements occur, GitHub requires manual intervention to resolve the issues. This process is generally a lot less formal than resolving conflicts in legislation.

So, as you can see, there are a lot of similarities between amending a bill and implementing a software enhancement. The basic process is essentially identical. However, the differences lie in the details.

Git is designed specifically for the software development process. The legislative process has quite a different set of requirements and traditions which must be met. It simply isn’t possible to bend and distort the legislative process to fit the model prescribed by Git. However, that doesn’t mean that something like GitHub is out of the question. What if there was a GitHub for Legislation — a tool with an associated repository, modeled after Git and GitHub, specifically designed for managing legislation?

This example shows the power of adopting XML for drafting legislation. With properly designed XML, legislation becomes a vast store of machine-readable information that can meet the 21st century challenges of accuracy, efficiency, and transparency. We’re not just printing paper anymore — we’re managing digital information.

Standard
Uncategorized

LegisPro edit will soon be ready for beta!

Our new rulemaking editor, LegisPro edit, is coming along nicely. It’s a web-based XML drafting tool specifically designed for the rigors of rulemaking tasks such as legislative bill drafting. It supports both the Akoma Ntoso and USLM legislative models and can be customized to support any other model if necessary.

This past week I gave a demonstration of it at the LEX US Summer School at George Mason University in Washington D.C. With trepidation, I allowed everyone to have a hands-on experience with it as I provided guidance. This was the first time the editor had been used by anyone outside of Xcential and the first time we had stressed server performance. While certainly not glitch free, the editor exceeded my expectations for this point in the development process and all went well. It worked!

This week we are talking about the editor at NCSL by way of a screenshot demo.

The next opportunities to try the editor hands-on will be at the LEX Summer School in Ravenna, Italy next month and we will also be showing it later in the month at NALIT in Sacramento, California.

The QuickStarter beta program is still in the process of being finalized. We are currently envisioning different levels of participation, from basic beta testing to a full-fledged evaluation program for anyone looking to use it or a part of it in an upcoming project.

More information can be found at http://xcential.com/legispro or you can contact us at info@xcential.com.

Standard
Akoma Ntoso, HTML5, LegisPro Web, LEX Summer School, Standards, Transparency

Data Transparency Breakfast, LEX US Summer School 2015, First International Akoma Ntoso Conference, and LegisPro Edit reveal.

Last week was a very good week for my company, Xcential.

We started the week hosting a breakfast put on by the Data Transparency Coalition at the Booz Allen Hamilton facility in Washington D.C. The topic was Transforming Law and Regulation. Unfortunately, an issue at home kept me away, but I was able to make a brief pre-recorded presentation, and my moderating role was played by Mark Stodder, our company President. Thank you, Mark!

Next up was the first U.S. edition of the LEX Summer School from Italy. I have attended this summer school every year since 2010 in Italy, and it’s great to see the same opportunity for an open dialog amongst the legal informatics community finally come to the U.S. Monica Palmirani (@MonicaPalmirani), Fabio Vitali, and Luca Cervone (@lucacervone) put on the event from the University of Bologna. The teachers also included Jim Mangiafico (@mangiafico) (the LoC data challenge winner), Veronique Parisse (@VeroParisse) from the European Union, Andrew Weber (@atweber) from the Library of Congress, Kirsten Gullickson (@GullicksonK) from the Office of the Clerk at the U.S. House of Representatives, and me from Xcential. I flew in for an abbreviated visit covering the last two days of the Summer School, where I covered how the U.S. Code is modeled in Akoma Ntoso and gave the students an opportunity to try out our new bill drafting editor — LegisPro edit.

After the Summer School concluded, it was followed by the first International Akoma Ntoso Conference on Saturday, where I spoke about the architecture of our new editor as well as how the USLM schema is a derivative of the Akoma Ntoso schema. We had good turnout, from around the world, and a number of interesting speakers.

This week is NCSL in Seattle where we will be discussing our new editor with potential customers and partners. Mark Stodder from Xcential will be in attendance.

In a month, I’ll be in Ravenna once more for the European LEX Summer School — where I’ll be able to show even more progress towards the goal of a full product line of Akoma Ntoso tools. These are interesting times for me.

The editor is coming along nicely and we’re beginning to firm up our QuickStarter beta plans. I’ve already received a number of requests and will be getting in touch with everyone as soon as we’re ready to roll out the program. If you would like to participate as a beta tester — or if you would just like more information, please contact us at info@xcential.com.

I’m really excited about how far we’ve come. Akoma Ntoso is on the verge of being certified as an official OASIS standard, our Akoma Ntoso products are coming into place, and interest around the world is growing. I can’t wait to see where we will be this time next year.

Standard
Akoma Ntoso, HTML5, LegisPro Web, LEX Summer School, Standards, Track Changes, W3C

Coming soon!!! A new web-based editor for Akoma Ntoso

I’ve been working hard for a long time — building an all new web-based editor for Akoma Ntoso. We will be showing it for the first time at the upcoming Akoma Ntoso LEX Summer School in Washington D.C.

Unlike our earlier AKN/Editor, this editor is a pure XML editor designed from the ground up around the XML capabilities that modern browsers possess. This editor is much more robust, more precise, and far more scalable.


Basic Features

  1. Configurable XML models — including Akoma Ntoso and USLM
  2. Edit full documents or portions of large documents
  3. Flexible selection and editing regardless of XML structure
  4. Built-in redlining (change tracking) supporting textual AND structural changes
  5. Browse document sources with drag-and-drop
  6. Full undo & redo
  7. Customizable attribute editor
  8. Search and replace
  9. Modular architecture to allow for extensive customization

Underlying Technology

  1. XML-based editing component
    • DOM 4 support
    • XPath Support
    • CSS Styling
    • Sophisticated event model
  2. HTTP-based resolver architecture for retrieving documents
    • Interpret citations
    • Dereference URLs
    • WebDAV adaptors to document repositories
    • Query repositories with XQuery or databases with SQL
  3. AngularJS-based User Interface using HTML5
    • Component modules for easy customization
  4. XML repository for storing documents
    • Integrate any XML repository
    • Built-in support for eXist-db
  5. Validation & Publishing
    • XML Schema validator
    • XSL-FO publishing

We’ll reveal a lot more at the LEX Summer School later this month! If you’re interested in our QuickStart beta program, drop me a note at grant.vergottini@xcential.com.

Standard
Akoma Ntoso, LEX Summer School, Standards

Upcoming U.S. and European events related to Akoma Ntoso

In my last blog post I covered the public review of the new proposed Akoma Ntoso (LegalDocML) standard for legal documents. Please keep the comments coming. In order to comment, please send email to legaldocml-comment@lists.oasis-open.org. If you wish to subscribe to this mailing list, please follow the instructions at https://www.oasis-open.org/committees/comments/index.php?wg_abbrev=legaldocml

In addition, there are three upcoming events related to Akoma Ntoso in which you may wish to participate (this list comes from Monica Palmirani, the chair of the OASIS LegalDocML technical committee):

1. Akoma Ntoso Summer School, 27-31 July, 2015, George Mason University, Fairfax, Virginia (USA): http://aknschool.cirsfid.unibo.it
Registration fee: http://aknschool.cirsfid.unibo.it/logistics/registrations-and-fees/
Application Form: http://aknschool.cirsfid.unibo.it/wp-content/uploads/2015/05/ApplicationForm.pdf
Brochure:
http://aknschool.cirsfid.unibo.it/wp-content/uploads/2015/05/brochure_2015_US_DEF.pdf
Deadline: end of June, 2015.

2. IANC2015 (First International Akoma Ntoso Conference): August 1st, 2015, George Mason University, Fairfax, Virginia (USA)
Brochure: http://aknschool.cirsfid.unibo.it/wp-content/uploads/2015/05/AKN-CONFERENCE1.pdf
Call for contributions:
http://www.akomantoso.org/akoma-ntoso-conference/call-for-contributions/
Deadline: June 19th, 2015.

3. Summer School LEX2015, 7-15 Sept. 2015, Ravenna, Italy: http://summerschoollex.cirsfid.unibo.it
Registration fee: http://summerschoollex.cirsfid.unibo.it/?page_id=66
Application Form: http://summerschoollex.cirsfid.unibo.it/wp-content/uploads/2010/04/ApplicationForm2.pdf
Brochure:
http://summerschoollex.cirsfid.unibo.it/wp-content/uploads/2015/05/brochure_2015_LEX1.pdf
Deadline: July 15th, 2015.

I have been participating in the European LEX Summer school every year since 2010 and find it to be both inspirational and very valuable. If you’re interested in understanding where the legal informatics field is headed, I encourage you to find a way to attend any of these events. I will be speaking/teaching at all three events.

Standard
Akoma Ntoso, LegisPro Web, LEX Summer School, Standards, Track Changes

Akoma Ntoso (LegalDocML) is now available for public review

It’s been many years in the making, but the standardized version of Akoma Ntoso is now finally in public review. You can find the official announcement here. The public review started May 7th and will end on June 5th — which is quite a short time for something so complex.

I would like to encourage everyone to take part in this review process, as short as it is. It’s important that we get good coverage from around the world to ensure that any use cases we missed get due consideration. Instructions for how to comment can be found here.

Akoma Ntoso is a complex standard with many parts. If you’re new to Akoma Ntoso, you will probably find it quite overwhelming. To try to cut through that complexity, I’m going to give a bit of an overview of what the documentation covers and what to look for.

There are four primary documents:

  1. Akoma Ntoso Version 1.0 Part 1: XML Vocabulary — This document is the best place to start. It’s an overview of Akoma Ntoso and describes what all the pieces are and how they fit together.
  2. Akoma Ntoso Version 1.0 Part 2: Specifications — This is the reference material. When you want to know something specific about an Akoma Ntoso XML element or attribute, this is the document to go to. It contains very detailed information derived from the schema itself. Also included are the XML schema (or DTD, if you’re still inclined to use DTDs) and a good set of examples from around the world.
  3. Akoma Ntoso Naming Convention Version 1.0 — This document describes two interrelated and important aspects of the proposed standard: how identifiers are assigned to elements and how IRI-based (or URI-based) references are formed. There is a lot of complexity in this topic, and it was the subject of numerous meetings and an interesting debate at the Coco Loco restaurant in Ravenna, Italy, one evening while being eaten by mosquitoes.
  4. Akoma Ntoso Media Type Version 1.0 — This fourth document describes a proposed new media type that will be used when transmitting Akoma Ntoso documents.

This is a lot of information to read and digest in a very short amount of time. In my opinion, the best way to try and evaluate Akoma Ntoso’s applicability to your jurisdiction is as follows:

  • First, look at the basic set of tags used to define the document hierarchy. Is this set of tags adequate? Keep in mind that the terminology might not always perfectly align with yours. We had to find a neutral terminology that would allow us to define a super-set of the concepts found throughout the world.
  • If you do find that specific elements you need are missing, consider whether or not that concept is perhaps specific to your jurisdiction. If that is the case, take a look at the basic Akoma Ntoso building blocks that are provided. While we tried to provide a comprehensive set of elements and attributes, there are many situations which are simply too esoteric to justify the additional tag bloat in the basic standard. Can the building blocks be used to model those concepts?
  • Take a look at the identifiers and the referencing specification. These parts are intended to work together to allow you to identify and access any provision in an Akoma Ntoso document. Are all your possible needs met with this? Implicit in this design is a resolver architecture — a component that parses IRI references (think of them as URLs) and maps them to specific provisions; a small sketch of that first parsing step follows this list. Is this approach workable?
  • Take a look at the basic metadata requirements. Akoma Ntoso has a sophisticated metadata methodology behind it, and this involves quite a bit of indirection at times. Understand what the basic metadata needs are and how you would model your jurisdiction’s metadata using this.
  • Finally, if you have time, take a look at the more advanced aspects of Akoma Ntoso. Consider how information related to the document’s lifecycle and workflow might be modeled within the metadata. Consider your change management needs and whether or not the change management capabilities of Akoma Ntoso could be adapted to fit. If you work with complex composite documents, take a look at the mechanisms Akoma Ntoso provides to assemble composite documents.
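
As promised above, here is a minimal sketch of the first step a resolver performs: splitting a work-level IRI into its components. The IRI shown is invented for illustration; real Akoma Ntoso IRIs carry considerably more structure, including language and versioning information.

```xquery
(: A minimal sketch of IRI parsing, the first step of resolution. The IRI
   and the component names are invented for illustration. :)
let $iri := '/akn/us/act/2015-05-07/123/main'
let $parts := tokenize($iri, '/')[. ne '']
return map {
  'country' : $parts[2],
  'docType' : $parts[3],
  'date'    : $parts[4],
  'number'  : $parts[5],
  'portion' : $parts[6]
}
```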

Yes, there is a lot to digest in just a few weeks. Please provide whatever feedback you can.

We’re also now in the planning stages for a US LEX Summer School. If you’ve followed my blog over the years, you’ll know that I am a huge fan of the LEX Summer School in Ravenna, Italy — I’ve been every year for the past five years. This year, Kirsten Gullickson and I convinced Monica and Fabio to bring the Summer School to Washington D.C. as well. The summer school will be held the last week of July 2015 at George Mason University. The class size will be limited to just 30, so be sure to register early once registration opens. If you want to hear me rattle on at length about this subject, this is the place to go — I’ll be one of the teachers. The Summer School will conclude with a one-day Akoma Ntoso Conference on the Saturday. We’ll be looking for papers. I’ll send out a blog with additional information as soon as it’s finalized.

You may have noticed that I’ve been blogging a lot less lately. Well, that’s because I’ve been heads down for quite some time. We’ll soon be in a position to announce our first full Akoma Ntoso product. It’s an all new web-based XML editor that builds on our experiences with the HTML5 based AKN/Editor (LegisPro Web) that we built before.

This editor is composed of four main parts.

  1. First, there is a full XML editing component that works with pure XML — allowing it to be quite scalable and very XML precise. It implements complex track changes capabilities along with full redo/undo. I’m quite thrilled how it has turned out. I’ve battled for years with XMetaL’s limitations and this was my opportunity to properly engineer a modern XML editor.
  2. Second, there is a sophisticated resolver technology which acts as the middleware, implementing the URI scheme I mentioned earlier — and interfacing with local and remote document resources. All local document resources are managed within an eXist-db repository.
  3. Third, there is the Akoma Ntoso model. The XML editing component is quite schema/model independent. This allows it to be used with a wide variety of structured documents. The Akoma Ntoso model adapts the editor for use with Akoma Ntoso documents.
  4. And finally, there is a very componentized application which ties all the pieces together. This application is written as an AngularJS-based single page application (SPA). In an upcoming blog I’ll detail the trials and tribulations of learning AngularJS. While learning AngularJS has left me thinking I’m quite stupid at times, the goal has been to build an application that can easily be extended to fit a wide variety of structured editing needs. It’s important that all the pieces be defined as modules that can either be swapped out for bespoke implementations or complemented with additional capabilities.

Our current aim is to have the beta version of this new editor available in time for the Summer School and Akoma Ntoso conference — so I’ll be very heads down through most of the summer.

Standard
Akoma Ntoso, LEX Summer School, Process, Standards, Track Changes, Transparency, W3C

Achieving Five Star Open Data

A couple weeks ago, I was in Ravenna, Italy at the LEX Summer School and the follow-on Developer’s Workshop. There, the topic of the semantic web came up a lot. Despite its cooling in the popular press in recent years, I’m still a big believer in the idea. The problem with the semantic web is that few people actually get it. At this point, it’s such an abstract idea that people invariably jump to the closest analog available today and mistake it for that.

Tim Berners-Lee (@timberners_lee), the inventor of the web and a big proponent of linked data, has suggested a five star deployment scheme for achieving open data — and what ultimately will be a semantic web. His chart can be thought of as a roadmap for how to get there.

Take a look at today’s Data.gov website. Everybody knows the problem with it — it’s a pretty wrapper around a dumping ground of open data. There are thousands and thousands of data sets available on a wide range of interesting topics. But, there is no unifying data model behind all these data dumps. Sometimes you’re directed to another pretty website that, while well-intentioned, hides the real information behind the decorations. Sometimes you can get a simple text file. If you’re lucky, you might even find the information in some more structured format such as a spreadsheet or XML file. Without any unifying model and with much of the data intended as downloads rather than as an information service, this is really still Tim’s first star of open data — even though some of the data is provided as spreadsheets or open data formats. It’s a good start, but there’s an awful long way to go.

So let’s imagine that a better solution is desired, providing information services, but keeping it all modest by using off-the-shelf technology that everyone is familiar with. Imagine that someone with the authority to do so, takes the initiative to mandate that henceforth, all government data will be produced as Excel spreadsheets. Every memo, report, regulation, piece of legislation, form that citizens fill out, and even the U.S. Code will be kept in Excel spreadsheets. Yes, you need to suspend disbelief to imagine this — the complications that would result would be incredibly tough to solve. But, imagine that all those hurdles were magically overcome.

What would it mean if all government information was stored as spreadsheets? What would be possible if all that information was available throughout the government in predictable and permanent locations? Let’s call the system that would result the Government Information Storehouse – a giant information repository for information regularized as Excel spreadsheets. (BTW, this would be the future of government publishing once paper and PDFs have become relics of the past.)

How would this information look? Think about a piece of legislation, for instance. Each section of the bill might be modeled as a single row in the spreadsheet. Every provision in that section would be its own spreadsheet cell (ignoring hierarchical considerations, etc.). Citations would turn into cell references or cell range references. Amending formulas, such as “Section 1234 of Title 10 is amended by…”, could be expressed as literal formulas — spreadsheet formulas. Such a formula would refer to the specific cell in the appropriate U.S. Code title and contain programmatic instructions for how to perform the amendment. In short, lots of once complex operations could be automated very efficiently and very precisely. Having the power to turn all government information into a giant spreadsheet has a certain appeal — even if it requires quite a stretch of the imagination.
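
As a rough sketch of what executing such an amending formula might look like in the XML world rather than in Excel, imagine a hypothetical <amend> instruction that targets a provision and carries a strike/insert pair. All of the element and attribute names here are invented for illustration.

```xquery
(: A rough sketch of executing an "amending formula": a hypothetical
   <amend target="..."><strike>...</strike><insert>...</insert></amend>
   instruction applied to a copy of the law, using the XQuery Update
   Facility's transform expression. A real amendment engine must also
   preserve inline markup, which this toy does not. :)
declare function local:apply($law as element(), $amend as element()) as element() {
  copy $new := $law
  modify
    for $t in $new//section[@id = $amend/@target]/content
    return replace value of node $t
           with replace(string($t), string($amend/strike), string($amend/insert))
  return $new
};
```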

Now imagine what it would mean if selected parts of this information were available to the public as these spreadsheets – in a regularized and permanent way — say Data.gov 2.0 or perhaps, more accurately, as Info.gov. Think of all the spreadsheet applications that would be built to tease out knowledge from the information that the government is providing through their information portal. Having the ability to programmatically monitor the government without having to resort to complex measures to extract the information would truly enable transparency.

At this point, the linkages and information services give us some of the attributes of Tim’s four and five star open data solutions, but our focus on spreadsheet technology has left us with a less than desirable two star system. Besides, we all know that having the government publish everything as Excel spreadsheets is absurd. Not everything fits conveniently into a spreadsheet table, to say nothing of the scalability problems that would result. I wouldn’t even want to try putting Title 42 of the U.S. Code into an Excel spreadsheet. So how do we really go about achieving this sort of open data and the efficiencies it enables — both inside and outside of government?

In order to realize true four and five star solutions, we need to quickly move on to fulfilling all the parts of Tim’s five star chart. In his chart, a three star solution replaces Excel spreadsheets with an open data format such as a comma-separated file. I don’t actually care for this ordering because it sacrifices much to achieve the goal of having neutral file formats — so let’s move on to full four and five star solutions. To get there, we need to become proficient in the open standards that exist, and we must strive to create ones where they’re missing. That’s why we work so hard on the OASIS efforts to develop Akoma Ntoso and legal citations into standards for legal documents. And when we start producing real information services, we must ensure that the linkages in the information (those links and formulas I wrote about earlier) exist to the best extent possible. It shouldn’t be up to the consumer to figure out how a provision in a bill relates to a line item in some budget somewhere else — that linkage should be established from the get-go.

We’re working on a number of core pieces of technology to enable this vision and get to full five star open data. We’re integrating XML repositories and SQL databases into our architectures to give us the information storehouse I mentioned earlier. We’re building resolver technology that allows us to create and manage permanent linkages. These linkages can be as simple as citation references or as complex as instructions to extract from, or make modifications to, other information sources. Think of our resolver technology as akin to the engine in Excel that handles cell or range references, arithmetic formulas, and database lookups. And finally, we’re building editors that will resemble word processors in usage but will allow complex sets of information to be authored and later modified. These editors will have many of the sophisticated capabilities, such as track changes, that you might see in a modern word processor, but underneath you will find a complex structured model rather than the ad hoc data structures of a word processor.

Building truly open data is going to be a challenging but exciting journey. The solutions that are in place today are a very primitive first step. Many new standards and technologies still need to be developed. But, we’re well on our way.

Standard
Akoma Ntoso, LEX Summer School, Standards, Track Changes

2014 LEX Summer School & Developer’s Workshop

This week I attended the 2014 LEX Summer School and the follow-on Developer’s Workshop put on by the University of Bologna in Ravenna, Italy. This is the fifth year that I have participated and the third year that we have had the developer’s extension.

It’s always interesting to me to see how the summer school has evolved from the last year and who attends. As always, the primary participation comes from Europe – as one would expect. But this year’s participants also came from as far away as the U.S., Chile, Taiwan, and Kenya. The United States had a participant from the U.S. House of Representatives this year, aside from me. In past years, we have also had U.S. participation from the Library of Congress, LexisNexis, and, of course, Xcential. But I’m always disappointed that there isn’t greater U.S. participation. Why is this? It seems that this is a field where the U.S. chooses to lag behind. Perhaps most jurisdictions in the U.S. are still hoping that Open Office or Microsoft Office will be a good solution. In Europe, the legal informatics field is looking beyond office productivity tools towards all the other capabilities enabled by drafting in XML — and looking forward to a standardized model as a basis for a more cost-effective and innovative industry.

As I already mentioned, this was our third developer’s workshop. It immediately followed the summer school. This year the developer’s workshop was quite excellent. The closest thing I can think of in the U.S. is NALIT, which I find to be more of a marketing-oriented show and tell. This, by comparison, is a far more cozy venue. We sit around, in a classroom setting, and have a very open and frank share and discuss meeting. Perhaps it’s because we’ve come to know one another through the years, but the discussion this year was very good and helpful.

We had presentations from the University of Bologna, the Italian Senate, the European Parliament, the European Commission, the UK National Archives, the US House of Representatives, and myself representing the work we are doing both in general and for the US House of Representatives. We closed out the session with a remote presentation from Jim Mangiafico on the work he is doing translating to Akoma Ntoso for the UK National Archives. (Jim, if you don’t already know, was the winner of the Library of Congress’ Akoma Ntoso challenge earlier this year.)

What struck me this year is how our shared experiences are influencing all our projects. There has been a marked convergence in our various projects over the last year. We all now talk about URI referencing schemes, resolvers to handle them, and web-based editors to draft legislation. And, much to my delight, this was the first year that I’m not the only one looking into change tracking. Everybody is learning that differencing isn’t always the best way to compute amendments – often you need to better craft how the changes are recorded.

I can’t wait to see the progress we make by this time next year. By then, I’m hoping that Akoma Ntoso will be well established as a standard and the first generation of tools will have started to mature. Hopefully our discussion will have evolved from how to build tools towards how to achieve higher levels of compliance with the standard.

I also hope that we will have greater participation from the U.S.

Standard
Akoma Ntoso, Standards, Transparency

Look how far legal informatics has come – in just a few years

Back in 2001 when I started in the legal informatics field, it seemed we were all alone. Certainly, we weren’t – there were many similar efforts underway around the country and around the world. But, we felt alone. All the efforts were working in isolation – making similar decisions and learning similar lessons. This was the state of the field, for the most part, for the next 6 to 8 years. Lots of isolated progress, but few opportunities to share what we had learned and build on what others knew.

In 2010, I visited the LEX Summer School, put on by the University of Bologna in Ravenna, Italy. What became apparent to me was just how isolated the various pockets of innovation had become around the world. There was lots of progress, especially in Europe, but legal informatics, as an industry, was still in a fledgling state – it was more of an academic field than a commercial industry. In fact, outside of academic circles, the term legal informatics was all but meaningless. When I wrote my first blog in 2011, I looked forward to the day when there might be a true legal informatics industry.

Now, just a few years later, it’s stunning how far we have come. Certainly, we still have far to travel, but now we’re all working together towards common goals rather than working alone towards the same, but isolated, goals. I thought I would spend this week’s blog to review just how far we have come.

  1. Working together
    We have come together in a number of important dimensions:

    • First of all, consider geography. This is a small field, but around the world we’re now all very much connected together. We routinely meet, share ideas, share lessons, and share expertise – no matter which continent we work and reside on.
    • Secondly, consider our viewpoints. There was once a real tension between the transparency camp, government, external industry, and academia. If you participated in the 2014 Legislative Data and Transparency conference a few weeks ago in Washington D.C., one of the striking things was how little tension remains between these various viewpoints. We’re all now working towards a common set of goals.
  2. Technology
    • I remember when we used to question whether XML was the right technology. The alternatives were to use Open Office or Microsoft Office, basing the legislative framework around office productivity tools. Others even proposed using relational database technology along with a forms-based interface. Those ideas have now generally faded away – XML is the clear choice. And faking XML by relying on the fact that the Open Document Format (ODF) or Office Open XML formats are based on XML just isn’t credible anymore. XML means more than just relying on an internal file format that your tools happen to use – it means designing information models specifically to solve the challenges of legal informatics.
    • I remember when we used to debate how references should be managed. Should we use file paths? Should we use URNs? Should we use URLs? Today the answer is clear – we’re all settling around logical URLs with resolvers, sometimes federated, to stitch together a web of interconnected references. Along with this decision has been the basic assumption that web-based solutions are the future – desktop applications no longer have a place in a modern solution.
    • Consider database technology. We used to have three choices – use the file system, try to adapt mature but ill-fitting relational databases, or take a risk with emerging XML databases. Clearly XML databases were the future – but was it too early? Not anymore! XML database technology, along with XQuery, has come a long way in the past few years.
  3. Standards
    Standards are what will create an industry. Without them, there is little opportunity for re-use – a necessary part of allowing cost-effective products to be built. After a few false starts over the years, we’re now on the cusp of having industry standards to work with. The OASIS LegalDocML (Akoma Ntoso) and LegalCiteM technical committees are hard at work on developing those standards. Certainly, it will be a number of years before we see all the benefits of these standards, but as they come to fruition, a real industry can emerge.
  4. Driving Forces
    Ten years ago, the motivation for using XML was to replace outdated drafting systems, often cobbled together on obsolete mainframes, that sorely needed replacement. The needs were all internal. Now, that has all changed. The end result is no longer a paper document which can be ordered from the “Bill Room” found in the basement of the Capitol building. It’s often not even a PDF rendition of that document. The new end result is information which needs to be shared in a timely and open way in order to achieve the modern transparency objectives, like the DATA Act, that have been mandated. This change in expectations is going to revolutionize how the public works with their representatives to ensure fair and open government.
In the past dozen years, things sure have changed. Credit must be given to Monica Palmirani (@MonicaPalmirani) and Fabio Vitali at the University of Bologna – an awful lot of the progress pivots around their initiatives. However, we’ve all played a part in creating an open, creative, and cooperative environment for legal informatics to thrive as more than just an academic curiosity – as a true industry with many participants working collaboratively and competitively to innovate and solve the challenges ahead.

Standard
Process, Standards, Transparency

Imagining Government Data in the 21st Century

After the 2014 Legislative Data and Transparency conference, I came away both encouraged and a little worried. I’m encouraged by the vast amount of progress we have seen in the past year, but at the same time a little concerned by how disjointed some of the initiatives seem to be. I would rather see new mandates forcing existing systems to be rethought rather than causing additional systems to be created – which can get very costly over time. But, it’s all still the Wild Wild West of computing.

What I want to do with my blog this week is try to define what I believe transparency is all about:

  1. The data must be available. First and foremost, the most important thing is that the data be provided at the very least – somehow, anyhow.
  2. The data must be provided in such a way that it is accessible and understandable by the widest possible audience. This means providing data formats that can be read by ubiquitous tools and ensuring the coding necessary to support all types of readers, including those with disabilities.
  3. The data must be provided in such a way that it should be easy for a computer to digest and analyze. This means using data formats that are easily parsed by a computer (not PDF, please!!!) and using data models that are comprehensible to widest possible audience of data analysts. Data formats that are difficult to parse or complex to understand should be discouraged. A transparent data format should not limit the visibility of the data to only those with very specialized tools or expertise.
  4. The data provided must be useful. This means that the most important characteristics of the data must be described in ways that allow it to be interpreted by a computer without too much work. For instance, important entities described by the data should be marked in ways that are easily found and characterized – preferably using broadly accepted open standards.
  5. The data must be easy to find. This means that the location at which data resides should be predictable, understandable, permanent, and reliable. It should reflect the nature of the data rather than the implementation of the website serving the data. URLs should be designed rather than simply fall out of the implementation.
  6. The data should be as raw as possible – but still comprehensible. This means that the data should have undergone as little processing as possible. The more that data is transformed, interpreted, or rearranged, the less like the original data it becomes. Processing data invariably damages its integrity – whether intentional or unintentional. There will always be some degree of healthy mistrust in data that has been over-processed.
  7. The data should be interactive. This means that it should be possible to search the data at its source – through both simple text search and more sophisticated data queries. It also means that whenever data is published, there should be an opportunity for the consumer to respond back – be it simple feedback, a formal request for change, or some other type of two-way interaction.

How can this all be achieved for legislative data? This is the problem we are working to solve. We’re taking a holistic approach by designing data models that are both easy to understand and can be applied throughout the data life cycle. We’re striving to limit data transformations by designing our data models to present data in ways that are understandable to humans and computers alike. We are defining URL schemes that are well thought out and could last for as long as URLs are how we find data in the digital era. We’re defining database solutions that allow data to not only be downloaded, but also searched and queried in place. We’re building tools that will allow the data to not only be created but also interacted with later. And finally, we’re working with standards bodies such as the LegalDocML and LegalCiteM technical committees at OASIS to ensure well thought out worldwide standards such as Akoma Ntoso.

Take a look at Title 1 of the U.S. Code. If you’re using a reasonably modern web browser, you will notice that this data is very readable and understandable – it’s meant to be read by a human. Right-click with the mouse and view the source. This is the USLM format that was released a year ago. If you’re familiar with the structure of the U.S. Code and you’re reasonably XML savvy, you should feel at ease with the data format. It’s meant to be understandable to both humans and to computer programs trying to analyze it. The objective here is to provide a single simple data model that is used from initial drafting all the way through publishing and beyond. Rather than transforming the XML into PDF and HTML forms, the XML format can be rendered into a readable form using Cascading Style Sheets (CSS). Modern XML repositories such as eXist allow documents such as this to be queried as easily as you would query a table in a relational database – using a query language called XQuery.
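
To see just how approachable the format is to a program, here is a minimal sketch in JavaScript. The file name is an assumption (a locally saved copy of Title 1 in USLM), and the namespace URI, while I believe it is the one USLM uses, should also be treated as an assumption:

const USLM_NS = "http://xml.house.gov/schemas/uslm/1.0"; // assumed namespace URI

fetch("title1.xml") // assumed local copy of Title 1 in USLM format
  .then((response) => response.text())
  .then((text) => {
    const doc = new DOMParser().parseFromString(text, "application/xml");
    const sections = doc.getElementsByTagNameNS(USLM_NS, "section");
    for (let i = 0; i < sections.length; i++) {
      const num = sections[i].getElementsByTagNameNS(USLM_NS, "num")[0];
      const heading = sections[i].getElementsByTagNameNS(USLM_NS, "heading")[0];
      console.log(num && num.textContent, heading && heading.textContent);
    }
  });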

This is what we are doing – within the umbrella of legislative data. It’s a start, but ultimately there is a need for a broader solution. My hope is that government agencies will be able to come together under a common vision for how our information should be created, published, and disseminated – in order to fulfill their evolving transparency mandates efficiently. As government agencies replace old systems with new systems, they should design around a common open framework for transparent data rather than building new systems in the exact same footprint as the old systems that they demolish. The digital era and the transparency mandates that have come with it demand new thinking, far different from the thinking of the paper era which is now drawing to a close. If this can be achieved, then true data transparency will follow.

Standard
HTML5, LegisPro Web, Standards, Track Changes, W3C

Building a browser-based XML Editor

Don’t forget the 2014 U.S. House Legislative Data and Transparency Conference this week.

I’m now hard at work on our second generation web-based XML editor. In my blog last week, I talked about the need for and complexities of change tracking in a legislative editor. In this blog, I want to describe more of the overall motivation.

A couple years ago, we built an HTML5-based legislative editor for Akoma Ntoso. We learned a lot from the effort and had some success with a couple customers whose needs matched the capabilities of the editor. The editor was built to use and exploit, to the fullest extent, many of the new APIs added to modern browsers to support HTML5. We found that, by focusing on HTML5, a lot of the complexities of dealing with browser quirks and incompatibilities were a thing of the past – allowing us to focus on building the editing functions.

The editor worked by transforming the XML document into a close representation of the XML, expressed as HTML5 tags. Using HTML5 features such as the @contenteditable attribute along with modern CSS, the browser DOM, selection ranges, drag and drop, and a WebDAV repository API, we were able to implement a fairly sophisticated web-based legislative editor.
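
A toy sketch of how that first-generation approach hangs together (this is illustrative, not our production code; the container id is invented):

const canvas = document.getElementById("editingCanvas"); // assumed container holding the HTML5 mirror of the XML
canvas.setAttribute("contenteditable", "true");

canvas.addEventListener("keyup", () => {
  const selection = window.getSelection();
  if (selection.rangeCount > 0) {
    const range = selection.getRangeAt(0);
    // In the real editor, this is where the edited HTML5 node would get
    // mapped back to its XML counterpart.
    console.log("editing inside:", range.startContainer.parentNode.nodeName);
  }
});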

But, not everything went smoothly. The first problem involved the complexity of mapping all the intricacies of XML into an HTML5 representation, and then maintaining that representation in the browser. Part of the difficulty stems from the fact that HTML5 is not specifically an XML dialect – and browsers tend to do HTML5 things that aren’t always XML friendly. The HTML5 DOM is deliberately rather loose and forgiving (it’s a big part of why HTML was successful in the first place) while XML demands a very precise and rigid DOM.

The second problem we faced was scalability. While the HTML5 representation wasn’t all that heavyweight, the bigger problem was the transformation cost going back and forth between HTML5 and XML. We sometimes deal with very large legislation and laws. In our bigger cases, the cost of transformation was simply unreasonable.

So what is the solution? Well, early last year we started experimenting with using a browser to render XML documents with CSS directly – without any transform into HTML. Most modern browsers now do this very well. For the most part, we were able to achieve an acceptable rendition in the browser without any transformation.
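
The mechanics are simple enough to show in a sketch. Assuming xmlSource holds the serialized document (and with an invented stylesheet name), the essence of the no-transform approach is a single processing instruction:

// Prepend a stylesheet processing instruction so that a browser receiving
// this payload as application/xml renders the XML elements directly.
function addStylesheet(xmlSource) {
  return '<?xml-stylesheet type="text/css" href="legislation.css"?>\n' + xmlSource;
}

// legislation.css might contain rules along these lines:
//   section { display: block; margin: 1em 0; }
//   heading { display: block; font-weight: bold; }
//   num     { display: inline; padding-right: 0.5em; }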

There were a few drawbacks to this approach. For one, links were dead – they didn’t inherently do anything. Likewise, implementing something like the HTML @style attribute didn’t just naturally work. Before we could entertain the notion of a pure XML-based editor built within the XML infrastructure in the browser, we had to find a solution that would allow us to enrich the XML sufficiently to allow it to behave like an HTML page.
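
To give a flavor of the enrichment required, here is one way dead links can be revived: a single delegated click handler that makes Akoma Ntoso-style <ref> elements behave like anchors. This is a sketch, and navigating directly to the @href value is a simplification:

document.addEventListener("click", (event) => {
  const ref = event.target.closest("ref[href]");
  if (ref) {
    event.preventDefault();
    window.location.href = ref.getAttribute("href");
  }
});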

Another problem arose in that our prior web-based editor relied upon the @contenteditable feature of HTML. That is an HTML feature rather than a browser feature. Using XML as our base environment, we no longer had access to this facility. This wasn’t a total loss, as our need for a rich change tracking environment required us to find a better approach than @contenteditable offered anyway.

With solutions to the major problems behind us, we started to take a look at the other goals for the editor:

  • Track Changes – This was the subject of my blog last week. For us, track changes is crucial in any editor targeted at legislation – and it must work at both the structural and textual level equally well. We use the feature for two things – redlining changes as is common in the U.S. and the automatic generation of amendment documents (amendments in context). Differencing can get you part way there – but it lacks the ability to adequately craft the changes in a way that deals with political sensitivities. Track Changes is a very complex feature which must be built into the very core of the editor – tacking it on later will be very difficult, if not impossible.
  • Scalability – Scalability is very important to our applications. We need to support very large documents. Even when we deal with document fragments, we need to allow those fragments to be very large. Our approach is to create editing islands within a large document loaded into the browser. This amounts to only building the editing superstructure around the parts of the document being edited rather than the whole document. It’s like building the scaffolding around only the floors being worked on in a skyscraper rather than trying to envelop the entire building in scaffolding. (There is a sketch of this idea just after this list.)
  • Modularity – We’re building a number of very different applications currently – all of which require XML editing. To allow this variability, our new XML editor is written as a web-based component rather than a full-fledged application. Despite its complexity, on the surface it’s deceptively simple. It has no user interface at all aside from the editing canvas. It’s completely driven by a well thought out JavaScript API. Adding the editor to a document is very simple: a single link, added to the bottom of the XML document, adds the editor to the document. With this component, we’re able to include the editor within all of the applications we are building.
  • Configurability – We need to support a number of different models – not just Akoma Ntoso. To achieve this, an XML-based configuration file is used to define the behaviors for any XML model. Elements can be defined as read-only, templates can be defined (or derived), and even the track changes behavior can be configured for individual elements. The sophistication of the configuration files allows us to model all the variants of legislative models we have encountered without the need for extensive programming-level customization.
  • Browser Support – We’re pushing the envelope when it comes to browser support. Our current focus is on Google’s Chrome browser. Support for all the browsers aside from Internet Explorer should be relatively easy. Our experience has shown that the browsers are now quite similar. Internet Explorer is the one exception – in this particular area. Years ago, IE was the best browser when it came to XML support. While IE had many other compatibility issues, particularly with CSS, it led the way in supporting XML. However, while Microsoft has made tremendous strides moving forward to match the other browsers and modern standards, they’ve neglected XML. Their circa 1999 legacy capabilities for XML do not match modern standards and are quite deficient. Hopefully, this is something that will soon be rectified.
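
As promised in the scalability goal above, here is a toy sketch of the editing-islands idea (not our actual implementation). The two helper functions stand in for the real machinery that attaches carets, change tracking, and undo support:

// Hypothetical stand-ins for the real editing superstructure:
function attachSuperstructure(provision) { provision.setAttribute("data-editing", "true"); }
function detachSuperstructure(provision) { provision.removeAttribute("data-editing"); }

let activeIsland = null;

function editProvision(provision) {
  // Scaffold only the provision being worked on; the rest of the large
  // document stays inert, which keeps the editor responsive.
  if (activeIsland) detachSuperstructure(activeIsland);
  activeIsland = provision;
  attachSuperstructure(provision);
}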

It’s not all smooth sailing. I have been finding a number of surprising issues with Google Chrome. For instance, whitespace management is a bit fudged at times. Chrome thinks nothing of adding the occasional non-breaking space to maintain whitespace when editing the DOM. What’s worse – it will inexplicably convert this into a text node that reads “&nbsp;” after a while. This is a character entity that is not defined in XML. I have to work hard to constantly reverse this odd behavior.
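
To show the sort of cleanup this forces on us, here is a sketch of a normalization pass (illustrative, not our exact code) that puts ordinary spaces back before the document is serialized to XML:

function normalizeWhitespace(root) {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let node;
  while ((node = walker.nextNode())) {
    node.nodeValue = node.nodeValue.replace(/\u00A0/g, " "); // U+00A0 is the non-breaking space
  }
}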

All in all, I’m excited by this new approach to building a web-based XML editor. It’s a substantial increase in sophistication over our prior web-based XML editor. This editor will be far more robust, scalable, and configurable in comparison to our prior editor and other editors we have worked on. While we still have a way to go in our development, we’ve found solutions to all the risky issues. It’s a future-looking approach – support can only get better. It doesn’t rely on compatibility modes or any other remnants of prior eras in web technology. This approach is really working out quite nicely for us.

Standard
LegisPro Web, Track Changes

Tracking Changes with Legislative Drafting

We’re in the process of rebuilding our legislative editor – from the ground up. There are many reasons why we are doing this, which I will leave to my next blog. Today, I want to focus on the most important reason of all – change tracking.

Figure 1: The example above shows non-literal redlining and two different change contexts. An entire bill section is being added – the “action line” followed by quoted text. Rather than showing the entire text in an inserted notation, only the action line is shown. The quoted text reflects a different change context – showing changes relative to the law. In subsequent versions of this bill, the quoted text will no longer show the law as its change context but rather the prior version. It’s complicated!

For us, change tracking is an essential feature of any legislative editor. It’s not something that can be tacked on later or implemented via a customization – it’s a core feature which must be built in to the base editor from the very outset. Change tracking dictates much of the core architecture of the editor. That means taking the time to build change tracking into the basic DOM structures that we’re building – and getting them right up front. It’s an amazingly complex problem when dealing with an XML hierarchy.

I’ve been asked a number of questions by people that have seen my work. I’ll try to address them here:

Why is change tracking so important? We use change tracking to implement a couple of very important features. First of all, we use it to implement redlining (highlighting the changes) in a bill as it evolves. In some jurisdictions, particularly in the United States, redlining is an essential part of any bill drafting system. It is used to show both how the legislation has evolved and how it affects existing law.

Secondly, we use it to automatically generate “instruction” amendments (floor or committee amendments). First, page and line markers are back-annotated into the existing bill. That bill is then edited to reflect the proposed changes – carefully crafting the edits using track changes to avoid political sensitivities – such as arranging a change so as not to strike out a legislator’s name. When complete, our amendment generator is used to analyze the redlining along with the page and line markers to produce the amendment document for consideration. The cool thing is that to execute the amendments, all we need to do is accept or reject the changes. This is something we call “Amendments in Context” and our customer calls “Automatic Generation of Instruction Amendments” (AGIA).
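
The generator itself is far more involved than I can show here, but a highly simplified sketch conveys the idea. The shape of the change objects and the page/line lookup function are invented for illustration:

function generateAmendments(trackedChanges, findPageAndLine) {
  return trackedChanges.map((change) => {
    const { page, line } = findPageAndLine(change.node); // back-annotated markers
    return change.type === "deletion"
      ? `On page ${page}, line ${line}, strike "${change.text}".`
      : `On page ${page}, line ${line}, insert "${change.text}".`;
  });
}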

How is legislative redlining different from change tracking in Word? They’re very similar. In fact, the first time we implemented legislative redlining, we made the mistake of assuming that they were the same thing. What we learned was that legislative redlining is quite a bit more complex. First of all, the last version of the document isn’t the only change context. The laws being amended are another context which must be dealt with. This means that, within the same document, there are multiple original sources of information which must be compared against.
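
A toy illustration of this multiple-context problem (the object shapes are invented): every tracked change has to remember which original it is relative to, and each redlined view filters accordingly:

const changes = [
  { context: "prior-version", type: "insertion", text: "within 30 days" },
  { context: "current-law", type: "deletion", text: "annually" },
];

// A view that redlines against the law shows only the law-context changes:
const lawRedline = changes.filter((change) => change.context === "current-law");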

Secondly, legislative redlining has numerous conventions, developed over decades, to indicate certain changes that are difficult or cumbersome to show with literal redlining. These amount to non-literal redlining patterns which denote certain changes. Examples include showing that a paragraph is being merged or split, a provision is being renumbered, a whole bill is being gutted and replaced with all new text, and even that a section, amending law (creating a different change context), is being added to a new version of the bill.

The rules of redlining can be so complex and intricate that they require any built-in change tracking mechanism in an off-the-shelf editor to be substantially modified to meet the need. Our first legislative editor was implemented using XMetaL for the State of California. At first, we tried to use XMetaL’s change tracking mechanisms. These seemed to be quite well thought out, being based on Microsoft Word’s track changes. However, it quickly became apparent that this was insufficient as we learned the art of redlining. We then discovered, much to our alarm, that XMetaL’s change tracking mechanism was hidden from the developer and could not be programmatically altered. Our solution involved contracting the XMetaL team to provide us with a custom API that would allow us to control the change tracking dimension. The result works, but is very complex to deal with as a developer. That’s why they had hidden it in the first place.

Why can’t differencing be used to generate an amendments document? We wondered this as well. In fact, we implemented a feature, called “As Amends the Law”, in our LegisWeb bill tracking software using this approach. But, it’s not that straightforward. First of all, off-the-shelf differencers lack an understanding of the political sensitivities of amendments. What they produce is logically correct, but can be quite politically insensitive. The language of amendments is often very carefully crafted so as not to upset one side or another. It’s pretty much impossible to relay this to a program that views its task as simply comparing two documents. Put another way, a differencer will show what has changed rather than how it was changed.

Secondly, off-the-shelf differencers don’t understand all the conventions that exist to denote many of the types of amendments that can be made – especially all the non-literal redlining rules. Asking a legislative body to modify their decades-old customs to accommodate the limitations of the software is an uphill battle.

What approaches to change tracking have you seen? XMetaL’s approach to change tracking is the most useful approach we’ve encountered in XML editors. As I already mentioned, its goal is to mimic the change tracking capabilities of Microsoft Word. It uses XML processing instructions very cleverly to capture the changes – one processing instruction per deletion and a pair for insertions. The beauty of this approach is that it isolates the challenge of change tracking from the document schema – ensuring wide support for change tracking without any need to adapt an existing schema. It also allows the editor to be customized without regard for the change tracking mechanisms. The change tracking mechanisms exist and operate in their own dimension – very nicely isolated from the main aspects of editing. However, when you need to program software in this dimension, the limited programmability and immense complexity becomes a drawback.
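
To make the processing-instruction technique concrete, here is a sketch. The PI targets and payloads below are invented; they are not XMetaL’s actual markup:

// A deletion collapses into a single PI that carries the removed text:
function trackDeletion(textNode, author) {
  const pi = textNode.ownerDocument.createProcessingInstruction(
    "ct-delete", // invented PI target
    'author="' + author + '" text="' + textNode.nodeValue + '"'
  );
  textNode.parentNode.replaceChild(pi, textNode);
}

// An insertion is bracketed by a pair of PIs around the live content:
function trackInsertion(newNode, author) {
  const doc = newNode.ownerDocument;
  const open = doc.createProcessingInstruction("ct-insert-start", 'author="' + author + '"');
  const close = doc.createProcessingInstruction("ct-insert-end", "");
  newNode.parentNode.insertBefore(open, newNode);
  newNode.parentNode.insertBefore(close, newNode.nextSibling);
}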

Xopus, a web-based editor, tries to mimic XMetaL’s approach – actually using the same processing instructions as XMetaL. However, it’s an apparent effort to tack change tracking onto an existing editor, and the result is limited to tracking changes within text strings only. They’ve seemingly never been able to implement a full featured change tracking mechanism. This limits its usefulness substantially.

Another approach is to use additional elements in a special namespace. This is the approach taken by ArborText. The added elements (nine in all), provide a great deal of power in expressing changes. Unfortunately, the added complexity to the customizer is quite overwhelming. This is why XMetaL’s separate change dimension works so well – for most applications.

Our approach is to follow the model established by XMetaL, but to ensure the programmability we need to implement legislative redlining and amendment generation. In the months to come, I will describe all this in much more detail.

Standard
Process, Standards, Transparency

Improving Legal References

In my blog last week, I talked a little about our efforts to improve how citations are handled. This week, I want to talk about this in some more detail. I’ve been participating in a few projects to improve how citations and references to legal provisions are handled.

Let’s start by looking at the need. Have you noticed how difficult it is to look up many citations found in legislation published on the web? Quite often, there is no link associated with the citation. You’re left to do your own legwork if you want to look up that citation – which probably means you’ll take the author’s word for it and not bother to follow the citation. Sometimes, if you’re lucky, you will find a link (or reference) associated with the citation. It will point to a location, chosen by the author, that contains a copy of the legal text being referenced.

What’s the problem with these references?

  • If you take a look at the reference, chances are it’s a crufty URL containing all sorts of gibberish that’s either difficult or impossible to interpret. The URL reflects the current implementation of the data provider. It’s not intended to be meaningful. It follows no common conventions for how to describe a legal provision.
  • Wait a few years and try and follow that link again. Chances are, that link will now be broken. The data provider may have redesigned their site or it might not even exist anymore. You’re left with a meaningless link that points to nowhere.
  • Even if the data does exist, what’s the quality of the data at the other end of the link? Is the text official text, a copy, or even a derivative of the official text? Has the provision been amended? Has it been renumbered? Has it been repealed? What version of that provision are you looking at now? These questions are all hard to answer.
  • When you retrieve the data, what format does it come in? Is it a PDF? What if you want the underlying XML? If that is available, how do you get it?
The object of our efforts, both at the standards committee and within the projects we’re working on at Xcential, is to tackle this problem. The approach being taken involves properly designing meaningful URLs which are descriptive, unambiguous, and can last for a very long time – perhaps decades or longer. These URLs are independent of the current implementation – they may not reflect how the data is stored at all. Figuring out how to retrieve the data from the current underlying content management system is the job of a “resolver”. A resolver is simply an adapter that is attached to a web server. It intercepts the properly designed URL references and then transparently maps them into the crufty old URLs which the content management system requires. The data is retrieved from the content management system, formatted appropriately, and returned as if it really did exist at the properly designed URL which you see. As the years go by and technologies advance, the resolver can be adapted to handle new generations of content management system. The references themselves will never need to change.

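To make the resolver idea concrete, here is a minimal sketch in JavaScript (Node.js). The URL pattern and the backend address are invented for illustration; a real resolver would be driven by a citation grammar rather than a single regular expression:

const http = require("http");

http.createServer((request, response) => {
  // A designed, permanent reference such as /us/usc/t1/s101 (Title 1, Section 101):
  const match = request.url.match(/^\/us\/usc\/t(\d+)\/s(\d+)$/);
  if (!match) {
    response.statusCode = 404;
    response.end();
    return;
  }
  // Quietly map it onto the crufty, implementation-specific URL that the
  // current content management system requires:
  const backendUrl =
    "http://cms.example.internal/fetch?doc=usc&title=" + match[1] + "&sec=" + match[2];
  http.get(backendUrl, (upstream) => {
    response.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(response);
  });
}).listen(8080);
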
There are many more details to go into. I’ll leave those for future blogs. Some of the problems we are tackling involve mapping popular names into real citations, working through ambiguities (including ones created in the future), handling alternate data sources, and allowing citations to be retrieved at varying degrees of granularity.

I believe that solving the legal references problem is just about the most important progress we can make towards improving the legal informatics field. It’s an exciting time to be working in this field.

Standard
Akoma Ntoso, HTML5, LegisPro Web, Standards, Track Changes, Transparency, Uncategorized, W3C

Legal Citations and XML Editing for Legislation

It’s been quite some time since my last blog post – almost six months. The reason is that I’ve been very busy. We are doing a lot of exciting development within Xcential. We are developing a number of quite challenging projects around the globe.

If you’ve been following my blog, you may remember that I was working on an HTML5-based XML editor. That development was two years ago now. We’ve come a long way since then. The basic editor has been stripped down, componentized, and is being rebuilt as a far more robust, scalable, and adaptable solution. There are more details below, which I will expand upon as the editor rolls out over the next year.

    Legal Citations

It has been almost a year since the last Legislative Data and Transparency Conference in Washington D.C. (The next one is coming up.) At that time, I spoke about the need for improved citation management in published XML documents. Well, we’ve come a long way since then. Earlier this year a Technical Committee was formed within OASIS to begin developing some standards. The Legal Citation Markup Technical Committee is now hard at work defining markup models for legal citations. I am a member of that TC.

The reference management part of our HTML5-based editor has been separated out into its own project – a citation interpreter and reference resolver. In our development tests, it’s integrated with eXist as a local repository. We also source documents from external sources such as LII.
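
The lookup chain is easy to sketch (the endpoints here are invented): consult the local repository first and fall back to an external source only when the document is not held locally:

async function resolveCitation(citation) {
  const local = await fetch("/repository/" + citation); // assumed local eXist-backed endpoint
  if (local.ok) {
    return local.text();
  }
  // Illustrative fallback to an external data source:
  const external = await fetch("https://external-source.example.org/" + citation);
  return external.text();
}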

We now have a few citation management projects underway, using our resolver technology. These are exciting projects which will be a huge step forward in improving how citations are managed. It’s premature to talk about this in any detail, so I’ll just leave this as a teaser of stuff to come.

    XML Editing for Legislation

The OASIS Legal Document ML Technical Committee is getting ready to make a large announcement. While this progress is being made, at Xcential we’ve been hard at work refining the state-of-the-art in XML editing.

If you recall the HTML5-based editor for Akoma Ntoso from a couple of years back, you may remember that it was based around all the new HTML5 technologies that had recently been incorporated into web browsers. We learned a lot from that effort – both good and bad. While we were able to get a reasonable tagging editor, using facilities that made editing far easier, we still faced difficulties when it came to basic XML editing and scalability.

So, we’ve taken a more ambitious approach to produce a very generalized XML editing platform. Using what we learned as the basis, our new editor is far more capable. Rather than relying on the mapping of XML into an equivalent HTML5 structure, we now directly use the XML facilities that are built into the browser. This approach is both far more robust and far more scalable. But the most exciting aspect is change tracking. We’re building change tracking directly into the basic editing engine – from the outset. This means that we can track all changes – whether the changes are in the text or in the structure. With all browsers now correctly implementing the standardized DOM Range model, a selection can cut across the document structure arbitrarily – so our change tracking model has to be very sophisticated. While it’s hellishly complex, my experience in implementing change tracking technologies over many years is really coming in handy.
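
A toy illustration of why the Range model makes this hard: a single user selection can start and end in different elements at different depths of the XML tree, and a tracked deletion cannot simply discard the selected contents:

const selection = window.getSelection();
if (selection.rangeCount > 0) {
  const range = selection.getRangeAt(0);
  console.log("starts in:", range.startContainer.nodeName, "at offset", range.startOffset);
  console.log("ends in:", range.endContainer.nodeName, "at offset", range.endOffset);
  // A naive range.deleteContents() would destroy structure; instead, every
  // partially covered element must be marked as deleted while keeping the
  // document valid against the schema.
}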

If you’ve used change tracking in XMetaL, you know the limitations of their technology. XMetaL’s range selection constrains how you can select, which limits the flexibility of deletion. This simplifies the problem for the XMetaL customizer, but at a serious usability price. It’s one of the biggest limiting factors of XMetaL. We’re dealing with this problem once and for all with our new approach – providing a great way to implement legislative redlining.

Take a look at the totally contrived example on the left. It’s admittedly not a real example – it comes from my stress testing of the change tracking facilities. But look at what it does. The red text is a complex deletion that spans elements with little regard to the structure. In our editor, this was done with a single “delete” operation. Try and do this with XMetaL – it takes many operations and is a real struggle – even with change tracking turned off. In fact, even Microsoft Word’s handling of this is less than satisfactory, especially in more recent versions. Behind the scenes, the editor is using the model, derived from the schema, to control this deletion process to ensure that a valid document is the result.

If you’re particularly familiar with XMetaL, you will notice something else too. That deletion cuts through the structure of a table!!!! XMetaL can only track changes within the text of table cells, not the structure. We’re making great strides towards proper legislative redlining technologies, and we are excited to work with our partners and clients to put them into practice.

Standard
Akoma Ntoso, Hackathon, Standards

Sheldon’s Roommate Agreement – in Akoma Ntoso

A Legal Open Document Hackathon was held yesterday at the University of Bologna in Italy – focused on Akoma Ntoso documents. You can learn more about it here:

https://plus.google.com/u/0/events/c03pd1llrcvg7d0t0fj5sh41cbk
http://codexml.cirsfid.unibo.it/material-for-legal-open-document-dec-10/

I wasn’t able to directly participate but I had my own mini-hackathon as well. But, rather than focusing on another boring piece of legislation that nobody wants to read, I thought I would have a little fun with it. If you know me, you’ll know that I’m a huge fan of The Big Bang Theory television show. You could say I have a few Sheldon-like tendencies of my own.

I’ve often thought that the complex roommate agreement that Sheldon had Leonard sign would make a great example for a legal document modeled in Akoma Ntoso. Of course, it’s not a piece of legislation, but surprisingly, it has many of the attributes of legislation. It is even, much like legislation, a bit of a chaotic mess. I had to make a number of extrapolations or “fixes” in order to get a reasonably consistent and workable document. I sure hope, if Sheldon’s desire to win a Nobel prize in physics is to be realized, that he is a better theoretical physicist than he is a legal drafter. Perhaps we should offer to give him a few pointers in document theory and the logical organization of ideas – he really needs them.

Nonetheless, the example afforded me an opportunity to show a number of features of Akoma Ntoso:

  1. In article 10, section 9, there is an example of conditional effectivity. The provision is only effective in the event that either roommate has a girlfriend. As Leonard has had a few on-again, off-again girlfriends, it was a bit of fun figuring out when this provision was in effect. I didn’t consider Amy to be Sheldon’s girlfriend as the pertinent issues have yet to arise.
  2. In season 5, episode 15, Sheldon wins back Leonard’s friendship by amending the agreement to add “Leonard’s Day”.
  3. There are a number of “addendums” to various articles. This isn’t something that is directly supported by Akoma Ntoso, so I used the extension facilities of Akoma Ntoso to add generic tags with @name attributes to model the extensions I needed.
  4. The agreement is a complex document, made up of the main agreement and at least three appendices.

Sheldon’s Roommate Agreement

I present to you, Sheldon’s Roommate Agreement, as much as is known to date and with a few “corrections” on my part:

<?xml version="1.0" encoding="UTF-8"?>
<akomaNtoso xmlns="http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD05"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD05 akomantoso30.xsd">
   <act name="roommateAgreement" contains="multipleVersions">
      <meta>
         <identification source="#sheldon">
            <FRBRWork>
               <FRBRthis value="/us/cbs/bigBangTheory/roommateAgreement/main"/>
               <FRBRuri value="/us/usc/bigBangTheory/roommateAgreement"/>
               <FRBRdate date="2013-12-10" name="generation"/>
               <FRBRauthor href="#sheldon" as="#lessor"/>
               <FRBRcountry value="us"/>
            </FRBRWork>
            <FRBRExpression>
               <FRBRthis
                  value="/us/cbs/bigBangTheory/roommateAgreement/en@/main"/>
               <FRBRuri value="/us/cbs/bigBangTheory/roommateAgreement/en@"/>
               <FRBRdate date="2013-12-10" name="generation"/>
               <FRBRauthor href="#sheldon" as="#lessor"/>
               <FRBRlanguage language="en"/>
            </FRBRExpression>
            <FRBRManifestation>
               <FRBRthis
                  value="/us/cbs/bigBangTheory/roommateAgreement/en@/main.xml"/>
               <FRBRuri value="/us/cbs/bigBangTheory/roommateAgreement/en@.akn"/>
               <FRBRdate date="2013-12-10" name="generation"/>
               <FRBRauthor href="#vergottini" as="#marker"/>
            </FRBRManifestation>
         </identification>
         <publication name="roommateAgreement" date="2013-12-10"
            showAs="Sheldon's Roommate Agreement"/>
         <lifecycle source="#sheldon">
            <eventRef id="gen__s_1__ep_1" date="2007-09-24" 
               source="#sheldon" type="generation"/>
            <eventRef id="gen__s_1__ep_17" date="2008-05-19"
               source="#sheldon" type="generation"/>
            <eventRef id="gen__s_2__ep_1" date="2008-09-22"
               source="#sheldon" type="generation"/>
            <eventRef id="gen__s_3__ep_1" date="2009-09-21" 
               source="#sheldon" type="generation"/>
            <eventRef id="gen__s_3__ep_19" date="2010-04-12"
               source="#sheldon" type="generation"/>
            <eventRef id="gen__s_4__ep_24" date="2011-05-19"
               source="#sheldon" type="generation"/>
            <eventRef id="gen__s_5__ep_7" date="2011-10-27"
               source="#sheldon" type="generation"/>
            <eventRef id="gen__s_5__ep_14" date="2012-01-26"
               source="#sheldon" type="generation"/>
            <eventRef id="gen__s_5__ep_15" date="2012-02-02"
               source="#sheldon" type="generation"/>
            <eventRef id="amd__s_5__ep_15" date="2012-02-02"
               source="#sheldon" type="amendment"/>            
         </lifecycle>
         <analysis source="#sheldon">
            <passiveModifications>
               <textualMod type="insertion" id="adds_leonards_day" 
                  incomplete="false">
                  <source href="#s_5__ep_15"/>
                  <destination href="#add_1"/>
               </textualMod>
            </passiveModifications>
         </analysis>
         <temporalData source="#sheldon">
            <temporalGroup id="period_1">
               <timeInterval refersTo="#signed" 
                  start="#gen_s_1__ep_1"/>
            </temporalGroup>
            <temporalGroup id="period_102">
               <timeInterval refersTo="#proposed" 
                  start="#gen__s_5__ep_15"/>
            </temporalGroup>
            <temporalGroup id="period_roommate_has_girlfriend">
               <timeInterval refersTo="#datingPenny1"
                  start="#gen__s_1__ep_17" end="#airing__s_2__ep_1"/>
               <timeInterval refersTo="#datingPriya1" 
                  start="#gen__s_3__ep_1" end="#airing__s_3__ep_19"/>
               <timeInterval refersTo="#datingPriya2"
                  start="#gen__s_4__ep_24" end="#airing__s_5__ep_7"/>
               <timeInterval refersTo="#datingPenny2"
                  start="#gen__s_5__ep_14"/>
            </temporalGroup>
         </temporalData>
         <references source="#bigBangTheory">
            <TLCRole id="lessor" 
               href="/ontology/role/lessor" 
               showAs="Lessor"/>
            <TLCRole id="lessee" 
               href="/ontology/role/lessee" 
               showAs="Lessee"/>
            <TLCRole id="marker" 
               href="/ontology/role/marker" 
               showAs="Lessee"/>
            <TLCPerson id="sheldon" 
               href="/ontology/person/cast/sheldonCooper"
               showAs="Sheldon Cooper"/>
            <TLCPerson id="roommate" 
               href="/ontology/person/cast/roommate"
               showAs="Roommate"/>
            <TLCPerson id="vergottini" 
               href="/ontology/person/xcential/vergottini"
               showAs="Grant Vergottini"/>            
            <TLCEvent id="s_2__ep_6"
               href="/ontology/tvShow/bigBangTheory/season2/episode6"
               showAs="Season 2 Episode 6"/>
            <TLCEvent id="s_2__ep_10"
               href="/ontology/tvShow/bigBangTheory/season2/episode10"
               showAs="Season 2 Episode 6"/>
            <TLCEvent id="s_3__ep_15"
               href="/ontology/tvShow/bigBangTheory/season3/episode15"
               showAs="Season 3 Episode 10"/>
            <TLCEvent id="s_3__ep_21"
               href="/ontology/tvShow/bigBangTheory/season3/episode21"
               showAs="Season 3 Episode 21"/>
            <TLCEvent id="s_3__ep_22"
               href="/ontology/tvShow/bigBangTheory/season3/episode22"
               showAs="Season 3 Episode 22"/>
            <TLCEvent id="s_4__ep_2"
               href="/ontology/tvShow/bigBangTheory/season4/episode2"
               showAs="Season 4 Episode 2"/>
            <TLCEvent id="s_4__ep_21"
               href="/ontology/tvShow/bigBangTheory/season4/episode21"
               showAs="Season 4 Episode 21"/>
            <TLCEvent id="s_4__ep_24"
               href="/ontology/tvShow/bigBangTheory/season4/episode24"
               showAs="Season 4 Episode 24"/>
            <TLCEvent id="s_5__ep_15"
               href="/ontology/tvShow/bigBangTheory/season5/episode15"
               showAs="Season 5 Episode 15"/>
            <TLCEvent id="s_5__ep_18"
               href="/ontology/tvShow/bigBangTheory/season5/episode18"
               showAs="Season 5 Episode 18"/>
            <TLCEvent id="s_6__ep_15"
               href="/ontology/tvShow/bigBangTheory/season6/episode15"
               showAs="Season 6 Episode 15"/>
         </references>
      </meta>

      <preface>
         <block name="title" id="title">
            <docTitle>The Roommate Agreement</docTitle>
         </block>
      </preface>

      <body period="period__1">
         <article id="art_1">
            <num>1</num>
            <heading id="art_1__heading">Upon becoming a roommate</heading>
            <section id="art_1__sec_1">
               <num>1</num>
               <content id="art_1__content">
                  <p>A roommate gets an ID Card, a lapel pin, FAQ sheet and a
                     key. New roommates may be interested in the live webchat on
                     Tuesday nights called “Apartment Talk.”</p>
               </content>
            </section>
            <section id="art_1__sec_3">
               <num>3</num>
               <content id="art_1__sec_3__content">
                  <p>(call for an emergency meeting)</p>
                  <p class="sourceNote">(<ref id="ref_3" href="#s_2__ep_10"
                        >Season 2 Episode 10</ref>)</p>
               </content>
            </section>
            <section id="art_1__sec_5">
               <num>5</num>
               <subsection id="art_1__sec_5__ssec_A">
                  <num>A</num>
                  <content id="art_1__sec_5__ssec_A__content">
                     <p>Roommate must drive Sheldon to and from work, the comic
                        book store, the barber shop, and the park for one hour
                        every other Sunday for fresh air.</p>
                     <p class="sourceNote">(<ref id="ref_4" href="#s_3__ep_15"
                           >Season 3 Episode 15</ref>)</p>
                  </content>
               </subsection>
               <subsection id="art_1__sec_5__ssec_B">
                  <num>B</num>
                  <content id="art_1__sec_5__ssec_B__content">
                     <p>Roommate is tasked to bring home all take out dinners. (
                        Standard orders are located in Appendix B, and are also
                        down-loadable from Sheldon’s FTP server)</p>
                  </content>
               </subsection>
            </section>
            <section id="art_1__sec_9">
               <num>9</num>
               <heading id="art_1__sec_9__heading">Miscellany</heading>
               <paragraph id="art_1__sec_9__para_1">
                  <num>[1]</num>
                  <heading id="arc_1__sec_9__para_1__heading">Flag</heading>
                  <content id="art_1__sec_9__para_1__content">
                     <p>The apartment's flag is “a gold lion rampant on a field
                        of azure” and should never fly upside down—unless the
                        apartment’s in distress.</p>
                     <p class="sourceNote">(<ref id="ref_1" href="#s_3__ep_22"
                           >Season 3 Episode 22</ref>)</p>
                  </content>
               </paragraph>
               <paragraph id="art_1__para_2">
                  <num>[2]</num>
                  <content id="art_1__para_2__content">
                     <p>If one of the roommates ever invents time travel, the
                        first stop has to aim exactly five seconds after this
                        clause of the Roommate Agreement was signed.</p>
                     <p class="sourceNote">(<ref id="ref_2" href="#s_3__ep_22"
                           >Season 3 Episode 22</ref>)</p>
                  </content>
               </paragraph>
            </section>
            <section id="art_1__sec_27">
               <num>27</num>
               <paragraph id="art_1__para_5">
                  <num>5</num>
                  <content id="art_1__para_5__content">
                     <p>The roommate agreement, like the American flag, cannot
                        touch the ground.</p>
                     <p class="sourceNote">(<ref id="ref_6" href="#s_6__ep_15"
                           >Season 6 Episode 15</ref>)</p>
                  </content>
               </paragraph>
            </section>
            <section id="art_1__sec_37">
               <num>37</num>
               <subsection id="art_1__sec_37__ssec_B">
                  <num>B</num>
                  <heading id="art_1__sec_37__ssec_B__heading">Miscellaneous
                     duties</heading>
                  <content id="art_1__sec_37__ssec_B__content">
                     <p>Roommate is obligated to drive Sheldon to his various
                        appointments, such as to the dentist. Roommate must also
                        provide a "confirmation sniff" to tell if questionable
                        dairy products are edible.</p>
                     <p class="sourceNote">(<ref id="ref_7" href="#s_3__ep_15"
                           >Season 3 Episode 15</ref>)</p>
                  </content>
               </subsection>
            </section>
            <section id="art_1__sec_209">
               <num>209</num>
               <content id="art_1__sec_209__content">
                  <p>Sheldon and roommate both have the option of nullifying
                     their roommate agreement, having no responsibilities or
                     obligations toward each other, other than paying rent and
                     sharing utilities.</p>
                  <p class="sourceNote">(<ref id="ref_8" href="#s_5__ep_15"
                        >Season 5 Episode 15</ref>)</p>
               </content>
            </section>
            <section id="art_1__sec_XX">
               <num>XX</num>
               <heading id="art_1__sec_XX__heading">Settling ties</heading>
               <content id="art_1__sec_XX__content">
                  <p>All ties will be settled by Sheldon.</p>
                  <p class="sourceNote">(<ref id="ref_9" href="#s_3__ep_22"
                        >Season 3 Episode 22</ref>)</p>
               </content>
            </section>
         </article>

         <article id="art_2">
            <num>2</num>
            <heading id="art_2__heading">Co-habitation</heading>
            <section id="art_2__sec_1">
               <num>1</num>
               <subsection id="art_2__sec_1__ssec_A">
                  <num>A</num>
                  <content id="art_2__sec_1__ssec_A__content">
                     <p>No "hootennanies", sing-alongs, raucous laughter,
                        clinking of glasses, celebratory gunfire, or barbershop
                        quartets after 10.p.m.</p>
                     <p class="sourceNote">(<ref id="ref_10" href="#s_5__ep_18"
                           >Season 5 Episode 18</ref>)</p>
                  </content>
               </subsection>
               <subsection id="art_2__sec_1__ssec_B">
                  <num>B</num>
                  <content id="art_2__sec_1__ssec_B__content">
                     <p>Roommate does not now nor does he intend to play percussive
                        or brass instruments.</p>
                  </content>
               </subsection>
               <subsection id="art_2__sec_1__ssec_C">
                  <num>C</num>
                  <heading id="art_2__sec_1__ssec_C__heading"
                     >Temperature</heading>
                  <content id="art_2__sec_1__ssec_C__content">
                     <p>The thermostat must be kept at 71 degrees at all
                        times.</p>
                     <p class="sourceNote">(<ref id="ref_11" href="#s_3__ep_22"
                           >Season 3 Episode 22</ref>)</p>
                  </content>
               </subsection>
            </section>
            <section id="art_2__sec_2">
               <num>2</num>
               <heading id="art_2__sec_2__heading">Television and
                  movies</heading>
               <subsection id="art_2__sec_2__ssec_B">
                  <num>B</num>
                  <content id="art_2__sec_2__ssec_B__content">
                     <p>Roommates agree that Friday nights shall be reserved for
                        watching Joss Whedon's brilliant new series Firefly.</p>
                     <p class="soruceNote">(<ref id="ref_12" href="#s_3__ep_15"
                           >Season 3 Episode 15</ref>)</p>
                  </content>
               </subsection>
            </section>
            <section id="art_2__sec_3">
               <num>3</num>
               <content id="art_2__sec_3__content">
                  <p>Roommate has the right “to allocate fifty percent of the
                     cubic footage of the common areas”, but only if Sheldon is
                     notified in advance by e-mail.</p>
                  <p class="sourceNote">(<ref id="ref_13" href="#s_3__ep_22"
                        >Season 3 Episode 22</ref>)</p>
               </content>
            </section>
            <section id="art_2__sec_4">
               <num>4</num>
               <heading id="art_2__sec_4__heading">[Pets]</heading>
               <content id="art_2__sec_4__content">
                  <p>Pets are banned under the roommate agreement, with the
                     exception of service animals, like cybernetically-enhanced
                     helper monkeys.</p>
                  <p class="sourceNote">(<ref id="ref_14" href="#s_3__ep_21"
                        >Season 3 Episode 21</ref>)</p>
               </content>
            </section>
            <section id="art_2__sec_5">
               <num>5</num>
               <heading id="art_2__sec_5__heading">[Take-out
                  restaurant]</heading>
               <content id="art_2__sec_5__content">
                  <p>The selection of a new take-out restaurant requires public
                     hearings and a 60-day comment period.</p>
                  <p class="sourceNote">(<ref id="ref_15" href="#s_4__ep_21"
                        >Season 4 Episode 21</ref>)</p>
               </content>
            </section>
            <hcontainer name="addendums" id="art_2_addendums">
               <hcontainer name="addendum" id="art_2_add_A">
                  <num>A</num>
                  <content id="art_d_add_A_content">
                     <p>Sheldon [will] ask at least once a day how roommate is
                        even if he doesn't care.</p>
                     <p class="sourceNode">(<ref id="ref_16" href="#s_3__ep_15"
                           >Season 3 Episode 15</ref>)</p>
                  </content>
               </hcontainer>
               <hcontainer name="addendum" id="art_2__add_B">
                  <num>B</num>
                  <content id="art_2__add_B__content">
                     <p>Sheldon [will] no longer stage spontaneous biohazard
                        drills after 10 p.m. </p>
                     <p class="sourceNote">(<ref id="ref_17" href="#s_3__ep_15"
                           >Season 3 Episode 15</ref>)</p>
                  </content>
               </hcontainer>
               <hcontainer name="addendum" id="art_2__add_C">
                  <num>C</num>
                  <content id="art_2__add_C__content">
                     <p>Sheldon [will] abandon his goal to master Tuvan throat
                        singing. </p>
                     <p class="sourceNote">(<ref id="ref_18" href="#s_3__ep_15"
                           >Season 3 Episode 15</ref> )</p>
                  </content>
               </hcontainer>
            </hcontainer>
         </article>

         <article id="art_3">
            <num>3</num>
            <heading id="art_3__heading">The Bathroom</heading>
            <section id="art_3__sec_1">
               <num>1</num>
               <content id="art_3__sec_1__content">
                  <p>Roommates will acknowledge and use the two pieces of tape
                     in the bathroom designated for specific purposes:</p>
                  <blockList id="art_3__sec_1__content_ul_1">
                     <item id="art_3__sec_1__content_ul_1__item_1">
                        <p>Tape A: Located in front of the sink. Person must brush
                        and floss teeth behind the line.</p>
                     </item>
                     <item id="art_3__sec_1__content_ul_1__item_2">
                        <p>Tape B: Located in front of the toilet, those who stand
                        up to pee must stand in front of it.</p>
                     </item>
                  </blockList>
               </content>
            </section>
            <section id="art_3__sec_2">
               <num>2</num>
               <content id="art_3__sec_2__content">
                  <p>Before the use of a shower, the party agrees to wash his or
                     her feet in his or her designated bucket.</p>
               </content>
            </section>
            <section id="art_3__sec_7">
               <num>7</num>
               <intro id="art_3__sec_8__intro">
                  <p>The shower can have at most one occupant, except in the
                     event of an attack by water soluble aliens.</p>
               </intro>
               <subsection id="art_3__sec_8__ssec_B">
                  <num>B</num>
                  <paragraph id="art_3__sec_8__ssec_B__para_9">
                     <num>9</num>
                     <content id="art_3__sec_8__ssec_B__para_9__content">
                        <p>The right to bathroom privacy is suspended in the
                           event of force majeure.</p>
                        <p class="sourceNote">(<ref id="ref_19"
                              href="#s_4__ep_21">Season 4 Episode 21</ref>)</p>
                     </content>
                  </paragraph>
               </subsection>
            </section>
            <hcontainer name="addendums" id="art_3__addendums">
               <hcontainer name="addendum" id="art_3__add_J">
                  <num>J</num>
                  <content id="art_d__add_J__content">
                     <p>When Sheldon showers second, any and all measures shall
                        be taken to ensure an adequate supply of hot water.</p>
                     <p class="sourceNote">(<ref id="ref_20" href="#s_4__ep_21"
                           >Season 4 Episode 21</ref>)</p>
                  </content>
               </hcontainer>
            </hcontainer>
         </article>

         <article id="art_10">
            <num>10</num>
            <heading id="art_10__heading">Visitors</heading>
            <section id="art_10__sec_8">
               <num>8</num>
               <heading id="art_10__sec_8__heading">Over-night guests</heading>
               <intro id="art_10__sec_8__intro">
                  <p>There has to be a 24 hour notice if a non-related female
                     will stay over night.</p>
                  <p>(<ref id="ref_21" href="#s_3__ep_21">Season 3 Episode
                        21</ref>)</p>
               </intro>
               <subsection id="art_10__sec_8__ssec_C">
                  <num>C</num>
                  <heading id="art_1-__sec_8__ssec_C__heading">Females</heading>
                  <paragraph id="art_10__sec_8__ssec_C__para_4">
                     <num>4</num>
                     <heading id="art_10__sec_8__ssec_C__para_4__heading"
                        >Coitus</heading>
                     <content id="art_10__sec_8__ssec_C__para_4__content">
                        <p>Roommates shall give each other 12 hours notice of
                           impending coitus.</p>
                        <p class="sourceNote">(<ref id="ref_22"
                              href="#s_3__ep_22">Season 3 Episode 22</ref>)</p>
                     </content>
                  </paragraph>
               </subsection>
            </section>
            <section id="art_10__sec_9" period="period__roommate_has_girlfriend">
               <num>9</num>
               <heading id="art_10__sec_9__heading">Cohabitation Rider</heading>
               <intro id="art_10__sec_9__content">
                  <p>[This clause is] activated when roommate starts "living
                     with" a girlfriend in the apartment.</p>
                  <p>A girlfriend shall be deemed "living with" roommate when
                     she has stayed over for A: ten consecutive nights or B: for
                     more than nine nights in a three-week period or C: all the
                     weekends of a given month plus three weeknights.</p>
                  <p class="sourceNote">(<ref id="ref_23" href="#s_2__ep_10"
                        >Season 2 Episode 10</ref>)</p>
               </intro>
               <subsection id="art_10__sec_9__ssec_A">
                  <num>A</num>
                  <content id="art_10__sec_9__ssec_A__content">
                     <p>Upon a live-in girlfriend, there shall be a change in
                        the distribution of shelves in the fridge.</p>
                     <p class="sourceNote">(<ref id="ref_24" href="#s_2__ep_10"
                           >Season 2 Episode 10</ref>)</p>
                  </content>
               </subsection>
               <subsection id="art_10__sec_9__ssec_B">
                  <num>B</num>
                  <content id="art_10__sec_9__ssec_B__content">
                     <p>Apartment vacuuming shall be increased from three to
                        four times a week to accommodate the increased
                        accumulation of dead skin cells.</p>
                     <p class="sourceNote">(<ref id="ref_25" href="#s_2__ep_10"
                           >Season 2 Episode 10</ref>)</p>
                  </content>
               </subsection>
               <subsection id="art_10__sec_9__ssec_C">
                  <num>C</num>
                  <content id="art_10__sec_9__ssec_C__content">
                     <p>A change in the bathroom schedule shall be
                        implemented.</p>
                     <p class="sourceNote">(<ref id="ref_26" href="#s_2__ep_10"
                           >Season 2 Episode 10</ref>)</p>
                  </content>
               </subsection>
               <subsection id="art_10__sec_9__ssec_D">
                  <num>D</num>
                  <content id="art_10__sec_9__ssec_D__content">
                     <p>Girlfriend does not now nor does she intend to play
                        percussive or brass instruments.</p>
                     <p class="sourceNote">(<ref id="ref_27" href="#s2__ep_10"
                           >Season 2 Episode 10</ref>)</p>
                  </content>
               </subsection>
            </section>
         </article>

         <section id="sec_XX">
            <num>XX</num>
            <heading id="sec_XX__heading">Durable power of attorney</heading>
            <content id="sec_XX__content">
               <p>The other roommate gets power of attorney over you, and may
                  make end-of-life decisions for you (reciprocal).</p>
               <p class="sourceNote">(<ref id="ref_28" href="#s_4__ep_24">Season
                     4 Episode 24</ref>)</p>
            </content>
         </section>

         <hcontainer name="addendums" id="addendums" period="period__102">
            <hcontainer name="addendum" id="add_1">
               <num>1</num>
               <heading id="add_1__heading">Leonard's Day</heading>
               <content id="add_1__content">
                  <p>Once a year, Leonard and Sheldon take one day to celebrate
                     the contributions Leonard gives to Sheldon's life, both
                     real and imaginary. Leonard does not get breakfast in bed,
                     the right to sit in Sheldon's spot, or permission to alter
                     the thermostat; the only thing that Leonard gets is a
                     thank-you card. This day is called "Leonard's Day."</p>
                  <p class="sourceNote">(<ref id="ref_29" href="#s_5__ep_15"
                        >Season 5 Episode 15</ref>)</p>
               </content>
            </hcontainer>
         </hcontainer>

      </body>
      <attachments>
         <doc name="appendix">
            <meta>
               <identification source="#sheldon">
                  <FRBRWork>
                     <FRBRthis value="/us/cbs/bigBangTheory/takeOutOrders/main"/>
                     <FRBRuri value="/us/cbs/bigBangTheory/takeOutOrders"/>
                     <FRBRdate date="2013-07-26T05:14:38" name="generation"/>
                     <FRBRauthor href="#sheldon" as="#lessor"/>
                     <FRBRcountry value="us"/>
                  </FRBRWork>
                  <FRBRExpression>
                     <FRBRthis
                        value="/us/cbs/bigBangTheory/takeOutOrders/en@/main"/>
                     <FRBRuri value="/us/cbs/bigBangTheory/takeOutOrders/en@"/>
                     <FRBRdate date="2013-07-26T05:14:38" name="generation"/>
                     <FRBRauthor href="#sheldon" as="#lessor"/>
                     <FRBRlanguage language="en"/>
                  </FRBRExpression>
                  <FRBRManifestation>
                     <FRBRthis
                        value="/us/cbs/bigBangTheory/takeOutOrders/en@/main.xml"/>
                     <FRBRuri
                        value="/us/cbs/bigBangTheory/takeOutOrders/en@.akn"/>
                     <FRBRdate date="2013-07-26T05:14:38" name="generation"/>
                     <FRBRauthor href="#vergottini" as="#marker"/>
                  </FRBRManifestation>
               </identification>
            </meta>
            <preface>
               <block name="title">
                  <docNumber>Appendix B</docNumber>
                  <docTitle>Standard take-out orders</docTitle>
               </block>
            </preface>
            <mainBody>
               <clause id="app_B__cls_1"> </clause>
            </mainBody>
         </doc>
         <doc name="appendix">
            <meta>
               <identification source="#sheldon">
                  <FRBRWork>
                     <FRBRthis
                        value="/us/cbs/bigBangTheory/futureCommitments/main"/>
                     <FRBRuri value="/us/cbs/bigBangTheory/futureCommitments"/>
                     <FRBRdate date="2013-07-26T05:14:38" name="generation"/>
                     <FRBRauthor href="#sheldon" as="#lessor"/>
                     <FRBRcountry value="us"/>
                  </FRBRWork>
                  <FRBRExpression>
                     <FRBRthis
                        value="/us/cbs/bigBangTheory/futureCommitments/en@/main"/>
                     <FRBRuri
                        value="/us/cbs/bigBangTheory/futureCommitments/en@"/>
                     <FRBRdate date="2013-07-26T05:14:38" name="generation"/>
                     <FRBRauthor href="#sheldon" as="#lessor"/>
                     <FRBRlanguage language="en"/>
                  </FRBRExpression>
                  <FRBRManifestation>
                     <FRBRthis
                        value="/us/cbs/bigBangTheory/futureCommitments/en@/main.xml"/>
                     <FRBRuri
                        value="/us/cbs/bigBangTheory/futureCommitments/en@.akn"/>
                     <FRBRdate date="2013-07-26T05:14:38" name="generation"/>
                     <FRBRauthor href="#vergottini" as="#marker"/>
                  </FRBRManifestation>
               </identification>
            </meta>
            <preface>
               <block name="title">
                  <docNumber>Appendix C</docNumber>
                  <docTitle>Future commitments</docTitle>
               </block>
            </preface>
            <mainBody>
               <clause id="app_c__cls_37">
                  <num>37</num>
                  <heading id="app_c__cls_37__heading">[Large Hadron
                     Collider]</heading>
                  <content id="app_c__cls_37__content">
                     <p>In the event one friend is ever invited to visit the
                        Large Hadron Collider, now under construction in
                        Switzerland, he shall invite the other friend to
                        accompany him.</p>
                     <p>(<ref id="app_c__ref_1" href="#s_3__ep_15">Season 3
                           Episode 15</ref>)</p>
                  </content>
               </clause>
               <clause id="app_c__cls_AA">
                  <num>[AA]</num>
                  <heading id="app_c__cls_AA__heading">[Super powers]</heading>
                  <content id="app_c__cls_AA__content">
                     <p>Specifies what happens if one friend gets super powers
                        (he will name the other one as his sidekick)</p>
                     <p class="sourceNote">(<ref id="app_c__ref_2"
                           href="#s_2__ep_10">Season 2 Episode 10</ref>)</p>
                     <p class="sourceNote">(<ref id="app_c__ref_3"
                           href="#s_3__ep_15">Season 3 Episode 15</ref>)</p>
                  </content>
               </clause>
               <clause id="app_c__cls_BB">
                  <num>[BB]</num>
                  <heading id="app_c__cls_BB__heading">[Zombies]</heading>
                  <content id="app_c__cls_BB__content">
                     <p>Specifies what happens if one friend is bitten by a
                        Zombie (the other can't kill him even if he turned)</p>
                     <p class="sourceNote">(<ref id="app_c__ref_4"
                           href="#s_3__ep_15">S3 Ep15</ref>)</p>
                  </content>
               </clause>
               <clause id="app_c__cls_CC">
                  <num>[CC]</num>
                  <heading id="app_c__cls_CC__heading">[MacArthur
                     grant]</heading>
                  <content id="app_c__cls_CC__content">
                     <p>Specifies what happens if one friend wins a MacArthur
                        grant </p>
                     <p class="sourceNote">(<ref id="app_c__ref_5"
                           href="#s_3__ep_15">Season 3 Episode 15</ref>)</p>
                  </content>
               </clause>
               <clause id="app_c__cls_DD">
                  <num>[DD]</num>
                  <heading id="app_c__cls_DD__heading">[Bill Gates]</heading>
                  <content id="app_c__cls_DD__content">
                     <p>Specifies what happens if one friend gets invited to go
                         swimming at Bill Gates's house (he will take the other
                        friend to accompany him)</p>
                     <p class="sourceNote">(<ref id="app_c__ref_6"
                           href="#s_3__ep_15">Season 3 Episode 15</ref>)</p>
                  </content>
               </clause>
               <clause id="app_c__cls_EE">
                  <num>[EE]</num>
                  <heading id="app_c__cls_EE__heading">[Skynet]</heading>
                  <content id="app_c__cls_EE__content">
                     <p>Specifies what happens if one friend needs help to
                        destroy an artificial intelligence he's created and
                        that's taking over Earth.</p>
                     <p class="sourceNote">(<ref id="app_c__ref_7"
                           href="#s_2__ep_6">Season 2 Episode 6</ref>)</p>
                  </content>
               </clause>
               <clause id="app_c__cls_FF">
                  <num>[FF]</num>
                  <heading id="app_c__cls_FF__heading">[Body
                     snatchers]</heading>
                  <content id="app_c__cls_FF__content">
                     <p>Specifies what happens if one friend needs help to
                        destroy someone they know who's been replaced with an
                        alien pod.</p>
                     <p class="sourceNote">(<ref id="app_c__ref_8"
                           href="#s_2__ep_6">Season 2 Episode 6</ref>)</p>
                  </content>
               </clause>
               <clause id="app_c__cls_GG">
                  <num>[GG]</num>
                  <heading id="app_c__cls_GG__heading">[Godzilla]</heading>
                  <content id="app_c__cls_GG__content">
                     <p>Specifies what happens if someone threatens to destroy
                        Tokyo.</p>
                     <p class="sourceNote">(<ref id="app_c__ref_9"
                           href="#s_2__ep_6">Season 2 Episode 6</ref>)</p>
                  </content>
               </clause>
               <clause id="app_c__cls_74">
                  <num>74</num>
                  <subclause id="app_c__cls_74__scls_C">
                     <num>C</num>
                     <heading id="app_c__cls_74__scls_C__heading"
                        >[Robots]</heading>
                     <content id="app_c__cls_74__scls_C__content">
                        <p>The various obligations and duties of the parties in
                           the event one of them becomes a robot.</p>
                        <p class="sourceNote">(<ref id="app_c__ref_10"
                              href="#s_4__ep_2">Season 4 Episode 2</ref>)</p>
                     </content>
                  </subclause>
               </clause>
            </mainBody>
         </doc>
      </attachments>
   </act>
</akomaNtoso>
Uncategorized

Is it time to rethink how we are governed?

We have seen the worst of our government in the past few weeks. Our politicians have seemingly forgotten that their mission is to solve problems. Instead, they’ve regressed back to settling differences through tribal conflict. Isn’t that something that we should have put behind us centuries ago?

Why is it that our politicians can never solve complex problems?

I have always been fascinated with complex problem solving. It’s why I found myself a job at the Boeing Company at the start of my career. My job was to find ways to use computer automation to help Boeing solve ever more complex problems. While at Boeing, I was introduced to the discipline of systems engineering.

In the 1940s, with the urgency of World War II as the impetus, large systems integrators like Boeing and AT&T had to find a way to eliminate the unpredictability of trial-and-error engineering. That way was systems engineering – a discipline that replaced the guesswork of early engineering efforts with a predictable process, allowing complex new systems to be brought online reliably and quickly.

The results speak for themselves. It’s that discipline in engineering that has given us the tremendous advances in aeronautics and electronics in the decades that have followed. Those supercomputers most people carry in their pockets would never have been possible were it not for the discipline of systems engineering.

Systems engineering imposes a rigorous problem-solving process: requirements are analyzed and quantified, alternatives are thoroughly studied, and the optimal solution is selected. Emotions are wrung out of the process as soon as possible. When a problem is too large or appears insurmountable, it is broken down into smaller problems that are solved individually. Every step along the way and every decision is exhaustively documented and reviewed by peers. It’s a scalable process that allows any problem, no matter how complex or difficult, to be tackled with a good probability of success.

Of course, it’s not a perfect process. There are plenty of strong opinions, politicking, and sometimes even special interests to deal with. However, engineers are able to handle this as they are trained to work through their differences to find the best answers. Engineers are taught to detect and avoid the pitfalls of relying on opinions and ideology. Instead, they must relentlessly seek true and indisputable facts. Being able to do this effectively is a condition of employment. Engineers that can’t follow the process must be let go – businesses simply cannot afford to keep underperformers.

The problems that systems engineers must tackle are many times more complex than anything that our politicians will ever have to address. While the results are never perfect, and challenges abound, when a new plane makes its way out to the runway for that first flight, it’s a certainty that it will fly. The discipline of the process almost guarantees it.

Contrast this with the way our politicians solve problems. In the unlikely event that their metaphorical plane ever finds its way out to a runway, chances are it will come to an ugly end, crumpling at the end of the runway into a pile of wishful thinking and intentional sabotage.

What’s the difference? Simply put, systems engineering suppresses opinions and emphasizes facts, while politicians seem to practice the exact opposite.

Why is it that we intuitively understand that the world’s most complex problems cannot be solved by people who rely on opinions and ideology, and yet that is exactly how we try to solve the world’s most important problems?

I am often asked what my vision is for legal informatics – the form of computer automation that targets legislative work. I’ve been pondering that question a lot over the past few weeks. Modern computing has revolutionized our lives. In the past twenty years alone, the way we interact with others, buy and sell products, keep ourselves entertained, and manage our lives has changed many times over thanks to computers and the Internet. Too often though, when I look at how we apply legal informatics, we’re simply computerizing outmoded nineteenth century processes – which, as we have seen in recent events, don’t work anymore.

I think it’s time that we rethink how we are governed – using the tools and technologies that have improved so many other aspects of our lives. Maybe then, we can have leaders who are problem solvers.

Akoma Ntoso, Standards, Transparency

The U.S. Code in Akoma Ntoso

I’m on my way to Italy this week for my annual pilgrimage to Ravenna and the LEX Summer School put on by the University of Bologna. This is my fourth trip to the class. I always find it so inspirational to be a part of the class and the activities that surround it. This year I will be talking about the many on-going projects that we have underway as well as talking, in depth, about the HTML5 editor I built for Akoma Ntoso.

Before I get to Italy, I wanted to share something I’ve been working on. It should come as absolutely no surprise to anyone that I’ve been working on producing a version of the U.S. Code in Akoma Ntoso. A few weeks ago, the U.S. Office of the Law Revision Counsel released the full U.S. Code in XML. My company, Xcential, helped them to produce that release. Now I’ve taken the obvious next step and begun work on a transform to convert that XML into Akoma Ntoso – the format currently being standardized by the OASIS Legal Document ML technical committee. I am an active member of that TC.

U.S. Code

About 18 months ago, I learned of a version of the U.S. Code that had been made available in XML. While that XML release was quite far from complete, I used it to produce a representation in Akoma Ntoso as it stood back then. My latest effort is a replacement and update of that work. The new version of XML released by the OLRC is far more accurate and complete and is a better basis for the transform than the earlier release was. And besides, I have a far better understanding of the new version – having had a role in its development.

My work is still very much a work-in-progress. I believe in openly sharing my work in the hope of inspiring others to dive into this subject – so I’m releasing a partial first step in order to get some feedback. Please note that this work is a personal effort – it is not a part of our work with the OLRC. At this point I’ve written a transform to produce Akoma Ntoso XML according to the most recent schema released a few weeks ago. The transform is not finished, but it gives a pretty good rendition of the U.S. Code in Akoma Ntoso. I’m using the transform as a vehicle to identify use cases and issues which I can bring up with the OASIS TC at our weekly meetings. As a result, there are a few open issues and the resulting XML does not fully validate.
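To give a flavor of what such a transform involves, here is a minimal sketch of the kind of XSLT template at its heart. To be clear, this is not the actual transform – the USLM element and attribute names reflect my reading of the released schema, I’ve shown the older Akoma Ntoso 2.0 namespace because the draft 3.0 namespace URI changes with each committee draft, and the id conventions are simply assumptions for the example.

   <?xml version="1.0" encoding="UTF-8"?>
   <!-- Illustrative sketch only: maps a USLM section into an Akoma
        Ntoso section. Namespaces and id conventions are assumptions. -->
   <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:uslm="http://xml.house.gov/schemas/uslm/1.0"
      xmlns="http://www.akomantoso.org/2.0">

      <xsl:template match="uslm:section">
         <section id="sec_{normalize-space(uslm:num/@value)}">
            <num><xsl:value-of select="uslm:num"/></num>
            <heading><xsl:value-of select="uslm:heading"/></heading>
            <!-- Recurse into subsections and content alike. -->
            <xsl:apply-templates select="uslm:subsection | uslm:content"/>
         </section>
      </xsl:template>

      <xsl:template match="uslm:subsection">
         <subsection id="ssec_{generate-id()}">
            <num><xsl:value-of select="uslm:num"/></num>
            <xsl:apply-templates select="uslm:content"/>
         </subsection>
      </xsl:template>

      <xsl:template match="uslm:content">
         <content id="cnt_{generate-id()}">
            <p><xsl:value-of select="."/></p>
         </content>
      </xsl:template>
   </xsl:stylesheet>

The real work, of course, is in the long tail: notes, tables, cross-references, and the many structural quirks that don’t line up one-for-one between the two schemas.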

I’m making 8 Titles available now. They’re smaller Titles which are easier for me to work with as I refine the transform. Actually, I do have the first 25 Titles converted into Akoma Ntoso, but I’ll need to address some performance and space issues with my tired old development server before I can release the full set. Hopefully, over the next few months, I’ll be able to complete this work.

When you look at the XML, you will notice a “proposed” namespace prefix. This simply shows proposed aspects of Akoma Ntoso that are not yet adopted. Keep in mind that this is all development work – do not assume that the transformation I am showing is the final end result.

I’m looking for feedback. Monica, Fabio, Veronique, and anyone else – if you see anything I got wrong or could model better, please let me know. If anyone finds the way I modeled something troubling, please let me know. I’m doing this work to open up a conversation. By trying Akoma Ntoso out in different usage scenarios, we can only make it better.

Don’t forget the Library of Congress’ Legislative Data Challenge. Perhaps my transformation of the U.S. Code can inspire someone to participate in the challenge.

Akoma Ntoso, Hackathon, HTML5, LegisPro Web, Standards, Transparency, W3C

Web-Based XML Legislative Editor Update

It’s been quite a while since I gave an update on our web-based XML legislative editor – LegisProweb. But that doesn’t mean that nothing has been going on. Quite the contrary, this has been a busy year for the editor project.

Let me first recap what the editor is. It’s an XML editor written entirely around HTML5 technologies. It was first developed last year as the centerpiece of a Hackathon that Ari Hershowitz and I staged in San Francisco and around the world. While it is designed as a general-purpose XML editor and can be configured to model any XML schema, it’s primarily configured to support Akoma Ntoso.

LegisProWeb

Since then, there has been a lot of continuing interest in the editor. If you attended the 2013 Legislative Data and Transparency Conference this past May in Washington DC, you may have noticed Jim Harper of the Cato Institute demonstrating their “Deepbills” project. The editor you saw is a heavily customized early version of LegisProweb, reconfigured to handle the XML format that the US Congress publishes legislation in.

And that’s not the only place where LegisProweb has been adopted. We’re in the finishing stages of a somewhat larger implementation for Chile. This is an Akoma Ntoso implementation – focused on debates and debate reports rather than on legislation. One interesting point worth noting – this implementation is in Spanish. LegisProweb is quite easily localized.

The common thread between these two implementations is the use case – both are focused on tagging metadata within pre-existing documents rather than on creating new documents from scratch. This was the focus of the Hackathon we staged back in 2012 – little did we know how much of a market would exist for an editor focused on annotation rather than document creation. And there’s more still to come – we’ve been quite surprised at the level of interest in this particular use case.

Of course, we’re not satisfied with an editor that can only annotate existing documents. We’ve been hard at work turning the editor into a full-featured legislative editor that works equally well at creating new documents as at annotating existing ones. In addition, we’ve made the editor very customizable and added capabilities to manage the comments and discussions that might revolve around a document as it is being created and annotated.

Most recently, the editor has been upgraded to the latest version of Akoma Ntoso coming out of the OASIS LegalDocumentML technical committee, where I am an active member. Along with that effort, the validator has been separated out to run as a standalone Akoma Ntoso validator. I talked about that in my blog last week. I’m busy using the validator as I work frantically to complete an Akoma Ntoso project this week. I’ll talk some more about that project next week.

So where do we go from here? Well, the first big effort is to modularize the technologies found within the editor. We now have a diverse set of customers, and they can all benefit from the various bits and pieces that make up LegisProweb. By modularizing the pieces, we’ll be able to pick and choose which parts we use when and how. Separating out the validator was the first step. We’ll also be pulling out the reference resolver, attaching it to a native XML database, and partitioning the client side to allow the editing component to be used without the full editing environment offered by LegisProweb.

One challenge that remains is handling redlining – managing insertions and deletions. This is a very difficult subject – and one I tackled in the work I did implementing the XML editor used by the California legislature. I took a very different approach in trying to solve the problem with LegisProweb, but I’m not happy with the result. So, I’ll be returning to the proven approach we used way back when we built the original LegisPro editor on XMetaL.

As you can tell, we’ve got our work for the next year cut out for us.

Akoma Ntoso, LegisPro Web, Standards, Transparency, W3C

Free Akoma Ntoso Validator

How are people doing with the Library of Congress’ Akoma Ntoso Challenge? Hopefully, you’re making good progress, having fun doing it, and in so doing, learning a valuable new skill with this important emerging technology.

I decided to make it easy for someone without an XML Editor to validate their Akoma Ntoso documents for free. We all know how expensive XML Editors tend to be. If you’re like me, you’ve used up all the free trials you could get. I’ve separated the validation part of our LegisProweb editor from the editing base to allow it to be used as a standalone validator. Now, all you need to do is either provide a URL to your document or, even easier, drop the text into the text area provided and then click on the “Validate” button. You don’t even need to go find a copy of the Akoma Ntoso schema or figure out how to hook it up to your document – I do all that for you.

To use the validator, simply draft your Akoma Ntoso XML document, specifying the appropriate namespace using the @xmlns namespace declaration, and then paste a copy into the validator. I’ll go and find the schema and then validate your document for you. The validation results will be shown to you conveniently inline within your XML source to help you in making fixes. Don’t worry, we don’t record anything when you use the validator – it’s completely anonymous and we keep no record of your document.

You can validate either the 2.0 version of Akoma Ntoso or the latest 3.0 version, which reflects the work of the OASIS LegalDocumentML committee. Actually, there are quite a few other formats that the validator will also handle out of the box and, by using xsi:schemaLocation, you can point to any XML schema you wish.
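If you just want something to paste in to try it out, here is a minimal shell. It is deliberately incomplete – I would expect the validator to flag the missing metadata, which is itself a quick way to see the inline results. I’ve shown the 2.0 namespace; for 3.0 you’ll need the namespace URI of the committee draft you’re targeting.

   <?xml version="1.0" encoding="UTF-8"?>
   <!-- Minimal Akoma Ntoso 2.0 shell for exercising the validator.
        Deliberately incomplete: the required metadata is omitted. -->
   <akomaNtoso xmlns="http://www.akomantoso.org/2.0">
      <act>
         <body>
            <section id="sec_1">
               <num>1.</num>
               <content>
                  <p>The minimum wage for all industries shall be not
                     less than $8.50 per hour.</p>
               </content>
            </section>
         </body>
      </act>
   </akomaNtoso>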

Give the free Akoma Ntoso XML Validator a try. You can access it here. Please send me any feedback you might have.

(Screenshots: Input Form and Validation Results)
Transparency

U.S. House of Representatives releases the U.S. Code in XML

This week marked a big milestone for us. The U.S. House of Representatives released the U.S. Code in XML. You can see the announcement by the Speaker of the House, John Boehner (R-Ohio), here. This is a big step forward towards a more transparent Congress. As many of you know, my company, Xcential, has worked closely with the Law Revision Counsel on this project. It has been an honor to provide our expertise as part of our on-going efforts with the U.S. House of Representatives.

This project has been a great opportunity for us to update the U.S. House of Representatives technology platform by introducing new XML schema techniques along with robust, high-performance conversion tools. Our eleven years in this field, working on an international scale, have given us valuable insights into XML techniques which we were able to bring to bear to ensure the success of this project.

The feedback has been very good.

As you might expect, members of the technical community have swiftly picked up on this release and are actively finding ways to use the data it provides. Josh Tauberer of GovTrack.us has already started – check out his work here. Why did I already know he would be the first to jump in? 🙂

Of course, if you know me, you’ll know that I also have something up my sleeve. I’ll be spending my weekends and evenings for the next few weeks to release an Akoma Ntoso transform coincident with an upcoming OASIS LegalDocML announcement. Keep watching my blog for more info.

This is one of numerous projects we are working on right now. We have a very similar project underway in Asia and an Akoma Ntoso project nearing completion using our HTML5-based editor, LegisProweb, in South America. I’ll be providing an update on LegisProweb in the coming weeks.

Akoma Ntoso, LegisPro Web, Standards, Transparency

Akoma Ntoso Challenge by the Library of Congress

As many of you may have already read, the U.S. Library of Congress has announced a data challenge using Akoma Ntoso. The challenge lasts for three months and offers a $5,000 prize to the winner.

In this challenge, participants are asked to mark up four Congressional bills, provided as raw text, into Akoma Ntoso.

If you have the time to participate in this challenge and can fulfill all the eligibility rules, then I encourage you to step up to the challenge. This is a good opportunity to give Akoma Ntoso a try – to both learn the new model and to help us to identify any changes or adaptations that must be made to make Akoma Ntoso suitable for use with Congressional legislation.

You are asked, as part of your submission, to identify gaps in Akoma Ntoso’s design and to document the methodology you used to construct your solution to the four bills. You’re also encouraged to use any of the open-source editors currently available for editing Akoma Ntoso and to provide feedback on their suitability to the task.

I would like to point out that I also provide an Akoma Ntoso editor at http://legisproweb.com. It is free to use on the web along with full access to all the information you need to customize the editor. However, while our customers do get an unrestricted internal license to the source code, our product is not open source. At the end of the day, I must still make a living. Nonetheless, I believe that you can use any editor you wish to create your four Akoma Ntoso documents – it’s just that the sponsors of the competition aren’t looking for feedback on commercial tools. If you do choose to use my editor, I’ll be there to provide any support you might need in terms of features and bug fixes to help speed you on your way.

Process, Transparency

Transparent legislation should be easy to read

Legislation is difficult to read and understand. So difficult that it largely goes unread. This is something I learned when I first started building bill drafting systems over a decade ago. It was quite a letdown. The people you would expect to read legislation don’t actually do that. Instead they must rely on analyses, sometimes biased, performed by others – analyses that omit many of the nuances found within the legislation itself.

Much of the problem is how legislation is written. Legislation is often written so as to concisely describe a set of changes to be made to existing law. The result is a document that is written to be executed by a law compilation team deep within the government rather than understood by law makers or the general public. This article, by Robert Potts, rather nicely sums up the problem.

Note: There is a technical error in the article by Robert Potts. The author states “These statutes are law, but since Congress has not written them directly to the Code, they are added to the Code as ‘notes,’ which are not law. So even when there is a positive law Title, because Congress has screwed it up, amendments must still be written to individual statutes.” This is not accurate. Statutory notes are law. This is explained in Part IV (E) of the DETAILED GUIDE TO THE CODE CONTENT AND FEATURES.

So how can legislation be made more readable and hence more transparent? The change must come in how amendments are written – with an intent to communicate the changes rather than just to describe them. Let’s start by looking at a few different ways that amendments can be written:

1) Cut-and-Bite Amendments

Many jurisdictions around the world use the cut-and-bite approach to amending, also known as amending by reference. This includes Congress here in the U.S., and it is common in most of the other jurisdictions I work with. Let’s take a look at a hypothetical cut-and-bite amendment:

SECTION 1. Section 1234 of the Labor Code is amended by repealing “$7.50” and substituting “$8.50”.

There is no context to this amendment. In order to understand it, someone is going to have to look up Section 1234 of the Labor Code and manually apply the change to see what it is all about. While this contrived example is simple, it already involves a fair amount of work. When you extrapolate this problem to a real bill and the sometimes convoluted state of the law, the effort to understand a piece of legislation quickly becomes mind-boggling. For a real bill, few people are going to have either the time or the resources to adequately research all the amendments to truly understand how they will affect the law.
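As an aside, this is also the style of amendment that legislative XML must capture explicitly if computers are to help untangle it. Here is a hedged sketch of how the hypothetical amendment above might be marked up in Akoma Ntoso: an inline mod identifies the amending sentence and its quoted old and new text, while a textualMod in the metadata ties them to the target provision. All ids and href values are invented for the example.

   <!-- In the bill body: the amending sentence, with the old and new
        text identified. Ids are invented. -->
   <section id="sec_1">
      <num>SECTION 1.</num>
      <content>
         <p><mod id="mod_1">Section 1234 of the Labor Code is amended
            by repealing <quotedText id="qtxt_1">$7.50</quotedText> and
            substituting <quotedText id="qtxt_2">$8.50</quotedText>.</mod></p>
      </content>
   </section>

   <!-- Inside the analysis block of the bill metadata: the
        machine-readable record of the change. The destination href
        is invented. -->
   <activeModifications>
      <textualMod type="substitution" id="tmod_1">
         <source href="#mod_1"/>
         <destination href="/us-ca/act/laborCode#sec_1234"/>
         <old href="#qtxt_1"/>
         <new href="#qtxt_2"/>
      </textualMod>
   </activeModifications>

With markup like this, a computer can at least assemble the context that the narrative form withholds from the reader.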

2) Amendments Set Out in Full

I’ve come to appreciate the way the California Legislature handles this problem. The cut-and-bite style of amending, as described above, is simply disallowed. Instead, all amendments must be set out in full – by re-enacting the section in full as amended. This is mandated by Article 4, section 9 of the California Constitution. What this means is that the amendment above must instead be written as:

Section 1. Section 1234 of the Labor Code is amended to read:

1234. Notwithstanding any other provision of this part, the minimum wage for all industries shall be not less than $8.50 per hour.

This is somewhat better. Now we can see that we’re affecting the minimum wage – we have the context. The wording of the section, as amended, is set out in full. It’s clear and much more transparent.

However, it’s still not perfect. While we can see how the amended law will read when enacted, we don’t actually know what changed. Admittedly, in California, if you paid attention to the bill’s redlining through its various stages, you could have tracked the changes to arrive at the net effect of the amendment. (See the note on redlining below.) Unfortunately, the redlining rules are a bit convoluted and not nearly as apparent as they might seem – they’re misleading to the uninitiated. What’s more, the resulting statute at the end of the process has no redlining, so the effect of the change is totally hidden in the enacted result.

Setting out amendments in full has been adopted by many states in addition to California. It is both more transparent and greatly eases the codification process. The codification process becomes simple because the new sections, set out in full, are essentially prefabricated blocks awaiting insertion into the law at enactment time. Any problems which may result from conflicting amendments are, by necessity, resolved earlier rather than later. (although this does bring along its own challenges)

3) Amendments in Context

There is an even better approach – which is adopted to varying degrees by a few legislatures. It is to build on the approach of setting out sections in full, but adds a visible indication of what has changed using strike and insert notation. I’ll refer to this as Amendments in Context.

This problem is partially addressed, at the federal level, by the Ramseyer Rule, which requires that a separate document be published that essentially shows all amendments in context. The problem is that this second document isn’t generally available – and it’s yet another separate document.

Why not just write the legislation showing the amendments in context to begin with? I can think of no reason other than tradition why the law, as proposed and enacted, shouldn’t show all amendments in context. Let’s take a look at this approach:

Section 1. Section 1234 of the Labor Code is amended to read:

1234. Notwithstanding any other provision of this part, the minimum wage for all industries shall be not less than $7.50 $8.50 per hour.

Isn’t this much clearer? At a glance we can see that the minimum wage is being raised a dollar. It’s obvious – and much more transparent.
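In the printed bill, the “$7.50” above would appear struck through and the “$8.50” underlined – formatting that doesn’t survive in plain text. This is exactly the kind of thing markup can carry directly: Akoma Ntoso, for example, provides inline ins and del elements for this purpose. A hedged sketch, with invented ids:

   <!-- Sketch only: the amended section set out in full, with the
        change visible in the markup itself. -->
   <section id="sec_1234">
      <num>1234.</num>
      <content>
         <p>Notwithstanding any other provision of this part, the
            minimum wage for all industries shall be not less than
            <del id="del_1">$7.50</del> <ins id="ins_1">$8.50</ins>
            per hour.</p>
      </content>
   </section>

From markup like this, both the strike-and-insert view and the clean enacted text can be generated mechanically.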

At Xcential, we address this problem in California by providing an amendments-in-context view for all state legislation within our LegisWeb bill tracking service. We call this feature As Amends the Law™ and it is computed on the fly.

Governments are spending a lot of time, energy, and money on legislative transparency. The progress we see today is in making the data more accessible to computer analysis. Amendments in context would make the legislation not only more accessible to computer analysis – but also more readable and understandable to people.

Redlining Note: If redlining is a new term to you, it is similar to, but subtly different from, the track-changes feature of a word processor.

Akoma Ntoso, Standards, Transparency

Legislative Data: The Book

Last week, as I was boarding the train at Admiralty station in Hong Kong to head back to the office, I learned that I am writing a book. +Ari made the announcement on his blog. It seems that Ari has found the key to getting me to commit to something – put me in a situation where not doing it is no longer an option. Oh well…

Nonetheless, there are many good reasons why now is a good time to write a book. In the past year we have experienced a marked increase in interest in the subject of legislative data. I think that a number of factors are driving this. First, there is renewed interest in driving towards a worldwide standard – especially the work being done by the OASIS LegalDocumentML technical committee. Second, the push for greater transparency, especially in the USA, is driving governments to investigate opening up their databases to the outside world. Third, many first-generation XML systems are now coming due for replacement or modernization.

I find myself in the somewhat fortuitous position of being able to view these developments from an excellent vantage point. From my base in San Diego, I get to work with and travel to legislatures around the world on a regular basis. This allows me to see the different ways people are solving the challenges of implementing modern legislative information management systems. What I also see is how many jurisdictions struggle to set aside obsolete paper-based models for how legislative data should be managed. In too many cases, the physical limitations of paper are used to define the criteria for how digital systems should work. Not only do these limitations hinder the implementation of modern designs, they also create barriers to fulfilling the expectations that come as people adapt to receiving their information online rather than on paper.

The purpose of our book will be to propose a vision for the future of legislative data. We will share some of our experiences around the world – focusing on the successes some legislatures have had as they’ve broken legacy models for how things must work. In some cases the changes involve simply better separating the physical limitations of the published form from the content and structure. In other cases, we’ll explain how different procedures and conventions can not only facilitate the legislative process, but also make it more open and transparent.

We hope that by producing a book on the subject, we can help clear the path for the development of a true industry to serve this somewhat laggard field. This will create the conditions that will allow a standard, such as Akoma Ntoso, to thrive which, in turn, will allow interchangeable products to be built to serve legislatures around the world. Achieving this goal will reduce the costs and the risks of implementing legislative information management systems and will allow the IT departments of legislatures to meet both the internal and external requirements being placed upon them.

Ari extended an open invitation to everyone to propose suggestions for topics for us to cover. We’ve already received a lot of good interest. Please keep your ideas coming.

Akoma Ntoso, HTML5, LegisPro Web, Standards, Transparency

2013 Legislative Data and Transparency Conference

Last week I participated in the 2013 Legislative Data and Transparency Conference put on by the U.S. House of Representatives in Washington D.C.

It was a one-day event that featured numerous speakers both within the U.S. government and in the surrounding transparency community around D.C. My role, at the end of the day, was to speak as a panelist along with Josh Tauberer of GovTrack.us and Anne Washington of The George Washington University on Under-Digitized Legislative Data. It was a fun experience for me and allowed me to have a friendly debate with Josh on APIs versus bulk downloads of XML data. In the end, while we both fundamentally agree, he favors bulk downloads while I favor APIs. It’s a simple matter of how we each use the data.

The morning sessions were all about the government reporting the progress they have made over the past year relating to their transparency initiatives. There has been substantial progress this year and this was evident in the various talks. Particularly exciting was the progress that the Library of Congress is making in developing the new congress.gov website. Eventually this website will expand to replace THOMAS entirely.

The afternoon sessions were kicked off by Gherardo Casini of the UN-DESA Global Centre for ICT in Parliament in Rome, Italy. He gave an overview of the progress, or lack thereof, of XML in various parliaments and legislatures around the world. He also gave a brief mention of the progress in the LegalDocumentML Technical Committee at OASIS which is working towards the standardization of Akoma Ntoso. I am a member of that technical committee.

The next panel was a good discussion on extending XML. First up was Eric Mill of the Sunlight Foundation who, among other things, talked about the HTML transformation work he has been exploring in recent weeks. I mentioned his efforts in my blog last week. Following him was Jim Harper of the Cato Institute, who talked about their Deepbills project. Finally, Daniel Bennett gave a talk on HTML and microdata. His interest in this subject was also mentioned in my blog last week.

One particularly fun aspect of the conference was walking in and noticing the Cato Institute’s Deepbills editor running on a table at the entrance. The reason it was fun for me is that their editor is actually a customization of an early version of the HTML5-based LegisPro Web editor, which I have spent much of the past year developing. We have developed this editor to be an open and customizable platform for legislative editing. The Cato project is one of four different implementations that now exist – two are Akoma Ntoso based and two are not. More news will come on this development in the not-too-distant future. I had not expected the Cato Institute to be demonstrating anything, and it was quite a nice surprise to see software I had written up on display.

If there was any recurring theme throughout the day, it was the call for better linked data. While there has been significant progress over the past year towards getting the data out there, now it is time to start linking it all together. Luckily for me, this was the topic I had chosen to focus on in my talk at the end of the day. It will be interesting to see the progress that is made towards this objective this time next year.

All in all, it was a very successful and productive day. I didn’t have a single moment to myself all day. There were so many interesting people to meet that I didn’t get a chance to chat with nearly as many as I would have liked.

For an amusing yet still informative take on the conference, check out Ari Hershowitz’s Tabulaw blog. He reveals a little bit more about some of the many projects we have been up to over the past year.

https://cha.house.gov/2013-legislative-data-and-transparency-conference

Uncategorized

Legislative Terminology — The Same but Different

In my last blog, I covered a lot of the variations I find around the world. I do a lot of document analysis, working to map various legislative traditions into Akoma Ntoso. Doing the job right sometimes means understanding nuances and resisting the temptation to apply rules learned elsewhere.

There are a number of terms that often require very careful consideration:

  • In legislation in the English-speaking world, the “middle” layer is usually the Section. Numbering is sequential, starting at the beginning of the document and continuing to the end regardless of the hierarchy above. In non-English-speaking countries, this level is the Article, and the Section is an upper level like a Part or Chapter.

    However, there are exceptions. In the US Constitution, this practice is not followed: sections are found in articles. This arrangement is the opposite of European legislation, where articles are found in sections. This doesn’t really make a lot of sense. In a newspaper, articles are found in sections of the paper, like the business or sports section. This same structure exists in HTML5. Perhaps Thomas Jefferson and the other framers of the US Constitution were trying to add a bit of European flair to their work, but got the order backwards. Many constitutions around the world are modelled on the US Constitution and adopt the same unusual Article/Section arrangement.

    One quirk I came across lately was most confusing and presented an interesting conundrum. While the prevailing practices in the jurisdiction were British in tradition, a few statutes adopted a more European style. The sections were numbered sequentially and always referred to as sections. However, the numbering never explicitly calls out the level type (e.g. the section number is “2.” rather than “Sec 2.”). Nonetheless, knowing that this level was a Section, we had modelled the sections as akn:section. We then discovered a small handful of statutes that had upper-level sections as found in European legislation (e.g. SECTION 3). So, in these documents, there were two completely different types of constructs, both called sections. While this was probably an error caused by drafting rules not being enforced properly, the result was enacted law containing the error. We ended up using an akn:hcontainer with @name="section" to create another distinct type of Section (see the sketch following this list).
  • One common area of confusion is the use of plurals. We see this all over the place. For example, in some jurisdictions, the Section-type construct is known as a Regulation and the document containing them is called the Regulations. Other jurisdictions refer to the sections as Sections, and the document itself is the Regulation.

    This same practice is found with rules: the section-type construct is called a Rule and the document is known as the Rules. Here, the naming practice is nearly universal.

    We find this same inconsistency with Bill Amendments. In some jurisdictions, each individual change is referred to as an Amendment and the collective whole is the Amendments or an Amendment List. In other jurisdictions the individual changes are known as Instructions and the collective whole is the Amendment. This difference can be confusing when mapping to Akoma Ntoso, as that schema implies the former convention, which is more common in Europe, while the latter approach is more prevalent in the U.S.
  • Another area of confusion is the difference between an Annex and a Schedule. The European concept of an Annex is a separate document treated somewhat as an attachment to the base document. A Schedule is different – it is clearly part of the body of the document. While it is most often found at the end of the body, in some jurisdictions with complex hierarchical structures, schedules can also be found at the end of any upper hierarchical level. This construct is one that cannot currently be modelled in Akoma Ntoso without resorting to akn:hcontainer, although the proposed next version includes akn:schedule to rectify this (again, see the sketch below).
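Both of the workarounds mentioned above come down to the same escape hatch – the generic hcontainer. Here is a hedged sketch; the ids, numbering, and text are all invented, and akn:schedule appears above only as a proposed element:

   <!-- A European-style upper-level SECTION, kept distinct from the
        ordinary akn:section by using the generic hcontainer. -->
   <hcontainer name="section" id="hcntr_3">
      <num>SECTION 3</num>
      <section id="sec_12">
         <num>12.</num>
         <content>
            <p>No vehicle shall be left standing on a highway
               overnight.</p>
         </content>
      </section>
   </hcontainer>

   <!-- A schedule, pending adoption of the proposed akn:schedule. -->
   <hcontainer name="schedule" id="sched_1">
      <num>SCHEDULE 1</num>
      <content>
         <p>Fees payable under section 12.</p>
      </content>
   </hcontainer>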

Mapping a jurisdiction’s legislation into Akoma Ntoso can be tricky. The mapping isn’t always straightforward, and an exhaustive analysis of the entire body of existing law will almost always reveal that there are no hard and fast rules. As existing law can’t just be “fixed” to be consistent, it is often necessary to come up with creative ways to handle the oddities that are found.

Akoma Ntoso, Process

Legislative Archeology

One of the cool aspects of my job is that I get to work in different legislative traditions around the world. Sometimes it feels like a bit of an archeological dig, uncovering the history of how a jurisdiction came to be. Traditions are layered upon other traditions to result in unique practices that are clearly derived from one another.

While I am no scholar in this area and I’ve yet to come across any definitive description of it, I find exploring the subject quite fascinating.

So far I’ve come across four distinct, but related, legislative traditions:

  1. The Westminster-inspired traditions found in the UK and around the world in the far reaches of the former British Empire.
  2. The U.S. federal tradition, a distinct variant of UK-inspired legislation which has become quite different and complex in comparison. I think that the structure of the U.S. government, as specified by the U.S. Constitution, has led to substantial evolution of legislative practices.
  3. The U.S. states’ traditions, also a distinct variant of UK-inspired legislation, but which have changed largely thanks to legislative reforms in the mid-twentieth century.
  4. European traditions which are largely similar to Westminster, but which tend to have their own unique twist, sometimes dating back to Roman times.

I generally distinguish the four traditions based on a few key characteristics. It’s like looking at DNA: while a lot of the sequences remain the same, there are a few key differences that reveal the genealogy of the jurisdiction.

The UK tradition is generally layers and layers of statutes which are the law of the land. Bills either lay down new laws or amend existing law. Bills that only amend existing laws are often known as amending bills. It often seems that there are around seven hundred to a thousand base statutes. Subsidiary or secondary legislation – rules, regulations, and the like – is quite closely related to primary legislation and is quite similar in structure.

The US federal tradition started the very slow process of re-compiling statutes as a single large code, the U.S. Code. As this process has been very slow and arduous, the result is a hybrid system with both a code and statutes. The separation of powers causes subsidiary legislation to be far more distinct, and its relationship to primary legislation is much less obvious.

U.S. states have also adopted codes (or, in some cases, revised statutes) as a means to tidy up and arrange the laws in a more orderly fashion. In general, this task was undertaken in the mid-twentieth century and is complete. Another reform that came at the same time was a forced simplification of bills. Whereas Federal bills can become gigantic omnibus bills with lats

Uncategorized

Comparing DOCX to Akoma Ntoso for Legislation

After describing what makes for good legislative XML, I feel I should bring up a favorite topic of mine — why word processors don’t make for good legislative drafting tools.

Lately, we’ve been implementing round-tripping tools to allow Akoma Ntoso documents to be imported into and exported from Microsoft Word. This is to facilitate migration from a largely office-productivity-oriented system to an XML-based one, and to allow the exchange of documents with external clients that don’t have access to the internal systems being used to draft and manage legislation. It’s been quite a difficult process, though not where you might expect. The round-tripping itself has been quite straightforward: exporting a document is relatively easy, and re-importing that exported document, unchanged, isn’t difficult. What is very problematic is trying to ingest documents drafted or extensively edited using a word processor. The DOCX markup quickly becomes a tangled mess. Even when a document looks fine visually, there can be a lot going wrong on the inside, revealing the drafter’s struggle with the word processor to get a document that at least looks right. To avoid this problematic mess, we tend to resort to interpreting the words and discarding the structure and internal metadata entirely. It’s not perfect, but it’s at least manageable.

I’m going to compare the prominent word processing format of today, DOCX (well, at least the WordprocessingML part of it), to Akoma Ntoso with respect to how they stack up against each other on my list:

  • Is it semantic?
    DOCX: No, not at all. DOCX is a serialization of the inner workings of Microsoft Word. It makes no attempt to be anything else.
    Akoma Ntoso: Yes, this is the fundamental approach Akoma Ntoso takes.
  • Is the presentation separated from the semantics as much as possible?
    DOCX: No, the presentation is tied directly into the document itself, and what’s more, is very proprietary.
    Akoma Ntoso: Yes, although you can apply presentation directly inline in cases, such as tables, where necessary.
  • Is all the text (excluding any metadata section) in the natural reading order?
    DOCX: Yes, for the most part.
    Akoma Ntoso: Yes, for the most part.
  • Does it, to the fullest extent possible, avoid the use of generated text?
    DOCX: No, and this is one of the most frustrating and infuriating parts of working with DOCX.
    Akoma Ntoso: Mostly, but it doesn’t preclude practices that ensure this rule is followed.
  • Is every provision that needs data associated with it permanently identifiable?
    DOCX: Mostly.
    Akoma Ntoso: Yes, via the @wId or the @GUID attributes.
  • Is every provision that is referred to easily locatable?
    DOCX: Not without extensive customization.
    Akoma Ntoso: Yes, via a standardized locator mechanism using the @eId/@wId attributes. (See the sketch following this list.)
  • If the XML schema is for general use, is there an extensible way to add missing constructs?
    DOCX: No, unless you regard styling as your constructs (a bad idea) or want a complex customization task.
    Akoma Ntoso: Yes, via the seven elements found in the generic model.
  • Is there an extensible metadata mechanism?
    DOCX: Yes, but it’s complicated.
    Akoma Ntoso: Yes, but it’s complicated.
  • Does it provide the facilities necessary to automate according to modern expectations?
    DOCX: No, the presentation oriented structure of DOCX does little to enable downstream automation.
    Akoma Ntoso: Yes, Akoma Ntoso encourages a hierarchical content structure that is ideal for downstream automation.
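To make the identification and locatability points concrete, here is a hedged side-by-side sketch. Both fragments are invented for illustration: the first shows how WordprocessingML splits a sentence into presentation runs with nothing durable to address, while the second shows an Akoma Ntoso provision carrying both a permanent identity and a predictable locator.

   <!-- DOCX (WordprocessingML): one sentence fragmented into runs merely
        because formatting changes mid-sentence. Nothing here names the
        provision or survives re-editing. -->
   <w:p xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
      <w:r><w:t xml:space="preserve">1234. The minimum wage shall be </w:t></w:r>
      <w:r><w:rPr><w:b/></w:rPr><w:t>$8.50</w:t></w:r>
      <w:r><w:t xml:space="preserve"> per hour.</w:t></w:r>
   </w:p>

   <!-- Akoma Ntoso: the same provision with a permanent identity (@GUID)
        and a predictable locator (@eId/@wId). All values invented. -->
   <section eId="sec_1234" wId="sec_1234"
         GUID="f47ac10b-58cc-4372-a567-0e02b2c3d479">
      <num>1234.</num>
      <content>
         <p>The minimum wage shall be not less than $8.50 per hour.</p>
      </content>
   </section>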

Of course, Akoma Ntoso looks a lot better for legislative documents than DOCX does. That should be no surprise – Akoma Ntoso is purpose-built for legislation while DOCX is a general-purpose document model intended for no single purpose. But the two are also fundamentally very different. While Akoma Ntoso is designed to be a modern standards-based document information model for legislation, DOCX is a serialization of the archaic data structures that exist within Microsoft Word. DOCX reflects the proprietary inner workings of Microsoft Word rather than the semantic meaning to be found within a document.

Akoma Ntoso has its drawbacks too. It’s complex, a bit academic, and its need to span a very broad range of legal traditions makes it a good fit for most legislative traditions, but a perfect fit for none.
