The American Law Institute (ALI) is drafting a new Principles of the Law of Software Contracts (PLSC) to replace the failed Uniform Computer Information Transactions Act (UCITA). I recently attended ALI’s annual meeting, at which we reviewed the Principles, and I am giving a first status report at the Conference of the Association for Software Testing (CAST, July 9-10).
The Principles raise important issues for our community. For example:
- They apply a traditional commercial rule to software contracting: a vendor who knows of a hidden material defect in its product but sells the product without disclosing the defect is liable to the customer for damage (including expenses) caused by the defect.
- A defect is hidden if a reasonably observant person in the position of the buyer would not have found it in an inspection that one would expect a reasonably diligent customer to perform under the circumstances.
- A defect is material if it is serious enough to be considered a significant breach of the contract.
I think this is an excellent idea. It reflects the fact that no one can know all of the bugs in their product and lets vendors shield themselves from liability for defects they didn’t know about, but it demands that vendors reveal the serious bugs they do know about, so that customers can (a) make better-informed purchase decisions and (b) avoid doing things that trigger serious failures.
I think we could help clarify it:
- When should we hold a software company responsible for “knowing” about a defect? Is an irreproducible bug “known”? What about a bug that wasn’t found in the lab but was reported by a customer? By one customer? By 5,000 customers? How long should we allow the company to investigate a report before saying it has had enough time to be held responsible for the bug?
- What counts as disclosure to the customer? The bugs in Firefox and OpenOffice are disclosed in their open-to-the-public bug databases. Is this good enough? Maybe it is for these, but when I tried to make sense of the bugs published in Apache’s database, for many reports I had no clue what the reporters were writing about. Serious problems were reported in ways that tied closely to the implementation and not at all to anything I would know to do or avoid. For their purpose (helping the developers troubleshoot the bug), these reports might have been marvelous. But as disclosure to the customer? What should our standards be?
- What is a material defect anyway? Do the criteria differ depending on the nature of the product? Is a rarely occurring crash more material in a heart monitor than in a word processor? And if a bug corrupts data in a word processor, do we have different expectations for Word, OpenOffice Writer, Wordpad, and my 12-year-old niece’s StudentProjectEditor?
- What about the idea of security by obscurity? The notion that some security holes won’t be exploited if no one knows about them, and so we should give the vendor a chance to fix certain bugs before disclosing them? This is a controversial idea, but there is evidence that at least some problems are exploited much more heavily after they are publicized than before.
- Another issue is reverse engineering. Historically, reverse engineering of products (all products: hardware, software, chemical, mechanical, whatever) has been fair game. American know-how has a lot of “building a better mousetrap” in it, and to build a better one, you start by reverse engineering the current one. There have been some very recent, very expansive rulings (such as Bowers v. Baystate) that have enforced EULA-based restrictions against software reverse engineering so broadly that they would exclude black box testing (for example, for a magazine review).
- The Principles lay out several criteria under which unreasonable restrictions could be ruled unenforceable by a judge. To me, these seem like reasonable criteria: a contract clause is unenforceable if it conflicts with federal or state law or with public policy (for example, as expressed in the constitution and statutes), is outrageously unfair, or would be considered an abuse of copyright or patent under (mainly) antitrust doctrines.
- Does it make sense for us to identify solid examples of contexts in which reverse engineering should be permissible (for example, where it poses absolutely no threat to the vendor’s legitimate interests) and others in which the vendor might have more of a basis and rationale for seeking protection? We can identify these shades of gray with much more experience-based wisdom than lawyers who don’t even know what a loop is, let alone how to code their way out of one.
There are plenty of other issues. CAST will provide an open forum for discussion. Remember, CAST isn’t just a place for presentations: we make time for post-presentation discussion, for hours if the conference participants want to go that long.