
GPT-3: An AI That Makes Cars, Not Wrenches, And What It Means For The Legal Profession


by Rudy DeFelice, CEO of Keesal Propulsion Labs and author of “The Pink Box Experience – How Talked-About Companies Got That Way”

One doesn’t have to dig too deep into legal organizations to find AI skeptics. AI is getting tremendous attention and significant venture capital, but AI tools frequently underwhelm in the trenches. Here are a few reasons why that is and why I believe GPT-3, a beta version of which was recently released by OpenAI, might be a game-changer in legal and other knowledge-focused organizations.

GPT-3 is getting a lot of oxygen lately because of its size, scope and capabilities. However, it should be recognized that a significant amount of that attention is due to its association with Elon Musk. OpenAI, which created GPT-3, was founded by heavy hitters Musk and Sam Altman and is supported by Marc Benioff, Peter Thiel and Microsoft, among others. Arthur C. Clarke once observed that great innovations happen after everyone stops laughing. Musk has made the world stop laughing in so many ambitious areas that people are inclined to give any project he has had a hand in a second look. GPT-3 is getting the benefit of that spotlight. I suggest, however, that the attention might be warranted on its merits.

Here are a few reasons why some AI-based tools have struggled in the legal profession and how GPT-3 might be different.

1. Because Not Every Problem Is A Nail.

It is said that when you’re a hammer, every problem is a nail. The networks and algorithms that power AI are quite good at drawing correlations across enormous datasets that would not be obvious to humans. One of my favorite examples is a loan-underwriting AI that determined that the charge level of your phone’s battery at the time of application is correlated with your underwriting risk. Who knows why that is? A human would not have surmised that connection. Those things are not rationally related, just statistically related.

This capability makes AI tools good at grouping like things together so that users can find them based upon revealed correlations. Consequently, many AI applications are some variant of finding stuff better. That is what they do well. However, “finding stuff” is not a first-order problem in legal organizations. It is merely a means to an end.

The “end” in legal organizations is a document of some kind. Documents are their widget, the thing legal teams build. Finding information that is relevant to creating a document is helpful. Actually producing that document, though, is far more helpful.

Producing documents, as it turns out, is something GPT-3 does very well. That is at the heart of its distinction from many other AI tools – its ability to produce sophisticated documents. At its core, GPT-3 is a text prediction engine. It is designed to accept as input a string of text and from that input predict, from a statistical analysis of everything it has ingested, what text should come next. That process can be repeated recursively, so from a simple text string an entire document can be generated.
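
To make that predict-and-append idea concrete, here is a deliberately tiny toy in Python. It is not how GPT-3 works internally (GPT-3 uses a large neural network over tokens, not word counts over a couple of sentences), but it shows the same loop: look at the text so far, predict a statistically likely continuation, append it, and repeat. The miniature “corpus” is invented purely for illustration.

```python
from collections import Counter, defaultdict
import random

# A toy stand-in for "everything GPT-3 has ingested": count which word tends
# to follow each word in a tiny corpus, then repeatedly predict-and-append.
corpus = ("the party shall deliver the goods and the party shall pay the "
          "invoice within thirty days of delivery of the goods").split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Pick the continuation most often seen after this word in the corpus."""
    candidates = next_word_counts.get(word)
    if not candidates:
        return random.choice(corpus)  # fall back to any known word
    return candidates.most_common(1)[0][0]

# Start from a short seed and repeat the prediction step to grow a "document".
text = ["the", "party"]
for _ in range(10):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

GPT-3 runs the same loop, only with a vastly richer notion of “statistically likely” learned from its training data.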

It does this through statistics and algebra, more or less. GPT-3 has read, essentially, everything. At least, all substantive publicly available documents in huge portions of the internet, which at this point in history represents a material segment of all expressed human knowledge. Based upon that, it can predict, given some input of text, what text is statistically likely to come next. You can feed it a few lines and it predicts the next. Moreover, early testers claim that you can instruct GPT-3 to write in a certain voice. Your document can be created in the voice of Hemingway, Shakespeare, or Barack Obama. Pretty cool stuff.
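
In practice, that interaction is just an API call. The sketch below assumes access to OpenAI’s beta through its Python client; the engine name, prompt, and sampling settings are illustrative assumptions rather than a tested recipe, but they show how a short instruction, including a requested voice, comes back as generated text.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed beta credential

# A short instruction, including the kind of "voice" request early testers
# describe. The prompt content is invented for illustration.
prompt = ("Write the opening paragraph of a client memo about remote-work "
          "policies, in the plain, direct voice of Hemingway:\n\n")

response = openai.Completion.create(
    engine="davinci",   # illustrative engine name from the beta program
    prompt=prompt,
    max_tokens=150,     # how much text to predict
    temperature=0.7,    # allow some variation in the continuation
)

print(response.choices[0].text.strip())
```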

This, I think, is the breakthrough for legal organizations. GPT-3 isn’t just finding stuff for you. GPT-3 is making stuff for you. Certainly other AI products add value. It’s not trivial that we have something that makes wrenches. But it’s another thing entirely, if you sell cars, to have something that makes a car.

2. With Data Sets, Sometimes The Juice Isn’t Worth The Squeeze.

Most enterprises that have implemented AI tools confront the training-dataset problem. Algorithms that were designed around enormous datasets depend upon similarly large datasets in operation. When such tools come out of the lab and into the enterprise, assembling the appropriate dataset is often a gating factor.

The issue in legal organizations is one of scale and effort. The volume of documents in most legal organizations, even large ones, is nowhere near the numbers for which AI tools were designed. In addition, vetting and assembling such datasets and authenticating a product’s performance after training on them can be extremely time consuming. In areas such as contract intelligence, tools that are trained on large publicly available datasets, such as the SEC’s EDGAR database, can be an exception to this problem. They tend to work out of the box on an organization’s contracts, which usually resemble the large public dataset. However, absent this pre-delivery training, creating and maintaining the dataset frequently proves to be a bar to success in an organization.

GPT-3, however, has been pre-trained on an enormous corpus drawn from billions of publicly available web pages and documents. Since it arrives with that vast dataset already baked in, it is functional out of the box for the purpose of generating documents. Early research suggests that it can be fine-tuned on an organization’s own data, but it doesn’t have to be. This solves the primary challenge business users face in getting out of the gate with some AI tools.
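
One reason no in-house training set is required: when an organization does want the output to follow its own patterns, the relevant examples can simply be placed in the prompt, a technique usually called few-shot prompting. The sketch below is an assumption about how that might look; the sample clauses, task, and engine name are invented for illustration.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed beta credential

# Instead of assembling and vetting a training set, show the model a couple of
# in-house examples inline and ask it to follow the pattern (few-shot prompting).
prompt = """Summarize the clause in one sentence for a business reader.

Clause: The Receiving Party shall not disclose Confidential Information to any third party without prior written consent.
Summary: You may not share confidential information with anyone outside the company without written permission.

Clause: Either party may terminate this Agreement upon thirty (30) days' written notice.
Summary: Either side can end the agreement with 30 days' written notice.

Clause: The Supplier shall indemnify the Customer against losses arising from third-party IP claims.
Summary:"""

response = openai.Completion.create(
    engine="davinci",   # illustrative
    prompt=prompt,
    max_tokens=60,
    temperature=0.3,
    stop=["\n"],        # stop at the end of the predicted summary line
)
print(response.choices[0].text.strip())
```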

3. Thinking Isn’t As Important As Doing.

One criticism that has been leveled at GPT-3 is that it does not “reason” as humans do, so on occasion its output is absurd. That is a fair criticism, and the public conversation about GPT-3 is not short of humorous examples.

GPT-3 is a statistical engine, without the reasoning ability of humans or of the yet-to-be-created “strong AI.” People frequently ask whether an AI will pass the Turing Test (that is, whether it would fool a human into believing they were interacting with another human). While that is a useful shorthand for comparing an AI’s reasoning ability to a human’s, it doesn’t say much about its usefulness. In knowledge organizations where creating documents is a central activity, usefulness is judged by a tool’s ability to do that task, not by its ability to fool someone about the source. For that purpose, GPT-3 appears to be well suited. While the absurd output that GPT-3 sometimes creates can be fun to see, the wrong turns are pretty obvious and unlikely to escape even cursory review. Most of us can probably point to some pretty absurd output from humans too, but that’s hardly a reason to dismiss them as participants in the ecosystem.

What GPT-3 does is create stuff, rapidly, based upon a significant chunk of human knowledge. For all that doing, perhaps its lack of thinking can be forgiven.

Some Applications For GPT-3 Worth Exploring In Legal Organizations.

Since GPT-3 is good at generating documents, it’s easy to imagine applications of this technology in legal organizations. Almost any task that is document-oriented (except presumably those where the unique facts overwhelm all other aspects of the document) is a good candidate. Here are a few that we will be exploring with our corporate legal department clients, which I suspect are representative of those that make sense in other organizations:

Powering Intake Systems: In our work with corporate legal departments, a common problem is managing the interaction between business units and the legal department. The requester wants prompt, accurate help. The legal department wants to give that help while complying with headcount and bandwidth constraints.

One aspect of the intake process can be providing immediate answers to common questions.  GPT-3 can be part of that solution by providing contextual answers (rather than selecting from an inventory of stock answers like typical bots) as well as creating first drafts of documents. GPT-3’s ability to create answers and documents on the fly can enable chat and intake systems that users find useful rather than off-putting.
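
As a rough sketch of what that could look like, the code below prepends the department’s own guidance to a business unit’s question and asks the model for a first-pass answer. The guidance text, function name, and engine are all hypothetical, invented here for illustration, and any real answer would still need lawyer review.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed beta credential

# Hypothetical guidance that would come from the legal team's own playbook.
DEPARTMENT_CONTEXT = (
    "Company policy: NDAs on the standard template may be signed by the "
    "business unit lead. Any deviation from the template requires legal review."
)

def answer_intake_question(question: str) -> str:
    """Draft a contextual first response to a business-unit request."""
    prompt = (f"{DEPARTMENT_CONTEXT}\n\n"
              f"Question from a business unit: {question}\n"
              f"Helpful answer from the legal department:")
    response = openai.Completion.create(
        engine="davinci",   # illustrative engine name from the beta
        prompt=prompt,
        max_tokens=120,
        temperature=0.4,
    )
    return response.choices[0].text.strip()

print(answer_intake_question("Can I sign the standard NDA with a new vendor myself?"))
```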

Document creation: Creating first-pass documents based on what has been done before is not only powerful, it’s what humans already do. Early in my legal practice a senior partner once told me, “We created a set of documents in the Garden of Eden and have just been modifying them ever since.” GPT-3’s garden is much larger – it can include your collection, plus everything else. Also, the model behind it has 175 billion parameters, so its predictions draw on an enormous amount of statistical evidence, and it produces them incredibly fast. One can imagine GPT-3 being part of the process that creates initial drafts of legal memoranda, contracts, policy manuals, HR documents, RFPs and audit responses, among other things that people commonly create by finding and patching together prior versions of the same documents.
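
A minimal sketch of that drafting pattern, again assuming the beta Python client: hand the model an existing clause from your own collection (the clause and instruction here are invented) along with a description of the new matter, and let it predict a first draft for a lawyer to revise.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumed beta credential

# An existing clause from the organization's collection (invented here), plus
# an instruction describing the new engagement.
prior_clause = (
    "Term. This Agreement commences on the Effective Date and continues for "
    "two (2) years unless earlier terminated under Section 8."
)

prompt = (f"Existing clause:\n{prior_clause}\n\n"
          "Draft a first-pass version of this clause for a new engagement with "
          "a three-year initial term and automatic one-year renewals:\n")

response = openai.Completion.create(
    engine="davinci",   # illustrative
    prompt=prompt,
    max_tokens=120,
    temperature=0.5,
)
print(response.choices[0].text.strip())
```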

Where We Go From Here.

GPT-3 is not the only helpful AI tool at our disposal. However, it does represent a transition from making the raw materials of end products to making end products themselves. For legal organizations and other knowledge workers, that is a material change. In addition, the enormous dataset upon which it is pre-trained removes one of the barriers to experimentation and implementation. GPT-3 has its limits. But frequently the first limit is our imaginations. In the case of GPT-3, stretching our imaginations might serve us well.

My company specializes in understanding what business capabilities can be enabled by all the cool new tech coming into the world. We believe that GPT-3 provides some new tools for a legal department’s arsenal, and we will be focused on assessing practical, impactful solutions, hopefully making better legal organizations in the process. Once the world stops laughing, of course.

 

Rudy DeFelice is co-founder and CEO of Keesal Propulsion Labs, a digital transformation company serving the law departments of the Fortune 500. Rudy is an attorney, technology entrepreneur, TEDx speaker and best-selling author. He is an alumnus of Harvard Business School and the University of Connecticut School of Law. He is the author of “The Pink Box Experience – How Talked-About Companies Got That Way.”