Illustrating ChatGPT’s limitations, Hammond described how he prompted ChatGPT to generate a description of the MSAI program. The bot’s response skewed toward the norm by incorrectly stating that MSAI is a two-year degree program — the program is 15 months. This error is an example of bias, or an erroneous assumption, rooted in ChatGPT’s training data. The bot also omitted information unique to the program, namely MSAI’s industry partnerships.
“When you are looking for an answer that is best practice or conventional wisdom, those are marvelous places for statistical methods,” Hammond said. “But if you start wandering into the realm of the bespoke, or the unique, you’ll run into problems.”
Hammond stressed the importance of understanding the nature of a task and confining technologies to the tasks they were built to solve. He suggested a language model like ChatGPT might not, for example, be suitable for the task of determining how changing a clause in a contract will impact the document.
“You have to understand the length and breadth of the technology and where it collapses, and make sure the task is not one that demands something beyond its limits,” Hammond said. “ChatGPT might be good at taking a test. But, because of the nature of the underlying mechanism, it may never be capable of genuine reasoning, being imaginative, or thinking beyond the moment.”
Implications for law and legal services
McGinnis discussed his expectations regarding how technologies like ChatGPT may affect legal services and law, including increasing computational efficiency, improving accuracy, and reducing costs.
He suggested that areas of law that are more conservative and stable over time, like trust law, might be more easily impacted by technology than edge cases and rapidly changing areas, like cybersecurity.
“I don’t think, at least in the foreseeable future, that