Wakeling is particularly impressed by Harvey’s translation skills. The tool is strong in mainstream law but weaker in specific niches, where it is more prone to hallucinations. “We know the limits, and people are extremely knowledgeable about the risk of hallucinations,” he says. “Within the firm we have gone to great lengths with a large training program.”
Other lawyers who spoke to WIRED were cautiously optimistic about using AI in their practice.
“It’s certainly very interesting and definitely indicative of some of the fantastic innovations happening within the legal industry,” said Sian Ashton, client transformation partner at law firm TLT. “However, this is definitely a tool in its infancy and I wonder if it really does much more than provide precedent documents that are already available in the company or through subscription services.”
AI will likely continue to be used for entry-level work, says Daniel Sereduick, a data protection lawyer based in Paris, France. “Drafting legal documents can be a very labor-intensive task that AI seems to understand quite well. Contracts, policies and other legal documents are often prescriptive, so AI’s capabilities in gathering and synthesizing information can do a lot of work.”
But, as Allen & Overy has discovered, the output of an AI platform will need to be carefully assessed, he says. “Part of the legal profession is about understanding your client’s specific circumstances, so the output will rarely be optimal.”
Sereduick says that while the output of legal AI must be carefully controlled, the input can be equally challenging to manage. “Data submitted to an AI could become part of the data model and/or training data, and this would most likely violate confidentiality obligations to clients and the data protection and privacy rights of individuals,” he says.
This is a particular concern in Europe, where the use of this kind of AI may violate the principles of the European Union’s General Data Protection Regulation (GDPR), which regulates how much data about individuals can be collected and processed by companies.
“Can you legally use a piece of software built on that foundation [of mass data scraping]? In my opinion, this is an open question,” says data protection expert Robert Bateman.
Law firms would likely need a solid legal basis under the GDPR to feed personal data about clients into a generative AI tool like Harvey, and contracts in place covering the processing of that data by the third parties operating the AI tools, Bateman says.
Wakeling says Allen & Overy is not using personal data in its deployment of Harvey, and would not do so unless it could be satisfied that any data would be ring-fenced and protected from any other use. Deciding when that requirement has been met would be a matter for the firm’s information security department. “We are extremely careful with client data,” says Wakeling. “Right now we’re using it as a non-personal-data, non-client-data system to save time on research or drafting, or preparing a plan for slides, things like that.”
Regulatory scrutiny of feeding personal data into generative AI tools is already hardening. In Europe, the EU’s AI Act is looking to regulate the use of artificial intelligence more stringently. In early February, Italy’s Data Protection Agency stepped in to prevent the generative AI chatbot Replika from using the personal data of its users.
But Wakeling believes Allen & Overy can leverage AI while keeping customer data secure, all while improving the way the company operates. “It’s going to make a real difference to productivity and efficiency,” he says. Small tasks that would otherwise take precious minutes of a lawyer’s day can now be outsourced to AI. “When you add that up over the 3,500 lawyers who now have access to it, that’s a lot,” he says. “Even if it’s not a complete disruption, it’s impressive.”