LAMBROS SPYROU
Artificial Intelligence (AI) is already influencing the legal profession and has the potential to shape it far more profoundly in the future. AI currently benefits the profession by performing mechanical tasks, saving substantial costs and time for both large and small law firms. This impact will remain beneficial only if laws and regulations are introduced that restrict AI and its application within the profession, so that it does not come to replace humans. This blog post will argue that the only way AI will be detrimental to the profession is if AI technologies eventually replace human lawyers and judges; however, that is a distant prospect. In their present form, AI technologies benefit the legal profession by helping human lawyers perform their everyday tasks more efficiently and accurately, and provide more cost-effective legal advice to their clients. It is nevertheless of paramount importance that AI is regulated by the Government to ensure that this widespread use of AI in the law is trustworthy and transparent.
AI in the legal profession
The influence of AI on the legal profession is already evident from the fact that numerous large law firms across the UK are using AI technologies. For instance, the law firm Addleshaw Goddard (AG) uses Kira, 'a powerful AI system', to quickly interrogate and manage large volumes of information, saving significant amounts of time. One of the largest and most historic law firms in the UK, Freshfields, has also invested in Kira and uses it consistently in everyday operations such as reviewing contracts; Kira can identify 'all agreements with potentially problematic provisions.' A relatively smaller firm, Muckle, has been using AI technologies since 2016 to accelerate 'large, complex disputes'. Moreover, a recent study by the Coldwell Banker Richard Ellis (CBRE) Group found that 89% of law firms are already utilising AI or have imminent plans to do so. This illustrates that law firms, whether large or small, are willing to invest in AI technologies to handle mechanical, everyday tasks.
AI benefits the legal profession by saving clients significant costs while also delivering more accurate, efficient and timely results. These results enable lawyers to tackle more complex and creative tasks that can make an impact on the law and society. The global consulting firm McKinsey has observed that lawyers are already using AI technologies to evaluate the thousands of documents gathered during discovery and to determine the most important ones for further review by legal staff. The international law firm Cleary Gottlieb used AI during discovery to determine which of the thousands of documents collected should not be examined by prosecutors because of lawyer-client privilege. As one of the firm's lawyers pointed out, "from the 500,000 we started with, we quickly made our way to identifying 15,000 documents that were privileged." Notably, the AI review cost $50,000, rather than the potentially millions in billable hours the job would usually have cost.
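To make the privilege-review idea concrete, here is a deliberately simplified sketch of the kind of screening such tools perform. Commercial e-discovery systems use trained classifiers rather than keyword lists, and every document, identifier and keyword below is invented for illustration; this is not the system Cleary Gottlieb used.

```python
# Toy sketch: flag documents that may be subject to lawyer-client
# privilege using simple keyword heuristics. Real e-discovery tools
# use trained classifiers; these texts and markers are hypothetical.

PRIVILEGE_MARKERS = [
    "attorney-client",
    "legal advice",
    "privileged and confidential",
    "outside counsel",
]

def flag_privileged(documents):
    """Return the IDs of documents containing any privilege marker."""
    flagged = []
    for doc_id, text in documents.items():
        lowered = text.lower()
        if any(marker in lowered for marker in PRIVILEGE_MARKERS):
            flagged.append(doc_id)
    return flagged

corpus = {
    "doc1": "Privileged and Confidential: draft advice on the merger.",
    "doc2": "Quarterly sales figures for the northern region.",
    "doc3": "Forwarding legal advice received from outside counsel.",
}
print(flag_privileged(corpus))  # ['doc1', 'doc3']
```

Even this crude filter shows why the approach scales: screening 500,000 documents is a loop measured in seconds, whereas a human review of the same corpus is measured in billable months.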
Additionally, an AI lawyer, CaseCruncher Alpha, won a challenge against 100 lawyers from London's magic circle firms. The challenge was to predict whether the Financial Ombudsman would allow a claim, by analysing hundreds of PPI (payment protection insurance) mis-selling cases. The two sides submitted 775 predictions in total: CaseCruncher achieved an accuracy rate of 86.6 percent, whereas the lawyers achieved only 66.3 percent. Likewise, in a study reported on Hacker Noon, twenty of the USA's top corporate lawyers competed against an AI program, the LawGeex AI, to see who could identify the defects in five non-disclosure agreements (NDAs) faster and more accurately. The challenge was set up by an impartial team of specialists, including law professors from Duke and UCLA, and a senior corporate lawyer. The AI program attained "an average 94 percent accuracy rate, higher than the lawyers, who achieved an average rate of 85 percent." Incredibly, "it took the lawyers an average of 92 minutes to complete the NDA issue spotting, compared to 26 seconds for the LawGeex AI."
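The accuracy figures quoted in these challenges are simple arithmetic: correct predictions divided by total predictions. A minimal sketch of that scoring, with made-up prediction lists standing in for the real benchmark data:

```python
# Sketch of how challenge accuracy rates are scored: the fraction of
# predictions that match the actual outcomes. The outcome and
# prediction lists below are invented stand-ins, not the real data.

def accuracy(predictions, outcomes):
    """Fraction of predictions that match the true outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

outcomes    = ["upheld", "rejected", "upheld", "upheld", "rejected"]
ai_preds    = ["upheld", "rejected", "upheld", "rejected", "rejected"]
human_preds = ["upheld", "upheld", "rejected", "upheld", "rejected"]

print(f"AI accuracy:    {accuracy(ai_preds, outcomes):.0%}")     # 80%
print(f"Human accuracy: {accuracy(human_preds, outcomes):.0%}")  # 60%
```

On the real CaseCruncher data, the same calculation over the 775 submitted predictions yields the reported 86.6 percent for the machine and 66.3 percent for the lawyers.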
These examples show that AI can genuinely assist lawyers in analysing such documents and in cutting through their wordiness, enabling a party to identify the main issues quickly. Moreover, AI can reduce the cost of legal advice and free up time for lawyers to concentrate on more complex tasks. An AI system that reviews contracts allows lawyers to work on 'higher-level tasks' and makes 'legal advice accessible and affordable for all.' There are, therefore, multiple benefits of using AI within the legal profession: efficiency, accuracy, lower costs and freed-up time for lawyers to undertake more challenging tasks.
80% of consumers think it is more important to obtain cost-effective legal advice than to preserve the jobs of solicitors. This illustrates that clients will want to use AI because it is more affordable, and that people are willing to use it rather than viewing it as a threat, which suggests that AI is bound for mass-market adoption. That clients will be willing to receive legal advice from a law firm that embraces AI is supported by a PwC statistic: 72% of business executives think 'AI will be the business advantage of the future.' A further incentive in the UK is that AI is expected to add £232 billion to the UK economy by 2030 and $15.7 trillion to the global economy. Moreover, in a keynote speech by AG, it was asserted that clients now expect better quality services at a lower price.[i] Hence, AI systems may be adopted by all law firms in the future, making the influence of AI on the legal profession enormous. This is comprehensively summarised by Girardi, who asserted that "it may even be considered legal malpractice not to use AI one day."
Could AI replace lawyers and legal professionals?
Given the great benefits that AI can provide to the legal profession, it seems that the major detriment of AI would be its replacing human jobs. However, it appears improbable that AI will replace human lawyers in the near future, because its use is limited to mechanical tasks and it lacks interpersonal skills.
As Thomas asserts, "AI is not going to replace managers, but managers who use AI will replace the managers who do not." Richardson and Girardi both agree that, no matter how sophisticated AI becomes, it will never be a substitute for the judgment and decision-making that only humans can provide. Indeed, human lawyers and judges can deliver justice, enforce the rule of law and impact society in a way that AI may never be able to do. As the Australian law firm Best Hooper implies, a client cannot build a relationship of trust and loyalty with their solicitor if that solicitor is an AI robot; replacing human lawyers with AI would therefore be detrimental to the profession in terms of business efficacy. The firm rightly acknowledges that answers to legal questions are not always black and white, so AI technologies will not be able to replace human lawyers in the near future, since current AI does not possess such skill. This is evident from CaseCruncher itself, whose creators recognised that AI technologies only outperform human lawyers in predicting outcomes when the question is defined "precisely". Currently, AI can merely analyse the information it collects, and lacks the interpersonal and other skills a lawyer requires.
The Observer asserted that AI is currently undertaking tasks previously completed by entry-level lawyers, issuing a warning that certain jobs within the legal profession may be replaced. Dodd supports the position that AI could supersede some of the mechanical tasks performed by junior lawyers and paralegals. Correspondingly, Morison and Harkens observed that a study of jobs likely to become automated ranked paralegals in the first quartile of those most likely to be replaced, because AI can scan documents to identify essential words and phrases.[ii] In the same study, lawyers, owing to their interpersonal, advisory roles, were placed in the fourth quartile of those least likely to be superseded.
However, AI cannot currently talk to a client or present arguments before a judge at trial. As one commentator puts it, "AI's present capability meets a sizable need in the legal space by automating a number of high-volume, recurring tasks that otherwise take lawyers' focus away from more meaningful work." Consequently, entry-level lawyers will be freed to focus on more significant tasks rather than performing recurring work, which benefits both the profession and society. If AI does manage to replace human lawyers in the distant future, this could also benefit society by providing cheaper legal advice to citizens. Nevertheless, since the study cited by Morison and Harkens indicated that law is one of the most challenging professions to replace, if AI reaches that level of intelligence, close to the 'human-level machine intelligence' (HLMI) described by Bostrom,[iii] then the very existence of humanity is under threat. As Bostrom emphasises, once a machine can surpass the general intelligence of humans, humans will no longer be the dominant life-form on this planet and "our fate would be sealed". Therefore, despite the benefit to society that the replacement of human lawyers might provide, the bigger picture indicates that it would be detrimental. A collaboration between AI and humans therefore seems the most reasonable solution: as Forbes notes, 'lawyers and judges are only as good as the information they receive, and AI has the potential to significantly increase the quality of information.' Accordingly, although there are signs of AI threatening jobs within the legal profession, Richardson observes, "AI isn't going to replace the need for critical thinking. We still need to prepare students to think like lawyers, and I don't think that's ever going to change."
Could AI replace judges?
In the Morison and Harkens study, judges were ranked in the second quartile of those likely to be replaced, on the basis that robot judges could deliver quicker, more cost-efficient judgments with enhanced information, making justice more accessible. Nevertheless, although Susskind has predicted that online courts, working with disruptive technology such as AI, will fundamentally change the duties of traditional litigators and judges, he does not expect them to be capable of resolving 'the most complex and high-value disputes'.[iv]
UCL has developed an AI judge that predicted the verdicts of cases concerning torture and degrading treatment before the European Court of Human Rights with 79% accuracy; in those cases, the AI system reached exactly the same verdict as the court itself. That figure will need to improve if AI technologies are to start replacing human judges. What is significant about this AI judge, however, is that it is able to consider not only the legal evidence but also moral questions of right and wrong. This illustrates that AI could potentially threaten the job of human judges in the future.
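The UCL system worked by extracting textual features from past judgments and learning which patterns correlate with each outcome. The following is a toy sketch of that idea only: it scores a new case by how much its words overlap with past cases of each outcome. The real study used n-gram features and a trained classifier, and every case text below is invented.

```python
# Toy sketch of outcome prediction from case text. A real system
# (like the UCL study) trains a classifier on n-gram features of
# past judgments; this version just scores word overlap with past
# cases of each outcome. All case texts here are invented.

from collections import Counter

def word_counts(texts):
    """Aggregate lower-cased word frequencies across a set of texts."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def predict(case_text, violation_counts, no_violation_counts):
    """Predict the outcome whose past-case vocabulary overlaps most."""
    words = case_text.lower().split()
    v_score = sum(violation_counts[w] for w in words)
    n_score = sum(no_violation_counts[w] for w in words)
    return "violation" if v_score >= n_score else "no violation"

violation_cases = [
    "detainee held without trial and subjected to degrading treatment",
    "prolonged solitary confinement amounting to inhuman treatment",
]
no_violation_cases = [
    "complaint dismissed as manifestly ill-founded on the evidence",
    "conditions of detention found adequate on the evidence",
]

v_counts = word_counts(violation_cases)
n_counts = word_counts(no_violation_cases)
print(predict("applicant subjected to degrading treatment in detention",
              v_counts, n_counts))  # violation
```

The gap between a sketch like this and a 79% accuracy rate on real judgments lies in the feature engineering and training data, but the underlying principle, pattern-matching over the language of past decisions, is the same.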
Regulations on AI
Regulations and laws are already changing around AI. Calo argued that AI-specific regulations will emerge, but that they will likely be not significant reforms so much as a continual process of small steps applying to multiple areas, including 'consumer protection, privacy and tort liability.' These regulations may subsequently have to be adjusted depending on the benefit or detriment that particular AI systems have in our lives. As the Law Society of England and Wales has emphasised, AI is still in the early stages of its development; it therefore suggests that regulation should remain limited until the forms AI takes, and the potential ramifications of its use, are better understood. Similarly, Stilgoe suggested that we first need to understand emerging technologies before imposing appropriate regulations.
However, businesses would like clarity on the regulation of AI's use; consequently, as the Financial Times reports, strict regulations on AI are desirable. The LSG suggests that AI systems must be subject to strict liability, which will hold them accountable. This is consistent with IBM's ethical principles on AI, which indicate that holding AI accountable is crucial for ethical standards. Strict liability, combined with the AI ethical standards developed by IBM, will ensure that AI can thrive in all areas, including the legal profession, while the regulations remain sufficient to prevent the technologies from replacing humans. Strict liability would apply where the AI has caused harm to individuals and it is in the interests of justice to hold the coders who created the AI accountable. Even in unexpected scenarios where the coders could not have anticipated the actions of the AI, it is only fair that this is so. Nor need this deter innovation: if the coders conduct their operations ethically, there should be no reason for the AI to act unethically.
AI is still in its infancy, and there are currently few regulations governing its use. The AI Principles developed by the Organisation for Economic Co-operation and Development (OECD), to which the UK is a party, set out five values-based principles for the responsible stewardship of trustworthy AI. Firstly, AI ought to benefit people and the planet by driving inclusive growth, sustainable development and prosperity. Secondly, AI systems ought to be designed to respect the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards, for instance allowing human intervention where required, to ensure a fair and just society. Likewise, Article 22(1) of the General Data Protection Regulation (GDPR) provides that decisions should not be solely automated, and Article 22(3) provides that a data controller shall implement suitable safeguards, including the right to human intervention. Calo recognised that the EU's GDPR is important in the regulation of AI, since through it citizens can acquire information about AI-based decisions affecting them. He rightly identifies that public opinion is significant here: if people, as citizens or consumers, voice their distress about the administration of AI, the reputation of companies could suffer as they attempt to build profitable and respectable businesses, "or by governments responding to those public pressures." Thirdly, accordingly, the OECD suggests that there should be transparency and proper disclosure around AI systems, so that the public understands AI-based outcomes and can challenge them. Fourthly, AI systems shall function in a robust, secure and safe way throughout their lifetime, and potential risks should be continually assessed and managed.
Lastly, organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles. These recommendations are crucial and will likely influence numerous government regulations. As the OECD has emphasised, although its recommendations are not legally binding, they are highly influential; the OECD's power to shape the decisions of other organisations and governments is evident from the G20's endorsement of its suggestions. Additionally, as Calo asserted, just as with any disruptive technology, the government has a duty to regulate AI in the public interest and to ensure that AI's costs and benefits are evenly distributed across society. The common theme in all of these suggestions is that AI should be used in a way that is beneficial and in the public interest.
Governments will need to play a central role in ensuring that AI benefits the legal profession. As Stilgoe implies, we cannot allow powerful private companies to create emerging technologies without limit or regulation. Similarly, Google suggests that Governments take the GDPR as the foundation for ensuring safety, privacy, fairness and accountability. The OECD has also urged Governments to enable public and private investment in research and development to stimulate innovation in trustworthy AI; to foster accessible AI ecosystems with the digital infrastructure and technologies needed to share data; to ensure a policy environment that allows deployment of trustworthy AI; to equip people with AI skills and support workers in adapting; and to cooperate across borders and sectors towards responsible stewardship of trustworthy AI. Governments should implement these measures promptly: they are not regulations that tend to hurt innovation, but merely ensure that AI created from that day onwards is reliable. If the Government implements fair and transparent measures for the development of AI, it is moving in the right direction towards beneficial and reliable AI.
Overall, AI is already influencing the legal profession, and an even bigger impact is likely in the future. Whilst it is highly improbable that AI will replace human lawyers and judges in the near future, owing to its lack of interpersonal skills, AI has already done impressive work, such as LawGeex's win in a challenge against human lawyers. The use of AI technologies is currently beneficial because it helps human lawyers perform their mechanical, everyday functions more efficiently, cost-effectively and accurately. As Dr Aletras emphasises, "we don't see AI replacing judges or lawyers, but we think they'd find it useful for rapidly identifying patterns in cases that lead to certain outcomes." These powerful incentives will drive the mass-market success of AI in the legal profession. However, the use of AI will only be beneficial if fair and transparent regulation is imposed by the Government to ensure that AI is trustworthy, ethical and deployed in a way that prevents the replacement of human lawyers.
[i] Addleshaw Goddard Guest Lecture, ‘Legal Technology’ (Newcastle University, Law School Lecture Theatre, 23 October 2019)
[ii] John Morison and Adam Harkens, ‘Re-engineering justice? Robot judges, computerised courts and (semi) automated legal decision-making’ (2019) 39 Legal Studies 619; R. Susskind Tomorrow’s Lawyers: An Introduction to Your Future (Oxford: Oxford University Press, 2nd edn, 2017)
[iii] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 1st edn, 2014)
[iv] R Susskind Tomorrow’s Lawyers: An Introduction to Your Future (Oxford: Oxford University Press, 2nd edn, 2017) 121