Use of Artificial Intelligence (AI) in the Field of Law

Arizona Law Journal of Emerging Technologies
Volume 6 Article 5, 04-2023

Maoyu Wang[*]

 

Abstract

The Arizona Supreme Court has sought ways to innovate justice by lowering the cost of legal services. Artificial intelligence (AI) appears to be a possible solution. Using AI in the field of law may help decrease the cost of legal services. Additional benefits include saving lawyers' time and helping lawyers better communicate with their clients. However, an AI tool inherently comes with shortcomings due to the way it operates, e.g., deep learning. Because deep learning is a black box even to programmers and developers, AI carries unknown risks, especially in lawsuits. Although many AI tools exist on the market and the public can envision the future of their applications, legal disputes may arise when those unknown risks become realities. An AI tool may fail to give optimal outputs or may give biased outputs, resulting in claims like ineffective assistance of counsel or legal malpractice. Apart from the legal ambiguities surrounding the use of AIs, the client's interest is not best served by additional litigation. This Note proposes ways for lawyers to mitigate their liabilities by contracting for indemnification clauses with AI program providers. As an alternative to suing lawyers and adding legal disputes, the client may seek insurance against unknown risks caused by AIs. State supreme courts may gradually expand permitted uses of AI while protecting clients through express written consent. Lastly, state legislatures also have an interest in passing laws protecting citizens' data privacy.

Introduction − Why Should AI Matter in Law

 

The Arizona Supreme Court has taken many steps to accelerate innovation and increase access to justice, including allowing non-lawyers to provide legal services[1] and allowing businesses to invest in legal entities.[2] The Arizona Supreme Court's efforts might decrease the costs of legal services significantly.[3] If someone can provide legal services without the burden of going to law school, the cost of that service should come down.[4] But what about a robot? Won't legal services become extremely cheap if the Court allows robots to replace human legal service providers? In the banking industry, for example, AI programs, not humans, answer customer service calls. Can we replace lawyers and paralegals with robots?

Meanwhile, AI technology is experiencing rapid growth and is widely used in all facets of daily life,[5] which further raises questions about its potential use in the field of law. However, the use of AI programs in law faces many barriers. For example, there are questions as to whether the technology is mature enough to use. Another problem is that using AIs comes not only with benefits like low costs, but also with challenges, similar to the problems[6] that develop from allowing paraprofessionals to provide legal services and non-lawyers to own law firms.

This Note explores the current and potential future uses of AI in the field of law. In addition, it considers the benefits and shortcomings of using AI programs to replace human legal services. The Note also reviews existing laws and regulations that can affect the use of AI in providing legal services and further explores the possibilities of how those laws may develop. Lastly, this Note concludes by suggesting steps courts can take to protect clients and lawyers.

Part I − History and Technology Overview of AI

The first robotic machine dates to ancient times.[7] As early as 400 B.C., making things fly became a reality when a Greek inventor, Archytas, introduced a flying wooden pigeon to the world.[8] Since then, humans have been obsessed with making non-living things move, including with the first human-like robot in the first century A.D.[9] A wide array of engineering designs have appeared, but these machines were all designed to perform routine tasks with the same mechanical methods.[10] Not until the 20th century did engineers start to consider automation.[11] The concept of AI emerged when Alan Turing published a paper about mathematical logic that mimics how humans make decisions with information.[12] In the paper, he suggested a machine with unlimited memory and a mechanism that can read from and write to that memory.[13] Turing explicitly proposed a machine that could think.[14] Later, the general public came to refer to Turing's idea as the universal "Turing machine," which is capable of improving itself with the information it obtains.[15] In 1955, Allen Newell, Cliff Shaw, and Herbert Simon wrote the first AI computer program, Logic Theorist, and later presented it at the first AI conference, the Dartmouth Summer Research Project on AI.[16]

At the end of the twentieth century, Deep Blue, an AI program, beat then-world chess champion Garry Kasparov in a six-game match.[17] While Deep Blue made specific chess moves independently, programmers trained Deep Blue to make independent decisions and could review how Deep Blue made those decisions.[18] With training and data, Deep Blue at least partially found the optimal plays at certain chess positions.[19] And, because its decision-making processes are subject to review, Deep Blue is said to be an explainable AI.[20]

Like chess, Go (or weiqi) requires two players to move pieces on the game board in turn.[21] Originating in China, Go has been played by many for thousands of years.[22] Go boards are made up of a 19 by 19 grid.[23] Go is far more complicated than chess since there is a vast number of possible moves at each position.[24] Go's complex nature long prevented AI programs from finding an optimal strategy.[25] Furthermore, Go requires understanding the values of different moves, which is difficult to quantify with mathematical models.[26] Success in Go necessitates that players have an "intuition" for the correct moves.[27] AlphaGo, developed by Google's DeepMind, used a different approach than Deep Blue to beat then-world champion Go player Ke Jie.[28] DeepMind programmers used a neural network to build AlphaGo, allowing AlphaGo to play against itself repeatedly until it found the optimal strategy for each position.[29] Neural networks use multiple layers of information to predict an output when given an input.[30] Because AlphaGo's neural network was nonlinear, programmers could not review its decision-making.[31] AlphaGo's decision-making is therefore said to be a black box.[32]

Both Deep Blue and AlphaGo employed machine learning,[33] an advanced type of AI.[34] However, the neural network approach AlphaGo employed was a significant advancement.[35] It is important to understand the basic terminology of AI (machine learning, neural network, etc.) to further this discussion, as current and possible future AI uses in law are tied to these different areas of AI. AI has three subsets: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI).[36] In general, programmers refer to ANI as weak AI because ANI is designed to perform only specific, narrow tasks.[37] For example, AlphaGo and Deep Blue are weak AIs.[38] Programmers refer to AGI and ASI as strong AI.[39] Strong AIs perform tasks the way humans generally do.[40] Specifically, AGI can perform tasks and have intelligence similar to humans, while ASI is smarter than humans.[41] Currently, programmers have not successfully constructed AGI or ASI.[42]

Within the field of AI, machine learning is a subset.[43] Machine learning programs use mathematical formulations to perform tasks and are coded to improve their ability to perform those tasks when given additional information.[44] For example, online music services often use machine learning algorithms to process information about the songs someone listens to; the music service will then make recommendations based on the user’s preferences.[45]
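
To make the music example concrete, consider the following minimal, hypothetical sketch; the genre names, song titles, and simple counting approach are illustrative assumptions, not any real service's method:

```python
# A hypothetical sketch: each song played updates the listener's genre
# counts (the "model" improving as it receives more information), and the
# service recommends songs from the most-played genre. All data is made up.
from collections import Counter

listening_history = ["jazz", "jazz", "rock", "jazz", "classical"]
catalog = {
    "jazz": ["So What", "Take Five"],
    "rock": ["Black Dog"],
    "classical": ["Clair de Lune"],
}

preferences = Counter(listening_history)      # tallies improve as more songs are played
top_genre, _ = preferences.most_common(1)[0]  # the listener's most-played genre so far
print(f"Recommended from {top_genre}: {catalog[top_genre]}")
```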

Machine learning is further divided into supervised machine learning, semi-supervised machine learning, and unsupervised machine learning.[46] Recall that machine learning like Deep Blue seeks to obtain an optimal solution.[47] The biggest difference between the three is human intervention: supervised machine learning uses labeled data sets containing inputs and outputs.[48] Supervised machine learning requires the teacher to label the outputs as desired.[49] For instance, imagine a function f(x) with x as the independent variable (input) and y as the dependent variable (output), where the function is meant to double the input.[50] The teacher inputs 5 as x. Since the teacher wants f(x) to double the input, the teacher will label the output as 10.[51] This way, the teacher corrects the output to the desired value and allows the code to learn to calculate the desired value.[52] Conversely, unsupervised machine learning does not contain the desired value y for each x.[53] Semi-supervised machine learning sits between the two, having only some outputs labeled and others unlabeled.[54]
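
A short Python sketch can illustrate the doubling example above. The training loop below is a generic supervised learning routine (gradient descent on a single weight), offered only as an illustration of how teacher-labeled pairs teach a model; it is not drawn from any particular system:

```python
# The "teacher" supplies labeled pairs (x, y) with y = 2x. Gradient descent
# adjusts a single weight w in the model f(x) = w * x until the model's
# outputs match the teacher's labels.
training_data = [(1, 2), (3, 6), (5, 10), (8, 16)]  # teacher-labeled (input, output)

w = 0.0                  # the model's initial guess
learning_rate = 0.01
for _ in range(1000):
    for x, y in training_data:
        error = w * x - y                # how far the output is from the label
        w -= learning_rate * error * x   # nudge w to shrink the error

print(round(w, 3))  # converges near 2.0: the model has "learned" to double
```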

Supervised machine learning often employs a neural network for deep learning.[55] Simply put, neural networks mimic the human brain with different layers and nodes: the nodes, like neurons, in the prior layer transmit information to the nodes in the next layer.[56] Only when a node in the prior layer receives a signal higher than the needed threshold will the node send a signal to the next layer, in the same way that neurons fire and signal information.[57] This way, only the signals exceeding the threshold are received by the next layer, acting like a filter.[58] Eventually, by controlling what type of signal passes through each layer, users can arrive at the desired signal at the output level.[59] Lastly, deep learning is a type of neural network that has more than three layers.[60] The middle layers are all called hidden layers.[61]

[Figure: diagram of a neural network showing an input layer, hidden layers, and an output layer.][62]
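
The following toy Python sketch illustrates the threshold-and-filter behavior described above. The weights and thresholds are arbitrary illustrative values, not taken from any production network:

```python
# A toy forward pass through a small network with threshold activation:
# a node "fires" (passes its signal on) only when its weighted input
# exceeds the threshold, like a neuron.
def fires(signal, threshold=0.5):
    return signal if signal > threshold else 0.0   # the "filter"

inputs = [0.9, 0.2]                                # input layer
hidden_weights = [[0.8, 0.1], [0.3, 0.9]]          # one hidden layer, two nodes
output_weights = [0.9, 0.7]

hidden = [fires(sum(w * x for w, x in zip(ws, inputs))) for ws in hidden_weights]
output = fires(sum(w * h for w, h in zip(output_weights, hidden)))
print(hidden, output)  # the second hidden node stays silent: its signal fell below 0.5
```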

In the upcoming Parts, these terms are linked to different uses of AI in the field of law, along with the benefits and shortcomings of each use.

Part II − Current & Proposed Uses of AI in the Field of Law

Although mechanical machines such as autopilots and self-driving cars use AI,[63] machine learning is rapidly developing.[64] Many industries entrust jobs such as stock trading and marketing to machine learning,[65] transforming our world. Much as traders use AI programs in stock trading, lawyers have started using software that can perform certain legal tasks.[66] These include legal due diligence, litigation prediction, data trend analytics, document generation, intellectual property portfolio reviews, automatic billing,[67] and e-discovery,[68] among others.

a. Legal Due Diligence

In mergers and acquisitions deals, lawyers review contracts and documents to gain insight into the legal framework and financial status of a business.[69] After reviewing all relevant documents, lawyers assess any potential legal risks and assist the buyer with appropriate legal forms and closing structures.[70] In contrast with financial due diligence, legal due diligence involves reading, tracking, and comprehending written natural language and reasoning, rather than objective numbers; additionally, document review can involve thousands of contracts, which increases the difficulty for lawyers.[71]

The traditional method for lawyers to review documents involves reading documents in a physical data room, but software now helps lawyers to collaborate with local counsel in a virtual data room.[72] AI further eases the process by extracting data and sorting documents into different groups, such as jurisdictions and contract types.[73] The automation helps lawyers search for different things quickly, like contract clauses, but lawyers must still review the information themselves.[74] This automation speeds up the process for lawyers to review documents, but does not replace lawyers’ legal analysis of documents.

b. Litigation Prediction

Clients frequently ask lawyers whether they will ultimately win at trial.[75] Lawyers also consider whether the case will succeed at trial when assessing settlement options.[76] Researchers are currently using AI to predict case outcomes.[77] Apart from the legal merits of the case, which lawyers consider, machine learning researchers use other parameters like case backgrounds, variables pertaining to specific judges, and chronological variables.[78] In predicting U.S. Supreme Court cases, researchers have achieved over 70% accuracy.[79]

Much like using AI to predict case outcomes, clients could also seek to predict whether a lawsuit might arise. Researchers have therefore also used machine learning to predict the likelihood of litigation.[80] In evaluating the likelihood of litigation, researchers consider parameters pertaining to specific lawsuit types, such as a company's industry membership, growth, and size in securities class action lawsuits.[81] With these research results, lawyers could potentially use such predictions to aid their own legal analysis of litigation outcomes or likelihood.[82]
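
As an illustration only, a prediction model of the kind described above might combine case features into a probability with a logistic function. The features and weights below are hypothetical stand-ins for the parameters the cited research describes (case background, judge-specific variables, company size, etc.), not the researchers' actual model:

```python
# A hedged sketch of combining case features into a win probability.
import math

def predict_win_probability(features, weights, bias=-0.5):
    score = bias + sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-score))   # logistic function: score -> probability

# e.g., [favorable precedent, judge's past ruling rate, company size index]
case_features = [1.0, 0.65, 0.3]
learned_weights = [0.9, 1.2, -0.4]      # a trained model would learn these from data
print(f"Predicted chance of winning: {predict_win_probability(case_features, learned_weights):.0%}")
```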

c. Data Trend Analytics

Instead of predicting a case's outcome, data trend analytics AIs highlight pertinent information and provide lawyers with useful trends in certain judges' rulings on particular issues by reading case law.[83] The information does not stop at judges' rulings; the machine learning algorithm also helps lawyers find other relevant information like a party's background, opposing counsel's background, and circuit rulings.[84]

By processing natural language in case law, AI also assists lawyers in legal research by improving the results of each search.[85] The AI improves searches by recognizing the relevant laws and cases that may be of use to support an argument.[86] The AI also predicts relevant cases by understanding what lawyers previously have used for certain fact patterns.[87] This type of AI assistance provides information like ruling trends and relevant cases for lawyers to consider. However, the lawyers still produce their own legal analysis.

d. Document Generation

AI also has the ability to assist lawyers in creating legal documents. Lawyers can use the machine learning software to fill out contracts by inputting data.[88] For example, the algorithm can complete nondisclosure agreement (NDA) templates.[89] The software also streamlines the NDA process by allowing parties to sign the agreements electronically.[90]

Another use of AI is to generate legal writing. Since machine learning can understand written natural language[91] and find patterns in different writings,[92] AI can process the text of legal memoranda and find patterns. Users can then use the AI to generate legal memoranda by inputting certain information, like an NDA.[93] Even if the AI cannot generate an entire memorandum of law, it may still be able to generate parts of it, such as the introduction or the conclusion.

e. Intellectual Property Portfolio Review

AI can also aid inventors in determining whether their invention may be patented. An invention must be novel and not available to the public to be patentable, with a few exceptions.[94] Before an applicant applies for a patent with the United States Patent and Trademark Office (USPTO), the applicant often conducts a patentability search.[95] The goal of this search is to determine whether the invention has been patented or is otherwise available to the public.[96] AI can assist patentability searches by analyzing the patent application and looking through relevant technical literature.[97] Additionally, the software can help patent attorneys draft their patent applications by proofing them for errors.[98]

f. Automatic Billing

Additionally, AI may help lawyers with the day-to-day operations of their practice. For example, AI can help automate the billing process.[99] One billing program can track e-mails and other lawyer activities and include them in invoices.[100] Billing can also be centralized, and the software can adjust billing items automatically.[101] Billing assistance helps reduce billing costs and increase accuracy.[102]

g. E-Discovery

In the discovery stage of a lawsuit, lawyers obtain relevant nonprivileged information.[103] Lawyers review documents and determine whether they are relevant and privileged.[104] Machine learning algorithms can analyze documents to assess whether they qualify as relevant and nonprivileged, or the algorithms can prompt lawyers to review certain documents more closely.[105] Relevance involves the probative value of certain facts,[106] which is a legal conclusion. Similarly, the determination of privilege is also a legal conclusion.[107] Because decisions about relevance and privilege involve legal conclusions, the AI inevitably engages in legal analysis for the lawyer. When merely prioritizing documents for review, however, the software leaves the final decisions to the lawyer, operating more like the legal due diligence AI discussed previously.

Part III − How Using AI Benefits Litigants in the Field of Law

With the United States at the lower end of the spectrum of accessibility to the legal system,[108] the Arizona Supreme Court has innovated in the legal field with efforts to decrease legal costs.[109] In hopes of decreasing legal costs, the Court abolished its version of ABA Ethics Rule 5.4[110] and allowed nonlawyers to provide some legal services.[111] Another way to decrease litigation costs substantially is to use AI in different areas of legal practice.[112] There are other benefits to using AIs in legal services that may eventually result in decreased litigation costs.[113]

One benefit of using AI is saving lawyers' time.[114] With modern computing power, AI can extract and analyze the same amount of information in a shorter time frame than lawyers can.[115] AI engines accomplish this by quickly searching for and identifying relevant information.[116] And by saving lawyers' time, AIs eventually decrease costs for clients and increase access to justice.[117] Additionally, when reviewing thousands of documents, such as in legal due diligence,[118] lawyers often must conduct tedious reviews that involve minimal legal analysis.[119] In conducting mundane readings, lawyers experience fatigue, boredom, and distraction.[120] AI engines, on the other hand, do not experience those feelings and perform more consistently throughout the entire task, which increases the quality of the work product.[121] Specifically, some software can maintain cross-references in documents without errors, maintaining language consistency.[122] The lawyer then saves time checking for errors, which saves costs for the client.[123] Moreover, when lawyers focus on work involving legal analysis and other higher-level tasks, they are happier with their work and experience less stress and burnout.[124] And freeing lawyers from some of this workload gives them more personal time to maintain their mental health.

As discussed above, researchers are constructing algorithms that can predict litigation outcomes with fairly high accuracy, as well as algorithms that can predict litigation risks.[125] When a party applies prediction algorithms, the lawyer can identify the chances of winning or the litigation risks early in the matter.[126] This, in turn, allows the lawyer to devise strategies to avoid lawsuits and risks;[127] or, if the lawyer decides that the risk of losing is sufficiently high,[128] the party may choose to settle. With a more optimized strategy, the client incurs lower litigation costs.[129]

With video conferencing becoming increasingly popular since the pandemic,[130] online court hearings are also booming.[131] When online court proceedings are held, the court can use AI that understands natural language, like the AI used in legal due diligence,[132] to record and transcribe the court record without the need for a stenographer.[133] When court proceedings involve foreign-language speakers, AI software can translate the spoken language in real time into a language that the court and parties can understand.[134] By managing and processing court proceedings with the support of technology, the court and the client can both save time and costs on court records.[135]

Beyond benefits that involve decreasing litigation costs, using AI software can also benefit litigants by allowing lawyers more time to communicate important issues to their clients.[136] Because AI completes tasks that are time consuming and low-level, lawyers are more available to engage in critical thinking about the case and to explore more creative strategies for the client.[137] Lawyers can then sufficiently communicate these strategies and explain the law to the client.[138]

Many benefits of using AI in the field of law are intertwined because most uses relate to performing some sort of low-level, mundane work.[139] Most of them can eventually decrease litigation costs by freeing lawyers for more creative work, a massive benefit for the client.[140] Even a benefit without a direct monetary component still helps lawyers be more available to their clients.[141] However, alongside such benefits, uses of AI may also cause issues for courts and litigants. The following Part discusses those issues.

Part IV − Potential Challenges with Applying AI in the Field of Law

Within a minute of taking off, the Captain and the First Officer on Ethiopian Airlines flight 302 noticed deviations between the left and right angle of attack (AOA) values.[142], [143] Shortly after, the Captain requested that the onboard autopilot (AP) be engaged.[144] About thirty seconds later, the Captain advised the First Officer that they were having flight control issues.[145] However, despite the AP being engaged, the aircraft nosed down four times without any commands from the pilots.[146] Eventually, the crew completely lost control of the aircraft and could not stop it from losing altitude. The aircraft crashed six minutes after taking off,[147] killing the entire flight crew and all of the passengers, 157 souls in total.[148]

A few days after this tragic incident, the aircraft model in question, the Boeing 737 Max 8, was grounded completely over safety concerns.[149] What differentiates this new variant of the Boeing 737 is its new control system, the Maneuvering Characteristics Augmentation System (MCAS).[150] The MCAS is an AI system that can push the aircraft's nose down; the new aircraft model gained a greater natural tendency to nose up due to the new location and shape of its engines.[151] The nose-up momentum can compromise the aircraft's safety if the AOA reaches the stalling angle.[152] However, Ethiopian Airlines flight 302's AOA sensor malfunctioned and transmitted incorrect values to the system.[153] The MCAS then forced the aircraft to nose down even though the aircraft was not at the stalling angle, eventually crashing the aircraft.[154] A failing AI can lead to severe results.

Similarly, when litigants lose at trial, they can suffer huge financial losses. When criminal defendants lose at trial, they may face severe legal penalties. Certainly, when lawyers use AI for motions or work products without proofreading, they run the risk of a failing AI, with severe consequences for their clients. But will there only be sunshine and laughter if AIs perform exactly how they should? Can an AI compromise a party's chance of winning without failing altogether? The rest of this Part explores the risks that remain even when AIs do not technically malfunction.

a. Deep Learning Presents Bias in Results

Recall that machine learning algorithms use information to improve themselves and output useful music preference predictions.[155] Similarly, legal research algorithms use general trends in legal arguments to make suggestions to lawyers.[156] When given data, machine learning algorithms improve at performing these tasks.[157] But when machine learning algorithms analyze data, whether extracted automatically or fed in by humans, biases can still exist or otherwise be created by the AI.[158] For instance, Google's targeted advertising exhibited gender bias when the algorithm showed high-paying positions considerably more often to men than to women.[159]

Biases in machine learning can be attributed to mainly three causes: problem framing, data collection, and data preparation.[160] For example, when customers' credit scores are at issue, the company must determine the goal it wants to achieve: whether to maximize profit or to maximize the rate of loan repayment.[161] In the legal research context, one example could be choosing whether to prioritize relevant case law or relevant fact patterns.

When it comes to data collection, machine learning can acquire inherent biases in essentially two ways: from unrealistic data sets or from data sets with existing biases.[162] For instance, in the case of predicting the behavior of the U.S. Supreme Court, the machine learning algorithm was optimized with cases from 1816 to 2015.[163] The algorithm is naturally worse at predicting any future Supreme Court cases in which Justice Gorsuch, Justice Kavanaugh, or Justice Barrett participates because those three Justices did not decide the cases being evaluated.[164], [165] Existing biases occur when the data set itself has biases.[166] Recall that data sets contain labels supplied by the algorithm's teachers.[167] When the teacher has certain biases, like an employer who hires candidates only from certain backgrounds over others, the teacher's existing biases can be transferred to the data sets.[168] Systemic bias can also act as a pre-existing bias in machine learning.[169], [170] For example, AI software assigning scores for the likelihood of criminal recidivism has exhibited racial bias.[171] Worse yet, machine learning algorithms may amplify pre-existing biases in the social system, resulting in worse inequalities or discrimination.[172]

The third cause of bias occurs during data preparation, when the teacher must consider and determine the exact attributes to input into the algorithm.[173] In the case of predicting Supreme Court decisions, the researchers considered attributes like the identity of the Justice and the identities of the parties.[174] However, research has shown that weather may affect judges' decisions.[175] A question then arises as to whether including temperature changes might increase the accuracy of the algorithm's predictions.[176]

These three kinds of bias are difficult to correct because they can be hard to detect in the AI's process.[177] In the case of gender bias in an employment hiring algorithm, the algorithm learned to favor wording more commonly attributable to men than to women, creating an inherent prejudice against women.[178] Additionally, deep learning algorithms often use data sets in pairs, one for improving capabilities and one for validating capabilities.[179] When the data sets contain pre-existing biases, the validating data set reaffirms the biases developed in the algorithm.[180] In problem framing, the teacher introduces inherent biases because the framing itself lacks social context.[181] For example, research about litigation risks conducted on securities class action lawsuits[182] may not accurately predict litigation risks for breach of contract lawsuits.[183] Lastly, a huge challenge for researchers is how to define fairness.[184] Many researchers have argued for different ways to define fairness mathematically, including categories like "predicted outcome" and "predicted and actual outcome[s]."[185] Researchers label the outcome with predicted results, in hopes of attaining "true" results by equalizing the rates of false positives and false negatives.[186] Social perception of fairness varies over time as society progresses, which further results in biases and inaccuracies, especially for older algorithms.[187]
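
As an illustration of one such mathematical fairness check, the following sketch compares false positive and false negative rates across two groups; the predictions and labels are fabricated solely for illustration:

```python
# Under this fairness definition, (near-)equal error rates across groups
# suggest fairness; unequal rates signal bias.
def error_rates(predicted, actual):
    fp = sum(p and not a for p, a in zip(predicted, actual))  # false positives
    fn = sum(not p and a for p, a in zip(predicted, actual))  # false negatives
    negatives = sum(not a for a in actual)
    positives = sum(a for a in actual)
    return fp / negatives, fn / positives

group_a = ([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 0])   # (predicted, actual)
group_b = ([1, 1, 1, 0, 0, 1], [1, 0, 0, 0, 0, 1])

for name, (pred, act) in [("A", group_a), ("B", group_b)]:
    fpr, fnr = error_rates(pred, act)
    print(f"Group {name}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```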

b. Data Accessibility Limits the Development of Machine Learning Algorithms

AI improves itself through data training, which requires a large number of high-quality data sets.[188] Data collection presents two main challenges to AI developers: maintaining the amount and quality of data and obtaining data sets ethically.[189] With quality data access rates as low as 15 percent, some users trying to develop AI tools fail to advance their prototypes out of the testing stage.[190] Meanwhile, policy concerns require developers to balance accessibility against ethical access, such as requirements that data users obtain consent.[191] Similarly, data accessibility presents challenges to developing AI in the field of law. For example, to improve legal research for lawyers, AI uses data from other lawyers' research results,[192] which is limited to writings available to the public. Additionally, much as AI raises privacy concerns for healthcare data,[193] collecting lawyers' research keywords or document views may reveal strategies to the data collection engine, raising confidentiality concerns.

c. Algorithms Cannot Perform Tasks Intelligently in Different Situations Like Humans Can

In 2021, a New Jersey district court in a patent infringement action awarded the plaintiff enhanced damages of three times the original $125 million found.[194] The court awarded the enhancement in accordance with the judge's discretion to "increase the damage up to three times the amount found."[195] However, AIs lack sophisticated capabilities like discretion because weak AIs cannot perform tasks the way humans do.[196] Although some argue for the benefits of AI judges strictly following the law, AIs are not competent at exercising discretion when the law so requires.[197] One argument in favor is that AIs can use factors to grant relevant discretion in exceptional cases. However, recall that the mechanism of AIs is to use prior data to predict future outcomes. Exceptional cases, by definition, are hard to come by, resulting in a lack of relevant, quality data. This would compromise an AI's capacity for discretion. AIs do not have moral values and cannot comprehend the concept of justice.[198] Therefore, AI programs cannot simply execute program code built on mathematical logic and hope for a fair judicial outcome.[199] Additionally, AI's lack of comprehension may hinder legal applications other than judicial decisions. One example may be the intellectual property patentability search for compliance with the novelty requirement of a patent.[200] AIs cannot comprehend the innovative value of a new invention, and thus will fail in determining a patent's novelty.[201] Another example arises under the proposed use of AI for memorandum composition. In composing a statement of facts, the writer must consider both legal relevance and emotional relevance when including facts.[202] However, AI programs cannot properly evaluate the emotional significance of facts simply because AIs cannot comprehend emotions.[203]

Part V − Relevant Rules and Laws in Using AI to Provide Legal Services

a. Using AI Must Comply with Professional Responsibilities for Lawyers

As this Note has discussed, AI programs are essentially taking over some portion of lawyers' work, if not performing it entirely.[204] To survey relevant rules and regulations, one must consider the regulation of the legal profession. Although lawyers must now pass a bar exam, which tests legal analysis, ethical responsibilities, and moral responsibilities, lawyers have not always been regulated this way.[205] As time passes, the legal profession has become more and more regulated.[206] Although state supreme courts have their own regulations of lawyers,[207] states adopt the American Bar Association's Model Rules of Professional Conduct (ABA Model Rules) or at least model their own rules after them.[208] Here, AI can be considered either a lawyer-equivalent or a nonlawyer-equivalent providing legal services. Either way, the AI user must still comply with the ABA Model Rules.[209]

i. ABA Model Rules Regarding Competency

Mainly three ABA Model Rules may be implicated when lawyers decide to use AI programs to assist legal work: Model Rules 1.1, 1.4, and 1.6.[210] Model Rule 1.1 requires competent legal services when a lawyer represents a client.[211] Specifically, the technologies a lawyer uses, here AI programs, also fall under this rule.[212] For lawyers, a violation of competency can be easier to discern, e.g., a lack of relevant legal knowledge.[213] But more questions arise when we try to define competency for a computer algorithm. Under California's competency rules, a lawyer "shall not intentionally, recklessly, with gross negligence or repeatedly fail to perform legal services with competence."[214] As this Note previously pointed out, AIs do not have humanlike mental states like intent.[215] The question follows: does the language "intentionally, recklessly, with gross negligence" exempt AI programs unless repeated failures exist? Also, must an AI program know every case law and statute to be competent? Or will an AI program that knows enough case law and statutes be competent? Perhaps the standard of competency for lawyers will be instructive in determining competency for AIs.

One way to consider the standards of competency is through the remedies for incompetence. When competency issues exist in a legal representation, a criminal defendant may claim ineffective assistance of counsel.[216] The state of Florida provides a test for ineffective assistance of counsel:

First, the claimant must identify particular acts or omissions of the lawyer that are shown to be outside the broad range of reasonably competent performance under prevailing professional standards. Second, the clear, substantial deficiency shown must further be demonstrated to have so affected the fairness and reliability of the proceeding that confidence in the outcome is undermined.[217]

First, to highlight the parts that may apply to AI programs, the case law requires the claimant to specifically identify the incompetent act, whether an erroneous action or an omission.[218] Second, whether the act was incompetent is assessed under a standard of reasonableness for lawyers.[219] "Not every action or omission" will be ineffective assistance of counsel.[220] When the claim is based on an action or omission that was a legal tactic, the claimant fails to prove ineffective assistance of counsel.[221] Recall that an AI program often uses deep learning to make decisions, which is a black box.[222] While the lawyer and the court cannot audit the decision,[223] an audit of the decision-making process is likely irrelevant; the decision is reviewed within the profession's scope of reasonableness.[224]

Although competency requirements are not exactly the same as legal malpractice, legal malpractice lawsuits commonly arise from a lawyer's incompetency in civil matters.[225] A lawyer is subject to legal malpractice if the plaintiff can prove: "(1) that an attorney-client relationship existed, which placed a duty upon the attorney to exercise reasonable professional care, skill and knowledge in providing legal services to that client; (2) a breach of that duty; and (3) resultant harm legally caused by that breach."[226] Similar to the claim of ineffective assistance of counsel, the legal malpractice claim requires a reasonableness inquiry into the breach of the attorney's duty.[227] However, unlike claimants raising ineffective assistance of counsel, who can easily identify an omission, e.g., waiving closing argument,[228] claimants in a civil lawsuit may find a breach by an AI program hard to identify. Suppose an AI program is used for translation during a video deposition,[229] and the attorney is deposing a foreign witness under Rule 28(b) of the Federal Rules of Civil Procedure.[230] The AI program makes a crucial mistake in hearing the spoken natural language and translates it incorrectly. The claimant may never discover that a crucial piece of information is missing because all the claimant sees is the English version of the foreign language. In such a case, the difficulty for a claimant to raise a legal malpractice claim increases significantly.

Lastly, when lawyers use any AI program, they implicate the technology component of Model Rule 1.1.[231] Training for lawyers needs to include curricula on using AI programs, whether in law schools or in law firms.[232] Additionally, lawyers should be aware of the limitations of using AIs, like biases, because these limitations can affect a program's accuracy.[233]

ii. ABA Model Rules Regarding Communication

The communication rule requires the attorney to keep the client informed about any material issue.[234] Specifically, the lawyer must explain why certain decisions are being made.[235] After being trained in basic AI knowledge, lawyers will not have problems explaining how they have used AI and how AI works.[236] However, on some occasions lawyers may not be equipped to explain material matters to the client. For example, in deciding whether to settle, the lawyer might use an AI program to predict the outcome of a trial.[237] The lawyer cannot explain why the AI program predicts that the optimal choice is to settle because the AI's operation is a black box.[238] Lawyers can always resort to explaining how accurate the AI's results are. But it is unclear whether a lawyer fulfills professional responsibilities merely by disclosing that the lawyer uses an AI with a certain accuracy. This also requires the legal community to set a requirement for how accurate an AI must be before a lawyer uses it, which will ultimately be a specific percentage. The downside is that leaving a client's success to an AI's percentage accuracy could be inhumane. If an AI is only accurate a certain percentage of the time, it is bound to fail at some point, and the unlucky client will suffer injustice from that failure.

iii. ABA Model Rules Regarding Confidentiality

Model Rule 1.6 prohibits lawyers from revealing confidential information regarding the representation of a client.[239] By using AI in legal research or other fact-pattern-related legal work, the lawyer risks violating the client's confidentiality through data sharing.[240] Data sharing happens when user information is shared with developers to improve the AI algorithm.[241] The client may complain to the bar regarding confidentiality issues when the lawyer shares the client's information.[242] But what about when AI companies share the data? The Federal Trade Commission (FTC) has standing to sue a company when the company unfairly or deceptively uses users' data.[243] The FTC has sued data brokers for selling data and other companies for misusing users' data.[244] While users in the United States must wait for the FTC to take action, the European Union allows individuals to seek remedies against data privacy violations.[245] The EU's General Data Protection Regulation (GDPR) prohibits data processing that is unfair and that violates data security, making it more convenient for consumers to seek remedies.[246]

b. Laws for Self-Driving Cars May be Instructive

Existing laws relevant to using AI programs to provide legal services are limited to the ABA Model Rules and legal liability under malpractice.[247] Although no law directly governs the use of AI programs to provide legal services, laws governing the use of AI in self-driving cars do exist.[248] In 2015, Arizona initiated a program to test and develop self-driving cars on certain designated roads.[249] Gradually, Arizona allowed wider applications of self-driving cars, including deliveries.[250] Effective September 2021, Arizona's self-driving car statute allows statewide operation of self-driving cars, with a few restraints on operating without a human.[251] Florida, having enacted its self-driving law earlier than Arizona, allows self-driving cars to operate without a human driver.[252]

On the federal level, the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act was introduced in 2017 but stalled in Congress.[253] Instead, the Department of Transportation issued multiple guidelines on autonomous vehicles.[254] The guidelines cover different topics, including safety, research, and integration.[255] With proper research, which will likely come from legal AI program developers like Thomson Reuters and Wolters Kluwer,[256] AI tools may become sufficient to perform like a lawyer or judge.[257] Although AI programs are currently used in conjunction with human lawyers,[258] state supreme courts should supply guidelines and promulgate rules to regulate AI use. In the next Part, this Note explores possible regulatory efforts and other sources available to support the development and use of AI in the field of law.

Part VI − What Happens When Things Go Wrong: Possible Legal Propositions for Regulating and Allocating Liabilities Arising from Using AI Tools in the Field of Law

One essential purpose of law generally is to determine liabilities between parties to put an end to a dispute.[259] The allocation of liabilities has been a question for AI use.[260] Since the AI program itself cannot bear liability,[261] courts must find other sources of liability for remedies, such as legal malpractice lawsuits against the lawyer. The following are a few ways courts can allocate liabilities and governments can regulate disputes arising from legal AI misuse.

a. Ways the Lawyer Can Limit Liabilities and Feel Safe to Use AI Programs in Lawsuits

The first method may be a liability waiver, like those individuals sign before participating in virtually any activity.[262] Liability waivers are agreements to release a party from negligence claims.[263] In the case of using AI tools in lawsuits, one consideration is to have one party contract away another party's liability in a lawsuit, e.g., liability for legal malpractice.[264] One proposition is to limit the lawyer's liability if the malpractice arises from using AI tools. Between a lawyer and a client, the ABA Model Rules prohibit a lawyer from contracting away the lawyer's malpractice liability.[265] However, the lawyer can limit potential liability if the client is "independently represented in making the agreement."[266] A lawyer who represents a client in an agreement limiting malpractice liability commonly finds it difficult to "evaluate the desirability of making such an agreement before a dispute has arisen."[267] This evaluation is significantly more difficult given deep learning's black box operation.[268] Furthermore, signing a liability agreement creates more complications. The agreement limits the liability of the lawyer representing the client in the underlying legal dispute, but it does not limit the liability of the lawyer representing the client in making the agreement. When malpractice occurs in the original legal dispute, the client may still sue the latter lawyer for malpractice in advising on the agreement. Essentially, the agreement does not effectively limit malpractice liability; it merely transfers the malpractice liability downstream.

Although the lawyer cannot effectively waive their own malpractice liability, the ABA Model Rules are silent on agreements between the lawyer and the AI program provider.[269] Between the lawyer and the AI program provider, the lawyer can mitigate liability through an indemnification clause in a contract. An indemnification clause requires a party to protect the other from liability, or from litigating liability, arising out of harms to a third party.[270] The lawyer can choose to use AI programs only if the AI program provider indemnifies the lawyer when a malpractice claim arises. But even though the AI program provider can indemnify the lawyer in malpractice litigation, the indemnification clause becomes rather useless in bar complaints. Indeed, sometimes additional counsel is necessary for the lawyer, and the indemnification clause mitigates that cost.[271] What the indemnification clause cannot mitigate is the harm to the lawyer's practice. As a result of a bar complaint, the lawyer can be suspended, publicly reprimanded, or even disbarred.[272] When misconduct is rather new, like misusing AI tools, the Board on Professional Responsibility (the Board) reviews the conduct extensively.[273] The Board considers the lawyer's moral character before issuing severe penalties like disbarment.[274] The question then becomes whether using AI tools that achieve sub-optimal results warrants disapprobation of the lawyer's moral character. In immigration law, the concept of "moral turpitude" is not viewed as a legal standard.[275] "[T]he term [is] not clearly defined or definable" in law.[276] One way to consider moral character is whether the action leads other people to "[discern] morally trustworthy and untrustworthy people."[277] Bad morality tends "to bring harm to others in the future."[278] Furthermore, assessing someone's moral character requires considering the actor's "vantage point."[279] A technical tool's failure is not sufficient to call into question the user's trustworthiness, nor to confirm that the user has a tendency to harm others. If the lawyer does not know that the AI tool is deficient, then from a "vantage point" of ignorance the lawyer does not demonstrate bad morality. This may, however, raise questions about the lawyer's competency under the ABA Model Rules.[280] Yet if the lawyer does not know that the AI tool is deficient, the lawyer is likely free of competency issues because a reasonably competent lawyer would also have difficulty identifying the deficiency.[281]

The EU Commission's expert group has proposed a strict liability requirement (liability without a finding of fault) for AI program providers.[282] Similar to the indemnification clause discussed above, the Report on Liability for AI and Other Emerging Technologies suggests that indemnification be imposed on AI program providers.[283] Strict liability exists while the provider is updating the program.[284] State supreme courts or legislatures can impose strict product liability on AI program providers to increase regulation.

b. How Can the Client Be Protected?

As discussed above, the client can certainly sue the lawyer for legal malpractice and file a bar complaint. However, the client may not want to sue the lawyer and incur the burden and costs of litigation.[285] One way to solve this problem is insurance against sub-optimal performance by AI programs. The United Kingdom takes an approach that frees AI manufacturers from liability, assigning liability for insured self-driving vehicles to insurers and liability for uninsured vehicles to the vehicle owner.[286] Although the UK government does not mandate insurance for self-driving vehicles, the owner's best interests may still be served by being insured. This is analogous to legal malpractice insurance.[287] One way for the lawyer to mitigate liability is to carry insurance against potential AI-related malpractice lawsuits. Similarly, state supreme courts or legislatures may also mandate insurance for clients in cases where lawyers use AI programs. In that case, the insurer will be liable for the client's claim of the lawyer's malpractice.

c. Steps to Further Innovation: What Can Courts and Legislatures Do to Decrease the Cost of Justice

Like the ABA Model Rules[288] and the federal guidelines for autonomous vehicles,[289] the federal government is likely to issue guidelines specifically about AI use in the field of law. As the Arizona Supreme Court has an interest in decreasing barriers to legal services by promulgating rules on services by nonlawyers,[290] those rules can also cover services by AI programs. To research and establish precedent, for example, the Arizona Supreme Court may limit such services to certain courts or to certain claims with smaller stakes.[291] Although many states like Arizona have adopted the ABA Model Rules on communication in their state rules,[292] the communication rule's implications for AI use lack precedent in the courts. State supreme courts may introduce rules explicitly requiring lawyers to explain the risks associated with using AI tools.

Furthermore, states have an interest in protecting their citizens' privacy.[293] Although the FTC has standing to sue entities that misrepresent their data privacy policies, consumers must wait for the FTC to sue when a misrepresentation occurs.[294] For instance, Verizon recently changed its data privacy policy.[295] Under the revised policy, Verizon is not misrepresenting its privacy practices when it collects customer browsing history, so the FTC has no claim against Verizon. Among many Verizon policy updates, users are likely to miss the important privacy provision. Even if users read these provisions, most may not understand their legal significance. Therefore, state legislatures should enact data privacy protection laws to protect confidential information used by AI programs, allowing individuals to seek remedies for confidentiality breaches.[296]

Conclusion

Innovating the way legal services are provided can substantially decrease costs for clients. Using AI programs can be the next step in furthering this innovation. However, AI programs inherently contain inaccuracies, such as biases, and can output suboptimal results. When suboptimal results happen, legal issues arise, and allocating and mitigating liability become essential. Currently, however, the main legal remedies for a client are malpractice lawsuits and bar complaints, which are imperfect remedies at best. Complications and ambiguities specific to using AI programs under the ABA Model Rules persist, creating unknown risks for lawyers and clients. Further, the client and the lawyer also face challenges relating to privacy issues like data sharing and collection.

Facing these challenges, lawyers can mitigate or limit their liability by having an indemnification clause with AI program providers. Courts may recognize that a deficiency in AI tools does not warrant a severe bar penalty. On the clients' side, clients can seek remedies under legal malpractice or simply obtain insurance against unknown AI deficiencies. Furthermore, courts and legislatures may recognize strict liability when AI program providers are unsuccessful in mitigating AI errors.

To protect their citizens, states should actively take steps to gradually allow AI programs to be used under regulation. As with nonlawyer legal service providers, state supreme courts may promulgate rules to allow certain uses of AI programs for certain small claims or low-stakes legal services. The courts can also require the client's written consent, given after an explanation of the risks and benefits, whenever any AI program is involved. Lastly, state legislatures have an interest in protecting their citizens' privacy by passing laws limiting data collection and sharing.

      [*] Maoyu Wang is a J.D. Candidate and Tech Law Fellow at the University of Arizona James E. Rogers College of Law. The author gratefully thanks Professor Derek Bambauer, J.D., for guidance in preparing this Note.
      1. Ariz. Code of Jud. Admin. [hereinafter ACJA] § 7-208.
      2. ACJA § 7-209.
      3. Maya Steinitz & Victoria Sahani, You No Longer Have to be a Lawyer to Practice Law in Arizona. That’s Good and Bad, AZCentral (Feb. 6, 2021), https://www.azcentral.com/story/opinion/op-ed/2021/02/06/arizona-no-longer-restricts-law-lawyers-here-pro-con/4339871001.
      4. Id.
      5. Darrell West & John Allen, How Artificial Intelligence is Transforming the World, Brookings (Apr. 24, 2018), https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/.
      6. See Steinitz & Sahani, supra note 3.
      7. Eric Roberts, Robotics: A Brief History, Sᴛᴀɴꜰᴏʀᴅ, https://cs.stanford.edu/people/eroberts/courses/soco/projects/1998-99/robotics/history.html (last visited Jan. 24, 2022).
      8. Id.
      9. Id.
      10. Id.
      11. Id.
      12. Rockwell Anyoha, The History of Artificial Intelligence, Hᴀʀᴠᴀʀᴅ (Aug. 28, 2017), https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.
      13. Alan Turing and the Beginning of AI, BRITANNICA, https://www.britannica.com/technology/artificial-intelligence/Alan-Turing-and-the-beginning-of-AI (last visited Jan. 24, 2022).
      14. Alan M. Turing, Computing Machinery and Intelligence, 49 Mind 433 passim (1950).
      15. Alan Turing and the Beginning of AI, supra note 13.
      16. Anyoha, supra note 12.
      17. Chessgames, https://www.chessgames.com/perl/chessplayer?pid=29912 (last visited Jan. 24, 2022).
      18. Adam Rogers, What Deep Blue and AlphaGo Can Teach Us About Explainable AI, Forbes (May 9, 2019, 7:45 AM), https://www.forbes.com/sites/forbestechcouncil/2019/05/09/what-deep-blue-and-alphago-can-teach-us-about-explainable-ai/?sh=7f3a4fb052fd.
      19. John Menick, Move 37: Artificial Intelligence, Randomness, & Creativity, Mousse Magazine (2016), https://johnmenick.com/writing/move-37-alpha-go-deep-mind.html.
      20. Rogers, supra note 18 (explainable in a way that the AI’s decision-making can be explained by the programmer after reviewing the decision-making processes).
      21. A Brief Hist. of Go, Am. Go Ass’n, https://www.usgo.org/brief-history-go (last visited Jan. 24, 2022).
      22. Id.
      23. Menick, supra note 19.
      24. Id.
      25. Id.
      26. Id.
      27. Id.
      28. Rogers, supra note 18.
      29. Id.
      30. What is Artificial Intelligence (AI), IBM (Jun. 3, 2020), https://www.ibm.com/cloud/learn/what-is-artificial-intelligence (last visited Apr. 25, 2023). A neural network is non-linear when it has more than one layer of nodes and a prior layer's activation is required to signal the latter layer, like neurons.
      31. Rogers, supra note 18.
      32. Id.
      33. Rogers, supra note 18.
      34. Sindhu Velu et al., An Empirical Science Research on Bioinformatics in Machine Learning, 7 J. Mᴇᴄʜꜱ Cᴏɴᴛɪɴᴜᴀ & Mᴀᴛʜᴇᴍᴀᴛɪᴄᴀʟ Sᴄɪꜱ (Special Issue) 86 (2020).
      35. Id.
      36. Eda Kavlakoglu, AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?, IBM (May 27, 2020), https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks.
      37. Id.
      38. Id.
      39. Id.
      40. Id.
      41. Id.
      42. Id.
      43. Id.; see also Rogers, supra note 18.
      44. Patrick Grieve, Deep learning vs. machine learning: What's the Difference?, Zendesk Blog (Mar. 8, 2022), https://www.zendesk.com/blog/machine-learning-and-deep-learning.
      45. Id.
      46. Jason Brownlee, Supervised and Unsupervised Machine Learning Algorithms, Mach. Learning Mastery (Aug. 20, 2020), https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms.
      47. Menick, supra note 19.
      48. Brownlee, supra note 46.
      49. Id.
      50. Id.
      51. Id.
      52. Id.
      53. Id.
      54. Id.
      55. What is Supervised Learning, IBM, https://www.ibm.com/cloud/learn/supervised-learning (last visited Apr. 15, 2023).
      56. Id.
      57. Id.
      58. Id.
      59. Id.
      60. Kavlakoglu, supra note 36.
      61. Id.
      62. Id.
      63. AI & Robotics, Tesla, https://www.tesla.com/AI (last visited Apr. 15, 2023).
      64. Rogers, supra note 18.
      65. See ITP, Man vs Machine: How AI robots are Taking over Online Trading, Arabian Bus. (Aug. 26, 2021), https://www.arabianbusiness.com/money/money-wealth/alternative-assets/467610-man-vs-machine-how-ai-robots-are-taking-over-online-trading; see also What is AI Marketing?, Marketing Evolution (July 20, 2022), https://www.marketingevolution.com/marketing-essentials/ai-markeitng.
      66. Daniel Faggella, AI in Law & Legal Practice – A Comprehensive View of 35 Current Applications, Emerj A.I. Rsch. (Sept. 7, 2021), https://emerj.com/ai-sector-overviews/ai-in-law-legal-practice-current-applications.
      67. Id.
      68. See Text IQ at Relativity, Relativity, https://www.relativity.com/data-solutions/textiq (last visited Mar. 14, 2023).
      69. How AI is changing legal due diligence, Imprima (Dec. 4, 2020), https://www.traverssmith.com/media/6382/how-ai-is-changing-legal-due-diligence.pdf (last visited Jan. 24, 2022).
      70. Legal Due Diligence, Divestopedia, https://www.divestopedia.com/definition/6619/legal-due-diligence (last visited Apr. 15, 2023).
      71. How AI is changing legal due diligence, supra note 69, at 2-3.
      72. Id. at 3.
      73. Id. at 4.
      74. Id.
      75. Phil Sokowicz, Five Ways Legal Teams Can Begin to Leverage Artificial Intelligence, Forbes (Oct. 12, 2021, 2:04 PM), https://www.forbes.com/sites/forbesbusinesscouncil/2021/10/12/five-ways-legal-teams-can-begin-to-leverage-artificial-intelligence/?sh=750b083d7498.
      76. Id.
      77. Faggella, supra note 66.
      78. Daniel Martin Katz et al., A General Approach for Predicting the Behavior of the Supreme Court of the United States, 12(4) PLoS One 1, 4 (2017).
      79. Id.
      80. Gene Moo Lee et al., Predicting Litigation Risk via Machine Learning, Harv. L. Sch. F. on Corp. Governance, Dec. 2020, at 1, 16, https://dx.doi.org/10.2139/ssrn.3740954.
      81. Id. at 1.
      82. Faggella, supra note 66.
      83. Id.
      84. Id.
      85. Alison Wilkinson, How AI is Revolutionizing Legal Research, Kira (Apr. 13, 2020), https://kirasystems.com/learn/how-ai-is-revolutionizing-legal-research.
      86. Id.
      87. Id.
      88. Faggella, supra note 66.
      89. Id.
      90. Id.
      91. Wilkinson, supra note 85.
      92. Faggella, supra note 66.
      93. Id.
      94. 35 U.S.C. § 102.
      95. Faggella, supra note 66.
      96. Vic Lin, What is a Patentability Search (Novelty Search)?, Pat. Trademark Blog, https://www.patenttrademarkblog.com/patentability-search-novelty-search (last visited Mar. 14, 2023).
      97. Faggella, supra note 66.
      98. Id.
      99. Id.
      100. Id.
      101. Id.
      102. Id.
      103. Fed. R. Civ. P. 26(b); see also Ariz. R. Civ. P. 26(b).
      104. What is Document Review?, Zapproved, https://zapproved.com/blog/what-is-document-review (last visited Jan. 24, 2022).
      105. Ajith Samuel, Artificial Intelligence Will Change E-Discovery in the Next Three Years, L. Tech. Today (Apr. 24, 2019), https://www.lawtechnologytoday.org/2019/04/artificial-intelligence-will-change-e-discovery-in-the-next-three-years.
      106. Barkley v. McKeever Enter., Inc., 456 S.W.3d 829, 843 (Mo. 2015).
      107. United States v. Zolin, 491 U.S. 554, 565 (1989).
      108. Robert S. Teuton, One Small Step and a Giant Leap: Comparing Washington, D.C.’s Rule 5.4 with Arizona’s Rule 5.4 Abolition, 65 Ariz. L. Rev. 223 (2023).
      109. Id.; see also Steinitz, supra note 3.
      110. Teuton, supra note 108, at 224.
      111. ACJA § 7-208.
      112. See Faggella, supra note 66.
      113. Avaneesh Marwaha, Seven Benefits of Artificial Intelligence for Law Firms, L. Tech. Today (July 13, 2017), https://www.lawtechnologytoday.org/2017/07/seven-benefits-artificial-intelligence-law-firms.
      114. Id.
      115. Id.
      116. Id.
      117. Id.
      118. How AI is changing legal due diligence, supra note 69.
      119. Marwaha, supra note 113.
      120. See id.
      121. Id.
      122. Id.
      123. Id.
      124. Soojung Chang, The Benefits of Using Artificial Intelligence in Law, Ross (Apr. 20, 2018), https://blog.rossintelligence.com/post/benefits-ai-law.
      125. Katz et al., supra note 78.
      126. Marwaha, supra note 113.
      127. Id.
      128. Katz et al., supra note 78.
      129. Marwaha, supra note 113.
      130. 20 Astonishing Video Conference Statistics for 2021, Digital in the Round (July 10, 2021), https://digitalintheround.com/video-conferencing-statistics.
      131. Jamie Foote, AI: Delivering Real Benefits to Lawyers & Courts, Lawyer Monthly (Sept. 30, 2020), https://www.lawyer-monthly.com/2020/09/ai-delivering-real-benefits-to-lawyers-and-courts.
      132. See How AI is changing legal due diligence, supra note 69, at 4.
      133. Foote, supra note 131.
      134. Id.
      135. See id.
      136. Marwaha, supra note 113.
      137. Id.
      138. Id.
      139. Id.
      140. Id.
      141. Id.
      142. Fed. Democratic Republic of Ethiopia Ministry of Transp., Aircraft Accident Investigation Bureau Interim Investigation Report on Accident to the B737-8 (MAX) Registered ET-AVJ Operated by Ethiopian Airlines On 10 March 2019, at 17 (2020).
      143. Gary A. Flandro et al., Basic Aerodynamics: Incompressible Flow 171 (Cambridge Univ. Press, 2012) (The angle of attack is “the angle between the freestream and the chord line,” where the chord line is the connection between the leading edge and the trailing edge of the cross-section of an aircraft wing).
      144. Fed. Democratic Republic of Ethiopia Ministry of Transp., supra note 142, at 22.
      145. Id. at 25.
      146. Id. at 25-30.
      147. Id. at 18.
      148. Id.
      149. Denise Lu et al., From 8,600 Flights to Zero: Grounding the Boeing 737 Max 8, N.Y. Times (Mar. 13, 2019), https://www.nytimes.com/interactive/2019/03/11/world/boeing-737-max-which-airlines.html.
      150. Jon Ostrower, What is the Boeing 737 Max Maneuvering Characteristics Augmentation System?, Air Current (Nov. 13, 2018), https://theaircurrent.com/aviation-safety/what-is-the-boeing-737-max-maneuvering-characteristics-augmentation-system-mcas-jt610.
      151. Ostrower, supra note 150.
      152. John Mongan & Marc Kohli, Artificial Intelligence & Human Life: Five Lessons for Radiology from the 737 MAX Disasters, Radiology: Artificial Intelligence, Mar. 2020, at 1, 1.
      153. Fed. Democratic Republic of Ethiopia Ministry of Transp., supra note 142, at 266.
      154. Id. at 1.
      155. Grieve, supra note 44.
      156. Wilkinson, supra note 85.
      157. Menick, supra note 19.
      158. Eirini Ntoutsi et al., Bias in Data-Driven Artificial Intelligence Systems—An Introductory Survey, 10 WIREs: Data Mining and Knowledge Discovery 3, at 1-2 (2020).
      159. Id. at 2.
      160. Karen Hao, This is How AI Bias Really Happens–And Why It’s So Hard to Fix, MIT Tech. Rev. (Feb. 4, 2019), https://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/?ref=hackernoon.com.
      161. Id.
      162. Id.
      163. Katz et al., supra note 78, at 1.
      164. Hao, supra note 160.
      165. Justices 1789 to Present, Sup. Ct. of the U.S., https://www.supremecourt.gov/about/members_text.aspx (last visited Apr. 15, 2023) (Justice Gorsuch was appointed in 2017; Justice Kavanaugh was appointed in 2018; Justice Barrett was appointed in 2020).
      166. Hao, supra note 160.
      167. Menick, supra note 19.
      168. Hao, supra note 160.
      169. Id.
      170. See Brandon Vaidyanathan, Systemic Racial Bias in the Criminal Justice System is Not a Myth, Public Discourse (June 29, 2020), https://www.thepublicdiscourse.com/2020/06/65585.
      171. Julia Angwin et al., Machine Bias, Pro Publica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing; see also William McGurn, Systemic Bias Against Asians, Wall St. J. (Dec. 6, 2021, 6:27 PM), https://www.wsj.com/articles/systemic-anti-asian-bias-san-francisco-merit-test-sat-harvard-california-minorities-racism-11638830364 (pointing out that the pre-existing racial bias in the current criminal system carries into the AI assigning possibilities of criminal responsibilities).
      172. Ntoutsi et al., supra note 158, at 3.
      173. Hao, supra note 160.
      174. Katz et al., supra note 78 at 3.
      175. See Anthony Heyes & Soodeh Saberian, Temperature & Decisions: Evidence from 207,000 Ct. Cases, 11 Am. Econ. J.: Applied Econ. 238, 238 (2019), https://pubs.aeaweb.org/doi/pdfplus/10.1257/app.20170223.
      176. See generally Hao, supra note 160.
      177. Id.
      178. Id.
      179. Id.
      180. Id.
      181. Id.
      182. Lee et al., supra note 80.
      183. Hao, supra note 160.
      184. Ntoutsi et al., supra note 158.
      185. Id. at 5.
      186. Id.
      187. Hao, supra note 160.
      188. See Gregory Vial et al., The Data Problem Stalling AI, 62 MIT Sloan Mgmt. Rev. 47 (Dec. 8, 2020).
      189. Mohammad Reza Kameli, The Dichotomy Between Safeguarding Data Privacy & Promoting Individualized Healthcare Using Artificial Intelligence: How Mod. Reg. Miss the Mark & New Standards Can Reconcile the Two, 4 Ariz. L. J. Emerging Tech. 1, 4, https://azlawjet.com/2020/11/the-dichotomy-between-safeguarding-data-privacy-and-promoting-individualized-healthcare-using-artificial-intelligence.
      190. Paramita Ghosh, Challenges of Data Quality in the AI Ecosystem, Dataversity (Nov. 12, 2019), https://www.dataversity.net/challenges-of-data-quality-in-the-ai-ecosystem/#.
      191. Menick, supra note 19, at 5.
      192. Wilkinson, supra note 85.
      193. Kameli, supra note 189, at 8.
      194. EagleView Tech., Inc. v. Xactware Solutions, Inc., 522 F. Supp. 3d 40, 55 (D.N.J. 2021).
      195. 35 U.S.C. § 284.
      196. Kavlakoglu, supra note 36.
      197. Alexander Evstratov & Igor Guchenkov, The Limitations of Artificial Intelligence (Legal Problems), 4 L. Enf’t Rev. 13, 15 (July 3, 2020), https://doi.org/10.24147/2542-1514.2020.4(2).13-19.
      198. John P. Mueller & Luca Massaron, Artificial Intelligence for Dummies 131 (Katie Mohr et al. eds., 1st ed. 2018).
      199. Marina Dneprovskaya & Sergey Abramitov, Digital Technology in Activities of Russian Courts: Prospects of Artificial Intelligence Application, 138 Advances in Econ., Bus. and Mgmt. Rsch. 209, 210 (May 5, 2020), https://doi.org/10.2991/aebmr.k.200502.034.
      200. Faggella, supra note 66.
      201. Mueller & Massaron, supra note 198.
      202. Legal Memos Made Easy, Point First Legal Writing, http://pointfirstwriting.com/legal_memo/write_memo/facts.html#b_purposeful-complete (last visited Jan. 24, 2022).
      203. Mueller & Massaron, supra note 198.
      204. Faggella, supra note 66.
      205. See Benjamin Barton, Why Do We Regulate Lawyers?: An Economic Analysis of the Justifications for Entry & Conduct Regulation, 33 Ariz. St. L.J. 429, 429-31 (2001).
      206. Id. at 431.
      207. Benjamin Barton, An Institutional Analysis of Lawyer Regulation: Who Should Control Lawyer Regulation – Courts, Legislatures, or the Market, 37 Ga. L. Rev. 1167, 1173 (2003).
      208. Shearson Lehman Bros. v. Wasatch Bank, 139 F.R.D. 412, 414 (D. Utah 1991) (Utah adopts ABA Model Rules); see also Gilda Russell, Ethical Lawyering in Massachusetts (Massachusetts Continuing Legal Education 2021) (Massachusetts adopts “rules that govern attorneys in the commonwealth”); see also Frye v. Tenderloin Hous. Clinic, Inc., 129 P.3d 408, 426 n.12 (Cal. 2006) (“California has not adopted the ABA Model Rules, they may be ‘helpful and persuasive.’”).
      209. John Villa, Ethical Responsibility for the Actions of Other Lawyers & Non-Lawyers in Corporate Counsel’s Office, 1 Corp. Couns. Guidelines § 3.30 (2020).
      210. Anthony Davis, The Future of Law Firms (& Lawyers) in the Age of Artificial Intelligence, 27 The Pro. Law. 1, 8-10 (2020).
      211. Model Rules of Pro. Conduct r. 1.1 (Am. Bar Ass’n 2020).
      212. Model Rules of Pro. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 2020).
      213. Model Rules of Pro. Conduct r. 1.1 cmt. 1 (Am. Bar Ass’n 2020).
      214. Cal. Rules of Pro. Conduct r. 3-110(A) (State Bar of Cal. 2021).
      215. Mueller & Massaron, supra note 198.
      216. Jeffrey Jackson et al., 6 Miss. Prac. Encyc. Miss. L. § 59:48 (2d ed. Oct. 2021).
      217. Elledge v. State, 911 So. 2d 57, 67 (Fla. 2005) (citing Maxwell v. Wainwright, 490 So. 2d 927, 932 (Fla. 1986)).
      218. Id.
      219. Id.
      220. Adams v. State, 81 P.3d 394, 407 (2003) (citing Briones v. State, 848 P.2d 966, 976–77 (1993)).
      221. Id.
      222. Rogers, supra note 18; Menick, supra note 19.
      223. Rogers, supra note 18; Menick, supra note 19; see also Davis, supra note 210 (“Also problematic is the fact that there is no independent analysis of the efficacy of any given AI solution, so that neither lawyers nor clients can easily determine which of several products or services actually achieve either the results they promise, nor which is preferable for a given set of problems.”).
      224. Adams v. State, 81 P.3d 394, 407 (2003) (citing Briones v. State, 848 P.2d 966, 976–77 (1993)).
      225. Dian Cox & Neal Bowling, Malpractice v. Misconduct, Lewis Wagner, at 1-2 (May 2021), https://www.lewiswagner.com/9C8985/assets/files/News/malpractice%20article%20-%20national%20-%20marketing%20-%205162012%20_2_.pdf.
      226. Yager v. Clauson, 166 N.H. 570, 572-73, 101 A.3d 6 (2014).
      227. Id.
      228. Mansfield v. State, 911 So. 2d 1160, 1174 (Fla. 2005).
      229. How AI is changing legal due diligence, supra note 69.
      230. “A deposition may be taken in a foreign country . . .” Fed. R. Civ. P. 28(b).
      231. Model Rules of Pro. Conduct r. 1.1 cmt. 8 (Am. Bar Ass’n 2020).
      232. Davis, supra note 210, at 6.
      233. Hao, supra note 160.
      234. Model Rules of Pro. Conduct r. 1.4 (Am. Bar Ass’n 2020).
      235. J. Nick Badgerow, Can We Talk?: The Lawyer’s Ethical, Professional & Proper Duty to Communicate with Clients, 7 Kan. J. of L. & Pub. Pol’y 105, 111 (1998).
      236. Hao, supra note 160.
      237. Sokowicz, supra note 75.
      238. Rogers, supra note 18; Menick, supra note 19.
      239. Model Rules of Pro. Conduct r. 1.6 (Am. Bar Ass’n 2020).
      240. Kameli, supra note 189, at 8.
      241. Id.
      242. What to Do When You’re Mad at Your Lawyer, Nolo, https://www.nolo.com/legal-encyclopedia/problems-with-lawyer-tips-strategies-29925.html (last visited Jan. 24, 2022).
      243. 15 U.S.C. § 45(a).
      244. Alexander Reicher & Yan Fang, FTC Privacy & Data Security Enforcement & Guidance Under Section 5, 25 Competition 89, 110-114 (2016).
      245. Council Regulation 2016/679, art. 78, 2016 O.J. (L 119) 1, 80.
      246. Id., art. 5, at 35-36.
      247. Davis, supra note 210, at 8.
      248. Ariz. Rev. Stat. § 28-9702 (2021); see also Fla. Stat. § 316.85.
      249. Ariz. Exec. Order No. 2015-09.
      250. Autonomous Vehicles Testing & Operating in the State of Arizona, Ariz. Dep’t of Transp., https://azdot.gov/motor-vehicles/professional-services/autonomous-vehicles-testing-and-operating-state-arizona (last visited Jan. 24, 2022).
      251. Ariz. Rev. Stat. § 28-9702.
      252. Fla. Stat. § 316.85.
      253. Senate Committee Leaves AV Bill Out of Transportation Package, AASHTO J. (June 18, 2021), https://aashtojournal.org/2021/06/18/senate-committee-leaves-av-framework-out-of-transportation-bill (last visited Jan. 24, 2022).
      254. USDOT Automated Vehicles Activities, U.S. Dep’t of Transp., https://www.transportation.gov/AV (Mar. 28, 2022).
      255. Id.
      256. Davis, supra note 210, at 10.
      257. Evstratov & Guchenkov, supra note 197, at 14.
      258. Faggella, supra note 66.
      259. Cronusprod, The Purpose of Law & Its Functions in Society, Cronus L. (Sept. 2, 2019), https://cronuslaw.com/the-purpose-of-law-and-its-functions-in-society.
      260. Legal Bots Raise Liability and Ethics Concerns, Epiq, https://www.epiqglobal.com/en-us/resource-center/articles/legal-bots-raise-liability-and-ethics-concerns (last visited Jan. 24, 2022).
      261. Id.
      262. Deedee Gasch, Liability Waivers: Can You Contract Away Negligence?, WilmingtonBiz (Mar. 11, 2019), http://www.wilmingtonbiz.com/insights/deedee_gasch/liability_waivers_can_you_contract_away_negligence%C2%A0/2353.
      263. Id.
      264. Cox & Bowling, supra note 225.
      265. Model Rules of Pro. Conduct r. 1.8(h) (Am. Bar Ass’n 2020).
      266. Model Rules of Pro. Conduct r. 1.8 cmt. 17 (Am. Bar Ass’n 2020).
      267. Id.
      268. Rogers, supra note 18.
      269. Model Rules of Pro. Conduct (Am. Bar Ass’n 2020).
      270. Walsh v. Morse Diesel, Inc., 533 N.Y.S.2d 80, 82 (1988).
      271. Dolores Dorsainvil, Tips on How to Deal with a Bar Counsel Complaint, The Gavel (Oct. 17, 2012), https://ylsgavel.wordpress.com/2012/10/17/tips-on-how-to-deal-with-a-bar-counsel-complaint.
      272. S.J.C. R. 4:01.
      273. In re Howes, 52 A.3d 1, 13 (D.C. 2012).
      274. In re Allen, 509 N.E.2d 1158, 1159 (1987).
      275. 23 A.L.R. Fed. 480 § 3 (originally published in 1975).
      276. Id.
      277. Eric Helzer & Clayton Critcher, What Do We Evaluate When We Evaluate Moral Character?, in Atlas of Moral Psych. 99, 103 (Gray & Graham eds., 2018).
      278. Id.
      279. Id. at 101.
      280. See Model Rules of Pro. Conduct r. 1.1 (Am. Bar Ass’n 2020).
      281. Yager, 166 N.H. at 572-73, 101 A.3d at 9.
      282. Eur. Comm’n, Liability for Artificial Intelligence & Other Emerging Digital Technologies 39 (2019).
      283. Id. at 42.
      284. Id.
      285. What Does the Public Really Think about Lawsuits?, JD Supra (July 15, 2020), https://www.jdsupra.com/legalnews/what-does-the-public-really-think-of-19779/ (over half of people surveyed would not sue someone unless absolutely necessary).
      286. Automated & Electric Vehicles Act 2018, c. 18, § 2 (UK).
      287. FAQs on Malpractice Insurance for the New or Suddenly Solo Attorney, Am. Bar Ass’n, https://www.americanbar.org/groups/lawyers_professional_liability/resources/faqs_on_malpractice_insurance_for_the_new_or_suddenly_solo_attorney (last visited Jan. 24, 2022).
      288. Model Rules of Pro. Conduct (Am. Bar Ass’n 2020).
      289. USDOT, supra note 254.
      290. ACJA § 7-208.
      291. Id.
      292. Ariz. State Sup. Ct. R. 42.
      293. Ariz. Const. art. II, § 8.
      294. Reicher & Fang, supra note 244.
      295. Full Privacy Policy, Verizon Wireless, https://www.verizon.com/about/privacy/full-privacy-policy (last visited Jan. 24, 2022).
      296. Council Regulation 2016/679, art. 78, 2016 O.J. (L 119) 1, 80.