Why Artificial Intelligence Shouldn’t Be a Patent Inventor

Arizona Law Journal of Emerging Technologies
Volume 5 Article 5, 04-2022


Pressley Nietering, JD Candidate[1]

I. Introduction

As you may have surmised from the title, this Note explains why Artificial Intelligence (“AI”) should not be listed as an inventor on patents. Before that can be done, though, the term “artificial intelligence” needs to be defined. This is not an easy task because the term has no universally recognized definition.[2] Its best definition is therefore a broad one: it encompasses the “science and engineering of making intelligent machines.”[3] AI comprises multiple “related and often-connected technologies[,]” such as “deep learning, natural language processing, and expert systems.”[4] Deep learning is at the center of what is considered modern AI.[5] It uses neural networks to learn from large amounts of data.[6]
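To make the deep-learning idea concrete, the following is a minimal, illustrative sketch in Python using the PyTorch library. It is not drawn from any source cited in this Note; the toy dataset, the network size, and the training settings are assumptions chosen purely for illustration. The sketch shows a small neural network repeatedly adjusting its internal connections to fit example data, which is the basic mechanism the definitions above describe.

```python
# Illustrative sketch only: a tiny "deep" network learning from example data.
# The task (fitting y = 3x - 1 from noisy points) is an assumption for the demo.
import torch
import torch.nn as nn

# Toy data the network will learn from.
x = torch.linspace(-1, 1, 200).unsqueeze(1)
y = 3 * x - 1 + 0.05 * torch.randn_like(x)

# A small network with two hidden layers of artificial neurons.
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                      nn.Linear(16, 16), nn.ReLU(),
                      nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # how far the predictions are from the data
    loss.backward()               # compute how to adjust each connection
    optimizer.step()              # nudge the weights in that direction

print(f"final training loss: {loss.item():.4f}")
```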

AI has come a long way from being a futuristic concept hinted at in Alan Turing’s 1950 work, “Computing Machinery and Intelligence,” which famously asked, “Can machines think?”[7] At the time, Turing—controversially—thought that machines within 50 years would be able to pass as human up to 30 percent of the time in what he called the “imitation game.”[8] A large step towards Turing’s prediction occurred five years later when John McCarthy, a computer science professor at Stanford, coined the term “artificial intelligence.”[9]

Since the term was coined in 1955, AI has exploded in popularity and ability, moving closer to Turing’s prediction. Among other developments, Intel has created Pohoiki Springs, a neuromorphic system designed to use circuits that mimic the brain’s neurobiological architecture.[10] Pohoiki Springs is claimed to have the brainpower of a small mammal.[11] AI’s progress will likely continue: surveyed experts estimate that AI will “probably” reach overall human ability by 2050 and is “very likely” to reach it by 2075.[12] AIs have also become increasingly creative, a quality often thought of as distinctly human. In recent years, AI systems have created a movie trailer,[13] devised recipes, written a novel, and made original compositions.[14]

Still, despite the advanced nature of some AI, some scholars have argued that AI’s creativity is just following an algorithm—the mere “outputs of a process whose steps are precise and explicit”—and analogized that creativity to “slavish copying.”[15] This is not a new criticism; Turing acknowledged it in his 1950 article.[16] However, the criticism belongs to a past era when AI merely automated pre-programmed steps; it ignores the deep-learning approaches inherent in some modern AI.[17] For this Note, it is assumed that AI can invent because, if it cannot already, it will soon be able to.[18]

The question then becomes whether AI can be named as an inventor on a patent. This Note proposes that courts should continue to refuse to allow AI to be listed as an inventor. Currently, US patent law is silent on the subject. 35 U.S.C. § 101 merely states, “Whoever invents or discovers . . . may obtain a patent.”[19] There is no mention of who, or what, fits under “whoever.” Section 100 provides little more clarity, defining an inventor as “the individual . . . who invented or discovered the subject matter of the invention.”[20] The term “individual” is used elsewhere in the Patent Act but is not defined.[21] Use of the term “individual” suggests that the Patent Act is referring to a human, but not conclusively so.

Due to § 100, it was likely assumed that AI could not be the inventor listed on a patent. However, it was not until this past year that the question was definitively answered, thanks to Stephen Thaler, inventor of an AI named DABUS (“Device for Autonomous Bootstrapping of Unified Sentience”).[22] Thaler submitted numerous patent applications across the world with DABUS listed as the inventor.[23] The South African Patent Office allowed the patent application, granting a patent for a “food container based on fractal geometry.”[24] It should be noted, though, that South Africa is a non-examining country.[25] In non-examining countries, a completed patent application is granted without checking whether patentability requirements are met, and granted patents are valid until proven otherwise.[26] Two days after the South African patent was granted, the Federal Court of Australia issued a ruling allowing DABUS to be listed as an inventor.[27] Despite South Africa and Australia allowing the patent, the UK Patent Office, the European Patent Office, and the USPTO rejected Thaler’s application.[28] U.S. District Court Judge Leonie Brinkema of the Eastern District of Virginia ultimately affirmed the USPTO’s rejection.[29]

It is not surprising that DABUS’s invention was the first known computer-conceived invention to reach the courts. There are few AIs in the world capable of producing inventions. While there could be “underground” AI inventors, there are only a few known anecdotal examples of autonomous-AI inventors outside of DABUS.[30] With only about 1 to 2% of patents ever asserted through litigation,[31] AI-conceived inventions could be slipping through the cracks and not getting noticed. However, with the increasing prevalence of advanced AI, AI-conceived inventions will likely be a larger controversy in the coming decades.

There are numerous problems with permitting AI systems to be inventors for patent purposes. These problems include creating issues with the analogous art requirement, failing to meet the enablement standard, recalibrating who the Person Having Ordinary Skill in the Art is, generating constitutional concerns about incentivizing AI, producing similar incentives to have AIs treated as the authors of copyrighted works, and setting the stage for other non-human entities to have intellectual property rights. These problems need to be addressed before AI is allowed to be an inventor on patents. This Note examines these concerns and then proposes a solution: having the discoverer of the AI’s invention be the “inventor.”

It should be noted that this Note makes a key assumption about AIs. At least a few scholars have thought of AI-inventorship as a spectrum.[32] At one end of the spectrum is the sole inventor working without the benefit of AI. At the other end is an AI inventing entirely on its own, with no input or direction from humans. In the middle of this vast spectrum lies an increasing degree of AI contribution. This Note examines the end of the spectrum where AI is the primary inventor. The invention process could involve some input or direction from humans, but the AI performs the conception step. This is important because conception is often considered the “touchstone of inventorship.”[33] Conception “is the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention, as it is hereafter to be applied in practice.”[34] It should also be noted that this Note does not address AI-generated claim sets.[35]

II. Allowing AI to be an Inventor Would Warp the Obviousness Requirement, Potentially Freezing Out Human Inventors

The obviousness standard serves a critical role as a gatekeeper of unpatentable inventions.[36] An invention may be literally novel but still be unpatentable if it provides only a slight variation on known inventions “in the art to which the claimed invention pertains.”[37] Unlike the novelty analysis, relevant prior art for obviousness only comes from analogous art.[38] This analogous art can either be “from the same field of endeavor, regardless of the problem addressed”[39] or a reference from a different field that solves the same problem.[40] The objective standard for obviousness, a Person Having Ordinary Skill In The Art (a “PHOSITA”), is presumed to have access to all prior art references in analogous fields, regardless of how unrealistic that actually is.[41]

At a certain level, this makes sense. People trying to invent a solution generally know their own field very well and look to the same problem in related fields for a solution. Limiting the obviousness inquiry to analogous art, and thereby withholding patents only for routine combinations within those fields, recognizes the ingenuity inherently involved in combining disparate fields.[42] Courts rightfully do not expect people to know all prior art across all fields and want to reward inventors for extraordinary inventing activity.[43]

However, AI does not have an analogous art limitation.[44] Oftentimes, problems in one discipline are solved by knowledge from another discipline. It is the mark of an extraordinary human to know when to combine references, but, for an AI, it is commonplace.[45] For example, take a problem in electrical engineering where advances in culinary arts, mechanical engineering, and chemistry are directly relevant to solving that problem. The solution to the electrical engineering problem may be a relatively simple idea borrowed from these other fields and, to an AI, an obvious invention. To a human whose knowledge is confined to one field, though, the invention would not be obvious. While it takes a genius human to combine fields to find a solution, it takes only an ordinary AI. Yet, under the current interpretation of the patent laws, the patent would be granted to the AI for “ordinary” AI activity.

AI reaching across disciplines for a solution is not a far-out hypothetical either. One AI machine, called the Creativity Machine, has already invented the cross-bristle design of the Oral‑B CrossAction toothbrush, new “super-strong materials,” spy devices that search the Internet for terrorist messages,[46] and automobile designs.[47] These inventions are in vastly different fields, covering different problems. While it is likely relatively easy to give an AI such a diverse training set, few humans possess such a diverse knowledge base.

If an AI were allowed to be an inventor, then too many patents would be granted to AIs under the current system. There already are massive problems stemming from too many patents being granted.[48] These problems would be exacerbated by allowing AI to be an inventor because AI can too easily consider non-analogous art and combine references until it finds a working combination.

The “‘race to patent’ derived from the easiness to invent in the context of AI” would force Congress and the courts to revisit the analogous art limitation.[49] There are no easy answers, though, because if the analogous art requirement were dropped and all art were to be considered when evaluating prior art, humans without the benefit of AI would struggle to get patents. This is because the more art that can be considered, “the more likely it is to find prior art that makes the invention obvious/lacking [the] inventive step.”[50] This could shut down innovation for small businesses or individuals that lack sophisticated AI systems. Innovation would be concentrated in the few companies that can afford expensive advanced AI systems.

While letting AIs be inventors could lead to more overall innovation in society, the social cost of either too many patents being granted or humans being frozen out of innovation is too great. The reward does not outweigh the risk.

III. If AIs Were Allowed to be Inventors, Problems Would Arise with the Enablement Standard

Another concern with allowing patents on AI-created inventions is that the AI may not be able to meet the enablement standard. The enablement standard, codified in 35 U.S.C. § 112(a), is met when the specification teaches a person of ordinary skill in the art “how to make and use the full scope of the claimed invention without undue experimentation.”[51] In the patent quid pro quo, the enablement standard helps ensure that the public receives a meaningful disclosure on how to build and use the patented invention in exchange for a limited legal monopoly.[52] When examining whether a patent meets the enablement standard, courts consider the Wands factors: “(1) the quantity of experimentation necessary, (2) the amount of direction or guidance presented, (3) the presence or absence of working examples, (4) the nature of the invention, (5) the state of the prior art, (6) the relative skill of those in the art, (7) the predictability or unpredictability of the art, and (8) the breadth of the claims.”[53]

Most enablement analysis centers on an inverse relationship between the amount of information the specification must provide and the predictability of, and amount of knowledge in, the art.[54] The more that is known in an art, the less that must be explicitly detailed in the specification. Conversely, the less that is known and predictable in an art, the more the specification must explicitly provide.[55] This is consistent with the patent system’s goal of encouraging disclosure.

However, AI-conceived inventions will be hard-pressed to meet the enablement requirement. Artificial intelligences, particularly the advanced versions capable of inventing, are oftentimes very opaque and are called a “black box.”[56] This is because their inner mechanisms often involve deep learning, or the use of multiple algorithms to emulate the neural networks of the human brain.[57] Since an AI cannot “explain” its work in the way that a human can, the final invention would need to be reverse-engineered to meet the enablement requirement. Otherwise, the person writing the patent, likely the end-user or AI programmer, may not know how to describe making and using the invention.

Only certain products can be reverse-engineered, though. Therefore, if AI were allowed to be an inventor without a way to track or trace how the AI arrived at its output, either the types of products invented would be limited, or the inventions would have to be protected through trade secrets rather than the patent system.

IV. If AIs Were Allowed to be Inventors, the PHOSITA Standard Would Either Allow Too Many Patents or Prevent Humans from Inventing

Since the U.S. Patent Act of 1952, the obviousness standard has been officially measured by whether an invention would have been obvious to a Person Having Ordinary Skill In The Art (PHOSITA).[58] A PHOSITA also plays a role in determining claim construction, infringement, whether a best mode was disclosed, whether claims are adequately definite,[59] and whether a specification is adequate.[60]

The beauty of the PHOSITA standard is that it sets both a floor and a ceiling for an inventor’s skill level.[61] The floor is an ordinary person in the relevant field, reflecting the “common sense notion that the question of whether a variation is trivial should not be determined from the perspective of someone who knows nothing about the field in question.”[62] Similarly, the standard excludes those of extraordinary skill because otherwise the obviousness inquiry would swallow most patents.[63] The PHOSITA standard can “be viewed as a collar on the obviousness standard that both: (1) prevents the patentability of trivial inventions and (2) preserves the patentability of meritorious ones.”[64]

While a PHOSITA would seem to be an objective standard that changes only with the field of invention, courts have dramatically increased the skill level of who they consider to be one of “ordinary skill.”[65] From the earliest patent decisions in the 1800s through the 1960s, the PHOSITA was largely considered to be an ordinary mechanic or artisan in the trade.[66] Lately, however, the PHOSITA has been considered to be a professional researcher or research team[67] who has knowledge of “hidden or difficult to locate prior art.”[68] This means that the skill level of the PHOSITA has increased in two dimensions: (1) the level of skill, from a mechanic to a researcher, and (2) the scope of prior art that the PHOSITA is aware of.[69] This change in the skill level of a PHOSITA has already made patents more difficult to obtain. That might be for the better; overall skill levels have increased with specialization and longer life spans.[70] The change in skill level means fewer patents are granted for ordinary inventive activity.

However, the increase of a PHOSITA’s skill level from mechanic to researcher would pale in comparison to the increase that would occur if AIs were allowed to be inventors. If an AI is allowed to be an inventor, the skill level of the “person” of ordinary skill necessarily becomes much higher. “The idea of a PHOSITA understanding all of the prior art in her field was always fictional, but now it is possible for a skilled entity, in the form of a computer, to possess such knowledge.”[71]

One only needs to look to the popular game show Jeopardy! for an example. Watson, an IBM computer, was able to beat two historically great Jeopardy! champions, Brad Rutter and Ken Jennings, at their own game.[72] Rutter and Jennings certainly have more than “ordinary skill” in the art of trivia, but they were no match for the 200 million pages of information fed to Watson for the game.[73] Watson simply had more access to knowledge and was quicker than the Jeopardy! champions. It should be noted that, for this contest, Watson was not allowed to access the internet; later versions of Watson will have internet connectivity.[74] This means Watson can get even smarter than it was in that tournament. Watson playing Jeopardy! is just one example of how AI surpasses human capabilities.

As AIs start to replace humans in a given field, Watson and other AIs will dramatically skew the calculus in determining what “ordinary skill” is. Allowing AIs to be inventors without adjusting the PHOSITA standard would lead to too many patents, and therefore too many legal monopolies, being granted for “average” or ordinary AI activities. The powerful legal monopoly that is a patent would suddenly be concentrated in AI owners and would squeeze humans out of the inventor marketplace.

That is not to say that AI becoming more prevalent in a field should not influence the PHOSITA’s overall skill level. AI can automate certain aspects of business, enabling humans to become hyper-efficient.[75] AI also provides scientists with more potential solutions to test, allows scientists to find dead ends in hours rather than months, and helps optimize materials.[76] One MIT researcher estimates that a materials discovery process that ordinarily takes 15 to 20 years could be reduced to just two to five years with AI and machine learning.[77] For a similar example, Watson can interpret a patient’s genome and prepare an actionable report within ten minutes,[78] a process that previously took a team of experts around 160 hours.[79] If AI’s use as a tool becomes more commonplace in a given field, the skill level of a PHOSITA should become correspondingly higher.

In this way, it is similar to industries adopting other tools like ordinary computers or calculators. Like companies that chose not to adopt those earlier technologies, companies in high-tech industries that do not adopt AI will likely struggle to invent patentable material.[80] While that is a danger, failure to adjust the PHOSITA skill level for the use of AI as a tool would mean that research teams employing Watson-like AI would develop too many patents.[81] This would create a glut of patents in the marketplace, which would likely decrease the cost for non-practicing entities (NPEs) to purchase these patents and, subsequently, lead to more assertions by NPEs against practicing entities.[82]

Because the permissible use of AI as a tool already adjusts the skill level of a PHOSITA, one scholar contends that this problem will arise regardless of whether AI is allowed to be an inventor.[83] However, using AI as a tool to invent is different from AI doing the inventing. Using AI as a tool means there are still limits on what can be invented. A human still has to conceive the idea for the invention and, as courts have noted, conception is the hard part of invention. This provides some limitation on how quickly the skill level of a PHOSITA can increase. If AI is allowed to be an inventor, though, the PHOSITA standard will either allow too many patents or squeeze humans out of being able to patent.

V. Artificial Intelligence Lacks the Ability to be “Incentivized,” So Awarding AI Patents Is Unconstitutional

A patent is a decidedly anti-free-market tool, so the Founders provided Congress the power to create and regulate patents through the Intellectual Property Clause.[84] This clause gives Congress the power to grant temporary monopolies to authors and inventors “to promote the progress of science and useful arts.”[85] Courts have been careful to distinguish this purpose from merely rewarding an inventor’s labor, noting that “patent laws promote … progress by offering inventors exclusive rights for a limited period as an incentive for their inventiveness and research efforts.”[86]

For AI to have patents granted to it, Congress must have the power to grant patents to AI. Machines cannot be incentivized like humans through money or industry stature, though, so Congress should not be able to constitutionally grant limited monopolies to AI-inventors.[87] Some scholars have argued that not allowing AI to be an inventor would run counter to patent law’s policy of incentivizing inventions.[88] This argument does have some merit; building an AI system is incredibly expensive and not guaranteed to succeed. For example, IBM spent $4 billion preparing Watson, the Jeopardy!-winning machine that runs on 2,880 processor cores and over 100 algorithms,[89] to enter the healthcare industry.[90] However, Watson struggled to diagnose patients and, potentially as a result, IBM has struggled to find buyers for its Watson oncology product.[91] Similarly, Alphabet’s DeepMind project struggled commercially, with over $2 billion invested but only $125 million returned in 2018.[92] DeepMind’s deep reinforcement learning simply may not work in less controlled environments, meaning there is little guarantee that it will someday be worth the investment.[93] AIs are certainly a risky investment, and companies may need extra incentive or reduced risk to invest in them.

Another argument that favors allowing AI-conceived inventions to be constitutionally patentable is that patent law is built to be flexible and adapt to new inventions. For example, courts’ interpretation of patentable subject matter under § 101 has evolved because, to put it simply, “times change.”[94] In Bilski, the Supreme Court rejected using the machine-or-transformation test as the sole criterion for determining subject-matter eligibility due to “unforeseen innovations such as computer programs.”[95] While rejecting the claims at issue in Bilski on a narrow basis,[96] the Court explained that “Section 101 is a dynamic provision designed to encompass new and unforeseen inventions,”[97] and a per se rule would “frustrate the purposes of the patent law.”[98] Bilski arguably demonstrates that courts are not hesitant to evolve patent law with the times.

However, these arguments are not persuasive. Patents are commonly thought of as an exchange in which inventors invest resources in return for a limited monopoly.[99] The limited monopoly is, hopefully, enough to cover the research and development costs and any obstacles encountered.[100] This limited monopoly needs to be balanced with the ultimate goal of the patent system: “bring[ing] new designs and technologies into the public domain through disclosure.”[101] Therefore, the exchange cannot go too far in one direction. The Constitution does not provide for anything beyond the initial limited monopoly, meaning the holder of a patent on an invention should not be entitled to further patents on inventions that the patented invention itself creates. AI can already be patented.[102], [103] Here, the exchange for the AI is complete once the patent is granted, giving the AI’s inventor a monopoly on that AI.

Further, many AI developers likely opt out of the patent system, meaning the government should not be concerned with incentivizing AI since these developers are not availing their AIs of the patent system. A patent is a powerful legal monopoly, so, if AI-developers choose not to patent AI or AI is not patentable, the Constitution does not provide for incentivizing developers to create more AI. It is hard to tell how many developers are opting out of the patent system, since the alternative is trade secret protection, which, by its nature, is hard to quantify. However, the high ratio of scientific publications to patents indicates there is much scientific discovery around AI occurring without patents.[104] This means that there is a lot of research and development occurring in the AI field; it just is not resulting in patentable inventions.

There are numerous reasons why AI-developers would not want to patent their inventions. Trade secret protection is often preferred to patent protection because it is cheaper and quicker to obtain and can extend to ideas that are not patentable.[105] For example, software is generally thought of as “disclosing,” or easy to reverse-engineer.[106] Because software is “expensive to create but relatively easy to reproduce,” software developers are more inclined to seek patent protection.[107] However, AI is often hard or impossible to reverse-engineer; it is often referred to as a black box since it is so opaque.[108] Thus, there is little incentive for AI-developers to seek patent protection when their AI is inherently protected.

Even if AI-developers wanted to seek a patent, many AI systems are not patentable because they deal with the mere application of algorithms.[109] Therefore, the patent system should not concern itself with incentivizing the development of AI because AI inherently exists outside the patent system. Alice Corporation v. CLS Bank International provides the test for subject-matter eligibility, which AIs likely fail. In Alice, the Supreme Court was tasked with determining whether claims directed at a “computerized scheme for mitigating settlement risk—i.e., the risk that only one party to an agreed-upon financial exchange will satisfy its obligation” were patent eligible.[110] The scheme at issue worked by using a computer as an intermediary and creating shadow credit and debit records to reflect the real accounts of institutions.[111] The computer would update these accounts in real time and would allow a transaction to complete only when the transacting party had enough money to satisfy its obligation.[112]

To settle the question of whether a computer-implemented claim is patentable, the Alice Court first asks, “Is there a claim relating to a patent-ineligible abstract idea?”[113] If the answer is yes, then the next question becomes, “Is there anything more to this claim?”[114] The second step searches for an “inventive concept,” or “an element or combination of elements that is sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.”[115] In holding the claims at issue patent ineligible, Justice Thomas wrote for the Court:

These cases demonstrate that the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention. Stating an abstract idea while adding the words ‘apply it’ is not enough for patent eligibility… Stating an abstract idea while adding the words “apply it with a computer” simply combines those two steps, with the same deficient result. Thus, if a patent’s recitation of a computer amounts to a mere instruction to “implemen[t]” an abstract idea on… a computer, that addition cannot impart patent eligibility. [116]

Unfortunately for many AI-developers, their AI systems often do little more than apply an algorithm. For example, in PUREPREDICTIVE, Inc. v. H2O.AI, Inc., the patent at issue claimed an AI designed “to generate a predictive ensemble in an automated manner … with little or no input from a user or expert.”[117] The district court characterized the method as performing predictive analysis in three steps: (1) receive data and generate “learned functions;” (2) evaluate the effectiveness of the learned functions and create a rule set for additional data input; (3) select the most effective rule set for additional data input.[118] The court determined that the claims were “directed to the patent-ineligible abstract concept of testing and refining mathematical algorithms.”[119] Then, despite the patent seemingly claiming an advanced AI, the court ruled that the claims did not “show an inventive concept sufficient to transform its claim” and held them patent ineligible.[120]
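For illustration only, the following is a rough, hypothetical sketch in Python, using the scikit-learn library, of the kind of automated “learn, evaluate, select” pipeline the district court described. It is not the patented method; the candidate models, the synthetic dataset, and the scoring choices are assumptions made for the example.

```python
# Hypothetical sketch of automated "testing and refining mathematical algorithms":
# candidate models are generated from data, scored, and the best one is selected,
# with little or no input from a user.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Step 1: generate several "learned functions" from the data.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
}

# Step 2: evaluate the effectiveness of each learned function.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}

# Step 3: select the most effective one for use on additional data.
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```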

Ex Parte Joerg Mitzlaff is another example of AI being unpatentable under Alice.[121] In Mitzlaff, the claims at issue were directed to a “computer-implemented method” that “establish[ed] a communication session between a user of a computer implemented marketplace and a computer implemented conversational agent associated with the marketplace that is designed to stimulate a conversation with the user to gather listing information.”[122] The Patent Trial and Appeal Board (PTAB) agreed with the Examiner that the claim was directed to a form of “shopping support.”[123] The patent was ultimately denied because the claim was “directed to merely using the recited computer-related elements to implement the underlying abstract idea, rather than being limited to any particular advances in the computer-related elements.”[124] PUREPREDICTIVE and Mitzlaff demonstrate that AI systems often merely implement an abstract idea and thus are not patentable under Alice. While some commentators have argued that Alice goes too far,[125] it is well-established law. Under Alice, many AI systems are not patentable, and the Constitution does not call for incentivizing unpatentable inventions. Since AI development largely takes place outside the patent system, there is no basis for Congress to work to protect AI’s inventions.

Also, there is evidently already incentive for tech companies to develop AI. Currently, in the US, AI cannot be named as an inventor on patents.[126] This fact did not stop companies from spending $37.5 billion on AI software and hardware in 2018, a figure expected to grow to $97.9 billion by 2023.[127] While there is always some incentive to invent, the large investment in AI indicates that there is already a socially adequate amount of incentive to develop AI. This incentive likely results from significant profitable activity stemming from AI that is not invention conception. For example, Watson can analyze patients’ genomes and provide treatment recommendations.[128] This is non-innovative activity that can lead to further patentable inventions.[129] Further, the few AI-conceived inventions can be protected by trade secret if necessary. It is therefore disingenuous to say that there is little to no incentive for developers to create AI.

Since AIs cannot be incentivized, the only incentive created by allowing patents on AI-created inventions is to encourage people to make more and better AIs and to direct those AIs toward innovation, thus increasing the overall rate of innovation. It is a very narrow but important distinction. The Constitution does not explicitly provide Congress the power to grant limited monopolies for the acceleration of innovation, and courts would have to take a more expansive view of the IP Clause to allow it. However, courts have noted that “the drafters mandated a specific mode of accomplishing the particular authority granted [to create patents],”[130] suggesting that the Intellectual Property Clause should be read narrowly. Therefore, allowing AI-inventions to be patented in order to incentivize the invention of AIs is unconstitutional and should not happen.

VI. Allowing AIs to Patent Inventions Would Open the Door for Otherwise Copyrightable Material Produced by AI to Receive Protection

If AIs were allowed to be inventors, a push to make AI-produced works copyright-eligible would follow, using similar logic. The Copyright Act[131] grants a copyright for any “original work . . . of authorship fixed in any tangible medium of expression”[132] but does not detail the requirements for authorship.[133] Currently, federal courts and Congress have yet to address whether artificial intelligence can be the author of copyrighted material.[134] For now, federal courts defer to the Copyright Office, which has the Human Authorship Requirement.[135] This Requirement is detailed in the Compendium, which explains that the Copyright Office “will register an original work of authorship, provided that the work was created by a human being.”[136]

However, federal courts’ deference to the Copyright Office likely will not suffice in the near future. Nor should they defer. At best, the Copyright Office’s policy has shaky underpinnings. The Copyright Office cites the famous Trade-Mark Cases,[137] not a copyright case, for the Human Authorship Requirement. Copyrights and trademarks are not interchangeable, and they have different constitutional bases. Also, the Compendium is careful to state in its introduction that it “does not override any existing statute or regulations,” and that “[t]he policies and practices set forth in the Compendium do not in themselves have the force and effect of law.”

Therefore, the Compendium will likely not suffice as justification for much longer, since there are increasing numbers of AI-generated, otherwise-copyrightable works. In 1984, a computer system named Racter wrote the book The Policeman’s Beard is Half Constructed.[138] Similarly, Forbes uses AI to write short articles for its website.[139] AIs and algorithms are becoming so prevalent that the Neukom Institute for Computational Science at Dartmouth College announced a “Turing Test in Creativity,” the first short story prize for algorithms.[140] It will not be long before federal courts are forced to address the issue.

If AIs are allowed to be inventors on patents, copyright law would likely be forced to follow patent law and allow AIs to be listed as authors, because the Supreme Court has noted the “historic kinship” between patent and copyright law.[141] This kinship likely stems from their shared basis in the IP Clause.[142] While the Supreme Court has urged caution “in applying doctrine formulated in one area to the other,”[143] lower courts often fail to heed this warning.[144] Since Sony, 37 opinions have cited this “historic kinship,” and few of these opinions consider whether there actually are doctrinal similarities that justify expanding a rule from one field into the other.[145] Further, since 1984, none have “both considered the rationale and heeded the caution when extending a rule in a new legal context.”[146] Therefore, it is likely that, if AIs were allowed to be inventors for patents, they would also be allowed to be authors of copyrighted works.

A few problems would arise if courts allowed AI to author works. The Copyright Act’s purpose is not “to secure a fair return for an ‘author’s’ creativity.”[147] Rather, the Act’s “ultimate aim” is “to stimulate artistic creativity for the general public good.”[148] However, AI cannot be incentivized,[149] so any works that AI produces should be outside the scope of the Copyright Act.[150] This is a heightened concern with the Copyright Act because copyrighting uncopyrightable works implicates free speech concerns. Further, the length of copyright protection is generally tied to the life of the author or joint authors.[151] AI, as a computer, has a potentially infinite life. Would AI-written works therefore have copyrights that last forever?[152] If so, a copyrighted work would never enter the public domain. This would violate the “exchange” envisioned under the IP system. These are problems that need to be addressed before AI should be allowed to produce copyrighted material.

VII. Other Non-Human Entities Would Also Likely Push for Their Works to Receive Intellectual Property Protection

Another problem with allowing AIs to be inventors on patents is that other non-human entities may push for intellectual property protection using similar logic. Numerous commentators have expressed concern that, if AIs are allowed to be inventors, other IP rules preventing non-humans from obtaining copyrights or patents would need to be revisited.[153] Allowing AI to be an inventor could open the floodgates on who is capable of creating a creative work. For example, the Federal Circuit has declared, “[O]nly natural persons can be ‘inventors’”[154] and, in a case regarding the first to conceive an idea, noted that “people conceive, not companies.”[155] Corporations, which face many of the same inventing problems as AI, would likely push to be listed as creators for patents and copyrights.

Caselaw preventing animals from creating copyrightable materials would need to be revisited as well. The most notable example of this caselaw is the “Monkey Selfie” case.[156] In Naruto v. Slater, the copyrighted work at issue was a published book of selfies that a monkey had taken.[157] Naruto, a macaque living on a reserve, took a camera that a wildlife photographer had left unattended and took selfies with it.[158] PETA filed suit against the book’s publishers, alleging copyright violation on behalf of Naruto.[159] The Ninth Circuit Court of Appeals held that Naruto and all other animals lacked statutory standing under the Copyright Act.[160] In dicta, the court noted that, as a matter of statutory interpretation, since animals are not expressly authorized to have standing, they do not have standing.[161] The court also pointed to other text in the Copyright Act for support, such as the terms “children,” “grandchildren,” “legitimate,” “widow,” and “widower,” which “imply humanity.”

Despite this statutory language, if AI inventions were to become patent eligible, companies, animals, and other creators that cannot currently obtain intellectual property rights would likely try to have their works protected. Many of the arguments for why these creators cannot obtain intellectual property rights also apply to AI systems obtaining patents, so those arguments would be undercut. This could lead to a total revamping of the patent system as animals and corporations rush to get their creations patented, resulting in more frivolous patent applications and fewer works donated to the public domain.

VIII. If AI’s Inventions are Allowed to be Patented, Problems Would Arise with Determining Who Is Entitled to the Patent

If AI were allowed to be an inventor, an important issue would arise over who ultimately owns, or is assigned, the patents on AI-invented devices. Possible options include the AI’s owner, the software programmers who programmed the AI, an investor, the data supplier who exposed the AI to the data it taught itself from, the trainers who checked the AI system’s results and corrected its processes, the AI system itself, or the end-user.[162] There is merit to awarding the patent to each respective stakeholder, but there are problems associated with each.

There are numerous problems associated with having an AI own the patent. For AI to own property, it would need to have personhood. There is some precedent in other legal systems; in 2017, Saudi Arabia became the first country to recognize a robot as a citizen, but it was criticized heavily for doing so.[163] However, while what it means to have personhood is a hotly contested issue, it is not debatable here that AIs do not have personhood. Whatever criteria are used to define a “person,” AI lacks them. AIs lack a soul,[164] a consciousness,[165] a free will,[166] and feelings.[167] AI owning a patent would also create numerous standing issues. These issues include, but are not limited to, determining who enforces the AI’s rights, what remedies should be granted when those rights are aggrieved, and what other rights the AI should be granted.[168] It would also create an unusual situation where a piece of property, the AI, itself owns property. Likely for these and many other reasons, commentators rarely suggest this proposal.

Another option for AI-inventions is assigning rights to the AI’s owner, who is not necessarily the person who developed the AI system; an AI developer could sell their AI to someone else. Vesting rights in the AI’s owner would be consistent with how other personal property is treated.[169] However, an owner may have little role in an AI’s invention besides licensing a purchased computer, so granting the owner a patent would be unfair.[170] Further, assigning an AI system’s inventions to the AI’s owner might reduce the annual number of inventions because AI end-users would be less inclined to seek out AI to invent their ideas, knowing that they would not be entitled to anything the AI invented.[171] For the patent system, which is designed to promote inventive activity, this would be counterproductive.

The assumed answer for most would likely be to assign an AI’s patents to the AI’s programmer. However, there are numerous problems with this approach. First, it is not equitable for AI-developers to hold patents for inventions their AI creates. That is because AIs, or at least the deep-learning ones responsible for inventions, are not necessarily programmed to invent. The AI systems are not just following their code; they are trained to invent.[172] The programmer’s role is therefore similar to the role of an inventor’s parent: “aid[ing] in the conception of the entity that creates the work, rather than creating the work themselves.”[173] There is a large leap from selecting training data to train an AI to invent to performing the inventive step required for a patentable invention. Just as parents should not claim the inventions of their child, AI programmers should not claim the work of AI.

An example of AI being trained is AlexNet, an AI image recognition system designed to recognize pastries at checkout.[174] To train AlexNet to identify pastries, the software was not simply programmed with different images of pastries.[175] Pastries can change shape or look slightly different, and an AI may not be able to recognize them.[176] Instead, the AI was shown new images of pastries, and if the software was incorrect, it would “adjust the connections between its layers of neurons” until it was correct.[177] Thus, if a new pastry were created that did not look like earlier pastries and AlexNet was still able to identify it as a pastry, it would be improper to credit AlexNet’s programmers with AlexNet’s latest success. Similarly, because the AI is the one performing the “inventive step,” it would not be appropriate for programmers to profit from the AI’s invention.
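The training pattern described above can be sketched in a few lines of Python. The following is a hedged illustration using the PyTorch library rather than AlexNet’s actual code; the pastry classes, the placeholder images, and the network shape are assumptions made for the example. When the network’s guesses are wrong, the measured error grows, and the optimizer adjusts the connections between the layers to reduce that error.

```python
# Illustrative training loop: show the network labeled images, and when it is
# wrong, adjust the connections between its layers until it answers correctly.
import torch
import torch.nn as nn

NUM_CLASSES = 5                      # e.g., five kinds of pastry (assumption)
images = torch.randn(64, 3, 32, 32)  # placeholder tensors standing in for photos
labels = torch.randint(0, NUM_CLASSES, (64,))  # placeholder "correct" answers

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, NUM_CLASSES),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    predictions = model(images)
    loss = loss_fn(predictions, labels)  # penalty when the guesses are wrong
    loss.backward()                      # how each connection contributed to the error
    optimizer.step()                     # adjust the connections accordingly

accuracy = (model(images).argmax(dim=1) == labels).float().mean()
print(f"training accuracy on the toy batch: {accuracy.item():.2f}")
```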

Another reason why AI-programmers should not be entitled to the inventions created by AI systems is that this approach would lead to patents being concentrated in a few companies. Given the existing capabilities of AI and the rate at which those capabilities are increasing, it is not hard to envision a future where most patentable inventions are created by AI, if allowed. If AI-programmers were to hold the patents for what their AI created, patents would be largely held by, or locked into, a few select companies in a few select countries.[178] “Front-runners” in AI-development are already likely to benefit disproportionately from AI.[179] Leading AI countries could experience an additional 20 to 25 percent in net economic benefits due to AI, compared to only 5 to 15 percent for developing countries.[180] Similarly, front-running AI companies could double their cash flow by 2030, while nonadopters of AI could experience about a 20 percent decline in cash flow.[181] Allowing AI-programmers to have the patents from their AI would exacerbate this inequality, concentrating even more resources in a select few countries and companies.

Similarly, an AI-programmer should not be entitled to the benefits of their AI’s inventions due to the exchange envisioned by the IP Clause. During the legal monopoly period granted by a patent on the AI itself, the same AI could theoretically have numerous patentable inventions that it creates. Therefore, the AI programmer would be entitled to numerous subsequent legal monopolies, all for the labor that it took for one invention—the AI. For these reasons, this approach would violate the Constitution and be unfair to other inventors who are not entitled to multiple legal monopolies for more “normal” or non-AI inventions.

Another policy concern associated with AI programmers holding patents for AI-inventions is that software companies that build AI, such as IBM, are unlikely to be involved in the field their AI is employed in. This policy would, therefore, fail to allocate patent rights to the companies that most value them. For example, IBM’s Watson is involved in both law and medicine.[182] However, IBM does not own a law office or a hospital.[183] Therefore, if Watson were to create a patentable invention for the legal or medical industry, it would be inefficient for IBM to have the patent. Further, because these software companies are not in the fields their patents would cover, they would be non-practicing entities (NPEs). NPEs are inefficient for society because they often impose a high social cost in the form of licensing fees paid to avoid “nuisance” suits and because they often lead to a loss of progress in an inventive field.[184] Also, these NPEs would have to undertake significant policing measures to protect the inventive results of their AI.[185]

Lastly, the end-user wanting to use AI to innovate in their field would be disincentivized from doing so if they would not be entitled to the patents on the resulting inventions.[186] As one scholar wrote:

For example, the use of IBM’s Watson AI to develop new drugs by a pharmaceutical research company might compromise the ability to receive a patent in their own name, creating a clear disincentive to using a system like Watson. Why invest the time and money but give the rewards to IBM?[187]

Of course, incentivizing innovation is a problem with every stakeholder. However, it would need to be addressed before AI can be an inventor, particularly since the goal of the IP system is to incentivize invention.

The other likely stakeholder to whom the AI’s patent could be assigned is the user who ultimately directs the AI, or the end-user. This is similar to the approach advocated by the National Commission on New Technological Uses of Copyrighted Works for who should control AI-created copyrightable works.[188] However, the National Commission’s findings were from a different era and should be disregarded. Most notably, the National Commission’s Final Report noted that “[t]he development of [the] capacity for ‘artificial intelligence’ has not yet come to pass,” and this “development is too speculative to consider at this time.”[189] Similarly, the National Commission’s finding that computers were not authors was seemingly based on the simple computers of the time: “[t]he computer may be analogized to or equated with, for example, a camera, and the computer affects the copyright status of a resultant work no more than the employment of a still or motion-picture camera, a tape recorder, or a typewriter.”[190] The National Commission was not drawing conclusions from unfair comparisons; given the facts at the time, most computers were just tools to be used for a human’s creativity.

That is certainly not the case anymore though. AI is capable of much more than it was in 1976, so analogies to cameras or typewriters are no longer appropriate. AI can now serve as a creator. As such, it would be improper to reward someone for merely commanding an AI to invent. This end-user could have used no creativity, particularly if they just licensed an advanced AI instead of having programmed their own. The patent law system does not reward mere licensees. Similarly, another problem is that AI-programmers would be less inclined to license their AI to end-users because they would want the resulting patents from the AI’s inventions for themselves. Instead, companies like IBM would be more likely to just use AI for themselves, limiting the good that AI can accomplish because AI capabilities would be kept out of the market.

The other option available would be to donate any AI-created inventions to the public domain. AI-created inventions would, therefore, have to be protected as trade secrets, if at all. There is some merit to this idea because it avoids certain problems of the other solutions, such as lock-up, NPEs, and the constitutional concerns about AIs being unable to be incentivized.

However, like the other approaches, this approach has its drawbacks. AI programmers, AI owners, and end-users would probably all disfavor it because it favors no one.[191] While the other approaches benefit one party at the expense of the others, this approach benefits nobody, so it would likely be unpopular politically; no single group would advocate for it, making it the least likely to happen. Another concern is that it would limit the range of inventions that AIs would be directed to invent. Because trade secret would be the only protection available for inventions, AIs would likely be directed toward inventions that could not be reverse-engineered or that could be protected through licenses. This would limit the overall good that AI can accomplish. A further concern is that it would encourage inequitable conduct.[192] Inequitable conduct occurs when someone intends to deceive the USPTO about something material.[193] Inequitable conduct usually appears in the context of withholding references, but it could apply to someone claiming they invented something that an AI system did.[194] Here, AI programmers would have an incentive to deceive the USPTO about who invented their product since, otherwise, it would not be protected.

Some commentators advocate for rewarding all the stakeholders: the AI programmers, trainers, owners, and operators.[195] The size of each stakeholder’s share of the patent would be proportional to the “difficulty and the extent of innovativeness in the setting of the end goals and parameters.”[196] This approach is supported by John Locke’s labour theory, under which “the labour of [a worker’s] body and the work of his own hands . . . are properly his.”[197] This is a great approach in theory because it rewards each stakeholder for their contributions. In practice, however, it would be hard to apportion contributions for each patent. For valuable patents, this would likely lead to fierce litigation battles over who contributed what proportion of ingenuity to the invention. It is therefore not something that should be encouraged.

In many cases, ownership of the patent would likely be determined by contract. However, the default rule would likely be used as a baseline in contract negotiations,[198] and, if it is inefficient, it would “impose needless transaction costs upon parties who … seek to opt out of them to reach” their desired position.[199] Courts would, therefore, need to be prepared to determine who is the default owner of a patent that an AI invented. Yet, as shown above, there are many problems associated with each stakeholder.

IX. A Brief Proposed Solution to the AI-Invention Dilemma

Perhaps the best, and perhaps the only, solution to the AI-invention dilemma is to have the person who discovered the invention be the listed inventor.[200] Section 101 of the Patent Act starts out, “[w]hoever invents or discovers . . . .”[201] Currently, the “discovery” portion of section 101 does little work due to other patent rules, such as the prohibition against patenting natural processes and abstract ideas.[202] However, there have been a few cases where it is relevant, such as Dennis v. Pitner.[203] In Dennis, the patent at issue covered an insecticide made essentially from the root of a cube plant found in South American countries.[204] The defendant contended that the patented article was a product of nature and so was unpatentable.[205] The court ultimately upheld the patent, but said this about the distinction between discovery and invention:

It is true that an old substance with newly discovered qualities possessed those qualities before the discovery was made. But it is a refinement of distinction both illogical and unjustifiable and destructive of the laudable object of the statute to award a patent to one who puts old ingredient A with old ingredient B and produces a cure for ailment C, and deny patent protection to one who discovers that a simple and unadulterated or unmodified root or herb or a chemical has ingredients or health-giving qualities, hitherto unknown and unforeseen.[206]

Under this solution, the AI produces the metaphorical “root,” which the end-user or AI-programmer ultimately patents because they “discover” it. If an AI-programmer, owner, or end-user is ultimately unsatisfied with who “discovers” the inventions that a given AI produces, the parties could modify who controls the resulting patents by contract.

This approach avoids many of the problems associated with determining that AI can invent. Much of existing patent law would not need to be modified to accommodate AI-inventorship, and it would avoid recognizing AI personhood rights. Courts would not need to determine a “default” person to award the patent to because whoever is ultimately awarded the patent is determined by who discovered the patentable AI creation. Determining who “discovered” a patented invention is an easier task than determining contribution proportions, and it provides incentives that trade secret protection does not. Consequently, this approach also avoids the constitutional issues with having to incentivize AI, similar attempts to have AIs author copyrighted works, and similar aspirations of non-human entities to obtain intellectual property rights.

X. Conclusion

There are numerous reasons why AI should not be able to be an inventor on patents. While the district court was right to deny DABUS’s patent, the more difficult question remains: what should be done with these inventions once they are created? One potential solution is for courts to refuse to recognize AI as an inventor and instead award the patent to the “discoverer” of the AI’s invention. This approach’s potential merits and drawbacks need to be considered and debated by more scholars.

 

  1. Pressley Nietering is a 3L student at the James E. Rogers College of Law. The author would like to thank Professor Bambauer and the staff at the Journal of Emerging Technologies for their helpful feedback.
  2. U.S. Pat. & Trademark Off., Public Views on Artificial Intelligence and Intellectual Property Policy, at ii (2020).
  3. See Bob Lambrechts, May It Please the Algorithm, 89 J. Kan. Bar Ass’n, Jan. 2020, at 36, 38.
  4. Id.
  5. Id.
  6. Ed Burns & Kate Brush, Deep Learning Definition, TechTarget, https://www.techtarget.com/searchenterpriseai/definition/deep-learning-deep-neural-network (last updated Mar. 2021).
  7. See generally Alan M. Turing, Computing Machinery and Intelligence, 59 Mind 433 (1950).
  8. Id. at 442.
  9. John McCarthy, Computer History Museum, https://computerhistory.org/profile/john-mccarthy/?alias=bio&person=john-mccarthy (last visited Dec. 17, 2021). While no AI has passed the Turing Test yet, there have been many close contenders. See generally Stephen Johnson, The Turing Test: AI Still Hasn’t Passed the “Imitation Game,” The Big Think (Mar. 2022), https://bigthink.com/the-future/turing-test-imitation-game/.
  10. Kyle Wiggers, Intel Debuts Pohoiki Springs, a Powerful Neuromorphic Research for AI Workloads, VentureBeat (Mar. 18, 2020, 7:25 AM), https://venturebeat.com/2020/03/18/intel-debuts-pohoiki-springs-a-powerful-neuromorphic-research-system-for-ai-workloads/.
  11. Id.
  12. Vincent C. Müller & Nick Bostrom, Future Progress in Artificial Intelligence: A Survey of Expert Opinion, in Fundamental Issues of Artificial Intelligence 555, 568 (2018).
  13. John R. Smith, IBM Research Takes Watson to Hollywood with the First “Cognitive Movie Trailer,” IBM: THINK Blog (Aug. 31, 2016), https://www.ibm.com/blogs/think/2016/08/cognitive-movie-trailer/.
  14. Bernard Marr, Can Machines and Artificial Intelligence Be Creative?, Forbes (Feb. 28, 2020, 12:42 AM), https://www.forbes.com/sites/bernardmarr/2020/02/28/can-machines-and-artificial-intelligence-be-creative/.
  15. James Grimmelmann, There’s No Such Thing As A Computer-Authored Work-and It’s A Good Thing, Too, 39 Colum. J. L. & Arts 403, 408 (2016).
  16. See Turing, supra note 7, at 450. Turing writes, “A variant of [this] objection states that a machine can never do anything really new…. A better variant of the objection states that a machine can never take us by surprise.” Id. (internal quotations omitted).
  17. Russ Pearlman, Recognizing Artificial Intelligence (AI) As Authors and Inventors Under U.S. Intellectual Property Law, 24 Rich. J. L. & Tech., no. 2, 2018, at 27 (arguing against Grimmelmann’s stance).
  18. For examples of AI performing near-inventive activity, see generally Ryan Abbott, Everything Is Obvious, 66 UCLA L. Rev. 2, 37 (2019).
  19. Emphasis added.
  20. 35 U.S.C. § 100(f).
  21. See, e.g., 35 U.S.C. § 115.
  22. Utkarsh Patil, South Africa Grants a Patent with an Artificial Intelligence (AI) System as the Inventor – World’s First!!, mondaq (Oct. 19, 2021), https://www.mondaq.com/india/patent/1122790/south-africa-grants-a-patent-with-an-artificial-intelligence-ai-system-as-the-inventor-world39s-first.
  23. Id.
  24. Id.
  25. Id. (citing Thaler v Comm’r of Pats. [2021] FCA 879 (30 July 2021) (Austl.)).
  26. Patent Examination in South Africa, Smit & Van Wyk, https://www.svw.co.za/patent-examination/ (last visited Dec. 16, 2021).
  27. Patil, supra note 22.
  28. Id.
  29. Thaler v. Hirshfeld, No. 20-CV-903 (LMB/TCB), 2021 WL 3934803, at *1 (E.D. Va. Sept. 2, 2021).
  30. For a list of AI-created inventions, see, e.g., Daria Kim, ‘AI-Generated Inventions’: Time to Get the Record Straight?, 69 GRUR Int’l 443 (2020).
  31. Kenneth Lustig, No, the Patent System Is Not Broken, Forbes (Feb. 9, 2012, 11:25 AM), http://www.forbes.com/sites/forbesleadershipforum/2012/02/09/no-the-patent-system-is-not-broken/.
  32. See Tabrez Y. Ebrahim, Artificial Intelligence Inventions & Patent Disclosure, 125 Penn St. L. Rev. 147, 151 (2020); Tim W. Dornis, Artificial Intelligence and Innovation: The End of Patent Law As We Know It, 23 Yale J. L. & Tech. 97, 110–11 (2020).
  33. See, e.g., Grantley Pat. Holdings, Ltd. v. Clear Channel Commc’ns, Inc., 540 F. Supp. 2d 724, 733 (E.D. Tex. 2008).
  34. Hybritech, Inc. v. Monoclonal Antibodies, Inc., 802 F.2d 1367, 1376 (Fed. Cir. 1986) (internal quotations omitted).
  35. One company, Cloem, takes an original claim and, using AI, can draft 50,000 surrounding claims using similar words and alternative definitions. The “cloems” may not make sense since the AI cannot process language fully, but some will just by sheer numbers. The “cloems” are then instantly published, preventing competitors from claiming rights in similar fields. See, e.g., Dennis Crouch, Would you Like 10,000 Cloems with that Patent, Patentlyo (Oct. 1, 2014), https://patentlyo.com/patent/2014/10/would-cloems-patent.html. While “Cloems” are certainly an interesting concept, their power is likely mitigated due to the enablement requirement.
  36. Jonathan J. Darrow, The Neglected Dimension of Patent Law’s PHOSITA Standard, 23 Harv. J.L. & Tech. 227, 231 (2009); see John R. Allison et al., Understanding the Realities of Modern Patent Litigation, 92 Tex. L. Rev. 1769, 1784 (2014) (finding that obviousness is one of the most litigated patent issues).
  37. 35 U.S.C. § 103.
  38. Jeffrey T. Burgess, The Analogous Art Test, 7 Buff. Intell. Prop. L.J. 63, 67 (2009) (“A reference is excluded from an obviousness analysis if it is not within an analogous art to that of the invention.”).
  39. In re Johenning, No. 93-1217, 31 F.3d 1177 (Fed. Cir. 1994) (affirming Board’s conclusion that a water bed frame and a water bed mattress were in the same field so a reference about a water bed frame constituted prior art).
  40. In re Clay, 966 F.2d 656, 659 (Fed. Cir. 1992) (“A reference is reasonably pertinent if, even though it may be in a different field from that of the inventor’s endeavor, it is one which, because of the matter with which it deals, logically would have commended itself to an inventor’s attention in considering his problem.”); see also Burgess, supra note 38, at 67 (“Analogous arts might generally be defined as those areas within which a PHOSITA seeking to solve the same problem with which the inventor was concerned would be inclined to research for a solution.”).
  41. See Standard Oil Co. v. Am. Cyanamid Co., 774 F.2d 448, 454 (Fed. Cir. 1985) (noting that a PHOSITA is envisioned as “working in his shop with all the prior art references—which he is presumed to know—hanging on the walls around him”); In re Winslow, 365 F.2d 1017 (C.C.P.A. 1966).
  42. See C & A Potts & Co v. Creager, 155 U.S. 597, 607–08 (1895) (“Indeed, it often requires as acute a perception of the relations between cause and effect, and as much of the peculiar intuitive genius which is a characteristic of great inventors, to grasp the idea that a device used in one art may be made available in another, as would be necessary to create the device de novo.”).
  43. See Application of Wood, 599 F.2d 1032, 1036 (C.C.P.A. 1979) (“The rationale behind this rule precluding rejections based on combination of teachings of references from nonanalogous arts is the realization that an inventor could not possibly be aware of every teaching in every art.”).
  44. Abbott, supra note 18, at 37 (“However, a machine is capable of accessing a virtually unlimited amount of prior art.”).
  45. Ana Ramalho, Patentability of AI-Generated Inventions: Is a Reform of the Patent System Needed?, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3168703, [https://perma.cc/W3HL-MS6M] (“The use of AI in the inventing process can cause the field of analogous arts to be broadened in practice, given the unbiased nature of AI (and therefore the real possibility that AIs will look for solutions to problems in non-analogous fields).”).
  46. Ryan Abbott, I Think, Therefore I Invent: Creative Computers and the Future of Patent Law, 57 B.C. L. Rev. 1079, 1085 (2016).
  47. Ralph D. Clifford, Intellectual Property in the Era of the Creative Computer Program: Will the True Creator Please Stand Up?, 71 Tul. L. Rev. 1675, 1680 (1997).
  48. See, e.g., Too Many Patents, Patent Progress, https://www.patentprogress.org/systemic-problems/too-many-patents/ (last visited Mar. 25, 2022); see also Richard A. Posner, Why There are Too Many Patents In America, The Atlantic (July 12, 2012), https://www.theatlantic.com/business/archive/2012/07/why-there-are-too-many-patents-in-america/259725/ (explaining problems associated with the recent increase in granted patents).
  49. Ramalho, supra note 45, at 24.
  50. Id.
  51. U.S. Pat. & Trademark Off., supra note 2, at 10.
  52. Christina MacDougall, The Split over Enablement and Written Description: Losing Sight of the Purpose of the Patent System, 14 Intell. Prop. L. Bull. 123, 127 (2010).
  53. U.S. Pat. & Trademark Off., supra note 2, at 10; Storer v. Clark, 860 F.3d 1340, 1345 (Fed. Cir. 2017).
  54. In re Fisher, 427 F.2d 833, 839 (C.C.P.A. 1970).
  55. See, e.g., Chiron Corp. v. Genentech, Inc., 363 F.3d 1247, 1254 (Fed. Cir. 2004).
  56. See Naveen Joshi, Understanding the Black Box Problem of Artificial Intelligence, BBN Times (May 18, 2021), https://www.bbntimes.com/technology/understanding-the-black-box-problem-of-artificial-intelligence.
  57. Id.
  58. Jonathan J. Darrow, The Neglected Dimension of Patent Law’s PHOSITA Standard, 23 Harv. J.L. & Tech. 227, 232 (2009).
  59. Dan L. Burk & Mark A. Lemley, Is Patent Law Technology-Specific?, 17 Berkeley Tech. L.J. 1155, 1186–87 (2002).
  60. 35 U.S.C. § 112 (“The specification shall contain a written description of the invention… in such full, clear, concise, and exact terms as to enable any person skilled in the art” to make and use the invention.).
  61. See Env’t Designs, Ltd. v. Union Oil Co. of California, 713 F.2d 693, 697 (Fed. Cir. 1983) (“The important consideration lies in the need to adhere to the statute, i.e., to hold that an invention would or would not have been obvious, as a whole, when it was made, to a person of “ordinary skill in the art”—not to the judge, or to a layman, or to those skilled in remote arts, or to geniuses in the art at hand.”).
  62. Darrow, supra note 58, at 233.
  63. Id. at 234; see also Abington Textile Mach. Works v. Carding Specialists (Canada) Ltd., 249 F. Supp. 823, 829 (D.D.C. 1965) (finding that an expert witness had extraordinary skill in the art).
  64. Darrow, supra note 58, at 234.
  65. Id. at 239.
  66. Id.
  67. Id. at 243–47.
  68. Brenda M. Simon, The Implications of Technological Advancement for Obviousness, 19 Mich. Telecomm. & Tech. L. Rev. 331, 340 (2013).
  69. Darrow, supra note 58, at 237.
  70. Id. at 248.
  71. Abbott, supra note 46, at 1124 (internal citations omitted).
  72. See Jo Best, IBM Watson, TechRepublic (Sept. 9, 2013, 8:45 AM), https://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/.
  73. Id.
  74. Id.
  75. PricewaterhouseCoopers, Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalize? (2017), https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf.
  76. David Rotman, AI is Reinventing the Way We Invent, MIT Tech. Rev. (Feb. 15, 2019), https://www.technologyreview.com/s/612898/ai-is-reinventing-the-way-we-invent/.
  77. Id.
  78. Abbott, supra note 18, at 22–23.
  79. Id.
  80. See Ernest Fok, Challenging the International Trend: The Case for Artificial Intelligence Inventorship in the United States, 19 Santa Clara J. Int’l L. 51, 72 (2021).
  81. See id.
  82. Id.
  83. See id.
  84. See U.S. Const. art. I, § 8, cl. 8.
  85. Id.; Feist Publications, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 349 (1991).
  86. Diamond v. Chakrabarty, 447 U.S. 303, 307 (1980).
  87. Kaelyn R. Knutson, Anything You Can Do, AI Can’t Do Better: An Analysis of Conception as a Requirement for Patent Inventorship and a Rationale for Excluding AI Inventors, 11 Cybaris®, no. 2, art. 2, 2020, at 1, 16.
  88. Ben Hattenbach & Joshua Glucoft, Patents in an Era of Infinite Monkeys and Artificial Intelligence, 19 Stan. Tech. L. Rev. 32, 43 (2015).
  89. A Computer Called Watson, IBM, https://www.ibm.com/ibm/history/ibm100/us/en/icons/watson/ (last visited July 21, 2021).
  90. Eliza Strickland, How IBM Watson Overpromised and Underdelivered on AI Health Care, IEEE Spectrum (Apr. 2, 2019), https://spectrum.ieee.org/biomedical/diagnostics/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care.
  91. Id.
  92. Gary Marcus, DeepMind’s Losses and the Future of Artificial Intelligence, Wired (Aug. 14, 2019), https://www.wired.com/story/deepminds-losses-future-artificial-intelligence/.
  93. Id.
  94. Bilski v. Kappos, 561 U.S. 593, 605 (2010).
  95. Id.
  96. Id. at 609.
  97. Id. (quoting J.E.M. Ag Supply, Inc. v. Pioneer Hi–Bred Int’l, Inc., 534 U.S. 124, 135 (2001)).
  98. Id. (quoting Chakrabarty, 447 U.S. at 315).
  99. See generally Alexander J. Kasner, The Original Meaning of Constitutional Inventors: Resolving the Unanswered Question of the Madstad Litigation, 68 Stan. L. Rev. Online 24, 29 (2015).
  100. See Dr. Shlomit Yanisky Ravid & Xiaoqiong (Jackie) Liu, When Artificial Intelligence Systems Produce Inventions: An Alternative Model for Patent Law at the 3a Era, 39 Cardozo L. Rev. 2215, 2239 (2018).
  101. See Bonito Boats, Inc. v. Thunder Craft Boats, Inc., 489 U.S. 141, 151 (1989).
  102. U.S. Patent and Trademark Office, Inventing AI: Tracing the Diffusion of Artificial Intelligence with U.S. Patents (2020), https://www.uspto.gov/sites/default/files/documents/OCE-DH-AI.pdf (finding that annual AI patent applications increased by more than 100% from 2002 to 2018); see also U.S. Patent No. 5,659,666.
  103. An argument could be made comparing this to Congress increasing the duration of a patent or copyright, which courts have found raises no constitutional issues. See, e.g., Eldred v. Ashcroft, 537 U.S. 186, 201–02 (2003). However, when merely extending the patent duration, Congress is redefining what it has interpreted “limited times” to be. Here, Congress would be redefining what it means to incentivize inventors, a much more radical change.
  104. See World Intellectual Prop. Org., WIPO Technology Trends 2019: Artificial Intelligence 39 (2019). The ratio of scientific publications to patents stood at eight papers per patent in 2010. Id. This number has decreased in recent years though. Id.
  105. Mark A. Lemley, The Surprising Virtues of Treating Trade Secrets As IP Rights, 61 Stan. L. Rev. 311, 313 (2008).
  106. Tabrez Y. Ebrahim, Artificial Intelligence Inventions & Patent Disclosure, 125 Penn St. L. Rev. 147, 184 (2020).
  107. Id.
  108. AI is often referred to as a “black-box,” since it is so opaque. See generally Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 Harv. J.L. & Tech. 889, 901 (2018).
  109. Kristen Osenga, Changing the Story: Artificial Intelligence and Patent Eligibility, Just Security (Oct. 25, 2021), https://www.justsecurity.org/78727/changing-the-story-artificial-intelligence-and-patent-eligibility/.
  110. Alice Corp. Pty. v. CLS Bank Int’l, 573 U.S. 208, 213–14 (2014).
  111. Id.
  112. Id.
  113. Id. at 217.
  114. Id.
  115. Id. at 217–18.
  116. Id. at 223.
  117. No. 17-CV-03049-WHO, 2017 WL 3721480, at *1 (N.D. Cal. Aug. 29, 2017).
  118. Id.
  119. Id. at *5.
  120. Id. at *7.
  121. Ex Parte Joerg Mitzlaff, Appeal No. 2016-003447 (P.T.A.B. Mar. 29, 2018).
  122. Id.
  123. Id.
  124. Id. (emphasis in original).
  125. See, e.g., Brian Higgins, The Role of Explainable Artificial Intelligence in Patent Law, 31 Intell. Prop. & Tech. L.J. 3, 7 (2019).
  126. See generally Thaler v. Hirshfeld, No. 20-CV-903 (LMB/TCB), 2021 WL 3934803, at *1 (E.D. Va. Sept. 2, 2021).
  127. Richard Seeley, Global Spending on AI Systems to Hit $98 Billion by 2023 – IDC, ADTmag (Sept. 9, 2019), https://adtmag.com/articles/2019/09/04/ai-spending.aspx.
  128. Abbott, supra note 18, at 32.
  129. Id.
  130. Figueroa v. United States, 66 Fed. Cl. 139, 149 (2005), aff’d, 466 F.3d 1023 (Fed. Cir. 2006).
  131. The Copyright Act of 1976, Pub. L. No. 94-553, 90 Stat. 2541 (1976) (codified as amended at 17 U.S.C. §§ 101–810 (2012)).
  132. 17 U.S.C. § 102.
  133. 17 U.S.C. § 101; Victor M. Palace, What If Artificial Intelligence Wrote This? Artificial Intelligence and Copyright Law, 71 Fla. L. Rev. 217, 227 (2019).
  134. Id. at 227 (“In sum, Congress and the federal courts have yet to address the issue of copyright ownership for works made by autonomous artificial intelligence.”).
  135. U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 306 (3d ed. 2017), https://www.copyright.gov/comp3/docs/compendium.pdf [https://perma.cc/RY7T-G6KE].
  136. Id.
  137. In re Trade-Mark Cases, 100 U.S. 82, 94 (1879).
  138. Palace, supra note 133, at 221.
  139. Narrative Science, EPS Estimates Down for J.M. Smucker in Past Month, Forbes (Oct. 12, 2015, 4:00 PM), https://www.forbes.com/sites/narrativescience/2015/10/12/eps-estimates-down-for-j-m-smucker-in-past-month/?sh=4a0f6c547595. The same AI is used to make articles for the Big 10 Conference Network. Steve Lohr, In Case You Wondered, a Real Human Wrote This Column, N.Y. Times (Sept. 10, 2011), http://www.nytimes.com/2011/09/11/business/computer-generated-articles-are-gaining-traction.html.
  140. James Bridle, Robots that Write Fiction? You Couldn’t Make It Up, The Guardian (Aug. 10, 2015, 6:00 AM), https://www.theguardian.com/books/2015/aug/10/robots-that-write-fiction-you-couldnt-make-it-up.
  141. See Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 439 (1984).
  142. Id. at 439 n.19 (“We have consistently rejected the proposition that a similar kinship exists between copyright law and trademark law[.]”).
  143. Id.
  144. David W. Barnes, Abuse of Supreme Court Precedent: The “Historic Kinship”, 16 Chi.-Kent J. Intell. Prop. 85, 86–87 (2016).
  145. Id. at 87.
  146. Id.
  147. Twentieth Century Music Corp. v. Aiken, 422 U.S. 151, 156 (1975).
  148. Id.; see also U.S. Const. art. I, § 8, cl. 8.
  149. See Section V.
  150. See Daniel Schönberger, Deep Copyright: Up – and Downstream Questions Related to Artificial Intelligence (AI) and Machine Learning (ML), in Droit d’auteur 4.0 / Copyright 4.0, 145–73 (2018) (“Robots do not need protection, because copyright’s incentives for creativity will and naturally must remain entirely unresponded to by them.”).
  151. 17 U.S.C. § 302(a) (2018) (“Copyright… endures for a term consisting of the life of the author and 70 years after the author’s death.”); 17 U.S.C. § 302(b) (2018) (“In the case of a joint work, … the copyright endures for a term consisting of the life of the last surviving author and 70 years after such last surviving author’s death.”); Daryl Lim, AI & IP: Innovation & Creativity in an Age of Accelerated Change, 52 Akron L. Rev. 813, 839–40 (2018).
  152. One solution could be to treat it as a pseudonymous or anonymous work under 17 U.S.C. § 302(c). This means that the copyright would last for 95 years after the year of first publication. Id. However, this would need to be addressed by Congress.
  153. See generally Briana Hopes, Rights for Robots? U.S. Courts and Patent Offices Must Consider Recognizing Artificial Intelligence Systems As Patent Inventors, 23 Tul. J. Tech. & Intell. Prop. 119, 130 (2021).
  154. Beech Aircraft Corp. v. EDO Corp., 990 F.2d 1237, 1248 (Fed. Cir. 1993) (citing 35 U.S.C. §§ 115–118).
  155. New Idea Farm Equip. Corp. v. Sperry Corp., 916 F.2d 1561, 1566 n.4 (Fed. Cir. 1990).
  156. See Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).
  157. Id. at 420.
  158. Id.
  159. Id.
  160. Id.
  161. Id. Interestingly enough, animals do have Article III standing. See Cetacean Cmty. v. Bush, 386 F.3d 1169, 1176 (9th Cir. 2004).
  162. See Ravid & Liu, supra note 100, at 2232 (explaining the various stakeholders in AI inventions); Abbott, supra note 46, at 1114.
  163. See, e.g., Andrew Griffin, Saudi Arabia Grants Citizenship to a Robot for the First Time Ever, The Independent (Oct. 26, 2017, 2:31 PM), http://www.independent.co.uk/life-style/gadgets-and-tech/news/saudi-arabia-robot-sophia-citizenship-android-riyadh-citizen-passport-future-a8021601.html (noting criticism of Saudi Arabia for extending rights to a robot that women did not have).
  164. Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231, 1262–63 (1992).
  165. Id. at 1264–66.
  166. Id. at 1272–74.
  167. Id. at 1269–71. That said, a legal person is a broader term that can encompass entities such as corporations and governments. Id. at 1239. However, this is perhaps due to their nexus to contractual relationships and interests. See Stephen M. Bainbridge, Community and Statism: A Conservative Contractarian Critique of Progressive Corporate Law Scholarship, 82 Cornell L. Rev. 856, 859–61 (1997) (discussing the “Nexus of Contracts” theory). AI lacks this contractual nexus.
  168. Palace, supra note 133, at 233–34.
  169. Abbott, supra note 46, at 1114–15.
  170. Id. at 1116; see Amir H. Khoury, Intellectual Property Rights for “Hubots”: On the Legal Implications of Human-Like Robots As Innovators and Creators, 35 Cardozo Arts & Ent. L.J. 635, 650 (2017) (“Also, the owner of the [AI] cannot claim ownership [to the IP created] because he has made no ‘value added’ contribution to the creation of the IP generated by the [AI].”).
  171. Abbott, supra note 46, at 1116.
  172. See Jason Tanz, Soon We Won’t Program Computers. We’ll Train Them Like Dogs, Wired (May 17, 2016, 6:50 AM), https://www.wired.com/2016/05/the-end-of-code/.
  173. Palace, supra note 133, at 236.
  174. See James Somers, The Pastry A.I. that Learned to Fight Cancer, New Yorker (Mar. 18, 2021), https://www.newyorker.com/tech/annals-of-technology/the-pastry-ai-that-learned-to-fight-cancer.
  175. Id.
  176. Id. Similarly, AI in self-driving cars struggled to recognize the blue stop-signs in Hawaii. Id.
  177. Id.
  178. See Palace, supra note 133, at 237.
  179. Jacques Bughin et al., Notes from the AI Frontier: Modeling the Impact of AI on the World Economy, McKinsey Global Institute (Sept. 4, 2018), https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy.
  180. Id.
  181. Id.
  182. W. Michael Schuster, Artificial Intelligence and Patent Ownership, 75 Wash. & Lee L. Rev. 1945, 1989–90 (2018).
  183. Id. at 1989–91.
  184. Thomas H. Kramer, Proposed Legislative Solutions to the Non-Practicing Entity Patent Assertion Problem: The Risks for Biotechnology and Pharmaceuticals, 39 Del. J. Corp. L. 467, 475 (2014) (discussing the toll NPEs take on society).
  185. Schuster, supra note 182, at 2000–01.
  186. Russ Pearlman, Recognizing Artificial Intelligence (AI) As Authors and Inventors Under U.S. Intellectual Property Law, 24 Rich. J.L. & Tech. 2, 38 (2018).
  187. Id.
  188. See generally Nat’l Comm’n on New Tech. Uses of Copyrighted Works, Final Report, 43–46 (1979).
  189. Id. at 44.
  190. Id. at 45.
  191. Perhaps it truly is the best solution, then, since a good compromise leaves everyone dissatisfied.
  192. See Ernest Fok, Challenging the International Trend: The Case for Artificial Intelligence Inventorship in the United States, 19 Santa Clara J. Int’l L. 51, 62 (2021). There is concern that patents are already not disclosing that AI invented a particular invention. See Ben Hattenbach & Joshua Glucoft, Patents in an Era of Infinite Monkeys and Artificial Intelligence, 19 Stan. Tech. L. Rev. 32, 44 (2015) (“Indeed, patents have already been granted on inventions that were designed fully or in part by software.”).
  193. Therasense, Inc. v. Becton, Dickinson & Co., 649 F.3d 1276, 1290 (Fed. Cir. 2011) (ruling that materiality and intent are two separate requirements).
  194. See Fok, supra note 192, at 62.
  195. See Ravid & Liu, supra note 100, at 2243.
  196. Id. at 2242.
  197. Id. at 2241 (internal quotations omitted).
  198. See generally Omri Ben-Shahar & John A. E. Pottow, On the Stickiness of Default Rules, 33 Fla. St. U. L. Rev. 651, 682 (2006) (concluding that default rules are often sticky).
  199. Id. at 651.
  200. This approach has been noted by prominent commentators in the field. See Fok, supra note 192, at 62; Abbott, supra note 46, at 1098.
  201. 35 U.S.C. § 101 (emphasis added).
  202. See generally Craig Edgar, Patenting Nature: Isn’t It Obvious?, 50 Creighton L. Rev. 49 (2016); Joshua D. Sarnoff, Patent-Eligible Inventions After Bilski: History and Theory, 63 Hastings L.J. 53 (2011).
  203. 106 F.2d 142 (7th Cir. 1939).
  204. Id. at 143.
  205. Id.
  206. Id. at 145.