Arizona Law Journal of Emerging Technologies
Volume 4 Article 2, 02-2020

HUMANS AND MACHINES: CHALLENGES OF TREATING A PERSON LIKE A MACHINE

Briseida Sofía Jiménez-Gómez [1]

“Once there is superintelligence, the fate of humanity may depend on what this superintelligence does.” (Nick Bostrom)[2]

I. Introduction

This article seeks to reflect on the difference between humans and smart machines. Humans have long fantasized about creating intelligent machines. Intelligent machines are fed on personal data from individuals, which erodes those individuals’ privacy. Machines will become more “intelligent” over time as they have more data to analyze, potentially helping humans make better decisions.

However, human interactions with an intelligent machine in private spheres, such as at home or at the doctor’s office, may affect how humans relate among themselves. A risk is that humans will tend to dehumanize themselves as interactions with smart machines become more prevalent. Not only will humans be watched in traditional public spaces, such as on the streets, but also in more intimate contexts, by companion robots that capture, through multiple sensors and cameras, every detail of a human life.

Section II offers a perspective on the characteristics that make humans different from intelligent machines and questions whether society needs a sentient robot. Section III examines the education of programmers and developers of artificial intelligence (AI), as a minority is building the future of most of the world’s population. Section IV discusses the possible consequences of the addiction by design of social networks and of the many applications whose use by humans is increasing. These consequences tend to disconnect humans, as they replace human interactions in the real world with online interactions. The traceability of personal information would give power to the corporations and governments that have access to it. The challenge of the surveillance society is to restrain the absolute power of these entities; otherwise, a social credit score would determine not only individuals’ present but also the future of their offspring. Finally, Section V expresses the compelling need for awareness of the transformative changes that AI may bring and argues that serving human freedom should be the final goal of AI.

II. The Real Difference

a. Humans Have a Soul and Empathy

Humans have souls and machines do not. This is one difference between a human and a machine. The premise of Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (1968) was that humans could be distinguished from androids by their capacity for empathy. Empathy is founded on mirror neurons, which fire in one individual to mimic the emotional state perceived in another. According to Ekman, there are three types of empathy: cognitive, emotional, and compassionate.[3] First, cognitive empathy is “knowing how the other person feels and what they might be thinking[,]”[4] but it does not mean that a person feels the same. Second, emotional empathy refers to “physically feel[ing] what other people feel, as though their emotions were contagious.”[5] And third, compassionate empathy makes humans understand a person’s predicament, feel with them, and, on occasion, spontaneously move to help them.[6]

Communication between humans and machines does not need empathy. In fact, interactions between machines and humans may be counterproductive. For example, a study shows that children immersed in technology may be less likely to believe that living animals have the right not to be harmed.[7] Moreover, such children are more exposed to privacy violations, not only in the present but also in the future. The data gathered about children and teenagers may be tied to them by third parties throughout their lives.[8]

Recently, technology has evolved to create empathy machines able to admit self-doubt, tell jokes, apologize for mistakes, share personal stories about their “life,” and, most strikingly, talk about how they were “feeling.”[9] However, what we understand as the personal life of a machine is controversial. What type of life are we discussing? Machines do not have a cultural history, so they cannot be conscious. Imperfections, and knowing that we are going to die, make us human. In contrast, machines never die, provided they remain connected to a power source.

A 2020 experiment carried out at Yale University shows that interacting with empathy machines produces better conversational dynamics than interacting with neutral machines.[10] Humans need to feel empathy to be at ease. Machines could be programmed so well as to seemingly have empathy. In any case, that empathy can only be cognitive, not emotional.

Frequent interaction between humans and machines accustoms humans to a certain type of response. These responses adapt ever more closely to humans’ needs. At the beginning, the machine is not so “perfect.” Over time, the machine gathers information about an individual’s personal preferences on all types of subjects, such as products bought, food ordered, activities done outside the home, or conversations with family. Step by step, humans will increasingly depend on the machine as it offers more satisfactory answers. The machine solves the problems posed, organizes the agenda down to the last detail, and informs the human of people who may be interesting to meet. In contrast, humans are not emotionless. We have good days and bad. We need to express ourselves, and sometimes we lose our temper.

b. Power Imbalance Between Humans and Machines

In 2016, the AI-powered player AlphaGo provided a significant example of the power of machine learning with its defeat of world master Lee Sedol in the strategy game Go.[11] The variables in Go are practically infinite, which demands human forethought and capacities beyond the simple analysis of one move and the next. In 2019, Lee Sedol retired because he considered AI like AlphaGo invincible at the game.[12] The very idea of comparing a machine with a human seems absurd.

Nevertheless, humans will compete with robots for jobs. In this regard, one issue is that machines do not have the same basic needs for survival (e.g., eating and sleeping), and for that simple reason, they are more efficient. Another issue is that machines feed on human data, and a machine’s potential usefulness depends on the strength of its computational power, the power source to which it is connected, and the space it occupies. In turn, humans have limits, and human health depends fundamentally on the immune system, which global pandemics (like COVID-19) put to the test. Furthermore, the human brain is limited by the natural structure of the body that contains it. Machines are more efficient from an economic point of view, and in the future, many workers may become unemployed as machine automation simply replaces humans in tasks where the machine is more efficient. For example, a 2013 study estimates that 47 percent of the total US labor market is at risk of replacement by machine automation,[13] including positions in transportation, logistics, office and administration support, and production and service occupations.[14] The corresponding estimates for China and India are 77 percent and 69 percent, respectively.[15]

Despite the increased automation, it is unlikely machines will outperform humans in all occupations because “emotional empathy attunes us to another person’s inner emotional world, a plus for a wide range of professions, from sales to nursing…”[16] Therefore, intrinsic characteristics of humans will still be needed in the future labor force.

c. Is There Any Need for “Sentient Robots”?

In the development of super-intelligence, one begins to speak of “sentient robots,” also known as conscious robots, which are endowed with senses.[17] Machines may begin to be created to accompany humans in the last moments of life. I wonder what need exists to create a robot that speaks like a human to accompany humans to death, considering that millions of people will be unemployed. Why invest such a large amount of money and energy in creating something that humans by nature do best?

Even in the field of psychotherapy, AI may be capable of substituting for humans.[18] A patient may be less shy talking to a machine than to a doctor. However, here again, the machine is getting all the information. Humans gradually become naked before it. It is a slow process in which humans lose intrinsically human characteristics while the machine receives more personal data and, consequently, more power. Hopefully, machines can help diagnose and treat patients, but a human professional will need to supervise the work, and in any case, patient privacy should be protected.

However, one technological advance that may have considerably better prospects is the surgical robot. If the instructions are accurate, a robot surgeon could perform an operation. In any event, the robot surgeon should not replace the human surgeon. Rather, robots should assist doctors, under the command and supervision of a person. This would help society advance and avoid the liability problems that could arise where robots are completely autonomous.

III. Education of Technology Gurus

a. Cross-Disciplinary Education Needed

Many of the individuals developing artificial intelligence in the United States and China are not trained in philosophy or comparative literature. Professor Amy Webb, in her book The Big Nine, criticizes the training of the leaders of the nine companies that run the world of artificial intelligence.[19] They have been educated at the best American and Chinese universities, but in educational systems that emphasize programming skills to the detriment of other pursuits. Such programs focus on skills in programming languages, statistics, computational biology, and game theory. However, AI leaders have not studied philosophy, comparative literature, or the history of colonialism. They have not been trained to detect bias, nor to include social diversity in their teams. The problem is that they are training algorithms to make automatic decisions that humans will no longer make, but that will still affect humans. An illustrative example of bias in the creation of digital voice assistants (Apple’s Siri, Amazon’s Alexa, Google Assistant, and Microsoft’s Cortana) is the fact that they all have female voices by default.[20] Different arguments have tried to explain this design: some appeal to people’s preferences,[21] others consider that women tend to articulate better than men, but the reason seems to be that the algorithm was trained with only one gender’s data.[22]

Machines will make decisions for all of humanity, but they are built by a small group of men, mostly with Democratic ideologies and similar values in the United States, without considering the diversity that exists both in the United States and in the rest of the world.[23] China is not on a better path, as it is led by an authoritarian government that directs the international relations of the Chinese multinational companies established in the economic centers of the US and Europe.[24]

Meanwhile, there are calls to make the citizenry “data-literate” in order to “read, use, interpret, and communicate about data, and participate in policy debates about matters affected by AI.”[25] Others have suggested that learning to code should be part of legal education.[26] Despite these suggestions, little attention is paid to the education of programmers within the AI field. A cross-disciplinary education for designers and leaders of companies developing artificial intelligence technologies could help raise awareness of the difficult ethical questions posed by AI development and help tackle discrimination based on gender and minority status.

Another way to increase awareness of the risks posed by AI is to diversify the number of employees in technological areas. The European Union’s approach to AI explicitly recognizes that “[p]articular efforts should be undertaken to increase the number of women trained and employed in this area.”[27]

The truth is that humans may soon be forced to implant a chip for different reasons. One reason might be to avoid exclusion from the modern artificial intelligence society and so remain able to compete in the future world. However, this way of thinking will only be an illusion because humans will stop being competitive in any pure sense; that is, humans will no longer be able to compete with machines. Machines, light years ahead in intellectual skills, will do nearly any sort of task better than humans. Our freedom may not completely disappear, but it will certainly be diminished by technological developments.

b. Far-Reaching Influence on Society

Programmers and app developers hold in their hands the ability to shape the brains of the people who later use their applications. Investigations into how the design of products or services affects the human brain are currently underway.[28] Apps are usually designed for simplicity, so that we have no need to think. These apps carry the same biases that exist in the real world. If programmers think that women are second-rate citizens[29] or that Black people are not on the same level as white people, their biases may be consciously or unconsciously reflected in the algorithms they program.[30] People categorized by AI will suffer the consequences of automatic yet biased decision-making by such algorithms. One example would be a group that is offered only one type of credit card, with higher interest rates, because its members belong to a certain ethnic group. An individual categorized in that group, whatever the category, will not be able to escape the effects of belonging to it, which will affect access to credit throughout that person’s life. Therefore, the algorithms that were created to facilitate credit card applications and to automate the process of granting them become a challenge if people are classified by ethnicity and a credit score is awarded simply for that characteristic, inferred from the neighborhood where the individual lives.[31] Likewise, these biases could affect not only credit scores; a person’s social network could also be used to determine the likelihood that the person will commit a crime.[32] It is nothing new that searches for African American-sounding names in the US are more likely to return ads suggesting an arrest record,[33] or that, in the name of crime prevention, black people are 40 times more likely to be stopped and searched in the UK.[34]
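
To make the proxy mechanism concrete, below is a minimal, hypothetical sketch in Python. It does not describe any real scoring system: the neighborhoods and approval history are invented solely to illustrate how a model that never sees ethnicity directly can still reproduce a historical ethnic disparity through a correlated variable such as neighborhood.

```python
# Hypothetical illustration of proxy bias in credit scoring.
# All data are invented; no real scoring system works exactly this way.
from collections import defaultdict

# Hypothetical historical decisions: (neighborhood, approved?).
# Suppose past human decisions under-approved applicants from
# neighborhood "B", which correlates with a protected ethnic group.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

# "Training": learn the historical approval rate per neighborhood.
totals, approvals = defaultdict(int), defaultdict(int)
for neighborhood, approved in history:
    totals[neighborhood] += 1
    approvals[neighborhood] += approved  # True counts as 1

def score(neighborhood: str) -> float:
    """Score a new applicant purely from the learned neighborhood rate."""
    return approvals[neighborhood] / totals[neighborhood]

# Two equally creditworthy applicants receive different scores: the model
# never sees ethnicity, yet neighborhood stands in for it.
for n in ("A", "B"):
    print(f"Applicant from neighborhood {n}: score {score(n):.2f}")
```

Dropping the protected attribute from the data is therefore no guarantee of fairness: as long as a correlated variable remains, the historical disparity is simply relearned.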

Therefore, bias is not a novelty;[35] what is new is the computational ability to optimize decision-making for the sake of process efficiency, which means that human biases are amplified in human-made machines. People who need such artificial intelligence, or who are subjected to an application that uses it, will suffer the consequences of automatic decision-making by algorithms designed to ease human life. Hence the importance of ethics and of the type of values that are instilled in humans from an early age. Those beliefs are difficult to change later because, most of the time, they sit inside our minds without our being conscious of them at all.

IV. Effects and Further Consequences: From Addiction to the Data-Driven Economy

a. Towards Disconnected Societies

We are heading towards a disconnected society. Better put, there will be more and more “disconnected” people in the world, despite being connected to the internet twenty-four hours a day, seven days a week. It seems like a paradox, but social networks do not connect more human beings; they instead alienate them further because they create addiction.[36] Social networks are not designed to improve humanity, although this may have been the objective at first, in Facebook’s initial dreams of connecting humanity.[37] What happens is that people are bombarded with advertisements directed specifically at the individual, thanks to the activity they carry out in the application, which further ensures that people spend more and more time “connected” to the network. While connected, people stop feeling the real world and the touch of their fellow human beings. It is no wonder that the children of Silicon Valley workers are not educated through technology at early ages, but rather use traditional media, such as a blackboard.[38] These same workers also discourage their children from having a profile on a social network or, if they do, limit their use and encourage them to play with their friends in the real world. When a child interacts in the real world, he or she learns lessons that do not come from reading books. Even a child who reads and studies still needs to experience those lessons firsthand.

Relationships between people are not easy because it takes time and patience to understand one another and to learn how to live together. This is not something learned in one day, but over a lifetime. While we are alive, we are in a constant search to balance and improve ourselves along this path. A way to train an algorithm to do this does not exist, and even if it did, the machine would not stop being a machine, despite achieving a high degree of conceptual intelligence. Emotional intelligence is used to knock down barriers in many different fields of our lives, but it is not built in twenty-four hours. It is precisely what successful people in the 21st century have to master, and it is formed day by day in relationships with our peers and relatives, as we get to know ourselves through contact with others.

As the machine fills up with information about you and your performance, a time will come when the machine will know more about your past, present, and future circumstances than you have ever been aware of or will ever know. At that point, the machine could be considered to have become your boss. This is especially true if you have lost the ability to relate to other people, to ask for help, or to speak and interact with humans. Here, the machine has, without a doubt, turned from a helpful companion into your absolute boss. Considering that some predictions for 2020 were that many people would have more conversations with digital assistants than with their spouses,[39] and that technology can replace the deepest and most intimate relationships humans have, as the emerging trend of sex robots shows,[40] the challenge of AI affecting human interactions is real and palpable. For instance, some campaigns oppose the marketing of sex robots precisely because the businesses selling them claim that, if you do not have a friend or partner, a robot is there for you.[41]

By contrast, friendship is based on intimacy, attachment, and reciprocity.[42] People will lose the patience needed to establish relationships with humans, as humans are imperfect and finite beings. The risk is that people become dehumanized by demanding from others what they demand from machines. For example, in the ideal scenario of Amy Webb’s book The Big Nine, artificial intelligence machines can help you find the partner you want: you can specify whether you want a temporary relationship or a formal one headed towards marriage, down to finding a person who laughs at your jokes.[43] As positive as this situation may seem, the question is what will happen the day you tell that person that something bothers you, and he or she does not even consider your request. The answer is clear: it is over. Why should a person strive in a relationship, however “ideal” it might seem, when there is the option of using the machine again to find someone else? People learn through experience, and it is uncertainty that shapes them and makes them wiser and stronger. Faced with the possibility of better-than-human responses to everything, human beings will begin to be expendable.

First, humans will become expendable in mechanical and purely repetitive activities, then in those that require a certain intellectual performance, such as deciding how to reduce costs in a company, and finally, in relationships with others. The reason is that relations among humans will decrease, in the same way that social contact has decreased due to the coronavirus pandemic in 2020. People are advised to maintain a social distance of six feet and not to leave home, except for what is strictly necessary.

However, people have gravitated to the online world, where they carry out activities that can be performed virtually, such as holding a law class or a business meeting. People are leaving traces of their connections through their IP addresses and the identifiers of their smartphones and tablets, traces they previously did not leave.

b. Compulsory Transition to a Virtual World

This transition to the massive use of electronic media is not an option; it is a must. It is a requirement to be able to continue studying or working and to remain part of the world. The same will happen with the use of artificial intelligence: what at first may seem like an option will in time become an obligation. People will be marked by those who manage the spheres of power, e.g., governments and large corporations. Our political representatives, those who do not represent us,[44] will have access to all our personal information. This is already a reality in China[45] and, under specific programs, in the USA.[46] In fact, in some countries where human rights have tended to be respected in recent decades,[47] several governments have moved to temporarily geolocate the phones of their nationals under the reasonable pretext of preventing the spread of the pandemic.[48] In principle, these measures should be temporary and proportional.[49] Moreover, the European Commission recommends that these applications be voluntary: sharing data with national health authorities would require the consent of the person once infected with COVID-19, and the applications should be deactivated automatically when the pandemic is under control.[50] Still, the public health justification may be stretched over time, in a similar way that US authorities used the fight against terrorism and the protection of national security.[51] September 11, 2001, changed controls in airports, the transfer of data between countries of origin and destination, and the interception of communications in both the analog and the digital world.

The pandemic has already suspended certain patients’ rights and waived certain sanctions and penalties for healthcare providers in the United States pursuant to the Health Insurance Portability and Accountability Act of 1996 (HIPAA).[52] To facilitate telemedicine, “providers are encouraged to notify patients that these third-party applications potentially introduce privacy risks.”[53] However, if some technologies are HIPAA compliant, why should patients be exposed to privacy risks when using non-compliant apps like FaceTime, Facebook Messenger, Google Hangouts, and Skype? The crisis should not entail a deterioration of acquired rights; in particular, the coronavirus seems like a perfect breeding ground for admitting privacy infringements in exchange for a free product.[54] Exempting health care providers from sanctions takes for granted that the value of privacy is diminishing, and there is no assurance that patient privacy protection will return to normal after the pandemic. Likewise, some applications share data with other applications and disregard users’ consent, even when users do not have an account with the second application.[55]

On the other hand, considering the importance of public health, the long-term effects of the measures taken must be studied. In principle, isolation measures are temporary. As Choukèr has shown, confinement extended over time can have negative effects on the human mind.[56] Hitler’s concentration camps and the Stalinist camps are dark examples of the problem of isolation. The considerable difference in the coronavirus pandemic is that today we are fighting an invisible enemy that recognizes neither borders nor social classes. This reminds us of something very important: that we are simply human, which is to say that vulnerability also characterizes us. A recent article subtly suggests that some consequences of social distancing are heart disease, depression, and dementia.[57] Moreover, it emphasizes that we need not lose human contact because technology, especially social networks, messaging applications, and teleconferencing, keeps us close to the people we love. This is precisely the Trojan horse of our society and could be the end of society as we now understand it.

Being confined at home in today’s age seems impossible without the help of technology, which has been our battle-tested friend. While we connect, whether for the pleasure of being in contact with friends or out of the necessity to continue working, we generate a huge amount of data. Data is the gasoline of artificial intelligence machines, the very machines with which humans cannot compete. Our privacy is now at risk more than ever, and we could lose it forever. Meanwhile, the best universities are using cutting-edge technology to continue classes, while some companies and government agencies (such as SpaceX and NASA) are prohibiting the use of that same cutting-edge technology for work communications[58] because of privacy invasions and security problems.

Nonetheless, it is no less true that creating a digital footprint is now a necessity for humans to continue with their jobs and all kinds of usual activities. It is not just about working or studying, but about how humans simply spend their time. Humans are obviously not a stupid or unintelligent species. However, the manipulation of people is evident, in particular of those who lack knowledge or simply do not have time to read every privacy policy of every application they use.[59] Sixty-nine percent of Americans participating in a 1990 survey knew perfectly well that it was wrong for companies to sell their personal data without consent.[60] They did not need to study law or have any specific knowledge to discern good from evil. Some businessmen now aligned with artificial intelligence development have publicly declared that we are facing the end of privacy.[61] These kinds of positions and intentions should worry humans.

Moreover, a shift of responsibility for the acts of humans onto robots should be prevented. Robots are just human-made machines. Therefore, humans should not shift their responsibility in certain situations, such as a medical operation or a business decision, onto robots. Algorithms are not autonomous; they depend on the data that humans enter. Certainly, algorithms can learn by themselves (e.g., machine learning) and reach results that the human mind cannot understand. But whoever invests in and benefits from the creation of the algorithm, be it a company, a group of companies, a government, or an individual, must be responsible for the consequences of that algorithm. Industrial and intellectual property rights pass from workers to the company under an employment contract, although with different specificities depending on the applicable law,[62] so that the company is enriched by the inventions of the workers in its charge. As such, responsibility for the effects caused by algorithms created under an employment contract cannot fall exclusively on workers. What incentive would inventors or workers have then?

Governments must control the purposes of the algorithms being built and the data they are fed. We cannot continue in a lawless world on a matter where, by all warnings, the changes for the human race will be profound. The White Paper on Artificial Intelligence – A European Approach to Excellence and Trust is a good start.[63] Humans must have the last word over machines. Therefore, humans should begin to regulate what type of artificial intelligence they want to develop and for what purposes. The repurposing of artificial intelligence and robotics as weapons seems to be the main concern of a considerable number of businessmen, including Elon Musk, who have campaigned before the United Nations in favor of banning robots that kill people.[64] One solution could be to establish international regulations against the research into, and financing of, the creation of “killer” robots. A fundamental problem appears to be the lack of a universal definition of what lethal autonomous weapon systems are.[65] This debate may be linked to other biological-legal debates such as stem cell research or the cloning of human beings. An example of the different perspectives worldwide is the fact that stem cell experiments continue to be carried out in China, while they are prohibited in the European Union.[66] This means that agreeing on the values of humanity will not be an easy task, especially when different economic philosophies prevail in the most powerful nations.

However, efforts to mobilize civil society have recently been successful. The International Campaign to Abolish Nuclear Weapons (ICAN), whose work was recognized with the 2017 Nobel Peace Prize, played an important role in demanding urgent action to end nuclear weapons.[67] The non-governmental organization’s work materialized in 2017, when the United Nations Treaty on the Prohibition of Nuclear Weapons (TPNW) was adopted.[68] This type of international collaboration among countries can have an impact on building a better society once fifty countries ratify the treaty, thereby giving it legally binding status. So far, only 39 countries have ratified the TPNW, and the possibility that more countries, particularly those possessing nuclear weapons, could follow remains on the table.[69]

c. Risks of Combining the Surveillance Society and the Communist System

The challenges of treating a person like a machine are enormous because machines are going to make relevant decisions. An illustrative example is the current social reputation system in China, where each person is assigned a number of points and every activity in their life is monitored and affects their future.[70] There are cameras everywhere, and there are increasingly sophisticated systems to deduce exactly which people are walking on a street. Facial recognition systems make it possible to know “who is who” in areas of high population density, such as metro or train stations. In fact, people are constantly being monitored in real time, because each individual’s smartphone provides geolocation. This means that it can be accurately determined what type of work an individual has, what their family relationships are, what shops they frequent, what products they consume, or whether they usually go to the hospital. In principle, personal data can be used to offer the individual better services and products, or even better healthcare. Under HIPAA, data must be de-identified to be used legally, but the law does not address the challenges of reidentification.[71]
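
To illustrate the reidentification challenge, the following is a minimal Python sketch of a classic linkage attack in the spirit of Sweeney’s work cited above: a “de-identified” health dataset is re-linked to identities through quasi-identifiers such as ZIP code, birth date, and sex. All records and field names are invented for illustration.

```python
# Hypothetical illustration: "de-identified" records re-linked via
# quasi-identifiers. All data below are invented.

deidentified_health = [
    {"zip": "02138", "dob": "1960-07-01", "sex": "F", "diagnosis": "breast cancer"},
    {"zip": "02139", "dob": "1985-03-12", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "dob": "1960-07-01", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "dob": "1985-03-12", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def reidentify(health_rows, voter_rows):
    """Link records whose quasi-identifiers match, restoring identities."""
    for h in health_rows:
        key = tuple(h[q] for q in QUASI_IDENTIFIERS)
        for v in voter_rows:
            if tuple(v[q] for q in QUASI_IDENTIFIERS) == key:
                yield v["name"], h["diagnosis"]

for name, diagnosis in reidentify(deidentified_health, public_voter_roll):
    # The "anonymous" diagnosis is now attached to a named person.
    print(f"{name} -> {diagnosis}")
```

Stripping names is therefore not anonymization: whenever the combination of remaining attributes is rare, a single join against a public dataset restores the identity.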

Without protection against reidentification, the risks of discrimination increase exponentially and are perpetuated through decisions taken by algorithms. For example, imagine that a person who had previously applied for several jobs has now been diagnosed with breast cancer. An unchecked flow of data could make it possible for the algorithm used by a potential employer to obtain that person’s information and, therefore, reject her application, regardless of her worth for the job. An employer logically does not want to incur the cost of hiring a person whose health is expected to worsen. Next, imagine that the employer in question is the State; the same policy could be applied. The State could make the same decision as the employer and very “reasonably” justify that a certain action grounded in economic reasons is “beneficial” for society, for example, that a certain individual should sacrifice herself for the common “good.” Considering that there is an excess of labor to fill a few jobs, this logic can be expected to occur during the period of transition to the artificial intelligence economy. Safeguards are emerging: a US legislative proposal would require certain entities to conduct assessments of existing and new high-risk automated decision systems.[72] One complementary solution would be to protect individuals by requiring a natural person, instead of a computer, to make or review any decision based solely on automated processing (an algorithm).[73]
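
As a minimal sketch of that complementary safeguard, the following Python fragment refuses to finalize any decision based solely on automated processing unless a natural person reviews it. The class, field names, and reviewer identifiers are hypothetical, invented for illustration; this is one possible reading of such a requirement, not an implementation of any statute.

```python
# Hypothetical sketch: no solely automated decision becomes final
# without review by a natural person. Names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    applicant_id: str
    automated_outcome: str        # e.g., "reject", produced by the algorithm
    solely_automated: bool        # True if no human was involved so far
    final_outcome: Optional[str] = None
    reviewed_by: Optional[str] = None

def finalize(decision: Decision, human_reviewer: Optional[str] = None) -> Decision:
    """Refuse to finalize a solely automated decision without a human reviewer."""
    if decision.solely_automated and human_reviewer is None:
        raise ValueError("a solely automated decision requires human review")
    # In a real system the reviewer could also overturn the automated outcome here.
    decision.final_outcome = decision.automated_outcome
    decision.reviewed_by = human_reviewer
    return decision

decision = Decision("applicant-42", "reject", solely_automated=True)
finalize(decision, human_reviewer="case-officer-7")  # allowed: a person reviewed it
print(decision.final_outcome, decision.reviewed_by)
```

The design choice is that the pipeline fails closed: the algorithm may propose, but only a named human may dispose.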

On the one hand, the machine is making decisions based on the parameters entered by humans. However, when the State justifies control over all spheres of an individual’s life by allegedly solving a lack of trust in society, the State becomes not only a political power, but also an absolute social power. According to Acton, power tends to corrupt, and absolute power corrupts absolutely.[74] Absolute power is thus a phenomenon of fearsome, certain corruption, in contrast with mere power, which only tends to corrupt.[75]

On the other hand, reputation systems are nothing new. In rural China, “witchcraft accusations act as punishment for those who do not cooperate with local norms.”[76] Women labelled as witches are supposedly untrustworthy, and, therefore, others conform out of fear of being labelled as such. The problem emerges once labels are applied to adult women heads of household, whereby the trait is transmitted to their children.[77] To understand these ancient customs, research studies have shown that “stigmatization originally arose as a mechanism to harm female competitors.”[78] Labelling women as witches undermines trust and social cohesion in a society.[79] Basically, “labelling may have become a way for people to get ahead of their rivals and gain a competitive advantage in reproduction or resources.”[80]

This deep-rooted cultural belief can be compared with the current social credit system in China. Particularly shocking is the fact that an individual’s social credit score affects her ability to buy a flight ticket and the school options of her children.[81] The social credit system may be transparent, but it impedes social mobility because it perpetuates the social class to which a person belongs. Humans’ destiny is thus determined under State control, because each person’s score is implicitly inherited, to the point where it affects the societal progress of their offspring. Because it is more difficult to recover from anything that has negatively affected an individual, AI erects far bigger, even insurmountable, barriers than any we have already witnessed.

d. Dehumanizing Humans

If a pervasive system controlled by an authoritarian State is in place, then humans are likely to be treated in a dehumanizing manner. That is because human beings are moved by passions and many times act against the rules. Other times, the rules are not fair. Anyone can get into trouble or cross a certain line in her life. However, under a social credit system, the events that follow a negative act will never be forgotten. Thus, people will be afraid of taking certain actions at all.

The development of life and species occurs largely because of diversity, which is the basis of life on the planet and, therefore, the basis of human beings. If we consider that we can encompass the worth of a human being in a score and give that score weight in every vital aspect of a person’s development, e.g., access to housing, public employment, education, public transport, and international mobility, we eliminate the essence of the person. In Western countries, credit scoring lacks transparency and quite often entails discriminatory and arbitrary results.[82] Without adequate safeguards, humans run the risk that algorithmic predictions will affect multiple aspects of their lives because, while there may be areas in which a person is very trustworthy, there may be others in which the same person is not so reliable. The problem with social scoring is that its footprint is global and affects all areas of a person’s life.

This paradigm induces a person to behave like a “perfect” person from the point of view of the established orthodoxy, even if that is the opposite of how the person desires to behave. If there is no other option, a person under massive surveillance feels pressure to behave in a certain way and is therefore inexorably directed towards imbalance and disharmony. That is, there is no harmony between that person’s thoughts, actions, and emotions, and her disharmony is transmitted to others. That process leads to fear, which forces one into specific habits. Because things end up being done out of fear, society gets worse people as the fear perpetuates itself.

Finally, it leads to human beings behaving like machines, because people are treated like machines that have no emotions, no passions, and would never break a rule. There is no option to be different. The system will merely create an appearance of trustworthiness.

Human beings will end up behaving like machines out of fear. Individuals will behave according to what society expects of them, not genuinely but out of fear; therefore, they will not act constructively.

V. Return to Humans’ Nature

AI needs to be approached in an ethical way to minimize the risks it entails for humanity.[83] The OECD Principles on Artificial Intelligence can be a good start, as forty-four countries have adopted them.[84] However, they have two main shortcomings. First, China, a big player in artificial intelligence development, is not a member of the OECD. Second, the OECD principles are not yet enforceable, which limits the effectiveness of the international framework. Problems arise when artificial intelligence is used against humans at the command of a minority seeking to secure and maximize its power. Lack of privacy is used against human beings as well. AI, combined with authoritarian rule and surveillance by the State, inflicts the greatest damage on our own species because it disallows diversity and dissent.

AI could have devastating consequences for humans if society forgets the central role of diversity in the development of the human race and focuses instead on maximizing corporate profits and State efficiency. The value of a human being cannot be measured by a mere credit score. A credit score might be a good system for granting access to a loan or for preventing gender and racial discrimination. But using a social score to gate access to everything a person needs in life could produce the opposite of the desired result.

Technology should be developed to achieve and optimize the freedom of human beings. The ultimate goal of AI should be to amplify human liberty, to help humans become more autonomous than they would be without it. AI should complement human activities and should not be able to determine a definitive result affecting a person without human intervention. Therefore, caution should govern whenever artificial intelligence replaces human choices, because the possibilities of discrimination, in particular the risk of exacerbating gender and racial discrimination, increase and may, in turn, disadvantage humans indefinitely.

The principles of all the arts, not merely skills, should be taught, so that people can develop their own independence and reach their fullest potential. In other words, critical thinking about each field of science and art is essential. Humans, people of flesh and blood, have limits, but also virtues such as empathy. Machines can act in a way that looks like empathy, but they simply do not have it; they can only be programmed for it, which is the critical difference. Development as a species cannot make humans servants of machines. The people who design the machines bear a great deal of responsibility, but this does not mean that those who do not design them bear none. Even though keeping informed is not an easy task, citizens must demand information regarding advances in artificial intelligence. That is why all professions, from journalists to lawyers, are necessary and will remain relevant in an artificial intelligence society.

People should train in the arts of music, literature, and painting, because human creations will continue to be valued. This does not mean that machines cannot make better creations than humans; in fact, they have already shown they can. Machines can be superior to humans because they can acquire knowledge so quickly that it would take a human a thousand lifetimes to match what a machine can do. However, people will continue to value creations from other human beings because they know a human creation cannot be infinite, unlike a creation derived from artificial intelligence.[85]

The human being, in his blindness of ambition and immeasurable ignorance of the truth, presses on with artificial intelligence. The evolution of human beings must not contradict nature. What are we? We must return to what we are. We are human, and, remembering Seneca, our only freedom is wisdom. Therefore, if the stimuli we receive from social networks push us towards imbalance and addiction, we have to counter them with more harmony and more humanism. We must cultivate moral values, meaning we must direct our attention towards our own judgment of what is worthwhile, based on our reflections on the world, on where we would like to live, and on the type of people we would like to be.[86] In recent decades, emphasis has been placed on technical professions, while humanistic degrees, which offer a narrower range of possibilities in the job market, have been vilified. However, the people who program the machines will need not only mathematical but also humanistic knowledge, such as comparative literature, philosophy, or psychology, in order to create better machines. Machines, smarter than people, should help them make better decisions without making humans servants of the machine. If a choice is required, then people should choose dominion over machines rather than enslavement by them.

 

  1. RCC Postdoctoral Fellow at the Harvard Law School Institute for Global Law & Policy. Visiting Scholar Seneca Foundation at the Center for Transnational Litigation, Arbitration and Commercial Law, New York University Law School. PhD in Law Complutense University (Madrid). LL.M. College of Europe (Brugge). Thank you to the editors of the Arizona Law Journal of Emerging Technologies for their hard work in preparing this article for publication.
  2. Nick Bostrom, What happens when our computers get smarter than we are?, YouTube (April 27, 2015), https://www.youtube.com/watch?v=MnT1xgZgkpk.
  3. See Daniel Goleman, Hot to Help: When Can Empathy Move Us to Action?, Greater Good Magazine (March 1, 2008), https://greatergood.berkeley.edu/article/item/hot_to_help.
  4. Id.
  5. Id.
  6. Id.
  7. Gail F. Melson, Child Development Robots: Social Forces, Children’s Perspectives, 11 Interaction Studies, 231 (2010), https://www.psychologytoday.com/sites/default/files/attachments/115726/child-development-robots.pdf.
  8. Emerging Technology from the arXiv, Big Data Poses Special Risks for Children, Says UNICEF, MIT Tech. Rev. (Oct. 27, 2017), https://www.technologyreview.com/s/609240/big-data-poses-special-risks-for-children-says-unicef/.
  9. Jillian Kramer, Empathy Machine: Humans Communicate Better after Robots Show Their Vulnerable Side, Scientific American (March 27, 2020), https://www.scientificamerican.com/article/empathy-machine-humans-communicate-better-after-robots-show-their-vulnerable-side/.
  10. See Margaret L. Traeger et al., Vulnerable Robots Positively Shape Human Conversational Dynamics in a Human-Robot Team, 117 Proc. of the Nat’l Acad. of Sci. of the U.S., 6370, 6373 (2020).
  11. (Yonhap Interview) Go master Lee says he quits unable to win over AI Go players, Yonhap News Agency (Nov. 27, 2019, 3:02 PM), https://en.yna.co.kr/view/AEN20191127004800315.
  12. Id.
  13. See Carl Benedikt Frey & Michael Osborne, The Future of Employment: How Susceptible are Jobs to Computerization? 1, 42 (Oxford Martin Programme on Tech. and Emp’t, Working Paper Sept. 17, 2013), https://www.oxfordmartin.ox.ac.uk/downloads/academic/future-of-employment.pdf.
  14. Id. at 48.
  15. See Carl Benedikt Frey et al., Technology at Work v2.0, 7 (Oxford Martin Programme on Tech. and Emp’t and Citi GPS, report Sept. 2016), https://www.oxfordmartin.ox.ac.uk/downloads/reports/Citi_GPS_Technology_Work_2.pdf.
  16. Goleman, supra note 3.
  17. Hugh McLachlan, Ethics of AI: Should Sentient Robots Have the Same Rights as Humans?, The Independent, (June 26, 2019, 1:49 PM), https://www.independent.co.uk/news/science/ai-robots-human-rights-tech-science-ethics-a8965441.html.
  18. John Bohannon, The Synthetic Therapist, 349 Sci. 250 (2015); Jairo Esteban Rivera Estrada & Diana Vanessa Sánchez Salazar, Inteligencia artificial ¿reemplazando al humano en la psicoterapia?, 24 Escritos 279 (2016) (citing Bohannon, supra).
  19. Amy Webb, The Big Nine: How the Tech Titans and Their Thinking Machines Will Change Humanity (2019).
  20. Mark West et al., I’d Blush if I Could: Closing Gender Divides in Digital Skills Through Education 94-98 (2019), https://unesdoc.unesco.org/ark:/48223/pf0000367416.
  21. Brandon Griggs, Why Computer Voices Are Mostly Female, CNN (Oct. 21, 2011, 11:43 AM), https://www.cnn.com/2011/10/21/tech/innovation/female-computer-voices/.
  22. Katharine Schwab, The Real Reason Google Assistant Launched With a Female Voice: Biased Data, Fast Company (Sept. 19, 2019), https://www.fastcompany.com/90404860/the-real-reason-there-are-so-many-female-voice-assistants-biased-data.
  23. Id.
  24. Id.
  25. See e.g., Exec. Office of the President, Nat’l Sci. and Tech., Council Comm. on Tech., Preparing For the Future of Artificial Intelligence 2, 37 (2016), https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.
  26. See e.g., Nayef Andrabi, How The Fusion Of Technology And The Law Will Serve As A Catalyst For Legal Evolution, 36 SANTA CLARA HIGH TECH. L.J. 345, 363-364 (2020).
  27. White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, at 6, COM (2020) 65 final (Feb. 19, 2020).
  28. Tristan Harris, How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist, Medium (May 18, 2016), https://medium.com/thrive-global/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3.
  29. Not only programmers could think in this way, customers could also believe that, see Clifford Nass & Corina Yen, The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships 3 (2010).
  30. See e.g., Julia Carpenter, Google’s Algorithm Shows Prestigious Job Ads to Men, But Not to Women. Here’s Why that Should Worry You, The Washington Post (July 6, 2015, 4:43 PM), https://www.washingtonpost.com/news/the-intersect/wp/2015/07/06/googles-algorithm-shows-prestigious-job-ads-to-men-but-not-to-women-heres-why-that-should-worry-you/; see also Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women, Reuters (Oct. 9, 2018, 8:12 PM), https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
  31. See Stan. U. Sch. of Engineering, Latanya Sweeney: When Anonymized Data is Anything But Anonymous, YouTube (2018), https://www.youtube.com/watch?v=tivCK_fBBfo.
  32. Matt Stroud, The Minority Report: Chicago’s New Police Computer Predicts Crimes, But is it Racist? The Verge, (Feb. 19, 2014, 09:31 am), https://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist.
  33. Latanya Sweeney, Discrimination in Online Ad Delivery, Arxiv (Jan. 28, 2013), https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf.
  34. Mark Townsend, Black People ’40 Times More Likely’ to be Stopped and Searched in UK, The Guardian (May 4, 2019 09.16 EDT), https://www.theguardian.com/law/2019/may/04/stop-and-search-new-row-racial-bias.
  35. Sandra Wachter, The Other Half of the Truth: Staying Human in an Algorithmic World, OECD Forum (Jun. 7, 2019), https://www.oecd-forum.org/users/264249-sandra-wachter/posts/49761-the-other-half-of-the-truth-staying-human-in-an-algorithmic-world.
  36. See in general, Natasha Dow Schüll, Addiction by Design (2012).
  37. See Siva Vaidhyanathan, Anti-social media: How Facebook disconnects US and undermines democracy 3 (2018) (on the intention in the Zuckerberg manifesto but on the reality of having influenced the rise on nationalist politicians).
  38. The Times UK: Tech-Free Schools for Children of Silicon Valley, ILAC (Apr. 15, 2019), https://www.ilac.com/tech-free-schools-for-children-of-silicon-valley/; Pablo Guimón, Los Gurús Digitales Crían a sus Hijos sin Pantallas [Digital Gurus Raise Their Children Without Screens], El País (Mar. 24, 2019, 2:35 PM EDT), https://elpais.com/sociedad/2019/03/20/actualidad/1553105010_527764.html.
  39. Heather Pemberton Levy, Gartner Predicts a Virtual World of Exponential Change, Smarter with Gartner (Oct. 18, 2016), https://www.gartner.com/smarterwithgartner/gartner-predicts-a-virtual-world-of-exponential-change/.
  40. Bernard Marr, How Robots, IoT And Artificial Intelligence are Changing How Humans Have Sex, Forbes (Apr. 1, 2019, 12:24am EDT), https://www.forbes.com/sites/bernardmarr/2019/04/01/how-robots-iot-and-artificial-intelligence-are-changing-how-humans-have-sex/#e760f8c329c3.
  41. Policy Report: Sex Dolls and Sex Robots—A Serious Problem for Women, Men & Society, Campaign Against Sex Robots (May 8, 2018), https://campaignagainstsexrobots.org/2018/05/08/policy-report-sex-dolls-and-sex-robots-a-serious-problem-for-women-men-society/.
  42. Pallab Ghosh, Sex Robots May Cause Psychological Damage, BBC (Feb. 15, 2020), https://www.bbc.com/news/science-environment-51330261.
  43. Webb, supra note 19, at Chapter 5 Thriving in the Third Age of Computing: The Optimistic Scenario (book listened on Amazon).
  44. Lawrence Lessig, They Don’t Represent Us (2019).
  45. Bradley A. Thayer & Lianchao Han, China’s Weapon of Mass Surveillance is a Human Rights Abuse, The Hill (May 29, 2019 10:30 AM EDT), https://thehill.com/opinion/technology/445726-chinas-weapon-of-mass-surveillance-is-a-human-rights-abuse.
  46. Bruce Schneier, Data and Goliath, 38, 67 (2015). (For example, the NSA targets people who search for information on popular Internet privacy and anonymity tools. Moreover, national security letters are issued by the FBI without judicial oversight. They “are generally used to obtain data from third parties: email from Google, banking records from financial institutions, files from Dropbox”).
  47. Big Brother Watch and Others v. the United Kingdom, nos. 58170/13, 62322/14, & 24960/15 (ECtHR, 13 Sept. 18). (However, the European Court of Human Rights (ECtHR) held that the bulk interception of communication and obtaining communications data from communications service providers violated Article 8 on the European Convention of Human Rights (ECHR). This judgment is pending before the Grand Chamber); Centrum För Rättvisa v. Sweden, no. 35252/08 (ECtHR, 19 June 18). (In contrast the Court held that there has been no violation of Article 8 of the ECHR, when Swedish legislation created a secret surveillance that potentially affected all users of mobile phone and the Internet, without being notified. This case has been referred to the Grand Chamber).
  48. For Spain, see Orden SND/297/2020, de 27 de marzo, por la que se encomienda a la Secretaría de Estado de Digitalización e Inteligencia Artificial, del Ministerio de Asuntos Económicos y Transformación Digital, el desarrollo de diversas actuaciones para la gestión de la crisis sanitaria ocasionada por el COVID-19. (BOE 2020, 86). (Ministerial Order SND/297/2020, of March 27 by which the Secretary of State for Digitization and Artificial Intelligence, of the Ministry of Economic Affairs and Digital Transformation, is entrusted with the development of various actions for the management of the health crisis caused by COVID-19. The order allows the geolocation of the user for the sole purpose of verifying that he is in the region in which he declares to be and does not establish time limits of the measures).
  49. Tom Ginsburg & Mila Verstegg, States of Emergencies: Part II, Harvard Law Review Blog (Apr. 20, 2020), https://blog.harvardlawreview.org/states-of-emergencies-part-ii/.
  50. See Guidance on Apps Supporting the Fight Against COVID 19 Pandemic in Relation to Data Protection, COM (April 17, 2020).
  51. See Kathy Gilsinan, In 1995, the U.S. Declared a State of Emergency. It Never Ended., The Atlantic (Jan. 23, 2019), https://www.theatlantic.com/international/archive/2019/01/trump-renews-24-year-old-terrorism-state-emergency/581050/.
  52. Administrative Data Standards and Related Requirements, 45 C.F.R. §§ 160-164 (2001). The waiver has been effective since March 15, 2020, see Dep’t. of Health and Human Services, Covid 19 & HIPAA Bulletin: Limited Waiver of HIPAA Sanctions and Penalties During a National Public Health Emergency (2020), https://www.hhs.gov/sites/default/files/hipaa-and-covid-19-limited-hipaa-waiver-bulletin-508.pdf.
  53. Notification of Enforcement Discretion for Telehealth Remote Communications During the COVID-19 Nationwide Public Health Emergency (last visited May 20, 2020), https://www.hhs.gov/hipaa/for-professionals/special-topics/emergency-preparedness/notification-enforcement-discretion-telehealth/index.html.
  54. Editorial Board, Privacy Cannot Be a Casualty of the Coronavirus, The New York Times (Apr. 7, 2020), https://www.nytimes.com/2020/04/07/opinion/digital-privacy-coronavirus.html?action=click&module=Opinion&pgtype=Homepage.
  55. See Kate Cox, Zoom’s Privacy Problems are Growing as Platform Explodes in Popularity, Ars Technica (Mar. 31, 2020), https://arstechnica.com/tech-policy/2020/03/zooms-privacy-problems-are-growing-as-platform-explodes-in-popularity/.
  56. J. I. Pagel & A. Choukèr, Effects of Isolation and Confinement on Humans-Implications for Manned Space Explorations, 120 J. Appl. Physiol. 1449, 1455 (2016) (“Isolation and confinement can put the human body under a large amount of psycho-neuroendocrine duress, which results in measurable pathophysiologic symptoms.”); A. Choukèr et al., Effects of Confinement (110 and 240 Days) on Neuroendocrine Stress Response and Changes of Immune Cells in Men, 92 J. Appl. Physiol. 1619, 1626 (2002) (“these results demonstrate moderate and distinct reactions of the neuroendocrine and immunological systems which from the laboratory point of view can be considered negative. Thus, confinement caused changes of variables that reflect physiological rather than pathological adaptive processes.”).
  57. Abigail Beall, Why Social Distancing Might Last for Some Time, BBC (Mar. 24, 2020), https://www.bbc.com/future/article/20200324-covid-19-how-social-distancing-can-beat-coronavirus.
  58. Munsif Vengattil & Joey Roulette, Elon Musk’s SpaceX Bans Zoom over Privacy Concerns -Memo, Reuters (Apr. 1, 2020), https://www.reuters.com/article/us-spacex-zoom-video-commn/elon-musks-spacex-bans-zoom-over-privacy-concerns-memo-idUSKBN21J71H. (“The Federal Bureau of Investigation’s Boston office on Monday issued a warning about Zoom, telling users not to make meetings on the site public or share links widely after it received two reports of unidentified individuals invading school sessions, a phenomenon known as “zoombombing.”).
  59. See Keith Wagstaff, You’d Need 76 Work Days to Read All Your Privacy Policies Each Year, Time (Mar. 6, 2012), https://techland.time.com/2012/03/06/youd-need-76-work-days-to-read-all-your-privacy-policies-each-year/.
  60. See Josh Lauer, Creditworthy: a History of Consumer Surveillance and Financial Identity in America, Columbia Univ. Press, 161-162 (2017): (“Three-quarters of Americans said that the practices of prescreening was “not acceptable”, and 69 percent said that the business of selling consumer lists—lists that included information “such as income level, residential area, and credit card use”—was a “bad thing.”).
  61. See, for example, Peter Newcomb, Reid Hoffman, The venture capitalist on how to hit a fast-moving target in the second-wave Web boom, Wall Street Journal, (June 23, 2011), https://www.wsj.com/articles/SB10001424052702303657404576363452101709880 (“Privacy, he claims, is primarily an issue with old people.”)
  62. For example, under U.S. copyright law, work made for hire is an exception to the general rule of ownership of copyright. 17 U.S.C. §201(b) (1978).
  63. See supra note 27.
  64. Samuel Gibbs, Elon Musk leads 116 experts calling for outright ban of killer robots, The Guardian (August, 20, 2017 10.01 EDT), https://www.theguardian.com/technology/2017/aug/20/elon-musk-killer-robots-experts-outright-ban-lethal-autonomous-weapons-war.
  65. See Caitlin Mitchell, When Laws Govern LAWS: A Review of the 2018 Discussions of the Group of Governmental Experts On the Implementation and Regulation of Lethal Autonomous Weapons Systems, 36 Santa Clara High Tech. L.J. 407, 410 (2020).
  66. Overview of International Human Embryonic Stem Cell Laws, Appendix E, The New Atlantis (2002), A Report of the Witherspoon Council on Ethics and the Integrity of Science, https://www.thenewatlantis.com/publications/appendix-e-overview-of-international-human-embryonic-stem-cell-laws.
  67. See ICAN receives 2017 Nobel Peace Prize, ICAN, https://www.icanw.org/nobel_prize (last visited Aug. 14, 2020).
  68. Treaty Overview, Treaty on the Prohibition of Nuclear Weapons, the General Assembly of the United Nations (last visited Aug. 14, 2020), https://www.un.org/disarmament/wmd/nuclear/tpnw/.
  69. Status of the Treaty (last visited June 4, 2020), http://disarmament.un.org/treaties/t/tpnw.
  70. See Xin Dai, Toward a Reputation State: The Social Credit System Project of China (2018), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3193577.
  71. See Charlotte A. Tschider, The Healthcare Privacy-Artificial Intelligence Impasse, 36 Santa Clara High Tech. L.J. 439, 442 (2020).
  72. Algorithmic Accountability Act of 2019, H.R. 2231, 116th Cong. § 3(b)(1) (2019), https://www.congress.gov/bill/116th-congress/house-bill/2231/text.
  73. See Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 4, art. 22.
  74. Lord Acton (John Emerich Edward Dalberg) Letter to Archbishop Mandell Creighton (Apr. 5, 1887), (“Power tends to corrupt and absolute power corrupts absolutely”), https://history.hanover.edu/courses/excerpts/165acton.html.
  75. Id.
  76. Ruth Mace, Why are Women Accused of Witchcraft? Study in rural China Gives Clue, The Conversation (Jan. 8, 2018), https://theconversation.com/why-are-women-accused-of-witchcraft-study-in-rural-china-gives-clue-89730.
  77. Id. (“The label was usually applied to adult women heads of household and often inherited down the female line.”)
  78. Ruth Mace et al., Population structured by witchcraft beliefs, Nat. Hum. Behav. 2, 39-44 (2018), https://doi.org/10.1038/s41562-017-0271-6.
  79. Id.
  80. See Mace, supra note 76.
  81. See Xin, supra note 70, at 33.
  82. See Danielle K. Citron & Frank Pasquale, Essay, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1, 7 (2014).
  83. Several frameworks already exist, for instance, the Berkman Klein Center for Internet & Society at Harvard University, Jessica Fjeld et al., Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI (Jan. 15, 2020). In Europe, the High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, (Apr. 8, 2019), https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. However, a principle approach has already been criticized, see Brent Mittelstadt, Principles alone cannot guarantee ethical AI, Nature Machine Intelligence 1, 501 (2019).
  84. OECD Principles of Artificial Intelligence (May 21, 2019), https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
  85. See, for example, Silvia Jiménez Gómez, Generación y evaluación de secuencias melódicas mediante inteligencia artificial [Generation and Evaluation of Melodic Sequences Through Artificial Intelligence], TFG, Universidad Politécnica de Madrid, 101 (2018), https://oa.upm.es/53396/1/TFG_SILVIA_JIMENEZ_GOMEZ.pdf (stating that 77.5% of individuals surveyed clearly give more value to a melody composed by a human, while the remaining 22.5% do not opt for that option).
  86. See Mariana Alessandri, In Praise of Lost Causes, The New York Times (May 29, 2017), https://www.nytimes.com/2017/05/29/opinion/in-praise-of-lost-causes.html.