AI is everywhere in the news, and we've all seen headlines like these:
- AI can now design cities, but should we let it?
- For The First Time Ever, A Drug Developed By AI Will Be Tested In Human Trials
- Google AI creates its own ‘child’ AI that’s more advanced than systems built by humans
These headlines all have something in common: they speak of AI as something that has agency. Instead of saying “people can use an AI system to,” these headlines make it sound as though some artificially intelligent being is performing an action of its own accord and according to its own desires.
What do we mean here when we say that ‘artificial intelligence’ can design cities? We are certainly not talking about some self-conscious superintelligence that designs cities of its own free will. No, in this case, we are talking about a group of researchers using a machine learning system to do two things: firstly, to analyse survey data about which images of streets people find beautiful or ugly; secondly, to edit images of ‘ugly’ streets to make them conform better to what people find beautiful.
While this certainly sounds like an interesting tool for urban planning, it's a far cry from what most people will think when they read Fast Company’s original headline. A more deflationary version of the headline might say that “a group of researchers can use a computer program to help design cities by analysing data about what people like and don’t like.” This certainly sounds less promising and innovative than Fast Company’s version, but it's much more accurate.
What's most important about this rephrasing is how it changes our approach to the question that follows: should we let it? When we hear that ‘AI can now design cities’, we might feel as though we are at the threshold of some sci-fi future, and perhaps feel a curiosity to ‘let AI’ try its hand at designing cities. After all, most of us have had to deal with the bad urban planning decisions of our fellow humans, so why not let AI have a go? But when we hear that some people claim to have a really good computer program for designing cities, I’m willing to bet that we’re all a bit more circumspect about letting ‘them’ design cities.
Once we know how the system actually works, one obvious question we would ask is whether the people who rated the streets represent the diversity of the populations in those cities. If, for example, the people rating the streets were mostly young, White, and middle class, this urban planning tool could end up being a tool that simply furthers the gentrification of cities. This is precisely the type of question that people won’t ask, however, if they think that ‘an AI’ came up with the designs all 'by itself.’
Hiding human agency
Another clear danger, which may be seen as an advantage by the more ill-intentioned among us, is that the ascription of agency to AI can quite effectively mask the human agency behind certain processes. This might seem relatively harmless in certain cases, such as the example above of using an AI system to analyse people’s preferences for making their streets look pretty. In most cases, however, we’re done a disservice when we’re presented with misleading claims about AI doing something that is very clearly a case of humans using AI to do things.
No AI system, no matter how complex or ‘deep’ its architecture may be, pulls its predictions and outputs out of thin air. All AI systems are designed by humans, are programmed and calibrated to achieve certain results, and the outputs they provide are therefore the result of multiple human decisions.
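To make this concrete, here is a minimal sketch (a toy example of our own, not taken from any real system) of how many human decisions hide inside even the most trivial machine learning script:

```python
# Every line below encodes a human decision, not a machine's 'own' judgment.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Human decision: which dataset to use, and who was (or wasn't) represented
# when that data was collected.
X, y = load_breast_cancer(return_X_y=True)

# Human decision: how to split the data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Human decisions: which model family, which settings, how long to train.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Human decision: which metric, and what score, counts as 'good enough' to deploy.
print(f"accuracy: {model.score(X_test, y_test):.3f}")
```

Nothing in this script happens ‘by itself’: the data, the model, the evaluation, and the threshold for success are all chosen by people.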
In the article Black-boxed politics: opacity is a choice in AI systems, the authors list a number of the controversial human decisions that go into the design of any AI system:
Owners of AI systems and the data scientists they employ are responsible for the choices made at each stage of development: for the choice of the data, even if that choice was very limited; for deploying the system despite the fact that they could not avoid bias; for not revising their main objective, regardless of the fact that the fair outcome they hoped for could not be achieved; and, finally, they are responsible for choosing to use an automated system in the first place, despite being aware of its limitations and possible consequences.
When we fail to see these decisions (or when they are deliberately masked), we end up seeing AI systems as finished products, as systems that simply take input data and output objective results. This contributes to what Deborah G. Johnson and Mario Verdicchio call sociotechnical blindness in their article Reframing AI Discourse: "What we call sociotechnical blindness, i.e. blindness to all of the human actors involved and all of the decisions necessary to make AI systems, allows AI researchers to believe that AI systems got to be the way they are without human intervention."
Instead of seeing AI systems as detached from human agency and decisions, Johnson and Verdicchio propose that we see them as part of sociotechnical ensembles:
An AI system consists of a computational artefact together with the human behaviour and people who make the artefact a useful and meaningful entity [...] AI systems should be thought of as sociotechnical ensembles [i.e.] combinations of artefacts, human behaviour, social arrangements and meaning. For any computational artefact to be used for a real-world purpose, it has to be embedded into some context in which there are human beings that work with the artefact to accomplish tasks
When we speak about AI systems, it's important that we do so in a manner that makes them visible as sociotechnical ensembles, imbued with human decision making and human flaws, rather than as neutral technical systems.
Hiding human labour
Although the Fast Company headline is a relatively banal case of a sensationalist headline obscuring how a piece of technology actually works, there are more sinister cases in which we are told that “AI is doing something” and are thereby blinded to the human labour underlying that process.
Astra Taylor has proposed the term fauxtomation to refer to these misleading cases of shiny promises of automation:
Fauxtomation manifests every time we surf social media, check out and bag our own groceries, order a meal through an online delivery service, or use a supposedly virtual assistant that is, surprise, in fact powered by human beings. Yet even though we encounter this phenomenon every day, we often fail to see, and to value, the human labor lurking behind the high-tech facade (even if it’s our own). We mistake fauxtomation for the real thing, reinforcing the illusion that machines are smarter than they really are.
We need to be alert to how human labour is masked behind the facade of fancy-sounding AI systems, whether it is in the arduous and underpaid work of labelling datasets, or in cleaning up the mess made by algorithmic errors.
A particularly interesting case of dubious AI agency can be found in the recent controversies around voice assistants such as Amazon’s Alexa, Apple’s Siri, and Google Assistant, when it emerged that human contractors were listening in on people’s interactions with these devices.
This is, however, a normal part of how these systems are improved: human reviewers listen to samples of interactions to understand, for example, where and why the device made a mistake. In many cases, these reviewers heard recordings of sensitive conversations with doctors, users requesting the assistant to search for porn, couples having sex, and even drug deals.
There is something very curious happening here: on the one hand, these voice assistants are anthropomorphised, in most cases in a female persona, and presented to us as quasi-magical artificial intelligences; on the other hand, the very real human beings whose labour makes these systems work are kept carefully out of sight.
For a vivid illustration of the complex system of human labour that underlies these voice assistants, check out the project Anatomy of an AI System, which looks at "Amazon Echo as an anatomical map of human labor, data and planetary resources":
The stack that is required to interact with an Amazon Echo goes well beyond the multi-layered ‘technical stack’ of data modeling, hardware, servers and networks. The full stack reaches much further into capital, labor and nature, and demands an enormous amount of each. The true costs of these systems – social, environmental, economic, and political – remain hidden and may stay that way for some time.
To circle back to our original point about bad headlines, we should consider Astra Taylor’s remark that even the common concern that “robots will take our jobs” is based on ascribing agency to AI, or in this case robots, where in fact the agency belongs firmly to those humans at the top of the capitalist food chain:
The phrase “robots are taking our jobs” gives technology agency it doesn’t (yet?) possess, whereas “capitalists are making targeted investments in robots designed to weaken and replace human workers so they can get even richer” is less catchy but more accurate.
The major lesson here is that whenever we hear that “AI can do X”, we should always break that statement down to uncover the human decisions and human labour that allow that AI system to accomplish whatever task it claims to be able to perform.
To illustrate the absurdity, and sometimes outright deceptiveness, of bad AI headlines, we've put together this interactive widget for you. In one box, you'll find some of the all-time worst AI headlines. In the other, you'll see a 'debunk' button. When you click on it, the headline will be rephrased in a more accurate, if slightly cheeky, manner. Give it a go!
Headline rephraser
How to do things better
The excellent (and free) online course Elements of AI gives us a great tip that can help us avoid some of the issues outlined above when it stresses that “AI” is not a countable noun:
When discussing AI, we would like to discourage the use of AI as a countable noun: one AI, two AIs, and so on. AI is a scientific discipline, like mathematics or biology. This means that AI is a collection of concepts, problems, and methods for solving them. Because AI is a discipline, you shouldn't say “an AI”, just like we don't say “a biology”. This point should also be quite clear when you try saying something like “we need more artificial intelligences.” That just sounds wrong, doesn't it? (It does to us).
The people at the website Skynet Today have produced an excellent set of AI Coverage Best Practices, based on a survey of leading AI researchers. On the question of ascribing agency to AI, they echo the previous tip when they say the following:
...it is misleading to say, for example, “An Artificial Intelligence Developed Its Own Non-Human Language”, since AI is not a single entity but rather a set of techniques and ideas. A correct usage is, for example, “Scientists develop a traffic monitoring system based on artificial intelligence.”
We can go even further than this, and specify which particular technique was used in a given case. As an example, instead of saying "AI can now read your text messages," we could more accurately say that "researchers can use natural language processing (NLP) techniques to scan your messages."
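To see just how deflationary this rephrasing can be, here is a toy sketch of what 'scanning your messages' might amount to in its simplest form (the word list and scoring rule below are entirely our own invented example, not any real product's method):

```python
# A deliberately mundane sketch of 'AI reading your messages': a short program
# written by people, applying a word list chosen by people.
messages = [
    "Running late, see you at 8",
    "That meeting was a disaster",
]

# Human decision: which words count as 'negative'.
NEGATIVE_WORDS = {"disaster", "terrible", "awful", "late"}

def scan_message(text: str) -> float:
    """Return the share of words flagged as negative: a crude 'sentiment' score."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    flagged = [w for w in words if w in NEGATIVE_WORDS]
    return len(flagged) / len(words) if words else 0.0

for msg in messages:
    print(f"{scan_message(msg):.2f}  {msg}")
```

Real NLP systems are of course more sophisticated than this, but the point stands: 'AI reading your messages' is always a program that people wrote, applying criteria that people chose.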
Someone could object that specifying the particular technique is likely to make the story less interesting, but a lot of the time that is precisely the point: stories about advances on NLP benchmarks that are not actually of interest to the general public get distorted and overhyped by using the term AI instead of more particular terms.
Even without going to the depth of using terms such as NLP, simply replacing AI with machine learning (ML) already makes most stories more accurate and less like sensationalist clickbait. For more on these terminological nuances, check out our section on the myth that the term AI has a clear meaning.
Skynet Today provide a number of other guidelines, which we've listed here:
Do's
- Be careful with what “AI” is
- Make clear what role humans have to play
- Emphasize the narrowness of today’s AI-powered programs
- Avoid comparisons to pop culture depictions of AI
- Make clear what the task is, precisely
- Call out limitations
- Present advancements in context
Don'ts
- Imply autonomy where there is none
- State programs “learn” without appropriate caveats
- Cite opinions of famous smart people who don’t work on AI
- Ignore the failures
Check out their article on AI Coverage Best Practices for more info on these.
Gary Marcus has also outlined six questions that we should ask ourselves whenever we read reports about new AI breakthroughs:
- Stripping away the rhetoric, what did the AI system actually do here?
- How general is the result? (E.g., does an alleged reading task measure all aspects of reading, or just a tiny slice of it?)
- Is there a demo where I can try out my own examples? (Be very sceptical if there isn’t.)
- If the researchers (or their press people) allege that an AI system is better than humans, then which humans, and how much better?
- How far does succeeding at the particular task reported in the new research actually take us toward building genuine AI?
- How robust is the system? Could it work just as well with other data sets, without massive retraining?
All of these questions and guidelines can help us to counter the effects of ascribing agency to AI and other forms of mystification, and hopefully make us better readers and writers when it comes to these topics.
Extra topic: legal personality for AI
This section was written by Rachel Jang as part of a project for the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. Further edits were made by Daniel Leufer.
This idea of ascribing agency to AI goes beyond just the sort of linguistic misrepresentations discussed above, and can lead people to want to radically alter our legal systems to accommodate ideas which might seem quite far-fetched. For example, there has been ongoing debate about whether 'an AI' can own the intellectual property for something 'it invents.' Based on what we have said above, it would seem more straightforward to say instead that some scientists used an AI system to invent something. In this case, the idea of 'an AI' holding the intellectual property seems quite bizarre.
Digging a bit further into this problem, we can ask ourselves who might benefit from 'an AI' (developed by a private company, of course) being able to hold copyright. Similarly, if 'an AI' could hold legal personality, it seems likely that companies could use this to evade responsibility by having the system held responsible 'for its actions' rather than having to be held liable for the behaviour of systems they have designed.
Currently, the international consensus in patent law is that AI cannot be the inventor of a patent. This rule was tested and reconfirmed in 2020, when the European Patent Office and the U.K. Intellectual Property Office both refused patent applications which identified the DABUS system as the inventor. The applications sought patents for a "beverage container" and a device "to attract optical attention like a lighthouse during search operations." Designing a slightly improved bottle and a fancy flashlight certainly sounds a lot less spectacular than what most people would imagine when they hear that 'an AI invented something.' Nevertheless, it is interesting to look into the idea that 'an AI' could be credited with inventing these things, especially as these claims are likely the tip of the iceberg. The outcome of this debate about AI systems holding patents may have significant implications because, were an AI system to be granted the legal status of an inventor, it might lower the barriers to further legal entitlements, such as being able to enter into contracts and file lawsuits.
Depending on your view of the current capabilities of AI systems and how rapidly they are likely to improve in the coming years, the idea of granting a patent to an AI system may seem absolutely absurd or admirably prescient. No matter what view you take, however, there is in fact legal precedent for non-humans having the kind of legal status that would be required to hold a patent. Around the world, many non-humans are given the privileges of “legal personhood,” with the capability to hold rights and duties and the ability to carry responsibility. A legal person can sue, be sued, and enter into contracts with other legal persons. Corporations, states, cooperatives, and even some natural features like rivers, as well as animals, have been treated as legal persons.
In 2017, the Saudi Arabian government granted Sophia the Robot citizenship. The decision may have been a publicity stunt, but many people have argued that this recognition of legal rights for a robot erodes human rights. As Robert David Hart said in a piece on Quartz,
In a country where the laws allowing women to drive were only passed last year and where a multitude of oppressive rules are still actively enforced (such as women still requiring a male guardian to make financial and legal decisions), it’s simply insulting. Sophia seems to have more rights than half of the humans living in Saudi Arabia.
Still, Saudi Arabia is not alone: Japan took a similar step by granting residency in Tokyo to Shibuya Mirai, a chatbot on a messaging app.
The European Union has also flirted with this idea, with the European Parliament’s proposal to establish a specific legal status of “electronic persons” for robots. The European Parliament’s report called on the European Commission to explore the possibility of applying the electronic personality concept to “cases where robots make smart autonomous decisions.” An open letter signed by more than 150 European AI experts strongly opposed the proposal, largely pointing to overvaluation of actual AI capabilities and concern for liability issues. The European Commission did not accept the European Parliament’s proposal in the Commission’s outline of future strategy to address artificial intelligence, effectively rejecting the idea of electronic personhood.
As the experts’ letter to the European Commission pointed out, the motivation for ascribing more agency or legal personhood to AI comes from the idea that we need a new system to address the rapid development of AI technology. However, this line of thinking overstates where the world currently is with AI technology. One problem is that the term ‘artificial intelligence’ tends to make us overestimate capabilities. If we call the DABUS system not ‘an AI’ but rather a computer program that uses advanced statistical methods, it suddenly sounds far less plausible to grant it a patent.
Another problem is that a robot like Sophia the Robot that looks somewhat like a human and that seems to answer people’s questions on the spot may trick people into thinking that robots capable of human-level intelligence are not far off. The reality is that we currently only have what is termed ‘narrow intelligence’ in AI systems, where the system can perform well at one very narrowly defined task, but is usually back to square one when even the slightest variation is introduced to a task.
Indeed, achieving anything like Artificial General Intelligence (where a machine could perform intelligently across a range of complex tasks) is not simply a matter of adding more data and more computational power, but is a problem for which nobody has any kind of roadmap (for more on this, see: Superintelligence is coming soon).
A more serious implication of ascribing agency and personhood to AI has to do with accountability. One of the main arguments in favor of AI personhood is based upon the “black box” narrative about AI, and particularly machine learning systems. When systems employ complex algorithms, such as neural networks or support vector machines that ‘learn’ from large datasets, it can often be difficult for their programmers to understand the precise mechanisms by which they produce results. This means that there is a certain opacity to their operation, and they often produce results that are surprising to their human programmers.
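To give a flavour of this opacity, here is a minimal sketch (again a toy example of our own): even a small neural network's 'decision process' consists of a few thousand learned numbers that no programmer wrote by hand:

```python
# A toy illustration of the 'black box': the trained model's behaviour lives
# in thousands of learned weights, none of which a programmer wrote directly.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# A small neural network 'learns' to classify handwritten digits...
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X, y)

# ...but its 'reasoning' is just a few thousand numeric parameters.
n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"learned parameters: {n_weights}")
print(model.coefs_[0][0][:5])  # a sliver of those weights: just opaque numbers
```

Making those thousands of numbers humanly interpretable remains an open research problem, which is exactly what gives the black-box narrative its grip.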
In some cases, these surprising results have been due to serious errors and could have had life-threatening consequences. In other cases, they have produced exciting and novel results that a human would have been unlikely to arrive at. In the case of AI personhood, this opacity is taken to signify a sort of ‘creative independence’, such that the AI decision-making processes are independent of their creators, and thus their creators should not be responsible for them.
In the case of granting the AI system a patent, this may seem like generosity on the part of the programmer: Dr. Stephen Thaler doesn’t wish to take credit for the ‘inventions’ of DABUS because he couldn’t have invented those things himself. Granting legal personhood seems a lot less like generosity, however, when we think about the patented ‘invention’ of an AI system causing harm: isn’t the granting of personhood just a convenient way for the system’s creators to avoid accountability? Every system, whether it uses complex machine learning algorithms or not, is the result of human decisions, and we should be careful about being misled about the role and responsibility of those who design and deploy these systems.
It may well be that existing legal systems can address the issues AI personhood is trying to solve. For instance, when trying to figure out where liability lies for a harmful action caused by an AI system we can turn to tort law and existing legal doctrines, such as strict product liability or vicarious liability. Strict product liability is a legal doctrine where the supplier of a product is held liable for harms caused by the product regardless of whether the supplier was negligent or not.
Under this doctrine, AI systems that fall below an acceptable rate of error can be deemed defective, which can then be used to impose liability on the parties who contributed to their making. Vicarious liability is a legal doctrine where one party is held liable for the actions of another party when the parties engage in some sort of joint activity. Using the vicarious liability doctrine, an owner of an AI system can be held liable for the system’s tortious act, grouping the owner and the AI system as being engaged in a joint activity. Such concepts in tort law can help apportion faults and liability appropriately when AI systems conduct tortious acts.
In addition, there is also the opinion that current company law in the U.S. can be used to establish legal personhood for AI. Shawn Bayern has argued that because legally enforceable agreements can give legal effects to an algorithm under current U.S. law, autonomous systems may be able to mimic some rights of legal persons. Bayern pointed to business-entity statutes, such as LLC statutes, as potentially giving algorithmic systems and software the basic capabilities of legal personhood. Although such a view has not yet obtained significant authority, introducing a new concept of legal personhood for AI could lead to more confusion, given the possible application of existing law to AI.
Annotated bibliography
This bibliography was primarily compiled by Rachel Jang as part of a project for the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. Further edits were made by Daniel Leufer.
Here we've put together some resources on different topics around AI and agency, and added some annotations to give you a general idea about the different resources.
AI should be granted patents
-
- The researchers believe that the patent offices’ rule of insisting on human attribution is “outdated.”
- Raises questions over what exactly AI is currently.
- Asks whether an AI system can be considered as an inventor or not.
- A patent predicament: who owns an AI-generated invention?
- Can an AI system be given a patent?
AI will take people’s jobs
- Astra Taylor - The faux-bot revolution
- Skynet Today - Job Loss Due To AI — How Bad Is It Going To Be?
- Over 30 million U.S. workers will lose their jobs because of AI
- 10 Jobs Artificial Intelligence Will Replace (and 10 That Are Safe)
- The Alleged Threat of AI Taking Away Human Jobs Is Not What We Think It Is
- Legal Expert Systems - Robot Lawyers? (An Introduction to Knowledge-Based Applications to Law)
- Artificial intelligence and expert systems
- Types of knowledge-based applications to legal practice
- Components of a legal expert system
- Limitations on automated reasoning
- Models of a legal expert system
Europe’s “electronic personalities” debate in 2018
- EU Parliament’s Legal Affairs Committee 2017 Report about “electronic personhood”
- Analogous to corporate personhood.
- Areas in need of specific oversight.
- “Give robots ‘personhood’ status, EU committee argues”
- The timeline of e-personhood: a hasty assumption or a realistic challenge?
- Open letter against the proposal
Sophia the Robot
- The complicated truth about Sophia the robot — an almost human robot or a PR stunt
- Inside the mechanical brain of the world’s first robot citizen
- Understanding how Sophia works is crucial when talking about giving robots rights before people, and about what implications that might have.
- Facebook’s head of AI really hates Sophia the robot (and with good reason)
- An AI professor explains: three concerns about granting citizenship to robot Sophia
- Saudi Arabia’s robot citizen is eroding human rights
AI ‘Boy’ Shibuya Mirai Granted Residency
-
- Chatbot programmed to be a seven-year-old boy given residency in Tokyo.
- Mirai and Sophia remain distinctly unselfaware.
We've also put together some resources that explain the fundamental concepts underlying a lot of these debates:
Legal Personhood
-
- Legal person refers to a human or non-human entity that is treated as a person for legal purposes.
- Typically, a legal person can sue and be sued, own property, and enter into contracts.
-
- Identified with the capability of holding rights and duties, the ability to bear responsibility.
- Designated by the law as right-holders.
- The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems
- Shawn Bayern argued that AI systems could effectively be given legal personhood under current U.S. law.
- Although nonhuman autonomous systems are not legal persons under current law, the history of organizational law demonstrates that agreements can direct the actions of legal persons. Legally enforceable agreements can give legal effect to an algorithm or other process, so autonomous systems may be able to emulate many of the private-law rights of legal persons.
- Modern business-entity statutes, particularly LLC statutes, can give software the basic capabilities of legal personhood.
- Are Autonomous Entities Possible?
- Shawn Bayern responds to reactions to previous paper.
- The purpose of this paper is to rebut the criticisms and to suggest that Bayern’s reading of the LLC statutes is correct.
- Describes the workability of LLCs without ongoing human internal governance.
- Demonstrates that autonomous entities are legally sound under current statutes.
- Legal Personhood for Artificial Intelligences
- Paper written in 1992.
- Describes what AI is, what legal personhood is.
- Thought experiment in certain scenarios, raising legal questions.
- Machine Minds: Frontiers in Legal Personhood
- Personhood exists to protect conscious individuals from suffering and allow them to exercise their wills, subject to their intelligence.
- At some point, we will have to address the issue of consciousness in the law.
- The boundaries of legal personhood: how spontaneous intelligence can problematise differences between humans, artificial intelligence, companies and animals
- Rights for robots: why we need better AI regulation
- A Theory of Legal Personhood - The Legal Personhood of Artificial Intelligences
- Three contexts: (1) ultimate-value context, (2) responsibility context, (3) commercial context
-
- Legal subjects as responsible actors
- The essence of legal personhood
- “A subject of legal rights and duties”
- Not limited to natural persons
- Flexible and changeable aspect of the legal system, depending on the needs of the community
- Physical person as a natural legal person
- Question of punishment of legal persons
- Different construction of personhood
- AI or robot as legal actor
- Criteria for Recognition of AI as a Legal Person
Patent
- Drafting Patent Applications Covering Artificial Intelligence Systems (American Bar Association)
- Write claims the patent office will consider “patent eligible.”
- Write a patent application that describes a technological development.
- EPO and UKIPO Refuse AI-Invented Patent Applications
- Rejected the patent applications that identified DABUS as the inventor.
- Article 81 of the European Patent Convention (EPC)
- “The European patent application shall designate the inventor. If the applicant is not the inventor or is not the sole inventor, the designation shall contain a statement indicating the origin of the right to the European patent.”
- Inventor has to be human.
- UKIPO: “There appears to be no law that allows for the transfer of ownership of the invention from the inventor to the owner in this case, as the inventor itself cannot hold property.”
- Can an AI be an inventor? Not yet.
- Patent law has very specific ways of assigning ownership: The inventor must be either the employee or the contractor of the parent company.
- There is also the legal requirement that inventors be individuals and “natural persons.”
- We’re nowhere near general artificial intelligence, so few people will believe that the AI is truly the inventor.
- Being an inventor comes with certain responsibilities such as being able to enter into contracts and file lawsuits.
- The USPTO wants to know if artificial intelligence can own the content it creates
- United States Patent and Trademark Office (USPTO) published a notice seeking public comments about copyright, trademark, and other intellectual property rights issues that may be impacted by AI (13 questions in total).
Common arguments in AI Agency Debates
This section was compiled by Rachel Jang as part of a project for the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. Further edits were made by Daniel Leufer.
We've also put together a list of sources for some of the most common arguments we hear in debates about AI and agency.
For assigning more agency to AI
- Given the rate of AI advancement, we need to address the changes that need to be made to the current IP system.
- AI should be given legal personhood because AI decision-making processes are too difficult to understand.
- Europe divided over robot ‘personhood’
- Arguing in favor of AI personhood is related to the “black box” narrative (AI decision-making process are difficult to understand, so litigators cannot attribute legal responsibility for problems)
-
- AI systems should be given legal personhood to reflect their role.
- Hey Watson, Can I Sue You for Malpractice? Examining the Liability of Artificial Intelligence in Medicine
- Explores whether AI’s ability to adapt and learn means that it has the capacity to reason and whether this means that AI should be considered a legal person.
- Concludes that medical AI should be given a unique legal status akin to personhood to reflect its current and potential role in the medical decision-making process.
- Differentiates medical AI from AI used in other products.
-
- Denying agency to AI rests on an unjustified ‘human supremacy’
Against assigning more agency to AI
- Granting patents to AI systems raises accountability concerns.
- Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction
- European Commission’s ethics guidelines
-
- “Trustworthy AI” includes human agency and oversight. Developers must put in mechanisms that ensure responsibility and accountability.
-
- When machines create: Should AI be recognised as an inventor?
- “According to Joe Michael, principal technology evangelist at AI company IPsoft, deciding to issue a patent to an AI system could raise ethical concerns, particularly around accountability.”
- Accountability of AI Under the Law: The Role of Explanation
- Focus on explanation in terms of increasing accountability in AI systems.
- Open Letter to the European Commission Artificial Intelligence and Robotics
- Expresses concern about the proposal to create legal personhood for robots.
- The creation of a Legal Status of an “electronic person” for “autonomous”, “unpredictable” and “self-learning” robots is justified by the incorrect affirmation that damage liability would be impossible to prove.
- AI and robots should not be attributed legal personhood
- Argues that responsibility must rest with people.
- AI in patent law: Enabler or hindrance?
- AI capabilities are overrated.
- Open Letter to the European Commission Artificial Intelligence and Robotics
- From a technical perspective, this statement is based on an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities, and a robot perception distorted by science fiction and a few recent sensational press announcements.
- Europe divided over robot ‘personhood’
- The controversy masks the reality that robots capable of human-like intelligence and decision-making remain a far-off prospect.
-
- AI personhood is already present, so creating additional concepts of personhood for AI complicates things.
- The EU is right to refuse legal personality for Artificial Intelligence
- The power to determine who is a “person” lies within the member states, not the EU.
- AI persons are already here (e.g., Shawn Bayern’s analysis), so additional “electronic personality” should not be established.
- It is premature to introduce AI personhood.
- Appropriateness and feasibility of legal personhood for AI systems
- The scope of AI is still ill-defined.
- The potential economic efficiencies and distribution of gains is uncertain.
- The ability of existing legal structures to achieve similar ends has not been sufficiently analyzed.
- E.g., strict product liability, vicarious liability
- The moral requirements for personhood have not yet been met.
- It is not yet possible to assess the social concerns arising from AIs that are indistinguishable from humans.
- Advantages and disadvantages of AI personhood.
- Conditions for AI personhood: technological, economic, legal, moral conditions should be met
-
- Giving robots citizenship erodes human rights.
- Robot Rights? Let's Talk about Human Welfare Instead
- Argues that a focus on esoteric ideas such as robot rights distracts energy from more important and immediate concerns such as human rights violations
- A misdirected application of AI ethics
- "If we look at AI as it exists today, we see a situation with altogether different ethical concerns that have to do with the undemocratic distribution of power: who gets to use AI, who is merely a passive recipient of it, who is suppressed or marginalized by it. In the current reality of AI, a call for robot rights becomes perverse — amounting to arguing for more rights and less accountability for tech companies. So, let’s not talk about hypothetical robots. Let’s talk about Siri, Nest, Roomba and the algorithms used by Google, Amazon, Facebook and others. "
- "The dominant futurist sci-fi conception of AI conceals ubiquitous and insidious existing AI hiding in plain sight. While waiting for human-like robots, we forget to notice AI that has already morphed into the background of our day-to-day life."
- "Giving rights to robotic and AI systems allows responsibility and accountability for machine-induced injustices orchestrated by powerful corporations to evaporate. In other words, giving rights to robotic systems amounts to extending the rights of tech developers and corporations to control, surveil and dehumanize mass populations. Big tech monopolies already master the art of avoiding responsibility and accountability by spending millions of dollars on lobbying to influence regulations, through ambiguous and vague language and various other loopholes. Treating AI systems developed and deployed by tech corporations as separate in any way from these corporations, or as “autonomous” entities that need rights, is not ethics — it is irresponsible and harmful to vulnerable groups of human beings."
- Saudi Arabia’s robot citizen is eroding human rights
- Criticizes the state of women’s rights in Saudi Arabia
- “Naming Sophia a citizen creates a huge void in legal systems around the world, damages public understanding of AI, and fundamentally damages the very notion of human rights itself.”
- Instances of AI systems displaying racism and sexism
- Sophia’s right to self-determination?
- “To be citizen means something, and that something means less now that it includes Sophia.”
- Pretending to give a robot citizenship helps no one
- “Basically the entire legal notion of personhood breaks down.”
-
- AI should not be given its own personhood for philosophical and ethical reasons.