The Digital Age has brought an unprecedented surge of innovations in Information Technologies which have drastically reshaped the environment we live in and our rapport with machines. As robots through AI take on new roles and responsibilities, this raises a fundamental question: could smart robots eventually be held criminally liable for their actions?
In this paper, I hypothesize that Canadian courts are currently unable to prosecute such “machines” without first reviewing certain fundamental dispositions and concepts of the Law. To test this hypothesis, I have reviewed certain legal publications indexed in Quebec’s Legal Information Access Center (CAIJ) and similar Canadian databases, as well as online media and literature.
My recommendation is that, while it would be possible to prosecute robots under the Criminal Code under certain conditions, it would be more beneficial overall for our society to establish a distinct set of rules specifically and uniquely designed for robots.
TABLE OF CONTENTS
I. INTRODUCTION
II. THE EMERGENCE OF SELF-DRIVEN AI
III. CRIMINAL LIABILITY IN CANADA
IV. CHALLENGES IN PROSECUTING ARTIFICIAL INTELLIGENCE
V. RECOMMENDATIONS
VI. CONCLUSION
The Digital Age has brought an unprecedented surge of innovations in Information Technologies which have drastically reshaped the environment we live in and our rapport with machines. Increasingly, robots and AI are taking on new roles and responsibilities with remarkable autonomous decision-making capacities. This, combined with a much-anticipated breakthrough in quantum computing, makes it possible to envisage that, in our lifetime, robots will have a “mind” of their own. This raises a fundamental question: could robots eventually be held criminally liable for their actions?
In this paper, I hypothesize that, at present, Canadian courts cannot prosecute such “machines” without first reconsidering and adapting certain fundamental dispositions of the Criminal Code and the Canadian Charter of Rights and Freedoms. To test this hypothesis, I have reviewed legal publications on the matter indexed in Quebec’s Legal Information Access Center (CAIJ) and similar Canadian databases, as well as online media and literature.
The first part of this paper will focus on the emergence of AI technology and the relevance of envisaging criminal liability for smart robots2. The second part will review the main features of criminal law, while the third part will consider some of the challenges that extending criminal liability to robots would entail.
My recommendation is that, while it could be possible and pertinent to prosecute smart robots in Canada under certain conditions, it would be more beneficial overall for society to establish a distinct set of rules specifically and uniquely designed for robots.
Over the centuries, through technical innovations, humans have strived to adapt to their environment in order to survive and to improve their life conditions. In that respect, the Digital Age has disrupted human society as never before. Thanks to Information Technologies, computers now process instantly vast amounts of data which enables them to perform simultaneously complex tasks and functions that were previously unattainable. This astounding capacity – combined with interconnectivity, interoperability and access to Internet – has enabled Artificial Intelligence to thrive and pave the way for robotics.
To better understand the issues inherent to AI in terms of criminal liability, it is important to determine what constitutes “Artificial Intelligence”, how it works, and what distinguishes smart robots.
In the early 1960s and thereafter, while researchers were already hard at work developing the technology, the notion of Artificial Intelligence was a popular topic in sci-fi movies (Star Trek, Star Wars, etc.) but, for the most part, it was considered a fictitious concept. Since the early 2000s, however, the progress achieved by IT developments and the renewed interest generated in AI have been such that Artificial Intelligence is being discussed as if it were an accomplished fact. Has artificial intelligence been achieved? And what does it consist of?
First, there is a fundamental question: what is intelligence? The Oxford dictionary defines intelligence as “the ability to learn, understand and think in a logical way about things”3.
In other words, it is the capacity to process information in an evolutive manner. Comparatively, artificial intelligence is described as “the capacity of computers or other machines to exhibit or simulate intelligent behaviour”4.
Given that, historically, the notion of intelligence had been reserved or assimilated to humans (and to animals to a lesser extent), are machines capable of intelligence? According to John McCarthy, that is the very purpose of AI: “the science of making intelligent machines, especially intelligent computer programs”5. Through the use of algorithms, software and data sets, computers can achieve remarkable results in analyzing large volumes of information. Given their memory and the lightning speed at which they can process data, they now significantly outperform humans in many areas. As a result, even though computers do not reproduce or explain how the human mind works, it can be argued that they are intelligent constructs6 that imitate human logical thinking.
And this is only the beginning. As Jill R. Presser et al. point out, “With continuing increases in computing power, storage capacity, algorithmic sophistication, and the quantity and accessibility of training data, the cognitive acuity of AI is only bound to grow”7. Not surprisingly, reality is beginning to catch up with science fiction as various kinds of autonomous robots are beginning to appear in homes, businesses, and industries.
Embodied, disembodied, and sometimes referred to as “artificially intelligent non-human entities”8, smart robots9 have been described in a number of ways that suggest some form of autonomy, that is, the ability to perform some of their tasks or functions without human assistance or supervision10. In one definition, robots are represented as “machines that can sense their environment, process the information they sense, and act directly upon their environment”11. While the level of autonomy of robots varies greatly, it is worth noting that machine consciousness (also known as general artificial intelligence or GAI) has not been achieved, although some believe this technology is imminent12.
As was seen above, AI is pervasive in our environment and is increasingly entrusted with critical tasks and functions that used to be accomplished by humans alone, a situation that can give rise to harm, direct or indirect, including death. As Presser points out, the harm might be caused by accident, by design, by autonomous choice or action, through an external source or some other unforeseen cause13.
This in turn may raise the thorny and complex issue of criminal liability: who will be held accountable for unlawful or criminal acts committed by an AI entity acting on its own command?14 The developer, the programmer, the user, the robot itself, or a combination thereof? Notwithstanding the technical complexity that this entails for lawyers and the courts in terms of attribution, how will the hurdle of proprietary rights (IP rights and trade secrets) be circumvented to allow for greater understanding, transparency and due process?
And if the harm has been caused by the AI alone (deliberately or not), and through no fault or negligence attributable to a human, can justice be rendered if no one is to blame?
The purpose of Criminal law is “to help maintain public safety, security, peace and order in society”15. In other words, it is meant to deter legal subjects from conduct that is undesirable or harmful to society. In the Canadian justice system, as in all Common Law jurisdictions, criminal liability rests on a handful of fundamental rights and principles that are protected by legal instruments such as the Criminal Code16 and the Canadian Charter of Rights and Freedoms17 and upheld by the courts. In order to determine whether robots can be prosecuted under our court system, we shall review some of them.
First and foremost amongst the principles of criminal justice is the presumption of innocence. It is the cornerstone of Canadian Criminal Law and can be found in Sect. 6(1) of the Criminal Code:
6. (1) Where an enactment creates an offence and authorizes a punishment to be imposed in respect of that offence,
(a) a person shall be deemed not to be guilty of the offence until he is convicted or discharged under section 730 of the offence;
(...)
As a result, in order to obtain a conviction, the burden of proof entirely rests on the prosecution as it must establish beyond a reasonable doubt that the accused has indeed committed a crime.
To achieve this, two elements must be proven, that is, the actus reus (the perpetration of an unlawful act by the accused) and the mens rea (that is, the state of mind or willful intent by the accused to commit same).
The Canadian Charter of Rights and Freedoms is an important part of our Constitution that enshrines rights deemed essential in a free and democratic society.
In order to ensure the principle of equality before the law, the Charter contains rights such as the right to life and liberty18, the right to a lawyer19, and also the right to be presumed innocent until proven guilty20. These rights accrue to all Canadians (and, to some extent, to non-citizens), while a limited subset thereof has been recognized for corporations.
It is worth noting that neither the Criminal Code nor the Canadian Charter provides any definition of the notion of person.
While the Criminal Code endeavors to regulate the behavior of individuals, the pertinence of its interpretation and application must take into account the historical, political, cultural and contextual factors of society as they evolve over time, a process that originates from, and is expressly designed to meet, the needs and expectations of humans.
That said, it is true that the Criminal Code does contain provisions pertaining to legal persons which, hitherto, have been restricted to corporations. This, in itself, suggests that it could eventually embrace AI entities, as we will see further on.
Comparatively, and although it refers on occasion to the notion of citizen, the Canadian Charter is a little ambiguous in that respect considering that some of its rights apply exclusively to humans21 while some others can be invoked by legal persons22.
In light of the above, considering that smart robots do not benefit from any specific or general form of legal status, they cannot be prosecuted in their own right. Rather, they must be assimilated to an object or thing (inanimate or not) under the responsibility of their owners23, whose liability will be engaged as the case may be.
As a result, as AI evolves into more autonomous entities with a mind of their own,24 the issue of criminal liability may eventually expose the Canadian legal system to situations where attribution or conviction becomes impossible25, a scenario that is unacceptable in a society where one’s actions are regulated by the principle of accountability.
From the outset, the mere prospect of prosecuting a machine (certainly from a human point of view) is unnatural. It challenges several values and concepts generally accepted hitherto, such as the primacy of mankind in the universe and the superiority of humans over machines. More importantly, it elevates robots to a legal status that is similar to that of humans, a revolutionary and very unsettling prospect to say the least. In addition, while this possibility threatens deeply rooted cultural beliefs, it calls into question the very significance of free will, consciousness, freedom, and, ultimately, the notion of life itself and what it means to be human.
There is much debate in literature about the desirability of criminal liability for AI entities. While some argue that this would help resolve some practical problems of accountability inherent to self-aware technology (in particular in circumstances where humans are not to blame), others, like Abbott, claim that the trade-offs would be too costly for society whereas other options grounded in civil liability would arguably yield similar results26.
In any event, if our society decides to grant AI legal personhood, what will be the preconditions? While “all humans are created equal” in principle, the same cannot be said about AI entities, which are the end result of a combination of different suppliers, developers and programmers. This raises the complex question of technical criteria: what functions, processes, and algorithms would be required? Also, what degree of self-awareness and consciousness would this entail? Ultimately, should there be different classes of legal personhood for smart robots?
Moreover, if smart robots are to be criminally prosecuted, this suggests that they would be entitled to due process and, consequently, to some rights as well. How far should the legislator go in granting such rights? Should they be limited in scope in a manner similar to those conceded to corporations?
Presser warns that granting legal personhood to smart robots could unduly anthropomorphize and enhance the place of AI (that is, more strikingly, non-human, non-living machines) in society27 and, thereby, bring about dire consequences for humanity.28
If the Canadian legislator confers a degree of legal personhood on robots in order to subject them to the Criminal Code, robots will be prosecuted similarly to individuals and corporations, under the same fundamental principles seen above. The concepts of actus reus and mens rea will therefore be applicable to them. How will this work?
The demonstration that an unlawful act was committed voluntarily (the actus reus) may present, from the outset, a pointed difficulty when applied to AI. By definition, an act is voluntary if it is “proceeding from the will or from one's own choice or consent”29. While this is often implied for humans, such is not the case for machines. As Presser points out, if an act is the result of programming or coding, there cannot be an expression of choice or will (even if the robot can learn autonomously). Consequently, the constitutional requirement for the actus reus cannot be satisfied.30 To resolve this problem, some authors suggest that the notion of voluntariness should not be applied to robots but reduced to “a material performance with a factual-external presentation”31.
The second element required to obtain the conviction of a smart robot, the guilty state of mind (mens rea), also presents considerable difficulties for the prosecution. To begin with, it is important to recall that Criminal Law has evolved over time to meet the needs of humans and that it is based on a distinct set of human values derived from physical and psychological experience32. As a result, the idea of transposing this regime to a non-human entity, no matter how advanced, seems highly perilous on many accounts.
Assuming that self-awareness can be achieved technologically, what level of consciousness would be required from a machine to meet the mens rea requirement? Similarly, how will the blameworthiness of the machine be demonstrated, considering the inherent complexity of algorithm33 programming? In that regard, Ying Hu suggests the adoption of a “less human-centric” approach to morality, similar to what is already in place for corporate entities34. While this approach may be convenient, the proposition is problematic: corporations35 are incorporated entities whose criminal liability is limited to certain crimes and sentences, whereas robots, as autonomous self-aware entities, should have a criminal liability identical to that of humans (notably to account for serious offenses such as murder, harassment, etc.).
If robots are to be convicted under the Criminal Code, one can expect they would be entitled to due process and, necessarily, some rights. Considering that their criminal liability would not be as limited as is the case for corporations (subject to the level of autonomy involved), they could presumably be entitled to the rights36 and freedoms provided by the Canadian Charter for natural persons, a precedent with unfathomable repercussions.
According to author Adou, a robot would have access to the same traditional means of defense that humans rely upon (necessity, self-defense, etc.), in addition to sui generis37 defenses in the event, for example, the robot is hacked or if it becomes the victim of a singularity38.
Canadian Criminal Law hinges on consequentialist and retributivist approaches. The former, founded on utilitarianism (the greater good), seeks to promote the prevention, deterrence and rehabilitation of crime whereas the latter focuses on punishment according to the severity and the blameworthiness of the crime at hand39.
Applied to AI entities, the concept of sentencing raises several questions. First, even if the principles applicable to humans in the determination of a sentence40 were followed, certain sentences might appear to be meaningless, inapplicable or inconsequential if applied to non-human entities. In response to this, Adou contends that, similarly to corporations, the issue of sentencing can be resolved with certain adjustments that take into account both the above-mentioned principles and the specificities inherent to AI.41 As a result, various punishments such as deactivation, destruction, imprisonment, decommissioning, community services or even fines42 could be envisaged.
Somehow, this perspective fails to meet the legitimate expectation humans have of the application of Criminal Law, that is, a sense of justice. While the achievement of consequentialist objectives is arguable in some respects, the imposition of such punishments43 on non-human, non-living entities appears inappropriate and ineffective in light of the fact that only humans will suffer the real-life experience of these penalties.
For the time being, considering AI possesses neither the autonomy, the consciousness, nor the legal personhood required to be held criminally liable, the responsibility for its unlawful actions will continue to accrue to humans44 (or corporations). However, as technology continues to evolve while incidents involving autonomous robots multiply, lawmakers will need to address this issue.
Considering the fundamental questions and the risks raised by autonomous and self-aware AI globally45, there is a pressing need for the international community to control the evolution of this industry. In 2017, the European Parliament adopted a resolution entitled “Civil Law Rules on Robotics” which included a Charter for Robotics46. More recently, the UN adopted the Draft Text of the Recommendation on the Ethics of Artificial Intelligence47 aiming to provide “a universal framework of values, principles and actions to guide States in the formulation of their legislation, policies or other instruments regarding AI”. In both instruments, however, the concept of criminal liability for AI remains an open question.
Meanwhile, Canada also endeavors to catch up on AI with Bill C-27 (Digital Charter Implementation Act)48, which contains in Part 3 the Artificial Intelligence and Data Act (AIDA), designed “to regulate international and interprovincial trade and commerce in artificial intelligence systems”49. As with the Draft Text adopted by the UN here above50, it contains no provisions pertaining to the criminal liability of AI.
In light of the findings made earlier in this paper, could AI be subject to criminal liability in Canada? As the case may be, how could this be accomplished?
Given the legal complexity and the multi-layered controversy that the prospect of legal personhood for AI within the Criminal Code would inevitably lead to, it might be safer and wiser to avoid this Pandora’s Box altogether in favor of the creation of a distinct criminal liability regime specifically designed for smart robots, as advocated by Hu55. This approach would be socially more acceptable and would provide similar benefits, while enabling humans to set distinct moral standards for all robots.56
The Digital Age has set the stage for, arguably, a time of reckoning in human History where intelligent machines will become increasingly autonomous and an integral part of our environment, to a point where, ultimately, “machine self-consciousness” will be achieved57. The repercussions of this technological achievement are hard to fathom. While the benefits and convenience that AI can provide are undeniable, the trade-offs for humans have yet to be ascertained and raise serious concerns.58
To date, while technological innovations in AI generally seem to get the thumbs up to proceed at the speed of light, fundamental legal issues compel us to take a hard look at the implications inherent to machine self-awareness and at the anticipated effects and consequences it will have on how we define ourselves as humans.
Considering the potential risks that AI presents for individuals and for humanity, the rule of law (in Canada and globally) needs to be revised preemptively59 to ensure accountability in the Robotic Age and to deter uses that would be foreseeably nefarious to human society.