Ethics And Legal Implications of Artificial Intelligence

Artificial Intelligence (AI) describes the capacity of a computer to perform tasks commonly associated with human intelligence. It includes the ability to review information, discern meaning, generalize, learn from experience, and find patterns and relations so as to respond dynamically to changing situations.

The promise of AI is better decision-making and enhanced experiences. In their book Machine, Platform, Crowd, MIT professors Andrew McAfee and Erik Brynjolfsson write, “[t]he evidence is overwhelming that whenever the option is available, relying on data and algorithms alone usually leads to better decisions and forecasts than relying on the judgment of even experienced and ‘expert’ humans.” The fear is that AI in an unregulated environment will lead to a loss of human supervisory control and unfortunate outcomes.

On December 18, 2018, the European Commission’s High-Level Expert Group on Artificial Intelligence (“AI HLEG”) released the first draft of the Draft Ethics Guidelines for Trustworthy AI. Pursuant to the guidelines, Trustworthy AI requires an ethical purpose and technical robustness:

  1. Ethical purpose: Its development, deployment and use should respect fundamental rights and applicable regulation, as well as core principles and values, ensuring “an ethical purpose”, and
  2. Technical robustness: It should be technically robust and reliable, given that, even with good intentions, the use of AI can cause unintentional harm.

Evolution of the law in response to AI

To understand the legal aspects of AI, one of the central questions is how the law will evolve in response to it. Will it be through the imposition of new laws and regulations, or through the time-honoured tradition of having our courts develop new law by applying existing laws to new scenarios precipitated by technological change?

According to TechRepublic, “AI has already been used and accepted in a number of US decisions. In Washington v Emanuel Fair, the defense in a criminal proceeding sought to exclude the results of a genotyping software program that analysed complex DNA mixtures based on AI while at the same time asking that its source code be disclosed. The Court accepted the use of the software and concluded that a number of other states had validated the use of the program without having access to its source code. In State v Loomis, the Wisconsin Supreme Court decided that a trial judge’s use of an algorithmic risk assessment software in sentencing did not violate the accused’s due process rights, even though the methodology used to produce the assessment was neither disclosed to the accused nor to the court.”

The Question of Privacy

AI systems use vast amounts of data; therefore, as more data is used, more questions are raised. Who owns the data shared between AI developers and users? Can data be sold? Should this shared data be de-identified to address privacy concerns? Is the intended use of the data appropriately disclosed and compliant with legislation such as Canada’s Personal Information Protection and Electronic Documents Act (“PIPEDA”) and the European Union’s General Data Protection Regulation 2016/679 (GDPR)?

Governments are now updating their privacy legislation to respond to privacy concerns fuelled by the public outcry against massive data breaches and the unfettered use of data by large companies. Consumers have become increasingly concerned about the potential misuse of their personal information. In 2015, the European Commission conducted a survey across the 28 member states of the European Union, in which seven out of ten respondents expressed concern that their information was being used for a purpose different from the one for which it was collected.

Under the GDPR, companies must be clear and concise about their collection and use of personal data and indicate why the data is being collected and whether it will be used to create profiles of people’s actions and habits.

The Facebook data privacy scandal illustrates the privacy issues AI raises around data collection: personally identifiable information of “up to 87 million people” was harvested by the political consulting and strategic communication firm Cambridge Analytica. That company and others were able to gain access to the personal data of Facebook users due to the confluence of a variety of factors, broadly including inadequate safeguards against companies engaging in data harvesting, little to no oversight of developers by Facebook, developer abuse of the Facebook API, and users agreeing to overly broad terms and conditions.

The Question of Contract Law

The inherent nature of AI may require individuals or entities contracting for AI services to seek out specific contractual protections. In the past, software would perform exactly as promised. Machine learning, however, is not static; it is constantly evolving. As noted by McAfee and Brynjolfsson, “[m]achine learning systems get better as they get bigger, run on faster and more specialized hardware, gain access to more data, and contain improved algorithms.” The more data algorithms consume, the better they become at spotting patterns.

Parties might consider contractual provisions that covenant that the technology will operate as intended and that, if unwanted outcomes result, contractual remedies will follow. These additional provisions might include an emphasis on audit rights with respect to the algorithms within AI contracts, appropriate service levels, a determination of the ownership of improvements created by the AI, and indemnity provisions in the case of malfunction. AI will dictate a more creative approach to contracts, where drafters will be forced to anticipate where machine learning might lead.

The Question of Tort Law

We start the tort analysis with the following questions: Who is responsible? Who should bear liability? In the case of AI, is it the programmer or developer? Is it the user? Or is it the technology itself? What changes might we see to the standard of care or the principles of negligent design? As AI evolves and makes its own decisions, should it be considered an agent of the developer, and if so, is the developer vicariously liable for decisions made by the AI that result in negligence? The further AI systems move away from classical algorithms and coding, the more they can display behaviours that are not just unforeseen by their creators but wholly unforeseeable. Where there is such a lack of foreseeability, are we placed in a position where no one is liable for a result that may have a damaging effect on others? One would anticipate that our courts would respond to prevent such an outcome.

The Question of Bias and Discrimination

Companies like Microsoft and Google have recognized that offering AI solutions that raise ethical, technological and legal challenges may expose them to reputational harm. The issues of bias and/or discrimination have become more prevalent as more companies and governmental entities turn to AI systems in their decision-making processes. For example, a 2016 investigation by ProPublica revealed that an algorithm used by a number of US cities and states to assist with bail decisions was twice as likely to falsely label black prisoners as being at high risk of re-offending as it was white prisoners.

This brings up the “black box” problem, in which a computer or other system produces results, but provides little to no explanation for how those results were produced. In the case of machine learning, the greater the complexity of an algorithm, the more difficult it is for users to understand why the machine has made a certain decision.

“Almost all concerns that relate to ‘improperly set up AI’ can be solved by the AI explaining its thinking,” Grosset said. “If the human counterpart of the AI can understand why something is flagged, then they can make better informed decisions. Human judgement is still a key component of a balanced AI system.”
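To make these two ideas concrete, the short sketch below uses entirely hypothetical, simulated data with made-up feature names and weights; it is not the tool examined by ProPublica or any real assessment system. It simply illustrates, first, how one can measure whether a risk-scoring model’s false positive rate differs between groups and, second, how a simple, openly weighted model can “explain its thinking” for any individual score.

```python
# A minimal, purely illustrative sketch using simulated data (no real system is modelled).
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dataset: one row per person assessed.
n = 1_000
group = rng.choice(["A", "B"], size=n)       # protected attribute (illustrative only)
prior_offences = rng.poisson(1.5, size=n)    # hypothetical feature
age = rng.integers(18, 70, size=n)           # hypothetical feature
reoffended = rng.random(n) < 0.3             # simulated ground-truth outcome

# A toy linear risk score: the weights are stated openly, so every score can be explained.
weights = {"prior_offences": 0.8, "age": -0.02}
score = weights["prior_offences"] * prior_offences + weights["age"] * age
flagged_high_risk = score > np.median(score)  # decision rule: top half flagged as high risk

# Bias audit: the false positive rate is the share of people who did NOT reoffend
# but were nonetheless flagged as high risk, computed separately for each group.
for g in ["A", "B"]:
    mask = (group == g) & ~reoffended
    fpr = flagged_high_risk[mask].mean()
    print(f"group {g}: false positive rate = {fpr:.2%}")

# "Explaining its thinking" for one individual: per-feature contributions to the score.
i = 0
print(f"\nscore for person {i}: {score[i]:.2f}")
print(f"  prior_offences contributes {weights['prior_offences'] * prior_offences[i]:+.2f}")
print(f"  age contributes            {weights['age'] * age[i]:+.2f}")
```

Real assessment tools are far more complex, but the same two questions apply: does the error rate differ across groups, and can the system’s reasoning be inspected and understood by the humans who rely on it?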

The Question of Employment

AI has driven visible changes in the way we work and trade, from analog processes to tech-savvy ones: orders are now placed through apps, and traditional tools of trade have been replaced, a case in point being the introduction of bitcoin. In 2018, Amazon opened its first checkout-free Amazon Go store, with no cashiers on the floor. Robot restaurants have been fully automated, providing every service through an AI-enabled robot. Robots can provide many benefits for businesses as well as consumers and improve the overall experience for diners. Major restaurants and food chains all over the world have already adopted robots for the benefits they bring, and adoption is only going to increase with advancements in technology. This raises the question of how to deal with the unemployment that results when human labour is replaced by technology.

Conclusion

AI development offers enhanced experiences but is not devoid of ethical and legal shortcomings. The journey to continuously acknowledge and understand its evolution is ongoing. As Matt Bellamy states, “In the long term, artificial intelligence and automation are going to be taking over so much of what gives humans a feeling of purpose.” It is to our advantage to look at the positives and at how we can make AI work to our satisfaction.