Artificial intelligence is growing dramatically – it is not just a household word but a key part of many households, and it is also increasingly essential to business.
What do we see in the near term for AI? How can businesses navigate the legal issues that could limit AI? Is AI the new electricity?
Here are our top three predictions on the future of AI and IoT.
1. The AI imagined in science fiction is becoming a reality
I, Robot is an influential collection of short stories written by Isaac Asimov between 1940 and 1950. The premise of these pieces, the inspiration for the 2004 Will Smith movie of the same name, is a world where robots are part of everyday life.
Today, we are not far from achieving that world. AI personal assistant systems are always with us in our smartphones and control a continually increasing number of devices in our homes. And this movement is also taking place in the industrial sector. According to GlobalData, 54 percent of the more than 3,000 companies it surveyed in early 2018 have already prioritized investments in chatbots, machine learning, and deep learning technology. But, more importantly, these findings suggest that penetration of AI is snowballing: more than 66 percent of respondents indicated that AI investments will be a priority by the end of 2019.
The potential of AI is vast: for instance, Twitter recently announced that it had flagged 95 percent of the nearly 300,000 terrorism-related accounts it took down in the previous six months through algorithms rather than humans, enabling it to remove 75 percent of such suspicious accounts even before their first content posting.
Governments understand the relevance of AI for their countries. For example, France has launched an AI strategy which provides for investments of €1.5 billion. The US Department of Health and Human Services recently ran a pilot using AI to process thousands of public comments on regulatory proposals. The UK’s Department for Work and Pensions has deployed an AI system to handle incoming correspondence, and the Italian Ministry of Economy and Finance has implemented an AI-driven helpdesk to deal with citizens’ calls.
Our personal experience also confirms this trend. We have advised clients on legal issues arising from several AI projects, including the use of facial recognition to identify customers and potential fraudsters; machine learning and chatbot technologies that automate customer relationships in the contracting and customer support process; and IoT systems deployed in Industry 4.0, smart home, connected car and telemedicine projects.
Our experience shows, though, that while companies are trying to exploit 4.0 and AI technologies, many still have a “3.0 approach”: they often underestimate the impact of such technologies and fail to understand that these technologies will:
- lead to a change in companies’ business models that unveils new legal risks (eg, in terms of potential liabilities), requiring new legal competencies and a cultural shift in the company’s management and legal department
- require a more in-depth assessment of how to minimize risks and maximize benefits, for instance by protecting and exploiting data, which is increasingly a company asset, and by carefully vetting suppliers and negotiating agreements with them
- need the support of third-party providers since, as already experienced with Internet of Things technologies, building such technologies in-house can cost more and yield poorer results than cooperating with external suppliers.
As Satya Nadella, Microsoft CEO, anticipated in 2015, “Every business will become a software business” – and this is definitely one of our predictions for 2019.
2. AI regulations and their enforcement will become urgent
In 1942, Isaac Asimov published his “three laws of robotics” – introducing the concept of rules to govern the behavior of artificial intelligence in its relationships with humans.
Increasingly, those taking part in the development of AI are seeing the need for such rules – for swift, thorough regulation of AI. In a fascinating and “scary” interview discussing the evolution of artificial intelligence systems, Elon Musk told Joe Rogan that government regulation of AI is urgently needed. “I am not normally an advocate of regulation and oversight,” he said, “but this is a case where you have a very serious danger to the public.”
His view: “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” but “AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late. […] AI is a fundamental risk to the existence of human civilization.”
Recently, regulators have begun trying to address this need. Relevant initiatives include the European Commission’s recent draft Ethics Guidelines for Trustworthy AI, the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment adopted by the Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ), and the European Parliament’s 2017 resolution containing recommendations to the European Commission on civil law rules on robotics. Indeed, they offer some suggestions for achieving proper regulatory solutions, including, for instance, compulsory insurance covering the use of autonomous agents.
International cooperation is necessary to regulate AI. And this is the path followed by the EU member states, which in April 2018 signed a Declaration of Cooperation on AI, agreeing to work together on the most critical issues raised by AI and to jointly address its social, economic, ethical and legal questions.
However, the current attempts to regulate AI read more like statements of general ethical principles than actual regulations, and they lack binding effect and enforcement mechanisms.
It is hard to say whether that gap will be filled during 2019. But there is no doubt that unless regulators move firmly towards binding regulations that can limit potential misuse of AI and IoT without preventing their growth, we risk that their development:
- will be hindered in jurisdictions like the European Union, where, for instance, the GDPR already considerably constrains technologies that make automated decisions based on personal data, and
- will lead to significant negative consequences and risks on matters like the allocation of liability for damages, if product liability rules are not “upgraded” for an environment where AI does not just follow a manufacturer’s instructions but performs independent reasoning that even its manufacturer might find hard to explain.
Moreover, it is not possible to control AI through “traditional” technologies and actions. Even police authorities will need to use AI to monitor it and enforce measures against it.
3. Ethical rules will become essential for AI
Asimov’s first law of robotics is: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” But, when it comes to today’s highly complex AI technologies, that rule may fall short. As the movie version of I, Robot stressed, the potential gap between analytical and ethical reasoning can become a significant issue.
Rational decisions depend on the likely outcome of an event, but in practice, logical choices do not necessarily match socially acceptable positions: ethics needs to drive decision making by artificial intelligence technologies.
The most commonly cited theoretical example is that of a self-driving car which must decide between striking a pedestrian or swerving to avoid that pedestrian, thereby creating a higher risk of injuring others. Another relevant example is a company’s decision to invest in AI in sectors where it might be profitable but could present considerable risks to humans if the technology goes out of control.
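To make the gap between analytical and ethical reasoning concrete, here is a minimal, purely hypothetical sketch of the dilemma above. The maneuvers, probabilities and the ethical rule are all invented for illustration and do not reflect any real autonomous-driving system; the point is simply that a planner that only minimizes expected harm can “rationally” select a maneuver that an explicitly stated ethical constraint would veto.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    injury_probability: float  # hypothetical chance the maneuver injures someone
    people_at_risk: int        # hypothetical number of people exposed to harm

def expected_harm(m: Maneuver) -> float:
    """Purely analytical measure: expected number of people injured."""
    return m.injury_probability * m.people_at_risk

# Invented numbers, for illustration only.
options = [
    Maneuver("brake in lane", injury_probability=0.9, people_at_risk=1),
    Maneuver("swerve onto sidewalk", injury_probability=0.2, people_at_risk=3),
]

# The "rational" planner simply minimizes expected harm...
rational_choice = min(options, key=expected_harm)
print("analytical optimum:", rational_choice.name)  # swerve (0.6 < 0.9)

# ...but an explicit ethical rule (eg, never deliberately endanger
# bystanders on the sidewalk) vetoes that option, so the two kinds of
# reasoning disagree -- which is exactly the gap described above.
permitted = [m for m in options if m.name != "swerve onto sidewalk"]
ethical_choice = min(permitted, key=expected_harm)
print("ethically constrained choice:", ethical_choice.name)  # brake in lane
```

Nothing in the optimization itself supplies the ethical constraint; someone has to decide it and encode it explicitly, which is why the question of who sets those rules matters.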
AI should be implemented with care and consideration to avoid misuse and unintended consequences. We are already seeing businesses taking self-regulatory steps. Companies like Microsoft have identified six ethical values – fairness, reliability and safety, privacy and security, inclusivity, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence. And large IT corporations are establishing ethics committees.
But only governments can ensure that the economic and social impacts of AI are managed uniformly across all of society. That unique role allows them to set the ethical and legislative frameworks within which AI can be used safely in our communities.
Our top three best practices for the future of artificial intelligence
Regulators will need to understand that compliance with ethical rules governing AI cannot be left to the discretion of companies. On the contrary, companies must:
- be legally obliged to comply with applicable rules and regulations
- be required to prove such compliance, and
- be held accountable for such compliance, facing sanctions in case of a breach.
A famous quote from Isaac Asimov is: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”
Use of AI will only grow in 2019, and we predict that regulators, as well as police and judicial authorities, will need to create an appropriate environment to ensure the proper exploitation of such technology.
At the same time, every business is increasingly expected to rely on artificial intelligence. As Andrew Ng famously put it: “Artificial intelligence is the new electricity.” Like electricity, AI will lead to a new, universal industrial revolution. It is swiftly becoming the baseline, something every company will rely on. And companies that do not change are unlikely to survive.
For more on artificial intelligence and the legal and regulatory issues that touch it, come along to DLA Piper’s European Tech Summit in London on October 15. More details are available on dlapipertechsummit.com.