Artificial intelligence is a massive opportunity, but it also raises risks that cannot be addressed through over-regulation, which might damage the market.

Three simultaneous technological revolutions unleashing AI

One of the main topics of the World Economic Forum 2017 was artificial intelligence (AI). I found the interview with Ginni Rometty, the Chairwoman, President and CEO of IBM, extremely interesting. She said that we are in a unique period, since three technological revolutions are happening simultaneously, making this point in time different from any before:

  1. The rise of cloud computing;
  2. The rise of data; and
  3. The rise of mobility.

Because of these three revolutions, there is a huge amount of information that cannot be dealt with by humans alone; we need systems that can handle such data, reason about it and learn. This has led to the rise of artificial intelligence.

I have read a number of articles on how artificial intelligence might represent a threat to workers. The Japanese insurance firm Fukoku Mutual Life Insurance recently made the news by making 34 employees redundant and replacing them with IBM’s Watson Explorer AI. But the point raised by Ms. Rometty is that AI does not replace humans: it does something that humans cannot physically do, since no human would be able to deal with such a massive amount of information, and AI can deliver results from which everyone can benefit.

AI and robots are rapidly becoming part of our daily life, and the potential of such technologies cannot always be controlled by humans. Consider the Google DeepMind project, where the AI is not programmed or taught to solve problems but has to learn by itself how to solve them. This means we will reach a stage where machines make decisions whose reasoning cannot be explained by humans!

The call for regulations on artificial intelligence

Ms. Rometty herself mentioned in her interview that a fourth revolution concerns security and privacy, and that such issues might still derail the revolution that the three components mentioned above have combined to create.

And on this topic, it might not be a coincidence that the Legal Affairs Committee of the European Parliament approved a report calling on the EU Commission to introduce a set of rules on robotics. Such rules include the following:

Who is liable and how damages should be recovered?

The Committee favours the introduction of strict liability rules for damages caused by robots, requiring only proof that damage has occurred and the establishment of a causal link between the harmful behaviour of the robot and the damage suffered by the injured party.

This would not settle the issue of allocating responsibility for “autonomous” robots like Google DeepMind’s, which did not receive instructions from the producer. This is why the Committee is proposing the introduction of a compulsory insurance scheme for robot producers or owners (e.g. producers of self-driving cars). The question is whether such an obligation would represent an additional cost that would either be passed on to customers or even prevent the development of these technologies.

Robots treated as humans?

What sounds quite unusual, and honestly a bit “scary”, is that the Committee also calls for the introduction of a specific legal status of “electronic persons” for robots, “with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently”.

The report does not fully clarify how such a legal status would work in practice, but it seems we are already attempting to distinguish the liability of the artificial intelligence itself from that of its producer/owner. This will have to be assessed on a case-by-case basis for autonomous robots, but civil law rules will definitely need to evolve in order to accommodate such principles.

Are ethical rules needed?

The Committee stressed the need for a guiding ethical framework for the design, production and use of robots. This would operate in conjunction with a code of conduct for robotics engineers, a code for research ethics committees reviewing robotics protocols, and model licences for designers and users.

My prediction is that most of the companies investing in this area will, sooner rather than later, establish an internal ethics committee. But the question is whether statutory laws on ethics are necessary, since they might limit the growth of the sector.

Privacy as a “currency” must not harm individuals

This is the first time I have seen privacy described as a “currency”. However, it is true that we provide our personal data in exchange for services. The matter is even more complicated in the case of complex robots whose reasoning cannot be mapped, a circumstance that might trigger data protection issues. It is therefore important that the Committee called for the guarantees necessary to ensure privacy and security, including through the development of standards.

The reaction from the industry

The European Robotics Association immediately reacted to the report, stating in a position paper that:

Whereas it is true that the “European industry could benefit from a coherent approach to regulation at European level” and companies would profit from legal certainty in some areas, over-regulation would hamper further progress. “This poses a threat to the competitiveness not only of the robotics sector but also of the entire European manufacturing industry.”

This is the same issue being discussed in relation to the recent European consultation on rules for Internet of Things technologies. It can be hard to set specific rules for technologies that are rapidly evolving. The concern is that regulation might restrict investment in the sector, whereas in my view we should welcome regulations that create more certainty and foster innovation.

If you found this article interesting, please share it on your favourite social media!