In parallel to the proposal for a Regulation on Artificial Intelligence,[1] the European Commission has also presented a proposal for the new Machinery Regulation 2021[2] (Machinery Regulation), which will replace the Machinery Directive currently in force (Directive 2006/42/EC).[3]

In the first part of our analysis (available here), we examined the new obligations that manufacturers, importers and distributors will have to comply with under the new Machinery Regulation. In this second part, we look at the main intersection points between machinery safety legislation and the evolution of digital technologies, including Artificial Intelligence, IoT, cybersecurity and the human-machine relationship.

The Machinery Regulation between AI, IoT, robotics and cybersecurity: The challenges ahead

As mentioned in the first part of this article, in 2018,[4] the EU Commission confirmed the general relevance and flexibility of the Machinery Directive (Directive 2006/42/EC, the Directive) to the pursuit of its objectives, even in a context of pressing innovation. At the same time, however, the Commission highlighted some critical issues inherent to the ability of the Directive to meet the real challenges posed by emerging digital technologies, including Artificial Intelligence (AI) and the Internet of Things (IoT).

Technological transformation and digitization are driving steady growth in the European mechanical engineering industry, helping it maintain competitiveness at the global level. Among the most prominent technologies are advanced manufacturing (eg the application of IoT on an industrial scale and 3D printers) and advanced materials (including so-called nanomaterials), which are an essential step toward establishing Industry 4.0 or Smart Industry. In particular, the IoT is developing rapidly and dynamically. Its use is set to increase in the organization and design of industrial sites and the manufacture of equipment and machines.[5] A growing number of industrial companies, both domestic and foreign, rely on the IoT to manage and control their spaces, increase the potential and efficiency of their machinery (eg through coordinated performance monitoring and so-called predictive maintenance), and strengthen their security measures.

The use of robotics in machinery is also increasing: robots, bots, androids, and other manifestations of AI now seem to be on the verge of giving impetus to a new industrial revolution, making it essential for the relevant legislation to consider the legal and ethical implications, without this constituting an obstacle to technological development. According to the European Parliament, progress in robotics and AI could transform living and working conditions and habits, raise efficiency, cost savings and safety, and improve service levels in the short and medium term.[6]

In the mechanical engineering sector, attention is mainly focused on robots guided by AI systems which – while still being classified as products or machines – are to some extent endowed with autonomous and “self-learning” capabilities: they process information in real time, “learn” from experience, interpret the environment, interact with humans, adopt new behaviours based on their “experiences,” can function in unstructured environments (such as construction sites) and move through physical space autonomously, without human intervention (or control). In this context, “intelligent” machines remain a tool. We are still far from the dystopian scenario the Swedish philosopher Nick Bostrom envisaged, in which a general AI entrusted with producing paperclips sacrifices all the planet’s resources to achieve its goal.[7] In particular, mobility could pose more significant risks to health and safety than those arising from mere AI software, mainly because of the real possibility that the robot, as it moves, could cause physical harm to humans.[8]

The drive towards this type of innovation in the machinery sector is now evident and tangible, but it’s also necessarily accompanied by critical issues and challenges:

  • the need to ensure non-discrimination, due process, transparency and comprehensibility of decision-making processes;
  • the employment consequences of robotics and machine learning;
  • the need for those involved in the development and commercialisation of AI applications to integrate security and ethical aspects from the outset, accepting legal responsibility for the quality of the technology produced;
  • the importance of preserving the dignity, autonomy and self-determination of individuals, including in terms of human-robot emotional attachment and relation to the issue of ergonomics; and
  • the privacy and cybersecurity implications of applications and devices that communicate with each other and with databases without human intervention, and the resulting exposure to cyberattacks, given that many industrial processes are now driven by computer systems and applications.

As mentioned in our first article, the legislative proposals on AI and machine safety complement each other. The AI Regulation delegates the conformity assessment to the Machinery Regulation. The risk assessment for machines incorporating AI systems is only carried out once through the Machinery Regulation. In particular, while the proposed AI Regulation addresses the safety risks of AI systems, the proposed Machinery Regulation focuses on the safe integration of such systems into machines.

It follows that robots guided by AI systems that fall under the definition of “machine” must also meet the machinery safety regulations’ essential health and safety requirements and be subject to the relevant conformity assessment procedures.

It’s questionable how such requirements can mitigate certain types of risks. For instance, risks may arise from control systems in IoT ecosystems or a lack of adaptation of existing protective measures to human-machine interaction in shared workspaces. Is it enough to deal with physical damage due to unforeseen movements of the robot, or should mental health also be considered, given the pressure resulting from the confrontation of humans with machines as entities that exceed human intellectual capacity in certain respects?

Below are some of the main novelties introduced by the proposed Machinery Regulation with regard to these issues.

AI systems: ‘High-risk’ or ‘potentially high-risk’ products

The Machinery Regulation considers, among high-risk products,[9] software that performs safety functions (including AI systems) placed on the market individually and machines incorporating AI systems that perform safety functions.

The risk assessment must take account of changes in the behaviour of machinery designed to operate with different levels of autonomy. Safety functions and operating settings or rules may not be modified in ways that exceed the limits defined by the manufacturer in the initial risk assessment. In addition, during the learning phase, the behaviour of the machinery must be constrained by appropriate safety circuits.

Furthermore, in the context of the essential health and safety requirements applicable to mobile machinery, specific references have been included to “autonomous mobile machinery” and, in particular, to “Automated Guided Vehicles” (AGVs).

Among the various AI-related amendments made during the legislative process of the Machinery Regulation, four proposed changes stand out in particular:

  • replacing the reference to machine products presenting “high risks” with “potentially high risks,” whereas the AI Regulation provides that AI systems are considered “high risk” if certain conditions are met or they appear on a specific list attached to the Regulation;[10]
  • adoption by the Commission of delegated acts amending Annex I, as from 36 months after the Regulation’s entry into force;
  • application of the Machinery Regulation only to AI systems with self-determining behaviour and capable of evolving during regular operation, thus excluding static/deterministic systems with a programmed code for the execution of certain automated machine functions; and
  • drawing up appropriate guidelines (a more dynamic document than the Regulation) that take into account developments in AI and machinery applications to correctly assess possible additional risks that may arise during the life-cycle of the machinery product.

Cybersecurity and system corruption

Among the new risks, the Machinery Regulation turns its attention to cyberattacks and malware, regulating the cases of danger resulting from faults and the circumstances in which the safety of machines may be affected by intentional external attacks. In this sense, manufacturers will have to take appropriate measures to ensure that control circuits performing safety functions are designed to prevent malicious attacks from causing dangerous machine behaviour.

With regard to the protection of computer systems against system corruption, the proposal provides that a machine must be designed and built so that connecting it to another device does not create a dangerous situation. Hardware components associated with machine connectivity and/or software access that are critical from a security point of view must be adequately protected against accidental or intentional system corruption.

The proposal is consistent with EU cybersecurity policy. It links cybersecurity systems under Regulation (EU) 2019/881[11] to demonstrate compliance with the Machinery Regulation. In particular, certifications or declarations of conformity under the cybersecurity certification system per Regulation (EU) 2019/881 can be extended to comply with the essential health and safety requirements per the Machinery Regulation.[12]

Workers and co-bots: Safety and mental health

Finally, the Machinery Regulation deals with the delicate topic of workspace sharing between humans and machines and, specifically, between humans and collaborative robots or “co-bots” and the influence that such technologies could have on workers’ wellbeing in psychological and stress terms.

Machines that can learn and evolve in their behaviour should be prepared to relate to humans appropriately and understandably (orally, or through gestures, facial expressions or body movements), telling workers what they are planning to do and why. Instead of focusing only on the physical and visible damage that AI-based machines might cause to humans, the proposal goes so far as to address the mental health impacts and psychological tensions that might arise in confronting an entity that may appear infallible and tireless.


Liability of economic operators

The overall objective of safety and liability frameworks is to ensure that all products, including those incorporating emerging digital technologies, operate safely, reliably and consistently, and that there are effective remedies in case of damage. Confidence in these technologies is a prerequisite for their adoption by businesses and users and can foster competitiveness.[13] This context inevitably confronts us with a new scenario regarding the possible liability of economic operators (ie manufacturer, authorized representative, importer and distributor) in the event of damage caused by robots and machines integrating AI systems.

The coordination between the Machinery Regulation and the applicable AI legislation is crucial, with product liability (Directive 85/374/EEC and the Consumer Code) and non-harmonised national tort regimes applying in parallel. According to the current European legislative framework on machinery safety, responsibility for product safety remains with the manufacturer who places the product on the market (or the person who makes substantial modifications to it, according to the Machinery Regulation). The manufacturer is the one who has the detailed knowledge of the design and production process and, therefore, is best placed to carry out the conformity assessment procedure.

Some EU safety legislation already contains provisions explicitly referring to situations where several economic operators intervene on a given product before it is placed on the market (eg the Directive requires equipment manufacturers to provide the operator with information on how to assemble the equipment with another machine). However, explicit provisions requiring a certain level of coordination and cooperation between economic operators in the supply chain and users could create more legal certainty, especially for complex value chains.

With this in mind, the Machinery Regulation stipulates that all economic operators intervening in the supply and distribution chain should take measures to ensure that only compliant machinery products are made available on the market. To this end, a clear and proportionate distribution of obligations corresponding to the role of each operator is necessary.

Furthermore, to ensure the health and safety of users of the machinery products, economic operators should ensure that all relevant documentation (such as instructions for use) contains accurate and understandable information, takes account of developments in technology and changes in end-user behaviour, and is kept up to date as far as possible. Where machinery products are made available in packages containing several units, the instructions and information should accompany the smallest commercially available unit. Nevertheless, for traceability of machinery products throughout the supply chain, operators should be required to keep information on their operations involving machinery products for a certain period, proportionate to their role.

Finally, distributors and importers should be involved in the market surveillance tasks carried out by the competent national authorities. They should be prepared to participate actively in these tasks, providing the authorities with all necessary information on the products in question.

Next steps

The French Presidency of the Council of the EU adopted a compromise text on the draft Machinery Regulation on 31 March 2022. The vote in committee is planned for 20-21 April 2022, then the file is expected to go to Plenary in May 2022.[14]


[1] Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final.

[2] Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on machinery products.

[3] Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC.

[4] Evaluation of the Machinery Directive.

[5] Advanced Technologies for Industry – Sectoral Watch, April 2021.

[6] European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics.

[7] The ‘paperclip apocalypse’, in Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.

[8] Revision of the Machinery Directive (REFIT)/After 2021-4.

[9] Annex I, formerly Annex IV of the Directive.

[10] Art. 6 “Classification rules for high-risk AI systems”.

[11] Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act).

[12] In particular, the requirements set out in Annex III, sections 1.1.9 and 1.2.1, with regard to protection against corruption and the security and reliability of control systems.

[13] REPORT FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL AND THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.