Artificial Intelligence in Machinery: Navigating New Safety Frontiers Under Regulation (EU) 2023/1230
For years, the machinery industry has been rapidly integrating advanced digital technologies, moving away from purely mechanical systems toward smart, autonomous equipment. Today's advanced machinery depends less and less on human operators: it can process information in real time, solve problems, and adapt to unstructured environments.
Recognizing that the emergence of Artificial Intelligence (AI), the Internet of Things (IoT), and robotics presented new product safety challenges, a 2020 European Commission Report concluded that the old Directive 2006/42/EC contained significant gaps. To close these gaps, Regulation (EU) 2023/1230 explicitly covers the safety risks stemming from these new digital technologies.
For safety engineers and managers, this Regulation introduces a paradigm shift. Here is how you must navigate the new safety frontiers of AI and autonomous machinery.
Annex I, Part A: The High-Risk Classification of Self-Evolving AI
One of the most critical updates for safety professionals is the treatment of AI and machine learning under the new Annex I. The Regulation acknowledges that systems with self-evolving behavior possess characteristics—such as data dependency, opacity, and autonomy—that might considerably increase the probability and severity of harm.
Under Annex I, Part A, the following are now classified as high-risk items requiring a stricter conformity assessment procedure:
- Safety components with fully or partially self-evolving behavior that use machine learning approaches to ensure safety functions.
- Machinery with embedded systems featuring fully or partially self-evolving behavior that use machine learning approaches to ensure safety functions.
Because these fall under Part A, safety engineers must now secure mandatory third-party conformity assessment for these systems. It is important to note that these strict rules apply specifically to systems using machine learning capable of evolving; they do not apply to traditional software programmed only to execute static, automated functions.
Lifecycle Risk Assessments: Anticipating the "Learning Phase"
Under the new Regulation, the mandatory risk assessment process is no longer just about the machine's state when it leaves the factory. Safety managers must ensure that the risk assessment addresses the machine's entire lifecycle.
Crucially, the risk assessment must now include hazards that might arise due to an intended evolution of the machine's behavior as it operates with varying levels of autonomy. When designing control systems for self-evolving machinery, engineers must ensure that:
- The machinery does not perform actions beyond its defined task and movement space.
- Modifications to settings or rules generated by the machinery or operators—especially during the machine's learning phase—are strictly prevented if such modifications could lead to hazardous situations.
- It must remain possible at all times to correct the machinery or related product to maintain its inherent safety.
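The control-system constraints above can be pictured as a guard that vets every action the learning system proposes before it is executed. The following is a minimal illustrative sketch, not an implementation prescribed by the Regulation; all class and parameter names (`SafetyGuard`, `MovementEnvelope`, `vet_action`, and the rest) are assumptions for the sake of the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MovementEnvelope:
    """Axis-aligned bounds of the permitted movement space (metres)."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

class SafetyGuard:
    """Vets every proposed action against the defined task and movement space."""

    def __init__(self, envelope: MovementEnvelope, allowed_tasks: set[str]):
        self.envelope = envelope
        self.allowed_tasks = allowed_tasks
        self.learning_phase = True  # stricter vetting while the machine learns

    def vet_action(self, task: str, x: float, y: float) -> bool:
        # Reject anything outside the defined task list or movement space.
        if task not in self.allowed_tasks:
            return False
        return self.envelope.contains(x, y)

    def vet_rule_change(self, safety_relevant: bool) -> bool:
        # During the learning phase, self-generated modifications that
        # could lead to hazardous situations are blocked outright.
        return not (self.learning_phase and safety_relevant)
```

The key design choice is that the guard sits outside the learning component: the machine-learning system can propose whatever it has learned, but nothing crosses the guard unless it stays within the statically defined envelope, which preserves the ability to correct the machinery at all times.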
The Data Trail: Mandatory Logging and Traceability
With AI making autonomous safety decisions, accountability requires transparency. Regulation (EU) 2023/1230 introduces stringent data recording requirements for safety engineers to implement in control systems:
- Safety Decision-Making Data: For software-based safety systems, the machinery must record data on the safety-related decision-making process. This recording must be enabled once the product is placed on the market, and the data must be retained for one year, to be used exclusively to demonstrate conformity to national authorities upon request.
- Intervention Logs: A tracing log of data generated regarding legitimate or illegitimate interventions, as well as the versions of safety software uploaded, must be enabled and kept for five years.
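To make the two record types and their retention periods concrete, here is a minimal in-memory sketch. The retention periods come from the requirements summarized above; the class, method, and field names (`TraceLog`, `log_safety_decision`, `log_intervention`) are illustrative assumptions, not terminology from the Regulation.

```python
from datetime import datetime, timedelta, timezone

SAFETY_DECISION_RETENTION = timedelta(days=365)       # one year
INTERVENTION_LOG_RETENTION = timedelta(days=5 * 365)  # five years

class TraceLog:
    """In-memory stand-in for the machine's safety data store."""

    def __init__(self) -> None:
        self.records: list[dict] = []

    def log_safety_decision(self, decision: str, inputs: dict) -> None:
        # Safety-related decision-making data: retained for one year.
        self._append("safety_decision", SAFETY_DECISION_RETENTION,
                     decision=decision, inputs=inputs)

    def log_intervention(self, actor: str, legitimate: bool,
                         software_version: str) -> None:
        # Interventions and uploaded safety-software versions: five years.
        self._append("intervention", INTERVENTION_LOG_RETENTION,
                     actor=actor, legitimate=legitimate,
                     software_version=software_version)

    def _append(self, kind: str, retention: timedelta, **payload) -> None:
        now = datetime.now(timezone.utc)
        self.records.append({"kind": kind, "logged_at": now,
                             "expires_at": now + retention, **payload})

    def purge_expired(self) -> None:
        # Records past their retention period may be discarded.
        now = datetime.now(timezone.utc)
        self.records = [r for r in self.records if r["expires_at"] > now]
```

A production system would of course persist these records in tamper-evident storage rather than memory; the point of the sketch is that each record carries its own retention clock, so the one-year and five-year obligations can be enforced mechanically.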
Human-Machine Interaction: The New Ergonomics of Autonomy
As robots and AI-driven machines leave cages and enter shared workspaces, human-machine coexistence becomes a primary safety concern. The Regulation mandates that machinery be designed to reduce the psychological stress of interacting with it, with the design adapted both to shared spaces without direct collaboration and to direct human-machine interaction.
Safety engineers must design the human-machine interface to adapt to the machine's varying levels of autonomy. In practice, this means autonomous machines must be designed to respond to people adequately (e.g., verbally through words, or non-verbally through gestures and movement). Furthermore, the machine must be able to communicate its planned actions—specifically what it is going to do and why—to operators in a clear and comprehensible manner.
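One way to picture the "what and why" requirement is a small announcement structure that the HMI can render over whatever channel suits the situation (speech, display text, signal lights). This is a hedged sketch only; the `IntentAnnouncement` type and its fields are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentAnnouncement:
    """A planned action, its reason, and the channels used to convey it."""
    action: str            # what the machine is going to do
    reason: str            # why it is going to do it
    channels: tuple        # e.g. ("speech", "display", "signal_light")

    def render(self) -> str:
        # A comprehensible, operator-facing phrasing of the intent.
        return f"I will {self.action} because {self.reason}."

announcement = IntentAnnouncement(
    action="move to the charging dock",
    reason="the battery is below 15%",
    channels=("speech", "display"),
)
print(announcement.render())
```

The structure forces every planned action to carry a reason, so the machine can never announce *what* it is doing without also communicating *why*.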
Supervisory Functions for Autonomous Mobile Machinery
For autonomous mobile machinery, safety managers must ensure the implementation of a specific "supervisory function". This remote surveillance system must allow a supervisor to receive alerts about unforeseen or dangerous situations and issue limited commands, such as stopping, starting, or moving the machine to a safe position. If this supervisory function is inactive, the machinery must not be able to operate.
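The supervisory-function logic described above reduces to two rules: the machine must not operate while the supervisory link is inactive, and the supervisor's command set is deliberately limited. The sketch below illustrates that gating under those assumptions; the class and command names are illustrative, not taken from the Regulation.

```python
from enum import Enum

class Command(Enum):
    # The limited command set available to the remote supervisor.
    STOP = "stop"
    START = "start"
    MOVE_TO_SAFE_POSITION = "move_to_safe_position"

class AutonomousMobileMachine:
    def __init__(self) -> None:
        self.supervision_active = False
        self.running = False
        self.alerts: list[str] = []

    def tick(self) -> bool:
        # The machinery must not operate while supervision is inactive.
        if not self.supervision_active:
            self.running = False
            return False
        return self.running

    def raise_alert(self, message: str) -> None:
        # Unforeseen or dangerous situation: notify the supervisor.
        self.alerts.append(message)

    def handle_command(self, cmd: Command) -> None:
        if not self.supervision_active:
            return  # commands accepted only over an active supervisory link
        if cmd is Command.STOP:
            self.running = False
        elif cmd is Command.START:
            self.running = True
        elif cmd is Command.MOVE_TO_SAFE_POSITION:
            self.running = False  # e.g. a controlled move, then halt
```

Note that the inactive-supervision check lives in `tick()`, the machine's own operating loop, rather than in the command handler: even if no command ever arrives, losing the supervisory link stops operation.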
Preparing for the AI-Driven Future
Machinery Regulation (EU) 2023/1230 forces safety professionals to look beyond static mechanical risks and engineer safety into the very logic and data of evolving systems. By understanding the rigorous third-party assessment requirements for Annex I AI systems, implementing robust data logging, and prioritizing transparent human-machine communication, safety managers can ensure their innovative machinery remains compliant and safe in the digital age.
