
AI workshop

In July, the Hazards Forum had the pleasure of visiting the Health and Safety Executive (HSE) site in Buxton, UK. The HSE kindly hosted our hybrid Artificial Intelligence (AI) workshop, which brought together presenters from Thales, the Office for Nuclear Regulation (ONR) and the University of York, along with other experts in the hazards fields, both in-person and online.

Workshop objectives

The Emerging Technology Interest Group of the Hazards Forum ran the workshop with the following objective – to consider whether (or how) the opportunities arising from the rapid adoption of AI outweigh the new risks that emerge.

The main objective of our interactive workshop was to assess AI risks. In areas such as critical infrastructure and the built environment, decision-making, and monitoring, these risks may involve algorithmic bias, a lack of interpretability, overreliance on AI, malicious use and manipulation, ethical dilemmas, and liability and accountability issues. These risks can lead to unbalanced outcomes, erode public trust, compromise safety and security, and create legal and regulatory challenges. The application of AI to risk assessment is also fraught with inherent difficulties. Complex systems and combined hazards can challenge the capacity of AI systems to accurately predict, respond to, and mitigate cascading impacts. Extreme events that exceed established design parameters may reveal limitations in AI models trained on historical data and expose infrastructure to unanticipated threats. New hazards stemming from emerging technologies, for instance, can introduce unforeseen dangers that outpace the adaptability of AI systems.

To effectively address these risks, it is essential to continuously refine AI models, integrate reliable data and expertise, and promote close collaboration across sectors to formulate comprehensive and proactive hazard management strategies. This also requires the establishment of robust AI governance frameworks, cooperation among AI developers, policymakers, and end-users, as well as the creation of industry-specific standards and best practices. By sharing this knowledge, and with the guidance of our three expert speakers and the debate among our participants, we enabled a wide-ranging discussion of the risks associated with the implementation of AI within participants' professional fields.

Speakers and presentations

We heard from three speakers who presented perspectives on AI classification, assessing and managing AI risks, and the opportunities AI offers to all sectors.

AI classification – a framework to manage AI risks 

Rajiv Murali, Product Safety Engineer at Thales, explained that the rapid adoption of AI brings various challenges and risks that require careful management. Understanding these risks and implementing appropriate measures is crucial to ensure the safe and responsible development and deployment of AI-based services. At the system level, one approach to viewing and managing risks is to consider the classification of AI: Assistance, Automation and Autonomy.

This classification provides a manageable framework for understanding the unique failure modes, assessing appropriate hazard analysis techniques, and determining effective controls and mitigation strategies required at each stage. 

For Assistance and Automation, the socio-technical accident model STAMP (Systems-Theoretic Accident Model and Processes) plays an important role. STAMP places significant emphasis on the interaction between the operator and the AI-based system. This model helps identify potential failure points in the interaction and facilitates the development of appropriate safeguards early in system design. For Autonomy, on the other hand, additional analysis methods such as SOTIF (Safety of the Intended Functionality) are essential. Autonomy introduces a unique set of risks, as hazards can arise solely from the system's intended function without any failure modes in the system. SOTIF analysis focuses on ensuring the safe behaviour of AI systems, accounting for edge-case scenarios where the system might perform as intended but still lead to hazardous situations.

To conclude, Rajiv outlined that the various types of risks introduced by the adoption of AI can be managed effectively by considering the classification of the AI-based service within the safety management system. This provides a view of the expected failure modes, appropriate safety analysis techniques and controls required at each AI level.
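To make this mapping concrete, the short Python sketch below shows one way the classification could be encoded in software. It is a minimal illustration of the level-to-technique mapping described in the talk; the names and structure are hypothetical and not part of any Thales tooling.

```python
from enum import Enum, auto


class AIClass(Enum):
    """Classification levels discussed in the presentation."""
    ASSISTANCE = auto()
    AUTOMATION = auto()
    AUTONOMY = auto()


# Illustrative mapping of each level to the analysis techniques highlighted
# in the talk: STAMP for the human/AI interaction at every level, with SOTIF
# added for Autonomy, where hazards can arise without any system failure.
ANALYSIS_TECHNIQUES = {
    AIClass.ASSISTANCE: ["STAMP"],
    AIClass.AUTOMATION: ["STAMP"],
    AIClass.AUTONOMY: ["STAMP", "SOTIF"],
}


def required_analyses(ai_class: AIClass) -> list[str]:
    """Return the hazard analysis techniques expected for a given AI class."""
    return ANALYSIS_TECHNIQUES[ai_class]


if __name__ == "__main__":
    for level in AIClass:
        print(f"{level.name:<10} -> {', '.join(required_analyses(level))}")
```

A real safety management system would attach far richer controls, evidence requirements and mitigation strategies to each level; the point here is only that the classification gives a structured entry point for deciding which analyses apply.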

Assessing AI Risks: Safety and Beyond 

Professor John A McDermid OBE FREng, Director of the Assuring Autonomy International Programme (AAIP) at the University of York, examined how the use of AI, especially machine learning (ML), poses many challenges for safety engineering – primarily due to the transfer of decision-making from humans to machines and the lack of interpretability of ML models. John outlined the challenges of AI/ML from a safety engineering perspective and gave an overview of approaches to addressing them, specifically the systematic approaches to developing safety cases being developed by the AAIP, which are progressively influencing industry, standards and regulation.

John concluded with a discussion of some of the broader concerns, e.g. ethical and legal issues, and indicated how the concept of safety cases can be extended to consider these concerns. 

AI in practice: Examples from the Nuclear Sector 

David Smeatham, Head of Innovation at ONR, offered the regulator's perspective on the advances in AI. David focussed particularly on how ONR's approach to regulating innovation is to support the adoption of innovative solutions by the nuclear industry and its supply chain where this is in the interest of society and consistent with safety and security expectations. Where regulatory frameworks do not exist and there is a high level of technology readiness, regulatory sandboxing is being used to establish pragmatic and robust regulatory benchmarks to enable permissioning.

David made it clear that sandboxing offers a safe space in which to have an open, constructive and collaborative discussion. He shared two examples of the application of sandboxing in relation to AI, as well as describing how this work informs considerations for AI safety cases. 

Participant Q&A

During the workshop, the participants debated the following matters:

  • Who will be responsible for putting together the safety case, where needed?
  • Will smaller industries have the skillset to develop and monitor the safety case? Should the regulator step in? 
  • What are the key challenges within the nuclear sector? 
  • Who is the duty holder? The designer / construction stage / user? 
  • Adopting a golden thread approach – it is essential that all duty holders know what responsibilities they have

Workshop findings

The participants debated the risks and opportunities of AI, along with priorities and potential actions the Hazards Forum could take to support interdisciplinary learning on the hazards and risks of AI.

Risks of AI

Given the pace of change and rapid development in AI, pressure to demonstrate AI capability, coupled with competition, can lead to gaps in knowledge and challenges with competency. There is a lack of visibility of potential issues, such as implementation benchmarks and understanding of safety consequences. Additionally, integration with legacy systems was considered a challenge, as legacy systems were not designed with sensors and similar instrumentation in the way modern equipment is.

There are challenges and limitations with available data. While algorithms will report how confident they are, with few training samples they can be highly confident and very wrong. Correct data selection criteria and appropriate context are important factors. The ability of machine learning (ML) to inform us about how AI methods could be bypassed was questioned; with this knowledge it might be possible to reduce risks and thus move higher up the risk reduction hierarchy.
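As a minimal sketch of the "confident but wrong" behaviour described above, the following Python example (illustrative only, with synthetic data and an arbitrary choice of model, not material from the workshop) fits a classifier on ten points and then queries it far outside the training distribution.

```python
# Illustrative sketch: a classifier fitted on very few samples can report
# near-certain probabilities on an input far from anything it has seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Five training points per class, drawn from two nearby clusters.
X_train = np.vstack([rng.normal(0.0, 0.5, (5, 2)),
                     rng.normal(2.0, 0.5, (5, 2))])
y_train = np.array([0] * 5 + [1] * 5)

model = LogisticRegression().fit(X_train, y_train)

# An out-of-distribution point, far from both training clusters.
x_new = np.array([[10.0, 10.0]])
proba = model.predict_proba(x_new)[0]
print(f"Predicted class: {model.predict(x_new)[0]}, confidence: {proba.max():.3f}")
# Typically prints a confidence close to 1.0, even though the model has no
# evidence about this region of the input space.
```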

There is also the threat of theft from cloud infrastructure, which could disrupt handling systems and add complexity to multi-modal processes.

When it comes to bias, the extent of bias in foundation models is not clear. Large language models (LLMs) can also introduce additional biases via hallucinations. Similarly, poor understanding of the AI/human interface could lead to:

  • safety consequences, missed benefits and impacts for vulnerable workers 
  • crossed wires through lack of clarity of conversation 
  • lack of transparency of decision making 
  • mental health / wellbeing of workers (work intensification) 
  • loss of control or communication with the AI system, and inaccurate reporting.

In relation to definitions and boundaries, where there is a broad range of experience and applicability in different areas within a sector, inconsistencies may exist in approaches to implementation. The proliferation of commercial solutions offered to different industries can store up potential problems if those companies are not able to understand the full risks outside their boundary. Definitions of safety also need to be questioned, as these are variable.

Opportunities to contribute to and influence IEC 61508 were considered limited.

AI Opportunities

Identifying and prioritising where AI would be useful could lead to improvements in the work environment, for example the potential to reduce maintenance, improve maintenance schedules through continuous monitoring (thereby reducing cost), and take people out of hazardous environments. There is also the potential to remove people from error-prone repetitive tasks and to create greater independence for disabled people by augmenting difficult or complex tasks.

There is the potential to share lessons across sectors – if machine learning (ML) could distil the lesson from the generality of circumstances. The key issue is what is done with the lessons so as to prevent recurrence.

AI can also remove barriers that stop people learning from safe mistakes, create actionable intelligence by integrating data sets, and remove human bias from analysis – an effective framework for human-AI co-working to eliminate bias would be valuable.

When it comes to legacy technology, AI may help find solutions to these issues and provide the UK with opportunities to increase adoption rates in heavy industry.

Priorities for action

A key priority is to identify what is in scope with reference to the Government policy paper 'A pro-innovation approach to AI regulation', to help inform matters such as proportionality in the safety case approach and human/worker impacts (stress, wellbeing, work intensification, coping mechanisms and management training), so that regulators and others can ask the right questions.

An AI skills roadmap needs to be developed to build on the UK's existing strengths and develop new ones, with a focus on emergent AI and a healthy AI skills pipeline. There is also a need to develop a framework to assess AI for bias, hallucinations, drift, and inferred instructions/goals, as well as to update data handling procedures and infrastructure to account for new threats.

There is a need to understand how ML systems can be fluid within a wider system and how and when to re-verify their effectiveness, including where boundaries within these systems are or should be, and to develop tools to monitor for drift. There is also a need to ensure a strong link between the system description and the safety analysis, with an emphasis on learning about data management and validation of algorithms from others who have already started using ML in monitoring and maintenance activities.
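As an illustration of what a simple drift-monitoring tool could look like, the sketch below (an assumption for this report, not a tool presented at the workshop) compares a monitored feature in recent production data against a reference sample from training, using a two-sample Kolmogorov-Smirnov test; the threshold and the choice of test are illustrative.

```python
# Illustrative drift monitor: flag drift when a two-sample
# Kolmogorov-Smirnov test rejects the "same distribution" hypothesis.
import numpy as np
from scipy.stats import ks_2samp


def drifted(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the recent sample looks drawn from a different distribution."""
    _statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha


rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 5_000)     # feature values seen at training time
recent_ok = rng.normal(0.0, 1.0, 500)       # production data, no drift
recent_shifted = rng.normal(0.8, 1.0, 500)  # production data after a shift

print(drifted(reference, recent_ok))        # expected: False
print(drifted(reference, recent_shifted))   # expected: True
```

In practice such a check would run per feature (and on model outputs) over rolling windows, with drift alerts feeding back into the re-verification activities described above.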

While sectors will vary in their utilisation of AI, there is a need for a high-level piece of guidance which sets out the principles needed to assure an AI system in each sector. 

COMAH (Control of Major Accident Hazards) duty holders will need to continue dialogue with sectors to identify how they are using AI.

Within the energy sector, there is a need to identify whether AI is keeping up with the pace of the search for alternative energies and whether it can help implement these successfully in the future. The sector must also identify the skills future engineers will need to meet its demands (linking back to STEM skills).

Within the research sector, researchers can help the government focus on immediate problems such as AI used in safety-critical applications, e.g. vehicles.

Actions for the Hazards Forum

Based on the workshop suggestions, we will consider exploring the following topics:

  • sharing AI experiences from a wider range of sectors and geographies
  • understanding what safety and environmental protection mean when viewed through different stakeholder lenses, such as the government
  • the human impact of AI
  • what decisions are being made and which are irreversible, e.g. drones
  • the AI skills gap

We will endeavour to facilitate collaboration between industry, the supply chain and regulators, using hackathons for common challenges such as interfaces with legacy systems, data tools and humans, and establishing fit-for-purpose maintainability. We will also promote innovation and look to engage with the Office for AI.

If you have an interest in AI, want to know more about the Emerging Technology Interest Group or want to follow up on the areas discussed at our workshop, please get in touch with the Hazards Forum.

