Nested Dependencies and Unconscious Connectivity

The modern world survives and thrives on complex interconnected systems that normally interface seamlessly and deliver efficiency and productivity. These systems are complex to develop and engineer but may go entirely unnoticed by the end user. The COVID pandemic has highlighted the interdependencies of both societal and engineered systems, whilst recent events in the energy sector have revealed rapid cascade effects into chemical manufacture, food production and distribution.

This Hazards Forum event considered some of the implications of failures of these interconnected systems that give rise to effects well beyond a single component replacement or a localised outage. The presentations considered:

  • The implications of deliberate exploitation of vulnerabilities
  • The complex threats posed by flooding, a particularly relevant topic given the launch of the UK Government’s Property Flood Resilience Code of Practice and the implications of climate change
  • Research into topical case studies, undertaken with UK experts involved in major resilience events, to identify lessons learned and potential improvements or innovations.

This event brought together expert speakers from diverse sectors spanning security, natural hazards and policy research to discuss how these risks can be identified, characterised, modelled and mitigated. The event was held at the Institution of Civil Engineers, Westminster, on 7 December 2021.

Introduction by Andrew Buchan, Chairman of the Safety and Reliability Society

Andrew Buchan introduced the event, noting it had been planned some 20 months previously. He said: ‘We’ve emerged into a very changed, different and dynamic work environment.’ He gave a personal example to demonstrate how the world is now a very complex place, where we rely on extremely complex technologies: ‘Only a couple of weekends ago, I found myself heading into a red weather warning due to Storm Arwen and a power cut that lasted 50 hours. Interestingly, I discovered that the part of the world where I grew up in the 1970s may have been much better prepared for, and resilient to, a 50-hour power cut than it is in the 2020s, because modern technology is really interlinked and very interdependent.’ He was delighted to welcome three speakers, all experts in their fields.

Speaker 1: Dr Paul Martin

Complex is as complex does

Dr Paul Martin is an advisor and writer on security, risk and behaviour and, among many other roles, has been director of security for the UK Parliament. See Paul’s full biography. He explored: the different types of risk; how security risks are dynamic and adaptive; the malevolent creativity of threat actors; the characteristics of complex systems; the cascade of unintended consequences; and the rules of resilience.

‘I’m going to talk mostly about bad people trying to do bad things and malicious risks, as distinct from natural hazards,’ said Paul, before making some general points about the nature of risk: ‘Risks come in different flavours, of which security risks are only one. One of the problems in the risk world is the tendency to treat risks as though “a risk is a risk” – putting financial risks and safety risks and health risks in the same basket as security risks, and they’re different. One reason why is that most of the really big, interesting security risks are things that have either happened very rarely or indeed have never happened at all, at least not yet. And yet we still have to contemplate them.’ He said there was no actuarial data about the biggest security risks, such as a high-end terror event. As a result, the quantitative risk analyses that seem to work for other types of risk, such as financial risk, can’t be relied on. He noted that while such analysis fails for a major banking crash, it does work for insurance risks such as burglaries, vehicle accidents, house fires or financial fraud.

Dynamic and adaptive

He explained: ‘One of the most fundamental ways in which security risks are not like all other risks, is that they are dynamic and adaptive. They change over time, sometimes extremely rapidly on a time scale measured in hours or days or weeks rather than years or decades. And more importantly still, they’re adaptive, which means that they emanate from intelligent threat actors, from human beings who adapt their behaviour in response to our response to their behaviour. There’s this perpetual feedback loop going on, with delays built into it, which produces some very interesting effects.’

Malevolent creativity

He likened protective security – trying to stop the things we really care about from being disrupted or damaged – to a perpetual arms race: ‘It’s Alice’s Red Queen,’ he said. ‘You’ve got to run to stand still. And if you want to get somewhere, as the Red Queen pointed out, you’ve got to run twice as fast.’ One of the reasons why security risks, particularly big ones like terrorism and hostile state activity, are dynamic and adaptive is that threat actors, being people, are creative. A niche area of academic research known as ‘malevolent creativity’ looks at this: ‘The risks emanate from people who want to create certain effects for their own reasons, and they can be very creative and innovative. And they’ve got the advantage of not being constrained by ethical or legal considerations or regulation or bureaucracy. And in some cases, they’re also much less constrained by risk aversion; they literally don’t care if they get caught or die in the process.’ He gave examples of how terrorists have changed their tactics in response to security measures, by analysing what was being checked, when and where. While the perceived threat was being addressed, the bad actors used their malevolent creativity to circumvent security and attack targets in different ways.

Characteristics of complex systems

He reminded his audience that a complex system is a set of interacting components whose collective behaviour is greater than the sum of its parts. Examples of complex systems include the climate, financial markets, the internet, terrorist networks and nation states: ‘Complex systems have got some interesting characteristics, which you need to get your head around if you’re going to deal with these sorts of risks,’ he said.

  1. They have emergent properties, which cannot be predicted simply by adding together the properties of the component parts. Emergence is one of the primary differences between complex systems and systems that are merely complicated. For example, a mechanical wristwatch is complicated, but brains are complex.
  2. Under certain conditions, complex systems can behave in radically non-linear ways. So, when a large-scale security risk starts to materialise, events may unfold with alarming speed: ‘Quite often there will be a very prolonged period, which might be years, of business-as-usual … or gradual change, and they’re managed using conventional risk management processes. But they’re suddenly interrupted by an abrupt shift to a new state that lies far outside the normal range of variation’. He noted that in his experience, many people in positions of political power are not from an engineering or scientific background and so don’t always instinctively grasp that all interesting and important phenomena have multiple causes and huge interdependencies. And that’s certainly true for big security risks.

Real world examples of complex adaptive systems undergoing sudden non-linear changes include:

  • The collapse of the Soviet Union in 1991.
  • UK fuel crisis of 2000.
  • The great financial crash of 2008.
  • The Arab Spring uprisings of 2011.
  • The London riots of 2011.

He added: ‘These events happened with outcomes that were not predicted in detail, not least because given the nature of the complex systems in which they happened, they were arguably impossible to predict in detail’.

Cascade of unintended consequences

The nature of complex adaptive systems can also lead the actions of threat actors to have unintended consequences, he explained. These typically manifest themselves in the form of cascading effects.

Paul gave two examples of the unintended consequences of disrupting complex systems:

  • WannaCry ransomware. The ransomware was not intended to threaten the NHS on 12 May 2017; the North Korean threat actors who released it wanted money. But WannaCry spread around the world rapidly. It was infectious but crude, only causing a problem if it landed on a computer system that was using old and unpatched software. In the UK, the biggest impact was in the NHS, which declared a major incident within hours. Paul said: ‘It was a near miss and the consequences could have been much worse had it not been for a piece of luck, which was that a lone cybersecurity researcher discovered a flaw in the malware that enabled him to stop it dead in its tracks by activating something called a kill switch.’

  • Fuel strike. Similarly, the small group of lorry drivers who took lawful industrial action in September 2000 did not intend to spark a national fuel crisis. However, within a few days, more than 3,000 petrol stations had closed, and the country was predicted to grind to a halt within 48 hours. Government ministers found that they were largely powerless to resolve the problem. The strikers were persuaded to go back to work because it hadn’t been their intention to cause schools and hospitals to shut down or for supermarkets to run out of food. Paul said: ‘Another good example of how crises tend to work in the real world, which is a cascade of effects resulting in consequences that were foreseeable but not foreseen, and where the triggering action – relatively small, lawful protests in this case – has all sorts of big and unintended consequences.’
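To make the cascade dynamic concrete, here is a minimal illustrative sketch in Python (not from the talk; the loads, capacities and redistribution rule are invented for the illustration): a toy model in which a failed component’s load is shed equally onto the survivors, so a single local failure either stays local or takes the whole system down.

    def cascade(loads, capacity):
        # Fail anything over capacity, shed its load equally onto the
        # survivors, and repeat until the system settles (a toy model).
        failed = set()
        while True:
            over = [i for i, load in enumerate(loads)
                    if i not in failed and load > capacity]
            if not over:
                return failed
            for i in over:
                failed.add(i)
                survivors = [j for j in range(len(loads)) if j not in failed]
                if not survivors:
                    return failed
                for j in survivors:
                    loads[j] += loads[i] / len(survivors)

    # Ten identical components; one is pushed slightly over its limit.
    for capacity in (1.20, 1.11):
        loads = [1.0] * 10
        loads[0] = 1.3  # the small local trigger
        print(f"capacity {capacity}: {len(cascade(loads, capacity))} of 10 failed")

With a 20% capacity margin the trigger stays a one-component outage; trim the margin to 11% and the same trigger fails all ten components. That abrupt, non-linear flip from local incident to systemic collapse is the pattern running through the examples above.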

Resilience

Paul went on to consider how we can become better at avoiding major crises and at coping with the crises we can’t avoid: ‘As a security practitioner, I would say that security is a subset of resilience. Security is one way in which you can make yourself or your business or your country or the world more resilient. It’s one of the ways. The Covid-19 pandemic has shone a glaring light on our collective resilience, or lack of it. Many businesses, organisations and governments around the world discovered that they were not as resilient as they thought they were and not as resilient as they needed to be. The pandemic was foreseeable. We’ve had several pandemics and coronavirus epidemics in the 21st century – swine flu, MERS, SARS – and there were many more in the 20th century, including the Spanish flu pandemic of 1918 to 1920, which killed between 50 and 100 million people, far more than died in World War I. As well as being foreseeable, the Covid-19 pandemic was to a large extent foreseen, in that a respiratory viral pandemic, specifically influenza, had been near the top of the government’s Risk Register since the National Risk Assessment process was invented in the early 2000s. So, if the present crisis was foreseeable and to a large extent foreseen, then why were we not better prepared? And it wasn’t just the UK, of course; most countries were arguably ill prepared. I believe there are several reasons, not just one.’

He illustrated the reasons by standing the problem on its head and sharing, ‘tongue lodged firmly in cheek’, some of the ‘rules of bad resilience’ that he devised last year with Jonathan Evans (Lord Evans of Weardale) for the defence and security think tank the Royal United Services Institute (RUSI). See the 14 rules in full at Bad Resilience, Good Resilience: How (Not) to Make Your Organisation More Resilient and Cope Better When the Next Crisis Hits.

‘Of course, sadly, it isn’t all quite that simple,’ Paul concluded. ‘The risks that materialise to cause the really big crises are dynamic, and many of them are also adaptive, so they change in response to what we do to defend ourselves. And this is certainly true for malicious risks arising from human threat actors such as terrorists and criminals and hackers and hostile foreign states. It’s also true for some natural hazards arising from other kinds of biological organisms, notably infectious diseases such as Covid-19.’

His closing point was, ‘In order to understand and manage complex risks, whether that’s international terrorism or pandemics, we need a basic understanding of how complex adaptive systems behave. Serious crises happen in complex adaptive systems, and anyone attempting to make our world more resilient needs a basic grasp of how they behave.’

Speaker 2: Dr Beverley Adams

‘The climate lens’

Dr Beverley Adams is head of Climate and Catastrophe Resilience at Marsh. See Beverley’s full biography. She explored: building fluency with the language of climate and flooding; insurability and acting; progress at COP 26; scenario modelling; resilience, resistance, and recoverability; how policy becomes a judgement point.

‘I want to give you the opportunity to be slightly dangerous on the subject of climate,’ said Beverley. ‘So, if you’re chatting with friends in the workplace or just watching television, you may think back to some of the things that I’m going to show you and think “now I really understand what’s going on behind the scenes with climate”.’ She also wanted to raise awareness about flooding, which is a considerable risk in this country. Working with the government’s flood resilience roundtable, she looks at every building she enters and thinks, ‘if we’re having to live with water and if it floods on Friday, how can I make sure it’s back up and running on Monday?’

Beverley’s work encompasses the interaction between the global climate system, the planet and society: ‘One of the things that we’ve had to really start to drive – I’m in the insurance and risk management sector – is the idea of insurability and acting. If you don’t act, you will, over time, have problems. You will, over time, not be able to get loans and insurance. Your suppliers will not want to work with you because you are a dodgy business. There are all these consequences that we’re having to learn how to manage now and that comes into this world of nested dependencies. I’m going to explain how it’s hitting people and how it’s also hitting businesses.’

Climate language

She started by testing the audience’s fluency in climate language and went on to describe the following climate-related acronyms:

  • TCFD (Task Force on Climate-related Financial Disclosures). A framework to help public companies and other organisations disclose climate-related risks and opportunities.
  • TNFD (Taskforce on Nature-related Financial Disclosures). A risk management and disclosure framework for organisations to report and act on nature-related risks.
  • SBTi (Science Based Targets initiative). Drives ambitious climate action in the private sector by enabling companies to set science-based emissions reduction targets.
  • UN SDGs (UN Sustainable Development Goals). A framework of 17 goals intended to be a blueprint for achieving a better and more sustainable future for all.
  • PFR (Property Flood Resilience). Describes measures intended to reduce the risk of flood damage to properties and to help speed up recovery after a flood.

Progress at COP 26

Marsh was part of the business innovation representation at COP 26. Beverley said: ‘This was a key moment where we were talking with other leaders and saying, “Where have you got to? What are you doing next? What does good look like? Where are you struggling?”. And it was a real first-time coming together, because of COVID, of all the people who share the same desire to move the world forward.’ While the focus until now has been on measuring carbon emissions, what needs to become central is what Beverley called “transition risk”, that is, how businesses transition to be net zero and respond to the world becoming warmer.

Scenario modelling

Scenario modelling is the key element in helping businesses adapt themselves to undertake TCFD reporting: ‘Some of you will no doubt have come across the idea of modelling for earthquakes and windstorms and floods – my job was catastrophe modelling before I moved into climate modelling,’ said Beverley. ‘This is in my wheelhouse. Modelling can help you understand in theory what might happen. The theory is fine… the key part is how you manage that. That’s when resilience comes into play… So next time you see it on TV, think that’s why the big companies are doing what they’re doing, because they’re now being held accountable. And the way in which they’re having to respond is being dictated to us by some authorities, like the Bank of England, the Prudential Regulation Authority (PRA) and the Financial Conduct Authority.’

She shared a set of results from a company with 198 locations, 32 of which had significant climate risk, coming from surface water, river water, coastal effects, and soil. The change in risk over time was also shown. Company leaders need to take a new view and start planning what they’re going to do: ‘You start off with the modelling; it tells you which sites have risk,’ she said. ‘We go into doing surveys. We then think about a flood plan… how you make sure you’re insurable… how you monitor, day in, day out. I have 24/7 monitoring for flooding around the UK. I have to because it’s my job. But increasingly, this is where we’re taking things; we don’t just react, we’re proactive with all of this.’ She went on: ‘This is the journey: know what’s at risk and then let’s think about acting on it. It’s been hard to act on it in the past, without guidelines, but this is one of the things that I feel really proud of again, that as a country we’re driving resilience.’
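As a minimal illustration of that triage step, here is a Python sketch (the site names, perils, scores and threshold are invented for illustration; this is not Marsh data or model output): read modelled per-peril scores across each location and flag the sites that need a survey and a flood plan.

    # Hypothetical modelled flood-risk scores per site (0 = none, 1 = severe).
    sites = [
        {"name": "Plant A",  "surface": 0.7, "river": 0.2, "coastal": 0.0},
        {"name": "Depot B",  "surface": 0.1, "river": 0.6, "coastal": 0.1},
        {"name": "Office C", "surface": 0.2, "river": 0.1, "coastal": 0.0},
    ]
    TOLERANCE = 0.5  # illustrative cut-off for 'significant' risk

    def significant_perils(site):
        # Return the perils whose modelled score breaches the tolerance.
        return [peril for peril, score in site.items()
                if peril != "name" and score >= TOLERANCE]

    for site in sites:
        perils = significant_perils(site)
        if perils:
            print(f"{site['name']}: survey and flood plan needed ({', '.join(perils)})")

Run over 198 locations instead of three, a screen like this yields the kind of ‘32 sites with significant risk’ shortlist that the surveys, flood plans and monitoring then act on.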

Resilience, resistance, and recoverability

‘How we do resilience is very much etched into engineering thinking,’ Beverley said. ‘I like to call it the three Rs – resilience, resistance and recoverability. So, for any given home, I need to be thinking about whether the right thing to do is to hold the water back or, if the water does enter, how to get rid of it quickly in a hygienic manner so that my business starts again, or my home is safe again.’ She went on: ‘We don’t always have the luxury of being able to hold water out. It’s really, really important we get comfortable with living with water and thinking about how we minimise that time to recovery.’

Environmental, Social and Governance (ESG)

‘Every business will have to have ESG as part of their DNA. They’ll have an ESG statement on their website … they’ll have a clear strategy for how they’re going to manage all of this going forward. And that’s something that is becoming a judgement point. An investor looking at a business will say, “What’s your ESG strategy? Is it persuading me that you’re a good bet for the next 5 to 10 years?”. Businesses are now being judged negatively if they don’t have an ESG strategy,’ she said. ‘It’s why you hear things like “greenwashing”, because people want to look good, but there’s no substance behind it … It’s knowing what your top 10 risks are and making sure that you have a very clear action plan on all of them.’ She concluded: ‘If you don’t assess your climate risk, someone else is going to assess it for you.’ See: https://www.marsh.com/uk/services/risk-consulting/insights/esg-meets-erm-top-tips-for-2021-22-esg-journey.html

Speaker 3: Dr Marie-Laure Hicks

Strengthening the UK’s resilience: perspectives from engineering

Dr Marie-Laure Hicks is senior policy advisor for research and innovation at the Royal Academy of Engineering (RAEng). See Marie-Laure’s full biography. She explored: the value of the engineering perspective; an experimental workshop; understanding and mapping capabilities; recommendations to government; and risk assessment and resilience thinking.

‘We’ve discovered what it’s like to live through an emergency. And our question is, how can an engineering perspective help?’ Marie-Laure said. ‘Taking it back to basics, engineers think about complex systems. They risk assess. They look at how the risk might propagate across that system. They consider what that means for safety, and they work with a collaborative, problem solving approach.’ COVID began a conversation across the RAEng networks about how engineers can help the government improve the resilience of the UK: ‘We thought critical capabilities might be part of that answer. We wanted to think about those things that you need to put together; what are the systems of capabilities that come together when you’re trying to respond to an emergency?’ The outcome was RAEng’s publication of May 2021: Critical capabilities: strengthening UK resilience, which made recommendations for government.

Marie-Laure explained how their journey began by defining and exploring six groupings of capabilities that are intrinsically independent yet also linked to each other. These were:

  • National assets.
  • Resources.
  • Skills and labour.
  • Research and innovation.
  • Industrial capability.
  • Networks and coordination.

She explained: ‘These groups are specifically designed to be broad, to make sure we captured everything. We wanted to push those boundaries beyond the limits of what government often does, which is just think about the public sector. By focussing on networks and coordination, we have been able to explore how all of the underpinning capabilities come together, to identify the challenges and enablers of effective emergency response.’

Five cross-cutting components of networks and coordination were identified as:

  • Agile networks for rapid mobilisation.
  • Permeating the intersection between the public and private sector.
  • Expertise and advice into governments.
  • Local, national or international coordination.
  • Facilitating communication and engagement. 

Experimental workshop

Four case studies were examined in an experimental workshop session. Though all had an engineering component, they were all very different:

  • The Icelandic volcanic ash cloud in 2010, which disrupted international air travel, especially in the UK and Northern Europe.
  • The UK’s response to the 2011 Fukushima nuclear accident, covering how it might affect UK nationals in Japan and its impact on the UK nuclear industry.
  • The Lancaster flooding in 2015, which led to electricity loss for up to a week. This was a complex example at a local level because it affected just one area of the UK and highlighted chronic vulnerability to loss of power. An RAEng report, Living Without Electricity, tells the story of that week without power: https://www.raeng.org.uk/publications/reports/living-without-electricity
  • The WannaCry ransomware incident.

Marie-Laure said, ‘We brought people together who’d been involved in each of these crises individually or had expertise in that area. We had almost a group therapy session, asking what it was like at the time and how it worked. These were people who might have worked together on the day or who’d been in different components of the response. We tried to … explore how the different parts of the response had connected to each other. What had worked, what had been the challenges, where it had been effective and why? We also challenged them with the scenario question of what if this happened again, now?’ She went on: ‘We had some cross-cutting lessons, and it’s surprising how four different crises can very much bring you to the same conclusion.’

Understanding and mapping capabilities

Then came the process of understanding and mapping capabilities, whether at a national, local, or organisational level, asking what they provided and identifying any gaps or weak relationships to remedy. ‘Building and maintaining those networks is crucial to making sure things happen. We found that when those networks weren’t in place or weren’t strong enough initially, that instantly impedes the response to an emergency. It can be particularly challenging if you’ve got a lot of staff churn, or a new organisation isn’t embedded.’
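In the spirit of that mapping exercise, a minimal Python sketch (the capabilities and organisations are invented for illustration, not drawn from the RAEng work) shows the basic mechanic: record who can provide each capability, then read off the gaps and the single points of failure.

    # Hypothetical capability map: capability -> organisations able to provide it.
    providers = {
        "emergency power":       ["GridCo", "Army engineers"],
        "flood pumping":         ["Fire service"],
        "public communications": ["Local authority", "Broadcaster"],
        "legacy IT patching":    [],
    }

    for capability, orgs in providers.items():
        if not orgs:
            print(f"GAP: no known provider for '{capability}'")
        elif len(orgs) == 1:
            print(f"FRAGILE: '{capability}' depends on {orgs[0]} alone")

Even at this toy scale, the output is essentially the audit described below: knowing whom you can call on before the emergency hits, and strengthening the relationships the map shows to be thin.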

The exercise highlighted the importance of:

  • Practising responses and building relationships.
  • Increasing awareness of what capabilities exist and what might be needed to be better prepared.
  • Putting people in the situation of having to make these decisions under high levels of uncertainty.
  • Ensuring clear ownership of resilience at every level.
  • Knowing who’s doing what.
  • Using resilience by design, that is, improving preparedness of organisations, processes, infrastructure, and facilities, and then agility.
  • Keeping pace with increasing digital interconnectedness and evolving threats and hazards.

She said: ‘It’s building in the awareness that you can’t just follow the plan. You have to write the new one every time it happens’.

In these four case studies, the RAEng found that government had often:

  • Drawn the boundary of the system within which they needed to respond in the wrong place.
  • Not involved the right people early enough.
  • Not gone far enough to seek those capabilities to deliver an efficient response at the right time.

The report makes three recommendations:

  1. Government should embed an engineering systems approach. Marie-Laure said, ‘At the RAEng, we like taking that system-wide perspective: understanding the users; understanding how they connect and what their needs might be; and how they can be enabled to do what they need to do. And looking very much across the public, private and also third sector stakeholders to bring that whole-society element to it.’
  2. Government should carry out an audit of existing public, private and third sector capabilities. They should know who they can call on when they need them and identify those convening bodies against the critical capability groups. There were some good examples from the COVID pandemic of organisations that were able to cascade across their networks and make that process much more effective.
  3. RAEng put out an offer to work with government, to think about how this approach can be developed into a practical tool.

Risk assessment and resilience thinking

Marie-Laure has been working on a follow-up to ‘Critical capabilities’, which should be published soon: ‘We started thinking about risk assessment and resilience thinking; how can we learn from risk assessment methodologies used in industry and academia to bring best practice into government risk assessment and resilience thinking?’ Part of it is the benefit of bringing stakeholders together, to understand interdependencies and explore interactions between chronic and acute risks. Understanding uncertainty is invaluable and can enable low-regrets and strategic decision making. ‘It’s appreciating that the expertise you need might be academic, but it might also be the camper at the end of the road who knows how to light the gas stove when there’s no power. It’s pulling all of those different things together to develop decision-making capability within government, as well as risk assessment, considering risk velocity and how it changes.’ As an example, she gave Thames Estuary 2100, whose premise is that ‘London cannot be left to flood when climate change becomes dramatically bad’. This raises questions such as: what are the trigger points? What do we need to monitor? How do we make sure those mitigations can be put in place and we can be prepared?

‘And then finally, there’s resilience thinking versus risk management,’ she said. ‘This is something we come across more in the academic sphere, but it’s starting to be put into practice in cities with the 100 Resilient Cities project. It’s the idea of taking a more risk-agnostic approach. It’s going along the whole chain and understanding how to be agile, how to adapt and how to build in resilience, rather than respond to and manage particular risks as they arise.’

Questions to the panel

Question 1: How can we communicate risk to the public?

What sorts of discussions should be had with the public in terms of communicating some of these risk issues and their complexities? The only thing I’ve seen over the last 21 months that communicated trade-off issues was a horizontal bar chart presented by Jonathan Van-Tam (England’s then deputy chief medical officer), showing the trade-off of risk and reward for vaccines for different age groups. That was effective risk communication, but I haven’t seen it elsewhere. What’s the best way of making the public aware of these complex issues?

Paul: ‘There’s no easy way of doing it. There’s a bit of a history with national security issues like terrorism – sadly – of the government having to communicate to the public the existence of risks, because bad things have happened and there’s a distinct possibility that more bad things will happen in the future. Over the years we’ve had various sorts of government messaging on this – “be alert, not alarmed” and so on. It’s a tricky art. I’d go back to the frequent failure to take account of basic psychology and to think carefully about how messages are perceived and acted on by people, rather than assuming that if you simply impart information, you will somehow change things. These days, though, government does have a behavioural insights team and so behavioural sciences are increasingly incorporated … Of course, different people will hear the same message and take very different interpretations of it, so it’s hard for governments to do this. I think they’ve got gradually and progressively better at it over the last 30 years though.’

Beverley: ‘On the climate front, I think that they’re using businesses as the main channel to try and start softly driving awareness. I used to live in California in an earthquake zone, and so everybody knew what you needed to do if the earth shook. I don’t think it’s the same for flooding. For climate, you’re seeing it and it’s subliminally spreading out there on TV, but I expect that there’ll be much more comms to come around things like flood resilience as major focus areas.’

Marie-Laure: ‘From a policy perspective, there’s a lot to learn from science policy and science communication and there’s a huge field of study around how various science issues have been communicated well or really poorly. There are basic things about clarity of message and not taking people for being stupid. We all talk about PCR tests and people understand that there are different types of vaccine technologies now; there are ways to take the public on a journey with you and I think there were some very, very effective communicators during the pandemic. There’s the challenge of having that skill set available when the emergency hits – people like JVT [Jonathan Van-Tam] who can come up with football analogies that even I understand. It has to be built into your crisis management and your emergency comms. You can’t take out the political dimension; we live in a democracy and this is how this works. But there’ll be lessons to learn from this from a science communication point of view, for sure.’

Paul: ‘Building on that, I think the biggest challenge for experts is actually communicating with ministers before you get to the public – getting the right decisions made at the right time. Conveying some of this stuff to the people who’ve actually got to make the decision can be really hard. They will see the world in a slightly different way and will have different agendas.’

Question 2: How can we learn from good examples?

In terms of the security field, how much emphasis is there on learning from where things have gone very well, where people have responded and prevented things from occurring? The questioner gave the example of the Fukushima Daini plant, which, largely due to the resilient response of site superintendent Naohiro Masuda, survived the 2011 earthquake and tsunami, unlike its sister plant, Fukushima Daiichi.

Paul: ‘The most important metric for security is that nothing happens. And of course because of human psychology, we don’t notice when it works well; we really focus when things go wrong… So, testing and exercising is really important, not just in government, but commercially. Classically, people claim to have got brilliant cybersecurity in their companies and it turns out they’ve got some policies written down on paper… Unless you test and exercise, you won’t discover that there are gaps between what you think is happening and what’s actually happening. Hence my railing against fossilised risk registers. Risk registers have their role, but when they become ritualistic, they give false assurance. It’s not until something happens that people realise that it wasn’t as good as they thought it was – as we discovered with the pandemic… The history of security is a cycle of bursts of anxious activity punctuated by long periods of complacency. And how you break out of that cycle is one of the real challenges, and I think we’ve just got to keep talking about stuff.’

Marie-Laure: ‘We’ve brought in perspectives from different industries on what their best practice is. And there’s actually so little sharing across different sectors – you can look at cybersecurity or the chemical industry or some academic who studies earthquakes in Guatemala, and you’ll pull out lots of different things. Bringing them together, you suddenly build this picture of bits that are helpful to others. There’s so much to learn from… organisations that have become very resilient by building in a practice that might seem really obvious to them in their day-to-day, but that isn’t to everyone else.’

Beverley: ‘We call it the resilience playbook in my world. You take a line of a spreadsheet; you’ve got 10 locations you read across and see that they’ve got flood risk, wind risk, all these other risks, and then you go: “Right, what’s my resilience playbook for dealing with wind? What’s my resilience playbook for dealing with heat?” That’s where the RAE paper becomes really important as the foundations to make sure that our playbooks have the right structures.’
