Earlier in 2020, with the first wave of the Covid-19 coronavirus pandemic raging, the security community was quick to warn of the risk to healthcare organisations from cyber criminals, and they were right to do so.
The risk was very real, and had an incident similar to WannaCry befallen the health service at the height of the first wave of the pandemic, the results for the NHS – where IT failure carries the risk of death – could have been catastrophic.
In a speech delivered in September 2020, the outgoing CEO of the UK’s National Cyber Security Centre (NCSC), Ciaran Martin, described the prospect of a substantial attack – particularly a ransomware attack – on the NHS at the height of the pandemic in the spring as something that had caused many sleepless nights.
Mercifully, this scenario never came to pass, but whether that is through a combination of good security planning and practice in the wake of WannaCry, sheer dumb luck, or the apparent ‘benevolence’ of cyber criminals, it is too early to say, or so says Sam Shah, former director of digital transformation at NHSX.
“I don’t necessarily think it was all in the planning and preparation, but it’s probably to some extent decisions made by those that were thinking about this that meant cyber criminals didn’t go for hospitals and healthcare organisations,” he says, reflecting on the past months.
“I do think it’s important that we recognise that risks and threats still exist, and for that reason we do need to continue the work to prevent this happening in the future, because it could happen again.”
A tale of improvement
Since the disastrous WannaCry attacks of 2017, the NHS has been pouring resources into cyber security, and by many measures this has been a success. Statistics obtained by Comparitech earlier in 2020 under the Freedom of Information Act (FoI), for example, showed that the incidence of ransomware attacks against the NHS had fallen dramatically over the past couple of years.
“A couple of things have happened in relation to cyber,” Shah tells Computer Weekly in an interview conducted shortly after he spoke at CybSafe’s PeepSec 2020 event. “The first is that around the time NHSX was forming, we had, of course, the aftermath of WannaCry. There’s a recognition of what can happen when something like that affects the public sector, so I’d certainly say awareness around the importance of cyber security was elevated and raised at that point.”
The continuous drip feed of cyber security incidents outside the NHS also had an impact in terms of building greater public understanding of the risk landscape.
“Culturally, there’s been a shift, both in society, among clinicians, and among the digital profession around what security risks are and why they’re important,” says Shah.
These risks are particularly pertinent in healthcare for one obvious reason: getting security wrong could result in fatalities. Indeed, since the conversation with Shah, this may now tragically have happened at a German hospital.
“You might think this seems extreme, but given we now run so much of our medical technology on infrastructure that is connected and uses the internet, it is all exposed and at risk from the very same threats that could affect other parts of the world or the system,” says Shah.
“The NHS and those connected to it have certainly taken cyber security much more seriously. Culturally, society probably has an expectation that we take it more seriously. Now there’s clearly a lot of work still to do and there’s a lot more that needs to happen around raising the profile of it, why it’s important and why it’s important to clinical safety, but it’s better than it was.”
Moving on up
Since he was last interviewed by Computer Weekly in May 2019, shortly before the formal establishment of NHSX, Shah has moved on from the day-to-day minutiae of NHS technology to roles with wider implications for healthcare.
He first undertook a brief stint at the Department for International Trade, but has now set up the Faculty for Future Health alongside Ulster University’s College of Medicine and Dentistry, with the objective of effecting digital transformation in the wider healthcare sector, with an eye on cyber security.
“Hopefully, what this means is that we’re going to create more people in health systems that have a better understanding of the cultural changes, as well as the technical changes, that are needed to deal with this emerging set of threats,” he says.
“In the same way that people are now socially distancing, washing their hands in a different way, behaving in a different way, the same sort of cultural shift is needed in relation to cyber.”
Risk and responsibility
This cultural shift will require change at the highest levels of NHS organisations and all the way down to doctors and nurses on the frontlines.
This will be further complicated by the question of exactly who is responsible for security. “In other sectors, there is someone who has the security officer role, but often in healthcare that job, as well as that of technology and digital, is given to the same person,” explains Shah.
He argues that as the NHS becomes more technology-focused, that simply cannot continue to be the case, particularly in larger healthcare organisations, which need a dedicated security lead with the ear of the board.
He says that before one can begin improving security on the frontline of a healthcare organisation, one must first ensure the board is taking the risk seriously, and that the person talking to the board isn’t simply the IT decision-maker, but a true security adviser.
“Historically, especially in the NHS, CIOs, CDOs, CTOs or anyone digital wasn’t typically a board member, and I’m not saying they necessarily have to be, but they certainly need access to the main decision-makers so that they can both advise them and seek the right decision,” he says.
Once this is achieved, the next step is to evaluate both the resources and the risk that exist within the organisation to work out what the security gaps are, followed by a prioritisation exercise – all this done in a way that assesses and takes into account all the relevant risks.
These risks are manifold. For example, there are those that come from the presence of third-party IT suppliers within the NHS, which need continuous assessment as the volume of external suppliers grows. Other sources of risk arise from the increased volume of endpoints as the huge back-end administrative machinery that powers the NHS shifts – like other office workers have done – to a culture of semi-permanent remote working. This, he adds, comes on top of the explosive growth in connected medical devices.
“Those risks are often acknowledged, but they’re not quantified. What’s important is that they’re quantified in some way because that then suddenly will allow them to be compared with other risks in the organisation to determine how seriously they are taken,” says Shah.
“As a starting point this must be taken seriously at a board level in every organisation, and trusts and other organisations should be measured on their ability to manage this type of risk. Now that also requires the healthcare regulators to change their approach too.”
Security without shame
Moving down the chain, Shah calls out a number of areas where the NHS could continue to improve its security culture – most critically in terms of ongoing security training needed for clinical staff, which often slows down or stops altogether during periods of crisis, such as the pandemic.
While understanding of security in the NHS has clearly improved, Shah reckons this is likely limited to people he describes as “digitally motivated”, younger staff who are more likely to be tech-savvy than, for example, a consultant surgeon who qualified decades ago and who may be brilliant in the operating theatre, but struggles to turn on their PC.
“There are a lot of people who probably don’t realise why or how security is important, and this comes back to the cultural piece,” says Shah. “Often I would get asked, ‘Can I use this public messaging system on this network?’, and I had to explain that it’s not just the public messaging system, it’s everything else that goes with it – what it’s connected to, what else could leak in or leak out and what else comes with that.
“It’s those things that a lot of people don’t understand, and in some ways I don’t expect them to, because why should they? They’re not experts. But that does mean that the NHS needs that expertise and that advice because that would improve the safety of the system.”
It would be easy to argue that security awareness training within the NHS should start from a fundamental message – that getting it wrong can be fatal – but that’s not necessarily a good idea.
“You don’t want to scare people and you don’t want people to feel like they shouldn’t use technology because of that threat,” says Shah. “But it’s about helping raise awareness so they know the sorts of things they need to ask, the questions they need to ask, the philosophy they need to have and the change they need to be seeking when adopting technology.”
This is why risk assessment is so important within the NHS, to empower people to use the digital tools they need to get the job done, but in such a way that those tools are trusted from the outset.
To this end, clinicians also need to be encouraged to do “security without shame”, to understand the risks and how to report potential incidents while also accounting for the prevalence of stress and burnout within the NHS, which may lead to a moment’s unintentional thoughtlessness by a frazzled doctor.
“If they click on something and something bad happens, often it’s by virtue of them trying to just do their job. So we do need to create a more open culture, one where people can seek help and advice, knowing that they’re not going to be treated any different in any way for seeking that advice, and that we change that and go from a blame culture to one that’s about reducing risk, improving knowledge and ultimately improving safety,” says Shah.