BoardProspects Member Wayne Sadin Weighs In on the Ransomware Attacks and What Boards Should be Talking About
WannaCry Ransomware Debacle: Four Important Conversations For Boards
Fact: A few days ago, a ransomware attack locked up over 200,000 computers and networks in 150 countries.
Even if another wave of disruption doesn't hit when the new work week starts, the cost--in ransom payments, in data restoration, in data that's forever lost, and perhaps in human pain and suffering (considering that hospitals seem to have been hit especially hard)--is staggering.
It will take weeks to uncover the details, and I'll leave that discussion to cybersecurity experts. I'm here as a CIO and Board member to talk about the bigger issues.
The message for Boards and CEOs is, "You were lucky...this time. Wake up before something awful happens."
In a nutshell, WannaCry took advantage of a flaw in the Windows operating system that powers computers throughout your network. The flaw had existed for over a decade, but its existence was only recently disclosed--and Microsoft fixed it back in March. Let me repeat that: FIXED IN MARCH. In addition to the Microsoft fix, security hardware and software firms quickly updated their products to block this malicious software.
Wait, what? This known flaw has been fixed for two months? Then why were so many systems affected? Two reasons:
- Old, unsupported software
- Lax security risk mitigation policies
Let's talk about each of these a bit more, then talk about why this debacle was a 'good thing' at the Board level.
Old, unsupported versions of Windows
Like any corporate asset, computer systems have a useful life. Hardware breaks and parts become scarce, so you upgrade to new servers (just like a forklift has a useful life, so does a server). But software also 'wears out,' in the sense that the software firm stops 'supporting' old versions after a number of years. As a non-technical executive, your thinking might be "So what? Software doesn't really wear out. If it's still running, why spend the money?" Well, you just got the reason: unsupported software stops receiving fixes and becomes vulnerable to newly uncovered flaws like the one WannaCry exploited.
Microsoft stopped supporting Windows XP in April 2014, Windows Server 2003 in July 2015, and Windows Vista in April 2017. (Strictly speaking, those are the 'extended support' end dates; mainstream support, which covers fixes beyond security bugs, ended several years earlier.) Using software products beyond their extended support dates means you're rolling the dice--and for many firms, the dice came up 'snake eyes' the other day.
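To make that exposure concrete, here is a minimal sketch (in Python, with hypothetical host names) of the kind of check an IT team might run against its asset inventory, using the extended-support dates cited above:

```python
from datetime import date

# End-of-extended-support dates cited above (Microsoft lifecycle policy)
END_OF_SUPPORT = {
    "Windows XP": date(2014, 4, 8),
    "Windows Server 2003": date(2015, 7, 14),
    "Windows Vista": date(2017, 4, 11),
}

def unsupported_hosts(inventory, today):
    """Return (host, os) pairs whose OS is past its extended-support date."""
    return [
        (host, os_name)
        for host, os_name in inventory
        if os_name in END_OF_SUPPORT and today >= END_OF_SUPPORT[os_name]
    ]

# Hypothetical asset inventory, for illustration only
inventory = [
    ("hr-laptop-01", "Windows XP"),
    ("file-srv-02", "Windows Server 2003"),
    ("cfo-laptop", "Windows 10"),
]

print(unsupported_hosts(inventory, date(2017, 5, 15)))
# → [('hr-laptop-01', 'Windows XP'), ('file-srv-02', 'Windows Server 2003')]
```

If a report like this comes back non-empty, the Board-level question isn't "which machines," but "why is unsupported software still on our network, and what would retiring it cost versus the risk it carries?"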
By the way, obsolete Windows versions are part of a bigger problem called 'Technical Debt'--the scary thought that decades of under-investment in technology exposes you to undisclosed risks like this one--and more importantly, stymies Digital Transformation projects because of inflexible, brittle IT systems.
Lax security risk mitigation policies
Say your firm spent the money and upgraded to newer software like Windows 7, 8.1, or 10; and Windows Server 2008 or newer. And you recall the CIO or CISO telling you they were running all sorts of security tools to protect your network and all the computers on it. How can it be that you still suffered damage? In this case, it's possible your IT team (or outside vendor) was asleep at the wheel when it came to installing software fixes from Microsoft and those other security vendors.
In a way, software security fixes are similar to safety recalls announced for your automobile. For some number of years after a car is built, the manufacturer identifies and remediates certain kinds of risks that might affect your safety. All you have to do is bring the car in for service...ah, get it? You have to take time to schedule a visit, arrange alternate transportation, and then go back and get your car--which generally drives exactly like it did before...so what did the dealer really do for you? Possibly saved your life. Possibly. So it goes with software fixes. They take time and a certain technical skill (unlike bringing your car in, it's your IT folks doing the work); they disrupt operations; and they're not risk-free (you never know when some 'simple' fix will break a critical piece of software your team or your customers depend on). And after all, how often do these fixes actually prevent a problem? (Or so the thinking goes.)
In late March I saw a talk by Bret Arsenault, Microsoft's CISO. He said that 90% of cybersecurity failures are the result of 'bad hygiene.' Security hygiene means enforcing password policies, keeping accurate inventories of critical devices, patching systems in a timely manner, and so on. At organizations running newer gear, poor hygiene is likely a big part of why they were affected.
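Timely patching, one pillar of that hygiene, can be expressed as a measurable policy rather than a vague aspiration. A minimal sketch, assuming a hypothetical 30-day SLA for critical fixes (MS17-010, released March 14, 2017, is the Microsoft fix whose absence WannaCry exploited; the second entry is illustrative):

```python
from datetime import date

PATCH_SLA_DAYS = 30  # hypothetical policy: critical fixes applied within 30 days

def overdue_patches(patches, today):
    """Flag critical patches still uninstalled past the SLA window."""
    return [
        p["id"]
        for p in patches
        if not p["installed"] and (today - p["released"]).days > PATCH_SLA_DAYS
    ]

patches = [
    # MS17-010 is the March fix discussed above
    {"id": "MS17-010", "released": date(2017, 3, 14), "installed": False},
    {"id": "MS17-006", "released": date(2017, 3, 14), "installed": True},
]

print(overdue_patches(patches, date(2017, 5, 12)))  # the day WannaCry broke out
# → ['MS17-010']
```

A metric like "percentage of critical patches applied within SLA" is something a Board can actually ask for each quarter--and on May 12, any firm with MS17-010 on its overdue list had been out of compliance for roughly a month.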
Why is this debacle a good thing for the Board?
- For most firms, the damage wasn't very severe--think Target, or Anthem, or Sony by comparison. A $300 ransom per machine, or the cost to restore data from backup, is quite low.
- You weren't alone. Reputational risk is diffused when a disaster strikes lots of firms. Think of this like a hurricane or an earthquake. You got hit, but your competitors likely did, too.
- It triggers four Board-level conversations about technology:
- What's our 'Technical Debt' and what are we going to do about it? The conversation will obviously start with 'what IT risks don't we properly understand?' but it should extend to 'what Digital Transformation options are closed to us due to our brittle technology?'
- What level of security hygiene do we need in order to mitigate risks cost-effectively? And this should lead to a larger discussion around formalizing the notion of 'IT inherent risk' and mitigation costs vs. acceptable residual risk.
- Is our Corporate Disaster Plan evolving to handle new threats like WannaCry, especially as IT becomes embedded into products and the customer experience in addition to traditional internal processes?
- Why don't we have a Digital Director--or 'Qualified Technology Expert' (QTE), as Russell Reynolds calls it--who can ask these questions and understand the answers?
WannaCry will cost lots of firms some money, but it should leave them operating. Imagine if this had been a variant of Stuxnet, malware that can destroy expensive machinery. Or an attack along the lines Ted Koppel describes in 'Lights Out,' his book about America's critical infrastructure?
Let's not ignore last week's attack once we've cleaned up the damage. Let's talk about technology risks--and opportunities--while the 'might have been' is still fresh in our minds.