Unintended consequences

The biggest global IT outage in history, created by CrowdStrike, impacted friend and foe without discrimination. There are a metric ton of think pieces, but I was most intrigued by this nearly 20-year-old snippet from Silicon, currently hosted on the Web Archive (emphasis mine):

Microsoft has announced it will give security software makers technology to access the kernel of 64-bit versions of Vista for security-monitoring purposes. But its security rivals remain as yet unconvinced.

Redmond also said it will make it possible for security companies to disable certain parts of the Windows Security Center in Vista when a third-party security console is installed.

Microsoft made both changes in response to antitrust concerns from the European Commission. Led by Symantec, the world's largest antivirus software maker, security companies had publicly criticised Microsoft over both Vista features and also talked to European competition officials about their gripes.

As the saying goes, hindsight is 20/20!

The truth is that a whole lot of poorly considered decisions had to follow this fateful capitulation to end up with this level of failure. This moment should fill your entire team with a sense of responsibility, and even fear. Every member of your team, and the processes you align to, should be geared toward preventing this kind of incident. Consider: what testing process should have caught this? Who should have reviewed the code? Was the deployment staged? How do you manage a global rollback?
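To make those process questions concrete, here is a minimal sketch of a staged ("ring") rollout gate, the kind of safeguard that stops a bad update from reaching every machine at once. Everything here is hypothetical for illustration (the ring names and the `deploy_to`, `health_check`, and `rollback` callables are assumptions, not anyone's real deployment API):

```python
# Hypothetical staged-rollout gate: deploy ring by ring, and halt the
# cascade and revert every deployed ring as soon as a health check fails.

RINGS = ["canary", "early-adopters", "broad", "global"]

def staged_rollout(deploy_to, health_check, rollback):
    """Deploy to each ring in order; on the first failing health check,
    roll back every ring deployed so far and report where we halted."""
    deployed = []
    for ring in RINGS:
        deploy_to(ring)
        deployed.append(ring)
        if not health_check(ring):
            # One failing ring stops the rollout and reverts prior rings.
            for r in reversed(deployed):
                rollback(r)
            return f"halted at {ring}, rolled back {len(deployed)} ring(s)"
    return "rollout complete"
```

The point of the sketch is the shape, not the details: a global rollback is only cheap if the rollout was incremental in the first place, so a failure in the canary ring never touches the broad or global rings at all.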

Software development is about abstractions, layered upon abstraction, and so assumptions should always be handled with care. You can only trust what you produce if you can trust the foundation you build on. That trust is built in drops and lost in buckets. Security is necessary in every part of your solutions, and so, by definition, the entire exercise is a team sport. There is never just a single line of code to blame.

