Subject: Re: A meditation on the Antithesis of the VMS Ethos
From: craigberry (at) *nospam* nospam.mac.com (Craig A. Berry)
Newsgroups: comp.os.vms
Date: 21 Jul 2024, 13:55:18
Organization: A noiseless patient Spider
Message-ID: <v7j0fo$3k1u$1@dont-email.me>
References: 1
User-Agent: Mozilla Thunderbird
On 7/21/24 4:41 AM, Subcommandante XDelta wrote:
> The problem here is that Crowdstrike pushed out an evidently broken
> kernel driver that locked whatever system that installed it in a
> permanent boot loop. The system would start loading Windows, encounter
> a fatal error, and reboot. And reboot. Again and again. It, in
> essence, rendered those machines useless.
It was not a kernel driver. It was a bad configuration file that
normally gets updated several times a day:
https://www.crowdstrike.com/blog/falcon-update-for-windows-hosts-technical-details/

The bad file was only in the wild for about an hour and a half. Folks
in the US who powered off Thursday evening and didn't get up too early
Friday would've been fine. Of course Europe was well into their work
day, and a lot of computers stay on overnight.
The boot loop may or may not be permanent -- lots of systems have
eventually managed to get the corrected file by doing nothing other than
repeated reboots. No, that doesn't always work.
The update was "designed to target newly observed, malicious named pipes
being used by common C2 frameworks in cyberattacks."
Most likely what makes CrowdStrike popular is that they are continuously
updating countermeasures as threats are observed, but that flies in the
face of normal deployment practices where you don't bet the farm on a
single update that affects all systems all at once. For example, in
Microsoft Azure, you can set up redundancy for your PaaS and SaaS
offerings so that if an update breaks all the servers in one data
center, your services are still up and running in another. Most
enterprises will have similar planning for private data centers.
CrowdStrike thought updating the entire world in an instant was a good
idea. While no one wants to sit there vulnerable to a known threat for
any length of time, I suspect that idea will get revisited. If they had
simply staggered the update over a few hours, the catastrophe would have
been much smaller. Customers will likely be asking for more control
over when they get updates, and, for example, wanting to set up
different update channels for servers and PCs.
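The staggered-rollout idea above can be sketched in code. This is a minimal, hypothetical illustration (the function and parameter names are mine, not CrowdStrike's or Azure's): updates go out in expanding "rings" of hosts, with a health check between rings, so a bad update halts after hitting a small canary group instead of the whole fleet.

```python
# Hypothetical sketch of a staged ("ring") rollout. All names are
# illustrative; this is not any vendor's actual deployment mechanism.
import time


def staged_rollout(hosts, ring_sizes, push_update, healthy, pause_s=0):
    """Push an update ring by ring, halting if any ring fails.

    hosts       -- list of host identifiers
    ring_sizes  -- sizes of the early rings, e.g. [1, 10, 100]; any
                   remaining hosts form the final ring
    push_update -- callable(host): applies the update to one host
    healthy     -- callable(host) -> bool: post-update health check
    pause_s     -- bake time between rings before widening the blast radius

    Returns (updated_hosts, success).
    """
    # Slice the fleet into rings of increasing size.
    remaining = list(hosts)
    rings = []
    for size in ring_sizes:
        rings.append(remaining[:size])
        remaining = remaining[size:]
    if remaining:
        rings.append(remaining)

    updated = []
    for ring in rings:
        for host in ring:
            push_update(host)
        time.sleep(pause_s)  # let the ring bake before checking health
        if not all(healthy(h) for h in ring):
            # A ring failed: stop here rather than update everyone.
            return updated, False
        updated.extend(ring)
    return updated, True
```

With ring sizes like [1, 10, 100], a broken update of the CrowdStrike kind would have bricked one canary host, failed its health check, and never reached the rest of the fleet; the cost is that the last ring waits a few bake periods before it is protected against the new threat, which is exactly the trade-off the post describes.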