Summary
- An overnight update to the CrowdStrike Falcon Sensor security software has resulted in unforeseen mass global IT outages, affecting all sectors. The Falcon Sensor update of 19 Jul 24 was deployed globally to all CrowdStrike instances overnight 18 to 19 Jul 24. Machines receiving this update have experienced the “Blue Screen of Death”.
- Rollback has not been possible for nearly all systems affected.
- CrowdStrike software covers approximately 25% of the global market for comparable products (an indicator of the magnitude); the issue affects Windows devices only (endpoints such as laptops, servers and cloud/virtualised environments).
- Collateral impact will be felt by all organisations (regardless of whether they are CrowdStrike clients) as employees and customers struggle to travel, pay or communicate.
- We assess with a high degree of confidence that business interruption is likely to continue for a protracted period for all organisations.
Impact
- CRITICALITY: HIGH, with massive global impact and both immediate and long-tail collateral operational disruption.
- UNCERTAINTY: MEDIUM; an initial fix was released by CrowdStrike this afternoon (UK time). However, challenges remain, not least that the manual intervention likely required on many hosts will add overhead and delay.
- OVERALL ASSESSMENT: protracted global disruption in the immediate term, with a long-tail recovery and a need for assurance activities in the aftermath.
- SECONDARY EFFECTS: organisations’ security functions should remain on increased vigilance against a potential uptick in cybercriminal activity and social engineering (at a low level) and more aggressive targeting (at a higher level). Separately, recent increased social unrest in several regions may be exacerbated by the incident, while affected authorities are inhibited in their response.
Affected Landscape
Impacted devices
- Windows devices only
Not impacted devices
- Windows hosts brought online after 0527 UTC on 19 July will not be impacted
- Hosts running Windows 7/2008 R2 are not impacted
- This issue is not impacting Mac- or Linux-based hosts
Notes
- Channel file “C-00000291*.sys” with timestamp of 19 July 0527 UTC or later is the reverted (good) version.
- Channel file “C-00000291*.sys” with timestamp of 19 July 0409 UTC is the problematic version.
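The file check above can be scripted. The following is a minimal sketch, assuming a Python interpreter can be run on the host (for example from a working boot or a recovery environment); the directory and timestamps come from the notes above, and the use of file modification time as the version indicator is a simplifying assumption.

```python
# Sketch: report whether a host holds the problematic or the reverted
# CrowdStrike channel file, using the timestamps quoted in the Notes above.
# File modification time is used as a heuristic indicator of the version.
import glob
import os
from datetime import datetime, timezone

CHANNEL_DIR = os.path.expandvars(r"%WINDIR%\System32\drivers\CrowdStrike")
REVERTED_AT = datetime(2024, 7, 19, 5, 27, tzinfo=timezone.utc)  # 0527 UTC: reverted (good) file

for path in glob.glob(os.path.join(CHANNEL_DIR, "C-00000291*.sys")):
    mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
    status = "reverted (good)" if mtime >= REVERTED_AT else "problematic"
    print(f"{path}: {mtime:%Y-%m-%d %H:%M} UTC -> {status}")
```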
Recommendations
Technical Corrective Actions
Please follow updates at: CrowdStrike Falcon Issue Acknowledgement
Workaround Steps for individual hosts:
- Reboot the host to give it an opportunity to download the reverted channel file. If the host crashes again, then:
- Boot Windows into Safe Mode or the Windows Recovery Environment
- NOTE: Putting the host on a wired network (as opposed to WiFi) and using Safe Mode with Networking can help remediation.
- Navigate to the %WINDIR%\System32\drivers\CrowdStrike directory
- Locate the file matching “C-00000291*.sys” and delete it (see the scripted sketch after these steps).
- Boot the host normally.
Note: BitLocker-encrypted hosts may require a recovery key.
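The deletion step can also be scripted where a Python interpreter is reachable from Safe Mode or the recovery environment; this is an assumption, and the manual steps above remain the primary guidance. The sketch removes only a channel file older than the 0527 UTC reverted version.

```python
# Sketch of the workaround's file-deletion step: remove only the problematic
# (pre-0527 UTC) "C-00000291*.sys" channel file and leave a reverted file alone.
import glob
import os
from datetime import datetime, timezone

CHANNEL_DIR = os.path.expandvars(r"%WINDIR%\System32\drivers\CrowdStrike")
REVERTED_AT = datetime(2024, 7, 19, 5, 27, tzinfo=timezone.utc)

for path in glob.glob(os.path.join(CHANNEL_DIR, "C-00000291*.sys")):
    mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
    if mtime < REVERTED_AT:
        print(f"Deleting problematic channel file: {path}")
        os.remove(path)
    else:
        print(f"Keeping reverted channel file: {path}")
```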
Workaround Steps for public cloud or similar environments, including virtual:
Option 1:
- Detach the operating system disk volume from the impacted virtual server
- Create a snapshot or backup of the disk volume before proceeding further as a precaution against unintended changes
- Attach/mount the volume to a new virtual server
- Navigate to the %WINDIR%\System32\drivers\CrowdStrike directory
- Locate the file matching “C-00000291*.sys” and delete it.
- Detach the volume from the new virtual server
- Reattach the fixed volume to the impacted virtual server
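As an illustration only, the sketch below shows how the snapshot/detach/attach steps of Option 1 might be driven in an AWS environment using boto3; the choice of AWS and all region, instance, volume and device identifiers are assumptions, and equivalent steps exist on other platforms. The channel file is then deleted on the volume once it is mounted on the rescue server, after which the detach/attach steps are reversed.

```python
# Sketch of Option 1 on AWS with boto3 (illustrative; all identifiers are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")       # placeholder region

IMPACTED_INSTANCE = "i-0123456789abcdef0"                 # impacted virtual server
RESCUE_INSTANCE = "i-0fedcba9876543210"                   # healthy server used for the fix
OS_VOLUME = "vol-0123456789abcdef0"                       # impacted OS disk volume

# Precautionary snapshot before making any changes
ec2.create_snapshot(VolumeId=OS_VOLUME,
                    Description="Pre-remediation backup (CrowdStrike channel file issue)")

# Stop the impacted server and detach its OS volume
ec2.stop_instances(InstanceIds=[IMPACTED_INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[IMPACTED_INSTANCE])
ec2.detach_volume(VolumeId=OS_VOLUME, InstanceId=IMPACTED_INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[OS_VOLUME])

# Attach the volume to the rescue server as a secondary disk; delete
# "C-00000291*.sys" there, then detach and re-attach to the impacted server.
ec2.attach_volume(VolumeId=OS_VOLUME, InstanceId=RESCUE_INSTANCE, Device="/dev/sdf")
```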
Option 2:
- Roll back to a snapshot before 0409 UTC.
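A similarly hedged AWS sketch of Option 2 follows: creating a replacement volume from a snapshot taken before 0409 UTC. The snapshot ID, availability zone and region are placeholders; the new volume would then be attached to the impacted server in place of its current OS volume, as in Option 1.

```python
# Sketch of Option 2 on AWS with boto3: restore from a pre-0409 UTC snapshot.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")        # placeholder region

PRE_0409_SNAPSHOT = "snap-0123456789abcdef0"               # snapshot taken before 0409 UTC, 19 July
AVAILABILITY_ZONE = "eu-west-1a"                           # must match the impacted instance's AZ

resp = ec2.create_volume(SnapshotId=PRE_0409_SNAPSHOT, AvailabilityZone=AVAILABILITY_ZONE)
new_volume = resp["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[new_volume])
print(f"Replacement volume ready to attach: {new_volume}")
```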
AWS-specific documentation:
Azure environments:
- Please see this Microsoft article
Bitlocker recovery-related KBs:
- BitLocker recovery in Microsoft Azure
- BitLocker recovery in Microsoft environments using SCCM
- BitLocker recovery in Microsoft environments using Active Directory and GPOs
- BitLocker recovery in Microsoft environments using Ivanti Endpoint Manager
What should crisis and executive teams be doing?
- Crisis management teams should be stood up as fully as circumstances allow
- Planning timeframes should assume weeks rather than days
- The tempo of crisis operations should be paced accordingly
- Seek from the start to maximise the benefits of lessons learnt and increased resilience
- Damage assessments should be conducted along two lines:
- Direct impact (i.e. systems that have been impacted)
- Indirect impact (i.e. operational collateral to employees or customers)
- Preparation for affected parties:
- As fixes emerge and begin to be deployed, consider business-driven priorities (and the risks of interim fixes) to prioritise and sequence updates. It is recommended that some preparation time is allowed.
- IT teams should ensure asset registers are to hand, including configuration management databases (CMDB) and the availability of BitLocker recovery keys (the disk-encryption keys needed to recover encrypted drives).
- An early and realistic assessment should be made of the accuracy and availability of this information, and of any gaps or areas of particular complexity or challenge. This should inform the overall recovery strategy.
- Separate impact assessment and planning should be conducted, including but not limited to:
- Regulatory obligations
- Legal obligations
- Financial obligations
- Safeguarding and wellbeing of employees and clients
- Contingency planning (prepare for reaction):
- Develop an inventory of contingency measures and related preparatory steps; adapt business continuity and incident management plans; and execute business continuity plans and testing.
- Engage insurance and external providers early:
- Crisis management specialists
- Communications
- Engineering and security support providers
- Legal
- Maintain heightened cyber security vigilance within teams:
- Events such as this present opportunities that are not lost on adversaries
- The urgency to restore services may introduce compromises or vulnerabilities that will need to be addressed later
- Begin formulating strategic priorities, risk tolerances and options around:
- Restoration of services and business
- Stabilisation
- Full recovery
- Ensure robust assurance activities are built in between each phase
- Recovery-centric mindset:
- Adopt a recovery-centric mindset, with firms able to demonstrate adaptability on the assumption that major disruption will occur.
SOURCE: This advisory draws on content from the Tyburn and Obrela TI teams
FOR MORE INFO PLEASE CONTACT: info@tyburn-str.com