Published on December 18, 2020
By Augusto Barros, Vice President of Solutions
FireEye has finally released details on the campaign that hit them earlier this month. The report includes findings related to the SUNBURST malware, which was distributed through the compromised update mechanism of the SolarWinds Orion software and identified as the initial access method of the attack. This campaign is a textbook case that can be used to justify many long-professed security practices.
I'm not saying this to bash FireEye, by the way. It is actually to explain how they managed not only to identify the breach but also to do it in a timeframe that avoided bigger consequences. This must be recognized as a success story for them.
There has always been a debate about why you should invest in detection when you could invest more in preventative measures. However, a supply chain attack like this one is a perfect example of why prevention will never be perfect. In this case, a trusted component, the SolarWinds software, was breached, providing the attackers with privileged access to the infrastructure. You can try to minimize your attack surface by limiting the number of suppliers you trust, but risk will always exist. You run Windows? You trust Microsoft. Running on AWS? You trust Amazon. Admittedly, the likelihood of one of these providers being compromised in a way that could be used to get into your environment is very, very low, but it is not impossible. As the amount and type of valuable information managed by software or cloud providers grows, we can expect more threat activity trying to obtain that holy grail. We should be prepared in case that happens.
This campaign also highlights the limitations of indicator of compromise (IOC) based security. FireEye's detailed report shows that the attackers did not reuse known infrastructure or known malware. If your detection strategy relies on finding known bad indicators, you will not be able to detect an attack like this. Malware was used strategically to achieve a limited set of objectives, such as stealing credentials. The stolen credentials and "regular" remote access were then used for the next steps of the attack. The typical threat detection solution, searching for "known bad", is extremely ineffective in this scenario.
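To see why indicator matching fails against novel tradecraft, consider a minimal sketch of IOC-based detection. The feed entries and observed values here are illustrative placeholders (RFC 5737 example IPs and sample hashes), not real campaign indicators or any specific product's logic:

```python
# Minimal sketch of IOC-based detection (hypothetical data).
# A match fires only when an observed value is already in the feed.
KNOWN_BAD_IOCS = {
    "203.0.113.7",                        # example C2 IP from a past campaign
    "44d88612fea8a8f36de82e1278abb02f",   # example file hash
}

def matches_ioc(observed_values, ioc_feed=KNOWN_BAD_IOCS):
    """Return the subset of observed values present in the known-bad feed."""
    return set(observed_values) & ioc_feed

# An attacker using fresh, never-before-seen infrastructure and a
# newly compiled implant produces zero matches:
novel_campaign = ["198.51.100.23", "9e107d9d372bb6826bd81d3542a419d6"]
print(matches_ioc(novel_campaign))  # set() — nothing to detect
```

The set intersection is the whole detection: if the attacker's infrastructure and tooling have never been cataloged, the intersection is empty and the attack is invisible to this approach.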
Finally, this case also provides two good lessons about using analytics for threat detection. The first can be scary: Some actions by the attackers were focused on avoiding detection by well-known analytics use cases. They took steps to avoid detection based on IP geolocation anomalies, for example, or detection of C&C traffic based on typical beaconing behavior. That doesn't mean these detection use cases are no longer effective. They may have been avoided by this specific campaign, but many others are less sophisticated and will be caught by these methods. The real lesson here is that advanced attackers are paying attention to what the best blue teams are doing and tweaking their methods to avoid detection.
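A common beaconing heuristic flags outbound connections whose inter-arrival times are suspiciously regular; adding jitter to the check-in interval, as evasive implants do, defeats the simple version. A minimal sketch of that heuristic, with illustrative timestamps and an illustrative threshold (not a recommendation from any vendor guidance):

```python
from statistics import mean, stdev

def looks_like_beacon(timestamps, max_cv=0.1):
    """Flag traffic whose inter-arrival times are suspiciously regular.

    Computes the coefficient of variation (stdev / mean) of the gaps
    between connection times; near zero means clockwork beaconing.
    The 0.1 threshold is an illustrative choice.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # not enough data to judge regularity
    return stdev(gaps) / mean(gaps) < max_cv

# Clockwork implant: connects every 60 seconds exactly.
regular = [0, 60, 120, 180, 240, 300]
# Jittered implant: same average rate, randomized gaps.
jittered = [0, 45, 130, 150, 255, 300]

print(looks_like_beacon(regular))   # True
print(looks_like_beacon(jittered))  # False
```

The jittered sequence has the same average check-in rate but a high variance in its gaps, so the simple regularity test misses it entirely, which is exactly the kind of tweak the paragraph above describes.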
The second lesson confirms something that we at Securonix have known for a long time: analytics methods are a powerful, and sometimes the only, way to detect advanced attacks. We can see that, even with all the steps taken by the attackers to avoid detection, there are existing analytics use cases that would be useful in detecting this attack. In their write-up, FireEye mentions, for example, anomalies related to host/user associations, landspeed violations on the compromised accounts, and anomalous modification of scheduled tasks. Could the attackers have avoided these detections as well? Maybe. But organizations adopting a broad range of analytics-based use cases across the many tactics of the MITRE ATT&CK framework will have a higher chance of finding that needle, disguised as hay, in the haystack.
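One of those use cases, the landspeed (sometimes called "impossible travel") violation, is simple to sketch: two logins by the same account whose locations would require travel faster than any plausible flight. The coordinates, threshold, and function names below are illustrative assumptions, not FireEye's or Securonix's actual implementation:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def landspeed_violation(login_a, login_b, max_kmh=900):
    """Flag two logins (epoch_seconds, lat, lon) by the same account
    that imply travel faster than a commercial flight.
    The 900 km/h threshold is an illustrative choice."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = (t2 - t1) / 3600
    if hours == 0:
        return True  # simultaneous logins from two places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

# Same account: New York at t=0, Moscow one hour later (~7500 km).
ny = (0, 40.71, -74.01)
moscow = (3600, 55.75, 37.62)
print(landspeed_violation(ny, moscow))  # True
```

What makes this class of analytic hard to evade is that it keys on relationships (account, place, time) rather than on any artifact the attacker controls, which is why behavior-based use cases survive campaigns that novel infrastructure and fresh malware slip past.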