Andrew Ruef on Stuxnet

Following the widespread awareness of the Stuxnet worm, anti-virus companies and analysts have produced detailed analyses of it. While most of the technical analyses of Stuxnet have been superb, the conclusions drawn from those analyses about the creators of Stuxnet, their identity and goals, have been deeply flawed. The reason for this disconnect is unclear, given the availability of verifiable information that refutes some of the commonly held beliefs about both Stuxnet and the malware business in general. The ideas that Stuxnet is unparalleled in technical sophistication, that its complexity required vast resources to create, and that it specifically targeted Iranian facilities remain largely unquestioned by major, non-technical news outlets and their audiences. From these premises follow several conclusions, also largely unchallenged: that this level of sophistication required a nation-state to finance its construction, and that the target of the worm was Iran, given the volume of infections there. Unfortunately, all of these “facts” are demonstrably false.

The problem with these commonly accepted ideas about Stuxnet is that they are derived largely from false authorities. A “computer security expert” is not necessarily a “malware expert;” the latter would be able to tell a reporter that over roughly the past five years, the skills required to discover vulnerabilities in software and develop exploits for those vulnerabilities have been both democratized and commoditized.

Books on how to discover vulnerabilities in applications via fuzzing and reverse engineering, as well as how to write rootkits for Windows, Linux and BSD, are all readily available. The easy availability of this knowledge means that anyone prepared to put in the necessary time and effort can produce malware, but of what complexity? Could a college student with a laptop and a book purchased from Amazon create a Stuxnet? We certainly know that open-source developers are capable of producing malware of this complexity, and, additionally, markets for malware and exploits have been created.

Those who have the ability can sell exploits and malware on their choice of three markets: white, black, and gray. The white market for vulnerabilities takes the form of disclosure to vendors. The black market takes the form of selling exploits to criminal organizations. The gray market takes the form of selling exploits and vulnerabilities to security consultancies.

So we know that creating a Stuxnet is within the realm of possibility for anyone prepared to put in the time and effort. But is Stuxnet really as sophisticated as some have claimed?

Stuxnet registers its own driver and places that driver on the file system. TDL3, a well-known piece of crimeware, performs a much more complicated act by infecting an existing driver. TDL3 then filters access to the hard disk to conceal the modifications made to the infected file, and provides an API that lets clients read and write a “hidden” area of the hard disk encrypted with the RC4 algorithm. TDL3 also uses knowledge of an algorithmic flaw in GMER to conceal the hooks it makes in the storage stack.
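To illustrate how lightweight that encryption choice is, here is a minimal sketch of standard RC4 in Python. Nothing here is specific to TDL3's on-disk format (which is not described in this article); the point is only that the cipher itself is a few dozen lines that any competent programmer can write from a public description:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Standard RC4: key-scheduling algorithm (KSA) then keystream XOR (PRGA)."""
    # KSA: initialize and scramble the 256-byte state array with the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: generate keystream bytes and XOR them with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# RC4 is symmetric: the same call both encrypts and decrypts.
ciphertext = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ciphertext) == b"Plaintext"
```

So an “encrypted hidden storage area” is not, by itself, evidence of exotic resources; RC4 is well documented and trivially reimplemented.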

The driver infection technique used by TDL3 is not new; a 2007 forum post documented “DaMouse”, a “driverless ring0 rootkit.” DaMouse was a proof-of-concept driver infector that patched drivers on disk so that a kernel-mode payload executed when the infected driver was loaded.

TDL3’s kernel-mode component uses several techniques to resist static analysis. One of these is calling imported system routines via function pointers that are dynamically resolved as needed. TDL3 uses a string hashing algorithm to produce 4-byte hashes of the exports it needs to call, and a lookup function to search the kernel’s export table for a matching hash. This technique isn’t unique to TDL3; it is widely used by shellcode in exploits to resolve functions dynamically.
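The technique is simple enough to sketch. The specific hash algorithm TDL3 uses is not given here, so the sketch below uses a ROR13-style hash of the kind commonly seen in Windows shellcode, and a plain dictionary stands in for the PE export directory that a real resolver would walk:

```python
def ror13_hash(name: str) -> int:
    """ROR13-style 32-bit string hash, as commonly seen in Windows shellcode.
    Illustrative only; TDL3's exact algorithm is not described in this article."""
    h = 0
    for c in name.encode():
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # rotate right by 13 bits
        h = (h + c) & 0xFFFFFFFF
    return h

def resolve_by_hash(export_table: dict, wanted: int):
    """Hash each export name and return the address whose hash matches.
    A real resolver walks the export directory of ntoskrnl/kernel32 instead."""
    for name, addr in export_table.items():
        if ror13_hash(name) == wanted:
            return addr
    return None

# Toy "export table" of name -> fake address. Only the 4-byte hash ships in
# the malware, so the function name never appears as a string in the binary.
exports = {"NtCreateFile": 0x1000, "NtWriteFile": 0x1004, "NtClose": 0x1008}
wanted = ror13_hash("NtWriteFile")   # precomputed at build time in real malware
assert resolve_by_hash(exports, wanted) == 0x1004
```

Because only the hash is stored, an analyst running `strings` on the binary sees no API names, which is the entire point of the trick.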

Like TDL3, Stuxnet attempts to resist static analysis by dynamically resolving the functions it calls. Unlike TDL3, Stuxnet stores the string names of those functions, passing each name to a hashing function at runtime and then passing the result to a resolution function, so the names remain exposed to static analysis. Technically speaking, Stuxnet is in fact less sophisticated in this respect than TDL3.

There are two other aspects of Stuxnet that analysts say indicate the involvement of a state sponsor: the combination of vulnerabilities exploited and the use of stolen code-signing material. Some assert that the combination of these exploits represents an “unprecedented” level of sophistication in malware attacks. While some vision was required to assemble these exploits into a coherent package, the array of exploits suggests that the whole is not sophisticated but rather a hodge-podge. Consider:

Stuxnet uses MS10-061, which is a non-public remote vulnerability in Windows XP that is over a year old. Unlike MS08-067, it comes with some pretty stringent requirements and mitigating factors, the first being that “Systems are only vulnerable to remote attack when sharing a printer and the remote attacker can access the printer share,” which rather drastically limits the scope of the attack.

MS10-046 is a fairly potent vulnerability; after its disclosure, many pieces of crimeware latched onto it (Zeus, etc.). It is probably the most powerful enabler Stuxnet has, due to its limited mitigating factors. An exploit for MS10-046 probably has some value on the gray and black markets; the decision to use one in Stuxnet thus indicates either that the Stuxnet actors were able to purchase it, or that they created it themselves and chose to deploy it rather than sell it.

The use of stolen code-signing certificates is an interesting but unsurprising tactic. With the introduction of Authenticode and the Kernel Mode Code Signing (KMCS) policy in 64-bit Windows Vista and onward, Microsoft has increased the value of Authenticode certificates. The value of private encryption keys is well understood, as is the value of private keys for Web-based SSL communications. But the value of keying material for code signing is different in two major respects.

The first is that the keys used for code signing aren’t “protecting” data from compromise; they are simply authenticating that data comes from whom it claims to come from. The compromise of code-signing keying material allows a malicious actor to generate code that the operating system treats as “trusted,” which dramatically increases the potential scope and scale of a malicious attack that uses a compromised code-signing key.
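The distinction can be made concrete with a toy sketch. Real code signing (Authenticode) uses an asymmetric key pair bound to an X.509 certificate; the sketch below substitutes an HMAC purely to stay self-contained, and every name in it is hypothetical. What it shows is the essential property: the signed payload is never concealed, only authenticated, so whoever holds the signing key can mint “trusted” code at will:

```python
import hashlib
import hmac

# Stand-in for the publisher's private signing key. Real code signing uses an
# asymmetric key pair in an X.509 certificate; HMAC is used here only so the
# example runs without third-party crypto libraries.
SIGNING_KEY = b"publisher-private-key"

def sign(binary: bytes) -> bytes:
    """'Sign' a binary: tag a digest of it with the key. The binary itself is
    untouched and fully readable -- signing conceals nothing."""
    return hmac.new(SIGNING_KEY, binary, hashlib.sha256).digest()

def verify(binary: bytes, signature: bytes) -> bool:
    """Check that the binary came from the key holder and was not modified
    (in the real asymmetric scheme, anyone can verify with the public key)."""
    return hmac.compare_digest(sign(binary), signature)

payload = b"MZ...driver code..."
sig = sign(payload)
assert verify(payload, sig)             # authentic and untampered
assert not verify(payload + b"!", sig)  # any modification breaks the signature
# The payload was never encrypted; an attacker who steals SIGNING_KEY can
# sign arbitrary code that the verifier will then treat as "trusted".
```

The last comment is the whole threat model: stealing the key does not expose any secret data, it grants the ability to impersonate the publisher.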

The second is that code-signing keys are difficult to protect. Microsoft tools generally need to pull the key from the local certificate store, which leads to holding the certificate on any developer workstation that needs to generate release or test-release builds. Tools to facilitate the storage of code-signing keys on removable, protected storage (like smart cards) don’t exist. Given the lack of perceived value in the keys and the difficulty in protecting them, the motivation to protect these keys may be low in development organizations.

Given both the inability and lack of desire to protect code-signing keys, how hard are they then to steal? How hard would theft be with the help of a corrupted insider? If these keys are now relatively unprotected high-value items, could a black market in stolen keying material exist?

Actual, Factual Conclusions

The first major takeaway from the Stuxnet case is that sophisticated attack tools are not necessarily all that rare, but attacks that are sufficiently unique, like those that target SCADA systems, are. That “experts” have failed to articulate this says much more about them than it does about the creators of Stuxnet.

Second, the idea that only nation-states can create “sophisticated” code would no doubt come as a surprise to entities like Google, Apple and Microsoft. The fact of the matter is that no one can speak to the true level of sophistication of those behind Stuxnet. On one end of the spectrum we have nationally-backed cyber warriors; on the other, very talented, motivated and visionary computer science undergraduates: both groups, and nearly everyone in between, are capable of creating software like Stuxnet.

We also have to consider that those who deployed Stuxnet may not be the same people who created Stuxnet. System programming and rootkit knowledge have become democratized and commoditized to the point that developing robust kernel and user mode rootkits is within the reach of anyone with sufficient programming skill and time. The same holds true for vulnerability and exploit research and development. If one does not have the requisite skills or time, one can purchase the needed capabilities on the black market.

We can speculate endlessly about the motivation behind Stuxnet, but absent some action on its part, no one has any idea why it was made. Stuxnet has not shut down any facility (that we know of), nor has it asked for ransom money (that anyone has admitted to). It is not inconceivable that Stuxnet represents the vanguard of a new purpose for malware that “security experts” have not yet envisioned.

Stuxnet is not a surprise for anyone who has tracked issues related to “cyber” or information warfare for any length of time. This day has been predicted by various studies and working groups for the past two decades; it simply took until now for computers and network connectivity to become ubiquitous enough for these predictions to become reality. Fighting a stand-up fight, nation-to-nation, on a physical or digital battlefield is something we have put a lot of thought and investment into; what we are not prepared for, or able to combat effectively, is a horde of capable, undisciplined and downright reckless actors unleashing Stuxnet-after-next.