
The Great Game

Part 3 of a series on U.S. cybersecurity. Part 1. Part 2. Part 4.

By 2010, the Iranian nuclear program was clearly behind schedule. Despite dire predictions from the world powers, Iran seemed little closer to acquiring the bomb. The reasons for the delay were puzzling: perhaps the IR-1 centrifuges used at the nuclear facility at Natanz were even less reliable than first thought; perhaps CIA sabotage operations were bearing fruit; perhaps Israel’s assassination campaign against Iranian nuclear scientists was beginning to take its toll; or maybe the Iranians were simply having more trouble than expected mastering the complex process of enriching uranium. It took months for the true culprit to emerge: Stuxnet, by far the most sophisticated piece of malware yet discovered, had been subtly damaging the plant since at least 2007 (and perhaps as early as 2005), according to researchers at Symantec, a computer security firm.

Developed by American and Israeli intelligence, Stuxnet pulled off one of the most sophisticated attacks in computing history. Over the course of several versions, Stuxnet exploited an unprecedented five zero-day vulnerabilities (rare and valuable flaws in software that were, at the time, unknown to the software vendors) to spread itself to air-gapped Iranian control computers, machines connected neither to the Internet nor to any other computer that is. From there, it sped up and slowed down centrifuges or opened and closed valves to subtly damage the devices and waste Iran’s limited supply of uranium, all while feeding false information to the operators and protecting itself from attempts at discovery or removal, according to Kim Zetter in Countdown to Zero Day.

Stuxnet ushered in an age that computer security experts had long predicted and feared: the age of cyberwarfare. It represented the first truly new warzone since aviation took off in World War I. Today, at least 140 nations are reportedly pursuing cyberwarfare capabilities in some capacity. Nor was Stuxnet the first state intrusion: in 2008, in a massive and unprecedented breach, a piece of spyware known as Agent.BTZ, believed to be Russian, infected computers at the U.S. military’s Central Command, as well as government computers across Europe, with the apparent goal of stealing intelligence data. Since Stuxnet, an eerie quiet has descended, but it won’t be long before more nations come to see cyberweapons as tools of both military and political significance. Maintaining the economic, cultural, and technological advantages of computer networks will depend on nations establishing international norms. A Pentagon report from 2010 concluded, “The cyber competition will be offense-dominant for the foreseeable future.” With offense far outstripping defense, nations will have to move quickly to take advantage of these new weapons and tread carefully to avoid disaster.

Most discussion of cyberwarfare gravitates toward hacks against consumer-facing technologies. The theft of millions of credit card numbers at Target, the release of confidential or private information at Sony, and the siphoning of gigabytes of data from JPMorgan Chase have all drawn public scrutiny. At far greater risk, however, are Supervisory Control and Data Acquisition (SCADA) systems, which drive the industrial infrastructure underpinning power, water, traffic, manufacturing, railways, air traffic control, communications, and more. While banks and retailers are increasingly investing in cybersecurity, the operators of these factories and plants spent years mostly ignoring the warnings of security experts, according to an author interview with Wired journalist Kim Zetter. Many systems lack even the most basic protections expected in the consumer-facing world. While SCADA systems and the devices they control – such as Programmable Logic Controllers (PLCs) and Remote Terminal Units (RTUs) – should almost universally be air-gapped, a major research project found that hundreds of thousands were directly accessible via the Internet. Worse, manufacturers have been slow to address other obvious flaws in device security, including default passwords, an inability to lock out someone guessing passwords, and a lack of authentication to verify that commands originate from a legitimate source.
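None of the missing protections described above are exotic. To make the gap concrete, here is a minimal sketch, in Python, of the kind of factory-default-password check and failed-login lockout that many SCADA front-ends historically lacked. The password list, thresholds, and class are hypothetical illustrations, not taken from any real device:

```python
# Illustrative sketch only: basic login hardening of the sort many SCADA
# front-ends historically lacked. The default-password list, thresholds,
# and class design are hypothetical.
import hashlib
import hmac
import time

DEFAULT_PASSWORDS = {"admin", "password", "1234", "plc"}  # common factory defaults
MAX_ATTEMPTS = 5        # lock the account after this many consecutive failures
LOCKOUT_SECONDS = 300   # how long the account stays locked

class LoginGuard:
    def __init__(self, password: str):
        # Refuse to operate with a known factory-default credential.
        if password.lower() in DEFAULT_PASSWORDS:
            raise ValueError("refusing to run with a factory-default password")
        # Store only a hash of the password (unsalted here, for brevity).
        self._hash = hashlib.sha256(password.encode()).digest()
        self._failures = 0
        self._locked_until = 0.0

    def attempt(self, candidate: str) -> bool:
        now = time.monotonic()
        if now < self._locked_until:
            return False  # locked out: reject without even checking
        candidate_hash = hashlib.sha256(candidate.encode()).digest()
        # Constant-time comparison avoids leaking information via timing.
        if hmac.compare_digest(candidate_hash, self._hash):
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= MAX_ATTEMPTS:
            self._locked_until = now + LOCKOUT_SECONDS
        return False
```

A few dozen lines like these would defeat the default-password and password-guessing attacks mentioned above; the point is that the protections are cheap, not that this sketch is production code.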

Researchers have envisioned (and often built proofs-of-concept of) dozens of nightmare scenarios for every major utility, from the electric grid to air traffic control. Such attacks could leave millions of Americans without access to power, clean water, or other services, cause massive economic damage, and lead to deaths (particularly if a cyberattack were paired with a traditional, kinetic attack). Economist Scott Borg estimated in 2007 that if one-third of the country lost power for three months, the cost would be $700 billion. In 2010, Mike Davis of IOActive demonstrated one way that could happen: with a utility’s permission, Davis created a piece of software that could automatically spread between the smart meters that control power to homes, shutting them down. According to Kim Zetter, in 2007 researchers at the Idaho National Laboratory conducted the Aurora Generator Test, a simulated cyberattack on a 5,000-horsepower diesel generator. By forcing the generator out of sync with the grid, the researchers destroyed it in spectacular fashion using only a few lines of code, producing a terrifying video of an industrial-size generator shaking itself apart and spewing smoke. To make matters worse, replacing generators of this size (particularly many at the same time) would prove challenging, given that you can’t exactly pick one up at your local Best Buy. And this technology isn’t new – rumor has it that the CIA used a logic bomb to blow up a Soviet pipeline as early as 1982. Officialdom is increasingly recognizing the problem: government officials, including former Homeland Security Secretary Janet Napolitano and former Secretary of Defense Leon Panetta, have warned of an impending “cyber 9/11” or “cyber Pearl Harbor.”
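The physics behind the Aurora test can be sketched with a few lines of arithmetic. Closing a breaker while the generator’s voltage is out of phase with the grid puts a large voltage difference across the connection, producing a violent current and torque transient. The following idealized calculation (unit voltages, made-up phase angles; the real test parameters were not published) shows how fast that stress grows with phase mismatch:

```python
# Idealized illustration of the Aurora condition: reconnecting a generator
# to the grid while its voltage is out of phase. Numbers are illustrative,
# not the actual test parameters.
import math

def reconnect_stress(phase_diff_deg: float) -> float:
    """Relative magnitude of the transient when the breaker closes at a
    given phase difference. For unit voltages on both sides, the voltage
    gap across the breaker is |V_gen - V_grid| = 2*sin(delta/2)."""
    return 2.0 * math.sin(math.radians(phase_diff_deg) / 2.0)

# In-phase closure produces essentially no transient; anti-phase closure
# puts twice the nominal voltage across the breaker.
for delta in (0, 30, 90, 180):
    print(f"phase diff {delta:3d} deg -> relative stress {reconnect_stress(delta):.2f}")
```

An attacker who can toggle a breaker at the wrong moment gets this stress for free, which is why the researchers needed “only a few lines of code” to wreck the machine.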

Cyberwarfare is primarily defined by two key features that make it quite unlike any other form of warfare. First is the difficulty of attribution. Unlike in conventional or nuclear warfare, there is often little ability to trace an attack back to its source. A report from McKinsey notes that, “The defender no longer has the advantage: there is no need for proximity as the attacker can be based at any Internet-enabled computer in the world; the attacks are difficult to detect and often hard to attribute to a specific attacker…key assets…and methods of attack are very difficult to predict.” Installing certain network monitoring software can improve the odds of identifying the perpetrator, but even when such software succeeds, attribution is still hardly certain, given the ability of attackers to hide behind proxies. Kim Zetter notes that the discovery and attribution of Stuxnet was possible only thanks to numerous favorable circumstances, including months of effort from researchers around the world, among them teams at Symantec and Kaspersky Lab. Unfortunately, there is also always a risk that another actor is framing the obvious culprit in order to incite tensions. This lack of a clear “whodunnit” makes establishing traditional deterrence challenging.

The recent Sony Pictures hack demonstrates the difficulty of attribution in the cyber realm. Although the FBI declared with apparent certainty that North Korea was behind the attack, numerous cybersecurity experts have questioned that claim. Had the Obama administration decided that such a cyberattack required a kinetic response, it would likely have had a hard time gathering political support.

Assuming the FBI has good reason for its assertion that North Korea was behind the Sony hack, the most plausible explanation for the lack of additional evidence is that the United States has technology or conventional intelligence sources that it is seeking to protect. For nations with the kind of wide-reaching intelligence networks that will often be necessary to attribute cyberattacks with high certainty, publicly releasing the attribution (which, politically, is almost certainly necessary after any large attack, though it comes with its own set of political pressures) puts the victim in a double bind: either it must contend with a lack of domestic and international conviction, or it must reveal classified intelligence sources. In addition, a lack of public evidence may lead the attacking country to cry slander, further reducing domestic conviction and weakening international support for a response. The flip side is that an enterprising state could manufacture a cyberattack against itself to justify a “retaliatory” strike, or simply attribute a real attack to whomever it is politically convenient to blame.

Second, cyberattacks are almost always “one-hit wonders”: once a weapon is launched, it can rarely be used again. Unlike bombs and bullets, a cyberweapon is likely to be effective exactly once. A successful cyberweapon will provoke the target to bolster its defenses; it is unlikely to be fooled by the same flaw twice. Given the rapid advancement of computer technology, the ever-persistent hunt for bugs, and the constantly shifting landscape of any given system, a cyberweapon that takes years to develop may be rendered obsolete before it is ever fired.

While the difficulty of attribution will likely entice many nations into using cyberweapons, the “one-hit wonder” effect will be one of cyberwarfare’s greatest limiting factors. Governments will have to be careful about which flaws they weaponize and which they reveal to vendors for patching: every zero-day vulnerability a nation finds in commercial software is likely present in every copy of that software, no matter which nation it is running in. Some attacks risk subverting key Internet institutions; for example, it is widely believed that the NSA inserted a vulnerability into a standard published by the National Institute of Standards and Technology (NIST). NIST standards are widely adopted; undermining them risks undermining trust in NIST, which in turn weakens security across the Internet.

The future of cyberwarfare is uncertain at best. With only one major confirmed attack, it’s possible (though unlikely) that nations will conclude that cyberweapons are too expensive, too specialized, and too risky to technological infrastructure to be used in any but the most extreme circumstances. On the other hand, cyberweapons may become increasingly widespread tools of geopolitical maneuvering. Unlike nuclear weapons, cyberweapons will prove all but impossible to restrict by treaty. For society to continue to benefit from the rapid advance and adoption of computer systems, nations will have to establish meaningful norms of conduct and effective deterrence; so far, no one has made a move.

 

Author’s note: Many of the claims made in this article cannot be fully substantiated with evidence for the simple reason that cyberweapons are too new for anyone to know for certain where they are going. Unless otherwise cited, the views expressed in this article are my own, and are subject to change given new evidence.

Acknowledgements: Much of the basis for this article, particularly regarding Stuxnet, comes from Countdown to Zero Day by Kim Zetter, as well as an author interview with Ms. Zetter in February 2015. Kim Zetter’s work inspired and paved the way for this article.

I am also particularly indebted to this RAND Corporation analysis for several key ideas; I have tried to cite the RAND study whenever I use or expand on their ideas.

Image source: Office of the Presidency of the Islamic Republic of Iran (via Wired)