by Gary Farber
Except that who is responsible for Stuxnet is a mystery.
What we know is that it's incredibly dangerous. And it's at least possible it was targeted at Iran's nuclear program, perhaps the enrichment centrifuges in Natanz.
Cyber security experts say they have identified the world's first known cyber super weapon designed specifically to destroy a real-world target – a factory, a refinery, or just maybe a nuclear power plant.
The cyber worm, called Stuxnet, has been the object of intense study since its detection in June. As more has become known about it, alarm about its capabilities and purpose has grown. Some top cyber security experts now say Stuxnet's arrival heralds something blindingly new: a cyber weapon created to cross from the digital realm to the physical world – to destroy something.
[...]
The appearance of Stuxnet created a ripple of amazement among computer security experts. Too large, too encrypted, too complex to be immediately understood, it employed amazing new tricks, like taking control of a computer system without the user taking any action or clicking any button other than inserting an infected memory stick. Experts say it took a massive expenditure of time, money, and software engineering talent to identify and exploit such vulnerabilities in industrial control software systems.
Unlike most malware, Stuxnet is not intended to help someone make money or steal proprietary data. Industrial control systems experts now have concluded, after nearly four months spent reverse engineering Stuxnet, that the world faces a new breed of malware that could become a template for attackers wishing to launch digital strikes at physical targets worldwide. Internet link not required.
"Until a few days ago, people did not believe a directed attack like this was possible," Ralph Langner, a German cyber-security researcher, told the Monitor in an interview. He was slated to present his findings at a conference of industrial control system security experts Tuesday in Rockville, Md. "What Stuxnet represents is a future in which people with the funds will be able to buy an attack like this on the black market. This is now a valid concern."
It is a realization that has emerged only gradually.
Stuxnet surfaced in June and, by July, was identified as a hypersophisticated piece of malware probably created by a team working for a nation state, say cyber security experts. Its name is derived from some of the filenames in the malware. It is the first malware known to target and infiltrate industrial supervisory control and data acquisition (SCADA) software used to run chemical plants and factories as well as electric power plants and transmission systems worldwide. That much the experts discovered right away.
But what was the motive of the people who created it? Was Stuxnet intended to steal industrial secrets – pressure, temperature, valve, or other settings – and communicate that proprietary data over the Internet to cyber thieves?
By August, researchers had found something more disturbing: Stuxnet appeared to be able to take control of the automated factory control systems it had infected – and do whatever it was programmed to do with them. That was mischievous and dangerous.
But it gets worse. Since reverse engineering chunks of Stuxnet's massive code, senior US cyber security experts confirm what Mr. Langner, the German researcher, told the Monitor: Stuxnet is essentially a precision, military-grade cyber missile deployed early last year to seek out and destroy one real-world target of high importance – a target still unknown.
"Stuxnet is a 100-percent-directed cyber attack aimed at destroying an industrial process in the physical world," says Langner, who last week became the first to publicly detail Stuxnet's destructive purpose and its authors' malicious intent. "This is not about espionage, as some have said. This is a 100 percent sabotage attack."
[...] 1. This is sabotage. What we see is the manipulation of one specific process. The manipulations are hidden from the operators and maintenance engineers (we have the intercepts identified).
2. The attack involves heavy insider knowledge.
3. The attack combines an awful lot of skills -- just think about the multiple 0-day vulnerabilities, the stolen certificates, etc. This was assembled by a highly qualified team of experts, involving some with specific control system expertise. This is not some hacker sitting in the basement of his parents' house. To me, it seems that the resources needed to stage this attack point to a nation state.
4. The target must be of extremely high value to the attacker.
5. The forensics that we are getting will ultimately point clearly to the attacked process -- and to the attackers. The attackers must know this. My conclusion is, they don't care. They don't fear going to jail.
6. Getting the forensics done is only a matter of time. Stuxnet is going to be the best studied piece of malware in history. We will even be able to do process forensics in the lab. Again, the attacker must know this. Therefore, the whole attack only makes sense within a very limited timeframe. After Stuxnet is analyzed, the attack won't work any more. It's a one-shot weapon. So we can conclude that the planned time of attack isn't sometime next year. I must assume that the attack did already take place. I am also assuming that it was successful. So let's check where something blew up recently.
Langner then elaborates on reasons to believe Iran's nuclear program may be the target.
Frank Rieger has more:
[...] stuxnet is a class of nation-state, weapons-grade attack software not seen publicly before. It uses four different zero-day exploits, two stolen certificates to get proper insertion into the operating system, and a really clever multi-stage propagation mechanism, starting with infected USB sticks and ending with code insertion into Siemens S7 SPS industrial control systems. One of the zero-days is a USB-stick exploit named LNK that works seamlessly to infect the computer the stick is put into, regardless of the Windows operating system version – from the fossil Windows 2000 to the most modern and supposedly secure Windows 7.
The stuxnet software is exceptionally well written. It makes very, very sure that nothing crashes and no outward signs of the infection can be seen, and, above all, it makes pretty sure that its final payload, which manipulates parameters and code in the SPS computer, is only executed if it is very certain to be on the right system. In other words: it is extremely targeted, constructed and built to be as side-effect free as humanly possible. Words used by reverse engineers working on the thing are “After 10 years of reverse-engineering malware daily, I have never ever seen anything that comes even close to this”, and, from another, “This is what nation states build, if their only other option would be to go to war”.
Industrial control systems, also called SCADA, are very specific to each factory. They consist of many little nodes measuring temperature, pressure, and the flow of fluids or gas; they control valves, motors, whatever is needed to keep the often dangerous industrial processes within their safety and effectiveness limits. So both the hardware module configuration and the software are custom made for each factory. For stuxnet they look like a fingerprint. Only if the right configuration is identified does it do more than just spread itself. This tells us one crucial thing: the attacker knew the target configuration very precisely. He must have had insider support or otherwise access to the software and configuration of the targeted facility.
I will not dive very much into who may be the author of stuxnet. It is clear that it has been a team effort, that a very well trained and financed team with lots of experience was needed, and that resources needed to be allocated to buy or find the vulnerabilities and develop them into the kind of exceptional zero-days used in the exploit. This is a game for nation-state-sized entities; only two handfuls of governments, and maybe as many very large corporate entities, could manage and sustain such an effort to the achievement level needed to build stuxnet. As to which of the capable candidates it could be: this is a trip into the Wilderness of Mirrors. False hints are most likely placed all over the place, so it does not make much sense for me to put much time into this exercise.
Regarding the target, things are more interesting. There is currently a lot of speculation that the Iranian reactor at Bushehr may have been the target. I seriously doubt that, as the reactor will, for political reasons, only go on-line when Russia wants it to go on-line, which they have dragged out for many years now, to the frustration of Iran. The political calculations behind this game are complex and involve many things, like the situation in Iraq, the US withdrawal plans, and Russia's unwillingness to let the US actually have free military and political bandwidth to cause them trouble in their near abroad.
But there is another theory that fits the available data much better: stuxnet may have been targeted at the centrifuges at the uranium enrichment plant in Natanz. The chain of published indications supporting the theory starts with stuxnet itself. According to people working on the stuxnet analysis, it was meant to stop spreading in January 2009. Given the multi-stage nature of stuxnet, the attacker must have assumed that it had reached its target by then, ready to strike.
He brings up some interesting data, including that:
According to official IAEA data, the number of actually operating centrifuges at Natanz was reduced substantially around the time of the accident Wikileaks wrote about.
Rieger has more, but Robert MacMillan asks: Was Stuxnet built to attack Iran's nuclear program?
[...]
Bushehr is a plausible target, but there could easily be other facilities -- refineries, chemical plants or factories that could also make valuable targets, said Scott Borg, CEO of the U.S. Cyber Consequences Unit, a security advisory group. "It's not obvious that it has to be the nuclear program," he said. "Iran has other control systems that could be targeted."
Iranian government representatives did not return messages seeking comment for this story, but sources within the country say that Iran has been hit hard by the worm. When it was first discovered, 60 percent of the Stuxnet-infected computers were located in Iran, according to Symantec.
We don't know who created Stuxnet, and we don't know what the target is.
We know this:
The Stuxnet worm is a "groundbreaking" piece of malware so devious in its use of unpatched vulnerabilities, so sophisticated in its multipronged approach, that the security researchers who tore it apart believe it may be the work of state-backed professionals.
"It's amazing, really, the resources that went into this worm," said Liam O Murchu, manager of operations with Symantec's security response team.
"I'd call it groundbreaking," said Roel Schouwenberg, a senior antivirus researcher at Kaspersky Lab. In comparison, other notable attacks, like the one dubbed Aurora that hacked Google's network and those of dozens of other major companies, were child's play.
O Murchu and Schouwenberg should know: They work for the two security companies that discovered that Stuxnet exploited not just one zero-day Windows bug but four -- an unprecedented number for a single piece of malware.
And we know that:
[...]
"What we're seeing with Stuxnet is the first view of something new that doesn't need outside guidance by a human – but can still take control of your infrastructure," says Michael Assante, former chief of industrial control systems cyber security research at the US Department of Energy's Idaho National Laboratory. "This is the first direct example of weaponized software, highly customized and designed to find a particular target."
"I'd agree with the classification of this as a weapon," Jonathan Pollet, CEO of Red Tiger Security and an industrial control system security expert, says in an e-mail.
Langner's research, outlined on his website Monday, reveals a key step in the Stuxnet attack that other researchers agree illustrates its destructive purpose. That step, which Langner calls "fingerprinting," qualifies Stuxnet as a targeted weapon, he says.
Langner zeroes in on Stuxnet's ability to "fingerprint" the computer system it infiltrates to determine whether it is the precise machine the attack-ware is looking to destroy. If not, it leaves the industrial computer alone. It is this digital fingerprinting of the control systems that shows Stuxnet to be not spyware, but rather attackware meant to destroy, Langner says.
Stuxnet's ability to autonomously and without human assistance discriminate among industrial computer systems is telling. It means, says Langner, that it is looking for one specific place and time to attack one specific factory or power plant in the entire world.
"Stuxnet is the key for a very specific lock – in fact, there is only one lock in the world that it will open," Langner says in an interview. "The whole attack is not at all about stealing data but about manipulation of a specific industrial process at a specific moment in time. This is not generic. It is about destroying that process."
So far, Stuxnet has infected at least 45,000 computers worldwide, Microsoft reported last month. Only a few are industrial control systems. Siemens this month reported 14 affected control systems, mostly in processing plants and none in critical infrastructure. Some victims in North America have experienced some serious computer problems, Eric Byres, an expert in Canada, told the Monitor. Most of the victim computers, however, are in Iran, Pakistan, India, and Indonesia. Some systems have been hit in Germany, Canada, and the US, too. Once a system is infected, Stuxnet simply sits and waits – checking every five seconds to see if its exact parameters are met on the system. When they are, Stuxnet is programmed to activate a sequence that will cause the industrial process to self-destruct, Langner says.
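To make that "sits and waits" behavior concrete, here is a rough sketch of the kind of check-and-trigger logic Langner is describing. To be clear, nothing below comes from the actual Stuxnet code, which has not been published; every name, value, and check is invented for illustration only.

    import time

    # Purely hypothetical illustration of "fingerprint the system, then wait
    # for exact conditions" logic. All names and values are made up; none of
    # this is taken from the real Stuxnet code.

    TARGET_FINGERPRINT = {"plc_model": "S7-EXAMPLE", "config_checksum": 0xC0FFEE}

    def read_system_fingerprint():
        # Stand-in for querying the infected machine's control-system
        # configuration (hardware modules, code blocks, etc.).
        return {"plc_model": "S7-EXAMPLE", "config_checksum": 0xC0FFEE}

    def process_parameters_match():
        # Stand-in for "checking every five seconds to see if its exact
        # parameters are met on the system." Returns True here so the
        # example terminates.
        return True

    def trigger_payload():
        print("conditions met; the destructive payload would run here")

    def main():
        if read_system_fingerprint() != TARGET_FINGERPRINT:
            return  # wrong system: leave the industrial computer alone
        while not process_parameters_match():
            time.sleep(5)  # poll every five seconds, as described above
        trigger_payload()

    if __name__ == "__main__":
        main()

The point of the sketch is how cheap the waiting part is: all the engineering effort goes into knowing what the fingerprint and parameters of the one intended target look like.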
That's not as bad as this, but it's not good:
Langner's analysis also shows, step by step, what happens after Stuxnet finds its target. Once Stuxnet identifies the critical function running on a programmable logic controller, or PLC, made by Siemens, the giant industrial controls company, the malware takes control. One of the last codes Stuxnet sends is an enigmatic “DEADF007.” Then the fireworks begin, although the precise function being overridden is not known, Langner says. It may be that the maximum safety setting for RPMs on a turbine is overridden, or that lubrication is shut off, or some other vital function shut down. Whatever it is, Stuxnet overrides it, Langner’s analysis shows.
"After the original code [on the PLC] is no longer executed, we can expect that something will blow up soon," Langner writes in his analysis. "Something big."
For those worried about a future cyber attack that takes control of critical computerized infrastructure – in a nuclear power plant, for instance – Stuxnet is a big, loud warning shot across the bow, especially for the utility industry and government overseers of the US power grid.
"The implications of Stuxnet are very large, a lot larger than some thought at first," says Mr. Assante, who until recently was security chief for the North American Electric Reliability Corp. "Stuxnet is a directed attack. It's the type of threat we've been worried about for a long time. It means we have to move more quickly with our defenses – much more quickly."
This is some of the new warfare, and it is only rarely going to be visible. Defense must obviously be played. It's inevitable that the military and NSA have worked, and continue to work, on offensive cyberwarfare.
And it may be vastly more destructive than car bombs and shoe bombs.
UPDATE, 9/25/10, 8:20 p.m.: per regular commenter nous, cue the New York Times, right on schedule, page A4 of the 9/26/10 edition, David Sanger's Iran Fights Malware Attacking Computers. (Crappy headline: it's ambiguous. Is Iran fighting the malware, or fighting computers by using malware? Also, more importantly, the malware is attacking more than "computers.")
No news in the article you haven't read vastly more about here, and only one quote of any interest:
[...] “It is easy to look at what we know about Stuxnet and jump to the conclusion that it is of American origin and Iran is the target, but there is no proof of that,” said James Lewis, a senior fellow at the Center for Strategic and International Studies in Washington and one of the country’s leading experts on cyberwar intelligence. “We may not know the real answer for some time.”
Based on what he knows of Stuxnet, Mr. Lewis said, the United States is “one of four or five places that could have done it — the Israelis, the British and the Americans are the prime suspects, then the French and Germans, and you can’t rule out the Russians and the Chinese.”
As I said, no news. But I win my shiny nickel.
UPDATE, 9/25/10, 11:25 p.m.: On second thought, these 'graphs are also worth noting for the record:
[...] The semiofficial Mehr news agency in Iran on Saturday quoted Reza Taghipour, a top official of the Ministry of Communications and Information Technology, as saying that “the effect and damage of this spy worm in government systems is not serious” and that it had been “more or less” halted.
But another Iranian official, Mahmud Liai of the Ministry of Industry and Mines, was quoted as saying that 30,000 computers had been affected, and that the worm was “part of the electronic warfare against Iran.”
For whatever it's worth.
UPDATE, 9/30/10, 10:25 p.m.: John Markoff and David E. Sanger have a substantive update here.
Deep inside the computer worm that some specialists suspect is aimed at slowing Iran’s race for a nuclear weapon lies what could be a fleeting reference to the Book of Esther, the Old Testament tale in which the Jews pre-empt a Persian plot to destroy them.
That use of the word “Myrtus” — which can be read as an allusion to Esther — to name a file inside the code is one of several murky clues that have emerged [....]
The stuxnet software is exceptionally well written. It makes very, very sure that nothing crashes and no outward signs of the infection can be seen, and, above all, it makes pretty sure that its final payload, which manipulates parameters and code in the SPS computer, is only executed if it is very certain to be on the right system.
I smell a hoax/psyop. You're talking about a highly sophisticated piece of code that cannot be tested in a "real" setting before being deployed, and that is supposedly flawless? Sorry, I don't buy it. Perhaps the target is Iran's nuclear power operation, but the "weapon" is publicity that will compel Iran to design entirely new security protocols for its computer systems, significantly delaying putting operations on-line (and possibly compromising existing protocols in the process).
Posted by: paul lukasiak | September 22, 2010 at 04:52 PM
I am no kind of technical expert on any of this: not remotely.
But if you read through the links, the technical details seem plausible enough to convince all sorts of people who should otherwise be very difficult to convince, it seems to me.
To be sure, perhaps it's way over-hyped. I couldn't say. But enough people with credibility seem to think otherwise that I find it more than interesting.
I've known Bruce Schneier since we were both very young; he's impressed, and his opinion carries weight with me.
Posted by: Gary Farber | September 22, 2010 at 05:07 PM
If the Koch Brothers and their astro-turf kochsuckers approach their goal of destroying the U.S. Government, Stuxnet should be used by the U.S. Government, if it is the author, to destroy all Koch Industries manufacturing facilities.
Mortal enemies within and without.
Posted by: Countme? | September 22, 2010 at 05:46 PM
See also: the alleged 1982 sabotage of the Soviet Urengoy–Surgut–Chelyabinsk natural gas pipeline by the CIA
The pipeline, as planned, would have a level of complexity that would require advanced automated control software (SCADA). The pipeline utilized plans for a sophisticated control system and its software that had been stolen from a Canadian firm by the KGB. The CIA allegedly had the company insert a logic bomb in the program for sabotage purposes, eventually resulting in an explosion with the power of three kilotons of TNT.
The CIA was tipped off to the Soviet intention to steal the control system plans by documents in the Farewell Dossier and, seeking to derail their efforts, CIA director William J. Casey followed the counsel of economist Gus Weiss, and a disinformation strategy was initiated to sell the Soviets deliberately flawed designs for stealth technology and space defense. The operation proceeded to deny the Soviets the technology they wanted to purchase to automate pipeline management; a KGB operation to steal the software from a Canadian company was then anticipated, and, in June 1982, flaws in the stolen software led to a massive explosion of part of the pipeline.
It's not clear whether any of the above actually happened, though.
Posted by: Jacob Davies | September 22, 2010 at 05:51 PM
Paul -- that was my first thought as well, but in order for that to work there have to be a bunch of people in on the whole thing, many of whom aren't that closely tied to the usual alphabet soup suspects.
Which is not to say that there isn't a big psyops upside for even a partial success, just that I think there has to be more substance to this than mere bluff. They have to have, at the least, managed to infect a bunch of machines in Iran to make this threat plausible, and they likely would have to have compromised the Russian contractors in order to do that. Without those details the rest of the social engineering sort of falls apart.
Posted by: nous | September 22, 2010 at 06:35 PM
If you are talking high-level advanced automated control software (SCADA), we are talking very heterogeneous systems, aren't we?
I've seen high level systems built on traditional UNIXes like HP-UX and Solaris, things like SCO, Linux, QNX, and of course some Windows.
Kind of hard to target them all, though if they have some subset in mind, and if that subset is net-connected, it might be possible.
(If it was me, I wouldn't put my centrifuges on an open net.)
Posted by: john personna | September 22, 2010 at 08:30 PM
"If it was me, I wouldn't put my centrifuges on an open net."
As I quoted: "stuxnet is a class of nation-state, weapons-grade attack software not seen publicly before. It uses four different zero-day exploits, two stolen certificates to get proper insertion into the operating system, and a really clever multi-stage propagation mechanism, starting with infected USB sticks and ending with code insertion into Siemens S7 SPS industrial control systems. One of the zero-days is a USB-stick exploit named LNK that works seamlessly to infect the computer the stick is put into, regardless of the Windows operating system version – from the fossil Windows 2000 to the most modern and supposedly secure Windows 7."
Stuxnet is non-net-dependent; it's spread by USB memory stick.
Posted by: Gary Farber | September 22, 2010 at 08:38 PM
The Natanz theory doesn't hold up. The only real basis is the idea that the target might have been in Iran (due to the large number of infections) and the effect should show up around Jan 09 (because that's when stuxnet would cease spreading.) But capacity at Natanz increased during that time. According to the chart you included, the decrease didn't come for another six months or so (summer of 09). Further, the decrease was only on the order of 15%, hardly a catastrophic result. (And my reading of the IAEA report indicates an actual decrease of only about half that level; 6-7%.)
There are other plausible explanations for the Natanz decrease (e.g. maintenance or sparing issues, a temporary shortage of uranium hexafluoride, preparation to switch to a new type of equipment, changes related to the later switch from 3.5% to 20% enrichment, no benefit to Iran from dramatically increasing their low-enriched uranium stockpiles at this time).
Posted by: Mojo | September 22, 2010 at 10:17 PM
"There are other plausible explanations for the Natanz decrease (e.g. maintenance or sparing issues, a temporary shortage of uranium hexaflouride, preparation to switch to a new type of equipment, changes related to the later switch from 3.5% to 20% enrichment, no benefit to Iran from dramatically increase their low enriched uranium stockpiles at this time)."
Definitely.
Posted by: Gary Farber | September 22, 2010 at 10:21 PM
I missed the USB in my skim. Interesting also if it is limited to the Win-Siemens architecture?
Posted by: John Personna | September 22, 2010 at 11:40 PM
If the malware is spread by USB stick, then one of two things would appear to be true:
1) whoever generated the malware has (or believed that they would have) someone on the inside at the target location. Someone who could insert the USB stick.
2) the intention is to infect a lot of (supposedly new and unused) USB sticks, in the expectation that one or more of them will be acquired, and then get used, at the target location.
From the number of systems reported to be infected, it would seem that the latter is the more likely transmission vector.
Which suggests that one interesting analysis approach might be to figure out how to detect it on a memory stick. And check lots and lots of them. And then back-trace the infected memory sticks, to see what their common origin is. Just the sort of combined "lousy legwork" and massive analysis that major intelligence organizations are very good at. Which, in turn, would suggest that the origin (if not the target) is probably not totally anonymous any more.
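Very roughly, I picture the per-stick scanning step looking something like this – with the caveat that the hashes below are pure placeholders, not real signatures, and any serious scanner would look for behavior, not just known files:

    import hashlib
    from pathlib import Path

    # Hypothetical sketch of "check lots and lots of memory sticks": hash
    # every file on a mounted stick and flag anything matching a list of
    # known-bad component hashes. The hashes here are placeholders only.
    KNOWN_BAD_SHA256 = {
        "0" * 64,  # placeholder hash, not a real Stuxnet signature
        "f" * 64,  # placeholder hash, not a real Stuxnet signature
    }

    def scan_stick(mount_point):
        """Return paths of files on the stick whose hashes match the bad list."""
        root = Path(mount_point)
        hits = []
        if not root.is_dir():
            return hits  # nothing mounted at this path
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            # Fine for a sketch; a real scanner would hash in chunks.
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                hits.append(path)
        return hits

    if __name__ == "__main__":
        for p in scan_stick("/media/usb0"):  # hypothetical mount point
            print("suspicious file:", p)

The hard part, obviously, isn't this loop; it's getting your hands on enough sticks, and on records of where they came from.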
Posted by: wj | September 23, 2010 at 12:37 AM
Paul,
Is there a reason that someone couldn't build a duplicate of the target SCADA, with fake inputs representing the actual controls and instrumentation that the actual SCADA is hooked in to, and fully test the worm on the duplicate SCADA?
I would have imagined that that was what people do when they build a new SCADA (wouldn't want to debug the code on a nuclear power plant while it was actually running...), so I would have imagined that there would even be a standard framework for setting up SCADA development test beds. But I don't really know anything about the subject, is there really nothing like that?
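Conceptually, I'd imagine even a toy version looks something like this – fake sensor readings feeding whatever control logic you want to exercise, so you can watch what the code does without any real plant hardware. Every name and threshold here is invented; I have no idea what a real SCADA test framework looks like:

    import random

    def fake_pressure_sensor():
        # Fake input standing in for real instrumentation.
        return random.uniform(90.0, 110.0)  # pretend pressure, in psi

    def control_logic(pressure, valve_open):
        # The logic under test: open a relief valve above a setpoint,
        # close it again below a lower setpoint.
        if pressure > 105.0:
            return True
        if pressure < 95.0:
            return False
        return valve_open

    def run_test_bed(cycles=20):
        valve_open = False
        for step in range(cycles):
            pressure = fake_pressure_sensor()
            valve_open = control_logic(pressure, valve_open)
            print(f"step {step}: pressure={pressure:.1f} valve_open={valve_open}")

    if __name__ == "__main__":
        run_test_bed()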
Posted by: Charles S | September 23, 2010 at 05:02 AM
Not saying that the underlying information in the quoted article is wrong, but there are a lot of technical inaccuracies. That might just be an issue of translation from Technical to Everyday, but it might also indicate some fundamental misinformation (or disinformation).
Just two examples:
> Too large, too encrypted, too complex to
> be immediately understood, it employed
> amazing new tricks, like taking control
> of a computer system without the user
> taking any action or clicking any button
> other than inserting an infected memory
> stick.
I first encountered that "amazing new trick" in the wild in 1987, and I don't think it was new then. USB-key malware has been common since at least 2002.
> "Until a few days ago, people did not
> believe a directed attack like this was
> possible,"
That will come as a surprise to the industrial control systems dudes I work with; the possibility of a directed malware attack on an industrial DCS has been of great concern to everyone in the industry since the barriers between the corporate networks and the plant networks began being crossed around 1996, and has been discussed extensively on controls-guy message boards for the last 2-3 years. It was also discussed both in the industry and general press after the 2003 Northeast blackout (which was ultimately tied to poor tree trimming, not control systems, but which an inopportune software failure did make worse).
Cranky
Posted by: Cranky Observer | September 23, 2010 at 06:48 AM
> Is there a reason that someone couldn't
> build a duplicate of the target SCADA, with
> fake inputs representing the actual
> controls and instrumentation that the
> actual SCADA is hooked in to, and fully
> test the worm on the duplicate SCADA?
In theory not impossible, but (a) it would be an extremely expensive endeavor (b) obtaining that much hardware and licensed software (these systems are neither cheap nor sold in large quantities) would be difficult to conceal for very long in the industry without extreme Manhattan-project levels of security (c) control systems in working plants are constantly being modified, expanded, contracted, changed, optimized, etc; if you have that much knowledge of the target's systems why not just pay the guy who is stealing the plans for you to set a small explosive device in the computer room? [*]
Cranky
[*] That's the weak link in most super-cyber-espionage plots: it is always cheaper to just bribe the cleaning lady to steal the stuff for you.
Posted by: Cranky Observer | September 23, 2010 at 06:53 AM
Paul,
Is there a reason that someone couldn't build a duplicate of the target SCADA, with fake inputs representing the actual controls and instrumentation that the actual SCADA is hooked in to, and fully test the worm on the duplicate SCADA?
I have absolutely no clue :)
-- my take was based on the rather extravagant claims for the software, and my minimal experience literally years ago in writing and debugging even the simplest program.
Posted by: paul lukasiak | September 23, 2010 at 08:21 AM
Here's what I think:
I think the government tends to work the middle of the technological spectrum better than it does the edges. Defense is a decent example of this: by the time a system actually gets deployed, its technology is many years old. I recall seeing some press releases in the late 1990s about missile defense tests, talking up the technology. None of the technology in that missile was much less than a decade into its development cycle; some of it much older. Sure, some of the specific hardware was brand-new, but the technology wasn't.
I can't see the government contracting this kind of work out to a straight-up defense contractor; that's kind of a low-bid proposition. I also can't think that the CIA (for example) has enough high-caliber hackers to pull this off, although anything I say about the CIA should be viewed with Spock-canted eyebrows.
The alternative might be that this kind of work has been farmed out to a national laboratory like Sandia or LLNL, or that an analog of a national laboratory for developing cyber-weapons has been established in near-absolute secrecy. In any case, the need for secrecy would be very high, and the cartoon view of hackers is that they resist authority.
This is just a really long version of "I don't know either"; just throwing some ideas out there. It's probably horribly irresponsible of me to speculate, and I'm sure I'm as good as equipping Iran with nuclear-tipped precision-guided long-range missiles just by thinking about this kind of thing.
Posted by: Slartibartfast | September 23, 2010 at 08:48 AM
"Not saying that the underlying information in the quoted article is wrong...."
This would be one reason I provided links to several articles.
Posted by: Gary Farber | September 23, 2010 at 08:53 AM
The amount of intelligence needed to specifically target a particular system must be staggering. I know a (very) little about automated control systems, and while there are a limited number of platforms on which automation controls run, I can't see how one could target a specific system in the way described unless you had a virtual replica of the system, i.e. a true-to-life simulation. The complexity of sensor inputs, controls, etc., customized tolerances and user inputs would be extensive, plus many components (switches, motors, etc.) have their own on-board logic.
This isn't to say it couldn't be done, but whoever did this needed to know the target system pretty intimately.
If you had access to that kind of intelligence, wouldn't there be easier solutions?
I guess this is a long way of saying I agree with Cranky.
Posted by: bluefoot | September 23, 2010 at 09:56 AM
In theory not impossible, but (a) it would be an extremely expensive endeavor (b) obtaining that much hardware and licensed software
since it definitely targets a Siemens PLC, there's a good chance Siemens is involved in the analysis, and likely wouldn't charge for any of that, since it's in their best interest to figure out how the worm operates (so as to prevent future infections)
Posted by: cleek | September 23, 2010 at 11:18 AM
maybe the worm is related to this... ?
Posted by: cleek | September 23, 2010 at 11:35 AM
I'm guessing there aren't all that many people who could afford the system in question for testing, but it's well within the means of the US government.
/obvious
Posted by: Slartibartfast | September 23, 2010 at 11:36 AM
I'm waiting for the globally televised demand for One....Million....Dollars.....
Posted by: russell | September 23, 2010 at 11:44 AM
As I quoted Frank Rieger:
Posted by: Gary Farber | September 23, 2010 at 12:21 PM
It seems that Mr. Ralph Langner is doing a fair bit of self-promotion in this and is stoking the hype some.
I don't think that Langner's third claim (viz. "The attack combines an awful lot of skills ... This was assembled by a highly qualified team of experts") stands up particularly well.
It has been possible to buy 0-day exploit code for some time now. The reason why prior malware hasn't used 4 0-day exploits is because that's overkill. The simplest explanation for that aspect of Stuxnet is that the attacker spent a bit of money (hundreds to thousands of dollars) for exploit code and prioritized probability of success over cost.
Similarly, an attacker doesn't need great sophistication to get access to code-signing certificates. If the certificate's owner has weak internal controls, you could simply bribe (or extort) an employee to get it for you.
That does make this particular attack (and, by extension, the attacker) stand out, but it stands out not by its sophistication but by its complication and degree-of-overkill.
I like this abstract of a paper on Stuxnet:
Posted by: elm | September 23, 2010 at 12:24 PM
"Well within the means of the US government."
Or the Israeli government. And as described, this would be an ideal attack from their point of view: highly targeted, highly effective, yet deniable. I think the main barrier to entry is buying/obtaining copies of the industrial software involved, and setting up a test bed somewhere. That's expensive, no doubt, but I doubt it would break the IDF or Mossad bank. If you're highly motivated -- and whatever else one thinks of Goldberg's article, it showed the Israelis are -- you'll find several $million somewhere to pay for it.
Posted by: Thomas Nephew | September 23, 2010 at 12:25 PM
People seem to be suggesting this is all hype; it really isn't. That is, I can't say that aspects might not be entirely hyped; the existence of the virus, and an awful lot of specific knowledge about it, though, is indisputable.
Interesting graph:
Posted by: Gary Farber | September 23, 2010 at 12:30 PM
Some other good quotes.
This is a major story, and if you want a patented case of one you've read here that will be showing up any day now in the New York Times, this is it.
I will bet anyone a shiny nickel on this.
Posted by: Gary Farber | September 23, 2010 at 12:35 PM
Additionally, the large size of the Stuxnet files (500kb and 25kb) points away from the normal virus-writing crowd.
Compare some of the file sizes listed here (e.g. 73728, 54482, 5554, 6038) and you'll see that a 500kb file is quite big by these standards.
Posted by: elm | September 23, 2010 at 12:36 PM
And some more level-headed analysis of Stuxnet.
This posting, in particular, shows that it infected more systems in India and Indonesia than in Iran.
Posted by: elm | September 23, 2010 at 12:47 PM
i'm impressed that nobody here has asked what a "zero-day" exploit is.
i had no idea the ObWi gang would be familiar with the arcana of haxxorz.
Posted by: cleek | September 23, 2010 at 01:01 PM
elm, both our links on the number of infections by country date from July. Clearly, more up-to-date figures would be more useful.
I haven't seen those figures yet; so far, all the quotes on that that I've read go back to July; if you run across more current figures, please let us know.
Posted by: Gary Farber | September 23, 2010 at 01:04 PM
Is there something about Stuxnet that gives it a particular affinity to countries whose names start with I? Look out, Italy, Ireland and (unless you're Francophone or something) Ivory Coast.
Posted by: Hogan | September 23, 2010 at 01:05 PM
i'm impressed that nobody here has asked what a "zero-day" exploit is.
i had no idea the ObWi gang would be familiar with the arcana of haxxorz.
I didn't ask because I'm pretty sure I wouldn't understand the explanation either.
Posted by: ajay | September 23, 2010 at 01:06 PM
I place even less trust in general newspaper/media articles on technical subjects than in those on politics or other subjects, and in quotes, but for what it's worth:
That again points back to July, save for Mr. O'Murchu's quote. On August 6th:
Posted by: Gary Farber | September 23, 2010 at 01:09 PM
Agree with Gary on this. Whether or not it's been hyped, it isn't a hoax.
As far as the salesmanship aspect, keep in mind that there are a lot of people looking at the US Mil with an eye towards cutting budget, and with the table tilting towards COIN at the moment, there's a big chunk of the brass (especially in the Air Force) looking for ways to save their careers/budgets/commands. Cyberwarfare ties in with a lot of the RMA stuff that Rumsfeld was pushing before Iraq and Israel's most recent foray in Lebanon put RMA to pasture. The tech side of the military and their contractors need something, even if it's not originally theirs, to point to as a reason not to take away their allowance.
Posted by: nous | September 23, 2010 at 01:16 PM
I didn't ask because I'm pretty sure I wouldn't understand the explanation either.
Apparently it's a system vulnerability that the attackers discover before the system developers do.
Not like these guys.
Posted by: Hogan | September 23, 2010 at 01:18 PM
The source of this malware is obvious. The real question is... what does Wintermute have against the Iranians?
Posted by: NCfromMKE | September 23, 2010 at 01:28 PM
Win.
Posted by: Catsy | September 23, 2010 at 01:30 PM
They're resourceful and committed, but I'm not sure they'd be all that interested in Indonesia.
But maybe that's the point of targeting Indonesia...
Posted by: Slartibartfast | September 23, 2010 at 01:30 PM
Governments which definitely have the capabilities: US, UK, France, Germany, Japan, Israel, China. There are a number of other countries (especially in Europe) which in theory have the money and the skills but likely not the motivation. For all we know it could have been the Dutch.
Posted by: gerbal | September 23, 2010 at 01:48 PM
"Governments which definitely have the capabilities: US, UK, France, Germany, Japan, Israel, China."
Russia, of course, as well. And as you say, others could if they really really wanted to.
Posted by: Gary Farber | September 23, 2010 at 02:01 PM
To clarify my position, I don't think it's all hype. It's a real piece of malware with aspects that point to something new and potentially-interesting going on. The level of attention and effort are interesting and I think the links I've posted support that.
I do think that some of the people commenting on it exaggerate the level of sophistication and skill required to execute such an attack. It requires money, people, and moderate organization but not necessarily anything extraordinary.
Posted by: elm | September 23, 2010 at 02:04 PM
"It requires money"
In selling software services, I know a little about this; the answer is that I can write software to do anything, given adequate money and time.
There is nothing as commonplace as process controllers, so the only complexity is in knowing the BIOS/OS/application code details of the specific target you want to infect.
Just ballpark quoting the project, I think it is two to three architects/coders on those specific platforms for 6-8 months, plus hardware for testing... $3 million tops.
Posted by: Marty | September 23, 2010 at 02:16 PM
I didn't ask because I'm pretty sure I wouldn't understand the explanation either.
:)
if i release a program Tuesday and hackers find a way to break its protections on Thursday, that's a 2-day exploit.
if they break it on Wednesday, that's a 1-day exploit.
a zero-day exploit means they found a way to break the protections within 24 hours of its release (either because the exploit was easy, or they were working from a pre-release version, or both).
Posted by: cleek | September 23, 2010 at 02:18 PM
cleek, I think you're referring to the warez sense of 0-day. A 0-day exploit is one that's unknown to the software vendor (or at least not-yet-fixed).
Posted by: elm | September 23, 2010 at 02:29 PM
cleek, I think you're referring to the warez sense of 0-day.
yup.
A 0-day exploit is one that's unknown to the software vendor (or at least not-yet-fixed).
yeah, i've never liked that usage. guess that's why i forgot about it :)
it seems silly to start the clock when the vendor learns about the problem. the real clock started when the exploit became possible. if it takes you three years to learn about a problem, that means attackers have had three years to exploit it - an attack that happens after you start the fix? BFD.
Posted by: cleek | September 23, 2010 at 02:45 PM
bluefoot: The amount of intelligence needed to specifically target a particular system must be staggering
Maybe, maybe not. Let's say you know that control systems with serial numbers X, Y, and Z wound up being sold to the target country, and the control systems pass their serial numbers through to the Windows software (which would be a pretty typical setup for most software). It would be easy to target a particular system by serial number in that case.
I don't know anything about the SCADA stuff in particular, I'm just saying that typically hardware has serial numbers and those serial numbers are available to software. And acquisition of serial numbers is a lot easier than getting physical access to the equipment before it is sold, since you can do it after the fact.
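Something like this, say – every function name and serial number below is invented, since I have no idea what the real vendor interfaces look like; it's just to show how cheap the check itself is once you're running on the Windows box that talks to the controller:

    # Invented names throughout; purely an illustration of matching on a
    # hardware serial number exposed to the Windows-side software.
    TARGET_SERIALS = {"SN-EXAMPLE-001", "SN-EXAMPLE-002"}  # made-up serials

    def read_controller_serial():
        # Stand-in for whatever vendor API reports the attached
        # controller's serial number.
        return "SN-EXAMPLE-001"

    def run_payload_if_targeted():
        if read_controller_serial() in TARGET_SERIALS:
            print("target hardware found; sabotage logic would go here")
        else:
            print("not the target; do nothing")

    if __name__ == "__main__":
        run_payload_if_targeted()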
Posted by: Jacob Davies | September 23, 2010 at 03:08 PM
What I don't understand is (a) how you could do this without having someone on the inside, but (b) why you would do this if you *did* have someone on the inside.
Posted by: Doctor Science | September 23, 2010 at 11:18 PM
S.P.E.C.T.R.E. or SMERSH?
Posted by: elbrucce | September 24, 2010 at 12:41 AM
I'd like to make the case for this being a dud or a misfire. So much work went into targeting a single configuration of the SCADA that it becomes like a rocket launch, and we know how frequently those go awry. There's a kill switch for Jan 2009, and I don't see any really viable candidates for a successful victim. What if the implemented system deviated from the "stolen" blueprint in some meaningful way, leading to mission failure?
Posted by: rweaver | September 24, 2010 at 02:36 AM
why you would do this if you *did* have someone on the inside.
Presumably, because a source on the inside of the Iranian bomb programme (if such a thing existed, which, let's be clear here, it doesn't) is too valuable an asset to burn for a single act of sabotage, and dropping a worm into a SCADA system is less easily traceable to the source than other more overt acts of sabotage.
Posted by: ajay | September 24, 2010 at 05:00 AM
Well, using the vast resources at my command, I believe I have cracked the enigmatic "DEADF007" code. It clearly is partial 1337-speak for "Deadfoot," which is, as everyone knows, the site of "NY's Original Hardcore Mutant Thrash Metal Band," known as "Combat." Be afraid. Be very afraid.
Posted by: Allienne Goddard | September 24, 2010 at 05:12 AM
Gary -- This is a major story, and if you want a patented case of one you've read here that will be showing up any day now in the New York Times, this is it.
I will bet anyone a shiny nickel on this.
Cha ching!
Posted by: nous | September 25, 2010 at 07:59 PM
Post updated; thanks, nous.
Posted by: Gary Farber | September 25, 2010 at 08:28 PM
To cross from the digital realm to the real world? And do you think the boundary still exists? Our daily lives are firmly connected to the Web; there are no boundaries anymore. And it's not the first attack of its kind – just remember the Russian attacks on Baltic states' banks and financial institutions.
Posted by: Dilbert | September 26, 2010 at 08:03 AM
They didn't actually literally blow up any banking equipment, last I looked.
Posted by: Gary Farber | September 26, 2010 at 08:27 AM
More on the worm at DEBKA.
Don't know anything about this source besides that it appears to be right wing and Israeli. Also not sure that I trust the money quotes about the worm's capabilities as they make it sound almost conscious and guided, rather than procedural. Something is off in the reporting on this, but it is worth a read, nonetheless, just to see how the report projects its own motivations onto the worm.
There's something vaguely surreal about this whole story.
Posted by: nous | September 29, 2010 at 12:59 PM