Or they’ve decided that strategic enforcement serves the interests of the American people better.
What about the FBI turning a blind eye to the offenses of child-pornography end users in order to bust the ringleaders? Surely there’s some example of a government agency seeing the forest that will pass muster with you?
You’ll find it in the dictionary. One definition of ethics is conformance to what is commonly considered acceptable and responsible behavior. In the case of discovering vulnerabilities in common operating systems used by individuals, businesses, hospitals, and other institutions throughout the world, that means notifying the vendor and the major security organizations. Ethics may also dictate the need to limit, as necessary, public disclosure of the details to minimize the risks of an exploit.
The serious failure in ethics and judgment here is that this was just a common, everyday security flaw in Windows, of the kind that is regularly found and patched in large numbers. At best it was an exploit with a very limited lifetime; at worst, it could have been independently discovered by hackers and caused a great deal of damage even if the NSA itself had never been hacked. This had all the wisdom of boys playing with dangerous toys and having something blow up in their faces, except that it blew up on thousands of blameless users, including hospitals in numerous countries (it wasn’t just the British NHS).
If a Russian nuclear weapons system is under the control of an off-the-shelf version of Windows riddled with security flaws and connected to the Internet, we’re all in deep trouble! And if the NSA’s secret access to such a facility depends on a common security flaw and the hope that Microsoft doesn’t get around to patching it or the Russians don’t buy a decent antivirus program, then I’d say we need a better security agency!
That’s another failed analogy, in my view. Cracking the German encryption was a top-priority wartime initiative that was seen as a major key to victory and national survival, and involved some of the most intensive research and engineering effort of its kind in history conducted under unprecedented secrecy. It can hardly be compared to some kid finding a bug in Microsoft Windows.
I believe the US government should take an active role in defending this nation’s cyber security, including the security of our businesses and public facilities. If a foreign power is able to damage our economy through cyber attacks, then our government has failed to protect us and has failed at what many would agree is its primary function. Whether the NSA is the proper department to take on this role is another question, but I would argue that if, by withholding this information, they damaged US security, they should have to answer for why they felt it necessary to leave the country vulnerable.
That isn’t a substantive answer; that’s a dodge. Once more: do you think that the US Government should possess no zero-day exploits, no matter what important systems the Government may hold at risk with them?
Or if you think that the Government should hold only some zero-day exploits, what are some principles for what the Government should hold and what it should notify the public about?
ETA: and although it doesn’t directly relate to this cyber threat, let’s not kid ourselves that nuclear command and control is a cutting-edge set of systems that are constantly updated. The US still uses 8-inch floppy disks, and is probably not at risk from this sort of exploit precisely because it is so very outdated.
And a few hundred thousand ransomware attacks, while certainly harmful, aren’t quite the same as letting a city get razed by Nazi bombs and allowing thousands of civilians to die. I never said I was an analogy whiz. :):(
Point is, when the NSA sits on a zero-day, they know the ramifications of that. While I appreciate that this ethical dilemma might be new to some audiences, it’s been covered. Like I said, barring some evidence of negligence or particularly bad risk assessment over this specific issue, I’m willing to give the NSA the benefit of the doubt.
There’s always a chance that other governments will find the same exploit and use it to infiltrate systems in the US. Seems to me, the consequences of US systems being hacked can be much greater than any lack of tools for the US to use. I might accept an exception if the exploit only worked on non-US computer systems (e.g. vulnerabilities specific to industrial systems not imported to the US).
If I had to hazard a guess, I’d say that when vulnerabilities such as this are found and/or brought to the NSA, they use a risk-management process whereby they weigh the risks and advantages a given vulnerability presents. A vulnerability that no one else knows about, with no exploit in the wild, would certainly be low risk. The advantages of it are self-evident. A determination is made whether or not to notify the vendor. When something changes, such as the release of the exploit as happened several months ago, a NEW determination is made. Coincidentally, the vendor’s patch came out around then. Blame the piss-poor vulnerability-management programs of the companies that were affected instead of the NSA.
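Purely to illustrate the kind of weighing I mean (this is not the NSA’s actual process; the factors, weights, and thresholds below are made up for the example), a toy version might look like this:

```python
# Toy sketch of a retain-vs-notify decision for a vulnerability.
# All factors, weights, and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Vulnerability:
    intelligence_value: float      # 0-1: how useful it is for collection
    severity_if_exploited: float   # 0-1: damage if someone else uses it
    chance_of_rediscovery: float   # 0-1: odds another party finds it soon
    exploit_in_the_wild: bool      # has the exploit leaked or been seen in attacks?

def decide(v: Vulnerability) -> str:
    """Return 'retain' or 'notify vendor' from a crude weighted comparison."""
    if v.exploit_in_the_wild:
        # Once the exploit is public, retention has no remaining advantage.
        return "notify vendor"
    risk = v.severity_if_exploited * v.chance_of_rediscovery
    benefit = v.intelligence_value
    return "retain" if benefit > risk else "notify vendor"

# Initial determination: nobody else knows about it, no exploit in the wild.
smb_bug = Vulnerability(intelligence_value=0.8,
                        severity_if_exploited=0.9,
                        chance_of_rediscovery=0.2,
                        exploit_in_the_wild=False)
print(decide(smb_bug))           # -> retain

# Something changes (e.g. the exploit leaks): a NEW determination is made.
smb_bug.exploit_in_the_wild = True
print(decide(smb_bug))           # -> notify vendor
```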
What about a hypothetical exploit used to gain access to an ISIS lieutenant’s computer, which results in actionable intelligence about an impending attack on a friendly target? You’d prefer they release that vulnerability and give ISIS a chance to patch their computer?
How comfortable are you with this position, given that we, as members of the public, are in a poor position to judge the pros and cons of the NSA never maintaining zero-days for widely available commercial computer systems? You’ve highlighted one “pro” of such a policy, but do you feel you have a good handle on what the “cons” are?
It wasn’t a dodge, it was a direct answer. I’ll state it in different terms. A zero-day exploit in a common and widespread operating system is not likely to stay zero-day (i.e., unknown to the public) for very long. The only question is what will happen first: its discovery by ethical agencies who provide fixes and protections, or its discovery by hackers who wreak havoc. It is, therefore, a very poor tool, because it is both likely to be short-lived and potentially risky. It’s just an odd happenstance that it made it into the wild because the NSA itself was hacked; it was more likely, and foreseeable, that it would have been independently discovered by hackers.
I’m not, for the reason given above – it was very poor judgment. Kind of like defending yourself with a stick of dynamite that will become inert and useless in a short time but meanwhile is so unstable that it might spontaneously blow up in your hand. Not really a great weapon.
It’s not just a simple risk vs. benefit calculus, it’s a moral issue. Putting our own people at risk should not be an acceptable trade-off for maintaining an offensive capability. It would be like waiving worker safety rules to build a new weapon for war.
This doesn’t make any sense. You’re arguing against your own case.
I find it totally inconsistent to say in this case that zero-days are both short-lived and risky, implying that something is both dangerous and useless at the same time.
As a general proposition, if it is useless, then few computers would be vulnerable to it over a short period of time. If it is risky, then many computers would be vulnerable to it over a longer period of time.
In this case, we can generally agree that this exploit was risky. But then your implication that the NSA would not find value in it is shown to be wrong: it IS successful in infecting computers, showing that it is “useful.”
So which is it? You sort of have to pick a horse to ride on, because your two horses are headed in opposite directions.
So go back to my extreme case: the US Government can foil a Russian nuclear weapons launch because of a flaw in Windows. If the Government keeps it secret, then it is much more likely that 75,000 computers in dozens of countries could be held for $300 ransom. If the Government does not keep it a secret, then there’s a small fraction of a smurf’s-ass-hair of a chance that hundreds of millions of Americans would die in a nuclear war.
The claim that this is a simple moral question is not at all clear to me. Which risk should we take more seriously: the relatively high odds that a very small percentage of the many hundreds of millions of Windows computers in the world will be ransomed, perhaps with risks to the lives of relatively small numbers of people who may be in hospitals; or the remote chance that, if nuclear war were to break out, we could save many hundreds of millions of lives?
I think that’s a pretty fucking hard question to answer, in terms of responsibility to the security of people in this country and elsewhere.
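For what it’s worth, you can put the trade-off in rough expected-value terms. Every number below is invented; the point is only to show the structure of the comparison, not to settle it:

```python
# Back-of-the-envelope expected-harm comparison for the hypothetical above.
# Every number here is made up purely to illustrate the shape of the trade-off.

# Option A: keep the exploit secret and hold the (hypothetical) launch system at risk.
p_ransomware_outbreak = 0.5          # odds the flaw leaks or is rediscovered and abused
computers_hit = 300_000              # machines ransomed in that case
harm_per_computer = 1                # arbitrary harm units per ransomed machine
expected_harm_retain = p_ransomware_outbreak * computers_hit * harm_per_computer

# Option B: disclose, lose the access, and accept a tiny added chance of catastrophe.
p_added_catastrophe = 1e-6           # the "smurf's-ass-hair" of a chance
lives_lost = 300_000_000             # hundreds of millions of lives
harm_per_life = 1_000                # arbitrary: a life weighted far above a ransomed PC
expected_harm_disclose = p_added_catastrophe * lives_lost * harm_per_life

print(f"expected harm if retained:  {expected_harm_retain:,.0f}")
print(f"expected harm if disclosed: {expected_harm_disclose:,.0f}")
# With these made-up inputs the two sides land in the same order of magnitude,
# and changing any of them flips the answer, which is exactly why it's hard.
```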
While the questions asked in this thread are all reasonable, in this particular case, Microsoft knew about the bug and fixed it and issued a patch months before this malware was released. The problem is that many people and institutions are slow to upgrade.
Now, is the NSA liable for creating this malware and then losing it? If it were a private party that had done this, I think they would be liable. The government enjoys broad immunity from lawsuits, but can still be morally liable, of course. The “someone stole my loaded gun” analogy seems apt. Did they take reasonable precautions to avoid that?
That is a ridiculous example. There are many, many levels of protection against a Russian nuclear attack, from basic diplomacy & international treaties to mutual assured destruction and missile defense systems. We got through the Cold War without either side having to sabotage the other’s offensive capabilities. And if all those layers of protection fail, to the point where an NSA hack is all that’s standing in the way of a global thermonuclear war, then the situation is already hopeless.
In fact, if a Russian nuclear weapons system has a software vulnerability, the more significant danger is for someone to infiltrate it and activate the weapon. It’s in our best interest to make sure their systems are secure.