His point was that having a backdoor jeopardizes both confidence in encryption and its robustness. This was likewise the general consensus at the 2016 RSA Cryptographers' Panel: https://www.youtube.com/watch?v=k76qLOrna1w
The government is essentially requesting that Apple create a backdoor via a custom iOS, which is possible because iPhones currently accept a new iOS without the passcode being entered. This is a vital point, because a common lay perspective is “just have a really good backdoor and keep it secret”. The RSA cryptographers, plus Matt Blaze, argue that this is technically risky.
Apple may now feel forced to alter the iPhone's design so that it no longer accepts iOS updates without the user physically entering their passcode, which would itself be handled within the Secure Enclave cryptographic coprocessor. That would close off every way of breaking in except chip decapping.
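As a purely illustrative sketch of that design change (these function names are hypothetical and are not real iOS or Secure Enclave APIs), the update path would refuse any firmware image unless the user had just authenticated with their passcode, in addition to the existing Apple signature check:

```python
# Hypothetical sketch only: none of these names correspond to real iOS or
# Secure Enclave APIs. The point is that a firmware image would be refused
# unless the user has just authenticated, on top of the signature check.

def verify_apple_signature(image: bytes, signature: bytes) -> bool:
    """Stand-in for the existing code-signature check (still required)."""
    return signature == b"signed-by-apple"             # toy stub

def secure_enclave_verify_passcode(passcode: str) -> bool:
    """Stand-in for passcode verification inside the Secure Enclave."""
    return passcode == "correct horse battery staple"  # toy stub

def apply_update(image: bytes, signature: bytes, passcode: str) -> bool:
    if not verify_apple_signature(image, signature):
        return False   # unsigned or modified images are rejected, as today
    if not secure_enclave_verify_passcode(passcode):
        return False   # the proposed new requirement: user must unlock first
    # install_firmware(image) would run here
    return True
```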
Not so sure it’s crazy zany high. Of course, it depends on the size of the facility, but for TS-cleared SCIFs, $280-$330 per square foot is not unheard of. I have no idea what floor space is needed. But if the facility build is, say, $300 per square foot as a raw cost, and to that we add overhead, G&A, and fee, then initial T&E/acceptance testing, and then start discussing the IT costs, we get into the $10 million range for a moderately sized facility.
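To make that arithmetic explicit, here's a rough sketch. Only the $300/sq ft raw construction figure comes from the estimate above; the square footage, the overhead/G&A/fee loading, and the T&E and IT numbers are placeholder assumptions of mine:

```python
# Rough cost build-up for a hypothetical, moderately sized SCIF. Only the
# $300/sq ft raw construction figure comes from the estimate above; every
# other number is an assumed placeholder for illustration.

sq_ft        = 12_000                     # assumed floor space
raw_build    = 300 * sq_ft                # $300/sq ft raw cost -> $3.6M

loaded_build = raw_build * 1.40           # assumed ~40% overhead + G&A + fee
test_accept  = 0.5e6                      # assumed initial T&E / acceptance testing
it_costs     = 4.0e6                      # assumed secure IT build-out

total = loaded_build + test_accept + it_costs
print(f"Rough total: ${total / 1e6:.1f}M")  # ~$9.5M, i.e. into the $10M range
```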
In my view: no, the bolded portion of (1) is not the case.
However, the government does have an uncontroversial legal path open to it that would make (1)'s bolded portion true. To avoid taking that path, I think they are seeking a precedent with this approach instead.
You’re changing the rules of your own hypothetical. You started with this:
But now you’re saying that with option 1, it is actually allowable to make a phone that can’t be cracked. So why isn’t it possible in option 2 as well? The NSA has many capabilities, but they aren’t wizards. Whether or not they can get into this particular phone, it’s certainly possible to design a phone they cannot get into.
I mean, the operating budgets of the NSA and Apple are in the same ballpark. If Apple can design a phone that Apple itself can’t crack (even with its private keys available), what makes you so sure the NSA can get in?
Exploiting a security flaw (whether it’s there intentionally or not) is not the same thing as weakening an encryption algorithm. Period.
Let me be more precise: there’s absolutely no way you can convince me that Apple would have to build a new building to accommodate a SCIF. Refitting office space to be a SCIF isn’t something you do on a weekend with stuff from Home Depot, but there are oodles and oodles of commercial office space around DC that has been redone to accommodate SCIFs. The notion that Apple must pay for a whole new building in Silicon Valley is patently absurd and not credible in the slightest. SCIFs do not need to be surrounded by tall fences, barbed wire, and all that stuff.
I think you misinterpreted my caveat. The thesis of my question does not suppose that any cracking effort must be successful - I’m simply saying that best effort is required in either choice. Results are not guaranteed, but “don’t try to crack the phone” is not an option.
Notably, the FBI admitted that they did not ask the NSA for help. And when Comey was directly pressed about whether the NSA had this power, he evaded the question.
Indeed, there is some skepticism about the claim that the FBI cannot do this itself.
So I think there is some chance this is 100% about getting the precedent, which may allow them to do this more cheaply and routinely, and may also apply to areas where they really don’t currently have a solution (WhatsApp?).
The problem is that Clarke does not identify the experts, nor does he or anyone else I have read explain how the NSA can crack the phone.
Do they believe the NSA already has Apple’s private key, and has written a signed OS update that disables the retry lockout? That’s at least plausible.
Do they believe that the NSA can brute-force AES-256? That’s not.
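For anyone who wants the arithmetic behind that, here's the back-of-the-envelope version (the trials-per-second figure is a deliberately absurd, generous assumption):

```python
# Why brute-forcing AES-256 isn't a realistic capability for anyone.
# Assume an absurdly generous 10**18 key trials per second, sustained forever.

keyspace          = 2 ** 256              # total AES-256 keys
trials_per_second = 10 ** 18              # generous assumption
seconds_per_year  = 60 * 60 * 24 * 365

years = keyspace / (trials_per_second * seconds_per_year)
print(f"{years:.3e} years")  # ~3.7e51 years; the universe is ~1.4e10 years old
```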
No, they aren’t the same thing. The FBI still needs to brute-force the passcode. If it were a complex passcode, they would never be able to do it, even if the modified iOS were loaded onto the iPhone to disable the 10-tries-and-erase feature.
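To put rough numbers on that: Apple has described the on-device passcode key derivation as being tuned to take about 80 ms per attempt, so even with the 10-tries-and-erase feature and escalating delays removed, worst-case brute-force time is driven entirely by passcode complexity (the passcode lengths below are just illustrative):

```python
# Rough worst-case brute-force times, assuming ~80 ms per attempt (the figure
# Apple has cited for its passcode key derivation) and no other rate limiting,
# i.e. exactly the situation the requested iOS build would create.

SECONDS_PER_ATTEMPT = 0.080

def worst_case(keyspace: int) -> str:
    seconds = keyspace * SECONDS_PER_ATTEMPT
    if seconds < 3600:
        return f"{seconds / 60:.0f} minutes"
    if seconds < 86400 * 365:
        return f"{seconds / 86400:.1f} days"
    return f"{seconds / (86400 * 365):.2e} years"

print("4-digit PIN:          ", worst_case(10 ** 4))   # ~13 minutes
print("6-digit PIN:          ", worst_case(10 ** 6))   # ~0.9 days
print("10-char alphanumeric: ", worst_case(62 ** 10))  # ~2e9 years
```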
These security features are not interchangeable; they are complementary. And the Clipper Chip has fuck-all to do with erasing the contents of a phone after ten errors.
Lots of other options, too. They could have figured out a hardware hack to avoid the reset (indeed, it already seems to be public knowledge that this is possible, using at least two different methods). Or they could have captured the data in transit.
Consider that this was a work phone. They destroyed their private phones. The odds of it having anything on it can be found on the corner of Slim and None.
It’s not clear whether the government can unlock this specific phone with its own resources (best estimates based on known crypto capabilities are “probably”). It is known that the government would already have the contents of the phone if it hadn’t directed the San Bernardino County IT people to bollix the phone’s iCloud sync.
A more fundamental reason they can’t do that is that it would be a flagrant breach of evidence chain-of-custody. Allowing the procedure you suggest opens the door to the following scenario:
If the facility is protecting something as valuable as Apple’s source code, plus all the financial transactions and other data that would be exposed by releasing a backdoor into the wild, then tall fences and barbed wire would be mere warning signs before an intruder even starts to encounter the real security.
Well, considering that Apple already protects its source code, I must presume that they already have some kind of security. Maybe a few mall cops guarding a mattress with the private keys hidden underneath.