Yes to which possibility: the hardware exploit or the manufactured backdoor?
Perhaps it will turn out that Taiwan Semiconductor Manufacturing Company is actually an NSA operation. But I wonder whether it is possible to build some kind of backdoor, unknown to Apple, into the chip that does the encryption.
I guess you don’t do a lot of online shopping? Or never went to a hospital that needed to request your medical records from your doctor? I guarantee you your doctor didn’t drive down to the hospital and hand your records to the ER doc in person. Can you even invest in the stock market without a computer these days?
The “Internet shouldn’t be encrypted” ship sailed a few decades ago. Feel free to create your own insecure network service if you want, though; I’m sure you’ll draw a lot of like-minded customers.
My contention is that they are already in the phone. This public song and dance is to convince any associates or terrorists in general that they are safe using their phones.
The very fact that there is such a large public outcry is in fact evidence that the phone has been broken. One does not broadcast a real weakness deliberately, but one does broadcast what one wants the other side to think the weakness is.
If the history of cryptography shows us anything, it’s that people have way too much confidence in their encryption.
Let’s look at some of the assumptions you’ve made. The claim that breaking the key would take a billion billion years depends on that theory being accurate, and on Apple having implemented their version of the code properly. Codes being broken because they were not implemented properly is pretty commonplace.
But, let’s assume that your above statement is theoretically accurate, and that Apple has produced a perfect implementation of that theory.
That being the case, I would say that it is still almost a certainty that the phone has been broken. This is because the easiest way to break a particular instance or iteration of a code is by already knowing its content. If you know nothing about the content, then cracking the code is likely as difficult as you suggest. However, we know an awful lot about what’s on that phone. It has an operating system and tons of applications which are going to be line-for-line identical to every other phone with those apps and operating systems. We are talking about millions and millions of lines of code to work with. This is much, much more than a Rosetta Stone. We also know exactly how the encryption system works. The only thing we don’t have is the key. But we have millions of lines of code from which to derive it.
Enigma was famously cracked because the code breakers figured out that the Germans always ended with “Heil Hitler.”
With all due respect, this shows that you don’t really understand how modern encryption works.
My phone and your phone have identical content, but my phone will encrypt the identical file content much differently than yours. It’s of no use to try to compare the results of two different encrypted phones, or even compare a clear text phone and an encrypted phone, and expect to gain any useful information.
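A toy sketch of why that is. This uses a hash-based stream cipher purely as a stand-in for the AES the phone actually uses, and the keys and data are made up, but it shows the point: identical plaintext under two different device keys produces unrelated ciphertexts.

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stream cipher: XOR the plaintext with a SHA-256-derived
    keystream. Illustration only, NOT the AES used on real iPhones."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

plaintext = b"identical file content on both phones"
c1 = toy_encrypt(b"device-key-A", plaintext)  # "my phone"
c2 = toy_encrypt(b"device-key-B", plaintext)  # "your phone"
print(c1 != c2)  # True: same data, different keys, unrelated ciphertexts
```

Comparing c1 and c2 tells you nothing useful about the plaintext, which is exactly why lining up two encrypted phones side by side gets you nowhere.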
Both. Each depends on the strength of post-production audits. It’s theoretically possible to subvert a coder or two, and then subvert an auditor or two.
No. I’m pretty sure I am right. And you haven’t really made an argument here. You’ve just made some assertions.
Let me try to further the discussion by asking: in what precise way will these two identical phones, running identical software, encrypt the identical data differently?
If the government has such talented people working for it that they can break strong encryption, and they are seeking to use Apple as a ruse so as not to expose that we can break that encryption… Boy, an idiot devised that strategy.
“So Bob, we’ve got the contents of the phone here, but we need a cover story as to how we broke into it. Any ideas?”
“How about we try to force Apple to break into the phone by pursuing a questionable legal strategy, which arrogant Silicon Valley firms will oppose. Ultimately, Congress might have to weigh in and change the law for us to successfully use this ruse. But we all have faith in the productivity of Congress, right?”
“Damn, Bob, you’re a genius. You’re going to be promoted to GS-16.”
“Hey, Special Agents? It’s me, Donny the Intern. How about we say we found the passcode on a Post It while searching the terrorist’s home?”
“Shut the fuck up, Donny. You’re like a child, lost in this world. We’re going with Bob’s plan.”
It’s simpler than that. Apple has to be in on it and cooperating. The idea here is to get the bad guys to use their phones and rely on them under the assumption that they are secure.
This kind of thing has been done before, as I pointed out in an earlier post.
I’ll answer myself. They are going to be different based on the UID, which is unique to each device and is used, possibly in conjunction with the passcode, to generate an encryption key. Running the same algorithms with different encryption keys produces different results on the same data. Since everything is known but the encryption key, a comparison of non-encrypted data to that same data encrypted may, through statistical analysis, allow the key to be determined. That key can then be applied to the data which is not known, in order to decrypt it.
That’s one way. Another is to attempt to brute force the password. Is the terrorist still on the old 4 digit password or did he choose a stronger passcode? How strong? Is it one we can guess?
Did he store things in the cloud?
Do we have access to the trusted computer he used to back up and sync his phone?
Is iOS vulnerable to some form of trusted external boot source the way earlier phones and OSes were?
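On the brute-force route: a rough offline sketch, assuming the attacker has somehow extracted a key-check value and the per-device UID/salt from the hardware (which is exactly what the phone is designed to prevent). The UID, iteration count, and PIN below are all made up for illustration; PBKDF2 stands in for Apple’s undocumented entangling function.

```python
import hashlib

# Hypothetical per-device value; a real UID never leaves the chip.
UID = b"example-device-uid"

def derive_key(pin: str) -> bytes:
    # PBKDF2 as a stand-in for the UID-entangled key derivation.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), UID, 10_000)

# Pretend this is the check value the attacker extracted.
target = derive_key("0042")

# A 4-digit space is only 10,000 candidates -- trivial offline.
found = next(p for p in (f"{n:04d}" for n in range(10_000))
             if derive_key(p) == target)
print(found)  # 0042
```

The whole point of doing the derivation on-device, with escalating delays and the ten-try wipe, is to deny the attacker this kind of offline loop.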
I think the real prize the Feds are after is being able to compel Apple to assist them in hacking phones in use. In other words, there’s no real difference between saying “Apple, write this software so we can get into a phone we possess” and “Apple, write software so we can remotely install spying software on this phone”. That’s the real prize. Imagine the payoff if you can see all the files on some terrorist’s phone and turn on the microphone/camera at will.
A random (ideally unique) value is used within the encryption process to prevent this flaw.
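For example (toy code, not Apple’s actual scheme): mixing a fresh random nonce into the keystream means that even the same key and the same plaintext produce a different ciphertext every time, so known-plaintext comparisons across encryptions fall apart.

```python
import hashlib, os

def encrypt_with_nonce(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Toy stream cipher keyed on (key, nonce). Illustration only."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + nonce
                                 + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

key = b"same-key-both-times"
msg = b"No man is an island"
c1 = encrypt_with_nonce(key, os.urandom(16), msg)
c2 = encrypt_with_nonce(key, os.urandom(16), msg)
print(c1 != c2)  # different ciphertexts despite identical key and plaintext
```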
That said, this encryption scheme is flawed because it ultimately rests on a 4 digit code. Apple has done a lot to mask that flaw, but ultimately it’s still flawed. With enough effort it can be cracked by a third party, and Apple can probably do it in an afternoon.
There are a number of ways this could be addressed in court. First, Apple may own the rights to the phone but it doesn’t own the employees who designed it. They (the programmers) cannot be compelled to do anything they find morally objectionable. Second, the actions of the court were taken under the premise of “no stone left unturned”. We already have plenty of legal doctrine regarding privacy. Creating software that has a likelihood of being misused by criminal elements is (in a very broad sense) a contradiction of laws designed to protect privacy. The court must take other laws into consideration as well as the purpose of those laws and the consequences of any interpretation of those laws.
Each iPhone has a UID hardware key burned into the chip during the manufacturing process. On each phone, that UID is combined with a generated random number to produce a key called key0x89b.
key0x89b is used to encrypt the iPhone’s flash disk. This is a unique key to each device and means that the result of the encryption on one iPhone is totally different from another.
In addition, each file is individually encrypted with a class key. There are four class keys exposed through the Data Protection API. The keys for these classes are stored in a keybag.
The user-created passcode is used to create a key by entangling it with the system UID and then running it through many rounds of derivation. This key is used to encrypt the individual class keys within the keybag, and the process takes about 80 milliseconds each time.
When you lock the phone, the decrypted key is wiped from memory. So all those class keys are useless, unless the lock code is provided again externally.
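Roughly, the layering described above looks like this in illustrative Python. The key names, the choice of PBKDF2, and the XOR “wrap” are stand-ins, not Apple’s actual primitives; the structure (UID-entangled passcode key wrapping per-class keys) is the point.

```python
import hashlib, os

UID = os.urandom(32)            # stand-in for the burned-in hardware key
# Stand-in derivation of the disk key (would encrypt the flash storage).
key0x89b = hashlib.sha256(UID + b"0x89b").digest()

class_key = os.urandom(32)      # protects one Data Protection class of files

# Entangle the passcode with the UID; the high iteration count is what
# makes each guess cost tens of milliseconds on-device.
passcode_key = hashlib.pbkdf2_hmac("sha256", b"1234", UID, 50_000)

# "Wrap" the class key under the passcode key (toy XOR, not real key wrap).
wrapped = bytes(a ^ b for a, b in zip(class_key, passcode_key))

# Unlocking re-derives the passcode key and unwraps the class key.
unwrapped = bytes(a ^ b for a, b in zip(wrapped, passcode_key))
assert unwrapped == class_key
```

Locking the phone throws away `passcode_key` and `unwrapped`; only `wrapped` persists, which is why the class keys are useless until the passcode is entered again.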
I’m no expert on encryption, but my understanding is that it works by multiplying prime numbers. The idea is that the math is easy one way but near impossible the opposite way. 631x773=487,763. But to find the 2 original prime numbers is near impossible without trial and error. It is my understanding that they use very large primes.
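You can see the asymmetry with those exact numbers. (This one-way-function idea underlies public-key systems like RSA; the iPhone’s file encryption is symmetric AES, which works differently, but the easy-forward/hard-backward intuition is the same.)

```python
# Multiplying two primes is instant; recovering them is the slow direction.
# With 631 and 773, trial division is still easy -- with primes hundreds of
# digits long, it becomes hopeless.
def factor(n: int):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(631 * 773)       # 487763 -- the easy direction
print(factor(487763))  # (631, 773) -- the hard direction, by trial division
```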
So if apple were to make their encryption easily cracked, aren’t they just basically being required to make pretend encryption? Something that allows us all to pretend we have secure data but really is all just make believe?
No. The encryption has the effect of hashing the plain text. With a key of this length changing one letter of a plain text sentence produces a completely different result, not just a result that’s one letter different.
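A quick way to see that avalanche effect, using SHA-256 as a stand-in for what a good cipher does to structure in the input:

```python
import hashlib

h1 = hashlib.sha256(b"No man is an island").hexdigest()
h2 = hashlib.sha256(b"No man is an Island").hexdigest()  # one letter changed

print(h1)
print(h2)
# The outputs share essentially nothing: about half the bits flip.
diff = sum(a != b for a, b in zip(h1, h2))
print(diff, "of", len(h1), "hex characters differ")
```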
The problem with brute-forcing the password is Apple’s security mechanism that deletes the key after ten incorrect guesses.
It’s not clear to me what the owner of the phone used, but iOS permits alphanumeric long passcodes, not just 4 digit unlock codes.
I understand. What I am saying is that if we know that a given string of data is John Donne’s “No man is an Island” poem, then no matter how you encrypt it, the result is extremely vulnerable to decryption analysis. It doesn’t matter if my phone uses one key and your phone uses another. If we know how a large data stream decrypts, then we essentially know how to decrypt it without too much analysis. Once we have the algorithm (which is already known) and the content (known), the key is the only remaining variable. Things are tough, nigh unto impossible, when we don’t know content, key, or algorithm. But much easier when we know two out of three.
One of the hardest, most unbreakable types of encryption is the one-time pad. Even so, the Germans managed to break some of these without knowing content or algorithm, simply based on the fact that the lady spinning the bingo cage with the random numbers tended to get lazy later in the day. That’s a lot less to go on than we have with this iPhone.
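What actually dooms a one-time pad in practice is a non-random pad or pad reuse. A small sketch of the reuse flaw: if one pad encrypts two messages, XORing the two ciphertexts hands the analyst the XOR of the two plaintexts without the pad ever being known.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A true one-time pad is unbreakable -- if the pad is random and never reused.
p1 = b"attack at dawn"
p2 = b"attack at dusk"
pad = os.urandom(len(p1))

c1, c2 = xor(p1, pad), xor(p2, pad)  # the sin: one pad, two messages

# Pad reuse leaks p1 XOR p2, with no knowledge of the pad at all.
leak = xor(c1, c2)
assert leak == xor(p1, p2)
```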
It resets the key, but not the password. Brute forcing the password for more than ten attempts would thwart another party that was working on the key. They would have to start again from scratch, but it doesn’t reset anything else. I just entered 13 incorrect passwords on my iPhone, then put the correct one in, and it worked just fine. Also realize that the encryption on an iPhone is different based on whether the phone was rebooted since the last time a correct password was entered. Brute forcing the password is just fine as long as you are not simultaneously attacking the key.
Yes. If the user in question was a longstanding iPhone user who had owned several iPhones and been through several iOS upgrades, then he never got a prompt to upgrade his password beyond the four digits. If he was lazy or ignorant, that is all he has. If he used a longer alphanumeric password, it is still unlikely that it is random. People use nicknames, birth dates, parents’ names, kids’ names, birthdays, past girlfriends’ names. Typically people choose passwords that they feel have special significance to them, but because we as people tend to think the same, we tend to choose passwords based on the same criteria. People almost never choose a random string or a truly strong password.
Can’t say for sure, but there is a very good chance there is one as Apple makes it difficult to not have one, and most people do. It would be unusual for there not to be one.
You have a cite for that? My research is inconclusive on this particular point. Evidence suggests that it won’t, because this has been a known vulnerability of past versions that had been successfully spoofed. If in fact it is vulnerable to this, which I consider possible but unlikely, then the government probably could spoof it on their own without Apple.