I don’t know either way. I don’t know why you would say it seems likely.
So, in your view, is there any limit on the government’s ability to impress someone into their service for a criminal investigation so long as they pay reasonable compensation? If so, on what basis do you draw the line there?
I would quibble with it, but I don’t see that my quibbles matter. That scenario seems entirely distinguishable from asking the phone company to reveal data that is already contained on the property of the phone company.
If I have the data, you can compel me to reveal it. You cannot compel me to get the data. To the extent those categories are murky in this context, it is much closer to the latter than the former.
I’m not sure how Verizon would get the content of phone calls to authorities without inventing something to do it. Seems like common sense.
I’m not sure I can imagine all the rules that might come into play. What I’m saying is this: for this set of facts, where the government has a reasonable need for some type of search, a judge has looked at the facts and agreed, and state-of-the-art technology means the government probably lacks the expertise to carry out the search itself, I think it’s reasonable for the judge to order a company to do a rather limited thing (and very likely to compensate the company for its effort).
There are absolutely limits to this question, but we’re headed so far into hypothetical and “slippery slope fallacy” land that the answer to your question isn’t actually relevant at all to the circumstances we are debating.
Who controls iOS, Apple or the person who bought the phone? If a person owns and possesses a device with iOS, do you suppose Apple allows them to make any and all changes to the software, just as Ford doesn’t try to prohibit people from making all sorts of changes to a Fiesta hatchback?
Often it is the authorities who invent the technology. For example, Harris Corporation manufactures devices that force cell phones to connect to them, and Harris Corporation also creates software that decrypts the content of the radio communications broadcast by those phones. Verizon’s involvement in that case would be limited to providing the owner’s unique cell phone identifier to the law enforcement agency. I imagine, though don’t know for sure, that similar procedures happen with modern wiretaps. That is, law enforcement needs technical assistance to “install” the tap device, but they do not need Verizon’s help to decrypt the communication or otherwise make the captured data usable.
That strikes me as circular, hinging on the premise that this is a rather limited thing. It doesn’t strike me as de minimis at all. I don’t know how you’d assess whether this is over the line until you have in your head some kind of line or guiding principle.
Apple does not permit someone to alter and then use iOS. The contractual agreement is take it or leave it. But it doesn’t follow that Apple retains custody and control over it. You get to choose, for example, whether to update the software. If you choose not to, Apple cannot (ordinarily) force you to do so.
Nothing in what he was saying should lead you to conclude that there is a security hole. Apple has been asked for a workaround to the self-destruct feature only, not to decode the data once the self-destruct has been avoided. The FBI wants three things: to disable the self-destruct, to allow passcode input by some means other than the iPhone’s physical touch screen, and to disable the delays that kick in with incorrect password attempts.
Everyone is going on the assumption that the only way to decode the data is to brute-force the password. It’s just that, the way it is now, ten wrong guesses will destroy the data, and that’s what the FBI wants Apple to help with.
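To put rough numbers on why those three things matter: with the wipe and the escalating delays out of the way, a numeric passcode can be exhausted quickly. Here’s a back-of-the-envelope sketch; the ~80 ms per guess is my assumption, based on public reporting about the iPhone’s key-derivation hardware, so treat the figures as illustrative only.

```python
# Worst-case brute-force time for numeric passcodes, assuming the
# self-destruct and inter-attempt delays have been disabled.
SECONDS_PER_GUESS = 0.08  # assumed per-guess key-derivation cost

for digits in (4, 6, 8):
    keyspace = 10 ** digits                      # all numeric PINs of this length
    hours = keyspace * SECONDS_PER_GUESS / 3600
    print(f"{digits}-digit PIN: {keyspace:,} guesses, ~{hours:,.1f} hours worst case")

# 4-digit PIN: 10,000 guesses, ~0.2 hours worst case
# 6-digit PIN: 1,000,000 guesses, ~22.2 hours worst case
# 8-digit PIN: 100,000,000 guesses, ~2,222.2 hours worst case
```

With the delays left in place, each wrong guess eventually costs minutes or more, and ten wrong guesses wipe the key, which is the whole point of the feature.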
I agree with that - I can see the sequence of events playing out as:
(1) The court asks Apple to invent a tool to disable the self-destruct.
(2) Apple throws a fit and makes Bricker’s point that you can’t subpoena a capability that doesn’t yet exist (even if everyone acknowledges that it’s a simple solution).
(3) The court says “very well, we’ll subpoena the source code for iOS and your private key for signing software.”
(4) Apple says “How about we just create you a tool to disable the self-destruct?”
They can do it even more easily: they can make a software image file that will load only on that one particular device and cannot load on any other. They can protect this restriction just as they protect access to their secret key for signing software. There’s no need to update the other phones on the market; that one image file is for that device only.
Which is, of course, exactly what the court order specifies: that this file be coded to run only on a phone with the specific unique identifying numbers that Farook’s phone has.
Everyone claiming that “hackers” will be able to use this software to break into any phone whenever they want is either not aware of that fact or is deliberately ignoring it.
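For anyone wondering how a per-device restriction could possibly hold, here’s a toy model. Every name in it is invented, and real code signing uses asymmetric cryptography rather than an HMAC; the point is only that the target identifier lives inside the signed payload, so editing it to point at a different phone breaks the signature check.

```python
import hashlib, hmac

SIGNING_KEY = b"apple-private-key-stand-in"  # stand-in for Apple's secret key

def sign(payload: bytes) -> bytes:
    # Real code signing uses asymmetric crypto; an HMAC stands in here.
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def build_image(code: bytes, target_device_id: str) -> dict:
    # The target device ID is baked into the payload that gets signed.
    payload = target_device_id.encode() + b"|" + code
    return {"payload": payload, "signature": sign(payload)}

def device_accepts(image: dict, my_device_id: str) -> bool:
    payload, sig = image["payload"], image["signature"]
    if not hmac.compare_digest(sig, sign(payload)):
        return False  # tampered with, or not signed with the secret key
    bound_id, _, _ = payload.partition(b"|")
    return bound_id.decode() == my_device_id  # only the named device loads it

image = build_image(b"disable-wipe-and-delays", "FAROOK-IMEI-123")
print(device_accepts(image, "FAROOK-IMEI-123"))   # True
print(device_accepts(image, "SOME-OTHER-PHONE"))  # False - wrong device
```

In this model, the only way to retarget the image at another phone is to re-sign it, and only the holder of the secret key can do that.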
That’s plausible, but I think the plain reading of the law is that it can compel companies to do more than merely provide schematics in order to allow the government to build something.
I’m not saying that this court order is de minimis – it is clearly an important issue, not a trifling detail that deserves no attention. What I’m saying is that the court order is limited in terms of what Apple is being ordered to do: basically, Apple is to deliver software that does one well-defined thing, with one specific effect, on one device. It isn’t a blank check, which would be something more like: Apple is directed to give the FBI software to unlock every iPhone in the United States.
I can assess whether something is over the line without coming up with a comprehensive theory of justice and privacy in the electronic age, because I see this order as consistent in principle with other existing laws – with the caveat that the existing laws pertain to other types of electronic devices, not to the iPhone in particular.
In my opinion, one can’t argue that the iPhone (and the OS on it) is sold and then passes entirely out of Apple’s hands, like most consumer products (safes, cars, locks, etc.), and reconcile that with Apple’s own view of its ownership of iOS. The EULA specifically states that iOS is licensed to the device owner “and reserve all rights not specifically granted to you.” Apple specifically states that users are not allowed to “copy … decompile, reverse engineer, disassemble, attempt to derive the source code of, decrypt, modify, or create derivative works of the iOS Software or any services provided by the iOS Software or any part thereof…”
It’s clear that Apple intends to maintain near-complete control over iOS, even though the physical device that it resides on is in someone else’s hands. Therefore, it’s inconsistent to say that Apple’s relationship to iOS is effectively terminated when the device is sold. Apple wants to maintain control over iOS, and demands that its customers agree to that. With rights come responsibilities, so if Apple wants to control iOS, they can’t then argue when it is convenient that they have relinquished iOS as soon as the devices it resides on are out of their physical possession.
Thanks. Why then do you suppose they don’t want to do that? Frankly, I’m seeing a huge hullabaloo over nothing here, since it looks like there are a number of ways Apple could bypass the self-destruct feature on this one phone without endangering all the others. Maybe it’s just a huge publicity ploy on Apple’s part to create the impression that their products are super secure and that they’re willing to go to battle with the government to keep them that way, even though the reality is that they can easily accommodate the government’s request without jeopardizing any other phones.
What I’m saying is that there is clearly a means to flash a modified OS to a locked device, and that an OS delivered in this way can override the self-destruct security feature.
Apple retains a great deal of control over the iPhone. It (mostly) restricts software installation to the App Store. It exerts control over what types of apps can go into that store. It forces apps to use only Apple-approved APIs and services; for example, payments must go through Apple. I could go on, but it’s clear that Apple retains a significant level of control over the iPhone.
Apple wants to eat its cake and have it too. They want control over the iPhone and they want to be the only ones able to exercise that control. That’s simply not how it works. The state has a compelling and obvious interest in pursuing justice, and can compel third parties to assist when there’s no other option. It has been like this for a long time and this power has been used in a variety of situations.
If Apple is truly concerned about privacy, then they can build the iPhone to be totally private and unhackable. They don’t want to, because that would require them to loosen their control over the iPhone, and they don’t want to do that because it would cost them lots and lots of money. Essentially, Apple wants to eat its cake and have it too. They want to control your iPhone and track your usage, but only for their benefit. If it makes them money, control and track away! If it could help prevent terrorism, then suddenly privacy is the concern.
The government is not asking Apple to in any way circumvent their encryption, but they are asking them to create a security hole. Security does not consist entirely of encryption.
And as an aside, I’m surprised that it’s even possible to install an operating system upgrade on a locked phone. I would have expected the installation process to require that the phone already be unlocked. And I suspect that, as of the next version, it will be, just to stop the government the next time it attempts this.
Control over iOS or what apps are loaded on a phone is different from custody or control over customer data. It seems to me that the customer data is the appropriate object of the inquiry here.
If data is stored on an Apple server (or a Verizon server, or an AT&T filing cabinet), then a company can be forced to turn that data over. If they have the encryption key, they can likely be forced to turn that over too. But if the data is outside their custody or control, it should not be possible to order them to go get it just because they have specialized knowledge of how to craft malware that will do so.
I see no other clear line to be drawn concerning what the government may order a private actor to do in pursuit of a law enforcement investigation. If not that line, then what line can be drawn? Can the FBI force Harris Corporation to make them a Stingray that lets the FBI read everyone’s texts within a city-block radius in real time? Can they do it if Harris Corporation is the only company with the technological know-how? If you think it is relevant whether Harris manufactured the cell phones or their software, why is it relevant?
From what we know now, it’s impossible. The code must be signed with Apple’s secret key for the iPhone to accept it, and you can’t fake that. I presume it would work like this: the software image would run only on a phone with a specific ID, and someone couldn’t go in and change that little portion of the binary file without the image needing a completely new signature (and you have to have Apple’s secret key to generate that).
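To illustrate that last point concretely (a pure-hash demo with made-up bytes; real code signing adds a private-key operation over the digest): changing even the small device-ID portion of the file produces a completely unrelated digest, so the old signature no longer matches.

```python
import hashlib

# Made-up stand-in for a signed firmware image containing a device ID.
original = b"...firmware bytes...DEVICE_ID=FAROOK-123..."
patched  = original.replace(b"FAROOK-123", b"OTHER-9999")

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(patched).hexdigest())
# The two digests are completely unrelated, so a signature over the
# original digest says nothing about the patched file; producing a valid
# signature for the patched image would require Apple's secret key.
```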
Yep, I agree with you here. They just don’t want to be seen as helping governments defeat the data protection features. I think, though, that they should be able to explain to their customers the limited amount they’re doing for the FBI and that the data encryption is still strong if someone uses a good password.
From my point of view, my data is protected by strong encryption algorithms and good passwords, not by the features of a platform that say they’ll erase the data if they sense a threat. I can choose my passwords and can research the security of the encryption used, but that self-destruct feature is not in my control. I don’t think any security experts (the Bruce Schneiers of the world) would consider working around the self-destruct as a security hole.
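For scale, here’s a sketch of what “a good password” buys you. The per-guess cost is an assumption (roughly the figure reported for the iPhone’s key derivation), so treat the result as an order-of-magnitude illustration.

```python
# Keyspace for a 10-character random password over [A-Za-z0-9],
# at an assumed 0.08 s per guess.
SECONDS_PER_GUESS = 0.08
SECONDS_PER_YEAR = 365 * 24 * 3600

keyspace = 62 ** 10
years = keyspace * SECONDS_PER_GUESS / SECONDS_PER_YEAR
print(f"{keyspace:.2e} guesses -> ~{years:.1e} years worst case")
# 8.39e+17 guesses -> ~2.1e+09 years worst case
```

Against that kind of keyspace, the self-destruct feature is beside the point, which is why I don’t count working around it as a hole.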
Given that it’s taking the FB Freaking I multiple months and court orders to maybe get the manufacturer to open it up and maybe be able to recover the data, I think “doesn’t mean much” is the wrong description of the security this approach provides.
It’s under their control because it’s protected by things that only they can disable. I mean, this is kind of an ipso facto thing here. If Apple, and Apple alone, has the ability to access this data, how can it not be considered under their control?
The difference (or at least one of many) is that Harris Corporation does not exert any control over the thing that the search warrant applies to.
Yes, you’re right. But to get at the data, you have to go through iOS. It simply isn’t plausible to say that iOS has nothing to do with this issue.
Questioning whether some other hypothetical scenario is reasonable has nothing to do with the substance of the question at hand. If I say it’s okay to eat a little chocolate after dinner as a dessert, your question of whether it’s okay to eat a little chocolate and some ice cream is immaterial to what I said.
But the feasibility of devising software that can be installed on your phone while it is locked and that bypasses the self-destruct feature…
(1) has been demonstrated on older versions of iOS
(2) appears to be something that can be done for newer versions of iOS
I’m baffled as to how you (and the Bruce Schneiers of the world) don’t see that as a security hole. I don’t think your average iPhone user believes one can insert tampered software onto an iPhone while it is locked, but apparently it is feasible.
Say the information is in a safe, and no one at the FBI has the skills to open it without a lot of explosives that might destroy the information. Could they force some civilian safecracker who’s the world’s best to work for them and open it?
I would think there should be some kind of precedent regarding the power of the government to compel the assistance of a civilian expert in some field or another.
For your hypothetical to jibe with the actual situation, we would also need to assume that the safecracker built the safe and designed its locking mechanism and that nobody in the world besides him understands how the safe and lock work.