Why is everything hackable?

Your point 2 is sound as far as it goes. There have been consumer routers sold with defective hackable firmware. And certainly many people (though perhaps not your MIL) have defective configurations.

Your points 1 & 3 are, IMO, pure BS & dangerous.
Yes, nobody is targeting your MIL by name. Instead, the botnets are simply trolling the entire address space of the internet, attempting entry at each router/computer they find. And there are enough of them that your MIL, and you, and me, and my next door neighbor, are tapped every few hours by some bad guy group based somewhere.

And it’s the logistical power of them being able to rattle the network “front doors” of millions of PCs per hour that makes hacking for identity theft profitable. They can’t send live people around to break down physical doors & windows nearly that efficiently. Which is why Joe & Jane Middle America don’t have to worry much about coming home to find the front door broken open & their PC gone.

The thing I try to tell amateurs is that their network connection to the internet is like a SciFi portal connected directly to the streets of Mogadishu. *That’s* what’s just behind the connector in your wall. Not the rest of your calm suburb of civilized people; instead a wild West of Pure Anarchy & endemic organized crime and violence. With zero police or our-side military anywhere. And every few hours somebody from there will shake the crap out of your virtual front door hoping the lock’s a little loose, and they’ll try sticking their collection of a few thousand stolen keys in it. Maybe one will work today.

THAT’s what we’re all up against, both homeowners and pros who’re securing seriously important systems. In addition to all the above, the pros have to deal with targeted attacks actually aimed specifically at them.

Correction: You cannot write a program that can always determine whether an arbitrary program halts on a given input. But nobody in the real world writes arbitrary programs, even when they’re studying problems that are given as textbook examples of the halting problem.

For example, suppose I’m interested in the Collatz conjecture. I write a program that increases n one step at a time and, for each value of n, constructs the Collatz sequence for it. The program keeps going, incrementing n, until it finds an n that leads to a loop other than [4,2,1], and when it finds such an n, it halts. Will my program ever halt? I have no idea: That depends on the truth of the Collatz conjecture, which nobody knows.

Except I wouldn’t actually write my program that way. I wouldn’t tell it to just keep increasing n; I’d tell it to test values of n from 1 to 1,000,000 or something. Then maybe if I didn’t find any exceptions in that range, I might test from 1,000,001 to 2,000,000. Or maybe not. But the point is, it’s easy to determine whether my real-world program will halt: It will, definitely. I don’t know how it’ll halt, but I know it’ll halt.
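
To make that concrete, here’s a minimal C sketch of the bounded version (the limit and the per-n step cap are arbitrary numbers I picked, not anything from the argument above). Whatever the truth of the conjecture, this program provably halts:

```c
#include <stdio.h>

/* Bounded Collatz check: tests every n in [1, limit] and always halts. */
int main(void) {
    const unsigned long long limit = 1000000ULL;   /* arbitrary bound */
    for (unsigned long long n = 1; n <= limit; n++) {
        unsigned long long x = n;
        /* Iterate until we reach 1; the step cap guarantees halting even
           if some orbit unexpectedly never got there. */
        for (unsigned long steps = 0; x != 1; steps++) {
            if (steps > 10000) {                   /* arbitrary safety cap */
                printf("n = %llu exceeded the step cap\n", n);
                break;
            }
            x = (x % 2 == 0) ? x / 2 : 3 * x + 1;
        }
    }
    printf("Checked 1..%llu\n", limit);
    return 0;
}
```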

And in fact, most programs written for real purposes in the real world are this way. They’re specifically designed to be predictable, and to have escape cases. And just like it’s always possible to determine whether a practically-written program will halt (because it always will, because designing it to deliberately halt is part of what makes the program practical), it’s also always possible to verify properly-written code, because it’s written to ensure verifiability.

Which is not to say it’s always easy to verify properly-written code. When you get up to the size of modern applications, nothing’s easy. And that’s how the vulnerabilities creep in.

Almost no programs are written from absolute scratch nowadays; they’re built upon previously written routines. Even if the new program is “perfect,” the ones it’s built upon are an uncertainty.

You’re writing a program to run in a browser. Do you create a line input routine that accepts keystrokes and parses them? Of course not; you call an existing routine that handles the drudgery for you. How do you know that that routine is bug-free?

Example (oversimplified): I could write a small, perfect program in C that had no possibility of buffer overflows in the new code, but how do I know that the C compiler or interpreter is perfect?
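
For instance, here’s about as close to “perfect” as a few lines of C get: fgets can’t write past the buffer, so the new code has no overflow. But the guarantee now rests entirely on fgets, libc, and the compiler behaving as documented:

```c
#include <stdio.h>

/* A "perfect" line reader: fgets never writes more than sizeof buf
   bytes, so this code cannot overflow. The trust has simply moved
   down a layer, to the library and the compiler. */
int main(void) {
    char buf[64];
    if (fgets(buf, sizeof buf, stdin) != NULL) {
        printf("Read: %s", buf);
    }
    return 0;
}
```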

Indeed. And in fact part of modern security research is figuring out how to limit the execution environment so that we can prove correctness, or at least put bounds on what programs can do.

For instance, it is not possible in general to prove how x86 machine code behaves short of simply executing it. You can’t even reliably disassemble it into instructions, because code can jump to an arbitrary byte offset, and x86 instructions are variable-length, so the same bytes decode differently depending on where you start reading.

So x86 is a really bad idea for executing code from an untrusted source. But what you can do instead (this is Google’s NaCl project) is limit yourself to a subset of x86 with provable characteristics. Among these are that jumps and jump targets must be aligned; there are others that I don’t recall at the moment. So you download a program and you consider it unsafe unless you can prove that it falls within the subset of provably trustworthy programs. If someone sends you a program that is safe but not provably so… well, that’s their problem, not yours.
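
For a feel of what such a validator does, here’s a toy sketch in C. The two-instruction ISA is invented for illustration (real NaCl has to decode actual x86); what matters is the shape of the checks: linear decodability, no instruction straddling a bundle, and aligned jump targets:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <string.h>

#define BUNDLE 32  /* NaCl-style 32-byte instruction bundles */

/* Toy ISA standing in for a verifiable x86 subset: 0x90 = 1-byte NOP,
   0xE9 = 5-byte direct jump to a 32-bit absolute offset in the region. */
bool validate(const uint8_t *code, size_t len) {
    size_t off = 0;
    while (off < len) {
        size_t n;
        if (code[off] == 0x90) {
            n = 1;
        } else if (code[off] == 0xE9 && off + 5 <= len) {
            uint32_t target;
            memcpy(&target, code + off + 1, 4);  /* little-endian host assumed */
            if (target % BUNDLE != 0 || target >= len)
                return false;                    /* unaligned or out-of-range jump */
            n = 5;
        } else {
            return false;                        /* undecodable byte: reject */
        }
        if (off / BUNDLE != (off + n - 1) / BUNDLE)
            return false;                        /* instruction straddles a bundle */
        off += n;
    }
    return true;  /* indirect jumps also need masking, omitted here */
}
```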

An old but good article (though fairly technical) on why you can’t know.

Depends. You could have the self-drive system be completely isolated and only allow outside input at specified times, like having a button that says “check for updates”; a virus/hack/whatever would have to hit at the same moment you were pulling updates, with the wireless radios physically disconnected from the “driving” systems except for the few minutes when the “update” button is pushed. I would bet you my next paycheck nobody is going to build cars that allow the drive system firmware to be updated while the car is in motion. You could have cars that only connect to a wireless network you specify, so the only time your computer would communicate with the outside world would be via your home wifi. The drive systems do not have to be open to outside input, only to their own hardwired sensors.

The vulnerability of computers as most people understand them is a function of their flexibility. A car drive system does not wander around offering free wifi like a Starbucks access point. You can easily have three computers: one handling mapping and GPS, another focusing on the actual physical driving tasks, and a third handling the entertainment systems and media/WAN connection. That media computer with the outside internet connection does not have to have any access to the driving or mapping systems. The GPS/mapping computer tells the driving computer, “turn right on the next street.” The tracking systems that tell the computer where obstacles and other cars are will not obey the GPS saying “turn here and hit this wall.” At most they will say “sorry, obstacle” and refuse to move.

The assumption that driving control system computers are open to anyone poking at them with a laptop is assuming that a major car manufacturer’s auto-drive engineering team is totally incompetent and can’t think of a few basic security measures that would render most of these systems effectively “hack proof.” Things like firmware updates only being done by a physical connection under the hood or dashboard. If there is no outside access, there is nothing to hack. How do you tell a computer with no wireless networking to do something stupid wirelessly? Simple: you don’t.
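
For what it’s worth, that “sorry, obstacle” gate is easy to sketch. This is a hypothetical illustration with names I made up, not any real car’s API; the point is just that the driving computer treats routing messages as suggestions that its own hardwired sensors can veto:

```c
#include <stdbool.h>
#include <stdio.h>

/* Every identifier here is invented for illustration. */
typedef enum { CMD_STRAIGHT, CMD_TURN_LEFT, CMD_TURN_RIGHT } route_cmd_t;

/* Stub sensor read; a real ECU would poll its hardwired radar/lidar. */
static int obstacle_distance_cm(void) { return 500; }

static bool path_is_clear(route_cmd_t cmd) {
    (void)cmd;                            /* a real check would be per-direction */
    return obstacle_distance_cm() > 200;  /* arbitrary safety threshold */
}

/* The driving computer treats the mapping computer's output as advice:
   local sensors always get the veto. */
bool accept_route_command(route_cmd_t cmd) {
    if (!path_is_clear(cmd)) {
        fprintf(stderr, "route command rejected: obstacle detected\n");
        return false;
    }
    return true;
}
```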

The Tesla Model S is already well past that point. It downloads updates that can alter just about every aspect of its behavior, such as the ride height, top speed, and autopilot characteristics. It does so over its built-in cell connection (though obviously it doesn’t apply the updates until the car is parked).

Still, there are solutions, such as using signed code.

I knew about Tesla’s system; it’s also a connection they control, not one subject to the whims of a household full of teenagers trying to figure out where to get the best free porn.

No, they don’t control the connection. They use the same cell towers as everyone else. Cell towers are actually quite easily spoofed and it’s almost certainly trivial to trick the car into connecting to your own server and downloading a hacked firmware image.

However, what’s not trivial is getting the code signed. Unless Tesla has been utterly incompetent, the car won’t run unsigned code, and the only way to sign an image is through a handful of key people at the company.
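
The shape of that check is standard, even if Tesla’s actual code is private. Here’s a sketch using libsodium’s Ed25519 signing API: the car ships with only the vendor’s public key baked in, so a spoofed tower can deliver a hacked image but can’t make it verify:

```c
#include <stdbool.h>
#include <sodium.h>

/* Sketch of signed-firmware acceptance (not Tesla's actual code).
   The car holds only vendor_pk, the PUBLIC half of the keypair;
   without the matching secret key, no attacker can forge sig. */
bool firmware_is_trusted(const unsigned char *image, unsigned long long len,
                         const unsigned char sig[crypto_sign_BYTES],
                         const unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES]) {
    if (sodium_init() < 0)
        return false;   /* refuse to proceed if the library can't initialize */
    return crypto_sign_verify_detached(sig, image, len, vendor_pk) == 0;
}
```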

A few additional points.

Breaches of security systems come from a wide variety of issues. Many come from naivety on the part of programmers or designers. The question of truly random numbers is a very hard one - and systems that depend upon internal noise in a machine are almost certainly open to hacking - there are far too many periodic processes and sources, and far too low a resolution of the sampling systems to allow for anything like the amount of entropy that is hoped for.
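
The classic beginner version of this mistake makes the point: seed a PRNG with the current second, and your “random” key has only a few thousand plausible values for an attacker who knows roughly when it was generated. A C sketch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Naive "randomness": the whole key is determined by the seed, and the
   seed is just the current second. An attacker searches a few thousand
   candidate seeds, not the keyspace. */
int main(void) {
    srand((unsigned)time(NULL));               /* one guessable seed per second */
    unsigned key = ((unsigned)rand() << 16) ^ (unsigned)rand();
    printf("\"random\" key: %08x\n", key);
    return 0;
}
```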

Exploiting the leakage of information out of a system via physical aspects of its implementation - i.e., power draw or timing - is a fertile area. Similarly, exploiting the lack of true randomness. Like Feynman’s safe example - you don’t need to work out the entire key - you only need to exploit weaknesses that reduce the search space enough.

A very productive method of finding security flaws is fuzz testing, or just fuzzing. In its simplest form it is little more than an automated system for slamming a program with random inputs and looking for misbehaviour. Unlike targeted assaults, like SQL injection, fuzzing can turn up really unexpected exploits that are very serious. Again, the complexity of systems means that even the best code can be found wanting.
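
In C, the simplest form really is just this (the parse_input target is hypothetical; real fuzzers like AFL or libFuzzer add input mutation and coverage feedback):

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical target; in real fuzzing this is the parser under test. */
void parse_input(const unsigned char *buf, size_t len);

/* Fuzzing at its simplest: hammer the target with random input and let
   crashes and assertion failures point at the unexpected code paths. */
int main(void) {
    srand(12345);                              /* fixed seed so failures reproduce */
    for (long iter = 0; iter < 1000000; iter++) {
        unsigned char buf[256];
        size_t len = (size_t)(rand() % (int)sizeof buf);
        for (size_t i = 0; i < len; i++)
            buf[i] = (unsigned char)(rand() & 0xff);
        parse_input(buf, len);                 /* a crash here is a bug found */
    }
    return 0;
}
```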

Cars are a really worrisome one. It isn’t true that there isn’t access into car systems. In fact it is rather the opposite. Most modern cars run a large number of signalling busses, mostly CAN, but also FlexRay. These busses are quite insecure. You can buy adaptors that allow your PC to connect to a car’s diagnostic port for a few dollars. You can even buy WiFi adaptors that are intended to be installed permanently on the bus to allow access to the diagnostic information over WiFi. Tesla uses Ethernet. After that the control computers are only a buffer overrun or other exploit away. Worse, most cars don’t provide any authentication on these busses. So spoofing something like brake sensor data is trivial if you have access to the bus. The computers do often perform some authentication of the major components - so for instance the ECU and the body control systems may exchange serial numbers and refuse to operate if they have changed (making theft of components more difficult).
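
To see how low the bar is: on Linux, injecting an arbitrary frame onto a CAN bus you can reach is a dozen lines with the standard SocketCAN API. The 0x123 arbitration ID and payload here are invented, and error handling is omitted for brevity; the point is that nothing on the bus authenticates who may claim an ID:

```c
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void) {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");             /* e.g. a cheap USB-CAN adaptor */
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = {0};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    struct can_frame frame = {0};
    frame.can_id = 0x123;                     /* nothing stops us claiming this ID */
    frame.can_dlc = 2;
    frame.data[0] = 0x00;                     /* spoofed "sensor" payload */
    frame.data[1] = 0x00;
    write(s, &frame, sizeof frame);           /* receivers can't tell it's fake */

    close(s);
    return 0;
}
```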

The difficulty with signed code is that the signatures are open to human-factor exploits. We have seen a number of master keys released into the wild over the years - DVD was cracked very early on due to a stupid mistake inside one manufacturer, and there have been signing keys for other software systems released to cause grief. Signatures should be revocable, but the process for revocation itself must be secure, and so it goes.

Not so much in the context of what she was paranoid about, which was someone getting into her personal files and getting into her bank and investment accounts, since she’s somewhat wealthy. My point was that very few people know who she is or how wealthy she is, so there’s some degree of safety in anonymity in that sense. She doesn’t run any MORE risk than anyone else with a computer hooked up to the internet- they’re not targeting her directly because of who she is; like you say, they’re just wiggling the doorknob.

Also, if someone DID know who she was and wanted access to her bank/investment income, the best way to get that would be to physically burglarize the house, not dick around on the internet.

That was my point, not that she’s not any more or less vulnerable than anyone else.

That’s true but it depends strongly on the nature of the system. DVD fared poorly because the decryption codes were distributed to any maker of a DVD decoder, including software players. As such it was trivial to extract the key from the object code.

Something like a car has a much easier time. Part of it is that signatures are easier than encryption. Any time the client has a decryption key, there is the possibility that someone will extract it. But with signing only the private key needs to be kept private. Since firmware updates are fairly infrequent, you can keep the key on an airgapped PC–or even on a piece of paper in a safe–accessible by only a few of the top people at the company, and used only on a new software release. Security is much, much easier when it only depends on the trust of a small number of people.
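
The signing side is correspondingly small, which is what makes the airgap practical. A libsodium sketch (not any vendor’s real process): crypto_sign_keypair generates the pair once, offline, and only the public half ever leaves the room:

```c
#include <sodium.h>

/* The signing half of the scheme, run only on the airgapped machine.
   The secret key sk never leaves it; the cars are built holding only
   the corresponding public key. */
int sign_release(const unsigned char *image, unsigned long long len,
                 unsigned char sig[crypto_sign_BYTES],
                 const unsigned char sk[crypto_sign_SECRETKEYBYTES]) {
    if (sodium_init() < 0)
        return -1;
    return crypto_sign_detached(sig, NULL, image, len, sk);
}
```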

That’s still the situation today.

Years ago, I worked for big companies using mainframe computers. Security was less complicated then, usually just a single password, but still, it was extremely common to find most users had their password written on a post-it note pasted to the bottom of their keyboard. (And those were the security-conscious ones – the slackers had that post-it note taped right there at the bottom of the screen.) I dare say that’s still common in many businesses today.

Adding a ‘Security Officer’ has generally made it worse, because most of them equate ‘inconvenient’ with ‘highly secure’. Like having many passwords for different systems, forcing them to change on different (and very frequent) schedules, having strange (and different) rules for setting the passwords, etc. So workers can’t remember their many passwords, and they write them down and keep that note handy. Usually on the bottom of their keyboard!

And it’s even worse on the Internet, where nearly every site wants you to set up a user-id and password. How do they expect people to remember all those?

I misread that at first, and thought “Yup, cats are a very good source of fuzzing”. In multiple senses of the word.

This is where the human element comes in. Given human nature, the odds are very good – probably over 90% – that somebody at Tesla has that key code stored in a file on his computer desktop, or at most in his My Documents folder.

And the file is probably named “Key”, or something obvious like that.

DARPA has a current project intended to harden cars against hacking attacks (HACMS, formerly led by Kathleen Fisher, now by John Launchbury). When they first started the project they found that a remote attacker could more or less take complete control of a generic modern American car, controlling everything from the brakes to the acceleration to the tightening of seatbelts.

Anyway, it would help if people specified what they mean by “hackable”. If Chronos and Indistinguishable wrote their simple C programs and proved that they met a reasonable functional specification in some proof system (which is certainly possible these days for even sophisticated systems software like operating systems and compilers) then these pieces of software are still “hackable”. I can rewrite the binary, I can perform side-channel attacks, and all sorts of other underhanded tricks.

Now, you can cry foul about changing binaries or inspecting the underlying machine state as the program executes to obtain information that should not be exposed, but the reality is that an inordinate number of attacks against established implementations of cryptographic protocols and cyphers, even those that have been “proved correct” using formal methods, are made along these lines.

Cryptographic software especially has to be extremely carefully engineered so that the intensional properties of arithmetic subroutines (such as stack usage, execution times, and so on) used to implement a cypher are not a function of the key length or the size of other inputs, that no confidential information is ever exposed in general purpose registers and therefore can be examined by somebody with access to the machine, and so on and so forth. The amount of information about private keys that can be inferred by throwing random data at a bit of supposedly secure code and timing how long it takes to execute is pretty impressive.
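
The canonical example is a byte-by-byte compare that bails out at the first mismatch: response time then reveals how many leading bytes were right, so an attacker can recover a secret incrementally instead of searching the whole space. A sketch of the leaky version and the usual constant-time fix:

```c
#include <stddef.h>

/* Naive compare: returns at the first mismatch, so execution time
   depends on how many leading bytes are correct. */
int leaky_compare(const unsigned char *a, const unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;   /* early exit = timing signal */
    return 1;
}

/* Constant-time version: touches every byte, same work either way. */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];          /* accumulate, never branch on secrets */
    return diff == 0;
}
```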

On that note, a bug was recently found in the Linux kernel’s cryptographic code. A buffer containing information that should not leak out of a subroutine was being memset to zeroes as the last step of the subroutine. GCC’s control-flow analysis realised that the buffer would never be used after that point, and could therefore safely elide the final memset as dead code. Side channel introduced, and completely invisible to anybody inspecting the source code, or doing any sort of proof or analysis on that same source that didn’t take the behaviour of the compiler into account!
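
The pattern, and one common workaround, look like this (a sketch; the volatile-function-pointer trick is one of several fixes, alongside C11’s memset_s and glibc’s explicit_bzero):

```c
#include <string.h>

/* The bug pattern: the compiler sees `secret` is dead after the final
   memset and may legally remove the zeroing as a dead store. */
void leaky(void) {
    unsigned char secret[32];
    /* ... use secret for crypto ... */
    memset(secret, 0, sizeof secret);   /* legal for GCC to elide */
}

/* A volatile function pointer the optimizer can't see through. */
static void *(*const volatile memset_v)(void *, int, size_t) = memset;

void scrubbed(void) {
    unsigned char secret[32];
    /* ... use secret ... */
    memset_v(secret, 0, sizeof secret); /* call survives dead-store elimination */
}
```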

Yes, but your spoofed cell tower needs to be present when the car calls for its update; that’s what I mean by control: push vs. pull. The software in the car asks for a connection to Tesla to look for an update. You would have to scatter fake microcells all over, or get access to the existing towers, to modify the routing of connections to Tesla so they go to a server you specify. The individual steps are simple, but you’d be gaining access to multiple cell towers, probably owned by several companies, just to have a shot at intercepting a random Tesla calling home for updates. That’s not a very effective way to target an attack unless you just hate all Tesla owners and don’t care if it takes two days or two years for it to happen. This also assumes that the package delivery finishes before the car leaves range of your spoofed tower(s). I understand full well there is no real defense against someone with the skills who has nothing better to do with their life than try to trash a car’s control system. However, those people are rare and would probably have far more lucrative careers as pen testers selling their discoveries to car makers than screwing with random cars or hiring themselves out as cyber hitmen.

Got it. Thanks.

The number of absolutely false statements in this thread is amazing.

  1. You do not have to search through all keys to break an encryption system. You only have to find a hole in the implementation. People screw up implementations of encryption software all the freakin’ time. All the WiFi encryption protocols have been broken to some extent. The older ones are now breakable with trivial effort. Newer ones take a bit longer. The theory behind them is solid. The people who put them into practice are … people. They screwed up. They will always screw up.

  2. About home routers and such. Almost all of the big-selling models that have been out for a while have been found to be exploitable. E.g., one company left an open port with admin access in their firmware! Groups are searching for these, finding them by the millions, and are now using them to do things like fake ad clicks. They can also monitor your Internet traffic, which can yield interesting info.

And good luck getting a firmware update. (Assuming the people who took over your router haven’t blocked updates.) The manufacturers have a “buy a new model” attitude.

They are low hanging fruit.

  3. The argument that the Halting Problem somehow magically doesn’t apply has been tried over and over for decades. It is literally a laughable argument. No, your programs aren’t magically excluded just because you checked them. You are not perfect. You make mistakes.

I have seen a 10 line program that the author claimed was proven correct with formal verification systems which had an obvious error in it.

Plus, the halting issue is just a bare minimum of correctness. Verifying that the program is otherwise correct is a task of such immense size that monumental doesn’t begin to describe it.

  4. Hacking cars. Some of the latest models have been proven remotely hackable via their wireless network interface. Physical access is no longer required. Think about it: Your braking and speed systems being fiddled with in real time while you drive.

Here is today’s Slashdot horror story. Somebody either didn’t think that a file system would be moved, or didn’t think that the effect of the move would matter. And this was probably a pretty good programmer. Yet a file system wipe ensues.

This is what real life programming is all about. Tiny little oopses that do horrible things.

Writing down your passwords for all the internet sites you use is actually quite secure against the types of threats that are applicable to the internet. I am not worried that my wife will get my email password. I am worried that people without physical access to my house will get access to my email and use it for fraud purposes.