Why does it seem that no matter how sophisticated the computer code, someone is always able to hack it? This applies especially to code for safety and security programs and applications. Why is it impossible to build perfect code that is unhackable?
In the simplest possible terms, it’s because security and encryption really all come down to a specific set of letters, numbers, and characters, and there will always be a finite number of possible combinations (because a key cannot, in practical terms, be an infinite number of characters).
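As a toy illustration of that finite keyspace (the alphabet, key length, and guess rate below are made-up numbers for the sake of arithmetic, not any real system’s):

```python
import string

# A key drawn from a finite alphabet has a finite, countable
# number of possibilities -- here, 8 characters from a-z, A-Z, 0-9.
alphabet = string.ascii_letters + string.digits  # 62 characters
key_length = 8

keyspace = len(alphabet) ** key_length
print(f"{keyspace:,} possible keys")  # 218,340,105,584,896 possible keys

# At a hypothetical 10 billion guesses per second, exhaustive
# search finishes in roughly 21,834 seconds -- about six hours.
seconds = keyspace / 10_000_000_000
print(f"~{seconds:.0f} seconds to try them all")
```

Which is exactly why real systems use much longer keys: every extra character multiplies the keyspace by 62 again.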
Pretty much. Beyond that, each way you offer to provide access to data is another way for someone to break in, and the safest ways are often the most inconvenient.
It is possible to construct a system that is, in practice, unhackable. Part of the issue is that there are so many complex, constantly changing systems that eventually a mistake is made, and that mistake gets exploited.
As an analogy, imagine you’re the supervisor of a massive apartment building with millions of front doors and your job is to make sure that none of the doors are left unlocked. In theory it’s possible to keep all the doors locked but at some point somebody is going to forget.
The more sophisticated and complicated a system is, the more opportunity there is for insecure coding practices to slip in. In the zillion lines of code that make up an operating system, there are going to be places where the programmers make an error in input validation (for example), and those errors slip past QA. Eventually those mistakes get found by the bad guys.
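A hypothetical sketch of the kind of input-validation slip being described — here a path-traversal bug in an imaginary file-serving routine (the directory and function names are invented for illustration):

```python
import os

BASE_DIR = "/var/app/uploads"  # hypothetical document root

def unsafe_path(filename):
    # The kind of bug that slips past QA: trusts user input directly.
    # A filename like "../../etc/passwd" escapes the intended directory.
    return os.path.join(BASE_DIR, filename)

def safe_path(filename):
    # Validate: resolve the path, then confirm it stays under BASE_DIR.
    candidate = os.path.normpath(os.path.join(BASE_DIR, filename))
    if not candidate.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal rejected")
    return candidate

print(unsafe_path("../../etc/passwd"))  # /var/app/uploads/../../etc/passwd
print(safe_path("report.pdf"))          # /var/app/uploads/report.pdf
```

The unsafe version looks perfectly reasonable in a code review, which is the whole point: nothing about it screams “vulnerability” until someone thinks like an attacker.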
Another reason is that the people who design and implement systems are generally rewarded for meeting deadlines, not meeting security standards. Corporations praise and give bonuses to the go-getters who break down barriers to get things done on time! Then when things go all to shit, they ask, “How’d that happen!?” Well, because that’s what you paid your people to do.
Everything is also hackable because people are hackable. If you can’t pry open a system through a technical vulnerability, you can con people into letting you in (“social engineering”). People want to be helpful, and we’re encouraged by our employers to be team players, so when that nice new administrative assistant asks for our help getting a critical project done for his boss, sure we’ll send that document right over.
OK by me, though. I make my living responding to security breaches.
An isolated system can only be hacked with physical access. Of course it is now limited in usefulness, as it cannot share data with other systems. For example, the fuel injection computers in cars. You don’t hear of people taking over the internet by breaking into the computer in their Ford Festiva.
Plus many times when a system is ‘hacked’, it turns out that it is a result of social engineering. If it weren’t for having to give access to, you know, actual users, systems could be much more secure.
If you want a truly secure system, you don’t want something that’s “no matter how sophisticated”. You want something that’s as simple as possible. Give me any one simple task for a computer to do, and I guarantee you that I can write an unhackable program that can do that simple task. Give me five hundred complicated tasks for a computer to do, and I’m not going to guarantee that any more.
Another contributing factor is that undergrad Computer Science programs often don’t require much coursework in security. Here are two example programs: Mizzou and Missouri University of Science and Technology. While both universities have security courses available, and I think Missouri S&T offers a CS degree with a security concentration, you can still leave with a BS in Computer Science without having done a course focused on security. I don’t have a degree in CS myself, and I freely admit that the CS people are smarter than me. But I’ve worked with smart CS grads who don’t really appreciate the security threats out there. It’s just often not part of their day-to-day.
This is becoming a concern now that cars have wifi access and everything is computer controlled. There’s more possibility for a hacker to cause trouble. For something like the Google self-driving car, it’s entirely possible that someone could take control of the car and do the kinds of things you see in dumb action movies, i.e., program the car to drive off a cliff or something like that :eek:
The NSA doesn’t allow browsers to use encryption that they cannot hack easily.
If web browser encryption had been allowed to grow and advance with PC processing power, we would now be using at least 2–4 Mbit encryption, not 256-bit.
That said, when the DVD came out, the developers were all proud of how unbreakable their copy-protection system was, but it turned out the key was stored in RAM and was very easy to retrieve, so there is always a way to break a system.
Although many break-ins occur because of poor human security practices (poor protection of passwords, etc.), there is a black market of system exploits. If you are the first one to find a new bug in an OS or other program that can be exploited to hack into a system, it’s worth a lot of money. How do those bugs get in there in the first place? According to this article, Windows XP has 45 million lines of code. When I started my career in 1979, 1 million lines of code was thought to be impossibly complex to manage. You simply cannot generate that much logic without errors. It is not possible to exhaustively test every possible combination of conditions that can occur in a system that complex.
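Some back-of-the-envelope arithmetic on why exhaustive testing is hopeless — with deliberately conservative, made-up numbers that still blow up:

```python
# Even a tiny slice of a big system's state space explodes.
# Suppose just 64 independent yes/no conditions interact:
conditions = 64
combinations = 2 ** conditions
print(f"{combinations:,} combinations")  # 18,446,744,073,709,551,616

# Testing a billion combinations per second would still take
# roughly 585 years to cover them all.
years = combinations / 1_000_000_000 / (60 * 60 * 24 * 365)
print(f"~{years:.0f} years of testing")
```

And 64 boolean conditions is a laughably small model of a 45-million-line operating system.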
An early and well-known type of exploit is the SQL injection attack, in which a user filling in, say, a web form, can type in a sequence of characters that the system will interpret as SQL code when it was really expecting to get the user’s username. This was a design flaw, but was very common because it never occurred to programmers that anyone would deliberately try to do something like this to attack the system. The testers didn’t think of it either. Similarly, a buffer overflow attack occurs in a condition that originally was so exotic that it would never have been tested.
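A minimal sketch of the injection just described, using Python’s built-in sqlite3 (the table, user, and attack string are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

username = "nobody' OR '1'='1"  # attacker-supplied "username"

# Vulnerable: the input is spliced into the SQL text, so the quote
# characters become part of the query and the OR clause matches every row.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + username + "'"
).fetchall()
print(unsafe)  # [('hunter2',)] -- the attacker got alice's secret

# Fixed: a parameterized query treats the input as data, never as code.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (username,)
).fetchall()
print(safe)    # [] -- no user is literally named that, so no rows
```

The fix has been standard practice for decades now, yet the vulnerable pattern keeps reappearing, which rather proves the thread’s point.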
Now testers are thinking like hackers, but it is not even possible to enumerate all possible conditions that can occur in a complex system, much less test them all.
Though I wouldn’t really classify that as an SQL bug, since that same general sort of attack shows up for many other kinds of code. Early in this board’s history, for instance, there was a wave of thread vandalism based on sneaking in HTML code via posts (yes, this board uses SQL, but that wasn’t relevant for these particular attacks).
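A sketch of that HTML cousin of the attack, with Python’s standard html module doing the escaping (the post content is invented for illustration):

```python
import html

post = '<script>alert("pwned")</script> Nice thread!'

# Rendering the post verbatim lets the markup run in readers' browsers.
raw = "<div class='post'>" + post + "</div>"

# Escaping first turns the markup into inert text on the page.
escaped = "<div class='post'>" + html.escape(post) + "</div>"
print(escaped)
```

Same root cause as SQL injection: user-supplied data crossing into a context (HTML, SQL, shell, whatever) where it gets interpreted as code.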
Not only that, but they frequently penalize people who don’t circumvent the security protocols to meet a deadline. It’s often considered more important to get stuff done/done on time than to be late and meet the security protocols. Or worse, the guy who is sometimes late because of security issues is viewed more negatively than the guy who ignores security entirely and is on time every time.
One other point that’s worth mentioning… “hacking” something isn’t like in the movies, where they bang away at some keyboard, proclaim “We’re in” a few seconds later, and proceed to have full root access to some system. It’s often more similar to identifying a vulnerability in a process that may let you siphon off a bunch of unencrypted data that may have credit card numbers in it, or something along those lines.