How do open source programs not get hacked to pieces?

I admit I can’t begin to understand programming or hacking, but I’m wondering how an open source program’s defenses can’t be easily penetrated, seeing as its source code is available to anyone who’d like a look.

First, not all programs need defenses. An open source program like OpenOffice doesn’t, for instance - it’s practically impossible for anyone to gain control of your computer in some way through your use of a word processor since it doesn’t connect to the internet.

Second, open source programs that do need defenses, like Linux, are built with encryption systems which are not easily reverse-engineered - that is, given some sort of output (the encoded information), it is very difficult to identify exactly what the input was (the information that needs to be protected). This is cryptography, an art whose details are way beyond me, but suffice it to say that the standards open source programmers use are generally pretty good.
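
To make the “hard to reverse” idea concrete, here is a minimal Python sketch using a cryptographic hash, one member of that toolbox (the password “hunter2” is just a placeholder):

```python
import hashlib

# One-way function: computing the digest from the secret is instant...
secret = b"hunter2"
digest = hashlib.sha256(secret).hexdigest()
print(digest)  # 64 hex characters that reveal nothing useful about the input

# ...but there is no known way to run it backwards. An attacker who sees
# only the digest is reduced to guessing inputs and hashing each guess.
```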

Exactly. For example, consider the following made-up program:

A = GET("Please enter password")
IF HASH(A) = STORED_HASH THEN GRANT ACCESS

Knowing how the defences work does not automatically make it possible to defeat them. It is possible that having access to the source code may make it easier for you to find a weakness - but if no such weakness exists, the source code won’t do you any good.

I suppose someone could hack their own copy of Open Source software, but that is more like regular programming. Open Source software like Open Office and many of the Linux distributions have version control and releases just like traditional commercial software does, but they probably have even more people checking things before they get incorporated. A given release has been tested by many people, and its components were selected from among the best variants out there, at least for a given purpose.

It is rather like Wikipedia, which some people assume is a stupid idea because they figure an open source encyclopedia would just get filled with crap and rendered unusable in short order. That does happen in small ways, but the side pushing for accuracy and rapid fixes is even stronger. Open Source software has that phenomenon going for it, as well as official releases for those who want quality control. Variants of it floating around in the wild can be hacked easily, of course.

This is not correct - Linux does not use encryption for any kind of internal security, ‘hacker’ defense, or whatever. It supports encryption of a user’s files - but the encryption is not used to protect the integrity of the operating system itself. Simple software engineering practices are sufficient for that.

You do have to exercise caution when you download an open source program that you plan on running on your computer. Since it’s open source, in theory it would be “easy” for someone to add to the program some malicious code, but leave untouched the program functionality that is visible to the user. (I put the word easy in quotes because I’m not sure how easy doing something like this would be in practice - putting in the extra routines, not modifying the functionality of the program, and then convincing people to download your trojan horse version. The last part would require some effort.)
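
One common safeguard against exactly that is to check the download against the checksum the project publishes. A rough sketch in Python - the file name and checksum here are hypothetical; you would substitute the real published value:

```python
import hashlib

# Hypothetical names: the installer you downloaded, and the SHA-256
# checksum the project publishes on its official download page.
downloaded_file = "openoffice-installer.exe"
published_sha256 = "..."  # paste the value from the project's site here

h = hashlib.sha256()
with open(downloaded_file, "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        h.update(chunk)

if h.hexdigest() == published_sha256:
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch - this is not the file the project released!")
```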

Flaws in open source programs are indeed visible to anyone. But this means that anyone can spot and report or fix a flaw as easily as exploit it, at least in theory.

Is this better or worse than having only a small number of developers with access to the source code?

Openness is the open-source program’s defence… openness, and a well-oiled development team standing over the check-in mechanism.

Anyone may hack on an open-source program, making changes good or bad. The source code is available to all. But the team overseeing the development does not have to let all changes submitted to them enter the next version of the program.

Their entire development and testing effort relies on reputation, another fruit of openness (specifically, open communications networks). If Good Hacker Team A lets J. Evil Hacker’s change into the released version, and it wreaks havoc among the world’s computers, the team is quite capable of removing that change and releasing a new, repaired version of the software… and for its reputation to remain intact, it had better do it quickly.

Otherwise, Good Hacker Team B may take the last known good version from the archive, announce that Team A has officially Lost It, and continue the development. Individual programmers are quite capable of jumping from Team A to Team B if they feel that Team A’s management is less than effective.

So why don’t the J. Evil Hackers of the world form their own development team, take the source code, and push their own version on the world?

They can, of course, but that would be too much like work. :wink: They’d also need to establish their reputation as Good Hackers before launching their attack. Which would also be too much like work. Who would use Jolly Roger Office from an unknown team based in <mumble>, when the original Open Office is freely available?

In general, a program’s defense against being hacked is to be well written. If someone can hack your program, it has a bug in it. If I go to a website with an old version of Internet Explorer and a maliciously coded web page can execute arbitrary code on my computer, it is because IE did not handle badly formed HTML in an acceptable manner. The same goes for Firefox. The defense is to handle the HTML in a way that does not allow for executing arbitrary code. Bug-free code is bug-free whether or not I can look at the source.
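
The same principle in miniature, using Python rather than a browser’s HTML engine: treat input as data to be validated, never as code to be executed. A small sketch (the hostile string is a made-up payload, and ast.literal_eval stands in for a real parser’s strict input handling):

```python
import ast

user_input = '__import__("os").system("echo pwned")'  # hostile input

# The bug: eval() treats input as code, so the attacker's payload runs.
# value = eval(user_input)   # DON'T - arbitrary code execution

# The fix: ast.literal_eval() accepts only plain literals and raises on
# anything else, so malformed or hostile input is rejected, not executed.
try:
    value = ast.literal_eval(user_input)
except (ValueError, SyntaxError):
    value = None  # reject the input instead of running it
```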

Guys, I believe that the question is not, “Why don’t hackers deliberately introduce security vulnerabilities into Open Source projects?”, but “Why can’t hackers more easily find existing security vulnerabilities in Open Source projects?” Honestly, they probably could, if security vulnerabilities were equally likely to occur in Open Source and proprietary software. I would argue that’s not the case in most of the famous Open Source projects, like Linux. The Linux developers put a lot of emphasis on security (and it helps that their kernel follows the Unix design, which took security seriously from the start).

It’s nearly as easy to add malicious code without the source code. Just write your own program which quietly does whatever you want, and then, after it’s done with its mischief, it calls the original program.

The difference is that, with an open source program, the sabotage is much more easily detected. I could open up the source code and read it to see if it does anything it shouldn’t, and then, once I’m satisfied that the source code is safe, compile it myself to produce a known safe executable. Even if I’m too lazy to do so, I can hear from others whom I trust who have taken the time to check it.
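
For the “compile it myself” route, the check boils down to comparing fingerprints of the two binaries. A sketch with made-up file paths - and a caveat: unless the project supports reproducible builds, two honest builds can legitimately differ byte-for-byte, so a mismatch is a reason to investigate, not proof of sabotage:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Fingerprint a file so two binaries can be compared byte-for-byte."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical paths: a binary I built from source I read myself,
# and the pre-built binary offered for download.
if sha256_of("my_build/program") == sha256_of("downloaded/program"):
    print("The download is byte-for-byte identical to my own build.")
else:
    print("The binaries differ - worth investigating why.")
```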

As for exploits, remember, it’s just as easy for a good guy to find a vulnerability, and fix it, as it is for a bad guy to find the same vulnerability, and exploit it. And there are more good guys than bad guys in the world.

It’s the same with normal locks. Anyone can buy another lock from the same series, take it apart, and analyze how it works. Or perhaps buy the design from a disgruntled ex-employee. But with a good design, that doesn’t give any information on how to unlock the original lock, because you still need to figure out the key, which is unique to each lock.

So it’s not the design of a lock that should be kept secret or inaccessible; it’s the key. That way the responsibility for keeping things secret is completely on the customer. By publishing the design, the lockmaker shows he realizes this. And he wouldn’t publish a design with known flaws, nor dare to leave flaws found in a published design unfixed. So by publishing the design, the security of the lock becomes more trustworthy, not less.
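
The software equivalent of that lock is Kerckhoffs’s principle: publish the algorithm, protect only the key. A small sketch using Python’s standard hmac module (the key and message are made up):

```python
import hashlib
import hmac

# The "design" is public: anyone can read this code and the HMAC spec.
# The "key" is the only secret, unique per user, like the pins in a lock.
key = b"my-unique-secret-key"  # hypothetical per-user secret
message = b"open the front door"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag)
# Without the key, knowing the algorithm doesn't let an attacker forge
# a valid tag for a new message - Kerckhoffs's principle in action.
```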

One thing to note is that there’s no trivial way to find exploits just by looking at the source code. Any generally available automated processes to find vulnerabilities in source code will have already been run by white hat hackers.

So, in order for an attacker to find a vulnerability in the software, he’ll either need to have discovered and implemented some new algorithm that statically analyzes code for bugs (in which case, he’d probably be better off selling it - there’s good money in static analysis), or he needs to go over the code line by line and figure it out. If it were that easy to find bugs in software by looking at it, we’d have a lot fewer software bugs.

A much easier way to find bugs and exploits in software is to just run it and test some boundary cases. What happens if you try to input too much data, or badly-formatted data? You can test this without having the source code, although I agree that crafting a specific attack is easier if you can look at the source code and figure out what exactly happens when the bug occurs.
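
Here is roughly what that kind of blind boundary testing looks like - a crude fuzzer in Python hammering a deliberately fragile toy parser (both are made up for illustration):

```python
import random
import string

def parse_record(data: str) -> tuple:
    """Stand-in for real software: a deliberately fragile little parser."""
    name, value = data.split("=")  # blows up if '=' is missing or repeated
    return name, int(value)       # blows up if the value isn't a number

# Crude fuzzing: hammer the parser with random, oversized, malformed input
# and see what makes it fall over - no source code required.
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 5000)))
    try:
        parse_record(junk)
    except Exception as exc:
        print(f"crash: {type(exc).__name__} on {len(junk)}-byte input")
        break
```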

Having the blueprints doesn’t make it that easy to break into the bank, as long as the bank is designed well.

As long as you download the executable from the right place. Personally, I’m one of those guys too lazy to compile, so when I get an open source program I always get the pre-compiled version if possible. I have never heard of someone trying to slip a trojan into a popular open-source program, it was just idle speculation on my part.

I agree with you and others that any possible defects in an open source program can be found by a person with evil intent, but it is more likely that the bugs will be noticed by well-intentioned programmers.

I stand corrected - and on your 1337th post, no less.

Actually, so do I. But there’s nonetheless a certain degree of safety in the fact that I could, if I wanted to, recompile from source. It’s sort of like how educated folks will take an encyclopedia article more seriously if it cites its sources, even if those people don’t actually read the cites.

Actually, part of the reason they are not so easily hacked is precisely that they are open source. This seems circular, but it is logical: more people have been able to probe the program and look at the source code, which creates a much more robust testing and bug-reporting system.

Reverse engineering is only needed for closed source stuff, BTW. Open source stuff can more easily be examined by looking at its source code [-_o]. There are a handful of ways to exploit flaws in both open and closed source programs. A good example is Firefox vs. IE. They have the same job and can run on some of the same OSes, but one is closed and one is open. One has a huge community to find and file bugs, and one leaves flaw fixing to only a handful of people.

I think there are some psychological reasons, also. Hackers looking for fame and fortune can get it by finding a hole in an open source program, and reporting it, down to the bad line. Half the time, the makers of proprietary software scream if you publish a weakness, even if it’s been months since you reported it to them. Second, I don’t think too many people consider Linux the Evil Empire, so there will be fewer hackers who convince themselves they’re striking a blow for liberty.
Finally, your average script-kiddie class of person will get intimidated looking at the code and go look for easier things to break.
But I think the fact that lots of eyes look at the code is the number one reason.