Has there ever been a piece of software that couldn't be cracked?

Generally each button has an event number associated with it. When you click the button a message is passed to an event handler saying “event ID XX just occurred” and then it’s up to the program to handle it.
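In rough Python terms, the dispatch looks something like this (a minimal sketch; the event ID and handler names are made up):

```python
# Minimal sketch of event-ID dispatch (all names here are hypothetical).
EVENT_BUTTON_OK = 17  # each widget gets an event ID when it's created

handlers = {
    EVENT_BUTTON_OK: lambda: print("OK clicked"),
}

def message_loop(queue):
    """Pull messages off the queue and route them by event ID."""
    for event_id in queue:
        handler = handlers.get(event_id)
        if handler:
            handler()  # "event ID XX just occurred" -- go handle it
        # unknown IDs are typically ignored or passed to a default handler

message_loop([EVENT_BUTTON_OK])
```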

The whole malware thing is much more likely just to be a specific program, rather than hacking into another process’ event handler and inserting nasty code. It’s (generally) quite tricky to do, and each new OS comes with more features to stop you injecting code into other processes.

tim

Ah. Filthy loads of money for snake oil salesmen/consultants who make promises they can’t keep?

I remember reading a behind-the-scenes article about the time and effort spent on copy protection for one of the Spyro the Dragon games. The final conclusion was that, in their case, the cost of the more elaborate copy protection wound up being justified by additional sales during the extended period it took crackers to crack their game.

Here’s the article: http://www.gamasutra.com/features/20011017/dodd_01.htm

Let me just note that this is largely a different topic than the one being discussed in this thread. Similar techniques might be used to create malicious code as are used to remove anti-piracy or anti-cheating measures from software. But overall it is off topic for this thread (just an FYI, to be helpful).

If something can be done, it can be undone. Some software is just harder to figure out than others.

There’s one piece of software which I use at home as a hobby which, as far as I am aware, does not have a massively distributed crack, because it has a really niche market (for my own protection, I won’t mention its exact name, but it’s the software that keeps the trains running on time in and out of Grand Central Terminal). Since I’m not running an actual railroad out of my home, I can’t afford to purchase it myself, so I had to crack the demo version on my own, which I did by erasing a couple of hidden lines in my registry that indicate it is running in demo mode.

Copy protection exists more as a reminder than a security measure. The software companies KNOW that it’s gonna be cracked, and most of the people running cracked versions are people who wouldn’t buy it in the first place. However, legitimate businesses know better, and have the budget to purchase more licenses (since the software they use is helping them make a profit). If an installation of Photoshop fails on a new employee’s computer, it will remind IT that they need to buy a new license.

I know this isn’t IMHO (yet), but IMHO, most copy-protection schemes are laughable to experienced hackers. All they do is make access difficult for legitimate users, who lose keys, lose installation disks, or have to suffer if the developer decides to discontinue support or goes out of business. Not to mention when the software decides, incorrectly, that the legitimate user is a crook and treats him accordingly.

I’ve never objected to paying a fair price for good software (1980s Turbo Pascal is the best example I can think of – a complete compiler and integrated editor, and one of the fastest compilers around at the time), but I balk at paying exorbitant prices for buggy software, then having to pay more for an “upgrade” that fixes the bugs and introduces more of the wascally critters.

To relate to what trmatthe is saying, in the days of software distributed on floppies, some partially-smart developers tried to outwit users who copied disks and gave them to friends. Many schemes were tried – most relied on quirks of the disk drivers and parameters hard-coded in the OS, like how many sectors the reading routine expected to see per track. If the norm was 8 sectors, the disk would have 9 on one track, and a normal diskcopy program wouldn’t know enough to copy all 9, making the copy non-functional.

Other schemes involved non-standard data blocks, writing on tracks that weren’t listed in the directory, and non-standard checksums, fooling the software into thinking it was getting a read error.
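In spirit, the self-check such a disk ran was something like this toy Python model (real versions were assembly talking to the disk controller; every name below is invented for illustration):

```python
# Toy model of a sector-count check -- not real disk-driver code.
NORMAL_SECTORS = 8

def read_sector(disk, track, sector):
    """Stand-in for a low-level read; returns None if the sector isn't there."""
    return disk.get((track, sector))

def copy_protection_ok(disk, magic_track=20):
    # A normal diskcopy only reproduces sectors 1..8, so the "extra"
    # 9th sector is missing on a copy and the check fails.
    return read_sector(disk, magic_track, NORMAL_SECTORS + 1) is not None

original = {(20, s): b"data" for s in range(1, 10)}  # 9 sectors on track 20
copied   = {(20, s): b"data" for s in range(1, 9)}   # diskcopy only saw 8

print(copy_protection_ok(original))  # True  -> program runs
print(copy_protection_ok(copied))    # False -> program refuses to run
```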

But all of these were made useless when someone marketed an interface board that made a bit-by-bit diskcopy. This would copy everything on each track without regard to checksums, block sizes, or directory entries, i.e., an exact copy of the “special” disk. I bought a board like that for $30 and it worked perfectly.

Incidentally, the board was the same design as the one used by the developers to make the master disks in the first place. The copy protection schemers had counted on the public not knowing such a device existed. Sort of like the cellphone programming devices – use it to program a cellphone for good or evil, the gadget doesn’t know the difference and the cellphone companies depended on that gadget not getting into the wrong hands. That was their only security. Didn’t work.

As the others have said, it is just a cost-benefit deal.

If the extra protection costs more than the piracy it prevents would have cost you, you don’t go past that level. Back in the floppy days, as others have mentioned, the idea wasn’t to make it unhackable, just to make it difficult enough that not everybody could do it. If 1000 guys could each take 3 hours in their basements and crack it, but couldn’t easily put it on a disk and pass it out in an exponential distribution, then the copy protection was as good as they wanted. If it could be put on a disk and just copied normally by lay people, then it was a problem.

Of course, any copy protection that could be bypassed by a distributable code solution became worthless in the internet days.

So now if you can get a couple months of release without a widely known crack, then you have made most of the money you are going to anyway.

And a lot of the people who crack do it for the fun of “beating the man” and wouldn’t have bought it in the first place anyway.

I’ve worked for several legitimate businesses that ran cracked software.

Why?

The software “protection” for the 3D animation package I used was a dongle (parallel at that time). The default software would not run without the dongle, and the manufacturer would not replace the dongle if it was lost or stolen. Not a huge issue if we’re talking about Microsoft Word, but the business would be one disgruntled ex-employee from bankruptcy if the dongle for a $100,000 piece of software were to walk.

So every client of this software I worked for had a $495 “crack” installed and the “dongle” locked away in a safe deposit box.

The very amusing part was that the cracked software ran faster! The paranoid lead programmer had put in a call to the dongle on every single scan line. Taking out 480 unnecessary parallel port calls per frame boosted render speed quite a bit. This was especially noticeable on the “transputer” version, where every 32 x 32 pixel “bucket” was being rendered by a different processor.
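The effect is easy to picture with a rough sketch (Python, with invented timings; real parallel-port round trips varied, but they were never free):

```python
import time

def dongle_present():
    """Stand-in for a parallel-port round trip to the dongle."""
    time.sleep(0.001)  # assume ~1 ms of bus chatter per query
    return True

def render_frame_paranoid():
    # One dongle query per scan line of a 480-line frame.
    for line in range(480):
        if not dongle_present():
            raise SystemExit("dongle missing")
        # ...render the scan line...

def render_frame_cracked():
    # The crack removes the per-line check entirely.
    for line in range(480):
        pass  # ...render the scan line...

start = time.time()
render_frame_paranoid()
print(f"{time.time() - start:.2f} s of pure dongle overhead per frame")
```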

Theoretically, any software classified as “hard encryption” by the Bureau of Industry and Security should be near impossible to crack. The one I negotiated with recently was the Pointsec Protector program.

The crack cost $495?

Actually, dongles are what I had in mind for distributing my application. I was thinking of using a custom-made one for licensing. (If you want the details: I’m making a CUDA app that runs on GPUs, and I had the idea of creating a DVI dongle that used DDC to store the licenses for the app and the third-party patents it uses. Much easier than one of those MAC-tied license files that the competitor notoriously uses, and hopefully on par with a custom-hardware solution that another competitor uses that doesn’t need any licensing hassles.) My simple dongle would be relatively “easy” to crack (most monitors could be reprogrammed with the license data and act as a dongle), but I figured anything else I did would be even easier to crack than that. The dongle just has to be convenient and a little more secure than a CD key. Also, the dongle’s not worth $100k like your 3D app, and it won’t kill performance for no reason.
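Roughly, the check I’m picturing would boil down to something like this (pure illustration: the blob layout is invented, and I’ve used a symmetric HMAC for brevity where a real scheme would want an asymmetric signature, since any secret shipped to the client can be extracted):

```python
import hashlib
import hmac

VENDOR_SECRET = b"illustration-only"  # hypothetical; don't ship secrets client-side

def read_ddc_blob():
    """Stand-in for the real I2C/DDC read of the dongle's storage."""
    payload = b"customer=1234;features=cuda,patents"  # invented layout
    tag = hmac.new(VENDOR_SECRET, payload, hashlib.sha256).digest()
    return payload, tag

def license_valid():
    payload, tag = read_ddc_blob()
    expected = hmac.new(VENDOR_SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

print(license_valid())  # True if the blob hasn't been tampered with
```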

Re “transputer”: … hey I’ve heard of that! Someone actually used those? Like for distributed rendering?

This is hardware, really, but two examples that are worth mentioning just because of the time that hackers have likely put into them and failed, are the PS3 (so far), and the Gamecube. You can’t run pirated games on either of those systems.

There is no such thing as near impossible to crack digital rights management, hard cryptography or not. There’s only physical rights management. In the classical analogy of Alice, Bob and Eve, one person cannot be both Bob and Eve by the very definition. The only way this is even theoretically possible is not to distribute the software at all and only grant access to its output over the network.

Even with the most trusted platform, a CPU capable of executing encrypted code, and well encrypted software, you still have to provide the decryption key to the CPU. Even if your trusted platform uses PKI and the decryption key is encrypted with the public key of your platform, the private key is still stored somewhere inside.
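A sketch of that chain (using the third-party `cryptography` package; the fact that this sketch can generate the “platform” key pair in ordinary software is exactly the problem):

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The "trusted platform": the private half is supposedly burned into silicon.
platform_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Vendor side: encrypt the code with a random AES key, then wrap that
# key with the platform's public key.
code = b"\x90\x90\xc3"  # stand-in for machine code
aes_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
encrypted_code = AESGCM(aes_key).encrypt(nonce, code, None)
wrapped_key = platform_key.public_key().encrypt(aes_key, OAEP)

# Device side: the private key MUST be present here to unwrap -- and
# whatever is stored inside can, in principle, be extracted.
unwrapped = platform_key.decrypt(wrapped_key, OAEP)
assert AESGCM(unwrapped).decrypt(nonce, encrypted_code, None) == code
```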

Well the Gamecube is for wimps and the Wii has been cracked. But the PS3? Hmm, interesting. Sony must have tried real hard. But hardware should be much harder to crack than software. I’m incredibly impressed that the xbox360 fell.

That is nine kinds of awesome. Why didn’t I get a genius gene? :frowning:

As others have pointed out, there is no end to the ingenuity of thieves. One only has to look at the rampant theft of music and movies online to appreciate the depths to which many people are willing to degrade themselves in the name of ripping off the people who create the things they use.

I would say that there’s no such thing as “uncrackable” software, but there are a few software components (not entire programs) which could be described as uncrackable.

Specifically, I’m thinking about key generation (keygen) algorithms. Conventional key-protected software works like this. The programmer has a secret key generation algorithm that generates pseudo-random keys. The algorithm can be arbitrarily complex. The application contains a verification routine which, based on the programmer’s knowledge of the secret key generation algorithm, can validate whether or not an entered key is valid. The program refuses to run (or runs in a trial mode, etc.) if you don’t have a valid key.

Now, there are probably any number of ways that this entire validation routine can be ignored entirely. By modifying the program, you can simply skip these checks and allow the program to run without a valid key. Alternately, you could just modify the validation routine to accept any key. However, it is more challenging and less disruptive to the end user if you can create an alternate keygen of your own. By studying how the validation routine works and “reversing” it, you can come up with an algorithm that makes keys which will be accepted by the program. Your keygen may or may not be identical to the original programmer’s, but as long as it generates keys which will be validated by the program, it doesn’t really matter.
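A deliberately trivial Python illustration of both halves (the “secret” algorithm here is a toy checksum invented for the example):

```python
def vendor_keygen(serial: int) -> str:
    """The vendor's 'secret' algorithm: serial plus a checksum digit."""
    checksum = sum(int(d) for d in str(serial)) % 10
    return f"{serial:06d}-{checksum}"

def program_validates(key: str) -> bool:
    """The check that ships inside the program."""
    serial, checksum = key.split("-")
    return int(checksum) == sum(int(d) for d in serial) % 10

# A cracker who studies program_validates() can write an equivalent
# keygen without ever seeing vendor_keygen():
def cracker_keygen(serial: int) -> str:
    return f"{serial:06d}-{sum(int(d) for d in str(serial)) % 10}"

print(program_validates(vendor_keygen(123456)))   # True
print(program_validates(cracker_keygen(999999)))  # True -- accepted anyway
```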

So where’s the uncrackable part?

Well, modern keygen protection systems are increasingly taking a cue from public-key cryptography (asymmetric crypto). Newer keygens typically consist of two steps: 1) a conventional key generation algorithm, and 2) the result of part 1 is then signed by the private half of a key pair (e.g. RSA, DSA, etc.). Now, part 1 can still be reversed by a cracker. However, without access to a quantum computer, which so far as anyone knows is still a matter of science fiction, it is not possible for the cracker to recover the programmer’s private key from the public key used to validate the signature. Well…technically, you can, but it would take longer than the age of the universe. So for all practical intents and purposes, it is not possible for a cracker to create a keygen program which will produce valid keys for the program.
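Here’s a minimal sketch of the signed-key scheme (Ed25519 via the third-party `cryptography` package stands in for the RSA/DSA signatures mentioned above; the key format is invented):

```python
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Only the vendor ever holds the private key.
private_key = ed25519.Ed25519PrivateKey.generate()

def vendor_issue_key(customer_id: str) -> str:
    payload = customer_id.encode()
    sig = private_key.sign(payload)
    return base64.b64encode(payload + b"|" + sig).decode()

# Only the public key ships inside the program.
public_key = private_key.public_key()

def program_validates(key: str) -> bool:
    payload, sig = base64.b64decode(key).split(b"|", 1)
    try:
        public_key.verify(sig, payload)
        return True
    except InvalidSignature:
        return False

print(program_validates(vendor_issue_key("customer-42")))  # True
# A cracker can read program_validates() in full and still not forge a
# key: that requires the private key, which never shipped.
```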

Note, however, that it is still susceptible to the simpler forms of attack. The key check can be removed or modified so that it will accept other keys, but this requires patching the program. This is exactly the approach now employed by crackers for these types of programs. For example, they sometimes distribute a patch, which modifies the public key used in the validation routine, and a keygen program which makes keys that are valid for the modified routine.

There are several variations on this scheme, and implementation can be tricky. For example, the software keys for Windows XP and several other Microsoft packages are generated using something called Elliptic Curve Cryptography, which is a form of public-key cryptography. Unfortunately for Microsoft, they made a few subtle mistakes in the way they implemented ECC. Subtle as they were, they were just enough to allow one brilliant mathematician to derive their private key without resorting to brute force. It was not long after this that Microsoft’s “Genuine Advantage” program was introduced, which replaced the client-side key validation with server-side checks that compared keys to a database of the keys actually issued by Microsoft. I.e., with the right keygen, you can convince your own computer that the keys are valid, but you can’t fool Redmond. Of course, there are ways around this too, but that’s not the point.

The point is that server-side validation is becoming more common, and in a sense, this is also “uncrackable” (unless you can somehow gain access to the validation servers). All recent versions of Windows “call home”, as do applications like the Photoshop CS series and game distribution systems like Steam. As long as the application code still runs client-side, these checks can be removed, bypassed, or emulated, but the validation itself is effectively beyond the reach of the cracker. These systems are not without disadvantages, however. You must have internet access to use the software, which can definitely be a problem when traveling with a laptop, for instance.
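Client-side, the “call home” amounts to something like this (the URL and response format are invented for illustration):

```python
import json
from urllib import request

def activate(product_key: str) -> bool:
    """Ask the vendor's server whether this key was really issued.
    Hypothetical endpoint; the issued-key database lives server-side."""
    req = request.Request(
        "https://activation.example.com/validate",  # invented URL
        data=json.dumps({"key": product_key}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp).get("valid", False)

# The database itself is out of the cracker's reach -- but this client
# call can still be patched out or pointed at a fake server, exactly as
# described above.
```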

I predict that in the future, more and more software will move totally online. The only way to make software uncrackable is to take it entirely out of the hands of the user. For example, you won’t find any cracks (in the true sense of the word) for World of Warcraft. The client software is useless without the server software. Unless someone manages to steal or leak the server code, it cannot be “cracked”. At most, someone can attempt to emulate the server software, which is never a perfect process. For some, this is the point - they don’t like the design decisions of the official game, so they play on emulated servers which may have slightly or wildly different rules.

For an entirely online application – for example, a web-based word processor – even this is not exactly possible. You could create an identical free version of the product, but that wouldn’t be cracking. Getting your hands on the actual code would be extremely risky, and even if you had it, what then? Deploying it would be difficult since it isn’t designed to be a client application. This software model has a lot of disadvantages for the end user (not the least of which are security and privacy concerns), but I suspect it’s what’s going to be forced down our throats next.

Oh, one other thing:

Actually, with respect to what I’ve quoted, he’s right (just perhaps not for the reasons that he thinks :p). I assume you’re referring to something like the Trusted Computing initiative or Microsoft’s Palladium (or whatever they call it now)? All of these platforms are theoretically uncrackable because of highly complex hardware-based trust and system metric mechanisms that make it impossible to modify the system’s state in an unauthorized fashion. However, implicit in this system is the assumption that the software really is running on actual hardware, which is difficult or impossible to crack for the average person. The entire system is completely undermined if you emulate the hardware as software, such that you have the magical god-like ability to observe and modify the system at the “hardware” level.

The only precaution against this is that the root of all trust depends on certain information (private keys, essentially) that has been “burned in” to the hardware and is generally inaccessible to average people. Indeed, this makes it very difficult, but not impossible, for someone to obtain the information needed to make a functional emulator. After all, it only has to be done once. What are the hardware makers going to do if someone ever gets the keys? Recall every computer they’ve ever sold? This is exactly the problem with hardware implementations. If the system is ever compromised, it’s almost impossible to fix.

Yep. From a company called “Imagine That”. Every issue of “Computer Graphics World” back in the 90s had an ad for their cracks. They would demand proof that you owned the software you wanted cracked (Digital Arts DGS, Lumena, TOPAS, etc).

Yep! YARC made T-800 transputer cards, 4 processors per card, each with dedicated memory, and up to 4 cards tied together with a special high-speed bus across the top. We had a version of RenderMan that could work with these. The main processor would divide the work into “buckets” and send the necessary geometry, textures, bitmaps, shaders and light sources to each transputer to render. Then they would send their finished portion of the bitmap to the framebuffer (a Targa or Vista card).

It’s what you had to do when you were trying to do 3D on a 486!

A secure means of distribution of the software is of course necessary, I agree. For instance, a Linux-like central repository of all applications which is only transmitted directly to the machine under encryption.

Overall, the complexity of the task and the limits on usership it creates will pretty well keep truly secure computing almost solely in the hands of the military. Though I would say that the lower-level the layer of protection is, the better a chance it has, so building DRM features into Vista, mandating that only hardware which has proper security functions will be recognized by the OS, and so on, is the direction that things will need to go. As your example shows, even with a machine capable of running software securely, there’s a whole process for securely creating the software, securely distributing it, and properly utilising the security features of the machine that all needs to be done correctly, or there’s a wide-open avenue for attack. But you do need that last step, the user-operable but not user-modifiable computer, or you’re going to end up with something fully within the reach of a decent-sized number of hackers to break open.

I have a good crack for most productivity software (including CS3). Install it, license it and run it in a virtual machine. This also works around the ‘Microsoft Trusted Computing’ thing very nicely. The ‘hardware’ is all virtual, so you only need to copy the virtual machine data with its hard drive image. Some future version of Microsoft OS will undoubtedly refuse to run in a virtual machine, but that won’t last for long.

Anyways, once I did that, I stopped having to activate it, re-activate it, call Adobe to get permission to re-re-reactivate it, etc. When ‘bad things’ happen to my machines (or, more often, to the ‘copy protected’ software itself, when it automatically applies a patch and becomes inoperable), the virtual machine is backed up and untouched. I even had to make ISOs of the installation media to install it, because the DVD media that CS3 came on literally could not be read by its own installer, but could be ripped. So now the ONLY way I can install it is either on a virtual machine, or by mounting the ISO files I made, or by burning DVD-Rs.

Theoretically, I could ‘pirate’ the virtual machine in its entirety. I already did use it to move the installations from my old machine to my new machine, and copy it to a portable (though I am strictly ‘legal’ for the number of computers I have it installed on). The virtual machine image even came across when I dumped Vista off the new machine and switched to Ubuntu Linux. No problem. Well, small ones caused by ‘VirtualBox’ and its badly organized virtual machine data, but nothing insurmountable.

As for ‘cracking’ the software itself, I’m too lazy to pick over the opcodes and such to do so, so I never developed that skill. I have met people who DID develop that skill and watched them disassemble games. On the average heavily protected game of the time, peppered with many ‘reality checks’, it took them about 20 minutes per trap, sometimes a little longer, often MUCH less if a pattern to search for became obvious. By and large, better copy protection will simply be overcome by better hackers for any widely distributed and desirable product that is too expensive for most people to afford.

I have zero trust for ‘cracks’. Running a crack could install just about anything in Windows.

Let’s face it, if you could clone a copy of your neighbor’s big-screen TV into your own living room for free, most people would do it. It is estimated that globally, 85% of Windows users are running pirated copies of the OS and of every piece of software installed on it. Piracy is a big problem for the old model of making people pay to individually buy or license software.

The last unique thing that I have paid for and still use is Adobe’s tools, and a version of XP to run them under. Everything else is open source software. Even the OS is Linux. In a way, ‘Open Source’ can’t be copied illegally, as long as you get it for free. So open source software is immune to ‘piracy’ because anybody can just download it for free. I mean, you could ‘crack’ something like OpenOffice.org, but what the heck would that accomplish? You could far more easily check out the source from their CVS/SVN server and make any mods you like, and even make a branch off their development tree for your own special version. It won’t necessarily be integrated back into the main development trunk, but what I’m getting at is that there is little incentive to crack an open source project.