Why does The Witcher 3 go “no-DRM”, and does this allow it to be easily pirated?

Keep in mind that CD Projekt, the parent company of the developers of The Witcher series (CD Projekt Red), owns GOG.com, which is billed as a DRM-free store. Obviously CD Projekt believes DRM-free is a good business move in general, but it’s also way, way too late to go back to using DRM now. They’d be grilled as massive hypocrites.

That’s a non-starter.

Both your statement and the one you quote accurately describe UbiSoft’s business strategy as far as I can tell.

No, not really. I mean, yes, Ubisoft is very fond of having some small parts of their games tied to token servers for copy-protection purposes, but as a previous poster said, that’s pretty doable (if not easy) to circumvent once you grok which exchanges take place and emulate them locally; and figure out which bits of the exchange are the server saying to the game “is it really you? This is really me. Let’s prove it” and spoof *those* too. You’re still downloading 60 gigs’ worth of game for your machine to play and display.
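To make that concrete, here’s a minimal sketch of the kind of challenge/response token check I mean. Every name and value in it is invented for illustration; real publisher schemes vary wildly and are usually far more convoluted:

```python
# Hypothetical challenge/response token check (all names invented for illustration).
import hmac, hashlib, os

SHARED_SECRET = b"baked-into-the-client"  # the kind of thing reverse engineers dig out

def server_issue_challenge() -> bytes:
    # "Is it really you?" - the server sends a random challenge
    return os.urandom(16)

def client_sign_challenge(challenge: bytes) -> bytes:
    # "This is really me. Let's prove it" - the client answers with a keyed hash
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A local emulator only has to reproduce the server's half of this dance well
# enough that the game accepts the answers - which is why, once the exchange is
# understood, the whole check can be spoofed on the player's own machine.
challenge = server_issue_challenge()
print(server_verify(challenge, client_sign_challenge(challenge)))  # True
```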

But **BigT **is talking about that concept taken to its extreme, which is game publishers essentially streaming the video output of processes happening 100% in-house. And that’s a non-starter for a number of reasons.

For me, the player, I would simply refuse to enter that scheme. When I’m buying a game, I’m buying it. Not leasing it until the publisher arbitrarily decides it’s no longer profitable to let me play it.

For the publisher, it’d represent a huge infrastructure investment, both to have umpteen machines dedicated to running the complex processes involved in modern games and to distribute that data over the Internet. I mean, shit, they won’t even run dedicated FPS servers any more. Even in MMOs most of the work is done by your PC, and the data being exchanged with the server is surprisingly small - and even that causes problems, desyncs, huge lag etc… Now imagine games where, instead of it just being the other live players who sometimes teleport or bug out or do impossible things because their internet connection dropped a packet, or who force you to play the game in slow motion because their internet connection is crap, *everything* in the game has to go through these channels. Your own mouse/keyboard inputs, the game’s video output, the sounds… it would be completely unplayable.

Live streaming video might be a Thing these days, but it involves large delays (typically the streamer is already ~10 seconds “into the future” from the audience’s POV), the video quality is low, and it still requires a ton of processing power & HD space to play the game AND record it AND encode it in real time. The whole process is fraught with hiccups and technical issues to boot.

Now, one could argue that “yeah, right *now* it’s not workable, but with improvements in technology…” - and the answer to that is, self-evidently, that games will always fill out the bounds that technology allows (and sometimes push them forward - see Crysis). By the time all consumers have the 50 Gig/sec, perfectly reliable, zero-ping internet connections that would be required to play a game like The Witcher 3 remotely in a way that is transparent, with a game experience matching that of playing it today on dedicated local machines, that stupendously beautiful, state-of-the-art game will look like fucking Pac-Man.

OnLive did exactly what you are saying. It wasn’t unplayable, although the service wasn’t great (and it went out of business).

You’ve gone over the disadvantages. Let me cover the advantages:

First of all, yes, you’d need 50 Mbps or so of stable, guaranteed connection to every user. That means the ISP would need to give packets from this service priority over everything else, so that they are never dropped or delayed. OnLive works OK with 10 megabits, but to have a live 1080p video stream with no visible compression artifacts, you would need about 50.
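As a rough back-of-the-envelope check on that figure (ballpark numbers only, assuming 1080p at 60 frames per second and 24 bits per pixel):

```python
# Back-of-the-envelope bitrate for a 1080p60 stream (ballpark figures only).
width, height, bits_per_pixel, fps = 1920, 1080, 24, 60

raw_bps = width * height * bits_per_pixel * fps    # uncompressed video
raw_mbps = raw_bps / 1e6                           # ~2986 Mbps, i.e. ~3 Gbps

target_mbps = 50                                   # the figure quoted above
compression_ratio = raw_mbps / target_mbps         # ~60:1

print(f"raw: {raw_mbps:.0f} Mbps, needs roughly {compression_ratio:.0f}:1 compression")
```

So a 50 Mbps stream still implies compressing the raw frames by a factor of about 60, in real time, which is where the quality trade-off comes from.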

Anyways, the technology to build such a network is possible. Cable and DSL modems are both that fast if the line quality is good, and fiber optics is a thing. The problem is partly that the corrupt way ISP monopolies work in the USA prevents investment in such services (ISPs have to be regulated like a utility, or they will jerk users around forever), and partly that the current debate over “net neutrality” precludes even a legal way to offer a service where latency and packet loss are guaranteed.

(See, by taking away the ISPs’ ability to screw everyone over by making deep-pocketed web services unusably slow to access unless they pay up, you also take away their ability to offer enhanced service to the services that genuinely need perfect performance.)

So, ok, assume you have the tech - or you are in Europe or Korea. Why do it this way?

  1. There’s a utilization problem. For every $400 Xbox or PS4 someone purchases, the console is only running its CPU/GPU flat out a few hours a day on average, at most. The rest of the time, it sits idle. Moreover, any time anyone plays a game on their console or gaming rig that does not use 100% of the CPU and GPU power, that’s an unused resource.

With cloud gaming, servers in the cloud that are running less CPU/GPU-intensive games can run several instances of those games at once (see the toy sketch after this list). That means that one high-end PC can serve several users at the same time, on average, instead of each user having to own a high-end rig.

  2. There’s an install base problem. Games we can barely imagine today are technically possible. Imagine a game that requires 8 video cards and 100 gigabytes of RAM to render a photorealistic virtual world where everything can be edited and the world itself changes according to player actions. Well, nobody owns a computer like that. So game developers have to shrink their game down to whatever the lowest common denominator is - in this case, the hardware specs of the Xbox One (the weaker console this generation).

  3. There’s a problem with failures due to software and hardware differences on user machines. PC games crash and fail all the time because some users don’t have the hardware or software the game was developed for. Console titles have to be developed for two distinctly different machines, and that conversion costs millions of dollars per title.

  4. Players aren’t willing to try risky new games because they have to pony up the $50-$60 or whatever up front. If they instead subscribed to a service where, for $30 a month or something, they get to play any game they like, you could distribute the money according to how many hours users actually spend playing each game. This is also fairer to the game developers - better games hold player attention for longer and would be rewarded accordingly.

  5. Game developers would be able to produce their games for a single target platform (some variant of Linux PC, probably), and 100% of every game sold would run on that platform, since the game would run on a server farm using virtual machines. When a game crashes or otherwise fails, they would receive a memory dump of the exact game state (basically you suspend the virtual machine and save the memory snapshot) and have a much better chance of discovering the exact cause of the failure and fixing it.

  6. The only gatekeepers here are the ISPs. The server farms would compete with one another, and users would probably have subscriptions covering multiple farms, so no single company would have the power to control everything. The only reason the ISPs are a gatekeeper is that, obviously, they have to cooperate or nothing works.

  7. Users could play the highest-end games on any device, at any time. Decoding even 50-megabit live video is not a difficult task for modern hardware - the chips in iPads and other tablets can do it easily.
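To illustrate point 1, here’s a toy sketch of the utilization idea: pack game sessions onto shared servers by estimated GPU load instead of dedicating one box per player. The loads and capacity numbers are invented purely for illustration:

```python
# Toy first-fit packing of game sessions onto shared servers (illustrative numbers).
from typing import List

def pack_sessions(session_loads: List[float], server_capacity: float = 1.0) -> List[List[float]]:
    """Each returned list is one server's worth of concurrent sessions."""
    servers: List[List[float]] = []
    for load in sorted(session_loads, reverse=True):
        for server in servers:
            if sum(server) + load <= server_capacity:
                server.append(load)   # fits alongside the sessions already there
                break
        else:
            servers.append([load])    # no room anywhere, spin up another server
    return servers

# Ten players running lighter titles (30-50% of a GPU each) need far fewer
# shared machines than ten dedicated consoles sitting mostly idle.
loads = [0.5, 0.3, 0.4, 0.3, 0.5, 0.4, 0.3, 0.5, 0.3, 0.4]
print(len(pack_sessions(loads)), "servers instead of", len(loads), "consoles")  # 5 vs 10
```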

Anyways, the current gaming industry does not reflect the limit of what is possible in games. Having the games hosted in a cloud server farm lifts a lot of current limitations, although it requires something we don’t have in the United States, mainly due to corrupt politics.

Ah, but that works both ways. Because obviously the infrastructure for remote play needs to be able to cover every user playing at once, or at least a large majority of them, if only for the peak hours - 8 PM to midnight or so. Otherwise you’d have to choose between giving some players perfect gameplay while others are plain shut out (which is cause for grousing), or giving every user crummy, laggy gameplay (which is cause for even *more* grousing).

A solution to a problem that doesn’t exist. The user is gonna have the rig regardless, unless every game provider out there makes the switch at the same time and the need for a high end game rig goes from 1 to 0.

True enough (and god knows we PC gamers bitch about that one). But that’s still more infrastructure cost on the publisher side, who right now, generally speaking, have to invest $0 in that kind of infrastructure, nor lease any beyond the odd one-off point-of-sale system.

There are pros and cons to standardization of hardware, just as there are to standards for software.

Oh, I’m sure publishers would find a way to still screw developers just fiiiine. Loads of experience there :).

Also, a solution to a problem that doesn’t exist - a prevalent philosophy these days is to pirate every game to test the waters, and buy the ones you actually wind up playing for more than a couple of hours. This is in response to publishers hyping shit (or downright broken) games and not providing demos any more.
Hell, considering the publishers don’t want people to be able to sample derivative, crappy, broken games for themselves any more, I doubt they’d go along with a “get paid per person who *actually* winds up playing your game” scheme either. Then they’d have to actually release good games instead of copy-pasting crap; spend time debugging; provide long-lasting enjoyment instead of paying game journalists for good reviews based on the first level… That’d be, like, *work*!

Granted - again, standardization has its upsides. It also has downsides - for example, what if I can afford much better gear than what the publisher has dedicated to running game X (which, necessarily, would bottom out at whatever’s most profitable, irrespective of customer enjoyment or idiosyncrasies)?
Also, no mods. Ever.

Not really. You might see even *more *consolidation of farms, games and studios than are going on right now. Imagine Ubisoft running everything, all the time. I just vomited in my mouth a little.

True. But again, downside: interface & controls would need to match all available devices. Which is already kind of an issue (the mouse & keyboard setup for Skyrim was nigh unplayable, and humongous UI fonts, because they’re supposed to be readable on a TV, are a thing in general). Now imagine if the game I play on a 1920x1080 screen - hardly top of the line - is meant to be playable on iPhones too. Yeah.

Capacity sizing is an engineering problem. It can be solved competently if competent people work on it. Notice how Google doesn’t generally crash or even slow down when there’s a rush of traffic? Or how the utility company doesn’t generally fail on a hot day? My point regarding capacity was cost: right now, say, a gamer has to spend $400 on a new game console every 5 years, and then pays for that console again through an additional $20 surcharge on every game purchased for it. (Consoles are generally sold at slim margins or even a loss - even if the sticker price covers the manufacturing cost, it often does not cover the R&D costs.)

If the user buys 5 games a year for 5 years, that’s $400 + (25 × $20) = $900 over 5 years.

I’m trying to say that right now, capacity sizing means every user everywhere has their own hardware all the time. If, at peak times, only half of gamers are playing on their game consoles, that means that as a society we have manufactured twice as many game consoles as we actually needed. So the “free market” cost would be lower with capacity sizing and cloud systems, because fewer chips would be built.

Or, on a more pedestrian level. Suppose you’re at your house and 3 friends come over. You do have enough cheap TVs and tablets for everyone to have their own screen, you just don’t have enough $400 game consoles for everybody. Everybody wants to play. 4-player split screen is an inferior solution to this problem. Instead, each of your friends could probably pay about $5 for a day of access, or use their own accounts if they already have one, and this would be much cheaper than buying 3 extra $400 game consoles so whenever friends come over, everyone has their own screen. Or the age old problem of having that little brother who wants to play, but, again, you only have 1 console…

Also, you could use the under-used game servers during off-peak times, when fewer gamers are on them, to do physics simulations for other customers, etc…

A marketplace where game developers/publishers are paid according to how many hours users spend in each game (which is a reasonably OK metric for how good a game is) is a fairer marketplace - there’s a minimal sketch of the split after the list below. Many of the negative behaviors by game publishers you cite would be discouraged in such a situation. Right now, the game publishers are able to get away with a lot of their bad behavior because

  1. Only they have the $50-$100 million in financing on hand to actually do a AAA game project. No one else can compete because they cannot get their hands on that much cash.
  2. They own exclusive licenses to titles that have a lot of cachet with gamers. A lot of people are going to buy the next Call of Duty because previous Call of Duty games were very good. (Well, sorta - I mean, the early ones, where it was like you were the star of your own action movie, were great. But then they didn’t really expand on the idea and kept making the same thing over and over, with less bug testing…)
  3. They have the money to pay for massive multi-million dollar ad campaigns

And so on.
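Here’s a minimal sketch of the “paid by hours played” split I keep describing. The pool size, hour counts, and game names are all made up for illustration:

```python
# Split one month's subscription pool by share of hours actually played (toy numbers).
def split_pool(pool_dollars: float, hours_by_game: dict) -> dict:
    total_hours = sum(hours_by_game.values())
    return {game: pool_dollars * hours / total_hours
            for game, hours in hours_by_game.items()}

print(split_pool(1_000_000, {"GoodRPG": 700_000, "CopyPasteShooter": 300_000}))
# {'GoodRPG': 700000.0, 'CopyPasteShooter': 300000.0}
```

The game that holds players’ attention for 70% of the hours gets 70% of the pool; a hyped game that everyone drops after the first level gets paid accordingly.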

One final comment: standardization of hardware is always a good thing. Bluntly speaking, are you a software developer or an engineer? Because if you are, you should know this - you never want to develop for a system that behaves differently depending on what it’s running on. That causes no end of problems that can take you years to fix. “Platform abstraction layers” are a hugely complex and failure-prone software solution to a problem you don’t have if your hardware is standard.

Similarly, sizing for different screen sizes/inputs is straightforward. You just change your render target settings, and your game code is still running on the same virtual machine in the data center…

There are design issues, in that you would ideally want games that do not require ultra-fine manipulation with one particular controller type in order to be playable. That is, it’s extremely straightforward to write a game, running on the same software stack, that works with controller input, mouse input, or tilt input for pointer control. The problem is that mouse input is razor sharp, while accelerometer data is noisy and not very precise. So if your game were a remake of Counter-Strike, where clicking on the heads of rivals from 50 meters away is the main gameplay mechanic, it’s going to be a very poor experience on anything but the PC.
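As a sketch of what that abstraction might look like (the names and the smoothing constant are illustrative, not any particular engine’s API): one pointer interface, several sources, and only the noisy one needs filtering:

```python
# One pointer interface, several input sources; only the noisy one is smoothed.
from abc import ABC, abstractmethod
from typing import Callable, Tuple

class PointerSource(ABC):
    @abstractmethod
    def read(self) -> Tuple[float, float]:
        """Return a pointer position in normalized screen coordinates."""

class MouseSource(PointerSource):
    def __init__(self, raw: Callable[[], Tuple[float, float]]):
        self.raw = raw
    def read(self):
        return self.raw()                 # razor sharp: pass it through untouched

class TiltSource(PointerSource):
    def __init__(self, raw: Callable[[], Tuple[float, float]], alpha: float = 0.2):
        self.raw, self.alpha = raw, alpha
        self.smoothed = (0.5, 0.5)
    def read(self):
        # Exponential smoothing tames accelerometer jitter but costs precision,
        # which is exactly why headshots-at-50-meters gameplay suffers on tilt.
        x, y = self.raw()
        sx, sy = self.smoothed
        self.smoothed = (sx + self.alpha * (x - sx), sy + self.alpha * (y - sy))
        return self.smoothed

# The game loop only ever calls PointerSource.read(), whichever device is attached.
```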

Lag is still an issue, and will be no matter what happens with the technology. No matter what they come up with, there’s no getting around the speed of light. The only way to do that is to put the processor close to the user, and in the current model, it already is.

Moderator Action

While DRM isn’t restricted to games, this conversation seems to be focusing on DRM as used in games. As such, it’s better to toss this over to the Game Room.

Moving thread from General Questions to the Game Room.

My point was also about cost, and somewhat about efficiency. Regardless of creative engineering solutions to capacity problems, since gamers only play games ~4 hours a day, the full capacity of your cloud farm goes unused 20 hours of the day. It’s not like you can store this stuff up for later distribution :D. And no, the internet being global doesn’t solve that problem with a constantly shifting base of users, because, as the appropriately named **Chronos **says: lag.

Beyond that, if the world ran on efficiency that might be the way things were done, but that’s not really something we care about. We do plenty of redundant shit because it’s more comfortable, or because we don’t want to socialize this or that aspect of our lives. And, in this particular case, not only would I not trust anybody but me with my hardware picks, I think game developers and game publishers are pretty fine with the customer shouldering the financial burden of their rigs rather than the other way 'round, even if the customer is actually screwing him/herself.

Used to be a code monkey, yes. So I do know it’s a good thing on the *developer* side. As you say, it makes coding leagues easier. But what if the hardware chosen as the standard is crap (and it will always be crap for some stuff, because a chip optimized for *this* thing isn’t optimized for *that* thing)? It would be a lot easier to style webpages if everybody was forced to use Internet Explorer, but… yeah.

Besides, people have different priorities - some are fine with playing on lower settings as long as the game’s good, some are just cash strapped, some people just can’t stomach older graphics etc… whereas a server farm, and its architecture, would be run on a single operative principle : what’s cost-efficient and profitable. The Xbox is pretty damn “lowest common denominator” under the hood.

And I have zero, but ZERO, illusions about the owners of the large cloud server farms in your model: they would have both the ability and the willingness to play shenanigans. On players and developers both, since now the latter have to actually *pay* for their game to be playable in the first place - let’s face it, few software studios have the resources required to run a global cloud farm. So: “Oh, you want your game to run on the top-of-the-line machines? That costs extra. Oops, Battlefield Call of Honor VIII’s publishers paid us more, sorry, they get the good rigs this month. Oh, and they paid us even more to throttle *your* game, isn’t that nice of them?”.
You’d basically be consolidating the power of the large publishing houses AND that of the ISPs (in a theoretical non-neutral 'net) into one large besuited corporate package. Yeah, that’d become Evil in no time at all :).

Once upon a time, I recall that when I bought a new game, I’d go ahead and look for a no-CD patch for it. Being able to put the disc away was enough of a convenience that it was worth the few minutes to track down a patch that gave me, in effect, the same thing I’d have gotten if I hadn’t bought the game at all. Now that the games I get hardly ever require a disc sitting in the drive, I don’t even think about the pirate scene. But I remember there would be buzz about other games, how their DRM was tantamount to a rootkit and caused all kinds of other problems. The same scene that took me a little time to solve a little annoyance, other legitimate paying customers were turning to in order to solve problems actually caused by intrusive and destructive DRM. Even for legitimate customers, pirates were providing a superior service.

I’m not sure I believe that the ‘good will’ of customers has such a high cash value. It works on me, but I kind of suspect the rest of the internet is a wretched hive of scum and villainy. Lack of bad will may be more important to the bottom line, but I did, after all, buy Dragon Age 3 even after Dragon Age 2 and the ending of Mass Effect 3. That was, of course, without the problem of the game effectively treating me as guilty-until-proven-innocent every time I fired it up; if it does do that, it’s become pretty invisible.

For a high profile and genuinely compelling game like The Witcher 3 or Dragon Age: Inquisition the desire to play may well overcome bad will generated. I’m not sure that’s the case with the new Sim City, for example. Among the complaints I’ve heard about it, I don’t hear much about it being worth the extra hassle of trying to get permission to play your game from an over-committed server, or about the forced-multiplayer aspect being considered fun.

The new SimCity is a compilation of the best “Fuck You !” moves in the industry. It’s kind of hilarious, in a trainwreck featuring clowns kind of way.

Well, all those amazing PR moves sold a lot of copies - copies of Cities: Skylines, that is.

I think building big farms to manage content remotely is an unlikely future path for several reasons. First, Kobal2 is right that it’s going to be an extremely hard balancing problem to make the capacity useful for most of the day. Second, the hardware is very specialized, which is one of the things that separates game technology from most computer tech. Modern game processing requires not only powerful processor chips but also dedicated graphics. It’s hard to see how that can be avoided for the foreseeable future, and it drastically limits the utility or plausibility of a big, remote game farm.

In essence, you’re still going to need dedicated machines for as many users as you might possibly have at once, and that might easily be a very, very large proportion of your userbase. Like it or not, people won’t pay unless they can play an extremely large proportion of the times they want to connect. Once in a blue moon, you can get away with no service. But if only half the userbase can play Call of Duty on launch day, they will be pissed. And if you don’t buy enough licenses, or your internet connection gets shaky, or your machines need big upgrades before people can play… then your service is not going to be long-lived.

Also, we should remember that companies have been shattered by making bad hardware choices. Most of the companies that have made game consoles have, at one point or another, either been forced out of the market or been in serious danger of being shoved out of the business entirely. The result is that competitive hardware is down to two current PC competitors and two current console competitors, and the consoles are actually dependent on the technology developed by those two PC companies.

Further, even if all those problems were taken care of, and we had no trouble at all with last-mile connectivity (because only a very small portion of the market could possibly use it right now), it’s still unclear whether it would work, because it might not actually be financially viable. Right now, an amortized console probably only costs around 5 bucks a month, and a large portion of the market does not spend another $720 a year on full-price new titles. So how much can such a service charge and still be worthwhile - and can it actually provide access to old games at all?

I’m not going to argue whether what I said is doable. I will point out that the idea that games keep pushing the edges is belied by consoles, but that doesn’t mean remote play will work. (Though Sony is really planning hard on it working out…)

But I will point out that my ping times to Google are already less than 30 ms on a rather crappy connection over Wi-Fi. That’s pretty snappy, considering we can handle as much as 200 ms of lag (though we usually shoot for around half that). And seeing as 60 fps (~17 ms per frame) visual processing is already possible, we’re nowhere near a problem.

And, of course, this is nowhere near the speed of light, which could travel nearly 9000 km in those 30 ms. Any server will be a bit closer than that.
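Putting the quoted figures together (this is just arithmetic on the numbers above, nothing more):

```python
# The latency budget above, in numbers (all figures are the ones quoted in-thread).
ping_ms = 30                        # observed round trip to a nearby big service
tolerable_ms = 200                  # rough upper bound players will put up with
target_ms = tolerable_ms / 2        # what "we usually shoot for"
frame_ms = 1000 / 60                # one frame at 60 fps, ~16.7 ms

headroom = target_ms - ping_ms - frame_ms
print(f"headroom after network + one frame: ~{headroom:.0f} ms")        # ~53 ms

c_km_per_ms = 299_792.458 / 1000    # speed of light in vacuum, km per millisecond
print(f"light covers ~{c_km_per_ms * ping_ms:.0f} km in {ping_ms} ms")  # ~8994 km
```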


Actually, even with consoles the boundaries are often being pushed. Even with static hardware, programmers get better and better. Each new generation of hardware is (hypothetically) designed for the moment when they can’t squeeze any more performance out of the existing machine.

Yeah, we are. Actual real-world response times in practice can easily spike above tolerable levels repeatedly.

The network is and isn’t the problem. Mostly it’s the computers at either end. However, where the network does become a problem is that other computers have to link the two ends, and that’s not instantaneous. If you’d like to pay the money to lay a wire directly between you and the server farm, then on that day you can officially ignore all the in-between, but you’d likely find that more expensive than a dozen consoles.

To return to the original point for a moment, the purpose of DRM is not to decrease piracy, it is to increase sales. Those are not the same thing. DRM does not increase sales if it deters a pirate who wasn’t going to pay for your game anyway, nor does it increase sales if it pisses off would-be customers to the point where they don’t buy this or your next game.

The intended purpose is to increase sales, but many DRM schemes are implemented in a way that betrays ignorance of that fact.

The purpose of “DRM” is in some sort of ideal world to increase sales.
The purpose of any specific DRM implementation may instead be to “reduce piracy” in the mistaken belief that by doing so, they will increase sales.

But yeah, the point has pretty much been beaten to death - for the number of people who pirated your game but would otherwise have bought it (note: this number is generally small. When was the last time you searched the internet for a game, couldn’t find a good crack, and said “Screw it, I’m gonna go give Electronic Arts fifty bucks for a real copy!”? Exactly.), the cost of purchasing/implementing/maintaining a DRM solution can be very large.

Honestly, I am not sure why anyone BOTHERS with DRM beyond “We are only releasing this game on Steam.” :stuck_out_tongue: