Oh I know. Part of my job is telling government people that they still need to take care of security once their system moves to the cloud. They often don’t believe me!
I hear you. Telling developers about that shift in responsibility, while also trying to get security teams to collaborate with those developers, is a large part of my job.
My job is more similar to a family counselor than a technical resource these days.
But to be fair, good security is hard. Ironically, the cloud provides tools that can actually make it easier, as long as you don’t try to forklift in the current, outdated models.
My company uses Office 365 OneDrive for file storage and Content Server for shared work projects. Before, we had multi-partitioned servers with file paths a mile long. We used to have to label folders “First pass”, “TM review”, “For vendor”, “From vendor”, and so forth. With Content Server, everything’s better organized, and files can be reserved and unreserved to prevent more than one person from accessing them at the same time. We also use Daptiv to track each task per department.
Both clouds had some growing pains, but we eventually smoothed out the bugs. We get the occasional bandwidth slowdown, but so far it’s all good. There’s been no difference in speed as far as I can tell.
Then how can they afford to run it?
Here are a few ways:
[ul]
[li]Multiple customers running on a single physical machine.[/li]
[li]Industrial power rates are lower than commercial rates, which are lower than residential rates.[/li]
[li]Buying purpose-built systems from ODMs, with unneeded features dropped and more efficiency gained through optimized configurations and component selection.[/li]
[li]Fully leveraging the depreciation schedules in their taxes.[/li]
[/ul]
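To make the first two points concrete, here is a back-of-the-envelope sketch of how multi-tenancy plus cheaper power changes the per-customer numbers. Every figure in it is a made-up placeholder, not vendor data, so treat it as an illustration of the shape of the math rather than a real quote.

[code]
# Back-of-the-envelope sketch: multi-tenancy plus industrial power rates.
# All numbers below are assumed placeholders, not real vendor figures.

server_capex = 6000.0          # purpose-built ODM box, USD (assumed)
lifespan_years = 4
tenants_per_server = 8         # customers/VMs sharing one physical machine

avg_power_watts = 300.0
hours_per_year = 24 * 365

industrial_rate = 0.07         # USD per kWh (assumed industrial rate)
commercial_rate = 0.12         # USD per kWh (assumed small-business rate)

def lifetime_cost(rate_per_kwh):
    """Capex plus lifetime energy cost for one server at a given power rate."""
    energy_kwh = avg_power_watts / 1000 * hours_per_year * lifespan_years
    return server_capex + energy_kwh * rate_per_kwh

shared = lifetime_cost(industrial_rate) / tenants_per_server   # cloud-style
dedicated = lifetime_cost(commercial_rate)                     # one customer, one box

print(f"per-tenant lifetime cost, shared hardware:  ${shared:,.0f}")
print(f"dedicated on-prem lifetime cost:            ${dedicated:,.0f}")
[/code]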
In some cases, especially if a company has to maintain a physical data center anyway for needs like macOS build farms (which are not cloud friendly due to EULA restrictions), you can hit a point where it is cheaper to run your own systems, but typically only if you are willing to run on similar grades of hardware.
If you look at server designs like Microsoft’s or Facebook’s OCP servers you will see how they have shared power supplies and lack typical “Enterprise” features.
While there is no reliable public data, rough back-of-the-envelope estimates suggest they make back at least three times the total cost of a machine over its lifespan, including power, space, capital costs, etc.
They can size machines in the sweet spot between horizontal and vertical scaling based on point-in-time prices and costs, while also reducing power draw. In theory a private data center could do the same, but most people will not be comfortable choosing low-wattage parts with no redundancy and using horizontal scaling to provide availability. We have professionals with decades of experience from an era where this wasn’t an option, and we all have a hard time with change even when the tools now exist.
This is a great example of what I am talking about https://youtu.be/MFzDaBzBlL0
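For a rough feel of that trade-off, here is a small sketch comparing one big redundant box against an N+1 fleet of cheap, low-wattage nodes. The prices, node counts, and per-node availability figures are all assumptions chosen purely for illustration.

[code]
# Sketch of the "many cheap nodes vs. one big redundant box" trade-off.
# All prices and availability figures below are assumptions.
from math import comb

def fleet_availability(nodes, needed, node_availability):
    """Probability that at least `needed` of `nodes` independent nodes are up."""
    p = node_availability
    return sum(comb(nodes, k) * p**k * (1 - p)**(nodes - k)
               for k in range(needed, nodes + 1))

# One large, fully redundant "enterprise" server.
big_box_cost = 40_000
big_box_availability = 0.999

# Five low-wattage nodes with no redundant PSUs/fans; the workload only
# needs the capacity of four, so the fifth is the N+1 spare.
fleet_cost = 5 * 4_000
fleet_avail = fleet_availability(nodes=5, needed=4, node_availability=0.995)

print(f"big box:     ${big_box_cost:,}  availability {big_box_availability:.5f}")
print(f"cheap fleet: ${fleet_cost:,}  availability {fleet_avail:.5f}")
[/code]

With these made-up numbers the cheap fleet comes out ahead on both cost and availability, which is the point: horizontal scaling carries the redundancy instead of the hardware.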
This is the thing. You can migrate some functions to cloud servers. Or jump in completely.
The first and obvious one is email. Nobody smaller than a decent enterprise (let’s say 200+ users, more likely 1000+) should have their own email server. If it’s time to replace your email server, move to the cloud. With all the hardware, service packs, upgrades, monitoring disk space, and backups, it’s a helluva lot simpler to let Microsoft or whoever handle it.
Then I’m seeing a lot of other applications converting to cloud. Actually, they are web-based. Does your company need fairly high-end accounting software? Why deal with the same hassles mentioned with email - servers, SQL database maintenance, backups, service packs, etc.? I’ve run across a number of accounting and ERP solutions where your company is just another instance on a server maintained on the internet by the company providing the software. It’s easier for them - they don’t have to deal with hundreds of different sites, some of whom may want support for versions years old, unpatched, on old server operating systems, etc. They update one site and all their clients are on the new software. This is probably the end game for any complicated, database-centric purchased software.
The biggest leap is to move the file services to the cloud. The setups I’ve seen involve either virtual desktops (everyone runs a private virtual machine) or terminal services (RDS, Citrix, etc.). Surprisingly, you can run dozens of people on a single terminal server and get the same response time as most PCs - i.e. not noticeable. Most PCs spend the majority of their time idling. Sharing a server puts this to good use.
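A quick sanity check on that consolidation claim, with utilization and core counts that are assumptions rather than measurements:

[code]
# Rough terminal-server sizing arithmetic. All figures are assumptions.

avg_pc_cpu_utilization = 0.05     # a typical office PC sits ~95% idle
pc_cores = 4
server_cores = 32
target_server_utilization = 0.60  # leave headroom for bursts and login storms

demand_per_user_cores = avg_pc_cpu_utilization * pc_cores
usable_server_cores = server_cores * target_server_utilization

users_per_server = round(usable_server_cores / demand_per_user_cores)
print(f"roughly {users_per_server} concurrent users per terminal server")
# about 96 with these made-up numbers; real sizing also has to account
# for RAM, storage IOPS, and graphics-heavy users
[/code]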
I haven’t seen a good setup for local PCs and remote servers - there will still be appreciable lag bringing a file down across the WAN or internet; but then, unless your whole business is in the same building as the servers, this is typical with any local servers too. The office crosstown relies on internet speeds, as does the one several towns over; or there’s the headache of maintaining multiple servers, some remote.
For the company going cloud, they lose a lot of the maintenance headaches. For the cloud providers, there are efficiencies of scale and efficiencies of technology. Cloud servers will be virtual machines (VMs) on a large collection of hosts. VMs can be migrated from host to host - live, even! - so a host can be emptied and have maintenance done at any time, instead of always bringing things down after hours. VMs allow for snapshot backups, and a large datacenter can afford high-end storage and network tech to do backups without interrupting processing. Upgrades or restores can be done on separate test VMs in parallel, so there is no need to bring down existing servers; and having a large datacenter allows testing additional VMs without impacting production.
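For a concrete picture of those two mechanisms, here is a minimal sketch using the libvirt Python bindings on a KVM host with shared storage. The host URIs and VM name are placeholders, and the big cloud providers of course use their own orchestration layers; this just shows that the underlying operations are ordinary, scriptable calls.

[code]
# Minimal libvirt/KVM sketch of snapshot backups and live migration.
# Host URIs and the VM name are placeholders; assumes shared storage.
import libvirt

src = libvirt.open('qemu:///system')                 # host that needs maintenance
dst = libvirt.open('qemu+ssh://spare-host/system')   # placeholder destination host

dom = src.lookupByName('file-server-vm')             # placeholder guest name

# Snapshot-style backup: capture disk state without stopping the guest.
snapshot_xml = "<domainsnapshot><name>pre-maintenance</name></domainsnapshot>"
dom.snapshotCreateXML(snapshot_xml,
                      libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY)

# Live-migrate the running guest so the source host can be patched and
# rebooted without an after-hours outage window.
migrated = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
print(f"guest now running on: {migrated.name()}")
[/code]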
The downside: there are some features - live video and video editing, for example - which simply hog processing and video bandwidth. These will probably need local processing for now. There are also occasional specialized peripherals that will likely need to be connected directly to a PC, because the round-trip time would be unacceptable, and they would mean installing specialized drivers on a shared terminal server.
Everyone says “but what if someone chops the internet cable?” The internet is pretty reliable nowadays; I probably see as many or more power failures as internet outages. Plus, if your company is in any way distributed, the people in remote sites will have this issue regardless - their files and database applications and email will not be available.
Efficiency of scale. Your home computer running 24/7 probably does (or used to) suck up a lot of electricity sitting there doing nothing. One server in a datacenter is doing the same job for dozens or hundreds of different websites.
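The rough arithmetic behind that, with an assumed wattage and an assumed residential rate:

[code]
# Idle-PC electricity arithmetic. Wattage and price per kWh are assumptions.
idle_watts = 100         # older desktop idling 24/7 (assumed)
rate_per_kwh = 0.12      # USD, assumed residential rate

kwh_per_year = idle_watts / 1000 * 24 * 365
print(f"{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * rate_per_kwh:.0f}/year, mostly doing nothing")
# roughly 876 kWh and $105/year with these numbers
[/code]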
While microservices, SOA, and loose coupling are good methods to be aware of and use where appropriate, there is no universal approach that provides the lowest cost over time, other than being smart and mapping the solution to the problem.
If everyone did what Bezos mandated then most companies would be wasting large amounts of time and money on IT/dev/support building out those capabilities and dealing with all of the downsides, and not getting ROI on that because only some situations actually benefit from that model.
The trick is to be smart, be aware of the problem space (not always obvious), and apply tools that map well to that problem space. I frequently run into people who are pushing method X or method Y without really understanding which problem they think they are solving (let alone whether they are actually solving it).
That is why I explicitly called out the fact that the flavor of Kool-Aid doesn’t matter.
What does matter is avoiding a culture that tightly couples systems and accumulates tech debt.
When you are small, vertical scaling is the simpler approach, though it is more limiting and tends to bring exponentially increasing capital costs for diminishing uptime benefits. One does need to avoid premature optimization, and the use case should be the driving factor for decisions, but painting yourself into a corner is the problem.
I was responding to the provided scenario, related to people who are avoiding the cloud due to security concerns. If your build system or tech debt is so bad that you can’t even apply security patches, you have a serious problem if you have publicly accessible services. As I stated before, it is trivial to scan the entire internet in less than 5 minutes to find every public host in the world that has a given vulnerability.
There is a small increase in the attack surface from using a shared cloud provider, and at least for US government customers that isn’t even a problem because of the government-only cloud offerings. But the attack surface from running outdated versions of software with well-known vulnerabilities is huge.
Organizations that choose to defer updates to their product also typically defer updates to their physical hardware. It is almost the rule that some of the more fragile, critical legacy systems will be running on older servers with unpatched out-of-band management software. As a specific example, older Dell systems were affected by the cipher suite 0 bug, where the iDRAC would ask for a password but didn’t care what password you provided.
I paid for a scan of the public internet looking for Cisco VPN 3000 devices, which have not had a security update in over a decade now, and found a surprising number of them still online, including at locations that seemed to handle sensitive data. I did this trying to justify the budget to replace the one at a previous employer, hoping to show that we were one of the few companies still running one…because somehow running an edge device without security updates, and with known attack vectors, was OK under the security policy.
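For illustration only: you don’t even need to pay for a full scan to get a feel for the problem. A device-search service such as Shodan, via its Python library, will surface this kind of ancient edge gear; the query string below is a guess rather than the one used for that scan, and you would need your own API key.

[code]
# Illustrative Shodan query for long-unpatched edge devices.
# The query string is a guess and YOUR_API_KEY is a placeholder.
import shodan

api = shodan.Shodan("YOUR_API_KEY")
results = api.search('"Cisco VPN 3000 Concentrator"')

print(f"{results['total']} hosts matched")
for match in results['matches'][:10]:
    print(match['ip_str'], match.get('org', 'unknown org'))
[/code]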
Due to the shared responsibility model agreements, a cloud provider cannot operate in this way, as it would be easy to frame in a lawsuit as a conscious and voluntary disregard of the need to use reasonable care. If juries were more technical, running Java SE 7 Update 80 would probably also be usable as an example of gross negligence in a lawsuit.
Even without the risk of litigation, Equifax is a good example of why having a fragile, undocumented system really isn’t an excuse for exposing data just because it is hard to actually start addressing the accumulated tech debt.
I realize that this is often not within the power of the individuals working in these groups, but it is very much a real culture problem.
While not absolute, it is almost certain that cloud infrastructure, configured in a way that matches best practices, is more secure than on-premises infrastructure, primarily because most on-premises infrastructure doesn’t follow best practices.