Cloud computing: Anyone successfully using it to replace the local file server?

In general, cloud companies tend to be far better at security than individual IT departments. The likelihood of an information breach in a self-hosted data center is almost always higher than in a cloud system configured according to the vendor's best practices.

Absolute costs are not typically considered in these moves, as the industry has had a long-term issue where capital costs are easy to quantify while operational expenditures are very difficult to quantify. The opex side has been a vague, slushy bucket forever, accounting has learned to be OK with this, and so even the increased opex spend from moving to the cloud is acceptable.

Even with good numbers, there is also an issue around risk avoidance in the industry. For various reasons, IT departments will be far more conservative with product and equipment choices, which results in outsourced contracts and far more expensive "enterprise" hardware tiers, and that will almost always make the cloud cheaper. Companies are fine running on cheap commodity hardware in the cloud, but they would never be willing to buy white-box servers for their own data center, as an example.

There are also the integration costs and automation benefits that tilt the decision toward the cloud, but a lot of this is tech debt and culture problems caused by a tendency toward a model where you pay external vendors for every need and, once again, buy expensive enterprise gear in an attempt to prevent failures rather than mitigate their effects.

The large players in the cloud space were forced to plan for failure purely due to scale and had to abandon the comfortable but non-scaling solution of centralized, fragile, tightly coupled systems that try to avoid downtime on “pet” systems.

As an example, file systems, and thus file servers, have mutual-access needs and assumptions that directly result in consistency being favored over partition tolerance or availability (Brewer's CAP theorem). Tools like Dropbox, S3, Ceph, or other object stores remove the need for strong consistency and thus can be distributed to improve recoverability, scalability, and cost in a way that a typical IT department would never find acceptable, even if it hired and developed the internal skills to support such a configuration.
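To make that concrete, here is a minimal sketch, assuming Python with the boto3 library and a hypothetical bucket name, of what object-store access looks like: every operation is a self-contained PUT or GET of a whole object over an HTTP API, with none of the byte-range writes or file locks a POSIX file share has to coordinate.

```python
# Minimal sketch: object-store access via boto3 (bucket name is hypothetical).
# Unlike a POSIX file share, there are no in-place writes or file locks to
# coordinate; every operation is a self-contained PUT/GET of a whole object,
# which is what lets the backend be distributed and replicated freely.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-team-share"  # hypothetical bucket

# Upload ("write") a document as a single object.
s3.put_object(Bucket=BUCKET, Key="reports/q3.txt", Body=b"quarterly numbers...")

# Download ("read") it back; you always get a complete object.
obj = s3.get_object(Bucket=BUCKET, Key="reports/q3.txt")
print(obj["Body"].read().decode())
```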

In the future we will most likely end up with a hybrid model as the industry reaches a steadier state, but it will be corporate IT that has to change, adopting the newer model that avoids the pitfalls of tightly coupled systems and of typically unneeded strict consistency models that increase costs while reducing performance and, ironically, availability.

Maybe, but every hop along the way also has to be secure. And as I said, how do we really know that the CIA (or MI5 or whoever) isn’t leaning on Amazon? Or one of the hops along the way? If your IT operation suffers a security breach you can fix it; that’s not necessarily the case for the cloud. More to the point, it is the responsibility of a particular identifiable person.

If you are using unencrypted data streams your attack surface is huge no matter what; if you use proper encryption your risk of man-in-the-middle attacks is tiny. Most corporate networks rely on pure edge-based security, which has been outdated for decades, and are actually far more vulnerable to attack.

If you have left yourself open to that attack vector, being in a physical data center, typically with minimal internal segmentation, is a far greater risk; and if you are transmitting critical data over an IP network in the clear, the location doesn't matter.

Unless you are the exception to the rule, I would bet you are using simple VLAN tagging, and that your ILOM and router management interfaces are reachable from each other and accessible to a significant number of employees.
But yes, if you ignore your part of the shared responsibility for cloud security, it can cause problems.

I’ve been out of it for years, but threats from foreign actors were a significant concern in both the military and commercial sectors in which I was involved.

Don’t get me wrong: the cloud as an abstraction concept is great. You just have to think about the implementation.

This is what I’ve been arguing for years. Say that I’ve got 60 people connecting to an on-premise file share. I need to:

- Ensure network connectivity
- Secure the perimeter
- Allow VPN
- Back up the file share
- Check the backups to make sure they actually work
- Rotate tapes or disks offsite

And I’m only one person with a bunch of other tasks to do!

Swap that out for Teams/SharePoint/OneDrive/Visual Studio Team Server (or your choice of other vendors' products) and everything except the first two items disappears. I maintain redundant network connections with diverse providers, so anything less than a major environmental event doesn't affect me. If we lose power, I send my staff home with their laptops and they continue to work. My files are versioned, scanned for malware, and geographically backed up. My chance of a ransomware attack is dramatically lower. Microsoft employs thousands of people who focus on network security, as do the other vendors.

Of course they can do it better!

That is fixed through strong PKI and other good practices. The government system tends to breed habits like ignoring updates, which opens up far more attack vectors, including things like malicious firmware.

https://www.networkcomputing.com/networking/cisco-warns-malicious-firmware/611360444

Unfortunately a lot of "security" compliance is really box checking, and a lot of those policies are based on the ancient Rainbow Series books and the thoroughly discredited trusted-system model.

Show me a government install that isn't using Java 1.6, or at most Java 1.7, or ancient Java libraries like old versions of Jackson with dozens of known remote exploits, and I will be impressed.

But that is separate from the cloud vendors' security responsibilities. Due to the cost of a potential breach, they tend to patch within days, if not hours, when a new vulnerability is found.

Corporate environments and government agencies typically update only once a quarter, or less often for some systems, yet you can pay less than $20 and scan the entire public IPv4 internet in under 10 minutes these days.

The silos produced by a large organization, combined with a philosophy focused on the goal of uptime rather than the goal of reducing interruptions, plus external vendors delivering fragile code bases, make this impossible.

Part of the reason Amazon succeeded is Jeff Bezos's willingness to issue mandates like his well-known directive that every team expose its data and functionality through service interfaces, with no exceptions.

They reached their scale, and could resell their services, because of mandates and decisions like this, while most companies say "Cisco only," "we only use software that comes with commercial contracts," and so on.

It is hard not to throw problems over the fence, ignore the company's needs, or dismiss the need for compassion about how your systems and policies impact others. But as someone who worked in the traditional model for a long time, I know for a fact that this misguided model is the reason the cloud became popular in the first place.

It is a few years old, but here is an example where 77% of .gov domains were running older versions of Java with known security issues and no support.

https://gcn.com/articles/2013/03/27/java-vulnerabilities-goverment-unspported-versions.aspx

My company didn’t allow any of our data to go on the cloud - but we were a Fortune 50 IT company with its own cloud and a lot of security experts. For the average company, the cloud is much safer, as the hacks of plenty of financial and retail companies who should have known better have shown.

As for cutting cable - check out Shark Tank on the Computerworld site for many examples of clowns pressing the emergency shutoff button in the server room by accident.
Another advantage - vulnerabilities have a good chance of affecting other people before they affect you, and getting fixed before you see it. I doubt many in-house shops have the resources to keep up with everything.

Given the speed of light, the lag will be more due to load and the number of hops than raw distance.

When I worked for Sun we had diskless clients called SunRays which did everything using internal servers. They and the emulator for them also worked fine long distance and from home over VPN. I never did it but people packed up their laptop, went from California to Taiwan, plugged it in, and saw no difference.

Yes, I conservatively assume 0.85 ms per 100 backbone miles (if I know the route) for WAGs.

As a comparison, the safe assumption on high-use WiFi is to add around 50 ms, and for an average 7200 RPM hard drive I pencil in ~70 ms average access time as a deliberately pessimistic worst case (the raw rotational math is covered below).

~130 ms is about the point where users start to complain about interactive web experiences, which is also around the average ping time from Seattle to London.

There is a concern with the bandwidth-delay product for some use cases, but for typical file-server tasks, human-driven and light office work, the latency to a cloud provider's region is usually not that bad. Latency is typically the primary performance limiter in systems today, but for this use case it is rarely a problem.
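For anyone who wants to poke at that back-of-the-envelope math, here is a small sketch using only the figures above (0.85 ms per 100 backbone miles, which I read as a one-way number, ~50 ms for busy WiFi, and the ~130 ms annoyance threshold); the Seattle-to-London mileage is my rough assumption, not a measured route, and load plus hop count will add more on top.

```python
# Back-of-the-envelope WAG using only the rules of thumb above.
# The route mileage is an assumed figure, not a traceroute measurement.
MS_PER_100_MILES = 0.85       # conservative one-way figure per 100 backbone miles
BUSY_WIFI_MS = 50             # add for high-use WiFi
ANNOYANCE_THRESHOLD_MS = 130  # roughly where users start complaining

def wag_rtt_ms(backbone_miles: float, on_busy_wifi: bool = False) -> float:
    """Rough round-trip estimate from distance; queuing and hops add more."""
    one_way = backbone_miles / 100 * MS_PER_100_MILES
    rtt = 2 * one_way
    if on_busy_wifi:
        rtt += BUSY_WIFI_MS
    return rtt

# Assume roughly 5,000 backbone miles Seattle -> London (hypothetical figure).
rtt = wag_rtt_ms(5000, on_busy_wifi=True)
print(f"WAG round trip: {rtt:.0f} ms (complaints start around {ANNOYANCE_THRESHOLD_MS} ms)")
```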

  • Sam Walton, fingers crossed: “AWS, AWS, you’re gonna go bust, I can feel it…” *

Before someone uses this WAG math I need to explain: that ~70 ms figure is an average worst-case WAG for iSCSI, with slow drives in RAID 3P and a blown cache, as a safe, pessimistic guess at the worst case.

The raw drive is: 60s / 7200 RPM = 8.33 ms per rotation.

I would never give that other number to clients, and I always try to collect empirical data; do not copy that version for anything except to mock my glass-half-full approach to the worst-case scenario.
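For clarity, here is that raw-drive arithmetic written out (my own working from the formula above); the ~70 ms number is the deliberately pessimistic degraded-array WAG, not this raw figure.

```python
# Raw 7200 RPM drive math from the formula above: 60 s / 7200 RPM.
rotation_ms = 60_000 / 7200          # ~8.33 ms per full rotation
avg_rotational_ms = rotation_ms / 2  # ~4.17 ms average rotational delay
print(f"{rotation_ms:.2f} ms per rotation, ~{avg_rotational_ms:.2f} ms average rotational delay")

# The ~70 ms quoted earlier is NOT this raw number; it is the deliberately
# pessimistic worst-case WAG (iSCSI, slow drives in RAID, blown cache) meant
# only to show how bad a degraded setup can get.
```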

Also note: if you drink the Kool-Aid, as you should when moving to the cloud, you don't really lose the ability to pick your poison. Moving to another cloud provider, or to a hybrid model with a self-hosted component, shouldn't be a problem and in fact should be fairly trivial.

Search your favorite search engine for “pets vs cattle” for more background.

Microsoft and Amazon are not the only cloud vendors.

Moving to the cloud doesn’t, and won’t, fix this issue. GOTS products that were developed in-house by long-gone contractors can’t be updated without major rewrites, so continuous waivers are generated to maintain old, insecure Java versions.

I use Citrix to access live pages on a newspaper production system from home (I’m a copy editor) and rarely notice any lag. If you don’t have to be totally “live” then surely you can just use a system like Google Docs, where you edit locally on your computer and the system saves automatically in the background every few seconds. No lag there.

Yes, the point is that this is the main attack vector, and that it doesn’t significantly change by moving to the cloud. Note that in many cases these would be accessed through an IPsec tunnel and not the open internet.

But that culture is what causes those problems, and if you look past the marketing hype around microservices, the broader service-oriented architecture, or the “cloud”, those models are just collections of loosely coupled services with minimal interdependencies. Yes, there is a shift in complexity, but updating and even replacing services becomes easier.

There are very real advantages to moving to a cloud platform, which does not use that same model for the portions of the shared responsibilities that relate to risk: cloud providers use technologies that are less susceptible to some attacks (really just more modern, like SDN) and/or they are far more likely to keep those components up to date.

Most companies decide not to invest in internal expertise while simultaneously deciding not to update often, due to a culture that either grew organically or was chosen deliberately.

That tech debt piles up, which makes forward movement even more expensive; that may be fine depending on your business priorities, but it leads to serious issues with security.

Those serious security issues present a much larger attack surface than a cloud model adds, and it is almost universally those types of cultures that tend to avoid the cloud based on claimed security concerns.

This is why I was saying security is often just theater: these companies are extremely soft targets, and that has nothing to do with their hosting location.

There are various flavors of Kool-Aid these providers prefer.

https://landing.google.com/sre/book/chapters/introduction.html

But the particular flavor is less important than the culture, which avoids the silos, fiefdoms, and FUD that run counter to business needs. While there are many aspects that made these companies successful, the choice not to pile on tech debt and to build systems that allow for agility and updating is a huge factor in their success.

They also provide simple API access to distributed systems that are complicated to build and nearly impossible to hire for today, which addresses some of the reasons people tended toward the unscalable, monolithic pet model on critical systems. Those tools for providing redundancy and automation can be of huge value too.
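As one small illustration of that “simple API access” point, here is a sketch (again Python with boto3, and a hypothetical bucket name) of turning on object versioning, the kind of redundancy feature that takes real engineering effort to replicate on a self-hosted file server.

```python
# Minimal sketch: enabling object versioning on an S3 bucket via boto3.
# One API call buys per-object version history; the bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="example-team-share",
    VersioningConfiguration={"Status": "Enabled"},
)

# From here on, every overwrite keeps the prior version recoverable, which is
# a big part of the ransomware and accidental-deletion story mentioned above.
print(s3.get_bucket_versioning(Bucket="example-team-share").get("Status"))
```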

Cloud migrations almost universally fail due to cultural problems and battles, but that is also true of internal projects. The point is that most of the security FUD is about problems that are still an issue in self-hosted data centers.

To be clear, these problems don’t always, or even exclusively, arise from the traditional model, and they can arise even if every decision in the past was the correct decision at the time. The sunk-cost fallacy is the problem, but if you have good enough metrics, the costs of moving, whether to a private-cloud or a public-cloud model, do amortize nicely.

But the tendency of companies to avoid change despite the value, or to choose products based on brand name, causes this to fail.

As an example, VMware is too expensive for a private cloud, and its dependence on enterprise storage induces high costs and performance penalties that do not work under a ‘cloud’ model (although they are appropriate under other models). Yet when people try to get away from that, they don’t simply group the ops and dev staff who typically already know how to deal with three-tier web apps and have them run OpenStack; they pay an enterprise OpenStack vendor to come in, and that vendor model works poorly for a private-cloud need.

Typically they are still buying enterprise servers with expensive options from vanity vendors, and the same is true for networking and storage.

The instances you run in AWS or GCE are running on machines that have far more in common with a motherboard strapped to a board than with enterprise gear, and they are typically sourced from vendors that are not vanity badges.

There is a bit of a chicken-and-egg problem, where the difficulty of hiring talent to self-maintain causes people to outsource or build traditional monolithic systems, and those cultures in turn limit the ability to grow talent in house. But the net effect is that you are going to have to get used to the idea of using the public cloud. Perhaps in the future the product landscape and skill sets will improve for the private-cloud model. The reality today, though, is that the opex slush bucket means you are going to be migrating a portion to the public cloud at some point because of the accounting. And because integration costs are some of the largest costs, unless you provide a private system that emulates the way cloud providers operate, you will be participating in a smaller and smaller legacy model.

As per-core performance has pretty much hit its limits, I would encourage anybody in the industry to start working on this sooner rather than later.

The public cloud model wouldn’t be nearly as large had the industry not dug in its heels to protect a model that chose vendors even before the needs were captured. But you will fall further and further behind in your ability to adapt to change if you stay there, whether your systems are in the public cloud or a private one. There are real efficiencies to be gained, and your competitors are taking advantage of them.

TL;DR: this is a culture issue, not a tech issue.

Sorry about that novel length reply, but I do feel it is important information.

You’d be surprised with how many people in government think “moving to the cloud” will fix all security issues as well. Or maybe you wouldn’t be. :slight_smile:

Thanks for all that info, I know I appreciate it.

NIST has pretty comprehensive documentation around this that will help with trying to be objective in that context.

https://www.nist.gov/sites/default/files/documents/itl/cloud/NIST_SP-500-291_Version-2_2013_June18_FINAL.pdf