Last week, AboveTheCloud started a three-part series on multi-cloud vs. single-cloud strategies. In part one of the series, we discussed a key decision that IT leaders need to make when embracing cloud enablement: best of breed vs. best integrated.
As a quick recap, we looked at what it takes to set up a mature presence with a public cloud service provider (CSP) and compared the "big 3" providers: Amazon Web Services, Microsoft Azure, and Google Cloud.
This week, we'll apply those considerations to the benefits and implications of committing to a single cloud provider over multi-cloud.
When enterprises begin leveraging a public cloud provider for Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), they typically start with a proof of concept (POC). Under the POC framework, a team examines the deployment workflow (often called a code pipeline) of a single application into a public cloud provider, comparing and contrasting its development and operational characteristics against running on their own infrastructure. Many POCs also leverage some or all of the CSP-provided code deployment tools, usually with a small and constrained scope to measure cost, capability, and efficacy. POCs are usually run by a committed team of people who own deployment issues and are excited about taking control of their development destiny, working to find custom solutions for the POC application to demonstrate success. POCs provide a valuable service...validation of leadership's goals, requirements, and expectations.
The micro-focused nature of a POC allows the team to prove that they can pipeline code faster, and in many cases with fewer defects, than the status quo. Once completed, measurements are made, everyone congratulates themselves on proving the hypothesis, and the enterprise starts to look at opening the floodgates for more teams and apps.
Yet, by its very nature, a POC measures the efficacy of deploying a single application into a public cloud. It does not measure how well that fits the operating model of the larger enterprise, the implications for applications that are not refactored, whether those "custom solutions" from the POC are repeatable, or how the enterprise will manage and curate those applications once the innovation cycle slows for that app (and the developers move on to other innovation cycles). In short, POCs measure the micro but neither measure nor evaluate at the macro scale.
Once an enterprise commits to cloud enablement as a portion of an application portfolio strategy, IT is usually tasked with figuring out the rest of the puzzle. It is here where the true requirements of cloud are exposed. As discussed in the last article, building an operating model around a public cloud service provider requires enterprise architecture standards across a range of domains.
These are typically structured as elements of a "cloud center of excellence" (CCoE) and if implemented properly, reflect an absolute best practice for cloud enablement.
There are many pitfalls to manage and avoid, yet the irony is that most IT groups focus on cost and usage optimization. While cost and usage optimization is an important practice, it is only one part of the puzzle. In the article "Is cloud cost optimization the next fad diet?" we explore this important element of a CCoE. A singular focus on usage cost often crowds out other important elements such as people skills, data protection, and developing solution patterns for the service catalog. We've found that these elements yield results as good as or better than a singular focus on tangible usage costs.
There are both macro and micro benefits to a single public cloud strategy. At the macro level, skills development is narrowed, focusing on developing experts at fewer things. The benefit of this cannot be overstated...with a single cloud focus, the CCoE team and their specialists can concentrate on one way of doing cloud.

But aren't all three clouds generally the same? Surprisingly, no: different CSPs do similar things in very, very different ways. A clear example of this is Identity and Access Management (IAM) and network policy enforcement. AWS has a robust method of policy enforcement that teams find complicated to master and manage. Consider the layering of Network Access Control Lists (NACLs) and Security Groups. In AWS terms, a NACL is a policy enforcement rule applied at the network segment level, while a security group is a policy enforcement rule applied at the host level. Which should you use? According to Amazon's Well-Architected Framework, environments should use both for defense in depth. Yet having two different layers of security rules can become very complicated, and effective deployments require an in-depth understanding of where and how to use each.

Now contrast that with Google Cloud Platform (GCP). GCP uses a centralized network tagging framework to apply policy enforcement rules based upon the service role. Want to enable SSH to a server? Create a network rule for "allow-SSH" and add the network tag to the host.

...and the differences go on and on: how one cloud provider enables data protection for object storage vs. another, how virtual private clouds communicate with each other, and even how inter-region network transport is performed within each cloud provider. They all do similar fundamental things very, very differently.
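To make the contrast concrete, here is a rough sketch of what "allow SSH to a host" looks like on each platform using their CLIs. This is illustrative only: the resource IDs, network names, zone, and CIDR ranges are placeholders, and the commands assume configured credentials and existing resources.

```shell
# AWS defense in depth: the same intent maintained at two layers.

# Layer 1 -- NACL entry at the subnet (network segment) level
# (placeholder ACL ID and CIDR):
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 100 --rule-action allow \
  --protocol tcp --port-range From=22,To=22 \
  --cidr-block 10.0.0.0/16

# Layer 2 -- Security Group rule at the host level
# (placeholder group ID):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 10.0.0.0/16

# GCP: one firewall rule keyed to a network tag...
gcloud compute firewall-rules create allow-ssh \
  --network default --allow tcp:22 \
  --source-ranges 10.0.0.0/16 \
  --target-tags allow-ssh

# ...then tag any host that should accept SSH
# (placeholder instance name and zone):
gcloud compute instances add-tags my-vm \
  --zone us-central1-a --tags allow-ssh
```

Note how the AWS model asks the team to reason about two rule layers per change, while the GCP model centralizes the rule and scopes it by tagging hosts.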
So, would you rather be an expert at one cloud, where you can yield the biggest benefits and optimizations, or a generalist across three? Unless an IT organization can scale to specialist teams for each of the big 3 providers, an enterprise will see better results from mastering a single cloud and wringing every last capability out of it.
Other benefits to consider:
Be sure to check out Part 3, where we explore the benefits of Multi-Cloud and Best of Breed.
Originally Published at Above The Cloud