11 August 2021, by Marcus Peters
Cloud native, multi-cloud or rather cloud-agnostic?
The motivations for IT managers to run applications, or "workloads", in the cloud are a recurring topic of debate. On the plus side, aspects such as no longer having to administer infrastructure components, the ability to scale under load, and lower operating costs are often cited. By now, however, it has become clear that these benefits cannot be taken for granted and that the data center in the cloud must be managed professionally if it is to deliver the desired advantages.
It is undisputed that cloud technologies are part of the digitization strategy of most companies, since IT requirements can usually be implemented more flexibly and efficiently from the cloud than from the company's own data center. Of course, this is not a blanket rule either, but one that has to take security aspects and hybrid scenarios into account.
The first step into the cloud is often taken by moving virtual machines onto the cloud infrastructure, a procedure also known as "lift and shift". And this is where things start to get interesting: if the organization gets stuck in this phase, the benefits of the cloud remain modest. Managing the hardware is no longer necessary, but everything else, from the operating system through runtime components and persistence mechanisms to the application itself, still has to be operated and maintained.
What was that about cloud native again?
The Cloud Native Computing Foundation defines cloud native technologies as follows:
“Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.”
According to this definition, an application is considered "native" if it uses the mechanisms for abstracting the operating system, runtime, or persistence and thus does not run in a virtual machine. For a typical web application with a database for storing data, managed services of the cloud provider could replace both the web server and the database. An example of a transformation to this state is shown in the following figure.
Exemplary transformation of a web application to Cloud Native via "refactoring".
Only when the services of the cloud provider are utilized can the cloud's full capabilities be unleashed.
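To make this a little more concrete, here is a minimal sketch in Python of what the storage part of such a refactoring could look like: instead of writing uploads to a disk inside a virtual machine, the application talks directly to a managed object store, in this case AWS S3 via the boto3 SDK. The bucket name and keys are illustrative assumptions, not taken from the article.

# Minimal sketch (illustrative, not from the article): persist uploads in a
# managed object store instead of on a VM-local file system.
# Assumes AWS credentials are configured and the bucket "example-uploads" exists.
import boto3

s3 = boto3.client("s3")

def save_upload(filename: str, data: bytes) -> str:
    # Store the upload in S3 and return the object key for later retrieval.
    key = f"uploads/{filename}"
    s3.put_object(Bucket="example-uploads", Key=key, Body=data)
    return key

def load_upload(key: str) -> bytes:
    # Read the upload back from the managed store.
    response = s3.get_object(Bucket="example-uploads", Key=key)
    return response["Body"].read()

The application no longer cares which machine it runs on or how the storage is replicated; that responsibility has moved to the provider's service.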
A typical point of criticism of this kind of approach is the tie to a single cloud provider: as soon as an application is natively integrated into the provider's environment, a dependency on this provider arises and, with it, the much-cited "vendor lock-in".
However, this point is worth a closer look: the more closely a cloud application is bound to the provider's underlying layers, that is, the more it accesses the provider's services natively, the better optimized it runs with and at this provider. The computer under the desk serves as an analogy here: an application optimized for a particular operating system has the full performance of that system at its disposal, but this performance comes at the expense of portability to another operating system. To get around this, just as with software that can run on different operating systems, multi-cloud approaches are often brought into play.
And what does multi-cloud do?
Multi-cloud, as the name suggests, means deciding to use hyperscalers from different vendors. In most cases, one of two scenarios is implemented: either the enterprise selects a different cloud provider for each workload, depending on the situation, or a single workload is distributed across different cloud providers through an abstraction of the cloud services. The first case is conceptually simple and follows the typical strategy of diversifying the supplier pool. Hardly any company today buys all its software from one vendor, and that is a good thing. So it can also make sense, when selecting cloud services, to pick individual services with a "best of breed" approach, in addition to a platform strategy aimed at directly saving costs or maximizing profit. However, this means that each individual application still runs on exactly one hyperscaler, and for it the same applies as stated above about exploiting performance through cloud-native approaches.
Cloud Agnostic - Run one application simultaneously on different clouds
If you want to break away from the cloud provider completely, the only approach left is cloud agnostic. This means that the application runs simultaneously on different hyperscalers or can be distributed across cold, warm, and hot standby in the typical redundancy scenarios.
However, this freedom creates complexity that should not be underestimated. An obvious approach is distribution via containers: these are optimized for the respective cloud environment, and corresponding components are available from many vendors. In addition to "Red Hat OpenShift", named market leader by Forrester in the "Forrester Wave", Microsoft, Google, and AWS also provide corresponding mechanisms. The following figure shows an architecture example for managing a multi-cloud environment with Microsoft's Azure Arc.
Multi-cloud with Azure Arc, source: Microsoft.com
However, this approach and the motivation behind it must be considered carefully. Not every application is suitable across the board for every container scenario of every cloud provider. Differences in the underlying compute or network infrastructure, as well as the proximity to dependent services, can lead to significantly different performance. Moving an application between cloud environments may also require moving data, and in addition to economic parameters, the environments often differ in the services and facilities they provide to store and manage that data.
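One practical consequence of the container approach, sketched below in Python with assumed variable names, is to keep all provider-specific details such as endpoints, regions, and credentials out of the container image and inject them as environment variables, so that the identical image can be deployed to OpenShift, AKS, GKE, or EKS and only the configuration changes.

# Minimal sketch (assumed variable names): a containerized service reads every
# provider-specific setting from its environment at start-up, so the same image
# can run on any provider's container platform with nothing but different config.
import os

def load_config() -> dict:
    return {
        # Connection string of whatever managed database the target cloud offers.
        "db_url": os.environ["DB_URL"],
        # Endpoint of the object storage service (S3, Blob Storage, GCS, ...).
        "storage_endpoint": os.environ["STORAGE_ENDPOINT"],
        # Optional region hint; a default keeps local test runs simple.
        "region": os.environ.get("REGION", "local"),
    }

if __name__ == "__main__":
    config = load_config()
    print(f"Starting against {config['storage_endpoint']} in {config['region']}")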
Another cloud-agnostic approach, besides distribution via containers, is to introduce an abstraction layer that encapsulates the communication with and behavior of the cloud providers. This is shown in the following figure.
Multi-cloud scenario with an abstraction mechanism
This approach makes it possible to operate the application in a distributed manner through an abstraction layer with a subsequent native coupling to the services of the cloud providers. However, the effort required for this is the highest of the approaches discussed, and, as with distribution via containers, it is not guaranteed that every last bit of performance is extracted from all clouds.
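As an illustration of what such an abstraction layer could look like at the code level, the following Python sketch defines a neutral storage interface with one implementation per provider, using the providers' native SDKs underneath. The class, bucket, and container names are assumptions for the example; a real abstraction layer would cover far more than object storage.

# Minimal sketch of an abstraction layer for object storage (assumed names).
# The application only sees ObjectStore; the native coupling to the providers'
# services lives entirely inside the implementations.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3ObjectStore(ObjectStore):
    def __init__(self, bucket: str):
        import boto3  # native AWS SDK
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

class AzureBlobObjectStore(ObjectStore):
    def __init__(self, connection_string: str, container: str):
        from azure.storage.blob import BlobServiceClient  # native Azure SDK
        service = BlobServiceClient.from_connection_string(connection_string)
        self._container = service.get_container_client(container)

    def put(self, key: str, data: bytes) -> None:
        self._container.upload_blob(name=key, data=data, overwrite=True)

    def get(self, key: str) -> bytes:
        return self._container.download_blob(key).readall()

def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application code stays provider-agnostic; which cloud it talks to is
    # decided solely by the implementation that is injected.
    store.put("reports/latest.pdf", report)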
Conclusion
A multi-cloud strategy can, if well considered, be a sensible alternative to "everything from a single source". However, it is important to ask what exactly is to be achieved and whether it is really necessary for an application to run cloud-agnostically across different clouds, or whether it is not simpler to define one cloud home per application. In the latter case, it makes sense to consider cloud-native approaches for the respective applications in order to make full use of their performance capabilities.