A significant portion of my time working in Azure is spent with clients who are planning to move, are moving, or have moved existing applications to the cloud. This, of course, makes sense – it’s nice to imagine a world in which business applications are rebuilt from scratch on a regular basis, but in reality, that costs too much money. So what happens when you take an application that’s 5, 10, or 20 years old and run it in a public cloud such as Azure?
tl;dr: You need to plan your migration to the cloud – there are great economic advantages to be realized, but if you don’t fully understand how the architecture differs from on-premises, you’re going to have a poor experience. Understand the performance and availability limitations of the public cloud you’ve chosen. Consider modernizing the application with a focus on leveraging PaaS and SaaS as much as possible. And once you think you’ve got everything planned – look again, because you may have missed something!
This article will be split into two posts (because who wants to read 2000 words on their smartphone?):
- What defines a cloud-ready application (this post)
- What you shouldn’t do, and what you should do
What defines a cloud-ready application?
Cloud services, by their nature, allow customers access only in specific ways, at specific levels. For example, in Office 365, you’ll never encounter a ‘server’ – just the services, living in a globally load balanced solution. In Azure, you don’t have access to anything physical – you don’t rent a physical server[footnote]You may not own it directly, but I rather suspect with the larger G-series that you’re pretty much getting access to a whole server[/footnote] – and as a result, you don’t get to control anything that happens in those layers.
This means that your skilled employees (or you) don’t need to worry about patching the virtualization hosts, managing the network, or even think about all the physical needs of a datacenter – maintaining the building, the generator and fuel, cooling systems, ventilation, racking servers, organizing cabling, troubleshooting all of the above – it’s endless. This is where the savings begin to be realized when you move to the cloud – whole layers of management disappear. However, this is a trade-off. You don’t have the ability to control when the hosts are patched, and you don’t have the ability to control when the network is updated, and you don’t really have a great ability to ensure that certain VMs are running on the same host, or even in the same building. This introduces some risks you have to take into account when migrating to the cloud.
When planning a move to the cloud, consider the following five factors:
- Availability
- Efficiency
- Scalability
- Fault Tolerance and Resiliency
- Compatibility
Availability requirements vary from application to application, and from business to business. Some things, such as payment processing, really need high availability. Other applications, such as an intranet, are less business critical and can be offline for a few hours without significant financial impact. In a self-hosted datacenter, you have a large degree of control over when and where scheduled maintenance occurs. Because millions of other customers share public cloud infrastructure, that control is not available there. The solution is to ensure two or more servers are running the same workload; in Azure, such a group is known as an “Availability Set”. If your application doesn’t support having two servers run the same workload (e.g. behind a load balancer or in a cluster), there’s no SLA available for that server – and this is true for both AWS[footnote]AWS SLA: “The Service Commitment does not apply to any unavailability, suspension or termination of Amazon EC2 or Amazon EBS, or any other Amazon EC2 or Amazon EBS performance issues: […] (v) that result from failures of individual instances or volumes not attributable to Region Unavailability”[/footnote] and Azure[footnote]Azure SLA: For all Internet facing Virtual Machines that have two or more instances deployed in the same Availability Set, we guarantee you will have external connectivity at least 99.95% of the time.[/footnote].
This leads into efficiency – migrating your current architecture to Azure as-is seldom achieves immediate cost savings, because of the requirements to qualify for an SLA from Microsoft. You may find it cost-prohibitive to run your applications in a highly available configuration. A common case is SQL Server, where the licensing costs to achieve high availability are steep. To really achieve cost savings by moving to the cloud, you need to re-architect the application to take advantage of the provider’s Platform as a Service (PaaS) offerings. In the case of SQL, PaaS can be half the cost of a highly available SQL environment (please note that this is a really, really rough number). I’ll get into that in a moment.
If your application supports multiple instances, it is already, in some regards, scalable. A huge benefit of moving to a public cloud such as Azure is the ability to scale out almost infinitely. In the past, when every server had a name and was loved (or loathed) by all members of the IT department, the preference was for a single beefy server over many small ones. Now, it is recognized as good practice to run many small instances for the different tiers of your application, spreading the load across all of them. This is, in part, the thinking that spawned containerization and microservices. Making your application something that can be packaged by function and deployed as many times as you want will pay dividends. Azure Web Apps and SQL Databases support this fairly natively, and VM Scale Sets are a great option if you’re not ready for a full PaaS offering.
Closely associated with availability is fault tolerance, or resiliency. Older applications, particularly those built on the assumption that they will run on a single piece of physical hardware, often struggle in a modern, virtualized datacenter. This can be worse still in public cloud environments, where there is no guarantee that the components of the application will be in close physical proximity. These applications must be modified to tolerate timeouts and transient errors, with retry logic that tries the operation again after a failure.
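The retry idea is simple enough to sketch in a few lines of Python. This is a minimal illustration, not a production pattern library – the exception types, attempt counts, and backoff parameters are all assumptions you’d tune for your own application:

```python
import random
import time


def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Call `operation`, retrying transient failures with exponential backoff.

    Timeouts and dropped connections in the cloud are often transient, so
    backing off and trying again usually succeeds where failing fast does not.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except (TimeoutError, ConnectionError):
            if attempt == max_attempts:
                raise  # out of attempts; surface the error to the caller
            # Exponential backoff with jitter, so many clients retrying at
            # once don't all hammer the service at the same instant.
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random())
            time.sleep(delay)
```

A call site would wrap its network operation, e.g. `call_with_retries(lambda: fetch_orders(conn))`, instead of calling it directly and crashing on the first timeout.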
All of this rolls up into an application’s compatibility with Azure, but the key question behind that bullet point is whether the components are supported to run in Azure at all. For example, SQL Server 2005 is no longer supported, nor is Windows Server 2003 – they may run in Azure, but if there are any problems, you won’t be able to get support from Microsoft[footnote]This isn’t the only reason not to run old software. In general, if you’re working with that technology today, and your CIO isn’t new in the last year or so, you should look for a job.[/footnote].
Availability and efficiency can be solved with a good helping of cash, but if you’re looking for the best cloud experience, you need to consider an Application Modernization project.
Don’t get me wrong – it’s still true that it’s easier than you think to move to Azure. I’ll be posting more information on App Modernization soon!