Disclaimer: This article will challenge your thinking on traditional ICT infrastructure!
One REALLY common question I get asked as customers explore the world of public cloud computing (think Microsoft Azure, Amazon Web Services) is "How much?"
The HOW MUCH question comes in various guises:
- How much is this going to cost me?
- How much compute do I need?
- How much storage do I need?
- How much network bandwidth do I need?
Quite often customers are coming from a comfortably predictable world of running storage, servers and networks on premises – it has a stable cost (most of the time) and predictable maintenance windows every few years for upgrades. Most customers find it easy to conceptualise tin: they know that for the next x years it will cost them a fixed amount, bar any major incidents.
It’s also a comfortable existence where you don’t need to have a conversation with your business leaders about a blow-out in ICT costs – something that is normally perceived as a Career Limiting Move (CLM).
But there’s a reason why most organisations are looking at public cloud platforms: to gain agility and an edge on their competition in an increasingly digital world.
Public cloud platforms like Azure give organisations benefits like:
Flexibility – the ability to build applications and platforms that live in a hyper-scale infrastructure from scratch, leveraging technology and services once unheard of for the price.
Scale – Providers like Azure and AWS are simply unmatched in scale – many billions have been spent on datacentres and network capacity the world over to simply handle any load you can throw at it.
Cost – Often cloud infrastructure can be cheaper to run – you just need to reframe how you look at costs. Add up the investment in servers, storage, network, internet capacity, firewalls, load balancers and VPN concentrators, not to mention TRAINING your ICT staff to manage, support and improve your infrastructure. Cloud doesn’t look so expensive now, does it?
Hybrid Capability – Microsoft simply has the best story to tell here. Most (if not all) products being developed out of Redmond now are being built for the cloud. They are designed to be used in conjunction with the cloud. Look at services like Delve Analytics – looking at the metadata INSIDE your organisation using the Graph API and Machine Learning – you simply cannot have this capability on premises.
Another example of this is the Advanced Threat Protection capability within Office 365 E5 suite – being able to harness tens of millions of clients, servers and the scale of the Azure cloud to detect and protect your digital assets from harm. There is just no other competitor in this space with the scale to draw upon that you can get for a few dollars a month.
I’m almost jealous of a company that can start up now with no legacy ties to ICT infrastructure – the capabilities, scale, flexibility and productivity you can get as a small business is just astonishing if you look at it in perspective.
But that isn’t the point of this article. I’m going to go back to the HOW MUCH question and try to illustrate some of the innovative software solutions available to you to:
- Demystify public cloud costs
- Provide analytics on your current ICT infrastructure
- Use the analytics
- Plan a smooth migration to public cloud resources
First off, let’s tackle the costs.
Costs are one of the biggest challenges for an organisation when it comes to ICT – most business leaders groan when someone from IT books a meeting about budget. It becomes a conversation of “How much do you need now?” rather than one about delivering value and becoming an innovation engine inside the organisation.
I understand (and so do some organisations): when you first contemplate moving workloads or infrastructure to public cloud providers, there will be some doubling up of costs. But the longer-term play should leave your organisation with improved reliability, flexibility and scale, more opportunities to innovate, and a lower cost structure going forward.
I’m going to show you a product that can help you work out what makes sense on a public cloud provider and what doesn’t – one that can help you have a “cloud” conversation with your CxO leadership and, at worst, show that you’ve actively invested in working out whether cloud can work for your organisation. If you’ve got a CxO putting pressure on your ICT team to “go cloud”, this is an ideal product for producing the data you need to make an informed decision.
That product is the Health Check for Azure (HCFA) product from BitTitan. You may know BitTitan from their famous MigrationWiz products – they have migrated millions upon millions of mailboxes from all sorts of messaging source systems to platforms like Office 365. Now they are tackling public cloud computing with this new tool.
Providing Analytics on your Current Infrastructure
HCFA programmatically reviews your entire ICT infrastructure over 14 days. It does this using a resident agent (a small installer) that records telemetry on what that infrastructure does on a daily basis. It records:
- Processor time – peak and average, number of cores, speed, etc.
- Uptime – including last shutdown time, last boot time
- Memory usage
- Disk usage, Disk access, Disk IOPS, all disk drive mappings
- Network usage, network services, ports used, communications on ingress and egress
- Application usage/identification
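To make the idea concrete, here is a minimal sketch (in Python, with illustrative field names – not BitTitan’s actual schema) of how a series of agent samples like this collapses into the peak/average baseline that gets reported:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sample:
    """One telemetry reading taken by the agent (fields are illustrative)."""
    cpu_percent: float   # processor utilisation at sample time
    memory_mb: float     # memory in use
    disk_iops: float     # disk I/O operations per second

def baseline(samples: list[Sample]) -> dict:
    """Collapse a 14-day sample series into the peak/average figures
    that form the usage baseline."""
    return {
        "cpu_peak": max(s.cpu_percent for s in samples),
        "cpu_avg": round(mean(s.cpu_percent for s in samples), 1),
        "mem_peak_mb": max(s.memory_mb for s in samples),
        "iops_avg": round(mean(s.disk_iops for s in samples), 1),
    }

# Three illustrative readings standing in for two weeks of data
samples = [Sample(22.0, 3100, 120), Sample(65.0, 4200, 340), Sample(18.0, 3000, 90)]
print(baseline(samples))
```

The real agent collects far more (ports, drive mappings, application inventory), but the aggregation principle is the same.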
This all looks pretty standard, right? It gives you a baseline over the 14-day period of what you use right now, how you use it, and which applications are running on the infrastructure. It’s what it does NEXT that makes this tool so appealing.
Use the Analytics
So now we have our baseline of what our infrastructure is doing and what it looks like conceptually – how do we use that data? Simple. The agent reports all the collated data from each physical host, virtual machine, server or client back to the BitTitan cloud. The BitTitan HCFA software will then run analytics across this dataset and return a few options. Let’s dig into what these look like.
Hardware, Pay as you Go
The HCFA software platform (don’t forget this is PaaS, available anywhere in the world!) spits out two options. The first one is Hardware, Pay as you Go. What this means is:
“If I take my infrastructure as it is, and put it in Azure, what does this look like?”
Simply put – it will remodel your entire ICT infrastructure in Azure, down to the compute instances, storage and networks required – everything. You can place it in any Azure region in the world, apply a discount level, and it will tell you EXACTLY what it will cost to run your workloads in Azure if you keep everything the same as it is now. It will tell you how much the storage will cost, how much the network will cost – everything! Now you have a level of certainty about moving your CURRENT workloads to the public cloud.
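As a rough illustration of the arithmetic behind a pay-as-you-go estimate, here is a hedged sketch – the instance sizes and rates are hypothetical placeholders, not real Azure prices (those come from Microsoft’s published rate card):

```python
def monthly_cost(vm_sizes, storage_gb, rates, discount=0.0):
    """Estimate a pay-as-you-go monthly bill: instance hours plus storage,
    with an optional discount. All rates here are hypothetical."""
    HOURS_PER_MONTH = 730  # average hours in a month
    compute = sum(rates["vm"][size] * HOURS_PER_MONTH for size in vm_sizes)
    storage = storage_gb * rates["storage_per_gb"]
    return round((compute + storage) * (1 - discount), 2)

# Hypothetical hourly and per-GB rates for one region
rates = {"vm": {"D2": 0.10, "D4": 0.20}, "storage_per_gb": 0.05}
print(monthly_cost(["D2", "D2", "D4"], storage_gb=500, rates=rates, discount=0.10))
```

HCFA does this modelling against its measured telemetry rather than a hand-built list, but the shape of the calculation is the same.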
Workload, Pay as you Go
The platform also offers an additional choice: what if you could right-size your infrastructure in Azure so you are only paying for what you need? Then this is the option for you. HCFA has all the telemetry on your infrastructure, so it can work out EXACTLY how to right-size your environment so you aren’t paying for what you don’t use. You can also use the tool to give yourself some headroom – e.g. I might have 8 virtual CPUs configured in my on-premises VM, but HCFA has worked out I only need 4, so I can ramp the processor spec right down and still have enough headroom for my peak times.
By using a Workload model, you can see the cost to run the SAME infrastructure, but right-sized, is now $17,906.90!
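The right-sizing idea – observed peak utilisation plus headroom, rounded up to a sensible vCPU count – can be sketched like this (the 25% headroom figure is my assumption for illustration, not HCFA’s actual algorithm):

```python
import math

def right_size_vcpus(configured_vcpus, peak_cpu_percent, headroom=0.25):
    """Suggest a vCPU count from observed peak utilisation plus headroom.
    E.g. 8 vCPUs peaking at 40% busy only really need ~3.2 vCPUs;
    adding 25% headroom and rounding up suggests 4."""
    needed = configured_vcpus * (peak_cpu_percent / 100.0)
    return max(1, math.ceil(needed * (1 + headroom)))

print(right_size_vcpus(8, 40.0))  # -> 4, matching the 8-to-4 example above
```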
Now I’m going to blow your mind.
HCFA goes one step further. HCFA will help you PLAN your migration to a public cloud computing provider. Let’s take a look at what this looks like.
I’ve decided I want to move an Active Directory Domain Controller to Azure. This is a simple workload that has relatively few communicating ports and is not a particularly heavy user of compute.
HCFA lets me drill down into my on-premises domain controller and show the PER-workload cost for that workload. As you can see, it would cost approximately $841 a year to run my domain controller in Azure. You will also see that I have a complete list of the required firewall rules, installed apps, IP and DNS details, and application interconnectivity with all my other application servers – at the click of a button!
So now I can build my complete migration plan and strategy with ALL the required information so I don’t have any surprises when I move something to Azure.
I know my application dependencies, network configuration and firewall rules, so I don’t leave anything to chance. I can review workload dependencies and move interconnected workloads into Azure together, making sure I don’t miss anything. HCFA also has a lightweight project management tool I can use to assign tasks to someone in my organisation, along with the information required to move each workload to Azure.
For example – below are the interconnected applications for my workload LYNCWAC01, which is the Office Web Apps server for Skype for Business 2015. If I was going to move this workload into Azure, I can see that it depends on IISARR01 and LYNCFE02 to function – so I would likely want to move them too.
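A migration order that respects dependencies like these is essentially a topological sort of the dependency map. Here is a small sketch using Python’s standard library (DC01 is a hypothetical extra workload added to show a chained dependency):

```python
from graphlib import TopologicalSorter

# Dependency map in the style of HCFA's interconnectivity data:
# each workload maps to the workloads it depends on.
deps = {
    "LYNCWAC01": {"IISARR01", "LYNCFE02"},
    "LYNCFE02": {"DC01"},   # hypothetical chained dependency
    "IISARR01": set(),
    "DC01": set(),
}

# static_order() yields dependencies before the workloads that need them,
# giving a safe sequence in which to migrate.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

In this example DC01 and IISARR01 come first, then LYNCFE02, and LYNCWAC01 moves last – exactly the “move its dependencies too” logic described above.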
This is such a powerful tool and we recommend it to ANY customer that is looking at making a decision around public cloud. It gives you the data and the decision points to illustrate to any leadership what DOES and DOESN’T make sense to move to the cloud.
If you would like a further demonstration and a chat around how this insanely great tool can help you answer “HOW MUCH?” then give us a call!
Co-founder and CTO
Insync Technology – www.insynctechnology.com.au
+61 7 3040 3603