If you’re new to the world of microservices and learning the differences between Virtual Machines (VMs) and Containers, a world of questions opens up about what’s possible with bare metal, VMs and containers.
This piece should give you a good starting point for understanding how this technology has evolved and how it has impacted app development. With each shift, the introduction of new technologies has made operations cheaper and more efficient, and made it easier for developers to focus on what really matters – delivering high-quality improvements to end-users.
The bare metal machine – the physical server or device itself – started everything we know as modern computing. These machines are the foundation for everything else: from the first operating systems in the 1960s to the containers running today, everything still runs on bare metal.
There are a couple of options for where you’d run your services straight on bare metal:
- Private or public (co-located) data centers
  - If you own the machines outright, you’ve absorbed a lot of the cost upfront and you’re responsible for their maintenance, so you want to avoid wasted resources.
- Public cloud rented devices
  - If you’re renting the device, you’re probably being charged a fixed monthly rate for it, and it’s up to you how you use it, so again you won’t want any of it going to waste.
However, if you want to run more than one application on a bare metal server, you need to add more layers on top of it to provide a better developer experience. So, while you’ll never get away from the basic need for a bare metal server, you will get away from needing to work with them directly.
This industry drive to make operations cheaper, and to let organizations use the power of a bare metal server more efficiently by running multiple applications on a single machine, gave us the Virtual Machine.
Bare metal > Virtual Machine (VM)
A virtual machine (VM) is software that makes it possible to run what appear to be several computers on the same device. The operating system (OS) and applications running inside each VM are entirely separate from those in the other VMs sharing the bare metal server, and have no interaction with each other at all.
The VMs share a hypervisor that sits between the bare metal and the VMs to make sure that CPU, memory, networking and other hardware resources are spread fairly across them. It’s the hypervisor that gives admins the ability to spin VMs up and down on demand.
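As an illustrative sketch of what “spinning VMs up and down on demand” looks like in practice, here are the kinds of commands an admin might run on a KVM host managed through libvirt (one common open-source hypervisor stack; the VM name `web-vm` is hypothetical):

```shell
# List every VM defined on this hypervisor, running or not
virsh list --all

# Spin a VM up, then shut it back down on demand ("web-vm" is a hypothetical name)
virsh start web-vm
virsh shutdown web-vm

# Inspect how the hypervisor has carved up hardware for this VM
# (shows vCPU count, memory allocation and current state)
virsh dominfo web-vm
```

Public cloud providers wrap this same capability in their own APIs and consoles, so you rarely touch the hypervisor directly – but this is what’s happening underneath.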
By compacting more capability into smaller packages, you use more of the available resources and lower cost as a result: in the public cloud, you pay for what you use rather than a flat rate for server rental. VMs also let public cloud companies reduce cost further with multi-tenant offerings, where multiple customers share bare metal resources without any interaction between them or impact on the quality of each tenant’s service. VMs can be spun up and down on request with ease, giving organizations the ability to be selective about how many resources they need, and to respond to demand quickly and efficiently.
Having your VMs in a public cloud gives you less to manage. The maintenance, upkeep and health of the physical machine are not your concern, and the updates the hypervisor layer needs are handled by the provider. Even if the physical server that hosts your services becomes unhealthy, the cloud provider moves your VMs onto other infrastructure without your manual intervention.
Even when hosted on a public cloud, a developer only needs to focus on the VM and the application itself. You retain the ability to spin VMs up and down and carry out deployments quickly and simply, which makes it easier to respond to changes in demand and to update your applications.
These changes abstracted concern away from the physical infrastructure. They gave engineers more time and energy to focus on applications and product improvements.
But things could still get smaller, faster and they could still be cheaper for you to run.
Bare metal > Containers
Containers exploded in popularity in 2013 with the emergence of Docker. Docker is a platform that packages an application and its dependencies into a container image that runs consistently on any host with a container runtime – including inside a VM – which makes containers considerably lighter than a VM. At scale, though, they still need an ‘orchestrator’.
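To make “packaging an application into a container” concrete, here’s a minimal sketch of a Dockerfile for a hypothetical Node.js app (the file names and port are assumptions for illustration, not from any particular project):

```dockerfile
# Start from a small, versioned base image
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and declare how it runs
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

With this in place, `docker build -t my-app .` produces the image and `docker run -p 8080:8080 my-app` runs it – the same image runs unchanged on a laptop, a VM, or a bare metal server with a container runtime.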
Kubernetes is by far the most popular orchestrator, and can be compared to a hypervisor on steroids. Where a hypervisor is limited to a single bare metal machine, Kubernetes stretches across the whole environment where you’re running containers, giving you enormous power in one place. By doing this, Kubernetes pools the total resource capacity of the environment – making a datacenter behave like a single computer, or, if you’re running Kubernetes in a public cloud, letting you use as much compute as the cloud provider will allow!
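This “datacenter as a single computer” idea shows up directly in how you describe workloads to Kubernetes. A minimal sketch of a Deployment manifest (the app name, image and resource figures are hypothetical):

```yaml
# You declare how many copies you want; Kubernetes decides where they run.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # keep three copies running somewhere in the cluster
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0
          resources:
            requests:
              cpu: 100m        # the scheduler places pods wherever capacity exists
              memory: 128Mi
```

You never say which machine anything lands on: `kubectl apply -f deployment.yaml` hands the manifest to the cluster, and `kubectl scale deployment my-app --replicas=10` responds to a surge in demand in one command.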
The ability to move containers quickly, to respond faster to environmental changes (such as an increase or decrease in demand), and the short average lifespan of a Kubernetes container – often only a day – combine to make public cloud offerings for containers even cheaper than VMs.
Organizations running containers directly on bare metal are mostly building a new application from scratch – something often referred to as ‘greenfield’ development. For many, though, this isn’t the case: there’s some legacy crossover, or a desire to migrate, that leads to an intermediate step of running containers on virtual machines.
Bare metal > Virtual Machines > Containers
An interesting middle ground for some is to run containers in VMs. More abstraction layers can make your systems more secure: an attacker has more layers to penetrate, which gives you more places to potentially catch a vulnerability. However, this isn’t the only reason.
For many, this is a good intermediate step towards migrating fully onto containers. Running a container on a VM provides assurance that the workload will be unaffected by the transition, while also keeping some legacy infrastructure in place ‘just in case’. Realistically, few organizations will run entirely on VMs or entirely on containers; most developers will work across a hybrid of legacy and modern environments. So, for the sake of developer agility and speed of delivery to customers, organizations need to find a way to make their mix of environments work for them.
What does this look like in practice?
There’s nothing wrong with running a system that spans multiple environments. But, without appropriate management of a hybrid system, you’re going to run into a number of problems that slow down your organization’s development process.
- You need visibility into what’s going on in your applications.
- You need to be able to debug quickly, whether you’re experiencing a networking issue or an application issue.
- You need to be able to dynamically shift traffic and respond to surges in demand quickly and efficiently.

All of this can be hard in a hybrid environment.
Make your lives easier
There’s no reason to over-complicate your processes or duplicate effort when you have a hybrid environment: the addition of a service mesh will help all the teams that have a stake in your development lifecycle.
A service mesh lets you manage hybrid environments from a single pane of glass. It brings an extraordinary amount of control over your environment and significant insight into how it’s performing. Regardless of what’s going on ‘under the hood’, the service mesh lets you keep your primary focus on the development process and on maximizing your resource capabilities.
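As one concrete example of that control, here’s a hedged sketch of how Istio (the mesh covered in the book below) can shift traffic between a legacy VM-hosted workload and its new containerized version – the host and subset names are hypothetical:

```yaml
# Weighted routing: most traffic stays on the legacy workload while you
# gradually shift a slice to the containerized version.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app.example.com
  http:
    - route:
        - destination:
            host: my-app
            subset: legacy-vm
          weight: 80           # 80% of requests to the legacy VM workload
        - destination:
            host: my-app
            subset: containers
          weight: 20           # 20% to the new containerized version
```

Because the mesh handles the split, neither workload needs to know a migration is happening – you adjust the weights until the containers take 100% of the traffic.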
If you’d like to know more about what a service mesh can do, download our free PDF copy of Istio: Up and Running (O’Reilly) by Zack Butcher and Lee Calcote or take a look through our blogs and library of resources.
The technology industry is growing and changing faster than ever before. However, some things remain constant. Bare metal and data centers are not going to disappear, but driving modernization forward and making the most of your infrastructure and compute capabilities will always be a factor. Make sure that you’re getting everything possible out of your infrastructure and that it’s working for you, not against you.