Part 2 kicks off our introduction to virtualization, where we look at how this technology developed and why it's so important for the future of IT.
Some topics covered here include the problems that led to virtualization, hypervisors and virtual machines, and the foundations of the Software-Defined Data Center (SDDC).
Historically, IT departments have had difficulty responding to business demands in a timely manner. Providing infrastructure for new internal applications could take months, even with proper planning. Different services often require different operating systems (OSes), specific configurations, and isolation from other programs.
Over time, this led to an ever-growing number of physical servers, each running one OS and one application inside that OS. One can only imagine the operational overhead and costs associated with this: racking, cooling, power consumption, cabling, and so on. On top of that, most of those separate physical servers used only a fraction of their actual capacity, leaving large amounts of CPU and memory idle.
There had to be a better way to provision, manage, and utilize compute workloads. And what better way than to decouple the hardware from the software you want to run on it? That’s exactly what virtualization helps us to achieve – it adds an abstraction layer between the physical hardware and your workloads.
This abstraction layer is known as a hypervisor. It provides access to physical resources (e.g. memory and CPU cycles) to one or more entities, called virtual machines (VMs). Yes, multiple! The hypervisor's scheduler can serve the demands of many VMs requesting actual memory or CPU time. This provides better resource utilization of the physical servers, while still meeting the requirements for different OSes, configurations, and isolation across multiple applications.
There are two types of hypervisors: Type 1 (bare-metal) hypervisors run directly on the physical hardware (for example, VMware ESXi, Microsoft Hyper-V, or KVM), while Type 2 (hosted) hypervisors run as an application on top of an existing OS (for example, VMware Workstation or Oracle VirtualBox).
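To make the relationship between a hypervisor and its VMs more concrete, here is a minimal sketch that connects to a local KVM/QEMU hypervisor and prints the resources allocated to each VM. It assumes a Linux host with KVM/QEMU and the libvirt-python package installed, and a reachable `qemu:///system` endpoint; the names and output are illustrative only.

```python
import libvirt  # libvirt-python bindings; assumes libvirt + KVM/QEMU on the host

# Connect to the local hypervisor (read-only access is enough for inspection).
conn = libvirt.openReadOnly("qemu:///system")

# The physical resources the hypervisor can hand out to its VMs.
model, mem_mb, cpus, *_ = conn.getInfo()
print(f"Host: {cpus} CPUs, {mem_mb} MiB RAM ({model})")

# Each VM (a libvirt "domain") receives a slice of those resources,
# scheduled by the hypervisor alongside all the other VMs.
for dom in conn.listAllDomains():
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"VM {dom.name()}: {vcpus} vCPUs, {mem_kib // 1024} MiB RAM, "
          f"active={dom.isActive() == 1}")

conn.close()
```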
The abstraction and the simplified management of workloads helped the cloud computing industry reach the state it’s in today. Provisioning time is lowered from months to minutes, bringing lots of benefits along the way.
Now let's take it a step further than better resource utilization and consider another scenario. Have you ever powered off your personal computer just so you could hook it up to a different power source? Or maybe you wanted to upgrade the amount of memory, or insert a new disk or device? If so, you probably know that this requires you to stop all your work in progress and resume only once the intervention is complete. While this is acceptable for end users, in the enterprise IT world it counts as an outage.
Imagine not being able to access your favorite website just because the company hosting it had to perform similar maintenance on the server it runs on. That would be quite frustrating, but since virtualization provides an abstraction layer between the hardware and the VMs, we can get around it. Techniques such as live migration allow a running VM (workload) to be moved from one physical server to another, so hardware maintenance can be performed without impacting the services running on top. This is a real game-changer for application availability and data center operations altogether.
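As a rough illustration of how such a move can be driven programmatically, the sketch below uses the libvirt Python bindings to live-migrate a running VM between two KVM/QEMU hosts. The hostnames, connection URIs, and VM name are hypothetical placeholders, and a real migration also depends on shared (or migrated) storage, network reachability, and compatible hosts, so treat this as a conceptual sketch rather than a production procedure.

```python
import libvirt  # libvirt-python bindings; hosts and VM name below are placeholders

# Source and destination hypervisors (hypothetical hostnames).
src = libvirt.open("qemu+ssh://admin@host-a/system")
dst = libvirt.open("qemu+ssh://admin@host-b/system")

# Look up the running VM we want to move off host-a (hypothetical name).
dom = src.lookupByName("web-frontend-01")

# Live migration: memory pages are copied while the VM keeps running,
# so the workload moves to host-b without a noticeable outage.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

print("Migration finished; host-a can now be taken down for maintenance.")

src.close()
dst.close()
```

Once the VM is running on the destination host, the original server can be patched, re-cabled, or powered off without the application ever going offline.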
Virtualization technology redefines the way we build and maintain IT infrastructures. It brings operational benefits by addressing two major problems: the poor utilization of dedicated physical servers, and the downtime traditionally required for hardware maintenance.
Combined, these form the foundation of the Software-Defined Data Center (SDDC), where the core components (compute, storage, and network) are abstracted from the underlying physical infrastructure.