“Cloud” and “on-site” design
What is server virtualization?
Server virtualization abstracts IT services from their underlying dependencies (networks, storage systems and hardware), allowing multiple operating systems to run on a single physical machine while remaining isolated from one another. The “host” operating system (the host) effectively partitions the hardware among multiple “guest” operating systems (the guests). At the bottom of the software stack sits a single instance of an ordinary operating system installed directly on the server. Above it, a virtualization layer handles redirection and emulation, composing the virtual computers. The combination of these two lower layers is called the host. The host exposes the features of a complete computer, up to and including the BIOS, and can generate independent virtual machines based on user-defined configurations.
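The host/guest partitioning described above can be sketched as capacity bookkeeping: the host hands each guest a user-defined slice of the real hardware and refuses a guest it cannot back with remaining capacity. This is an illustrative model, not a real hypervisor; all class and field names are assumptions.

```python
# Toy model of a host partitioning its physical resources among
# user-defined guest configurations (illustrative only).
from dataclasses import dataclass, field

@dataclass
class GuestConfig:
    name: str
    vcpus: int
    ram_mb: int

@dataclass
class Host:
    cpus: int
    ram_mb: int
    guests: list = field(default_factory=list)

    def free_cpus(self) -> int:
        return self.cpus - sum(g.vcpus for g in self.guests)

    def free_ram(self) -> int:
        return self.ram_mb - sum(g.ram_mb for g in self.guests)

    def start_guest(self, cfg: GuestConfig) -> bool:
        # A guest only starts if the host can still back its slice
        # of CPU and memory with real capacity.
        if cfg.vcpus <= self.free_cpus() and cfg.ram_mb <= self.free_ram():
            self.guests.append(cfg)
            return True
        return False

host = Host(cpus=16, ram_mb=65536)
host.start_guest(GuestConfig("web", vcpus=4, ram_mb=8192))   # fits
host.start_guest(GuestConfig("db", vcpus=8, ram_mb=32768))   # fits
ok = host.start_guest(GuestConfig("big", vcpus=8, ram_mb=65536))
print(ok)  # False: only 4 CPUs and 24 GB remain
```

Real hypervisors can also overcommit these resources (e.g. memory ballooning), which this strict model deliberately leaves out.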
The advantages of virtualization
These are just some of the benefits of a well-designed virtualization solution:
- Reduces implementation and management costs by consolidating hardware
- Reduces the energy consumption of the entire data center
- Allocates resources dynamically, when and where they are needed
- Dramatically reduces the time needed to implement new systems
- Enables easy testing and debugging in controlled environments
An undeniable advantage of a virtualized infrastructure is the ability to take a truly complete backup of a machine, including the operating system settings, which are often the most critical part to restore on some servers.
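The reason such a backup is “really complete” is that the whole machine, including OS, settings and data, lives in a handful of host-side files (disk image plus configuration), so backing it up amounts to an ordinary file copy of a powered-off VM. A minimal sketch, with hypothetical file and directory names:

```python
# Sketch: back up an entire VM by copying its host-side files.
# Paths and file names are illustrative assumptions.
import shutil
import tempfile
from pathlib import Path

def backup_vm(vm_dir: Path, backup_root: Path) -> Path:
    """Copy every file of a (powered-off) VM to a backup directory."""
    dest = backup_root / vm_dir.name
    shutil.copytree(vm_dir, dest)  # creates dest, copies recursively
    return dest

# Demo on a throwaway directory standing in for a VM's folder.
work = Path(tempfile.mkdtemp())
vm = work / "mail-server"
vm.mkdir()
(vm / "disk.img").write_text("guest OS, settings and data")
(vm / "machine.cfg").write_text("vcpus=2 ram=4096")

saved = backup_vm(vm, work / "backups")
print(sorted(p.name for p in saved.iterdir()))
```

Restoring is the same copy in reverse, which is exactly what makes the OS settings, normally the hardest part to rebuild, trivial to recover.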
Another important advantage is how easily technological evolution can be managed. If a hardware system becomes obsolete, the virtual servers can be migrated to latest-generation machines fairly easily (gaining performance in the process) without reinstalling everything: only the virtualization layer needs to be reinstalled, and the virtual machine files restored. Offline tests can also be performed very simply, making the migration even smoother.
The need to virtualize systems stems from the awareness that a virtual server makes far better use of the resources at its disposal – processors, memory, disk – than a physical server does, especially in high-availability configurations with an active and a passive server. It can also be the solution for companies with a small data center consisting of a few racks (network, storage and servers) and no room for additional cabinets or air conditioners.
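The resource-utilization argument can be made concrete with back-of-the-envelope arithmetic. The figures below are purely illustrative assumptions, not measurements: a physical server sized for its peak load often idles at low CPU use, so several such workloads can share one host kept under a comfortable ceiling.

```python
# Hypothetical, illustrative figures (not measurements).
avg_utilization = 0.10      # typical idle-heavy physical server
target_utilization = 0.60   # comfortable ceiling on the virtualization host

# How many such physical servers one host could absorb.
consolidation_ratio = round(target_utilization / avg_utilization)
print(consolidation_ratio)  # 6 -> roughly six physical servers per host
```

For a small data center of a few racks, a ratio like this is what turns “no room for another cabinet” into spare capacity on the hosts already installed.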
How it works
Operating systems and applications running on virtual servers do not have direct control over resources such as memory, hard drives and network ports. Instead, the virtualization layer sits between them and the hardware, intercepting their requests to interact with it. Solutions on the market can “simulate” a configuration that bears only a vague resemblance to the hardware actually underneath. For example, the host can emulate a SCSI controller down to the smallest detail, convincing the guest operating system that one is present even though no physical SCSI controller exists. In practice, the layer can make IDE drives look like SCSI drives, present network shares as locally attached storage, turn a single Ethernet adapter into multiple adapters, and act as a gateway between older operating systems and modern hardware they do not support, such as Fibre Channel adapters.
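The interception just described can be sketched as a translation layer: the guest issues what it believes is a SCSI request, and the virtualization layer satisfies it from whatever backing device the host really has. A toy model under that assumption, with all names invented for illustration:

```python
# Toy model of device-request interception (illustrative only).
class PhysicalIDEDisk:
    """What the host really has."""
    def read_block(self, n: int) -> str:
        return f"ide-block-{n}"

class VirtualSCSIDisk:
    """What the guest sees: a 'SCSI' disk emulated at the interface."""
    def __init__(self, backing: PhysicalIDEDisk):
        self.backing = backing  # the real device hidden behind the emulation

    def scsi_read(self, lba: int) -> str:
        # Intercept the guest's SCSI request and redirect it to the
        # backing IDE device; the guest never sees the difference.
        return self.backing.read_block(lba)

guest_disk = VirtualSCSIDisk(PhysicalIDEDisk())
print(guest_disk.scsi_read(7))  # ide-block-7
```

The same shape covers the other translations mentioned above: swap the backing object for a network share or a second virtual NIC and the guest-facing interface stays unchanged.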