Introduction
Virtualization is present both on the desktop of an enthusiast and in the IT environments of companies from many different fields. This is not a passing fad or a mere whim: among other benefits, the concept makes it possible to save on equipment and to complete certain computing tasks in less time. In this introductory text, you will learn what a virtual machine is, get to know the main virtualization techniques in use today and discover their main advantages.
The concept of virtualization
Although it is an old idea (it emerged in the 1960s and spread more widely from the 1970s onward), virtualization is extremely important to today's increasingly “digital” world. We can define the concept as a set of computing solutions that allow multiple operating systems, along with their respective software, to run on a single machine, whether a conventional desktop or a powerful server.
Yes, it is as if you were running one or more separate computers inside a single one. The difference is that these machines are virtual: in practice, they deliver results like any other computer, but they exist only logically, not physically.
Each virtual machine represents a complete computing environment: nearly all the resources of its operating system can be used, the machines can be connected to a network, applications can be installed, and so on.
One of the reasons virtualization emerged is that, years ago, when mainframes dominated the technological landscape and there were no personal computers, for example, there was no such thing as the convenience of simply “acquiring, installing and using a piece of software”: software came bundled with libraries and other resources that made it almost exclusive to the computer for which it had originally been developed.
As a result, an organization that needed to deploy a new system often found itself forced to buy a machine just to run it, instead of simply taking advantage of its existing hardware, which made the whole operation more expensive in the end.
Virtualization can solve this problem: an existing computer can be used to run two or more distinct systems, as long as each one runs inside its own virtual machine. This avoids spending on new equipment and puts the computer's idle resources to use.
Today, virtualization allows a company to run multiple services on a single server, for example, or a home user to try out an operating system before actually installing it. From a corporate point of view, it currently serves a wide range of applications, such as ERP systems, cloud computing services and simulation tools, among many others.
The benefits of virtualization
You already know some of the advantages of virtualization, but its use offers several other benefits. The main ones are discussed below:
– Better use of the existing infrastructure: by running several services on one server or set of machines, for example, you can use the processing capacity of that equipment as close to its full extent as possible;
– A smaller machine park: with better use of existing resources, the need to acquire new equipment decreases, along with the associated costs of installation, physical space, cooling, maintenance, power consumption and so on. Imagine the impact this advantage can have in a data center, for example;
– Centralized management: depending on the virtualization solution in use, it becomes easier to monitor the services that are running, since they are managed from a single place;
– Faster deployment: depending on the application, virtualization can allow it to be deployed more quickly, since the infrastructure is already in place;
– Use of legacy systems: a legacy system, that is, an old one that is still essential to the company's activities, can be kept in use by assigning it a virtual machine compatible with its environment;
– Diversity of platforms: you can have a wide variety of platforms available and, for example, run performance tests of a given application on each of them;
– Testing environment: a new system or an update can be evaluated before it is actually deployed, significantly reducing the risks inherent in this kind of procedure;
– Security and reliability: since each virtual machine operates independently of the others, a problem that arises in one of them, such as a security vulnerability, will not affect the others;
– Easier migration and expansion: moving a service to another virtualization environment is a task that can be done quickly, and the same goes for expanding the infrastructure.
How does virtualization work?
A virtualization solution essentially involves two “protagonists”: the host and the guest. We can understand the host as the operating system executed by the physical machine. The guest, in turn, is the virtualized system that the host must run. Virtualization happens when these two elements come together.
How the host and its guests interact varies according to the solution. In one quite common method there is the figure of the VMM (Virtual Machine Monitor), also called a hypervisor: a kind of platform, implemented on the host, that receives the systems to be virtualized, controlling their resources and keeping each of them “invisible” to the others.
So that it can do its job, the VMM gets special treatment: it can run in “supervisor mode”, while ordinary programs (applications) run in “user mode”.
In “supervisor mode”, software can issue instructions that deal directly with certain hardware features, such as specific capabilities of the processor. In “user mode”, these more critical resources cannot be accessed directly, and it is up to the operating system, which works in “supervisor mode”, to act as a kind of intermediary when necessary.
The VMM must have privileged access because it is responsible for allocating the resources used by each virtual machine under its control, as well as for deciding the order in which their requests will be served.
The guest runs in “user mode”, but since the virtual machine contains an operating system, any more critical instruction it requests is “intercepted” by the hypervisor, which takes charge of carrying it out.
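To make this “intercept and emulate” idea more concrete, here is a minimal, purely illustrative sketch in Java. It is not a real hypervisor, and every class and method name is invented for the example: ordinary guest instructions run directly, while anything marked as privileged is caught and handled by the monitor on the guest's behalf.

```java
// Conceptual sketch only: a "monitor" runs a guest's instruction stream and
// intercepts privileged instructions instead of letting them touch hardware.
import java.util.List;

public class TrapAndEmulateSketch {

    // A hypothetical instruction: an opcode plus a flag saying whether it is privileged.
    record Instruction(String opcode, boolean privileged) {}

    // The "VMM": imagine it running in supervisor mode.
    static class Monitor {
        void run(List<Instruction> guestProgram) {
            for (Instruction inst : guestProgram) {
                if (inst.privileged()) {
                    // Trap: the guest may not touch critical resources directly,
                    // so the monitor emulates the effect for it.
                    System.out.println("trap -> emulating privileged " + inst.opcode());
                } else {
                    // Ordinary instructions run unmodified, as in "user mode".
                    System.out.println("executing " + inst.opcode());
                }
            }
        }
    }

    public static void main(String[] args) {
        List<Instruction> guest = List.of(
                new Instruction("add", false),
                new Instruction("load", false),
                new Instruction("write-to-device", true)); // would need hardware access
        new Monitor().run(guest);
    }
}
```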
Full virtualization and paravirtualization
Virtualization based on a Virtual Machine Monitor is commonly divided into two techniques: full virtualization and paravirtualization.
In full virtualization, the guest operating system works as if it had a physical machine entirely at its disposal. The system therefore does not need to be adapted in any way and behaves as if there were no virtualization involved at all. The problem is that this approach can have some considerable limitations.
One of them is the risk of some guest requests not being handled as expected. This happens, for example, when the hypervisor is unable to deal with a certain privileged instruction, or when a hardware resource cannot be fully used because the drivers (a kind of software that “teaches” the operating system how to deal with a device) available in the virtualization layer cannot guarantee full compatibility with it.
Paravirtualization emerged as a way to solve problems of this kind. In it, the guest operating system runs on a virtual machine that is similar to the physical hardware, but not equivalent to it.
In this method, the guest is modified to call the hypervisor whenever it needs a privileged instruction, rather than going directly to the processor. The VMM therefore does not need to intercept these requests and test them (a task that costs performance), as happens in full virtualization.
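For contrast with the previous sketch, here is an equally illustrative Java sketch of the paravirtualized case (all names are again invented): the guest code is written to call the hypervisor's service interface, a “hypercall”, directly, so nothing needs to be trapped and inspected first.

```java
// Conceptual sketch only: a "paravirtualized" guest knows it runs on a
// hypervisor and asks for privileged services through an explicit interface.
public class ParavirtSketch {

    // A hypothetical hypercall interface exposed by the hypervisor to its guests.
    interface Hypervisor {
        void hypercall(String service);
    }

    static class SimpleHypervisor implements Hypervisor {
        @Override
        public void hypercall(String service) {
            // No interception or instruction testing is needed: the request
            // arrives already identified as a call for a privileged service.
            System.out.println("hypervisor handling: " + service);
        }
    }

    // The guest's code was modified to talk to the hypervisor directly.
    static class Guest {
        private final Hypervisor hv;
        Guest(Hypervisor hv) { this.hv = hv; }

        void writeToDevice() {
            hv.hypercall("write-to-device"); // explicit request, not a trapped instruction
        }
    }

    public static void main(String[] args) {
        new Guest(new SimpleHypervisor()).writeToDevice();
    }
}
```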
In addition, paravirtualization clearly reduces hardware compatibility problems, because the guest operating system ends up being able to use appropriate drivers. In full virtualization, the available drivers are “generic”, that is, created to support as many devices as possible, but without taking the particularities of each component into account.
The main drawback of paravirtualization is that the operating system has to be modified in order to “know” that it is being virtualized, which may generate adaptation and update costs, or limitations when migrating to a new set of hardware, for example.
With full virtualization, it is worth remembering, there is no need to change the system, but the procedure is subject to the problems mentioned at the beginning of this topic. Thus, adopting one approach or the other depends on analyses and tests capable of determining which is more advantageous for a given service.
Other methods of virtualization
The VMM is not the only virtualization technique in existence. To meet a variety of different needs, several methods have been (and continue to be) developed. Among them are the Process Virtual Machine, the Operating System Virtual Machine and hardware-assisted virtualization.
Process Virtual Machine
In this method, the virtual machine works like any other application running inside the operating system. One of the most popular virtual machines of this type belongs to the Java programming language: when a program is compiled, a specific kind of code (bytecode) is generated to be executed by a JVM (Java Virtual Machine).
The Virtual Machine Monitor is a software layer tied directly to the hardware and therefore remains active the whole time the computer stays on. In the Process Virtual Machine, the virtual machine is treated as a process, as its name indicates. So, when its execution finishes, the virtual machine environment ceases to exist.
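The Java case mentioned above is easy to try for yourself: the small program below, once compiled with javac, produces bytecode that is executed by the JVM when you run it with java, and that virtual machine lives only as long as the program's process.

```java
// Hello.java: compile with "javac Hello.java" to get platform-neutral bytecode
// (Hello.class), then run it with "java Hello". The JVM is created for this
// process and ceases to exist when main() returns.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Running inside a process virtual machine (the JVM)");
    }
}
```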
Operating System Virtual Machine
You already know that a single computer can host more than one virtual machine, without any of them being aware of the others' existence. The problem is that, in many cases, this approach can hurt performance. One way of dealing with this is the Operating System Virtual Machine technique.
Here, the physical machine receives an operating system, but several virtual environments are created on top of it. Each environment has access to certain resources, such as disk space and processing quotas, but they all share the same kernel (the core, that is, the main part of the operating system). Typically, one environment is not aware of the others' existence.
Operating System Virtual Machines are widely used, for example, by web hosting companies: each environment is offered to a customer as if it were a dedicated system, when, in fact, the server is being shared with several other users.
Hardware-assisted virtualization
So far, we have dealt with virtualization as a set of software-based techniques. But hardware can also play an important part in solutions of this kind.
Companies such as Intel and AMD, the world's largest processor manufacturers, have developed (and continue to develop) technologies that allow their chips to work better with virtual machine solutions, especially with regard to full virtualization.
In Intel's case, many of its current processors feature Intel Virtualization Technology (Intel VT), a set of instructions added to the chip specifically to handle virtualization tasks. AMD has an equivalent technology (the two are not compatible with each other) named AMD Virtualization (AMD-V).
Among the capabilities offered by these technologies is the ability to make the processor behave as if it were a set of chips, one for each virtual machine in use, which makes the hypervisor's job easier.
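If you want to check whether your own processor advertises these extensions, one simple approach on Linux (a rough sketch that assumes a Linux system; other operating systems need other tools) is to look for the vmx flag (Intel VT-x) or the svm flag (AMD-V) in /proc/cpuinfo:

```java
// Linux-only sketch: read /proc/cpuinfo and look for the CPU flags that
// indicate hardware virtualization support ("vmx" for Intel, "svm" for AMD).
// The string match is deliberately simplistic and meant only as an illustration.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class VirtFlags {
    public static void main(String[] args) throws IOException {
        String cpuinfo = Files.readString(Path.of("/proc/cpuinfo"));
        if (cpuinfo.contains(" vmx")) {
            System.out.println("Intel VT-x (vmx) available");
        } else if (cpuinfo.contains(" svm")) {
            System.out.println("AMD-V (svm) available");
        } else {
            System.out.println("No hardware virtualization flag found");
        }
    }
}
```

Note that the flag may also be hidden if virtualization support is disabled in the machine's firmware settings.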
Some virtualization solutions
Since virtualization can serve the most varied applications, there are several solutions of this kind on the market, as well as a number of companies that specialize in the field. Here are some of the best known: VMware, Microsoft, Xen and VirtualBox.
VMware
VMware is a North American company specializing in virtualization. Its products are well known in the market and serve applications of the most varied sizes.
One of them, the entry-level product so to speak, is VMware Player, a free virtualization tool that lets the home user create a virtual machine to run other operating systems on Windows or Linux. With it, you can study a system, test software and so on.
Another well-known solution from the company is VMware Server, which is also free but is aimed at the server segment of small and medium-sized businesses.
The company's paid solutions, however, are much richer in resources and can serve everything from simpler servers to large data centers.
Microsoft
Microsoft also has a significant presence in the virtualization market, above all because its software in this category integrates easily with its operating systems, at least most of the time.
One of its tools is the free Virtual PC, which allows a user running Windows to run older versions of the platform, or even other operating systems such as Linux distributions. Here, too, the idea is to let the user evaluate other operating systems, run software tests, and so on.
But the company's main highlight is Hyper-V, a virtualization solution that is integrated into its line of server operating systems (such as Windows Server 2012), although it also works in certain versions aimed at home or office use (such as Windows 8).
Based on the hypervisor concept, Hyper-V can handle several virtualization scenarios, including high-performance ones, and can, for example, be used in virtual data centers.
Among Hyper-V's many capabilities are a feature that allows virtual machines to be moved from one server to another (so that the first can be repaired, for example) and the ability to create replicas: it is possible to keep any number of “copies” of virtual machines, either for testing or to immediately take over a service that has become unavailable.
Xen
Xen is another very strong name when the subject is virtualization. It is a VMM-based solution whose development was sponsored by the University of Cambridge in the United Kingdom. The project is compatible with multiple platforms and architectures.
Released as free software, Xen costs nothing and its source code can be accessed by anyone. Because of this, its use is quite widespread in academia and among Linux enthusiasts, for example.
In 2007, XenSource, the company that maintained the project, was bought by Citrix, another major player in the virtualization segment. Thus, it is also possible to find paid solutions that carry the Xen name.
VirtualBox
VirtualBox is a project started in 2007 by a German company named Innotek, but it now belongs to Oracle. Its purpose is to let the user run one operating system inside another without facing much complexity.
There are versions of the software for all the major operating systems on the market, such as Windows, OS X and Linux distributions. The main edition of VirtualBox is open source and free of charge, but Oracle offers editions for corporate use that require paid licenses.
Disadvantages of virtualization
The multitude of solutions and methods available means that virtualization can meet a variety of needs, as you already know, but we also cannot treat the concept as a “miracle cure” for every IT problem. Depending on the circumstances, virtualization can also have drawbacks. Here are a few:
– Overload affects every virtual machine: to begin with, the number of virtual machines a computer can support is not unlimited, so it is necessary to strike a balance to avoid overload; otherwise, the performance of all the virtual machines will suffer;
– Security: if there is a security vulnerability in the VMM, for example, all the virtual machines may be affected by the problem;
– Portability: depending on the solution in use, migrating a virtual machine can be a problem. A hypothetical example: a system that uses AMD-V instructions but needs to be transferred to an Intel machine;
– Contingency: in critical applications, it is important to have a computer that can immediately take the place of the main machine (such as a server), because if the latter stops working, all the virtualized systems running on it will also stop;
– Performance: virtualization may not perform well in every application, so it is important to assess the solution very carefully before actually deploying it;
– Costs: there may be unforeseen expenses with maintenance, labor, training, deployment and so on.
Final remarks
Although it is an old concept (its origins go back to the 1960s, as shown at the beginning of this text), virtualization has gained great prominence in recent years and will certainly have its place among the computing solutions of the future.
This is because today's processing power is high enough for certain applications to take advantage of computers' idle capacity, and also because, with computing reaching practically every sector of society, there is ever more experience in identifying the best solution for each need.