libvirt
Virtualization is a technology that provides a way for a machine (Host) to run other operating systems (guest virtual machines) on top of the host operating system.
SUSE Linux Enterprise includes the latest open source virtualization technologies, Xen and KVM. With these Hypervisors, SUSE Linux Enterprise can be used to provision, de-provision, install, monitor and manage multiple virtual machines (VM Guests) on a single physical system (for more information, see Hypervisor).
Out of the box, SUSE Linux Enterprise can create virtual machines running both modified, highly tuned, paravirtualized operating systems and fully virtualized unmodified operating systems. Full virtualization allows the guest OS to run unmodified, but requires an x86_64 processor with hardware virtualization support (either Intel* Virtualization Technology (Intel VT) or AMD* Virtualization (AMD-V)).
The primary component of the operating system that enables virtualization is a Hypervisor (or virtual machine manager), which is a layer of software that runs directly on server hardware. It controls platform resources, sharing them among multiple VM Guests and their operating systems by presenting virtualized hardware interfaces to each VM Guest.
SUSE Linux Enterprise is an enterprise-class Linux server operating system that offers two types of Hypervisors: Xen and KVM. Both Hypervisors support virtualization on 64-bit x86-based hardware architectures. Both Xen and KVM support full virtualization mode. In addition, Xen supports paravirtualized mode. SUSE Linux Enterprise with Xen or KVM acts as a virtualization host server (VHS) that supports VM Guests with their own guest operating systems. The SUSE VM Guest architecture consists of a Hypervisor and management components that constitute the VHS, which runs many application-hosting VM Guests.
In Xen, the management components run in a privileged VM Guest often called Dom0. In KVM, where the Linux kernel acts as the hypervisor, the management components run directly on the VHS.
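Both Hypervisors can be managed through the libvirt API. As a minimal sketch, assuming the libvirt-python bindings are installed and a local KVM host, the following Python code connects to the hypervisor and lists all VM Guests; on a Xen VHS the connection URI would be xen:///system instead of qemu:///system.

    import libvirt  # libvirt-python bindings

    # Connect to the local KVM/QEMU hypervisor (use "xen:///system" on a Xen VHS).
    conn = libvirt.open("qemu:///system")

    # List all VM Guests known to this VM Host Server, running or shut off.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")

    conn.close()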
Virtualization design provides many capabilities to your organization. Virtualization of operating systems is used in many different computing areas:
Server consolidation: Many servers can be replaced by one big physical server, so hardware is consolidated, and guest operating systems are converted to virtual machines. This also provides the ability to run legacy software on new hardware.
Isolation: A guest operating system can be fully isolated from the Host running it, so if the virtual machine is corrupted, the Host system is not harmed.
Migration: A process to move a running virtual machine to another physical machine. Live migration is an extended feature that allows this move without disconnection of the client or the application (a sketch follows this list).
Disaster recovery: Virtualized guests are less dependent on the hardware, and the Host server provides snapshot features that make it possible to restore a known running system without any corruption.
Dynamic load balancing: A migration feature that provides a simple way to load-balance your services across your infrastructure.
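As an illustration of the migration capability above, the following hedged sketch live-migrates a running VM Guest to another VM Host Server using the libvirt Python bindings. The guest name sles-guest and the destination host host2.example.com are placeholders; shared storage and compatible hypervisor setups on both hosts are assumed.

    import libvirt

    # Source and destination VM Host Servers (destination host is a placeholder).
    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://host2.example.com/system")

    # Look up the running VM Guest by its (placeholder) name.
    dom = src.lookupByName("sles-guest")

    # VIR_MIGRATE_LIVE keeps the guest running while its memory is transferred.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    dst.close()
    src.close()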
Virtualization brings a lot of advantages while providing the same service as a hardware server.
First, it reduces the cost of your infrastructure. Servers are mainly used to provide a service to a customer, and a virtualized operating system can provide the same service, with:
Less hardware: You can run several operating systems on one host, so all hardware maintenance is reduced.
Less power/cooling: Less hardware means you do not need to invest more in electric power, backup power, and cooling when you need to provide more services.
Save space: You save data center space because you do not need additional physical servers (fewer servers than services running).
Less management: Using a VM Guest simplifies the administration of your infrastructure.
Agility and productivity: Virtualization provides migration capabilities, live migration and snapshots. These features reduce downtime and provide an easy way to move your services from one place to another without any service interruption (a snapshot sketch follows this list).
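The snapshot capability mentioned above is also available through libvirt. The following sketch creates a snapshot of a VM Guest; the guest and snapshot names are placeholders, and the guest's disk format must support snapshots (for example qcow2).

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("sles-guest")  # placeholder guest name

    # Minimal snapshot description; libvirt fills in the per-disk details.
    snapshot_xml = """
    <domainsnapshot>
      <name>before-upgrade</name>
      <description>Known good state before applying updates</description>
    </domainsnapshot>
    """

    dom.snapshotCreateXML(snapshot_xml, 0)
    conn.close()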
Guest operating systems are hosted on virtual machines in either full virtualization (FV) mode or paravirtual (PV) mode. Each virtualization mode has advantages and disadvantages.
Full virtualization mode lets virtual machines run unmodified operating systems, such as Windows* Server 2003, but requires the computer running as the VM Host Server to support hardware-assisted virtualization technology, such as AMD* Virtualization or Intel* Virtualization Technology.
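Whether the computer offers hardware-assisted virtualization can be read from the CPU flags in /proc/cpuinfo: vmx indicates Intel VT, svm indicates AMD-V. The following small Python sketch is a convenience check only, equivalent to inspecting the file by hand.

    # Check /proc/cpuinfo for hardware virtualization support:
    # "vmx" = Intel VT, "svm" = AMD-V.
    with open("/proc/cpuinfo") as f:
        flags = {word for line in f if line.startswith("flags") for word in line.split()}

    if "vmx" in flags:
        print("Intel VT detected: full virtualization mode is possible")
    elif "svm" in flags:
        print("AMD-V detected: full virtualization mode is possible")
    else:
        print("No hardware-assisted virtualization detected")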
Some guest operating systems hosted in full virtualization mode can be configured to run the Novell* Virtual Machine Drivers instead of drivers originating from the operating system. Running virtual machine drivers improves performance dramatically on guest operating systems, such as Windows Server 2003. For more information, see Appendix A, Virtual Machine Drivers.
Paravirtual mode does not require the host computer to support hardware-assisted virtualization technology, but does require the guest operating system to be modified for the virtualization environment. Typically, operating systems running in paravirtual mode enjoy better performance than those requiring full virtualization mode.
Operating systems currently modified to run in paravirtual mode are called paravirtualized operating systems and include openSUSE Leap and NetWare® 6.5 SP8.
VM Guests not only share CPU and memory resources of the host system, but also the I/O subsystem. Because software I/O virtualization techniques deliver less performance than bare metal, hardware solutions that deliver almost “native” performance have been developed recently. openSUSE Leap supports the following I/O virtualization techniques:
Fully Virtualized (FV) drivers emulate widely supported real devices, which can be used with an existing driver in the VM Guest. Since the physical device on the VM Host Server may differ from the emulated one, the hypervisor needs to process all I/O operations before handing them over to the physical device. Therefore all I/O operations need to traverse two software layers, a process that not only significantly impacts I/O performance, but also consumes CPU time.
Paravirtualization (PV) allows direct communication between the hypervisor and the VM Guest. With less overhead involved, performance is much better than with full virtualization. However, paravirtualization requires either the guest operating system to be modified to support the paravirtualization API or paravirtualized drivers. See Section 7.1.1, “Availability of Paravirtualized Drivers” for a list of guest operating systems supporting paravirtualization.
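For KVM guests, paravirtualized drivers are provided by virtio. The difference between an emulated and a paravirtualized disk is visible in the libvirt domain XML: the target bus is ide for the emulated device and virtio for the paravirtualized one. The following sketch shows both variants and attaches the virtio disk to a guest; the image path and guest name are placeholders.

    import libvirt

    # Emulated IDE disk: handled by a fully virtualized driver in the guest.
    emulated_disk = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/data.qcow2'/>
      <target dev='hdb' bus='ide'/>
    </disk>
    """

    # Paravirtualized disk: uses the virtio driver in the guest.
    virtio_disk = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/data.qcow2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("sles-guest")  # placeholder guest name

    # Add the paravirtualized disk to the persistent guest configuration.
    dom.attachDeviceFlags(virtio_disk, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()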
VFIO stands for Virtual Function I/O and is a new user-level driver framework for Linux. It replaces the traditional KVM PCI Pass-Through device assignment. The VFIO driver exposes direct device access to user space in a secure memory (IOMMU) protected environment. With VFIO, a VM Guest can directly access hardware devices on the VM Host Server (pass-through), avoiding performance issues caused by emulation in performance-critical paths. This method does not allow devices to be shared; each device can be assigned to only one VM Guest. VFIO needs to be supported by the VM Host Server CPU, chipset and the BIOS/EFI.
Compared to the legacy KVM PCI device assignment, VFIO has the following advantages:
Resource access is compatible with secure boot.
The device is isolated and its memory access is protected.
It offers a userspace device driver with a more flexible device ownership model.
It is independent of KVM technology, and is not bound to the x86 architecture only.
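With libvirt, VFIO-based pass-through is configured by adding a hostdev element to the VM Guest. The following hedged sketch assigns a PCI device at a placeholder address (0000:03:00.0) to a guest; with managed='yes', libvirt detaches the device from its host driver and binds it to vfio-pci automatically.

    import libvirt

    # PCI device to pass through (placeholder address 0000:03:00.0).
    hostdev_xml = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("sles-guest")  # placeholder guest name

    # Add the device to the persistent configuration; the guest has
    # exclusive access to it after its next start.
    dom.attachDeviceFlags(hostdev_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()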
The latest I/O virtualization technique, Single Root I/O Virtualization (SR-IOV), combines the benefits of the aforementioned techniques: performance and the ability to share a device with several VM Guests. SR-IOV requires special I/O devices that are capable of replicating resources so they appear as multiple separate devices. Each such “pseudo” device can be directly used by a single guest. However, for network cards, for example, the number of concurrent queues that can be used is limited, potentially reducing performance for the VM Guest compared to paravirtualized drivers. On the VM Host Server, SR-IOV must be supported by the I/O device, the CPU and chipset, the BIOS/EFI and the hypervisor; for setup instructions see Section 14.10, “Adding a PCI Device to a VM Guest”.
This method of assigning PCI devices to VM Guests is deprecated and has been replaced by VFIO. KVM PCI Pass-Through is still supported by SUSE, but using VFIO instead is strongly recommended. Support for KVM PCI Pass-Through will be removed from future versions of openSUSE Leap.
To be able to use the VFIO and SR-IOV features, the VM Host Server needs to fulfill the following requirements:
IOMMU needs to be enabled in the BIOS/EFI.
For Intel CPUs, the Kernel parameter intel_iommu=on needs to be provided on the Kernel command line. Refer to Book “Reference”, Chapter 12 “The Boot Loader GRUB 2”, Section 12.3.3.2 “Kernel Parameters Tab” for details.
The VFIO infrastructure needs to be available. This can be achieved by loading the Kernel module vfio_pci. Refer to Book “Reference”, Chapter 10 “The systemd Daemon”, Section 10.6.4 “Loading Kernel Modules” for details.
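As a convenience only, the following Python sketch checks these requirements on a running VM Host Server: the intel_iommu=on parameter on the kernel command line, populated IOMMU groups under /sys/kernel/iommu_groups, and the loaded vfio_pci module.

    import os

    # Kernel command line: intel_iommu=on is required on Intel CPUs.
    with open("/proc/cmdline") as f:
        cmdline = f.read().split()
    print("intel_iommu=on set:", "intel_iommu=on" in cmdline)

    # Populated IOMMU groups indicate that the IOMMU is enabled and in use.
    iommu_dir = "/sys/kernel/iommu_groups"
    groups = os.listdir(iommu_dir) if os.path.isdir(iommu_dir) else []
    print("IOMMU groups found:", len(groups))

    # The vfio_pci module provides the VFIO infrastructure.
    with open("/proc/modules") as f:
        modules = {line.split()[0] for line in f}
    print("vfio_pci loaded:", "vfio_pci" in modules)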