Virtualization Guide
Applies to openSUSE Leap 15.5

1 Virtualization technology

Abstract

Virtualization is a technology that enables a single physical machine (the host) to run one or more additional operating systems (guest virtual machines) on top of the host operating system.

1.1 Overview

openSUSE Leap includes the latest open source virtualization technologies, Xen and KVM. With these hypervisors, openSUSE Leap can be used to provision, de-provision, install, monitor and manage multiple virtual machines (VM Guests) on a single physical system (for more information see Hypervisor). openSUSE Leap can create virtual machines running both modified, highly tuned, paravirtualized operating systems and fully virtualized unmodified operating systems.
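
Although the details are covered later in this guide, the following is a minimal, hedged sketch of how the two hypervisors are typically installed on openSUSE Leap (the zypper pattern names below are assumptions based on the standard openSUSE Leap patterns):

  # Install the KVM hypervisor and its management tools
  sudo zypper install -t pattern kvm_server kvm_tools

  # Or install the Xen hypervisor and its tools instead
  sudo zypper install -t pattern xen_server xen_tools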

The primary component of the operating system that enables virtualization is a hypervisor (or virtual machine manager), which is a layer of software that runs directly on server hardware. It controls platform resources, sharing them among multiple VM Guests and their operating systems by presenting virtualized hardware interfaces to each VM Guest.

openSUSE Leap is a Linux server operating system that offers two types of hypervisors: Xen and KVM.

openSUSE Leap with Xen or KVM acts as a virtualization host server (VHS) that supports VM Guests with their own guest operating systems. The SUSE VM Guest architecture consists of a hypervisor and management components that constitute the VHS, which runs many application-hosting VM Guests.

In Xen, the management components run in a privileged VM Guest often called Dom0. In KVM, where the Linux kernel acts as the hypervisor, the management components run directly on the VHS.
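
To tell which of the two setups a particular host is running, the following quick checks can be used (a minimal sketch; output details vary between installations):

  # On a KVM host, the KVM kernel modules are loaded
  lsmod | grep kvm

  # On a Xen host, the running system is Dom0 if this file exists
  # and lists the control_d capability
  cat /proc/xen/capabilities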

1.2 Virtualization benefits

Virtualization brings many advantages while providing the same service as a dedicated hardware server.

First, it reduces the cost of your infrastructure. Servers are mainly used to provide a service to a customer, and a virtualized operating system can provide the same service, with:

  • Less hardware: you can run several operating systems on a single host, so hardware maintenance is reduced.

  • Less power/cooling: less hardware means you do not need to invest in additional electric power, backup power and cooling when you need to provide more services.

  • Less space: you save data center space because you do not need additional hardware servers (fewer servers than services running).

  • Less management: using a VM Guest simplifies the administration of your infrastructure.

  • Agility and productivity: virtualization provides migration capabilities, live migration and snapshots. These features reduce downtime and make it easy to move a service from one place to another without service interruption.

1.3 Virtualization modes

Guest operating systems are hosted on virtual machines in either full virtualization (FV) mode or paravirtual (PV) mode. Each virtualization mode has advantages and disadvantages.

  • Full virtualization mode lets virtual machines run unmodified operating systems, such as Windows* Server 2003. It can use either binary translation or hardware-assisted virtualization technology, such as AMD* Virtualization or Intel* Virtualization Technology. Using hardware assistance allows for better performance on processors that support it (a way to verify this support is shown after this list).

  • To be able to run under paravirtual mode, guest operating systems normally need to be modified for the virtualization environment. However, operating systems running in paravirtual mode have better performance than those running under full virtualization.

    Operating systems currently modified to run in paravirtual mode are called paravirtualized operating systems and include openSUSE Leap.
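
Whether a processor offers hardware-assisted virtualization can be verified from the command line, as in the following sketch:

  # Either flag indicates hardware virtualization support:
  #   vmx = Intel Virtualization Technology (VT-x), svm = AMD-V
  grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

  # lscpu reports the same information in a more readable form
  lscpu | grep -i virtualization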

1.4 I/O virtualization

VM Guests not only share CPU and memory resources of the host system, but also the I/O subsystem. Because software I/O virtualization techniques deliver less performance than bare metal, hardware solutions that deliver almost native performance have been developed recently. openSUSE Leap supports the following I/O virtualization techniques:

Full virtualization

Fully Virtualized (FV) drivers emulate widely supported real devices, which can be used with an existing driver in the VM Guest. The guest is also called Hardware Virtual Machine (HVM). Since the physical device on the VM Host Server may differ from the emulated one, the hypervisor needs to process all I/O operations before handing them over to the physical device. Therefore all I/O operations need to traverse two software layers, a process that not only significantly impacts I/O performance, but also consumes CPU time.
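
As an illustration only (a hedged sketch, with a placeholder disk image path), a KVM guest using a fully emulated network card could be started with qemu-system-x86_64 like this:

  # The e1000 device emulates a widely supported Intel network card;
  # every packet passes through QEMU's device emulation before it
  # reaches the physical NIC of the VM Host Server
  qemu-system-x86_64 -m 2048 -accel kvm \
    -drive file=/var/lib/libvirt/images/guest.qcow2,format=qcow2 \
    -netdev user,id=net0 -device e1000,netdev=net0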

Paravirtualization

Paravirtualization (PV) allows direct communication between the hypervisor and the VM Guest. With less overhead involved, performance is much better than with full virtualization. However, paravirtualization requires the guest operating system either to be modified to support the paravirtualization API or to use paravirtualized drivers.
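
For comparison with the emulated example above (again a hedged sketch with a placeholder disk image path), the same guest can use paravirtualized virtio devices, provided the guest operating system ships virtio drivers:

  # virtio block and network devices avoid full device emulation,
  # so I/O requests take a much shorter path between guest and host
  qemu-system-x86_64 -m 2048 -accel kvm \
    -drive file=/var/lib/libvirt/images/guest.qcow2,format=qcow2,if=virtio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0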

PVHVM

This type of virtualization enhances HVM (see Full virtualization) with paravirtualized (PV) drivers, and PV interrupt and timer handling.

VFIO

VFIO stands for Virtual Function I/O and is a user-level driver framework for Linux. It replaces the traditional KVM PCI pass-through device assignment. The VFIO driver exposes direct device access to user space in a secure, IOMMU-protected memory environment. With VFIO, a VM Guest can directly access hardware devices on the VM Host Server (pass-through), avoiding performance issues caused by emulation in performance-critical paths. This method does not allow devices to be shared: each device can be assigned to only one VM Guest. VFIO needs to be supported by the VM Host Server CPU, chipset and the BIOS/EFI. A short pass-through sketch follows the list of advantages below.

Compared to the legacy KVM PCI device assignment, VFIO has the following advantages:

  • Resource access is compatible with UEFI Secure Boot.

  • The device is isolated and its memory access is protected.

  • It offers a user space device driver with a more flexible device ownership model.

  • It is independent of KVM technology and is not bound to the x86 architecture only.

In openSUSE Leap the USB and PCI pass-through methods of device assignment are considered deprecated and are superseded by the VFIO model.
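
The following is a minimal, hedged sketch of VFIO pass-through with QEMU; the PCI address 0000:01:00.0, the vendor/device ID 8086:10fb and the disk image path are placeholders for actual values on the VM Host Server:

  # Identify the device and its vendor/device ID
  lspci -nn | grep -i ethernet

  # Bind the device to the vfio-pci driver (here via a module parameter)
  sudo modprobe vfio-pci ids=8086:10fb

  # Assign the device to a VM Guest
  qemu-system-x86_64 -m 4096 -accel kvm \
    -drive file=/var/lib/libvirt/images/guest.qcow2,format=qcow2,if=virtio \
    -device vfio-pci,host=0000:01:00.0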

SR-IOV

The latest I/O virtualization technique, Single Root I/O Virtualization (SR-IOV), combines the benefits of the aforementioned techniques: performance and the ability to share a device with several VM Guests. SR-IOV requires special I/O devices that are capable of replicating resources so that they appear as multiple separate devices. Each such pseudo device can be directly used by a single guest. However, for network cards, for example, the number of concurrent queues that can be used is limited, potentially reducing performance for the VM Guest compared to paravirtualized drivers. On the VM Host Server, SR-IOV must be supported by the I/O device, the CPU and chipset, the BIOS/EFI and the hypervisor; for setup instructions, see Section 13.12, “Assigning a host PCI device to a VM Guest”.
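
As a hedged sketch of how virtual functions are typically created on a capable network card (the interface name eth0 and the number of functions are placeholders; the sysfs attributes used are the generic Linux SR-IOV interface):

  # Check how many virtual functions the physical function supports
  cat /sys/class/net/eth0/device/sriov_totalvfs

  # Create four virtual functions; they appear as separate PCI devices
  echo 4 | sudo tee /sys/class/net/eth0/device/sriov_numvfs
  lspci | grep -i 'virtual function'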

Important: Requirements for VFIO and SR-IOV

To be able to use the VFIO and SR-IOV features, the VM Host Server needs to fulfill the following requirements:

  • IOMMU needs to be enabled in the BIOS/EFI.

  • For Intel CPUs, the kernel parameter intel_iommu=on needs to be provided on the kernel command line (see the example after this list). For more information, see https://github.com/torvalds/linux/blob/master/Documentation/admin-guide/kernel-parameters.txt#L1951.

  • The VFIO infrastructure needs to be available. This can be achieved by loading the kernel module vfio_pci. For more information, see Book “Reference”, Chapter 10 “The systemd daemon”, Section 10.6.4 “Loading kernel modules”.
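
The following is a minimal sketch of both steps on openSUSE Leap (intel_iommu=on applies to Intel CPUs only; the exact kernel command line depends on the system):

  # 1. Add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
  #    then regenerate the GRUB configuration and reboot
  sudo grub2-mkconfig -o /boot/grub2/grub.cfg

  # 2. Load the vfio_pci module now and on every boot
  sudo modprobe vfio_pci
  echo vfio_pci | sudo tee /etc/modules-load.d/vfio-pci.conf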
