Setting Up a Virtual Machine Host
This section documents how to set up and use openSUSE Leap 42.1 as a virtual machine host.
Usually, the hardware requirements for the Dom0 are the same as those for the openSUSE Leap operating system, but additional CPU, disk, memory, and network resources should be added to accommodate the resource demands of all planned VM Guest systems.
Remember that VM Guest systems, like physical machines, perform better when they run on faster processors and have access to more system memory.
Xen virtualization technology is available in SUSE products based on code path 10 and later. Code path 10 products include Open Enterprise Server 2 Linux, SUSE Linux Enterprise Server 10, SUSE Linux Enterprise Desktop 10, and openSUSE 10.x.
The virtual machine host requires several software packages and their dependencies to be installed. To install all necessary packages, run YaST Software Management, select the Patterns view and choose Xen Virtual Machine Host Server for installation. The installation can also be performed with the YaST module Virtualization › Install Hypervisor and Tools. After the Xen software is installed, restart the computer and, on the boot screen, choose the newly added option with the Xen kernel.
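If you prefer the command line, the same software can be installed with zypper; the pattern name xen_server is an assumption based on the openSUSE pattern naming and may differ on your system:
zypper install -t pattern xen_server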
Updates are available through your update channel. To be sure to have the latest updates installed, run YaST Online Update after the installation has finished. When installing and configuring the openSUSE Leap operating system on the host, be aware of the following best practices and suggestions:
If the host should always run as a Xen host, run YaST › System › Boot Loader and activate the Xen boot entry as the default boot section: in YaST, click System › Boot Loader, change the default boot section to the entry with the Xen label, click Set as Default, then click OK.
For best performance, only the applications and processes required for virtualization should be installed on the virtual machine host.
When using both iSCSI and OCFS2 to host Xen images, the latency required for the OCFS2 default timeouts in SUSE Linux Enterprise Server 12 may not be met. To reconfigure this timeout, edit O2CB_HEARTBEAT_THRESHOLD in the o2cb system configuration.
If you intend to use a watchdog device attached to the Xen host, use only one at a time. It is recommended to use a driver with actual hardware integration over a generic software one.
The Dom0 kernel is running virtualized, so tools like irqbalance or lscpu will not reflect the real hardware characteristics.
When the host is set up, a percentage of system memory is reserved for the hypervisor, and all remaining memory is automatically allocated to Dom0.
A better solution is to set a default amount of memory for Dom0, so the memory can be allocated appropriately to the hypervisor. An adequate amount would be 20 percent of the total system memory, up to 4 GiB. A recommended minimum amount is 512 MiB. The minimum amount of memory heavily depends on how many VM Guests the host should handle, so be sure you have enough memory to support all your VM Guests. If the value is too low, the host system may hang when multiple VM Guests use most of the memory.
Determine the amount of memory to set for Dom0. At Dom0, type xl info to view the amount of memory that is available on the machine. The memory that is currently allocated by Dom0 can be determined with the command xl list.
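For example (the comment next to each command describes the part of the output that matters here):
xl info   # total_memory and free_memory show the machine's memory in MiB
xl list   # the Mem column shows how much memory is currently allocated to Domain-0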
Run YaST › System › Boot Loader.
Select the Xen section.
Add dom0_mem=mem_amount, where mem_amount is the maximum amount of memory to allocate to Dom0. Append K, M, or G to specify the unit, for example dom0_mem=768M.
Restart the computer to apply the changes.
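If you prefer to edit the boot configuration directly instead of going through YaST, a minimal sketch of the same change, assuming the standard GRUB 2 layout on openSUSE, would be:
# /etc/default/grub -- the option must be passed to the Xen hypervisor, not to the Dom0 kernel
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=768M"
Regenerate the boot configuration afterwards with grub2-mkconfig -o /boot/grub2/grub.cfg and reboot.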
When using the xl toolstack and the dom0_mem= option for the Xen hypervisor in GRUB 2, you need to disable xl autoballooning in /etc/xen/xl.conf; otherwise launching VMs will fail with errors about not being able to balloon down Dom0. So add autoballoon=0 to xl.conf if you have the dom0_mem= option specified for Xen. Also see the Xen best practices documentation on Dom0 memory.
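The relevant line in /etc/xen/xl.conf then reads:
# do not let xl automatically balloon down Dom0
autoballoon=0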
In a fully virtualized guest, the default network card is an emulated Realtek network card. However, it is also possible to use the split network driver to run the communication between Dom0 and a VM Guest. By default, both interfaces are presented to the VM Guest, because the drivers of some operating systems require both to be present.
When using SUSE Linux Enterprise, only the paravirtualized network cards are available for the VM Guest by default. The following network options are available:
To use an emulated network interface like an emulated Realtek card,
specify type=ioemu
in the vif
device section of the domain xl configuration. An example
configuration would look like:
vif = [ 'type=ioemu,mac=00:16:3e:5f:48:e4,bridge=br0' ]
Find more details about the domain configuration syntax in the xl.cfg manual page (man 5 xl.cfg).
When you specify type=vif
and do not specify a
model or type, the paravirtualized network interface is used:
vif = [ 'type=vif,mac=00:16:3e:5f:48:e4,bridge=br0,backend=0' ]
If the administrator should be offered both options, simply specify both type and model. The xl configuration would look like:
vif = [ 'type=ioemu,mac=00:16:3e:5f:48:e4,model=rtl8139,bridge=br0' ]
In this case, one of the network interfaces should be disabled on the VM Guest.
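How this is done depends on the guest operating system. On a Linux guest, for example, the unused interface can simply be brought down; the interface name eth1 here is only an illustration and depends on how the guest enumerates the two cards:
ip link set dev eth1 down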
If virtualization software is correctly installed, the computer boots to display the GRUB 2 boot loader with a Xen option on the menu. Select this option to start the virtual machine host.
In Xen, the hypervisor manages the memory resource. If you need to reserve system memory for a recovery kernel in Dom0, this memory needs to be reserved by the hypervisor. Thus, it is necessary to add the parameter crashkernel=size to the kernel line instead of using the line with the other boot options.
For more information on the crashkernel parameter, see
Book “System Analysis and Tuning Guide”, Chapter 17 “Kexec and Kdump”, Section 17.4 “Calculating crashkernel
Allocation Size”.
If the Xen option is not on the GRUB 2 menu, review the steps for installation and verify that the GRUB 2 boot loader has been updated. If the installation has been done without selecting the Xen pattern, run YaST Software Management, select the Patterns filter, and choose Xen Virtual Machine Host Server for installation. After booting the hypervisor, the Dom0 virtual machine starts and displays its graphical desktop environment. If you did not install a graphical desktop, the command line environment appears.
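To verify from a shell that the boot loader configuration actually contains a Xen entry, you can, for example, search the generated GRUB 2 configuration; the path assumes the standard openSUSE layout:
grep -i xen /boot/grub2/grub.cfg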
Sometimes it may happen that the graphics system does not work properly. In this case, add vga=ask to the boot parameters. To activate permanent settings, use vga=mode-0x???, where ??? is calculated as 0x100 + the VESA mode from http://en.wikipedia.org/wiki/VESA_BIOS_Extensions, for example vga=mode-0x361.
Before starting to install virtual guests, make sure that the system time is correct. To do this, configure NTP (Network Time Protocol) on the controlling domain:
In YaST select Network Services › NTP Configuration. Select the option to automatically start the NTP daemon during boot. Provide the IP address of an existing NTP time server, then click Finish.
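If you prefer the command line over YaST, a minimal sketch, assuming the classic ntp package of openSUSE Leap 42.1 and an example server address, would be:
# /etc/ntp.conf -- point the daemon at an existing time server
server ntp.example.com iburst
# start the daemon now and on every boot
systemctl enable ntpd
systemctl start ntpd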
Hardware clocks commonly are not very precise. All modern operating
systems try to correct the system time compared to the hardware time by
means of an additional time source. To get the correct time on all
VM Guest systems, also activate the network time services on each
respective guest or make sure that the guest uses the system time of the
host. For more about Independent Wallclocks
in
openSUSE Leap see Section 16.2, “Xen Virtual Machine Clock Settings”.
For more information about managing virtual machines, see Chapter 20, Managing a Virtualization Environment.
To take full advantage of VM Guest systems, it is sometimes necessary to assign specific PCI devices to a dedicated domain. When using fully virtualized guests, this functionality is only available if the chipset of the system supports this feature, and if it is activated from the BIOS.
This feature is available from both AMD* and Intel*. For AMD machines, the feature is called IOMMU; in Intel speak, this is VT-d. Note that Intel-VT technology is not sufficient to use this feature for fully virtualized guests. To make sure that your computer supports this feature, ask your supplier specifically to deliver a system that supports PCI Pass-Through.
Some graphics drivers use highly optimized ways to access DMA. This is not supported, and thus using graphics cards may be difficult.
When accessing PCI devices behind a PCIe bridge, all of the PCI devices must be assigned to a single guest. This limitation does not apply to PCIe devices.
Guests with dedicated PCI devices cannot be migrated live to a different host.
The configuration of PCI Pass-Through is twofold. First, the hypervisor must be informed at boot time that a PCI device should be available for reassigning. Second, the PCI device must be assigned to the VM Guest.
Select a device to reassign to a VM Guest. To do this, run lspci and read the device number. For example, if lspci contains the following line:
06:01.0 Ethernet controller: Digital Equipment Corporation DECchip 21142/43 (rev 41)
In this case, the PCI number is 06:01.0.
Run YaST › System › Boot Loader.
Select the Xen section and press Edit.
Add the PCI number to the kernel command line parameters:
pciback.hide=(06:01.0)
Press OK and exit YaST.
Reboot the system.
Check if the device is in the list of assignable devices with the command xl pci-assignable-list.
If you want to avoid restarting the host system, you can use dynamic assignment with xl to use PCI Pass-Through.
Begin by making sure that dom0 has the pciback module loaded:
modprobe pciback
Then make a device assignable by using xl
pci-assignable-add
. For example, if you wanted to make the
device 06:01.0 available for guests, you should
type the following:
xl pci-assignable-add 06:01.0
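You can then confirm that the device shows up among the assignable devices:
xl pci-assignable-list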
There are several possibilities to dedicate a PCI device to a VM Guest:
During installation, add the pci
line to the
configuration file:
pci=['06:01.0']
The command xl
can be used to add or remove PCI
devices on the fly. To add the device with number
06:01.0
to a guest with name
sles12
use:
xl pci-attach sles12 06:01.0
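The corresponding detach command removes the device from the running guest again:
xl pci-detach sles12 06:01.0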
To add the device to the guest permanently, add the following snippet to the guest configuration file:
pci = [ '06:01.0,power_mgmt=1,permissive=1' ]
After assigning the PCI device to the VM Guest, the guest operating system must take care of the configuration and device drivers for this device.
Xen 4.0 and newer supports VGA graphics adapter pass-through on fully virtualized VM Guests. The guest can take full control of the graphics adapter with high-performance full 3D and video acceleration.
VGA Pass-Through functionality is similar to PCI Pass-Through and as such also requires IOMMU (or Intel VT-d) support from the mainboard chipset and BIOS.
Only the primary graphics adapter (the one that is used when you power on the computer) can be used with VGA Pass-Through.
VGA Pass-Through is supported only for fully virtualized guests. Paravirtual guests (PV) are not supported.
The graphics card cannot be shared between multiple VM Guests using VGA Pass-Through — you can dedicate it to one guest only.
To enable VGA Pass-Through, add the following settings to your fully virtualized guest configuration file:
gfx_passthru=1
pci=['yy:zz.n']
where yy:zz.n
is the PCI controller ID of the VGA
graphics adapter as found with lspci -v
on Dom0.
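For example, to find the ID of the primary graphics adapter on Dom0, you can filter the lspci output for VGA devices; the ID is the yy:zz.n prefix of the matching line:
lspci | grep -i vga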
In some circumstances, problems may occur during the installation of the VM Guest. This section describes some known problems and their solutions.
The software I/O translation buffer allocates a large chunk of low memory early in the bootstrap process. If the requests for memory exceed the size of the buffer it usually results in a hung boot process. To check if this is the case, switch to console 10 and check the output there for a message similar to
kernel: PCI-DMA: Out of SW-IOMMU space for 32768 bytes at device 000:01:02.0
In this case you need to increase the size of the
swiotlb
. Add
swiotlb=128
on the cmdline of Dom0. Note
that the number can be adjusted up or down to find the optimal size
for the machine.
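If you manage the boot parameters through /etc/default/grub rather than YaST, a sketch of this change, assuming the standard GRUB 2 layout, would be to append the parameter to the Dom0 kernel line and regenerate the configuration:
# append swiotlb=128 to the parameters already listed in this variable
GRUB_CMDLINE_LINUX_DEFAULT="swiotlb=128"
# regenerate the GRUB 2 configuration
grub2-mkconfig -o /boot/grub2/grub.cfg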
The swiotlb=force
kernel parameter is required for
DMA access to work for PCI devices on a PV guest. For more information
about IOMMU and the swiotlb
option see the file
boot-options.txt
from the package kernel-source
.
There are several resources on the Internet that provide interesting information about PCI Pass-Through.