Once you have a virtual disk image ready (for more information on disk images, see Section 28.2, “Managing Disk Images with qemu-img”), it is time to start the related virtual machine. Section 28.1, “Basic Installation with qemu-system-ARCH” introduced simple commands to install and run a VM Guest. This chapter focuses on a more detailed explanation of qemu-system-ARCH usage, and shows solutions for more specific tasks. For a complete list of qemu-system-ARCH's options, see its manual page (man 1 qemu).
qemu-system-ARCH Invocation
The qemu-system-ARCH command uses the following syntax:

qemu-system-ARCH OPTIONS DISK_IMG

where OPTIONS stands for qemu-system-ARCH options, and DISK_IMG is the path to the disk image holding the guest system you want to virtualize.
qemu-system-ARCH Options
This section introduces general qemu-system-ARCH options and options related to the basic emulated hardware, such as the virtual machine's processor, memory, model type, or time processing methods.
-name NAME_OF_GUEST
Specifies the name of the running guest system. The name is displayed in the window caption and used for the VNC server.
-boot OPTIONS
Specifies the order in which the defined drives will be booted. Drives are represented by letters, where a and b stand for the floppy drives 1 and 2, c stands for the first hard disk, d stands for the first CD-ROM drive, and n to p stand for Ether-boot network adapters.
For example, qemu-system-ARCH [...] -boot order=ndc first tries to boot from network, then from the first CD-ROM drive, and finally from the first hard disk.
-pidfile FILENAME
Stores QEMU's process identification number (PID) in a file. This is useful if you run QEMU from a script.
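For example, a start-up script could record the PID and later use it to stop the guest; a minimal sketch, where the file name is illustrative:

tux > qemu-system-x86_64 [...] -pidfile /tmp/sles.pid -daemonize
tux > kill $(cat /tmp/sles.pid)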
-nodefaults
By default, QEMU creates basic virtual devices even if you do not specify them on the command line. This option turns this feature off; you must then specify every single device manually, including graphics and network cards, parallel or serial ports, and virtual consoles. Not even the QEMU monitor is attached by default.
-daemonize
“Daemonizes” the QEMU process after it is started. QEMU will detach from the standard input and standard output after it is ready to receive connections on any of its devices.
SeaBIOS is the default BIOS. You can boot from USB devices and from any drive (CD-ROM, floppy, or hard disk). SeaBIOS has USB mouse and keyboard support and supports multiple VGA cards. For more information about SeaBIOS, refer to the SeaBIOS Website.
You can specify the type of the emulated machine. Run qemu-system-ARCH -M help to view a list of supported machine types.
The machine type isapc: ISA-only-PC is unsupported.
To specify the type of the processor (CPU) model, run qemu-system-ARCH -cpu MODEL. Use qemu-system-ARCH -cpu help to view a list of supported CPU models.
Information about CPU flags can be found in the CPUID article on Wikipedia.
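When running under KVM, a common choice is to pass the host CPU model through to the guest; a sketch, assuming KVM acceleration is available on the host:

tux > qemu-system-x86_64 -machine accel=kvm -cpu host [...]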
The following is a list of the most commonly used options when launching QEMU from the command line. To see all available options, refer to the qemu-doc manual page.
-m MEGABYTES
Specifies how many megabytes are used for the virtual RAM size.
-balloon virtio
Specifies a paravirtualized device to dynamically change the amount of virtual RAM memory assigned to VM Guest. The top limit is the amount of memory specified with -m.
-smp NUMBER_OF_CPUS
Specifies how many CPUs will be emulated. QEMU supports up to 255 CPUs on the PC platform (up to 64 with KVM acceleration used). This option also takes other CPU-related parameters, such as number of sockets, number of cores per socket, or number of threads per core.
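For example, a sketch of a topology specification, with illustrative values; the total CPU count must equal the product of sockets, cores per socket, and threads per core:

tux > qemu-system-x86_64 [...] -smp 4,sockets=2,cores=2,threads=1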
The following is an example of a working qemu-system-ARCH command line:

tux > qemu-system-x86_64 -name "SLES 12 SP2" -M pc-i440fx-2.7 -m 512 \
-machine accel=kvm -cpu kvm64 -smp 2 -drive format=raw,file=/images/sles.raw
-no-acpi
Disables ACPI support.
-S
QEMU starts with the CPU stopped. To start the CPU, enter c in the QEMU monitor. For more information, see Chapter 30, Virtual Machine Administration Using QEMU Monitor.
-readconfig CFG_FILE
Instead of entering the devices' configuration options on the command line each time you want to run VM Guest, qemu-system-ARCH can read it from a file that was either previously saved with -writeconfig or edited manually.
-writeconfig CFG_FILE
Dumps the current virtual machine's devices' configuration to a text file. It can subsequently be re-used with the -readconfig option.
tux > qemu-system-x86_64 -name "SLES 12 SP2" -machine accel=kvm -M pc-i440fx-2.7 -m 512 -cpu kvm64 \
-smp 2 /images/sles.raw -writeconfig /images/sles.cfg (exited)
tux > cat /images/sles.cfg
# qemu config file
[drive]
  index = "0"
  media = "disk"
  file = "/images/sles_base.raw"
This way you can effectively manage the configuration of your virtual machines' devices in a well-arranged way.
-rtc OPTIONS
Specifies the way the RTC is handled inside a VM Guest. By default, the clock of the guest is derived from that of the host system. Therefore, it is recommended that the host system clock is synchronized with an accurate external clock (for example, via NTP service).
If you need to isolate the VM Guest clock from the host one, specify clock=vm instead of the default clock=host.
You can also specify the initial time of the VM Guest's clock with the base option:
tux > qemu-system-x86_64 [...] -rtc clock=vm,base=2010-12-03T01:02:00
Instead of a time stamp, you can specify utc or localtime. The former instructs VM Guest to start at the current UTC value (Coordinated Universal Time, see http://en.wikipedia.org/wiki/UTC), while the latter applies the local time setting.
QEMU virtual machines emulate all devices needed to run a VM Guest. QEMU supports, for example, several types of network cards, block devices (hard and removable drives), USB devices, character devices (serial and parallel ports), or multimedia devices (graphic and sound cards). This section introduces options to configure various types of supported devices.
If your device, such as -drive, needs a special driver and driver properties to be set, specify them with the -device option, and identify it with the drive= suboption. For example:

tux > sudo qemu-system-x86_64 [...] -drive if=none,id=drive0,format=raw \
-device virtio-blk-pci,drive=drive0,scsi=off ...
To get help on available drivers and their properties, use -device ? and -device DRIVER,?.
Block devices are vital for virtual machines. In general, these are fixed or removable storage media usually called drives. One of the connected hard disks typically holds the guest operating system to be virtualized.
Virtual Machine drives are defined with -drive. This option has many sub-options, some of which are described in this section. For the complete list, see the manual page (man 1 qemu).
-drive Option

file=image_fname
Specifies the path to the disk image that will be used with this drive. If not specified, an empty (removable) drive is assumed.
if=drive_interface
Specifies the type of interface to which the drive is connected. Currently only floppy, scsi, ide, or virtio are supported by SUSE. virtio defines a paravirtualized disk driver. Default is ide.
index=index_of_connector
Specifies the index number of a connector on the disk interface (see the if option) where the drive is connected. If not specified, the index is automatically incremented.
media=type
Specifies the type of media. Can be disk for hard disks, or cdrom for removable CD-ROM drives.
format=img_fmt
Specifies the format of the connected disk image. If not specified, the format is autodetected. Currently, SUSE supports raw and qcow2 formats.
cache=method
Specifies the caching method for the drive. Possible values are unsafe, writethrough, writeback, directsync, or none. To improve performance when using the qcow2 image format, select writeback. none disables the host page cache and, therefore, is the safest option. Default for image files is writeback. For more information, see Chapter 15, Disk Cache Modes.
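For example, to attach a qcow2 image with the recommended writeback caching; the file name here is illustrative:

tux > qemu-system-x86_64 [...] -drive file=/images/sles.qcow2,format=qcow2,cache=writeback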
To simplify defining block devices, QEMU understands several shortcuts which you may find handy when entering the qemu-system-ARCH command line.

You can use

tux > sudo qemu-system-x86_64 -cdrom /images/cdrom.iso

instead of

tux > sudo qemu-system-x86_64 -drive format=raw,file=/images/cdrom.iso,index=2,media=cdrom

and

tux > sudo qemu-system-x86_64 -hda /images/image1.raw -hdb /images/image2.raw -hdc \
/images/image3.raw -hdd /images/image4.raw

instead of

tux > sudo qemu-system-x86_64 -drive format=raw,file=/images/image1.raw,index=0,media=disk \
-drive format=raw,file=/images/image2.raw,index=1,media=disk \
-drive format=raw,file=/images/image3.raw,index=2,media=disk \
-drive format=raw,file=/images/image4.raw,index=3,media=disk
As an alternative to using disk images (see Section 28.2, “Managing Disk Images with qemu-img”), you can also use existing VM Host Server disks, connect them as drives, and access them from VM Guest. Use the host disk device directly instead of disk image file names.

To access the host CD-ROM drive, use

tux > sudo qemu-system-x86_64 [...] -drive file=/dev/cdrom,media=cdrom

To access the host hard disk, use

tux > sudo qemu-system-x86_64 [...] -drive file=/dev/hdb,media=disk
A host drive used by a VM Guest must not be accessed concurrently by the VM Host Server or another VM Guest.
A sparse image file is a type of disk image file that grows in size as the user adds data to it, taking up only as much disk space as is actually stored in it. For example, if you copy 1 GB of data inside the sparse disk image, its size grows by 1 GB. If you then delete, for example, 500 MB of the data, the image size does not, by default, decrease as expected.
That is why the discard=on option is introduced on the KVM command line. It tells the hypervisor to automatically free the “holes” after deleting data from the sparse guest image. Note that this option is valid only for the if=scsi drive interface:

tux > sudo qemu-system-x86_64 [...] -drive format=img_format,file=/path/to/file.img,if=scsi,discard=on
if=scsi is not supported. This interface does not map to virtio-scsi, but rather to the lsi SCSI adapter.
IOThreads are dedicated event loop threads for virtio devices to perform I/O requests in order to improve scalability, especially on an SMP VM Host Server with SMP VM Guests using many disk devices. Instead of using QEMU's main event loop for I/O processing, IOThreads allow spreading I/O work across multiple CPUs and can improve latency when properly configured.
IOThreads are enabled by defining IOThread objects. virtio devices can then use the objects for their I/O event loops. Many virtio devices can use a single IOThread object, or virtio devices and IOThread objects can be configured in a 1:1 mapping. The following example creates a single IOThread with ID iothread0, which is then used as the event loop for two virtio-blk devices.
tux > qemu-system-x86_64 [...] -object iothread,id=iothread0 \
-drive if=none,id=drive0,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive0,scsi=off,\
iothread=iothread0 -drive if=none,id=drive1,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive1,scsi=off,\
iothread=iothread0 [...]
The following qemu command line example illustrates a 1:1 virtio device to IOThread mapping:
tux > qemu-system-x86_64 [...] -object iothread,id=iothread0 \
-object iothread,id=iothread1 -drive if=none,id=drive0,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive0,scsi=off,\
iothread=iothread0 -drive if=none,id=drive1,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive1,scsi=off,\
iothread=iothread1 [...]
For better performance of I/O-intensive applications, a new I/O path was introduced for the virtio-blk interface in kernel version 3.7. This bio-based block device driver skips the I/O scheduler, and thus shortens the I/O path in the guest and has lower latency. It is especially useful for high-speed storage devices, such as SSD disks.
The driver is disabled by default. To use it, do the following:
Append virtio_blk.use_bio=1 to the kernel command line on the guest. You can do so via the YaST boot loader module.
You can do it also by editing /etc/default/grub, searching for the line that contains GRUB_CMDLINE_LINUX_DEFAULT=, and adding the kernel parameter at the end, as in the sketch following this procedure. Then run grub2-mkconfig >/boot/grub2/grub.cfg to update the grub2 boot menu.
Reboot the guest with the new kernel command line active.
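The edited line in /etc/default/grub would then look similar to this sketch; the existing parameters shown are illustrative and differ from system to system:

GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet virtio_blk.use_bio=1"

tux > sudo grub2-mkconfig -o /boot/grub2/grub.cfg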
The bio-based virtio-blk driver does not help on slow devices such as spinning hard disks. The reason is that the benefit of scheduling is larger than what the shortened bio path offers. Do not use the bio-based driver on slow devices.
QEMU now integrates with libiscsi. This allows QEMU to access iSCSI resources directly and use them as virtual machine block devices. This feature does not require any host iSCSI initiator configuration, as is needed for a libvirt iSCSI target based storage pool setup. Instead, it directly connects guest storage interfaces to an iSCSI target LUN by means of the user space library libiscsi. iSCSI-based disk devices can also be specified in the libvirt XML configuration.
This feature is only available using the RAW image format, as the iSCSI protocol has some technical limitations.
The following is the QEMU command line interface for iSCSI connectivity. The use of libiscsi-based storage provisioning is not yet exposed by the virt-manager interface; instead, it is configured by directly editing the guest XML. This new way of accessing iSCSI-based storage is done at the command line.
tux > sudo qemu-system-x86_64 -machine accel=kvm \
-drive file=iscsi://192.168.100.1:3260/iqn.2016-08.com.example:314605ab-a88e-49af-b4eb-664808a3443b/0,\
format=raw,if=none,id=mydrive,cache=none \
-device ide-hd,bus=ide.0,unit=0,drive=mydrive ...
Here is an example snippet of a guest domain XML that uses the protocol-based iSCSI:
<devices>
...
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'>
      <host name='example.com' port='3260'/>
    </source>
    <auth username='myuser'>
      <secret type='iscsi' usage='libvirtiscsi'/>
    </auth>
    <target dev='vda' bus='virtio'/>
  </disk>
</devices>
Contrast that with an example that uses the host-based iSCSI initiator, which virt-manager sets up:
<devices>
...
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none' io='native'/>
    <source dev='/dev/disk/by-path/scsi-0:0:0:0'/>
    <target dev='hda' bus='ide'/>
    <address type='drive' controller='0' bus='0' target='0' unit='0'/>
  </disk>
  <controller type='ide' index='0'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
  </controller>
</devices>
RADOS Block Devices (RBD) store data in a Ceph cluster. They allow snapshotting, replication, and data consistency. You can use an RBD from your KVM-managed VM Guests similarly to how you use other block devices.
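A minimal sketch of attaching an RBD image as a drive; the pool name rbd and the image name vmdisk01 are illustrative:

tux > qemu-system-x86_64 [...] -drive format=raw,file=rbd:rbd/vmdisk01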
This section describes QEMU options affecting the type of the emulated video card and the way VM Guest graphical output is displayed.
QEMU uses -vga to define a video card used to display VM Guest graphical output. The -vga option understands the following values:
none
Disables video cards on VM Guest (no video card is emulated). You can still access the running VM Guest via the serial console.
std
Emulates a standard VESA 2.0 VBE video card. Use it if you intend to use high display resolution on VM Guest.
cirrus
Emulates a Cirrus Logic GD5446 video card. A good choice if you insist on high compatibility of the emulated video hardware. Most operating systems (even Windows 95) recognize this type of card.
For best video performance with the cirrus type, use 16-bit color depth both on VM Guest and VM Host Server.
The following options affect the way VM Guest graphical output is displayed.
-display gtk
Display video output in a GTK window. This interface provides UI elements to configure and control the VM during runtime.
-display sdl
Display video output via SDL, usually in a separate graphics window. For more information, see the SDL documentation.
-spice option[,option[,...]]
Enables the spice remote desktop protocol.
-display vnc
Refer to Section 29.5, “Viewing a VM Guest with VNC” for more information.
-nographic
Disables QEMU's graphical output. The emulated serial port is redirected to the console.
After starting the virtual machine with -nographic, press Ctrl–A H in the virtual console to view the list of other useful shortcuts, for example, to toggle between the console and the QEMU monitor.
tux > qemu-system-x86_64 -hda /images/sles_base.raw -nographic
C-a h print this help
C-a x exit emulator
C-a s save disk data back to file (if -snapshot)
C-a t toggle console timestamps
C-a b send break (magic sysrq)
C-a c switch between console and monitor
C-a C-a sends C-a
(pressed C-a c)
QEMU 2.3.1 monitor - type 'help' for more information
(qemu)
-no-frame
Disables decorations for the QEMU window. Convenient for dedicated desktop work space.
-full-screen
Starts QEMU graphical output in full screen mode.
-no-quit
Disables the close button of the QEMU window and prevents it from being closed by force.
-alt-grab, -ctrl-grab
By default, the QEMU window releases the “captured” mouse after pressing Ctrl–Alt. You can change the key combination to either Ctrl–Alt–Shift (-alt-grab), or the right Ctrl key (-ctrl-grab).
There are two ways to create USB devices usable by the VM Guest in KVM: you can either emulate new USB devices inside a VM Guest, or assign an existing host USB device to a VM Guest. To use USB devices in QEMU, you first need to enable the generic USB driver with the -usb option. Then you can specify individual devices with the -usbdevice option.
SUSE currently supports the following types of USB devices: disk, host, serial, braille, net, mouse, and tablet.
-usbdevice Option

disk
Emulates a mass storage device based on a file. The optional format option is used rather than detecting the format.

tux > qemu-system-x86_64 [...] -usbdevice disk:format=raw:/virt/usb_disk.raw
host
Pass through the host device (identified by bus.addr).
serial
Serial converter to a host character device.
braille
Emulates a braille device using BrlAPI to display the braille output.
net
Emulates a network adapter that supports CDC Ethernet and RNDIS protocols.
mouse
Emulates a virtual USB mouse. This option overrides the default PS/2 mouse emulation. The following example shows the hardware status of a mouse on VM Guest started with qemu-system-ARCH [...] -usbdevice mouse:

tux > sudo hwinfo --mouse
20: USB 00.0: 10503 USB Mouse
[Created at usb.122]
UDI: /org/freedesktop/Hal/devices/usb_device_627_1_1_if0
[...]
Hardware Class: mouse
Model: "Adomax QEMU USB Mouse"
Hotplug: USB
Vendor: usb 0x0627 "Adomax Technology Co., Ltd"
Device: usb 0x0001 "QEMU USB Mouse"
[...]
tablet
Emulates a pointer device that uses absolute coordinates (such as touchscreen). This option overrides the default PS/2 mouse emulation. The tablet device is useful if you are viewing VM Guest via the VNC protocol. See Section 29.5, “Viewing a VM Guest with VNC” for more information.
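For example, to combine the tablet device with a VNC display; the display number is illustrative:

tux > qemu-system-x86_64 [...] -usb -usbdevice tablet -vnc :5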
Use -chardev to create a new character device. The option uses the following general syntax:

qemu-system-x86_64 [...] -chardev BACKEND_TYPE,id=ID_STRING
where BACKEND_TYPE can be one of null, socket, udp, msmouse, vc, file, pipe, console, serial, pty, stdio, braille, tty, or parport. All character devices must have a unique identification string up to 127 characters long. It is used to identify the device in other related directives. For the complete description of all back-ends' sub-options, see the manual page (man 1 qemu). A brief description of the available back-ends follows:
null
Creates an empty device that outputs no data and drops any data it receives.
stdio
Connects to QEMU's process standard input and standard output.
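For example, to attach the QEMU monitor to the process's standard input and output; the ID string is illustrative:

tux > qemu-system-x86_64 [...] -chardev stdio,id=mon0 -mon chardev=mon0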
socket
Creates a two-way stream socket. If PATH is specified, a Unix socket is created:

tux > sudo qemu-system-x86_64 [...] -chardev \
socket,id=unix_socket1,path=/tmp/unix_socket1,server
The SERVER suboption specifies that the socket is a listening socket.
If PORT is specified, a TCP socket is created:
tux > sudo qemu-system-x86_64 [...] -chardev \
socket,id=tcp_socket1,host=localhost,port=7777,server,nowait
The command creates a local listening (server) TCP socket on port 7777. QEMU will not block waiting for a client to connect to the listening port (nowait).
udp
Sends all network traffic from VM Guest to a remote host over the UDP protocol.

tux > sudo qemu-system-x86_64 [...] \
-chardev udp,id=udp_fwd,host=mercury.example.com,port=7777
The command binds port 7777 on the remote host mercury.example.com and sends VM Guest network traffic there.
vc
Creates a new QEMU text console. You can optionally specify the dimensions of the virtual console:

tux > sudo qemu-system-x86_64 [...] -chardev vc,id=vc1,width=640,height=480 \
-mon chardev=vc1
The command creates a new virtual console called vc1 of the specified size, and connects the QEMU monitor to it.
file
Logs all traffic from VM Guest to a file on VM Host Server. The path is required and will be created if it does not exist.

tux > sudo qemu-system-x86_64 [...] \
-chardev file,id=qemu_log1,path=/var/log/qemu/guest1.log
By default QEMU creates a set of character devices for serial and parallel ports, and a special console for QEMU monitor. However, you can create your own character devices and use them for the mentioned purposes. The following options will help you:
-serial CHAR_DEV
Redirects the VM Guest's virtual serial port to a character device CHAR_DEV on VM Host Server. By default, it is a virtual console (vc) in graphical mode, and stdio in non-graphical mode. The -serial option understands many sub-options. See the manual page man 1 qemu for a complete list of them.
You can emulate up to four serial ports. Use -serial none to disable all serial ports.
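For example, to log the guest's serial output to a file on the host; the path is illustrative:

tux > qemu-system-x86_64 [...] -serial file:/var/log/qemu/guest1_serial.log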
-parallel DEVICE
Redirects the VM Guest's parallel port to a DEVICE. This option supports the same devices as -serial.
With openSUSE Leap as a VM Host Server, you can directly use the hardware parallel port devices /dev/parportN, where N is the number of the port.
You can emulate up to three parallel ports. Use -parallel none to disable all parallel ports.
-monitor CHAR_DEV
Redirects the QEMU monitor to a character device CHAR_DEV on VM Host Server. This option supports the same devices as -serial. By default, it is a virtual console (vc) in graphical mode, and stdio in non-graphical mode.
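For example, to expose the monitor on a local TCP port and connect to it; the port number is illustrative:

tux > qemu-system-x86_64 [...] -monitor telnet:localhost:4444,server,nowait
tux > telnet localhost 4444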
For a complete list of available character device back-ends, see the manual page (man 1 qemu).
Use the -netdev option in combination with -device to define a specific type of networking and a network interface card for your VM Guest. The syntax for the -netdev option is

-netdev type[,prop[=value][,...]]

Currently, SUSE supports the following network types: user, bridge, and tap. For a complete list of -netdev sub-options, see the manual page (man 1 qemu).
-netdev Sub-options

bridge
Uses a specified network helper to configure the TAP interface and attach it to a specified bridge. For more information, see Section 29.4.3, “Bridged Networking”.
user
Specifies user-mode networking. For more information, see Section 29.4.2, “User-Mode Networking”.
tap
Specifies bridged or routed networking. For more information, see Section 29.4.3, “Bridged Networking”.
Use -netdev together with the related -device option to add a new emulated network card:

tux > sudo qemu-system-x86_64 [...] \
-netdev tap,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,vlan=1,\
macaddr=00:16:35:AF:94:4B,name=ncard1
tap: Specifies the network device type.
virtio-net-pci: Specifies the model of the network card. Use qemu-system-ARCH -device help and search the network devices section for a list of available models. Currently, SUSE supports the models rtl8139, e1000, and virtio-net-pci.
vlan=1: Connects the network interface to VLAN number 1. You can specify your own number; it is mainly useful for identification purposes. If you omit this suboption, QEMU uses the default 0.
macaddr=00:16:35:AF:94:4B: Specifies the Media Access Control (MAC) address for the network card. It is a unique identifier and you are advised to always specify it. If not, QEMU supplies its own default MAC address and creates a possible MAC address conflict within the related VLAN.
The -netdev user option instructs QEMU to use user-mode networking. This is the default if no networking mode is selected. Therefore, these command lines are equivalent:

tux > sudo qemu-system-x86_64 -hda /images/sles_base.raw
tux > sudo qemu-system-x86_64 -hda /images/sles_base.raw -netdev user,id=hostnet0
This mode is useful if you want to allow the VM Guest to access the external network resources, such as the Internet. By default, no incoming traffic is permitted and therefore, the VM Guest is not visible to other machines on the network. No administrator privileges are required in this networking mode. The user-mode is also useful for doing a network boot on your VM Guest from a local directory on VM Host Server.
The VM Guest allocates an IP address from a virtual DHCP server. VM Host Server (the DHCP server) is reachable at 10.0.2.2, while the IP address range for allocation starts from 10.0.2.15. You can use ssh to connect to VM Host Server at 10.0.2.2, and scp to copy files back and forth.
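For example, from inside VM Guest; the user name and file names are illustrative:

tux > ssh tux@10.0.2.2
tux > scp /tmp/results.txt tux@10.0.2.2:/tmp/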
This section shows several examples on how to set up user-mode networking with QEMU.
tux > sudo qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,vlan=1,name=user_net1,restrict=yes

user: Specifies user-mode networking.
vlan=1: Connects to VLAN number 1. If omitted, defaults to 0.
name=user_net1: Specifies a human-readable name of the network stack. Useful when identifying it in the QEMU monitor.
restrict=yes: Isolates VM Guest. It then cannot communicate with VM Host Server, and no network packets will be routed to the external network.
tux > sudo qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,net=10.2.0.0/8,host=10.2.0.6,\
dhcpstart=10.2.0.20,hostname=tux_kvm_guest

net=10.2.0.0/8: Specifies the IP address of the network that VM Guest sees, and optionally the netmask. Default is 10.0.2.0/8.
host=10.2.0.6: Specifies the VM Host Server IP address that VM Guest sees. Default is 10.0.2.2.
dhcpstart=10.2.0.20: Specifies the first of the 16 IP addresses that the built-in DHCP server can assign to VM Guest. Default is 10.0.2.15.
hostname=tux_kvm_guest: Specifies the host name that the built-in DHCP server will assign to VM Guest.
tux > sudo qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,tftp=/images/tftp_dir,\
bootfile=/images/boot/pxelinux.0

tftp=/images/tftp_dir: Activates a built-in TFTP (a file transfer protocol with the functionality of a very basic FTP) server. The files in the specified directory will be visible to a VM Guest as the root of a TFTP server.
bootfile=/images/boot/pxelinux.0: Broadcasts the specified file as a BOOTP (a network protocol that offers an IP address and a network location of a boot image, often used in diskless workstations) file. When used together with tftp, the VM Guest can boot from the network from a local directory on the host.
tux > sudo qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,hostfwd=tcp::2222-:22

Forwards incoming TCP connections to port 2222 on the host to port 22 (SSH) on VM Guest. If sshd is running on VM Guest, enter

tux > ssh qemu_host -p 2222

where qemu_host is the host name or IP address of the host system, to get an SSH prompt from VM Guest.
With the -netdev tap option, QEMU creates a network bridge by connecting the host TAP network device to a specified VLAN of VM Guest. Its network interface is then visible to the rest of the network. This method does not work by default and needs to be explicitly specified.
First, create a network bridge and add a VM Host Server physical network interface (usually eth0) to it:

1. Start YaST and select System › Network Settings.
2. Click Add and select Bridge from the Device Type drop-down box. Click Next.
3. Choose whether you need a dynamically or statically assigned IP address, and fill the related network settings if applicable.
4. In the Bridged Devices pane, select the Ethernet device to add to the bridge. Click Next. When asked about adapting an already configured device, click Continue.
5. Click OK to apply the changes. Check if the bridge is created:

tux > bridge link
2: eth0 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 \
   state forwarding priority 32 cost 100
Use the following example script to connect VM Guest to the newly created bridge interface br0. Several commands in the script are run via the sudo mechanism because they require root privileges.
To manage a network bridge, you need to have the tunctl package installed.
#!/bin/bash
bridge=br0                                # 1
tap=$(sudo tunctl -u $(whoami) -b)        # 2
sudo ip link set $tap up                  # 3
sleep 1s                                  # 4
sudo ip link add name $bridge type bridge
sudo ip link set $bridge up
sudo ip link set $tap master $bridge      # 5
qemu-system-x86_64 -machine accel=kvm -m 512 -hda /images/sles_base.raw \
 -netdev tap,id=hostnet0 \
 -device virtio-net-pci,netdev=hostnet0,vlan=0,macaddr=00:16:35:AF:94:4B,\
ifname=$tap,script=no,downscript=no       # 6, 7
sudo ip link set $tap nomaster            # 8
sudo ip link set $tap down                # 9
sudo tunctl -d $tap                       # 10

1. Name of the bridge device.
2. Prepare a new TAP device and assign it to the user who runs the script. TAP devices are virtual network devices often used for virtualization and emulation setups.
3. Bring up the newly created TAP network interface.
4. Make a 1-second pause to make sure the new TAP network interface is really up.
5. Add the new TAP device to the network bridge br0.
6. The ifname= suboption specifies the name of the TAP network interface used for bridging.
7. Setting script=no and downscript=no prevents QEMU from running its default network configuration scripts (/etc/qemu-ifup and /etc/qemu-ifdown); this script sets up and tears down the TAP interface itself.
8. Deletes the TAP interface from the network bridge.
9. Sets the state of the TAP device to down.
10. Tear down the TAP device.
Another way to connect VM Guest to a network through a network bridge is by means of the qemu-bridge-helper helper program. It configures the TAP interface for you, and attaches it to the specified bridge. The default helper executable is /usr/lib/qemu-bridge-helper. The helper executable is setuid root and is only executable by members of the virtualization group (kvm). Therefore the qemu-system-ARCH command itself does not need to be run under root privileges.
The helper is automatically called when you specify a network bridge:

qemu-system-x86_64 [...] \
 -netdev bridge,id=hostnet0,vlan=0,br=br0 \
 -device virtio-net-pci,netdev=hostnet0
You can specify your own custom helper script that will take care of the TAP device (de)configuration, with the helper=/path/to/your/helper option:

qemu-system-x86_64 [...] \
 -netdev bridge,id=hostnet0,vlan=0,br=br0,helper=/path/to/bridge-helper \
 -device virtio-net-pci,netdev=hostnet0
To define access privileges to qemu-bridge-helper, inspect the /etc/qemu/bridge.conf file. For example, the following directive

allow br0

allows the qemu-system-ARCH command to connect its VM Guest to the network bridge br0.
By default, QEMU uses a GTK (a cross-platform toolkit library) window to display the graphical output of a VM Guest. With the -vnc option specified, you can make QEMU listen on a specified VNC display and redirect its graphical output to the VNC session.
When working with QEMU's virtual machine via VNC session, it is useful to work with the -usbdevice tablet option. Moreover, if you need to use another keyboard layout than the default en-us, specify it with the -k option.
The first suboption of -vnc must be a display value. The -vnc option understands the following display specifications:
host:display
Only connections from host on the display number display will be accepted. The TCP port on which the VNC session is then running is normally 5900 + display number. If you do not specify host, connections will be accepted from any host.
unix:path
The VNC server listens for connections on Unix domain sockets. The path option specifies the location of the related Unix socket (see the example after this list).
none
The VNC server functionality is initialized, but the server itself is not started. You can start the VNC server later with the QEMU monitor. For more information, see Chapter 30, Virtual Machine Administration Using QEMU Monitor.
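For example, to make the VNC server available via a Unix domain socket only; the socket path is illustrative:

tux > qemu-system-x86_64 [...] -vnc unix:/tmp/vm1.sock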
Following the display value there may be one or more option flags separated by commas. Valid options are:
reverse
Connect to a listening VNC client via a reverse connection.
websocket
Opens an additional TCP listening port dedicated to VNC Websocket connections. By definition the Websocket port is 5700+display.
password
Require that password-based authentication is used for client connections.
tls
Require that clients use TLS when communicating with the VNC server.
x509=/path/to/certificate/dir
Valid if TLS is specified. Require that x509 credentials are used for negotiating the TLS session.
x509verify=/path/to/certificate/dir
Valid if TLS is specified. Require that x509 credentials are used for negotiating the TLS session, and verify the client certificate.
sasl
Require that the client uses SASL to authenticate with the VNC server.
acl
Turn on access control lists for checking of the x509 client certificate and SASL party.
lossy
Enable lossy compression methods (gradient, JPEG, ...).
non-adaptive
Disable adaptive encodings. Adaptive encodings are enabled by default.
share=[allow-exclusive|force-shared|ignore]
Set display sharing policy.
For more details about the display options, see the qemu-doc man page.
An example VNC usage:
tux > qemu-system-x86_64 [...] -vnc :5
# (on the client:)
wilber > vncviewer venus:5 &
The default VNC server setup does not use any form of authentication. In the previous example, any user can connect and view the QEMU VNC session from any host on the network.
There are several levels of security that you can apply to your VNC client/server connection. You can either protect your connection with a password, use x509 certificates, use SASL authentication, or even combine some authentication methods in one QEMU command.
For more information about configuring x509 certificates on a VM Host Server and the client, see Section 10.3.2, “Remote TLS/SSL Connection with x509 Certificate (qemu+tls or xen+tls)” and Section 10.3.2.3, “Configuring the Client and Testing the Setup”.
The Remmina VNC viewer supports advanced authentication mechanisms. Therefore, it will be used to view the graphical output of VM Guest in the following examples. For this example, let us assume that the server x509 certificates ca-cert.pem, server-cert.pem, and server-key.pem are located in the /etc/pki/qemu directory on the host. The client certificates can be placed in any custom directory, as Remmina asks for their path on the connection start-up.
qemu-system-x86_64 [...] -vnc :5,password -monitor stdio
Starts the VM Guest graphical output on VNC display number 5 (usually port 5905). The password suboption initializes a simple password-based authentication method. There is no password set by default and you need to set one with the change vnc password command in the QEMU monitor:
QEMU 2.3.1 monitor - type 'help' for more information
(qemu) change vnc password
Password: ****
You need the -monitor stdio option here, because you would not be able to manage the QEMU monitor without redirecting its input/output.
The QEMU VNC server can use TLS encryption for the session and x509 certificates for authentication. The server asks the client for a certificate and validates it against the CA certificate. Use this authentication type if your company provides an internal certificate authority.
qemu-system-x86_64 [...] -vnc :5,tls,x509verify=/etc/pki/qemu
You can combine the password authentication with TLS encryption and x509 certificate authentication to create a two-layer authentication model for clients. Remember to set the password in the QEMU monitor after you run the following command:
qemu-system-x86_64 [...] -vnc :5,password,tls,x509verify=/etc/pki/qemu \ -monitor stdio
Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It integrates several authentication mechanisms, like PAM, Kerberos, LDAP and more. SASL keeps its own user database, so the connecting user accounts do not need to exist on VM Host Server.
For security reasons, you are advised to combine SASL authentication with TLS encryption and x509 certificates:
qemu-system-x86_64 [...] -vnc :5,tls,x509,sasl -monitor stdio
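Before clients can authenticate via SASL, the user accounts need to exist in the SASL database on VM Host Server; a sketch using the Cyrus SASL tool, where the database path and user name are illustrative:

tux > sudo saslpasswd2 -f /etc/qemu/passwd.db -c tux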