A VM Guest system needs some means to communicate either with other VM Guest systems or with a local network. The network interface to the VM Guest system consists of a split device driver, which means that any virtual Ethernet device has a corresponding network interface in Dom0. This interface is set up to access a virtual network that is run in Dom0. The bridged virtual network is fully integrated into the system configuration of openSUSE Leap and can be configured with YaST.
When installing a Xen VM Host Server, a bridged network configuration is proposed during the normal network configuration. You can change this configuration during the installation and customize it to local needs.
If desired, Xen VM Host Server can be installed after performing a default Physical Server installation using the Install Hypervisor and Tools module in YaST. This module prepares the system for hosting virtual machines, including invocation of the default bridge networking proposal.
In case the necessary packages for a Xen VM Host Server are installed manually with rpm or zypper, the remaining system configuration needs to be done by the administrator manually or with YaST.
The network scripts that are provided by Xen are not used by default in openSUSE Leap. They are only delivered for reference but disabled. The network configuration that is used in openSUSE Leap is done by means of the YaST system configuration similar to the configuration of network interfaces in openSUSE Leap.
For more general information about managing network bridges, see Section 11.1, “Network Bridge”.
The Xen hypervisor can provide different types of network interfaces to the VM Guest systems. The preferred network device should be a paravirtualized network interface. This yields the highest transfer rates with the lowest system requirements. Up to eight network interfaces may be provided for each VM Guest.
Systems that are not aware of paravirtualized hardware may not have this option. To connect systems to a network that can only run fully virtualized, several emulated network interfaces are available. The following emulations are at your disposal:
Realtek 8139 (PCI). This is the default emulated network card.
AMD PCnet32 (PCI)
NE2000 (PCI)
NE2000 (ISA)
Intel e100 (PCI)
Intel e1000 and its variants e1000-82540em, e1000-82544gc, e1000-82545em (PCI)
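Any of these models can be selected in the guest configuration. For example, assuming a fully virtualized guest and the xl configuration format, the emulated card can be chosen with the model key in the vif specification (a minimal sketch; the MAC address and bridge name are illustrative):
vif = [ 'mac=00:16:3e:00:00:01,bridge=br0,model=e1000' ]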
All these network interfaces are software interfaces. Because every network interface must have a unique MAC address, an address range has been assigned to XenSource that can be used by these interfaces.
The default configuration of MAC addresses in virtualized environments creates a random MAC address that looks like 00:16:3E:xx:xx:xx. Normally, the number of available MAC addresses is big enough to get only unique addresses. However, if you have a very big installation, or want to make sure that no problems arise from random MAC address assignment, you can also assign these addresses manually.
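For example, a fixed MAC address from this range can be set directly in the vif line of the guest configuration (a sketch; the address and bridge name are illustrative):
vif = [ 'mac=00:16:3e:01:02:03,bridge=br0' ]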
For debugging or system management purposes, it may be useful to know which virtual interface in Dom0 is connected to which Ethernet device in a running guest. This information may be read from the device naming in Dom0. All virtual devices follow the rule vif<domain_number>.<interface_number>.
For example, if you want to know the device name for the third interface (eth2) of the VM Guest with id 5, the device in Dom0 would be vif5.2. To obtain a list of all available interfaces, run the command ip a.
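For example, to list only the virtual interfaces belonging to the guest with id 5, the output of ip a can be filtered with standard shell tools (a sketch):
tux > ip a | grep vif5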
The device naming does not contain any information about which bridge this interface is connected to. However, this information is available in Dom0. To get an overview about which interface is connected to which bridge, run the command bridge link. The output may look as follows:
tux > sudo bridge link
2: eth0 state DOWN : <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 master br0
3: eth1 state UP : <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 master br1
In this example, there are three configured bridges: br0, br1, and br2. Currently, br0 and br1 each have a real Ethernet device added: eth0 and eth1, respectively. The bridge br2 does not appear in the output because no device is attached to it.
Xen can be set up to use host-based routing in the controlling Dom0. Unfortunately, this is not yet well supported by YaST and requires a fair amount of manual editing of configuration files. Thus, this is a task for an advanced administrator.
The following configuration will only work when using fixed IP addresses. Using DHCP is not practicable with this procedure, because the IP address must be known to both the VM Guest and the VM Host Server system.
The easiest way to create a routed guest is to change the networking from a bridged to a routed network. As a prerequisite for the following procedure, a VM Guest with a bridged network setup must be installed. For example, the VM Host Server is named earth with the IP 192.168.1.20, and the VM Guest has the name alice with the IP 192.168.1.21.
Make sure that alice is shut down. Use xl commands to shut down the guest and to check its state.
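For example, the following commands shut down the domain alice and verify that it is no longer listed as running:
tux > sudo xl shutdown alice
tux > sudo xl list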
Prepare the network configuration on the VM Host Server earth:
Create a hotplug interface that will be used to route the traffic. To accomplish this, create a file named /etc/sysconfig/network/ifcfg-alice.0 with the following content:
NAME="Xen guest alice"
BOOTPROTO="static"
STARTMODE="hotplug"
Ensure that IP forwarding is enabled:
In YaST, go to Network Settings and open the Routing tab. Activate the Enable IPv4 Forwarding and Enable IPv6 Forwarding options. Confirm the settings and quit YaST.
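Alternatively, IP forwarding can be enabled for the running system from the command line; this is a sketch equivalent to the YaST setting and does not persist across reboots:
tux > sudo sysctl -w net.ipv4.ip_forward=1
tux > sudo sysctl -w net.ipv6.conf.all.forwarding=1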
Apply the following configuration to firewalld:
Add alice.0 to the devices in the public zone:
tux > sudo firewall-cmd --zone=public --add-interface=alice.0
Tell the firewall which address should be forwarded:
tux > sudo firewall-cmd --zone=public \
 --add-forward-port=port=80:proto=tcp:toport=80:toaddr="192.168.1.21/32,0/0"
Make the runtime configuration changes permanent:
tux > sudo firewall-cmd --runtime-to-permanent
Add a static route to the interface of alice. To accomplish this, add the following line to the end of /etc/sysconfig/network/routes:
192.168.1.21 - - alice.0
To make sure that the switches and routers that the VM Host Server is connected to know about the routed interface, activate proxy_arp on earth. Add the following lines to /etc/sysctl.conf:
net.ipv4.conf.default.proxy_arp = 1
net.ipv4.conf.all.proxy_arp = 1
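After the changes have been activated (see the next step), the setting can be verified with the following command, which should report a value of 1 (a sketch):
tux > sysctl net.ipv4.conf.all.proxy_arp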
Activate all changes with the following command:
tux > sudo systemctl restart systemd-sysctl wicked
Proceed with changing the Xen configuration of the VM Guest by editing the vif interface configuration for alice as described in Section 21.1, “XL—Xen Management Tool”. Make the following changes to the text file you generate during the process:
Remove the snippet
bridge=br0
and add the following one:
vifname=vifalice.0
or, for a fully virtualized domain:
vifname=vifalice.0=emu
Change the script that is used to set up the interface to the following:
script=/etc/xen/scripts/vif-route-ifup
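Taken together, the resulting vif entry in the guest configuration might look as follows (a sketch; the MAC address is taken from the examples above and is illustrative):
vif = [ 'mac=00:16:3e:4f:94:a9,vifname=vifalice.0,script=/etc/xen/scripts/vif-route-ifup' ]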
Activate the new configuration and start the VM Guest.
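For example, assuming the updated configuration was saved to a file such as /etc/xen/alice.cfg (the path is an assumption), the guest can be started with:
tux > sudo xl create /etc/xen/alice.cfg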
The remaining configuration tasks must be accomplished from inside the VM Guest.
Open a console to the VM Guest with xl console DOMAIN and log in.
Check that the guest IP is set to 192.168.1.21.
Provide VM Guest with a host route and a default gateway to the VM Host Server. Do this by adding the following lines to /etc/sysconfig/network/routes:
192.168.1.20 - - eth0
default 192.168.1.20 - -
Finally, test the network connection from the VM Guest to the world outside and from the network to your VM Guest.
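For example, a simple connectivity check from inside alice first pings the VM Host Server and then an outside host (the external host name is illustrative):
tux > ping -c 3 192.168.1.20
tux > ping -c 3 example.com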
Creating a masqueraded network setup is quite similar to the routed setup. However, there is no proxy_arp needed, and some firewall rules are different. To create a masqueraded network to a guest dolly with the IP address 192.168.100.1 where the host has its external interface on br0, proceed as follows. For easier configuration, only the already installed guest is modified to use a masqueraded network:
Shut down the VM Guest system with xl shutdown DOMAIN.
Prepare the network configuration on the VM Host Server:
Create a hotplug interface that will be used to route the traffic. To accomplish this, create a file named /etc/sysconfig/network/ifcfg-dolly.0 with the following content:
NAME="Xen guest dolly"
BOOTPROTO="static"
STARTMODE="hotplug"
Edit the file /etc/sysconfig/SuSEfirewall2 and add the following configurations:
Add dolly.0 to the devices in FW_DEV_DMZ:
FW_DEV_DMZ="dolly.0"
Switch on the routing in the firewall:
FW_ROUTE="yes"
Switch on masquerading in the firewall:
FW_MASQUERADE="yes"
Tell the firewall which network should be masqueraded:
FW_MASQ_NETS="192.168.100.1/32"
Remove the networks from the masquerading exceptions:
FW_NOMASQ_NETS=""
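Taken together, the relevant lines in /etc/sysconfig/SuSEfirewall2 then read:
FW_DEV_DMZ="dolly.0"
FW_ROUTE="yes"
FW_MASQUERADE="yes"
FW_MASQ_NETS="192.168.100.1/32"
FW_NOMASQ_NETS=""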
Finally, restart the firewall with the command:
tux > sudo systemctl restart SuSEfirewall2
Add a static route to the interface of dolly. To accomplish this, add the following line to the end of /etc/sysconfig/network/routes:
192.168.100.1 - - dolly.0
Activate all changes with the command:
tux > sudo systemctl restart wicked
Proceed with configuring the Xen configuration of the VM Guest.
Change the vif interface configuration for dolly as described in Section 21.1, “XL—Xen Management Tool”.
Remove the entry:
bridge=br0
and add the following one:
vifname=vifdolly.0
Change the script that is used to set up the interface to the following:
script=/etc/xen/scripts/vif-route-ifup
Activate the new configuration and start the VM Guest.
The remaining configuration tasks need to be accomplished from inside the VM Guest.
Open a console to the VM Guest with xl console DOMAIN and log in.
Check whether the guest IP is set to 192.168.100.1.
Provide VM Guest with a host route and a default gateway to the VM Host Server. Do this by adding the following lines to /etc/sysconfig/network/routes:
192.168.1.20 - - eth0
default 192.168.1.20 - -
Finally, test the network connection from the VM Guest to the outside world.
There are many network configuration possibilities available for Xen. The following configurations are not activated by default:
With Xen, you may limit the network transfer rate a virtual guest may use to access a bridge. To configure this, you need to modify the VM Guest configuration as described in Section 21.1, “XL—Xen Management Tool”.
In the configuration file, first search for the device that is connected to the virtual bridge. The configuration looks like the following:
vif = [ 'mac=00:16:3e:4f:94:a9,bridge=br0' ]
To add a maximum transfer rate, add a rate parameter to this configuration, as in:
vif = [ 'mac=00:16:3e:4f:94:a9,bridge=br0,rate=100Mb/s' ]
Note that the rate is either Mb/s (megabits per second) or MB/s (megabytes per second). In the above example, the maximum transfer rate of the virtual interface is 100 megabits per second. By default, there is no limitation to the bandwidth of a guest to the virtual bridge.
It is even possible to fine-tune the behavior by specifying the time window that is used to define the granularity of the credit replenishment:
vif = [ 'mac=00:16:3e:4f:94:a9,bridge=br0,rate=100Mb/s@20ms' ]
To monitor the traffic on a specific interface, use the application iftop, which displays the current network traffic in a terminal.
When running a Xen VM Host Server, you need to define the interface that is monitored. The interface that Dom0 uses to get access to the physical network is the bridge device, for example br0. This, however, may vary on your system. To monitor all traffic to the physical interface, run a terminal as root and use the command:
iftop -i br0
To monitor the network traffic of a specific network interface of a specific VM Guest, supply the correct virtual interface. For example, to monitor the first Ethernet device of the domain with id 5, use the command:
iftop -i vif5.0
To quit iftop, press the key Q. More options and possibilities are available in the manual page man 8 iftop.