libvirt
A container is a kind of “virtual machine” that can be started, stopped, frozen, or cloned (to name but a few tasks). To set up an LXC container, you first need to create a root file system containing the guest distribution:
There is currently no GUI to create a root file system. You will thus need to open a terminal and use virt-create-rootfs as root to populate the new root file system. In the following steps, the new root file system will be created in /path/to/rootfs. Note that virt-create-rootfs needs a registration code to set up a SLE-12 root file system.
Run the virt-create-rootfs command:
virt-create-rootfs --root /path/to/rootfs --distro SLES-12.0 -c REGISTRATION_CODE
Change the root directory to the new root file system with the chroot command:
chroot /path/to/rootfs
Change the password for user root with passwd.
Create an operator user without root privileges:
useradd -m operator
Change the operator's password:
passwd operator
Leave the chroot environment with exit.
Open YaST and go to the Virtual Machine Manager module.
If not already present, add a local LXC connection by clicking File › Add Connection. Select LXC (Linux Containers) as the hypervisor and click the Connect button.
Select the localhost (LXC) connection and click the File › New Virtual Machine menu.
Select the Operating system container option and click the Forward button.
Type the path to the root file system from Procedure 31.1, “Creating a Root File System” and click the Forward button.
Choose the maximum amount of memory and CPUs to allocate to the container. Then click the Forward button.
Type in a name for the container. This name will be used for all virsh commands on the container.
Click Forward. Select the network to connect the container to and click the Finish button: the container will then be created and started. A console will also be automatically opened.
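Once created, the container can also be managed from the command line with virsh. A few commonly used calls, assuming the container was named mycontainer (the name is only an illustration):

```shell
virsh -c lxc:/// list --all            # list all libvirt LXC containers
virsh -c lxc:/// start mycontainer     # start the container
virsh -c lxc:/// console mycontainer   # attach to the container console
virsh -c lxc:/// shutdown mycontainer  # stop it cleanly
```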
To configure the container network, edit the /etc/sysconfig/network/ifcfg-* files. Make sure not to change the IPv6 setting: this would lead to errors while starting the network.
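As an illustration, a minimal DHCP setup for a hypothetical eth0 interface inside the container could look like this (the interface name is an assumption):

```
# /etc/sysconfig/network/ifcfg-eth0 -- inside the container's root file system
STARTMODE='auto'     # bring the interface up at boot
BOOTPROTO='dhcp'     # leave any IPv6 settings untouched
```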
Libvirt also allows running single applications, instead of full-blown Linux distributions, in containers. In this example, bash will be started in its own container.
Open YaST and go to the Virtual Machine Manager module.
If not already present, add a local LXC connection by clicking File › Add Connection. Select LXC (Linux Containers) as the hypervisor and click Connect.
Select the localhost (LXC) connection and click File › New Virtual Machine.
Select the Application container option and click Forward.
Set the path to the application to be launched. As an example, the field is filled with /bin/sh, which is fine to create a first container. Click Forward.
Choose the maximum amount of memory and CPUs to allocate to the container. Click Forward.
Type in a name for the container. This name will be used for all virsh commands on the container.
Click Forward. Select the network to connect the container to and click Finish. The container will be created and started. A console will be opened automatically. Note that the container will be destroyed after the application has finished running.
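Behind the scenes, such an application container is a regular libvirt domain whose <os> section uses the exe type, with the application as the init process. A minimal sketch (the name and memory size are illustrative):

```
<domain type='lxc'>
  <name>sh-container</name>
  <memory unit='MiB'>256</memory>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <devices>
    <console type='pty'/>
  </devices>
</domain>
```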
By default, containers are not secured using AppArmor or SELinux. There is no graphical user interface to change the security model for a libvirt domain, but virsh will help.
Edit the container XML configuration using virsh:
virsh -c lxc:/// edit mycontainer
Add the following to the XML configuration, save it and exit the editor.
<domain>
  ...
  <seclabel type="dynamic" model="apparmor"/>
  ...
</domain>
With this configuration, an AppArmor profile for the container will be created in the /etc/apparmor.d/libvirt directory. The default profile only allows the minimum applications to run in the container. This can be changed by modifying the libvirt-container-uuid file: this file is not overwritten by libvirt.
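To locate the generated profile, the container's UUID can be queried with virsh; the container name mycontainer below is an assumption:

```shell
uuid=$(virsh -c lxc:/// domuuid mycontainer)
ls /etc/apparmor.d/libvirt/libvirt-"$uuid"   # the per-container profile file
aa-status | grep libvirt                     # check which profiles are loaded
```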
SUSE Linux Enterprise Server 11 SP3 shipped LXC, while SUSE Linux Enterprise Server 12 comes with the libvirt LXC driver, sometimes named libvirt-lxc to avoid confusion. Containers are not managed or configured in the same way by these two tools. Here is a non-exhaustive list of differences.
The main difference comes from the fact that domain configuration in libvirt is an XML file, while LXC configuration is a properties file. Most of the LXC properties can be mapped to the domain XML. The properties that cannot be migrated are:
lxc.network.script.up: this script can be implemented using the /etc/libvirt/hooks/network libvirt hook, though the script will need to be adapted.
lxc.network.ipv*: libvirt cannot set the container network configuration from the domain configuration.
lxc.network.name: libvirt cannot set the container network card name.
lxc.devttydir: libvirt does not allow changing the location of the console devices.
lxc.console: there is currently no way to log the output of the console into a file on the host for libvirt LXC containers.
lxc.pivotdir: libvirt does not allow fine-tuning the directory used for the pivot_root; /.olroot is used.
lxc.rootfs.mount: libvirt does not allow fine-tuning this.
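To illustrate the format difference, here is a classic LXC properties snippet next to a rough libvirt domain XML equivalent (all values are illustrative):

```
# LXC properties file
lxc.utsname = mycontainer
lxc.rootfs  = /path/to/rootfs

<!-- Approximate libvirt domain XML equivalent -->
<domain type='lxc'>
  <name>mycontainer</name>
  <devices>
    <filesystem type='mount'>
      <source dir='/path/to/rootfs'/>
      <target dir='/'/>
    </filesystem>
  </devices>
</domain>
```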
LXC VLAN networks automatically create the VLAN interface on the host and then move it into the guest namespace. The libvirt-lxc configuration can mention a VLAN tag ID only for Open vSwitch tap devices or PCI pass-through of SR-IOV VFs. The conversion tool therefore requires the user to manually create the VLAN interface on the host side.
LXC rootfs can also be an image file, but LXC brute-forces the mount to try to detect the proper file system format. libvirt-lxc can mount image files of several formats, but the 'auto' value for the format parameter is explicitly not supported. This means that the generated configuration will need to be tweaked by the user to get a proper match in that case.
LXC can support any cgroup configuration, even future ones, while the libvirt domain configuration needs to map each of them.
LXC can mount block devices in the rootfs, but it cannot mount raw partition files: the file needs to be manually attached to a loop device. libvirt-lxc, on the other hand, can mount not only block devices but also partition files of any format.
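For plain LXC, attaching a raw partition file to a loop device can be done with losetup (the image path is an assumption):

```shell
# Attach the image to the first free loop device and print its name
loopdev=$(losetup -f --show /path/to/partition.img)
echo "$loopdev"

# ... reference $loopdev as a block device in the LXC configuration ...

# Detach once the container no longer needs it
losetup -d "$loopdev"
```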