Applies to openSUSE Leap 15.1

11 Managing Storage

When managing a VM Guest on the VM Host Server itself, you can access the complete file system of the VM Host Server to attach or create virtual hard disks or to attach existing images to the VM Guest. However, this is not possible when managing VM Guests from a remote host. For this reason, libvirt supports so-called Storage Pools, which can be accessed from remote machines.

Tip
Tip: CD/DVD ISO images

To access CD/DVD ISO images on the VM Host Server from a remote machine, they also need to be placed in a storage pool.

libvirt knows two different types of storage: volumes and pools.

Storage Volume

A storage volume is a storage device that can be assigned to a guest—a virtual disk or a CD/DVD/floppy image. Physically (on the VM Host Server) it can be a block device (a partition, a logical volume, etc.) or a file.

Storage Pool

A storage pool is a storage resource on the VM Host Server that can be used for storing volumes, similar to network storage for a desktop machine. Physically it can be one of the following types:

File System Directory (dir)

A directory for hosting image files. The files can be either one of the supported disk formats (raw or qcow2), or ISO images.

Physical Disk Device (disk)

Use a complete physical disk as storage. A partition is created for each volume that is added to the pool.

Pre-Formatted Block Device (fs)

Specify a partition to be used in the same way as a file system directory pool (a directory for hosting image files). The only difference to using a file system directory is that libvirt takes care of mounting the device.

iSCSI Target (iscsi)

Set up a pool on an iSCSI target. You need to have logged in to the target at least once beforehand, so that libvirt can use it. Use the YaST iSCSI Initiator to detect and log in to a volume. Volume creation on iSCSI pools is not supported; instead, each existing Logical Unit Number (LUN) represents a volume. Each volume/LUN also needs a valid (empty) partition table or disk label before you can use it. If it is missing, use fdisk to add it:

~ # fdisk -cu /dev/disk/by-path/ip-192.168.2.100:3260-iscsi-iqn.2010-10.com.example:[...]-lun-2
Device contains neither a valid DOS partition table, nor Sun, SGI
or OSF disklabel
Building a new DOS disklabel with disk identifier 0xc15cdc4e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

LVM Volume Group (logical)

Use an LVM volume group as a pool. You may either use a predefined volume group, or create a group by specifying the devices to use. Storage volumes are created as logical volumes in the volume group.

Warning
Warning: Deleting the LVM-Based Pool

When the LVM-based pool is deleted in the Storage Manager, the volume group is deleted as well. This results in a non-recoverable loss of all data stored on the pool!

Multipath Devices (mpath)

At the moment, multipathing support is limited to assigning existing devices to the guests. Volume creation or configuring multipathing from within libvirt is not supported.

Network Exported Directory (netfs)

Specify a network directory to be used in the same way as a file system directory pool (a directory for hosting image files). The only difference to using a file system directory is that libvirt takes care of mounting the directory. Supported protocols are NFS and GlusterFS.

SCSI Host Adapter (scsi)

Use a SCSI host adapter in almost the same way as an iSCSI target. We recommend using a device name from /dev/disk/by-* rather than /dev/sdX, since the latter can change (for example, when adding or removing hard disks). Volume creation on SCSI pools is not supported. Instead, each existing LUN (Logical Unit Number) represents a volume.

Warning
Warning: Security Considerations

To avoid data loss or data corruption, do not attempt to use resources such as LVM volume groups, iSCSI targets, etc., that are also used to build storage pools on the VM Host Server. There is no need to connect to these resources from the VM Host Server or to mount them on the VM Host Server—libvirt takes care of this.

Do not mount partitions on the VM Host Server by label. Under certain circumstances it is possible that a partition is labeled from within a VM Guest with a name already existing on the VM Host Server.

11.1 Managing Storage with Virtual Machine Manager

The Virtual Machine Manager provides a graphical interface—the Storage Manager—to manage storage volumes and pools. To access it, either right-click a connection and choose Details, or highlight a connection and choose Edit › Connection Details. Select the Storage tab.

11.1.1 Adding a Storage Pool

To add a storage pool, proceed as follows:

  1. Click Add in the bottom left corner. The dialog Add a New Storage Pool appears.

  2. Provide a Name for the pool (consisting of alphanumeric characters and _-.) and select a Type. Proceed with Forward.

  3. Specify the required details in the following window. The data that needs to be entered depends on the type of pool you are creating:

    Type dir
    • Target Path: Specify an existing directory.

    Type disk
    • Target Path: The directory that hosts the devices. The default value /dev should usually fit.

    • Format: Format of the device's partition table. Using auto should usually work. If not, get the required format by running the command parted -l on the VM Host Server.

    • Source Path: Path to the device. It is recommended to use a device name from /dev/disk/by-* rather than the simple /dev/sdX, since the latter can change (for example, when adding or removing hard disks). You need to specify the path that represents the whole disk, not a partition on the disk (if existing).

    • Build Pool: Activating this option formats the device. Use with care—all data on the device will be lost!

    Type fs
    • Target Path: Mount point on the VM Host Server file system.

    • Format: File system format of the device. The default value auto should work.

    • Source Path: Path to the device file. It is recommended to use a device name from /dev/disk/by-* rather than /dev/sdX, because the latter can change (for example, when adding or removing hard disks).

    Type iscsi

    Get the necessary data by running the following command on the VM Host Server:

    tux > sudo iscsiadm --mode node

    It returns a list of iSCSI volumes in the following format. You need the IP address (with port) and the target name (IQN):

    IP_ADDRESS:PORT,TPGT TARGET_NAME_(IQN)
    • Target Path: The directory containing the device file. Use /dev/disk/by-path (default) or /dev/disk/by-id.

    • Host Name: Host name or IP address of the iSCSI server.

    • Source Path: The iSCSI target name (IQN).

    Type logical
    • Target Path: In case you use an existing volume group, specify the existing device path. When building a new LVM volume group, specify a device name in the /dev directory that does not already exist.

    • Source Path: Leave empty when using an existing volume group. When creating a new one, specify its devices here.

    • Build Pool: Only activate when creating a new volume group.

    Type mpath
    • Target Path: Support for multipathing is currently limited to making all multipath devices available. Therefore, specify an arbitrary string here; it will then be ignored. The path is required; otherwise the XML parser will fail.

    Type netfs
    • Target Path: Mount point on the VM Host Server file system.

    • Host Name: IP address or host name of the server exporting the network file system.

    • Source Path: Directory on the server that is being exported.

    Type scsi
    • Target Path: The directory containing the device file. Use /dev/disk/by-path (default) or /dev/disk/by-id.

    • Source Path: Name of the SCSI adapter.

    Note
    Note: File Browsing

    Using the file browser by clicking Browse is not possible when operating from a remote host.

  4. Click Finish to add the storage pool.

11.1.2 Managing Storage Pools

Virtual Machine Manager's Storage Manager lets you create or delete volumes in a pool. You may also temporarily deactivate or permanently delete existing storage pools. Changing the basic configuration of a pool is currently not supported by SUSE.

11.1.2.1 Starting, Stopping and Deleting Pools

The purpose of storage pools is to provide block devices located on the VM Host Server that can be added to a VM Guest when managing it from a remote host. To make a pool temporarily inaccessible from remote hosts, click Stop in the bottom left corner of the Storage Manager. Stopped pools are marked with State: Inactive and are grayed out in the list pane. By default, a newly created pool will be automatically started On Boot of the VM Host Server.

To start an inactive pool and make it available from remote hosts again, click Start in the bottom left corner of the Storage Manager.

Note
Note: A Pool's State Does not Affect Attached Volumes

Volumes from a pool attached to VM Guests are always available, regardless of the pool's state (Active (started) or Inactive (stopped)). The state of the pool solely affects the ability to attach volumes to a VM Guest via remote management.

To permanently make a pool inaccessible, click Delete in the bottom left corner of the Storage Manager. You may only delete inactive pools. Deleting a pool does not physically erase its contents on the VM Host Server; it only deletes the pool configuration. However, you need to be extra careful when deleting pools, especially when deleting LVM volume group-based pools:

Warning
Warning: Deleting Storage Pools

Deleting storage pools based on local file system directories, local partitions or disks has no effect on the availability of volumes from these pools currently attached to VM Guests.

Volumes located in pools of type iSCSI, SCSI, LVM group or Network Exported Directory will become inaccessible from the VM Guest if the pool is deleted. Although the volumes themselves will not be deleted, the VM Host Server will no longer have access to the resources.

Volumes on iSCSI/SCSI targets or network exported directories will become accessible again when you create an adequate new pool or when you mount/access these resources directly from the host system.

When deleting an LVM group-based storage pool, the LVM group definition will be erased and the LVM group will no longer exist on the host system. The configuration is not recoverable and all volumes from this pool are lost.

11.1.2.2 Adding Volumes to a Storage Pool

Virtual Machine Manager lets you create volumes in all storage pools, except in pools of types Multipath, iSCSI, or SCSI. A volume in these pools is equivalent to a LUN and cannot be changed from within libvirt.

  1. A new volume can either be created using the Storage Manager or while adding a new storage device to a VM Guest. In either case, select a storage pool from the left panel, then click Create new volume.

  2. Specify a Name for the image and choose an image format.

    SUSE currently only supports raw or qcow2 images. The latter option is not available on LVM group-based pools.

    Next to Max Capacity, specify the maximum size that the disk image is allowed to reach. Unless you are working with a qcow2 image, you can also set the amount of space that should be allocated initially under Allocation. If both values differ, a sparse image file will be created, which grows on demand.

    For qcow2 images, you can use a Backing Store (also called backing file) which constitutes a base image. The newly created qcow2 image will then only record the changes that are made to the base image.

  3. Start the volume creation by clicking Finish.

11.1.2.3 Deleting Volumes From a Storage Pool

Deleting a volume can only be done from the Storage Manager, by selecting a volume and clicking Delete Volume. Confirm with Yes.

Warning
Warning: Volumes Can Be Deleted Even While in Use

Volumes can be deleted even if they are currently used in an active or inactive VM Guest. There is no way to recover a deleted volume.

Whether a volume is used by a VM Guest is indicated in the Used By column in the Storage Manager.

11.2 Managing Storage with virsh

Managing storage from the command line is also possible by using virsh. However, creating storage pools is currently not supported by SUSE. Therefore, this section is restricted to documenting functions such as starting, stopping, and deleting pools, as well as managing volumes.

A list of all virsh subcommands for managing pools and volumes is available by running virsh help pool and virsh help volume, respectively.

11.2.1 Listing Pools and Volumes

List all pools currently active by executing the following command. To also list inactive pools, add the option --all:

tux > virsh pool-list --details

Details about a specific pool can be obtained with the pool-info subcommand:

tux > virsh pool-info POOL

Volumes can only be listed per pool by default. To list all volumes from a pool, enter the following command.

tux > virsh vol-list --details POOL
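
Similarly, details about a specific volume (such as its type, capacity, and allocation) can be obtained with the vol-info subcommand; NAME and POOL are placeholders here:

tux > virsh vol-info NAME --pool POOL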

At the moment virsh offers no tools to show whether a volume is used by a guest or not. The following procedure describes a way to list volumes from all pools that are currently used by a VM Guest.

Procedure 11.1: Listing all Storage Volumes Currently Used on a VM Host Server
  1. Create an XSLT style sheet by saving the following content to a file, for example, ~/libvirt/guest_storage_list.xsl:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="text()"/>
      <xsl:strip-space elements="*"/>
      <xsl:template match="disk">
        <xsl:text>  </xsl:text>
        <xsl:value-of select="(source/@file|source/@dev|source/@dir)[1]"/>
        <xsl:text>&#10;</xsl:text>
      </xsl:template>
    </xsl:stylesheet>
  2. Run the following commands in a shell. It is assumed that the guest's XML definitions are all stored in the default location (/etc/libvirt/qemu). xsltproc is provided by the package libxslt.

    SSHEET="$HOME/libvirt/guest_storage_list.xsl"
    cd /etc/libvirt/qemu
    for FILE in *.xml; do
      basename $FILE .xml
      xsltproc $SSHEET $FILE
    done
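
If you prefer to query libvirt instead of reading /etc/libvirt/qemu directly, a variant of the same approach (a sketch, not part of the original procedure) pipes the output of virsh dumpxml through the same style sheet:

SSHEET="$HOME/libvirt/guest_storage_list.xsl"
for DOMAIN in $(virsh list --all --name); do
  echo "$DOMAIN"
  virsh dumpxml "$DOMAIN" | xsltproc "$SSHEET" -
done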

11.2.2 Starting, Stopping and Deleting Pools

Use the virsh pool subcommands to start, stop or delete a pool. Replace POOL with the pool's name or its UUID in the following examples:

Stopping a Pool
tux > virsh pool-destroy POOL
Note
Note: A Pool's State Does not Affect Attached Volumes

Volumes from a pool attached to VM Guests are always available, regardless of the pool's state (Active (started) or Inactive (stopped)). The state of the pool solely affects the ability to attach volumes to a VM Guest via remote management.

Deleting a Pool
tux > virsh pool-delete POOL
Warning
Warning: Deleting Storage Pools

See Warning: Deleting Storage Pools

Starting a Pool
tux > virsh pool-start POOL
Enable Autostarting a Pool
tux > virsh pool-autostart POOL

Only pools that are marked to autostart will automatically be started if the VM Host Server reboots.

Disable Autostarting a Pool
tux > virsh pool-autostart POOL --disable

11.2.3 Adding Volumes to a Storage Pool

virsh offers two ways to add volumes to storage pools: either from an XML definition with vol-create and vol-create-from, or via command line arguments with vol-create-as. The first two methods are currently not supported by SUSE; therefore, this section focuses on the subcommand vol-create-as.

To add a volume to an existing pool, enter the following command:

tux > virsh vol-create-as POOL NAME 12G --format raw|qcow2 --allocation 4G

POOL

Name of the pool to which the volume should be added

NAME

Name of the volume

12G

Size of the image, in this example 12 gigabytes. Use the suffixes k, M, G, T for kilobyte, megabyte, gigabyte, and terabyte, respectively.

--format

Format of the volume. SUSE currently supports raw and qcow2.

--allocation

Optional parameter. By default virsh creates a sparse image file that grows on demand. Specify the amount of space that should be allocated with this parameter (4 gigabytes in this example). Use the suffixes k, M, G, T for kilobyte, megabyte, gigabyte, and terabyte, respectively.

When not specifying this parameter, a sparse image file with no allocation will be generated. To create a non-sparse volume, specify the whole image size with this parameter (would be 12G in this example).
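
For example, to create a 12 GB qcow2 volume with 4 GB allocated up front (the pool name libvirt-pool and the volume name example_disk.qcow2 are placeholders):

tux > virsh vol-create-as libvirt-pool example_disk.qcow2 12G --format qcow2 --allocation 4G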

11.2.3.1 Cloning Existing Volumes

Another way to add volumes to a pool is to clone an existing volume. The new instance is always created in the same pool as the original.

tux > virsh vol-clone NAME_EXISTING_VOLUME NAME_NEW_VOLUME --pool POOL

NAME_EXISTING_VOLUME

Name of the existing volume that should be cloned

NAME_NEW_VOLUME

Name of the new volume

--pool

Optional parameter. libvirt tries to locate the existing volume automatically. If that fails, specify this parameter.
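
For example, to clone the volume created above within the same placeholder pool:

tux > virsh vol-clone example_disk.qcow2 example_disk_clone.qcow2 --pool libvirt-pool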

11.2.4 Deleting Volumes from a Storage Pool

To permanently delete a volume from a pool, use the subcommand vol-delete:

tux > virsh vol-delete NAME --pool POOL

--pool is optional. libvirt tries to locate the volume automatically. If that fails, specify this parameter.
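
For example, to remove the cloned volume created in the previous section (names are again placeholders):

tux > virsh vol-delete example_disk_clone.qcow2 --pool libvirt-pool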

Warning
Warning: No Checks Upon Volume Deletion

A volume will be deleted in any case, regardless of whether it is currently used in an active or inactive VM Guest. There is no way to recover a deleted volume.

Whether a volume is used by a VM Guest can only be detected by using the method described in Procedure 11.1, “Listing all Storage Volumes Currently Used on a VM Host Server”.

11.2.5 Attaching Volumes to a VM Guest

After you create a volume as described in Section 11.2.3, “Adding Volumes to a Storage Pool”, you can attach it to a virtual machine and use it as a hard disk:

tux > virsh attach-disk DOMAIN SOURCE_IMAGE_FILE TARGET_DISK_DEVICE

For example:

tux > virsh attach-disk sles12sp3 /virt/images/example_disk.qcow2 sda2

To check if the new disk is attached, inspect the result of the virsh dumpxml command:

root # virsh dumpxml sles12sp3
[...]
<disk type='file' device='disk'>
 <driver name='qemu' type='raw'/>
 <source file='/virt/images/example_disk.qcow2'/>
 <backingStore/>
 <target dev='sda2' bus='scsi'/>
 <alias name='scsi0-0-0'/>
 <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
[...]

11.2.5.1 Hotplug or Persistent Change

You can attach disks to both active and inactive domains. The attachment is controlled by the --live and --config options:

--live

Hotplugs the disk to an active domain. The attachment is not saved in the domain configuration. Using --live on an inactive domain is an error.

--config

Changes the domain configuration persistently. The attached disk is then available after the next domain start.

--live --config

Hotplugs the disk and adds it to the persistent domain configuration.
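
For example, to both hotplug the disk from Section 11.2.5 and keep it in the persistent configuration (domain, image, and target names as in the earlier example, assuming the disk is not yet attached):

tux > virsh attach-disk sles12sp3 /virt/images/example_disk.qcow2 sda2 --live --config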

Tip
Tip: virsh attach-device

virsh attach-device is the more generic form of virsh attach-disk. You can use it to attach other types of devices to a domain.
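
As a minimal sketch, assuming the <disk> definition shown above has been saved to a file named example_disk.xml (a hypothetical file name), the equivalent persistent attachment would look like this:

tux > virsh attach-device sles12sp3 example_disk.xml --config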

11.2.6 Detaching Volumes from a VM Guest

To detach a disk from a domain, use virsh detach-disk:

root # virsh detach-disk DOMAIN TARGET_DISK_DEVICE

For example:

root # virsh detach-disk sles12sp3 sda2

You can control the attachment with the --live and --config options as described in Section 11.2.5, “Attaching Volumes to a VM Guest”.

11.3 Locking Disk Files and Block Devices with virtlockd

Locking block devices and disk files prevents concurrent writes to these resources from different VM Guests. It provides protection against starting the same VM Guest twice, or adding the same disk to two different virtual machines. This will reduce the risk of a virtual machine's disk image becoming corrupted because of a wrong configuration.

The locking is controlled by a daemon called virtlockd. Since it operates independently from the libvirtd daemon, locks will endure a crash or a restart of libvirtd. Locks will even persist across an update of virtlockd itself, since it can re-execute itself. This ensures that VM Guests do not need to be restarted upon a virtlockd update. virtlockd is supported for KVM, QEMU, and Xen.

11.3.1 Enable Locking

Locking virtual disks is not enabled by default on openSUSE Leap. To enable and automatically start it upon rebooting, perform the following steps:

  1. Edit /etc/libvirt/qemu.conf and set

    lock_manager = "lockd"
  2. Start the virtlockd daemon with the following command:

    tux > sudo systemctl start virtlockd
  3. Restart the libvirtd daemon with:

    tux > sudo systemctl restart libvirtd
  4. Make sure virtlockd is automatically started when booting the system:

    tux > sudo systemctl enable virtlockd
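
To verify that the daemon is running and enabled, you can additionally check its status:

tux > sudo systemctl status virtlockd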

11.3.2 Configure Locking

By default virtlockd is configured to automatically lock all disks configured for your VM Guests. The default setting uses a "direct" lockspace, where the locks are acquired against the actual file paths associated with the VM Guest <disk> devices. For example, flock(2) will be called directly on /var/lib/libvirt/images/my-server/disk0.raw when the VM Guest contains the following <disk> device:

<disk type='file' device='disk'>
 <driver name='qemu' type='raw'/>
 <source file='/var/lib/libvirt/images/my-server/disk0.raw'/>
 <target dev='vda' bus='virtio'/>
</disk>

The virtlockd configuration can be changed by editing the file /etc/libvirt/qemu-lockd.conf. It also contains detailed comments with further information. Make sure to activate configuration changes by reloading virtlockd:

tux > sudo systemctl reload virtlockd

11.3.2.1 Enabling an Indirect Lockspace

The default configuration of virtlockd uses a direct lockspace. This means that the locks are acquired against the actual file paths associated with the <disk> devices.

If the disk file paths are not accessible to all hosts, virtlockd can be configured to allow an indirect lockspace. This means that a hash of the disk file path is used to create a file in the indirect lockspace directory. The locks are then held on these hash files instead of the actual disk file paths. Indirect lockspace is also useful if the file system containing the disk files does not support fcntl() locks. An indirect lockspace is specified with the file_lockspace_dir setting:

file_lockspace_dir = "/MY_LOCKSPACE_DIRECTORY"

11.3.2.2 Enable Locking on LVM or iSCSI Volumes

To lock virtual disks placed on LVM or iSCSI volumes shared by several hosts, locking needs to be done by UUID rather than by path (the default). Furthermore, the lockspace directory needs to be placed on a shared file system accessible by all hosts sharing the volume. Set the following options for LVM and/or iSCSI:

lvm_lockspace_dir = "/MY_LOCKSPACE_DIRECTORY"
iscsi_lockspace_dir = "/MY_LOCKSPACE_DIRECTORY"

11.4 Online Resizing of Guest Block Devices

Sometimes you need to change—extend or shrink—the size of the block device used by your guest system. For example, when the disk space originally allocated is no longer enough, it is time to increase its size. If the guest disk resides on a logical volume, you can resize it while the guest system is running. This is a big advantage over an offline disk resizing (see the virt-resize command from the guestfs tools described in Section 17.3, “Guestfs Tools”), as the service provided by the guest is not interrupted by the resizing process. To resize a VM Guest disk, follow these steps:

Procedure 11.2: Online Resizing of Guest Disk
  1. Inside the guest system, check the current size of the disk (for example /dev/vda).

    root # fdisk -l /dev/vda
    Disk /dev/vda: 160.0 GB, 160041885696 bytes, 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
  2. On the host, resize the logical volume holding the /dev/vda disk of the guest to the required size, for example 200 GB.

    root # lvresize -L 200G /dev/mapper/vg00-home
    Extending logical volume home to 200 GiB
    Logical volume home successfully resized
  3. On the host, resize the block device related to the disk /dev/mapper/vg00-home of the guest. Note that you can find the DOMAIN_ID with virsh list.

    root # virsh blockresize  --path /dev/vg00/home --size 200G DOMAIN_ID
    Block device '/dev/vg00/home' is resized
  4. Check that the new disk size is accepted by the guest.

    root # fdisk -l /dev/vda
    Disk /dev/vda: 200.0 GB, 200052357120 bytes, 390727260 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

11.5 Sharing Directories between Host and Guests (File System Pass-Through)

libvirt allows you to share directories between host and guests using QEMU's file system pass-through (also called VirtFS) feature. Such a directory can also be accessed by several VM Guests at once and can therefore be used to exchange files between VM Guests.

Note
Note: Windows Guests and File System Pass-Through

Note that sharing directories between VM Host Server and Windows guests via File System Pass-Through does not work, because Windows lacks the drivers required to mount the shared directory.

To make a shared directory available on a VM Guest, proceed as follows:

  1. Open the guest's console in Virtual Machine Manager and either choose View › Details from the menu or click Show virtual hardware details in the toolbar. Choose Add Hardware › Filesystem to open the Filesystem Passthrough dialog.

  2. Driver allows you to choose between a Handle or Path based driver. The default setting is Path. Mode lets you choose the security model, which influences the way file permissions are set on the host. Three options are available:

    Passthrough (Default)

    Files on the file system are directly created with the client-user's credentials. This is very similar to what NFSv3 uses.

    Squash

    Same as Passthrough, but failures of privileged operations like chown are ignored. This is required when KVM is not run with root privileges.

    Mapped

    Files are created with the file server's credentials (qemu.qemu). The user credentials and the client-user's credentials are saved in extended attributes. This model is recommended when host and guest domains should be kept completely isolated.

  3. Specify the path to the directory on the VM Host Server with Source Path. Enter a string at Target Path that will be used as a tag to mount the shared directory. Note that the string of this field is a tag only, not a path on the VM Guest.

  4. Apply the setting. If the VM Guest is currently running, you need to shut it down to apply the new setting (rebooting the guest is not sufficient).

  5. Boot the VM Guest. To mount the shared directory, enter the following command:

    tux > sudo mount -t 9p -o trans=virtio,version=9p2000.L,rw TAG /MOUNT_POINT

    To make the shared directory permanently available, add the following line to the /etc/fstab file:

    TAG   /MOUNT_POINT    9p  trans=virtio,version=9p2000.L,rw    0   0
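
For reference, the shared directory ends up in the guest's libvirt configuration as a <filesystem> device. The following is a rough sketch only; the source directory and the tag are placeholders, and the accessmode value shown corresponds to the Mapped security model:

<filesystem type='mount' accessmode='mapped'>
 <source dir='/path/on/host'/>
 <target dir='TAG'/>
</filesystem>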

11.6 Using RADOS Block Devices with libvirt

RADOS Block Devices (RBD) store data in a Ceph cluster. They allow snapshotting, replication, and data consistency. You can use an RBD from your libvirt-managed VM Guests similarly to how you use other block devices.
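
As an illustration only (the Ceph monitor host, pool name, and image name below are placeholders, and depending on the cluster setup an additional <auth> element may be required), such a device is described in the guest's XML roughly as follows:

<disk type='network' device='disk'>
 <driver name='qemu' type='raw'/>
 <source protocol='rbd' name='libvirt-pool/example_disk'>
  <host name='ceph-mon1' port='6789'/>
 </source>
 <target dev='vdb' bus='virtio'/>
</disk>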
