kvm/libvirt in linux (1)

kvm background

Kernel-based Virtual Machine (KVM) is a Linux kernel module that turns Linux into a hypervisor; it relies on the hardware virtualization extensions of the CPU. Usually the physical machine is called the host, and the virtual machines (VMs) running on the host are called guests.

KVM itself does not do any hardware emulation; it needs a userspace program (e.g. QEMU) to set up the guest address space through the /dev/kvm interface and to provide the emulated I/O devices.

virt-manager is a GUI tool for managing virtual machines via libvirt, mostly used with QEMU/KVM virtual machines.
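
If you prefer the GUI, one minimal way to get it (assuming a Debian/Ubuntu host, where the package is named virt-manager) is:

sudo apt install virt-manager # installs the GUI and its libvirt client libraries
virt-manager # by default connects to the local qemu:///system hypervisor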

  • check kvm module info
modinfo kvm
  • check whether the CPU supports hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
kvm-ok

install kvm

  • install libvirt and qemu packages
sudo apt install qemu qemu-kvm libvirt-bin bridge-utils
modprobe kvm # load the kvm module
systemctl start libvirtd.service # start the libvirt daemon
virsh iface-bridge ens33 virbr0 # create the virbr0 bridge from the ens33 interface

then add the current user to the libvirt groups and reboot:

sudo usermod -aG libvirtd $(whoami)
sudo usermod -aG libvirt-qemu $(whoami)
sudo reboot
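
A quick sanity check that the module is loaded and the daemon is running:

lsmod | grep kvm # should list kvm plus kvm_intel or kvm_amd
systemctl is-active libvirtd # should print "active"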

network in kvm

The default network is NAT (network address translation): when you create a new virtual machine, its network traffic is forwarded through the host system, so if the host is connected to the Internet, the VM has Internet access as well.

The VM manager also creates an Ethernet bridge between the host and the virtual network, so the host can ping the VM's IP address, and the VM can ping the host as well.

  • List of network cards

Go to /sys/class/net, where there are a few NICs:

lrwxrwxrwx 1 root root 0 Mar 24 16:18 docker0 -> ../../devices/virtual/net/docker0
lrwxrwxrwx 1 root root 0 Mar 24 16:18 docker_gwbridge -> ../../devices/virtual/net/docker_gwbridge
lrwxrwxrwx 1 root root 0 Mar 24 16:18 eno1 -> ../../devices/pci0000:00/0000:00:1f.6/net/eno1
lrwxrwxrwx 1 root root 0 Mar 24 16:18 enp4s0f2 -> ../../devices/pci0000:00/0000:00:1c.4/0000:02:00.0/0000:03:03.0/0000:04:00.2/net/enp4s0f2
lrwxrwxrwx 1 root root 0 Mar 24 16:18 lo -> ../../devices/virtual/net/lo
lrwxrwxrwx 1 root root 0 Mar 24 16:18 veth1757da9 -> ../../devices/virtual/net/veth1757da9
lrwxrwxrwx 1 root root 0 Mar 24 16:18 vethd4d0e7f -> ../../devices/virtual/net/vethd4d0e7f
lrwxrwxrwx 1 root root 0 Mar 24 16:18 virbr0 -> ../../devices/virtual/net/virbr0
lrwxrwxrwx 1 root root 0 Mar 24 16:18 virbr0-nic -> ../../devices/virtual/net/virbr0-nic
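
The bridges among these (virbr0, docker0, docker_gwbridge) can also be listed directly; assuming bridge-utils is installed as above:

brctl show # lists each bridge and the interfaces attached to it
ip link show type bridge # iproute2 equivalent
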
  • multiple interfaces with the same MAC address

When a switch receives a frame on an interface, it creates an entry in its MAC address table with the source MAC and that interface. If the source MAC is already known, it updates the table entry with the new interface. So basically, if you assign the MAC address of an externally reachable NIC-A to a particular VM, the switch relearns that MAC on the VM's port and NIC-A's connectivity is lost.
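
The same MAC-learning behaviour can be inspected on a Linux bridge on the host, e.g. for libvirt's default bridge:

brctl showmacs virbr0 # MAC table of the bridge: port, MAC address, is-local flag, ageing timer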

  • virbr0

The default bridge NIC of libvirt is virbr0. A bridged network means the guest and host share the same physical network card, and the guest gets its own IP address that can be used to access it directly. virbr0, however, does network address translation (NAT): it translates the internal IP addresses to an external one, which means the internal addresses are not visible from outside.

To re-create virbr0 if it was deleted previously:

brctl addbr virbr0 # create the bridge
brctl stp virbr0 on # enable spanning tree
brctl setfd virbr0 0 # set the forwarding delay to 0
ifconfig virbr0 192.168.122.1 netmask 255.255.255.0 up

to disable or delete virbr0:

virsh net-destroy default
virsh net-undefine default
service libvirtd restart
ifconfig

After starting the VM, you can check its bridge network with:

virsh domiflist vm-name
virsh domifaddr vm-name

We can then log in to the VM (after assigning the current user to the libvirt group) and check that NAT is working:

ssh 192.168.122.xxx # the guest IP reported by virsh domifaddr
ping www.bing.com
ping 10.20.xxx.xxx # ping the host external IP

Basically, the VM can access external websites, but external hosts cannot reach the VM directly.

Interfaces can also be managed at runtime with virsh attach-interface / detach-interface / domiflist.

create vm

Creating a virtual machine can be done either through virt-install or through a config xml:

virt-install

virt-install depends on the system Python and pip. If the current Python version is 2.7, it may print a warning and return -1 because of a missing module, so make sure PYTHONPATH points to the correct installation if you have multiple Python versions on the system. virt-install also has to run as root.
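
A quick way to see which interpreter virt-install will actually use (just inspecting the shebang, nothing virt-install-specific is assumed):

which virt-install
head -1 "$(which virt-install)" # shows the python interpreter in the shebang line
python --version
echo "$PYTHONPATH"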

Then a virtual machine can be started with the following command options:

# alternatively pass the iso via --cdrom=ubuntu-16.04.3-server-amd64.iso instead of --location
sudo virt-install \
--name v1 \
--ram 2048 \
--disk path=/var/lib/libvirt/images/ubuntu.qcow2 \
--vcpus 2 \
--virt-type kvm \
--os-type linux \
--os-variant ubuntu16.04 \
--graphics none \
--console pty,target_type=serial \
--location /var/lib/libvirt/images/ubuntu-16.04.3-server-amd64.iso \
--network bridge:virbr0 \
--extra-args 'console=ttyS0'

During the installation, the process looks very much like a Linux installation on a bare machine; I suppose in this way it is like installing a dual-boot OS on bare metal. During the installation there was an error, failed to load installer component libc6-udeb, which may be due to a missing component in the iso or img.

config.xml

  • create volumes

    go to /var/lib/libvirt/images and create a volume as follows (see the qemu-img info check after this list):

    qemu-img create -f qcow2 ubuntu.qcow2 40G

check qemu-kvm & qemu-img introduction

  • add vm image

    cp the ubuntu iso to /var/lib/libvirt/images as well, so the directory contains:

    ubuntu.qcow2
    ubuntu-16.04.3-server-amd64.iso
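
To confirm the new volume has the expected format and virtual size (the qemu-img info check mentioned in the list above):

qemu-img info /var/lib/libvirt/images/ubuntu.qcow2 # prints format, virtual size and actual disk usage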

vm.xml

Here is an xml sample:

<domain type='kvm'>
<name>v1</name>
<memory>4048576</memory>
<currentMemory>4048576</currentMemory>
<vcpu>2</vcpu>
<os>
<type arch='x86_64' machine='pc'>hvm</type>
<boot dev='cdrom'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='localtime'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/ubuntu.qcow2'/>
<target dev='hda' bus='ide'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/libvirt/images/ubuntu-16.04.3-server-amd64.iso'/>
<target dev='hdb' bus='ide'/>
</disk>
<interface type='bridge' >
<mac address='52:54:00:98:45:3b' />
<source bridge='virbr0' />
<model type='virtio' />
</interface>
<serial type='pty'>
<target port='0' />
</serial>
<console type='pty'>
<target type='serial' port='0' />
</console>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='en-us'>
<listen type='address' address='0.0.0.0' />
</graphics>
</devices>
</domain>

a few tips about the xml above:

  • the \ element is necessary for the network interface.

  • if no specific MAC address is assigned in the interface, an automatic MAC address is generated (since we have already defined virbr0), distinct from the host machine's; after ssh-ing into the guest (ssh username@guest_ip) it can still ping the host machine's IP or any external IP (e.g. www.bing.com). The auto-assigned MAC can be checked as shown after this list.

  • the \ element is the setting for the console.
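
One way to check which MAC address was auto-generated for the v1 domain defined above:

virsh dumpxml v1 | grep -A3 '<interface' # shows the generated <mac address='...'/> element
virsh domiflist v1 # interface, type, source bridge, model and MAC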

Finally, run the following CLI commands to start the VM v1:

virsh define vm1.xml
virsh start v1 # the domain name defined in the xml
virsh list
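
Since the xml defines a pty serial console, one way to reach the freshly started guest is the text console (Ctrl+] detaches):

virsh console v1 # attach to the guest serial console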

libvirt

libvirt is a software package for managing VMs; it includes the libvirt API, libvirtd (the daemon process), and the virsh command-line tool.

sudo systemctl restart libvirtd
systemctl status libvirtd

Only when the libvirtd service is running can we manage VMs through libvirt. All VM configurations are stored under /etc/libvirt/qemu. virsh has two modes (see the example after this list):

  • immediate mode, e.g. running virsh list directly in the host shell
  • interactive mode, e.g. running virsh to enter the virsh shell
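
A small sketch of both modes (the list command is just an example):

# immediate mode: a one-off command from the host shell
virsh list --all

# interactive mode: start the shell, then type commands at the virsh prompt
virsh
#   virsh # list --all
#   virsh # quit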

common virsh commands

virsh <command> <domain-id> [options]
virsh uri # the hypervisor's URI
virsh hostname
virsh nodeinfo
virsh list # states: running, idle, paused, shutdown, shutoff, crashed, dying
virsh shutdown <domain>
virsh start <domain>
virsh destroy <domain>
virsh undefine <domain>
virsh create # create a domain from a config.xml
virsh connect # reconnect to the hypervisor
virsh define # define a domain from an xml file
virsh setmem domain-id kbs # takes effect immediately
virsh setmaxmem domain-id kbs
virsh setvcpus domain-id count
virsh vncdisplay domain-id # the vnc port it listens on
virsh console <domain>

virsh network commands

  • host configuration

Every standard libvirt installation provides NAT-based connectivity to virtual machines out of the box. This is the so-called 'default virtual network'.

virsh net-list
virsh net-define /usr/share/libvirt/networks/default.xml
virsh net-start default
virsh net-info default
virsh net-dumpxml default

When the libvirt default network is running, you will see an isolated bridge device. This device explicitly does NOT have any physical interfaces added, since it uses NAT + forwarding to connect to the outside world; do not add interfaces to it. libvirt adds iptables rules to allow traffic to/from guests attached to the virbr0 device in the INPUT, FORWARD, OUTPUT and POSTROUTING chains.
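
Those rules can be inspected directly on the host (assuming iptables is in use):

sudo iptables -S FORWARD | grep virbr0 # forwarding rules for guest traffic
sudo iptables -t nat -S POSTROUTING | grep 192.168.122 # the MASQUERADE (NAT) rule for the default subnet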

If default.xml is not found, check fix missing default network; default.xml looks something like:

<network>
<name>default</name>
<uuid>9a05da11-e96b-47f3-8253-a3a482e445f5</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:0a:cd:21'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>

then run:

sudo virsh net-define --file default.xml
sudo virsh net-start default
sudo virsh net-autostart --network default

If the default network was already bound to an existing virbr0, you need to delete that bridge first, as shown below.
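
A minimal sketch of removing a leftover virbr0 (only needed if virsh net-start complains that the bridge already exists):

sudo ifconfig virbr0 down # or: sudo ip link set virbr0 down
sudo brctl delbr virbr0 # delete the stale bridge, then re-run virsh net-start default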

  • guest configuration

Add the following to the guest xml configuration:

<interface type='network'>
<source network='default'/>
<mac address='00:16:3e:1a:b3:4a'/>
</interface>
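
Alternatively, the same NIC can be attached from the command line instead of editing the xml by hand; a sketch (the domain name test_vm and the MAC are just examples):

virsh attach-interface --domain test_vm --type network --source default \
--mac 00:16:3e:1a:b3:4a --model virtio --config # --config makes the change persistent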

More details can be found in the virsh networking doc.

snapshots

Snapshots are used to save the state (disk, memory, time, etc.) of a domain.

  • create a snapshot for a vm
virsh snapshot-create-as --domain test_vm \
--name "test_vm_snapshot1" \
--description "test vm snapshot "
  • list all snapshots for vm
virsh snapshot-list test_vm
  • display info about a snapshot
virsh snapshot-info --domain test_vm --snapshotname test_vm_snapshot1
  • delete a snapshot
virsh snapshot-delete --domain test_vm --snapshotname test_vm_snapshot1
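  • revert to a snapshot (one way to roll the domain back to a saved state)
virsh snapshot-revert --domain test_vm --snapshotname test_vm_snapshot1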

manage volumes

  • create a storage volume
virsh vol-create-as default test_vol.qcow2 10G
# create test_vol on the default storage pool
du -sh /var/lib/libvirt/images/test_vol.qcow2
  • attach to a vm

Attach test_vol to the VM test:

virsh attach-disk --domain test \
--source /var/lib/libvirt/images/test_vol.qcow2 \
--persistent --target vdb

Inside the VM it can then be checked that a new block device /dev/vdb has been added:

ssh test # log in to the vm
lsblk --output NAME,SIZE,TYPE

Or directly grow the disk image (for a live resize of an attached disk, see the sketch after this list):

qemu-img resize /var/lib/libvirt/images/test.qcow2 +1G
  • detach from a vm
virsh detach-disk --domain test --persistent --live --target vdb
  • delete a volume
virsh vol-delete test_vol.qcow2 --pool default
virsh pool-refresh default
virsh vol-list default
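
For the live resize mentioned above, an attached disk can also be grown while the domain is running; a sketch using the names from this section:

virsh blockresize test vdb 11G # grow the attached vdb disk of the running domain test to 11G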

guest fs commands

virt-ls -l -d <domain> <directory>
virt-cat -d <domain> <file_path>
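
These are libguestfs tools rather than virsh subcommands (on Ubuntu they typically come with the libguestfs-tools package); a usage sketch with the v1 domain from earlier:

virt-ls -l -d v1 /etc # long listing of /etc inside the guest image
virt-cat -d v1 /etc/hostname # print a single file from the guest filesystem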

references

kvm introduction in chinese

kvm pre-install checklist

Linux network configuration

kvm installation official doc

creating vm with virt-install

install KVM in ubuntu

jianshu: kvm network configure

cloudman: understand virbr0

virsh command refer

qcow2 vs raw