1. Software Overview
Paradice on KVM comes in two repositories.
The kernel modules: this repository contains the main components of Paradice, including the frontend and backend drivers and the device info modules.
The Linux kernel: a patched Ubuntu Linux kernel that needs to be installed in both the VMs and the host OS. This kernel includes the modifications to KVM required by Paradice.
Our source code is hosted on GitHub. You can use the following commands to download the repositories.
# Download the modules
$ git clone -b testing_kvm https://github.com/arrdalan/dfv_modules.git
# Download the kernel
$ git clone -b testing_kvm https://github.com/arrdalan/ubuntu_dfv.git
In these instructions, we will show you how to set up Paradice to virtualize a Radeon GPU, a mouse, and a keyboard.
Note: Preferably, download the repos to ~/dfv in your file system. This will make it easier for you to follow the rest of the instructions.
2.1. Compiling the code
Follow the instructions in the following link to compile the Ubuntu kernel from source.
Note: You need to compile two versions of the kernel: one for the 32-bit x86 PAE architecture (binary-generic-pae) and one for 32-bit x86 (binary-generic). Install the PAE version in the host OS and the non-PAE version in the VM. Download one instance of the kernel repository into ~/dfv/ubuntu_dfv and compile it for the x86 PAE architecture for the host OS. Download another instance of the kernel repository into ~/dfv/ubuntu_dfv2 and compile it for the x86 architecture for the guest VM.
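The two builds described in the note can be sketched as follows. The `fakeroot debian/rules` targets are the standard Ubuntu kernel packaging commands; treat the exact invocation as an assumption and defer to the linked build instructions. The helper function is purely illustrative.

```shell
# Hypothetical helper mapping each kernel role to the Ubuntu packaging
# target named in the note above (PAE for the host, non-PAE for the guest).
build_target() {
  case "$1" in
    host)  echo "binary-generic-pae" ;;
    guest) echo "binary-generic" ;;
  esac
}

# The two builds would then look like this (assumed standard Ubuntu
# kernel packaging workflow; not run here):
#   cd ~/dfv/ubuntu_dfv  && fakeroot debian/rules clean && fakeroot debian/rules "$(build_target host)"
#   cd ~/dfv/ubuntu_dfv2 && fakeroot debian/rules clean && fakeroot debian/rules "$(build_target guest)"
```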
Finally, you can compile the kernel modules as follows:
$ cd ~/dfv/dfv_modules
$ source make.sh kvm_server
$ source make.sh kvm_client
Note: the modules' Makefile assumes that the compiled kernels are located in ~/dfv/ubuntu_dfv and ~/dfv/ubuntu_dfv2.
2.2. Setting up the hypervisor and the VM
Now that you have compiled everything, you can set up the system.
First, configure KVM on your system. Then create a guest VM, install 32-bit Ubuntu 12.04 in it, and update its kernel with the image you built earlier.
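A minimal sketch of this step. The names below (guest.img, the ISO filename, the 2048 MB of RAM) are illustrative assumptions, not prescribed by Paradice; the small helper just composes the installer command line so the pieces are easy to adjust.

```shell
# Compose a qemu installer command line; all arguments are illustrative.
install_cmd() {
  printf 'qemu-system-x86_64 -enable-kvm -m %s -hda %s -cdrom %s\n' "$1" "$2" "$3"
}

# Typical use (not run here):
#   qemu-img create -f raw guest.img 20G
#   $(install_cmd 2048 guest.img ubuntu-12.04-desktop-i386.iso)
# After installing Ubuntu in the guest, install the non-PAE kernel .debs
# built earlier, then update the bootloader:
#   sudo dpkg -i linux-image-*.deb && sudo update-grub && sudo reboot
```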
2.3. Loading the kernel modules
We have developed some scripts that will make it easy to load and configure the kernel modules both in the host OS and in the guest VM. Download them using the following commands:
# Note: here is where we want to put the scripts
$ cd ~/dfv/dfv_modules
$ git clone -b testing_kvm https://github.com/arrdalan/dfv_scripts.git
2.3.1. Host OS
First, you need to create the device info files for the devices you want to virtualize. They are mainly needed by the guest VM to fake the presence of these devices and fool applications such as X. However, we also need the info files for input devices in the host OS. This is because the device files for input devices can change names from boot to boot, depending on how udev configures them. The info files allow us to automatically connect the virtual device files in the guest VM to the actual device files in the host OS, no matter what names udev assigns to our input devices.
Identify the input devices that you want to virtualize. In Linux, the info for all input devices can be found in /sys/class/input/. In this directory, you'll find several subdirectories named inputX, one per input device. Identify which one corresponds to the input device you want to virtualize, for example by reading the inputX/name files, which tell you the names of the devices. You might identify input5 and input6 as your keyboard and mouse because input5/name and input6/name both read "Logitech USB Receiver". Then extract the info of these two devices into device info files using the following commands in the host OS:
$ cd ~/dfv/dfv_modules
$ mkdir info_files
$ source scripts/extract_input_info.sh 5 info_files/keyboard_info.txt
$ source scripts/extract_input_info.sh 6 info_files/mouse_info.txt
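The lookup described above can be scripted. The sketch below is not part of the Paradice scripts; it just prints the name of every inputX device so you can pick the right numbers. The sysfs root is a parameter so the function can be exercised on any directory with the same layout.

```shell
# Print "inputX: <device name>" for every input device under the given
# sysfs root (defaults to /sys/class/input).
list_input_names() {
  local root="${1:-/sys/class/input}"
  local d
  for d in "$root"/input*/; do
    [ -f "$d/name" ] && printf '%s: %s\n' "$(basename "$d")" "$(cat "$d/name")"
  done
}

# Example: list_input_names | grep "Logitech USB Receiver"
```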
For the GPU, find the PCI slot that your GPU sits on in the host OS, e.g., by running lspci. For example, let's say the GPU is on 0000.02.00.0. You can now create the info file as follows:
$ cd ~/dfv/dfv_modules
$ source scripts/extract_gpu_info.sh 0000 02 00 0 info_files/gpu_info.txt
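For reference, the kind of data such an info file captures lives under /sys/bus/pci/devices. The following sketch is not the real extract_gpu_info.sh (whose exact contents aren't shown here); it only illustrates reading a device's vendor and device IDs, with the sysfs root as a parameter so it can be tried on a mock tree.

```shell
# Print the PCI vendor and device IDs for a slot such as 0000:02:00.0.
show_pci_ids() {
  local slot="$1" root="${2:-/sys/bus/pci/devices}"
  printf 'vendor=%s device=%s\n' "$(cat "$root/$slot/vendor")" "$(cat "$root/$slot/device")"
}

# Example: show_pci_ids 0000:02:00.0
# A Radeon GPU reports vendor=0x1002 (the AMD/ATI PCI vendor ID).
```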
Now you have all three info files in ~/dfv/dfv_modules/info_files. You can compare them with the three example info files provided in the info_files directory of the scripts repository you cloned earlier. Your info files might look different, but the examples should give you an idea of what an info file should look like. Fortunately, generating the info files is a one-time effort: you don't have to regenerate them every time you want to use Paradice, just once per device.
Copy the compiled modules to the host OS. Assuming that the modules are in ~/dfv/dfv_modules/ in the host OS, you can load them by:
$ cd ~/dfv/dfv_modules
$ source scripts/load_server_kvm.sh
If successful, this should print out something like:
device file successfully added
device file successfully added
device file successfully added
2.3.2. Guest VM
Move the compiled modules to ~/dfv/dfv_modules in the guest VM. Also move the info files that you generated in the host OS to ~/dfv/dfv_modules/info_files in the guest VM, and then run:
$ cd ~/dfv/dfv_modules
$ source scripts/load_client_kvm.sh
If successful, this should print something like this:
device file successfully added
Input device successfully registered
device file successfully added
Input device successfully registered
device file successfully added
Note: The scripts will try to fake the presence of the GPU on a virtual PCI slot with exactly the same slot number as the one the GPU sits on in the host OS, e.g., 0000.02.00.0 in the example above. It is a requirement that these two PCI slots match. If this slot is already taken in the guest VM, the scripts will fail. In any case, after the scripts are done, run lspci in the guest and make sure the virtual GPU's PCI slot is the same as the GPU's PCI slot in the host OS.
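The lspci check at the end of the note can be scripted as below. This is an illustrative helper, not part of the Paradice scripts: it takes lspci output as a string so it can be tested anywhere, and reports whether a VGA device appears on the expected slot.

```shell
# Succeeds (exit 0) if the given lspci output lists a VGA device on the
# given slot (lspci prints slots in the short form, e.g. "02:00.0").
slot_has_vga() {
  local lspci_out="$1" slot="$2"
  printf '%s\n' "$lspci_out" | grep -q "^$slot .*VGA"
}

# In the guest: slot_has_vga "$(lspci)" "02:00.0" && echo "slots match"
```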
2.3.3. Testing Paradice
Now, we’re all set and we can test everything. Launch the guest VM in a separate virtual terminal of the host OS. For example, assuming that your guest VM disk is on /dev/sda7, you can use the following command:
$ xinit /usr/bin/qemu-system-x86_64 /dev/sda7 -m 2048 -- :1
Now, make sure that the emulated VGA card has been removed and that X is not running in the guest VM. If the emulated VGA card sits on PCI slot 0000.00.02.0, you can remove it using:
# kill off the lightdm first if it's running
$ service lightdm stop
# remove the emulated VGA card
$ echo 1 > /sys/bus/pci/devices/0000\:00\:02.0/remove
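If you need to script the removal for a different slot, the sysfs path can be composed; a trivial sketch (the path layout is standard Linux sysfs, the helper name is ours):

```shell
# Build the sysfs "remove" trigger path for a PCI slot.
pci_remove_path() { printf '/sys/bus/pci/devices/%s/remove\n' "$1"; }

# As root in the guest: echo 1 > "$(pci_remove_path 0000:00:02.0)"
```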
Now test Paradice by running:
$ xinit
This should bring up a simple white X terminal. Test to make sure the mouse and keyboard are working. If you have an OpenGL app somewhere, you can navigate there and run it.
You can also run the window manager (lightdm). First kill off the xinit by pressing Ctrl+C. Then run:
$ service lightdm start
Now, you’re in the Ubuntu window manager.
You can kill off the window manager by:
$ service lightdm stop
You can also run compute jobs on the GPU through Paradice. Since Paradice currently supports the open source Radeon driver, you need to use compute frameworks compatible with this driver, such as GalliumCompute.
You can also navigate between the host OS and guest VM virtual terminals using the Alt+Fn shortcut keys (Note: do not use Ctrl+Alt+Fn). For example, Alt+F7 takes you back to the host OS and Alt+F8 takes you to the guest VM (the exact numbers may differ).
One main reason to use Paradice for GPU virtualization is to access the GPU from multiple VMs (rather than assigning the GPU to a single VM). The released source code does support multiple guest VMs. Simply configure another guest VM, launch it in another virtual terminal (e.g., $ xinit [launch command] -- :2), and load the modules in it. You can then navigate between all the guest VMs and the host OS using the Alt+Fn shortcut keys.