Virtualisation



General



Hardware level

  • https://en.wikipedia.org/wiki/Hardware_virtualization - The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a "pseudo machine"), a term which itself dates from the experimental IBM M44/44X system. The creation and management of virtual machines has more recently been called "platform virtualization" or "server virtualization".
  • https://en.wikipedia.org/wiki/Full_virtualization - a virtualization technique used to provide a certain kind of virtual machine environment, namely, one that is a complete simulation of the underlying hardware. Full virtualization requires that every salient feature of the hardware be reflected into one of several virtual machines – including the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine, and that is intended to run in a virtual machine. In such an environment, any software capable of execution on the raw hardware can be run in the virtual machine and, in particular, any operating systems. The obvious test of full virtualization is whether an operating system intended for stand-alone use can successfully run inside a virtual machine.
  • https://en.wikipedia.org/wiki/Hardware-assisted_virtualization - platform virtualization approach that enables efficient full virtualization using help from hardware capabilities, primarily from the host processors. Full virtualization is used to simulate a complete hardware environment, or virtual machine, in which an unmodified guest operating system (using the same instruction set as the host machine) executes in complete isolation. Hardware-assisted virtualization was added to x86 processors (Intel VT-x or AMD-V) in 2005 and 2006 (respectively). Hardware-assisted virtualization is also known as accelerated virtualization; Xen calls it hardware virtual machine (HVM), and Virtual Iron calls it native virtualization.


To run HVM domU, the physical hardware must have either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command when the Xen hypervisor is not running. If the command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run HVM domU (or you are already running the Xen hypervisor).

grep -E "(vmx|svm)" --color=always /proc/cpuinfo
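
If Xen is already running, these flags are hidden from dom0's /proc/cpuinfo; as a rough alternative (assuming the xl toolstack is installed), the hypervisor's own capability report can be checked instead:

xl info | grep virt_caps
  # "hvm" listed here indicates HVM guest support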



  • virt-manager - a desktop user interface for managing virtual machines through libvirt. It primarily targets KVM VMs, but also manages Xen and LXC (linux containers). It presents a summary view of running domains, their live performance & resource utilization statistics. Wizards enable the creation of new domains, and configuration & adjustment of a domain’s resource allocation & virtual hardware. An embedded VNC and SPICE client viewer presents a full graphical console to the guest domain.


QEMU


KVM

  • KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM originally required a modified QEMU; the required changes have since been merged upstream. Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.


grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If nothing is displayed after running that command, then your processor does not support hardware virtualization, and you will not be able to use KVM.
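
Beyond the CPU flag, you can also check that the KVM kernel modules are present and loaded (module names assume a stock kernel):

# shows kvm plus kvm_intel or kvm_amd when loaded
lsmod | grep kvm
# load the Intel module by hand if necessary (kvm_amd on AMD systems)
sudo modprobe kvm_intel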

Xen

  • Xen Hypervisor is an open source virtualization platform that powers the world's largest clouds in production and is the foundation of many commercial products. Xen powers public clouds such as Amazon Web Services, Rackspace Public Cloud and many others. Examples of Xen based server products include Huawei UVP, Oracle VM and XenServer. Examples of client products and appliances include QubesOS, XenClient and Netscaler. Xen is 9 years old, mature, and its stability and versatility are second to none.

VMware

VirtualBox

VirtualBox config;

  • Enable host Linux modules: vboxnetflt and vboxnetadp
  • Enable the host-only adapter 'vboxnet0' (the default name on Linux hosts) in VB Preferences > Network if not previously enabled, then select it in the guest's network preferences
  • Keep VM names lowercase with no spaces to make command-line use easier
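
A rough sketch of those steps from the shell (the VM name is an example):

# load the host kernel modules
sudo modprobe -a vboxnetflt vboxnetadp
# create a host-only interface (shows up as vboxnet0)
VBoxManage hostonlyif create
# start a VM from the command line, without a GUI window
VBoxManage startvm myvm --type headless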

Install guest additions for mouse, etc.

Provisioning VirtualBoxes for local dev work;

Other

  • bhyve - the "BSD hypervisor" is a legacy-free hypervisor/virtual machine manager developed on FreeBSD that relies on modern CPU features such as Extended Page Tables (EPT) and VirtIO network and storage drivers. bhyve was officially released on January 20th, 2014 as part of FreeBSD 10.0.


Virtual machines

  • Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.

Vagrant

  • Vagrant - a tool for building complete development environments. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases development/production parity, and makes the "works on my machine" excuse a relic of the past. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet can be used to automatically install and configure software on the machine. A typical workflow is sketched after this list.
  • Veewee is a tool for easily (and repeatedly) building custom Vagrant base boxes, KVMs, and virtual machine images.
  • PuPHPet - A simple GUI to set up virtual machines for PHP Web development.
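
The usual Vagrant workflow, for reference (the box name is just a common example from the public catalogue):

# create a Vagrantfile referencing a base box
vagrant init hashicorp/precise64
# boot and provision the machine, then log in over SSH
vagrant up
vagrant ssh
# tear the machine down when finished
vagrant destroy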


Disk images




  • libguestfs, library for accessing and modifying VM disk images - a set of tools for accessing and modifying virtual machine (VM) disk images. You can use this for viewing and editing files inside guests, scripting changes to VMs, monitoring disk used/free statistics, creating guests, P2V, V2V, performing backups, cloning VMs, building VMs, formatting disks, resizing disks, and much more. libguestfs can access almost any disk image imaginable. It can do it securely — without needing root and with multiple layers of defence against rogue disk images. It can access disk images on remote machines or on CDs/USB sticks. It can access proprietary systems like VMware and Hyper-V.
  • guestfish - a shell and command-line tool for examining and modifying virtual machine filesystems. It uses libguestfs and exposes all of the functionality of the guestfs API, see guestfs(3).
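
For instance, to poke around inside an image (the image path is an example):

# open a disk image read-only, inspecting and mounting its filesystems
guestfish --ro -a disk.img -i
# or run a single command non-interactively
guestfish --ro -a disk.img -i cat /etc/hostname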

Repos

Bitnami

ZeroVM

Operating system level

  • Operating system-level virtualization is a server virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. Such instances (often called containers, VEs, VPSs or jails) may look and feel like a real server, from the point of view of its owner. On Unix systems, this technology can be thought of as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource management features to limit the impact of one container's activities on the other containers.

chroot

  • https://en.wikipedia.org/wiki/chroot - an operation that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name (and therefore normally cannot access) files outside the designated directory tree. The term "chroot" may refer to the chroot(2) system call or the chroot(8) wrapper program. The modified environment is called a chroot jail.
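
A minimal example, assuming an extracted root filesystem under /mnt/rootfs:

# bind the kernel's virtual filesystems so tools inside keep working
sudo mount --bind /proc /mnt/rootfs/proc
sudo mount --bind /sys /mnt/rootfs/sys
sudo mount --bind /dev /mnt/rootfs/dev
# change the apparent root directory and start a shell inside it
sudo chroot /mnt/rootfs /bin/bash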


  • https://github.com/dnschneid/crouton - a set of scripts that bundle up into an easy-to-use, Chromium OS-centric chroot generator. Currently Ubuntu and Debian are supported (using debootstrap behind the scenes), but "Chromium OS Debian, Ubuntu, and Probably Other Distros Eventually Chroot Environment" doesn't acronymize as well (crodupodece is admittedly pretty fun to say, though). [10]

PRoot

  • PRoot - a user-space implementation of chroot, mount --bind, and binfmt_misc. This means that users don't need any privileges or setup to do things like using an arbitrary directory as the new root filesystem, making files accessible somewhere else in the filesystem hierarchy, or executing programs built for another CPU architecture transparently through QEMU user-mode. Also, developers can use PRoot as a generic Linux process instrumentation engine thanks to its extension mechanism, see CARE for an example. Technically PRoot relies on ptrace, an unprivileged system-call available in every Linux kernel.

The new root file-system, a.k.a. guest rootfs, typically contains a Linux distribution. By default PRoot confines the execution of programs to the guest rootfs only, however users can use the built-in mount/bind mechanism to access files and directories from the actual root file-system, a.k.a. host rootfs. If the guest rootfs was built for a CPU architecture incompatible with the host one, PRoot uses the CPU emulator QEMU user-mode to execute guest programs transparently. This is a convenient way to develop, build, and validate any guest Linux packages seamlessly on the user's computer, just as if they were in a native guest environment; all cross-compilation issues are avoided that way. PRoot can also mix the execution of host programs and the execution of guest programs emulated by QEMU user-mode. This is useful for using host equivalents of programs that are missing from the guest rootfs and for speeding up build time by using cross-compilation tools or CPU-independent programs, like interpreters.

It is worth noting that the guest kernel is never involved, regardless of whether QEMU user-mode is used or not. Technically, when guest programs perform access to system resources, PRoot translates their requests before sending them to the host kernel. This means that guest programs can use host resources (devices, network, ...) just as if they were "normal" host programs.
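
For example (the rootfs paths are assumptions):

# use an extracted distribution as the new root, no privileges needed
proot -r ~/debian-rootfs /bin/bash
# bind the host's /home into the guest and run an ARM rootfs via QEMU user-mode
proot -r ~/debian-armhf-rootfs -b /home -q qemu-arm /bin/sh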

FreeBSD jail

User-mode Linux

LinuX Containers


  • lmctfy is the open source version of Google’s container stack, which provides Linux application containers.
  • warden - Manages isolated, ephemeral, and resource controlled environments.
  • wsh - executes commands in a Linux container through a unix socket
  • Pipework lets you connect together containers in arbitrarily complex scenarios. Pipework works with "plain" LXC containers (created with lxc-start), and therefore, it also works with the awesome Docker.
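
For reference, the basic container lifecycle with the plain lxc-* tools looks like this (the container name is an example; the download template prompts for a distribution):

# create a container from the download template
sudo lxc-create -n mycontainer -t download
# start it in the background, then attach a shell
sudo lxc-start -n mycontainer -d
sudo lxc-attach -n mycontainer
# stop and delete it
sudo lxc-stop -n mycontainer
sudo lxc-destroy -n mycontainer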






Docker




Commands

docker search debian
  # search Docker Hub for Debian related images

docker pull [name]
  # Download an image from Docker Hub

# Run a command in a new container
docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
docker run ubuntu uname -a

# Run an interactive shell in the ubuntu image,
# allocate a tty, attach stdin and stdout
# To detach the tty without exiting the shell,
# use the escape sequence Ctrl-p + Ctrl-q
docker run -i -t ubuntu /bin/bash
# Bind TCP port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine.
docker run -p 127.0.0.1:80:8080 <image> <cmd>

# Bind TCP port 8080 of the container to a dynamically allocated TCP port on 127.0.0.1 of the host machine.
docker run -p 127.0.0.1::8080 <image> <cmd>

# Bind TCP port 8080 of the container to TCP port 80 on all available interfaces of the host machine.
docker run -p 80:8080 <image> <cmd>

# Bind TCP port 8080 of the container to a dynamically allocated TCP port on all available interfaces of the host machine.
docker run -p 8080 <image> <cmd>
# List your images
docker images

# Listing all running containers
docker ps

Running docker run with -d (detached mode) prints the new container's ID:

# run a process which echoes 'hello world' every second
CONTAINER_ID=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done")

# attach the console to the container
docker attach $CONTAINER_ID

# stop an active container
docker stop $CONTAINER_ID
# kill an active container
docker kill $CONTAINER_ID
# Commit your container to a new named image
docker commit <container_id> <some_name>

Setup

Add your user to the docker group to avoid needing sudo.
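
For example (the docker group is normally created by the package install):

sudo usermod -aG docker $USER
  adds the current user to the docker group; log out and back in to take effect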

sysctl net.ipv4.ip_forward=1
  enable network forwarding
net.ipv4.ip_forward=1
  persistent network forwarding, goes in /etc/sysctl.d/docker.conf
sudo <path to>/docker -d &
   start docker in daemon mode

sudo systemctl enable docker
  start docker automatically on system boot

sudo systemctl start docker
  start the docker service now
ls -lah /var/run/docker.sock
  exists when docker is running

If you are behind a proxy, you can launch docker with this command:

sudo HTTP_PROXY=http://youruserproxy:yourpassword@theaddressproxy:theportproxy/ docker -d &

Guides







Container repos


Extra

  • boot2docker is a lightweight Linux distribution based on Tiny Core Linux made specifically to run Docker containers. It runs completely from RAM, weighs ~38MB and boots in ~5-6s (YMMV).
  • dockerlite lets you run Linux apps in lightweight, isolated environments, using LXC (Linux Containers). Using BTRFS snapshots, dockerlite can save the state of a given environment in a frozen "image", and later, create more environments ("containers") out of that image.
  • DockerUI is a web interface to interact with the Remote API. The goal is to provide a pure client side implementation so it is effortless to connect and manage docker.
  • Docker Desktop - Dockerfile that creates a docker image which, once executed, creates a container that runs X11 and SSH services. SSH is used to forward X11 and provide encrypted data communication between the docker container and your local machine. Xpra + Xephyr allow displaying the applications running inside the container, such as Firefox, LibreOffice, xterm, etc., with connection recovery capabilities.
  • Autodock is a docker automation tool. It automatically sorts servers in your Docker cluster by lowest load, then distributes the containers you want to create among them. After bootstrapping the containers with Paramiko and Salt, it saves this information to the etcd cluster. Autodock was designed to use saltstack and etcd for configuration management and replication. [24]


  • Portainer - provides a detailed overview of Docker and allows you to manage containers, images, networks and volumes. It is also really easy to deploy, you are just one Docker command away from running Portainer anywhere.


  • Fig - Fast, isolated development environments using Docker. [25]
  • Drone is a Continuous Integration platform built on Docker [26]


Toolbox

Recipes

Linux-VServer

  • Linux-VServer provides virtualization for GNU/Linux systems. This is accomplished by kernel level isolation. It allows running multiple virtual units at once. Those units are sufficiently isolated to guarantee the required security, but utilize available resources efficiently, as they run on the same kernel.

OpenVZ

  • OpenVZ is container-based virtualization for Linux. OpenVZ creates multiple secure, isolated Linux containers (otherwise known as VEs or VPSs) on a single physical server enabling better server utilization and ensuring that applications do not conflict. Each container performs and executes exactly like a stand-alone server; a container can be rebooted independently and have root access, users, IP addresses, memory, processes, files, applications, system libraries and configuration files.
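
A sketch of the container lifecycle with vzctl (the CTID and template name are examples):

# create container 101 from an OS template, then start and enter it
vzctl create 101 --ostemplate debian-7.0-x86_64
vzctl start 101
vzctl enter 101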

CoreOS

  • CoreOS is Linux for the container world. Linux kernel + systemd. That's about it. CoreOS has just enough bits to run containers, but does not ship a package manager itself. In fact, the root partition is completely read-only, to guarantee consistency and make updates reliable. Use docker as a package manager to build and push your app. The primitive of an application is a container, not a one-off package. Build containers using docker, by hand, or however you see fit! [31]

Lguest

Solaris Containers / Zones

systemd-nspawn

  • https://wiki.archlinux.org/index.php/systemd-nspawn - like the chroot command, but it is a chroot on steroids. systemd-nspawn may be used to run a command or OS in a light-weight namespace container. It is more powerful than chroot since it fully virtualizes the file system hierarchy, as well as the process tree, the various IPC subsystems and the host and domain name. systemd-nspawn limits access to various kernel interfaces in the container to read-only, such as /sys, /proc/sys or /sys/fs/selinux. Network interfaces and the system clock may not be changed from within the container. Device nodes may not be created. The host system cannot be rebooted and kernel modules may not be loaded from within the container. This mechanism differs from Lxc-systemd or Libvirt-lxc, as it is a much simpler tool to configure.
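
For example, to run against a directory tree (the path is an assumption):

# start a shell in the tree, much like chroot
sudo systemd-nspawn -D /var/lib/machines/debian
# or boot it as a full system with its own init
sudo systemd-nspawn -bD /var/lib/machines/debian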

rkt

CoreOS container runtime


Other



  • runC - a CLI tool for spawning and running containers according to the OCP (now OCI) specification. The code can be found on GitHub. Containers are started as a child process of runC and can be embedded into various other systems without having to run a Docker daemon. runC is built on libcontainer, the same container technology powering millions of Docker Engine installations. Docker images can be run with runC. [34]
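
A minimal usage sketch, assuming a bundle directory containing a rootfs/ tree:

# generate a template config.json in the bundle directory
runc spec
# run a container from the bundle
sudo runc run mycontainer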


  • ZeroVM is an open source virtualization technology that is based on the Chromium Native Client (NaCl) project. ZeroVM creates a secure and isolated execution environment which can run a single thread or application. ZeroVM is designed to be lightweight, portable, and can easily be embedded inside of existing storage systems. This functionality allows developers to push their application to their data instead of having to pull their data to their application, as is the case with more traditional architectures.


  • Jailhouse is a partitioning Hypervisor based on Linux. It is able to run bare-metal applications or (adapted) operating systems besides Linux. For this purpose it configures CPU and device virtualization features of the hardware platform in a way that none of these domains, called "cells" here, can interfere with each other in an unacceptable way. Jailhouse is optimized for simplicity rather than feature richness. Unlike full-featured Linux-based hypervisors like KVM or Xen, Jailhouse does not support overcommitment of resources like CPUs, RAM or devices. It performs no scheduling and only virtualizes in software those resources that are essential for a platform and cannot be partitioned in hardware.


  • Rump kernels enable you to build the software stack you need without forcing you to reinvent the wheel. The key observation is that a software stack needs driver-like components which are conventionally tightly-knit into operating systems — even if you do not desire the limitations and infrastructure overhead of a given OS, you do need drivers.

We solve the problem by providing free, reusable, componentized, kernel quality drivers such as file systems, POSIX system calls, PCI device drivers and TCP/IP and SCSI protocol stacks. As a production-ready example, we offer the Rumprun unikernel, which clocks in at a few thousand lines of code plus rump kernel components, and supports POSIX'y software directly on both raw hardware and cloud hypervisors such as KVM and Xen.



  • HermiTux - a unikernel: a minimal operating system with low memory/disk footprint and sub-second boot time, executing an application within a single address space on top of a hypervisor. Moreover, HermiTux is binary-compatible with Linux: it can run native Linux executables. [36]


  • https://wiki.archlinux.org/index.php/Proot - proot is a program that implements functionality similar to GNU/Linux's chroot, mount --bind, and binfmt_misc in user-space, allowing an unprivileged user to execute programs with an alternative root directory, much like a chroot "jail". This is useful in cases where a chroot is not possible due to a lack of root privileges.


  • Virtio - a virtualization standard for network and disk device drivers where just the guest's device driver "knows" it is running in a virtual environment, and cooperates with the hypervisor. This enables guests to get high performance network and disk operations, and gives most of the performance benefits of paravirtualization. Note that virtio is different from, but architecturally similar to, Xen paravirtualized device drivers (such as the ones that you can install in a Windows guest to make it go faster under Xen). Also similar is VMware's Guest Tools.
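
With QEMU/KVM, for example, a guest uses virtio by selecting virtio device models (the image path is an example):

# attach the disk and NIC as virtio devices instead of emulated IDE/e1000
qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=disk.img,if=virtio \
  -netdev user,id=net0 -device virtio-net-pci,netdev=net0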

Firecracker

  • Firecracker - an open source virtualization technology that is purpose-built for creating and managing secure, multi-tenant containers and functions-based services.
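
Firecracker is driven over a REST API on a unix socket; a rough sketch based on its getting-started flow (socket and kernel paths are assumptions):

# start the VMM with its API socket
firecracker --api-sock /tmp/firecracker.socket &
# point it at a kernel, then boot the microVM
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/boot-source' \
  -H 'Content-Type: application/json' \
  -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0"}'
curl --unix-socket /tmp/firecracker.socket -X PUT 'http://localhost/actions' \
  -H 'Content-Type: application/json' \
  -d '{"action_type": "InstanceStart"}'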




gVisor


Kernel level

binfmt_misc


  • https://github.com/skilbjo/exagear - a virtual machine that translates x86 instructions to an ARM instruction set. However, exagear is meant for a Debian system, not Arch Linux.


Desktop level

  • http://en.wikipedia.org/wiki/Desktop_virtualization - software technology that separates the desktop environment and associated application software from the physical client device that is used to access it. Desktop virtualization can be used in conjunction with application virtualization and user profile management systems, now termed "user virtualization," to provide a comprehensive desktop environment management system. In this mode, all the components of the desktop are virtualized, which allows for a highly flexible and much more secure desktop delivery model. In addition, this approach supports a more complete desktop disaster recovery strategy as all components are essentially saved in the data center and backed up through traditional redundant maintenance systems. If a user's device or hardware is lost, the restore is straightforward and simple, because the components will be present at login from another device. In addition, because no data is saved to the user's device, if that device is lost, there is much less chance that any critical data can be retrieved and compromised.


Networking

See also Network, Remote

RAM

  • bootrino - a browser-based ReactJS application that makes it easy to launch Run-From-RAM operating systems on cloud servers, netbooting them on Amazon EC2, Google Compute Engine and Digital Ocean. A Run-From-RAM operating system boots initially from hard disk, but once booted does not need the hard disk at all. Examples include Tiny Core Linux, Alpine Linux, Yocto Linux and unikernels.