OS-Level Virtualisation: History

OS-level virtualization refers to an operating system paradigm in which the kernel allows the existence of multiple isolated user-space instances. Such instances, called containers (Solaris, Docker), Zones (Solaris), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels (DragonFly BSD) or jails (FreeBSD jail or chroot jail), may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. Programs running inside a container, however, can see only the container's contents and the devices assigned to it. On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the currently running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers. While the term "container" most popularly refers to OS-level virtualization systems, it is sometimes used ambiguously for fuller virtual machine environments that operate in varying degrees of concert with the host OS, e.g. Microsoft's "Hyper-V Containers."

  • virtual machine
  • virtual environments
  • virtualization

1. Operation

On ordinary operating systems for personal computers, a computer program can see (even though it might not be able to access) all the system's resources. They include:

  1. Hardware capabilities that can be employed, such as the CPU and the network connection
  2. Data that can be read or written, such as files, folders and network shares
  3. Connected peripherals it can interact with, such as webcam, printer, scanner, or fax
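Some of these resources can be enumerated directly from a language's standard library. A minimal Python sketch (illustrative only — real programs can query far more, including devices and network shares):

```python
import os
import shutil
import socket

def enumerate_resources():
    """Collect a few of the resources an ordinary process can observe."""
    return {
        "cpu_count": os.cpu_count(),          # hardware capability
        "hostname": socket.gethostname(),     # network identity
        "root_disk": shutil.disk_usage("/"),  # storage visible at the root
        "environment_vars": len(os.environ),  # configuration the OS exposes
    }

if __name__ == "__main__":
    for name, value in enumerate_resources().items():
        print(name, "->", value)
```

Run inside a container, the same calls succeed but report only what the container was given, e.g. a restricted CPU count or a private root file system.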

The operating system may allow or deny access to such resources based on which program requests them and on the user account in whose context it runs. The operating system may also hide resources, so that when the computer program enumerates them, they do not appear in the enumeration results. Nevertheless, from a programming point of view, the program has still interacted with those resources, and the operating system has mediated that interaction.

With operating-system-level virtualization, or containerization, it is possible to run programs within containers, to which only parts of these resources are allocated. A program expecting to see the whole computer, once run inside a container, can see only the allocated resources and believes them to be all that is available. Several containers can be created on each operating system, and a subset of the computer's resources is allocated to each of them. Each container may contain any number of computer programs, which may run concurrently or separately and may even interact with one another.
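The kernel-enforced resource ceiling can be illustrated with classic per-process rlimits. These are much coarser than the cgroup controls container runtimes actually use, but the principle is the same: the process asks as usual, and the kernel refuses anything beyond its allocation. A hedged sketch (Unix-only; limit and allocation sizes are arbitrary):

```python
import multiprocessing
import resource

def limited_task(queue):
    # Cap this process's address space at 1 GiB. rlimits are per-process
    # and simpler than cgroups, but the kernel enforces the ceiling the
    # same way: oversized requests fail inside the confined process.
    limit = 1 * 1024 ** 3
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
    try:
        _ = bytearray(4 * 1024 ** 3)  # try to grab 4 GiB
        queue.put("allocated")
    except MemoryError:
        queue.put("denied")

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=limited_task, args=(queue,))
    worker.start()
    worker.join()
    print(queue.get())
```

The rest of the system is unaffected: processes outside the limited one can still allocate freely, which is exactly the isolation property containers generalize.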

Containerization has similarities to application virtualization: in the latter, only one computer program is placed in an isolated container, and the isolation applies to the file system only.
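The apparent-root idea behind chroot-style file system isolation can be sketched as path re-rooting: every path the confined program uses is resolved against a designated directory, and `..` sequences cannot climb above it. A toy illustration only (the directory names are hypothetical, and the real mechanism is enforced by the kernel, not by path rewriting):

```python
import posixpath

def rerooted(root, path):
    """Resolve `path` as a confined process would see it under `root`.

    Absolute paths and `..` sequences cannot escape above `root`.
    Purely illustrative; not a security boundary.
    """
    # Normalize the requested path as if "/" were the apparent root...
    inside = posixpath.normpath("/" + path.lstrip("/"))
    # ...then graft it onto the actual root directory.
    return posixpath.join(root, inside.lstrip("/"))

# A confined program asking for /etc/passwd actually reads the copy
# inside its apparent root, even if it tries to climb out:
print(rerooted("/srv/jail", "/etc/passwd"))       # /srv/jail/etc/passwd
print(rerooted("/srv/jail", "../../etc/passwd"))  # /srv/jail/etc/passwd
```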

2. Uses

Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources among a large number of mutually distrusting users. System administrators may also use it to consolidate server hardware by moving services hosted on separate machines into containers on a single server.

Other typical scenarios include separating programs into separate containers for improved security, hardware independence, and added resource-management features. The improved security provided by the chroot mechanism alone, however, is nowhere near ironclad.[1] Operating-system-level virtualization implementations capable of live migration can also be used for dynamic load balancing of containers between nodes in a cluster.

2.1. Overhead

Operating-system-level virtualization usually imposes less overhead than full virtualization because programs in virtual partitions use the operating system's normal system call interface and do not need to be subjected to emulation or be run in an intermediate virtual machine, as is the case with full virtualization (such as VMware ESXi, QEMU or Hyper-V) and paravirtualization (such as Xen or UML). This form of virtualization also does not require hardware support for efficient performance.

2.2. Flexibility

Operating-system-level virtualization is not as flexible as other virtualization approaches since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted.

Solaris partially overcomes the limitation described above with its branded zones feature, which provides the ability to run an environment within a container that emulates an older Solaris 8 or 9 version in a Solaris 10 host. Linux branded zones (referred to as "lx" branded zones) are also available on x86-based Solaris systems, providing a complete Linux userspace and support for the execution of Linux applications; additionally, Solaris provides utilities needed to install Red Hat Enterprise Linux 3.x or CentOS 3.x Linux distributions inside "lx" zones.[2][3] However, in 2010 Linux branded zones were removed from Solaris; in 2014 they were reintroduced in illumos, the open-source Solaris fork, with support for 32-bit Linux kernels.[4]

2.3. Storage

Some implementations provide file-level copy-on-write (CoW) mechanisms. (Most commonly, a standard file system is shared between partitions, and those partitions that change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.

3. Implementations

| Mechanism | Operating system | License | Available since | File system isolation | Copy on write | Disk quotas | I/O rate limiting | Memory limits | CPU quotas | Network isolation | Nested virtualization | Checkpointing and live migration | Root privilege isolation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| chroot | Most UNIX-like operating systems | Varies by operating system | 1982 | Partial[5] | No | No | No | No | No | No | Yes | No | No |
| Docker | Linux,[6] FreeBSD,[7] Windows x64 (Pro, Enterprise and Education),[8] macOS[9] | Apache License 2.0 | 2013 | Yes | Yes | Not directly | Yes (since 1.10) | Yes | Yes | Yes | Yes | Only in experimental mode with CRIU | Yes (since 1.10) |
| Linux-VServer (security context) | Linux, Windows Server 2016 | GNU GPLv2 | 2001 | Yes | Yes | Yes | Yes[10] | Yes | Yes | Partial[11] | ? | No | Partial[12] |
| lmctfy | Linux | Apache License 2.0 | 2013 | Yes | Yes | Yes | Yes[10] | Yes | Yes | Partial[11] | ? | No | Partial[12] |
| LXC | Linux | GNU GPLv2 | 2008 | Yes[13] | Yes | Partial[14] | Partial[15] | Yes | Yes | Yes | Yes | Yes | Yes[13] |
| Singularity | Linux | BSD License | 2015[16] | Yes[17] | Yes | Yes | No | No | No | No | No | No | Yes[18] |
| OpenVZ | Linux | GNU GPLv2 | 2005 | Yes | Yes[19] | Yes | Yes[20] | Yes | Yes | Yes[21] | Partial[22] | Yes | Yes[23] |
| Virtuozzo | Linux, Windows | Trialware | 2000[24] | Yes | Yes | Yes | Yes[25] | Yes | Yes | Yes[21] | Partial[26] | Yes | Yes |
| Solaris Containers (Zones) | illumos (OpenSolaris), Solaris | CDDL, proprietary | 2004 | Yes | Yes (ZFS) | Yes | Partial[27] | Yes | Yes | Yes[28][29][30] | Partial[31] | Partial[32][33] | Yes[34] |
| FreeBSD jail | FreeBSD, DragonFly BSD | BSD License | 2000[35] | Yes | Yes (ZFS) | Yes[36] | Yes | Yes[37] | Yes | Yes[38] | Yes | Partial[39][40] | Yes[41] |
| vkernel | DragonFly BSD | BSD License | ? | Yes[42] | Yes[42] | N/A | ? | Yes[43] | Yes[43] | Yes[44] | ? | ? | Yes |
| sysjail | OpenBSD, NetBSD | BSD License | 2006–2009 | Yes | No | No | No | No | No | Yes | No | No | ? |
| WPARs | AIX | Commercial proprietary software | 2007 | Yes | No | Yes | Yes | Yes | Yes | Yes[45] | No | Yes[46] | ? |
| iCore Virtual Accounts | Windows XP | Freeware | 2008 | Yes | No | Yes | No | No | No | No | ? | No | ? |
| Sandboxie | Windows | Trialware | 2004 | Yes | Yes | Partial | No | No | No | Partial | No | No | Yes |
| systemd-nspawn | Linux | GNU LGPLv2.1+ | 2010 | Yes | Yes | Yes[47][48] | Yes[47][48] | Yes[47][48] | Yes[47][48] | Yes | ? | ? | Yes |
| Turbo | Windows | Freemium | 2012 | Yes | No | No | No | No | No | Yes | No | No | Yes |
| rkt | Linux | Apache License 2.0 | 2014[49] | ? | ? | ? | ? | ? | ? | ? | ? | ? | ? |

The content is sourced from: https://handwiki.org/wiki/OS-level_virtualisation

References

  1. Korff, Yanek; Hope, Paco; Potter, Bruce (2005). Mastering FreeBSD and OpenBSD Security. O'Reilly Series. O'Reilly Media, Inc.. p. 59. ISBN 0596006268. https://books.google.com/books?id=gqKwaHmXp4YC&pg=PA59. 
  2. "System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones, Chapter 16: Introduction to Solaris Zones". Oracle Corporation. 2010. http://docs.oracle.com/cd/E19044-01/sol.containers/817-1592/zones.intro-1/index.html. Retrieved 2014-09-02. 
  3. "System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones, Chapter 31: About Branded Zones and the Linux Branded Zone". Oracle Corporation. 2010. http://docs.oracle.com/cd/E19044-01/sol.containers/817-1592/gchhy/index.html. Retrieved 2014-09-02. 
  4. Bryan Cantrill (2014-09-28). "The dream is alive! Running Linux containers on an illumos kernel". http://www.slideshare.net/bcantrill/illumos-lx. Retrieved 2014-10-10. 
  5. Root user can easily escape from chroot. Chroot was never supposed to be used as a security mechanism.[6]
  6. "Docker drops LXC as default execution environment". InfoQ. http://www.infoq.com/news/2014/03/docker_0_9. 
  7. "Docker comes to FreeBSD". https://www.freebsdnews.com/2015/07/09/docker-freebsd/. 
  8. "Get started with Docker for Windows". Docker. https://docs.docker.com/docker-for-windows/. 
  9. "Get started with Docker for Mac". https://docs.docker.com/docker-for-mac/. 
  10. Utilizing the CFQ scheduler, there is a separate queue per guest.
  11. Networking is based on isolation, not virtualization.
  12. A total of 14 user capabilities are considered safe within a container. The rest cannot be granted to processes within that container without allowing that process to potentially interfere with things outside that container.[11]
  13. Graber, Stéphane (1 January 2014). "LXC 1.0: Security features [6/10]". https://www.stgraber.org/2014/01/01/lxc-1-0-security-features/. Retrieved 12 February 2014. "LXC now has support for user namespaces. [...] LXC is no longer running as root so even if an attacker manages to escape the container, he'd find himself having the privileges of a regular user on the host"
  14. Disk quotas per container are possible when using separate partitions for each container with the help of LVM, or when the underlying host filesystem is btrfs, in which case btrfs subvolumes are automatically used.
  15. I/O rate limiting is supported when using Btrfs.
  16. ""Sylabs brings Singularity containers into commercial HPC"". https://www.top500.org/news/sylabs-brings-singularity-containers-into-commercial-hpc/. 
  17. ""SIF — Containing Your Containers"". https://www.sylabs.io/2018/03/sif-containing-your-containers/. 
  18. ""Singularity: Scientific containers for mobility of compute"". http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0177459. 
  19. Bronnikov, Sergey. "Comparison on OpenVZ wiki page". OpenVZ. https://wiki.openvz.org/Comparison. Retrieved 28 December 2018. 
  20. Available since Linux kernel 2.6.18-028stable021. Implementation is based on CFQ disk I/O scheduler, but it is a two-level schema, so I/O priority is not per-process, but rather per-container.[17]
  21. Each container can have its own IP addresses, firewall rules, routing tables and so on. Three different networking schemes are possible: route-based, bridge-based, and assigning a real network device (NIC) to a container.
  22. Docker containers can run inside OpenVZ containers.[18]
  23. Each container may have root access without possibly affecting other containers.[19]
  24. "Initial public prerelease of Virtuozzo (named ASPcomplete at that time)". http://www.paul.sladen.org/vserver/aspcomplete/2000-08-25/ve-0.4.2-for-2.4.0-test6.diff.gz. 
  25. Available since version 4.0, January 2008.
  26. Docker containers can run inside Virtuozzo containers.[21]
  27. Yes with illumos[22]
  28. See OpenSolaris Network Virtualization and Resource Control for more details.
  29. Network Virtualization and Resource Control (Crossbow) FAQ http://www.opensolaris.org/os/project/crossbow/faq/
  30. "Managing Network Virtualization and Network Resources in Oracle® Solaris 11.2". http://docs.oracle.com/cd/E36784_01/html/E36813/index.html. 
  31. Only when top level is a KVM zone (illumos) or a kz zone (Oracle).
  32. Starting in Solaris 11.3 Beta, Solaris Kernel Zones may use live migration.
  33. Cold migration (shutdown-move-restart) is implemented.
  34. Non-global zones are restricted so they may not affect other zones via a capability-limiting approach. The global zone may administer the non-global zones.[25]
  35. "Contain your enthusiasm - Part Two: Jails, Zones, OpenVZ, and LXC". http://www.cybera.ca/news-and-events/tech-radar/contain-your-enthusiasm-part-two-jails-zones-openvz-and-lxc/. "Jails were first introduced in FreeBSD 4.0 in 2000" 
  36. Check the "allow.quotas" option and the "Jails and File Systems" section on the FreeBSD jail man page for details. http://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8
  37. "Hierarchical_Resource_Limits - FreeBSD Wiki". Wiki.freebsd.org. 2012-10-27. http://wiki.freebsd.org/Hierarchical_Resource_Limits. Retrieved 2014-01-15. 
  38. "Implementing a Clonable Network Stack in the FreeBSD Kernel". usenix.org. 2003-06-13. http://static.usenix.org/publications/library/proceedings/usenix03/tech/freenix03/full_papers/zec/zec.pdf. 
  39. "VPS for FreeBSD". http://www.7he.at/freebsd/vps/. Retrieved 2016-02-20. 
  40. "Announcement: VPS // OS Virtualization // alpha release". https://forums.freebsd.org/threads/34284/. Retrieved 2016-02-20. 
  41. "3.5. Limiting your program's environment". Freebsd.org. http://www.freebsd.org/doc/en/books/developers-handbook/secure-chroot.html. Retrieved 2014-01-15. 
  42. "vkd(4) — Virtual Kernel Disc". DragonFly BSD. http://mdoc.su/d/vkd.4. ""treats the disk image as copy-on-write."" 
  43. Sascha Wildner (2007-01-08). "vkernel, vcd, vkd, vke — virtual kernel architecture". DragonFly Miscellaneous Information Manual. DragonFly BSD. http://bxr.su/d/share/man/man7/vkernel.7. 
  44. "vke(4) — Virtual Kernel Ethernet". DragonFly BSD. http://mdoc.su/d/vke.4. 
  45. Available since TL 02.[35]
  46. Live Application Mobility in AIX 6.1 http://www.ibm.com/developerworks/aix/library/au-aix61mobility/?ca=dgr-btw77liveappmobile61&S_TACT=105AGX59&S_CMP=GR
  47. https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html#--property=
  48. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/resource_management_guide/sec-modifying_control_groups
  49. Polvi, Alex. "CoreOS is building a container runtime, rkt". https://coreos.com/blog/rocket.html. Retrieved 12 March 2019. 