{
	"id": "133085d7-9173-4b1a-8ba1-0d111be80b21",
	"created_at": "2026-04-06T00:14:08.637028Z",
	"updated_at": "2026-04-10T03:33:57.008938Z",
	"deleted_at": null,
	"sha1_hash": "962699d8c39a8091bcb4305e0dfc7e115690857e",
	"title": "Hypervisor",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 203122,
	"plain_text": "Hypervisor\r\nBy Contributors to Wikimedia projects\r\nPublished: 2004-12-11 · Archived: 2026-04-05 16:02:57 UTC\r\nFrom Wikipedia, the free encyclopedia\r\nA hypervisor, also known as a virtual machine monitor (VMM), is a type of computer software, firmware or\r\nhardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual\r\nmachines is called a host machine or virtualization server, and each virtual machine is called a guest machine. The\r\nhypervisor presents the guest operating systems with a virtual operating platform and manages the execution of\r\nthe guest operating systems. Unlike an emulator, the guest executes most instructions on the native hardware.[1]\r\nMultiple instances of a variety of operating systems may share the virtualized hardware resources: for example,\r\nLinux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating\r\nsystem-level virtualization, where all instances (usually called containers) must share a single kernel, though the\r\nguest operating systems can differ in user space, such as different Linux distributions with the same kernel.\r\nThe term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the\r\nhypervisor is the supervisor of the supervisors,[2] with hyper- used as a stronger variant of super-.\r\n[a]\r\n The term\r\ndates to circa 1970;[3] IBM coined it for software that ran OS/360 and the 7090 emulator concurrently on the\r\n360/65[4] and later used it for the DIAG handler of CP-67. In the earlier CP/CMS (1967) system, the term Control\r\nProgram was used instead.\r\nSome literature, especially in microkernel contexts, makes a distinction between hypervisor and virtual machine\r\nmonitor (VMM). There, both components form the overall virtualization stack of a certain system. Hypervisor\r\nrefers to kernel-space functionality and VMM to user-space functionality. 
Specifically in these contexts, a\r\nhypervisor is a microkernel implementing virtualization infrastructure that must run in kernel-space for technical\r\nreasons, such as Intel VMX. Microkernels implementing virtualization mechanisms are also referred to as\r\nmicrohypervisors.[5][6] Applying this terminology to Linux, KVM is a hypervisor and QEMU or Cloud Hypervisor\r\nare VMMs utilizing KVM as the hypervisor.[7]\r\nhttps://en.wikipedia.org/wiki/Hypervisor\r\nPage 1 of 7\n\nType-1 and type-2 hypervisors\r\nIn his 1973 thesis Architectural Principles for Virtual Computer Systems, Robert P. Goldberg classified two types\r\nof hypervisor:[1]\r\nType-1, native or bare-metal hypervisors\r\nThese hypervisors run directly on the host's hardware to control the hardware and to manage guest\r\noperating systems. For this reason, they are sometimes called bare-metal hypervisors. The first hypervisors,\r\nwhich IBM developed in the 1960s, were native hypervisors.[8] These included the test software SIMMON\r\nand the CP/CMS operating system, the predecessor of IBM's VM family of virtual machine operating\r\nsystems. Examples of Type-1 hypervisors include Hyper-V, Xen and VMware ESXi.\r\nType-2 or hosted hypervisors\r\nThese hypervisors run on a conventional operating system (OS) just as other computer programs do. A\r\nvirtual machine monitor runs as a process on the host, such as VirtualBox. Type-2 hypervisors abstract\r\nguest operating systems from the host operating system, effectively creating an isolated system that the\r\nhost can interact with. Examples of Type-2 hypervisors include VirtualBox and VMware Workstation.\r\nThe distinction between these two types is not always clear. 
For instance, KVM and bhyve are kernel modules[9]\r\nthat effectively convert the host operating system to a type-1 hypervisor.[10]\r\nThe first hypervisors providing full virtualization were the test tool SIMMON and the one-off IBM CP-40\r\nresearch system, which began production use in January 1967 and became the first version of the IBM CP/CMS\r\noperating system. CP-40 ran on an S/360-40 modified at the Cambridge Scientific Center to support dynamic\r\naddress translation, a feature that enabled virtualization. Prior to this time, computer hardware had only been\r\nvirtualized to the extent needed to allow multiple user applications to run concurrently, such as in CTSS and IBM\r\nM44/44X. With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating\r\nsystems to run concurrently in separate virtual machine contexts.\r\nProgrammers soon reimplemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer\r\nsystem capable of full virtualization. IBM shipped this machine in 1966; it included page-translation-table\r\nhardware for virtual memory and other techniques that allowed a full virtualization of all kernel tasks, including\r\nI/O and interrupt handling. (The \"official\" operating system, the ill-fated TSS/360, did not employ full\r\nvirtualization.) Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers\r\nfrom 1968 to the early 1970s, in source code form without support.\r\nCP/CMS formed part of IBM's attempt to build robust time-sharing systems for its mainframe computers. By\r\nrunning multiple operating systems concurrently, the hypervisor increased system robustness and stability: Even if\r\none operating system crashed, the others would continue working without interruption. 
Indeed, this even allowed\r\nbeta or experimental versions of operating systems—or even of new hardware[11]—to be deployed and debugged,\r\nwithout jeopardizing the stable main production system, and without requiring costly additional development\r\nsystems.\r\nIBM announced its System/370 series in 1970 without the virtual memory feature needed for virtualization, but\r\nadded it in the August 1972 Advanced Function announcement. Virtualization has been featured in all successor\r\nsystems, such that all modern-day IBM mainframes, including the zSeries line, retain backward compatibility with\r\nthe 1960s-era IBM S/360 line. The 1972 announcement also included VM/370, a reimplementation of CP/CMS\r\nfor the S/370. Unlike CP/CMS, IBM provided support for this version (though it was still distributed in source\r\ncode form for several releases). VM stands for Virtual Machine, emphasizing that all, not just some, of the\r\nhardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by\r\nuniversities, corporate users, and time-sharing vendors, as well as within IBM. Users played an active role in\r\nongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed\r\nand bitter battles[citation needed], time-sharing lost out to batch processing through IBM political infighting, and\r\nVM remained IBM's \"other\" mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of\r\npopularity and support from 2000 as the z/VM product, for example as the platform for Linux on IBM Z.\r\nAs mentioned above, the VM control program includes a hypervisor-call handler that intercepts DIAG\r\n(\"Diagnose\", opcode x'83') instructions used within a virtual machine. This provides fast-path non-virtualized\r\nexecution of file-system access and other operations (DIAG is a model-dependent privileged instruction, not used\r\nin normal programming, and thus is not virtualized. 
It is therefore available for use as a signal to the \"host\"\r\noperating system). When first implemented in CP/CMS release 3.1, this use of DIAG provided an operating\r\nsystem interface that was analogous to the System/360 Supervisor Call instruction (SVC), but that did not require\r\naltering or extending the system's virtualization of SVC.\r\nIn 1985, IBM introduced the PR/SM hypervisor to manage logical partitions (LPAR).\r\nOperating system support\r\nSeveral factors led to a resurgence around 2005 in the use of virtualization technology among Unix, Linux, and\r\nother Unix-like operating systems:[12]\r\nExpanding hardware capabilities, allowing a single machine to do more simultaneous work\r\nEfforts to control costs and to simplify management through consolidation of servers\r\nThe need to control large multiprocessor and cluster installations, for example in server farms and render\r\nfarms\r\nThe improved security, reliability, and device independence possible from hypervisor architectures\r\nThe ability to run complex, OS-dependent applications in different hardware or OS environments\r\nThe ability to overprovision resources, fitting more applications onto a host\r\nMajor Unix vendors, including HP, IBM, SGI, and Sun Microsystems, have been selling virtualized hardware\r\nsince before 2000. These have generally been large, expensive systems (in the multimillion-dollar range at the\r\nhigh end), although virtualization has also been available on some low- and mid-range systems, such as IBM\r\npSeries servers, HP Superdome series machines, and Sun/Oracle SPARC T series CoolThreads servers.\r\nIBM provides virtualization partition technology known as logical partitioning (LPAR) on System/390, zSeries,\r\npSeries and IBM AS/400 systems. For IBM's Power Systems, the POWER Hypervisor (PHYP) is a native (bare-metal) hypervisor in firmware and provides isolation between LPARs. 
Processor capacity is provided to LPARs in\r\neither a dedicated fashion or on an entitlement basis where unused capacity is harvested and can be re-allocated to\r\nbusy workloads. Groups of LPARs can have their processor capacity managed as if they were in a \"pool\" - IBM\r\nrefers to this capability as Multiple Shared-Processor Pools (MSPPs) and implements it in servers with the\r\nPOWER6 processor. LPAR and MSPP capacity allocations can be dynamically changed. Memory is allocated to\r\neach LPAR (at LPAR initiation or dynamically) and is address-controlled by the POWER Hypervisor. For real-mode addressing by operating systems (AIX, Linux, IBM i), the Power processors (POWER4 onwards) have\r\nbuilt-in virtualization capabilities whereby a hardware address-offset is evaluated together with the OS address-offset to\r\narrive at the physical memory address. Input/Output (I/O) adapters can be exclusively \"owned\" by LPARs or\r\nshared by LPARs through an appliance partition known as the Virtual I/O Server (VIOS). The Power Hypervisor\r\nprovides for high levels of reliability, availability and serviceability (RAS) by facilitating hot add/replace of\r\nmultiple parts (model dependent: processors, memory, I/O adapters, blowers, power units, disks, system\r\ncontrollers, etc.).\r\nHPE provides HP Integrity Virtual Machines (Integrity VM) to host multiple operating systems on their Itanium-powered\r\nIntegrity systems. Itanium can run HP-UX, Linux, Windows and OpenVMS, and these environments are\r\nalso supported as virtual servers on HP's Integrity VM platform. The HP-UX operating system hosts the Integrity\r\nVM hypervisor layer that allows multiple HP-UX features to be leveraged and provides major\r\ndifferentiation between this platform and other commodity platforms - such as processor hotswap, memory\r\nhotswap, and dynamic kernel updates without a system reboot. 
While it heavily leverages HP-UX, the Integrity VM\r\nhypervisor is really a hybrid that runs on bare-metal while guests are executing. Running normal HP-UX\r\napplications on an Integrity VM host is heavily discouraged,[by whom?] because Integrity VM implements its own\r\nmemory management, scheduling and I/O policies that are tuned for virtual machines and are not as effective for\r\nnormal applications. HPE also provides more rigid partitioning of their Integrity and HP9000 systems by way of\r\nVPAR and nPar technology, the former offering shared resource partitioning and the latter offering complete I/O\r\nand processing isolation. The flexibility of the virtual server environment (VSE) has led to its more\r\nfrequent use in newer deployments.[citation needed]\r\nAlthough Solaris has always been the only guest domain OS officially supported by Sun/Oracle on their Logical\r\nDomains hypervisor, as of late 2006, Linux (Ubuntu and Gentoo) and FreeBSD have been ported to run on top of\r\nthe hypervisor (and can all run simultaneously on the same processor, as fully virtualized independent guest\r\nOSes). Wind River \"Carrier Grade Linux\" also runs on Sun's Hypervisor.[13] Full virtualization on SPARC\r\nprocessors proved straightforward: since its inception in the mid-1980s, Sun deliberately kept the SPARC\r\narchitecture clean of artifacts that would have impeded virtualization. (Compare with virtualization on x86\r\nprocessors below.)[14]\r\nSimilar trends have occurred with x86/x86-64 server platforms, where open-source projects such as Xen have led\r\nvirtualization efforts. 
These include hypervisors built on Linux and Solaris kernels as well as custom kernels.\r\nSince these technologies span from large systems down to desktops, they are described in the next section.\r\nx86 virtualization was introduced in the 1990s, with its emulation being included in Bochs.[15] Intel and AMD\r\nreleased their first x86 processors with hardware virtualization in 2005 with Intel VT-x (code-named Vanderpool)\r\nand AMD-V (code-named Pacifica).\r\nAn alternative approach requires modifying the guest operating system to make a system call to the underlying\r\nhypervisor, rather than executing machine I/O instructions that the hypervisor simulates. This is called\r\nparavirtualization in Xen, a \"hypercall\" in Parallels Workstation, and a \"DIAGNOSE code\" in IBM VM. Some\r\nmicrokernels, such as Mach and L4, are flexible enough to allow paravirtualization of guest operating systems.\r\nEmbedded hypervisors, targeting embedded systems and certain real-time operating system (RTOS) environments,\r\nare designed with different requirements when compared to desktop and enterprise systems, including robustness,\r\nsecurity and real-time capabilities. The resource-constrained nature of many embedded systems, especially\r\nbattery-powered mobile systems, imposes a further requirement for a small memory size and low overhead. Finally,\r\nin contrast to the ubiquity of the x86 architecture in the PC world, the embedded world uses a wider variety of\r\narchitectures and less standardized environments. Support for virtualization requires memory protection (in the\r\nform of a memory management unit or at least a memory protection unit) and a distinction between user mode and\r\nprivileged mode, which rules out most microcontrollers. 
This still leaves x86, MIPS, ARM and PowerPC as\r\nwidely deployed architectures on medium- to high-end embedded systems.[16]\r\nAs manufacturers of embedded systems usually have the source code to their operating systems, they have less\r\nneed for full virtualization in this space. Instead, the performance advantages of paravirtualization usually make it\r\nthe virtualization technology of choice. Nevertheless, ARM and MIPS have recently added full\r\nvirtualization support as an IP option and have included it in their latest high-end processors and architecture\r\nversions, such as ARM Cortex-A15 MPCore and ARMv8 EL2.\r\nOther differences between virtualization in server/desktop and embedded environments include requirements for\r\nefficient sharing of resources across virtual machines, high-bandwidth, low-latency inter-VM communication, a\r\nglobal view of scheduling and power management, and fine-grained control of information flows.[17]\r\nSecurity implications\r\nMalware and rootkits can install themselves as a hypervisor below the\r\noperating system, a technique known as hyperjacking, which can make them more difficult to detect because the malware can\r\nintercept any operations of the operating system (such as someone entering a password) without the anti-malware\r\nsoftware necessarily detecting it (since the malware runs below the entire operating system). Implementation of\r\nthe concept has allegedly occurred in the SubVirt laboratory rootkit (developed jointly by Microsoft and\r\nUniversity of Michigan researchers[18]) as well as in the Blue Pill malware package. 
However, such assertions\r\nhave been disputed by others who claim that it would be possible to detect the presence of a hypervisor-based\r\nrootkit.[19]\r\nIn 2009, researchers from Microsoft and North Carolina State University demonstrated a hypervisor-layer anti-rootkit called Hooksafe that can provide generic protection against kernel-mode rootkits.[20]\r\nSee also\r\nComparison of platform virtualization software\r\nOS-level virtualization\r\nVirtual memory\r\nNotes\r\na. ^ super- is from Latin, meaning \"above\", while hyper- is from the cognate term in Ancient Greek (ὑπέρ-),\r\nalso meaning above or over.\r\nReferences\r\n1. ^ Goldberg, Robert P. (1973). Architectural Principles for Virtual Computer Systems (PDF)\r\n(Technical report). Harvard University. ESD-TR-73-105.\r\n2. ^ Bernard Golden (2011). Virtualization For Dummies. p. 54.\r\n3. ^ \"How did the term 'hypervisor' come into use?\".\r\n4. ^ Gary R. Allred (May 1971). System/370 integrated emulation under OS and DOS (PDF). 1971 Spring\r\nJoint Computer Conference. Vol. 38. AFIPS Press. p. 164. doi:10.1109/AFIPS.1971.58. Retrieved June 12,\r\n2022.\r\n5. ^ Steinberg, Udo; Kauer, Bernhard (2010). \"NOVA: A Microhypervisor-Based Secure Virtualization\r\nArchitecture\" (PDF). Proceedings of the 2010 ACM European Conference on Computer Systems (EuroSys\r\n2010). Paris, France. Retrieved August 27, 2024.\r\n6. ^ \"Hedron Microkernel\". GitHub. Cyberus Technology. Retrieved August 27, 2024.\r\n7. ^ \"Cloud Hypervisor\". GitHub. Cloud Hypervisor Project. Retrieved August 27, 2024.\r\n8. ^ Meier, Shannon (2008). \"IBM Systems Virtualization: Servers, Storage, and Software\" (PDF). pp. 2, 15,\r\n20. Retrieved December 22, 2015.\r\n9. ^ Dexter, Michael. \"Hands-on bhyve\". CallForTesting.org. Retrieved September 24, 2013.\r\n10. ^ Graziano, Charles (2011). A performance analysis of Xen and KVM hypervisors for hosting the Xen\r\nWorlds Project (MS thesis). Iowa State University. 
doi:10.31274/etd-180810-2322.\r\nhdl:20.500.12876/26405. Retrieved October 16, 2022.\r\n11. ^ See History of CP/CMS for virtual-hardware simulation in the development of the System/370.\r\n12. ^ Loftus, Jack (December 19, 2005). \"Xen virtualization quickly becoming open source 'killer app'\".\r\nTechTarget. Retrieved October 26, 2015.\r\n13. ^ \"Wind River To Support Sun's Breakthrough UltraSPARC T1 Multithreaded Next-Generation Processor\".\r\nWind River Newsroom (Press release). Alameda, California. November 1, 2006. Archived from the original\r\non November 10, 2006. Retrieved October 26, 2015.\r\n14. ^ Fritsch, Lothar; Husseiki, Rani; Alkassar, Ammar. Complementary and Alternative Technologies to\r\nTrusted Computing (TC-Erg./-A.), Part 1, A study on behalf of the German Federal Office for Information\r\nSecurity (BSI) (PDF) (Report). Archived from the original (PDF) on June 7, 2020. Retrieved February 28,\r\n2011.\r\n15. ^ \"Introduction to Bochs\". bochs.sourceforge.io. Retrieved April 17, 2023.\r\n16. ^ Strobl, Marius (2013). Virtualization for Reliable Embedded Systems. Munich: GRIN Publishing GmbH.\r\npp. 5–6. ISBN 978-3-656-49071-5. Retrieved March 7, 2015.\r\n17. ^ Gernot Heiser (April 2008). \"The role of virtualization in embedded systems\". Proc. 1st Workshop on\r\nIsolation and Integration in Embedded Systems (IIES'08). pp. 11–16. Archived from the original on March\r\n21, 2012. Retrieved April 8, 2009.\r\n18. ^ \"SubVirt: Implementing malware with virtual machines\" (PDF). University of Michigan, Microsoft. April\r\n3, 2006. Retrieved September 15, 2008.\r\n19. ^ \"Debunking Blue Pill myth\". Virtualization.info. August 11, 2006. Archived from the original on\r\nFebruary 14, 2010. Retrieved December 10, 2010.\r\n20. ^ Wang, Zhi; Jiang, Xuxian; Cui, Weidong; Ning, Peng (August 11, 2009). \"Countering kernel rootkits with\r\nlightweight hook protection\". 
Proceedings of the 16th ACM conference on Computer and communications\r\nsecurity (PDF). CCS '09. Chicago, Illinois, USA: ACM. pp. 545–554. CiteSeerX 10.1.1.147.9928.\r\ndoi:10.1145/1653662.1653728. ISBN 978-1-60558-894-0. S2CID 3006492. Retrieved November 11, 2009.\r\nExternal links\r\nHypervisors and Virtual Machines: Implementation Insights on the x86 Architecture\r\nA Performance Comparison of Hypervisors, VMware\r\nSource: https://en.wikipedia.org/wiki/Hypervisor",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://en.wikipedia.org/wiki/Hypervisor"
	],
	"report_names": [
		"Hypervisor"
	],
	"threat_actors": [
		{
			"id": "655f7d0b-7ea6-4950-b272-969ab7c27a4b",
			"created_at": "2022-10-27T08:27:13.133291Z",
			"updated_at": "2026-04-10T02:00:05.315213Z",
			"deleted_at": null,
			"main_name": "BITTER",
			"aliases": [
				"T-APT-17"
			],
			"source_name": "MITRE:BITTER",
			"tools": [
				"ZxxZ"
			],
			"source_id": "MITRE",
			"reports": null
		},
		{
			"id": "77b28afd-8187-4917-a453-1d5a279cb5e4",
			"created_at": "2022-10-25T15:50:23.768278Z",
			"updated_at": "2026-04-10T02:00:05.266635Z",
			"deleted_at": null,
			"main_name": "Inception",
			"aliases": [
				"Inception Framework",
				"Cloud Atlas"
			],
			"source_name": "MITRE:Inception",
			"tools": [
				"PowerShower",
				"VBShower",
				"LaZagne"
			],
			"source_id": "MITRE",
			"reports": null
		},
		{
			"id": "bf6cb670-bb69-473f-a220-97ac713fd081",
			"created_at": "2022-10-25T16:07:23.395205Z",
			"updated_at": "2026-04-10T02:00:04.578924Z",
			"deleted_at": null,
			"main_name": "Bitter",
			"aliases": [
				"G1002",
				"T-APT-17",
				"TA397"
			],
			"source_name": "ETDA:Bitter",
			"tools": [
				"Artra Downloader",
				"ArtraDownloader",
				"Bitter RAT",
				"BitterRAT",
				"Dracarys"
			],
			"source_id": "ETDA",
			"reports": null
		}
	],
	"ts_created_at": 1775434448,
	"ts_updated_at": 1775792037,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/962699d8c39a8091bcb4305e0dfc7e115690857e.pdf",
		"text": "https://archive.orkl.eu/962699d8c39a8091bcb4305e0dfc7e115690857e.txt",
		"img": "https://archive.orkl.eu/962699d8c39a8091bcb4305e0dfc7e115690857e.jpg"
	}
}