VMware vSphere Hypervisor

VMware, Inc.

You will be able to place an order for this product again 12 months after the initial order.
The license you receive with this offering is valid for 12 months, starting on the first of the month in which the offering was ordered.

This product will no longer be offered as of March 2019.

Fully featured version, available for download. This product allows for personal use on a single machine, not for use on shared workstations. You must be a faculty or staff member of an academic institution to qualify for this offer.

Benefits of the VMware ESXi Hypervisor Architecture

The hypervisor architecture of VMware vSphere plays a critical role in the management of the virtual infrastructure. The introduction of the bare-metal ESX architecture in 2001 significantly enhanced performance and reliability, which in turn allowed customers to extend the benefits of virtualization to their mission-critical applications. The removal of the Linux-based console operating system (COS, or "service console") in the new ESXi architecture represents a similar leap forward in reliability and virtualization management. At less than 5% of the size of ESX, the new vSphere ESXi architecture improves hypervisor management in the areas of security, deployment and configuration, and ongoing administration.

Improve Reliability and Security. The ESX architecture available in releases prior to vSphere 5.0 relied on a Linux-based COS for serviceability and agent-based partner integration. In the new, operating-system-independent ESXi architecture, the approximately 2 GB COS has been removed and the necessary management functionality has been implemented directly in the core VMkernel. Eliminating the COS drastically reduces the install footprint of the vSphere ESXi hypervisor to approximately 150 MB, improving security and reliability by removing the vulnerabilities associated with a general-purpose operating system.

Streamline Deployment and Configuration. The new ESXi architecture has far fewer configuration items, greatly simplifying deployment and configuration and making it easier to maintain consistency.

Reduce Management Overhead. The API-based partner integration model of the ESXi architecture eliminates the need to install and manage third-party management agents. You can automate routine tasks by leveraging remote command-line scripting environments such as the vCLI or PowerCLI.
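As an illustration, a routine inventory check that once required a COS login can run remotely through ESXCLI in the vCLI; the hostname and account below are placeholders, and this is a sketch rather than a complete workflow:

```shell
# Query an ESXi host remotely via ESXCLI (vCLI); no agent on the host is needed.
# esxi01.example.com and root are placeholders for your host and account.
esxcli --server=esxi01.example.com --username=root network ip interface list
esxcli --server=esxi01.example.com --username=root software vib list
```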

Simplify Hypervisor Patching and Updating. Due to its small size and limited number of components, the ESXi architecture requires far fewer patches than earlier ESX versions, shortening service windows and reducing security vulnerabilities. Over its lifetime, the ESXi architecture requires approximately 10 times fewer patches than the ESX hypervisor running with the COS.
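In practice, patching the small ESXi image is typically a single image-profile update; a minimal sketch, assuming the host is already in maintenance mode and using placeholder depot and profile names:

```shell
# List the image profiles contained in a downloaded patch depot (path is a placeholder).
esxcli software sources profile list -d /vmfs/volumes/datastore1/update-depot.zip
# Update the host to a profile from that depot, then reboot to complete the patch.
esxcli software profile update -d /vmfs/volumes/datastore1/update-depot.zip \
    -p ESXi-5.1.0-20121001001s-standard
```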

What’s New in vSphere 5.1

In the vSphere 5.1 release, VMware has added several significant enhancements to ESXi.

NEW Improved Security. There is no longer a dependency on a shared root account when working from the ESXi Shell.  Local users assigned administrative privileges automatically get full shell access.  With full shell access local users no longer need to “su” to root in order to run privileged commands.
NEW Improved Logging and Auditing. In vSphere 5.1, all host activity, from both the Shell and the Direct Console User Interface (DCUI), is now logged under the account of the logged-in user. This ensures user accountability, making it easy to monitor and audit activity on the host.
NEW Enhanced SNMPv3 support. vSphere 5.1 adds support for SNMPv3, including both SNMP authentication and SSL encryption.
NEW Enhanced vMotion. vSphere 5.1 provides a new level of ease and flexibility for live virtual machine migrations. vSphere 5.1 now allows vMotion and Storage vMotion to be combined into one operation. The combined migration copies both the virtual machine memory and its disk over the network to the destination host. In smaller environments, the ability to simultaneously migrate both memory and storage allows virtual machines to be migrated between hosts that do not have shared storage. In larger environments, this capability allows virtual machines to be migrated between clusters that do not have a common set of datastores.
NEW vShield Endpoint bundling. Now included in vSphere 5.1, vShield Endpoint offloads antivirus and anti-malware agent processing inside guest VMs to a dedicated secure virtual appliance delivered by VMware partners.
NEW Virtual hardware. vSphere 5.1 introduces a new generation of virtual hardware with virtual machine hardware version 9, which includes the following new features:
  • 64-way virtual SMP. vSphere 5.1 supports virtual machines with up to 64 virtual CPUs, which lets you run larger CPU-intensive workloads on the VMware vSphere platform.
  • 1TB virtual machine RAM. You can assign up to 1TB of RAM to vSphere 5.1 virtual machines.
  • Hardware-accelerated 3D graphics for Windows Aero. vSphere 5.1 supports 3D graphics to run Windows Aero and Basic 3D applications in virtual machines.
  • Guest OS Storage Reclamation. With Guest OS Storage Reclamation, when files are removed from inside the guest OS the size of the VMDK file can be reduced and the de-allocated storage space returned to the storage array’s free pool. Guest OS Storage Reclamation utilizes a new SE Sparse VMDK format available with VMware View.
  • Improved CPU virtualization. In vSphere 5.1, the vSphere host is better able to virtualize the physical CPU and can thus expose more information about the CPU architecture to the virtual machine. vSphere 5.1 also adds the ability to expose additional low-level CPU counters to the guest OS. Exposing the low-level CPU counter information allows for improved debugging, tuning, and troubleshooting of operating systems and applications running inside the virtual machine.
Other significant capabilities available with vSphere since the 4.1 release:
AD Integration. Ability to configure the host to join an Active Directory domain. Once added to the AD domain users accessing vSphere hosts will be authenticated against the centralized user directory.
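The join can also be performed remotely with the vCLI vicfg-authconfig command; a sketch with placeholder host, domain, and account names:

```shell
# Join an ESXi host to an Active Directory domain (all names below are placeholders).
vicfg-authconfig --server=esxi01.example.com --username=root \
    --authscheme AD --joindomain ad.example.com --adusername Administrator
# Verify which domain the host currently belongs to.
vicfg-authconfig --server=esxi01.example.com --username=root --currentdomain
```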
Scripted Installation. Ability to do a scripted installation of the vSphere software to the local disk of a server. Various deployment methods are supported, including booting the vSphere installer off a CD or over PXE, and accessing the configuration file over the network using a variety of protocols, such as secure HTTP. The configuration file can also specify the following scripts to be executed during the installation:
  • Pre-install
  • Post-install
  • First-boot
These scripts run locally on the vSphere host, and can perform various tasks such as configuring the host’s virtual networking and joining it to vCenter Server.
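A minimal installation script illustrating these pieces might look like the following; the password and the first-boot task are placeholders, and this is a sketch rather than a validated kickstart file:

```shell
# Minimal ESXi 5.x installation script (ks.cfg), typically served over HTTP at boot.
accepteula
install --firstdisk --overwritevmfs
rootpw MySecretPw1!
network --bootproto=dhcp --device=vmnic0
reboot

# %firstboot runs once, after the installed host boots for the first time.
%firstboot --interpreter=busybox
# Placeholder post-configuration task: enable the ESXi Shell.
vim-cmd hostsvc/enable_esx_shell
```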
Boot from SAN support for vSphere. This support includes Fibre Channel SAN, as well as iSCSI and FCoE for certain storage adapters that have been qualified for this capability.
NEW Image Builder. A new set of command line utilities allows administrators to create custom ESXi images that include third-party components required for specialized hardware, such as drivers and CIM providers. Image Builder can be used to create images suitable for different types of deployment, such as ISO-based installation, PXE-based installation, and Auto Deploy. It is designed as a PowerShell snap-in component and is bundled with PowerCLI.
NEW vSphere Firewall. The vSphere host management interface is protected by a service-oriented and stateless firewall, which you can configure using the vSphere Client or at the command line with ESXCLI command line interfaces. A new firewall engine eliminates the use of iptables and allows the administrator to define port rules for each service. For remote hosts, you can specify the IP addresses or range of IP addresses that are allowed to access each service.
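The per-service rules can be managed with the ESXCLI firewall namespace; a sketch run on the host, using the SSH ruleset and a placeholder management subnet:

```shell
# Show all firewall rulesets and their state.
esxcli network firewall ruleset list
# Restrict the SSH server ruleset to a management subnet (placeholder network).
esxcli network firewall ruleset set --ruleset-id sshServer --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id sshServer \
    --ip-address 192.168.10.0/24
esxcli network firewall ruleset allowedip list --ruleset-id sshServer
```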
NEW Secure Syslog. All log messages are handled by syslog, and messages can now be logged locally, on one or more remote log servers, or both. Log messages can be sent to remote servers over either Secure Sockets Layer (SSL) or plain TCP connections.
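Remote logging over SSL can be configured with ESXCLI; a sketch with a placeholder collector address (the collector's certificate must already be trusted by the host):

```shell
# Send syslog to a remote collector over SSL, then reload the syslog service.
esxcli system syslog config set --loghost='ssl://loghost.example.com:1514'
esxcli system syslog reload
# Confirm the active syslog configuration.
esxcli system syslog config get
```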
NEW Central management of host image and configuration via Auto Deploy. Combining the features of host profiles, Image Builder, and PXE, vSphere Auto Deploy simplifies the task of managing vSphere host installation and upgrade for hundreds of machines. vSphere host images are centrally stored in the Auto Deploy library. New hosts are automatically provisioned based on rules defined by the user. Rebuilding a server to a clean slate is as simple as a reboot.
NEW Enhanced Unified CLI Framework. An expanded and enhanced ESXCLI command line framework offers a rich set of consistent and extensible commands, including new commands to facilitate on-host troubleshooting and maintenance. The framework allows consistency of authentication, roles, and auditing, using the same methods as other management frameworks such as vCenter Server and PowerCLI.  You can use the ESXCLI framework both remotely as part of vSphere CLI and locally on the ESXi Shell (formerly Tech Support Mode).
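A few representative troubleshooting commands from the ESXCLI namespaces, runnable either in the ESXi Shell or remotely via the vSphere CLI:

```shell
esxcli system version get         # host version and build number
esxcli network nic list           # physical NICs and their link state
esxcli storage core device list   # storage devices visible to the host
```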


VMware vSphere Hypervisor (ESXi)

ESXi Hardware Requirements

Make sure the host meets the minimum hardware configurations supported by ESXi 5.1.
Hardware and System Resources
To install and use ESXi 5.1, your hardware and system resources must meet the following requirements:
  • Supported server platform. For a list of supported platforms, see the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility.
  • ESXi 5.1 will install and run only on servers with 64-bit x86 CPUs.
  • ESXi 5.1 requires a host machine with at least two cores.
  • ESXi 5.1 requires CPUs that support the LAHF and SAHF instructions.
  • ESXi 5.1 requires the NX/XD bit to be enabled for the CPU in the BIOS.
  • ESXi supports a broad range of x64 multicore processors. For a complete list of supported processors, see the VMware compatibility guide at http://www.vmware.com/resources/compatibility.
  • ESXi requires a minimum of 2GB of physical RAM. Provide at least 8GB of RAM to take full advantage of ESXi features and run virtual machines in typical production environments.
  • To support 64-bit virtual machines, support for hardware virtualization (Intel VT-x or AMD RVI) must be enabled on x64 CPUs.
  • One or more Gigabit or 10Gb Ethernet controllers. For a list of supported network adapter models, see the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility.
  • Any combination of one or more of the following controllers:
  • Basic SCSI controllers. Adaptec Ultra-160 or Ultra-320, LSI Logic Fusion-MPT, or most NCR/Symbios SCSI.
  • RAID controllers. Dell PERC (Adaptec RAID or LSI MegaRAID), HP Smart Array RAID, or IBM (Adaptec) ServeRAID controllers.
  • SCSI disk or a local, non-network, RAID LUN with unpartitioned space for the virtual machines.
  • For Serial ATA (SATA), a disk connected through supported SAS controllers or supported on-board SATA controllers. SATA disks will be considered remote, not local. These disks will not be used as a scratch partition by default because they are seen as remote.
Note: You cannot connect a SATA CD-ROM device to a virtual machine on an ESXi 5.1 host. To use the SATA CD-ROM device, you must use IDE emulation mode.
Storage Systems
ESXi 5.1 supports installing on and booting from the following storage systems:
  • SATA disk drives. SATA disk drives connected behind supported SAS controllers or supported on-board SATA controllers.

Supported SAS controllers include:

  • LSI1068E (LSISAS3442E)
  • LSI1068 (SAS 5)
  • IBM ServeRAID 8K SAS controller
  • Smart Array P400/256 controller
  • Dell PERC 5.0.1 controller
Supported on-board SATA include:
  • Intel ICH9
  • ServerWorks HT1000 
Note: ESXi does not support using local, internal SATA drives on the host server to create VMFS datastores that are shared across multiple ESXi hosts.
  • Serial Attached SCSI (SAS) disk drives. Supported for installing ESXi 5.1 and for storing virtual machines on VMFS partitions.
  • Dedicated SAN disk on Fibre Channel or iSCSI
  • USB devices. Supported for installing ESXi 5.1.
  • Software Fibre Channel over Ethernet (FCoE). See Installing and Booting ESXi with Software FCoE.
ESXi Booting Requirements
vSphere 5.1 supports booting ESXi hosts from the Unified Extensible Firmware Interface (UEFI). With UEFI you can boot systems from hard drives, CD-ROM drives, or USB media. Network booting or provisioning with VMware Auto Deploy requires the legacy BIOS firmware and is not available with UEFI.
ESXi can boot from a disk larger than 2TB provided that the system firmware and the firmware on any add-in card that you are using support it. See the vendor documentation.
Note: Changing the boot type from legacy BIOS to UEFI after you install ESXi 5.1 might cause the host to fail to boot. In this case, the host displays an error message similar to: Not a VMware boot bank. Changing the host boot type between legacy BIOS and UEFI is not supported after you install ESXi 5.1.
Storage Requirements for ESXi 5.1 Installation
Installing ESXi 5.1 requires a boot device that is a minimum of 1GB in size. When booting from a local disk or SAN/iSCSI LUN, a 5.2GB disk is required to allow for the creation of the VMFS volume and a 4GB scratch partition on the boot device. If a smaller disk or LUN is used, the installer will attempt to allocate a scratch region on a separate local disk. If a local disk cannot be found the scratch partition, /scratch, will be located on the ESXi host ramdisk, linked to /tmp/scratch. You can reconfigure /scratch to use a separate disk or LUN. For best performance and memory optimization, VMware recommends that you do not leave /scratch on the ESXi host ramdisk.
Due to the I/O sensitivity of USB and SD devices the installer does not create a scratch partition on these devices. As such, there is no tangible benefit to using large USB/SD devices as ESXi uses only the first 1GB. When installing on USB or SD devices, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found, /scratch is placed on the ramdisk. You should reconfigure /scratch to use a persistent datastore following the installation.
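One way to do this is through the ScratchConfig advanced setting via ESXCLI; a sketch with a placeholder datastore path (a reboot is required for the change to take effect):

```shell
# Point /scratch at a persistent, host-unique directory on a datastore (placeholder path).
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation \
    -s '/vmfs/volumes/datastore1/.locker-esxi01'
# Check the configured value; reboot the host to apply it.
esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation
```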
In Auto Deploy installations, the installer attempts to allocate a scratch region on an available local disk or datastore. If no local disk or datastore is found /scratch is placed on ramdisk. You should reconfigure /scratch to use a persistent datastore following the installation.
For environments that boot from a SAN or use Auto Deploy, it is not necessary to allocate a separate LUN for each ESXi host. You can co-locate the scratch regions for many ESXi hosts onto a single LUN. The number of hosts assigned to any single LUN should be weighed against the LUN size and the I/O behavior of the virtual machines.

