ESXi Server Build

Small 8 Bay Home NAS / ESXi Server Build: U-NAS NSC-800
photo src: matthill.eu

VMware ESXi (formerly ESX) is an enterprise-class, type-1 hypervisor developed by VMware for deploying and serving virtual computers. As a type-1 hypervisor, ESXi is not a software application that one installs in an operating system (OS); instead, it includes and integrates vital OS components, such as a kernel.

After version 4.1 (released in 2010), VMware renamed ESX to ESXi. ESXi replaces Service Console (a rudimentary operating system) with a more closely integrated OS. ESX/ESXi is the primary component in the VMware Infrastructure software suite.

The name ESX originated as an abbreviation of Elastic Sky X.


ESXi 5.0 AMD Whitebox Server for $500 with Passthrough (IOMMU ...
photo src: thehomeserverblog.com


Architecture

ESX runs on bare metal (without an underlying operating system), unlike other VMware products. It includes its own kernel: a Linux kernel is started first and is then used to load a variety of specialized virtualization components, including the ESX hypervisor core, otherwise known as the vmkernel. At normal run-time the vmkernel runs on the bare computer, and the Linux kernel that booted the system continues to run inside the Linux-based service console, which becomes the first virtual machine. VMware dropped development of ESX at version 4.1 and now uses ESXi, which does not include a Linux kernel.

The vmkernel is a microkernel with three interfaces: hardware, guest systems, and the service console (Console OS).

Interface to hardware

The vmkernel handles CPU and memory directly, using scan-before-execution (SBE) to handle special or privileged CPU instructions and the SRAT (system resource allocation table) to track allocated memory.

Access to other hardware (such as network or storage devices) takes place using modules. At least some of the modules derive from modules used in the Linux kernel. To access these modules, an additional module called vmklinux implements the Linux module interface. According to the README file, "This module contains the Linux emulation layer used by the vmkernel."

The vmkernel uses the following device drivers:

  1. net/e100
  2. net/e1000
  3. net/e1000e
  4. net/bnx2
  5. net/tg3
  6. net/forcedeth
  7. net/pcnet32
  8. block/cciss
  9. scsi/adp94xx
  10. scsi/aic7xxx
  11. scsi/aic79xx
  12. scsi/ips
  13. scsi/lpfcdd-v732
  14. scsi/megaraid2
  15. scsi/mptscsi_2xx
  16. scsi/qla2200-v7.07
  17. scsi/megaraid_sas
  18. scsi/qla4010
  19. scsi/qla4022
  20. scsi/vmkiscsi
  21. scsi/aacraid_esx30
  22. scsi/lpfcdd-v7xx
  23. scsi/qla2200-v7xx

These drivers mostly equate to those described in VMware's hardware compatibility list. All of these modules fall under the GPL. Programmers have adapted them to run with the vmkernel: VMware, Inc. changed the module-loading mechanism and made other minor changes.
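Which of these modules are actually loaded on a given host can be inspected from the ESXi shell. The short Python sketch below simply wraps an esxcli query for that purpose; it assumes it is run in an ESXi shell (or over SSH to the host) where esxcli is available, and the exact output columns vary between releases.

    import subprocess

    def list_vmkernel_modules():
        """Return the raw 'esxcli system module list' output, one line per module.

        Assumption: this runs in an ESXi shell (or an SSH session to the host)
        where the esxcli utility is on the PATH.
        """
        result = subprocess.run(
            ["esxcli", "system", "module", "list"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.splitlines()

    if __name__ == "__main__":
        for line in list_vmkernel_modules():
            # Each line names a module (for example a storage or network driver)
            # together with its loaded/enabled state.
            print(line)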

Service console

In ESX (but not ESXi), the Service Console is a vestigial general-purpose operating system used most significantly as a bootstrap for the VMware kernel (vmkernel) and secondarily as a management interface. Both of these Console Operating System functions were deprecated as of version 5.0, as VMware migrated exclusively to the ESXi model. For all practical purposes, the Service Console is the operating system used to interact with VMware ESX and the virtual machines that run on the server.

Linux dependencies

ESX uses a Linux kernel to load additional code, often referred to by VMware, Inc. as the "vmkernel". The dependencies between the vmkernel and the Linux part of the ESX server have changed drastically across major versions of the software. The VMware FAQ states: "ESX Server also incorporates a service console based on a Linux 2.4 kernel that is used to boot the ESX Server virtualization layer". The Linux kernel runs before any other software on an ESX host. On ESX versions 1 and 2, no VMkernel processes run on the system during the boot process. After the Linux kernel has loaded, the S90vmware script loads the vmkernel. VMware states that the vmkernel does not derive from Linux, but acknowledges that it has adapted certain device drivers from Linux device drivers. The Linux kernel continues running under the control of the vmkernel, providing functions such as the proc file system used by ESX and an environment to run support applications. ESX version 3 loads the VMkernel from the Linux initrd, and thus much earlier in the boot sequence than in previous ESX versions.

In traditional systems, a given operating system runs a single kernel. The VMware FAQ mentions that ESX has both a Linux 2.4 kernel and the vmkernel, hence confusion over whether ESX has a Linux base. An ESX system starts a Linux kernel first, but that kernel then loads the vmkernel (also described by VMware as a kernel), which according to VMware 'wraps around' the Linux kernel and does not derive from Linux.

The ESX userspace environment, known as the "Service Console" (or as "COS" or as "vmnix"), derives from a modified version of Red Hat Linux (Red Hat 7.2 for ESX 2.x and Red Hat Enterprise Linux 3 for ESX 3.x). In general, this Service Console provides management interfaces (CLI, webpage MUI, Remote Console).

A further detail that differentiates ESX from other VMware virtualization products is its support for VMFS, VMware's proprietary cluster file system. VMFS enables multiple hosts to access the same SAN LUNs simultaneously, while file-level locking provides basic protection of file-system integrity.

Purple Screen of Death

In the event of a hardware error, the vmkernel can 'catch' a machine check exception. This results in an error message displayed on a purple diagnostic screen, colloquially known as the purple screen of death (PSOD, cf. the Blue Screen of Death, BSOD).

Upon displaying a purple diagnostic screen, the vmkernel writes debug information to the core dump partition. This information, together with the error codes displayed on the purple diagnostic screen, can be used by VMware support to determine the cause of the problem.
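Because a purple-screen dump is only useful if a dump target is actually configured, administrators typically check that configuration ahead of time. The snippet below is a minimal sketch that shells out to esxcli from the ESXi shell; the command shown is the one I believe ESXi 5.x-era and later releases use, so treat it as an assumption and verify it against your release.

    import subprocess

    # 'esxcli system coredump partition get' reports the active and configured
    # dump partitions that the vmkernel writes to when a PSOD occurs.
    # Assumption: run from an ESXi shell where esxcli is available.
    status = subprocess.run(
        ["esxcli", "system", "coredump", "partition", "get"],
        capture_output=True, text=True, check=True,
    )
    print(status.stdout)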

vMotion: live migration

Live migration (vMotion) in ESX allows a virtual machine to move between two different hosts. Live storage migration (Storage vMotion) enables live migration of virtual disks on the fly. During a vMotion live migration (vLM) of a running virtual machine (VM), the contents of the VM's memory (RAM) are sent from the running VM to the new VM (the instance on another host that will become the running VM after the vLM). Because the memory contents change continuously while the VM runs, ESX first sends the entire contents to the other host, then repeatedly checks which data has changed since the previous pass and resends only that, in progressively smaller blocks. At the last moment it very briefly 'freezes' the existing VM, transfers the last changes in the RAM content, and then starts the new VM. The intended effect of this process is to minimize the time during which the VM is suspended; in the best case this is the time of the final transfer plus the time required to start the new VM.
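The iterative copy-and-resend scheme described above can be modeled in a few lines. The sketch below only illustrates the general pre-copy idea, not VMware's actual implementation; the page-tracking callbacks, round limit, and stop threshold are hypothetical names and values chosen for the example.

    import random

    def precopy_migrate(num_pages, get_dirty_pages, send_pages,
                        max_rounds=10, stop_threshold=64):
        """Illustrative pre-copy loop: copy everything once, then keep resending
        only the pages dirtied since the previous round, until the remaining set
        is small enough to move during a brief freeze of the source VM."""
        send_pages(set(range(num_pages)))      # round 0: full copy while the VM runs
        for _ in range(max_rounds):
            dirty = get_dirty_pages()
            if len(dirty) <= stop_threshold:   # small enough to ship during the freeze
                break
            send_pages(dirty)                  # each round normally shrinks
        # 'Freeze': pause the source VM, transfer the final dirty pages plus
        # device state, then resume execution on the destination host.
        final = get_dirty_pages()
        send_pages(final)
        return len(final)

    # Toy run: pretend roughly 500 of 100,000 pages are dirtied between rounds.
    remaining = precopy_migrate(
        num_pages=100_000,
        get_dirty_pages=lambda: {random.randrange(100_000) for _ in range(500)},
        send_pages=lambda pages: None,
    )
    print(f"pages transferred during the freeze window: {remaining}")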


Versions

VMware ESX is available in two main variants, ESX and ESXi, although since version 5 only ESXi has been continued.

VMware ESX

Version release history:

  • VMware (7 January 2002)

VMware ESX 1.5

  • VMware ESX Server 1.5 (13 May 2002)

VMware ESX 2.0 (21 July 2003)

  • VMware ESX Server 2.0.1 Build 22983 (13 April 2006)
  • VMware ESX Server 2.0.2 Build 23922 (4 May 2006)

VMware ESX 2.5 (14 December 2004)

  • VMware ESX Server 2.5.0 Build 11343 (29 November 2004)
  • VMware ESX Server 2.5.1 Build 13057 (20 May 2005)
  • VMware ESX Server 2.5.1 Build 14182 (20 June 2005)
  • VMware ESX Server 2.5.2 Build 16390 (15 September 2005)
  • VMware ESX Server 2.5.3 Build 22981 (13 April 2006)
  • VMware ESX Server 2.5.4 Build 32233 (5 October 2006)
  • VMware ESX Server 2.5.5 Build 57619 (8 October 2007)

VMware Infrastructure 3.0 (VI3) (5 June 2006)

  • VMware ESX Server 3.0 Build 27701 (13 June 2006)
  • VMware ESX Server 3.0.1 Build 32039 (25 September 2006)
  • VMware ESX Server 3.0.2 Build 52542 (31 July 2007)
  • VMware ESX Server 3.0.3 Build 104629 (8 August 2008)
  • VMware ESX Server 3.0.3 Update 1 Build 231127 (8 March 2010)
  • VMware ESX Server 3.5 (10 December 2007)
  • VMware ESX Server 3.5 Build 64607 (20 February 2008)
  • VMware ESX Server 3.5 Update 1 Build 82663 (10 April 2008)
  • VMware ESX Server 3.5 Update 2 Build 110268 (13 August 2008)
  • VMware ESX Server 3.5 Update 3 Build 123630 (6 November 2008)
  • VMware ESX Server 3.5 Update 4 Build 153875 (30 March 2009)
  • VMware ESX Server 3.5 Update 5 Build 207095 (20 December 2009); this was the last version to support 32-bit systems

VMware vSphere 4.0 (20 May 2009)

  • VMware ESX 4.0 Build 164009 (21 May 2009)
  • VMware ESX 4.0 Update 1 Build 208167 (19 November 2009)
  • VMware ESX 4.0 Update 2 Build 261974 (10 June 2010)
  • VMware ESX 4.0 Update 3 Build 398348 (5 May 2011)
  • VMware ESX 4.0 Update 4 Build 504850 (17 November 2011)
  • VMware ESX 4.1 Build 260247 (13 July 2010)
  • VMware ESX 4.1 Update 1 Build 348481 (10 February 2011)
  • VMware ESX 4.1 Update 2 Build 502767 (27 October 2011)
  • VMware ESX 4.1 Update 3 Build 800380 (30 August 2012)

ESX and ESXi before version 5.0 do not support Windows 8 or Windows Server 2012 as guest operating systems; these Microsoft operating systems can only run on ESXi 5.x or later.

vSphere 4.1 (released 18 July 2010) and its subsequent update and patch releases are the last releases to include both the ESX and ESXi hypervisor architectures. Future major releases of VMware vSphere include only the VMware ESXi architecture. For this reason, VMware recommends that deployments of vSphere 4.x use the ESXi hypervisor architecture.

VMware ESXi

VMware ESXi, a smaller-footprint version of ESX, does not include the ESX Service Console. It is available as a free download from VMware, without the need to purchase a vCenter license, although some features are disabled.

ESXi apparently stands for "ESX integrated".

VMware ESXi originated as a compact version of VMware ESX that allowed for a smaller 32 MB disk footprint on the host. With a simple configuration console, used mostly for network configuration, and the remote VMware Infrastructure Client interface, more resources can be dedicated to the guest environments.

Two variations of ESXi exist:

  • VMware ESXi Installable
  • VMware ESXi Embedded Edition

The same installation media will install to either of these installation modes depending on the size of the target media. One can upgrade ESXi to VMware Infrastructure 3 or to VMware vSphere 4.0 ESXi.

Originally named VMware ESX Server ESXi edition, through several revisions the ESXi product finally became VMware ESXi 3. New editions then followed: ESXi 3.5, ESXi 4, ESXi 5 and (as of 2015) ESXi 6.

To virtualize Windows 8 or Windows Server 2012 as a guest operating system, the ESXi version must be 5.0 update 1 or later.
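One way to confirm that requirement before deploying such a guest is to read the host's version and build number over the vSphere API. The sketch below uses the third-party pyVmomi library (an assumption on my part; it is not mentioned in the article); the build number 623860 for 5.0 Update 1 is taken from the version history below, and the host name and credentials are placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect  # third-party: pip install pyvmomi

    def supports_windows8_guests(host, user, pwd):
        """Rough check: ESXi 5.0 Update 1 (build 623860) or later is required
        for Windows 8 / Windows Server 2012 guests."""
        ctx = ssl._create_unverified_context()            # lab use only: skips certificate checks
        si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
        try:
            about = si.content.about                      # vim.AboutInfo for the host
            version = tuple(int(part) for part in about.version.split("."))
            build = int(about.build)
            print(f"{about.fullName} (build {build})")
            if version > (5, 0, 0):
                return True
            return version == (5, 0, 0) and build >= 623860   # 5.0 Update 1 or later
        finally:
            Disconnect(si)

    # Placeholder connection details - replace with a real host and credentials.
    # print(supports_windows8_guests("esxi01.example.com", "root", "secret"))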


My Home Lab, ESXi 5.5 Server Build, and The Logic Behind It All ...
photo src: ethancbanks.com


Version history

Version release history:

  • VMware ESX 3.0.0 GA (Build 27701) (16 June 2006)
  • VMware ESX 3.0.1 GA (Build 32039) (25 September 2006)
  • VMware ESX 3.0.2 GA (Build 52542) (26 July 2007)
  • VMware ESX 3.0.3 GA (Build 104629) (14 July 2008)
  • VMware ESXi 3.5 GA (Build 64607) (20 February 2008)
  • VMware ESXi 3.5 Update 1 (Build 82663) (10 April 2008)
  • VMware ESXi 3.5 Update 2 (Build 103909) (13 August 2008)
  • VMware ESXi 3.5 Update 3 (Build 123629) (6 November 2008)
  • VMware ESXi 3.5 Update 4 (Build 153875) (30 March 2009)
  • VMware ESXi 3.5 Update 5 (Build 207095) (3 December 2009)
  • VMware ESXi 4.0 GA (Build 164009) (21 May 2009)
  • VMware ESXi 4.0 Update 1 (Build 208167) (19 November 2009)
  • VMware ESXi 4.0 Update 2 (Build 261974) (10 June 2010)
  • VMware ESXi 4.0 Update 3 (Build 403554) (5 May 2011)
  • VMware ESXi 4.0 Update 4 (Build 523315) (17 November 2011)
  • VMware ESXi 4.1 GA (Build 260247) (13 July 2010)
  • VMware ESXi 4.1 Update 1 (Build 351620) (10 February 2011)
  • VMware ESXi 4.1 Update 2 (Build 502767) (27 October 2011)
  • VMware ESXi 4.1 Update 3 (Build 800380) (30 August 2012)
  • VMware ESXi 5.0 GA (Build 469512) (24 August 2011)
  • VMware ESXi 5.0 Update 1 (Build 623860) (15 March 2012)
  • VMware ESXi 5.0 Update 2 (Build 914586) (20 December 2012)
  • VMware ESXi 5.0 Update 3 (Build 1311175) (17 October 2013)
  • VMware ESXi 5.1 GA (Build 799733) (10 September 2012)
  • VMware ESXi 5.1 Update 1 (Build 1065491) (25 April 2013)
  • VMware ESXi 5.1 Update 2 (Build 1483097) (16 January 2014)
  • VMware ESXi 5.1 Update 3 (Build 2323236) (4 December 2014)
  • VMware ESXi 5.5 GA (Build 1331820) (22 September 2013)
  • VMware ESXi 5.5 Update 1 (Build 1623387) (11 March 2014)
  • VMware ESXi 5.5 Update 2 (Build 2068190) (9 September 2014)
  • VMware ESXi 5.5 Update 3 (Build 3029944) (16 September 2015)
  • VMware ESXi 6.0 GA (Build 2494585) (12 March 2015)
  • VMware ESXi 6.0 Update 1 (Build 3029758) (10 September 2015)
  • VMware ESXi 6.0 Update 1a (Build 3073146) (6 October 2015)
  • VMware ESXi 6.0 Update 1b (Build 3380124) (7 January 2016)
  • VMware ESXi 6.0 Update 2 (Build 3620759) (15 March 2016)
  • VMware ESXi 6.0 Update 3 (Build 5050593) (24 February 2017)
  • VMware ESXi 6.5 GA (Build 4564106) (15 November 2016)

Lawsuit

VMware was sued by Christoph Hellwig, a Linux kernel developer, for GPL license violations. It was alleged that VMware had misappropriated portions of the Linux kernel and used them without permission. The lawsuit was dismissed by the court in July 2016 on a technicality, and Hellwig announced he would file an appeal.


Related or additional products

The following products operate in conjunction with ESX:

  • vCenter Server enables monitoring and management of multiple ESX, ESXi and GSX servers (a minimal inventory-listing sketch against the vSphere API appears after this list). In addition, users must install it to run infrastructure services such as:
    • vMotion (transferring virtual machines between servers on the fly whilst they are running, with zero downtime)
    • svMotion aka Storage vMotion (transferring virtual machines between Shared Storage LUNs on the fly, with zero downtime)
    • enhanced vMotion aka evMotion (a simultaneous vMotion and svMotion, supported on version 5.1 and above)
    • Distributed Resource Scheduler (DRS) (automated vMotion based on host/VM load requirements/demands)
    • High Availability (HA) (restarting of Virtual Machine Guest Operating Systems in the event of a physical ESX Host failure)
    • Fault Tolerance (almost instant stateful fail-over of a VM in the event of a physical host failure)
  • Converter enables users to create VMware ESX Server- or Workstation-compatible virtual machines from either physical machines or virtual machines made by other virtualization products. Converter replaces the VMware "P2V Assistant" and "Importer" products: P2V Assistant allowed users to convert physical machines into virtual machines, and Importer allowed the import of virtual machines from other products into VMware Workstation.
  • vSphere Client (formerly VMware Infrastructure Client) enables monitoring and management of a single instance of an ESX or ESXi server. After ESX 4.1, the vSphere Client was no longer available from the ESX/ESXi server and must instead be downloaded from the VMware web site.
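As referenced in the vCenter Server item above, the monitoring side of vCenter (and of a single ESX/ESXi host) is exposed through the vSphere API. The sketch below, again using the third-party pyVmomi library with placeholder credentials, lists each host known to the endpoint along with the power state of its VMs; it is a minimal illustration rather than a management tool.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim   # third-party: pip install pyvmomi

    def list_hosts_and_vms(server, user, pwd):
        """Print every ESX/ESXi host known to a vCenter Server (or a single host)
        together with the names and power states of its virtual machines."""
        ctx = ssl._create_unverified_context()   # lab use only: skips certificate checks
        si = SmartConnect(host=server, user=user, pwd=pwd, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.HostSystem], True)
            for host in view.view:
                print(host.name)
                for vm in host.vm:
                    print(f"  {vm.name}: {vm.runtime.powerState}")
            view.DestroyView()
        finally:
            Disconnect(si)

    # Placeholder endpoint - point this at a vCenter Server or an ESXi host.
    # list_hosts_and_vms("vcenter.example.com", "administrator@vsphere.local", "secret")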


Cisco Nexus 1000v

Network connectivity between ESX hosts and the VMs running on them relies on virtual NICs (inside the VM) and virtual switches. The latter exist in two versions: the 'standard' vSwitch, which allows several VMs on a single ESX host to share a physical NIC, and the 'distributed vSwitch', where the vSwitches on different ESX hosts together form one logical switch. Cisco offers the Nexus 1000v in its Cisco Nexus product line, an advanced version of the standard distributed vSwitch. A Nexus 1000v consists of two parts: a supervisor module (VSM) and, on each ESX host, a virtual Ethernet module (VEM). The VSM runs as a virtual appliance within the ESX cluster or on dedicated hardware (Nexus 1010 series), while the VEM runs as a module on each host and replaces a standard dvS (distributed virtual switch) from VMware. Configuration of the switch is done on the VSM using the standard NX-OS CLI. It offers capabilities to create standard port-profiles which can then be assigned to virtual machines using vCenter.
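For contrast with the Nexus 1000v workflow, a standard vSwitch on a single host can be created straight from the ESXi shell. The sketch below wraps the esxcli commands I believe recent ESXi releases use for this (treat the exact syntax as an assumption and check it against your release); the switch, uplink, and port group names are arbitrary examples.

    import subprocess

    def run(args):
        """Echo and run one esxcli command on the local ESXi shell."""
        print("+", " ".join(args))
        subprocess.run(args, check=True)

    # Example names used purely for illustration.
    VSWITCH = "vSwitch1"
    UPLINK = "vmnic1"
    PORTGROUP = "VM Network 2"

    # Create a standard vSwitch, attach a physical NIC as its uplink,
    # and add a port group that VMs can later be connected to.
    run(["esxcli", "network", "vswitch", "standard", "add",
         "--vswitch-name", VSWITCH])
    run(["esxcli", "network", "vswitch", "standard", "uplink", "add",
         "--uplink-name", UPLINK, "--vswitch-name", VSWITCH])
    run(["esxcli", "network", "vswitch", "standard", "portgroup", "add",
         "--portgroup-name", PORTGROUP, "--vswitch-name", VSWITCH])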

There are several differences between the standard dvS and the N1000v; for example, the Cisco switch generally has full support for network technologies such as LACP link aggregation, while the VMware switch supports newer features such as routing based on physical NIC load. The main difference, however, lies in the architecture: the Nexus 1000v works in the same way as a physical Ethernet switch does, while the dvS relies on information from ESX. This has consequences for scalability, for example: the limit for a N1000v is 2048 virtual ports, against 60000 for a dvS. The Nexus 1000v was developed in co-operation between Cisco and VMware and uses the API of the dvS.

Third party management tools

Because VMware ESX is a leader in the server-virtualisation market, software and hardware vendors offer a range of tools to integrate their products or services with ESX. Examples include Veeam Software, with backup and management applications and a plugin to monitor and manage ESX using HP OpenView, and Quest Software, with a range of management and backup applications; most major backup-solution providers have plugins or modules for ESX as well. Using Microsoft System Center Operations Manager (SCOM) 2007/2012 with a Bridgeways ESX management pack gives a real-time view of ESX datacenter health.

Hardware vendors such as HP and Dell also include tools to support the use of ESX(i) on their hardware platforms. An example is the ESX module for Dell's OpenManage management platform.

VMware has added a Web Client since v5, but it works with vCenter only and does not contain all features. vEMan is a Linux application that tries to fill that gap. These are just a few examples: there are numerous third-party products to manage, monitor or back up ESX infrastructures and the VMs running on them.


Known limitations

Known limitations of VMware ESXi, as of April 2015, include the following:

Infrastructure limitations

Some maximums in ESXi Server 6.0 may influence the design of data centers (a small sanity-check sketch follows this list):

  • Guest system maximum RAM: 4080 GB
  • Host system maximum RAM: 6 TB (12 TB on certain certified OEM hardware platforms)
  • Number of hosts in a high availability or Distributed Resource Scheduler cluster: 64
  • Maximum number of processors per virtual machine: 128
  • Maximum number of processors per host: 480
  • Maximum number of virtual CPUs per physical CPU core: 32
  • Maximum number of virtual machines per host: 1024
  • Maximum number of virtual CPUs per fault tolerant virtual machine: 4
  • Maximum guest system RAM per fault tolerant virtual machine: 64 GB
  • VMFS5 maximum volume size: 64 TB, but maximum file size is 62 TB minus 512 bytes
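These maximums can serve as simple sanity checks when planning VM sizes (the sketch referenced above the list). The helper below encodes a few of the figures listed; the dictionary and function are illustrative only and are not part of any VMware tooling.

    # Selected ESXi 6.0 configuration maximums, taken from the list above.
    ESXI_60_MAXIMUMS = {
        "guest_ram_gb": 4080,
        "vcpus_per_vm": 128,
        "vms_per_host": 1024,
        "vcpus_per_ft_vm": 4,
        "ft_guest_ram_gb": 64,
    }

    def check_vm_spec(vcpus, ram_gb, fault_tolerant=False):
        """Return a list of configuration-maximum violations for a proposed VM."""
        problems = []
        if vcpus > ESXI_60_MAXIMUMS["vcpus_per_vm"]:
            problems.append(f"{vcpus} vCPUs exceeds the 128 vCPU per-VM maximum")
        if ram_gb > ESXI_60_MAXIMUMS["guest_ram_gb"]:
            problems.append(f"{ram_gb} GB exceeds the 4080 GB per-VM maximum")
        if fault_tolerant and vcpus > ESXI_60_MAXIMUMS["vcpus_per_ft_vm"]:
            problems.append("fault-tolerant VMs are limited to 4 vCPUs")
        if fault_tolerant and ram_gb > ESXI_60_MAXIMUMS["ft_guest_ram_gb"]:
            problems.append("fault-tolerant VMs are limited to 64 GB of guest RAM")
        return problems

    # Example: an 8 vCPU / 128 GB VM is fine normally, but too large for Fault Tolerance.
    print(check_vm_spec(vcpus=8, ram_gb=128, fault_tolerant=True))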

Performance limitations

In terms of performance, virtualization imposes a cost in the additional work the CPU has to perform to virtualize the underlying hardware. Instructions that perform this extra work, and other activities that require virtualization, tend to lie in operating system calls. In an unmodified operating system, OS calls introduce the greatest portion of virtualization "overhead".

Paravirtualization or other virtualization techniques may help with these issues. VMware developed the Virtual Machine Interface for this purpose, and selected operating systems currently support this. A comparison between full virtualization and paravirtualization for the ESX Server shows that in some cases paravirtualization is much faster.

Network limitations

When using the advanced and extended network capabilities of the Cisco Nexus 1000v distributed virtual switch, the following network-related limitations apply:

  • 2048 active VLANs (one to be used for communication between the VEMs and the VSM)
  • 2048 port-profiles
  • 32 physical NICs per ESX/ESXi (physical) host
  • 256 port-channels per VMware vDS (virtual distributed switch)

Fibre Channel Fabric limitations

Regardless of the type of virtual SCSI adapter used, the following limitations apply:

  • Maximum of 4 Virtual SCSI adapters, one of which should be dedicated to virtual disk use
  • Maximum of 15 SCSI LUNs per adapter

Source of the article: Wikipedia


