eight NICs per server (again, assuming management and vMotion share a pair of NICs). You’ll want to ensure that you have enough network ports available, at the appropriate speeds, to accommodate the needs of this sort of vSphere deployment. This is only a rudimentary discussion of networking design for vSphere; it doesn’t address the use of 10GbE, FCoE (which, though a storage protocol, impacts the network design), or the type of virtual switching infrastructure you will use. All of these factors would affect your networking setup.

      How About 10GbE NICs?

      Lots of factors go into designing how a vSphere deployment will integrate with the existing network infrastructure. For example, only in the last few years has 10GbE networking become pervasive in the datacenter. This increase in bandwidth fundamentally changes how virtual networks are designed.

      In one particular case, a company wished to upgrade its existing rack-mount server clusters from six NICs and two Fibre Channel HBAs per host to two dual-port 10GbE CNAs. Not only was there a stark physical difference from a switch and cabling perspective, but the logical configuration was significantly different as well. This change obviously provided greater bandwidth to each host, but it also allowed more design flexibility.

      The final design used vSphere Network I/O Control (NIOC) and Load-Based Teaming (LBT) to share the available bandwidth among the necessary traffic types, restricting bandwidth only when the network was congested. This resulted in efficient use of the new bandwidth capability without adding too much configuration complexity. Networking is discussed in more detail in Chapter 5.

      With these questions answered, you at least have the basics of a vSphere deployment established. As mentioned previously, this discussion on designing a vSphere solution is far from comprehensive. You should find a good resource on vSphere design and consider performing a full design exercise before deploying vSphere.

      Deploying VMware ESXi

      Once you’ve established the basics of your vSphere design, you must decide exactly how you will deploy ESXi. You have three options:

      • Interactive installation of ESXi

      • Unattended (scripted) installation of ESXi

      • Automated provisioning of ESXi

      Of these, the simplest is an interactive installation of ESXi. The most complex – but perhaps the most powerful, depending on your needs and your environment – is automated provisioning of ESXi. In the following sections, we’ll describe all three of these methods for deploying ESXi in your environment.
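      To give you a sense of what the unattended approach involves, the following is a minimal sketch of an ESXi kickstart file. The root password and NIC name shown here are placeholders, not recommendations; scripted installation is covered in detail in a later section.

         # Accept the VMware EULA
         vmaccepteula
         # Install to the first detected disk, overwriting any existing VMFS datastore
         install --firstdisk --overwritevmfs
         # Set the root password (placeholder value)
         rootpw MyP@ssw0rd!
         # Configure the management network via DHCP on the first NIC
         network --bootproto=dhcp --device=vmnic0
         # Reboot when the installation completes
         reboot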

      Let’s start with the simplest method first: interactively installing ESXi.

      Installing VMware ESXi Interactively

      VMware has done a great job of making the interactive installation of ESXi as simple and straightforward as possible. It takes just minutes to install, so let’s walk through the process.

      Perform the following steps to interactively install ESXi:

      1. Ensure that your server hardware is configured to boot from the CD-ROM drive.

      This will vary from manufacturer to manufacturer and will also depend on whether you are installing locally or remotely via an IP-based keyboard, video, mouse (KVM) or other remote management facility.

      2. Ensure that VMware ESXi installation media are available to the server.

      Again, this will vary based on a local installation (which involves simply inserting the VMware ESXi installation CD into the optical drive) or a remote installation (which typically involves mapping an image of the installation media, known as an ISO image, to a virtual optical drive).

      Obtaining VMware ESXi Installation Media

      You can download the installation files from VMware’s website at www.vmware.com/download/.

      Physical boxed copies of VMware products are no longer sold, but if you hold a valid license all products can be downloaded directly from VMware. These files are typically ISO files that you can mount to a server or burn to a physical CD or DVD.
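      After downloading, it’s worth verifying the integrity of the ISO file before you use it; VMware publishes checksums for each download. A quick check from a Linux or Mac OS X workstation might look like the following (the filename is only an example; use the name of the build you actually downloaded and compare the result against the checksum shown on the download page):

         # Compute the SHA-256 checksum of the downloaded installer ISO
         sha256sum VMware-VMvisor-Installer-6.0.0.x86_64.iso
         # On Mac OS X, use: shasum -a 256 VMware-VMvisor-Installer-6.0.0.x86_64.iso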

      3. Power on the server.

Once it boots from the installation media, the initial boot menu screen appears, as shown in Figure 2.2.

      4. Press Enter to boot the ESXi installer.

      The installer will boot the vSphere hypervisor and eventually stop at a welcome message. Press Enter to continue.

      5. At the End User License Agreement (EULA) screen, press F11 to accept the EULA and continue with the installation.

      6. Next, the installer will display a list of available disks on which you can install or upgrade ESXi.

Potential devices are identified as either local devices or remote devices. Figure 2.3 and Figure 2.4 show two different views of this screen: one with a local device and one with remote devices.

Figure 2.2 The initial ESXi installation routine has options for booting the installer or booting from the local disk.

Figure 2.3 The installer offers options for both local and remote devices; in this case, only a local device was detected.

Figure 2.4 Although local SAS devices are supported, they are listed as remote devices.

      Running ESXi as a VM

You might be able to deduce from Figure 2.3 that I’m actually running ESXi 6 as a VM. Yes, that’s right – you can virtualize ESXi! In this particular case, I’m using VMware’s desktop virtualization solution for Mac OS X, VMware Fusion, to run an instance of ESXi as a VM. As of this writing, the latest version of VMware Fusion is 6, and it includes ESXi as an officially supported guest OS. This is a great way to test the latest version of ESXi without the need for server-class hardware. You can also run ESXi as a VM on ESXi itself, but remember that running production workloads inside these “nested” (virtual) hypervisors is not supported.
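If you want to try this yourself, the VM typically needs hardware virtualization (Intel VT-x or AMD-V) exposed to it. As a rough sketch – and treat these exact option names as assumptions to verify against your Fusion or Workstation version – the VM’s configuration (.vmx) file would contain entries along these lines:

         guestOS = "vmkernel6"
         vhv.enable = "TRUE"

The first line identifies the guest as ESXi 6.x, and the second passes the host CPU’s hardware virtualization support through to the VM so that the nested ESXi can itself power on VMs.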

      Storage area network logical unit numbers, or SAN LUNs, are listed as remote, as you can see in Figure 2.4. Local serial attached SCSI (SAS) devices are also listed as remote. Figure 2.4 shows a SAS drive connected to an LSI Logic controller; although this device is physically local to the server on which we are installing ESXi, the installation routine marks it as remote.
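      As a point of reference, once ESXi is installed and running, you can see how it classifies each device from the ESXi Shell; the output of the following command includes an “Is Local” field for every storage device (the grep filter simply trims the output for readability):

         # List storage devices and show how ESXi classifies each one
         esxcli storage core device list | grep -iE "Display Name|Is Local"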

      If you want to create a boot-from-SAN environment, where each ESXi host boots from a SAN LUN, then you’d select the appropriate SAN LUN here. You can also install directly to your own USB or Secure Digital (SD) device – simply select the appropriate device from the list.

      Which Destination is Best?

      Local device, SAN LUN, or USB? Which destination is best when you’re installing ESXi? The best destination truly depends on the overall vSphere design you are implementing, and there is no simple answer. Many variables affect this decision. Are you using an iSCSI SAN without iSCSI hardware initiators in your servers? That would prevent you from using a boot-from-SAN setup. Are you installing into an environment like Cisco UCS, where booting from SAN is highly recommended? Is your storage larger than 2 GB? Although you can install ESXi on a 2 GB partition, no log files will be stored locally, so you’ll receive a warning in the UI advising you to set an external logging host. Be sure to consider all the factors when deciding where to install ESXi.
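      If you do end up in that situation, pointing ESXi at a remote syslog host is straightforward once the host is running. A minimal sketch from the ESXi Shell (the hostname below is a placeholder) might look like this:

         # Send ESXi logs to a remote syslog host (placeholder hostname and port)
         esxcli system syslog config set --loghost='udp://loghost.example.com:514'
         # Allow outbound syslog traffic through the ESXi firewall
         esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
         # Reload the syslog service so the new configuration takes effect
         esxcli system syslog reload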

      7. To get more information about a device, highlight the device and press F1.

The information about