Fortunately, a deeper look at the models available from a specific vendor, such as HP, reveals servers of all types and sizes (see Figure 2.1), including the following:
• Half-height C-class blades, such as the BL460c and BL465c
• Full-height C-class blades, such as the BL685c
• Dual-socket 1U servers, such as the DL360
• Dual-socket 2U servers, such as the DL380 and the DL385
• Quad-socket 4U servers, such as the DL580 and DL585
Figure 2.1 Servers on the Compatibility Guide come in various sizes and models.
You’ll note that Figure 2.1 doesn’t show vSphere 6 in the list; as of this writing, VMware’s Compatibility Guide hadn’t yet been updated to include information on vSphere 6. However, once VMware updates its guide to include vSphere 6 and vendors complete their testing, you’ll be able to easily view compatibility with the latest version using VMware’s website. Hardware is added to the list as it is certified, not just at major vSphere releases.
Which server is the right server? The answer to that question depends on many factors. The number of CPU cores is often used as a determining factor, but you should also consider the total number of RAM slots. A higher number of RAM slots means you can use lower-cost, lower-density RAM modules and still reach high memory configurations. You should also consider server expansion options, such as the number of available Peripheral Component Interconnect Express (PCIe) buses, expansion slots, and the types of expansion cards supported in the server. If you are looking to use converged storage in your environment, the number of local drive bays and the type of storage controller are additional considerations. Finally, be sure to consider the server form factor; blade servers have advantages and disadvantages when compared to rack-mount servers.
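To see why the RAM slot count matters, consider a quick back-of-the-envelope comparison. The following Python sketch is purely illustrative; the slot counts, DIMM sizes, and prices are assumptions invented for the example, not vendor specifications or real pricing.

```python
# Illustrative only: the slot counts and per-DIMM prices below are
# assumptions for the example, not vendor specifications or real pricing.
TARGET_GB = 384  # desired memory per host

candidates = {
    "2-socket server, 16 DIMM slots": 16,
    "2-socket server, 24 DIMM slots": 24,
}

# Hypothetical prices; higher-density DIMMs typically cost more per gigabyte.
dimm_prices = {16: 180, 32: 450}  # module size in GB -> assumed price

for name, slots in candidates.items():
    for size, price in sorted(dimm_prices.items()):
        modules = -(-TARGET_GB // size)  # round up to whole DIMMs
        if modules <= slots:
            print(f"{name}: {modules} x {size} GB DIMMs, roughly ${modules * price}")
            break  # lowest density that fits the available slots
    else:
        print(f"{name}: cannot reach {TARGET_GB} GB with these DIMM sizes")
```

With more slots, the host reaches the same memory target with lower-density modules, which in this made-up example works out cheaper; the same exercise applies to whatever capacities and prices you are actually quoted.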
Determining a Storage Architecture
Selecting the right storage solution is the second major decision you must make before you proceed with your vSphere deployment. The lion’s share of advanced features within vSphere – features like vSphere DRS, vSphere HA, and vSphere FT – depend on the presence of a shared storage architecture. Although we won’t discuss any particular brand of storage hardware in depth, VMware’s own Virtual SAN (VSAN) is one option we’ll cover in more detail in Chapter 6. As stated earlier, vSphere’s dependency on shared storage makes choosing the correct storage architecture for your deployment as critical as choosing the server hardware on which to run ESXi.
The Compatibility Guide Isn’t Just for Servers
VMware’s Compatibility Guide isn’t just for servers. The searchable guide also provides compatibility information on storage arrays and other storage components. Be sure to use the searchable guide to verify the compatibility of your host bus adapters (HBAs) and storage arrays to ensure the appropriate level of support from VMware.
VMware also provides the Product Interoperability Matrixes to assist with software compatibility information; they can be found here:
http://www.vmware.com/resources/compatibility/sim/interop_matrix.php
Fortunately, vSphere supports a number of storage architectures out of the box and has implemented a modular, plug-in architecture that will make supporting future storage technologies easier. vSphere supports storage based on Fibre Channel and Fibre Channel over Ethernet (FCoE), iSCSI-based storage, and storage accessed via Network File System (NFS). In addition, vSphere supports the use of multiple storage protocols within a single solution so that one portion of the vSphere implementation might run over Fibre Channel while another portion runs over NFS. This provides a great deal of flexibility in choosing your storage solution. Finally, vSphere provides support for software-based initiators as well as hardware initiators (also referred to as HBAs or converged network adapters), so this is another option you must consider when selecting your storage solution.
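Because you can mix protocols within a single deployment, it can help to lay the options and their typical adapter requirements out side by side. The short sketch below is just one illustrative way to summarize them; the adapter notes are generalizations, and specific hardware should always be checked against the Compatibility Guide.

```python
# Illustrative summary of common vSphere storage protocol options.
# The adapter notes are generalizations; always confirm specific hardware
# against the VMware Compatibility Guide.
storage_options = [
    # (protocol, typical adapter, initiator choices)
    ("Fibre Channel", "FC HBA",                           ["hardware"]),
    ("FCoE",          "Converged network adapter (CNA)",  ["hardware", "software"]),
    ("iSCSI",         "Standard NIC or iSCSI HBA",        ["software", "hardware"]),
    ("NFS",           "Standard NIC",                     ["built-in NFS client"]),
]

for protocol, adapter, initiators in storage_options:
    print(f"{protocol:<14} adapter: {adapter:<34} initiator: {', '.join(initiators)}")
```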
What Is Required for Fibre Channel over Ethernet Support?
Fibre Channel over Ethernet (FCoE) is a somewhat newer storage protocol. However, because FCoE was designed to be compatible with Fibre Channel, it looks, acts, and behaves like Fibre Channel to ESXi. As long as drivers for the FCoE converged network adapter (CNA) are available – and this is another place where the VMware Compatibility Guide comes into play – support for FCoE should not be an issue.
When determining the correct storage solution, you must consider these questions:
• What type of storage will best integrate with your existing storage or network infrastructure?
• Do you have experience or expertise with some types of storage?
• Can the storage solution provide the necessary performance to support your environment?
• Does the storage solution offer any form of advanced integration with vSphere?
The procedures involved in creating and managing storage devices are discussed in detail in Chapter 6.
Integrating with the Network Infrastructure
The third and final major decision of the planning process is how your vSphere deployment will integrate with the existing network infrastructure. In part, this decision is driven by the choice of server hardware and the storage protocol.
For example, an organization selecting a blade form factor may run into limitations on the number of network interface cards (NICs) that can be supported in a given blade model. This affects how the vSphere implementation will integrate with the network. Similarly, organizations choosing to use iSCSI or NFS instead of Fibre Channel will typically have to deploy more NICs in their ESXi hosts to accommodate the additional network traffic or use 10 Gigabit Ethernet (10GbE). Organizations also need to account for network interfaces for vMotion and vSphere FT.
Until 10GbE became common, ESXi hosts in many vSphere deployments had a minimum of 6 NICs and often 8, 10, or even 12 NICs. So, how do you decide how many NICs to use? We’ll discuss some of this in greater detail in Chapter 5, “Creating and Configuring Virtual Networks,” but here are some general guidelines (a quick tally sketch follows the list):
• The ESXi management network needs at least one NIC. I strongly recommend adding a second NIC for redundancy. In fact, some features of vSphere, such as vSphere HA, will generate warnings if the hosts do not have redundant network connections for the management network.
• vMotion needs a NIC. Again, I heartily recommend a second NIC for redundancy. These NICs should be at least Gigabit Ethernet. In some cases, this traffic can be safely combined with ESXi management traffic, so in those cases two NICs can handle both ESXi management and vMotion.
• vSphere FT (if you will be using that feature) needs a NIC. A second NIC would provide redundancy and is recommended. This should be at least a Gigabit Ethernet NIC; depending on how many vCPUs the FT-enabled VM has, a 10GbE NIC may be required.
• For deployments using iSCSI, NFS, or VSAN, at least one more NIC is needed, and it must be Gigabit Ethernet or 10GbE. Although you can get by with a single NIC, I strongly recommend at least two.
• Finally, at least two NICs are needed for traffic originating from the VMs themselves. Gigabit Ethernet or faster is strongly recommended for VM traffic.
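Putting these guidelines together, a quick tally shows how the NIC count adds up. The function below is a rough sketch that simply sums the recommendations above; the parameter names and the assumption that management and vMotion can share a redundant pair are choices made for illustration, not fixed rules.

```python
# Tally the NIC recommendations above for a hypothetical host design.
# The counts follow the general guidelines in this section; adjust for
# your own redundancy and feature choices.
def nic_count(use_ft=False, ip_storage=False, combine_mgmt_vmotion=True):
    nics = 0
    nics += 2 if combine_mgmt_vmotion else 4  # management + vMotion, redundant
    nics += 2 if use_ft else 0                # vSphere FT, with redundancy
    nics += 2 if ip_storage else 0            # iSCSI/NFS/VSAN traffic
    nics += 2                                 # VM traffic
    return nics

# Example: iSCSI storage, no FT, management and vMotion combined
print(nic_count(ip_storage=True))  # 6 NICs
# Example: FT plus IP storage, management and vMotion on separate pairs
print(nic_count(use_ft=True, ip_storage=True, combine_mgmt_vmotion=False))  # 10 NICs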
This