iSCSI Boot from SAN with Cisco UCS

Configuring Fibre Channel boot-from-SAN is tricky. iSCSI boot-from-SAN is similar, but has additional considerations.

Boot-from-SAN with iSCSI storage is more involved than FC boot-from-SAN, and much more involved than local-storage ESXi installations. There’s front-end work to configure boot-from-SAN on the UCS side, and then per-host work to configure the boot LUNs, test boot behavior, troubleshoot (don’t discount this!), and so forth. Expect a fair amount of debugging before you can roll out each host’s settings in a repeatable, production-ready way. If there’s a legitimate reason to boot from SAN, such as a large number of hosts and a desire for truly stateless servers with service profile portability, then boot-from-SAN makes sense. With fewer than about 10 hosts of a given workload type (ESXi for servers, ESXi for VDI, Linux, Windows, etc.), the extra configuration effort makes boot-from-SAN less advantageous.

iSCSI boot-from-SAN has some configuration details that you need to keep in mind: 

  • In any boot-from-SAN scenario, each host needs its own boot LUN. I typically configure 5-GB boot LUNs (FC or iSCSI), which is about 4 GB more than ESXi needs, but it’s better to allow a good bit of overhead. Each host’s iSCSI initiator must be explicitly mapped to its own boot LUN.
  • ESXi cannot write its core dump to the iSCSI boot LUN. You can mitigate this by installing the ESXi Dump Collector (included with the vCenter installer) on a VM in the environment, perhaps the vCenter server or a utility box, and pointing all the hosts to it. There’s a rough command sketch after this list.
  • ESXi needs a persistent logging location. Boot-from-SAN (and USB and SD installs as well) requires you to create a log folder on a datastore and point the host to it. After ESXi installation, the host will complain that it has no persistent storage for system logs; VMware has a KB article on setting the persistent log location, and a quick Google search turns it up. The sketch after this list shows the esxcli version.
  • iSCSI boot-from-SAN with ESXi automatically creates a vSwitch and a port group with a single NIC for iSCSI boot. Do not modify this vSwitch or port group; its settings may change on a subsequent reboot if the primary boot NIC isn’t available. This causes trouble later with the iSCSI software initiator, which UCS also creates automatically on the ESXi host when you configure boot-from-iSCSI, and it’s the strongest argument against booting ESXi from iSCSI.
  • Don’t configure fabric failover for the UCS boot NICs. Configure one NIC per fabric, with no failover. Use one for the primary boot NIC and the other for the secondary boot NIC.
  • iSCSI boot-from-SAN creates a software iSCSI initiator on the host as part of the boot process. This initiator comes from the pool(s) you create in UCS Manager. As the keen-eyed reader will recognize, you cannot create more than one iSCSI software initiator on an ESXi host. Therefore, all iSCSI volumes must be mounted through this initiator.
  • If you have other IP storage (NFS, for example), configure it to use separate NICs if possible. Put NFS on its own vSwitch (one NIC on fabric A, one on fabric B) and in a different subnet than iSCSI. On NetApp arrays this is easy to do, because you can create separate VIFs for iSCSI and NFS traffic.
  • This one really warrants its own post, but I’ll keep it brief here: if you have separate uplinks from the fabric interconnects to separate upstream switches (a common case when you have dedicated storage switches), you must configure Disjoint Layer 2 to keep the storage traffic on only the correct uplinks, or you will have nightmares making SAN boot work consistently.
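
To make the core dump and logging points above concrete, here is a rough sketch of the esxcli commands involved. This is a sketch rather than an official procedure: it assumes an ESXi 5.x-era host, and the Dump Collector IP (10.10.10.50), vmkernel interface (vmk0), datastore name (datastore1), and per-host log folder are all placeholders you would swap for your own values.

    # Send core dumps to a network Dump Collector instead of the boot LUN
    # (vmk0, 10.10.10.50, and port 6500 are placeholders for your environment)
    esxcli system coredump network set --interface-name vmk0 --server-ipv4 10.10.10.50 --server-port 6500
    esxcli system coredump network set --enable true
    esxcli system coredump network get

    # Give the host a persistent log directory on a shared datastore
    # ("datastore1" and the per-host folder name are placeholders)
    esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs/esx01
    esxcli system syslog reload

    # Look at (but do not modify) the vSwitch and software iSCSI adapter
    # that the UCS iSCSI boot configuration created
    esxcli network vswitch standard list
    esxcli iscsi adapter list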

A bit more on why you may not want to boot from iSCSI storage. When you configure iSCSI storage on an ESXi host that isn’t booting from SAN, you manually create the software iSCSI initiator on the host and then bind your storage vmkernel NICs to it, which is what enables ESXi iSCSI storage multipathing. With boot-from-iSCSI, that multipathing setup isn’t possible. Good thing UCS hosts have 10-Gbps NICs, right?

When you configure boot-from-iSCSI in UCS, UCS creates the iSCSI software initiator on the ESXi host, as mentioned earlier, using an IQN from your initiator pool for the fabric the host boots from. Say you create a second iSCSI vSwitch for storage and configure its NICs per VMware’s recommended practice: two vmkernel ports and two NICs, with one NIC Active and one Unused per vmkernel port. The moment you bind those vmkernel NICs to the iSCSI initiator, all connectivity to the iSCSI storage drops. The only way I’ve found to keep it connected, so far, is to unbind the separate storage NICs from the software iSCSI initiator. Thus the single-link constraint.
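
For reference, here is roughly what the normal port-binding workflow looks like, and what I end up doing on an iSCSI-boot host. The adapter and vmkernel names (vmhba33, vmk2, vmk3) are placeholders; check the adapter list for the real names on your host, and treat this as a sketch of my experience rather than an official fix.

    # Identify the software iSCSI adapter that UCS boot created
    esxcli iscsi adapter list

    # On a host that does NOT boot from iSCSI, this is the usual port binding
    # (vmhba33, vmk2, and vmk3 are placeholders)
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3

    # On an iSCSI-boot host, that binding is exactly what drops connectivity.
    # List the bindings and remove the extra ones to fall back to the single link.
    esxcli iscsi networkportal list
    esxcli iscsi networkportal remove --adapter vmhba33 --nic vmk2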

A few additional tips:

  • Watch the host boot through the KVM and make sure the software initiator is logging into the storage. You should see this happen during the boot sequence, and you should see the correct drivers load as ESXi starts. (A quick verification sketch follows this list.)
  • Check the SAN to make sure you are seeing only the desired initiators logging into the boot LUNs.
  • Perform several reboots after installing ESXi and watch the KVM to see that the hosts boot consistently before continuing with any other host configuration.
  • This one also deserves its own topic, but I’ll give it a bullet here. Never route storage traffic if you can avoid it; keep it in the same VLAN and do it all at layer 2. Routing it is suboptimal at best. If you must route storage traffic, configure QoS and make it the highest priority on the network, end to end. Block storage is designed to operate in a lossless environment with low, consistent latency. Be sure you configure the network to provide that level of performance.
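
If it helps, here is the kind of quick check I run from the ESXi shell after those test reboots to confirm the initiator, boot LUN, and logging are all healthy. Nothing here is UCS-specific; it’s just standard esxcli, and the output is what you compare against what you expect to see on the SAN side.

    # Confirm the software iSCSI adapter is present and has live sessions
    esxcli iscsi adapter list
    esxcli iscsi session list

    # Confirm the boot LUN is visible and see which path it is using
    esxcli storage core device list
    esxcli storage core path list

    # Confirm the persistent log location stuck (logdir should point at your datastore)
    esxcli system syslog config get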

Rus


One Response to iSCSI Boot from SAN with Cisco UCS

  1. William Lingle says:

    Hello and thank you for this article,

    This is the only information I can find about booting UCS/ESXi from an iSCSI SAN.

    First, let me say that your article eliminated many headaches we had trying to get this to work. So thank you!

    I have some questions I hope you can answer. You say not to use hardware failover, but to assign a vNIC to each fabric.

    Could you elaborate on why not hardware failover? And then, since I now have two vNICs in my vSwitch/port group in ESXi (assigned the same iSCSI initiator from UCS), why wouldn’t I be able to use multipathing? Thanks in advance.

    Hope you can answer these questions and thank you for such a great article.

    Bill
