Booting RHEL 6 from NetApp iSCSI

Today we’re going to take a look at booting RHEL 6.1 x86_64 from a NetApp iSCSI LUN, but first, let’s consider why we would want to do this in the first place.  Sure, it’s cool from a techie standpoint, but doing this in the data center requires a compelling argument.

  • Centralizing the boot disks means that backup and DR are vastly simplified.  You don’t need a separate backup network to all of your servers, or even just your critical servers.  Provided you have the proper configuration and procedures in place for your storage, entire volumes of boot LUNs, or individual boot LUNs, can quickly be restored.
  • Using NetApp as the backend storage will actually reduce the overall footprint – as long as deduplication is enabled and all FlexVols and LUNs are thinly provisioned.
  • All the benefits of an FCP boot solution, without the added infrastructure costs of FC switches, FC HBAs, or dedicated FC SAN frames.
  • Additional flexibility above and beyond FCP when used in conjunction with NetApp MultiStore.  MultiStore provides the feature of ‘vFilers’ – in short, lightweight instances of Data ONTAP that can be moved from physical controller to physical controller as necessary.  The concept is similar to that of a virtual machine on a hypervisor.  (I see another blog post idea….)

So, what did I use as far as equipment?  Here’s a rundown of the components involved:

  • NetApp FAS3170 (any FAS controller would have been fine)
  • Fujitsu Primergy RX200 S6 Server (I like the iSCSI boot capable onboard NICs)
  • Cisco Catalyst 4948 Switch (ubiquitous switch)

NOTE: This is a DVD install only, but I promise I will put up another post that covers the iSCSI boot install via Kickstart in the near future.

Here’s what I did to install and boot from the NetApp iSCSI LUN:

I created an iSCSI LUN, an igroup (initiator group), and mapped the igroup to the LUN on the NetApp controller:

> lun create -s 40g -t linux -o noreserve /vol/boot_vol/boot_lun
> igroup create -i -t linux igroup_rhel6
> lun map /vol/boot_vol/boot_lun igroup_rhel6 0

Specifically, this creates a 40GB thin provisioned LUN optimized for Linux, in the /vol/boot_vol volume.  The iSCSI igroup is created and contains a single iSCSI initiator that is then mapped to the LUN we created.  (What I didn’t show was that I also created a VLAN interface just for iSCSI traffic and that the switch port is configured for that same VLAN.)
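For reference, that VLAN plumbing looked something like the following.  The interface names, VLAN ID, and addresses here are placeholders, not my actual lab values; the controller side is Data ONTAP 7-mode syntax and the switch side is Cisco IOS:

```
# On the NetApp controller: create a tagged VLAN interface and give it an IP
> vlan create e0a 100
> ifconfig e0a-100 192.168.100.10 netmask 255.255.255.0 up

# On the Cisco 4948: the server-facing port is an access port in the same
# VLAN, so the switch adds the tag and the host doesn't have to
interface GigabitEthernet1/10
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
```

(The switch port facing the NetApp needs to carry VLAN 100 tagged, since the controller interface is tagged.)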

Next, I booted the Fujitsu server from the RHEL 6.1 ISO and waited for the prompt to configure the onboard iSCSI, then typed [CTRL-d], where the following menu greeted me:

From there, I pressed [P] to make the 2nd device the Primary boot device, and the next menu came up:

I entered the iSCSI Boot Configuration, which triggered the last menu:

Here I specified the Initiator Name & IP information as well as the Target Name & IP information, leaving the default iSCSI port of 3260.

NOTE: If there is not a default gateway on the storage network, then “fake it” by entering the IP of the storage target to avoid ‘dracut’ complaining that it can’t resolve the gateway via ARP.
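To see why dracut cares: at boot it parses an `ip=` argument of the form `client:server:gateway:netmask:hostname:interface:autoconf` (see `man dracut.kernel` on your release for the exact syntax), and an unreachable gateway makes it stall.  With hypothetical addresses, the “fake it” trick amounts to this:

```
# Placeholder addresses: initiator .20, NetApp target .10, no real router on the VLAN.
# The gateway field simply repeats the target's IP so the ARP check succeeds.
ip=192.168.100.20:192.168.100.10:192.168.100.10:255.255.255.0:rhel6host:eth0:none
```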

I exited the menus, saved the configuration and rebooted to the RHEL 6.1 ISO.

Go through the installation menu as normal until you reach the screen below, where you will select Specialized Storage Devices:

From there, click on the Other SAN Devices tab, where the iSCSI device should be listed (the onboard NIC logged into the iSCSI device earlier in the process):

From there, install as normal.
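If you want to double-check the session before committing to the install, anaconda gives you a root shell on a virtual console (Ctrl+Alt+F2).  Something along these lines should confirm the target login and the visible LUN (exact output varies by target):

```
# List the active iSCSI session(s); the NetApp target IQN should appear
iscsiadm -m session

# Confirm the 40GB LUN shows up as a SCSI block device
cat /proc/partitions
```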

So there we go – it’s all very straightforward.  This is a perfect solution for the non-FCP datacenter that still requires or prefers SAN boot.  All of the storage efficiencies are still available, DR & backup are easier to plan and deploy, and you gain added flexibility – all without any additional infrastructure cost.

That’s a win-win-win in my book.

Thanks for reading,


13 thoughts on “Booting RHEL 6 from NetApp iSCSI”

    1. Hi qemm,

      I’m not sure what you’re asking, but perhaps you’re wondering about having a VLAN tag on the server side boot device? The short answer is that the server side doesn’t have to have the VLAN tag as it will be added by the switch. (If you’re using an iSCSI HBA, you may be able to add a VLAN id to the HBA BIOS.) Please reply back if this isn’t what you were asking about.



    1. Hi Beau,

      I apologize for the delayed response. I have in fact successfully tested mpath boot devices. With RHEL 5, ‘mpath’ had to be specified as a kernel boot option, but RHEL 6 picks it up automatically.
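      For anyone still on RHEL 5, that meant appending the option at the installer boot prompt, e.g.:

      ```
      boot: linux mpath
      ```

      (With multipath active, the root device ends up under /dev/mapper rather than /dev/sdX; the exact name depends on your multipath configuration.)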

      Captain KVM

  2. Hi Captain;
    I’m constantly running into the problem of a kernel panic after rebooting a fresh CentOS 6.5 install via iSCSI. The machine installs just fine; however, when I reboot it, it kernel panics. I turned up debugging on the kernel boot line and get the error ‘no root device “xxxx” found’. Any thoughts as to why the iSCSI drive works fine during install but then disappears when actually trying to boot from SAN? Thanks in advance!

    1. Hi Adam,

      Thanks for reaching out. Can you give me a little bit more info? Specifically, what kind of initiator are you using (hardware? software?), and what is your iSCSI target (NetApp? Linux host?)?

      Some things to check in the meantime:
      At the end of the install, before you allow the host to reboot, go to the virtual terminal that gives you access to the command line. Take a look at the grub.conf file and be sure that the boot line is what you expect.
      I’ve seen situations where the iSCSI boot was configured using an iSCSI interface, but the boot failed because the software initiator was engaging first.
      Also, if using a hardware initiator, be sure that it is set as the boot device and that the boot priority is correct.

      Captain KVM

      1. Hardware: NetApp FAS2240 (target); Dell M620 using Broadcom dual 10G NICs, iSCSI HBA activated.

        My network config is a little tricky: my networks use VLAN tags, and I’ve set this up properly in the iSCSI BIOS as well. Whenever I drop down to a dracut shell, the network is not active with the VLAN tags up.
        I can manually bring the network up and connect to the iSCSI target from the dracut shell. (Do you have any experience with VLAN tags during initrd?)

        The RHEL 6.5 release notes document a bug in dracut that I think is lending to some of my headaches:

        “dracut component

        For iSCSI boot from SAN on Dell systems which enable setting biosdevname=1 by default, the installation completes successfully, but the system will not be able to mount the rootfs partition after reboot. This is because of a bug in Dracut where the boot network interface is not brought up if biosdevname naming is used. In order to install and reboot the system successfully in this case, use the biosdevname=0 installation parameter to avoid biosdevname naming.”

        I reinstalled and set the biosdevname parameter; however, I’m still having some issues with the network activating and mounting the iSCSI root.

        I really appreciate you taking the time to reply! Thanks!

        1. Hi Adam,

          The bug that you found is actually quite helpful as it really sounds like you set things up correctly. And I was about to ask if you enabled VLAN tagging in the iSCSI BIOS, as I seem to remember Dell allowing that… but you already answered in the affirmative.

          Back to the bug. After you set that variable to account for the dracut bug, did you re-run dracut? Actually, let’s take it a step further: install your system, unset the biosdevname variable, THEN manually load the 802.1q module (`modprobe 8021q`), THEN run dracut (`dracut`), THEN reboot. Let’s see if that helps you out. Here’s my working theory: because the VLAN tag is being called from the hardware initiator and not the OS, the module is never loaded. (Dracut builds the initrd/initramfs based on what is loaded at boot time.) The boot line in grub.conf is likely fine, as it sounds like it is pointing to your LUN, but it times out or outright fails because the switch doesn’t recognize or is blocking the VLAN tag. So, if that’s the case, we manually load the VLAN module (8021q), then manually run dracut, and it rebuilds the initrd/initramfs based on what is then loaded.
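          The steps above can be sketched as follows, run on the installed system before the final reboot (the image path below matches the RHEL 6 default, so adjust if your layout differs):

          ```
          # 1. Load the VLAN module so it is present when dracut inspects the system
          modprobe 8021q

          # 2. Rebuild the initramfs for the running kernel; --force overwrites
          #    the existing image in /boot
          dracut --force /boot/initramfs-$(uname -r).img $(uname -r)

          # 3. Reboot and watch whether the tagged interface comes up in early boot
          reboot
          ```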

          Hopefully this sets you straight, otherwise re-ping me and we’ll see what other options we have.

          Captain KVM

          1. Thanks for the reply!
            It turns out I was able to pass the option vlan=(physdev.vlantag:physdev) to the grub command line along with the iSCSI info (host & target), and it finally booted.

            Now, the iscsi_firmware option was also passed, which should pull the iBFT info so I wouldn’t have to enter this info manually, but for some reason that did not work. Maybe I’ll file a Bugzilla report with the Red Hat folks about this not working as expected.

            One thing I did notice previously: 802.1q was already loaded before I modified anything in the initrd. It would also connect to the iSCSI target (as evidenced by the block device showing up, as well as iSCSI-related kernel messages) but would then fail to boot from the device.

            Thanks again for the pointers.
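            For reference, the RHEL 6 dracut man pages document the syntax as vlan=&lt;vlanname&gt;:&lt;phydevice&gt;, plus a netroot=iscsi:... form for specifying the target manually. With placeholder addresses and IQN (not my real values; check `man dracut.kernel` on your release), the addition to the kernel line looked something like:

            ```
            vlan=eth0.100:eth0 netroot=iscsi:@192.168.100.10::3260::iqn.1992-08.com.netapp:sn.12345
            ```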


          2. Hi…it’s me again 🙂
            So, I previously had my setup working. I had to go and complicate things by using an LACP bond on eth0 & eth1. The problem is, it appears that VLAN tags on bonded interfaces aren’t supported in initrd/dracut, hence “8021q VLANs not supported on bond0”. Do you have any experience with this? I’ve tried passing the bond/VLAN creation flags via the grub command line; they seem to execute just fine but return the aforementioned error.
            I also get this same behavior when using the dracut shell (rdshell) and trying to create the VLANs manually on the bond0 device.

            Another thing I tried was not using bonded interfaces and just VLAN-tagging the physical interface; that allows an iSCSI boot just fine, but then the network scripts that init loads when actually booting the OS seem to choke, because I’m trying to have bond interfaces with VLAN tags that are already shared by my eth0 device.

            Thoughts? Sorry to monopolize your time here 🙂

          3. Hi Adam,

            No apologies necessary. I’ve got two posts on “maximizing 10GbE”; you should check them out, as they deal with bonding and VLANs.


Agree? Disagree? Something to add to the conversation?