I have been testing various hyper-converged storage platforms that can coexist with ESX, along with some bare-metal software storage platforms. In all cases I am using embedded RAID controllers in the servers, and in some cases add-on cards. I have two cards in use currently: one is an Intel-flashed LSI card and the other is the SuperMicro LSI 2208 embedded in the FAT Twin. While in all of these cases you can use single-disk RAID0 logical volumes, doing so adds a lot of extra steps and on many of my systems it offers no gain.
WARNING: Proceed at your own risk, I recommend verifying that no data will be impacted by this task. I also encourage you to confirm that the JBOD (aka pass-through mode) configuration is supported with your hardware and your storage platform.
It may be possible to do some of these steps from the boot BIOS; however, in the case of the Intel-flashed LSI cards the boot BIOS is really horrible. I spent an hour trying to navigate it over remote console via the Intel Remote Management Module…but it was absolutely painful and the only thing that worked was the wizard, which created undesirable configurations. I ended up working around this with the following steps:
- Download a live boot CD Linux image
- Attach the ISO to the server via the remote console's virtual media feature
- Boot Linux image
- Configure networking on Linux
- Download MegaCLI to local workstation, then SCP it to the Linux machine
- Install MegaCLI
- Run MegaCLI commands
In more detail:
I downloaded MegaCLI and placed it in my Dropbox folder, which made it easy to just wget it from the Linux server after it booted. Once Linux was up I configured an IP address on the appropriate network interface with ifconfig, added DNS to resolv.conf, and set a default gateway. I could then SSH in, where copy and paste let me run the same commands quickly across my dozen hosts. In my case I selected the CentOS 6.5 LiveCD from a nearby mirror, but you should be able to use any reasonably recent bootable Linux CD.
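The network setup described above can be sketched roughly as follows. The interface name, addresses, and nameserver here are placeholders, not values from my environment; substitute whatever fits your network:

```shell
# Minimal sketch of the live-CD network setup described above.
# eth0, the IP addresses, and the nameserver are placeholders.
ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up   # bring up the interface
route add default gw 192.168.1.1                      # set the default gateway
echo "nameserver 192.168.1.1" >> /etc/resolv.conf     # add DNS resolution
service sshd start                                    # allow SSH in for copy/paste
```

On some live CDs you may also need to set a root password (`passwd`) before SSH logins are accepted.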
I will warn that doing these steps with any data in place will absolutely lead to data destruction. I am not liable for how quickly the -CfgLdDel command obliterates any existing logical volume configuration; proceed at your own risk.
Here are the commands I would run after SSHing into the Linux server.
wget https://<URL to your location of the MegaCLI rpm>
rpm -ivh MegaCli-8.07.14-1.noarch.rpm
./MegaCli64 -CfgLdDel -LALL -aALL
./MegaCli64 -AdpSetProp EnableJBOD 1 -aALL
./MegaCli64 -PDMakeGood -PhysDrv[252:1,252:2,252:3,252:4,252:5,252:6,252:7] -Force -a0
./MegaCli64 -PDMakeJBOD -PhysDrv[252:1,252:2,252:3,252:4,252:5,252:6,252:7] -a0
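Once the sequence above has run, it is worth sanity-checking the result before handing the disks to your storage platform. This is a sketch using documented MegaCLI queries; confirm the exact output strings against your controller and MegaCLI build:

```shell
# Confirm the JBOD property is now enabled on each adapter
./MegaCli64 -AdpGetProp EnableJBOD -aALL
# Each converted drive should report a firmware state of "JBOD"
./MegaCli64 -PDList -aALL | grep -i "firmware state"
```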
The above commands execute against every possible directly connected target; if you have JBODs with SAS expanders you will need to tailor them to your environment. The final flag, "-a0", targets adapter 0. If you have more than one adapter you can repeat the command for each adapter as needed, or alternatively specify "-aALL". You can see the list of adapters and their corresponding IDs with something similar to this:
./MegaCli64 -AdpAllInfo -aALL
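If your enclosure and slot IDs differ from the 252:1 through 252:7 used above, you can build the -PhysDrv list from -PDList output. The helper below is a hypothetical sketch (the function name is mine, not part of MegaCLI); the sample file mimics a couple of -PDList stanzas so the snippet is self-contained:

```shell
# Sketch: build a "-PhysDrv[E:S,...]" argument list from -PDList output.
# pdlist_to_physdrv is a hypothetical helper, not a MegaCLI command.
pdlist_to_physdrv() {
  awk '/Enclosure Device ID:/ {enc=$NF}
       /Slot Number:/ {printf "%s%s:%s", sep, enc, $NF; sep=","}' "$@"
}

# Sample input standing in for real controller output
cat > /tmp/pdlist-sample.txt <<'EOF'
Enclosure Device ID: 252
Slot Number: 1
Enclosure Device ID: 252
Slot Number: 2
EOF

pdlist_to_physdrv /tmp/pdlist-sample.txt   # prints 252:1,252:2
```

On a real host you would feed it live output instead, e.g. `./MegaCli64 -PDList -a0 | pdlist_to_physdrv /dev/stdin` or by redirecting to a file first.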
If you actually have ESX installed, there is a MegaCLI build for ESX as well; I was able to find it in the LSI download archives in the 8.07.07_MegaCLI package. I cannot say whether it is still maintained or supported by VMware or ESX. If you use a version of MegaCLI that is not 64-bit (e.g. the ESX build), the command input is the same; the binary is simply "MegaCli" rather than "MegaCli64".
If you prefer, you can use the same method, with different MegaCLI commands, to create a single RAID0 volume for every physical device. The downside of having a logical volume in the mix is that your OS may not be able to detect which disk devices are SSDs vs. spinning media; however, if you want any HBA cache to operate you will have to use RAID0 mode. The command is:
./MegaCli64 -CfgEachDskRaid0 -aALL
This command will create a single-disk RAID0 logical volume for every attached disk on all controllers. Alternatively, you can append cache-policy settings to the command to enable those advanced features, depending on your desired configuration.
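As an example, the -CfgEachDskRaid0 command accepts the same optional cache-policy keywords as -CfgLdAdd. Verify the keywords against your MegaCLI version before running, and note that CachedBadBBU risks data loss on power failure:

```shell
# Sketch: per-disk RAID0 with write-back cache (WB), read-ahead (RA),
# and cached I/O. CachedBadBBU keeps write-back even without a healthy
# BBU, which trades safety for performance.
./MegaCli64 -CfgEachDskRaid0 WB RA Cached CachedBadBBU -aALL
```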
This is by no means an all-encompassing post on MegaCLI, which is a very powerful tool with a lot of options to pick from…so proceed carefully.
5 thoughts on “Switching LSI SAS 2208 and similar chipsets to JBOD mode”
The following link:
says that configuring an LSI MegaRAID 2208 card to JBOD mode works, but is not stable.
Did you notice any signs of instability after making such a switch? I have some servers from Quanta, and I noticed that the RAID controller is basically a MegaRAID 2208-based solution. I was just about to have a go with your recipe, but would like to hear your feedback before doing so.
This got stuck in some spam filter, sorry for the 7+ month delay in approval and response. I have moved to new hardware and different testing so I can’t speak to it personally, however I have heard of issues with the 2208 in JBOD mode. Like many things, YMMV and it is subject to change I guess.
A JBOD disk is NOT the same as RAID. It is specifically NOT a “RAID0” as JBOD is not a type of RAID at all.
JBOD is just JBOD. It is a disk that is independent of the RAID controller’s enhanced functions.
Pardon. I just wanted to add that as your page is coming up in search results and it seems that some people are led to believe that they are the same. I wanted to emphasize that point. 🙂
I’m not sure what this adds, as I never stated that JBOD mode was the same as RAID0. In fact the sentence on this from above states “While in all of these cases you can use single-disk RAID0 logical volumes, doing so adds a lot of extra steps and in many of my systems it offers no gain.”
Clearly JBOD and RAID are not the same mode; however, single-disk RAID0 offers no feature advantage for most use cases over JBOD mode. If you are just looking for a higher-performance, greater-queue-depth initiator to access your disks (vs. using the onboard SATA controller), then JBOD mode is a perfect replacement without additional complexity. In fact, for many storage software platforms you have to take additional steps in the single-disk RAID0 configuration to disable the "advanced features" of the controller to keep them from conflicting with the storage software stack.