Simplified remote access to a home lab

One of the challenges of being someone who travels regularly is that you are often not near your lab. An investment in a home lab really requires the ability to access it from anywhere if it is to have any hope of meeting its (perhaps falsely perceived) ROI. I’ve had a Unix/Linux based workstation for more of my working life than I’ve had a Windows one; sure, Windows was always involved, but as a virtual machine on VMware Workstation (Linux) and now VMware Fusion (Mac).

There are insecure, complex and/or expensive options, such as buying a Cisco ASA or some other “firewall” that supports VPN…but that doesn’t fit the goals and requirements for my lab, and it is the expensive option. The possibly more complex option would be to build a firewall from a PC, but that is high maintenance and I prefer my regular access to be simple and reliable (thus I have a Mac + AirPort household, other than the 3 lab servers). The insecure option would be to expose RDP on your Windows guest directly to the Internet; that is not an option for me. My service provider background makes me paranoid about Windows security, or lack thereof.

I have chosen what is, in my mind, the cheapest and simplest option. Linux virtual machines are lightweight, use few resources, and you can always use a non-persistent disk so that a simple reboot reverts to a known config (or restore from a snapshot). I leverage SSH tunneling, which is often overlooked while people pursue more complex L2TP or IPsec based options…but SSH is simple, seldom blocked on networks, and does the job. I have not gone as far as using L3 tunneling, though that is an option with SSH.

Firewall Settings

In my network I have 1 open port on my “firewall” (Apple Airport Extreme) which is forwarded to a minimal Linux virtual machine with a static (private) IP address.

  • Public Internet –> Port 8080 on firewall –> Port 22 on Linux

I would recommend creating multiple port forwards on your firewall; this gives you other options if the one you chose is blocked. I’ve had good luck with 8080 and 8022 so far, but some environments may block those. There is nothing to say you can’t use port 80, however any forced proxy server will break your SSH session access, as will protocol-inspecting firewalls, and some service providers block ports 25, 80, 443 and others.

The beauty is that very little needs to be done on the Linux side. I would recommend editing your SSH config on the Linux VM to prevent root access. Keep in mind you really must create your non-root users before you do so; otherwise you cannot log in via SSH and will have to add those accounts via the console.

Secure Linux SSH Settings

I would recommend making sure your Linux VM is up to date using the correct update process for whichever distribution you select. The SSH server is pretty secure these days, but when compromises are found you should update to apply the relevant patches.

I would recommend editing the config file for sshd (/etc/ssh/sshd_config). Find the line that states PermitRootLogin and set it to “no”; if it is commented out, remove the “#” and set it to “no”.

  • PermitRootLogin no

Now restart SSH: $: sudo /etc/init.d/sshd restart
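If you prefer to script the change, the edit is a one-line sed. Shown here against a scratch copy so nothing changes until you review it; point the variable at /etc/ssh/sshd_config (with sudo) when you are ready:

```shell
# Demonstrated on a scratch copy -- substitute /etc/ssh/sshd_config (via sudo)
# once you are happy with the result.
FILE=/tmp/sshd_config.demo
printf '#PermitRootLogin yes\n' > "$FILE"   # a typical commented-out default
# Uncomment the line (if needed) and force the value to "no"
sed -i 's/^#\{0,1\}PermitRootLogin.*/PermitRootLogin no/' "$FILE"
grep PermitRootLogin "$FILE"
```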

The reason to remove root access to SSH is that it’s a “known” account and can easily be targeted. You should generally use hard-to-guess usernames and complex passwords for this “access server”; it is going to be port scanned and have attempts made to compromise it. Ideally you would configure the authentication policies so that account lock-out occurs after too many failed attempts. Personally I do not allow interactive password-based logins; I use only pre-shared keys (it is much more difficult to guess a 2048-bit RSA key than an 8-character password). You can investigate the RSAAuthentication and PubkeyAuthentication options within the sshd_config file to learn more about that option.
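As a sketch, the relevant sshd_config lines for a key-only setup look something like this (option names are stock OpenSSH; check the sshd_config man page for your distribution’s defaults):

```
PermitRootLogin no
# Keys only -- disable interactive password logins entirely
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
# Limit brute-force attempts per connection
MaxAuthTries 3
```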

Public Access

My cable modem provider issues me a DHCP address; it happens to have been the same address for many months, but there is always the chance it could change. I use Dyn to provide dynamic DNS to my home lab. You can install one of their dynamic DNS clients on any OS within your home network that is generally always on (e.g. on your Linux access server); some “routers” (e.g. Cisco/Linksys) have one built in.
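On the Linux access server, ddclient is one common way to do this. A configuration sketch — the hostname and credentials are placeholders, and the protocol/server values are the usual Dyn ones, so verify them against your account:

```
# /etc/ddclient.conf (sketch -- substitute your own Dyn details)
protocol=dyndns2
use=web, web=checkip.dyndns.com/
server=members.dyndns.org
login=your-dyn-username
password=your-dyn-password
yourlab.dyndns.org
```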

Client Connection

Setup SSH Saved Configs
At this point you just need to configure your client. I happen to use the default SSH client on Mac OS, though if you are using Windows you could use PuTTY or another client and achieve the same result. In my case I don’t want to manually type out all of my config settings every time I connect; remember this is more than SSH CLI access…it is our simple “VPN”.

In my environment I either want SSH access or RDP (e.g. to Windows for vSphere Client) access. I do this through simple port forwarding rules.

In order to configure saved “session” settings for the shell SSH client on OS X you will need to do the following:

  1. Open a terminal window of your choice ( or my preferred iTerm2)
  2. Navigate to your home directory: $: cd ~/
  3. Create a .ssh directory: $: mkdir .ssh
  4. Create a .ssh/config file: $: touch ~/.ssh/config
  5. Set security settings on the .ssh directory, otherwise sshd will not accept your keys if you use them in the future: $: chmod 700 ~/.ssh
  6. Set security settings on config (not really necessary, but anything in .ssh should be set this way): $: chmod 600 ~/.ssh/*
  7. Now we can move on to building our configuration
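Steps 2 through 6 above can be run in one shot:

```shell
# Create and lock down the SSH client config (safe to re-run)
mkdir -p ~/.ssh              # step 3 (-p: no error if it already exists)
touch ~/.ssh/config          # step 4
chmod 700 ~/.ssh             # step 5: sshd is picky about these permissions
chmod 600 ~/.ssh/config      # step 6
```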

You can use the editor of your choice to open the config file. If you wish to use an app, go to Finder and press CMD-Shift-G; you will be given a box to type in your target folder (e.g. ~/.ssh/), and you can then edit the file with whichever editor you prefer (e.g. TextMate). The format of the file is:

Host <name used as ssh target>
        HostName <target hostname>
        User <username>
        Port <TCP port on firewall>
        Compression yes
        AddressFamily inet
        CompressionLevel 9
        KeepAlive yes
        # RDP to Server1
        LocalForward localhost:3389 <private IP>:3389
        # RDP to Server2
        LocalForward localhost:3399 <private IP>:3389
        # RDP to Server3
        LocalForward localhost:3390 <private IP>:3389

Working example:
Host remotelab
        HostName <your dynamic DNS hostname>
        User user0315
        Port 8080
        Compression yes
        AddressFamily inet
        CompressionLevel 9
        # Privoxy
        LocalForward localhost:8118 localhost:8118
        # RDP to Control Center Server
        LocalForward localhost:3389 <private IP>:3389
        # RDP to vCenter
        LocalForward localhost:3399 <private IP>:3389
        # RDP to AD Server
        LocalForward localhost:3390 <private IP>:3389
        # HTTPS to vCloud Director cell
        LocalForward localhost:443 <private IP>:443

In my case I also installed and configured Privoxy to give me the ability to tunnel other protocols via proxy settings on my laptop (e.g. web browser, instant messengers, etc.).
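Once the tunnel is up, command-line tools can be pointed at the forwarded Privoxy port as well (8118 is Privoxy’s default listen port; adjust if you changed it):

```shell
# Send HTTP(S) traffic from shell tools through the tunneled proxy
export http_proxy=http://localhost:8118
export https_proxy=http://localhost:8118
# e.g. curl http://example.com now rides the SSH tunnel
```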

Connect To Your Lab

What was the point of all of this if I don’t show you how to connect? Open your terminal again and type “ssh” followed by your saved config name (e.g. $: ssh remotelab). Authenticate as needed, and you should then be connected to the shell of your Linux VM.

Now open your RDP client of choice (I suggest CoRD) and connect to one of your target tunnels by specifying localhost:<target port for desired server>.


Now anyone lazy, errr…striving for efficiency, will save a config for their servers within CoRD for connecting directly when on the local network or via the tunnel. You can then just select the saved session within CoRD without having to remember which TCP port is for each server.

Of course, this doesn’t help Windows users. On Windows you have a really neat client you can use to simplify this; I would recommend Tunnelier from Bitvise. There may be simpler GUI-driven SSH clients for configuring this on Mac OS, however I just use what is included, as it’s always there and doesn’t break when you upgrade to the next version.

Have a better way that is easy? Let me know, I’m always open to new ideas around getting access to the lab. I’ve always intended to set up View with a secure server, but that is also on the complex path and I want something that just works. Once this configuration is set up you can duplicate it easily, as the complexity is in the saved .ssh/config file and not the “server”.


Managing Windows 2008 Server Core

In the interest of reducing overhead within my lab environment I decided to try using Windows 2008 Server Core (R2/x64). If you’ve ever installed Server Core, the first thing you notice is that you are only presented with a CMD shell at login; there is no full GUI. You can launch applications, installers, etc., however there is no Start Menu to assist you on your way.

I decided I’d chase this rabbit for a bit, as I am researching the use of Server Core in conjunction with vCloud Director deployed vApp services. A bloated, UI-heavy OS isn’t the most practical when it comes to “scale of cloud”; sure, we have magical Transparent Page Sharing (for more info on TPS see the PDF written by Carl Waldspurger), but just like energy efficiency, the easiest watt to save is the one you never used. I know many of my fellow vGeeks have home labs, and very few of us have the host resources we’d like, so we are all chasing efficiency, especially in regard to storage and host memory. In order to adapt to using Server Core you need to figure out how to manage it, so I thought I would write an article about what I have learned.


The first option Microsoft offers is the shell tool Sconfig (“Server Configuration”); you can access it by running Sconfig.cmd from the CMD prompt.


We can navigate this tool easily by entering the number of the section we wish to view or modify. If we select #4 (Configure Remote Management) we are given the following subset of options:


If you recall, by default almost all of the remote management tools are disabled on Windows. Likewise, the firewall is enabled and fairly restrictive. Being able to turn on these remote management options quickly so that we can move away from the console is always a benefit.

I went ahead and selected #1 Allow MMC Remote Management, remote MMC is pretty useful as it allows me to consolidate the management tasks between multiple target servers into one place. The window immediately indicated that it was configuring the firewall and enabling required services. It then gave a popup indicating the final status.


I would personally prefer not to have the popup window that I then have to use a mouse to navigate to and select OK; my preference would have been for that status update and acknowledgement to be provided within the textual interface. I suppose I shouldn’t be surprised that a company that is bad at UI is even worse at shell. I will give them some credit, as Sconfig seems to have more intelligence than many of their GUI-based wizards, which often leave some tasks incomplete that must be manually finished in other wizards. Since this is my template, I went ahead and enabled all of the remote management options to avoid having to do so later.
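For reference, the pieces Sconfig automates here can also be done by hand from the CMD prompt. A sketch (commands as I recall them; verify the exact syntax with each command’s built-in /? help before relying on it):

```
rem Enable the firewall rule groups that remote MMC management needs
netsh advfirewall firewall set rule group="remote administration" new enable=yes

rem Enable WinRM for remote shell/management
winrm quickconfig
```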

Core Configurator

The next tool I found is Core Configurator, which can be downloaded from CodePlex. This tool is delivered as an ISO; one option within vCloud Director would be to upload this ISO and attach it to your Windows 2008 Server Core virtual machine as needed, which may provide some additional security but isn’t necessarily convenient. Personally, I opted to copy the contents of the ISO to a directory within my Windows 2008 Server Core template so that it is readily available.

Microsoft states that Core Configurator supports the following tasks:

  • Product Licensing
  • Networking Features
  • DCPromo Tool
  • ISCSI Settings
  • Server Roles and Features
  • User and Group Permissions
  • Share Creation and Deletion
  • Dynamic Firewall settings
  • Display | Screensaver Settings
  • Add & Remove Drivers
  • Proxy settings
  • Windows Updates (Including WSUS)
  • Multipath I/O
  • Hyper-V including virtual machine thumbnails
  • JoinDomain and Computer rename
  • Add/remove programs
  • Services
  • WinRM
  • Complete logging of all commands executed

In order to launch Core Configurator, you simply navigate to the directory that contains it and run Start_Coreconfig.wsf (for an attached ISO this would likely default to D:\Start_Coreconfig.wsf), which presents this interface:


Selecting the small expansion arrow at the bottom reveals a few more convenient options:


Control Panel view:


I was able to use this tool to install all of the latest Microsoft hotfix packages. The interface could use a “select all” option, but then again, you generally want to review what you are installing and this encourages you to do so; you must select each hotfix to be installed one at a time.


Firewall Settings

You can easily view and modify the Firewall Settings:





Without the GUI we are accustomed to, even the most basic tasks become challenging. Perhaps you know how to configure interfaces, join an Active Directory domain, or even change the computer name from the command line on Windows; these tasks were previously foreign to me. Just how much do we save by using Core instead of the full version of Windows 2008 Server? Here is a screenshot taken from vCenter of resource utilization for this particular Windows 2008 Server Core (R2 x64):


Here is the same resource accounting for a similar (base) config for Windows 2008 R2 Standard:


Notice the Active Guest Memory of each of the above? With only the default installation + VMware Tools installed, that’s a 47% decrease in active memory; it is also a ~20% decrease in the storage capacity needed to support the base OS. While this isn’t much in a large production environment, I don’t have the luxury of a Cisco UCS B230 with 32 DIMM slots for my lab. When my host only has 16GB of RAM, that increases the number of base OS instances I can support…again, the easiest unit of X to conserve is the one you don’t use.

Home Lab – Storage Performance Test (Part 1)

This is a continuation of my Home Lab Build – Overview

In order to help out my fellow vGeeks I thought I should keep with my “comparison” to the hardware storage appliances. While I personally won’t be running my system as a 4-disk configuration, I realize that some of you may. I ran some tests using Bonnie++ benchmarking and dd from /dev/zero to provide some benchmarks. I realize these will not be 100% representative of the performance that would be experienced with protocols in place, however they should provide a relative comparison between the disk configuration options.

I have chosen to use Bonnie++ for my benchmarks as it is far faster to set up; it operates from within the management console of Nexenta. If you are not familiar with Bonnie++ and how it performs testing, more information is available in the Bonnie++ documentation.

I will run three tests using Bonnie++, only varying the block size between 4KB, 8KB, and 32KB.

  • run benchmark bonnie-benchmark -p2 -b 4k
  • run benchmark bonnie-benchmark -p2 -b 8k
  • run benchmark bonnie-benchmark -p2 -b 32k
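The dd portion mentioned earlier is just a raw sequential write from /dev/zero. A sketch, with a deliberately small size so it finishes quickly (for a real test, write to the pool under test and scale bs/count well past the cache size):

```shell
# Sequential-write smoke test: 2560 x 4KB blocks = 10MB
dd if=/dev/zero of=/tmp/dd.test bs=4k count=2560
# dd reports elapsed time and throughput on completion; remove the file afterwards
```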

Each test will be performed against each of the following storage pool configurations:

  • RAID0 (no disk failure protection)
  • RAIDZ1 (single disk failure protection, similar to RAID5)
  • RAIDZ2 (double disk failure protection, similar to RAID6) *this configuration will only be tested with 4K block sizes to display the parity tax*

I will run a few “hardware” variations: my target configuration with 2 vCPU and 4-6GB RAM, as well as a reduced configuration with 2 vCPU and 2GB of RAM. I expect the decrease in RAM to mostly decrease read performance, as it will reduce the working cache size.

I intended to have the time to create some lovely graphs to simplify comparing the results of each test; however, I could either wait another week or two before finding time, or share the results in the output format from Bonnie++. In order to get this info to my fellow vGeeks, I have decided to publish the less-than-pretty format; after all, any real geek prefers unformatted text to PowerPoint and glossy sales docs. In each block below the columns are sequential write (MB/s, %CPU), rewrite (MB/s, %CPU), sequential read (MB/s, %CPU) and random seeks per second; the first two rows are the two parallel Bonnie++ instances (-p2) and the bottom row is the combined result (throughput summed, seeks averaged).

Hardware Variation 1 (2 vCPU/6GB RAM) / 4-disk RAID0

================== 4k Blocks ==================
157MB/s 42% 84MB/s 32% 211MB/s 28% 1417/sec
156MB/s 41% 83MB/s 32% 208MB/s 28% 1579/sec
——— —- ——— —- ——— —- ———
314MB/s 41% 168MB/s 32% 420MB/s 28% 1498/sec

================== 8k Blocks ==================
148MB/s 22% 92MB/s 20% 212MB/s 20% 685/sec
147MB/s 21% 90MB/s 20% 212MB/s 21% 690/sec
——— —- ——— —- ——— —- ———
295MB/s 21% 182MB/s 20% 424MB/s 20% 688/sec

================== 32k Blocks ==================
144MB/s 12% 90MB/s 11% 210MB/s 14% 297/sec
153MB/s 12% 92MB/s 12% 210MB/s 15% 295/sec
——— —- ——— —- ——— —- ———
298MB/s 12% 183MB/s 11% 420MB/s 14% 296/sec

Hardware Variation 2 (2 vCPU/2GB RAM) / 4-disk RAID0

================== 4k Blocks ==================
113MB/s 21% 75MB/s 22% 216MB/s 31% 980/sec
113MB/s 21% 74MB/s 22% 217MB/s 31% 936/sec
——— —- ——— —- ——— —- ———
227MB/s 21% 150MB/s 22% 434MB/s 31% 958/sec

================== 8k Blocks ==================
110MB/s 13% 80MB/s 15% 209MB/s 22% 521/sec
110MB/s 13% 80MB/s 15% 210MB/s 23% 524/sec
——— —- ——— —- ——— —- ———
220MB/s 13% 161MB/s 15% 420MB/s 22% 523/sec

================== 32k Blocks ==================
114MB/s 8% 81MB/s 9% 218MB/s 13% 297/sec
113MB/s 8% 79MB/s 9% 218MB/s 12% 294/sec
——— —- ——— —- ——— —- ———
228MB/s 8% 161MB/s 9% 436MB/s 12% 296/sec

Hardware Variation 1 (2 vCPU/6GB RAM) / 4-disk RAID1+0 (2 x 1+1 mirrors)

================== 4k Blocks ==================
89MB/s 27% 53MB/s 19% 143MB/s 22% 1657/sec
89MB/s 27% 53MB/s 19% 144MB/s 22% 1423/sec
——— —- ——— —- ——— —- ———
178MB/s 27% 106MB/s 19% 288MB/s 22% 1540/sec

================== 8k Blocks ==================
83MB/s 13% 53MB/s 12% 147MB/s 16% 800/sec
83MB/s 12% 54MB/s 12% 147MB/s 16% 752/sec
——— —- ——— —- ——— —- ———
167MB/s 12% 107MB/s 12% 294MB/s 16% 776/sec

================== 32k Blocks ==================
85MB/s 7% 55MB/s 7% 141MB/s 9% 277/sec
82MB/s 7% 53MB/s 7% 135MB/s 9% 266/sec
——— —- ——— —- ——— —- ———
167MB/s 7% 109MB/s 7% 276MB/s 9% 271/sec

Hardware Variation 2 (2 vCPU/2GB RAM) / 4-disk RAID1+0 (2 x 1+1 mirrors)

================== 4k Blocks ==================
65MB/s 11% 48MB/s 14% 154MB/s 22% 892/sec
65MB/s 11% 48MB/s 13% 152MB/s 22% 786/sec
——— —- ——— —- ——— —- ———
130MB/s 11% 97MB/s 13% 306MB/s 22% 839/sec

================== 8k Blocks ==================
67MB/s 7% 47MB/s 9% 157MB/s 14% 669/sec
67MB/s 7% 47MB/s 9% 155MB/s 14% 637/sec
——— —- ——— —- ——— —- ———
135MB/s 7% 94MB/s 9% 313MB/s 14% 653/sec

================== 32k Blocks ==================
68MB/s 5% 31MB/s 3% 153MB/s 8% 338/sec
68MB/s 5% 31MB/s 3% 151MB/s 8% 342/sec
——— —- ——— —- ——— —- ———
136MB/s 5% 62MB/s 3% 304MB/s 8% 340/sec

Hardware Variation 1 (2 vCPU/6GB RAM) / 4-disk RAIDZ1 (RAID5)

================== 4k Blocks ==================
109MB/s 30% 54MB/s 22% 133MB/s 21% 813/sec
108MB/s 32% 54MB/s 22% 131MB/s 20% 708/sec
——— —- ——— —- ——— —- ———
218MB/s 31% 108MB/s 22% 265MB/s 20% 761/sec

================== 8k Blocks ==================
114MB/s 25% 60MB/s 17% 131MB/s 14% 525/sec
118MB/s 24% 60MB/s 18% 133MB/s 14% 517/sec
——— —- ——— —- ——— —- ———
232MB/s 24% 121MB/s 17% 265MB/s 14% 521/sec

================== 32k Blocks ==================
107MB/s 12% 60MB/s 8% 138MB/s 9% 163/sec
111MB/s 11% 60MB/s 8% 138MB/s 9% 172/sec
——— —- ——— —- ——— —- ———
218MB/s 11% 121MB/s 8% 276MB/s 9% 167/sec

Hardware Variation 2 (2 vCPU/2GB RAM) / 4-disk RAIDZ1 (RAID5)

================== 4k Blocks ==================
74MB/s 15% 40MB/s 12% 160MB/s 18% 715/sec
76MB/s 15% 41MB/s 13% 165MB/s 19% 651/sec
——— —- ——— —- ——— —- ———
151MB/s 15% 82MB/s 12% 325MB/s 18% 683/sec

================== 8k Blocks ==================
75MB/s 9% 42MB/s 8% 167MB/s 21% 384/sec
73MB/s 8% 42MB/s 8% 166MB/s 20% 387/sec
——— —- ——— —- ——— —- ———
149MB/s 8% 85MB/s 8% 333MB/s 20% 386/sec

================== 32k Blocks ==================
73MB/s 5% 41MB/s 4% 168MB/s 11% 182/sec
71MB/s 5% 40MB/s 4% 168MB/s 11% 183/sec
——— —- ——— —- ——— —- ———
144MB/s 5% 82MB/s 4% 337MB/s 11% 182/sec

Hardware Variation 3 (2vCPU/8GB RAM) / 4-disk RAIDZ1 (RAID5)

================== 4k Blocks ==================
114MB/s 34% 58MB/s 22% 146MB/s 22% 872/sec
114MB/s 34% 59MB/s 23% 147MB/s 21% 693/sec
——— —- ——— —- ——— —- ———
228MB/s 34% 118MB/s 22% 293MB/s 21% 783/sec

Hardware Variation 1 (2 vCPU/6GB RAM) / 4-disk RAIDZ2 (RAID6)

================== 4k Blocks ==================
71MB/s 20% 43MB/s 16% 111MB/s 20% 716/sec
71MB/s 20% 43MB/s 16% 110MB/s 20% 677/sec
——— —- ——— —- ——— —- ———
143MB/s 20% 86MB/s 16% 221MB/s 20% 696/sec

================== 8k Blocks ==================
75MB/s 13% 42MB/s 10% 110MB/s 12% 540/sec
74MB/s 16% 42MB/s 11% 104MB/s 11% 491/sec
——— —- ——— —- ——— —- ———
149MB/s 14% 84MB/s 10% 215MB/s 11% 515/sec

================== 32k Blocks ==================
70MB/s 7% 43MB/s 6% 109MB/s 8% 202/sec
70MB/s 7% 42MB/s 6% 109MB/s 8% 203/sec
——— —- ——— —- ——— —- ———
140MB/s 7% 85MB/s 6% 218MB/s 8% 203/sec

I have to admit, I was incorrect in my prediction that the RAM size would directly correlate to read performance…it actually seems that increasing the RAM leads to a slight decrease in read performance while improving write performance. I am going to speculate this has to do with poor caching algorithms, or at least poor for this workload, as well as the ZIL being performed in RAM. The larger RAM leads to an increased ARC (cache) size; this improves random seeks significantly but decreases max read throughput (large block) due to the cache leading to inaccurate predictive reads (speculation).

Much like NetApp storage systems, ZFS always attempts to perform writes in large chunks; if you were to watch the iostat output for the physical devices you would see very much peaks and valleys in writes to the physical media, even though the incoming workload is steady state. NetApp and ZFS both attempt to play a form of Tetris in order to make the most efficient write possible; the more RAM available, the better they can stage these writes to complete efficiently.

One key measure is the actual throughput per disk, and RAID0 is a good way to determine this. Looking at the results we have the following metrics; as expected, re-writes always suffer, as they do in any file system. We will focus on writes, reads and random seeks, and I will use the numbers from the lowest memory configuration for RAID0:

  • Writes: 227MB/s
  • Reads: 434MB/s
  • Random Seeks: 958/sec

Now we break this into per-disk statistics, which is simply dividing each value above by the number of physical disks.

  • Writes: 56.75MB/sec/disk
  • Reads: 108.5MB/sec/disk
  • Random Seeks: 239.5 IOPS/disk
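The division is trivial, but for completeness (numbers from the 2GB/RAID0 run above, 4 disks):

```shell
awk 'BEGIN {
    disks = 4
    printf "Writes: %.2f MB/s per disk\n", 227 / disks
    printf "Reads:  %.1f MB/s per disk\n", 434 / disks
    printf "Seeks:  %.1f IOPS per disk\n", 958 / disks
}'
```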

Of course, one flaw in Bonnie++ is that we do not get latency statistics. We normally expect a 7200 RPM SATA disk to offer 40-60 IOPS with sub-20-millisecond response; I have no measure of the response time experienced during this test, or of how much of the random seeking was served from cache. I selected the lowest RAM (cache) configuration to try to minimize that in our equation.

We can then use this as a baseline to measure the degradation of each protection scheme on a per-data-disk basis. In a RAID1+0 configuration we have 2 disks supporting writes and 4 disks supporting reads, which leads to our reasonable read performance. The reason my lab operates in a RAID1+0 configuration is that my environment is heavily read oriented, and with the low number of physical disks I did not want the parity write-tax; in addition, with 6 1TB SATA drives I am not capacity restricted.

I almost went into a full interpretation of my results, however I stumbled upon a site in my research with a detailed description of the performance expectations of each RAID configuration; the telling portion is this:

  Config     Blocks Available     Random FS Blocks / sec
  RAID-Z     (N – 1) * dev        1 * dev
  Mirror     (N / 2) * dev        N * dev
  Stripe     N * dev              N * dev

The key item to interpret is that with RAID-Z, random IOPS are limited to those of a single device. You will see in the referenced blog posting that a configuration of multiple small RAID-Z groups performs better than one large RAID-Z group, as each group contributes one device’s worth of random throughput. This may not correlate 100% with RAID5, or whatever RAID scheme your storage platform uses, as they are not all created equal.
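To put numbers on that, assume ~50 random IOPS per 7200 RPM disk (the middle of the 40-60 range used earlier): eight disks as one RAID-Z group deliver one disk’s worth of random IOPS, while the same disks split into two groups deliver two disks’ worth:

```shell
awk 'BEGIN {
    iops = 50                                  # assumed per-disk random IOPS
    print "one 8-disk raidz1: ", 1 * iops, "random IOPS"
    print "two 4-disk raidz1:", 2 * iops, "random IOPS"
}'
```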

Home Lab Build – NexentaStor Setup

NOTE: This installation was performed with NexentaStor 3.0.4, later versions may have slight differences in the installation process and the GUI interface.

I’m going to skip insulting your intelligence by providing screenshots of the installation process for Nexenta, or of the configuration of the VM if you go that route. I will start with the assumption that you have NexentaStor (Community Edition) installed on either a physical system or a VM; if you have gone the physical route, obviously your network interface names are going to be different than I show. Since I am using VT-d with an actual SAS controller card, the rest should be similar.

  1. Proceed and start the configuration wizard


  2. Select which detected network interface you wish to be your primary (management) – we get more advanced control after the wizard is complete


  3. Select your configuration option (static)


  4. Input your IP Address you wish to use


  5. Proceed through the network configuration defining your subnet mask, DNS servers and gateway


  6. Review your configuration settings. If your configuration is correct, select N(o). If you need to make a correction, select Y(es)


  7. Select if you wish to use HTTP or HTTPS for management access. SSL does add CPU overhead and may be less responsive as the system warns.


  8. Make note of your configured TCP port and change it if desired (default = 2000), this will be the port the web management GUI listens on.


  9. Make note of the provided URL and access it in order to continue configuration.


  10. Open the management GUI in a web browser (Flash enabled) to proceed with the configuration wizard (Wizard 1).


  11. Populate the fields to meet your configuration goals and proceed to the next step.


  12. Configure your passwords for the two default management contexts and proceed.


  13. Define your notification preferences and continue to the next step.


  14. Review your configuration settings and save your configuration.
  15. We are now into the “Wizard 2” stage, this is where we will configure the actual storage options.
  16. Review your current interface settings, you can edit the existing configuration or add a new one. If you wish to aggregate multiple links into a single logical interface you must add a new interface to get that option. I will leave these as they are and can edit them at another time.
  17. Next we are prompted to configure the iSCSI initiator service; this would be used to access another storage device for resources (e.g. to add NFS on top of an iSCSI-only system such as a Dell EqualLogic). I am not using any other iSCSI systems, so this is irrelevant for me.

  18. This next screen shows us the list of detected disk devices, if you had configured iSCSI on the previous screen and had mapped storage to this initiator those resources should also be visible. I currently have 2 1TB Seagate drives attached to the SATA controller I assigned through VT-d.

  19. In this next section we are asked to create volumes (storage pools). The process is to select the physical resources and assign it to the volume. You can select multiple devices and change the “Redundancy Type” to configure for RAID protection (None=stripe, Mirror, RAIDZ1 = ~RAID5, RAIDZ2 = ~RAID6, and RAIDZ3 = paranoid?) 

I am starting with “none” as I will perform some testing comparing different options in a later post.

  20. In the lower section we configure the properties of the pool, including name, deduplication, and Sync settings (which we will discuss more later). I will leave all settings as default at this time.

  21. Verify your volume was created, if not a red error description will flash temporarily across the upper section of the screen.

  22. In this next portion we can create “folders”, each folder can have its own access type (NFS, CIFS, FTP, RSYNC, etc). I will add a single folder which I will configure for NFS, I am selecting a block size of 4KB to match that of most of my guest OS systems. I also am setting the file system to be case sensitive and to enable unicode.
  23. This is the final step of the guided wizard, we can make any additional changes through the actual management interface. Set the checkboxes to meet your comfort level, I will attempt to compare some of these options in a later post for performance impact.
  24. This completes the basic configuration, the rest will be done through the standard management interface.

NexentaStor Storage Concepts

Within Nexenta, or more broadly Solaris ZFS storage management, there are three main constructs:

  • Datasets (ZFS pools) – made up of physical disks (or logical disks from an outside array)
  • Shares – logical units presented as file services (CIFS, NFS, RSYNC, FTP, etc)
  • ZVols – logical units presented as block storage (iSCSI)

With that being said, there are really just two different processes for allocating storage, depending on whether it is file based or block based.
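Under the covers these map onto ordinary ZFS operations. As a sketch (pool and dataset names are made up, and NexentaStor’s GUI adds its own bookkeeping on top):

```
# File-based: create a filesystem dataset, then share it over NFS
zfs create -o recordsize=4K tank/nfs01
zfs set sharenfs=on tank/nfs01

# Block-based: create a fixed-size zvol, to be exported as an iSCSI LUN
zfs create -V 100G tank/zvol01
```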

Again, I hope this helps someone. I will cover configuring storage and accessing it from ESX in a later post.

Home Lab Build – Overview

I realize I’m not alone in this process; it seems many of my fellow VMware enthusiasts are putting together home labs. This will be the first of a few blog posts regarding my home lab, as I work on planning out the requirements and my overall goals. The primary requirements are to be able to operate the majority of the VMware products in order to advance my understanding, satisfy my curiosity about the growing portfolio, and hopefully help me obtain more advanced certifications.

I have always had a lab of sorts on my laptop, though the new corporate-issued laptop isn’t quite as beefy as the one from my previous employer. I previously had a MacBook Pro 17” with the i7 processor; the new laptop is a 15” with the i5. To make the most of the laptop hardware, both have 8GB of RAM and a dual-drive configuration, including SSD. I will go into the specifics of my MBP configuration in a separate entry. However, I needed something more powerful; 8GB of RAM just doesn’t go very far toward supporting multiple ESX hosts, vCenter, database servers and the other infrastructure requirements for hosting a vCloud environment. I had contemplated going for a larger virtual host environment, perhaps a Mac Pro 12-core with 32GB of RAM…until I did the math and compared the results to my “budget”. I decided to go the lower cost route.

Hosts – So far I have determined the following requirements:

  • Minimum of 2 hosts capable of operating ESXi 4.1
  • Each host must provide at least 2 Gigabits of connectivity
  • Ideally is listed on VMware compatibility list

Network – I have set the following network hardware requirements

  • 8 Gigabit Ethernet ports
  • Switch must support 802.1Q VLAN tagging
  • Should support LACP
  • No proprietary software to manage
  • Support for jumbo frames

Storage – As we all know, shared storage is essential. Yes, we can operate without shared storage but every advanced feature requires shared storage. Since this is a home lab my performance requirements are minimal.

  • Provide iSCSI and NFS storage
  • Provide RAID capabilities to increase performance and resiliency
  • Performance scalability
  • Flexibility

With a little bit of time and creativity I believe I found solutions for each of the requirements. I will detail the hardware selected for each area above.

I have selected Dell T110 servers, which feature the entry-level quad-core Xeon 3400-series processors. I settled on these after looking at several options, including models from HP and a home build from bare components. The T110 won out in large part on price: the base price with the Xeon X3430 was $379, but I opted for the upgrade to the X3440 with Hyper-Threading for an additional $90. I couldn’t find any VMware-specific benchmarks on either of these processors; however, the PassMark score for the X3440 was 5303 vs 3638 for the X3430, roughly a 46% improvement. This is in part due to Hyper-Threading, which is a debate in and of itself regarding hypervisor benefits.
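For anyone who wants to check the math on that processor upgrade, here is the quick arithmetic on the PassMark scores and prices quoted above:

```python
# Sanity check on the PassMark scores and pricing quoted above.
x3430_score, x3430_price = 3638, 379.00
x3440_score, x3440_price = 5303, 379.00 + 90.00  # base + HT upgrade

improvement = (x3440_score / x3430_score - 1) * 100
print(f"X3440 over X3430: {improvement:.0f}%")  # ~46%

# Price-per-performance (PassMark points per dollar):
print(f"X3430: {x3430_score / x3430_price:.1f} pts/$")
print(f"X3440: {x3440_score / x3440_price:.1f} pts/$")
```

On a points-per-dollar basis the upgraded chip actually comes out ahead too, so the extra $90 isn’t just buying the raw score.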

EDITED Feb-16-2010 – NOTE: after I purchased my first 2 hosts, Dell decreased the base price to $329, making it $329 + $90 = $419 per host. Dell does not provide a price guarantee, but AMEX does…

The server is listed on the VMware compatibility list, and the Xeon 34xx processors include Fault Tolerance support. Additionally, this server and motherboard support both ECC and non-ECC memory, which allows for selecting lower-priced non-ECC memory. Because of this, I was able to max out the RAM on each host at 16GB for a reasonable price.

The servers are scheduled to arrive early next week.

I considered Netgear, D-Link, HP, Linksys and Cisco switches in trying to pick the best value. I would have loved to have a Catalyst switch due to its proven track record; however, that price alone would have exceeded what I spent on my 2 ESX hosts. I settled on the Cisco SLM2008: it offers LACP (for when VMware gets around to supporting it), static link aggregation (802.3ad, with a 2-group limit), jumbo frames and VLANs. Additionally, it has a built-in web management interface that works from any browser; not requiring any software to be installed is a bonus in my book. If you have a PoE switch to connect it to (or a power injector), it can run from PoE on port 1; otherwise a power brick is included. While I don’t see any value in jumbo frames for IP storage in this lab, being able to support MTU sizes larger than 1500 is critical for Layer 2 tunneling options, such as the vCloud Network Isolation features.
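One practical note on jumbo frames: when verifying an end-to-end 9000-byte MTU with ping, the payload size has to leave room for the IP and ICMP headers, which is why you test with 8972 bytes rather than 9000. A quick sketch of the arithmetic (assuming IPv4 with no IP options):

```python
# Largest ICMP echo payload that fits a 9000-byte MTU
# without fragmentation (IPv4, no IP options).
MTU = 9000
IP_HEADER = 20    # IPv4 header, no options
ICMP_HEADER = 8   # ICMP echo header

payload = MTU - IP_HEADER - ICMP_HEADER
print(payload)  # 8972
```

On Linux that translates to something like `ping -M do -s 8972 <host>` (don’t-fragment set); the flag spelling differs on other platforms. If that ping fails but a standard-size ping works, something in the path is still at MTU 1500.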

The switch arrived today.

EDITED Feb-16-2010
As a storage guy I would have loved a NetApp FAS3210 with Flash Cache (a.k.a. PAM, the Performance Acceleration Module), but that would fit neither my budget nor my wife’s noise tolerance. I have selected a software-based solution that, judging by the blog posts I’ve read, not many people seem to be using. I have decided on Nexenta Community Edition for my storage build-out; I have advised former customers about it as an option for labs but haven’t actually worked with it myself. In a lab environment it can be self-contained; in an enterprise solution it should be combined with an enterprise FC SAN.

While an Iomega, Synology, Drobo, or other storage appliance may be simpler to set up, I am certain the option I am going with will smoke the competition at a lower price…we’ll see if I can stay on “budget”. For “budget” comparison’s sake I am going to work with Amazon pricing for devices I might have considered:

  • Iomega IX4-200d 4TB (4 x 1TB): $593.98
  • Thecus N4200 (4-bay): $779.21 + disks (4 x $64.99) = $1039.17
  • Synology DS411+: $639.99 + disks (4 x $64.99) = $899.95
  • Drobo FS (5-bay): $695.00 + disks (5 x $64.99) = $1019.95

I already had a server purchased that I am adding this role onto, but I also ended up adding a 3rd host to my configuration, since the physical server hosting my storage system clearly loses the ability to be “flexible” for maintenance and configuration changes…so I will show both totals. I already had a few parts that I am going to reuse; however, I will add a price for those to keep a fair tally.

The hardware added to the ESX host specifically for storage is:

  • SAS controller: Intel SASUC8I PCIe (an OEM’d LSI SAS3801, for a big savings) + breakout cable: $154.99 + $19.99
  • 4-bay external SAS/SATA enclosure and cables: $179.00 + $29.50 + $27.50
  • SATA HDDs: 4 x Seagate 1TB (32MB cache) + 2 x Hitachi 1TB: 4 x $64.99 + 2 x $60.00
  • SSD for cache: OCZ 90GB (SandForce controller): $129.00
  • Dual-port Intel Gigabit ET NIC: $162.99
  • Dell T110 + 16GB RAM: $419.00 + $129.00

So the rough total is ~$1630 including my ESX host and ~$1082 without it. That gives me a full 6-disk storage system that I can expand fairly easily, with dual GigE, and it also serves as the management host for the rest of my environment. I realize this is over the price of the other systems, but I believe it will provide more flexibility and far better performance than those systems are capable of.
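For anyone tallying along at home, the parts list above adds up like this (prices exactly as listed):

```python
# Tallying the storage parts list above.
parts = {
    "SAS controller + breakout cable": 154.99 + 19.99,
    "4-bay enclosure + cables":        179.00 + 29.50 + 27.50,
    "6 x 1TB SATA HDDs":               4 * 64.99 + 2 * 60.00,
    "OCZ 90GB SSD (cache)":            129.00,
    "Dual-port Intel Gigabit ET NIC":  162.99,
    "Dell T110 + 16GB RAM":            419.00 + 129.00,
}

total = sum(parts.values())
without_host = total - parts["Dell T110 + 16GB RAM"]
print(f"with host:    ${total:,.2f}")         # ~$1630.93
print(f"without host: ${without_host:,.2f}")  # ~$1082.93
```

The two totals match the rough figures quoted: roughly $1630 with the host and $1082 without it.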

EDITED Feb-23-2010

I’ve had a few questions about the pricing breakdown, so I thought I would try to make this a more reasonable comparison. In reality the storage I ended up with is far more capable than any of the NAS appliances: I have more drive bays (8) and can grow to 16 bays for just the price of the disks, an enclosure and a SAS/SATA controller.

In order to keep this an actual comparison to the appliance-based options, I thought I should show only the pricing added to my first host.

  • SAS controller + breakout cable: $154.99 + $19.99
  • 4 x Seagate 1TB 7200.12 drives: $259.96
  • 1 x 3.5” to 5.25” drive bay adapter (for ESX DASD): ~$5
  • TOTAL: $439.94

Those are truly the only additional hardware pieces needed; this would give you a 4-disk storage appliance that shares your first host. You can allocate as much or as little vCPU and RAM as you wish, keeping in mind that most of those hardware appliances ship with only a low-end desktop processor and 512MB of RAM.

Edit: You can find more info in the continuation of my lab build here:
Home Lab – Storage Performance Test (Part 1)

Time and tide wait for none

I’ve been in professional services for over 8 years now; most of that has centered around data storage.  I spent a period of time implementing EMC commercial systems for an EMC-contracted services partner, then spent a few years contracted to do the same for NetApp.  In the mix of all of this I worked for EMC, IBM, HDS, and NetApp resellers, with exposure to almost all of the systems in technical pre-sales and post-sales implementation efforts.  Out of that experience I have formed some fairly strong and, I’d like to think, informed opinions of what should be in “enterprise” storage systems.

Now, during all of this time consulting on storage systems, they were always connected to something.  In the earlier years (as if it was so long ago…) it was generally application servers, really just physical Windows hosts.  At the time I never even had to make the distinction that they were “physical”, because, really, there was no other option.  Yes, on occasion I worked with “virtual” systems on Sun Solaris, HP-UX and IBM AIX…but even these were somewhat rare, and many of them weren’t very virtual at all (virtual hardware didn’t exist).  As time progressed the types of systems connected to the storage evolved, and I had to support them all.  A storage system without any connected servers isn’t very useful; it can make lights blink, burn electricity and generate heat, but its usefulness really ends there.

As projects changed with the evolution of applications and what businesses deemed critical, the storage systems progressed from supporting application servers that were primarily databases (e.g. MS SQL, Oracle) to also supporting email systems (e.g. MS Exchange).  It was really interesting that in the beginning most customers considered email not “valuable enough” to justify shared storage, yet email quickly became one of the most critical applications in all of our environments, right behind telephony.  As it turns out, communication is a critical function, and we all prefer email for broadcast.

Of course, this all changed even further in recent years.  VMware quickly became the primary “server” I was connecting to storage.  This matured from a couple of servers in an environment running ESX to all servers running ESX, and it happened in far less time than the earlier progression from databases-only to email on these storage systems.  There are so many variables in deploying virtualization that my informed opinions of storage systems became even more validated (at least in my mind), as flexibility became more important than ever.

All of this is to say that a good “storage consultant” never knows just storage.  I know plenty of one-hat experts who can provision storage all day long but can’t plan for the actual requirements of the application on the other end of the communication chain.  I always had to keep pace with understanding the connected application, as the storage was always a critical piece of meeting SLAs for performance, availability or data protection.  Storage architecture without awareness of the application will always fail to meet requirements.  That being said, I wouldn’t ever consider myself a DBA or an Exchange administrator, in part because I wouldn’t want either job, but I know enough to architect storage that meets the business requirements for those applications.

Of course, the same evolution happened with virtualization…but with a distinct difference.  Virtualization changed how storage is managed and provisioned and how data is protected.  If I had consulted only on the small storage portion of a project, my billable utilization (a critical measure of success in professional services) would have been pretty small, probably less than 25%…however, due to my awareness of the other components and my dedication to learning VMware, I was easily able to fill the other 50-75% of my time with the virtualization components.

I’ve been really fortunate in the past about keeping ahead of the curve; my first “real” tech job was in the ISP/telecom space.  That evolved from working in a support center for business leased-line customers (DS0, DS1, DS3, OCx, etc.) to being more involved in managing and planning the backend network.  As I watched the ISPs fade away and consolidate, I read the tea leaves telling me that not as many router jockeys were going to be needed, so I switched into a more traditional IT role…as every company has an IT department.

This all changed when my wife and I moved across the country for her to attend law school; I left a perfectly good job that I hated to move to a new job market where I knew no one.  By luck, I found a traveling job as a storage consultant, and that progressed to where we are today.

The next advancement in my career was due to the realization that the IT industry is yet again changing.  It doesn’t take much time reading Gartner reports or other IT business case studies to realize that virtualization is here to stay, and the next logical evolution is Cloud Computing.  I have now moved to the next step along my career path and joined the industry leader in creating Cloud Computing solutions, VMware.

I am more excited about my job today than I have been in a long time; I just hope I can keep pace with the shifting tides and the evolution of such a radical change in the industry.  I join a team of individuals I have a lot of respect for, and I look forward to learning from them within the VMware vCloud Services group.

Crazy schedule and it’s not letting up

Well, as everyone can tell…I just started this thing and already fell off, or so it would seem.  I took a week off to go to a friend’s wedding, which meant no laptop…and I just didn’t have anything relevant to post while trying to avoid thinking about work and technology.  This week has been a 3-day work week for me, and it’s gone by all too quickly.  I’ve been rushing through the inbox trying to get caught up and keep my head above water on the projects I am assigned to.  I’m still breathing…but there were times I had to pull out a soda straw and fight for a breath.

Next week is VMworld in San Francisco; I’m really excited and a bit stressed at the same time.  I have a ton to get done before flying out on Sunday, including a week’s worth of yard work and other homeowner chores that have been neglected through consecutive weeks of travel and insane temperatures.

I will try to post some details from VMworld during the week, but I can’t make any promises…at minimum I hope to have something exciting to share afterwards.  I keep hoping my work schedule will slow down just a bit so I can leverage my lab environment to actually generate content anyone would want to read or watch.  Oh well, back to the struggle of leaving the place better than I found it…it is true that not all consultants seem to have that goal; it gets frustrating to clean up after “experts”, and even more aggravating when the mess was created by someone I know.