Ask Good Questions

This happened a long time ago. I arrived at a customer site to install View Desktop Manager (it may have been version 2). This was before any cool VDI sizing tools like Liquidware Labs existed. While installing ESX and VDM, I casually asked, “What apps will you be running on this install?” The answer was, “Oh, web apps like YouTube, Flash and some Shockwave stuff.” I thought “ah dang” in my best Mater voice. This was a case of two different organizations each thinking someone else had gathered the proper information. Important details sometimes fall through the cracks. Since that day, I try to uncover at least most of this stuff before I show up on site.

Even though we have great assessment tools now, remember to ask some questions and get to know your customer’s end goal.

Things I learned that day, as they relate to VDI:

1. Know what your client is doing: “What apps are you going to use?”

2. Know where your client wants to do that thing from: “So, what kind of connection do you have to that remote office with 100+ users?”

This is not the full list of questions I would ask, just some I learned along the way.

iSCSI Connections on EqualLogic PS Series

EqualLogic PS Series Design Considerations

VMware vSphere introduces support for multipathing for iSCSI, and EqualLogic released a recommended configuration for using MPIO with iSCSI. I have a few observations after working with MPIO and iSCSI. The main lesson: know the capabilities of the storage before you go trying to see how many paths you can have with active IO.

  1. EqualLogic defines a host connection as 1 iSCSI path to a volume. At VMware Partner Exchange 2010 I was told by a Dell guy, “Yeah, gotta read those release notes!”
  2. EqualLogic limits connections to 128 per pool or 256 per group on the 4000 series (see Table 1 for the full breakdown), and to 512 per pool or 2048 per group on the 6000 series arrays.
  3. The EqualLogic MPIO recommendation mentioned above can consume many connections with just a few vSphere hosts.

I was under the false impression that by “hosts” we were talking about physical connections to the array, especially since the datasheet says “Hosts Accessing PS Series Group.” It actually means iSCSI connections to a volume. Therefore, if you have 1 host with 128 volumes, each connected via a single iSCSI path, you are already at your limit (on the PS4000).

An example of how fast vSphere iSCSI MPIO (Round Robin) can consume available connections can be seen in this scenario: five vSphere hosts, each with 2 network cards on the iSCSI network. If we follow the whitepaper above, we will create 4 vmkernel ports per host, and each vmkernel port creates an additional connection per volume. Therefore, if we have ten 300 GB volumes for datastores, we already have 5 × 4 × 10 = 200 iSCSI connections to our EqualLogic array. That is really no problem for the 6000 series, but the 4000 will start to drop connections. And I have not even added the connections created by the vStorage API/VCB-capable backup server. So here is a formula*:

N – number of hosts

V – number of vmkernel ports per host

T – number of targeted volumes

B – number of connections from the backup server

C – total number of iSCSI connections

(N * V * T) + B = C
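
To make the math concrete, here is a minimal Python sketch of that formula. The function name and the numbers are just illustrative; the 5/4/10 case is the scenario described above.

    def iscsi_connections(hosts, vmkernel_ports, volumes, backup_connections=0):
        """Estimate iSCSI connections to the array: (N * V * T) + B."""
        return hosts * vmkernel_ports * volumes + backup_connections

    # The scenario above: 5 hosts, 4 vmkernel ports each, 10 volumes.
    print(iscsi_connections(5, 4, 10))      # 200 -- already past a PS4000 pool limit of 128
    print(iscsi_connections(5, 4, 10, 10))  # 210 once a backup server adds 10 connections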

Table 1. EqualLogic PS Series array connection limits (pool/group)

4000E                 128/256
4000X                 128/256
4000XV                128/256
6000E                 512/2048
6000S                 512/2048
6000X                 512/2048
6000XV                512/2048
6010, 6500, 6510      512/2048

Use multiple pools within the group in order to avoid dropped iSCSI connections and provide scalability. The tradeoff is that this reduces the number of spindles you are hitting with your IO. Taking care to know the capacity of the array will help avoid big problems down the road.
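
Building on the sketch above, a quick check against the Table 1 limits might look like this. The limit values come from the table; the function and dictionary names are just illustrative.

    # Per-pool / per-group connection limits from Table 1.
    LIMITS = {
        "PS4000": (128, 256),
        "PS6000": (512, 2048),
    }

    def pools_needed(connections, array):
        """Rough count of pools needed to stay under the per-pool limit."""
        per_pool, per_group = LIMITS[array]
        if connections > per_group:
            raise ValueError("Exceeds the group limit; you need another group.")
        return -(-connections // per_pool)  # ceiling division

    print(pools_needed(200, "PS4000"))  # 2 -- split the volumes across two pools
    print(pools_needed(200, "PS6000"))  # 1 -- a single pool is fine on the 6000 series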

*I have seen the actual connection count come out higher, and I can only figure this is because of the way EqualLogic does iSCSI redirection.

New VMware KB – zeroedthick or eagerzeroedthick

Given the performance hit during zeroing mentioned in the Thin Provisioning Performance white paper, this article in the VMware knowledge base could be of some good use.

I would suggest using eagerzeroedthick for any high-IO, tier 1 type of Virtual Machine. This can be done when creating the VMDK from the GUI by selecting the “Support clustering features such as Fault Tolerance” check box.
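
If you would rather script it, here is a minimal Python sketch that shells out to vmkfstools to create an eagerzeroedthick disk. The -c (create) and -d eagerzeroedthick options are standard vmkfstools usage, but the size and datastore path below are hypothetical; adjust them for your environment.

    import subprocess

    # Hypothetical size and path; -d eagerzeroedthick zeroes the entire
    # disk at creation time instead of on first write.
    subprocess.run(
        ["vmkfstools", "-c", "40g", "-d", "eagerzeroedthick",
         "/vmfs/volumes/datastore1/tier1-vm/tier1-vm.vmdk"],
        check=True,
    )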

So go out and check your VMDKs.