VMware Space Consumption on Thin Provisioned Data-Reducing Arrays

A common question I get from my customers is: "Why does vSphere say my datastore is full when the array is only 4% used?" I usually give a quick explanation of how the VMFS file system has no idea that the block device underneath is deduplicating and compressing the data. So even though you provisioned 1TB of VMs, the array might only write a fraction of that amount. This gets many different reactions: anger, disbelief, astonishment, and understanding. This post visually shows that what vSphere thinks is used on VMFS will not necessarily match what a data-reducing array (including the FlashArray) reports.

When vSphere says I am FULL

[Screenshot: vSphere reporting the datastore as full]

Even when the FlashArray says plenty of space

[Screenshot: FlashArray GUI showing very little space actually consumed]

You can tell from my environment, which I use for testing vRealize Automation and Orchestrator, that a lot more is "used" in vSphere than is actually written to the array. Start doing the math in your head, though: 3.4 times 8.19GB does not equal 169GB. That is because we do not count thin provisioning savings as actual data reduction. This includes any set of zeros: space not provisioned to a VM at all, empty VMFS space, AND empty space provisioned to a VM (lazy or eager zeroed) that the VM has never written to. Since my environment is mostly empty VMs, you can see the Total Reduction is ridiculously high.
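To make that gap concrete, here is a minimal sketch in Python using the figures from the screenshots above (my lab numbers; yours will differ). It separates the reduction ratio the array reports for data it actually wrote from the much larger "total" reduction you get once thin provisioning savings are included.

```python
# Rough reconciliation of vSphere-reported vs. FlashArray-reported usage.
# All figures come from the lab screenshots above.

vmfs_used_gb = 169.4        # what vSphere thinks is consumed on the datastore
array_written_gb = 8.19     # data physically stored on the FlashArray
data_reduction_ratio = 3.4  # dedupe + compression ratio reported by the array

# Data the VMs actually wrote, before dedupe/compression:
logical_written_gb = array_written_gb * data_reduction_ratio
print(f"Logically written data:    {logical_written_gb:.1f} GB")   # ~27.8 GB

# The rest of the 169.4 GB is zeroes / never-written space that thin
# provisioning never sends to the array, so it is not part of the 3.4:1 ratio.
thin_savings_gb = vmfs_used_gb - logical_written_gb
print(f"Thin provisioning savings: {thin_savings_gb:.1f} GB")      # ~141.6 GB

# "Total reduction" compares everything vSphere thinks is used to what the
# array actually stores -- this is the ridiculously high number.
total_reduction = vmfs_used_gb / array_written_gb
print(f"Total reduction:           {total_reduction:.1f}:1")       # ~20.7:1
```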

Some solutions:
1. Use thin provisioned VMs with automatic UNMAP in vSphere 6. Read more from Cody Hosterman here: Direct Guest OS Unmap in vSphere 6. This gives a closer accounting of the space provisioned to VMs versus the space consumed on the array, though it is still not aware of the compression and dedupe happening behind the scenes on the array. A quick way to audit which of your VMs are already thin provisioned is sketched below.
2. vVols provide the storage awareness needed to let VMware know the actual consumption per VM. Come see them at the Pure Storage booth at VMworld.
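As a starting point for option 1, here is a minimal sketch, assuming pyVmomi and a vCenter you can reach (the host name and credentials below are placeholders), that lists each VM's virtual disks and whether they are thin provisioned. It only reads configuration; enabling in-guest UNMAP itself is covered in Cody's post linked above.

```python
# Minimal pyVmomi sketch: report which VM disks are thin provisioned.
# Host, user, and password are placeholders -- replace with your own vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; use real certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip inaccessible VMs
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                thin = getattr(dev.backing, "thinProvisioned", None)
                size_gb = dev.capacityInKB / (1024 * 1024)
                print(f"{vm.name}: {dev.deviceInfo.label} "
                      f"thin={thin} size={size_gb:.1f} GB")
    view.Destroy()
finally:
    Disconnect(si)
```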

Use the plugin!

[Screenshot: Pure Storage vSphere plugin showing datastore usage and data reduction in one view]

At least you can quickly see, all in one screen, that the 169.4GB vSphere reports will be reduced by 3.4:1 (for the data that is actually written).
