The biggest question around sizing your VDI usually comes down to sizing the storage.
Some of the solutions team created a pretty cool sizing whitepaper a few months back, which inspired me to create this web-based calculator. It is not meant to do everything in the whole world, just give a quick and easy VNX setup.
The source is on GitHub so please have fun with that.
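If you want a feel for the kind of math a calculator like this does, here is a minimal sketch. All the per-desktop numbers (10 IOPS per desktop, 80% writes, RAID 5 write penalty of 4, 180 IOPS per 15k spindle) are illustrative assumptions for the example, not figures from the whitepaper:

```python
# Back-of-the-envelope VDI storage sizing.
# Every default below is an illustrative assumption; plug in your own numbers.

def size_vdi_storage(desktops, iops_per_desktop=10, write_pct=0.8,
                     raid_write_penalty=4, disk_iops=180, disk_gb=600,
                     gb_per_desktop=20):
    """Estimate spindle and capacity needs for a VDI pool."""
    front_end = desktops * iops_per_desktop
    # Writes are amplified by the RAID penalty (4 for RAID 5).
    back_end = (front_end * (1 - write_pct)
                + front_end * write_pct * raid_write_penalty)
    disks_for_iops = -(-back_end // disk_iops)               # ceiling division
    disks_for_capacity = -(-(desktops * gb_per_desktop) // disk_gb)
    return {"front_end_iops": front_end,
            "back_end_iops": back_end,
            "disks_needed": int(max(disks_for_iops, disks_for_capacity))}

print(size_vdi_storage(500))
```

For 500 desktops with these assumptions, the back-end write amplification dominates: you end up sizing for IOPS (about 95 spindles), not capacity.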
I was very excited to try out my View desktop with my new ZaggFolio keyboard case. I did not have a chance to try the View Client with the keyboard until today, and I was sad to find out the keyboard does not work very smoothly. So I would like to point this out:
First you have to tap the keyboard icon in the top menu. I am not sure why this step exists, but it would be great if the external keyboard fully worked, because the on-screen keyboard takes up half of the screen.
Anyone else think this is kind of weird?
Sometimes I am sitting up late at night and I have a thought about something that would be cool, like if x and y worked together to get z. This time I thought it was good enough to blog about. Now, I want to stress that I do not have any special insight into what is coming; this is just how I wish things would be.
Today there are two end-user portals from VMware: vCloud Director for the self-service cloud interface, and the View Manager access point for end users to reach virtual desktops. Each interface interacts with one or more vCenter instances to deploy, manage, and destroy virtual machines. Below is a way-oversimplified representation of how View and vCloud Director (plus Request Manager) relate to the user experience. I think there is a divide where there does not need to be one (someday).
What if vCloud Director could become the one-stop user interface portal? Leveraging vCloud Request Manager, vCD could deploy cloud resources: desktops, servers, or both. vCloud Director would be the orchestration piece for VMware View. Once the request for a desktop is approved, the entitlement to the correct pool is automatically granted; if extra desktops are needed, the cloning begins. vCloud Director would learn to speak View Composer's language, providing the ever-elusive ability to use linked clones with vCD. With this feature, vCloud Director could be great for lab and test/dev environments. The best part is that, operationally, there is one place to request, deploy, and manage all virtual resources from the end-user perspective. This could eliminate the ambiguity for a user (and service providers) on how to consume (and deliver) resources. It also has implications for how IaaS and DaaS would be architected.
Now, some drawbacks.
You might say, "Hey Jon, are you going to make me buy and run vCD just to get VDI?" No. The beauty of the APIs is that each product could stand alone or work together (in my vision of how they should work). You could even leverage Composer with vCD without View, or Request Manager with View without vCD.
One Cloud Portal to rule them all.
This happened a long time ago. I arrived at a customer site to install View Desktop Manager (it may have been version 2). This was before any cool VDI sizing tools like Liquidware Labs. While installing ESX and VDM, I casually asked, "What apps will you be running on this install?" The answer was, "Oh, web apps like YouTube, Flash and some Shockwave stuff." I thought "ah dang" in my best Mater voice. This was a case of two different organizations each thinking someone else had gathered the proper information. Important details sometimes fall through the cracks. Since that day, I try to uncover at least most of this stuff before I show up on site.
Even though we have great assessment tools now, remember to ask some questions and get to know your customer's end goal.
Things I learned that day, as related to VDI:
1. Know what your client is doing, “What apps are you going to use?”
2. Know where your client wants to do that thing from, “So, what kind of connection do you have to that remote office with 100+ users?”
This is not the full list of questions I would ask, just some I learned along the way.
*Disclaimer – I work for a Xsigo and VMware partner.
I was in the VMware View Design and Best Practices class a couple of weeks ago. Much of the class is built on the VMware View Reference Architecture, and the picture below is from that PDF.
It really struck me how many IO connections (network or storage) it would take to run this POD. The minimum (in my opinion) would be 6 cables per host; with ten 8-host clusters, that is 480 cables! Let's say 160 of those are 4 Gb Fibre Channel and the other 320 are 1 Gb Ethernet. That is 640 Gb of bandwidth for storage and 320 Gb for network.
Xsigo currently uses 20 Gb InfiniBand, and best practice would be to use two cards per server. The same 80 servers in the above cluster would have 3,200 Gb of bandwidth available. Add in the flexibility and ease of management you get using virtual IO, plus the savings from the director-class Fibre Channel switches and datacenter switches you no longer need, and I would think the ROI pays for the Xsigo Directors. I don't deal with pricing, though, so that is pure contemplation; I will stick with the technical benefits. Being in the datacenter, I like any solution that makes provisioning servers easier, takes less cabling, and gives me unbelievable bandwidth.
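The cabling and bandwidth math above can be laid out quickly (the per-host cable counts are my assumed split of 2 FC + 4 Ethernet):

```python
# Reproducing the POD cabling math from the post.
hosts = 80                       # ten 8-host clusters
legacy_cables = hosts * 6        # assumed 2x 4 Gb FC + 4x 1 Gb Ethernet per host
fc_gb = hosts * 2 * 4            # 160 Fibre Channel links at 4 Gb each
eth_gb = hosts * 4 * 1           # 320 Ethernet links at 1 Gb each

xsigo_cables = hosts * 2         # two 20 Gb InfiniBand cards per server
xsigo_gb = xsigo_cables * 20

print(legacy_cables, fc_gb + eth_gb)   # 480 cables, 960 Gb total
print(xsigo_cables, xsigo_gb)          # 160 cables, 3200 Gb total
```

A third of the cables for more than three times the raw bandwidth, before any oversubscription considerations.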
So, just as VMware changed the way we think about the datacenter, virtual IO will once again change how we deal with our deployments.
Recently I have spent time re-thinking certain configuration scenarios and asking myself, "Why?" If there is something I do day to day during installs, is it still true when it comes to vSphere? Will it still be true in future versions?
Lately I have questioned how I deploy LUNs/volumes/datastores. I usually deploy multiple moderately sized datastores; in my opinion this was always the best fit for most situations. I also create datastores based on need afterward: some general-use datastores first, then a bigger or smaller store based on performance or capacity needs. After all the research I have done and the questions I asked on Twitter*, I still think this is a good plan in most situations.
I went over the VMworld.com session TA3220 – VMware vStorage VMFS-3 Architectural Advances since ESX 3.0 and read this paper:
I also went over some blog posts at Yellow-Bricks.com and Virtualgeek.
An idea occurred to me about using extents in VMFS, SCSI reservations/locks, and VDI "boot storms." First, some things I picked up:
1. Extents are not "spill and fill"; VMFS places VM files across all the LUNs. It is not quite what I would call load balancing, since it does not take IO load into account when placing files, but in situations where all the VMs have similar loads this won't be a problem.
2. Only the first LUN in a VMFS span gets locked by "storage and VMFS administrative tasks" (Scalable Storage Performance, p. 9). I am not sure if this applies to all locks.
Booting hundreds of VMs for VMware View will cause locking, and even though vSphere is much better about how quickly this process completes, there is still an impact. So I am beginning to think of a disk layout to ease administration for VDI, and possibly lay the groundwork for improved performance. Here is my theory:
Create four LUNs of 200 GB each and use VMFS extents to group them together, resulting in an 800 GB datastore with four disk queues and only one LUN that locks during administrative tasks.
Give this datastore to VMware View and let it have at it. Since the IO load for each VM is mostly the same, and is really at its highest during boot, tasks performed after the initial boot storm will have even less impact. So we can let desktops get destroyed and rebuilt/cloned all day while locking only that first LUN. This part I still need to confirm in the lab.
What I have seen in the lab is that, with same-sized clones, the data on disk was spread pretty evenly across the LUNs.
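The theory above reduces to a few numbers. The per-LUN queue depth of 32 is an assumption here (HBA defaults vary); the rest comes straight from the layout:

```python
# Sketch of the proposed spanned-VMFS layout.
luns = 4
lun_gb = 200
queue_depth_per_lun = 32        # assumed typical HBA LUN queue depth

datastore_gb = luns * lun_gb                    # one 800 GB VMFS volume
total_queue_slots = luns * queue_depth_per_lun  # outstanding IOs available
locking_luns = 1                                # only the head extent locks

print(datastore_gb, total_queue_slots, locking_luns)
```

Compared with one 800 GB LUN, you get four times the outstanding-IO capacity; compared with four separate datastores, administrative locking touches one LUN instead of four.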
Any other ideas? Please leave a comment. Maybe I am way off base.
*(thanks to @lamw @jasonboche and @sakacc for discussing or answering my tweets)
I spent the last couple of weeks looking for a good way to re-purpose PCs as thin clients to ease the investment in VDI. I stumbled across this PDF from VMware and thought it was great. I would lean toward using Group Policy to deploy the new shell described on pages 3 and 4. It can always be undone if the PC is needed as a PC again.
Check it out.
You pretty much replace the default shell (explorer.exe) with the VMware View Client. I would also suggest using Group Policy to keep people from using Task Manager to start new processes. This should be a temporary solution until you have budget to buy some real thin clients, or even netbooks.
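For the shell swap itself, something like the following could build the `reg add` commands a startup or GPO script would run. The Winlogon key is the standard location for the Shell value; the View Client path is my assumption here, so check where your installer actually puts the executable before using it:

```python
# Build the reg.exe commands for swapping the Windows shell.
# VIEW_CLIENT is an assumed install path; verify it on your image.
VIEW_CLIENT = r"C:\Program Files\VMware\VMware View\Client\bin\wswc.exe"
WINLOGON = r"HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

def shell_swap_command(shell_path):
    """Command that replaces the default shell with the given program."""
    return f'reg add "{WINLOGON}" /v Shell /t REG_SZ /d "{shell_path}" /f'

def shell_restore_command():
    """Command that puts explorer.exe back if the PC becomes a PC again."""
    return f'reg add "{WINLOGON}" /v Shell /t REG_SZ /d explorer.exe /f'

print(shell_swap_command(VIEW_CLIENT))
print(shell_restore_command())
```

Generating the commands rather than editing the registry directly keeps the change visible and reversible, which matters when the whole point is that it can be undone.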
There are, of course, lots of options out there for thin clients and for software that provisions a "thin OS" onto machines. This is free and easy, though. I thought it was cool, so I decided to share.