If vSAN Powered The Matrix…

After finding out I was approved to attend VMworld this year, I signed up to present a vBrownBag Tech Talk on VMware vSAN.  Just a few weeks before the conference, I was lucky to secure an official slot on the agenda for Thursday afternoon.

The idea for the talk was pretty simple.  If vSAN powered The Matrix, would it be planned carefully, like the enterprise technology it is, or run on whatever hardware happened to be available in the server closet?  Is cost really a barrier to adopting this technology, or are there ways to save money and still harness the power of VMware vSAN?  What are some other pitfalls to be aware of?  Watch the video [VMTN6733U] and see for yourself.  I hope you will find it helpful.


Here are some corrections / additions to the video:

  • Video at 2:31 – I mentioned StarWind Virtual SAN in this part of the video.  This is a software solution that runs on top of VMware or Xen as a nested VM, or as a native application on Hyper-V and Windows.  It is a different product from VMware’s vSAN, but still a very interesting one that can deliver extreme performance for many different types of workloads.
  • Video at 3:20 – I don’t think I ever mention in relation to this slide that in the VMware vSAN world we turn off RAID on the physical hosts and let the hypervisor pool the capacity drives from the disk groups on all hosts to make up a single vSAN datastore.  The RAID level (1, 5, or 6) describes how the VMDK objects are protected; the number of components in a VMDK object is determined by the RAID level, plus a witness where applicable.  A rough sketch of the component and capacity math follows this list.
  • Video at 9:30 (or thereabouts) – keep in mind we want all hosts in our vSAN cluster to have the same hardware configuration (same processor family and number of processors, same amount of RAM, same number and capacity of disks, same number of disk groups, etc.).  A simple configuration check is sketched after this list.
  • Video at 12:50 – stretched clusters pool storage from hosts at multiple sites into a single vSAN datastore that can be seen and managed in vCenter.  Inter-site bandwidth requirements apply in these cases (see the slide deck or VMware StorageHub for more); a rough estimate appears after this list.
  • Video at 16:00 – in the four-node RAID 1 configuration, if one node fails, a new copy of the component that lived on the failed host can be created on one of the remaining hosts, so all VMs stay compliant with the storage policy.
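
To make the component and capacity math a little more concrete, here is a quick back-of-the-envelope sketch in Python.  It is purely illustrative and uses the commonly documented figures for RAID 1 (FTT=1), RAID 5, and RAID 6; real layouts can differ (vSAN may split large components, for example), so treat the numbers as approximations.

```python
# Rough vSAN policy math (illustrative only; actual component layouts can vary).
POLICIES = {
    # name: (data components per object, witnesses, capacity multiplier, minimum hosts)
    "RAID-1 (FTT=1)": (2, 1, 2.0, 3),
    "RAID-5 (FTT=1)": (4, 0, 1.33, 4),
    "RAID-6 (FTT=2)": (6, 0, 1.5, 6),
}

def raw_capacity_needed(vmdk_gb, policy):
    """Approximate raw capacity a VMDK consumes under a protection policy."""
    data, witnesses, multiplier, min_hosts = POLICIES[policy]
    return vmdk_gb * multiplier, data + witnesses, min_hosts

for policy in POLICIES:
    raw, components, hosts = raw_capacity_needed(100, policy)
    # A four-node RAID-1 cluster has one host more than the minimum, which is
    # what allows a failed component to be rebuilt elsewhere (the 16:00 note above).
    print(f"{policy}: 100 GB VMDK -> ~{raw:.0f} GB raw, "
          f"{components} components, needs at least {hosts} hosts")
```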
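
Along the same lines, here is a tiny sketch of the kind of uniformity check described at 9:30.  The host dictionaries are made-up example data, not output from vCenter or any vSAN API; the point is just to compare every host against a baseline.

```python
# Hypothetical host specs (example data only) compared against a baseline host.
REQUIRED_MATCH = ("cpu_model", "cpu_sockets", "ram_gb",
                  "disk_groups", "capacity_disks_per_group", "capacity_disk_gb")

hosts = {
    "esx01": {"cpu_model": "Xeon Gold 6130", "cpu_sockets": 2, "ram_gb": 384,
              "disk_groups": 2, "capacity_disks_per_group": 4, "capacity_disk_gb": 1920},
    "esx02": {"cpu_model": "Xeon Gold 6130", "cpu_sockets": 2, "ram_gb": 384,
              "disk_groups": 2, "capacity_disks_per_group": 4, "capacity_disk_gb": 1920},
    "esx03": {"cpu_model": "Xeon Gold 6130", "cpu_sockets": 2, "ram_gb": 256,  # mismatch
              "disk_groups": 2, "capacity_disks_per_group": 4, "capacity_disk_gb": 1920},
}

baseline = next(iter(hosts.values()))
for name, spec in hosts.items():
    diffs = [k for k in REQUIRED_MATCH if spec[k] != baseline[k]]
    print(f"{name}: {'OK' if not diffs else 'differs in ' + ', '.join(diffs)}")
```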
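
Finally, for the stretched cluster note at 12:50, here is a rough inter-site bandwidth estimate.  The multipliers follow the rule of thumb I have seen in VMware’s sizing guidance (write bandwidth times a data multiplier of about 1.4 times a resiliency multiplier of 1); check the current StorageHub documentation for the exact recommendation.

```python
# Rough stretched-cluster inter-site bandwidth estimate (rule-of-thumb multipliers;
# verify against VMware's current sizing guidance before using this for a design).
def intersite_bandwidth_mbps(write_mbps, data_multiplier=1.4, resiliency_multiplier=1.0):
    """Estimated site-to-site bandwidth needed for a given write workload."""
    return write_mbps * data_multiplier * resiliency_multiplier

# Example: a workload averaging 500 Mbps of writes between the data sites
print(f"~{intersite_bandwidth_mbps(500):.0f} Mbps of inter-site bandwidth")
```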
