Yesterday I wrote an article about an apparent conflict in support statements between EMC and VMware, titled Storage I/O Control with FAST Auto Tiering Support Statement Conflicts. Less than 8 hours later I have an answer, thanks to some great guys at VMware and EMC.
Thanks to Manish Patel (a.k.a. @Mandivs), one of the great team at VMware, I have received a response from one of the EMC TechBook document authors. It turns out that the reference to SIOC not being supported with FAST is incorrect and will be removed entirely from the next version of the document, which is due out in a couple of weeks. It feels so good to be connected to these guys! Here is the response, straight from Cody Hosterman, one of the authors of the EMC TechBook that caused my confusion, regarding the TechBook note: “This note is incorrect in the Techbook and is being pulled entirely in an upcoming version. It somehow arose from the fact that SIOC would rarely be useful with FAST VP-enabled devices and in a properly configured environment shouldn’t be necessary at all on a VMAX. But there are some extreme situations where it could help so it is supported. Please disregard the note.”
While researching this problem I also had responses from Andrew Mitchell, Duncan Epping and Scott Lowe, and I would like to thank them for helping get to the bottom of it.
Please note that this discussion is about Storage I/O Control used in combination with EMC FAST; it is not about using I/O metrics in Storage DRS in combination with EMC FAST. VMware’s current recommendation is that, when the arrays are using FAST, Storage DRS should be used for initial placement and load balancing based on space utilization only, with I/O metric balancing disabled. On the project I’m working on we are configuring Storage DRS in manual mode and will be using it for initial placement and for load balancing recommendations based on space utilization only; any implementation of those recommendations will be manual. This is in line with VMware’s recommendations.
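For anyone who prefers to script this sort of configuration, here is a rough sketch of what that Storage DRS setup looks like through the vSphere API using pyVmomi (the vSphere Python SDK). The vCenter address, credentials and the datastore cluster name are placeholders for your own environment; treat it as an illustration rather than a finished tool.

```python
# Rough sketch (pyVmomi): put a datastore cluster into manual Storage DRS mode
# with I/O metric load balancing disabled, so only space utilization drives
# recommendations. Host, credentials and cluster name below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password")  # assumption: valid credentials and SSL setup
content = si.RetrieveContent()

# Find the datastore cluster (StoragePod) by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == "FAST-VP-Cluster-01")
view.DestroyView()

# Enable Storage DRS in manual mode and turn off I/O metric load balancing.
pod_spec = vim.storageDrs.PodConfigSpec()
pod_spec.enabled = True
pod_spec.defaultVmBehavior = "manual"
pod_spec.ioLoadBalanceEnabled = False

spec = vim.storageDrs.ConfigSpec()
spec.podConfigSpec = pod_spec

task = content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True)
# ... wait on the task to complete, then disconnect.
Disconnect(si)
```

The same settings are of course available in the vSphere Client under the datastore cluster’s Storage DRS settings if you would rather make the change by hand.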
For further information regarding Storage DRS interoperability with array features check out this great blog post – Storage DRS and Storage Array Feature Interoperability. I would also recommend this article from Chad Sakac – vSphere 4.1, SIOC, and Array Auto-Tiering.
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster. Copyright © 2012 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
Great article.
Good to hear SIOC is supported with EMC FAST (just in case) and thanks for clarifying it should be disabled under normal circumstances.
Like you, we use Storage DRS to manage initial placement of VMs onto FAST VP (VNX 7500) storage, and I/O metrics are currently disabled.
Hi Gareth, SIOC should be enabled. It will help smooth out any unacceptably high latency peaks. But you should have I/O metrics disabled in Storage DRS when using FAST. These are two quite different things. SIOC: yes; I/O metrics in Storage DRS when using FAST: no.
Ok, thanks. I was getting confused there. All our non-replicated datastores reside in Storage DRS and our replicated datastores sit outside Storage DRS, as it's not compatible with SRM 5.
Currently we have SIOC enabled on all our non-replicated datastores with Storage DRS I/O metrics disabled, but we have had to disable SIOC on all our replicated datastores (outside Storage DRS), as we found that otherwise SRM cannot unmount the datastore.
We just migrated VMs to datastores on a VMAX with FAST VP. I was planning on keeping SIOC off. My reasoning is that we have some pretty large SQL VMs with high IOPS and, in turn, higher response times. I'm assuming that SIOC is going to throttle them back when it doesn't necessarily need to. My other reason is the annoying SIOC external I/O alarm, which is pretty noisy in the logs even if you turn the alarm off. We are migrating off an AMS2500 that served datastores only to this vCenter, which according to the documentation means vCenter should know where the I/O is coming from. Unfortunately, after talking with HDS, that's not quite the case when using disk pools, which spread the data across all disks in 42MB chunks. There has to be some kind of overhead with SIOC as well, so if it's not doing much, if any, good, why keep it on?
After reading this I'm re-thinking turning SIOC on, since we just bumped up to 6TB datastores. We will have anywhere between 1 and 20 VMs on each datastore.
Hi Garret, definitely have SIOC on. It will only kick in if there is storage performance contention and response times for I/Os go above 30ms (the default congestion threshold). Average I/O service times should ideally be below 10ms with spikes of less than 20ms, in which case SIOC will have no impact. 30ms I/O latency shouldn't be happening on a VMAX. So it's definitely a good idea to have SIOC there, especially if you have quite a few VMs per datastore; it will ensure that each VM gets its fair share of I/O. There is no overhead in having it enabled either. Which version of vSphere are you using?
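If you want to check where you stand across a lot of datastores, here is a rough pyVmomi sketch that reports which datastores have SIOC enabled and what their congestion thresholds are; the vCenter address and credentials are placeholders for your own environment.

```python
# Rough sketch (pyVmomi): report SIOC state and congestion threshold (ms)
# for every datastore in vCenter. Host and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password")  # assumption: valid credentials and SSL setup
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    iorm = ds.iormConfiguration  # StorageIORMInfo; may be absent on some datastore types
    if iorm is not None:
        print("%s: SIOC enabled=%s, congestion threshold=%sms"
              % (ds.name, iorm.enabled, iorm.congestionThreshold))
view.DestroyView()
Disconnect(si)
```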
The alarm about external influences is a result of other operations on the SAN impacting performance. SIOC expects uniform behaviour when only the virtual environment is impacting the datastore. Operations such as SAN-based snapshots, replication, backups etc. can influence this. Sharing physical and virtual workloads on the same disk pools within an array will also cause that alarm to be raised. But the alarm doesn't self-clear, so it might have only been an issue for a few seconds. In vSphere 5.0 and 5.1 the SIOC functionality has been greatly improved and the alarms and error messages made more sensible.