Does anyone have anything to share on how to know when your FIs are at or near capacity?
UCS Central and UCSM are very, very poor at capacity planning.
We are looking to get our hands on the Blue Medora plugins for vROps to get end-to-end metrics.
But even then, when is full actually full?
So if anyone has an idea of what an FI is actually capable of handling before it goes boom, I would be really happy.
Our current configuration, as an example, on one of our domains:
4 x 10 Gb uplinks to the network per FI
4 chassis connections per chassis, per FI (so 8 per chassis) x 6 chassis per domain = 24 ports
4 x 8 Gb uplinks to the storage
So that is 28 of 32 ports in use on each fabric, with no expansion module in use.
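For what it's worth, here is a rough back-of-the-envelope sketch of the raw numbers in that configuration. The constants are just the figures listed above, and it assumes 10G chassis links and line rate everywhere, which real traffic never hits:

```python
# Back-of-the-envelope capacity check for one fabric (FI), using the
# numbers from the configuration above. All figures are line-rate
# maximums, not real-world throughput.

NETWORK_UPLINKS = 4      # 10 Gb Ethernet uplinks per FI
UPLINK_GBPS = 10
CHASSIS = 6
LINKS_PER_CHASSIS = 4    # chassis connections per chassis, per FI
SERVER_LINK_GBPS = 10    # assumption: 10G IOM-to-FI links
FC_UPLINKS = 4
FC_GBPS = 8

northbound = NETWORK_UPLINKS * UPLINK_GBPS                    # to the LAN
southbound = CHASSIS * LINKS_PER_CHASSIS * SERVER_LINK_GBPS   # to the servers
fc = FC_UPLINKS * FC_GBPS                                     # to storage

print(f"Northbound (LAN) capacity per FI: {northbound} Gb/s")
print(f"Southbound (server) capacity per FI: {southbound} Gb/s")
print(f"FC (storage) capacity per FI: {fc} Gb/s")
print(f"LAN oversubscription ratio: {southbound / northbound:.0f}:1")
```

So on paper that fabric is 6:1 oversubscribed toward the LAN, which is why the question is really "how busy are the uplinks", not "how many ports are cabled".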
Thanks for any help/suggestions
You may have noticed UCS Performance Manager. It was developed through a Cisco and Zenoss partnership.
It's simple to use and shows you real metrics on your FIs' behaviour as well as the rest of the environment: how your uplinks, server ports, and FC uplinks are doing. If you also add your storage, upstream switches (Eth/SAN), and vCenter/SCVMM, you'll get a holistic view of how things are going, all in a very organised layout.
You can ask your representative for a trial license; I'm not sure whether it's 30 or 90 days, but sometimes that's enough to get an idea and be safe before deciding.
You'll probably be surprised by how lightly your systems are running.
I hope you get the information that you need.
Check this link: Cisco UCS Performance Manager Data Sheet - Cisco
OK, I was already aware of this software, thanks.
But without it, is there a way to tell?
Have you considered building thresholds for notifications based on capacity? For example, any interface that exceeds 75% utilization triggers a log entry, etc. smcquerr wrote a good guide a few years back. Here's a link, but it's hosted on Jive software for some reason. If you can't access it, let me know and I'll get it to you.
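The threshold idea is easy to prototype outside UCSM too. This is a minimal sketch, assuming you can already pull per-port traffic counters from somewhere (SNMP, the UCSM XML API, etc.) into simple tuples; the port names and numbers below are made up for illustration:

```python
# Minimal sketch of a 75% utilization check. Assumes you have already
# collected (port_name, rx_gbps, tx_gbps, link_speed_gbps) samples from
# your monitoring source; the sample data here is invented.

THRESHOLD = 0.75  # flag anything over 75% of link speed

def over_threshold(ports, threshold=THRESHOLD):
    """Return (port, utilization) for ports whose rx or tx exceeds the threshold."""
    flagged = []
    for name, rx_gbps, tx_gbps, speed_gbps in ports:
        util = max(rx_gbps, tx_gbps) / speed_gbps
        if util > threshold:
            flagged.append((name, util))
    return flagged

sample = [
    ("Eth1/1", 3.2, 2.8, 10.0),   # quiet network uplink
    ("Eth1/2", 8.1, 7.9, 10.0),   # busy network uplink -> flagged
    ("fc1/29", 6.5, 6.0, 8.0),    # busy FC uplink -> flagged
]

for port, util in over_threshold(sample):
    print(f"WARNING: {port} at {util:.0%} utilization")
```

From there it's a small step to have the script log or email instead of print, and run it on a schedule.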
I will give it a look.
I also got news that there is a new plugin from Cisco for vROps, and it's free. Here is the link:
I will give it a try and post back my observations.
Thanks to Ravikumar for the link.