I have four MDS switches that I intend to use as part of a DR solution with native Fibre Channel and a Cisco ONS. I am moving from a Brocade solution and am interested in the buffer credits for the ISLs. Each link is 2 Gbps, and eventually they will be port channels once I get a couple more links.
255 buffer-to-buffer credits are assigned to each port for optimal bandwidth utilization across distance.
Does that mean the B2B credits are automagically assigned, or do we have to set them manually? The max distance in my alternate loop is 35 km. We have found that the traffic is odd and the typical B2B numbers are just not good enough. So, can I set, say, 70 or 105 B2B credits manually on each interface that will be an ISL? If so, which of the BB credit options do I use? The list is interesting:
Perf Bufs Admin.
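For what it's worth, on the MDS the receive B2B credit is settable per interface. A sketch of the sort of config I believe applies (the interface name fc1/1 is just an example, and the exact `switchport fcrxbbcredit` syntax is from memory, so check the configuration guide for your SAN-OS/NX-OS release):

```text
switch# configure terminal
switch(config)# interface fc1/1
switch(config-if)# switchport mode E
switch(config-if)# switchport fcrxbbcredit 105
```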
Oddly enough, after googling a fair bit and going over my training notes, I can't find any doco on this anywhere.
Can someone please comment on this?
Google 0, Cisco website 1.
Note: The receive BB_credit values depend on the module type and the port mode, as follows:
For 16-port switching modules and full rate ports, the default value is 16 for Fx mode and 255 for E or TE modes. The maximum value is 255 in all modes. This value can be changed as required.
It probably would help if I had already set up my ISL for this. Too much planning can be a dangerous thing. I read the above as saying it will be 255 when I set up the ISL.
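Once the ISL is up, there is a show command to confirm what the port actually negotiated (command name hedged from memory of the MDS CLI; I won't guess at the output format):

```text
switch# show interface fc1/1 bb_credit
```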
Thanks again for a response.
Do you know why there are so many B2B credits available on the interfaces compared to Brocade? E.g., a Brocade interface will only offer 3 credits on a standard interface compared to the 16 available on the MDS. This causes issues on heavily worked interfaces. Also, after forking out a sum of money to get long-distance mode for the Brocade, the maximum is still less than the standard 255 for ISLs.
Isn't a buffer credit the same across all vendors' devices? In relatively simple terms, one credit lets you extend a 1 Gbps FC interface up to about 2 kilometres. So by that rule a 2 Gbps interface over 32 kilometres would require about 32 buffer credits (one per kilometre), and more in practice once smaller frames are in the mix. Was Cisco planning for 8 Gbps interfaces when they did the buffer credit allocation?
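The rule of thumb can be sanity-checked with a little arithmetic. A sketch, assuming full-size frames (~2148 bytes on the wire) and roughly 5 microseconds per kilometre of propagation delay in fibre; smaller frames need proportionally more credits, which is why people provision double or triple:

```python
import math

# Assumptions (not vendor figures): full-size FC frame on the wire,
# and light in fibre travelling at ~5 us per km.
FRAME_BYTES = 2148
PROP_US_PER_KM = 5.0

def min_credits(distance_km: float, gbps: float) -> int:
    """Minimum B2B credits so the sender never stalls waiting for R_RDY."""
    frame_time_us = FRAME_BYTES * 8 / (gbps * 1000)   # serialisation time
    round_trip_us = 2 * distance_km * PROP_US_PER_KM  # out and back
    return math.ceil(round_trip_us / frame_time_us)

print(min_credits(32, 2))  # 2 Gbps over 32 km -> 38
print(min_credits(35, 2))  # the 35 km loop above -> 41
```

So the full-frame minimum for the 35 km loop lands in the high 30s or low 40s, and 70 or 105 gives comfortable headroom for a mixed frame-size workload.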
I wonder how the Nexus does this?
I hope this finds you well!
The reason for the larger buffer credit pools, one could assume, is that it is a lot cheaper to throw in the capacity for larger buffers now versus rejigging the hardware to be able to do it later.
As far as the Nexus goes, if you are talking about the FCoE part, it does not use buffer credits at all :-) - lossless Ethernet flow control (PAUSE frames) takes their place. As far as the native FC modules in the Nexus 5000... I'm sure someone at Cisco knows, but without seeing one of these up close and personal we would only be guessing :)
The buffer credit numbers for the various vendors no doubt have a long and arduous history. QLogic uses 12 buffer credits in its HBAs; some old bridges only use 3.
I have only ever seen buffer credit issues once, over distance - never within the datacentre.