leolaohoo wrote: Word has it the F2 card has a design flaw in the hardware. So there's a crossroad as to whether to develop a software fix or to roll out a new card altogether.

Not so much a design flaw as a completely different forwarding engine, hence the incompatibility. While I understand the caveat around the F2 needing its own VDC, there still needs to be a migration path from M1 in an HA environment. I'll be heading to a CPOC in October/November this year to run through some migration "strategies", which will basically be about finding the method with the least amount of downtime. Currently, between myself and the SEs at Cisco, the best way we can see to do it is a forced split-brain, and then a shut / no shut on the physical ports on either side.
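For what it's worth, the shut / no shut part of that approach would look roughly like the sketch below on the vPC member ports. The interface ranges are hypothetical placeholders; this is an illustration of the idea, not a validated procedure:

```
! On the chassis being cut over, after the split-brain has been forced
! (interface range is a placeholder - use the actual vPC member ports)
interface ethernet 1/1-8
  shutdown

! ... perform the card swap / cutover work ...

interface ethernet 1/1-8
  no shutdown
```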
leolaohoo wrote: How about creating a totally different VPC, put the F2 there and start moving things across. Once this is done, switch back to the original VPC.

I assume you mean create a new VDC with the F2 cards in it, create a new vPC between the switches, and migrate the VLANs one by one. That would unfortunately require an outage for each of the VLANs rather than one large one, and because we have downstream Nexus 5Ks and 2Ks with the VLANs trunked, it wouldn't work.

*edit* I had already thought of a variation on this:

Create a new VDC on each switch
Create a new vPC on each switch
Connect the original VDC to the new VDC at layer 2 - trunk M1 -> F2
Create HSRP entries and spanning tree entries on the F2 interfaces
Create uplinks on the F2 cards into the core
Add secondary vPC links for the 2 sets of downstream 5Ks
Shut down the M1 cards and fail over to the F2 environment
Take the outage hit on the Nexus 2Ks that are currently direct-attached to the 7Ks

A) This is a hack
B) I'm not sure whether the failover would even work
C) This is a hack
D) Loops galore
E) The time required to implement and cut over extends the risk, fibre capacity, etc.
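To make the first few of those steps concrete, here's a rough NX-OS sketch on one chassis. The VDC name, interface numbers, VLAN and HSRP values are all hypothetical placeholders, and this is only an outline of the approach, not a tested cutover config:

```
! From the admin/default VDC: create the F2 VDC and hand it the F2 ports
! (slot/port numbers are placeholders)
vdc F2-VDC
  allocate interface ethernet 3/1-32

! Inside F2-VDC: layer-2 trunk back to the original M1 VDC
feature interface-vlan
feature hsrp
interface ethernet 3/1
  switchport
  switchport mode trunk
  no shutdown

! HSRP entry for a migrated VLAN, kept at lower priority until cutover
interface vlan 100
  ip address 10.0.100.3/24
  hsrp 100
    ip 10.0.100.1
    priority 90
```

The idea being that the F2 side sits as the HSRP standby until the M1 cards are shut down, at which point it takes over forwarding.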
Yeah, I already noted that, and it was understood before the purchase. The intention is to replace all of the M1 cards with F2 cards, one chassis at a time: i.e. shut down chassis B, replace the M1 cards with F2, power on, bring up the vPC, shut down chassis A, repeat. Unfortunately, due to the cross-card vPC issue (an F2 in chassis B won't bring up a vPC peer link to chassis A with an M1 card), this can't be done. The only alternative is to shut down both sides of the server farm switches, pull all of the M1 cards at the same time, and replace them with F2. The result is a total outage, which is unacceptable for an HA-designed network.
So, as the topic indicates, we have 2 x N7Ks as our server farm switches, a vPC between them (a single vPC), and about 150 VLANs. It was always our intention to upgrade to the F2 line cards when they were made available, rather than the M1 cards that we purchased during the initial upgrade. The problem we've run into is that there's no way to migrate one chassis at a time without a complete outage, due to the cross-card vPC limitation: an F2 card cannot form a vPC peer with a neighbour chassis using an M1 card, even though the port channel comes up.

%VPC-3-VPC_PEER_LINK_BRINGUP_FAILED: vPC peer-link bringup failed (F2 VDC support mismatch)

Creating a new VDC for the F2 cards and moving networks across (or creating new ones) is not viable, as we'd end up with an outage just as long. The 10-15 minutes of downtime required to shut down both switches to do the upgrade is somewhat unrealistic in our environment for the next 6 months or so. Has anyone else run into this issue / migration path, and what was your solution? At this stage, because the port channel comes up, it's likely a software limitation put in place to avoid an unsupported topology; the easiest solution I can come up with is to ask Cisco for an engineering fix to the code to allow the unsupported topology (for a few minutes).
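For anyone hitting the same bringup-failed message, the mismatch and vPC state can be confirmed with standard show commands before and after the attempt (a sketch; which slots hold which cards obviously varies):

```
! Identify the card generation in each chassis
! (look for N7K-M132XP-12 vs. F2-series module entries)
show module

! Peer-link status and the reason it failed to come up
show vpc

! Compare the parameters the two peers are negotiating
show vpc consistency-parameters global
```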
So we ran into an interesting issue today, and I'm hoping someone can confirm my assumptions / it'll help someone else out if they come across it.

NX-OS 5.0(3)N2(1)
Nexus 7010
N7K-M132XP-12 line cards

By default the line cards/system use only 2 egress queues for the 32-port M1 cards:

1p7q4t-out-pq1 (matches cos 5-7)
1p7q4t-out-q-default (matches cos 0-4)

which really doesn't line up with what we wanted, so we modified the config on the default VDC to the following:

class-map type queuing match-any 1p7q4t-out-pq1
  match cos 5
class-map type queuing match-any 1p7q4t-out-q2
  match cos 7
class-map type queuing match-any 1p7q4t-out-q3
  match cos 6
class-map type queuing match-any 1p7q4t-out-q4
  match cos 3
class-map type queuing match-any 1p7q4t-out-q5
  match cos 2
class-map type queuing match-any 1p7q4t-out-q6
  match cos 4
class-map type queuing match-any 1p7q4t-out-q7
  match cos 0
class-map type queuing match-any 1p7q4t-out-q-default
  match cos 1

We then created the relevant policy-map, which sets the priority queue and bandwidth requirements for each queue/class, and applied the policy map to the interfaces we wanted. Voila, everything works. Well, actually no: the system basically started dropping packets on egress (output discards) on all the ports that didn't have the new policy applied to them.

Now, we can't create custom class maps; we have to use the system default ones and then modify them to suit (match cos statements). It's my assumption that once we modify these default class maps, the change is also applied to the default policy, which reads as follows:

sh queuing int e x/x

Service-policy (queuing) output: default-out-policy
policy statistics status: enabled

Class-map (queuing): out-pq1 (match-any)
  priority level 1
  queue-limit percent 16
  queue dropped pkts : 0

Class-map (queuing): out-q2 (match-any)
  queue-limit percent 1
  queue dropped pkts : 0

Class-map (queuing): out-q3 (match-any)
  queue-limit percent 1
  queue dropped pkts : 0

Class-map (queuing): out-q-default (match-any)
  queue-limit percent 82
  bandwidth remaining percent 25
  queue dropped pkts : 0
By modifying the default queues, we've affected the default egress policy-map, and for things to function correctly a custom default policy needs to be set up. With the default egress policy, anything not in the above 4 queues gets dropped.

A few things:

Why doesn't the output queue name in the default policy map (out-q3) match the actual class-map name of 1p7q4t-out-q3?
Are my assumptions about the output discards correct, i.e. that there is no relevant mapping for the modified egress queues in the default system policy?
Where in the config guide is this? I've seen some references that infer it, but don't actually point it out.
I'm sure someone can explain to me why this is a system-wide config item, rather than per VDC.

Cheers
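For reference, a queuing policy that references all eight of the remapped classes might look like the sketch below, so that every CoS value lands in a queue the active policy actually services. The policy name and the bandwidth percentages are hypothetical; the class names are the system defaults from the config above:

```
! Hypothetical queuing policy covering all 8 classes
! (non-priority bandwidth-remaining values sum to 100)
policy-map type queuing my-7q-out
  class type queuing 1p7q4t-out-pq1
    priority level 1
  class type queuing 1p7q4t-out-q2
    bandwidth remaining percent 10
  class type queuing 1p7q4t-out-q3
    bandwidth remaining percent 10
  class type queuing 1p7q4t-out-q4
    bandwidth remaining percent 20
  class type queuing 1p7q4t-out-q5
    bandwidth remaining percent 15
  class type queuing 1p7q4t-out-q6
    bandwidth remaining percent 15
  class type queuing 1p7q4t-out-q7
    bandwidth remaining percent 10
  class type queuing 1p7q4t-out-q-default
    bandwidth remaining percent 20

interface ethernet x/x
  service-policy type queuing output my-7q-out
```

The point being that once the default class maps are remapped, any interface still running the stock default-out-policy only services 4 of the 8 queues, which would line up with the output discards described above.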
Hi Guys,

My first foray into IPCC-based scripting, and I've run into a few problems. Firstly, what I'm trying to do is allow a user to press a button during the queue process, which then allows them to manually input their callback number. Once an agent becomes available, the system calls the agent and the callback number and connects them. I've attached the script that I'm working with, and the log file.

What I'm seeing is that I can get one call queued and it works fine: the agent comes online, the system calls the agent, prompts them to initiate the callback (press any key), and then connects them to the user. When I have multiple callbacks in the queue, I get the "Narky Lady" informing me we have system problems. This happens to all the queued callbacks. The second part is that I'm getting "Event queue time exceeded" and the call is being dropped out of the queue after the callback has been queued.

I'm running CUCM 7.0.1.11000-2 & UCCX 7.0(1)_Build168.

Any help would be appreciated, or could someone point me to a working example?

Thanks