Thanks Hozaifa. So I'm not sure I conveyed my last question correctly. If default log levels are used for both global and ACL logs, which I believe is log level 6, then is there any point to using ACL logs, other than if you want to add additional logging on a per-ACL level?
Here's the default host logging configuration, with no buffer config changes:
logging enable
logging trap informational
logging host MANAGEMENT 1.1.1.1
Here's a default ACL logging level:
access-list outside-acl permit ip host 1.1.1.1 any log
So to make sure I understand: using the global "logging host" command would be the same as having "log" at the end of every ACL, given that default logging levels are used in either situation?
Hello,
We currently meet all of our logging needs on our ASAs by using the "logging host" command to send all firewall traffic events to an event collector, where we can search and correlate them. I'm working to determine whether there's any advantage to adding the "log" keyword to the end of our extended access-lists on top of this. In the Cisco documentation I'm finding that using it causes ACL hits to be grouped into "flows" rather than generating a separate log message for each hit, but I'm not really sure why else it would be used. The documentation also mentions that enabling it on an ACL can increase CPU usage but reduces the volume of logs produced. Any thoughts on why enabling "log" on extended ACLs is useful?
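For reference, here's the kind of per-ACE logging I'm asking about. This is just a sketch; the host, level, and interval are placeholders rather than our actual config:
access-list outside-acl extended permit tcp any host 192.0.2.10 eq 443 log informational interval 300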
Thanks,
LK
I'm having trouble finding any decent guides for upgrading a 2960X stack with the .BIN file. The official Cisco guide covers the process with the .TAR file, but it's not the same. I'm wondering if anyone has had experience with the .BIN upgrade process. Based on bits and pieces of articles that I found on Google, here's how I see the process going for our 4-switch stack:
1. Copy the .BIN file to the "flash" directory of EACH switch in the stack.
- copy ftp: flash1:
- copy ftp: flash2:
- copy ftp: flash3:
- copy ftp: flash4:
2. Upgrade the Master (switch #1) switch first. Save and reload the device.
- boot system switch all flash:/bin-file-name.bin
3. Upgrade the other switches one at a time.
- archive copy-sw /force-reload /overwrite /dest 2 1
- archive copy-sw /force-reload /overwrite /dest 3 1
- archive copy-sw /force-reload /overwrite /dest 4 1
4. Verify that all switches upgraded after their reboots (a sketch of the verification commands I have in mind is below).
Seems time-intensive, but I don't see any other way to do the upgrade. Anyone else?
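For step 4, the verification I have in mind is along these lines; just a sketch of the standard show commands, since I haven't confirmed the full .BIN procedure yet:
show switch
show version
show boot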
Thank you,
LK
I had this problem and removing the "aaa authentication login default group radius", as well as any other radius-related commands, did not work. I also verified that the default accounting and authentication mechanism was "local."
After some troubleshooting I found that you can remove the radius-server configuration by specifying only the host IP portion of the command. So, instead of typing this:
no radius-server host 10.14.206.210 authentication accounting
you would type this:
no radius-server host 10.14.206.210
It worked for me. May help others.
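To double-check that the radius-server lines are really gone afterward, a quick look at the running config should do it:
show running-config | include radius-server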
Thanks,
Logan
Akash - I've literally been spending all day trying to understand burst rate within policing. Your explanation struck the right chord with me, and now I understand. Thank you!
I've been searching the Internet for an answer to this for quite some time and have been unable to find anything on it. What happens if you do not configure the class-default class in a policy-map that you apply to an interface? Will non-matched traffic get dropped? My understanding is that class-default is designed to catch anything that isn't matched by the previous class-maps within the policy, and then give that traffic best-effort delivery. Thanks in advance for the help. LK
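For context, here's the kind of policy I mean; the policy-map and class names are placeholders, and the explicit class-default block at the end is the piece I'm asking about leaving out:
policy-map WAN-EDGE
 class VOICE
  priority 512
 class class-default
  fair-queue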
Edit - I mistook your question as including line cards, which would not be a vPC. My apologies. As you mentioned, I do not believe you can run a VFC over a vPC because that traffic is not allowed over the vPC peer link. Logan
Fawad,

We have 5596s with 2000 FEXes as well and will be going from 6.0 to 7.0 NX-OS here in a couple of weeks. We're primarily following the upgrade procedure at the end of this guide, in the section titled "Upgrading a Dual-Homed FEX Access Layer": http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5500/sw/upgrade/705_N1_1/n5500_upgrade_downgrade_700.html

The listed process does have a portion of the steps where you set the boot variables, save the config, then reload. One of the other posters in this thread says not to do this, but according to the documentation you need to do it on the secondary 5K once the primary 5K is upgraded.

Because we have heard of issues with the upgrade rebooting all the FEXes upon upgrading the primary 5K (see glen's comment in this thread), we'll be modifying the upgrade process on our end a bit. Might be something to try for yourself. Before we upgrade our primary 5K we'll be shutting down half of the FEX ports on the primary so that the upgrade won't get pushed out to those FEXes. This prevents those FEXes from being upgraded and rebooted at that point. We'll upgrade them when we upgrade the secondary 5K with the "install all" command. I ran this by Cisco TAC and they say it should work. Can never be too cautious...

Thanks,
Logan
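P.S. In case it helps anyone, the "shut down half the FEX ports" step would look roughly like this on the primary 5K; the port-channel number is just a placeholder for the fabric interface to whichever FEX you want to hold back:
interface port-channel 101
 shutdown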
Hi Joris,

The FEXes *shouldn't* reboot when you upgrade the primary. Please see the following link for the exact instructions (ref: "Upgrading a Dual-Homed FEX Access Layer"): http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/upgrade/503_N1_1/n5k_upgrade_downgrade_503.html

During this procedure your servers' network connections should be maintained through the secondary 5K. However, if you are not comfortable with this upgrade method, there is another that I have discussed with Cisco as being workable, where you shut down half of your FEX ports on the primary 5K prior to the primary 5K upgrade. This allows you to upgrade half the FEXes at a time, and it is useful if you have two FEXes per rack with your hosts/servers dual-homed to those FEXes. The procedure would look like this:

1. On the primary 5K, shut down half of the FEX ports. In this situation you would shut down the ports to one FEX per rack, leaving the other FEX online in that rack to maintain host reachability.
2. Upgrade the primary 5K. This results in the primary 5K and the half of the FEXes it can still reach being upgraded and rebooted, while leaving the other half of the FEXes and the secondary 5K available.
3. Verify the secondary 5K is now operational primary. If it's not, it will not be able to upgrade the remaining FEXes; reboot the primary 5K again if the secondary is not operational primary. (In order for FEXes to grab upgrade files from the parent 5K being upgraded, that 5K must be either primary or operational primary.)
4. Upgrade the secondary 5K. This results in the secondary 5K and the other half of the FEXes being upgraded and rebooted.
5. Bring all FEX ports back up on the primary 5K.
6. Verify a stabilized and upgraded network.

Hope that helps.

Logan
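P.S. For step 3, the quickest check I know of is the standard vPC role command; just a pointer, so verify the role field against your own output:
show vpc role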
Hi Joris,

You can "minimize" disruption during a non-ISSU upgrade of your two 5Ks as long as the FEXes are dual-homed to the 5Ks via vPC, which it sounds like they are. The process works like this:

1. Upgrade the primary 5K first with the "install all" command. Upgrading the primary will also upgrade the FEXes, but will not reboot them. The primary is rebooted, however. During the reboot, traffic traversing the FEXes will use the vPC link that is still active to the secondary 5K.
2. Once the primary 5K has rebooted, it is on the new code, whereas the FEXes and the secondary 5K are still on the old code. The primary will show the FEXes as offline at this point. This is normal, as the secondary will still see them as online.
3. On the secondary, change your boot variables to the new code bins and copy the running config to the startup config. DO NOT RELOAD YET. Now reload your FEXes one at a time, verifying after each individual FEX reboot that it comes back. Note that once the FEXes are rebooted they are running the new code, so the secondary will show them as offline, whereas the primary can now see them as online.
4. Once all the FEXes have been manually reloaded, reload the secondary 5K. NOTE: you are NOT using the "install all" command here. Just do a simple reload, given that you already changed the boot variables in step 3. Also, do not save the running configuration when prompted during the reload, as it could orphan the FEXes from the secondary (per Cisco).
5. Once the secondary comes back up, everything should be on the new code and everything should be happy. Verify 5K and FEX code versions, FEX statuses, and vPC peer health as part of your cleanup.

Hope that helps.

Logan
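P.S. A rough sketch of step 3 on the secondary 5K (the boot statements go in config mode); the image file names and FEX number are placeholders, so substitute your own:
boot kickstart bootflash:kickstart-image-name.bin
boot system bootflash:system-image-name.bin
copy running-config startup-config
reload fex 101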