I have a problem with a Cisco 3560E-24TD switch.
My system has 22 IP cameras on one 3560E; each camera's stream uses 2 Mbps of bandwidth. The system has not been working well: the camera video is choppy and keeps stuttering. I tried pinging all the cameras from my PC and saw heavy packet loss, and the round-trip times were very high.
But when I decreased the bandwidth of all the cameras to 1 Mbps, the video signal was better. I tried pinging again and the times were very good.
What is causing this issue? Is it related to the bandwidth or throughput of the 3560E switch?
To start the analysis we need to know the following:
- 3560 configuration
- if IP cameras do any DSCP marking
- where are you pinging from? Did you connect the PC to the same 3560 switch to ping? If not, please explain the path (Layer 2 and Layer 3) between the PC and the cameras
- where do the cameras send their video stream? Is the collector server connected to the same 3560 switch? If not, what is the topology between the server and the IP cameras?
The thing is that if your PC and the collector server are connected to the same switch, then we need to consider the 3560 switch performance. But if video is sent through some other devices or even a WAN link, then the bottleneck can be there as well, and we need to check that first.
Please provide me with the information above and I can share with you some basic guidelines on further analysis.
Good questions, and not that simple to answer, as it depends on many factors. The buffer limit comes into the picture when the speed of the wire/port is not enough to transmit all the traffic that is supposed to go out of it. In that case the port starts queuing (queueing strategies differ widely, so I will not cover them here and will take first-in, first-out queueing as the example). Traffic is then stored in the port buffer. If so much traffic is going out that the buffer is exhausted, the excess is simply dropped.
Thus the size of the buffer does matter. Another thing is that several ports may share the same buffer pool, so the most aggressive port may eat the buffers meant for the rest of the ports and make them starve. And so on.
The above applies to HW buffers, which are pre-built for a particular line card. The queue-limit is sometimes different from the HW buffers: it specifies the amount of traffic that needs to be handled by the CPU and queued while the CPU is busy with other processes. Most current platforms can use their HW resources to handle traffic, but in some cases traffic needs to be sent to the CPU for a decision (common examples are traffic sent to the switch management interface or traffic with TTL=1, etc.). If traffic needs to be sent to the CPU and the CPU is busy, it is stored in a SW queue whose size is limited by the queue-limit. Again, if that limit is reached, all excess traffic is dropped.
So the buffer limit tells you how much traffic you can store before drops start when your media/CPU is busy. The bigger the buffer/limit, the more traffic you can store. That is not always good, as it adds latency to packet handling, and not all kinds of traffic are fine with that. But now we are getting into QoS, which is a different topic.
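If you want to see whether this kind of buffer exhaustion is actually happening, the drop counters are the first place to look. A minimal sketch for a 3560-class switch (the interface name is a placeholder, and exact command availability depends on the IOS release and on whether mls qos is enabled):

```
! Output drops on a specific port indicate the egress buffer/queue
! filled up and excess traffic was discarded
Switch# show interfaces gigabitethernet0/1 | include drops

! With "mls qos" enabled, per-queue enqueue/drop statistics
! are also available per interface
Switch# show mls qos interface gigabitethernet0/1 statistics
```

If the output-drop counters keep incrementing while the link is far from line rate, shared-buffer starvation between ports is a likely suspect.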
Sorry for the confusing answer, but I hope I answered your question, or at least gave you some food for further questions.
I have a question regarding the management port on the SUP7-E for the C4503-E. It seems that this port, "fastethernet1", cannot be brought to the UP status when connected to another Catalyst switch (3500 or 2960, in access VLAN 1) for out-of-band management. The LED never turns green or orange when connected, even after many attempts at tuning MDIX or speed/duplex.
When this port is connected to a PC, or even to a line card of the same or another C4503, it comes up immediately.
Do you have any clue ?
Thank you very much
In earlier IOS versions the management port was used only for disaster recovery of the switch, e.g. from ROMMON, and thus it does not come up when connected to other switches. See the limitations section for the SUP7-E:
The supervisor engine front-panel management port (FastEthernet1 interface) is not supported.
As far as I know they included it in the latest release available on CCO. I will check on it later.
Does the 3560-X support IPv6 VRF-lite? Every time I try to configure "address-family ipv6", I get "IPv6 VRF not supported for this platform or this template".
IPv6 in a VRF is not currently supported on the 3750/3560 switches, although you can run IPv6 in the global routing table. According to the product managers, there is no planned support for this feature yet on the 3K switches.
For the 4500, IPv6 features are supported in Supervisor Engine 6-E only; you can check this on the following link:
For the 6500 switches, I was checking the available IPv6 features; one of them is "MPLS VPN - VRF CLI for IPv4 & IPv6 VPNs". You can check this on the following link:
After IOS version 12.2(33)SXI, the feature "IPv6 unicast forwarding (vrf-lite IPv6)" is supported.
I hope this information is useful for you, please let me know if you have any questions or doubts.
How can I see the total traffic consumption in real time on 3750 and 2960 switches?
For example, I need this number to find out whether my switch is overloaded (throughput).
As these platforms are capable of forwarding traffic in HW, I would start by checking the TCAM to see whether it is reaching its limits:
show platform tcam utilization
For backplane and interface utilization you can use following commands
- Switch#show controllers utilization: displays bandwidth utilization on the switch or on specific ports.
- Switch#show switch stack-ring activity detail: displays the number of frames per stack member that are sent to the stack ring. (This command was introduced in IOS version 12.2SE.)
Other useful commands for interface statistics:
show interfaces counters
show interfaces summary
show interfaces stats
Please keep in mind that the result or availability of each command may depend on the particular switch or IOS.
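To make those rate counters usable in near real time, you can shorten the load averaging interval, which defaults to 300 seconds. A minimal sketch (the interface name is a placeholder):

```
! Shorten the averaging window so the displayed input/output
! rates react faster to traffic changes
interface GigabitEthernet1/0/1
 load-interval 30
!
! Then watch the 30-second rates and the txload/rxload figures
Switch# show interfaces GigabitEthernet1/0/1 | include rate|load
```

Comparing the displayed bit rates against the port speed gives a quick per-port utilization figure.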
The 3750-X switch does not support the REP protocol, right? So what is the closest thing I can use in its place?
REP is included in the upcoming 15.0(2)SE release for the 3750-X. AFAIK this release will be available by the end of Q3 CY12. At the moment REP is supported on the ME platforms (e.g. 3750-ME, ME-3400, ME-3800).
I have a stack of 3750-X switches running IOS 15.0.x. Switch 2 is the master. If I do a:
show controllers ethernet-controller port-asic statistics
I only see results for switch 2.
1. Is there any way to see the statistics for the other switches? Appending "switch 1" to that command returns "remote statistics not currently supported".
2. For switch 2 I see results for port ASICs 0-2 on a WS-C3750X-48P. How are those ASICs allocated across the ports?
For your Q1:
By default you are connected to the master switch. To see the details for another peer in the stack, you can switch to its console using "session #" (e.g. session 1). You will then be in the switch 1 console and able to run the necessary commands.
You can also try running those commands from the main console by prepending them with "remote command #" (e.g. remote command 1 show controllers ethernet-controller port-asic statistics).
You can find the port-to-ASIC mapping with any of the commands below:
show platform pm if-numbers
show platform pm platform-block