We receive numerous reports of slow network performance during the business day. These calls result in Network Analyst time spent looking for a network issue to explain the slowness. We would like to deploy a test solution in which we would advise a client to open a web session with a server that would then initiate some throughput-style tests back to that client, for the sake of documenting the WAN performance for that user. This data would then be compared with known benchmarks for that kind of remote site to better determine whether a network issue or something else might explain the reduction in performance. Has anyone heard of such a system/product?
This has to be one of the most frequent complaints from users ever. There should be a process: when users experience problems, they contact the PC administrator; if he confirms the problem, he relays it to the application administrator; if they don't find anything, it goes to the server admins; and only then should it be relayed to the network people.
That's what we have done. A standard enterprise runs something on the order of a thousand applications, and most of the time the network is not responsible.
As to your question.
1. You can set up a monitoring system for "test transactions" that periodically initiates TCP sessions (on some specific port) and records the response delay. That delay should correlate with the router's output queue length, so you would know whether the link was really loaded. In a high-speed LAN/MAN environment, the same tests can be run against the application front-end servers to see whether the problem is with the application (again, this should be handled by the monitoring people).
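As a minimal sketch of such a test transaction (assuming curl is available; the hostname is a placeholder for one of your application front-end servers), you can time the TCP connect and the total response from cron and log the values over time:

```shell
# Hypothetical probe against a placeholder front-end server.
# time_connect approximates the TCP handshake delay; time_total
# includes the server's response as well.
curl -o /dev/null -s \
  -w "connect: %{time_connect}s  total: %{time_total}s\n" \
  http://appserver.example.com/
```

Graphing the connect time alone is useful: it rises with queueing on the path even when the server itself is healthy.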
2. If you don't have QoS, this could be your problem. Remember, at any given instant a network link is either 0 percent loaded (not transmitting) or 100 percent loaded (transmitting). It's 1 or 0; there is nothing in between. If you see a link load of 30 percent, that is an AVERAGE over some interval (30 seconds, 1 minute, 3 minutes, or 5 minutes), and it tells you nothing about whether there was a huge surge. If someone sends a large email while another person is using a delay-sensitive thin client of some application, the delay-sensitive data will experience delays. I cannot overstate the importance of QoS. It's like a silver bullet.
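To make the averaging point concrete (using a hypothetical 5-minute polling interval), a 30 percent average can hide a long burst of full saturation:

```shell
# A 30% average over a 300-second polling interval is consistent with
# the link having been 100% saturated for 300 * 0.30 = 90 of those
# seconds - plenty of time to ruin a thin-client session.
echo $(( 300 * 30 / 100 ))   # -> 90 (seconds of saturation the average hides)
```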
Hope this helps.
Please rate all helpful posts.
Do you have a server at your remote sites? If so, just install a distribution of ttcp (a command-line utility) at the remote site and on a machine at your site, and send traffic from your site to the remote site or vice versa. If you use Windows, I would recommend pcattcp: http://www.pcausa.com/Utilities/ttcpdown1.htm
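For example (assuming the classic ttcp-style flags; check the documentation for your build), you start a receiver at one end and point a transmitter at it from the other:

```shell
# On the receiving machine (e.g. the server at the remote site):
pcattcp -r

# On a machine at your site (10.0.0.25 is a placeholder for the
# remote receiver's address):
pcattcp -t 10.0.0.25
# The transmitter reports achieved throughput when the transfer completes.
```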
It should tell you the actual throughput, similar to an FTP transfer. If your throughput is good, then you can look at the application, server, etc.
You might also want to investigate what can be done with the Cisco IP SLA feature for monitoring.
For a tool, I use iperf in conjunction with psexec. (As long as you have rights to the end workstations, end users don't need to be involved.)
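A sketch of that workflow (hostnames are placeholders; psexec's -c switch copies the executable to the remote machine, and you may need -u/-p credentials if your current token lacks admin rights there):

```shell
# Copy iperf to the user's workstation and start it in server mode:
psexec \\USER-PC -c iperf.exe -s

# From your machine, run a 30-second throughput test toward it:
iperf -c USER-PC -t 30
```

The user never sees any of this, which keeps the measurement independent of whatever they are doing.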
The other thing you really need to look into developing is NetFlow, one of the best free tools your routers offer (with the exception of those platforms that require a module). Have your routers send that NetFlow data to aggregation software, and you have a wonderful picture of what is happening on your network.
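A minimal classic NetFlow export setup on IOS looks something like this (the interface and collector address/port are placeholders; newer platforms use Flexible NetFlow syntax instead):

```
interface GigabitEthernet0/0
 ip flow ingress
!
ip flow-export version 5
ip flow-export destination 10.1.1.50 2055
```

Once the collector is receiving records, you can answer "what was eating the WAN at 2:15 PM" with actual source/destination/port data instead of guesswork.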