Did you ever find a solution to this problem? I'm in a similar boat. We have 3560 switches all around our plant, many of which are only accessible by scissor lift. True out-of-band management shouldn't require resorting to desperate hacks like using solenoid actuators to push buttons.
We are a mid-sized manufacturing firm operating out of three locations, and we are planning to restructure and renumber our networks to better support automated configuration management and security, and to ease our deployment of IPv6.

Currently, at each site the L3/L2 boundary resides at the network core, but increasing traffic/chatter has us considering moving it to the access layer, which consists of 3560-X units in the wiring closets supporting edge devices either directly or via 8-port 3560-C compact switches in the further reaches of our manufacturing and warehouse spaces. As we contemplate moving to a completely routed network, the big unknown we're struggling with is whether it is safe, or even desirable, to abandon ACL-friendly addressing, and whether doing so would run us into hardware limitations from longer ACLs.

Currently, each of our site-wide VLANs gets a subnet of the form 10.x.y.0/24, where x identifies the site and y identifies the class of equipment connected to that VLAN. This lets us match internal traffic of a given type with a single ACE, irrespective of where the endpoint device resides geographically (see the first sketch below). Moving L3 routing decisions out to the access switches will force us onto smaller prefix assignments, with as many as 8 distinct subnets on each of our standard-issue 3560CG-8PC compact switches. Why so many? We currently have more than 30 ACL-relevant classifications of devices/hosts (a number that will only grow with time), and to maximize the availability of all services, it is our policy to physically distribute edge devices of a given class (e.g. printers, access points) across as many access switches as possible.

From what I can see, we have three options, each of which presents trade-offs in management complexity and address utilization efficiency:

Option 1: Stick with ACL-friendly addressing, for both IPv4 and IPv6, and allocate uniform prefixes to each access switch. For IPv4, within the 10.0.0.0/8 block we would probably allocate 8 bits to the site ID (/16), then 6 bits to the switch ID (/22), then 7 bits to the equipment/host classification (/29). A /29 leaves 3 host bits: eight addresses, six usable after network and broadcast, and five for end devices once the SVI gateway takes one, per class per access switch. For IPv6, assuming we have a /48 block for each site, we would use the first two bits to identify the type of allocation, the following 6 as the switch ID (/56), and the following 8 as the equipment/host classification (/64). The second sketch below shows how this layout keeps single-ACE matching, provided the fields stay bit-aligned.

Option 2: Abandon ACL-friendly addressing and dynamically allocate standard-sized prefixes from a common pool to each VLAN on a given switch. This buys better utilization efficiency and more addresses per VLAN, but at the cost of non-summarizable routing tables and ACLs; even if the hardware can handle that, it means a more complex configuration management system and harder troubleshooting.

Option 3: Do something similar to option 1, but with the L2/L3 boundary at the distribution layer rather than the access layer. I'm disinclined to go this route, as it seems to carry as much, if not more, management complexity as option 1, with only marginal benefits over keeping things the way they are (L2/L3 boundary at the network core).
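To make "ACL-friendly" concrete, here is the sort of consolidation the current scheme gives us. The class number (y = 20 for printers), the print server address, and the port are hypothetical, purely for illustration:

    ! Current scheme: printers live on 10.x.20.0/24 at every site x.
    ! One ACE covers all of site 2's printers:
    access-list 110 permit tcp 10.2.20.0 0.0.0.255 host 10.2.10.5 eq 9100
    !
    ! With a non-contiguous wildcard mask (second octet fully wild),
    ! one ACE even covers the printers at all three sites:
    access-list 110 permit tcp 10.0.20.0 0.255.0.255 host 10.2.10.5 eq 9100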
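And here is why I think Option 1 preserves that property even with routing pushed to the access layer: as long as the site/switch/class fields stay bit-aligned, the switch-ID bits can simply be left wild in the mask. Same hypothetical class 20 and server as above:

    ! Option 1 layout: 10.SSSSSSSS.WWWWWWCC.CCCCCHHH
    !   S = site (8 bits), W = switch ID (6), C = class (7), H = host (3)
    ! Class 20 = 0010100 binary, so octet 3 ends in 00 and octet 4
    ! starts with 10100; e.g. switch 9 at site 2 holds 10.2.36.160/29.
    !
    ! One ACE covers class 20 on every access switch at site 2
    ! (wildcard 0.0.252.7 leaves the switch and host bits wild):
    access-list 120 permit tcp 10.2.0.160 0.0.252.7 host 10.2.10.5 eq 9100
    !
    ! ...and one ACE covers it at all sites, by wildcarding octet 2 too:
    access-list 120 permit tcp 10.0.0.160 0.255.252.7 host 10.2.10.5 eq 9100

Since the 3560 family evaluates ACLs in TCAM, my understanding is that a non-contiguous mask occupies a single entry just like a contiguous one, so Option 1 shouldn't inflate the ACE count at all; it's Option 2 that risks a per-subnet explosion of entries.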
Thoughts? What issues have we neglected to consider?

No matter which approach we select, it can be assumed that we will be building a system to track all of these prefix assignments, provision switches, and manage their configurations. As for routing protocols, we would probably be looking at OSPFv2/v3. It can also be assumed that where legacy devices require direct L2 connectivity to one another, we already have ways of bridging their traffic through external devices, so as far as this discussion is concerned, they aren't an issue.

Thanks in advance for your ideas!

-Aaron
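P.S. For concreteness, the per-access-switch OSPFv2 setup I'm picturing under Option 1 looks roughly like the sketch below. VLAN and interface numbers, addresses, and the area assignment are illustrative only, and we'd still have to verify OSPF image/license support on each of our 3560 variants:

    ! Hypothetical routed access switch: site 2, switch ID 9, carrying
    ! the class-20 /29 from the earlier example.
    ip routing
    !
    interface GigabitEthernet0/1
     description routed uplink to distribution
     no switchport
     ! p2p /30s would need their own carve-out in the addressing plan
     ip address 10.2.255.1 255.255.255.252
    !
    interface Vlan120
     description class-20 devices (printers), 10.2.36.160/29
     ip address 10.2.36.161 255.255.255.248
    !
    router ospf 1
     router-id 10.2.36.161
     ! edge SVIs have no OSPF neighbors, so suppress hellos by default
     passive-interface default
     no passive-interface GigabitEthernet0/1
     ! one statement enables OSPF on every site-2 interface of this switch
     network 10.2.0.0 0.0.255.255 area 2
     ! stub area (totally stubby at the ABR) keeps the routing table to
     ! little more than the local /29s and a default route
     area 2 stub

The OSPFv3 side would be the analogous "ipv6 router ospf" process with per-SVI area statements. The appeal of the stub design is that it largely sidesteps the routing-table half of the hardware question: each access switch carries its own connected prefixes plus a default, no matter how unsummarizable the rest of the site's routes become.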