I have 2 field switches connecting to 1 core switch in the headend.
The first field switch has devices on 192.168.130.0/23, the second field switch has devices on 192.168.132.0/23. Both will go back to the one core switch in the headend via fiber.
None of the devices in the field have a gateway programmed into them.
I can use a mixture of whatever I need to, but it was suggested to use one VLAN for the first group and another VLAN for the second group. However, I get lost on how you can tie these together.
How can I accomplish this type of set-up? In the end, I need to be able to ping all ranges of devices (192.168.130, 131, 132, and 133 ranges) from a couple ports on each core for monitoring devices.
It seems to me that there are at least 2 significant aspects to what you describe: 1) configuration and operation of the switches, and 2) configuration and operation of the connected devices in the field.
1) The suggestion that the 2 networks in the field be supported by 2 VLANs, with a core L3 device providing inter-VLAN routing, is the typical approach in modern networks. But I think your situation is quite different, and I would suggest that it may be easier to get the results you want if all of the field network is in a single VLAN. The main issue is that the traditional approach, with 2 VLANs and inter-VLAN routing at the core, assumes that the field devices have a configured default gateway, and that when they want to communicate with a "remote" address they will forward the packet to that gateway. But you have told us that the field devices do not have a default gateway. If you configure the field as a single VLAN then (at least from the perspective of the switches) everything is local, and there is no need for inter-VLAN routing and so no need for a gateway.
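A minimal sketch of what the single-VLAN approach might look like on one of the field access switches. The VLAN ID, names, and interface ranges here are assumptions for illustration, not values from this thread:

```
! Field access switch - single VLAN for all cameras (hypothetical VLAN 100)
vlan 100
 name CAMERAS
!
! Camera-facing PoE access ports all land in the same VLAN/broadcast domain
interface range GigabitEthernet1/0/1 - 24
 switchport mode access
 switchport access vlan 100
!
! Fiber uplink toward the core, trunking the camera VLAN
interface GigabitEthernet1/1/1
 switchport mode trunk
 switchport trunk allowed vlan 100
```

With every switch carrying the same VLAN, no SVI or routing is required for camera-to-server traffic; the whole field is one broadcast domain.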
2) We do not know what type of devices you have in the field. And therefore we do not know what their behavior will be when they do not have a configured gateway. This may be significant for a couple of reasons:
- What is local vs what is remote? Some IP stacks have a built-in (effectively classful) assumption that 192.168.130.0 and 192.168.131.0 are different networks and will treat one as remote, while other stacks process the /23 mask and treat them as the same (local) network. Which behavior will your devices have? In this context I would suggest that it might be better if your network used the private address range 172.16.0.0 rather than 192.168.0.0.
- How do you get to something that is remote? Some IP stacks depend on a default gateway - they will ARP for local addresses and forward packets for remote addresses to the gateway. If there is no configured gateway, such a stack may declare that remote networks are unreachable. Other IP stacks are not necessarily dependent on a default gateway - with no gateway configured they may simply ARP for every address they want to communicate with.
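The local-vs-remote question above comes down to how a stack applies the /23 mask, which can be checked with a quick sketch (the network addresses come from the thread; the specific host addresses are arbitrary examples):

```python
import ipaddress

# A camera configured with an address in 192.168.130.x/23 lives on this network
cam_net = ipaddress.ip_network("192.168.130.0/23")

# A stack that honors the /23 mask sees 192.168.131.x as local...
print(ipaddress.ip_address("192.168.131.9") in cam_net)   # True

# ...but 192.168.132.x falls outside the /23, so it is "remote",
# and with no default gateway the stack's behavior is the open question
print(ipaddress.ip_address("192.168.132.9") in cam_net)   # False
```
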
It seems to me that this last option is your best chance to have the network work. If every device will ARP for every destination address (and if the network is all in the same VLAN) then it should work as you want. If the devices will not ARP for remote addresses, then depending on how they recognize remote addresses you have significant challenges in getting this to work.
Thank you for the quick reply. Since you seem to understand what I'm getting at, let me expand to help you see the bigger picture, and then you can let me know what you suggest to move forward with.
The field devices are all IP based security cameras (powered by PoE). There are about 600 devices in all.
In the field, you have a max of 24 of those cameras going to a Cisco C3650 that is in the field. There may be one, or many, of these same switches in the cabinet, depending on the density of the area for security coverage.
Each "field device switch/Cisco C3650" then goes back to the headend via Fiber.
Once in the headend, there will be 3x C3850s in all to be the "core". I have the ability to stack them, but haven't found a way to make it work without stacking them, so I'm taking baby steps!
From the three core switches, there will be servers acting as the network video recorders (NVRs, DVRs), about 17 of them in total.
As stated earlier, we were instructed not to use default gateways on the devices (IP cameras), so making this work some way without having to go back to all 600 devices would be amazing.
This is a physically separated network, all these switches in the field and headend are strictly for this newly created security camera network. We can pretty much do whatever we want to get this going.
What do you think? Thank you again for assisting!
Is this maybe possible with inter-VLAN routing? Since all this traffic will be local, and there is no router in the picture? Servers (recorders) are just plugged into the core to capture all the video feeds.
Just doing some thinking out loud here.
Thank you for the additional information. I have several comments in response.
- I am not clear about the choice of 192.168.130.0, etc. Did you choose that? Did the vendor choose that? I continue to believe that some of the challenges you face would be easier if the addressing used 172.16.0.0 (or even 10.0.0.0).
- I continue to believe that on the network side (switch configuration etc) that it would be better to configure this as a single vlan (single broadcast domain) in which any device can communicate with any other device as locally connected. This provides maximum flexibility. I know that this is contrary to most of the current conventional wisdom which would suggest separate vlans. But the specification that the devices not have a default gateway also runs contrary to current conventional wisdom.
- I am not clear about the relationship between servers and cameras. Does a particular camera always report to the same server? Or is the relationship dynamic? How is that relationship established? Does the camera initiate a connection to a server? Does the server poll the camera and the camera responds? Is it something else? If the relationship between camera and server is predictable then having the camera and server in the same /24 could address some of the issues mentioned in my previous response.
- I believe that stacking the 3850s would be advantageous. I do not have advice about how to do that. But I believe that treating the core as a single device (rather than as several cooperative devices) would be beneficial.
- Being able to treat this as a stand-alone network does in some respects make it easier to implement. There are no questions about how to communicate with anything that is actually remote. And it reduces any concerns about security from outside devices.
I tried to communicate something in my previous response that I am not sure came through clearly. So let me try again: we can implement the networking with the access switches and the core switches, etc and have that all working just fine. But ultimately the question is about the behavior of the IP stack in the cameras. If they do not conform to some assumptions which I have made (especially about arping for any destination rather than any local destination) then this implementation may not work.
Do the cameras and the servers come from the same vendor? Or from different vendors? I am assuming that the suggestion to not configure a default gateway came from the camera vendor. If so they might have suggestions about how to implement the networking.
I apologize for any miscommunication on my behalf. Here are responses to your post.
1. The 192.168.130, 192.168.131, 192.168.132, and 192.168.133 were all requested by the customer.
2. I agree, a single vlan may make this a lot easier. Is it as simple as re-configuring the vlan on all switches to have a different IP/Subnet, or is there more to it?
3. I believe the plan on the customer side is to have a certain camera record on a certain server, to better allow mass storage of video captured. This leads me to believe the same camera will always talk to the same server, e.g. Cameras 1 thru 10 will always talk to Server 1, while another group such as Cameras 11 thru 20 will always talk to Server 2.
4. Thank you for the tip, as soon as we can come to a resolution here, I will explore what it will take to get that going.
5. I'm not too certain on how to answer the behavior of the IP stack in the cameras. As far as I am able to tell you, the Recorder is programmed to communicate with the camera via the Static IP address we put into the system.
6. The cameras are Axis Communications, and the customer is utilizing Dell servers (likely PowerEdges and Vaults) with recording software by DVTel / FLIR. The direction for the IP addresses, along with the subnet, and the advice not to utilize a gateway, came from the customer. I'm assuming they have knowledge we do not, and they are not sharing! :)
Thanks for the information. It does seem like the customer knows things that they have not shared, and possibly has something in mind that they have not communicated to you. You describe this as a stand alone, physically separate network and it would be nice if that were true. But I wonder about that. As I think about it I wonder about how code upgrades would be done. And I wonder what will be done with the output from the cameras that is collected on the servers. Surely someone is going to want to access that information? And I wonder if something happens with a camera (or a server) and they want to troubleshoot - how will they access it?
Let me try to explain what I am suggesting from a different perspective. The most significant part of this network design is that the cameras do not have a default gateway. This means that when a camera wants to communicate with anything, it must ARP for the destination address. The safest way to enable the camera to ARP for any address is to put all addresses in the same broadcast domain (a single VLAN is a single broadcast domain), since the ARP request is sent as a broadcast.
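For what it's worth, the subnet mask alone cannot make all four ranges local to each other under the customer's addressing: no aligned /22 covers 192.168.130.0 through 192.168.133.255, so re-addressing the cameras (which this thread is trying to avoid) would need at least a /21 mask. A quick sketch, assuming nothing beyond basic subnet arithmetic:

```python
import ipaddress

ranges = [ipaddress.ip_network(f"192.168.{n}.0/24") for n in (130, 131, 132, 133)]

# 192.168.130.0/22 is not a valid aligned network (it has host bits set)...
try:
    ipaddress.ip_network("192.168.130.0/22")
except ValueError:
    print("130.0/22 is not an aligned subnet")

# ...and the aligned /22s split the four ranges across two networks,
# so the smallest single prefix covering all of them is 192.168.128.0/21
supernet = ipaddress.ip_network("192.168.128.0/21")
print(all(r.subnet_of(supernet) for r in ranges))  # True
```

This is why the single broadcast domain plus ARP-for-everything behavior matters: it works regardless of what mask the cameras apply.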
Excellent feedback, again, thank you very much. When it comes to accessing the servers holding the storage, these servers will have additional NICs that will be on a separate physical network that is a managed "corporate" LAN, if you will. Utilizing that network they will be able to use the iDRAC and Remote Desktop features to gain access to the computers and manipulate from there.
I agree with your thoughts on making everything on the same broadcast domain, having a single broadcast domain, single vlan, type of set-up. Would you be able to assist me in creating such a set-up?
Your explanation about servers with multiple NICs does answer my wondering about access to the servers. So the camera network really will be separated. That does simplify in some ways what you need to do. The suggestion of 2 /23 IP networks does suggest that they may have something in mind that we do not know about yet - it implies implementing 2 VLANs and routing between the networks/VLANs. In a couple of your posts you have asked about inter-VLAN routing. My perspective is that multiple VLANs with routing between them increases the risk of cameras not being able to communicate (routing more or less implies default gateways, and the cameras do not have gateways) and does not seem to offer any offsetting advantages.
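On the core side, the same single-VLAN layout might look something like the sketch below - trunks down to the field switches, access ports for the recorders and monitoring stations, and an optional SVI so the core itself is reachable. The VLAN ID, interface names, and the SVI address are assumptions for illustration, and a monitoring station would still need addressing (or stack behavior) that lets it reach both /23 ranges:

```
! Core 3850 - same single camera VLAN everywhere (hypothetical VLAN 100)
vlan 100
 name CAMERAS
!
! Fiber links down to the field C3650s
interface range TenGigabitEthernet1/1/1 - 4
 switchport mode trunk
 switchport trunk allowed vlan 100
!
! Recorder (NVR) and monitoring ports are plain access ports in the same VLAN
interface range GigabitEthernet1/0/1 - 20
 switchport mode access
 switchport access vlan 100
!
! Optional SVI for in-band management of the core (address is an assumption)
interface Vlan100
 ip address 192.168.130.250 255.255.254.0
```

Note there is deliberately no routing configured here: with everything in one VLAN, camera-to-server traffic is switched at L2 and never needs a gateway.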
What kind of assistance do you have in mind?