After evacuating the data from the device, I performed a factory reset and then started over with Link Aggregation set to "Balance-alb", and everything is working now! Not sure why it wouldn't work after having already been configured before. Glad to have it working though!
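For what it's worth, the QNAP firmware appears to use the standard Linux bonding driver under the hood (an assumption on my part, but the mode names match the driver's), so if you SSH into the NAS you can confirm which mode is actually active. The interface name `bond0` is the usual default; adjust if yours differs:

```
# Show the active bonding mode on the NAS
grep -i "bonding mode" /proc/net/bonding/bond0
# In the Linux bonding driver, Balance-alb reports "adaptive load balancing",
# while the 802.3ad setting reports "IEEE 802.3ad Dynamic link aggregation"
```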
All,

As the subject specifies, I have a pair of devices on a local network that I would like to play nicely together.

My problem: if the TS-459 is configured for Standalone (no Link Aggregation), then I can access the NAS via ping, the web interface, NFS, etc. However, if I enable Link Aggregation, I lose TCP/IP connectivity to it and must use QNAP's Finder utility to change the Link Aggregation mode back to Standalone.

What I'm asking is: has anyone here configured a QNAP NAS appliance with Link Aggregation enabled against an SG300 or similar? If so, could you kindly provide a walk-through of the appropriate settings on each side?

My first attempts on the NAS side were the "Balance-alb" (automatic load balance) and "Balance-tlb" (adaptive transmit load balance) modes. The QNAP_Turbo_NAS_User_manual_V3.4_ENG.pdf states that these two modes are for "General Switches", so I assumed they should work; unfortunately, no luck for me.

Next, I tried the IEEE 802.3ad / Link Aggregation setting on the NAS, and while the device was rebooting I configured the two gigabit ports on the SG300 as a LAG with LACP enabled. Unfortunately, this did not work as I had hoped either.

I called QNAP support, and of course they don't have an SG300-10 to test with, but they will test their TS-459 Pro II to confirm that the Link Aggregation settings function as advertised. I hope to hear back from them later this evening. In the meantime, if anyone here has insight, it would be very much appreciated.

The NAS and SG300 are for a home lab consisting of a pair of dual six-core systems with 48GB of memory each, running vSphere.

NAS details:
Model: QNAP TS-459 Pro II
Firmware: 3.4.4 (0718T)
Drives: Seagate 2TB x4

Switch details:
Model: Cisco SG300-10
Firmware version (active image): 220.127.116.11

Thank you,
Burke
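For anyone following along with the 802.3ad attempt: the switch-side setup I tried looked roughly like this in the SG300's CLI. This is a sketch from memory, so the port range and LAG number reflect my cabling rather than anything required; on these Small Business switches, `mode auto` means LACP negotiation, while `mode on` would create a static (non-LACP) LAG:

```
! Bundle the two NAS-facing ports into LAG 1 with LACP negotiation
configure
interface range gi1-2
 channel-group 1 mode auto
exit
! VLAN membership and other L2 settings then apply to the Port-channel
interface Port-channel 1
 no shutdown
exit
```

Note that for the "Balance-alb"/"Balance-tlb" modes the manual calls "General Switches" modes, my understanding is that no LAG should be configured on the switch at all; the switch ports stay as two ordinary standalone ports.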