Described like this, it sounds easy to find a hash algorithm that splits 600 IP addresses into two equal-size groups.
But in practice it is quite complicated.
First, when we designed the ACE code, we didn't know how many IP addresses, or which ones, would be used.
Moreover, this information changes with every customer of ours.
So we made a generic algorithm which works well most of the time.
But this algorithm can't guarantee equal load balancing: the split it produces depends entirely on the set of addresses it actually sees.
If you need equal load on your caches, you need to switch to leastconn or roundrobin.
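To make the caveat concrete, here is a toy sketch (not the actual ACE hash, whose details I'm not reproducing here): even a reasonable-looking hash over the source address splits a given set of clients into groups whose sizes depend on the addresses themselves, so an exact 300/300 split of 600 clients is not guaranteed.

```python
import random

def bucket(ip: str, n_caches: int = 2) -> int:
    # Toy hash: sum the octets and take the remainder.
    # Real predictors use more elaborate hashes, but the same caveat
    # applies: the resulting split depends on the input address set.
    return sum(int(octet) for octet in ip.split(".")) % n_caches

# 600 hypothetical client addresses (random sample for illustration).
random.seed(1)
ips = ["10.0.%d.%d" % (random.randrange(256), random.randrange(256))
       for _ in range(600)]

counts = [0, 0]
for ip in ips:
    counts[bucket(ip)] += 1
print(counts)  # roughly balanced, but rarely exactly 300/300
```

With a different address population (say, clients behind a few NAT gateways), the imbalance can be much worse, which is why leastconn or roundrobin is the safer choice when equal load matters.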
Finally, I don't see the need to use hash address source here.
Usually, when using ACE with caches, we use either hash url (if we want to make sure an object exists on only one cache, which saves disk space) or roundrobin/leastconn to spread the load equally across the caches.
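The idea behind hash url can be sketched as follows (a hypothetical illustration, not ACE's actual implementation): because the cache is chosen deterministically from the URL, every request for a given object lands on the same cache, so the object is stored only once across the farm.

```python
import hashlib

def cache_for(url: str, caches: list) -> str:
    # Derive a stable integer from the URL, then pick a cache from it.
    # The same URL always maps to the same cache, so each object is
    # stored on exactly one cache, saving disk space across the farm.
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return caches[h % len(caches)]

caches = ["cache1", "cache2"]  # hypothetical two-cache farm
# Repeated requests for one object always hit the same cache:
assert cache_for("http://example.com/logo.png", caches) == \
       cache_for("http://example.com/logo.png", caches)
```

The trade-off is the one described above: the per-cache load follows the popularity of the URLs, not an even split, whereas roundrobin/leastconn equalize load but may duplicate objects on several caches.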
Regards,
Gilles.