SMTP cluster

securahosting
Beginner

Hi all,

How can I set up failover for SMTP in a cluster?

So far I have two machines in a cluster, each with an interface of the same name, "smtp" (different IPs).

In the Listeners settings (cluster mode), I set up an SMTP listener. There is a dropdown list to pick an interface, and I can see an interface called "smtp"; I assume this is because I gave the same name to the interfaces on both machines.

Is this the right way to set up an SMTP server across a cluster, and do these settings give me failover?

Regards,

RW

8 Replies

sudeepsharma
Beginner

Hi John,

Cluster mode on the IronPort has only one advantage: if you make changes to one IronPort in the cluster, the changes are replicated to the others. For failover, however, you need the help of DNS, specifically the MX records. By this I mean that if the server behind one MX record fails, the other MX takes the emails.

An IronPort cluster is different from the computer clusters we know, which run services like file/print or Exchange, where if one server fails the other takes over the load completely.
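The MX-based failover described above can be sketched as follows. This is a minimal illustration (the hostnames, preference values, and `fake_connect` helper are made up for the example), showing how a sending mail server sorts MX records by preference and falls back to the next host when a connection fails:

```python
# Minimal sketch of MX-record failover as performed by a sending MTA.
# Hostnames and preference values below are hypothetical examples.

def pick_delivery_order(mx_records):
    """Sort MX records by preference: the lowest value is tried first."""
    return [host for pref, host in sorted(mx_records)]

def deliver(mx_records, connect):
    """Try each MX host in preference order; fall back on failure."""
    for host in pick_delivery_order(mx_records):
        try:
            return connect(host)  # e.g. open an SMTP session on port 25
        except ConnectionError:
            continue  # this MX is unreachable -> try the next one
    raise RuntimeError("no MX host accepted the connection")

# Example: the primary (preference 10) is down, so the secondary
# (preference 20) takes the mail.
mx = [(20, "ironport2.example.com"), (10, "ironport1.example.com")]

def fake_connect(host):
    if host == "ironport1.example.com":
        raise ConnectionError("primary appliance unreachable")
    return f"delivered via {host}"

print(deliver(mx, fake_connect))  # -> delivered via ironport2.example.com
```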

I hope that answers your question.

thanks

SS

(Sudeep Sharma)

Hello John,

Sudeep is correct in his reply. The Cisco IronPort Email Security Appliance acts as a standalone device in terms of handling email traffic. The 'clustering' is just for configuration purposes, not for mail handling (which is why the associated feature key is called 'Centralized Management').

If you want an active/passive or active/active solution, you would either need a physical load balancer or adjust your MX record priorities as Sudeep suggested, so that sending mail servers connect to your appliances as required.

Hope this helps. If not, please let us know.

Thanks and regards,

Martin Eppler

Cisco IronPort Customer Support

I would like to add an additional question if I may. Is it possible to "cluster" two appliances and use MX failover with different external listener IP addressing? Basically, to get a type of "automatic" failover if I have an extended circuit outage.

Thanks.

Chris

Hello Christopher,

MX failover is one option (e.g. having one DNS MX record with priority 10 and another with priority 50). If the first appliance is not responsive for a given sending server, it will then try the one with the lower priority (i.e. the higher preference value). However, you have to ensure that the other appliance is really not responding at all for this to work (e.g. by shutting down the Listeners or blocking port 25 in your firewall).
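As an illustration of the firewall approach, a rule like the following would make an appliance appear fully unresponsive on SMTP, so senders fall back to the next MX. This is only a sketch assuming a Linux/iptables firewall in front of the appliance; the address is a placeholder, and the chain and interface must be adapted to your environment:

```shell
# Drop inbound SMTP to the failed appliance so that sending servers
# time out and fall back to the next MX record.
# 198.51.100.10 is a placeholder for the appliance's listener IP.
iptables -A FORWARD -d 198.51.100.10 -p tcp --dport 25 -j DROP
```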

The Centralized Management feature (a.k.a. 'Clustering') just ensures that all appliances in the cluster use the same configuration (except for settings made at machine level).

Regards,

Martin

Martin,

Thank you for the reply. I think you touched on the answer I am looking for in the second part. I am aware of how MX prioritization works and am more curious about the configuration sharing. So let me rephrase my question: if I have two IronPorts with the networking configuration below, will I be able to share the configuration without modifying the external interface IP settings?

ISP A: IronPort C360 #1
  External IP - 205.36.x.12
  Internal IP - 192.168.1.2

ISP B: IronPort C360 #2
  External IP - 86.54.x.10
  Internal IP - 192.168.1.3

External MX records:
  MX 10  205.36.x.12
  MX 20  86.54.x.10
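In DNS zone-file terms, that layout would look roughly like the fragment below. This is a sketch with placeholder names (the masked octets are kept as "x"); note that MX records must point to hostnames that resolve via A records, not directly to IP addresses:

```
; Hypothetical zone fragment for example.com (placeholder names/IPs).
ironport1.example.com.  IN A   205.36.x.12
ironport2.example.com.  IN A   86.54.x.10
example.com.            IN MX  10 ironport1.example.com.
example.com.            IN MX  20 ironport2.example.com.
```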

Basically, this would allow me to provide redundancy should ISP A have an issue with my circuit, by then routing mail to my second C360 sitting on the other circuit provided by ISP B.

"Clustering" or really, sharing of configs, of the IronPorts would not break this design, correct?

Thanks,

Chris

Hello Christopher,

> Basically, this would allow for me to provide redundancy should ISP A have an issue with my circuit by then routing mail to my second C360 sitting on the other circuit provided by ISP B.

This is correct. I would even dare to say that this setup is a very common one across our customers for redundancy purposes.

> "Clustering" or really, sharing of configs, of the IronPorts would

> not break this design, correct?

Yes, this is also correct. The IP Interfaces configuration always lives at machine level and is not shared in the cluster configuration (to avoid having the same IP across multiple appliances). However, it is mandatory that the IP Interface name is the same across all appliances in the cluster. Failing to do so would result in appliances not being able to receive messages, as the Listener configuration would not find the configured IP Interface and the Listener would therefore have to be disabled.

As a rule of thumb regarding cluster configuration: everything that has to be unique in a network (such as IP Interface addresses, hostnames, etc.) is not shared in the cluster configuration (this is called 'machine level'); all other configuration (e.g. Listeners, Incoming/Outgoing Mail Policies, log subscriptions, HAT, RAT, etc.) can be shared across the cluster (this is then considered 'group level' or 'cluster level' configuration).

I hope this answers your question. If not, please let me know so that I can provide you with the answers you're looking for :-)

Regards,

Martin

Martin,

Thank you, that does help. Is it documented anywhere what exactly resides at the machine vs. cluster level? I'm looking in the advanced configuration guide but can't find it in detail.

If you have a pointer, that would be great.

Chris

Hello Christopher,

As far as I know this is not documented (though I agree it should be). Would it be possible for you to raise a support request with us so that we can formalize this as a documentation change request?

Thanks and regards,

Martin
