|Email Plug-in (Reporting):||1.1.0-114|
|Email Plug-in (Encryption):||1.2.1-118|
I'm having an issue sending e-mails to the domain capitaliii.com from my IronPort. I see the following message in the IronPort log.
"DNS Temporary Failure capitaliii.com MX - unable to reach nameserver on any valid IP"
When I issue an nslookup on the MX record from the IronPort, I get the message below.
Temporary query error: "unable to reach nameserver on any valid IP" looking up A record for "capitaliii.com" to nameserver
I can, however, do a successful nslookup from my workstation. See the results below. Any ideas? I've done
a flushdns on the IronPort to no avail.
capitaliii.com MX preference = 10, mail exchanger = mail1.digipark.com
mail1.digipark.com internet address = 18.104.22.168
This may just be an issue with the DNS servers your IronPort is configured to query. Are you using the root DNS servers (the default setting), or have you configured your appliance to query internal servers?
Check whether the firewall in front of the IronPort appliance allows TCP for DNS on port 53. The query results I get are close to the 512-byte limit, where DNS switches from UDP to TCP:
cisco$ dig mx capitaliii.com +trace
; <<>> DiG 9.6-ESV-R4-P3 <<>> mx capitaliii.com +trace
;; global options: +cmd
. 274955 IN NS f.root-servers.net.
. 274955 IN NS g.root-servers.net.
. 274955 IN NS h.root-servers.net.
. 274955 IN NS i.root-servers.net.
. 274955 IN NS j.root-servers.net.
. 274955 IN NS k.root-servers.net.
. 274955 IN NS l.root-servers.net.
. 274955 IN NS m.root-servers.net.
. 274955 IN NS a.root-servers.net.
. 274955 IN NS b.root-servers.net.
. 274955 IN NS c.root-servers.net.
. 274955 IN NS d.root-servers.net.
. 274955 IN NS e.root-servers.net.
;; Received 260 bytes from xxx.xxx.xxx.xxx in 189 ms
com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
com. 172800 IN NS d.gtld-servers.net.
com. 172800 IN NS e.gtld-servers.net.
com. 172800 IN NS f.gtld-servers.net.
com. 172800 IN NS g.gtld-servers.net.
com. 172800 IN NS h.gtld-servers.net.
com. 172800 IN NS i.gtld-servers.net.
com. 172800 IN NS j.gtld-servers.net.
com. 172800 IN NS k.gtld-servers.net.
com. 172800 IN NS l.gtld-servers.net.
com. 172800 IN NS m.gtld-servers.net.
;; Received 504 bytes from 22.214.171.124#53(a.root-servers.net) in 23 ms
capitaliii.com. 172800 IN NS ns4.digipark.com.
capitaliii.com. 172800 IN NS ns3.digipark.com.
;; Received 109 bytes from 126.96.36.199#53(m.gtld-servers.net) in 50 ms
capitaliii.com. 86400 IN MX 10 mail1.digipark.com.
capitaliii.com. 86400 IN NS ns3.digipark.com.
capitaliii.com. 86400 IN NS ns4.digipark.com.
;; Received 147 bytes from 188.8.131.52#53(ns4.digipark.com) in 148 ms
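To make the 512-byte point concrete, here is a rough Python sketch (illustrative only, not the IronPort's actual resolver code) of how a classic stub resolver decides to retry over TCP: a plain-UDP response is capped at 512 bytes, and anything larger comes back truncated with the TC bit set, which triggers a TCP retry that silently fails if the firewall only permits UDP/53.

```python
import struct

UDP_PAYLOAD_LIMIT = 512  # classic DNS limit without EDNS0 (RFC 1035)

def tc_bit_set(response: bytes) -> bool:
    """Return True if the DNS header's TC (truncated) flag is set."""
    flags = struct.unpack_from("!H", response, 2)[0]  # bytes 2-3 hold the flags
    return bool(flags & 0x0200)  # TC is the 0x0200 bit of the flags word

def needs_tcp_retry(response: bytes) -> bool:
    """A stub resolver falls back to TCP/53 when the server truncated its answer."""
    return tc_bit_set(response)

# Fabricated 12-byte headers for illustration: flags 0x8200 = QR+TC set,
# flags 0x8000 = QR only (complete answer).
truncated = struct.pack("!6H", 0x1234, 0x8200, 1, 0, 0, 0)
complete = struct.pack("!6H", 0x1234, 0x8000, 1, 1, 0, 0)

print(needs_tcp_retry(truncated))  # True  -> resolver retries over TCP port 53
print(needs_tcp_retry(complete))   # False -> the UDP answer is used as-is
```

If the firewall drops the TCP retry, the resolver sees the same "no usable answer" symptom the log above reports.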
Hope that helps,
I've made the change below on my ASA, but that doesn't seem to have helped.
inspect dns maximum-length 4096
I had a similar problem; it was an issue on our border routers. Someone had installed a bogon list but not kept it updated, so the router was dropping the traffic...
Do you have connectivity to 184.108.40.206?
Yes, it has finally been resolved. I had to change my IronPort DNS settings to NOT use the Internet's root DNS servers. If I use my internal DNS servers or Google's DNS servers, it works.
So I'm on the other end: it seems anyone with an IronPort has intermittent MX query failures to our domain, Hitchcock.org.
We've had a few critical partners set up SMTP routes to our mail servers or, as noted here, have the IronPort query their internal DNS server instead of following the Internet root-server referrals. But not everyone is willing or able to change their configuration.
Has anyone figured out what the root cause is here? Nslookups from these domains succeed; only the IronPort seems to fail to get a reply from our DNS server. Network captures show the MX query going out but no response ever coming back. Many of the A-record queries fail as well, though they seem to get responses more often. PTR queries always seem to get replies.
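For anyone else staring at captures: the MX query on the wire is just a 12-byte header plus the question section with qtype 15, so you can filter for exactly those bytes. A minimal sketch of building such a packet (query names here are examples, not anyone's real traffic):

```python
import struct

QTYPE_MX = 15   # MX record type (RFC 1035); A = 1, PTR = 12
QCLASS_IN = 1   # Internet class

def build_dns_query(name: str, qtype: int, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query: 12-byte header plus one question."""
    # flags 0x0100 = recursion desired; counts: 1 question, 0 answers/auth/additional
    header = struct.pack("!6H", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"  # root label terminates the name
    return header + qname + struct.pack("!HH", qtype, QCLASS_IN)

query = build_dns_query("hitchcock.org", QTYPE_MX)
print(len(query))  # 31: 12-byte header + 15-byte name + 4-byte qtype/qclass
```

The last four bytes of the question (qtype/qclass) are what distinguish the failing MX queries from the PTR queries that do get answered.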
Sounds like there is some strange name-server issue for this domain.
When I run your domain through the MXToolbox DNS Check, it says it cannot find an authoritative name server: it finds a name server, but it does not appear to be authoritative. I would check with your domain registrar to make sure the correct name servers are selected, and review your name-server configuration with them. It is likely not just an IronPort/Cisco ESA issue.
Here is the output I get for hitchcock.org from MXToolbox DNS Check. Notice all the entries say Non-Auth.
0 a2.org.afilias-nst.info 220.127.116.11 NON-AUTH 78 ms Received 2 Referrals , rcode=NO_ERROR NS dns2.hitchcock.org,NS alfred.hitchcock.org,
1 alfred.hitchcock.org 18.104.22.168 NON-AUTH 78 ms Timeout after 3 sec, rcode=NO_ERROR
1 dns2.hitchcock.org 22.214.171.124 NON-AUTH 78 ms Timeout after 3 sec, rcode=NO_ERROR
RESOLVED. In our case, it turned out our ISP had implemented some kind of DNS greylisting policy, in which DNS packets between [client IP, server IP] pairs were blocked unless there had been a preceding attempt within the last 60 seconds (see http://en.wikipedia.org/wiki/Greylisting ).
IronPorts have unusually long DNS retry intervals and very rarely made the retry window. The timeline of a failed query to our domain looked like this:
t=0s: IronPort tries primary DNS server, receives no response. Greylisting retry window opens for [client, primary].
t=15s: IronPort tries secondary DNS server, receives no response. Greylisting retry window opens for [client, secondary].
t=60s: Greylisting retry window to primary DNS server expires.
t=75s: IronPort retries primary DNS server and receives no response. Greylisting retry window to secondary DNS server expires.
t=90s: IronPort retries secondary DNS server and receives no response.
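The timeline above can be reduced to a tiny simulation. Assuming a 60-second greylist window and the retry timings described here, it shows why the IronPort's ~75-second retry spacing never lands inside the window while dig's 2-second retry does:

```python
GREYLIST_WINDOW = 60.0  # seconds; the ISP policy described above

def simulate(attempt_times):
    """Return, per attempt, whether the greylist lets the packet through.

    A packet for a [client, server] pair passes only if a prior attempt
    for that pair happened within the last GREYLIST_WINDOW seconds; every
    attempt, passed or blocked, (re)opens the window.
    """
    passed, last_attempt = [], None
    for t in attempt_times:
        ok = last_attempt is not None and (t - last_attempt) <= GREYLIST_WINDOW
        passed.append(ok)
        last_attempt = t
    return passed

# IronPort hits the primary server at t=0 and t=75: the window closed at t=60.
print(simulate([0, 75]))  # [False, False] -> never answered
# dig retries after ~2 seconds, well inside the window.
print(simulate([0, 2]))   # [False, True]  -> second try succeeds
```

Every IronPort retry arrives just after the previous window has expired, so the pair is permanently greylisted, which matches the capture showing queries leaving and no replies ever returning.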
Note that nslookup and dig typically retry after 2 seconds, and I believe that by default dig won't even show you the timeouts. So this behavior was hidden from our troubleshooting efforts until a thorough Cisco engineer figured it out. I was then able to duplicate the timeouts from several locations and determined that our ISP's name server was also affected. Traceroutes from those locations showed no upstream networks in common that might be blocking the queries, so the fault had to be with our ISP.