AJWatsonLCT
Beginner

IPv4 Internal to IPv6 External

I can't seem to find any info online about methodologies for translating IPv4 private addresses to IPv6 global addresses (essentially making your ISP-side IPv6-only and leaving your internal network IPv4-only). All the articles I can find seem to be focused on the exact opposite.

Assuming a Cisco ASA 5512 is used as the gateway for all internal IPv4 networks and that ASA peers via BGP to the ISP with IPv6-only, what methodology / configuration would be used to accomplish this?

Example:

ServerA-192.168.1.100 ==> Cisco ASA Inside-192.168.1.1 ==> Cisco ASA static NATs 192.168.1.100 to the IPv6 address for internet communication-2620:ABCD:1234:A:1::100

Is this possible?


5 REPLIES
Ashwin Kotha
Beginner

Hi,

This document explains exactly your scenario of V4 to V6 NAT and the router's configuration as well.

http://www.cisco.com/c/en/us/support/docs/ip/network-address-translation-nat/113275-nat-ptv6.html

Let me know if this helps.

Thanks,

Ash


While this document helps me understand the IOS configuration aspect, it leaves one question unanswered: what do I do on the host NIC?

If:

- A host has 192.168.1.100 with the gateway of the ASA at 192.168.1.1

- That host is configured to look for DNS through Google at 8.8.8.8

- The host tries to browse to www.cisco.com

Then:

- How is the host going to get the IPv6 address of cisco.com?

I understand that DNS64 is a thing that exists, but the documents that cover it don't make much sense to me.

James Leinweber
Enthusiast

The IETF demoted RFC-2766 NAT-PT (the main NAT46 proposal) to historic status - meaning the technology exists, but should be avoided on production networks - in RFC-4966 a decade ago.  In general, the theory is to add native IPv6 with global-scope addresses and routing in parallel with whatever legacy IPv4 you have, instead of translating.  The reason all the articles focus on the opposite translation is the exhaustion of IPv4 and the rise of v6-only networks such as LTE4 cell phones.  It's certainly possible to NAT IPv4 onto IPv6 in limited circumstances, but in addition to the usual NAT issues cited in RFC-4966, NAT-PT can't scale to ISP size ("carrier-grade") due to the inability to get the lifetime of the DNS46 fake A records right.

Your main ASA technology for doing NAT46 is the post-8.3 firmware's "twice NAT": make the source-address network object IPv4 and the mapped one IPv6.
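As a minimal sketch of that twice-NAT approach for the example in the original post (object names are invented, the IPv6 side uses the 2001:db8 documentation prefix, and the exact syntax should be checked against your ASA version):

```
! Hypothetical objects; names and addresses are illustrative only
object network SERVER_A_V4
 host 192.168.1.100
object network SERVER_A_V6
 host 2001:db8:1234:a:1::100
!
! Twice NAT: statically translate the v4 source to its v6 mapping
! between the inside and outside interfaces
nat (inside,outside) source static SERVER_A_V4 SERVER_A_V6
```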

-- Jim Leinweber, WI State Lab of Hygiene

So would it be safe to say that if an existing business wants to move their ISP network to IPv6, they should dual-stack their internal network as a best practice?

Another question that comes to mind is:

Let's say a business is running 100 percent IPv6. They have a web server that hosts www.mycompany.com. There is only an AAAA record for www.mycompany.com because it does not have an IPv4 address. How would someone from another company, running 100 percent IPv4, successfully resolve and browse to www.mycompany.com?

So would it be safe to say that if an existing business wants to move their ISP network to IPv6, they should dual-stack their internal network as a best practice?

Yes, eventually. The first thing you dual-stack is your network monitoring infrastructure, since the mere presence of dual-stack clients (all modern OS's and smartphones: Windows since Vista, OS X, Linux, iOS, Android, ...), plus the default preference for v6 over v4, plus the existence of tunneling technologies such as Teredo, 6to4, and ISATAP, opens you up to v6-malware attacks on supposedly v4-only networks.  It helps if your firewalls block protocol 41 (an IPv6 payload inside a v4 packet) and 3544/udp (the default Teredo server contact point).  The motto for production networks should be "native or nothing - no tunneling".  At layer 2 you would like your switch ports to block ethertype 0x86dd (IPv6) on the v4-only VLANs, and at layer 3 you want to block rogue DHCPv6 and ICMPv6 router advertisements on the v6-enabled VLANs, similarly to the way you block rogue v4 DHCP now.
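For the firewall part of that advice, a sketch in IOS-style extended ACL syntax (the ACL number is arbitrary and your platform's syntax may differ; this only shows the two drops described above):

```
! Deny 6in4/6to4 tunnels (IP protocol 41) and Teredo (udp/3544),
! then fall through to whatever you would otherwise permit
access-list 199 deny 41 any any
access-list 199 deny udp any any eq 3544
access-list 199 permit ip any any
```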

The second thing you have to dual stack is the layer 2 & 3 networking, so routers & switches and firewalls.  Otherwise no v6 packets go anywhere. 

Since over 50% of mobile web access in the USA is already on v6-only transport, if you have mobile customers you will get better performance, better analytics, and better geolocation if your DMZ services are dual-stacked at the border.  Since all the modern server OS's and web servers (Apache, IIS, nginx, ...) have long since been dual-stacked, the main obstacle is any cookie processing, log-file analysis, and IP-address storage you are doing; all of that has to be upgraded to cope with 128-bit v6 addresses.  This is comparable to the Y2K problem, but smaller.

As a rule of thumb, figure 6 weeks to dual-stack desktop LANs, 6 months to dual-stack DMZ services, and 6 years to eradicate v4 from a datacenter.  Facebook has actually done the latter, by the way - they ran out of v4 addresses to number their datacenter racks with.  Since there are very few if any v6-only services at present, dual-stacking desktops is currently optional.  It's easy enough, and makes you look very up to date, but provides little actual benefit.  Provided your firewall, DNS, and load balancers are dual-stacked on the front end, it doesn't matter if some of your backend connections from app servers to DB servers linger on v4 for a while longer.  Yahoo and others did it that way for World IPv6 Day in 2012.

Let's say a business is running 100 percent IPv6...

They have a problem, because the legacy v4-internet clients won't be able to reach them.  The v4-only clients won't query DNS for AAAA records, and if they got them, no packets could be created or routed to such a v6 destination.  Businesses currently need to be dual-stack, so that the mostly-disjoint v4-only and v6-only separate chunks of our dual-protocol internet can both reach them.

The v4-only businesses have an additional, different problem: Tier 1 internet backbone providers can cut their router TCAM usage by about 75% if they go v6-only, since even though v6 is routed down to /48, with entries twice the size of v4 entries routed down to /24, v6 aggregates about 7x better.  Once planetary backbone traffic hits 99% v6 - it's currently averaging about 15% - there will be a very strong incentive to ditch v4 routing.  At that point legacy v4 will have to tunnel over a v6-only backbone, the way the original R&D 6bone had to tunnel over v4, or as current LTE4 cellular data already does.  There was a very small economic incentive to be the first to deploy v6, but there is a very large economic incentive to ditch v4, so the end of v4 as a globally routed protocol is going to be much more sudden than the v4 luddites who aren't thinking about this problem expect.  Cisco and AT&T were on record projecting that to happen as early as 2020, but that's probably 2-3 years sooner than will actually transpire.  When I give talks about IPv6 I predict that the last islands of v4-only traffic in, say, HVAC monitoring, will linger until 2036 in spite of the loss of backbone routing.  The reasoning there is that the classic unix 32-bit time rollover and the v4-only 32-bit address problem will combine to make people convert just before the 2038 time deadline.

[A v4 only host tries to use external DNS, beyond a NAT46 translator]

A monoprotocol host needs more than a simple protocol translator.  A v4-only host can't make use of DNS AAAA v6 addresses; it has nowhere to put them when creating packets.  Ditto for a v6-only host faced with a DNS A record.  So a NAT46 or NAT64 gateway is useless unless it is tightly coupled with a DNS46 or DNS64 translator, which will cache the external v6 or v4 address and provide an inside lie to the client in its native protocol.  When the NAT gateway receives a packet going to the fake address, it has to implement the same mapping that the DNS side lied to the client about.  This is feasible for DNS64+NAT64, since the address space of v6 is so large that a complete copy of the entire v4 internet is easily embedded in it somewhere.  The converse problem is insoluble at ISP scale, which is one reason NAT-PT has fallen out of favor.  The issue isn't using Google for DNS; you can query v6 AAAA records from 8.8.8.8 or wherever just fine over a v4 transport (and vice versa); the issue is that mono-stack clients can't talk to both sides of the internet.  But unless you built your network out of baling wire and chewing gum, all enterprise-grade equipment sold in this century should be able to dual-stack (consumer grade is a different story, e.g. the DSL folks didn't even write v6 standards until late 2010), and even completely obsolete operating systems like Windows XP can do IPv6 if you feed them DNS over v4 and don't insist on DHCPv6.
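To make the DNS64 "inside lie" concrete, here is a small Python sketch of the address arithmetic: a DNS64 resolver fabricates an AAAA answer by embedding the real v4 address in the NAT64 well-known prefix 64:ff9b::/96 (per RFC 6052), and the NAT64 gateway inverts the same mapping. The function names are mine, for illustration.

```python
import ipaddress

# NAT64 well-known prefix from RFC 6052
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(v4addr: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the /96 prefix,
    the way a DNS64 resolver fabricates an AAAA answer."""
    v4 = int(ipaddress.IPv4Address(v4addr))
    return ipaddress.IPv6Address(int(WKP.network_address) | v4)

def extract_v4(v6addr: str) -> ipaddress.IPv4Address:
    """The NAT64 gateway's inverse mapping: recover the real v4
    destination from the synthesized v6 address."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6addr)) & 0xFFFFFFFF)

print(synthesize_aaaa("192.0.2.33"))    # 64:ff9b::c000:221
print(extract_v4("64:ff9b::c000:221"))  # 192.0.2.33
```

This direction works because 32 bits of v4 fit trivially inside 128 bits of v6; the DNS46 converse has no room to embed a 128-bit v6 address in a fake 32-bit A record, which is exactly the scaling problem described above.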

The current trend in DNS64/NAT64 is toward 464xlat, where a dual-stack client with v6-only transport gets two IPv6 prefixes routed to it, one to use for native traffic and the other to use with tunneled v4-over-v6 traffic.  This is what LTE4 android cellphones do, since obsolete v4-only apps tend to balk at running on a host with only a v6-address.

Your basic IP routing goal at present, where global-scope IPv4 addresses are scarce and becoming expensive to transfer, is to dual-stack with native IPv6 ASAP and have no more than one layer of NAT44 between an inside client and an external server.  ISPs are increasingly running "carrier-grade" NAT44 devices, and double NAT ("NAT444") breaks way more stuff than single NAT.  This is more of an issue outside of North America, provided you aren't on LTE4 as your transport.  The USA has about 5 IPv4 addresses per capita, whereas Africa has about 0.1 and China about 0.3.  So either you do NAT44 at your border, or the ISP does it at its border, but not both.  Chinese clients may be suffering from as many as 3 layers of NAT44 translation, which makes their sluggish uptake of IPv6 a bit surprising.

-- Jim Leinweber, WI State Lab of Hygiene, wearing his IPv6 evangelist hat :-)
