Support Forum
adrienbcn
New Contributor

IPsec with overlapping subnets

Hello, I am currently moving our whole datacenter from one location to another. I have set up an IPsec tunnel between these two locations, up and running. My problem is the following: all my servers point to two DC/DNS servers (let's say 192.168.1.1 for the DC and 192.168.1.2 for DNS) that are internal and still running in Datacenter1 for now. The problem is that the servers we have already migrated to Datacenter2 can't reach the DC/DNS, because they're on the same network (I guess?).

I added a static route on the FortiGate to route these two IPs through the IPsec tunnel. From the firewall in Datacenter2, I can ping the DC and DNS. From the firewall in Datacenter1, I am able to ping the servers I already migrated. BUT the servers themselves can't reach each other.

What could I do to get this running without using virtual IPs? The problem is that if I use virtual IPs, I would need to change the DNS setting on every server to point to the virtual IP instead of the real one, I guess? And I can't really afford that. I tried using NAT on the outbound rules on both firewalls but it doesn't seem to work, and a traceroute doesn't reach anything, so it's hard to say. Any advice on how to accomplish this?
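For reference, the static routes I added on the Datacenter2 FortiGate look roughly like this (the tunnel interface name "DC1_TUNNEL" is a placeholder for whatever your IPsec interface is actually called):

config router static
    edit 0
        set dst 192.168.1.1 255.255.255.255
        set device "DC1_TUNNEL"
    next
    edit 0
        set dst 192.168.1.2 255.255.255.255
        set device "DC1_TUNNEL"
    next
end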
9 REPLIES
emnoc
Esteemed Contributor III

Could you SNAT the new datacenter at location #2, and define that in your IPsec tunnel proxy-IDs? This way all devices in DC#2 are masked behind a new, unique address. But I think you know that you need to build VIPs to mask the overlapping space; there is no way around this other than some tricks with NATing. Ideally you should have carved out new subnets at DC#2, but I understand you're picking up and migrating. The next problem I see: since you're trying to keep the same addresses, when a host at DC#2 tries to reach a host at DC#1, it's going to think that host is local. This is why you need some NATing. Just my 2 cts.
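A rough sketch of the SNAT side (the pool range, interface, and object names are only examples):

config firewall ippool
    edit "DC2_MASK"
        set startip 10.10.0.1
        set endip 10.10.0.254
    next
end
config firewall policy
    edit 0
        set srcintf "internal"
        set dstintf "DC1_TUNNEL"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat enable
        set ippool enable
        set poolname "DC2_MASK"
    next
end

The proxy-IDs on both ends then have to cover the pool range (here 10.10.0.0/24) instead of the overlapping 192.168.1.0/24.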

PCNSE NSE StrongSwan
ede_pfau
SuperUser

emnoc states the reason why you will have to use VIPs (destination NAT) and not IP pools (source NAT): with a VIP, the (local) FortiGate will proxy-ARP for the remote host, so an ARP broadcast will succeed. If you only source NAT in DC#2, local broadcasts will remain local and never cross the tunnel. Just an idea to save some work: what if you leave the DNS setting on the migrated servers as .1.2, create a VIP .1.2 -> .1.200, and assign .1.200 to your DNS server (as a secondary address)? This way you could keep the old DNS setting, keep the DNS primary address, and still have DNS traffic cross the tunnel. Just my 2 ct, on the fly.
Ede Kernel panic: Aiee, killing interrupt handler!
adrienbcn

Ah yes, thanks, this could be a good idea! If I do that, do I need to define this VIP on both firewalls, or just on Firewall1 (where the DC is still located)? And then I guess I would need to define a VIP in a different subnet than the one currently used, right? (In your message you said .1.200, so I guess you're saying to put it on the same subnet. If so, I don't really see how this fixes my issue?)
adrienbcn
New Contributor

Thanks for your answer. I tried to do that, but with no luck so far. Example: my DC in Datacenter1 has IP 192.168.1.1. My server in Datacenter2 has IP 192.168.1.2 and has its DNS/DC pointing to 192.168.1.1. On my firewall in Datacenter2, I added a new VIP 10.10.0.2 pointing to 192.168.1.2. I also added an IP pool of 10.10.0.2. Then on my policy, I enabled NAT and set it to use 10.10.0.2 as the NAT IP. Is that the correct way to do it? I could do it with SNAT, I guess; I only need my servers in Datacenter2 to be able to reach the DC until everything gets migrated. There is no need for the servers in Datacenter1 to access the Datacenter2 servers.
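In CLI terms, what I tried on the Datacenter2 firewall looks roughly like this (object names are just examples, syntax from memory):

config firewall vip
    edit "VIP_DC2_SRV"
        set extintf "SERVERS_Interface"
        set extip 10.10.0.2
        set mappedip "192.168.1.2"
    next
end
config firewall ippool
    edit "POOL_DC2_SRV"
        set startip 10.10.0.2
        set endip 10.10.0.2
    next
end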
ede_pfau
SuperUser

You would be employing DNAT on the FGT in DC#2!

External IF: internal
Mapped-to IF: tunnel2DC1
External IP: 192.168.1.2
Mapped-to IP: 192.168.1.200 (yes, identical subnet, BUT on the tunnel interface)

And in DC#1:

DNS primary IP: 192.168.1.2
Secondary IP: 192.168.1.200

where .1.200 is any arbitrary unused IP address from the SAME subnet. Reason: this subnet already gets transported across the tunnel if the QM selectors are set right. There is no need to configure anything (any NAT) on the FGT of DC#1. The FGT will send the reply traffic down the tunnel back to DC#2 because of the proxy IDs used. Just give it a try.
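A minimal CLI sketch of that VIP and the policy that uses it, assuming your interfaces are really named "internal" and "tunnel2DC1":

config firewall vip
    edit "VIP_DNS_DC1"
        set extintf "internal"
        set extip 192.168.1.2
        set mappedip "192.168.1.200"
    next
end
config firewall policy
    edit 0
        set srcintf "internal"
        set dstintf "tunnel2DC1"
        set srcaddr "all"
        set dstaddr "VIP_DNS_DC1"
        set action accept
        set schedule "always"
        set service "ALL"
    next
end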
Ede Kernel panic: Aiee, killing interrupt handler!
adrienbcn
New Contributor

OK, so this is what I've done so far:

- Added a virtual IP on Fortigate2 as you said: External interface: SERVERS_Interface (internal), External IP: 192.168.1.1, Mapped to: 192.168.1.200
- Added a static route for 192.168.1.200 to pass through the IPsec tunnel
- Added 192.168.1.200 as a secondary IP on my server in Datacenter1

From the FortiGate in Datacenter2 I am able to ping both IPs of the Datacenter1 server. But from my server in Datacenter2 I am still unable to ping servers in Datacenter1.

This is what I get in the debug flow when I try to ping my DNS in Datacenter1 from Datacenter2 (192.168.1.20 is my test server located in Datacenter2, where I am pinging from):

Fortigate2:
id=36871 trace_id=477 msg="vd-root received a packet(proto=1, 192.168.1.20:512->192.168.1.1:8) from OLD_SRV-NET."
id=36871 trace_id=477 msg="Find an existing session, id-00fc9ed5, original direction"
id=36871 trace_id=477 msg="enter fast path"
id=36871 trace_id=477 msg="DNAT 192.168.1.1:8->192.168.1.200:512"
id=36871 trace_id=477 msg="enter IPsec interface-OCC_IPSEC_SDB"
id=36871 trace_id=477 msg="encrypted, and send to [FORTIGATE1_ISP_PUBLIC_IP] with source [FORTIGATE2_ISP_PUBLIC_IP]"
id=36871 trace_id=477 msg="send to [FORTIGATE2_ISP_PUBLIC_IP] via intf-wan2"

Fortigate1:
id=13 trace_id=233 msg="vd-root received a packet(proto=1, 192.168.1.20:512->192.168.1.200:8) from IPSEC_SDB."
id=13 trace_id=233 msg="allocate a new session-0e8fa28c"
id=13 trace_id=233 msg="find a route: gw-192.168.1.200 via ONE-OLD_SRV-Net"
id=13 trace_id=233 msg="use addr/intf hash, len=2"
id=13 trace_id=233 msg="Allowed by Policy-320:"

In my IPsec definition, I had created a phase 2 without any quick mode selectors. I then tried to add another phase 2 with source IP 192.168.1.1 and destination 192.168.1.20 on my Fortigate1, and the opposite on Fortigate2 (source 192.168.1.20, dest 192.168.1.1). But no luck.
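For completeness, the extra phase 2 I added on Fortigate2 looks roughly like this (the phase 1 name is taken from the debug output above; exact syntax may differ by firmware):

config vpn ipsec phase2-interface
    edit "OCC_IPSEC_SDB_P2"
        set phase1name "OCC_IPSEC_SDB"
        set src-subnet 192.168.1.20 255.255.255.255
        set dst-subnet 192.168.1.1 255.255.255.255
    next
end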
ede_pfau
SuperUser

Have you noticed that you translate .1.2 but ping .1.1?
Ede Kernel panic: Aiee, killing interrupt handler!
adrienbcn
New Contributor

Sorry, that was a typo. I am mapping 192.168.1.1 (which is the domain controller I need to reach and which is still located in Datacenter1, while some of my servers have already moved to Datacenter2).
rickards
New Contributor

Hi. Another option besides NAT is proxy ARP; there is a KB article here: http://kb.fortinet.com/kb/microsites/search.do?cmd=displayKC&docType=kc&externalId=12017&sliceId=1&docTypeID=DT_KCARTICLE_1_1&dialogID=60519146&stateId=0%200%2060517437 The example is for an old version, but it still applies on newer firmware.
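A minimal sketch, assuming you want the Datacenter2 FortiGate to answer ARP for the DC's address on the server-facing interface (interface name is a placeholder; see the KB article for your firmware's exact syntax):

config system proxy-arp
    edit 1
        set interface "SERVERS_Interface"
        set ip 192.168.1.1
    next
end

Combined with the static route over the tunnel, the local servers then resolve 192.168.1.1 to the FortiGate's MAC, and the traffic gets routed across the tunnel instead of staying local.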