Hi,
New here, so forgive me if I've not posted this in the correct spot or if it has been asked before (couldn't find it anywhere).
I have an IPsec tunnel configured with a FortiGate 201E at the local end and a Cisco Meraki MX appliance at the other end.
At the other end we have frequent ISP dropouts (another issue we are working to fix), but the link usually comes back up quite quickly.
The problem for us is that when the link drops, the tunnel obviously drops too. The link usually comes back within a minute or so, and I can see the tunnel coming back online on the FortiGate, but no traffic passes through it. I have to manually take the tunnel down on the FortiGate; it then immediately comes back up and traffic starts passing again.
I can't for the life of me work out why traffic does not resume when the tunnel reconnects.
Any ideas?
Thanks
Alex
Hey Alex199,
Without getting into logs and debugs, it sounds like an SA mismatch between the devices when the link flaps: one side holds on to an old SA while the other expects a new one.
Do you have Dead Peer Detection (DPD) configured inside Phase 1 on the FortiGate? If not, try setting it to "On-Demand", which may help recover the session. If you want to get really crazy, you could create an automation stitch that sends a trigger to another box, which in turn makes API calls to reset the tunnel... But try DPD first if it's not already set. :)
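If it helps, the phase1 side would look roughly like this (the tunnel name "to-meraki" and the timers are just examples):

config vpn ipsec phase1-interface
    edit "to-meraki"
        # Probe the peer only when there is outbound traffic but nothing coming back
        set dpd on-demand
        # Example timers: retry every 5 seconds, declare the peer dead after 3 misses
        set dpd-retryinterval 5
        set dpd-retrycount 3
    next
end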
Hope this helps,
Sean (Gr@ve_Rose)
Site: https://tcpdump101.com
Twitter: https://twitter.com/Grave_Rose
Reddit: https://reddit.com/r/tcpdump101
In the tunnel's phase1 (or maybe phase2, I can't recall) settings, you should be able to 'set auto-negotiate enable' to bring the tunnel up when both sides see each other again.
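For reference, it lives under phase2; a minimal sketch (the phase2 name is a placeholder):

config vpn ipsec phase2-interface
    edit "to-meraki-p2"
        # Negotiate the phase2 SA as soon as phase1 is up and keep renegotiating it,
        # instead of waiting for interesting traffic
        set auto-negotiate enable
    next
end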
Bob - self-proclaimed posting junkie!
See my FortiGate-related scripts at: http://fortigate.camerabob.com
Hi Bob,
Autonegotiate is already enabled. Unfortunately that isn't helping us either!
Thanks
Alex
Just an update on this.
We've added a backup service on the Meraki side, with an additional tunnel on the FortiGate side.
On the FortiGate, we have given the backup tunnel's route a higher administrative distance; it monitors the primary and takes over when the primary fails.
Now when the primary comes back up, traffic fails back seamlessly.
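Roughly what that looks like on the routing side (the remote subnet and tunnel names are examples; only the relative distances matter):

config router static
    edit 0
        # Primary path to the remote Meraki network
        set dst 192.168.100.0 255.255.255.0
        set device "primary-tunnel"
        set distance 10
    next
    edit 0
        # Backup path: higher distance, so it is only installed
        # in the routing table when the primary route disappears
        set dst 192.168.100.0 255.255.255.0
        set device "backup-tunnel"
        set distance 20
    next
end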
I am not sure why it wasn't working before, but everything is working as expected now.
Thanks for the help guys.
Hi Sean,
Thanks for the response. We do have Dead Peer Detection set to On-Demand at the moment, but it doesn't seem to help. After doing a bit of reading on the SA side of things, this could definitely be the issue. I'll need to investigate a bit further and see what happens when the link goes down.
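In case it's useful to anyone else, a minimal set of commands for watching the SAs while the link flaps (the phase1 name is a placeholder):

# Show IKE (phase1) and IPsec (phase2) SA state for the tunnel
diagnose vpn ike gateway list name to-meraki
diagnose vpn tunnel list name to-meraki

# Real-time IKE debug while the link goes down and comes back
diagnose debug application ike -1
diagnose debug enable

# Turn it all off again afterwards
diagnose debug disable
diagnose debug reset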
Thanks
Alex
DPD and auto-negotiation both live within IPsec itself.
I encountered similar issues: the tunnel was still there, or came back as soon as the link was online again, but no traffic passed.
What solved it here was turning on NAT-T on the tunnel. This sends keepalives at the IP layer that your tunnel traffic flows over.
Since I enabled NAT-T, the issue is gone...
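That's a phase1 setting; something like this (the tunnel name is a placeholder):

config vpn ipsec phase1-interface
    edit "to-meraki"
        # Enable NAT traversal (UDP/4500 encapsulation);
        # "forced" would encapsulate even without a NAT device in the path
        set nattraversal enable
        # NAT-T keepalive interval in seconds
        set keepalive 10
    next
end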
--
"It is a mistake to think you can solve any major problems just with potatoes." - Douglas Adams
FWIW:
For all others encountering this issue, there is an explanation and an easy fix.
When a tunnel drops, its route is dropped as well, along with all affected sessions. Consequently, outgoing traffic to the remote private network is sent out along the default route, usually through the WAN interface.
That alone is not especially bad; the next router will drop traffic to RFC 1918 private networks.
But the FGT will establish a session for it, as there is a valid policy from LAN to WAN with destination ALL.
Now when the tunnel comes back up, there is already a current session, which has to time out before a new session through the tunnel can be established. This causes a major delay in the data flow.
There is a fix for this:
Create blackhole routes for traffic to the RFC 1918 subnets: 192.168.0.0/16, 172.16.0.0/12 and 10.0.0.0/8. These blackhole routes need a distance of 254 (not 255!) so that they only kick in when no better route is available. While the tunnel is down the blackhole route is used and traffic is discarded; NO session is established.
When the tunnel comes up again, a new session can be built right away, without any delay.
I posted that 4 years ago, along with a batch command file to download. Just import it (System > Advanced > batch...) to create the blackhole routes. This will not harm existing routes at all, as they are the least attractive routes of all: [link]https://forum.fortinet.com/FindPost/120872[/link]
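A minimal version of those routes, covering just the RFC 1918 ranges (entry IDs are auto-assigned by 'edit 0'):

config router static
    edit 0
        set dst 10.0.0.0 255.0.0.0
        set blackhole enable
        # 254 so any real route, including the tunnel route, always wins
        set distance 254
    next
    edit 0
        set dst 172.16.0.0 255.240.0.0
        set blackhole enable
        set distance 254
    next
    edit 0
        set dst 192.168.0.0 255.255.0.0
        set blackhole enable
        set distance 254
    next
end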
Scary, but I remember the bogon posts...
Bob - self-proclaimed posting junkie!
See my FortiGate-related scripts at: http://fortigate.camerabob.com
Awesome, thanks Ede, we'll do some testing with this and report back!