Hi all,
Sorry to bother you, but I am currently experiencing issues with an HA pair of FortiGate 50E units running v6.2.12 build1319 (GA) firmware.
For some reason both units work flawlessly in standalone mode, with one unit active at a time and no issues with the site-to-site VPNs to Azure. However, when the units are installed in Active-Active or Active-Passive HA mode, the site-to-site VPN is dropped and cannot be established again.
I have performed some diagnostics and I seem to be getting IKEv2 Phase 1 failures, but I can't work out why this would only happen in HA mode, as the settings entered are the same as those in the Fortinet 6.2 cookbook for setting up the Azure VPN.
I've made sure all the firewall policies and routes are present for traffic to flow, and when in standalone mode I can confirm I am able to access the Azure resources without issue.
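For reference, my Phase 1 follows the cookbook pattern along these lines (a sketch only; VPNAME, the interface, and the proposal values below are illustrative placeholders rather than my exact settings):
config vpn ipsec phase1-interface
    edit "VPNAME"
        set interface "wan1"
        set ike-version 2
        set peertype any
        set proposal aes256-sha256
        set dhgrp 2
        set remote-gw <Azure VPN gateway public IP>
        set psksecret <pre-shared key>
    next
end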
Has anyone seen this issue before?
Hi everyone,
After some trials and tribulations I have finally resolved this issue.
Out of hours, I spent some time regularly swapping the firewalls over to see whether one of the units was defective. As it turned out, one of the units could not connect to the VPN after initiation, even in standalone mode.
I attempted a factory reset of the configuration and restored the working configuration from unit 1 to unit 2, but to no avail, which led me to believe unit 2 had a fault and that the issue was not with the firmware or HA but rather with unit 2 itself. I had even tried to re-load the firmware via the web GUI, but the unit had to be manually rebooted to bring it back online, leading me to believe there was a flashing issue somewhere.
As a last-ditch effort, today I borrowed a serial cable, formatted the boot device on the defective unit, and re-loaded the latest firmware from Fortinet via TFTP; as luck would have it, this seems to have resolved the issue.
As it stands, the unit has now been running for 4 hours without issue and without the VPN dropping.
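For anyone who hits the same problem, the console procedure was roughly as follows (the exact BIOS menu options can vary between models and BIOS versions, so treat this as a sketch):
1. Connect a serial cable to the console port and reboot the unit.
2. Press any key when prompted to enter the configuration menu.
3. Choose [F]: Format boot device.
4. Choose [G]: Get firmware image from TFTP server, then enter the TFTP server IP, a local IP on the same subnet, and the firmware image filename.
5. Save the image as the default firmware and let the unit boot.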
Hello,
Could you please clarify whether the issue is triggered after an HA fail-over, or whether the tunnel generally doesn't work in HA mode?
Hi there,
It just generally doesn't work in HA mode, regardless of failover. In standalone mode it connects fine, but when the second FortiGate is connected in Active-Active or Active-Passive the VPN will not come online.
Hello,
In the case of HA, only one HA unit will establish the IPsec tunnel; the standby unit will not establish a second IPsec tunnel unless there is a fail-over. Could you please clarify whether the IPsec tunnel is not established at all, or whether it is only the secondary HA unit that is not establishing it?
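As a quick sanity check, you could run the following standard FortiOS commands on the primary unit (substitute your actual phase 1 name for VPNAME):
get system ha status
diagnose vpn tunnel list name VPNAME
The first shows which unit currently holds the primary role; the second shows whether the named tunnel has negotiated any SAs.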
Hi,
IPsec tunnels are not established by either unit after configuring HA, so no traffic is being sent or received until I disable HA and remove the second unit.
Hello,
I would recommend collecting IKE debug traces on the spoke side and then trying to bring the tunnel up:
diagnose debug application ike -1
diagnose debug enable
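If the unit carries other tunnels, the IKE debug can also be filtered to the Azure peer (6.2 syntax; the peer address is a placeholder):
diagnose vpn ike log-filter dst-addr4 <Azure public IP>
When finished, switch debugging off again with:
diagnose debug disable
diagnose debug reset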
Hi,
I have completed that now and the logs look like the following:
ike 0:VPNAME:3373: sent IKE msg (RETRANSMIT_SA_INIT): IP1:500->IP2:500, len=340, id=11c81547683b6668/0000000000000000
ike shrank heap by 126976 bytes
ike 0:VPNAME:VPNAME: IPsec SA connect 14 IP1->IP2
ike 0:VPNAME:VPNAME: using existing connection
ike 0:VPNAME:VPNAME: config found
ike 0:VPNAME: request is on the queue
ike 0:VPNAME:VPNAME: IPsec SA connect 14 IP1->IP2
ike 0:VPNAME:VPNAME: using existing connection
ike 0:VPNAME:VPNAME: config found
ike 0:VPNAME: request is on the queue
ike 0:VPNAME:VPNAME: IPsec SA connect 14 IP1->IP2
ike 0:VPNAME:VPNAME: using existing connection
ike 0:VPNAME:VPNAME: config found
ike 0:VPNAME: request is on the queue
ike 0:VPNAME:3373: out 11C81547683B66680000000000000000212022080000000000000154220000840200002C010100040300000C0100000C800E0100030000080200000203000008030000020000000804000002020000280201000403000008010000030300000802000002030000080300000200000008040000020000002C030100040300000C0100000C800E01000300000802000005030000080300000C00000008040000022800008800020000C766A918E120DBC1A7654532AAFDEEF023D8182589A39F432E8F92D2FE6ADDDE18FACA5D0CF1F87026D2C906E1A620B7218C0D8ACD698641F3A54F4A5A98EE1C066E14D5A9C382D0747D2A9589D945365D782EC2787EC26A7EE96890C5A015CE21ECCE2E681F6B766BD837D4CC627B91D4A832A21D4B2DC9E83CDBEE1B038EFA2900002474A620484267A28B4D4C367A3F37324B61ECABD87A5774526865DCE67A66EEDC000000080000402E
ike 0:VPNAME:3373: sent IKE msg (RETRANSMIT_SA_INIT): IP1:500->IP2:500, len=340, id=11c81547683b6668/0000000000000000
ike 0:VPNAME:VPNAME: IPsec SA connect 14 IP1->IP2
ike 0:VPNAME:VPNAME: using existing connection
ike 0:VPNAME:VPNAME: config found
ike 0:VPNAME: request is on the queue
ike 0:VPNAME:3373: negotiation timeout, deleting
ike 0:VPNAME: connection expiring due to phase1 down
ike 0:VPNAME: deleting
ike 0:VPNAME: deleted
It just repeats that over and over at the moment.
Hello,
I suspect there is one-way communication and Phase 1 is timing out, since there is no response from the other side.
You can confirm this by running a packet capture:
diagnose sniffer packet any 'host <remote peer public IP address>' 4 0 a
If there is no response from the Azure side, we would need input from the Azure side as to why it is not responding.
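(In that command, 4 is the verbosity level, which includes the interface name in each line, 0 means capture continuously until stopped with Ctrl+C, and a prints absolute timestamps.)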
Hi,
I do appear to be getting a response from the Azure IP:
2023-01-13 15:54:05.046483 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:07.072643 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:12.534276 WAN out SourceIP.500 -> DestinationIP.500: udp 340
2023-01-13 15:54:12.534283 wan1 out SourceIP.500 -> DestinationIP.500: udp 340
2023-01-13 15:54:12.548333 WAN in DestinationIP.500 -> SourceIP.500: udp 364
2023-01-13 15:54:13.324125 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:18.469751 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:19.554405 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:21.448443 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:21.537001 WAN out SourceIP.500 -> DestinationIP.500: udp 340
2023-01-13 15:54:21.537010 wan1 out SourceIP.500 -> DestinationIP.500: udp 340
2023-01-13 15:54:21.553368 WAN in DestinationIP.500 -> SourceIP.500: udp 364
2023-01-13 15:54:21.604317 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:22.455210 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:22.681672 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:23.461975 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:23.770204 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:24.554297 WAN out SourceIP.500 -> DestinationIP.500: udp 340
2023-01-13 15:54:24.554304 wan1 out SourceIP.500 -> DestinationIP.500: udp 340
2023-01-13 15:54:24.568194 WAN in DestinationIP.500 -> SourceIP.500: udp 364
2023-01-13 15:54:24.688331 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:26.477251 WAN in DestinationIP.500 -> SourceIP.500: udp 40
2023-01-13 15:54:27.808321 WAN in DestinationIP.500 -> SourceIP.500: udp 40
Hello,
I would recommend collecting debug flow traces and then trying to trigger the issue:
diagnose debug flow filter addr <Azure public IP address>
diagnose debug flow show function-name enable
diagnose debug flow trace start 100
diagnose debug enable
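Once the issue has been reproduced, debugging can be switched off again with:
diagnose debug flow trace stop
diagnose debug disable
diagnose debug reset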