Hi,
We have two FortiGate 1500Ds with VDOMs enabled on both; each has 4 VDOMs (root, vpn, servers, wan). I configured HA A/P with vcluster2 enabled and moved the servers VDOM to vcluster 2 so that it becomes master on the other unit. So basically it looks like this:
Device     Vcluster1 priority       Vcluster2 priority
FGT_ha_1   200 (root, vpn, wan)     100 (servers)
FGT_ha_2   100 (root, vpn, wan)     200 (servers)
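Roughly, that layout corresponds to HA settings along these lines on FGT_ha_1 (only a sketch, assuming FortiOS 5.2 CLI syntax and the VDOM names from the table above):

```
config global
    config system ha
        set mode a-p
        set vcluster2 enable
        set priority 200
        set vdom "root" "vpn" "wan"
        config secondary-vcluster
            set priority 100
            set vdom "servers"
        end
    end
end
```

On FGT_ha_2 the two priority values would be swapped, so each unit is master for one vcluster.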
This worked great in terms of traffic flow. But when we tested VDOM failover (port monitoring enabled at the VDOM-port level) by manually unplugging the VDOM's cables, unfortunately the failover did not happen until I moved the VDOM to the other vcluster manually (e.g. moved vpn to vcluster 2).
Did I miss something here? I followed the 5.2 HA documentation, and I believe this should be straightforward.
thanks
FCSNP 5, JNCIS-FW,JNCIA-SSL ,MCSE, ITIL.
Hello
You have the same interfaces monitored in both the primary and the secondary vcluster, so your monitoring is wrong. Try:
config sys ha
    set vdom "CAIT" "VPN" "root"
    set monitor "port21" "port22" "port23" "port24" "port25" (all the interfaces of vdoms "CAIT", "VPN", "root")
    config secondary-vcluster
        set vdom "forti_srv"
        set monitor "port26" "port31" "port32" (all the interfaces of vdom "forti_srv")
    end
end
and try again
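Once the monitors are split per vcluster, the result can be checked from the global context; `get system ha status` and `diagnose sys ha status` are standard FortiOS CLI and report, for each vcluster, which unit currently holds the master role:

```
config global
get system ha status
diagnose sys ha status
end
```

If the split is correct, pulling a monitored cable on a VDOM should fail over only that VDOM's vcluster, not the whole cluster.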
Lucas
Can you draw a VDOM topology of the layout? And did you run any diagnostics when you simulated the failover? (diag debug application hasync and hatalk)
PCNSE
NSE
StrongSwan
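For those debugs to print anything, the debug level has to be raised first; with a level of 0 (as in the output shared below) nothing is logged. A minimal sketch using the standard FortiOS debug commands:

```
config global
diagnose debug enable
diagnose debug application hatalk -1
diagnose debug application hasync -1
end
```

Then repeat the cable pull and watch the console for heartbeat and election messages.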
I am sharing this as well:
FG1K5D3I14800133 # diag debug application hasync
6538: Unknown action 0
Command fail. Return code -1

FG1K5D3I14800133 # config global

FG1K5D3I14800133 (global) # diag debug application hasync
hasync debug level is 0 (0x0)

FG1K5D3I14800133 (global) # diag debug application hatalk
hatalk debug level is 0 (0x0)
FG1K5D3I14800133 (global) # config sys ha
FG1K5D3I14800133 (ha) # get
group-id            : 0
group-name          : FGT
mode                : a-p
password            : *
hbdev               : "port17" 50 "port18" 50
session-sync-dev    :
route-ttl           : 10
route-wait          : 0
route-hold          : 10
sync-config         : enable
encryption          : disable
authentication      : disable
hb-interval         : 2
hb-lost-threshold   : 6
helo-holddown       : 20
gratuitous-arps     : enable
arps                : 5
arps-interval       : 8
session-pickup      : enable
session-pickup-connectionless: disable
session-pickup-delay: disable
update-all-session-timer: disable
session-sync-daemon-number: 1
link-failed-signal  : disable
uninterruptible-upgrade: enable
standalone-mgmt-vdom: disable
ha-mgmt-status      : enable
ha-mgmt-interface   : mgmt1
ha-mgmt-interface-gateway: 172.16.108.1
ha-mgmt-interface-gateway6: ::
ha-eth-type         : 8890
hc-eth-type         : 8891
l2ep-eth-type       : 8893
ha-uptime-diff-margin: 300
vcluster2           : enable
vcluster-id         : 1
override            : disable
priority            : 100
monitor             : "port21" "port22" "port23" "port24" "port25" "port26" "port31" "port32"
pingserver-monitor-interface:
pingserver-failover-threshold: 0
pingserver-slave-force-reset: enable
pingserver-flip-timeout: 60
vdom                : "CAIT" "VPN" "root"
secondary-vcluster:
    vcluster-id     : 2
    override        : enable
    priority        : 200
    override-wait-time: 0
    monitor         : "port21" "port22" "port23" "port24" "port25" "port26" "port31" "port32"
    pingserver-monitor-interface:
    pingserver-failover-threshold: 0
    pingserver-slave-force-reset: enable
    vdom            : "forti_srv"
ha-direct           : disable
FG1K5D3I14800133 (ha) #
----------------------------------------------------
FG1K5D3I14800187 (global) # diag debug application hasync
hasync debug level is 0 (0x0)

FG1K5D3I14800187 (global) # diag debug application hatalk
hatalk debug level is 0 (0x0)

FG1K5D3I14800187 (global) # config system ha
FG1K5D3I14800187 (ha) # get
group-id            : 0
group-name          : FGT
mode                : a-p
password            : *
hbdev               : "port17" 50 "port18" 50
session-sync-dev    :
route-ttl           : 10
route-wait          : 0
route-hold          : 10
sync-config         : enable
encryption          : disable
authentication      : disable
hb-interval         : 2
hb-lost-threshold   : 6
helo-holddown       : 20
gratuitous-arps     : enable
arps                : 5
arps-interval       : 8
session-pickup      : enable
session-pickup-connectionless: disable
session-pickup-delay: disable
update-all-session-timer: disable
session-sync-daemon-number: 1
link-failed-signal  : disable
uninterruptible-upgrade: enable
standalone-mgmt-vdom: disable
ha-mgmt-status      : enable
ha-mgmt-interface   : mgmt1
ha-mgmt-interface-gateway: 172.16.101.1
ha-mgmt-interface-gateway6: ::
ha-eth-type         : 8890
hc-eth-type         : 8891
l2ep-eth-type       : 8893
ha-uptime-diff-margin: 300
vcluster2           : enable
vcluster-id         : 1
override            : disable
priority            : 200
monitor             : "port21" "port22" "port23" "port24" "port25" "port26" "port31" "port32"
pingserver-monitor-interface:
pingserver-failover-threshold: 0
pingserver-slave-force-reset: enable
pingserver-flip-timeout: 60
vdom                : "CAIT" "VPN" "root"
secondary-vcluster:
    vcluster-id     : 2
    override        : enable
    priority        : 100
    override-wait-time: 0
    monitor         : "port21" "port22" "port23" "port24" "port25" "port26" "port31" "port32"
    pingserver-monitor-interface:
    pingserver-failover-threshold: 0
    pingserver-slave-force-reset: enable
    vdom            : "forti_srv"
ha-direct           : disable
FG1K5D3I14800187 (ha) #
FCSNP 5, JNCIS-FW,JNCIA-SSL ,MCSE, ITIL.