Hi, I am trying to form HA across two sites with FortiGate 1000Ds, but it doesn't work.
The two devices are in the same VLAN even though they sit in different sites, because the L2 network is stretched between the sites.
Both firewalls are currently in the same site, and I am planning to migrate them to the other site with the steps below:
1. Move the secondary unit to the new site
2. Form HA across the two sites
3. Fail over HA to the new site
4. Move the (now) secondary unit to the new site
5. Re-form HA within the new site
I am stuck at step 2 above.
I can see 0x8890 traffic, but the two devices do not recognize each other, so each one acts as primary.
I have also confirmed with a sniffer that the heartbeat frames are there, and that the network devices between the two firewalls do not use ethertype 0x8890 for any other purpose.
Does anyone have an idea what could prevent the firewalls from forming HA?
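For reference, the sniffer I used was along these lines (port9 is my heartbeat interface; the exact filter syntax may differ slightly by FortiOS version):
diagnose sniffer packet port9 'ether proto 0x8890' 4 10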
Hard to tell without debugging but...
HA uses ethertypes 0x8890, 0x8891 and 0x8893. I know that Cisco Nexus switches use some of these, so other vendors might as well. You can change the ethertypes with:
config system ha
set ha-eth-type "8890"
set hc-eth-type "8891"
set l2ep-eth-type "8893"
end
Assuming you've got a link up on the HA link(s) - if not, twist the wire. A Layer 2 connection is all that is needed; I've used one across a big city successfully.
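If you want to double-check the physical state of the heartbeat port, something like the following shows the NIC details (replace port9 with whatever you use as hbdev):
diagnose hardware deviceinfo nic port9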
Thank you for your reply.
HA status below:
poc_fw0220 # get system ha status
HA Health Status: OK
Model: FortiGate-1000D
Mode: HA A-P
Group: 0
Debug: 0
Cluster Uptime: 0 days 0:5:52
Cluster state change time: 2022-02-24 10:56:24
Master selected using:
<2022/02/24 10:56:24> [this device] is selected as the master because it's the only member in the cluster.
ses_pickup: enable, ses_pickup_delay=disable
override: disable
System Usage stats:
[this device](updated 1 seconds ago):
sessions=80, average-cpu-user/nice/system/idle=0%/0%/0%/100%, memory=21%
HBDEV stats:
[this device](updated 1 seconds ago):
port9: physical/1000auto, up, rx-bytes/packets/dropped/errors=936502/1848/0/0, tx=889785/1755/0/0
Master: poc_fw0220 , [this device], HA cluster index = 0
number of vcluster: 1
vcluster 1: work 169.254.0.1
Master: [this device], HA operating index = 0
I set the eth-types as you said, but HA still doesn't work.
I use Juniper and Arista switches between the devices, and I have already confirmed that these switches do not use ethertypes 0x8890, 0x8891 or 0x8893.
Is there any problem if each HA device's ha-mgmt-interfaces gateway is different?
I compared [show full-configuration] on both units and there is no difference except the items below:
・set password
・config ha-mgmt-interfaces
set gateway x.x.x.x
・set priority
poc_fw0220 (ha) # show full-configuration
config system ha
set group-id 0
set group-name "fw0219-fw0220"
set mode a-p
set sync-packet-balance disable
set password ENC [password]
set hbdev "port9" 100
unset session-sync-dev
set route-ttl 10
set route-wait 0
set route-hold 10
set multicast-ttl 600
set sync-config enable
set encryption disable
set authentication disable
set hb-interval 2
set hb-lost-threshold 6
set hello-holddown 20
set gratuitous-arps enable
set arps 5
set arps-interval 8
set session-pickup enable
set session-pickup-connectionless disable
set session-pickup-expectation disable
set session-pickup-delay disable
set link-failed-signal disable
set uninterruptible-upgrade enable
set ha-mgmt-status enable
config ha-mgmt-interfaces
edit 1
set interface "port16"
set dst 0.0.0.0 0.0.0.0
set gateway 10.240.255.254
set gateway6 ::
next
end
set ha-eth-type "8890"
set hc-eth-type "8891"
set l2ep-eth-type "8893"
set ha-uptime-diff-margin 300
set vcluster2 disable
set override disable
set priority 64
unset monitor
unset pingserver-monitor-interface
unset vdom
set ha-direct disable
set ssd-failover disable
set memory-compatible-mode disable
set inter-cluster-session-sync disable
set logical-sn disable
end
poc_fw0220 (ha) #
Thank you for your reply.
I set the eth-types as you said, but it still doesn't work.
This is the HA status below:
poc_fw0219 # get system ha status
HA Health Status: OK
Model: FortiGate-1000D
Mode: HA A-P
Group: 0
Debug: 0
Cluster Uptime: 0 days 0:1:34
Cluster state change time: 2022-02-24 12:52:03
Master selected using:
<2022/02/24 12:52:03> [this device] is selected as the master because it's the only member in the cluster.
ses_pickup: enable, ses_pickup_delay=disable
override: disable
System Usage stats:
[this device](updated 4 seconds ago):
sessions=0, average-cpu-user/nice/system/idle=0%/0%/0%/100%, memory=21%
HBDEV stats:
[this device](updated 4 seconds ago):
port3: physical/1000auto, up, rx-bytes/packets/dropped/errors=236740/469/0/0, tx=230577/453/0/0
Master: poc_fw0219 , [this device], HA cluster index = 0
number of vcluster: 1
vcluster 1: work 169.254.0.1
Master: [this device], HA operating index = 0
I use Juniper and Arista switches between the firewall devices, and I have already confirmed that these switches do not use ethertypes 0x8890, 0x8891 or 0x8893.
I also tried setting these ethertypes back to the original values just in case, but it still doesn't work.
I want to use "NAT/Route Mode Heartbeat", so the sequence should be as below:
・discover the cluster members using [ha-eth-type "8890"]
・synchronize sessions using [l2ep-eth-type "8893"]
However, I cannot see any 0x8893 packets (I can only see 0x8890 packets).
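For reference, the filters I used to check this were along these lines (port3 is the hbdev on this unit; the syntax may vary by firmware):
diagnose sniffer packet port3 'ether proto 0x8890' 4 5
diagnose sniffer packet port3 'ether proto 0x8893' 4 5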
First, you will not see ethertype 8893 until the cluster has formed, so that is OK. This unit thinks it's alone, i.e., it doesn't see the other unit.
Please do a "conf sys ha", "show full" on both units, copy to text files and compare them. BTW, you should change the groupID to something other than "0", to avoid trouble later. For instance, if there are other FGTs in HA mode on the same network, this might be confusing as "0" is the default. And, as the HA traffic is across buildings, I would put a password in, just so.
All of this does not fix your problem, so I am curious what you will find when comparing the configs.
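Something along these lines on both units would do, where "7" is just an example value - use the same non-zero group-id and the same password on both units:
config system ha
set group-id 7
set password <your-ha-password>
end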
I checked the differences in the "show full" configuration.
There are some differences, but I cannot find anything specific except the management interface gateway IP.
Is there any problem if the management interface gateway IPs are different?
One is 10.1.1.254 and the other is 10.5.5.254.
I think this has nothing to do with how HA works, but I mention it just in case.
It seems to be a very difficult case, and the deadline is the end of this month.
If I cannot find any other ideas, I might give up on this and look for another way.
Some settings are intentionally not synchronized; the HA management interface IP and its gateway are among them. So that is OK.
Go ahead and open a TAC case for this. Chances are that FTNT can help you, at least in parallel with the forum in case some other member comes up with an idea. I am assuming that you checked connectivity of the HA line in the beginning - pure L2, no ACLs or filters.
You can watch HA traffic with:
diagnose debug enable
diagnose debug application hatalk -1
diagnose debug application hasync -1
This will run for 30 minutes at most. FTNT support will probably ask for that output, so you should log it. If it's interesting, you may post it here as well.
You should make absolutely sure that the cluster works when the units are connected directly by cable. If that works, but it does not work over the VLAN, then the problem is the line. VLAN tagging enlarges the frames, so it might even be an MTU issue.
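Once the two units do see each other, it is also worth confirming that their configurations are in sync; something along these lines lists the checksums of all cluster members (exact output depends on the firmware):
diagnose sys ha checksum cluster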