Hi,
updating an active-passive setup on a 120G from 7.0.15 to 7.2.9 seems to break HA completely.
It looks like the internal heartbeat network can no longer be found.
I raised a ticket on that. A downgrade is possible, but it takes time and nerves.
Take care,
Ronny
2024-08-21 13:15:18 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:18 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:21 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:21 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:23 <hatalk> vcluster_1: ha_prio=0(primary), state/chg_time/now=2(work)/1724238681/1724238923
2024-08-21 13:15:24 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:24 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:24 <hasync:WARN> conn=0x4760c3d0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:24 <hasync:WARN> conn=0x4760c3d0 abort: rt=-1, dst=169.254.0.1, sync_type=27(capwap)
2024-08-21 13:15:27 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:27 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:27 <hasync:WARN> conn=0x4760c3d0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:27 <hasync:WARN> conn=0x4760c3d0 abort: rt=-1, dst=169.254.0.1, sync_type=5(conf)
2024-08-21 13:15:30 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:30 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:33 <hatalk> vcluster_1: ha_prio=0(primary), state/chg_time/now=2(work)/1724238681/1724238933
2024-08-21 13:15:33 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:33 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:36 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:36 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:36 <hasync:WARN> conn=0x4760c3d0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:36 <hasync:WARN> conn=0x4760c3d0 abort: rt=-1, dst=169.254.0.1, sync_type=18(byod)
2024-08-21 13:15:40 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:40 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:43 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:43 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:43 <hatalk> vcluster_1: ha_prio=0(primary), state/chg_time/now=2(work)/1724238681/1724238943
2024-08-21 13:15:46 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:46 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
2024-08-21 13:15:49 <hasync:WARN> conn=0x476086a0 connect(169.254.0.1) failed: 113(No route to host)
2024-08-21 13:15:49 <hasync:WARN> conn=0x476086a0 abort: rt=-1, dst=169.254.0.1, sync_type=3(fib)
Hello @Secucard,
Did you try to reboot the secondary FortiGate?
Do you see the secondary unit in the primary FortiGate GUI (System > HA)? Check whether both units are listed, even if they are not in sync.
Is it possible that the secondary unit is still on the older version?
If you have a console cable, connect to the secondary unit and check its firmware version:
# get sys status
# get sys ha status
# diagnose sys ha checksum cluster
You can try to run the following and collect the debugs:
diagnose sys ha checksum recalculate
diagnose debug application hatalk -1
diagnose debug application hasync -1
execute ha sync start
diagnose debug enable
execute ha force sync-config
All the steps you mentioned have been performed.
We tried a second time, and it is still the same nightmare.
Hello @Secucard ,
There is a known issue on 120G FortiOS 7.2.9 (Dev Ticket 1056138)
This is scheduled to be fixed in 7.2.10.
However, we definitely need the logs to check whether this is a match or whether you are running into a different issue.
Adding to my previous comment,
Fortinet doesn't support downgrading. Instead, you can roll back.
Rolling back simply boots the device into the previous partition, which holds the old firmware and config file. You can boot into the new firmware again later if you choose.
The commands to do so are as follows:
diag sys flash list                          <---- list the partitions and see which one is active
exec set-next-reboot <primary|secondary>     <---- select the partition to boot from (partition 1 = primary, partition 2 = secondary)
exec reboot
Thanks for the assistance with the rollback. We were able to fix the issue in our testing lab today.
It would be very important and helpful if Fortinet added this known issue to the known-issues overview. I suppose many companies use HA, and this may cause trouble for them. Thanks.
Hello,
Are you using the dedicated HA ports as heartbeat ports? Would you be able to test with a different port as the heartbeat port (port5, for example)?
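For reference, switching the heartbeat interface is done under `config system ha`. A minimal sketch (the interface name `port5` and the priority value 50 are examples; adjust them to your cabling, and note that changing `hbdev` can briefly disrupt cluster communication):

```
config system ha
    set hbdev "port5" 50
end
```

After committing, `get sys ha status` on both units should show the heartbeat coming up on the new interface.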
For the logs, do I have to provide the TAC file? I will prepare them tomorrow.
Thanks. Also, you mentioned that you have raised a TAC ticket. Kindly post these logs to that ticket as well, and the assigned engineer will follow up accordingly.