I experienced this issue with an HA A-P cluster and would like to figure out what exactly I did wrong.
1. I have two FG-100F units in an A-P cluster with out-of-band management via dedicated mgmt interfaces.
2. I tried to update them via Fabric management. The outcome: only the secondary was updated, and the cluster went out of sync.
3. I removed the secondary from the cluster and updated the primary manually. The cluster was still out of sync.
4. I did a factory reset on the secondary and tried to rejoin it to the cluster, so I had a clean config on the secondary.
I copied the ha config block from the primary, changing the priority value, and set up the mgmt interface as dedicated-to-management.
When I issued the command:
set interface "mgmt"
I got the error: node_check_object fail!
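For reference, the config I was entering looked roughly like this (this is a sketch from memory; the group name and priority value are examples, not my exact values):

```
config system ha
    set group-name "cluster1"
    set mode a-p
    set priority 100
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "mgmt"
        next
    end
end
```

The set interface "mgmt" line inside the ha-mgmt-interfaces block is where the error appeared.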
I examined the config on both the primary and the secondary and noticed a difference: the secondary has vdom "root" on the mgmt interface, but the primary doesn't. So I backed up the config from the primary and restored it on the secondary. After a reboot the devices built the cluster, but with an out-of-sync error. I then checked the HA checksums, found differences in application.list, and corrected them.
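For anyone hitting the same thing, the checksum comparison can be done with the standard FortiOS HA diagnostics (run on both units and diff the output):

```
diagnose sys ha checksum cluster        # checksums reported by all cluster members
diagnose sys ha checksum show root      # per-object checksums for the root vdom
diagnose sys ha checksum recalculate    # force each unit to recalculate
```

Comparing the per-object output on both units is how I found the application.list mismatch.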
The questions are:
1. Why did the node_check_object fail! error occur, and how can I avoid it in the future?
2. Why, after a clean update and restoring the same config, did differences appear between the configs? Is it a device/unit-specific problem?