Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.

HA cluster synchronization after a configuration restore that modifies the HA configuration



I have some doubts about how an HA active-passive (A-P) cluster behaves when you restore a configuration to the master of a FortiGate cluster and its HA configuration doesn't match the configuration on the slave.


In my mind, it would make sense for the cluster to break, with each firewall standing alone and the slave staying on standby with the old configuration. However, in my test environment I have seen that the cluster keeps working perfectly with the new configuration.


I consider this risky behaviour: if you mistakenly restore a configuration file from another FortiGate of the same model and firmware version, you change the whole cluster's configuration. If the cluster behaved as I expected, the slave firewall would keep running with the previous configuration and you would not lose service.


Can somebody help me with this topic? 


First, what is your purpose in restoring the config file to the current A-P master unit? Is it to change something in bulk, or to fix something broken on the master?

In either case, I would suggest the following:

1. Let the other unit take the master role first.
2. Isolate the original master unit from the cluster/network.
3. Restore the (modified) config file via the dedicated management interface.
4. Disconnect the acting master from the network and, at the same time, reconnect the intended master so it takes over OUTSIDE of HA operation (both units believe they are masters).
5. Reboot the other unit(s) so that the intended master has the longest uptime.
6. Reconnect the HA heartbeat interface(s) so the slave(s) sync up with the master.
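As a rough sketch of that procedure in FortiOS CLI terms (the commands below are assumptions for a recent FortiOS release, and the TFTP server address and filename are placeholders; adapt to your environment):

```
# 1. Check which unit is currently master and the cluster state
get system ha status

# 2. On the current master, reset its HA uptime so the other unit
#    takes over (an alternative to pulling cables)
diagnose sys ha reset-uptime

# 3. With the original master isolated, restore the modified config
#    via its management interface (placeholder file and server)
execute restore config tftp modified-config.conf 192.0.2.10

# 4. After swapping the units back in, reboot the other member(s)
#    so the intended master ends up with the longest uptime
execute reboot

# 5. Once the heartbeat links are reconnected, verify the members
#    have matching configuration checksums
diagnose sys ha checksum show
```

The uptime-reset step only matters if override is disabled, since in that mode the election is age-based rather than priority-based.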


My intention is to know what the behaviour of the firewall cluster would be if I made a mistake and uploaded a bad configuration file to the master. It is important to discover the possible failure points of your system, and this is one of them.


I know the best practice for modifying the configuration of a cluster (I am NSE 4 certified), but I would like to know what the behaviour is when you restore the wrong configuration.


I assume you picked a much older config of the same type to restore (otherwise the restore wouldn't proceed), and I also assume you're not using override, which means uptime dictates which unit takes the master role. When you restore the config file, the unit reboots. After it comes back up, the units negotiate to elect a master, but this unit becomes a slave because its uptime is shorter than the others'. It then starts syncing with the master. That's what would happen.
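The override setting mentioned above is what switches the election between uptime-based and priority-based behaviour. A minimal sketch of the relevant HA settings (values shown are illustrative, not recommendations):

```
config system ha
    # With override disabled (the default), the unit with the longest
    # HA uptime wins the election, which is why the restored unit
    # comes back as a slave and syncs from the master.
    set override disable
    # With override enabled, the configured priority wins immediately,
    # so the rebooted unit could take back the master role and push
    # the restored (possibly wrong) config to the cluster.
    set priority 128
end

# Verify the election result after the unit rejoins
get system ha status
```

This is why a mis-restore with override disabled tends to be self-correcting: the freshly rebooted unit loses the election and is overwritten by the surviving master's configuration.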

