Created on 07-18-2018 06:17 AM
Edited on 10-16-2025 09:32 AM
By Stephen_G
Description
This article describes what an HA cluster is and how to configure and check an HA cluster on FortiSandbox.
Scope
FortiSandbox 3.0.x, 4.0.x, 4.2.x, 4.4.x, 5.0.x.
Solution
To handle the scanning of a high number of files concurrently, multiple FortiSandbox devices can be used together in a load-balancing high availability (HA) cluster.
Roles:
There are three types of nodes in a cluster: Master, Primary Slave, and Slave.
Master node roles:
All scan-related configuration should be done on the Master node; it is broadcast from the Master to the other nodes. Any scan-related configuration set on a Slave will be overwritten.
It is advised to use a FortiSandbox-3000D or higher model for the Master and Primary Slave roles.
Primary Slave node roles:
It monitors the master's condition and, if the master node fails, the primary slave will assume the role of master. The former master will then become a primary slave.
The Primary Slave node must be the same model as the Master node (so, per the recommendation above, a FortiSandbox-3000D or higher model).
Slave node roles:
Slave nodes should have their own network settings and VM image settings.
Slave nodes in a cluster do not need to be the same model.
Requirements and Failover Description.
Requirements to configure an HA Cluster.
Internal cluster communication includes:
Failover Description.
The Master node and Primary Slave node send heartbeats to each other to detect whether their peer is alive. If something goes wrong (such as a Master reboot or a network issue), failover is triggered in one of two possible ways:
When the new Master is decided, it will:
When the original Master becomes the Primary Slave node, it will:
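After a failover has occurred, the current roles can be confirmed from the CLI using the hc-status command shown later in this article (a minimal check, run on the Master or Primary Slave):
hc-status -l
Whichever unit is reported as Master in the output is the one currently holding that role.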
HA-Cluster on CLI and GUI.
Main HA-Cluster CLI Commands.
HA-Cluster configuration can be done from the CLI only.
The following commands work for firmware version 3.0.x:
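(The list below is reconstructed from the configuration examples later in this article, so it covers the main options only, not the full syntax.)
hc-settings    --> configure the unit's cluster role, device name, cluster name, password, and heartbeat/cluster interfaces; hc-settings -l lists the current settings
hc-slave       --> add a Slave unit to the cluster by pointing it at the Master's heartbeat IP and cluster password
hc-master -l   --> display the Master-level settings, such as the file scan processing capacity
hc-status -l   --> show the status of the Master and Primary Slave units in the cluster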
The following commands work for firmware versions 4.0.x, 4.2.x, 4.4.x and 5.0.x:
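(As above, this list is reconstructed from the examples below; in these versions, hc-slave and hc-master are replaced by hc-worker and hc-primary.)
hc-settings    --> configure the unit's cluster role, device name, cluster name, password, and heartbeat/cluster interfaces; hc-settings -l lists the current settings
hc-worker      --> add a Slave unit to the cluster by pointing it at the Master's heartbeat IP and cluster password
hc-primary -l  --> display the Master-level settings, such as the file scan processing capacity
hc-status -l   --> show the status of the Master and Primary Slave units in the cluster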
Main HA-Cluster GUI on Master:
Go to HA-Cluster -> Status to see all nodes with their S/N, Type (role), name, IP (internal heartbeat port), and Status (active or inactive).
Go to HA-Cluster -> Job Summary to see job statistics for each node, listed by S/N with the number of jobs in the Pending, Malicious, Suspicious, Clean, and Other states.
Go to HA-Cluster -> Health Check to set up a ping server that verifies that the network connection between client devices and the FortiSandbox stays up. If the check fails, failover is triggered.
Under HA-Cluster, select a node's serial number to navigate to the Primary Slave or Regular Slave GUI from the Master.
Example configuration:
This example shows the steps for setting up an HA cluster using two FortiSandbox 3000E units and one FortiSandbox VM.
A minimum of three subnets is needed, one for each port used in this example: one for port1 (management and the external cluster IP), one for port2 (the internal cluster heartbeat), and one for port3.
Master configuration:
IP ports configuration:
set port1-ip 10.5.25.40/20
set default-gw 10.5.31.254
set port2-ip 10.139.9.40/20
set port3-ip 10.138.9.40/20
IP ports verification:
show
Configured parameters:
Port 1 IPv4 IP: 10.5.25.40/20 MAC: 00:62:6F:73:28:01
Port 2 IPv4 IP: 10.139.9.40/20 MAC: 00:62:6F:73:28:02
Port 3 IPv4 IP: 10.138.9.40/20 MAC: 00:62:6F:73:28:03
Port 4 IPv4 IP: 192.168.3.99/24 MAC: 00:62:6F:73:28:04
Port 5 IPv4 IP: 192.168.4.99/24 MAC: 00:62:6F:73:28:05
Port 6 IPv4 IP: 192.168.5.99/24 MAC: 00:62:6F:73:28:06
IPv4 Default Gateway: 10.5.31.254
HC-settings configuration for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:
hc-settings -sc -tM -nFSA1 -cTT -pfortinet -iport2   --> -sc configures the cluster role, -tM defines the device role (Master), -n the device name, -c the cluster name, -p the cluster password (authentication code), -i the heartbeat port.
The unit was successfully configured.
hc-settings -si -iport1 -a10.5.25.41/20   --> -si configures the external cluster IP, -i the interface it is assigned to, -a the external cluster IP address.
HC-settings verification for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:
hc-settings -l
SN: FSA-VM0000000123
Type: Master
Name: FSA1
HC-Name: TT
Authentication Code: fortinet
Interface: port2
Cluster Interfaces: port1: 10.5.25.41/255.255.240.0.
hc-master -l   --> for 3.0.x
hc-primary -l   --> for 4.0.x, 4.2.x, 4.4.x, and 5.0.x
File scan is enabled with 50 processing capacity
Primary Slave configuration:
IP ports configuration:
set port1-ip 10.5.27.113/20
set default-gw 10.5.31.254
set port2-ip 10.139.11.113/20
set port3-ip 10.138.11.113/20
IP ports verification:
show
Configured parameters:
Port 1 IPv4 IP: 10.5.27.113/20 MAC: 00:71:75:61:0D:01
Port 2 IPv4 IP: 10.139.11.113/20 MAC: 00:71:75:61:0D:02
Port 3 IPv4 IP: 10.138.11.113/20 MAC: 00:71:75:61:0D:03
Port 4 IPv4 IP: 192.168.3.99/24 MAC: 00:71:75:61:0D:04
Port 5 IPv4 IP: 192.168.4.99/24 MAC: 00:71:75:61:0D:05
Port 6 IPv4 IP: 192.168.5.99/24 MAC: 00:71:75:61:0D:06
IPv4 Default Gateway: 10.5.31.254
HC-settings configuration for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:
hc-settings -sc -tP -nFSA2 -iport2   --> -sc configures the cluster role, -tP defines the device role (Primary Slave), -n the device name, -i the heartbeat port.
The unit was successfully configured.
Warning:
The Primary Slave unit may take over the Master role of the cluster if the original Master goes down, so make sure it has the same network environment settings as the Master unit.
For example:
*) Configure the same subnet for port1 on the Master and Primary Slave.
*) Configure the same subnet for port3 on the Master and Primary Slave.
*) Configure the same routing table on the Master and Primary Slave.
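One way to verify this is to run the same show command used in the port configuration sections on both units and compare the output:
show
In this example, both the Master and the Primary Slave have port1 in the 10.5.16.0/20 subnet, port3 in the 10.138.0.0/20 subnet, and 10.5.31.254 as the default gateway, so the network environment matches.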
For 3.0.x:
hc-slave -a -s10.139.9.40 -pfortinet   --> -a adds the unit to the cluster, -s defines the Master heartbeat port IP, -p the cluster password.
The unit was successfully configured
For 4.0.x, 4.2.x, 4.4.x, 5.0.x:
hc-worker -a -s10.139.9.40 -pfortinet   --> -a adds the unit to the cluster, -s defines the Master heartbeat port IP, -p the cluster password.
The unit was successfully configured
HC-settings verification for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:
hc-settings -l
SN: FSA-VM0000000456
Type: Primary Slave
Name: FSA2
Interface: port2
hc-status -l
Status of master and primary slave units in cluster: TT
--------------------------------------------------------------------------------
SN Type Name IP Active
FSA-VM0000000123 Master FSA1 10.139.9.40 1 second(s) ago
FSA-VM0000000456 Primary Slave FSA2 10.139.11.113 1 second(s) ago
Slave configuration:
IP ports configuration:
set port1-ip 10.5.27.160/20
set default-gw 10.5.31.254
set port2-ip 10.139.11.160/20
set port3-ip 10.138.11.160/20
IP ports verification:
show
Configured parameters:
Port 1 IPv4 IP: 10.5.27.160/20 MAC: 00:71:75:61:3C:01
Port 2 IPv4 IP: 10.139.11.160/20 MAC: 00:71:75:61:3C:02
Port 3 IPv4 IP: 10.138.11.160/20 MAC: 00:71:75:61:3C:03
Port 4 IPv4 IP: 192.168.3.99/24 MAC: 00:71:75:61:3C:04
Port 5 IPv4 IP: 192.168.4.99/24 MAC: 00:71:75:61:3C:05
Port 6 IPv4 IP: 192.168.5.99/24 MAC: 00:71:75:61:3C:06
IPv4 Default Gateway: 10.5.31.254
HC-settings configuration for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:
hc-settings -sc -tR -nFSA3 -iport2   --> -sc configures the cluster role, -tR defines the device role (Regular Slave), -n the device name, -i the heartbeat port.
The unit was successfully configured.
For 3.0.x:
hc-slave -a -s10.139.9.40 -pfortinet   --> -a adds the unit to the cluster, -s defines the Master heartbeat port IP, -p the cluster password.
The unit was successfully configured
For 4.0.x, 4.2.x, 4.4.x, and 5.0.x:
hc-worker -a -s10.139.9.40 -pfortinet   --> -a adds the unit to the cluster, -s defines the Master heartbeat port IP, -p the cluster password.
The unit was successfully configured
HC-settings verification for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:
hc-settings -l
SN: FSA-VM0000000789
Type: Regular Slave
Name: FSA3
Interface: port2
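Once all three units have been added, the cluster membership can be verified from the Master with the same status command used earlier:
hc-status -l
The setup can also be confirmed in the Master GUI under HA-Cluster -> Status, which lists every node (FSA1, FSA2, and FSA3 in this example) with its S/N, role, IP, and status.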
More details are available in the administration guide on the Fortinet Document Library.