FortiSandbox
FortiSandbox provides a solution to protect against advanced threats and ransomware for companies who don’t want to implement and maintain a sandbox environment on their own.
cborgato_FTNT
Article Id 196703

Description

 

This article describes what an HA cluster is and how to configure and verify an HA cluster on FortiSandbox.


Scope

 

FortiSandbox 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x.


Solution

 

To handle the scanning of a high number of files concurrently, multiple FortiSandbox devices can be used together in a load-balancing high availability (HA) cluster.

Roles:
There are three types of nodes in a cluster: Master, Primary Slave, and Slave.

Master node roles:

  • Manages the HA-Cluster.
  • Distributes jobs and gathers the results.
  • Interacts with clients/admins.
  • Can also perform normal file scans.

 

All scan-related configuration should be done on the Master node; it is then broadcast from the Master to the other nodes. Any scan-related configuration that has been set on a slave will be overwritten.

It is advised to use a FortiSandbox 3000D or above model for both the Master and Primary Slave roles.

Primary Slave node roles:

  • HA support.
  • Normal file scans.


It monitors the master's condition and, if the master node fails, the primary slave will assume the role of master. The former master will then become a primary slave.

The Primary Slave node must be the same model as the Master node (so, per the advice above, a 3000D or above model).

Slave node roles:

  • Perform normal file scans and report results back to the Master and Primary Slave.
  • They can also store detailed job information.

 

Slave nodes should have their own network settings and VM image settings.

Slave nodes in a cluster do not need to be the same model.

Requirements and Failover Description.

Requirements to configure an HA Cluster:

  • The scan environment on all cluster nodes should be the same (for example, the same set of Windows VMs should be installed on all nodes so the same scan profile can be used).
  • port3 on all nodes should be connected to the Internet separately.
  • All nodes should be on the same firmware build.
  • Each node should have a dedicated network port for internal cluster communication (heartbeat port).

Internal cluster communication includes:

  1. Job dispatch.
  2. Job result reply.
  3. Setting synchronization.
  4. Cluster topology broadcasting.


Failover Description.
The Master node and the Primary Slave node send heartbeats to each other to detect whether their peer is alive. If something goes wrong (such as a Master reboot or a network issue), failover is triggered in one of two possible ways:

 

  • Objective node available:
    The Objective node is a slave (either Primary or Regular) that can confirm the new Master. After a Primary Slave node takes over the Master role and the new role is accepted by the Objective node, the original Master node will accept the decision when it is back online.
    After the original Master is back online, it becomes a Primary Slave node.

  • No Objective node available:
    This occurs when the cluster's internal communication is down. For example, if internal cluster communication fails because of a faulty switch, all Slave nodes may become Masters (more than one Master unit).
    When the system is back online, the unit with the largest serial number keeps the Master role, and the others revert to Primary Slave.
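The tie-break described above can be sketched in a few lines of Python (illustrative only, not FortiSandbox code):

```python
# Illustrative sketch of the tie-break rule: when more than one unit claims
# the Master role after a communication outage, the unit with the largest
# serial number keeps it and the rest revert to Primary Slave.

def elect_master(claimant_serials):
    """Return (kept_master, demoted) among serials claiming the Master role."""
    kept = max(claimant_serials)  # largest serial number wins the tie-break
    demoted = sorted(s for s in claimant_serials if s != kept)
    return kept, demoted
```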

When the new Master is decided, it will:

  • Restart the main controller to rebuild the scan environment.
  • Apply all the settings synchronized from the original Master, except the port3 IP and the internal cluster IP of the original Master.

When the original Master becomes the Primary Slave node, it will:

  • Keep its original Port3 IP and internal cluster communication IP.
  • Shut down all other interface ports.

HA-Cluster on CLI and GUI.

Main HA-Cluster CLI Commands.


HA-Cluster configuration can be done on the CLI only.

 

The following commands work for firmware version 3.0.x:

 

  • hc-settings: Configure the unit as an HA-Cluster mode unit and set the cluster failover IP.
  • hc-status: List the status of HA-Cluster units.
  • hc-slave: Add, update, or remove a slave unit to or from the HA Cluster.
  • hc-master: Turn the file scan on the Master node on or off and adjust the Master's scan power.

 

The following commands work for firmware versions 4.0.x, 4.2.x, 4.4.x and 5.0.x:

 

  • hc-settings: Configure the unit as an HA-Cluster mode unit and set the cluster failover IP.
  • hc-status: List the status of HA-Cluster units.
  • hc-worker: Add, update, or remove a worker (slave) unit to or from the HA Cluster.
  • hc-primary: Configure the unit as an HA-Cluster primary unit.
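The renames between the two firmware trains amount to a simple lookup; a hypothetical Python sketch:

```python
# Illustrative lookup of the HA-Cluster CLI renames between firmware trains:
# hc-settings and hc-status kept their names, while the slave/master
# commands became hc-worker/hc-primary in 4.0.x and later.
RENAMES_3X_TO_4X = {
    "hc-settings": "hc-settings",
    "hc-status": "hc-status",
    "hc-slave": "hc-worker",
    "hc-master": "hc-primary",
}

def modern_command(legacy_command):
    """Map a 3.0.x HA-Cluster command to its 4.0.x+ equivalent."""
    return RENAMES_3X_TO_4X[legacy_command]
```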


Main HA-Cluster GUI on Master:

 

Go to HA-Cluster -> Status to check all the nodes with their S/N, Type (role), name, IP (internal heartbeat port), and Status (active or inactive).
Go to HA-Cluster -> Job Summary to see job statistics for each node by S/N, with Pending, Malicious, Suspicious, Clean, and Other counts.
Go to HA-Cluster -> Health Check to set up a Ping server that verifies the network between client devices and FortiSandbox stays up. If it does not, failover will be triggered.
Go to HA-Cluster -> SerialNumber to navigate from the Master to the Primary Slave or Regular Slave GUI.

Example configuration:


This example shows the steps for setting up an HA cluster using two FortiSandbox 3000E units and one FortiSandbox VM.

A minimum of three subnets is needed:

  • On port1, set management access and make sure it can reach FDN for license checks and FortiGuard updates (10.5.16.0/20).
  • On port2, set the internal cluster communication subnet. Port2 is the heartbeat port (10.139.0.0/20).
  • On port3, set the outgoing port on each unit (10.138.0.0/20).
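Under the assumption of this /20 addressing plan, Python's standard ipaddress module can sanity-check a node's port assignments before joining it to the cluster (the helper name is illustrative):

```python
# Sketch using Python's ipaddress module to check that a node's port IPs
# fall in the three example subnets from this article.
import ipaddress

MGMT = ipaddress.ip_network("10.5.16.0/20")        # port1: management / FDN
HEARTBEAT = ipaddress.ip_network("10.139.0.0/20")  # port2: cluster heartbeat
OUTGOING = ipaddress.ip_network("10.138.0.0/20")   # port3: outgoing scans

def node_addressing_ok(port1_ip, port2_ip, port3_ip):
    """True if each port IP lands in its intended subnet."""
    return (ipaddress.ip_address(port1_ip) in MGMT
            and ipaddress.ip_address(port2_ip) in HEARTBEAT
            and ipaddress.ip_address(port3_ip) in OUTGOING)
```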

Master configuration:

 

IP ports configuration:

 

set port1-ip 10.5.25.40/20
set default-gw 10.5.31.254
set port2-ip 10.139.9.40/20
set port3-ip 10.138.9.40/20


IP ports verification:

 

show

Configured parameters:

Port 1  IPv4 IP: 10.5.25.40/20  MAC: 00:62:6F:73:28:01
Port 2  IPv4 IP: 10.139.9.40/20         MAC: 00:62:6F:73:28:02
Port 3  IPv4 IP: 10.138.9.40/20         MAC: 00:62:6F:73:28:03
Port 4  IPv4 IP: 192.168.3.99/24        MAC: 00:62:6F:73:28:04
Port 5  IPv4 IP: 192.168.4.99/24        MAC: 00:62:6F:73:28:05
Port 6  IPv4 IP: 192.168.5.99/24        MAC: 00:62:6F:73:28:06
IPv4 Default Gateway: 10.5.31.254


HC-setting configuration for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:

 

hc-settings -sc -tM -nFSA1 -cTT -pfortinet -iport2  --> -sc sets the cluster role and -tM defines the device role (Master), -n the device name, -c the cluster name, -p the cluster password, -i the heartbeat port.
The unit was successfully configured.

hc-settings -si -iport1 -a10.5.25.41/20 --> -si sets the cluster's external (failover) IP, -i the interface carrying that IP, -a the external cluster IP address.


HC-setting verification for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:

 

hc-settings -l
SN: FSA-VM0000000123
Type: Master
Name: FSA1
HC-Name: TT
Authentication Code: fortinet
Interface: port2


Cluster Interfaces: port1: 10.5.25.41/255.255.240.0.

     

hc-master -l  --> for 3.0.x

hc-primary -l  --> for 4.0.x, 4.2.x, 4.4.x and 5.0.x
File scan is enabled with 50 processing capacity


Primary configuration.
IP ports configuration:

 

set port1-ip 10.5.27.113/20
set default-gw 10.5.31.254
set port2-ip 10.139.11.113/20
set port3-ip 10.138.11.113/20


IP ports verification:

 

show

 

Configured parameters:

 

Port 1  IPv4 IP: 10.5.27.113/20         MAC: 00:71:75:61:0D:01
Port 2  IPv4 IP: 10.139.11.113/20       MAC: 00:71:75:61:0D:02
Port 3  IPv4 IP: 10.138.11.113/20       MAC: 00:71:75:61:0D:03
Port 4  IPv4 IP: 192.168.3.99/24        MAC: 00:71:75:61:0D:04
Port 5  IPv4 IP: 192.168.4.99/24        MAC: 00:71:75:61:0D:05
Port 6  IPv4 IP: 192.168.5.99/24        MAC: 00:71:75:61:0D:06
IPv4 Default Gateway: 10.5.31.254


HC-setting configuration for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:

 

hc-settings -sc -tP -nFSA2 -iport2 --> -sc sets the cluster role and -tP defines the device role (Primary Slave), -n the device name, -i the heartbeat port.
The unit was successfully configured.

Warning:
        Primary slave unit may take over the master role of the cluster if the original master is down, you have to make sure it has the same network environment settings as master unit.
For example:
         *) configure same subnet for port1 on master and primary slaves
         *) configure same subnet for port3 on master and primary slaves
         *) configure route table on master and primary slaves
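The subnet checks in the warning above can be verified offline with a small sketch using Python's ipaddress module (the /20 prefix is this article's example value):

```python
# Sketch of the warning's checklist: confirm that a candidate Primary Slave
# shares the Master's port1 and port3 subnets before relying on failover.
import ipaddress

def same_subnet(ip_a, ip_b, prefix=20):
    """True if both addresses fall in the same network at the given prefix."""
    net_a = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{prefix}", strict=False)
    return net_a == net_b
```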

For 3.0.x:

hc-slave -a -s10.139.9.40 -pfortinet  --> -a adds the unit into the cluster, -s defines the Master's heartbeat port IP, -p the cluster password.
The unit was successfully configured

For 4.0.x, 4.2.x, 4.4.x, 5.0.x:

hc-worker -a -s10.139.9.40 -pfortinet  --> -a adds the unit into the cluster, -s defines the Master's heartbeat port IP, -p the cluster password.
The unit was successfully configured

 

HC-setting verification for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:

 

hc-settings -l
SN: FSA-VM0000000456
Type: Primary Slave
Name: FSA2
Interface: port2

> hc-status -l
Status of master and primary slave units in cluster: TT
--------------------------------------------------------------------------------
SN                   Type            Name                 IP                   Active
FSA-VM0000000123     Master          FSA1                 10.139.9.40          1 second(s) ago
FSA-VM0000000456     Primary Slave   FSA2                 10.139.11.113        1 second(s) ago
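For scripted monitoring, the tabular hc-status output above can be parsed with a short sketch; the column layout is assumed from this sample and may differ between builds:

```python
# Hypothetical parser for the hc-status listing shown above. Data rows start
# with a serial number; the last three tokens form the "Active" age column
# (e.g. "1 second(s) ago"). This layout is an assumption from the sample.

def parse_hc_status(text):
    """Parse data rows of an hc-status listing into dicts."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0].startswith("FSA"):
            rows.append({
                "sn": parts[0],
                "type": " ".join(parts[1:-5]),   # "Master" or "Primary Slave"
                "name": parts[-5],
                "ip": parts[-4],
                "active": " ".join(parts[-3:]),  # heartbeat age
            })
    return rows
```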


Slave configuration.

 

IP ports configuration:

 

set port1-ip 10.5.27.160/20
set default-gw 10.5.31.254
set port2-ip 10.139.11.160/20
set port3-ip 10.138.11.160/20


IP ports verification:

 

show

 

Configured parameters:

 

Port 1  IPv4 IP: 10.5.27.160/20         MAC: 00:71:75:61:3C:01
Port 2  IPv4 IP: 10.139.11.160/20       MAC: 00:71:75:61:3C:02
Port 3  IPv4 IP: 10.138.11.160/20       MAC: 00:71:75:61:3C:03
Port 4  IPv4 IP: 192.168.3.99/24        MAC: 00:71:75:61:3C:04
Port 5  IPv4 IP: 192.168.4.99/24        MAC: 00:71:75:61:3C:05
Port 6  IPv4 IP: 192.168.5.99/24        MAC: 00:71:75:61:3C:06
IPv4 Default Gateway: 10.5.31.254


HC-setting configuration for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:

 

hc-settings -sc -tR -nFSA3 -iport2 --> -sc sets the cluster role and -tR defines the device role (Regular Slave), -n the device name, -i the heartbeat port.
The unit was successfully configured.

 

For 3.0.x:

hc-slave -a -s10.139.9.40 -pfortinet --> -a adds the unit into the cluster, -s defines the Master's heartbeat port IP, -p the cluster password.
The unit was successfully configured

 

For 4.0.x, 4.2.x, 4.4.x and 5.0.x:

hc-worker -a -s10.139.9.40 -pfortinet --> -a adds the unit into the cluster, -s defines the Master's heartbeat port IP, -p the cluster password.
The unit was successfully configured


HC-setting verification for 3.0.x, 4.0.x, 4.2.x, 4.4.x, and 5.0.x:

 

hc-settings -l
SN: FSA-VM0000000789
Type: Regular Slave
Name: FSA3
Interface: port2


More details are available in the administration guide on the Fortinet Document Library.