Starting from FortiNAC-F v7.6.0, it is possible to configure Custom Health Check in addition to the Predefined Health Check for High Availability. If this check fails, the primary FortiNAC will trigger a 'System Check Failed!' event and force failover to the secondary.
- Go to 'System -> Settings -> System Management -> Custom Health check -> Custom Health Check Rule', and select 'Create New':
- 'Name': provide a unique name for this rule.
- 'Type': select one of the available check types:
- ICMP – Pings the server.
- TCP Echo – Sends a TCP echo to server port 7 and expects the server to respond with the corresponding TCP echo.
- TCP – Specify the listening port number of the backend server as an input.
- RADIUS – Specify the listening port number of the backend RADIUS server. The default RADIUS port is 1812.
- 'Retry': Select the number of attempts to retry this health check rule. The default is 1.
- 'Timeout': Seconds to wait for a reply before assuming the health check has failed.
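Conceptually, each rule pairs a probe type with the 'Retry' and 'Timeout' settings above. As an illustration only (hypothetical helper, not FortiNAC code), a TCP-type check with retries and a timeout could be sketched as:

```python
import socket

def tcp_health_check(host: str, port: int, retries: int = 1, timeout: float = 3.0) -> bool:
    """Illustrative TCP health check: one initial attempt plus
    'retries' additional attempts, each bounded by 'timeout' seconds."""
    for attempt in range(retries + 1):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True  # Port accepted the connection: check passes.
        except OSError:
            continue  # Timed out or refused: retry if attempts remain.
    return False  # All attempts failed: check fails.
```

A rule that passes returns True as soon as any attempt connects; only when every attempt fails does the check report failure, which mirrors the retry semantics described above.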

- Go to 'System -> Settings -> System Management -> Custom Health check -> Custom Health Check', and select 'Add':
- 'Custom Health Check Rule': Select the rule that's been configured in the first step.
- 'Destination': Select the IP address of the target server for this check. Multiple destinations can be added and checked concurrently; AND logic applies, so if any one destination fails, the health check fails.
- Select 'Test' to confirm if the rule passes the check before applying the settings.
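The AND aggregation across destinations can be sketched as follows (hypothetical function names, illustrative only; FortiNAC probes the destinations concurrently, but the aggregation logic is the same):

```python
from typing import Callable, Iterable

def custom_health_check(destinations: Iterable[str],
                        probe: Callable[[str], bool]) -> bool:
    """AND logic: every destination must pass the probe for the
    overall custom health check to pass (illustrative only)."""
    return all(probe(dest) for dest in destinations)
```

With this logic, a single unreachable destination is enough to mark the whole custom health check as failed and trigger the failover described below.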

Troubleshooting:
The 'output.processManager' log file shows the status of the health check. It can be viewed using the command:
diagnose tail -f output.processManager
In normal scenarios when the health check is successful, the following output will be generated on the primary FortiNAC:
2025-02-18 14:57:30.271 +0100 [main] INFO yams.CampusManager - [getCustomHealthCheckResult]custom health check enabled
2025-02-18 14:57:30.286 +0100 [main] INFO yams.CampusManager - [isSystemOK] latest predefined result true custom health check result true, final result true
When the health check fails, the following output is generated:
- On the primary:
2025-02-18 15:57:25.611 +0100 [com.bsc.server.CustomHealthCheckTaskController] INFO c.f.f.l.s.CustomHealthCheckService - Protocol check name ICMP_rule timed out, try 0
2025-02-18 15:57:28.620 +0100 [com.bsc.server.CustomHealthCheckTaskController] INFO c.f.f.l.s.CustomHealthCheckService - Protocol check name ICMP_rule timed out, try 1
2025-02-18 15:57:30.716 +0100 [main] INFO yams.CampusManager - [getCustomHealthCheckResult]custom health check enabled
2025-02-18 15:57:30.726 +0100 [main] INFO yams.CampusManager - [isSystemOK] latest predefined result true custom health check result false, final result false
2025-02-18 15:57:30.726 +0100 [main] INFO yams.CampusManager - ******* System Check Failed! *******
2025-02-18 15:57:30.726 +0100 [main] INFO yams.CampusManager - ******* Changing status to - Secondary In Control *******
2025-02-18 15:57:30.727 +0100 [main] INFO yams.CampusManager - Sending Force Failover to trigger other servers
2025-02-18 15:57:31.236 +0100 [main] INFO yams.CampusManager - ******* Shutting Down - Secondary In Control *******
- On the secondary:
2025-02-18 09:57:51.651 -0500 [main] INFO yams.CampusManager - **** Primary responded with Secondary In Control ****
2025-02-18 09:57:51.758 -0500 [main] INFO yams.CampusManager - **** Failed to talk to primary and able to ping the gateway **** PingRetryCnt exceeded!
2025-02-18 09:57:51.759 -0500 [main] INFO yams.CampusManager - ******* Changing status to Secondary In Control *******
2025-02-18 09:57:51.760 -0500 [main] INFO yams.CampusManager - ******* Starting Primary Not in Control *******
Related documents:
Custom Health Check
High Availability (FortiNAC-OS)
What's New in FortiNAC F 7.6