This article describes a behavior observed in FortiNAC Manager related to CA Management, where the FortiNAC CA status appears as 'Failed' despite stable network communication and normal CA functionality. This behavior is isolated to standalone CAs, which cannot fail their services over to another node.
FortiNAC 7.6.3 and later versions.
Similar behavior can occur with existing or newly added servers. The CA status shows as normal for the first few minutes, and the CA is initially able to synchronize. Later on, the status can change to 'Unreachable Reconnecting':
And then, later on, to 'Failed':
While the CA is in this state, it is not possible to synchronize it manually, and the host/adapter information from this CA is not present in the NCM. This behavior is caused by the service checks performed on the CA. In this example, the DNS service (named) on the CA is failing, as seen in the process manager log:
diagnose tail -F output.processManager
2025-09-08 12:24:50.472 +0200 [main] INFO yams.CampusManager - Restarting named (retries = 3)
2025-09-08 12:24:50.969 +0200 [main] INFO yams.CampusManager - [getPredefinedHealthCheckResult]latest predefined health check result: true
2025-09-08 12:25:21.596 +0200 [main] INFO yams.CampusManager - named is not running!
2025-09-08 12:25:21.597 +0200 [main] INFO yams.CampusManager - [getPredefinedHealthCheckResult]latest predefined health check result: false
2025-09-08 12:25:21.601 +0200 [main] INFO yams.CampusManager - [getCustomHealthCheckResult]custom health check enabled
2025-09-08 12:25:21.603 +0200 [main] INFO yams.CampusManager - [getCustomHealthCheckResult] overall result true
2025-09-08 12:25:21.603 +0200 [main] INFO yams.CampusManager - [isSystemOK] latest predefined result false custom health check result true, final result false
2025-09-08 12:25:21.603 +0200 [main] INFO yams.CampusManager - ******* System Check Failed! *******
2025-09-08 12:25:21.905 +0200 [main] INFO yams.CampusManager - Loaders are running.
2025-09-08 12:25:21.905 +0200 [main] INFO yams.CampusManager - Processes are running
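The log above shows named being restarted and then reported as not running. To confirm the failing service on the CA itself, the underlying process can be checked from the appliance shell. The following is a minimal sketch, assuming root shell access is available through 'execute enter-shell' on FortiNAC-OS; standard Linux tools are then used to look for the BIND (named) process:

execute enter-shell
ps -ef | grep named

If no named process is listed, the predefined health check will keep failing until the DNS service is restored.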
Even though the Loaders and the Processes are running normally, the overall system health check result is false. As the log shows, the process manager combines the latest predefined health check result with the custom health check result, and a failure in either one marks the whole system check as failed. As a result, the status of this CA in the NCM changes to 'Failed' and synchronization with this node stops.
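The decision implied by the '[isSystemOK]' log line is a logical AND of the two checks. The following minimal Java sketch is illustrative only; the class and variable names are assumptions derived from the log tags and do not come from FortiNAC source code:

public class HealthCheckSketch {
    public static void main(String[] args) {
        // Values taken from the log excerpt above.
        boolean predefinedResult = false; // 'named is not running!' fails the predefined check
        boolean customResult = true;      // '[getCustomHealthCheckResult] overall result true'
        // Implied combination: both checks must pass for the system to be considered OK.
        boolean isSystemOK = predefinedResult && customResult;
        System.out.println("[isSystemOK] final result " + isSystemOK); // prints false
    }
}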
After the issue is resolved and all services are running on the CA, its status will change back to 'Running' within a few minutes. This can be verified from the GUI and also from the NCM CLI:
diagnose tail -F output.master
2025-09-08 14:11:23.504 +0200 [grpc-default-executor-24] INFO CAStatusChangeNotifyGRPCService - CAStatusChangeNotifyGRPCService get notification from coordinator : [.... CABasicInfo{id=302828805715968, caID='FNVXCATM2400xxx5', name='fnac76', ip='10.6.2.61', managerNCMID='FNVX-MTM25000xx6', status='RUNNING', uiPort='', role='STANDALONE', companion=[], lastTimeManualSyncTry=-1, lastTimeManualSyncSucceed=1757327170758, latestTimeStatusChange=-1, lastTimeManualSyncTry=-1, lastTimeManualSyncSucceed=0, manualSyncFailedRetryCount=0, landscape=452891056385, hasCertificates=true, physicalAddress=00:xx:xx:69:19:01, platformVersion=5, fosModel='FNAC_ESX'}]