FortiPAM
FortiPAM allows you to protect, isolate and secure privileged account credentials, manage and control privileged user access, and monitor and record privileged account activity.
ocara
Staff
Article Id 392375
Description

This article describes the steps required to configure FortiPAM in High Availability (HA) mode.

Scope

FortiPAM, FortiSRA.
Solution

FortiPAM supports HA clusters in active-passive mode, where all tasks are managed by the primary node, and configurations such as the secret database, targets, and templates are automatically synchronized with the secondary node. FortiPAM also supports unicast HA, which allows members located at different sites with individually assigned IP addresses to form an HA cluster.

 

  1. Configuration:

 

Below is a diagram showing how the FortiPAM HA cluster nodes are connected.

 

1.png

 

In this article, the primary FortiPAM node has three interfaces configured:

  • Port1 (10.10.20.20) – This interface provides GUI access to FortiPAM. A firewall virtual IP is configured on this interface, and access to secrets is also handled through it.
  • Port2 (10.74.122.1) – This interface is used as the heartbeat interface for HA communication. For the nodes to successfully join the cluster, it is recommended that these interfaces be in the same broadcast domain.
  • Port4 (10.5.25.4) – This is the management interface, used for SSH access to the FortiPAM nodes after synchronization is completed.
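
For reference, a minimal CLI sketch of the corresponding interface configuration on the primary node is shown below. The interface names and IP addresses follow the example above; the /24 masks and the allowed services are assumptions and should be adapted to the actual environment.

config system interface
    edit "port1"
        set ip 10.10.20.20 255.255.255.0
        set allowaccess ping https        <----- GUI and secret access.
    next
    edit "port2"
        set ip 10.74.122.1 255.255.255.0
        set allowaccess ping              <----- Heartbeat interface.
    next
    edit "port4"
        set ip 10.5.25.4 255.255.255.0
        set allowaccess ssh               <----- Management interface, SSH only.
    next
end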

 

Screenshots of the configured interfaces:

 

2.png

 

High Availability configuration on the primary node:

 

3.png

 

Mode:

Active-Passive. Device Priority: 200 (on the primary node, the priority should be higher than on the secondary node).

 

Cluster Settings:

  • Group Name: A specific name used for the cluster. This must match on both nodes.
  • Password: The password required for the nodes to join the cluster.
  • Monitored Interfaces: Typically, this is the interface used for launching secrets, targets, and GUI access.
  • Heartbeat Interface: The interface used for heartbeat communication between FortiPAM nodes.
    • If both nodes are in the same network, the heartbeat interfaces should remain within the same IP subnet and broadcast domain.
    • If the nodes are in different subnets or locations, enable Unicast Status.
      Note: A full Unicast HA configuration is out of scope for this article; a brief example is included in the note at the end.
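
On the CLI, the mode, priority, and cluster settings described above map to options under 'config system ha'. This is a minimal sketch based on the FortiOS-style HA syntax; the group name and interface names follow this example, and the password value is a placeholder.

config system ha
    set group-name "PAMHA"
    set mode active-passive
    set password <cluster-password>    <----- Must match on both nodes.
    set hbdev "port2" 50               <----- Heartbeat interface and heartbeat priority.
    set monitor "port1"                <----- Monitored interface (GUI and secret access).
    set priority 200                   <----- Higher value on the intended primary node.
end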

 

Management Interface Reservations:

This step allows the configuration of interfaces designated for SSH access to FortiPAM.

The interface used for SSH should:

  • Have only the SSH service enabled.
  • Not have GUI access enabled.
  • Not have any static routes associated.
  • Have its gateway defined under the HA settings.
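
Assuming FortiPAM follows the FortiOS-style management-interface reservation syntax, the reserved SSH interface and its gateway would be defined similarly to the sketch below. Port4 is taken from this example, and the gateway value is a placeholder.

config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "port4"      <----- SSH-only management interface.
            set gateway <gateway-ip>   <----- Gateway defined under the HA settings.
        next
    end
end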

 

Override:

It is recommended to enable this option.

When enabled, the original primary device automatically resumes the primary role when it rejoins the cluster after a failure or reboot.
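
On the CLI, this corresponds to the override option under the HA configuration (a minimal sketch):

config system ha
    set override enable    <----- Primary resumes its role after rejoining the cluster.
end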

 

Important Note:

Make sure that both FortiPAM nodes have the same group-id configured. The group-id is set automatically by the system, so it is necessary to verify that both nodes have the same value; otherwise, they will not be able to join the cluster. This can be checked only via the CLI, as shown below:

 

config system ha

    set group-id 0       <--- This value should be the same on both nodes that are going to join the cluster.

    set group-name "PAMHA"

    set mode active-passive

    …

 

Corresponding High Availability configuration on the secondary node:

 

4.png

 

After enabling high availability, the nodes will negotiate to form a cluster. The node with the higher device priority will be designated as the primary node.

Access to the primary node should remain available via:

  • Port1 for GUI access.
  • Port4 for SSH access.

 

GUI access to the secondary node will no longer be available. Only SSH access, as configured under Management Interface Reservation, will be permitted.

 

The HA status should reflect a synchronized state as shown below:

 

5.png

 

  2. Troubleshooting from the CLI:

The following commands can assist with verification and troubleshooting of HA status and synchronization:

 

get system ha status

 

6.png

 

Verify via the system ARP table that the Layer 3 HA IP addresses and the heartbeat link-local addresses (169.254.x.x) are being learned correctly.

 

get system arp

 

Address           Age(min)   Hardware Addr      Interface

10.74.122.2       0          00:69:6f:6e:01:02 port2      <------ HA-Interface, IP from other node.

169.254.0.33      -          00:69:6f:6e:01:02 port2      <------ Link-Local Heartbeat IP addresses.

10.5.63.254       0          00:09:0f:09:fe:23 port3

10.5.31.254       0          00:09:0f:09:fe:23 port4

169.254.0.33      0          00:69:6f:6e:01:02 port_ha

10.10.20.1        90         00:78:65:6e:b1:01 port1

 

To verify checksum status:

 

diagnose sys ha checksum cluster

 

================== FPAVULTM2300###### ==================

 

is_manage_primary()=1, is_root_primary()=1

debugzone

global: 3d 16 77 dd e5 48 4f 35 17 df 14 1a 9c 9c d1 12

root: f1 7b fe 74 48 bd 4c 6c 36 18 4d 19 aa 07 f2 5f

all: 63 e5 50 ae 64 53 23 fb a2 2d 65 d2 b4 b6 5b c4

 

checksum

global: 3d 16 77 dd e5 48 4f 35 17 df 14 1a 9c 9c d1 12

root: f1 7b fe 74 48 bd 4c 6c 36 18 4d 19 aa 07 f2 5f

all: 63 e5 50 ae 64 53 23 fb a2 2d 65 d2 b4 b6 5b c4

 

================== FPAVULTM2###### ==================

 

is_manage_primary()=0, is_root_primary()=0

debugzone

global: 3d 16 77 dd e5 48 4f 35 17 df 14 1a 9c 9c d1 12

root: f1 7b fe 74 48 bd 4c 6c 36 18 4d 19 aa 07 f2 5f

all: 63 e5 50 ae 64 53 23 fb a2 2d 65 d2 b4 b6 5b c4

 

checksum

global: 3d 16 77 dd e5 48 4f 35 17 df 14 1a 9c 9c d1 12

root: f1 7b fe 74 48 bd 4c 6c 36 18 4d 19 aa 07 f2 5f

all: 63 e5 50 ae 64 53 23 fb a2 2d 65 d2 b4 b6 5b c4

 

To list the HA checksum tables (run on both nodes to compare):

 

diagnose sys ha checksum test

 

system.global: d22acc890a3f12fb418693dcf58815ec

system.accprofile: 8d96b9ce6ac2b02e02de0ce8011598c5

firewall.shaping-profile: 00000000000000000000000000000000

system.interface: 5873dd45edd01f09c1ef2e7819369e8e

system.password-policy: cbdb0648d24eceb433ad338e20e06a54

system.password-policy-guest-admin: 00000000000000000000000000000000

system.sms-server: 00000000000000000000000000000000

system.custom-language: ee62389ec8766e8436b032d6a6b04527

system.admin: ec293aaa0b6920044fc24918356bcc73

system.api-user: 00000000000000000000000000000000

system.sso-admin: 5873dd45edd01f09c1ef2e7819369e8e

system.sso-forticloud-admin: 00000000000000000000000000000000

system.maintenance: 165f95de755e0462ae7c5a2603ad7f5e

system.fsso-polling: 00000000000000000000000000000000

system.ha: e734318dc22d6a84a3a1438ede55670c

system.ha-lic: 1a0f1bcf804ecf2fc3b96d76dabf5ae3

system.ha-monitor: 00000000000000000000000000000000

system.storage: 2b3c98f4a5b22fa725b4991d774f7226

system.dns: a9db895811b1cbcd79e0499eeb371baf

system.ddns: 00000000000000000000000000000000

system.sflow: 00000000000000000000000000000000

system.netflow: 00000000000000000000000000000000

system.replacemsg-image: 7d11e16534085c25abe076a9e1622887

 

To recalculate checksums:

 

diagnose sys ha checksum recalculate

 

A manual sync can be triggered:

 

execute ha synchronize start

starting synchronize with HA master...

 

If the secondary node is unable to join the cluster, it is recommended to reboot it; the reboot triggers the cluster join process again from the secondary node.
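
The reboot can be issued from the secondary node's CLI:

execute reboot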

 

With the checksum information above, it is possible to identify exactly where the configuration mismatch exists. In this instance, the configuration of the two FortiPAM units should be compared under the following sections (a command for drilling down into a specific table is shown after the list):

 

config system global
config system interface
config system ha
config system console
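
To narrow a mismatch down to a specific table, the per-object checksums can usually be displayed as well. The example below follows the standard FortiOS diagnostic syntax and assumes FortiPAM exposes the same command; compare the output on both nodes.

diagnose sys ha checksum show global system.interface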

 

  3. Debug Commands:

 

diagnose debug app hasync 255

diagnose debug application hatalk -1  <----- To check the Heartbeat communication between HA devices.

diagnose debug application hasync -1  <----- To check the HA synchronization process.

diagnose debug enable

 

 

The debug output should look similar to the following:

 

<hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570419

<hatalk> parse options for 'FPAVULTM23000761', packet_version=1

<hatalk> new member 'FPAVULTM23000761' is added into group

<hatalk> [hatalk_gmember_update_last_hb_jiffies:209] recv hb packet from 'FPAVULTM23000761' on hbdev='port2' since last_lost_jiffies=0/7296513

<hatalk> vcluster_0: vmember 'FPAVULTM23000761' updated, override=1, usr_priority=150, mondev/pingsvr=0/0, uptime/reset_count=0/0, flag=0x00000001

<hatalk> cfg_changed is set to 1: hatalk_vcluster_add_vmember

<hatalk> vcluster_0: 'FPAVULTM23000759' is elected as the cluster primary of 2 members

<hatalk> vcluster_0: state changed, 2(work)->2(work)

<hatalk> vcluster_0: delay work_as_primary in timer

<hatalk> vcluster_0: reelect=1, vmember updated

<hatalk> reap child 2048 (0)

<hatalk> reap child 2049 (0)

<hatalk> reap child 2050 (0)

<hatalk> vcluster_0: reelect=0, hatalk_vcluster_timer_func

<hatalk> vcluster_0: 'FPAVULTM23000759' is elected as the cluster primary of 2 members

<hatalk> vcluster_0: state changed, 2(work)->2(work)

<hatalk> vcluster_0: delay work_as_primary in timer

<hatalk> cfg_changed is set to 0: hatalk_packet_setup_heartbeat

<hatalk> setup new heartbeat packet: hbdev='port2', packet_version=6

<hatalk> options buf is small: opt_type=41(DEVINFO), opt_sz=13806, buf_sz=1178

<hatalk> pack compressed dev_info: dev_nr=4, orig_sz=13800, z_len=70

<hatalk> heartbeat packet is set on hbdev 'port2'

<hatalk> [hatalk_timer_func:1146] need_set_haip=1

<hatalk> 'port2' is selected as 'port_ha'

<hatalk> change port_ha ip to 169.254.0.34

<hatalk> reap child 2051 (0)

<hatalk> parse options for 'FPAVULTM23000761', packet_version=2

<hatalk> vcluster_0: vmember 'FPAVULTM23000761' updated, override=1, usr_priority=150, mondev/pingsvr=0/0, uptime/reset_count=0/0, flag=0x00000000

<hatalk> vcluster_0: 'FPAVULTM23000759' is elected as the cluster primary of 2 members

FPAVULTM23000759 # <hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570540

<hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570550

<hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570560

<hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570570

FPAVULTM23000759 # <hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570580

<hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570590

FPAVULTM23000759 # <hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570631

<hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570641

<hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570651

<hatalk> vcluster_0: ha_prio=0(primary), state/chg_time/now=2(work)/1747497460/1747570661
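
Once troubleshooting is complete, the debug output can be stopped with the standard debug commands (assuming FortiPAM uses the same debug framework as FortiOS):

diagnose debug disable
diagnose debug reset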

 

Note:

In scenarios where an additional DR node is located at a different site (in a separate broadcast domain), it is important to enable 'Unicast Status' in the High Availability cluster configuration, as shown below:

 

image (21).png

 

This can be configured via CLI as well:

 

config system ha

    set unicast-status enable       <----- Enables Unicast HA.

    set unicast-gateway X.X.X.X

    config unicast-peers

        edit 1

            set peer-ip Y.Y.Y.Y

        next

    end

end

 

To allow HA synchronization while maintaining independent IP addressing, FortiPAM must be configured to exclude specific items from HA sync. The following components can be excluded:

  • Firewall Virtual IPs (VIPs).
  • System interfaces.
  • Static routes.
  • SAML configuration.
  • FortiToken Mobile Push configuration.

 

This can be achieved by configuring vdom-exceptions, which allow administrators to specify which objects should be excluded from HA configuration synchronization.

 

An example configuration of vdom-exceptions is shown below:

 

show system vdom-exception

config system vdom-exception

    edit 1

        set object firewall.vip

    next

    edit 2

        set object system.interface

    next

    edit 3

        set object router.static

    next

    edit 4

        set object user.saml

    next

    edit 5

        set object system.ftm-push

    next

end