Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.

aggregate members disappear on the slave of an HA cluster



 It's me again with another strange problem. This happened on a customer's site. Any help will be appreciated.


  • 2 x FG1000C with firmware version 5.6.0.
  • 2 x Cisco Catalyst switches in a stacked setup; from our point of view they act as a single device.
  • 1 x Flex chassis with 2 EN2092 switches, which are neither stacked nor connected to each other.




  • Production environment (I can't just crank the config until it works :) ).
  • 2 VDOMs: Transparent (transparent mode) and root (NAT).
  • 2 aggregates: LA-Catalyst and LA-Flex, configured at the global level, each using a different LAG group.
  • Several VLAN subinterfaces on these aggregates, all assigned to their corresponding VDOM.
  • Ports port1 and port2 will be used for HA.
  • LA-Catalyst uses ports port6, port8 and port10.
  • LA-Flex uses ports port18, port20 and port22.
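For reference, the aggregates were created along these lines. This is a sketch from memory in FortiOS 5.6 syntax; the VLAN subinterfaces and VDOM assignments are omitted:

```
config global
    config system interface
        edit "LA-Catalyst"
            set type aggregate
            set member "port6" "port8" "port10"
        next
        edit "LA-Flex"
            set type aggregate
            set member "port18" "port20" "port22"
        next
    end
end
```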


  • The configuration works on either node A or node B.


    Both FG1000C units should work as an active-passive cluster with link monitoring enabled on "LA-Catalyst" and "LA-Flex". At least on the active node, these interfaces must be aggregated to their respective switches.
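Roughly, the HA block looked like this. Again a sketch, not the exact config; the group name is a placeholder and the password is omitted:

```
config global
    config system ha
        set mode a-p
        # "cluster1" is a placeholder group name
        set group-name "cluster1"
        set hbdev "port1" 50 "port2" 50
        set monitor "LA-Catalyst" "LA-Flex"
    end
end
```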

    The problem

    I configured the HA cluster and noticed that the passive node had an out-of-sync configuration... forever.


    I dug around and found that the passive node had only one member interface per aggregate: port6 on LA-Catalyst and port18 on LA-Flex. The missing interfaces were nowhere to be found in the GUI: they were not listed as members of the aggregates, nor available for use. It was exactly the same in the CLI, though I could still "see" them with show system interface portX.

    Then, on the slave node, I tried to add the missing members (config system interface, edit xxx, set member blah) and got this error: "entry not found in datasource".
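To be precise, the failing sequence on the slave looked like this ("FG-B" is a placeholder hostname, and LA-Catalyst is the example; LA-Flex behaved the same way):

```
FG-B (global) # config system interface
FG-B (interface) # edit "LA-Catalyst"
FG-B (LA-Catalyst) # set member "port6" "port8" "port10"
entry not found in datasource
```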


  1 REPLY

    FWIW, an update. 

    The customer couldn't find a maintenance window until yesterday. I re-created the cluster following these steps:

  • Backed up the configuration of nodes "A" and "B".
  • Copied the "config system storage" block from "B"'s backup into "A"'s backup.
  • Copied the "set hostname" and "set alias" values from "B"'s backup into "A"'s backup.
  • Assigned another IP to the management interface.
  • Re-created the cluster. Node "A"'s priority was higher, of course.
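In other words, the edited backup for node "A" carried these blocks over from node "B" before restoring. The values below are placeholders, not the real ones:

```
# copied verbatim from node "B"'s backup, contents left untouched
config system storage
    ...
end

config system global
    # placeholder values; the real ones came from "B"'s backup
    set hostname "FG-B"
    set alias "FG-B"
end
```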

     And voilà, it worked. I'd love to say "it was because of...", but I can't. I did the same thing the last time I was at the customer's premises. I know that nothing was changed on the FortiGates, but I can't be certain that they didn't change something on the switches.


     The other possibility, though this is a long shot, is that the slave unit was licensed on the very same day I was doing the initial configuration.


