Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
New Contributor

FG virtual-Clustering with 2 vclusters



Is there a way to connect 2 vclusters with links, similar to VDOM links, in a 4-node setup?

I am currently unable to do this.

The documentation says it is not possible.

We have 4 vdoms:


Perimeter_vdom_internal     ----->vcluster1

DCFW_vdom_internal          ----->vcluster1


Perimeter_vdom_customer   ----->vcluster2

DCFW_vdom_customer        ----->vcluster2



The entire 4-node cluster is stretched across 2 datacenters.

We want 2 nodes to be master for their respective VDOMs, so that we do not have 3 passive nodes and traffic is balanced between 2 physical nodes.


The base setup is done, but now we have the problem that we can't connect DCFW_vdom_internal to DCFW_vdom_customer, which is very bad because we have to connect to the servers located in the customer VDOM (all VLANs are tagged on a trunk on all 4 nodes, and each VLAN is assigned to its respective VDOM).


Is there any possibility to do this without using a third-party routing device that routes between the virtual clusters?


Please see the attached picture 


Thank you in advance!

BR Martin











Valued Contributor

Hi Martin,


Actually I don't see a reason why this shouldn't work. You are missing a lot of details in terms of Layer 2 and Layer 3 connectivity, so it is hard to guess what is going wrong.


Using VDOM links or NPU vlinks is possible and supported with virtual clusters, but it would force the VDOMs using that link to stay on the same node, which might not be what you want in terms of load sharing.
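For reference, a VDOM link within a single vcluster is created roughly like this (a minimal sketch; the link name, VDOM names, and addresses are made-up examples, and both endpoint VDOMs must sit in the same vcluster — which is exactly the restriction in question here):

```
# global context: creating the link produces two interfaces, icl0 and icl1
config system vdom-link
    edit "icl"
        set type ethernet   # default is ppp; ethernet behaves like a normal L2 link
    next
end
# assign each end to a VDOM and give it an address
config system interface
    edit "icl0"
        set vdom "DCFW_vdom_internal"
        set ip 10.255.0.1 255.255.255.252
    next
    edit "icl1"
        set vdom "DCFW_vdom_customer"   # would be rejected here: different vcluster
        set ip 10.255.0.2 255.255.255.252
    next
end
```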




Esteemed Contributor III

Yes, the vdom-link restriction would come back to haunt you. If you have a bigger platform (i.e. more interfaces), I would build the setup using physical links, possibly in a LAG.
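A sketch of that physical-link idea, assuming spare ports cabled back to back (port names and addressing are placeholders, not from the original post):

```
# ports 25/26 cabled directly to ports 27/28 on the same chassis
config system interface
    edit "lag-int-side"
        set type aggregate
        set member "port25" "port26"
        set vdom "DCFW_vdom_internal"
        set ip 10.255.1.1 255.255.255.252
    next
    edit "lag-cust-side"
        set type aggregate
        set member "port27" "port28"
        set vdom "DCFW_vdom_customer"
        set ip 10.255.1.2 255.255.255.252
    next
end
```

Caveat: a direct per-chassis cable only helps while both vcluster masters are on the same chassis; if the masters can land on different nodes, the physical path should run via the switches instead.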


Ken Felix





PCNSE NSE StrongSwan
New Contributor

Hello Roman, 


thank you for your reply :)


Yesterday I found this.


It says:

With virtual clusters (vclusters) configured, inter-VDOM links must be entirely within one vcluster. You cannot create links between vclusters, and you cannot move a VDOM that is linked into another virtual cluster.



So we actually have 2 vclusters, one for internal and one for customer traffic.

The idea behind this was to have 2 physically active boxes and a separation between customer and internal traffic.

Our current firmware is 6.0.3.


Yesterday I tried to create those VDOM links via the CLI (both a regular vdom-link and an NPU vlink), but unfortunately it did not work.


Here is another sketch of our infrastructure.

All VLANs should terminate at the firewall; I don't want to use the OSPF underlay network for VLAN routing, this should be done by the firewall.


If this does not work with 2 virtual clusters / VDOM partitioning, we will go ahead with an active-active 4-node cluster and all VDOMs in proxy mode.


BR Martin 









Valued Contributor

Martin_36 wrote:

 If this does not work with 2 virtual clusters / VDOM partitioning, we will go ahead with an active-active 4-node cluster and all VDOMs in proxy mode.



As you have a split datacenter, and you normally want to process firewall traffic primarily on one side, I would not go for an A/A cluster, because traffic might be sent unnecessarily between the datacenters multiple times. Troubleshooting will also get more complex.


And keep in mind that ONLY sessions which result in proxied AV scanning get offloaded to the subsidiary units. I don't know if you will really gain that much from an A/A config!


I prefer A/P clusters and also virtual clustering. But I always build designs where any transport between the VDOMs happens on VLANs on the switches between the FortiGates. As you already use several LACP trunks, this shouldn't be a problem at all.
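The switch-transit approach could look roughly like this (a sketch only; trunk names, VLAN ID, and addresses are invented for illustration). Note that the same VLAN ID cannot be used twice on the same parent interface, which is why each VDOM gets its subinterface on a different trunk and the switches bridge the VLAN between them:

```
config system interface
    # transit VLAN 250 for DCFW_vdom_internal on trunk "lacp1"
    edit "transit-internal"
        set interface "lacp1"
        set vlanid 250
        set vdom "DCFW_vdom_internal"
        set ip 10.250.0.1 255.255.255.252
    next
    # same VLAN 250 for DCFW_vdom_customer, but on a second trunk "lacp2"
    edit "transit-customer"
        set interface "lacp2"
        set vlanid 250
        set vdom "DCFW_vdom_customer"
        set ip 10.250.0.2 255.255.255.252
    next
end
```

Because the path runs through the switches, it keeps working even when the two vcluster masters sit on different chassis.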





Hello all,


thanks for your answers!

We will go ahead with transfer networks on each vcluster and join those networks to the underlying OSPF area (on the switches, Dell S5248-ON) to tie the two vclusters together.


Bandwidth will be no problem, we have 40G DCI over CWDM & DWDM.


So why not use the OSPF underlay :)

The transfer networks will be local VLANs on each site (not joined to the VXLAN virtual networks), because we can't assign an IP address to a VLAN which is a member of a VXLAN virtual network.


So there will be 2 VLANs on each site (1x customer & 1x internal).
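Per vcluster, that plan might look roughly like this on the FortiGate side (a sketch under assumptions: trunk name, VLAN ID, router ID, and prefix are placeholders; the matching SVI and OSPF peering lives on the Dell switches):

```
config vdom
    edit DCFW_vdom_internal
        config system interface
            edit "xfer-internal"
                set interface "lacp1"
                set vlanid 310
                set ip 10.255.3.1 255.255.255.252
            next
        end
        config router ospf
            set router-id 10.255.3.1
            config area
                edit 0.0.0.0
                next
            end
            config network
                edit 1
                    set prefix 10.255.3.0 255.255.255.252
                    set area 0.0.0.0
                next
            end
        end
    next
end
```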


We will test the config and give feedback if it works as expected.


BR Martin 




We solved the case with a VLAN connected to the root VDOM.

Then we created 2 EMAC VLANs which are attached to that VLAN interface in the root VDOM and assigned them to the corresponding VDOM in each vcluster.


We created the corresponding routes and policies and, bam, it works :)

So there is no need to use the OSPF underlay for it.
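For anyone finding this thread later, the working setup can be sketched like this (interface names, VLAN ID, trunk name, and addresses are examples, not the poster's actual values; this requires a FortiOS release that supports the `emac-vlan` interface type):

```
config system interface
    # parent VLAN interface, owned by the root VDOM
    edit "vlan-transit"
        set vdom "root"
        set interface "lacp1"
        set vlanid 300
        set ip 10.255.2.1 255.255.255.248
    next
    # EMAC VLANs on top of the parent, one per vcluster VDOM
    edit "emac-internal"
        set type emac-vlan
        set interface "vlan-transit"
        set vdom "DCFW_vdom_internal"
        set ip 10.255.2.2 255.255.255.248
    next
    edit "emac-customer"
        set type emac-vlan
        set interface "vlan-transit"
        set vdom "DCFW_vdom_customer"
        set ip 10.255.2.3 255.255.255.248
    next
end
```

With that in place, each VDOM just needs a static route pointing at the other side's EMAC VLAN address (e.g. in DCFW_vdom_internal, a route to the customer subnets via 10.255.2.3 on "emac-internal") plus the matching firewall policies.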


BR Martin 

