wsal
New Contributor II

Multiple OSPF processes in a VDOM

Hey, I'm planning to implement VDOMs, but I've run into a problem that I don't know how to solve or even how to approach.

I am planning to use 4 VDOMs on my FortiGate 400F.

[Attachment: ospf vdom.jpg]

 

In each VDOM I will have VIPs on public IP addresses, which I distribute to my edge routers via static blackhole routes.

I tried it in the lab and it seems to work. I have a public subnet with a /23 mask and various VIPs from that subnet, distributed from different VDOMs to the routers.
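Roughly what I have working in the lab, with example addresses and names instead of my real ones (FortiOS CLI from memory, so treat it as a sketch):

config firewall vip
    edit "vip-example"
        set extip 203.0.113.10                 # one public IP out of the /23
        set mappedip "10.10.10.10"             # internal server behind this VDOM
        set extintf "any"
    next
end

config router static
    edit 10
        set dst 203.0.113.10 255.255.255.255   # host route matching the VIP
        set blackhole enable                   # blackhole, only exists to be redistributed
    next
end

config router ospf
    config redistribute "static"
        set status enable                      # pushes the blackhole /32s to the edge routers
    end
end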

The problem is that in each VDOM I will have a large number of connected VLANs, which I also wanted to distribute between the VDOMs over vdom-links via OSPF.

The only problem is that I don't see how I can add a new OSPF process that uses other interfaces (vdom_link) to distribute the connected VLANs.

I can advertise the VLANs to the routers via OSPF, but I would like traffic between VLANs in different VDOMs to go over the vdom-link. That seems like a better idea to me than using the WAN interfaces.

I could use RIP on the vdom-link to distribute the VLANs between VDOMs, but I prefer OSPF.
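As far as I can tell FortiOS only allows one OSPF process per VDOM, so I assume I would have to hang the vdom_link interface off the same process I already run toward the routers and redistribute the connected VLANs into it. A rough sketch of what I mean, with made-up addressing:

config router ospf
    config area
        edit 0.0.0.0
        next
    end
    config network
        edit 2
            set prefix 10.255.0.0 255.255.255.252   # transit subnet on the inter-VDOM link
            set area 0.0.0.0
        next
    end
    config redistribute "connected"
        set status enable                           # advertise all connected VLANs to the neighbour VDOM
    end
end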

Do you think my concept is correct?

17 REPLIES
Toshi_Esumi

I don't know the exact bandwidth/capacity of an npu-vlink (a plain vdom-link is definitely slower because traffic has to come back to the CPU instead of staying off-loaded), but it must be much higher than the 10Gbps physical interfaces on the 400F as long as the NPU is properly utilized. In practice it would be negligible compared to UTM performance anyway.

wsal
New Contributor II

Currently my east-west traffic, looking at the aggregate to the core switch that carries a trunk for all VLANs, averages 800 Mbps to 1 Gbps with about 100,000 sessions; occasionally it rises to around 8 Gbps when larger data sets are migrated between VLANs. After the split into VDOMs the traffic may increase, because some servers will end up in different VLANs - right now a lot of the communication stays within the same VLAN, which never touches the FortiGate. I currently have a 600E and we traded up to a 400F, where I want to use VDOMs. Unfortunately, the 400F only supports 2 NPU links, so I can use them for bidirectional communication between 2 VDOMs; the remaining inter-VDOM traffic would have to go over the slower vdom-link, but looking at the specifications it should be enough. I will use the NPU links for the two largest VDOMs and the rest will go via standard vdom-links.

Toshi_Esumi

You can stack many VLANs on one pair of npu-vlink interfaces, so you shouldn't have to use vdom-links at all. You can put the root-vdom1 and vdom1-vdom2 VLANs on one npu-vlink, as well as all the other pairs. We have 1500Ds with more than 50 VDOMs, and they all use the same npu-vlink to connect to the root VDOM.
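For example, something along these lines in the global config (syntax from memory; the names, VLAN IDs and addressing are just placeholders to adapt):

config global
    config system interface
        edit "root_v1_1000"
            set vdom "root"
            set interface "npu0_vlink0"         # one side of the NPU vlink pair
            set vlanid 1000
            set ip 10.255.0.1 255.255.255.252
            set allowaccess ping
        next
        edit "v1_root_1000"
            set vdom "vdom1"
            set interface "npu0_vlink1"         # the other side of the same pair
            set vlanid 1000
            set ip 10.255.0.2 255.255.255.252
            set allowaccess ping
        next
        edit "v1_v2_1001"
            set vdom "vdom1"
            set interface "npu0_vlink0"
            set vlanid 1001                     # second VLAN, second pair of VDOMs, same vlink
            set ip 10.255.0.5 255.255.255.252
            set allowaccess ping
        next
        edit "v2_v1_1001"
            set vdom "vdom2"
            set interface "npu0_vlink1"
            set vlanid 1001
            set ip 10.255.0.6 255.255.255.252
            set allowaccess ping
        next
    end
end

Each pair of VDOMs just gets its own VLAN ID, but they all ride on the same physical npu-vlink.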

 

Toshi

wsal
New Contributor II

I don't think I understand it fully; I'll have to check it in the lab tomorrow and send you a screenshot. If I remember correctly, in the global VDOM I saw something like npu0 link0 and npu0 link1, and by editing those interfaces I could assign VDOMs and addresses to them. So I chose, e.g., root and vdom1 for npu link0, and root and vdom2 for link1, and then I ran out of options for the other VDOMs, not to mention VLANs. I don't currently have access to that infrastructure.

wsal
New Contributor II

Hi,

Sorry for the long pause.

I got back to my lab.

I created VLAN 1000 on npu0 link0 and npu0 link1, and assigned the link0 VLAN to one VDOM and the link1 VLAN to the other VDOM.

Is this how it should be done?
I.e., one VLAN ID becomes a dedicated connection between two VDOMs.


So I create the same VLAN on both vlink interfaces and simply assign each side to a different VDOM as the inter-VDOM link? [Attachment: npu.png]

Toshi_Esumi

If the "ROTP_VLAN" is VLAN 1000, yes. That's how npu-vlink can be used between two vdoms. Then you can create like VLAN 1001 between another set of vdoms.

 

Toshi
