Support Forum
tioeudes
Contributor

Throughput on inter vdom link

Hello All.

 

Has anyone ever measured the max throughput on the inter-VDOM links?

I looked for this information online but wasn't able to find it.

 

1 Solution
Yurisk
Valued Contributor

I did not. On second thought, I can't imagine much value in such testing - the feature is purely software-based and done on the CPU, so it will depend heavily on the hardware model and the CPU load at any given time, i.e. measuring even the same link at different times would give different results.

Yuri https://yurisk.info/  blog: All things Fortinet, no ads.


6 REPLIES
James_G
Contributor III

If you have an NPU VDOM link available, it will be as fast as the physical interface you are routing from, since it uses hardware offload.
emnoc
Esteemed Contributor III

We have, and you will not get throughput as high as with real interfaces. The problem lies in the NPU more than the CPU, if anything.

 

NP4 vs NP6 vs NP6LITE vs NP7, and so on.

 

 

Ken Felix

SCTG-MS

PCNSE 

NSE 

StrongSwan  

Benoit_Rech_FTNT

Hello. On CPU-based models, an inter-VDOM link uses the CPU to forward traffic, so the throughput is limited by the CPU.

On NP6Lite, NP7Lite, NP6, and NP7 based devices, you can use NPU_vlink as inter-VDOM links, which will not use the CPU. Basically, the maximum throughput is 10Gbps per npu_vlink for NP6 (10Gbps per XAUI) and 100Gbps per npu_vlink for NP7. There is information about NP6Lite, NP7Lite, NP6, and NP7 in the hardware acceleration guide: https://docs.fortinet.com/document/fortigate/6.4.0/hardware-acceleration/575471/network-processors-n... Benoit
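As a rough sketch of what Benoit describes: on NP6 platforms the two ends of the accelerated link typically show up as pre-existing interfaces (e.g. npu0_vlink0 and npu0_vlink1), and you simply assign each end to a VDOM. VDOM names and IP addresses below are illustrative, not from the thread:

```
config global
    config system interface
        edit "npu0_vlink0"
            set vdom "VDOM-A"
            # one end of the NPU-accelerated link; example address
            set ip 10.255.0.1 255.255.255.252
            set allowaccess ping
        next
        edit "npu0_vlink1"
            set vdom "VDOM-B"
            # the other end, wired to the first in the NPU
            set ip 10.255.0.2 255.255.255.252
            set allowaccess ping
        next
    end
end
```

With routing and firewall policies added in each VDOM, traffic between the two VDOMs is then forwarded by the NPU rather than the CPU.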

tioeudes

Thanks everyone for the feedback.

 

I agree with you guys. Thinking about how the inter-VDOM links work, the max throughput would probably depend on the processing capacity of the device, and it could also become a problem if the overall CPU use of the device is already high.

 

So, if the throughput of inter-VDOM connections is very important in my topology, the best alternative is to use physical interfaces and a switch for interconnection, right?

 

Regards.

 

Benoit_Rech_FTNT

Hi Eudes,

My recommendation would be to use npu_vlink. Depending on your hardware, you can have multiple npu_vlinks, and if you have many inter-VDOM links, the best solution is to use different npu_vlinks: it will naturally distribute the traffic across the different NP6 (NP7) processors. If you don't have multiple npu_vlinks, you can create VLANs on top of an npu_vlink (as you would on a physical or aggregate interface). If you prefer to use physical interfaces, then I recommend you connect the interfaces back-to-back, without going through a switch. Again, you can use VLANs on top of these interfaces. The drawback of this solution is that you 'use' two physical interfaces, SFPs, and a cable, which can be problematic and increase the cost.
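A sketch of the VLAN-on-npu_vlink approach Benoit mentions, which lets one accelerated link carry several inter-VDOM connections. Interface names, VLAN ID, and addresses here are assumed examples:

```
config global
    config system interface
        edit "vlink0.100"
            set vdom "VDOM-A"
            set interface "npu0_vlink0"   # VLAN subinterface on one end
            set vlanid 100
            set ip 10.255.100.1 255.255.255.252
        next
        edit "vlink1.100"
            set vdom "VDOM-B"
            set interface "npu0_vlink1"   # matching VLAN on the other end
            set vlanid 100
            set ip 10.255.100.2 255.255.255.252
        next
    end
end
```

Repeating this with different VLAN IDs (e.g. 101, 102, ...) gives additional point-to-point links between VDOM pairs over the same npu_vlink.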

Best regards. Benoit
