Throughput on inter-VDOM links
Hello All.
Has anyone ever measured the maximum throughput of the inter-VDOM links?
I looked for this information online but wasn't able to find it.
I did not. On second thought, I can't imagine much value in such testing: the feature is purely software-based and handled by the CPU, so throughput will depend heavily on the hardware model and the CPU load at any given moment, i.e. measuring even the same link at different times would give different results.
We have, and you will not get throughput as high as with real interfaces. If anything, the problem lies in the NPU more than the CPU:
NP4 vs. NP6 vs. NP6LITE vs. NP7, and so on.
Ken Felix
SCTG-MS
PCNSE
NSE
StrongSwan
Hello,
On CPU-based models, an inter-VDOM link uses the CPU to forward the traffic, so throughput is limited by the CPU.
On NP6Lite-, NP7Lite-, NP6-, and NP7-based devices, you can use NPU vlinks (npu_vlink) as inter-VDOM links, which do not use the CPU. Broadly, the maximum throughput is 10 Gbps per npu_vlink for NP6 (10 Gbps per XAUI) and 100 Gbps per npu_vlink for NP7. There is information about NP6Lite, NP7Lite, NP6, and NP7 in the hardware acceleration guide: https://docs.fortinet.com/document/fortigate/6.4.0/hardware-acceleration/575471/network-processors-n...
Benoit
Thanks everyone for the feedback.
I agree with you guys: thinking about how inter-VDOM links work, the maximum throughput will probably depend on the processing capacity of the device, and it could also become a problem if the overall CPU usage of the device is already high.
So, if throughput on inter-VDOM connections is very important in my topology, the best alternative is to use physical interfaces and a switch for the interconnection, right?
Regards.
Hi Eudes,
My recommendation would be to use npu_vlink. Depending on your hardware, you can have multiple npu_vlinks, and if you have many inter-VDOM links, the best solution is to spread them across different npu_vlinks; that naturally distributes the traffic over the different NP6 (or NP7) processors. If you don't have multiple npu_vlinks, you can create VLANs on top of an npu_vlink (just as you would on a physical or aggregate interface). If you prefer to use physical interfaces, I recommend connecting them back-to-back rather than going through a switch; again, you can use VLANs on top of these interfaces. The drawback of that solution is that you 'use' two physical interfaces, SFPs, and a cable, which can be problematic and increases the cost.
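The VLAN-on-vlink approach above could be sketched roughly as follows. This is a hedged example, not a tested config: interface names, VDOM names, and the VLAN ID are all illustrative, and the vlink naming follows the common npu0_vlink0/npu0_vlink1 convention, which may differ on your model.

```
# One VLAN per inter-VDOM link, carried over the same npu_vlink pair.
# Each side of the VLAN lands in a different VDOM.
config global
    config system interface
        edit "vlan100-A"
            set vdom "vdom-A"
            set interface "npu0_vlink0"
            set vlanid 100
        next
        edit "vlan100-B"
            set vdom "vdom-B"
            set interface "npu0_vlink1"
            set vlanid 100
        next
    end
end
```

Additional inter-VDOM links would repeat the pattern with a different VLAN ID on the same vlink pair.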
Best regards. Benoit
