suthomas1
New Contributor

npu link

Good day everyone,

 

I would appreciate any feedback on understanding the main difference between an NPU VDOM link and a plain VDOM link.

They appear to be two separate things. I read about acceleration but didn't quite grasp it.

 

So when should one use an NPU VDOM link versus a normal VDOM link?

If one only creates the VDOM link from the interfaces page, can it be used, or does it have issues?

 

 

Suthomas
5 Solutions
Toshi_Esumi
SuperUser

The doc below, for the non-NPU vlink, says "VDOM link does not support traffic offload. If you want to use traffic offload, use NPU-VDOM-LINK."

https://docs.fortinet.com...646/inter-vdom-routing

So, if the ingress port1 in vdom1 is handled by npu1 and the egress port2 in vdom2 is handled by npu1 as well, the entire process from ingress to egress can be offloaded to npu1 IF you use an npu1 vlink between the two vdoms. If you use a non-npu vlink (or a different npu's vlink, like npu2's), the traffic needs to come out of npu1 once and the CPU has to handle it before handing it over to vdom2.
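To make that concrete, here is a minimal sketch of what the vdom1 side of such a setup could look like. The VDOM, port, policy ID, subnet, and gateway values are made up for illustration, npu0_vlink0 is used as the example built-in vlink end, and the vdom2 side would mirror this on npu0_vlink1. auto-asic-offload is on by default in firewall policies; it is shown only to make the offload intent explicit.

config vdom
    edit vdom1
        config router static
            edit 10
                set dst 10.2.0.0 255.255.0.0
                set device "npu0_vlink0"
                set gateway 10.10.99.2
            next
        end
        config firewall policy
            edit 10
                set srcintf "port1"
                set dstintf "npu0_vlink0"
                set srcaddr "all"
                set dstaddr "all"
                set action accept
                set schedule "always"
                set service "ALL"
                set auto-asic-offload enable
            next
        end
    next
end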

 


Toshi_Esumi

Yes, of course. Also make sure it goes through only one/the same npu from ingress to egress in case your model has multiple npus. It might make a significant difference in performance.


Toshi_Esumi

The only difference in config is that the npu vlink is built-in. You don't need to create one. For npu0, it's npu0_vlink0 and npu0_vlink1 for the two ends. But in case you have many vdoms that need to be connected together, you should use VLANs on the npu0_vlink, like "VLAN100_0" in vdom1 on npu0_vlink0 and "VLAN100_1" in vdom2 on npu0_vlink1, and so on.
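As a rough illustration (the interface names, VDOMs, VLAN ID, and addresses here are just placeholders), such a pair of VLAN subinterfaces would be created from the global context along these lines:

config global
    config system interface
        edit "VLAN100_0"
            set vdom "vdom1"
            set interface "npu0_vlink0"
            set vlanid 100
            set ip 10.10.10.1 255.255.255.252
        next
        edit "VLAN100_1"
            set vdom "vdom2"
            set interface "npu0_vlink1"
            set vlanid 100
            set ip 10.10.10.2 255.255.255.252
        next
    end
end

Each additional VDOM pair just gets another VLAN ID and another /30 on the same npu0_vlink parents.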


Toshi_Esumi

https://docs.fortinet.com/document/fortigate/6.4.0/hardware-acceleration/327022/using-vlans-to-add-m...

This is an example. VLAN is of course not built-in. You can name it whatever you want.


Toshi_Esumi

The npu vlinks' names are reserved and they're already there inside the npus. The vdom links you can create yourself might be called CPU vlinks, since they're handled by the CPU.

I don't know exactly why CPU vlinks exist, but it might be just a historical reason, or config compatibility with models that don't have NPUs. I'm almost sure CPU vlinks came first when they introduced VDOMs, then they added npu vlinks when they introduced NPU chips.
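For comparison, a CPU vlink is the kind you create yourself with "config system vdom-link". A minimal sketch follows; the link name, VDOMs, and addresses are made up, and creating the link produces the two member interfaces "vlink_a0" and "vlink_a1" which you then assign and address:

config global
    config system vdom-link
        edit "vlink_a"
        next
    end
    config system interface
        edit "vlink_a0"
            set vdom "root"
            set ip 10.10.200.1 255.255.255.252
        next
        edit "vlink_a1"
            set vdom "VD1"
            set ip 10.10.200.2 255.255.255.252
        next
    end
end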


14 Replies
Toshi_Esumi

The npu_vlink with VLANs works as a "link pool" between VDOMs. Think about the topology in the diagram below. To simplify, let's assume this FGT has only one NPU, npu0. Those vdoms are connected to each other with a total of 4 links, and you assign 4 VLANs, 100-103, to them on npu0. You then name each link interface in the vdoms with "_0" (its parent interface is npu0_vlink0) and "_1" (its parent interface is npu0_vlink1) on the two ends of each link, which could look like the list below:

 

rt-vd1_0   [root vdom/vlanid 100/npu0_vlink0]
rt-vd1_1   [vd1 vdom/vlanid 100/npu0_vlink1]
rt-vd2_0   [root vdom/vlanid 101/npu0_vlink0]
rt-vd2_1   [vd2 vdom/vlanid 101/npu0_vlink1]
rt-vd3_0   [root vdom/vlanid 102/npu0_vlink0]
rt-vd3_1   [vd3 vdom/vlanid 102/npu0_vlink1]
vd1-vd3_0  [vd1 vdom/vlanid 103/npu0_vlink0]
vd1-vd3_1  [vd3 vdom/vlanid 103/npu0_vlink1]

 

Then you can assign whatever subnet you want to use on the link, commonly /30, like:

 

rt-vd1_0 = 10.10.100.1/30

rt-vd1_1 = 10.10.100.2/30

rt-vd2_0 = 10.10.101.1/30

rt-vd2_1 = 10.10.101.2/30

       .....
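Once the ends are addressed, each VDOM simply routes over its own side of the link; the usual firewall policies are still needed on top. Here is a rough sketch for the rt-vd1 link only, reusing the names and subnet above (the destination networks and route IDs are placeholders):

config vdom
    edit root
        config router static
            edit 1
                set dst 10.1.0.0 255.255.0.0
                set device "rt-vd1_0"
                set gateway 10.10.100.2
            next
        end
    next
    edit vd1
        config router static
            edit 1
                set dst 0.0.0.0 0.0.0.0
                set device "rt-vd1_1"
                set gateway 10.10.100.1
            next
        end
    next
end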

 

Did you get the idea? I'm not an FTNT employee so I can't put this in a KB, but I might put it on our own internal "wiki" KB.

 

Toshi

 

 

[Attachment: npuvlink.png]

echo

Thank you for the explanation!

But I still don't quite get it.

The missing part is the npu0_vlink configuration. In my example, it looks like this:

 

edit "npu0_vlink0"
set vdom "root"

...

next

 

edit "npu0_vlink1"
set vdom "VD1"

...

next

 

If I create VLAN 101 under npu0_vlink1 and set it to VDOM VD2, then it appears there, yes. But as shown above, the npu0_vlink connects root and VD1. This is the confusing part and it looks like a misconfiguration: I connect root with VD2 through a link that has been established between root and VD1. So is it a tweak, a workaround, or something like that? To me it looks like a faulty configuration that may cause problems or not work at all. I haven't tested it yet though.

 

But if it works, then it looks like specifying a VDOM on the vlink itself is an unnecessary abstraction layer that doesn't matter when using these vlinks through VLANs. That also means I could leave npu0_vlink0 and npu0_vlink1 both in the root VDOM and use different VDOMs when configuring the VLANs.

echo

Today I tested the thing I proposed: leave the npu0_vlink "connecting" root and root (that is, both npu0_vlink0 and npu0_vlink1 belong to the root VDOM) and then use two different VLANs on this link to connect root with VD1 and root with VD2. It really works this way! Thank you very much for helping.
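In other words, something roughly like the sketch below, where the built-in vlink ends stay in root and only the VLAN subinterfaces are placed in the target VDOMs. The names, VLAN IDs, and subnets are arbitrary examples, and the root-to-VD1 pair would be the same with a different VLAN ID and subnet:

config global
    config system interface
        edit "root-vd2_0"
            set vdom "root"
            set interface "npu0_vlink0"
            set vlanid 101
            set ip 10.10.101.1 255.255.255.252
        next
        edit "root-vd2_1"
            set vdom "VD2"    <<<< the VLAN end lands in VD2 even though npu0_vlink1 itself stays in root
            set interface "npu0_vlink1"
            set vlanid 101
            set ip 10.10.101.2 255.255.255.252
        next
    end
end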

msolanki

Hi Echo

 

The basic difference between VDOM links and NPU VDOM links is that NPU VDOM links are built in: the moment you enable multi-VDOM mode, the "npu0_vlink" interface name becomes visible in the FGT under interfaces.

For example, if the hardware has an NP4 or NP6Lite, the interface names show as "npu0_vlink0" and "npu0_vlink1". VDOM link interfaces, however, you can create manually, as many as you want.
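For instance, from the global context you can list the built-in ends to confirm they are already there (nothing needs to be created first):

config global
    show system interface npu0_vlink0
    show system interface npu0_vlink1
end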

 

If you are using the NPU VDOM interface between VDOMs, then you can use NPU acceleration and offload; with a regular VDOM link, however, we cannot use this feature for the traffic.

 

NPU VDOM LINK

Now, for your example, if you want to connect the root vdom to VD2, create another interface.

Select the interface, VLAN ID, and virtual domain; this will create a sub-interface under npu0_vlink0.

In your case

 

edit "New-Interface"

        set vdom "root"

        set ip 50.50.50.1 255.255.255.252

        set allowaccess ping https ssh snmp

        set alias "Sub_root_250"

        set role wan

        set snmp-index 35

        set interface "npu0_vlink0" <<<<<<<<<<<<<<<<<<<<check this inface

        set vlanid 250

 

 

New_VD2"

        set vdom "VD2"

        set ip 50.50.50.2 255.255.255.252

        set allowaccess ping https ssh snmp

        set alias "Sub_VD2_Vlink1_250"

        set device-identification enable

        set role lan

        set snmp-index 36

        set interface "npu0_vlink1"    <<<<<<<<<<<<<<<<<<<<check this inface

        set vlanid 250

 

 

The same way, you can create one for VD3, but the VLAN and IP should be different, and you select the VDOMs accordingly, e.g. root and VD3.

 

 

Madhav

TAC team

Toshi_Esumi
SuperUser

To avoid confusion, don't touch the "npu0_vlink0/1" config when you use VLAN subinterfaces, although you can use them directly. It's just the same as regular VLANs on regular interfaces/ports: those are the non-tagged parent interfaces.

 

Toshi
