Support Forum
MarcusLeung
New Contributor

Traffic shaping profiles, diagnose netlink interface list show Qdisc=noqueue

Hi team,

 

I am currently testing traffic shaping profiles on a FortiGate 601F (v7.2.8) with NP7 (default-qos-type set to policing, since shaping is not recommended).

NP7 traffic shaping | FortiGate / FortiOS 7.2.8 | Fortinet Document Library

 

I find that the netlink interface output shows "Qdisc=noqueue" and that the guaranteed bandwidth in the profile is not working. I set one class to 99% and the other to 1%, but the iperf test shows no difference; they share the traffic 50/50.

However, the maximum bandwidth in the profile is working: if I set it to 5%, the iperf result drops to 5 Mbps immediately.

 

From the admin guide, I see the output should show Qdisc=pfifo_fast.
Traffic shaping profiles | FortiGate / FortiOS 7.2.8 | Fortinet Document Library

 

Any idea how to configure the FortiGate to use Qdisc=pfifo_fast, so I can further test the traffic shaping profiles?

 

Netlink:

FortiGate-601F # diagnose netlink interface list <VPN interface>

if=<VPN interface> family=00 type=768 index=289 mtu=1420 link=0 master=0
flags=up p2p run noarp multicast
Qdisc=noqueue
egress traffic control:
bandwidth=100000(kbps) lock_hit=18 default_class=2 n_active_class=2
class-id=3 allocated-bandwidth=99000(kbps) guaranteed-bandwidth=99000(kbps)
max-bandwidth=100000(kbps) current-bandwidth=0(kbps)
priority=low forwarded_bytes=0
dropped_packets=0 dropped_bytes=0
class-id=2 allocated-bandwidth=1000(kbps) guaranteed-bandwidth=1000(kbps)
max-bandwidth=100000(kbps) current-bandwidth=0(kbps)
priority=top forwarded_bytes=120
dropped_packets=0 dropped_bytes=0
stat: rxp=34381725 txp=94045094 rxb=5947262736 txb=126799962736 rxe=0 txe=273 rxd=0 txd=0 mc=0 collision=0 @ time=1728370526
re: rxl=0 rxo=0 rxc=0 rxf=0 rxfi=0 rxm=0
te: txa=0 txc=0 txfi=0 txh=0 txw=0
misc rxc=0 txc=0
 
Below is the rest of the QoS config:
 

config system interface
    edit <VPN interface>
        set vdom "root"
        set ip x.x.x.x 255.255.255.255
        set type tunnel
        set outbandwidth 100000
        set egress-shaping-profile "Profile-2"
        set remote-ip x.x.x.x 255.255.255.255
        set snmp-index xxx
        set interface "xxx"
    next
end

 

config firewall shaping-profile
    edit "Profile-2"
        set default-class-id 2
        config shaping-entries
            edit 2
                set class-id 2
                set priority low
                set guaranteed-bandwidth-percentage 1
                set maximum-bandwidth-percentage 100
            next
            edit 3
                set class-id 3
                set priority top
                set guaranteed-bandwidth-percentage 99
                set maximum-bandwidth-percentage 100
            next
        end
    next
end

 

Thanks!

Marcus

bpozdena_FTNT

Hi Marcus,

This may be a bit too complex to solve with just the provided info, but I can give you a few hints that I think you will find helpful:

The issue I see is with `current-bandwidth=0(kbps)`. This indicates that no traffic is matching this shaping profile. It could be that you collected the output while no traffic was flowing; if traffic was flowing, it probably means your traffic shaping policies are not configured correctly. You will need to assign `class-id 2` and `class-id 3` to some traffic. See https://docs.fortinet.com/document/fortigate/7.2.8/administration-guide/673634/traffic-shaping-polic... .
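As a rough sketch (the policy ID, interface names, and address/service objects below are placeholders you would adapt to your topology), a shaping policy that assigns matching traffic to class-id 3 could look something like:

config firewall shaping-policy
    edit 1
        set srcintf "port1"
        set dstintf "<VPN interface>"
        set srcaddr "all"
        set dstaddr "all"
        set service "ALL"
        set class-id 3
    next
end

A second policy would do the same for class-id 2, or you can rely on the profile's default-class-id for unmatched traffic.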

Your main focus should be on getting nonzero values under the current-bandwidth counter.

I would also recommend disabling NP7 offloading while you are testing the setup, because the NP7 counters are often delayed and not very accurate even though the actual shaping works correctly.

 

config firewall policy
    edit <id>
        set auto-asic-offload disable
    next
end


Once you have the shaping working in the kernel, you can enable NP7 offloading and test again.

 

HTH,
Boris
MarcusLeung
New Contributor

Hi Boris,

 

Thanks for your help and reply! I tried setting auto-asic-offload disable in the policy, and also tried auto-asic-offload disable and np-acceleration disable on the VPN tunnel, but the guaranteed bandwidth still does not work on the VPN tunnel.
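For reference, the tunnel-side offload disable I tried is roughly the following (the tunnel name is a placeholder):

config vpn ipsec phase1-interface
    edit "<tunnel name>"
        set np-acceleration disable
    next
end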

 

Hi team,

 

I just found that if I set the profile on a physical port instead of the VPN tunnel, the output shows "Qdisc=mq", and the profile's guaranteed bandwidth setting seems to work.

For example, with the physical port egress set to 100 Mbps, class-id 2 as the default class with 10% guaranteed bandwidth, and class-id 3 for another application with 90% guaranteed bandwidth, the iperf results show 10 Mbps and 90 Mbps.

If I set the same profile on a VPN interface, the guaranteed bandwidth does not work.

 

Using the CLI command "diagnose netlink intf-qdisc list", I see interfaces with qdisc pfifo_fast, qdisc mq, and qdisc noqueue. For example:

qdisc mq 0: dev port1 root refcnt 1

qdisc pfifo_fast 0: dev port1 parent 0:1 refcnt 1

qdisc pfifo_fast 0: dev port1 parent 0:2 refcnt 1

qdisc noqueue 0: dev <VPN Tunnel> root refcnt 2

 

So, does anyone know whether VPN interfaces support guaranteed bandwidth or not?

 

Thanks!

Marcus
