Support Forum
peterblais
New Contributor

Traffic Shaping Mysteries

Hello Community!

 

I'm hopeful someone can kick-start my brain around traffic shaping and how I might practically apply it to my use case. For starters, I've perused the standard documentation as well as the Limiting Bandwidth with Traffic Shaping cookbook article as suggested in other posts. I've also taken a dive into some of the community posts to see if there's a similar situation. For the life of me, I can't seem to wrap my brain around the correct shaper settings, even to simply prove the concept.

 

To that end, I've attempted to prove the concept by creating a traffic shaper for the VLAN I'm currently on. I created a Traffic Shaper object that applies "high priority" and guarantees 2048 Kbps with no maximum. I've turned on "per policy" in the CLI, as suggested in the documentation, cookbook, and several community posts.
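For reference, a shaper object like that maps to the CLI roughly as follows (the name is a placeholder, exact option names vary a bit by FortiOS version, and I've left the maximum out since how "no maximum" is expressed differs between versions):

config firewall shaper traffic-shaper
    edit "VLAN-Test-Shaper"
        set guaranteed-bandwidth 2048
        set priority high
        set per-policy enable
    next
end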

 

Then, I created a Traffic Shaping Policy with the following settings (a rough CLI equivalent is sketched after the list):

Source: My VLAN Object

Destination: "All"

Service: "All"

Outgoing Interface: "Any"

Shared Shaper: using the shaper object created above

Reverse Shaper: using the shaper object created above
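
The shaping policy itself would be roughly the following in CLI terms (the address object name and policy ID are placeholders, and some FortiOS versions also expect a srcintf here):

config firewall shaping-policy
    edit 1
        set srcaddr "My-VLAN-Subnet"
        set dstaddr "all"
        set service "ALL"
        set dstintf "any"
        set traffic-shaper "VLAN-Test-Shaper"
        set traffic-shaper-reverse "VLAN-Test-Shaper"
    next
end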

 

And here are the results...

Policy Disabled: speed tests at 50+ Mbps (expected result) to the Internet

Policy Enabled: speed tests at ~2Mbps (what the?) to the Internet

 

Any thoughts on what I might be missing, or on any misconceptions/assumptions I may have made while setting up the above?

 

Thanks to all in advance!

5 REPLIES
packetpusher
Contributor

I think you've done pretty well by gathering information first. My question is: what are you trying to accomplish? Are you looking for a proof of concept?

btp

Bear in mind that the FortiGate doesn't do QoS. Really.

"Priority High" is by Fortigate interpreted as "strict high", so if you have something in the "high" queue this will always be sent first. So all other queues have to wait.

 

Depending on the FortiGate model, you can expect really poor performance when you turn on traffic shaping on a policy, because the packets have to be processed by the CPU and can't be hardware offloaded. You will see this in the session list, something like this:

 

session info: proto=1 proto_state=00 duration=2 expire=59 timeout=0 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=4
origin-shaper= reply-shaper= per_ip_shaper= ha_id=0 policy_dir=0 tunnel=/ vlan_cos=13/255
state=may_dirty
statistic(bytes/packets/allow_err): org=252/3/1 reply=252/3/1 tuples=2
tx speed(Bps/kbps): 89/0 rx speed(Bps/kbps): 89/0
orgin->sink: org pre->post, reply pre->post dev=23->24/24->23 gwy=172.16.30.1/10.10.2.2
hook=post dir=org act=snat 10.10.2.2:10099->172.16.30.1:8(172.16.30.2:62464)
hook=pre dir=reply act=dnat 172.16.30.1:62464->172.16.30.2:0(10.10.2.2:10099)
misc=0 policy_id=2 auth_info=0 chk_client_info=0 vd=0
serial=00005577 tos=ff/ff app_list=0 app=0 url_cat=0
dd_type=0 dd_mode=0
npu_state=0x000001 no_offload

 

as opposed to when the session is handled by the ASIC*:

(...)

npu_state=0x003000

npu info: flag=0x81/0x82, offload=8/8, ips_offload=0/0, epid=129/128, ipid=128/129, vlan=0/34768
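
For anyone following along, output like the above comes from the standard session diagnostics, for example (the filter address is just the example IP from the output):

diagnose sys session filter src 10.10.2.2
diagnose sys session list
diagnose sys session filter clear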

 

On an FG60D you might drop from ~800 Mbps to ~100 Mbps.

 

*

offload=4/4: NP4 sessions.
offload=5/5: XLR sessions.
offload=6/6: Nplite/NP4lite sessions.
offload=7/7: XLP sessions.
offload=8/8: NP6 sessions.
flag 0x81: regular traffic.
flag 0x82: IPsec traffic.

-- Bjørn Tore

peterblais
New Contributor

Thanks! The "strict high" behavior is an important detail that I'd definitely overlooked. With that info, I can factor it into the overall solution. I've also come to realize the QoS point you mentioned through other community posts. I appreciate the assist!

btp
Contributor

If you only mark your traffic with DSCP or CoS, the traffic is still offloaded to the ASIC.

 

Also, the FortiGate shows two different values depending on where the marking is done:

"By design, ingress CoS values will display in the session output in the range 0-7, but admin CoS values will display in the range 8-15, even though the value on the wire will be in the range 0-7."

 

In this example the traffic was marked with CoS priority 5 in a policy by the FortiGate (which is why the session output below shows vlan_cos=13, i.e. 8 + 5, in the admin range):

config firewall policy
    edit 2
        set vlan-cos-fwd 5
end

 

session info: proto=1 proto_state=00 duration=2 expire=59 timeout=0 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=4
origin-shaper= reply-shaper= per_ip_shaper= ha_id=0 policy_dir=0 tunnel=/ vlan_cos=13/255
state=may_dirty
statistic(bytes/packets/allow_err): org=252/3/1 reply=252/3/1 tuples=2
tx speed(Bps/kbps): 89/0 rx speed(Bps/kbps): 89/0
orgin->sink: org pre->post, reply pre->post dev=23->24/24->23 gwy=172.16.30.1/10.10.2.2
hook=post dir=org act=snat 10.10.2.2:10099->172.16.30.1:8(172.16.30.2:62464)
hook=pre dir=reply act=dnat 172.16.30.1:62464->172.16.30.2:0(10.10.2.2:10099)
misc=0 policy_id=2 auth_info=0 chk_client_info=0 vd=0
serial=00005577 tos=ff/ff app_list=0 app=0 url_cat=0
dd_type=0 dd_mode=0
npu_state=0x000001 no_offload
no_ofld_reason: disabled-by-policy   // auto-asic-offload is disabled in the policy
total session 3
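
If you want the DSCP counterpart of that CoS marking, it is also set in the firewall policy; a minimal sketch (the codepoint 101110, i.e. EF, is just an example):

config firewall policy
    edit 2
        set diffserv-forward enable
        set diffservcode-forward 101110
    next
end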

-- Bjørn Tore

peterblais

Thanks for the first sanity check on my assumptions. The end game here is to be able to afford priority to different groups of traffic. For example, I have a VLAN dedicated to VoIP devices. This purpose-built VLAN needs some guarantee that it won't be choked out by other bandwidth contenders.

My environment, being a boarding school, has some interesting challenges and stakeholders. In addition to the VoIP example above, which is a business-critical infrastructure component, we also have residential students who play video games during their recreational time. Because this is a quality-of-life concern for our in-residence population, this traffic type also needs to be considered in the mix. Here's a simple recipe I'd like to see if I can build upon, for which my initial tests (in the OP) failed (see the sketch after the list):

1. Create a guaranteed-bandwidth policy for VoIP and other critical infrastructure.
2. Create a bandwidth cap (maximum) for recreational traffic like gaming apps.
3. Attempt to reassign the DSCP codepoint on a VLAN basis. This one is outside the scope of my OP, but as a stretch goal I'd like to hear whether any of you have practical experience with DSCP assignment in shaping policies, and whether it's honored end-to-end. I have existing infrastructure on my L3 switches that assigns QoS policies to different DSCP and 802.1p values. I'm wondering if this is to my advantage, or if I'm actually fouling up/complicating traffic prioritization and efficiency.
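
To make the first two items concrete, something like this is the shape of what I have in mind, building on the shaper/shaping-policy pattern earlier in the thread (all names, subnets, IDs, and bandwidth numbers are placeholders, with the same version caveats as above; I've kept the recreational shaper at low priority given the "strict high" behavior described earlier):

config firewall shaper traffic-shaper
    edit "VoIP-Guarantee"
        set guaranteed-bandwidth 2048
        set priority high
    next
    edit "Recreational-Cap"
        set guaranteed-bandwidth 0
        set maximum-bandwidth 10240
        set priority low
    next
end
config firewall shaping-policy
    edit 10
        set srcaddr "VoIP-VLAN-Subnet"
        set dstaddr "all"
        set service "ALL"
        set dstintf "any"
        set traffic-shaper "VoIP-Guarantee"
        set traffic-shaper-reverse "VoIP-Guarantee"
    next
    edit 20
        set srcaddr "Student-VLAN-Subnet"
        set dstaddr "all"
        set service "ALL"
        set dstintf "any"
        set traffic-shaper "Recreational-Cap"
        set traffic-shaper-reverse "Recreational-Cap"
    next
end

Item 3 would presumably build on DSCP marking along the lines of the diffserv-forward example above; whether it's honored end-to-end is exactly the part I'm unsure about.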

Again, thanks for lending a hand. I hope this illustrates some of what I aim to do.
