Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
New Contributor

Bad bufferbloat on WAN link. How to shape with FortiGate?

Hi all,


Have recently started a new contracting gig. Part of the role is implementing a VoIP telephone system, and I've been investigating the network a little as there are some problems with jitter and large latency spikes to handsets. Anecdotally, users are also reporting "slow" internet, often when we are nowhere near peak capacity.


A telco router/media converter (not managed by us) is onsite — either one or both; I see a Cisco MAC from the FortiGate WAN interface. It's near the MDF in the building, which we don't have access to, with a 50/50 fibre link.

RRUL testing shows pretty bad bufferbloat 





I'm not very familiar with FortiGate products, and I don't see any option for fq_codel, HTB etc. as such, which I've had some success implementing on Linux-based routers before.
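For reference, what I'd normally reach for on a Linux router looks like this (a sketch; the interface name and the 45mbit rate are assumptions — use your WAN interface and ~90% of link speed):

```
# HTB caps the egress rate just below the ISP policer so queuing
# happens on our box; fq_codel then keeps that queue short.
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 45mbit ceil 45mbit
tc qdisc add dev eth0 parent 1:10 fq_codel
```

I can't find a direct FortiOS equivalent of this, which is what prompted the question.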


I'm thinking much of this problem is either down to how the ISP internet gear is buffering traffic (if it's the router I can see in ARP), or it's just discarding everything above 50 Mbit. I see spikes over 50 Mbit when the link is saturated that drop off quickly. I don't think they are letting us burst traffic, though — I think the excess is just being dropped, so I need to set up some shaping outbound.
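The closest thing I can see in FortiOS is a shared traffic shaper capped below link speed and applied to outbound policies — a sketch only; the shaper name and the 45000 kbps figure are my assumptions, and exact syntax varies by FortiOS version:

```
config firewall shaper traffic-shaper
    edit "wan-out-45m"
        set maximum-bandwidth 45000   # kbps, ~90% of the 50M link (assumption)
        set per-policy disable        # shared cap across all policies using it
    next
end
```

The shaper then has to be attached to the relevant firewall or shaping policies to take effect.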


There is pretty much zero setup on the FortiGate right now from the outfit that installed it. No QoS. There are VLANs, but they do nothing except have slightly different subnets (all route to each other, no tagging or QoS). There are stacked Dell switches attached to the LAN; everything in the office goes through these.


Anyone have experience trying to solve this on FortiGate gear, or some tips on config?


In the past I've worked with mid-band Ethernet-type services where it's fairly essential to shape traffic before handing it off to the NTU (a dumb layer-2 device that's just mirroring the MAC from the switch in the exchange). I'm thinking if I can just shape everything at the LAN interface to slightly less than 50 Mbit this will improve, then I can work on QoS for the voice VLAN etc.
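For the voice VLAN step, FortiOS shaping policies look like the right mechanism — a sketch; the interface, shaper, and service names here are all assumptions, and the available fields vary by FortiOS version:

```
config firewall shaper traffic-shaper
    edit "voip-priority"
        set priority high
        set guaranteed-bandwidth 1000   # kbps reserved for voice (assumption)
    next
end
config firewall shaping-policy
    edit 1
        set srcaddr "all"
        set dstaddr "all"
        set service "SIP"               # adjust to your actual VoIP services
        set dstintf "wan1"
        set traffic-shaper "voip-priority"
    next
end
```

I haven't verified this on the box in question, so treat it as a starting point rather than working config.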




Any ideas or tips? I think we can get much better performance from this service.  



New Contributor

Thanks. I've got the same issue with a 60E, 100E and 200E on Telstra fibre connections with 50Mbps and 200Mbps, some direct with Telstra and others resold by another ISP with the ISP's data.

They have all told me that it's to do with the way Telstra uses Cisco routers as their NTU and how it discards excess traffic above the speed you are paying for.


I've got plenty of FortiGates out there with Vocus and TPG fibre that are not having this issue on the upload side.


I've tried egress traffic shaping via GUI and also CLI via interface shaping and using percentage shaping, but can't get this to resolve.


The ISP has sent me a Juniper router which I've put in place of the Fortigate and they have put interface shaping on and it gets the full 50Mbps of the link, so they are blaming the Fortigate.


I don't want to have to put another router or device in front of the Fortigate.  So any other ideas?


richg wrote:

Changing txque had minimal impact.

Shaping outbound to a bit less than full speed, and lowering that further, made a slight difference.


In the end the solution was to put a Netgate (pfSense) firewall in and set up fq_codel.


VoIP latency is solid at around 25 ms now, even under load.


This is in Australia too. It's a Telstra service resold by someone else; I suspect they are doing something screwy in the interconnect, because it's not the only one I've seen now from the same ISP.

New Contributor

I don't know if that's exactly the same issue — mine was clearly bufferbloat. Latency see-sawed stupidly after around 40% load.


Try running an RRUL test with flent just to be sure. You will need a Mac/Linux box (or a VM and a good NIC in your computer) and a netperf/iperf server.
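Something like this is what I ran, if it helps (the server hostname and output filename are placeholders — point it at your own netperf server):

```
# 60-second RRUL run against your netperf server, plotted to a PNG.
flent rrul -l 60 -H netperf.example.com -t "fortigate-wan" -o rrul.png
```

The latency trace under load is the interesting part — if it climbs steadily as the link saturates, that's the bloat.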


What sits behind the FortiGate? I also found some issues in the Dell switch stack they had there — it was cabled wrong for a start — which was also causing problems.


Whereabouts are you speed testing from? Directly off the FortiGate?


Have the telco given you any indication why they believe it's CPE, besides the Juniper getting full speed? I'd be curious to know exactly how they tested that.


What do the telco's traffic reports show?

How do those line up with the FortiGate reports?

Do you have NetFlow/sFlow set up?


You could try hard-shaping upstream to link speed minus a delta (try 10%) and see if that makes a difference.


Also make sure you have set the link speed correctly. From memory there are two places to do it: one's effectively a label in the GUI, but the other is set via CLI like this:


config system interface
    edit "wan1"
        set inbandwidth <kbps>
        set outbandwidth <kbps>
    next
end


..may vary depending on your FortiOS version.


If it is a buffer issue then there are unfortunately a lot of places it can happen: NICs (OS buffers and hardware buffers), your switches, the CPE, the telco's gear at the other end, etc. It's often the telco, because everyone's got obsessed with not dropping packets, which is counterproductive — if you are maxing out a link, you want packets to drop so TCP will back off.


I know it's not particularly helpful, but swapping the FortiGate for a pfSense box improved the situation significantly.


I really hope Fortinet follows some of the other major vendors in implementing some form of SQM at some stage... but I'm not sure that this is the same problem.


From my telco days, certain types of handoff require hard policing on the customer end, usually layer-2 ones, else packets get dropped in the access network.

New Contributor

I did some testing with the DF bit set and found that packets > 1472 bytes fail, so I set the WAN interface max MTU to 1472; however, this hasn't had any effect, like the other changes to the traffic rate on the interface.
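Worth noting: a 1472-byte ping payload plus 28 bytes of ICMP/IP headers is exactly a 1500-byte packet, so that result may just mean the path MTU is the standard 1500 and MTU isn't your problem. If it were, clamping TCP MSS on the WAN-facing firewall policy is usually cleaner than lowering the interface MTU — a sketch; the policy ID is an assumption:

```
config firewall policy
    edit 1                        # your existing LAN->WAN policy ID (assumption)
        set tcp-mss-sender 1460   # MSS = 1500 MTU - 40 bytes TCP/IP headers
        set tcp-mss-receiver 1460
    next
end
```

Either way it wouldn't explain latency under load, which points back at buffering/policing.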


Thanks for the answers, keep em coming!

New Contributor

Oh yeah, one more thing, seems to have a dynamic adjustment for line speed and apparently removes bufferbloat "automagically".


I would assume if I get the configuration you are suggesting working, I won't need that device, but I am so tired of laggy/slow connections I'd really like to fix it one way or the other.



