
IPSEC throughput limited?

We are having some throughput problems between two Fortinet devices.


We have a 100D connected to a 60E over an IPsec tunnel. Throughput seems to top out at around ~200 Mbps even though we have a direct 1 Gbps fiber connection.


It feels like we're hitting a limit somewhere. Is there a setting that could cause this behaviour, or could it be that the 100D is simply too old for these speeds?


toshiesumi wrote:

Then why do you need a VPN over a point-to-point dedicated/private circuit?

That wasn't my original question and might be a good discussion in another thread.


I've read other threads on the subject and done some diagnostics, and it seems the 100D is the bottleneck. Thank you for your answers; we're going to have internal discussions about how to proceed from here. The cheapest solution is probably to add another 60E :)


We are experiencing similar issues with a 100E on a 400 Mbps broadband connection to the ISP, connecting back to HQ, which is a 1500D on a 500 Mbps fiber circuit. We have multiple tunnels going back to HQ and all of them seem slower than the line speeds would suggest: 10-15 Mbps throughput over the IPsec tunnel on a 400 Mbps broadband internet connection. My question is: if all tunnels terminate on HQ under the same IP, could that cause this impact? Or should all of our B2B tunnels be on different IPs within our assigned public subnet? Why are they all slow?


@james.heyworth: I would open a new thread with this question. Same issue, different boxes.


I would suspect you have asymmetric bandwidth, i.e. cable modem/Spectrum Internet. FiOS is symmetrical: you subscribe to 100 Mbps, you get that both ways. Spectrum's 300 Mbps offering gives you 300 down and (I think) only about 10-20 Mbps up. If these lines are leased, this probably will not apply. You didn't say.



I would start with these questions:

1. Can both sides see near-max bandwidth (400 Mbps down/? bps up, 500 Mbps down/? bps up) with local speed-test sites?

2. Is the bandwidth between the two locations' public/interface IPs, outside the tunnel, close to the smaller of the two sides' down/up numbers? (To test this, you need an iPerf server or other test tool on both ends.)

3. Is the port arrangement on the 1500D in line with NP boundaries, and do both the policies and the IPsec config allow NPU off-loading?
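
For point 2, an untunneled baseline could be taken with iperf3 on both ends; the address below is a placeholder:

```shell
# On the HQ side (server):
iperf3 -s

# On the branch side (client); replace 203.0.113.10 with the HQ public IP.
# 4 parallel streams for 30 seconds gives a more realistic aggregate figure:
iperf3 -c 203.0.113.10 -P 4 -t 30
```

For point 3, something along these lines can be checked from the FortiGate CLI. The tunnel name and policy ID are examples, not your actual config; note that `npu-offload` and `auto-asic-offload` are enabled by default on current FortiOS, so you're mainly looking for places where they were turned off:

```
# Show the tunnel and its npu_flag (tunnel name is an example):
diagnose vpn tunnel list name to-branch

# Offload must be allowed on both the phase1 and the firewall policy:
config vpn ipsec phase1-interface
    edit "to-branch"
        set npu-offload enable
    next
end
config firewall policy
    edit 1
        set auto-asic-offload enable
    next
end
```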


Sharing an IP/interface among tunnels shouldn't be a big factor. We have bunches of 50-100 tunnels on the same interface per VDOM on some 1500Ds.