Hi everyone,
We currently have two offices, one in the USA and one in Europe, each with a FortiGate 60F.
They are connected with an IPsec tunnel, and the bandwidth with a single iperf3 stream is extremely slow at about 10 Mbps, while 10 parallel streams reach around 65 Mbps.
The office in Europe has 1 Gbps download and 400 Mbps upload, while the office in the USA has 300 Mbps for both download and upload.
If you need any more information to help me troubleshoot and optimize the bandwidth between the two offices, I will provide it.
iperf3 allows you to run several sessions in parallel to find the true maximum throughput between two sites. You can also try sending your traffic with a larger TCP window size, which should improve throughput as well, assuming there is no loss on the link.
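For example (the server address is a placeholder; adjust the values to your link):

iperf3 -c <server-ip> -P 10     (10 parallel TCP streams)
iperf3 -c <server-ip> -w 4M     (single stream with a 4 MB TCP window)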
Also, refer to this document:
https://community.fortinet.com/t5/FortiGate/Technical-Tip-Low-throughput-troubleshooting/ta-p/217967
Hello @sainc
From the issue description, I understand that you are facing download and upload slowness within the IPsec tunnel.
Can you please provide the information below:
1. Rough network topology
2. Tunnel name
3. Try disabling auto-asic-offload in the firewall policy and npu-offload in phase1 of the IPsec tunnel (a CLI sketch for this follows the list).
4. Try lowering the MTU size on the interface and the MSS size in the firewall policy (also covered in the sketch below).
5. Please run an iperf test from a client machine and attach the output.
6. Try disabling UTM profiles and certificate inspection for testing purposes.
7. May I know if traffic shaping has been configured? If yes, try disabling it for testing purposes and check the bandwidth again.
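For points 3 and 4, a minimal CLI sketch; the policy ID, tunnel name, and MSS value are examples, so substitute your own:

config firewall policy
    edit 1
        set auto-asic-offload disable
        set tcp-mss-sender 1350
        set tcp-mss-receiver 1350
    next
end
config vpn ipsec phase1-interface
    edit "<tunnel-name>"
        set npu-offload disable
    next
end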
Thanks
Shaleni
Hello @chauhans, thanks for helping
1. The topology is as simple as it gets: one VLAN per office; both offices have internet access and are connected via the IPsec tunnel.
2. gateway
name: 'To-US FGT'
type: route-based
local-gateway: xxx.xxx.xxx.xxx:0 (static)
remote-gateway: xxx.xxx.xxx.xxx:0 (static)
mode: ike-v2
interface: 'ppp1' (297)
rx packets: 7940195 bytes: 10635639728 errors: 208
tx packets: 4710208 bytes: 1651150828 errors: 725
dpd: on-demand/negotiated idle: 20000ms retry: 3 count: 0
selectors
name: 'To-US FGT'
auto-negotiate: enable
mode: tunnel
src: 0:192.168.100.0/255.255.255.0:0
dst: 0:192.168.0.0/255.255.255.0:0
SA
lifetime/rekey: 43200/404
mtu: 1422
tx-esp-seq: 2eb3
replay: enabled
qat: 0
inbound
spi: 680cc3a7
enc: aes-cb bb17b54d50d6a23b2d12a3430fd75a56
auth: sha256 b911461623f1dfa3561cd104a35744b16c6e2ae4c57a632134188eed4cc76a0f
outbound
spi: 4ad0bd5e
enc: aes-cb f15dc2513571bb1b9108a21852522860
auth: sha256 37e7def904c062ea34c79286e86e26cbd0b9580c9ff80c423dc80f90e1b419c8
NPU acceleration: none
4. Currently the MTU is not overridden and is 1492. How much should I lower it?
5.
iperf3.exe -c 192.168.100.20
Connecting to host 192.168.100.20, port 5201
[ 4] local 192.168.0.4 port 55131 connected to 192.168.100.20 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 512 KBytes 4.19 Mbits/sec
[ 4] 1.00-2.00 sec 1.25 MBytes 10.5 Mbits/sec
[ 4] 2.00-3.02 sec 1.25 MBytes 10.4 Mbits/sec
[ 4] 3.02-4.00 sec 896 KBytes 7.45 Mbits/sec
[ 4] 4.00-5.01 sec 768 KBytes 6.22 Mbits/sec
[ 4] 5.01-6.01 sec 768 KBytes 6.28 Mbits/sec
[ 4] 6.01-7.02 sec 1.12 MBytes 9.42 Mbits/sec
[ 4] 7.02-8.01 sec 640 KBytes 5.25 Mbits/sec
[ 4] 8.01-9.00 sec 512 KBytes 4.25 Mbits/sec
[ 4] 9.00-10.00 sec 896 KBytes 7.34 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 8.50 MBytes 7.13 Mbits/sec sender
[ 4] 0.00-10.00 sec 8.43 MBytes 7.07 Mbits/sec receiver
iperf Done.
6. UTM is disabled. It helped a bit when testing with multiple parallel streams in iperf3, but not with a single stream.
7. Traffic shaping is not set up; everything is at defaults.
Contrary to @chauhans' opinion, I would deduce that the tunnel traffic does not use offloading and is thus slowed down by the (meager) SoC CPU of the 60F. This would be the case if any UTM is in use in proxy mode, like AV.
Try to see the sessions in the tunnel: select the relevant policy, right-click to 'see in FortiView', select the 'all sessions' tab, and observe whether (most of) the sessions are offloaded.
If not, strip the policy of all UTM for testing.
If so, too bad. It might be an MTU issue then.
I've seen some 60Fs in real networks, none of which exhibited excessively slow IPsec throughput.
Hello @ede_pfau, thank you for replying
Disabling UTM did help, but only when using iperf3 with multiple parallel streams.
The situation you're experiencing, where a single stream is slow but multiple streams are faster, is quite common on high-latency networks like transcontinental IPsec VPNs. The issue often lies with the TCP window size: the amount of data that can be "in flight" on the network before an acknowledgment is received.
High-latency networks require a larger TCP window size to fully utilize the available bandwidth. If the window size is too small, the sender stops sending and waits for an acknowledgment, leaving the available bandwidth underutilized.
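A rough worked example, assuming a transatlantic RTT of about 150 ms and the 300 Mbps US link as the bottleneck:

bandwidth-delay product = 300 Mbps x 0.150 s = 45 Mbit = ~5.6 MB
single-stream ceiling with a 64 KB window = 64 KB x 8 / 0.150 s = ~3.5 Mbps

So a sender stuck near a 64 KB effective window tops out at single-digit Mbps, which is the range you measured, while filling the path at this latency needs a window of several megabytes.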
Here are some steps you can take to improve the performance:
### 1. **Enable TCP Window Scaling**
On the systems sending and receiving data over the VPN, ensure that TCP window scaling is enabled. This allows the systems to use a larger TCP window size, which can improve performance on high latency networks.
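For example (a sketch for common endpoints; the exact settings vary by OS version):

Windows:  netsh interface tcp set global autotuninglevel=normal
Linux:    sysctl net.ipv4.tcp_window_scaling           (should report 1)
          sysctl -w net.core.rmem_max=16777216
          sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"

The Linux lines raise the maximum receive buffer to 16 MB so the window can actually grow past the bandwidth-delay product.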
### 2. **Adjust MTU Size**
The MTU size on the VPN tunnel can also affect performance. If the MTU size is too large, packets might need to be fragmented, which can reduce performance. If it's too small, the overhead of the IPsec encapsulation could take up a larger proportion of each packet. You could experiment with different MTU sizes to see if this improves performance.
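One way to probe this is a don't-fragment ping sweep across the tunnel (the remote IP and payload size are placeholders):

ping 192.168.100.20 -f -l 1372       (Windows: -f = don't fragment, -l = payload bytes)
ping -M do -s 1372 192.168.100.20    (Linux equivalent)

The largest payload that gets through, plus 28 bytes of IP/ICMP headers, is the effective path MTU; clamping the TCP MSS in the firewall policy to roughly 40 bytes below that MTU avoids fragmentation for TCP traffic.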
### 3. **Enable VPN Performance Features**
On the FortiGate devices, make sure that you're using performance-enhancing features, such as:
- **Hardware acceleration**: If your FortiGate model supports it, ensure that IPsec hardware offloading is enabled to improve performance (a quick per-tunnel check is sketched after this list).
- **NPU acceleration**: Similarly, if your FortiGate has network processing units (NPUs), you can use them to accelerate IPsec traffic.
- **IPsec interface mode**: In interface mode, the FortiGate unit can use NPUs to offload flow-based and proxy-based security profiles, reducing CPU usage.
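A quick way to check and (re-)enable offload per tunnel from the CLI (the tunnel name is a placeholder):

diagnose vpn tunnel list name <tunnel-name>
config vpn ipsec phase1-interface
    edit "<tunnel-name>"
        set npu-offload enable
    next
end

In the diagnose output, look at the npu_flag and NPU acceleration fields; auto-asic-offload must also remain enabled on the matching firewall policy for sessions to be offloaded.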
### 4. **Use Multiple VPN Tunnels**
If your FortiGate model supports it, consider setting up multiple VPN tunnels and using load balancing to distribute traffic across them. This could potentially allow you to utilize more bandwidth.
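FortiOS has an aggregate construct for this; a rough sketch, assuming your FortiOS version supports IPsec aggregates and using hypothetical member tunnels "tun1" and "tun2":

config vpn ipsec phase1-interface
    edit "tun1"
        set aggregate-member enable
    next
end
config system ipsec-aggregate
    edit "agg1"
        set member "tun1" "tun2"
        set algorithm L4
    next
end

Note that with L3/L4 hashing a single TCP stream still rides one member tunnel, so this helps aggregate throughput rather than the single-stream number.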
### 5. **Upgrade FortiGate Firmware**
Ensure that your FortiGate devices are running the latest firmware, as newer versions may include performance improvements or bug fixes that could help with your issue.
### 6. **Contact Fortinet Support**
If you've tried everything and are still experiencing issues, consider reaching out to Fortinet Support. They may be able to provide additional insights or suggestions tailored to your specific setup.
Remember, network performance tuning can be a complex process and the best settings often depend on the specifics of your network and the systems you're using. It's always a good idea to make changes incrementally and monitor the results to understand the impact of each change.
Thanks for replying,
I noticed two things that seem weird:
1. The MTU sizes reported by the two FortiGates differ (see the outputs below).
US FortiGate 'diagnose vpn tunnel list'
name=To-BG FGT ver=2 serial=3 xxx.xxx.xxx.xxx:0->xxx.xxx.xxx.xxx:0 dst_mtu=1500
bound_if=5 lgwy=static/1 tun=intf/0 mode=auto/1 encap=none/520 options[0208]=npu frag-rfc run_state=0 role=primary accept_traffic=1 overlay_id=0
proxyid_num=1 child_num=0 refcnt=13 ilast=1 olast=2 ad=/0
stat: rxp=1560 txp=1569 rxb=219534 txb=119830
dpd: mode=on-demand on=1 idle=20000ms retry=3 count=0 seqno=1104
natt: mode=none draft=0 interval=0 remote_port=0
proxyid=To-BG FGT proto=0 sa=1 ref=4 serial=2 auto-negotiate
src: 0:192.168.0.0/255.255.255.0:0
dst: 0:192.168.100.0/255.255.255.0:0
SA: ref=6 options=18027 type=00 soft=0 mtu=1438 expire=41092/0B replaywin=2048
seqno=45d esn=0 replaywin_lastseq=0000045e qat=0 rekey=0 hash_search_len=1
life: type=01 bytes=0/0 timeout=42897/43200
dec: spi=4ad0bd69 esp=aes key=16 35a41770505da4e4e50c69a558cf9f13
ah=sha256 key=32 960df663a03aeef556e4ffb915454b74ee2df56d72852bc371815fc7b1bdf3e1
enc: spi=c6a3721c esp=aes key=16 1d83bd72d10772fc9774ceea04c99e93
ah=sha256 key=32 72cc804d7dcdeba9d2553afed7c0d1ac31d9cedd577dee9a44b583b81aef46c9
dec:pkts/bytes=1120/156148, enc:pkts/bytes=1124/86504
npu_flag=03 npu_rgwy=xxx.xxx.xxx.xxx npu_lgwy=xxx.xxx.xxx.xxx npu_selid=3 dec_npuid=1 enc_npuid=1
run_tally=1
Europe FortiGate 'diagnose vpn tunnel list'
name=To-US FGT ver=2 serial=1 xxx.xxx.xxx.xxx:0->xxx.xxx.xxx.xxx:0 dst_mtu=1492
bound_if=27 lgwy=static/1 tun=intf/0 mode=auto/1 encap=none/520 options[0208]=npu frag-rfc run_state=0 role=primary accept_traffic=1 overlay_id=0
proxyid_num=1 child_num=0 refcnt=13 ilast=0 olast=0 ad=/0
stat: rxp=1740 txp=1740 rxb=109602 txb=108575
dpd: mode=on-demand on=1 idle=20000ms retry=3 count=0 seqno=0
natt: mode=none draft=0 interval=0 remote_port=0
proxyid=To-US FGT proto=0 sa=1 ref=4 serial=1 auto-negotiate
src: 0:192.168.100.0/255.255.255.0:0
dst: 0:192.168.0.0/255.255.255.0:0
SA: ref=3 options=18027 type=00 soft=0 mtu=1422 expire=40956/0B replaywin=2048
seqno=514 esn=0 replaywin_lastseq=00000510 qat=0 rekey=0 hash_search_len=1
life: type=01 bytes=0/0 timeout=42930/43200
dec: spi=c6a3721c esp=aes key=16 1d83bd72d10772fc9774ceea04c99e93
ah=sha256 key=32 72cc804d7dcdeba9d2553afed7c0d1ac31d9cedd577dee9a44b583b81aef46c9
enc: spi=4ad0bd69 esp=aes key=16 35a41770505da4e4e50c69a558cf9f13
ah=sha256 key=32 960df663a03aeef556e4ffb915454b74ee2df56d72852bc371815fc7b1bdf3e1
dec:pkts/bytes=2590/162860, enc:pkts/bytes=2598/242213
npu_flag=00 npu_rgwy=xxx.xxx.xxx.xxx npu_lgwy=xxx.xxx.xxx.xxx npu_selid=0 dec_npuid=0 enc_npuid=0
run_tally=1
2. The configuration is the same, but the FortiGate in Europe reports that NPU offload is not working, while the one in the US works as expected.
Also, I wanted to point out that latency is 148 ms but stable (no packet loss).
Both FortiGates are running firmware v6.4.14 build2093 (GA).
Hi,
As you already noticed, the EU FGT is not offloading traffic, which is also visible here: npu_flag=00.
From the session NPU-flag field: npu_flag=00 means both IPsec SAs are loaded to the kernel; this is also what you see when IPsec offloading is disabled.
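For reference, the npu_flag values as I understand them from that article (worth verifying there):

npu_flag=00   neither SA offloaded; both handled in the kernel
npu_flag=01 / 02   only one direction (one SA) offloaded
npu_flag=03   both SAs offloaded, as on your US unit
npu_flag=20   the SA cannot be offloaded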
Have a look at this, https://community.fortinet.com/t5/FortiGate/Technical-Tip-Ensuring-IPSec-traffic-is-offloaded-for-im... , maybe it helps.