I'm facing a really strange problem with IPsec VPN. I configured FortiGate-to-FortiGate IPsec tunnels on different models (40F, 80F and 100F), and all of my VPN tunnels are slow; they don't reflect my available bandwidth. I'm on FortiOS 7.0.1.
For example I have:
- FortiGate-80F <-> FortiGate-80F with a bandwidth of 1Gb/1Gb on both sites. Through my tunnel I reach, with difficulty, about 200Mb/200Mb.
- FortiGate-100F <-> FortiGate-80F with a bandwidth of 1Gb/1Gb on the 100F site and 500Mb/500Mb on the 80F site. Through my tunnel I reach, with difficulty, about 200Mb/200Mb.
- FortiGate-100F <-> FortiGate-100F with a bandwidth of 500Mb/500Mb on both sites. Through my tunnel I reach, with difficulty, about 200Mb/200Mb.
I tried many options to optimise my tunnels but nothing works. I tried:
- Setting the correct MTU on the WAN interface and MSS on the firewall policies (for example an MTU of 1500 and an MSS of 1380) --> no change
- Different encryption proposals on my tunnels --> no change
- Disabling ipsec-asic and ipsec-hmac offload --> no change
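For what it's worth, the MSS of 1380 mentioned above lines up with the usual back-of-the-envelope IPsec overhead budget. A minimal sketch of that arithmetic (the overhead figures are rough assumptions that vary with cipher, IKE version and NAT-T, not values reported by FortiOS):

```python
# Why an MSS of ~1380 pairs with a 1500-byte MTU once IPsec overhead is
# subtracted. Overhead figures below are approximations, not FortiOS output.

MTU = 1500
IP_HEADER = 20        # inner IPv4 header
TCP_HEADER = 20       # inner TCP header
ESP_OVERHEAD = 73     # outer IP + ESP header/IV/padding/trailer/ICV (approx.)
NAT_T_UDP = 8         # extra UDP header when NAT traversal is active

inner_mtu = MTU - ESP_OVERHEAD - NAT_T_UDP  # room left for the inner packet
mss = inner_mtu - IP_HEADER - TCP_HEADER    # largest segment avoiding fragmentation

print(f"usable inner MTU ~ {inner_mtu}, safe MSS ~ {mss}")  # 1419 and 1379
```

So a conservative MSS around 1380 is what you'd expect when you budget for the worst-case ESP and NAT-T overhead on a 1500-byte path.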
This slowness on IPsec seems to be the same on every model and every configuration... Here is, for example, one of my phase1 configs:
config vpn ipsec phase1-interface
    edit "vpn"
        set interface "wan1"
        set ike-version 2
        set local-gw 188.8.131.52
        set keylife 28800
        set peertype any
        set net-device disable
        set proposal aes128-sha256 aes256-sha256 aes128gcm-prfsha256 aes256gcm-prfsha384 chacha20poly1305-prfsha256
        set dhgrp 19 20
        set nattraversal forced
        set remote-gw 184.108.40.206
        set add-gw-route enable
        set psksecret Secret
    next
end
I really need your help. I don't understand what I've missed in the configuration.
Why is nattraversal forced? Let it automatically figure out whether NAT is needed, or better yet disable it if you know both sites are directly connected to the internet via public IP addresses.
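For reference, that knob sits in the same phase1 block you already have; a minimal sketch, assuming the tunnel is still named "vpn" (use "disable" instead of "enable" if both peers have public IPs):

config vpn ipsec phase1-interface
    edit "vpn"
        set nattraversal enable
    next
end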
Also, what's the latency between your sites? Latency has a big impact on TCP performance, and the further away your sites are, the worse a single TCP session will perform. Splitting the workload into multiple TCP sessions is often necessary to reach higher bandwidths over long distances.
Also, how are you testing the bandwidth over the tunnel? I personally like to use iperf3, since it lets me test network throughput without depending on factors like hard drives or file transfers. It also lets me run several sessions in parallel to find the true maximum throughput between two sites.
Sending your traffic with a larger TCP window size should also improve throughput, assuming there is no loss on the link.
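To put numbers on the latency point: a single TCP stream can carry at most one window per round trip, so window size and RTT together cap throughput. A quick sketch (the 2 MiB window and 20 ms RTT are illustrative assumptions, not measurements from your sites):

```python
# Back-of-the-envelope check of how RTT caps single-stream TCP throughput.

def max_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """A single TCP stream carries at most one window per round trip."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

def required_window_bytes(target_mbps: float, rtt_ms: float) -> float:
    """Window (bandwidth-delay product) needed to sustain target_mbps."""
    return target_mbps * 1e6 / 8 * (rtt_ms / 1000)

# Example: a 2 MiB receive window over a 20 ms path
print(max_throughput_mbps(2 * 1024 * 1024, 20))  # ~839 Mbit/s ceiling
# Window needed for 1 Gbit/s at 20 ms RTT
print(required_window_bytes(1000, 20))           # 2.5e6 bytes (2.5 MB)
```

If the measured RTT between your sites is high and the sender's window is modest, a ceiling around 200 Mbit/s per stream is entirely plausible regardless of firewall model, which is why parallel iperf3 streams are a useful test.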
Hi Zoriax, what about the MTU used on all the equipment between the client and the FortiGate, and also between the FortiGate and the server? Are you sure you're not using jumbo frames on those segments when, as you mention, you have an MTU of 1500 bytes on the FortiGate's interfaces?
For SMB transfers we often see jumbo-frame configurations on all the equipment along the path, and if you haven't done the same on your firewall, that may be why performance drops no matter which firewall model sits in the middle of the path.
If this is the root cause of your issue, you need to override the MTU of your incoming and outgoing interfaces and verify that the MTU of the tunnel is also increased, using the following debug:
diagnose vpn tunnel list
The debug output you display is just a reflection of your current configuration, which doesn't give any information about potential TCP retransmissions due to a lower MSS in the path. A valid test would be to increase the MTU configuration of the interfaces where the IPsec tunnels are bound and verify whether performance improves. Once you change the MTU, the same diag command will show different "dst_mtu" and "mtu" values.
If you want to debug deeper and get better visibility, capture a trace on the interface where the traffic arrives and analyze the TCP performance.
Assuming the WAN interface is the ingress interface for your traffic, you can change the MTU on the WAN interface only, and the change will be reflected on the IPsec interfaces bound to that physical interface as well. No need to touch the firewall policies. Don't forget to also change the MTU on the egress interface of the firewall so the same number of bytes can be sent out of the firewall.
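What's being described, as a config sketch (the interface names and the 9000-byte value are placeholders; both must match what your hardware and the rest of the path actually support):

config system interface
    edit "wan1"
        set mtu-override enable
        set mtu 9000
    next
    edit "internal"
        set mtu-override enable
        set mtu 9000
    next
end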
Once this is done, you can verify the MTU of your IPsec tunnel with the above command and then you can do a traffic test to verify if you get better performance.
I think I actually need to decrease the MTU, because my ISP is limited to an MTU of 1500.
I tried many configurations but nothing seems to work. My WAN MTU is set to 1492 (reflecting PPPoE, validated by ping). I think my IPsec tunnel will automatically adjust its MTU based on the WAN interface, so I don't understand where the problem is.
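The 1492 figure checks out; on Linux it is typically validated with something like `ping -M do -s 1464 <peer>` (DF bit set; flag syntax differs on other OSes). The arithmetic behind those numbers, as a sketch:

```python
# PPPoE eats 8 bytes of the 1500-byte Ethernet MTU, and a DF-bit ping that
# validates the result must use a payload of MTU - 28 bytes
# (20-byte IP header + 8-byte ICMP header).
ETHERNET_MTU = 1500
PPPOE_HEADER = 8
wan_mtu = ETHERNET_MTU - PPPOE_HEADER  # 1492, matching the WAN setting
ping_payload = wan_mtu - 20 - 8        # 1464-byte payload for the DF ping
print(wan_mtu, ping_payload)           # 1492 1464
```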
If I run Wireshark, I can't see any fragmentation, but maybe it's related to honor-df in the global settings. What do you think about this feature? Should I disable it?