zoriax
Contributor

Site to site IPsec traffic very slow

Hi everyone!

 

I'm facing a really strange problem with IPsec VPN. I configured FortiGate-to-FortiGate IPsec tunnels on different models (40F, 80F and 100F), and all of my VPN tunnels are slow; they don't reflect my available bandwidth. I'm on FortiOS 7.0.1.

 

For example I have:
- FortiGate-80F <-> FortiGate-80F with 1Gb/1Gb of bandwidth on both sites. Through my tunnel, I struggle to reach about 200Mb/200Mb.

- FortiGate-100F <-> FortiGate-80F with 1Gb/1Gb on the 100F site and 500Mb/500Mb on the 80F site. Through my tunnel, I struggle to reach about 200Mb/200Mb.

- FortiGate-100F <-> FortiGate-100F with 500Mb/500Mb on both sites. Through my tunnel, I struggle to reach about 200Mb/200Mb.

 

I tried many options to optimise my tunnel but nothing works. I tried:
- Setting the correct MTU on the WAN interface and MSS on the firewall rules (for example an MTU of 1500 and an MSS of 1380) --> no change
- Setting different encryption proposals on my tunnels --> no change
- Disabling ipsec-asic and ipsec-hmac offload --> no change (a sample of these settings is shown below)
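For reference, the MSS clamp and the offload settings I mention look roughly like this on my side (the policy ID and values are just examples):

config firewall policy
   edit 1
      set tcp-mss-sender 1380
      set tcp-mss-receiver 1380
   next
end

config system global
   set ipsec-asic-offload disable
   set ipsec-hmac-offload disable
end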

 

This slowness over IPsec seems to be the same on every model and in every configuration... Here is, for example, one of my phase1 configs:

 

config vpn ipsec phase1-interface
   edit "vpn"
      set interface "wan1"
      set ike-version 2
      set local-gw 1.2.3.4
      set keylife 28800
      set peertype any
      set net-device disable
      set proposal aes128-sha256 aes256-sha256 aes128gcm-prfsha256 aes256gcm-prfsha384 chacha20poly1305-prfsha256
      set dhgrp 19 20
      set nattraversal forced
      set remote-gw 4.3.2.1
      set add-gw-route enable
      set psksecret Secret
   next
end

 

I really need your help. I don't understand what I've missed in the configuration.

 

Thanks!

 

 

Golle
New Contributor II

Why is nattraversal forced? Let it automatically figure out whether NAT is needed, or better yet disable it if you know both sites are directly connected to the internet via public IP addresses. For example (using the tunnel name from your config):
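config vpn ipsec phase1-interface
   edit "vpn"
      set nattraversal disable
   next
end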

 

Also, what's the latency between your sites? Latency has a big impact on TCP performance, and the further apart your sites are, the worse single-session TCP will perform. Splitting the workload into multiple TCP sessions is often necessary to reach higher bandwidth over long distances.
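As a rough rule of thumb, a single TCP session is capped at the window size divided by the round-trip time. With a classic 64 KB window at, say, 22ms of RTT:

65535 B / 0.022 s ≈ 2.98 MB/s ≈ 24 Mbit/s per session

so filling a 500 Mbit/s link at that RTT needs either window scaling (roughly 500 Mbit/s x 0.022 s ≈ 1.4 MB of window) or several parallel sessions.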

 

Also, how are you testing the bandwidth over the tunnel? I personally like to use iperf3 since it lets me test network throughput without relying on factors like hard drives or file transfers. It also lets me run several sessions in parallel to find the true maximum throughput between two sites. For example (the address is just an illustration):
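# on the far side
iperf3 -s

# on the near side: 10 parallel streams for 30 seconds
iperf3 -c 192.168.10.10 -P 10 -t 30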

 

Sending your traffic with a larger TCP window size should also improve throughput, assuming there is no loss on the link. With iperf3 you can request a bigger socket buffer per stream, e.g.:
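iperf3 -c 192.168.10.10 -w 1M -t 30

(the effective window still depends on the OS auto-tuning, but this removes a small default buffer as the bottleneck)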


NSE7
zoriax
Contributor

Hi Golle,

 

Thanks for your reply. NAT-T was set to forced because Fortinet Support told me it would be better to have it forced. But I can disable it, because my router is directly connected to the internet.

 

The latency between two sites (for example between the two FortiGate-100Fs) is around 22ms. For me that's low.

 

With iperf3 and 10 parallel client threads I can reach the max: [SUM] 9.00-10.00 sec 53.0 MBytes 445 Mbits/sec

 

So... it seems to be related to SMB.

 

Any idea how to optimize this protocol through IPsec VPN? On the Windows side I can at least check whether SMB multichannel is enabled, so a transfer can use several TCP connections (PowerShell):
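# is multichannel enabled on the client?
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# which connections is an active SMB transfer actually using?
Get-SmbMultichannelConnection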

Stelios_FTNT

Hi Zoriax,
What about the MTU used on all the equipment between the client and the FortiGate, and also between the FortiGate and the server? Are you sure you're not using jumbo frames on these segments when, as you mention, you have an MTU of 1500 bytes on the FortiGate interfaces?

For SMB transfers we often see jumbo frame configurations on all the equipment along the path, and of course if you haven't done that on your firewall, maybe this is the reason the performance drops no matter which firewall sits in the middle of the path.

 

If this is the root cause of your issue, you need to override the MTU on your incoming and outgoing interfaces and verify that the MTU of the tunnel has also increased, using the following debug (an MTU override example follows the output below):

# diagnose vpn tunnel list
list all ipsec tunnel in vd 4
------------------------------------------------------
name=IPsec_name ver=2 serial=8 10.1.1.1:0->10.2.2.2:0 dst_mtu=9100
bound_if=40 lgwy=static/1 tun=intf/0 mode=auto/1 encap=none/536 options[0218]=npu create_dev frag-rfc accept_traffic=1 overlay_id=1
proxyid_num=1 child_num=1 refcnt=311 ilast=0 olast=0 ad=r/2
stat: rxp=1397676 txp=1201995507 rxb=502387654 txb=680787953
dpd: mode=on-demand on=1 idle=20000ms retry=3 count=0 seqno=2
natt: mode=none draft=0 interval=0 remote_port=0
proxyid=IPsec_name_0 proto=0 sa=1 ref=721 serial=1
src: 0:0.0.0.0/0.0.0.0:0
dst: 0:0.0.0.0/0.0.0.0:0
SA: ref=6 options=27 type=00 soft=0 mtu=9038 expire=1247/0B replaywin=2048
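A minimal sketch of the interface override (the interface name and MTU value are just an example; the connected equipment must accept the larger frames):

config system interface
   edit "port1"
      set mtu-override enable
      set mtu 9000
   next
end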

 

 

zoriax
Contributor

As far as I can see my MTU is correct. I have:

 

dst_mtu=1492

SA: mtu=1422

 

Jumbo frames are of course allowed on the switches between my server and the FortiGate, but if I look at, for example, a Windows machine, the MTU is set to 1500 (the default).

 

For me this looks correct too...

Stelios_FTNT

The debug output you display is just a reflection of your current configuration; it doesn't give any information about potential TCP retransmissions due to a lower MSS in the path. A valid test would be to change/increase the MTU configuration of the interfaces where the IPsec tunnels are bound, and verify whether the performance improves. Once you change the MTU, the same diag command will show different "dst_mtu" and "mtu" values.

If you want to debug deeper and get better visibility, you need to capture a trace on the interface where the traffic comes in and analyze the TCP performance. For example, with the built-in sniffer (the host and port are illustrative; verbosity 6 captures the full frames):
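diagnose sniffer packet port1 'host 192.168.10.10 and tcp port 445' 6 0 a

The output can be converted to pcap (e.g. with Fortinet's fgt2eth script) and opened in Wireshark to look for retransmissions and the negotiated window size.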

zoriax
Contributor

Ok nice! I'm just a bit confused about how I need to change this value.

The MTU on the WAN interface or on the VPN interface? And should I adjust the MSS on these interfaces too, or only on the firewall rules?

 

Thanks

Stelios_FTNT

Assuming the WAN interface is the ingress interface for your traffic, you can change the MTU only on the WAN interface, and this change will be reflected on the IPsec interfaces bound to this physical interface as well. No need to touch the firewall policies. Don't forget to also change the MTU on the egress interface of the firewall, so the same frame size can be sent out of the firewall.

 

Once this is done, you can verify the MTU of your IPsec tunnel with the command above and then run a traffic test to check whether you get better performance. You can also filter the output to a single tunnel:
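diagnose vpn tunnel list name vpn

(using the phase1 name from your config; check the dst_mtu and SA mtu fields)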

zoriax
Contributor

I think I need to decrease the MTU because of my ISP; I can't have an MTU bigger than 1500.

 

After a few tests I tried to adjust the MTU on 2 Windows machines (one on each side of my VPN) but nothing changed.

 

If I ping through the tunnel, my maximum MTU is 1422 (1394 bytes of ICMP payload + 28 bytes of headers). I adjusted my Ethernet card to this value but SMB continues to be very slow (on both sides of the tunnel)... This is roughly what I ran on Windows (the address and interface name are just examples):
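ping -f -l 1394 10.2.2.10                  (largest payload that passes with DF set: 1394 + 28 = 1422)
netsh interface ipv4 set subinterface "Ethernet" mtu=1422 store=persistent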

 

I'm sure it's correct like that, but I'm still confused about this behaviour and how to solve the problem.

 

Could anyone help me with the right process to debug and solve this kind of slowness?


Thanks

zoriax
Contributor

Hi!

 

I think I need to decrease the MTU because my ISP is limited to an MTU of 1500.

 

I tried many configurations but nothing seems to work. My WAN MTU is set to 1492 (reflecting PPPoE, validated by a ping). I think my IPsec tunnel will automatically adjust its MTU based on the WAN interface, so I don't understand where the problem is.

 

If I run Wireshark, I can't see any fragmentation, but maybe it's related to honor-df in the global settings. What do you think about this feature? Should I disable it? As far as I understand, disabling it would let the FortiGate fragment packets even when the DF bit is set:
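config system global
   set honor-df disable
end

(a global setting, to be tested with care, since fragmentation itself can hurt throughput)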


Thanks
