Hello,
I have a really strange problem with my Azure tunnel on a Fortigate 60E (FortiOS 5.6.8). I configured the tunnel according to this article:
https://cookbook.fortinet...pn-microsoft-azure-54/
I have set tcp-mss sender and receiver to 1350. The tunnel is stable, but throughput from the Fortigate LAN is very poor.
I ran an iperf3 test and the result is 3-5 Mbit/s. Our ISP provides us with 100/100 Mbit/s.
The weirdest thing is that when I am connected to the Fortigate LAN and bring up an SSL-VPN connection (FortiClient SSL-VPN to the same Fortigate), so the traffic technically goes through the SSL-VPN tunnel but all communication is still handled by this one Fortigate, I get 30-35 Mbit/s. What I have tried:
1) Set up a new Azure tunnel - same result
2) Removed the tcp-mss sender/receiver settings from the IPv4 policies to Azure - same result
Has anyone seen this problem? It looks like something is wrong on the LAN interface. MTU?
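For reference, before test 2 the tcp-mss clamping was applied on the IPv4 policies to Azure roughly like this (policy ID 5 is only a placeholder, not the real policy number):

config firewall policy
    edit 5
        set tcp-mss-sender 1350
        set tcp-mss-receiver 1350
    next
end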
For a bandwidth issue over an IPsec VPN, there are many factors you need to eliminate one by one.
As a start I would suggest one thing to test: adjust the MTU size on the physical interface of the FGT that this tunnel goes out of. I don't know if it's possible on the Azure side, but try the same there as well. The tcp-mss adjustment works only on TCP traffic. You might be running only TCP tests with iPerf, in which case each packet should already be much shorter than the minimum MTU on the path. But since you said SSL VPN (TCP) was performing much better, I suspect it might be related to the UDP/ESP (IPsec) traffic over the internet. Or at least you can eliminate that factor.
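To separate that factor, you could run iperf3 in both modes, TCP and UDP, against a host on the Azure side (the address and rate below are only placeholders for a test VM and a target bandwidth):

iperf3 -c 10.1.0.10 -t 30
iperf3 -c 10.1.0.10 -u -b 50M -t 30

If the UDP run is also stuck at a few Mbit/s, MSS clamping is not the limiting factor and the problem is more likely in the ESP path itself.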
Lowering the outgoing WAN interface MTU lets the FGT fragment packets in case the packet sizes are bigger than the MTU. The opposite direction is dictated by the MTU setting on the Azure side. The LAN-side MTU setting is for PMTUD by applications on the Azure side, which works on top of the VPN layer.
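One quick check before changing anything: probe the raw path MTU from the FGT to the Azure gateway's public IP with DF-bit pings (the address is a placeholder; 1472 bytes of ICMP data makes a 1500-byte IP packet):

execute ping-options df-bit yes
execute ping-options data-size 1472
execute ping <azure-gateway-public-ip>

If that fails but a smaller data size (say 1372) goes through, some hop in between can't carry full 1500-byte packets, and lowering the interface MTU as below should help.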
For test purposes I wouldn't care whether the MTU is optimal or not, so let's say 1400, which should be short enough for any intermediate carrier to pass the traffic without fragmentation. I think the commands would look like below:
config system interface
    edit PHY_INTERFACE
        set mtu-override enable
        set mtu 1400
    next
end
I have no knowledge of Azure, so you'll have to figure out how to do the same on that side yourself.
BTW, the IPsec tunnel MTU is driven by the interface MTU automatically; you don't have any direct control over it. That's why you need to change the MTU on the interface.
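If you want to confirm what the tunnel MTU ends up being after the change, besides the ifconfig output below you can also check the tunnel itself (the tunnel name here is just the one from our example):

diagnose vpn tunnel list name OfficeBKP-VPN

The mtu value in that output should drop together with the interface MTU.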
Below is our example of an interface MTU and the tunnel MTU driven by it. Please ignore that the tunnel is oddly going out of the "internal1" interface; in our setup it really does go out of that interface:
xxx-fg1 # fnsysctl ifconfig internal1
internal1   Link encap:Ethernet  HWaddr 00:09:0F:09:FE:03
            UP BROADCAST RUNNING PROMISC ALLMULTI MULTICAST  MTU:1500  Metric:1
            RX packets:31742093 errors:0 dropped:0 overruns:0 frame:0
            TX packets:25140134 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:23363609075 (21.8 GB)  TX bytes:6072201816 (5.7 GB)

sea5601-fg1 # fnsysctl ifconfig OfficeBKP-VPN    <--- VPN interface name
OfficeBKP-VPN   Link encap:Unknown
            inet addr:10.242.128.130  Mask:255.255.255.255
            UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1422  Metric:1
            RX packets:77057 errors:0 dropped:0 overruns:0 frame:0
            TX packets:155683 errors:1 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:11304894 (10.8 MB)  TX bytes:10212501 (9.7 MB)

So the key is to let the FGT fragment packets BEFORE encryption.