Folks,
Recently my company decided to save money by transitioning away from MPLS and metro Ethernet connectivity to Internet-based site-to-site VPNs. For our stores we are installing Time Warner and Comcast business-class Internet, generally either 100/10 or 100/20, with one location on a Comcast fiber Internet circuit that is 30/30.
So far our experience has not been all that great. Our data center currently has a 100/100 fiber Internet connection (1 Gb to be installed next week), and our 100 Mb is not oversubscribed at this point. Whenever I do a Windows drag-and-drop copy from the data center to a store, I average 1.5-2.5 MB/s on the copy, so basically 12-20 Mbps, despite the fact that the store has a 100 Mb download pipe. If I use FTP over the VPN I get the same speed. However, if I take the same server at the DC, set up a 1-to-1 NAT, and FTP to it from the same store over the Internet (not through the VPN), I see close to the 100 Mb speed we are subscribed to. Interestingly, whenever I copy from the store to the DC I almost always get the full 20 Mb upload speed. Finally, at our 30/30 store I get all 30 Mb in both directions.
My data center has a 500D and all of my stores have 140Ds, so I would think there is enough horsepower to handle the occasional large file copy. We don't generally move a lot of data over our VPNs; it's mainly web-based applications with some video. However, when we need it, it would be nice to have decent file-copy speed. I understand there is some overhead on VPNs, but not to this degree. I have already tried various MTU sizes on the WAN interfaces at both the DC and my lab store.
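For reference, this is roughly how I have been changing it on the FortiGates. A sketch only, with wan1 standing in for whichever port faces the Internet and 1452 as one of the values I tried:

    config system interface
        edit "wan1"
            set mtu-override enable
            set mtu 1452
        next
    end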
At this point I am stumped. Why is my VPN running so slowly? Is it possible that TW and/or Comcast throttle UDP 500/4500 or the ESP protocol? Our CIO and I are just about ready to abort this project and go with fiber in all 90 locations, but I am not quite ready to give up.
Any help would be appreciated.
Mark
Fragmentation is something that I thought might be the issue. However, I am not really sure I know how to check for it. I did try various MTU sizes on the interfaces connected to the Internet on my DR 500D and lab 140D. I tried 1452, 1380, even 1300 (numbers I pulled from the nether regions). No change as far as performance goes; any lower than 1300 and my performance went down.
Unfortunately I don't have a ton of diagnostic experience with the FGTs. If there are commands that will let me see whether fragmentation is my issue, I would appreciate them.
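Would something like the built-in sniffer be the right way to look for fragments? From what I have been able to piece together it would be something like this (wan1 is a placeholder for the Internet-facing port), plus a don't-fragment ping from a Windows host behind the store FortiGate to probe the path MTU (10.1.2.10 is just a placeholder for a host at the store):

    diagnose sniffer packet wan1 'ip[6:2] & 0x3fff != 0' 4

    ping 10.1.2.10 -f -l 1472

On Windows, -f sets the DF bit and -l sets the payload size; the idea would be to step -l down until replies come back and then add 28 to get the path MTU. If that's off base, please correct me.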
I can relate to this issue. We recently began transitioning sites from Comcast cable to their fiber-based service (Internet only). All endpoints are FGTs and no hardware or configuration has been changed (other than new IP addresses, routes, etc.). With the first site moved to 100/100 fiber service, I began to test IPsec site-to-site performance to another site still on Comcast cable at 100/20. The results were disappointing at about 30-40 Mbps (out of 100) and 8-10 Mbps (out of 20). I used both Windows 2012 R2 file copying and also iperf (both Linux and Windows builds) and the results were the same (they slightly improved with parallel iperf streams, but not by much).
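For reference, the iperf runs were nothing exotic. Roughly the following, with the far-end address as a placeholder:

    iperf3 -s                        # on a host at the far end of the tunnel
    iperf3 -c 10.0.20.10 -t 30       # single TCP stream from the near end
    iperf3 -c 10.0.20.10 -t 30 -P 4  # four parallel streams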
Using iperf directly without the IPsec tunnel produced the desired results, so I was convinced that the FGT hardware couldn't handle the IPsec traffic (I blamed the older/slower device at one end). I'll skip over more of the tests I performed using a variety of configurations, but I ultimately landed on the solution: I needed to switch to IPsec over UDP. The version of FortiOS I'm running doesn't support forced NAT-traversal mode, so I bounced the tunnel traffic through the Comcast cable modem as a port forward, which effectively forced the two FGT endpoints to use udp/4500 instead of IP protocol 50. After this single change, I achieved full performance in both directions.
The interesting thing is that I always saw full performance when going from Comcast cable to Comcast cable (which effectively meant 20-22 Mbps in either direction when using their 100/20 service). That 20 Mbps drops to 8-10 when going between Comcast cable and Comcast fiber. Now that more of our sites are on Comcast fiber, I see full IPsec performance without UDP encapsulation.
My conclusion so far is that there's something fishy between Comcast cable and fiber when using IPsec without UDP encapsulation. (Unfortunately I cannot test using other ISPs.) I spoke with a very knowledgeable Comcast engineer who brought our most recent site online; he seems to fully understand the situation and promised to look into it because his curiosity has been piqued. We'll see.
I have also noted that long-running pings (several days) from Comcast fiber to another site on Comcast fiber (physically very close, at under 10 ms) never produce a single dropped ping. However, the same test running over the IPsec tunnels does show a very small amount of lost pings (much less than when using Comcast cable). This may be the nature of the tunnels, and I need to create a local in-house tunnel to determine whether the same periodic loss can still happen even when Comcast's services aren't involved.
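The long-running ping test is nothing more than a plain ping left running and logged. Roughly this from a Linux box at one end, with the far-end address as a placeholder (-D adds timestamps so any gap is easy to spot afterwards):

    ping -D -i 1 10.0.30.1 | tee fiber-to-fiber-ping.log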
1366 has been my sweet spot when adjusting MTU to eliminate fragments (on both CAPWAP tunnels from FortiAPs and IPsec tunnels).
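A related knob, if the goal is just to keep TCP traffic inside the tunnel from fragmenting, is clamping the MSS on the relevant firewall policies. A sketch only; the policy ID is a placeholder and 1326 is simply that 1366 sweet spot minus 40 bytes of IP/TCP header:

    config firewall policy
        edit 7
            set tcp-mss-sender 1326
            set tcp-mss-receiver 1326
        next
    end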
Mike Pruett
Suggestion: if you want to see the pure IPsec tunnel performance, take the ISP out of the picture.
e.g.
user1 computer -- FGT1 <----IPsec----> FGT2 -- user2 computer
Here the IPsec leg would be over a crossover cable. In my own testing, I have never gotten what the FTNT spec claims for IPsec throughput, YMMV. More so on the small SOHO to low/mid-range hardware.
Once again, YMMV.
As for fragments, use UDP with iperf, but set the payload to, say, 1389 bytes and re-test. Again, you will probably not get what the FTNT specifications show.
Example
A FGT100D of mine gave a consistent 80 Mbps average over 10 runs. The FortiOS version also didn't seem to make a difference, nor did the ciphers (3DES vs. AES).
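Something along these lines for the UDP run; the address and the bandwidth cap are placeholders (cap it near the line rate so the test actually pushes the link):

    iperf3 -s                                          # far end
    iperf3 -c 192.168.100.1 -u -b 90M -l 1389 -t 30    # near end: UDP, 1389-byte payload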
PCNSE
NSE
StrongSwan
emnoc wrote: Suggestion: if you want to see the pure IPsec tunnel performance, take the ISP out of the picture... the IPsec leg would be over a crossover cable. [snip]
Did this. Performance screamed as suspected.
nothingel wrote: I ultimately landed on the solution: I needed to switch to IPsec over UDP... I bounced the tunnel traffic through the Comcast cable modem as a port forward, which effectively forced the two FGT endpoints to use udp/4500 instead of IP protocol 50. After this single change, I achieved full performance in both directions. [snip]
I would be most appreciative if you could tell me how you did this encapsulation. Was this done on the Comcast modem or with the FGT? I have the same issue with Time Warner Cable sites as well.
If you're running 5.4.x, there should be a new option to force NAT traversal in the IPsec tunnel using the 'nattraversal' option. If you do this, there's probably no need to play games with another device. In my case, I'm running an older version, so I configured the Comcast-provided cable modem to forward UDP 500 and 4500 to the FGT. There are a few more details, but I'll skip those because I think you stated you're running 5.4.x.
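If memory serves, the syntax would be along these lines; I have not run 5.4 myself, so treat it as a sketch, and the tunnel name is just a placeholder:

    config vpn ipsec phase1-interface
        edit "store-vpn"
            set nattraversal forced
        next
    end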
nothingel wrote: If you're running 5.4.x, there should be a new option to force NAT traversal in the IPsec tunnel using the 'nattraversal' option.
FWIW
NAT traversal does not go over the IPsec tunnel; it comes into play before the IPsec tunnel is even up. If you want to get away from NAT traversal, just use IKEv2.
PCNSE
NSE
StrongSwan
emnoc wrote: NAT traversal does not go over the IPsec tunnel; it comes into play before the IPsec tunnel is even up. If you want to get away from NAT traversal, just use IKEv2.
I'm not sure I follow. My point is that I need to avoid ESP (protocol 50), and forcing the FGTs into NAT-traversal mode so that udp/4500 is used seems to do the job. If there's a better way, I would definitely like to know.
What do you mean by tricking? ESP (protocol 50) is what the IPsec SAs use (aka phase2); phase1 (IKE, or ISAKMP) is where NAT-T and UDP come into play. You can't trick anything as far as this goes. When the two VPN peers first engage in phase1, they learn whether NAT is involved and switch from udp/500 to udp/4500.
http://socpuppet.blogspot...-trouble-shooting.html
With IKEv2, NAT traversal is handled within IKEv2 itself.
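If you want to see what is actually on the wire between the peers, the sniffer will show whether it is ESP or udp/4500 (wan1 is a placeholder for the Internet-facing port):

    diagnose sniffer packet wan1 'esp or udp port 500 or udp port 4500' 4

And flipping a tunnel to IKEv2 is just a phase1 setting (the tunnel name is a placeholder):

    config vpn ipsec phase1-interface
        edit "store-vpn"
            set ike-version 2
        next
    end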
PCNSE
NSE
StrongSwan