Folks,
Recently my company decided to save money by transitioning away from MPLS and metro Ethernet connectivity to Internet-based site-to-site VPNs. For our stores we are installing Time Warner and Comcast business-class Internet, generally either 100/10 or 100/20, with one location on a 30/30 Comcast fiber circuit.
So far our experience has not been all that great. Our data center currently has a 100/100 fiber Internet connection (1 Gbps to be installed next week), and that 100 Mbps is not oversubscribed at this point. Whenever I do a Windows drag-and-drop copy from the data center to a store, I average about 1.5-2.5 MB/s, so basically 12-20 Mbps, despite the store having a 100 Mbps download pipe. FTP over the VPN gives the same speed. However, if I take the same server at the DC, set up a 1-to-1 NAT, and FTP to it from the same store over the Internet instead of through the VPN, I see close to the 100 Mbps we are subscribed to. Interestingly, whenever I copy from the store to the DC I almost always get the full 20 Mbps upload speed, and at our 30/30 store I get all 30 Mbps in both directions.
My data center has a 500D and all of my stores have a 140D, so I would think there is enough horsepower to handle the occasional large file copy. We don't generally move a lot of data over our VPNs, mainly web-based applications with some video, but when we need it, it would be nice to get a decent file-copy speed. I understand there is some overhead on VPNs, but not to this degree. I have already tried various MTU sizes on the WAN interfaces at both the DC and my lab store.
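In case anyone wants to reproduce the MTU side of this, a DF-bit ping from the FortiGate CLI is one way to see what actually crosses the WAN unfragmented; the sizes and the target below are just examples, not the values I ended up with:

    execute ping-options df-bit yes
    execute ping-options data-size 1472
    execute ping 8.8.8.8

1472 bytes of ICMP payload plus 28 bytes of ICMP/IP headers makes a full 1500-byte packet. Repeating the same test toward a host on the far side of the tunnel with progressively smaller sizes shows how much room the ESP/NAT-T overhead actually leaves (usually somewhere in the 1380-1420 range).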
At this point I am stumped. Why is my VPN running so slowly? Is it possible that TW and/or Comcast throttles UDP 500/4500 or the ESP protocol? Our CIO and I are about ready to abort this project and go with fiber in all 90 locations, but I am not quite ready to give up.
Any help would be appreciated.
Mark
hi,
and welcome to the forums.
This situation is strange. The 100D features a CP8 content processor but lacks a network processor ASIC (NP6). Nevertheless, even CPU-based it should sustain 100 Mbps of IPsec.
Please supply the following info to clarify:
- which firmware versions are you running (HQ/branch)?
- which IPsec encryption is used in phase1, phase2?
- do the 140Ds spike in CPU load on large file transfers?
- what do you get with 'diag vpn ipsec status' on HQ and on the branch? (command sketch below)
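If it helps, these three commands collect most of the above; run them on both units, ideally while a copy is in progress (nothing here is specific to your config):

    get system status
    get system performance status
    diagnose vpn ipsec status

The first shows the firmware build, the second the CPU and session load, and the last the per-crypto-device counters I am after.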
What I am aiming at is that certain ciphers are not hardware accelerated, either because there is no NP6 or because they were introduced after the ASIC was designed. To be on the safe side, AES128 and SHA1 are both done in hardware and perform better than, say, 3DES or AES256.
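If you want to test that, the only change is the proposal on both phases, on both ends of one tunnel. A minimal sketch; the tunnel names are made up, and your existing gateway, PSK, and DH group settings stay as they are:

    config vpn ipsec phase1-interface
        edit "to-branch"
            set proposal aes128-sha1
        next
    end
    config vpn ipsec phase2-interface
        edit "to-branch-p2"
            set proposal aes128-sha1
        next
    end

Both ends have to match, otherwise the tunnel will simply fail to negotiate.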
One more (faint) possibility would be frequent renegotiations due to parameter mismatch - but you would have noticed that in the logs instantly.
Lastly, have you checked with the ISP technical support that your HQ upstream is not throttled for specific protocols? They should be able to tell you with certainty.
Thank you for responding. Responses to your questions:
Both HQ and the branch are running 5.4.0. I have also tested against our DR site, which is a 500D running 5.4.1. For test purposes I tested to my lab site as well (fed by Comcast business class), which is a 140D running 5.4.1.
All my VPNs are using AES256 and SHA256, in both phases.
Just tested, and the CPU on the branch site moved from 24% to about 31% during the transfer.
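If it matters whether a single core is maxing out rather than the overall average, I can watch it live during a transfer; these are just the standard CLI views, nothing specific to my setup:

    get system performance status
    diagnose sys top 2 20

The second command refreshes every 2 seconds and lists the 20 busiest processes.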
Below is the output from diag vpn ipsec status, first HQ and then the branch. On HQ, everything under SOFTWARE was all zeros.
I have not checked with the ISP, mostly because the same ISP provides the 30/30 at the site where I get the full 30 Mbps from HQ. Additionally, I see the same slow behavior from my DR site, which has a 1 Gbps circuit from Alpheus rather than Comcast, so I doubt both providers would be doing the same thing. But I can check with them.
I will set up a tunnel with the cipher suites above and see what I get.
Thanks,
Mark
HQ: <truncated to show only the lines with non-zero entries under each processing type>

NP6_0:
    aes:    8623099264   168079844352
    sha256: 8623099264   168079844352
NPU HARDWARE:
    aes:    777924503    0
    sha256: 777925152    0
    aes:    195345       1306
    sha256: 195343       1304
SOFTWARE:
    (all zeros)

Site: <again, all rows that were only zeros deleted>

CP8_0:
    aes:    124431068    241293474
    sha256: 124431068    241293474
SOFTWARE:
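If it is useful I can also pull the per-tunnel view on the HQ box. As far as I understand, the command below shows whether each SA is actually offloaded to the NP6 (the npu_flag field); the tunnel name is a placeholder:

    diagnose vpn tunnel list name <phase1-name>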
OK, I set up a tunnel with AES128 and SHA1. Not much change.
FWIW, FTNT's published IPsec VPN numbers are not realistic or real-world.
PCNSE
NSE
StrongSwan
Sorry. What is FTNT?
That's short for "Fortinet". We've got to repeat that so often... I think it's the NASDAQ ticker symbol as well.
"FGT" is (my) alias for "FortiGate".
I'm sorry all the obvious solutions proved wrong, but it's an information gain anyway.
30% CPU seems a bit high, but we don't know what else is active (UTM, etc.).
What is still left for experimentation is downgrading to a 'mainstream' version like v5.2.3. Unfortunately both sides run v5.4, which is still in its infancy (nicely put). I haven't heard of IPsec problems in v5.4 specifically, but it would be worth a try. I know this takes some effort and is better not tested in production.
@emnoc: I had to smile when reading that. I agree in principle (marketing always wins over engineering), but really, 30 Mbps of IPsec is a no-brainer for a FGT of any size. And a hardware throttle would work in both directions; the problem at hand looks asymmetrical.
Are you able to see if your packets are fragmented?
Sounds like something is going wrong once you add the extra IPsec overhead.
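A quick way to check is to sniff the WAN interface on either end while a copy runs and look for fragmented ESP, and if it is fragmenting, clamp the TCP MSS on the VPN policies. Rough sketch; the interface name, policy ID, and MSS value are only examples:

    diagnose sniffer packet wan1 'esp' 4

    config firewall policy
        edit 10
            set tcp-mss-sender 1350
            set tcp-mss-receiver 1350
        next
    end

1350 leaves comfortable headroom for the ESP/NAT-T overhead; it can be raised once the real path MTU is known.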