train_wreck
New Contributor III

30E site-to-site VPN - slow, randomly erratic bandwidth

We have two 30Es at separate locations. The main location is behind a 1 Gbps symmetrical AT&T fiber line; the other is on a 75/5 Mbps Mediacom connection. We are trying to get the full bandwidth from the main location to the remote site. A regular iperf transfer from the AT&T site to the remote site (no VPN) yields full bandwidth:

 

------------------------------------------------------------
Client connecting to 173.19.---.---, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[300] local 172.16.16.10 port 1363 connected with 173.19.---.--- port 5001
[ ID] Interval       Transfer     Bandwidth
[300]  0.0- 1.0 sec  8.33 MBytes  69.9 Mbits/sec
[300]  1.0- 2.0 sec  9.69 MBytes  81.3 Mbits/sec
[300]  2.0- 3.0 sec  9.60 MBytes  80.5 Mbits/sec
[300]  3.0- 4.0 sec  9.59 MBytes  80.5 Mbits/sec
[300]  4.0- 5.0 sec  9.71 MBytes  81.5 Mbits/sec
[300]  5.0- 6.0 sec  9.65 MBytes  80.9 Mbits/sec
[300]  6.0- 7.0 sec  9.56 MBytes  80.2 Mbits/sec
[300]  7.0- 8.0 sec  9.70 MBytes  81.3 Mbits/sec
[300]  8.0- 9.0 sec  9.58 MBytes  80.3 Mbits/sec
[300]  9.0-10.0 sec  9.60 MBytes  80.5 Mbits/sec
[300]  0.0-10.2 sec  95.0 MBytes  78.1 Mbits/sec

 

We have now used the GUI wizard to create a "Site-to-site (FortiGate)"-style IPsec VPN, with all defaults left as they are. Running the same iperf test, we get very poor and inconsistent bandwidth:

 

------------------------------------------------------------
Client connecting to 192.168.192.2, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[324] local 172.16.16.10 port 1504 connected with 192.168.192.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[324]  0.0- 1.0 sec  2.92 MBytes  24.5 Mbits/sec
[324]  1.0- 2.0 sec  1.24 MBytes  10.4 Mbits/sec
[324]  2.0- 3.0 sec  2.84 MBytes  23.8 Mbits/sec
[324]  3.0- 4.0 sec  3.55 MBytes  29.8 Mbits/sec
[324]  4.0- 5.0 sec  4.11 MBytes  34.5 Mbits/sec
[324]  5.0- 6.0 sec  4.24 MBytes  35.6 Mbits/sec
[324]  6.0- 7.0 sec  4.89 MBytes  41.0 Mbits/sec
[324]  7.0- 8.0 sec  4.69 MBytes  39.3 Mbits/sec
[324]  8.0- 9.0 sec  4.84 MBytes  40.6 Mbits/sec
[324]  9.0-10.0 sec  3.65 MBytes  30.6 Mbits/sec
[324]  0.0-10.4 sec  37.0 MBytes  29.7 Mbits/sec

 

This is not just iperf; SCP and FTP transfers show the exact same bandwidth. Windows file copies are even worse, though I understand that is to be expected with the SMB protocol. I understand there is overhead with the IPsec protocols, but this feels like something else.... VPN transfers run at less than half the speed of non-VPN transfers.
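Just to sanity-check the overhead argument, here's a rough back-of-envelope calculation of ESP tunnel-mode overhead. This assumes a typical AES-128-CBC + HMAC-SHA1-96 proposal (I have not verified which proposal the wizard actually picked, so treat the constants as illustrative):

```python
# Rough ESP tunnel-mode overhead, assuming AES-128-CBC + HMAC-SHA1-96.
NEW_IP_HDR = 20   # outer IPv4 header added in tunnel mode
ESP_HDR = 8       # SPI + sequence number
IV = 16           # AES-CBC initialization vector
TRAILER = 2       # pad-length + next-header bytes
ICV = 12          # truncated HMAC-SHA1-96 tag

def esp_packet_size(inner_len, block=16):
    """Bytes on the wire for an inner IP packet of inner_len bytes."""
    pad = (-(inner_len + TRAILER)) % block   # pad plaintext to the cipher block size
    return NEW_IP_HDR + ESP_HDR + IV + inner_len + pad + TRAILER + ICV

inner = 1404                                 # a near-full-size inner packet
outer = esp_packet_size(inner)
print(f"{outer} bytes on the wire, {1 - inner / outer:.1%} overhead")
# -> 1464 bytes on the wire, 4.1% overhead
```

So for full-size packets the crypto encapsulation alone should only cost a few percent of throughput, not the 60%+ loss we're seeing.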

 

Before sending the 30E to the remote site, I tested this by setting up a site-to-site VPN in the exact same way (using the wizard) with the two 30E WAN ports directly connected to each other, and I measured ~120 Mbps.

 

So what is happening here? Apparently something along the path is slowing down our VPN. Is there anything we can do to get the lost performance back? CPU usage on either side never rises above ~2%, and mostly stays at 0. We have not configured any AV/inspection profiles, only a basic NAT firewall policy and the VPN.

train_wreck
New Contributor III

Anyone?? From a quick glance at this forum, the majority of posts seem to pertain to bad VPN performance, so this appears to be a common problem.....

Fullmoon

Please take a look, hope it helps:

 

http://kb.fortinet.com/kb....do?externalId=FD36203

Fortigate Newbie

train_wreck
New Contributor III

That documentation is listed for FortiOS 5.2; as stated in the topic of my post, I am on 5.6. It also appears to have a very outdated hardware list. I am on a 30E (once again, as the topic states).

 

In any case, "diag vpn tunnel list" does not show any "npu=" value at all for any of the tunnels. I do have the CLI option "set npu-offload" available under "config vpn ipsec phase1-interface" -> "edit TUNNELNAME", but setting it to "enable" still does not cause any "npu=" value to show up. I also don't think it is saving correctly: after I save the phase1-interface config and run "show", the npu setting does not appear there.
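For reference, the sequence I'm running is roughly this (tunnel name is a placeholder, 5.6 syntax):

config vpn ipsec phase1-interface
    edit "TUNNELNAME"
        set npu-offload enable
    next
end
show vpn ipsec phase1-interface
diag vpn tunnel list

The "show" output after this still has no npu-offload line, and "diag vpn tunnel list" still has no "npu=" field.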

 

What doesn't make sense to me is that when the two 30Es were connected directly together with a site-to-site IPsec tunnel between them, I got over 120 Mbps. After changing nothing except moving one 30E to the remote site, the bandwidth is cut by around 75% and becomes extremely inconsistent.

train_wreck

Anyone else? I will probably move to a different vendor for our evaluations if I can't get an answer here.... we pay a significant amount of money for high bandwidth at our site locations specifically because we transfer large volumes between them, and our current vendor (Cisco) has no problem sustaining ~70-80 Mbps VPN transfers all day long.......

Toshi_Esumi
Esteemed Contributor III

I probably can't provide the answer, but I'm wondering how you placed those 30Es to test in the real network environment. You said the current vendor is Cisco. Did you swap out the Ciscos on both sides to hook the 30Es directly up to the circuits, or put them behind the Ciscos?

If the numbers don't drop to something like 5-7 Mbps, it's likely not a duplex mismatch issue. The level of the numbers you showed indicates that something between the two ends is slowing things down due to bottlenecks, high utilization, high CPU, or something else. I would give them conditions as ideal as possible if you haven't already, like hooking them directly up to the circuits and cutting off everything else during a maintenance window at night.

But if you don't want to spend much time and effort while your Cisco VPN works fine, I don't blame you for just sticking with Cisco. The 30E is not a high-performance device in the FortiGate family; I wouldn't deploy it to a customer who expects 75 Mbps of real VPN performance (datasheet VPN throughput: 75 Mbps).

train_wreck

toshiesumi wrote:

You said the current vendor is Cisco. Did you swap those Cisco on both sides to directly hook up 30Es to the circuits?

Correct. I kept everything else identical.

 

toshiesumi wrote:

I would give them conditions as ideal as possible if you haven't done, like directly hooking up the circuits and cutting off everything else in maintenance window at night

Testing was done after business hours, with the only devices behind each firewall being the testing PCs.

 

toshiesumi wrote:

But if you don't want to spend much time and effort while your Cisco's VPN works fine, I don't blame you to use just Cisco instead. 30E is not high performance devices in FortiGate family. I wouldn't deploy it to our customers if the customer expects 75Mbps real VPN performance (datasheet VPN throughput: 75Mbps).

What baffles me is that when I had the two FortiGates connected directly to one another, I got fantastic VPN performance (~120 Mbps in a basic iperf test, and an FTP transfer at about the same level). It's just very strange that the Ciscos have no problem sustaining bandwidth in the "real world" but the FortiGates do. I am very interested in moving my clients to a UTM platform of some kind (at the moment the Ciscos are an aging mix of 19xx and 8xx ISRs), but VPN performance is critical here. I wouldn't even necessarily mind the FortiGates being lower than 75 Mbps if they were consistent, but the variations in speed seem almost random.

 

To your point about the datasheet numbers: I would be surprised if the IPsec throughput number didn't match the real world, since up to now Fortinet's numbers have been admirably accurate in our testing (e.g., NAT throughput and SSL scanning throughput).

Toshi_Esumi
Esteemed Contributor III

I don't have much to add, but have you tried an iperf UDP test with a proper datagram size (small enough not to be fragmented)?
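One way to pick the datagram size is to work backwards from the path MTU. This is only a sketch: it assumes a 1500-byte path MTU, IPv4, and an AES-CBC + HMAC-SHA1-96 proposal for the ESP overhead, so adjust the constants for your actual setup:

```python
# Largest UDP payload that fits one ESP tunnel-mode packet over a 1500-byte
# path MTU without fragmentation. Cipher assumptions: AES-CBC (16-byte blocks)
# with HMAC-SHA1-96; adjust for your actual phase 2 proposal.
MTU, IP_HDR, UDP_HDR = 1500, 20, 8

def outer_size(inner, block=16):
    """Bytes on the wire for an inner IP packet of `inner` bytes."""
    pad = (-(inner + 2)) % block                 # pad to the cipher block size
    return 20 + 8 + 16 + inner + pad + 2 + 12    # outer IP + ESP hdr + IV + trailer + ICV

# Largest inner packet whose encapsulated form still fits the path MTU.
inner = max(n for n in range(1200, MTU + 1) if outer_size(n) <= MTU)
print(f"iperf -u -l {inner - IP_HDR - UDP_HDR}")
# -> iperf -u -l 1410
```

If UDP at that size runs at full rate while TCP doesn't, that points at fragmentation or MSS problems rather than raw crypto throughput.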

rwpatterson
Valued Contributor III

What are the WAN speed and duplex settings on the 30Es? Make sure they match what the ISP is providing (which is usually NOT auto/auto).

 

Additionally, find out the largest packet that can traverse the WAN between the sites and set the interface MTU accordingly. Fragmenting and reassembling packets consumes a ton of CPU cycles, and those little boxes can use all the help they can get.
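Something along these lines on each side (5.6 syntax from memory; interface name, policy ID, and the MTU/MSS values are just examples, so plug in whatever your path actually supports):

config system interface
    edit "wan1"
        set mtu-override enable
        set mtu 1492
    next
end
config firewall policy
    edit <vpn-policy-id>
        set tcp-mss-sender 1350
        set tcp-mss-receiver 1350
    next
end

Clamping the TCP MSS on the VPN policies keeps full-size TCP segments from ever needing to be fragmented after ESP encapsulation.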

Bob - self proclaimed posting junkie!
See my Fortigate related scripts at: http://fortigate.camerabob.com

train_wreck

OK, so a completely incomprehensible update....

 

I set the "NAT Traversal" option on both FortiGates from "Enabled" to "Forced", and changed nothing else. Now I am getting full-speed VPN transfers:

 

PS C:\WINDOWS\system32> iperf -c 192.168.234.2 -i 1
------------------------------------------------------------
Client connecting to 192.168.234.2, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[300] local 172.16.16.10 port 2986 connected with 192.168.234.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[300]  0.0- 1.0 sec  8.87 MBytes  74.4 Mbits/sec
[300]  1.0- 2.0 sec  9.20 MBytes  77.1 Mbits/sec
[300]  2.0- 3.0 sec  9.15 MBytes  76.7 Mbits/sec
[300]  3.0- 4.0 sec  9.10 MBytes  76.3 Mbits/sec
[300]  4.0- 5.0 sec  9.17 MBytes  76.9 Mbits/sec
[300]  5.0- 6.0 sec  9.24 MBytes  77.5 Mbits/sec
[300]  6.0- 7.0 sec  9.10 MBytes  76.3 Mbits/sec
[300]  7.0- 8.0 sec  9.21 MBytes  77.3 Mbits/sec
[300]  8.0- 9.0 sec  9.14 MBytes  76.7 Mbits/sec
[300]  9.0-10.0 sec  9.13 MBytes  76.6 Mbits/sec
[300]  0.0-10.2 sec  91.3 MBytes  75.0 Mbits/sec

 

This makes absolutely no sense......
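For anyone who wants to try the same thing from the CLI, the equivalent of the GUI change should be something like this (tunnel name is a placeholder):

config vpn ipsec phase1-interface
    edit "TUNNELNAME"
        set nattraversal forcible
    next
end

Forcing NAT-T wraps the ESP traffic in UDP on port 4500 instead of sending raw ESP (IP protocol 50), so my best guess is that something in the path was mishandling or throttling raw ESP.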
