Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
rwdorman
New Contributor III

FG to FG Tunnel Performance

Is anyone else seeing abysmal performance with an FG-to-FG IPsec tunnel? If I run iperf tests between my two sites by just opening the port to the Internet, I get about 70 Mbit/sec. I run the same tests, to the same servers, but over a tunnel, and I get about 17 Mbit/sec. I don't care how much IPsec overhead there is, it shouldn't be a 6x drop in performance.

I've had a ticket open with TAC for two months. First they said it was a known issue in 5.0.7 with out-of-order packet arrival that would be fixed in 5.2; no dice. Now they are saying "well, what do you expect?" (taking that one up with my rep), but I wanted to see if anyone else out there is seeing anything similar or has corrected it in some way. The boxen are both 200Ds.
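To put a rough number on "IPsec overhead": a back-of-the-envelope estimate for ESP tunnel mode (a sketch, assuming AES-CBC with HMAC-SHA1-96; the exact sizes depend on the negotiated proposal) suggests only a few percent on large packets, nowhere near 6x:

```python
# Rough ESP tunnel-mode overhead estimate. Sizes in bytes are
# assumptions for AES-CBC + HMAC-SHA1-96 over IPv4.
OUTER_IP = 20      # new outer IPv4 header added by tunnel mode
ESP_HDR = 8        # SPI + sequence number
IV = 16            # AES-CBC initialization vector
ESP_TRAILER = 2    # pad-length + next-header bytes
ICV = 12           # truncated HMAC-SHA1-96 authentication tag

def esp_overhead(payload_bytes, cipher_block=16):
    """Extra bytes ESP adds to one packet of the given payload size."""
    # Payload + trailer must pad out to a full cipher block.
    pad = (-(payload_bytes + ESP_TRAILER)) % cipher_block
    return OUTER_IP + ESP_HDR + IV + pad + ESP_TRAILER + ICV

payload = 1400
extra = esp_overhead(payload)
print(f"{extra} extra bytes on {payload} -> "
      f"{100 * extra / (payload + extra):.1f}% overhead")
# e.g. "64 extra bytes on 1400 -> 4.4% overhead"
```

So even with generous assumptions, per-packet encapsulation alone can't explain 70 Mbit/sec dropping to 17.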

-rd 2x 200D Clusters 1x 100D

1x 60D FortiOS 5.2 FortiAP 221C FAZ 200D

5 REPLIES
ilucas
New Contributor

What's your resource utilization like? How many P1 algorithms do you have set on this? What's your Internet bandwidth on those endpoints? I have two 200Ds that I can do some testing on over the weekend to see how it performs for me.

----

FG 200B/30D/60D/80D/100D/200D/300D

FE 200D

emnoc
Esteemed Contributor III

I think you're wasting your time, personally. But if you wanted to measure true IPsec performance, you should have connected these back to back and tested, removing the Internet from your testing, IMHO.

Q: Since you opened a ticket with TAC, what number would have been ideal in your iperf testing?
Did you measure any L4 statistics or options between the two iperf hosts?
Was the 17 Mbps measured bidirectionally?
Did you do both TCP and UDP testing?
(Now for some dumb questions.) Was any other traffic removed during the testing period, so that only the iperf traffic existed?
What version of iperf/jperf was used, and what OS (Windows/Linux/macOS)? I've found the host OS has a big influence on performance numbers.
Are the links asymmetric or symmetric?
What was the initial window buffer size between the two hosts? Did you monitor the window buffer size for optimized performance?
If you upgraded or even downgraded the two FGTs, did any improvement occur?

FWIW: I personally don't believe TAC's out-of-order excuse, but you could measure and record this yourself using an open-source tool like tcptrace: http://www.tcptrace.org/
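For example, you could capture during the iperf run and let tcptrace count the reordered segments. Something like this (interface name and port are just examples; check the tcptrace man page for your version's flags):

```shell
# On the iperf server, capture the test traffic to a file:
tcpdump -i eth0 -s 0 -w iperf.pcap port 8000

# tcptrace's long per-connection output includes an
# "out of order" packet counter for each direction:
tcptrace -l iperf.pcap | grep -i "out of order"
```

If that counter stays near zero over the tunnel, the out-of-order explanation doesn't hold up.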

PCNSE 

NSE 

StrongSwan  

rwdorman
New Contributor III

Punchline: NPU offload was off by default in the configuration, and turning it on helped. I've answered some of your questions below anyway, as they somewhat speak to the overall experience you often have with TAC: blame user, blame software, blame bug, blame upgrade, rinse, repeat. I do appreciate the input, and should I have issues like this in the future I'll refer back to your test method suggestions.

Ideal number? I didn't have a specific number. My expectation was that in an apples-to-apples test I would see no more than a 10% difference in speed when accounting for IPsec encapsulation and encryption overhead.

The syntax I used for the iperf testing was:

Server: sudo iperf -s -p 8000 -i 2
Client: iperf -c <server> -t 30 -P 8 -p 8000

As to buffer sizes, multiple types of testing, etc., no, I didn't go that far. In a 2014 world I expect an IPsec tunnel to be little more than a bump in the road, and consistently performing the exact same test, regardless of parameters, was good enough to make a comparison. I performed the tests on 5.0.6, 5.0.7, and 5.2.
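For anyone who lands on this thread later: the fix looked roughly like this from the CLI (sketched from memory on 5.2; the tunnel name is just an example, and the exact option placement may differ by FortiOS release, so check your version's CLI reference):

```
config vpn ipsec phase1-interface
    edit "site2site"          # your tunnel name here
        set npu-offload enable
    next
end
```

With offload disabled, all ESP encryption is done on the main CPU instead of the NP processor, which would explain the throughput cliff.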

-rd 2x 200D Clusters 1x 100D

1x 60D FortiOS 5.2 FortiAP 221C FAZ 200D

emnoc
Esteemed Contributor III

I would suggest UDP, to avoid any TCP buffer and windowing issues. The reason I say it's a waste of time: packet reordering is not something you can control or fix. Do a traceroute with a query probe count of 5-10 probes and you'll see that the paths shown in the traceroute can change inside some foreign carrier's network. A FGT delivering 17 Mbit/sec is not a fault of the FGT; that firewall should easily achieve 400-500 Mbps of IPsec traffic or more, even without looking at the specs.
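A UDP run with iperf would look something like this (iperf2 syntax to match the commands above; -b sets the offered rate, which you'd set above the expected tunnel throughput so loss becomes visible):

```shell
# Server side: UDP listener, report every 2 seconds
iperf -s -u -p 8000 -i 2

# Client side: offer 100 Mbit/sec of UDP for 30 seconds;
# the server report shows loss and jitter, which TCP
# windowing effects can't mask
iperf -c <server> -u -b 100M -t 30 -p 8000
```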

PCNSE 

NSE 

StrongSwan  

rwdorman
New Contributor III

I completely agree; traces for the in-tunnel traffic vs. the out-of-tunnel traffic are exactly the same. The only difference was the addition of the IPsec tunnel, so I have a hard time believing this was not a FortiGate issue. Also, as mentioned, TAC said they had a known bug causing out-of-order packets, so it wasn't something I could really control. Ah well, fixed now. On to the next 5.2 bug (dial-up IPsec clients).

-rd 2x 200D Clusters 1x 100D

1x 60D FortiOS 5.2 FortiAP 221C FAZ 200D
