Support Forum
kallbrandt
Contributor II

Tested - VXLAN over IPsec feature on 5.4.1

I tested this feature with a colleague yesterday. We enabled the VXLAN encapsulation on the phase1-interface and created a bridge interface/switch containing a physical port and the VXLAN interface. It worked like a charm, and when we enabled vlanforward on the physical interface and the IPsec interface, we could also send tagged VLANs over the tunnel. This is a feature we have been waiting for, since most other solutions for handling L2 traffic between remote sites/datacenters usually come with a hefty price tag.
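
For anyone who wants to try this, the configuration is roughly the following (a minimal sketch with made-up names, addresses and proposals; exact option names may differ slightly between builds, so check the 5.4 CLI reference):

config vpn ipsec phase1-interface
    edit "vxlan-pri"
        set interface "wan1"
        set encapsulation vxlan            <- VXLAN-in-IPsec encapsulation on the tunnel
        set encapsulation-address ike
        set proposal aes128-sha256
        set remote-gw 203.0.113.2
        set psksecret <psk>
    next
end
config vpn ipsec phase2-interface
    edit "vxlan-pri-p2"
        set phase1name "vxlan-pri"
    next
end
config system switch-interface
    edit "l2-bridge"
        set vdom "root"
        set member "port3" "vxlan-pri"     <- bridge the LAN port and the tunnel interface
    next
end
config system interface
    edit "port3"
        set vlanforward enable             <- needed to pass tagged VLANs through
    next
    edit "vxlan-pri"
        set vlanforward enable
    next
end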

 

In our customer case, we will have two IPsec/VXLAN tunnels (one active, one redundant) on different 1 Gbit ISP/WAN connections between two HA clusters located in different datacenters, effectively bridging selected VLANs in the two sites together.

 

The lab equipment consists of two 400Ds running 5.4.1 and two Alcatel-Lucent 6850E switches (one on each side, obviously).

 

We will continue the testing tomorrow. The next step is to do some serious load testing on this setup over a longer period and to check for problems with stability, latency, etc. It needs to scale properly and be rock solid to be of any interest, of course.

 

Will post our findings here.

 

Laters.

 

Richie

NSE7

pcraponi
Contributor II

Good to know... Congrats... :)

 

 

Paulo R., NSE8

Regards, Paulo Raponi

kallbrandt

We set up a second, redundant IPsec/VXLAN tunnel (with its monitor pointed at the first tunnel), configured the same way as the first tunnel, and then tested WAN link failure. An important parameter here is the DPD settings on the primary tunnel. We ran the tests with failover after three failed polls five seconds apart, so about 15 seconds before the second tunnel comes up. We can tune it down of course, but you don't want to risk a link-flapping situation either. The second tunnel will go down as soon as the first comes up again. It works, but it takes some time, probably due to the MAC address move. The switches have MAC-spoofing protection that probably matters too, but we haven't had time to check the logs yet. Failover time is approx. 30 seconds.
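
Roughly, the failover knobs sit on the phase1 interfaces and look like this (a sketch with made-up tunnel names; the three-polls-at-five-seconds values are the ones described above):

config vpn ipsec phase1-interface
    edit "vxlan-pri"
        set dpd on-idle
        set dpd-retryinterval 5          <- poll the peer every 5 seconds
        set dpd-retrycount 3             <- give up after 3 failed polls (~15 s)
    next
    edit "vxlan-bak"
        set monitor "vxlan-pri"          <- bring this tunnel up only while the primary is down
    next
end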

 

We will try to add the second tunnel interface to the bridge interface/switch of the first tunnel tomorrow and see if that works. If so, there is no MAC address move between ports :)

 

Also some performance testing tomorrow.

Richie

NSE7

kallbrandt

The redundancy setup worked like a charm, and putting both IPsec tunnels in the same virtual switch as the VXLAN interface also worked and lowered the failover time.
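
In practice that just means listing both tunnel interfaces as members of the same software switch (sketch, same made-up names as before):

config system switch-interface
    edit "l2-bridge"
        set member "port3" "vxlan-pri" "vxlan-bak"
    next
end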

 

However....

 

The performance is sub-par, to say the least. We did some testing using iperf. With two computers on each side of the VXLAN tunnel, we got some 83 Mbit/s. We tried a file transfer too, same speed. That's something like 1/170th of the 400D's max IPsec throughput...
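
For reference, the numbers above come from simple iperf runs between the test machines, along these lines (illustrative only; durations and stream counts are a matter of taste):

# receiver on one side of the tunnel
iperf -s

# sender on the other side: 60-second run, 4 parallel streams
iperf -c 192.0.2.10 -t 60 -P 4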

 

When sending tagged packets through the tunnel (each computer connected to an access port with the default VLAN in the switch, and the switch connected to the firewall with the VLAN tagged on the port), the speed dropped to approx. 32 Mbit/s. That's down to 1/450th of the max IPsec throughput... Fragmentation or packet size shouldn't be an issue, since the 802.1Q information is located in the inner Ethernet header, inside the VXLAN encapsulation. But there is no documentation on how the FortiGate handles this, or whether you are supposed to tweak the MTU on the tunnel interface or on the physical interface.
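
If MTU tweaking does turn out to be needed, the obvious knob is the interface MTU (a sketch only; it is not clear whether an MTU override on the tunnel interface is honored, or whether the WAN link will even carry frames this large):

config system interface
    edit "wan1"
        set mtu-override enable
        set mtu 1600                 <- headroom for VXLAN plus ESP encapsulation overhead
    next
end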

 

The virtual switch may very well be the culprit here. It may be possible to use the hardware switch instead; we will have to check that out before dropping this altogether.

 

Unless the hardware switch works some miracle that solves everything, this very promising setup is a total no-go as it is right now. It will work for bridging two remote offices with a bunch of computers together, but hardly for datacenter traffic that needs at least 1 Gbit/s of throughput. Too bad.

 

Max throughput over a plain IPsec tunnel is 14 Gbit/s according to the 400D docs, and in any case faster than we could measure: we got wire speed in every test over plain IPsec.

Richie

NSE7

tanr
Valued Contributor II

Do let us know how it goes with the hardware switch.  

 

I appreciate your posting the information from these tests.

200B
New Contributor

Thanks for the good work, kallbrandt! 

kallbrandt

It doesn't seem to be possible to use anything other than the software switch in this setup. My colleague did some more testing this morning. Too bad. Stuck with the bad performance then, it seems.

We can solve the customer case with plain IPsec and NAT if needed, so no biggie for us right now.

 

However, we will create a support case as soon as the 400Ds are registered, just to try to get answers on how this is supposed to work, whether there are any tuning parameters we can try, and whether there is anything planned for this feature in 5.4.2 or later in the 5.4.x track.

 

The feature is actually kick-ass for a lot of scenarios, so let's just hope the performance issue is fixable over time.

 

Will post the outcome, although it might take a while.

 

Over and out for now.

Richie

NSE7

rcarreras
New Contributor III

Awesome post!!! 

 

Please follow up with the next tests.

MikePruett

Looking forward to seeing your follow-up regarding this issue. Obviously, losing that much performance isn't going to be doable, so hopefully there is a good resolution to this.

Mike Pruett Fortinet GURU | Fortinet Training Videos
rcarreras
New Contributor III

Hello kallbrandt,

 

Any news on your support case with Fortinet about the performance problems?

 

Thank you!!

 

R
