I tested this feature with a colleague yesterday. We enabled VXLAN encapsulation on the phase1-interface and created a bridge interface/software switch containing a physical port and the VXLAN interface. It worked like a charm, and once we enabled vlanforward on both the physical interface and the IPsec interface, we could also send tagged VLANs over the tunnel. This is a feature we have been waiting for, since most other solutions for carrying L2 traffic between remote sites/datacenters usually come with a hefty price tag.
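For reference, the setup described above can be sketched in FortiOS CLI roughly like this. All names (the phase1 interface, the switch, the ports) and the peer address are placeholders, and the syntax is from the 5.4-era CLI as shown in Fortinet KB FD38614, so double-check it against your firmware:

```
config vpn ipsec phase1-interface
    edit "vxlan-p1"
        set interface "wan1"
        set remote-gw 203.0.113.2
        set encapsulation vxlan
        set encapsulation-address ike
        set psksecret <key>
    next
end
config system switch-interface
    edit "l2-bridge"
        set member "port2" "vxlan-p1"
    next
end
config system interface
    edit "port2"
        set vlanforward enable
    next
    edit "vxlan-p1"
        set vlanforward enable
    next
end
```

With vlanforward enabled on both members, tagged frames entering "port2" are bridged through the tunnel unchanged, which is what let us stretch VLANs between the sites.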
In our customer case, we will have two IPsec/VXLAN tunnels (one active, one redundant) on different 1 Gbit ISP/WAN connections between two HA clusters located in different datacenters, effectively bridging selected VLANs at the two sites together.
The lab equipment consists of two FortiGate 400Ds running FortiOS 5.4.1 and two Alcatel-Lucent 6850E switches (one on each side, obviously).
We will continue the testing tomorrow. The next step is to do some serious load testing on this setup over time and to check for problems with stability, latency, etc. It needs to scale properly and be rock solid to be of any interest, of course.
Will post our findings here.
Laters.
Richie
NSE7
We're way too busy with other stuff right now, unfortunately... Will check if we can try something out with our lab gear during the upcoming weeks. The customer case is gone, so we kind of lack the motivation...
Richie
NSE7
Did any of you (kallbrandt?) ever get a chance to test VXLAN over IPsec performance with 5.4.2 or later? The performance bug that was supposed to be the issue, 369137, is listed as fixed in 5.4.2. (Though from some posts like https://forum.fortinet.com/tm.aspx?m=15069 it may have resurfaced in 5.6.x.)
I'm again considering VXLAN over IPsec between our two locations (under 5.4.6) using the example of http://kb.fortinet.com/kb/documentLink.do?externalID=FD38614 but would love to hear that somebody else has already tested the performance before I set it up.
Hello,
we are testing VXLAN between a FortiGate 50E and a 500D, both running 5.4.4. It seems to work: a client on one side can ping one on the other. We have, however, another problem. We plan to use it for recovery after a VM server (Windows Server) failure in our branch office: we would restore that VM on our hypervisor at headquarters, and thanks to VXLAN, clients in the branch office could still reach their server at L2. The problem is that the default gateway configured on the restored server is the IP address of the branch-office FortiGate, which does not respond to the server (timeout), so the server cannot get out to the internet, for example. The server has to have its internet connection through a static IP address that lives in the branch office. Have you ever dealt with such a scenario?
thx
Hi, Veeam can handle that:
https://helpcenter.veeam....rk_mapping.html?ver=95
2 FGT 100D + FTK200
3 FGT 60E FAZ VM some FAP 210B/221C/223C/321C/421E
Has anyone got this working with an acceptable level of performance?