Hi!
I was wondering if any of you could help me out making this work.
I'm running 2 FortiGate-VM64 appliances on an ESXi server, connected through 2 VyOS routers to emulate the WAN.
FortiOS version 5.6.3.
The tunnel is up, but somehow, ARP requests are not getting through:
FortiGate-VM64 # diag netlink brctl name host VXLAN-INTERFACE
show bridge control interface VXLAN-INTERFACE host.
fdb: size=2048, used=3, num=3, depth=1
Bridge VXLAN-INTERFACE host table
port no  device  devname  mac addr           ttl  attributes
1        6       port4    00:0c:29:d6:62:ab  51   Hit(51)
2        17      VXLAN    5e:9f:e8:0f:21:a6  0    Local Static
1        6       port4    00:0c:29:0f:47:91  0    Local Static
interfaces=[any]
filters=[host 10.0.11.100 and arp]
15.470412 port4 in arp who-has 10.0.11.101 tell 10.0.11.100
15.470449 VXLAN out arp who-has 10.0.11.101 tell 10.0.11.100
16.487104 port4 in arp who-has 10.0.11.101 tell 10.0.11.100
16.487121 VXLAN out arp who-has 10.0.11.101 tell 10.0.11.100
17.511047 port4 in arp who-has 10.0.11.101 tell 10.0.11.100
17.511059 VXLAN out arp who-has 10.0.11.101 tell 10.0.11.100
18.535167 port4 in arp who-has 10.0.11.101 tell 10.0.11.100
18.535191 VXLAN out arp who-has 10.0.11.101 tell 10.0.11.100
here's my config:
edit "port2" set vdom "root" set ip 84.84.85.2 255.255.255.0 set allowaccess ping set type physical set alias "WAN1" set role wan set snmp-index 2 next
edit "VXLAN" set vdom "root" set type tunnel set snmp-index 12 set interface "port2" next
config vpn ipsec phase1-interface
    edit "VXLAN"
        set interface "port2"
        set peertype any
        set proposal des-md5
        set encapsulation vxlan
        set encapsulation-address ipv4
        set encap-local-gw4 84.84.85.2
        set encap-remote-gw4 84.84.86.2
        set remote-gw 84.84.86.2
        set psksecret ENC OWif8UtnjVfxFQDRN8ajAv/Ten/+O8xoWmIRA1fylLgeGljO1jb+irdNGhDpwlOJD5SJzW4uycM4fDZ2ISwWZUzCCeGKS2q2Df8PQ+qz4Q3pKS4FRd1/IpIYC1dcnnpsEixK5NuYyThTKHc9AoCZF0FT3akcZjevsHKb9m+CV/6VNE9ZY6mDy9bwcDrc7mSiie+mIg==
    next
end
config vpn ipsec phase2-interface
    edit "VXLAN_ph2"
        set phase1name "VXLAN"
        set proposal des-md5
    next
end
config system switch-interface
    edit "VXLAN-INTERFACE"
        set vdom "root"
        set member "port4" "VXLAN"
        set intra-switch-policy explicit
    next
end
config firewall policy
    edit 1
        set name "VXLAN-INCOMING"
        set uuid 1d96cbcc-3d91-51e8-585d-00de8ce55269
        set srcintf "VXLAN"
        set dstintf "port4"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat enable
    next
    edit 2
        set name "VXLAN-OUTGOING"
        set uuid 2c5fe85a-3d91-51e8-7c00-653d11fab724
        set srcintf "port4"
        set dstintf "VXLAN"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat enable
    next
end
Thanks for the help!
Have you run 'diag debug flow' and validated that the configuration is correct, so you're closer to identifying the problem(s)?
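In case it helps, a minimal debug flow session could look roughly like this (a sketch only; the filter address is taken from your sniffer output, adjust it to the hosts you are testing):

diag debug reset
diag debug flow filter addr 10.0.11.100
diag debug flow show function-name enable
diag debug flow trace start 100
diag debug enable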
Ken
PCNSE
NSE
StrongSwan
Hi everybody,
I think it is not necessary to create a new thread, so I will kindly join this one. I am trying to set up VXLAN over IPsec according to these guides:
http://kb.fortinet.com/kb/documentLink.do?popup=true&externalID=FD40170
https://travelingpacket.com/2017/09/28/fortigate-vxlan-encapsulation/
My goal is to build an L2 VPN between two FortiGates using VXLAN over IPsec. Simply put: interconnect a detached branch with HQ according to this scenario:
Cisco switchport ---(trunk)--- HQ_Fortigate --- Internet (IPsec) --- Branch_Fortigate ---(trunk)--- Cisco switchport
I hope I am close to being successful; however, I am still fighting (probably) with MTU/MSS settings. The maximum ICMP packet size between clients on the HQ and Branch side (incl. headers) is 1390 bytes, regardless of whether the Don't Fragment flag is set or not. Larger packets do not pass. That of course causes many problems with http/https traffic and other "usual" network services.
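For reference, a limit like this can be reproduced from the FortiGate CLI roughly as follows (a sketch; the target address is a placeholder, and 1362 bytes of ICMP data plus 28 bytes of ICMP/IP headers gives the 1390-byte packet mentioned above):

execute ping-options df-bit yes
execute ping-options data-size 1362
execute ping <remote_client_IP>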
Could anyone advise how to handle the MTU/MSS settings on the FortiGate? Firmware is v5.6.4. I can provide all the necessary output from debug.
Thank you in advance
Hi Jan,
There is no fix, only a workaround for the MTU issue:
The workaround is to stop honoring the DF bit:
config system global
set honor-df disable
end
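If the remaining pain is mostly with TCP-based services (http/https), another knob worth trying is clamping the TCP MSS on the firewall policies that carry the traffic. A sketch only, with an illustrative policy ID and values sized for a ~1390-byte path (1390 minus 40 bytes of IP/TCP headers):

config firewall policy
    edit 2
        set tcp-mss-sender 1350
        set tcp-mss-receiver 1350
    next
end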
FCNSA, FCNSP
---
FortiGate 200A/B, 224B, 110C, 100A/D, 80C/CM/Voice, 60B/C/CX/D, 50B, 40C, 30B
FortiAnalyzer 100B, 100C
FortiMail 100,100C
FortiManager VM
FortiAuthenticator VM
FortiToken
FortiAP 220B/221B, 11C
Hi
Thank you for your feedback and a very inspiring topic. I've done some investigation and here is my interim conclusion.
"set honor-df disable" can possibly do the job if you only need to forward untagged traffic (switchport in access mode). However, this setting does not affect tagged traffic (switchport in trunk mode). I've also noticed that it is better to use a separate VDOM for the software switch and its member interfaces because of the ARP requests; that may help the founder of this topic. Untagged traffic would be, let's say, good enough, but I wanted more.
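If you want to try the separate-VDOM approach, a rough sketch for 5.6 follows (the VDOM name "l2-vdom" is just an example, "internal4" is the member port used in the steps below, and enabling multi-VDOM mode requires a re-login):

config system global
    set vdom-admin enable
end
config vdom
    edit "l2-vdom"
    next
end
config global
    config system interface
        edit "internal4"
            set vdom "l2-vdom"
        next
    end
end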
Here is my solution:
1) I've created a usual IPsec tunnel with loopback interfaces. Notice there is no "set encapsulation vxlan".
config vpn ipsec phase1-interface
    edit "vxlan_ph1"
        set interface "wan1"
        set ike-version 2
        set peertype any
        set proposal aes256-sha512
        set nattraversal disable
        set remote-gw remote_IP_address
        set psksecret ENC mysecredpassword
    next
end
config vpn ipsec phase2-interface
    edit "vxlan_ph2"
        set phase1name "vxlan_ph1"
        set proposal aes256-sha512
        set auto-negotiate enable
        set src-subnet 172.30.31.0 255.255.255.0   // subnet of the local loopback interface (of course, it could be a /32 prefix :) )
        set dst-subnet 172.30.30.0 255.255.255.0   // subnet of the remote loopback interface
    next
end
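For completeness, the loopback interface itself is not shown above; on the local side it would be created roughly like this (the name "loop1" matches the VXLAN configuration below, and the 172.30.31.1/32 address is an assumption based on the phase2 selector):

config system interface
    edit "loop1"
        set vdom "root"
        set type loopback
        set ip 172.30.31.1 255.255.255.255
        set allowaccess ping
    next
end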
2) The next step is to configure "native" VXLAN according to this reference: http://help.fortinet.com/fos50hlp/56/Content/FortiOS/fortigate-whats-new/Top-Network-vxlan.htm
config system vxlan
    edit "vxlan1"
        set interface "loop1"        // local loopback interface
        set vni 1
        set remote-ip "172.30.30.1"  // IP address of the remote loopback interface
    next
end
This configuration creates the "vxlan1" interface:
edit "vxlan1"
set vdom "root" set type vxlan set snmp-index 9 set interface "loop1"
3) The last step is to put the physical port "internal4" and the VXLAN interface "vxlan1" into the software switch:
config system switch-interface
    edit "sw_switch"
        set vdom "root"
        set member "internal4" "vxlan1"
    next
end
After that I am able to transfer tagged traffic from the HQ switchport to the branch switchport without packet loss, and MTU is no longer a problem. Unfortunately, I had to stop further investigation/debugging because I had to return the borrowed device. Anyway, I hope that my approach can help.
*English is not my mother tongue, please excuse any errors on my part.
Jan
I didn't find an issue with VLANs going across VXLAN, but I did with https/printer traffic. IP telephony and access points work just fine - all receive tagged traffic in their VLANs. Funnily enough, access points connected to a wireless controller do work with https traffic. The only workaround I found for https traffic is to carry the PCs' VLAN as the native (untagged) VLAN, so their https traffic goes through the tunnel without fragmentation. MTU is a problem and I couldn't find a solution for it. I've tested this on various FortiOS versions; 6.0.5 was the most recent. If anyone has found a solution for https through VXLAN over IPsec, please let me know.
In my opinion the answer is 6.2.x; it cleared up issues like this for me.
If anyone else is deploying this in a lab or on ESXi, you need to have the vSwitch Security settings configured to Accept all three (Promiscuous mode, MAC address changes, and Forged transmits).
I was getting "Destination host unreachable" and timeouts when testing; I enabled those settings after finding someone who mentioned it offhand in a lab guide, and suddenly everything sprang to life.
Hope it helps