Hi all, I'm deploying a new pair of FortiGate 1000Ds in active/active configuration. The outside interfaces connect to a single ISP and seem to be working fine. The issue I'm having is communication between our core Nexus 9Ks and the 1000Ds. This is a multi-tenant environment, so we are leveraging VDOMs on the FortiGates and VRFs on the 9Ks.
I'm using individual /29 networks between the FortiGates and the 9Ks for routing. We are also using LACP on the FortiGates and vPC on the pair of 9Ks. The 9Ks are not new and have been in production for quite some time; they are configured according to Cisco best practices.
The issue we ran into is that we were unable to communicate properly between the core switches and the firewalls on one of the /29 networks. After working with Fortinet Support for hours, I was told that best practice is to physically install an L2 switch between the firewalls and our core switches, apparently due to some requirement of the HA configuration. Even though this wasn't clear to me, we tried it and it didn't resolve the issue.
The physical ports on the FortiGate are configured like this: the 802.3ad aggregate interface is in the root VDOM with no L3 config, and the underlying VLAN interfaces are in a per-tenant VDOM. These VLAN interfaces are what use the /29 networks to route to the Nexus 9Ks. Is there something about this design that doesn't seem right? Any ideas what might be causing this problem? Running 5.6.4 code.
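Roughly, the relevant interface config looks like this (a trimmed sketch from the global context; port names, VDOM name, VLAN ID and addressing are placeholders rather than my real values):

    config global
        config system interface
            edit "agg1"
                set vdom "root"
                set type aggregate
                set member "port25" "port26"
                set lacp-mode active
            next
            edit "tenantA-v100"
                set vdom "tenantA"
                set type vlan
                set interface "agg1"
                set vlanid 100
                set ip 192.0.2.4 255.255.255.248
                set allowaccess ping
            next
        end
    end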
To me it should work. In our similar case we use a /30 subnet on each VLAN to connect a VDOM to each customer's VRF, because it's an active-passive setup. We don't have any Nexus switches in the mix, though.
That should be doable; the /29 or /30 is not relevant. Since you mention NX-OS, are the 9Ks acting as L3 routers? If so, per Cisco this will not work ideally for dynamic routing protocols.
Read this and review whether anything here applies to your design:
http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
The only reason we're using a /29 rather than a /30 is that we need additional IP addresses to accommodate the HSRP VIP on the Nexus 9Ks. The 9Ks are our core switches, so they are doing Layer 3. However, we are not running any dynamic routing protocols, so the dynamic routing peering over vPC does not apply in this situation.
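To put numbers on it (hypothetical addressing and names, not our real config): each 9K SVI needs its own address plus the shared HSRP VIP, and the FortiGate VLAN interface needs a fourth, so a /30 with only two usable hosts can't fit it. The 9K side looks roughly like this:

    feature interface-vlan
    feature hsrp

    interface Vlan100
      no shutdown
      vrf member TENANT-A
      ip address 192.0.2.2/29        ! this 9K; the vPC peer uses .3
      hsrp version 2
      hsrp 100
        ip 192.0.2.1                 ! shared VIP, the FortiGate's next hop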
Cisco Nexus switches internally use non-standard Ethertypes - the same ones FortiOS uses on the HA links. This is documented in the HA chapter of the 'FortiOS Handbook'.
To avoid trouble you can change the Ethertypes in the HA setup ('config system ha'). Three different types are used.
Your routing/connectivity problems could be a side effect of this.
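A minimal sketch of where to change them (the new values below are only examples of moving off the defaults - verify against the Handbook before touching a production cluster):

    config system ha
        set ha-eth-type 8892      # heartbeat Ethertype in NAT/route mode, default 8890
        set hc-eth-type 8894      # heartbeat Ethertype in transparent mode, default 8891
        set l2ep-eth-type 8896    # third HA Ethertype, default 8893
    end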
That is interesting; I hadn't heard that before. I'll look into it, but I can tell you that our heartbeat ports are directly connected to each other - the two FortiGates don't use any switching between them for HA. Do you think this could still apply?
I'd really like to hear from those out there who are running an HA cluster. Do you have to have an L2 device between the FortiGates and your core switching/routing? If so, can anyone explain why that is a requirement?
Did you ever resolve this?
I am having a similar issue since upgrading to a Nexus 9K over the weekend.
I can't get my FortiADC load balancers to establish an LACP connection to the 9K.
I have found the following, which appears to be a bug in the 9K regarding LACP, but I have not confirmed that it is why LACP isn't working.
Started here after some googling.
https://www.reddit.com/r/networking/comments/7uu1zh/nxosv_9000_703i72_vpc_fully_working/
LACP bug documented here. It seems Cisco-to-Cisco is fine; Cisco-to-non-Cisco does not work.
https://learningnetwork.cisco.com/thread/120028
This shows some sort of a hack for a known LACP bug in 7.0.3.I7.2
https://techstat.net/cisco-nexus-nx-osv-9000-lacp-vpc-bug-fix/
Were you able to resolve your issue?
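For reference, the 9K side of an LACP bundle toward a non-Cisco device is normally just a vPC member port-channel in active mode, something like this sketch (interface, VLAN and vPC numbers are placeholders):

    interface port-channel20
      switchport mode trunk
      vpc 20

    interface Ethernet1/20
      switchport mode trunk
      channel-group 20 mode active    ! LACP negotiation, as opposed to static "mode on"
      no shutdown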
Dear friend,
We are working with the same scenario you describe, but without FortiGate HA, and we are able to achieve what you want.
We have dual Nexus 9Ks and a FortiGate 1500D with VDOMs, using Layer 3 over LACP to communicate with the Nexus core.
Please contact me so we can share the information.