Support Forum
FB
New Contributor

Aggregate using 4-port in FG cluster and Cisco Catalyst

 

I have an 802.3ad aggregate using 4 ports between an FG dual-node cluster and a Cisco Catalyst 9K running Cisco IOS XE (Bengaluru), Catalyst L3 Switch Software CAT9K_IOSXE, Version 17.6.5, RELEASE SOFTWARE fc2.

The current status is: on the FG side, all 4 ports are in the Up-Running-Active state:

status up algorithm L4 lacp-mode active

But on the Cisco side, half of the ports are in SUSPENDED state.

Why is that?

How can I detect which parameters are not matching?

Fortinet support is limited to telling me that everything is OK on the FG side; even after sending diag and sniffer information, they still can't tell what could be wrong on the Cisco side.
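For reference, these are the outputs I've been collecting on each side to compare the actor/partner parameters (Po101 is from our setup; the aggregate interface name is a placeholder):

On the Catalyst:

show etherchannel summary
show etherchannel 101 detail
show lacp neighbor detail
show lacp internal

On the FortiGate:

diagnose netlink aggregate list
diagnose netlink aggregate name <aggregate-name>

Each side's actor values (system ID, key, port priority) should appear as the other side's partner values.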

Here is an example showing the port status (flags: SU = Layer2/in use, w = waiting to be aggregated, s = suspended):

101    Po101(SU)       LACP      Gi2/0/44(w)    Gi2/0/46(s)


============================================================================

status: up
npu: n
flush: n
asic helper: y
ports: 4
link-up-delay: 50ms
min-links: 1
ha: master
distribution algorithm: L4
LACP mode: active
LACP speed: slow
LACP HA: enable
aggregator ID: 1
actor key: 17
actor MAC address: 48:3a:02:ed:c3:52
partner key: 101
partner MAC address: 8c:94:61:b0:e0:00

 

and all ports are OK as well

 

member: port1
index: 0
link status: up
link failure count: 0
permanent MAC addr: 48:3a:02:ed:c3:52
LACP state: established
LACPDUs RX/TX: 5596/5509
actor state: ASAIEE
actor port number/key/priority: 1 17 255
partner state: ASAIEE
partner port number/key/priority: 557 101 32768
partner system: 32768 8c:94:61:b0:e0:00
aggregator ID: 1
speed/duplex: 1000 1
RX state: CURRENT 6
MUX state: COLLECTING_DISTRIBUTING 4

 

 

---
AEK
SuperUser

AEK
FB
New Contributor

That article doesn't apply in my case: a 2-port config (per FG cluster node) works well; the problem is when we try a 4-port configuration (4 ports per FG cluster node).

---
yutanta5
New Contributor

In my case I'm running A/P because I do not need the capacity of two. What I ended up doing to fail over and fail back was to tell the units that they both should accept BPDUs (see the sketch below). If you do not do this you will end up with a FortiGate blocking traffic while the ports are up (spanning tree will kick in).
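On the FortiGate side I believe the relevant knob is the stpforward setting on the aggregate interface. A minimal sketch, with the interface name as a placeholder:

config system interface
    edit "agg-name"
        # placeholder: your aggregate interface name
        set stpforward enable
    next
end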

FB
New Contributor

I already tried all the combinations of active-passive and active-active, and nothing works.

---
AEK
SuperUser

  • Which FG model?
  • Which FortiOS version?
  • Which ports are you using (port numbers)?
AEK
FB
New Contributor

FortiGate-120G v7.4.9,build2829,250924 (GA.M)

 

basic config

 


set vdom "root"
set type aggregate
set member "port1" "port2" "port3" "port4"
set lldp-reception enable
set lldp-transmission enable
set role lan
set snmp-index 35
set ip-managed-by-fortiipam disable
next
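On the Cisco side, what I asked them to verify is along these lines; the interface numbers and the trunk mode are placeholders, since only the Cisco team has the real config. The key point is that the Catalyst suspends any member port whose configuration doesn't match the port-channel, so all member ports must be configured identically (speed, duplex, switchport mode, allowed VLANs):

interface range GigabitEthernet2/0/44 - 47
 ! placeholder member ports; all members must be configured identically
 switchport mode trunk
 channel-group 101 mode active
!
interface Port-channel101
 switchport mode trunk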

 

 

 

diag

 

status: up
npu: n
flush: n
asic helper: y
ports: 4
link-up-delay: 50ms
min-links: 1
ha: master
distribution algorithm: L4
LACP mode: active
LACP speed: slow
LACP HA: enable
aggregator ID: 1
actor key: 17
actor MAC address: 48:3a:02:ed:c3:52
partner key: 101
partner MAC address: 8c:94:61:b0:e0:00

member: port1
index: 0
link status: up
link failure count: 0
permanent MAC addr: 48:3a:02:ed:c3:52
LACP state: established
LACPDUs RX/TX: 15032/14795
actor state: ASAIEE
actor port number/key/priority: 1 17 255
partner state: ASAIEE
partner port number/key/priority: 557 101 32768
partner system: 32768 8c:94:61:b0:e0:00
aggregator ID: 1
speed/duplex: 1000 1
RX state: CURRENT 6
MUX state: COLLECTING_DISTRIBUTING 4

member: port2
index: 1
link status: up
link failure count: 0
permanent MAC addr: 48:3a:02:ed:c3:53
LACP state: established
LACPDUs RX/TX: 15039/14795
actor state: ASAIEE
actor port number/key/priority: 2 17 255
partner state: ASAIEE
partner port number/key/priority: 559 101 32768
partner system: 32768 8c:94:61:b0:e0:00
aggregator ID: 1
speed/duplex: 1000 1
RX state: CURRENT 6
MUX state: COLLECTING_DISTRIBUTING 4

member: port3
index: 2
link status: up
link failure count: 0
permanent MAC addr: 48:3a:02:ed:c3:54
LACP state: established
LACPDUs RX/TX: 15038/14795
actor state: ASAIEE
actor port number/key/priority: 3 17 255
partner state: ASAIEE
partner port number/key/priority: 267 101 32768
partner system: 32768 8c:94:61:b0:e0:00
aggregator ID: 1
speed/duplex: 1000 1
RX state: CURRENT 6
MUX state: COLLECTING_DISTRIBUTING 4

member: port4
index: 3
link status: up
link failure count: 0
permanent MAC addr: 48:3a:02:ed:c3:55
LACP state: established
LACPDUs RX/TX: 15035/15172
actor state: ASAIEE
actor port number/key/priority: 4 17 255
partner state: ASAIEE
partner port number/key/priority: 268 101 32768
partner system: 32768 8c:94:61:b0:e0:00
aggregator ID: 1
speed/duplex: 1000 1
RX state: CURRENT 6
MUX state: COLLECTING_DISTRIBUTING 4



GET


vdom : root
vrf : 0
cli-conn-status : 0
fortilink : disable
mode : static
dhcp-relay-interface-select-method: auto
dhcp-relay-service : disable
management-ip : 0.0.0.0 0.0.0.0
ip : 0.0.0.0 0.0.0.0
allowaccess :
fail-detect : disable
arpforward : enable
broadcast-forward : disable
bfd : global
l2forward : disable
icmp-send-redirect : enable
icmp-accept-redirect: enable
reachable-time : 30000
vlanforward : disable
stpforward : disable
ips-sniffer-mode : disable
ident-accept : disable
ipmac : disable
status : up
netbios-forward : disable
wins-ip : 0.0.0.0
type : aggregate
netflow-sampler : disable
sflow-sampler : disable
src-check : enable
sample-rate : 2000
polling-interval : 20
sample-direction : both
explicit-web-proxy : disable
explicit-ftp-proxy : disable
proxy-captive-portal: disable
tcp-mss : 0
inbandwidth : 0
outbandwidth : 0
egress-shaping-profile:
ingress-shaping-profile:
weight : 0
external : disable
trunk : disable
member : "port1" "port2" "port3" "port4"
devindex : 46
security-mode : none
ike-saml-server :
device-identification: disable
lldp-reception : enable
lldp-transmission : enable
lldp-network-policy :
estimated-upstream-bandwidth: 0
estimated-downstream-bandwidth: 0
measured-upstream-bandwidth: 0
measured-downstream-bandwidth: 0
bandwidth-measure-time:
monitor-bandwidth : disable
vrrp-virtual-mac : disable
vrrp:
role : lan
snmp-index : 35
secondary-IP : disable
preserve-session-route: disable
auto-auth-extension-device: disable
ap-discover : enable
ip-managed-by-fortiipam: disable
switch-controller-mgmt-vlan: 4094
switch-controller-igmp-snooping-proxy: disable
switch-controller-igmp-snooping-fast-leave: disable
swc-vlan : 0
swc-first-create : 0
tagging:
eap-supplicant : disable
np-qos-profile : 0
ipv6:
ip6-mode : static
nd-mode : basic
ip6-address : ::/0
ip6-allowaccess :
icmp6-send-redirect : enable
ra-send-mtu : enable
ip6-reachable-time : 0
ip6-retrans-time : 0
ip6-hop-limit : 0
dhcp6-prefix-delegation: disable
delegated-DNS1 : ::
delegated-DNS2 : ::
delegated-domain :
dhcp6-information-request: disable
cli-conn6-status : 0
vrrp-virtual-mac6 : disable
vrip6_link_local : ::
ip6-send-adv : disable
autoconf : disable
prefix : ::/0
preferred-life-time : 0
valid-life-time : 0
dhcp6-relay-service : disable
priority : 1
dhcp-relay-source-ip: 0.0.0.0
dhcp-relay-circuit-id:
dhcp-client-identifier:
dhcp-renew-time : 0
idle-timeout : 0
detected-peer-mtu : 0
disc-retry-timeout : 1
padt-retry-timeout : 1
dns-server-override : enable
Acquired DNS1 : 0.0.0.0
Acquired DNS2 : 0.0.0.0
dns-server-protocol : cleartext
wccp : disable
drop-overlapped-fragment: disable
drop-fragment : disable
mtu-override : disable
lacp-mode : active
lacp-ha-secondary : enable
system-id-type : auto
lacp-speed : slow
min-links : 1
min-links-down : operational
algorithm : L4
link-up-delay : 50
aggregate-type : physical

 

---
Toshi_Esumi

Since the active FGT sees all ports as up and normal (ASAIEE), whatever the issue is, it's in the Catalyst config or in those 4 cable connections from the active FGT.
You should probably ask the Cisco Community instead.
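If you want ammunition before going there: the Catalyst normally logs why it suspends a member, so ask the Cisco team for lines like these (port names hypothetical):

show logging | include suspended

%EC-5-L3DONTBNDL2: Gi2/0/46 suspended: LACP currently not enabled on the remote port.
%EC-5-CANNOT_BUNDLE2: Gi2/0/46 is not compatible with Po101 and will be suspended (reason)

The first message means LACPDUs are not arriving on that port; the second points at a per-port config mismatch on the switch itself.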

Toshi 

FB
New Contributor

Yes, I really don't doubt it, but you can imagine how the Cisco guys can act... "everything on my side is perfect..." They created a Cisco-only lab and it works, so they're unwilling to admit anything wrong on their side.

My hope is that someone here in the forum, maybe someone with the exact same experience, can help me argue with the Cisco guys properly, because they keep talking about an FG cluster fault, etc.
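One thing I still plan to test is the lacp-ha-secondary setting that shows as enabled in the diag above (LACP HA: enable): as I understand it, with it enabled the passive cluster node also sends LACPDUs toward the switch, which might be part of what confuses the Catalyst. A sketch of the experiment (aggregate name is a placeholder), to be tried in a maintenance window:

config system interface
    edit "agg-name"
        # placeholder: the aggregate interface
        # with this disabled, the passive node stops sending LACPDUs,
        # so its ports showing as suspended on the switch would be expected
        set lacp-ha-secondary disable
    next
end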

 

---
AEK

"everything on my side is perfect ... they´re unwilling to admit anything wrong in their side ... Cisco guys are talking about FG cluster fault"

Are you sure we didn't work before in the same company?

AEK