LACP recommendation between FortiGate (FortiOS 5) and Cisco switch
Hello,
I would like to know if any of you have a recommended configuration between a Cisco switch port-channel and a FortiGate aggregate interface on FortiOS 5.
In my Cisco configuration I've used this on the physical interfaces:
channel-group 1 mode active
switchport nonegotiate
On the FortiGate I have:
config system interface
edit "Agg1"
set vdom "root"
set type aggregate
set member "port1" "port2"
set lacp-mode passive
So LACP active on the Cisco switch and passive on the Fortigate.
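That pairing should come up, since 802.3ad only needs one end to be active. As a sanity check once it's configured, something like the following (group number and aggregate name taken from the snippets above) shows whether the bundle negotiated:
Cisco:
show etherchannel 1 summary
show lacp 1 neighbor
Fortigate:
diagnose netlink aggregate name Agg1
A bundle that negotiated correctly shows the member ports with the (P) flag on the Cisco side.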
Thank you
19 REPLIES
Below are the configs we're using:
Cisco:
interface Port-channel1
description uplink to FortigateFW
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 100-150,200-250,300-350
switchport mode trunk
spanning-tree portfast trunk
end
Fortigate:
config system interface
edit " LACP VLAN Group"
set vdom " Blah"
set type aggregate
set member " port28" " port29"
set snmp-index 52
set lacp-mode static
next
end
The Cisco switches we're connecting to are stacked 3750Gs running IOS 15.0(2)SE.
Our FortiGates are an HA pair (A/P) of 1240Bs running 5.0.6, though we've used this config since FortiOS 4.0 MR2.
If you're using HA, you'll need separate port-channel groups for each FortiGate; see the sketch below.
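As a sketch (group numbers and member ports are made up), the switch side of an HA pair would carry one static bundle per cluster member:
interface Port-channel1
description uplink to FGT-A
!
interface Port-channel2
description uplink to FGT-B
!
interface range GigabitEthernet1/0/1 - 2
channel-group 1 mode on
!
interface range GigabitEthernet1/0/3 - 4
channel-group 2 mode on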
We've also had one or two occurrences of speed/duplex mismatches, so you may need to statically set the ports on both sides.
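If you do hit a mismatch, pinning both ends statically looks roughly like this; a sketch, with port names assumed:
Cisco:
interface GigabitEthernet1/0/1
speed 1000
duplex full
Fortigate:
config system interface
edit "port28"
set speed 1000full
next
end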
Regards,
Matthew
Do you still need separate EtherChannels on the Cisco side if the cluster is Active/Active?
The Cisco config you posted is NOT an LACP bundle, btw. Here's a real LACP mode-active config from a 3750G:
int range gi 1/0/1 - 2
no shut
switchport
channel-group 10 mode active
channel-protocol lacp
load-interval 30
logging event link-status
logging event bundle-status
!
!
int port-channel 10
description 2 GIG bundle to FGT
!
Keep in mind you can also trunk over the EtherChannel. This lets you use the aggregate ports more effectively by issuing sub-interfaces.
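As a rough sketch of that idea (VLAN IDs and interface names are made up): trunk the port-channel on the switch, then hang VLAN sub-interfaces off the aggregate on the FortiGate:
Cisco:
interface Port-channel10
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 100,200
Fortigate:
config system interface
edit "vlan100"
set vdom "root"
set interface "Agg1"
set vlanid 100
next
end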
"The cisco stuff you posted is NOT a lacp bundle btw."
You're absolutely correct; it's not LACP but a raw/static EtherChannel. We had several issues during our deployment where the Cisco and FortiGate would either not negotiate at all, or would renegotiate too often and drop the link. Changing to static link aggregation was the best solution in our case, though it's not the only way aggregation can be done. That's also why the interface is labeled "LACP VLAN Group": it was originally a proper LACP configuration.
I know some people argue against static aggregation because of the dangers of MAC flapping and loops, but in a DC environment where physical connections are static (we've made no physical changes to our 1240Bs in 3.5 years) the dangers are minimal. IMO, LACP introduces a bigger risk, where a software bug can cause the negotiation to not work properly; see ShrewLWD's post that mentions bug #0229638.
As for the stability of 5.0, I'd have to agree that there have been several bugs that could have been nasty in our production environment. 5.0 GA through 5.0.2 were not "friendly", while 5.0.3 was actually somewhat mature. I guess that's why we send firmware changes through our test and DR environments before they hit production. I also have to say I'm not overly happy with the way new features and changes of functionality are introduced in minor releases; they should be bug fixes only.
Regards,
Matthew Mollenhauer
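For completeness, the switch-member side of that static bundle is just the following; a sketch, assuming ports Gi1/0/28-29 face port28/port29 on the FortiGate:
interface range GigabitEthernet1/0/28 - 29
channel-group 1 mode on
With mode on there is no negotiation at all, so make sure both ends are set to static before bringing the links up.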
Hello,
Thank you both for your answers.
Regarding my Cisco configuration, I just wrote the important lines (not the syslog or load-interval related commands), and of course I do use a trunk with the firewall.
I didn't write the channel-protocol lacp command, but according to the 2960X switch output it is LACP:
Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SU) LACP Gi1/0/41(P) Gi2/0/41(P)
I'm asking because I see a lot of output drops on the port-channel interface, and I would tend to think it's due to the FortiGate side.
#sh int po1 | i drops
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 99837
The drops increment when there is load on the link.
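One way to narrow down where the drops are happening is to compare the member ports against the bundle; the interface numbers here are taken from the show etherchannel output above:
show interfaces GigabitEthernet1/0/41 | include drops
show interfaces GigabitEthernet2/0/41 | include drops
show interfaces Port-channel1 counters errors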
I highly doubt it's the FortiGate causing output drops on a Cisco switch interface facing the FortiGate.
Can you confirm flow control is disabled on interfaces gi 1/0/41 and gi 2/0/41?
Do you have any QoS enabled (policy-maps, thresholds, etc.)?
Do you have giant frames allowed?
Drops could be anything from an ACL to unknown datagrams, etc. The commands below should help you check.
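Something along these lines on the 3750/2960 should answer those questions; a sketch, using the interface numbers from the earlier output:
show flowcontrol interface GigabitEthernet1/0/41
show mls qos
show mls qos interface GigabitEthernet1/0/41 statistics
show policy-map interface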
hi everyone,
this past weekend we had issues attempting to LAG the SFP ports of a 100D (v5.0.6) over dual fiber down to Cisco 3750s at our DR site.
No matter what we tried, the LAG would not link up. We are using the simplest of LAG setups on both ends.
When I ran
diagnose netlink aggregate name [MyLAGsName]
I noticed
LACP state: negotiating
actor state: ASAIDD
partner state: ASAOEE
It seems Active/Active doesn't work, and the FortiGate won't respond either if it is in passive mode with the Ciscos in active. We were able to get it up and running by putting both sides into STATIC mode. I realize it is not the most ideal, but packets are passing and we are not seeing any errors or dropped packets. Fortinet TAC mentioned a known bug with some portion of LACP that is still unresolved (bug #0229638).
Hope this helps!
That's interesting; all of the FortiGates I've worked with that support 802.3ad work regardless of whether they are ACT/PASS.
Remember, within any 802.3ad setup, at least one side must be in ACTIVE mode. Are you 100% sure the 3750 was set up for LACP and not PAgP? And did they issue any show etherchannel or show lacp commands?
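On the 3750 side, something like this would confirm which protocol the bundle is running and whether LACP PDUs are actually being exchanged (group number omitted to show all groups):
show etherchannel summary
show etherchannel detail
show lacp neighbor
show lacp counters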
Hi emnoc,
Here is what the Cisco config looked like:
!
interface Port-channel25
switchport access vlan 10
!
interface GigabitEthernet1/1/1
description " OWS Links"
switchport access vlan 10
mls qos trust dscp
channel-protocol lacp
channel-group 25 mode active
!
interface GigabitEthernet2/1/1
description " OWS Links"
switchport access vlan 10
mls qos trust dscp
channel-protocol lacp
channel-group 25 mode active
!
Here was my Fortinet config:
config vdom
edit VD_PWAN
config system interface
edit " port15"
set vdom " VD_PWAN"
set type physical
set snmp-index 25
next
edit " port16"
set vdom " VD_PWAN"
set type physical
set snmp-index 26
next
edit " COLO_Link"
set vdom " VD_PWAN"
set broadcast-forward enable
set l2forward enable
set stpforward enable
set type aggregate
set member " port15" " port16"
set snmp-index 44
next
end
The only settings that allowed it to link up were:
Cisco:
channel-group 25 mode on
Fortinet:
set lacp-mode static
We tried all the other variations, with a full reboot of the 100D between changes.
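Even in static mode, both ends can still be sanity-checked; the group number and aggregate name are taken from the configs above:
Cisco:
show etherchannel 25 summary
Fortinet:
diagnose netlink aggregate name COLO_Link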
