nsantin
New Contributor III

HA Internal Interface on 2 switches

Hi, a very simple question as I set up my first HA cluster with two FGT-60Cs. I have the WAN1/WAN2/DMZ interfaces all interconnected using dedicated VLANs on my switches. With the internal interfaces, do they need to be on the same isolated switch? Can I have FGT1 connect to switch #1 (with other devices) and FGT2 connect to switch #2 (with another batch of devices), and have the two switches interconnected?

I know this is a rudimentary question; it's just that all the documentation I see refers to a single switch. I'm concerned about the duplicate MAC addresses and how the switches would handle that across two physically different switches (Cisco SG300 and Cisco ESW-540). I'm trying to have the internal interfaces on different switches so I don't have a single point of failure (in the event of a switch failure). Thanks!
13 REPLIES
ede_pfau
SuperUser

I wonder why no one else answered this... Distributing your hosts across 2 switches at first glance looks like it could work. Two switches chained together act as one switch, and a FGT cluster in active/passive mode can connect to that switch. But you'll get into trouble when one FGT fails: the other unit will take over the cluster MAC address and forward traffic, but as it is connected to a different switch with a MAC table of its own, the connections will fail. The target MAC addresses from one switch's database will not be mirrored to the other switch just by interconnecting them.

There are advanced features in switches which allow such a design. Juniper calls it 'Virtual Chassis', HP/3Com 'IRF'; Extreme and Arista have it as well. But that needs to be configured beforehand. Two such prepared switches act as one, with synchronized MAC tables, reacting to ARP coming from the firewalls. In short, regular switches without these advanced features cannot be daisy-chained and connected to two FGTs in cluster mode.
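For reference, you can see the cluster virtual MAC the switches are learning right from the FGT CLI; a quick sketch (interface names vary by model):

    get system ha status
    # shows HA mode, cluster members and which unit is currently primary
    diagnose hardware deviceinfo nic internal
    # Current_HWaddr  = the virtual cluster MAC the switches learn
    # Permanent_HWaddr = this unit's own burned-in address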
Ede Kernel panic: Aiee, killing interrupt handler!
nsantin
New Contributor III

Thanks Ede, I see the issue with one unit failing and the MAC lookup table not being refreshed in a timely manner on the switch. So here is another random idea: if I have my 5 internal ports on the FGT set up in switch mode, could I have internal port 1 on both units tied to switch 1 and port 2 on both units tied to switch 2? That would be 4 connections, all with the same MAC address. In theory, if a FGT died, both switches would still have a wire and a path to the working unit. While it's up, either switch could forward packets to the FGT using either connected port. Like this:
emnoc
Esteemed Contributor III

I have to disagree with ede_pfau's analysis to a certain degree. Yes, you can interconnect 2 FGTs in the fashion that you mention above: FGT01--int1==sw1, FGT02--int1==sw2, and have sw1/sw2 tied with an LACP etherbundle. This is normal and SOP in most areas. You can also run these FGTs in act/act or act/pas mode. I do that all day long with ASA, pfSense w/CARP, Juniper, and with act/pas for FGT for no real reason.

One thing you want to consider is which interfaces to monitor, and how you adjust that. The drawing you provided is not needed or warranted imho, nor have I ever seen anyone try internal-switch mode with multiple ports like that. As a matter of fact, I would run int1 on both FGTs in non-switch mode and plumb them just as a typical L3 interface. Lastly, if you want the best approach, run 2x interfaces from each FGT to the same switch and then you have link redundancy.

Virtual chassis, multi-chassis etherchannel, virtual switching system and virtual port channel are available from a host of switching vendors (the last two being Cisco options). But not all switches support this (i.e. Dell, HP, lower-end layer 2 switches, lower-end L2/L3 switches, etc.). The problem with these is that they typically require a very expensive or high-end switch. And Arista isn't actually cheap to deploy by cost per port.
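For the interface-monitoring point, the knobs live under config system ha; a minimal sketch (group name, heartbeat device and monitored ports are illustrative, adjust for your own topology):

    config system ha
        set group-name "cluster1"       # illustrative name
        set mode a-p                    # active/passive, as discussed above
        set hbdev "dmz" 50              # heartbeat interface and priority; pick an unused port
        set monitor "internal" "wan1"   # fail over if a monitored link goes down
    end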

PCNSE NSE StrongSwan
nsantin
New Contributor III

Hi Emnoc, thank you for your input. Basically, this is what I'm trying to achieve. If I can run my FGT cluster in this manner then I can have full link redundancy in the event of any single piece of equipment failing. Do you/others think this will be an issue with the duplicate MAC address on the different switches? How quickly would a switch clear the MAC lookup table if a device went offline? I would expect it to be pretty quick, and then the switch would start flooding frames to find the next path via the other switch?
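Worth noting: the FGT does not rely only on the switch aging out stale entries; after a failover the new primary sends gratuitous ARPs so the switches relearn the virtual MAC on the new port. A minimal sketch of the relevant HA settings (the values are illustrative, not a recommendation):

    config system ha
        set arps 5            # number of gratuitous ARP packets sent after a failover
        set arps-interval 8   # seconds between those gratuitous ARPs
    end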
veechee
New Contributor

ORIGINAL: emnoc Lastly, if you want the best approach, run 2x interfaces from each FGT to the same switch and then you have link redundancy.
To be able to run link redundancy like this, does the FGT have to be in interface mode instead of switch mode? Anything else to do on the FGT or switch? Right now I have a site where I have the VLANs trunked to the FGT-60C over a single link. I have lots of extra ports on the Cisco switch. I'm interested in gaining link redundancy if it's as easy as you begin to suggest.
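In case it helps, taking a 60C's internal ports out of switch mode is a single global setting; a sketch (note that every reference to the "internal" interface, e.g. firewall policies and the DHCP server, must be removed first or the command is rejected):

    config system global
        set internal-switch-mode interface   # splits "internal" into separate ports
                                             # (typically internal1..internal5 on a 60C)
    end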
ede_pfau
SuperUser

ORIGINAL: emnoc I have to disagree with ede_pfau's analysis to a certain degree. Yes, you can interconnect 2 FGTs in the fashion that you mention above: FGT01--int1==sw1, FGT02--int1==sw2, and have sw1/sw2 tied with an LACP etherbundle. This is normal and SOP in most areas. You can also run these FGTs in act/act or act/pas mode. I do that all day long with ASA, pfSense w/CARP, Juniper, and with act/pas for FGT for no real reason.
Sorry, no, this will not work at all! You're building loops, with identical traffic origin coming in on one port of the switch and on another as well (the inter-switch link). This cannot work; the first broadcast will cause a broadcast storm. Maybe you can see it from another perspective: the server receives traffic from FGT #1 via switch #1, using the cluster MAC for the int1 interface. Which NIC should the server use to send the reply out? Assume NIC1. The next packet arrives, with the SAME MAC, via NIC2. How does the server keep the L2 traffic apart then? IMHO this discussion is becoming academic. Set it up, make the loop and see for yourself.
Ede Kernel panic: Aiee, killing interrupt handler!
nsantin
New Contributor III

OK, what about this scenario: interconnecting the ESXi servers to the extra ports on the FGT, while at the same time connecting the primary cluster to the rest of the network gear? Keep in mind that ESXi (VMware) creates a virtual switch on itself, so it would be like connecting the cluster to a dedicated switch. I don't mean for this to be academic, I just have to implement a new cluster this weekend (in 72 hours) and I've validated everything except how the physical link will work. FWIW, I did try to test my looping scenario on another pair of test switches and had mixed results, which is why I started this thread. Really appreciate the help.
ede_pfau
SuperUser

This scheme looks OK as long as you bundle ('link aggregate') ports 2 and 3 of each FGT. But that is easy. You could then connect INT2 of FGT#1 to NIC1, and cross-connect INT3 of the same FGT#1 to NIC2 of your server, always assuming that both NICs form one switch. This way you could lose one NIC and still stay connected, or lose one FGT, and the same. In the beginning, I'd just use INT2 on both FGTs to test the failover scenario. If that works under all circumstances, you can bond INT2 and INT3 and test again. For bonding, see the 'Link Aggregation' chapter in the Admin Guide or the Handbook. It will use the (standard) LACP protocol (which, I assume, will be used on the ESXi server as well to aggregate the 2 NICs). Even if link aggregation doesn't work right from the start, and you feel the pressure to get to results, using just one link each will do the job. Refine later.
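On FortiOS, the bond is an aggregate-type interface; a minimal sketch, assuming the model supports 802.3ad and that the member port names (internal2/internal3 here) match your unit:

    config system interface
        edit "lacp-bond"                     # hypothetical name for the bundle
            set type aggregate               # 802.3ad link aggregation
            set member "internal2" "internal3"
            set lacp-mode active             # actively negotiate LACP with the peer
        next
    end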
Ede Kernel panic: Aiee, killing interrupt handler!
veechee
New Contributor

ede_pfau: I don't believe that the FGT-60C supports Link Aggregation (802.3ad). It's only available starting on "Medium" size models: "Starting FortiOS 4.0MR2: 200B, 300A, 310B, 400A, 500A, 620B, 800 and higher"
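If 802.3ad isn't available on your model, a redundant-type interface may be worth a look, if your firmware supports it: it fails over between member ports without needing LACP on the switch. A sketch (assuming interface mode and these port names):

    config system interface
        edit "red-bond"                      # hypothetical name
            set type redundant               # active/standby failover, no LACP required
            set member "internal2" "internal3"
        next
    end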