Howdy,
Perhaps you can shed some light on the following. We have two FortiGate 300Ds (v6.2.3) in an Active-Passive HA cluster. Up to now, only the Primary unit has had the "outside" interface (let's call it WAN1) plugged in; we don't have a switch between the FortiGate and the ISP (ISP1), so WAN1 can't be plugged into both the Primary and the Secondary.
Now we have a second internet pipe (ISP2). I know the typical deployment would have a switch between each FortiGate in the HA cluster and its ISP.
The above would be ideal, but I need to make things work without the upstream switches.
Here are the requirements:
If ISP1 (plugged into the Primary's WAN1) is having issues, HA fails over to the Secondary, which has ISP2 plugged into WAN2.
If I keep ISP1 plugged into Primary WAN1 (Secondary WAN1 has nothing plugged in), and plug ISP2 into Secondary WAN2, is it as easy as setting up link monitoring, adding the default route, and adding the WAN2 interface of the HA cluster to the existing WAN1 policies? Any issues with keeping HA as Active-Passive?
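For reference, something like the following (on 6.2) is what I have in mind for those three pieces; the gateways, the monitor target, and the policy ID below are placeholders, and this is only a sketch I haven't tested:

config system link-monitor
    edit "wan1-monitor"
        # ping a placeholder target out wan1; pull wan1's static routes on failure
        set srcintf "wan1"
        set server "8.8.8.8"
        set protocol ping
        set failtime 5
        set update-static-route enable
    next
end

config router static
    # two default routes; wan2 takes over once the wan1 route is withdrawn
    edit 1
        set gateway 203.0.113.1
        set device "wan1"
        set distance 10
    next
    edit 2
        set gateway 198.51.100.1
        set device "wan2"
        set distance 10
        set priority 10
    next
end

config firewall policy
    # widen the existing outbound policy to allow egress via wan2 as well
    edit 1
        set dstintf "wan1" "wan2"
    next
end

As I understand it, though, that only moves routes on whichever unit is currently active; it doesn't by itself make the cluster fail over to the box that actually has ISP2 cabled, which is the part I'm unsure about.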
Here's the kicker: we're advertising a /24 to ISP1 via BGP. I won't be able to set the secondary IP address of WAN2 to anything in the /24 advertised via WAN1. This might be a whole different topic, but in order to achieve all of the above *AND* advertise a /24 via BGP, would creating an SD-WAN interface be the way to go (add both WAN1 and WAN2 to the SD-WAN interface)?
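If SD-WAN is the answer, I assume on 6.2.x it would sit under config system virtual-wan-link, roughly like the below (placeholder gateways; this doesn't solve the BGP /24 part, since BGP still only peers out WAN1):

config system virtual-wan-link
    set status enable
    config members
        edit 1
            set interface "wan1"
            set gateway 203.0.113.1
        next
        edit 2
            set interface "wan2"
            set gateway 198.51.100.1
        next
    end
end

config router static
    # single default route pointed at the SD-WAN virtual interface
    edit 1
        set virtual-wan-link enable
    next
end

Policies would then reference the virtual-wan-link interface instead of wan1/wan2 individually (and, as far as I know, existing references to wan1 have to be removed before it can be added as a member).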
Thank you, in advance, for your guidance.
If you set up a link monitor to down the port (wan1), rather than just remove the route, it might fail over to the Secondary, since connected ports are the main criterion for primary HA selection. But even if it does work, I do believe a WAN switch is by far the way to go.
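If that's the direction, the mechanism I'd look at is HA remote link failover (pingserver monitoring): a link monitor that carries an ha-priority, plus pingserver settings under config system ha, so the cluster itself fails over when the target out wan1 stops answering. A rough sketch with a placeholder target, not verified on 6.2.3:

config system link-monitor
    edit "wan1-ha-mon"
        # this monitor's failure counts toward the HA pingserver score
        set srcintf "wan1"
        set server "8.8.8.8"
        set protocol ping
        set ha-priority 1
    next
end

config system ha
    # fail the cluster over once the failed pingserver priority reaches the threshold
    set pingserver-monitor-interface "wan1"
    set pingserver-failover-threshold 1
    set pingserver-flip-timeout 60
end

The Secondary still needs a working default route and policies out wan2 for that failover to actually restore internet, of course.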
What I believe is completely impossible is to automate failover on ISP failure.
Yes, this is more interesting. What do you mean by "stay clean"? Following FTNT BCP, you're supposed to connect all interfaces in an HA cluster to the LAN. HA heartbeat/monitoring is going to be an issue if, for example, it's enabled on an interface where unit A can't see unit B.
FWIW, I would not accept a poor design just to save a few dollars. A simple L2 switch can be had for 25-30 dollars on eBay, and you would only need to set up 2 VLANs (VLAN-ISP1, VLAN-ISP2) and cable the HA cluster's wan1/wan2 to those VLANs.
Ken Felix
PCNSE
NSE
StrongSwan
"Clean" means a device that does not touch the public WAN directly. A "dirty" device would straddle both the private LAN and the public WAN.
If I have the following, in an Active-Passive HA cluster:
My questions are:
What I believe is completely impossible is to automate failover on ISP failure.
You need to get rid of HA. Keep the second box as a cold spare and put it into place if needed.
This will not work without symmetrical connectivity. The conditional BGP article assumes both ISPs are connected at all times.
In an Active-Active HA cluster, the failover would be at the route level, not the HA member level, in theory accomplishing the original goal?
In an Active-Active HA cluster, in the config described in my previous post (#12), would it not basically be as follows?
Active-Active does not work like that. Even in Active-Active, all the active IP addresses are only on the primary unit; it just offloads processing of some UTM traffic to the secondary node.
I wonder if something could be cooked up by using VDOMs and virtual clustering with affinity to physical units: each WAN port being a separate VDOM, then some crossover cables to a central VDOM that in turn connects to the internal LAN.
I was thinking of the below, then remembered you have a 300D and run out of ports. Read on if you want to get an idea of where I was going.
VDOM "WAN1" - has ports WAN1 and ports 3 and 4 in a hardware switch - virtual cluster with affinity to node A
VDOM "WAN2" - has ports WAN2 and ports 5 and 6 in a hardware switch - virtual cluster with affinity to node B
VDOM "INTERNAL" - has port 7 on each node cabled to port 3 and 4 on node A - port 8 on each node cabled to port 5 and 6 on node B. On the internal vdom, create interface monitors on port 7 and 8.
Run out of ports for internal :(
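The virtual-cluster piece of that would sit in config system ha, roughly like this on node A (VDOM names from the layout above, priorities are only illustrative, and the VDOMs are assumed to already exist; node B would mirror the priorities so each vcluster prefers a different physical box):

config system ha
    set override enable
    set priority 200
    # virtual cluster 1: the WAN1 and INTERNAL VDOMs prefer node A
    set vcluster2 enable
    set vdom "WAN1" "INTERNAL"
    config secondary-vcluster
        # virtual cluster 2: the WAN2 VDOM prefers node B (set its priority higher there)
        set override enable
        set priority 100
        set vdom "WAN2"
    end
end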
That could work if you use physical ports between the vcluster VDOMs, but man, you're overcomplicating the network design for the mere cost of a switch, adding more policies, more work, and more break points, and defeating the concept of an HA cluster to begin with.
eBay has WS-C2960s for 60 dollars or less, or an EX4200 for 80 dollars or less. I just gave away about 8x 2960s to a church; I could have donated you a switch or two ;)
Ken Felix
PCNSE
NSE
StrongSwan
Please take into consideration that if you are using CAPWAP for wireless, all wireless will drop when the gates fail over. If you are running bridged, then you should be OK.
To solve this problem, we utilize inexpensive Netgear 5-port gig switches to split the ISP to each gate.