Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
Fido
New Contributor II

BGP Multihoming to same ISP

Hello,

I have this scenario with two FortiGate 80Es, A and B, at Sites 1 and 2. They both connect to the same ISP with eBGP, and of course a link exists between the two sites. I'm advertising network x.x.x.x at Site 1 using FG-A and network y.y.y.y at Site 2 using FG-B. Both networks exist in the routing table as blackhole routes and are only usable when a system is NATed to the IPs. I want to achieve the following.

1. Make Site 1 the preferred path for inbound traffic to network x.x.x.x and Site 2 the preferred path for inbound traffic to network y.y.y.y.

2. Be able to NAT a device that lives in Site 2 to an IP in network x.x.x.x and have the traffic use the Site-2 link for both outbound and inbound, even though that network originally belongs to Site 1.

 

I've attached a small sketch. Any help will be appreciated.

9 REPLIES
emnoc
Esteemed Contributor III

Item 1 is easy: set the metric, i.e. MEDs.

 

Item 2 is not so easy; you will run into asymmetric-routing issues.
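A minimal FortiGate CLI sketch of the MED approach, as I understand it. Everything here is a placeholder (local AS 65001, ISP AS 20, peer IP, prefix-list and route-map names, masks): for redundancy each site could advertise both prefixes but set a lower MED on its own, since the ISP prefers the lower MED.

```
# FG-A (Site 1): advertise x.x.x.x with a low MED, y.y.y.y with a high MED
config router prefix-list
    edit "PFX-X"
        config rule
            edit 1
                set prefix x.x.x.x 255.255.255.0   # placeholder prefix/mask
            next
        end
    next
end
config router route-map
    edit "MED-OUT"
        config rule
            edit 1                      # our own prefix: low MED (preferred here)
                set match-ip-address "PFX-X"
                set set-metric 50
            next
            edit 2                      # everything else (y.y.y.y): high MED (backup)
                set set-metric 200
            next
        end
    next
end
config router bgp
    set as 65001                        # placeholder local AS
    config neighbor
        edit "203.0.113.1"              # placeholder ISP peer IP
            set remote-as 20
            set route-map-out "MED-OUT"
        next
    end
end
```

FG-B would mirror this with the low MED on y.y.y.y. This only works if the ISP honors MED, which should be confirmed with them first.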

 

PCNSE 

NSE 

StrongSwan  

Toshi_Esumi
Esteemed Contributor III

Not from first-hand experience, but I've heard it depends on the ISP's BGP settings whether you can use MED to influence their routing toward you (see the Cisco Learning Network discussion below). You should contact your ISP first.

https://learningnetwork.cisco.com/thread/95799

AS-path prepending is more commonly used to influence incoming routes. See the article below for an FGT config example:

https://travelingpacket.com/2014/04/23/fortigate-bgp-as-path-prepending/
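As a rough sketch (not taken from the article; the route-map name, AS number, and peer IP are placeholders), prepending on a FortiGate is a route-map applied outbound to the neighbor whose path you want to de-prefer:

```
config router route-map
    edit "PREPEND-3X"
        config rule
            edit 1
                set set-aspath "65001 65001 65001"   # repeat your own AS (placeholder)
            next
        end
    next
end
config router bgp
    config neighbor
        edit "198.51.100.1"                          # placeholder peer IP
            set route-map-out "PREPEND-3X"
        next
    end
end
```

The longer AS path makes remote networks prefer the announcement from the other site.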

 

For the second part, I don't know where to put the NAT/VIP so that it works in a redundant way.

 

 

emnoc
Esteemed Contributor III

AS-path prepending would not work here. It's the same ISP (AS 20); MED is what defines which links to use within that "intra-AS" case, and it is non-transitive.

 

http://www.informit.com/articles/article.aspx?p=331613&seqNum=5

 

"The MULTI_EXIT_DISC (MED) is an optional non-transitive attribute that provides a mechanism for the network administrator to convey to adjacent autonomous systems the optimal entry point into the local AS."

PCNSE 

NSE 

StrongSwan  

Toshi_Esumi
Esteemed Contributor III

Sorry, I forgot the fact that it's one ISP. We're a kind of ISP ourselves, but unless our customers tell us otherwise we just ignore MED. I've heard the others are similar.

Fido
New Contributor II

Thanks Toshi_Esumi and emnoc. I will definitely investigate the MED option. My main concern, however, is part 2 of the question.

How do enterprises achieve continuity in a DR scenario?

I.e., a web server is published in DC1 and NATed to e.g. 2.2.2.2/24; it gets failed over to DC2, where it is NATed to 3.3.3.3/24.

How do you keep users connected during a DR failover without a DNS change? Can BGP, with reference to my diagram, help with this?

emnoc
Esteemed Contributor III

In the real world, since the addresses are two different IPv4 addresses, we would use an FQDN-based device like a GTM with a wide-IP, and set the priority to site Y with fallback to site X. Traffic will then always go to site Y, and in the event the site is 100% down or the server probes die off, the short DNS TTL lets you repoint it to site X.
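Independent of the specific GTM product, the underlying mechanism can be sketched as an authoritative DNS record with a short TTL (the hostname and IPs below are placeholders):

```
; zone fragment: 30-second TTL so a repoint takes effect quickly
; Site Y is primary; on failure the record is repointed to Site X
www.example.com.   30   IN   A   2.2.2.2   ; Site Y VIP (placeholder)
; on failover, replace with:
; www.example.com. 30   IN   A   3.3.3.3   ; Site X VIP (placeholder)
```

The short TTL bounds how long clients keep resolving to the dead site after the record changes.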

 

Ken 

PCNSE 

NSE 

StrongSwan  

Fido
New Contributor II

Thanks for the response Ken.

If I understand correctly, there's no way to provide fully automatic failover for a public web server across two DCs on separate subnets, when the translation is done on a FortiGate (even when both sites are in the same AS), without manually modifying the DNS record.

I was looking for a solution that ensures traffic gets to the same web server on the same IP address irrespective of the resident DC. That way, I won't have to modify the DNS record whenever I switch DCs.

What are GTM and wide-IP?

 

emnoc
Esteemed Contributor III

An F5 product, or similar. You can read more in this knowledge article:

 

https://devcentral.f5.com...ons/f5-gtm-and-wide-ip

PCNSE 

NSE 

StrongSwan  

gsm
New Contributor

 

I have used the Ecessa PowerLink WAN aggregator for the past 10 years. Their appliances have a number of features, one being DNS SOA. We set the TTL on our DNS records to 30 seconds, but could go lower, for faster IP address updates upon DNS cache expiration. We also set up DNS load balancing (alternating IPs) and local and/or site-to-site failover redundancy with multiple IP addresses for the same FQDN.

 

With their appliances they claim BGP is no longer required, due to their WAN aggregation, load balancing, link-state awareness, and DNS SOA features; they now offer SD-WAN capabilities as well. Even so, I still implemented BGP multihoming via a single ISP (iNAP, formerly Internap), who algorithmically leveraged multiple tier 1 carriers for additional redundancy and used separate AS numbers out of their own and partner colocation data centers. In our case we used a /23 subnet split between two sites, providing both DNS load balancing and site-to-site DNS redundancy. Note that with Ecessa's DNS SOA we experienced better failover response times than with our BGP multihoming failover convergence times.

 

In production I had multiple appliances over the years in an HA failover configuration. My initial implementation began by load balancing two 3 Mb circuits across multiple ISPs; a few years later we upgraded to multiple 100 Mb Internet connections, and more recently multiple 1 Gb circuits. Basically, every three years we had hardware and circuit upgrades.

 

https://www.ecessa.com/powerlink/ 

(I do not work for Ecessa nor do I get compensated by them).
