Rinmaru
New Contributor

Layer2 Routing Across Two Datacentres

Hi there, awesome people of the FortiGate forums. Hopefully you can assist this young apprentice with some sage advice. I'm going to explain to the best of my ability; feel free to ask if more information is needed.

 

So we have two sites; I'll refer to them as Site A and Site B.

Each site has 2 x Hyper-V clusters.

Site A (2 x FortiGate 500E), Site B (2 x FortiGate 200E).

These two sites have a separate, dedicated layer 2, 1GB link between them that is currently not connected to the firewalls. Instead, it is plugged into our storage network on either side, which we use to replicate our SQL/MySQL databases during the day; nightly, we replicate our backups across the same link to the secondary site.

 

The current design is clunky, overcomplicated, and a bit of a schlep to maintain. In an attempt to simplify this, I have suggested we plug these layer 2 links directly into the firewalls on either side and let the firewalls manage the traffic instead, putting it into its own VLAN and separating it from the rest of our network traffic in the cluster.
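One way I pictured it, just to illustrate (the port names are made up, so treat this as a sketch, not a tested config): a virtual wire pair, so the link stays pure layer 2 but every frame still has to pass a firewall policy:

    config system virtual-wire-pair
        edit "dc-l2"
            # port10 = ISP layer-2 handoff, port11 = towards our storage switches
            set member "port10" "port11"
            # let tagged frames pass through as well
            set wildcard-vlan enable
        next
    end

A normal firewall policy between the two member ports would then control what is allowed to cross the link.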

I have received pushback on this idea: I was told it would introduce extra load on the firewalls, that they would not be able to cope with the demand of carrying the layer 2 traffic, and that this would impact our normal links.

 

I would love to find out from the more experienced FortiGate wizards out there whether this will indeed be the case. I don't think we currently use much of our FortiGates' processing power, and I believe we have enough headroom to allow for this extra load, should it indeed add that much.

 

Any help or advice would be much appreciated; if more info is needed, feel free to ask and I will answer as best I can.

5 REPLIES
AEK
SuperUser

Hi Rinmaru

I think if this L2 link is only used by DB backup/replication, then I guess it is like point-to-point, right?

If this is the case then I don't see the need to introduce the firewalls. Also, in my experience so far I've seen very few companies filtering backup traffic with a firewall.

However, if the link is also used for other purposes, like user access from site to site with SSH and so on, then a firewall may be needed.

AEK
ede_pfau
SuperUser

IMHO, this is more of an architectural question than a technical one.

Technically, the 200E (as the weaker link) will be able to sustain an additional 1 Gbps stream easily (I take it that "1 GB" does indeed mean 1 Gigabit throughput). No need to think about the 500E then. Layer 2, without any UTM added like AV, IPS, App Control etc., will not be a burden on the CPU but will be offloaded onto the NP ASICs.
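To keep sessions offloadable, the policy just has to stay free of proxy/UTM features. A minimal sketch, with placeholder interface names (on current FortiOS, offload is on by default when no UTM profile is applied):

    config firewall policy
        edit 0
            set name "dc-replication"
            set srcintf "dc-l2-a"
            set dstintf "dc-l2-b"
            set srcaddr "all"
            set dstaddr "all"
            set action accept
            set schedule "always"
            set service "ALL"
            # no UTM profiles referenced, so the NP can handle the session
            set auto-asic-offload enable
        next
    end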

 

But, as @AEK already hinted at, the question remains whether it's a good move to convert a storage network link into an Ethernet/IP/TCP link. Surely, you will have to redesign the communication as different protocols are used. As data packets will be wrapped in TCP/IP packets, and then amended by VLAN tags, it will most probably mean less throughput and longer transfer times for the nightly data.

Maybe you could elaborate a bit on the perceived advantages of this conversion. It is definitely not that you can then route the traffic freely, as this clearly is not needed here. And you didn't mention security concerns either, which, as @AEK pointed out, would be unusual for the kind of traffic in question. So what do you plan to achieve?

 

You mention that the current setup is clumsy and will take a lot of effort for changes - how so? What kind of changes do you anticipate which would be eliminated by tunneling the data stream via IP?

 

Hopefully, these are some points worth discussing further.

Ede Kernel panic: Aiee, killing interrupt handler!
Rinmaru
New Contributor

@ede_pfau @AEK, thank you both for your feedback so far. Maybe I need to elaborate on what I am trying to achieve here, to help with the context of why I am asking the question. The very short version: I am trying to simplify our use of multiple hardware devices to achieve the same end result across the two sites.

Currently this layer 2 is provided by another ISP, and they don't provide the same handoff at each site, so I'm going to try and explain. At the first site they have a Mikrotik router that hands off the connection to us via copper; this single link then goes into another Mikrotik router. On top of this, the same SP provides a secondary, slower 500Mb backup link that goes to another piece of kit in the cabinet (owned by them); this single link is then plugged into our second Mikrotik router, which is configured in an MLAG across the two Mikrotiks. (This is such an over-the-top complex setup, and I really believe it could be much simpler.)

So we have 2 x Mikrotik supplied by the ISP (each carrying a different layer 2 link for redundancy). Each of these single-point-of-failure links then goes directly to our Mikrotiks (the 1GB link to one and the 500Mb link to the second), which are in an MLAG. We then connect these links from our equipment into our storage switches (we have dedicated NICs in our Dell FX2 chassis for the traffic via the layer 2). This is also true for Site B. This design, in my mind, screams complexity and inefficiency. My thought process was: if we move those two layer 2 links directly into the FWs, there would be no requirement for all the extra equipment in the cabinet causing more points of failure or complexity that is not needed. A rough sketch of what I mean is below.
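What I pictured is the two ISP handoffs going straight into the FortiGates as a redundant interface, replacing the Mikrotik MLAG pair. Made-up port names again (redundant rather than an LACP aggregate, since the two links are different speeds):

    config system interface
        edit "isp-l2"
            set vdom "root"
            set type redundant
            # port11 = 1GB primary link, port12 = 500Mb backup link
            set member "port11" "port12"
        next
    end

A redundant interface keeps the first member active and only fails over to the second, which seems to fit the unequal link speeds better than bonding them.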

 

Just a quick update on the storage network side: we run iSCSI via Dell S5248F-ON switches, and the storage network has its own separate IP ranges for communication. Where the L2 plugs into these switches, we basically have the other network cards from the hosts also plugged into these same switches on their own IP ranges (which we call our L2 IP ranges), and this range stretches the same across both sides.

ede_pfau

How a network is designed is not only subject to your needs but also to your skills and preferences. Like, N admins and N+x different design proposals.

That being said, my thoughts to this:

- if you don't agree with your ISP's setup you will have to deal with your ISP, first and foremost.

- if the setup appears to be fragile, your ISP will be held responsible if it fails.

- have you checked that the FGTs will handle iSCSI? Jumbo frames? (see the sketch after this list)

- all in all, to me this is not a technical question. After all, the current setup does work. I don't see how the forum could answer it; it's about responsibilities.
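On the jumbo frames point: most FortiGate models let you raise the interface MTU, but the maximum depends on the model and port type, so verify against your hardware. A sketch, with a placeholder port name:

    config system interface
        edit "port10"
            # allow frames larger than the default 1500 bytes
            set mtu-override enable
            set mtu 9000
        next
    end

The MTU would have to match end-to-end, or iSCSI throughput will suffer from fragmentation.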

Ede Kernel panic: Aiee, killing interrupt handler!
mickhence
New Contributor II

To route traffic across two data centers at Layer 2, you can use technologies like VXLAN or EVPN, which encapsulate Layer 2 traffic in Layer 3 packets for transport over the network.
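On a FortiGate, a minimal VXLAN stretch looks roughly like this (interface names, VNI, and peer IP are placeholders; the remote site would mirror the config):

    config system vxlan
        edit "vxlan-dc"
            # underlay interface towards the other datacentre
            set interface "port10"
            set vni 1000
            set remote-ip "203.0.113.2"
        next
    end
    config system switch-interface
        edit "l2-stretch"
            set vdom "root"
            # bridge the VXLAN tunnel with the local LAN port
            set member "vxlan-dc" "port5"
        next
    end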

 
 