Hi,
we have deployed a FortiGate cluster in AWS, used mostly as a VPN/NAT gateway.
Since AWS generally provides no layer-2 networking, there is no IP/MAC-based failover. The cluster members are deployed in multiple subnets/availability zones, so the interface configuration on each member is unique, with different IP addresses and subnets. Failover is handled by the FortiGate via the AWS API, which updates route tables and Elastic/public IP assignments. So far so good.
As a consequence, some configuration items have to be excluded from configuration sync with the HA peer, namely the interface configuration and the static route configuration (by default the VIP configuration is excluded as well).
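For readers who haven't seen it: these exclusions are expressed with `config system vdom-exception` in the FortiOS CLI. A sketch of what a typical AWS HA deployment contains (the entry numbers are assumptions, check your own config):

```
config system vdom-exception
    edit 1
        set object system.interface
    next
    edit 2
        set object router.static
    next
    edit 3
        set object firewall.vip
    next
end
```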
As we will definitely be doing DNAT inside our VPNs, I have already removed the VIP exclusion. But the interface configuration and static routes must remain excluded.
Now to the issue: for each VPN you have to configure some static routes pointing to the tunnel interface. With static routes excluded from HA sync, every change has to be replicated manually on the HA peer by connecting to it over the CLI and re-entering the required route configuration.
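For illustration, a route of the kind that has to be re-entered on the peer might look like this (destination and tunnel interface name are made up):

```
config router static
    edit 0
        set dst 10.20.0.0 255.255.0.0
        set device "vpn-to-branch1"
    next
end
```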
If you have users who are mainly supposed to use the GUI, also for configuring VPNs, you're lost in this case.
Does anybody have an idea how to work around this? Hopefully one of the next releases will make it possible to keep routes for VPN or other virtual interfaces synced with the HA peer.
Regards... Walter
Hi,
I’m not sure if this is what you’re looking for or how you’re managing your infra, but using Terraform, for example, can ease that burden.
All you’ll have to do, once everything is implemented in TF, is update the vars for each node and apply the configuration.
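For the routes specifically, the fortinetdev/fortios Terraform provider has a `fortios_router_static` resource, so the same route definition could be applied against both nodes. A rough sketch only (provider setup omitted; all names and values are placeholders):

```hcl
# Hypothetical VPN route, declared once and applied per node
# via separate provider aliases; values are placeholders.
resource "fortios_router_static" "vpn_branch1" {
  dst    = "10.20.0.0 255.255.0.0"
  device = "vpn-to-branch1"   # tunnel interface name (placeholder)
}
```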
For FGT-VM HA on AWS, there should be a dedicated hamgmt port on each instance, reachable via its own public EIP address. You should be able to use that EIP to do the configuration on each instance individually via the GUI.
Created on 11-18-2021 12:33 PM Edited on 11-18-2021 12:34 PM
Yes, thanks, indeed. But it's still an extra step, and keeping the configs in sync like this is a little error-prone. Not to mention confusing for less experienced users, who have to do most of the configuration on the HA primary but with an exception for routing.
I was thinking about some externally triggered automation for syncing the VPN routes. It should be doable, however it may introduce more issues than it solves :)
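Such automation could be sketched roughly like this, pushing a route to the peer over the FortiOS REST API (a sketch only: the `/api/v2/cmdb/router/static` endpoint is the standard FortiOS CMDB path, but the host, token, destination and tunnel name below are all placeholders):

```python
# Sketch: replicate a VPN static route to the HA peer via the
# FortiOS REST API. Host, token and route values are placeholders.
import json
import urllib.request

PEER = "https://203.0.113.10"      # hamgmt EIP of the peer (placeholder)
TOKEN = "REPLACE_WITH_API_TOKEN"   # REST API admin token (placeholder)


def build_route(dst: str, device: str) -> dict:
    """JSON body for a static route pointing at a tunnel interface."""
    return {"dst": dst, "device": device}


def push_route(route: dict) -> int:
    """POST the route to the peer; returns the HTTP status code."""
    req = urllib.request.Request(
        f"{PEER}/api/v2/cmdb/router/static",
        data=json.dumps(route).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # Needs a reachable peer and a valid token to actually succeed.
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (not executed here):
# push_route(build_route("10.20.0.0 255.255.0.0", "vpn-to-branch1"))
```

The tricky part, as noted above, is not the API call but deciding reliably which routes belong on the peer, which is where such a script could create more problems than it solves.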
Do you know if there are any plans to improve this?
I don't know of any plans to improve this. The routing will necessarily differ between the units, especially since your setup is cross-AZ. So just make sure to apply the configuration on both units when making any changes.
Thanks for your suggestions. Yes, we are using Terraform, but mostly for initial/base deployments. I may consider something like this.