WalterW
New Contributor

FortiGate on Cloud/AWS quirks

Hi,

 

We have deployed a FortiGate cluster on AWS, used mostly as a VPN/NAT gateway.

 

Now, as you usually have no layer 2 networking on AWS, there is no IP/MAC failover. Since the cluster members are deployed in multiple subnets/availability zones, the interface config on each cluster member is unique, with different IP addresses and subnets. Failover is done by the FortiGate via the AWS API, modifying route tables and reassigning elastic/public IPs. So far so good.
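
Under the hood, what the FortiGate does through the AWS API on failover is roughly equivalent to the following AWS CLI calls (all IDs below are placeholders):

aws ec2 replace-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 \
    --network-interface-id eni-0aaaabbbbccccdddd

aws ec2 associate-address \
    --allocation-id eipalloc-0123456789abcdef0 \
    --network-interface-id eni-0aaaabbbbccccdddd \
    --allow-reassociation

The first call repoints a VPC route table entry at the surviving node's ENI; the second moves the elastic IP over to it.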

 

As a consequence of the above, you have to exclude some configuration items from the configuration sync with the HA peer, namely the interface config and the static route config (by default, the VIP configuration is also excluded).

 

As we will definitely be doing DNAT in our VPNs, I have already removed the exclusion of the VIPs. But the interface config and static routes must stay excluded.
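
For reference, these exclusions are expressed on FortiOS under config system vdom-exception; after dropping the VIP entry, ours looks roughly like this (entry numbers may differ in your deployment):

config system vdom-exception
    edit 1
        set object system.interface
    next
    edit 2
        set object router.static
    next
end

The default firewall.vip entry was simply deleted from this table.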

 

Now to the issue with this: for each VPN you have to configure some static routes pointing to the tunnel interface. But with static routes excluded from HA sync, you have to replicate every change manually on the HA peer, by connecting to it over the CLI and repeating the required route config.
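
Concretely, after adding a route on the primary you have to log in to the peer and repeat something like this (destination and tunnel name are made up for illustration; edit 0 makes FortiOS pick the next free sequence number):

config router static
    edit 0
        set dst 10.99.0.0 255.255.0.0
        set device "vpn-branch1"
    next
end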

 

If you have users who are supposed to configure VPNs mainly through the GUI, you're lost in this case.

 

Does anybody have an idea how to work around this? Hopefully one of the next releases will make it possible to keep routes for VPN or other virtual interfaces synced with the HA peer.

 

Regards... Walter

 


5 Replies
dchao_FTNT
Staff

For FGT-VM HA on AWS, there should be an HA management (hamgmt) port on each instance that you can reach via its public EIP address. You should be able to use that EIP to do the configuration on each instance via the GUI.
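
In case it helps, the reserved HA management interface is set up under config system ha, roughly like this (port name and gateway are placeholders for your deployment):

config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "port4"
            set gateway 10.0.3.1
        next
    end
end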

 

WalterW

Yes, thanks, indeed. But it's still an extra step, and keeping the configs in sync like this is a bit error prone. Not to mention it is confusing for less experienced users to have to do most of the configuration on the HA master, but with an exception for routing.

 

I was thinking about some externally triggered automation for syncing the VPN routes. It should be doable, but it may introduce more issues than it solves :)

 

Do you know if there are any plans to improve this?

dchao_FTNT

I don't know of any plans to improve this. The routing will be different on each unit, especially since your setup is cross-AZ. Hence, just make sure to do the configuration on both units when making any changes.

ncorreia

Hi,

 

I’m not sure if this is what you’re looking for or how you’re managing your infra, but using Terraform, for example, can ease that burden.

All you’ll have to do, once everything is implemented in TF, is update the vars for each node and apply the configuration.
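
A minimal sketch of the idea, assuming the fortinetdev/fortios provider and its fortios_router_static resource (hostnames, tokens, and route values are placeholders):

terraform {
  required_providers {
    fortios = {
      source = "fortinetdev/fortios"
    }
  }
}

variable "node_a_mgmt_ip"   { type = string }
variable "node_a_api_token" { type = string }
variable "node_b_mgmt_ip"   { type = string }
variable "node_b_api_token" { type = string }

# The VPN routes, declared once and applied to both nodes so they cannot drift.
variable "vpn_routes" {
  type = map(object({
    dst    = string # e.g. "10.99.0.0 255.255.0.0"
    device = string # tunnel interface name, e.g. "vpn-branch1"
  }))
}

# One provider alias per cluster node, since the nodes are configured independently.
provider "fortios" {
  alias    = "node_a"
  hostname = var.node_a_mgmt_ip
  token    = var.node_a_api_token
}

provider "fortios" {
  alias    = "node_b"
  hostname = var.node_b_mgmt_ip
  token    = var.node_b_api_token
}

resource "fortios_router_static" "node_a" {
  provider = fortios.node_a
  for_each = var.vpn_routes

  dst    = each.value.dst
  device = each.value.device
}

resource "fortios_router_static" "node_b" {
  provider = fortios.node_b
  for_each = var.vpn_routes

  dst    = each.value.dst
  device = each.value.device
}

After a route change, you update var.vpn_routes once and terraform apply pushes the identical set to both nodes.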

WalterW
New Contributor

Thanks for your suggestions. Yes, we are using Terraform, but mostly for initial/base deployments. I may consider something like this.
