Description
This article describes how to set up FortiGate in Active-Active mode with a Network Load Balancer (NLB) in AWS.
Scope
Port1: external (public) interface connected to the Internet. Port2: internal interface connected to the private network. No HA or dedicated management interface is needed in this lab. Four subnets are needed for this lab (two per AZ), with a Windows server placed in AZ1 and AZ2 respectively. Four route tables are needed (two public and two private).
The VPC, subnets (subnet association), route tables, and Internet Gateway have already been configured; references for creating them are available in these related documents:
Solution
In an active-passive FortiGate (FGT) setup within AWS, a failover event can result in extended downtime.
The load balancer is responsible for distributing traffic to the FortiGates (FGTs). Make sure to use a Network Load Balancer: TCP 3389 (RDP) traffic is used for the Windows servers in this case, and the NLB is a Layer 4 load balancer that fulfills this requirement. The load balancer also performs health checks to determine the operational status of the FortiGates and ensure they are functioning correctly. All FortiGates receive sessions via the load balancer as long as they pass the health checks. While an Active-Passive (A-P) cluster behind the load balancer is an option, it is generally more effective to use standalone FGT units behind the load balancer in multiple Availability Zones (AZs). This configuration provides a robust mechanism to withstand the complete failure of an AZ.
Note: to be able to attach at least two network interfaces, the instance ('VM') size in AWS must provide at least two vCPUs. Port1 (Public1) of the primary FortiGate resides in one subnet, whereas port1 (Public2) of the secondary FortiGate belongs to a separate subnet (an Elastic IP will be associated with each of these ports for management purposes). Port2 (Private1) of the primary FortiGate resides in one subnet, whereas port2 (Private2) of the secondary FortiGate belongs to a separate subnet. Windows servers 1 and 2 sit behind the private subnets (Private1 and Private2 respectively).
Four subnets are needed for this lab, as shown in the picture below.
The FortiGates exist in the same VPC but in different AZs.
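In AWS, the FortiGate interfaces typically obtain their addresses via DHCP from their subnets. The following is a minimal sketch of the interface configuration, assuming port1 is the public interface and port2 the private one; the allowaccess services listed here are an assumption and should be restricted to what management and the NLB health check actually require:

config system interface
    edit "port1"
        set mode dhcp
        set allowaccess ping https ssh
    next
    edit "port2"
        set mode dhcp
        set allowaccess ping
    next
end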
Steps:
Related article:
Note: the primary FortiGate obtains its public IP address from us-east-2a. When launching the secondary FortiGate, select the appropriate Availability Zone; in this case, us-east-2b has been selected.
Related article:
A VIP (Virtual IP) will be created for the Windows server so that Internet traffic can reach the server via the FortiGate. On the VIP policy, make sure SNAT is enabled whenever an ALB is used in front of the FortiGate, because the ALB always source NATs the traffic; to avoid asymmetric routing, always enable NAT on the VIP policy so that traffic reaches the original server sourced from the FortiGate's private ENI IP. Even if the configuration appears correct, RDP attempts may fail and no traffic will be visible in the firewall logs. To resolve this, ensure that the security group associated with the firewalls in the AWS console allows RDP traffic.
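As an illustration only, here is a minimal FortiGate CLI sketch of such a VIP and its inbound policy; the object names, the port1/port2 roles, and the placeholder addresses are assumptions to be replaced with the actual values of the deployment:

config firewall vip
    edit "VIP-RDP-WinServer1"
        set extintf "port1"
        set extip <FortiGate-port1-private-IP>
        set mappedip "<Windows-server-private-IP>"
        set portforward enable
        set protocol tcp
        set extport 3389
        set mappedport 3389
    next
end
config firewall policy
    edit 1
        set name "Inbound-RDP-to-WinServer1"
        set srcintf "port1"
        set dstintf "port2"
        set srcaddr "all"
        set dstaddr "VIP-RDP-WinServer1"
        set action accept
        set schedule "always"
        set service "RDP"
        set nat enable
    next
end

The 'set nat enable' line on the policy provides the SNAT described above, so return traffic from the Windows server goes back via the FortiGate's private ENI IP.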
Creating the Windows servers.
Launch two Windows instances, one in the Private1 subnet and one in the Private2 subnet: select 'Launch instance', choose a Windows Server image from the Marketplace as shown in the picture below, and launch Windows server 1 in subnet 1; similarly, Windows server 2 needs to be launched in subnet 2 in a different AZ. As shown in the picture, instance 1 is located in subnet 1, and a security group has been created to permit inbound HTTPS and RDP traffic. Similarly, Windows server 2 is in subnet 2, and it is possible to reuse the security group of Windows server 1, as depicted in the image.
Alternatively, it is also possible to right-click instance 1, select 'Launch more like this', and then edit the subnet settings. Create a route table for each private subnet so that all traffic is routed to the corresponding FortiGate port2.
This ensures that all traffic from the Windows servers flows through the connected FortiGate, allowing for comprehensive inspection.
The steps are shown in the pictures below:
By doing this, load balancing will be performed across all AZs:
To verify the health checks, run a sniffer on FortiGate port1 to confirm that the NLB is performing health checks (the NLB IP in that AZ will be visible). The NLB has a DNS name, which is used for RDP to the Windows servers.
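A command along the following lines can be used; the filter assumes the target group's health check uses the traffic port (TCP 3389), so adjust it to the health-check port actually configured on the NLB:

diagnose sniffer packet port1 'tcp port 3389' 4

The source IP seen in the probes is the private IP of the NLB node in that AZ.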
FINAL NETWORK DIAGRAM:
Once everything is set up, there will be two FortiGates in two AZs, each with a Windows server attached and a VIP configured on the firewall. Traffic will be load-balanced by the NLB, as the FortiGates are in its target group.
Results: initiate two different RDP sessions to the same DNS name of the Network Load Balancer, as shown in the pictures below. In the first picture, traffic went through FGT1 to Windows server 1.
In the image below, using the same NLB DNS name, traffic went to Windows server 2 through FGT2 (in a different AZ).
It is possible to run a sniffer in parallel on both FortiGates to verify matching logs and confirm that traffic is load-balanced across them. If one FortiGate fails, the automatic health check will mark it as unhealthy, and traffic will seamlessly route through the available FortiGate. Additionally, this setup allows you to implement auto-scaling based on the network load:
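For example, running the same sniffer on both units while opening several RDP sessions to the NLB DNS name should show sessions landing on each FortiGate; the commands below assume the default RDP port:

diagnose sniffer packet any 'tcp port 3389' 4
diagnose sys session filter dport 3389
diagnose sys session list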
Scale Out: increase the instance count to handle higher loads. Scale In: decrease the instance count during lower traffic periods.