Fortinet Integrates with Azure Gateway Load Balancer to Enable Seamless Deployment of FortiGate Next-Generation Firewall
By: Ali Bidibadi
Many customers use third-party network virtual appliance (NVA) offerings from Azure Marketplace, including the Fortinet FortiGate Next-Generation Firewall (NGFW), to secure their Azure cloud deployments. Fortinet offers several design patterns to address each customer's unique deployment requirements, including Active-Passive with Unicast HA and Active-Active with and without autoscaling support. Some of these architectures use both external and internal load balancers to improve the scalability and resiliency of the security solution.
Now, with the public preview launch of the Azure Gateway Load Balancer (GWLB) service, we are excited to announce the availability of the Fortinet FortiGate VM integrated with this service as a launch partner. Equipped with full NGFW capabilities, as well as VXLAN encapsulation and decapsulation features, Fortinet is leveraging the highly scalable and distributed Azure Gateway Load Balancer to simplify deployment and configuration and to help reduce outages caused by erroneous changes.
What is the Azure Gateway Load Balancer?
Today, if customers want to inspect all traffic to an endpoint, they can inject a third-party firewall into their VNet. For example, customers deploy a standard or basic load balancer in front of the firewall NVAs available in Azure Marketplace. Achieving the desired outcome requires multiple configuration changes, which often lead to outages. These solutions do not always scale well and incur additional overhead each time a customer needs to add a firewall instance.
To overcome these shortcomings, Microsoft has added a new SKU, the Gateway SKU, to its set of supported Load Balancer SKUs. It supports service chaining, which can help reduce outages caused by erroneous changes. The goal is to enable transparent deployment of firewall NVAs, so that adding or removing NVAs does not introduce management overhead. Azure Gateway Load Balancer uses VXLAN as the service chaining encapsulation and decapsulation protocol to enable this use case. The Gateway Load Balancer maintains flow symmetry: all traffic going to the backend NVAs passes through the GWLB in both directions. It also ensures that traffic to and from a backend application is inspected by a fleet of firewall NVAs.
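The flow-symmetry property above can be illustrated with a toy model: if the load balancer selects an NVA by hashing a direction-normalized 5-tuple, both directions of a connection land on the same firewall instance. This is only a conceptual sketch with made-up pool names; Azure's actual flow distribution is internal to the platform.

```python
# Toy sketch of GWLB flow symmetry. Pool names and hashing scheme are
# illustrative assumptions, not Azure internals.
import hashlib

NVA_POOL = ["fortigate-vm-0", "fortigate-vm-1", "fortigate-vm-2"]

def pick_nva(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    # Sort the two endpoints so that A->B and B->A produce the same key,
    # and therefore hash to the same NVA instance.
    ends = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{ends}{proto}".encode()
    return NVA_POOL[int(hashlib.sha256(key).hexdigest(), 16) % len(NVA_POOL)]

fwd = pick_nva("203.0.113.7", 40001, "10.0.1.4", 443)  # consumer -> app
rev = pick_nva("10.0.1.4", 443, "203.0.113.7", 40001)  # app -> consumer
print(fwd == rev)  # True: both directions traverse the same firewall
```

Because the same instance inspects both halves of the flow, stateful inspection on the firewall works without any extra routing configuration.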
FortiGate Next-Generation Firewall Integrated with Azure Gateway Load Balancer
As a leading next-generation firewall, FortiGate VM has been a top choice of Azure customers due to its extensive security features and high-performance capabilities. Now, with the public preview launch of Azure Gateway Load Balancer, Fortinet can leverage the VXLAN protocol to support Azure GWLB service chaining and simplify the deployment of FortiGate VM NGFWs. To integrate with Azure GWLB, an NVA needs to support both changing the default MTU and VXLAN encapsulation/decapsulation. FortiGate VM can do both:
Change the default MTU – The FortiGate VM receives VXLAN-encapsulated packets. The inner packets can use the Azure default maximum transmission unit (MTU) of about 1,500 bytes, so the full encapsulated packets may exceed the MTU of the FortiGate VM, causing packet drops. Increasing the default MTU on the NVA VM prevents these drops. The additional bytes needed equal the size of the encapsulation headers: an Ethernet header, an IP header, a UDP header, and a VXLAN header (RFC 7348). Specifically, the overhead is 50 bytes for regular IPv4 and 70 bytes for regular IPv6, so the NVA's MTU needs to be at least 1570 to support both IPv4 and IPv6. FortiGate VM allows changing the MTU to 1570.
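The 50-byte and 70-byte figures above follow directly from the RFC 7348 header sizes. A quick sketch of the arithmetic (header sizes are standard; the variable names are only for illustration):

```python
# MTU headroom needed by a VXLAN-encapsulating NVA, per RFC 7348 header sizes.
ETHERNET_HEADER = 14  # outer Ethernet header, bytes
IPV4_HEADER = 20      # outer IPv4 header (no options)
IPV6_HEADER = 40      # outer IPv6 header (no extension headers)
UDP_HEADER = 8        # outer UDP header
VXLAN_HEADER = 8      # VXLAN header (flags + reserved + VNI + reserved)

overhead_v4 = ETHERNET_HEADER + IPV4_HEADER + UDP_HEADER + VXLAN_HEADER
overhead_v6 = ETHERNET_HEADER + IPV6_HEADER + UDP_HEADER + VXLAN_HEADER

inner_mtu = 1500  # Azure default MTU for the inner packet
required_mtu = inner_mtu + max(overhead_v4, overhead_v6)

print(overhead_v4, overhead_v6, required_mtu)  # 50 70 1570
```

Since the IPv6 outer header is the larger of the two, 1570 covers both address families.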
Set up VXLAN encapsulation rules – The packets received on the NVA VMs (FortiGate VMs) are VXLAN-encapsulated; only the inner packets are to or from the backend application service. FortiGate VM supports VXLAN encapsulation and decapsulation and processes the inner packet. Azure's VXLAN support follows RFC 7348, with an 8-byte VXLAN header. Packets destined to the backend application service are encapsulated with a VXLAN header (along with the corresponding UDP, IP, and Ethernet headers), using the FortiGate VM's IP as the destination IP of the outer packet. These packets are first sent to the FortiGate VM, which decapsulates and processes them; the FortiGate VM then re-encapsulates them with VXLAN and sends them on to the customer service. The Azure infrastructure handles encapsulation of packets sent to the FortiGate VM, while the FortiGate VM handles encapsulation of the packets it sends out.
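To make the 8-byte header concrete, the following minimal sketch packs and unpacks an RFC 7348 VXLAN header around an inner frame. It shows only the header layout (flags byte with the I bit, 24-bit VNI, reserved bits zero); it is not FortiOS code, and the function names are illustrative.

```python
# Minimal RFC 7348 VXLAN header encode/decode sketch (illustrative only).
import struct

VXLAN_FLAGS_I = 0x08  # "I" bit: the VNI field is valid

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    # 8-byte header: flags + 24 reserved bits, then 24-bit VNI + 8 reserved bits
    header = struct.pack("!II", VXLAN_FLAGS_I << 24, (vni & 0xFFFFFF) << 8)
    return header + inner_frame

def vxlan_decap(frame: bytes) -> tuple[int, bytes]:
    word0, word1 = struct.unpack("!II", frame[:8])
    assert word0 >> 24 == VXLAN_FLAGS_I, "VNI-valid flag not set"
    return word1 >> 8, frame[8:]

inner = b"inner Ethernet frame"
vni, payload = vxlan_decap(vxlan_encap(inner, vni=800))
print(vni, payload == inner)  # 800 True
```

In the real deployment, the outer UDP/IP/Ethernet headers wrap this VXLAN header, and the NVA strips all of them before inspecting the inner packet.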
As explained in the previous section, FortiGate VM can be deployed with Gateway SKU load balancers to fully support this integration. The diagram below shows the architecture in which a FortiGate VM integrated with Azure Gateway Load Balancer inspects traffic destined to a backend application. The integration enables new use cases, including:
Transparent inspection of consumer application traffic – The ability to direct traffic to the GWLB from a standard load balancer that fronts consumer applications allows FortiGate VMs to be shared across all applications, including consumer applications that reside in a different subscription. This means traffic destined to a new consumer application can be inspected with minimal changes.
Offering managed security services to end customers – GWLB integration with FortiGate allows managed security service providers to offer an advanced threat protection service via FortiGate VMs deployed behind a GWLB. These FortiGate VMs can inspect consumer application traffic for different customers.
Next steps – To learn about deploying FortiGate VM integrated with Azure Gateway Load Balancer, along with sample templates and sample configurations, visit this page.
The Fortinet Security Fabric brings together the concepts of convergence and consolidation to provide comprehensive cybersecurity protection for all users, devices, and applications, across all network edges.