kcheng
Staff
Article Id 214315
Description

This article describes how to resolve an issue with the Microsoft Azure Linux Agent (waagent), which manages Linux and FreeBSD provisioning and VM interaction with the Azure Fabric Controller.

 

(For more information, refer to Azure Linux VM Agent overview).

Scope

FortiGate Azure.
Solution

A FortiGate deployed in Azure is equipped with the Microsoft Azure Linux Agent during installation.

Occasionally, the Microsoft Azure portal reports that the virtual machine agent is not ready and prompts to troubleshoot the issue:

 


 

When this issue occurs, it usually indicates a problem with connecting to the Azure Fabric Controller (168.63.129.16).

Confirm this is the case with the following steps:

 

  1. To check the connectivity between the FortiGate virtual machine agent and the Azure Fabric Controller, run the following commands:

 

diag deb app waagent -1 

diag deb console timestamp enable

diag deb en

diag test app waagent 1

 


 

Note that the virtual machine agent is trying to connect to the Azure Fabric Controller IP 168.63.129.16, but the connection does not go through.
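 

As an additional check, the built-in packet sniffer can show which interface the agent traffic is actually leaving on (a quick sketch; verbosity level 4 includes the interface name in each captured line):

    diagnose sniffer packet any 'host 168.63.129.16' 4

Traffic unexpectedly leaving an internal-facing port is immediately visible in the capture. Stop the capture with Ctrl-C.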

 

  2. Run the debug flow to check the traffic flow:

     

    diag deb flow filter addr 168.63.129.16

     

     

    diag deb console timestamp enable

    diag deb flow sh function-name en

    diag deb flow sh iprope en

     

     

    diag deb flow trace start 10000

    diag deb en

     


     

    From the debug flow above, it can be seen that traffic towards the Azure Fabric Controller is being forwarded to port2, which is intended for internal network traffic.

     

     

  3. Check the routing table for 168.63.129.16 with the following command:

     

get router info routing-table details 168.63.129.16

 


 

Note that a static route is configured to send traffic destined to 168.63.129.16/32 out port2.
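 

The full static route configuration, including each route's index number (needed to delete a route later), can be listed with:

    show router static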

 

 

  4. The Azure Fabric Controller IP is considered an external IP from FortiGate's point of view.

     As a result, the traffic should be exiting using port1 instead of port2.

     

    In the above scenario, removing the static route and re-initiating the waagent connection will resolve the issue.
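     

    As a sketch, assuming the offending route appears with index 2 in the output of 'show router static' (the actual index will differ per configuration), it can be removed with:

        config router static
            delete 2
        end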

     


     

    Recheck the connection with the following commands, which should show that waagent is communicating successfully with the Azure Fabric Controller:

     

    diag deb reset

    diag deb app waagent -1

     

    diag deb console timestamp enable

    diag deb en

    diag test app waagent 1

     


     

     

  5. Once waagent is able to send traffic out the correct interface, the agent status and host details will appear in the Azure console:

     


     

    The connection between the Microsoft Azure Agent and the Azure Fabric Controller does not impact FortiGate performance.

    No downtime is required to perform the above remediation steps.

     

    If an External Load Balancer (ELB) or Internal Load Balancer (ILB) is used, the virtual machine agent's connection will be transmitted on both port1 and port2.

    As a result, two static routes are required:

     

    config router static
        edit <static_route_index>
            set dst 168.63.129.16 255.255.255.255
            set gateway <gateway_IP>
            set device "port2"
        next
        edit <static_route_index>
            set dst 168.63.129.16 255.255.255.255
            set gateway <gateway_IP>
            set device "port1"
        next
    end
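
     

    After adding the routes, confirm that both entries are installed in the routing table by re-running the lookup used earlier:

        get router info routing-table details 168.63.129.16

    Both port1 and port2 should now be listed as outgoing interfaces for 168.63.129.16/32.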

     

    For more information on deploying the FortiGate with ELB/ILB scenario, refer to the following document:

    Configuring FGSP session sync

     

 

Note: If the message appears on the passive VM only, in a deployment consisting of an HA cluster of Azure FortiGate VMs running in Active-Passive mode, this is expected behavior. When running HA in Active-Passive mode, the passive device does not have the routing table, so it is unable to connect to the Azure Fabric Controller IP at 168.63.129.16/32.

 

Note-2: If deploying with a single external load balancer (ELB), route traffic for the Azure Fabric Controller IP 168.63.129.16/32 to the externally connected port only. This traffic should flow through port1, as it is external (Azure backbone) traffic: the agent communicates with this Azure backbone IP to send host information and for host monitoring purposes. Hence, the traffic should not be sent to the internal network (port2).