Created on 06-09-2022 10:45 PM
Edited on 09-25-2024 09:28 PM
By Jean-Philippe_P
Description |
This article describes how to resolve an issue with the Microsoft Azure Linux Agent (waagent), which manages Linux and FreeBSD provisioning and VM interaction with the Azure Fabric Controller.
(For more information, refer to the Azure Linux VM Agent overview). |
Scope | FortiGate Azure. |
Solution |
A FortiGate deployed in Azure is equipped with the Microsoft Azure Linux Agent during installation. Occasionally, the Microsoft Azure portal reports that the virtual machine agent is not ready and prompts to troubleshoot the issue:
When this issue occurs, it usually indicates a problem connecting to the Azure Fabric Controller (168.63.129.16). Confirm this is the case with the following steps:
diag deb app waagent -1
diag deb console timestamp enable
diag deb en
diag test app waagent 1
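Once troubleshooting is complete, the debug output can be turned off again with the standard FortiOS diagnose commands:

    diag debug disable
    diag debug reset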
Note that the agent is trying to connect to the Azure Fabric Controller IP 168.63.129.16 and the connection does not go through.
get router info routing-table details 168.63.129.16
Note that there is a static route configured to route traffic destined to 168.63.129.16/32 to port2.
config router static
In this situation, waagent will use the internal port2 as mentioned above. Since the static route is required for ILB health probing, the port2 route cannot simply be deleted.
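The port2 static route typically resembles the following sketch. The sequence number and gateway address are example values only and will differ per deployment; the gateway is normally the first usable address of the internal subnet:

    config router static
        edit 2
            set dst 168.63.129.16 255.255.255.255
            set gateway 10.0.2.1
            set device "port2"
        next
    end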
Packet sniffer output:
Waagent debug output:
2024-07-10 15:06:56 waagent<15963> httpGet:560 URL: http://168.63.129.16/machine/?comp=goalstate <-- Communication with agent stopped.
The solution for this case is to change the priorities of the static routes:
config router static
    edit <route to 168.63.129.16/32 via port2>
        set priority 2
    next
    edit <route via port1>
        set priority 1
    next
end
This allows outgoing traffic (ELB and waagent) to be routed through port1, while egress replies to ILB probes destined for 168.63.129.16/32 still leave via port2.
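After adjusting the priorities, the change can be verified with the same commands used earlier; the route lookup should now prefer the port1 route, and the waagent test should show successful communication with 168.63.129.16:

    get router info routing-table details 168.63.129.16
    diag test app waagent 1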
For more information on deploying the FortiGate with ELB/ILB scenario, refer to the following document:
Note 1: In a deployment consisting of an HA cluster of Azure FortiGate VMs running in Active-Passive mode, if the message appears on the passive VM only, this is expected behavior. When running HA in Active-Passive mode, the passive device does not hold an active routing table, so it cannot connect to the Azure Fabric Controller IP at 168.63.129.16/32.
Note 2: If deploying with a single load balancer (ELB only), route traffic for the Azure Fabric Controller IP 168.63.129.16/32 to an externally connected port only, since traffic to 168.63.129.16/32 should flow through port1 as external (Azure backbone) traffic. The agent is configured to communicate with this Azure backbone IP to send host information and for host monitoring purposes. Hence, the traffic should not be sent to the internal network (port2). |