This article describes an issue where a FortiGate Azure VM running v7.4.7 and earlier, or v7.6.2 and earlier, enters conserve mode due to Network Virtual Appliance (NVA) health checks sent to Azure. High memory usage is observed on the 'azd' daemon.
FortiGate Azure VM on v7.4.7 and earlier, and v7.6.2 and earlier versions.
When a FortiGate Azure VM enters conserve mode, the device's major services can be disrupted. The output below shows the memory state while the device is sending NVA health check metrics to Azure: the 'azd' daemon gradually consumes more and more memory until the conserve mode threshold is reached.
FGT-Azure-VM # diag hardware sysinfo conserve
memory conserve mode: on
total RAM: 32169 MB
memory used: 28411 MB 88% of total RAM
memory freeable: 546 MB 1% of total RAM
memory used + freeable threshold extreme: 30561 MB 95% of total RAM
memory used threshold red: 28309 MB 88% of total RAM
memory used threshold green: 26379 MB 82% of total RAM
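For reference, the thresholds above are percentages of the total RAM: the red threshold of 28309 MB is roughly 88% of 32169 MB (32169 x 0.88 ≈ 28309). The 'memory used' value of 28411 MB has already crossed the red threshold, which is why conserve mode is on, and it will only clear once usage drops back below the green threshold of 26379 MB (82%).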
Checking memory utilization shows that the 'azd' process is consuming the most memory:
FGT-Azure-VM # diagnose sys top-mem 5
azd (2315): 25304360kB <<<-------
node (2310): 497535kB
wad (2411): 127804kB
ipsengine (21004): 55761kB
ipsengine (21005): 54162kB
Top-5 memory used: 26039622kB
FGT-Azure-VM # diagnose sys top 2 30
Run Time: 203 days, 22 hours and 32 minutes
0U, 0N, 0S, 100I, 0WA, 0HI, 0SI, 0ST; 32169T, 3179F
node 2310 S 0.5 1.5 2
wad 2408 S 0.5 0.2 2
httpsd 25433 S 0.5 0.1 4
httpsd 25431 S 0.5 0.1 6
sessionsync 3909 S 0.5 0.0 2
ikecryptd 3923 S 0.5 0.0 4
newcli 25442 R 0.5 0.0 5
azd 2315 S 0.0 76.9 4 <<<--------
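In this output, azd shows 76.9% memory usage while consuming almost no CPU. This is consistent with the 25304360 kB (about 24711 MB) reported by 'diagnose sys top-mem' above: 24711 / 32169 ≈ 76.8% of total RAM.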
Real-time application debugging shows the NVA health check metrics being sent to Azure:
FGT-Azure-VM # diagnose test app azd 4
azd stats: primary
FGT-Azure-VM # diagnose debug application azd -1
FGT-Azure-VM # diagnose debug enable
FGT-Azure-VM # azd_refresh_tunnel_stat: tunnel: VA_aze-0, Up, tx/rx: 5913017059/7948051117 bytes
[...some output omitted...]
---- health metrics report ----
report health metric: NvaAvailability
report health metric: NvaCpuUtilization
report health metric: NvaMemoryUtilization
report health metric: NvaNetworkInterfaceBitsInPerSecond
report health metric: NvaNetworkInterfaceBitsOutPerSecond
report health metric: NvaTunnelCount
report health metric: NvaTunnelAvailability
report health metric: NvaTunnelBitsInPerSecond
report health metric: NvaTunnelBitsOutPerSecond
azd_refresh_tunnel_stat: tunnel: VA_aze-0, Up, tx/rx: 5913091017/7948065290 bytes
azd_refresh_tunnel_stat: tunnel: mgmt_tunnel_0, Up, tx/rx: 98336089/21489828 bytes
azd_refresh_tunnel_stat: tunnel: NJ_aze-0, Up, tx/rx: 126181975818/1741611614192 bytes
azd_refresh_tunnel_stat: tunnel: mgmt_tunnel_1, Up, tx/rx: 0/0 bytes
azd_update_series_tunnel_tx: tunnel: VA_aze-0, tx speed: 15516 bps
azd_update_series_tunnel_rx: tunnel: VA_aze-0, rx speed: 3269 bps
azd_update_series_tunnel_tx: tunnel: mgmt_tunnel_0, tx speed: 237 bps
azd_update_series_tunnel_rx: tunnel: mgmt_tunnel_0, rx speed: 32 bps
azd_update_series_tunnel_tx: tunnel: NJ_aze-0, tx speed: 170106 bps
azd_update_series_tunnel_rx: tunnel: NJ_aze-0, rx speed: 439329 bps
azd_update_series_tunnel_tx: tunnel: mgmt_tunnel_1, tx speed: 0 bps
azd_update_series_tunnel_rx: tunnel: mgmt_tunnel_1, rx speed: 0 bps
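Once enough output has been collected, disable the application debug again to avoid unnecessary logging overhead:

FGT-Azure-VM # diagnose debug application azd 0
FGT-Azure-VM # diagnose debug disable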
Crash log events can also be seen: 'azd' crashes with signal 8 (floating point exception), and the kernel eventually enters memory conserve mode.
FGT-Azure-VM# diagnose debug crashlog read
16147: 2024-12-19 19:22:20 the killed daemon is /bin/azd: status=0x8
16148: 2024-12-19 19:22:40 azd previously crashed 1 times. The last crash was at 2024-12-19 19:22:20.
16149: 2024-12-19 19:22:40 <19580> firmware FortiGate-VM64-AZURE v7.4.4,build2662b2662,240514 (GA.F)
16150: 2024-12-19 19:22:40 (Release)
16151: 2024-12-19 19:22:40 <19580> application azd
16152: 2024-12-19 19:22:40 <19580> *** signal 8 (Floating point exception) received ***
16153: 2024-12-19 19:22:40 <19580> Register dump:
16154: 2024-12-19 19:22:40 <19580> RAX: 0000000000000000 RBX: 000000001052b8c8
16155: 2024-12-19 19:22:40 <19580> RCX: 0000000000000000 RDX: 0000000000000000
16156: 2024-12-19 19:22:40 <19580> R08: 0000000000000000 R09: 0000000000000000
16157: 2024-12-19 19:22:40 <19580> R10: 000000001052baf8 R11: 0000000000000246
16158: 2024-12-19 19:22:40 <19580> R12: 00000000051033e0 R13: 000000001052b8c8
16159: 2024-12-19 19:22:40 <19580> R14: 00000000104fddb8 R15: 0000000000000005
16160: 2024-12-19 19:22:40 <19580> RSI: 00000000104fddb8 RDI: 000000001052b8c8
16161: 2024-12-19 19:22:40 <19580> RBP: 00007ffd6a28a200 RSP: 00007ffd6a28a1f0
16162: 2024-12-19 19:22:40 <19580> RIP: 00000000004fbc33 EFLAGS: 0000000000010246
16163: 2024-12-19 19:22:40 <19580> CS: 0033 FS: 0000 GS: 0000
16164: 2024-12-19 19:22:40 <19580> Trap: 0000000000000000 Error: 0000000000000000
16165: 2024-12-19 19:22:40 <19580> OldMask: 0000000000000000
16166: 2024-12-19 19:22:40 <19580> CR2: 0000000000000000
16167: 2024-12-19 19:22:40 <19580> stack: 0x7ffd6a28a1f0 - 0x7ffd6a28b930
16168: 2024-12-19 19:22:40 <19580> Backtrace:
[..some lines omitted..]
16190: 2025-01-21 14:57:26 service=kernel conserve=on total="6971 MB" used="6134 MB" red="6134 MB"
16191: 2025-01-21 14:57:26 green="5716 MB" msg="Kernel enters memory conserve mode"
The issue is reported under internal ID 1109724 and is fixed in v7.4.8 and v7.6.3.
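Until the fixed firmware can be installed, restarting the 'azd' daemon should temporarily release the leaked memory. As an assumption based on the generic FortiOS daemon test levels (level 99 restarts a daemon; this article does not confirm it specifically for azd), the restart can be attempted from the CLI during a maintenance window:

FGT-Azure-VM # diagnose test application azd 99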