GeorgeZhong
Staff & Editor
Article Id 425136
Description This article describes a multicast routing issue in a FortiGate ADVPN hub-and-spoke deployment, where multicast traffic forwarded by the hub based on the multicast routing table is received by only one spoke when hardware offloading is enabled on the hub.
Scope All FortiGate hardware models, Multicast Routing.
Solution

In the standard ADVPN hub-and-spoke topology described below, PIM sparse mode (PIM-SM) multicast routing is enabled on the hub and both spokes.

 

The hub FortiGate 200F interface x3, which serves as the gateway for the multicast source, is configured as the multicast Rendezvous Point (RP). The hub forwards multicast traffic from the source to all spokes based on the multicast routing table.

 

BGP peering is established between the hub and spokes over the IPsec tunnel VPN1, allowing each spoke to learn the unicast route toward both the multicast source and the RP.
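Before troubleshooting the multicast state itself, it can be useful to confirm that each spoke has learned the unicast route toward the RP over the tunnel, since the PIM RPF check depends on it. For example, on Spoke 1 (commands shown for reference; the RP address matches the configuration in this article):

spoke1 # get router info bgp summary
spoke1 # get router info routing-table details 10.2.2.102

The second command should list VPN1 as the outgoing interface for the route toward the RP.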

 

[Topology diagram: multicast source behind hub interface x3; hub FortiGate connected to Spoke1 and Spoke2 over IPsec tunnel VPN1, with receivers behind port2 on each spoke]

 

The relevant part of the multicast configuration on the hub and spokes is shown below:

 

Hub:

 

FortiGate-201F (multicast) # sh
config router multicast
    set multicast-routing enable
    config pim-sm-global
        config rp-address
            edit 1
                set ip-address 10.2.2.102
            next
        end
    end
    config interface
        edit "VPN1"
            set pim-mode sparse-mode
        next
        edit "x3"
            set pim-mode sparse-mode
        next
    end
end

 

Spoke1:

 

spoke1 # config router multicast

spoke1 (multicast) # sh
config router multicast
    set multicast-routing enable
    config pim-sm-global
        config rp-address
            edit 1
                set ip-address 10.2.2.102
            next
        end
    end
    config interface
        edit "VPN1"
            set pim-mode sparse-mode
        next
        edit "port2"
            set pim-mode sparse-mode
        next
    end
end

Spoke2:

 

Spoke2 (multicast) # sh
config router multicast
    set multicast-routing enable
    config pim-sm-global
        config rp-address
            edit 1
                set ip-address 10.2.2.102
            next
        end
    end
    config interface
        edit "VPN1"
            set pim-mode sparse-mode
        next
        edit "port2"
            set pim-mode sparse-mode
        next
    end
end

 

Once a multicast receiver registers with a spoke via an IGMP membership report, each spoke successfully sends a PIM join toward the RP.
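The IGMP membership on a spoke can be verified with the command below (shown for reference); the group should be listed against the receiver-facing interface, port2 in this topology:

spoke1 # get router info multicast igmp groups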

 

For example, on Spoke 1, the multicast routing table shows that group 234.5.6.7 is received from the RPF neighbor 169.254.96.253 (the hub) via the IPsec tunnel VPN1. The routing table on Spoke 2 shows similar output.

 

spoke1 # get router info multicast pim sparse-mode table
IP Multicast Routing Table

(*,*,RP) Entries: 0
(*,G) Entries: 2
(S,G) Entries: 0
(S,G,rpt) Entries: 0
FCR Entries: 0

(*, 234.5.6.7) - 3
RP: 10.2.2.102
RPF nbr: 169.254.96.253
RPF idx: VPN1
RPF RP: 10.2.2.102, 1, 3, 1, 1681692846
Upstream State: JOINED
Downstream Expired: 0
 Local:
     port2
     Total: 1
 Joined:
     Total: 0
 Lost assert:
     Total: 0
FCR:

 

On the hub, the multicast routing table confirms that the downstream receivers (the spokes) have joined, and that multicast traffic for group 234.5.6.7 sourced from 10.2.2.78 is forwarded out via VPN1:

 

FortiGate-201F # get router info multicast pim sparse-mode table
IP Multicast Routing Table

(*,*,RP) Entries: 0
(*,G) Entries: 2
(S,G) Entries: 1
(S,G,rpt) Entries: 1
FCR Entries: 0

VRF 0 (*, 234.5.6.7) - 3
RP: 10.2.2.102
RPF nbr: 0.0.0.0
RPF idx: None
RPF RP(VRF=0): 10.2.2.102, 2, 3, 1, 846930899
Upstream State: JOINED
Downstream Expired: 0
 Local:
     Total: 0
 Joined:
     VPN1
     Total: 1
 Lost assert:
     Total: 0
FCR:
VRF 0 (10.2.2.78, 234.5.6.7) - 3
RPF nbr: 0.0.0.0
RPF idx: None
RPF RP(VRF=0): 10.2.2.102, 2, 3, 1, 846930899
RPF Source(VRF=0): 1, 2, 846930899
SPT bit: 1
Upstream State: JOINED
Downstream Expired: 0
 Local:
     Total: 0
 Joined:
    VPN1
     Total: 1
 Lost assert:
     Total: 0
 Outgoing:
    VPN1
     Total: 1

 

Despite correct multicast routing state on both the hub and spokes, packet captures reveal that only one spoke receives continuous multicast traffic at any given time.

 

For example, in the packet captures below, Spoke 1 receives only the first two multicast packets and then stops, while Spoke 2 continues to receive multicast packets without interruption. If the multicast traffic stops and restarts, the issue may switch to the other spoke.

 

Spoke1:

 

Spoke1 # diagnose sniffer packet any 'host 234.5.6.7' 4 0 l
Using Original Sniffing Mode
interfaces=[any]
filters=[host 234.5.6.7]
2025-08-27 12:16:39.374336 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:39.374629 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:44.561731 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:44.561782 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000

Spoke2:

 

Spoke2 # diagnose sniffer packet any 'host 234.5.6.7' 4 0 l
Using Original Sniffing Mode
interfaces=[any]
filters=[host 234.5.6.7]
2025-08-27 12:16:39.390069 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:39.390459 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:44.577609 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:44.577709 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:49.765328 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:49.765375 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:54.795845 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:54.795921 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:59.983363 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:16:59.983434 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:17:05.139790 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:17:05.139847 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:17:10.280840 VPN1 in 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000
2025-08-27 12:17:10.280914 port2 out 10.2.2.78.50670 -> 234.5.6.7.8910: udp 1000

 

This issue can be addressed by disabling hardware acceleration either in the multicast policy on the hub or in the IPsec phase1-interface settings.

 

To disable hardware acceleration in the multicast policy that forwards the traffic to the IPsec tunnel on the hub FortiGate 200F:

 

config firewall multicast-policy
    edit 2
        set uuid e67d77b8-8279-51f0-6d1e-c6296c430d02
        set name "multicast_out"
        set srcintf "x3"
        set dstintf "VPN1"
        set srcaddr "all"
        set dstaddr "all"
        set logtraffic all
        set auto-asic-offload disable <<<<<<<<<<<
    next
end

 

Alternatively, to disable hardware acceleration on the IPsec phase1-interface on the hub:

 

config vpn ipsec phase1-interface
    edit "VPN1"
        set npu-offload disable
    next
end
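Note that the npu-offload setting is applied when the tunnel is negotiated, so the change takes effect on new negotiations. If needed, the tunnel can be re-established with the command below (the gateway name matches this topology):

FortiGate-201F # diagnose vpn ike gateway clear name VPN1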

 

Applying either of these workarounds resolves the issue, allowing both spokes to consistently receive multicast traffic from the hub.
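The offload state of the forwarded multicast sessions on the hub can also be inspected directly (command shown for reference; the exact output format varies by FortiOS version):

FortiGate-201F # diagnose sys mcast-session list

With the workaround applied, the listed sessions should no longer be marked as offloaded to the NPU.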

 

This behavior is caused by a hardware acceleration limitation when FortiGate NPUs handle multicast session replication over IPsec in a multicast routing scenario. As a result, only one copy of the multicast stream is forwarded to a single spoke. Disabling hardware acceleration for multicast traffic on the hub is recommended in this scenario.

 

Note:
If multicast routing is not enabled on the hub, multicast traffic is flooded according to the multicast policies without consulting a multicast routing table. In that case, both spokes can receive the multicast traffic even with hardware acceleration enabled.