FortiGate
Article Id 365151
Description

This article describes how to change a FortiGate's network processor behavior to accommodate unconventionally tagged traffic over virtual wire pair interfaces.

Scope

FortiGate.
Solution

On rare occasions, an issue may appear with traffic that carries a double tag (QinQ) but does not use the encapsulation defined by the IEEE 802.1ad standard: in other words, traffic with 802.1Q encapsulation for both the inner and outer tags. This type of setup may also be intentional, even though QinQ is rarely implemented this way.
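To confirm how the traffic is actually encapsulated before making any changes, the built-in sniffer can be used with verbose level 6, which prints the Ethernet header and raw frame data. The interface name and packet count below are only illustrative; adjust them to where the traffic is expected to arrive:

diagnose sniffer packet port1 '' 6 10 l

In the hex dump of each captured frame, the two bytes immediately after the source MAC address are the outer TPID. A standard 802.1ad frame shows 0x88a8 there, while traffic tagged as described above shows 0x8100 for both the outer and inner tags.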

 

The problem is that when traffic does not comply with the IEEE 802.1ad standard for QinQ (802.1ad as the outer tag and 802.1Q as the inner tag), the network processor rejects these packets for hardware acceleration. The traffic is then handled by the CPU, generating soft interrupts that may severely impact the FortiGate's performance.

 

In the following example, the output of 'get system performance status' shows CPUs 46 and 58 with signs of over-utilization due to soft interrupts (also known as softirqs), which are caused by the failure to offload this traffic to hardware:

 

FortiGate # get system performance status
<output omitted for brevity>
CPU43 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU44 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU45 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU46 states: 0% user 0% system 0% nice 1% idle 0% iowait 0% irq 99% softirq
CPU47 states: 0% user 0% system 0% nice 74% idle 0% iowait 0% irq 26% softirq
CPU48 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU49 states: 0% user 0% system 0% nice 76% idle 0% iowait 0% irq 24% softirq
CPU50 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU51 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU52 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU53 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU54 states: 0% user 0% system 0% nice 76% idle 0% iowait 0% irq 24% softirq
CPU55 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU56 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU57 states: 0% user 0% system 0% nice 75% idle 0% iowait 0% irq 25% softirq
CPU58 states: 0% user 0% system 0% nice 16% idle 0% iowait 0% irq 84% softirq

 

The issue can also be seen by checking the session table:

 

FortiGate # diagnose sys session list 

<output omitted for brevity>
vlanid=1010:100 <- double tag
npu_state=00000000
npu info: flag=0x00/0x81, offload=0/0, ips_offload=0/0, epid=0/0, ipid=0/148, vlan=0x0000/0x0064
vlifid=0/0, vtag_in=0x0000/0x0000 in_npu=0/0, out_npu=0/0, fwd_en=0/0, qid=0/0, ha_divert=0/0
no_ofld_reason: offload-denied
ext_header_type=0x22:0x22

 

In the NPU section of the session data, offloading is denied with the reason 'offload-denied'.
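On a busy unit the session list can be very long. If needed, session filters can narrow the output to the affected traffic before listing; the addresses below are placeholders for the source and destination of the double-tagged flow:

diagnose sys session filter clear
diagnose sys session filter src 10.0.0.10
diagnose sys session filter dst 10.0.0.20
diagnose sys session list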


To work around this issue in cases where the tagging cannot be corrected, the network processor's behavior can be changed to accept packets with a double 802.1Q tag by following these steps:

 

  1. This change has to be applied on a per-NPU basis. If the FortiGate in question has several NPUs, applying the change to all of them may cause new problems, as traffic with a single tag or no tag might be dropped after the change. For this reason, first determine on which port the double-tagged traffic will arrive, then map that port (or ports) to the NPUs that will be reconfigured. This is done as follows:

 

config system npu-post
    config port-npu-map
        edit port1
            set npu-group NP0-to-NP1
        next
    end
end

 

In this example, the double-tagged traffic arrives at port1, so port1 was mapped to NPUs 0 and 1, which are the ones whose behavior will be changed. Ports that will carry normal traffic should be mapped in the same way, but to NPUs that will keep operating normally, as sketched below.
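For example, assuming normal traffic arrives at port2 and the platform exposes a group covering NPUs 2 and 3, that port could be mapped as follows. The available group names depend on the model and its number of NPUs, so 'port2' and 'NP2-to-NP3' are only illustrative:

config system npu-post
    config port-npu-map
        edit port2
            set npu-group NP2-to-NP3
        next
    end
end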

 

  2. Configure the NPUs to accept an outer 802.1Q tag with the following command (this can only be done through the CLI):

     

    diagnose npu np7 dvlan-mode <dvlan_mode> <npid>

    <dvlan_mode>:
        802.1AD    Outer TPID is 0x88A8.
        802.1Q     Outer TPID is 0x8100.

    <npid>: the ID of the NPU to modify, or 'all' to apply the change to every NPU.

     

    In this example, the following was configured for NPUs 0 and 1:

     

    diagnose npu np7 dvlan-mode 802.1Q 0

     

    diagnose npu np7 dvlan-mode 802.1Q 1

     

    The FortiGate will have to be rebooted for these changes to take effect; the reboot can be triggered from the CLI as shown below.
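
    Once a maintenance window is available and the downtime is acceptable, run:

    execute reboot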

     

     

  3. Configure the outer VLAN ID under the virtual wire pair's configuration:

     


config system virtual-wire-pair
    edit "dvlan-test"
        set member "portX" "portY"
        set wildcard-vlan enable
        set outer-vlan-id 1010
    next
end

 

The VLAN ID used as the outer/external tag has to be configured here for this to work properly; in this example, that VLAN ID is 1010.
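For traffic to actually pass between the members of the virtual wire pair, a firewall policy between them must also be in place. The sketch below assumes a simple allow-all policy is acceptable and reuses the placeholder member names "portX" and "portY" from the example above; adjust the addresses, services, and security profiles to the actual environment:

config firewall policy
    edit 0
        set name "dvlan-test-policy"
        set srcintf "portX"
        set dstintf "portY"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
    next
end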

 

After these changes, the soft interrupts should disappear or be reduced considerably, and the traffic should be offloaded correctly:

 

FortiGate # get system performance status
<output omitted for brevity>
CPU43 states: 0% user 0% system 0% nice 94% idle 0% iowait 1% irq 5% softirq
CPU44 states: 0% user 0% system 0% nice 96% idle 0% iowait 0% irq 4% softirq
CPU45 states: 0% user 0% system 0% nice 93% idle 0% iowait 1% irq 6% softirq
CPU46 states: 0% user 0% system 0% nice 92% idle 0% iowait 1% irq 7% softirq
CPU47 states: 0% user 0% system 0% nice 92% idle 0% iowait 1% irq 7% softirq
CPU48 states: 0% user 0% system 0% nice 96% idle 0% iowait 0% irq 4% softirq
CPU49 states: 0% user 0% system 0% nice 95% idle 0% iowait 0% irq 5% softirq
CPU50 states: 0% user 0% system 0% nice 93% idle 0% iowait 1% irq 6% softirq
CPU51 states: 0% user 0% system 0% nice 96% idle 0% iowait 0% irq 4% softirq
CPU52 states: 0% user 0% system 0% nice 96% idle 0% iowait 0% irq 4% softirq
CPU53 states: 0% user 0% system 0% nice 92% idle 0% iowait 1% irq 7% softirq
CPU54 states: 0% user 0% system 0% nice 94% idle 0% iowait 1% irq 5% softirq
CPU55 states: 0% user 0% system 0% nice 91% idle 0% iowait 1% irq 8% softirq
CPU56 states: 0% user 0% system 0% nice 95% idle 0% iowait 1% irq 4% softirq
CPU57 states: 0% user 0% system 0% nice 95% idle 0% iowait 0% irq 5% softirq
CPU58 states: 0% user 0% system 0% nice 94% idle 0% iowait 0% irq 6% softirq

 

FortiGate # diagnose sys session list 

<output omitted for brevity>

vlanid=1010:100
npu_state=0x000c00 ofld-O ofld-R
npu info: flag=0x81/0x81, offload=9/9, ips_offload=0/0, epid=149/151, ipid=151/149, vlan=0x0064/0x0064
vlifid=151/149, vtag_in=0x0064/0x0064 in_npu=5/5, out_npu=5/5, fwd_en=0/0, qid=30/30, ha_divert=0/0
ext_header_type=0x22:0x22

 

This kind of solution should ideally be temporary and considered a workaround until the tagging can be implemented according to the standard.

Related documents:

Improve DVLAN QinQ performance for NP7 platforms over virtual wire pairs

FortiGate 4400F and 4401F fast path architecture