Hello,
Recently, I upgraded all my FortiGates from FortiOS 7.2.6 to 7.4.5. After several weeks with no issues, I took a closer look at my configuration and noticed an unusual increase in IPSec VPN errors.
Specifically, the TX error counter on the IPSec VPN interface has been steadily increasing on all FortiGates with a PPPoE WAN interface.
stat: rxp=6099751 txp=4612589 rxb=3619539840 txb=1263016689 rxe=0 txe=3928
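(For reference, the stat line above comes from the netlink interface diagnostic; "vpn-spoke" below is a placeholder for your tunnel interface name.)
# Show packet, byte, and error counters for the tunnel interface
diagnose netlink interface list vpn-spoke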
I thoroughly reviewed my configuration and conducted several tests. While I can replicate the TX errors, they don’t appear to impact traffic or data transfer. Given this, I decided to open a support case. After several weeks of investigation, no specific issues related to FortiOS have been identified.
I’ve also attempted to adjust the MTU and TCP-MSS settings in my firewall policies, including lowering both values further, but none of these changes resolved the issue.
Here’s the relevant part of my configuration:
config system interface
    edit "wan"
        set mtu-override enable
        set mtu 1492
    next
end
config firewall policy
    edit 1001
        set tcp-mss-sender 1380
        set tcp-mss-receiver 1380
        set auto-asic-offload disable
    next
end
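To sanity-check these values, I also probed the path MTU from the FortiGate itself with DF-bit pings. With data-size 1464, the resulting IP packet is 1464 + 28 = 1492 bytes, exactly the PPPoE MTU (the peer address 192.0.2.1 is only an example):
execute ping-options df-bit yes
execute ping-options data-size 1464
execute ping 192.0.2.1
If the ping fails, lower the data-size until it passes to find the real path MTU.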
I’m already familiar with some common causes of TX errors, and I have double-checked these possible causes without identifying any related traffic issues.
I would appreciate your assistance in understanding what might be causing these TX errors and the potential reasons behind them.
Thank you very much.
Hello Zoriax,
I see txe is less than 0.1% of txp (3928 out of 4612589). I think this is not bad at all.
On the other hand, it could also be related to the WAN connection. Do you see any TX errors on the associated WAN interface?
Also, on the remote side, do you see any RX/TX errors on the tunnel interface or on the associated WAN interface?
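In case it helps, you can read the NIC-level counters like this (assuming your physical WAN port is simply named wan; adjust to your port name):
# Driver-level statistics, including errors and drops
diagnose hardware deviceinfo nic wan
# Kernel view of the same interface
fnsysctl ifconfig wan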
Hello AEK,
Thanks for your response. The TX error counter is 0 on the WAN interface, and the same on the remote side: TX errors are 0 there as well. The counter only increases on the spoke's tunnel interface.
The number is low because I cleared the counters just before creating this post.
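(For a clean measurement I reset the counters before each test; as far as I know the netlink diagnostic accepts a clear option, which is what I used, again with a placeholder tunnel name:)
diagnose netlink interface clear vpn-spoke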
I continued my tests and tried downgrading my FortiGate to version 7.2.10, but observed the same behavior on this release.
Therefore, I suspect there might be something unusual in my configuration, likely related to MTU/MSS settings.
I can reproduce the TX errors with an SMB transfer (on Windows). At the beginning of the transfer, it appears there is a negotiation that causes TX errors to increase. After that, the traffic stabilizes, and no further errors occur.
Here, I started with txe=0 and downloaded an ISO from one computer to another (through the IPSec VPN):
stat: rxp=108572 txp=266477 rxb=4725342 txb=355629854 rxe=316 txe=3327 rxd=0 txd=0 mc=0 collision=0 @ time=1731595260
As you can see here, there is a slight peak at the beginning of the transfer, which I believe is our culprit.
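To see what is actually on the wire during that initial peak, I captured the first packets of the transfer with the built-in sniffer (the tunnel interface name "vpn-spoke" and the SMB port are assumptions on my side):
# Capture the first 100 SMB packets on the tunnel interface, verbosity 4
diagnose sniffer packet vpn-spoke 'tcp port 445' 4 100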
I really need your help to understand how I can prevent this negotiation (maybe it's not on the FortiGate itself). I already adjusted the MTU on the WAN and LAN interfaces according to this documentation: https://community.fortinet.com/t5/FortiGate/Technical-Tip-Behavior-of-TCP-MSS-setting-under-system-i... but unfortunately nothing changed.
Thanks
I continued my investigations and I think I found the culprit. I tried disabling ASIC offload for VPN tunnels with this command:
config system global
    set ipsec-asic-offload disable
end
Then, after flushing the tunnel, the TX errors disappeared completely and my overall tunnel throughput is roughly two times better.
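For anyone who wants to try the same thing, this is how I flushed the SAs so the change took effect (note this briefly interrupts tunnel traffic). If you prefer not to disable offload globally, I believe there is also a per-tunnel knob under phase1-interface; "vpn-spoke" is a placeholder:
# Flush all IPSec SAs so traffic is re-keyed without offload
diagnose vpn tunnel flush
# Alternative: disable NPU offload for a single tunnel only
config vpn ipsec phase1-interface
    edit "vpn-spoke"
        set npu-offload disable
    next
end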
So why is ASIC offload suddenly a problem on an IPSec VPN tunnel? Should I consider changing my VPN settings (especially the proposal and dhgrp)?
I have seen that errors can happen with ASIC offload across many features.
Try opening a ticket; if it is a bug, then Fortinet will fix it in a future patch.
Hi AEK,
I opened a ticket at the beginning of this post. Support replied that it's not a bug, and I'm continuing to work with them to understand why. In my case, all my PPPoE devices (more than 100) are impacted. This reinforces my belief that this is not an isolated case but rather a systemic issue within these versions of FortiOS.
In my particular case, I can also reduce the errors with the following setting (in the VPN phase1 configuration):
set ip-fragmentation pre-encapsulation
However, with pre-encapsulation the bandwidth through my tunnel becomes very slow (about two times slower). There is something going on with the MTU/MSS and fragmentation, but I don't know where to begin my investigation.
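For completeness, here is the full command path for that setting (the tunnel name "vpn-spoke" is a placeholder):
config vpn ipsec phase1-interface
    edit "vpn-spoke"
        set ip-fragmentation pre-encapsulation
    next
end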
To conclude this post, after weeks of debugging, I received confirmation from support that there is a well-known issue with SOC4 platforms related to the size of the CP queue.
The current workaround is to disable ipsec-asic-offload using the following commands:
config system global
    set ipsec-asic-offload disable
end
A permanent solution will be available soon, but for now, this is the only way to address the issue.
Please monitor your FortiGate's performance closely. While this workaround might impact CPU usage, it should normally remain within acceptable limits.
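A simple way to keep an eye on the CPU impact after applying the workaround:
# Overall CPU, memory, and session load
get system performance status
# Live per-process view, refreshing every 5 seconds
diagnose sys top 5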