Before I start, I want to clarify that the remote iPerf server I’m testing against is on a 10Gbps link.
I have a FortiGate 61F with the wan2 interface connected to a 1 Gbps (symmetric) Internet connection. If I run an iPerf test from that interface, I can achieve ~740 Mbps. However, if I run the same test from a machine on the inside of the firewall, I can never get past 300 Mbps.
Version: FortiWiFi-61F v7.2.3,build1262,221109 (GA.F)
Firmware Signature: certified
Virus-DB: 90.08592(2022-12-09 22:26)
Extended DB: 90.08592(2022-12-09 22:25)
AV AI/ML Model: 2.08777(2022-12-09 21:45)
IPS-DB: 22.00454(2022-12-08 00:58)
IPS-ETDB: 0.00000(2001-01-01 00:00)
APP-DB: 22.00454(2022-12-08 00:58)
INDUSTRIAL-DB: 22.00454(2022-12-08 00:58)
IPS Malicious URL Database: 4.00555(2022-12-09 11:23)
IoT-Detect: 22.00454(2022-12-07 17:24)
Serial-Number:
BIOS version: 05000007
System Part-Number: P24307-03
Log hard disk: Available
Hostname:
Private Encryption: Disable
Operation Mode: NAT
Current virtual domain: root
Max number of virtual domains: 10
Virtual domains status: 1 in NAT mode, 0 in TP mode
Virtual domain configuration: disable
FIPS-CC mode: disable
Current HA mode: standalone
Branch point: 1262
Release Version Information: GA
Here is the test from the firewall:
diagnose traffictest run -c x.x.x.x -P 4
[ ID] Interval Transfer Bandwidth Retr
[ 7] 0.00-10.01 sec 204 MBytes 171 Mbits/sec 44 sender
[ 7] 0.00-10.01 sec 204 MBytes 171 Mbits/sec receiver
[ 9] 0.00-10.01 sec 212 MBytes 177 Mbits/sec 138 sender
[ 9] 0.00-10.01 sec 212 MBytes 177 Mbits/sec receiver
[ 11] 0.00-10.01 sec 243 MBytes 204 Mbits/sec 40 sender
[ 11] 0.00-10.01 sec 243 MBytes 204 Mbits/sec receiver
[ 13] 0.00-10.01 sec 224 MBytes 188 Mbits/sec 10 sender
[ 13] 0.00-10.01 sec 224 MBytes 188 Mbits/sec receiver
[SUM] 0.00-10.01 sec 883 MBytes 741 Mbits/sec 232 sender
[SUM] 0.00-10.01 sec 882 MBytes 740 Mbits/sec receiver
Here is the test from the inside machine:
iperf3 -c x.x.x.x -P 4
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 83.5 MBytes 70.0 Mbits/sec sender
[ 4] 0.00-10.00 sec 83.5 MBytes 70.0 Mbits/sec receiver
[ 6] 0.00-10.00 sec 83.2 MBytes 69.8 Mbits/sec sender
[ 6] 0.00-10.00 sec 83.2 MBytes 69.8 Mbits/sec receiver
[ 8] 0.00-10.00 sec 83.0 MBytes 69.6 Mbits/sec sender
[ 8] 0.00-10.00 sec 83.0 MBytes 69.6 Mbits/sec receiver
[ 10] 0.00-10.00 sec 83.4 MBytes 69.9 Mbits/sec sender
[ 10] 0.00-10.00 sec 83.4 MBytes 69.9 Mbits/sec receiver
[SUM] 0.00-10.00 sec 333 MBytes 279 Mbits/sec sender
[SUM] 0.00-10.00 sec 333 MBytes 279 Mbits/sec receiver
What’s interesting is that if I increase the parallel streams on the inside client from 4 to 20, I can start realizing those higher speeds.
iperf3 -c x.x.x.x -P 20
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 57.0 MBytes 47.8 Mbits/sec sender
[ 4] 0.00-10.00 sec 57.0 MBytes 47.8 Mbits/sec receiver
[ 6] 0.00-10.00 sec 56.5 MBytes 47.4 Mbits/sec sender
[ 6] 0.00-10.00 sec 56.5 MBytes 47.4 Mbits/sec receiver
[ 8] 0.00-10.00 sec 56.6 MBytes 47.5 Mbits/sec sender
[ 8] 0.00-10.00 sec 56.6 MBytes 47.5 Mbits/sec receiver
[ 10] 0.00-10.00 sec 56.9 MBytes 47.7 Mbits/sec sender
[ 10] 0.00-10.00 sec 56.9 MBytes 47.7 Mbits/sec receiver
[ 12] 0.00-10.00 sec 56.5 MBytes 47.4 Mbits/sec sender
[ 12] 0.00-10.00 sec 56.5 MBytes 47.4 Mbits/sec receiver
[ 14] 0.00-10.00 sec 56.8 MBytes 47.6 Mbits/sec sender
[ 14] 0.00-10.00 sec 56.8 MBytes 47.6 Mbits/sec receiver
[ 16] 0.00-10.00 sec 56.6 MBytes 47.5 Mbits/sec sender
[ 16] 0.00-10.00 sec 56.6 MBytes 47.5 Mbits/sec receiver
[ 18] 0.00-10.00 sec 56.1 MBytes 47.1 Mbits/sec sender
[ 18] 0.00-10.00 sec 56.1 MBytes 47.1 Mbits/sec receiver
[ 20] 0.00-10.00 sec 56.9 MBytes 47.7 Mbits/sec sender
[ 20] 0.00-10.00 sec 56.9 MBytes 47.7 Mbits/sec receiver
[ 22] 0.00-10.00 sec 56.4 MBytes 47.3 Mbits/sec sender
[ 22] 0.00-10.00 sec 56.4 MBytes 47.3 Mbits/sec receiver
[ 24] 0.00-10.00 sec 54.2 MBytes 45.5 Mbits/sec sender
[ 24] 0.00-10.00 sec 54.2 MBytes 45.5 Mbits/sec receiver
[ 26] 0.00-10.00 sec 56.0 MBytes 47.0 Mbits/sec sender
[ 26] 0.00-10.00 sec 56.0 MBytes 47.0 Mbits/sec receiver
[ 28] 0.00-10.00 sec 57.0 MBytes 47.8 Mbits/sec sender
[ 28] 0.00-10.00 sec 57.0 MBytes 47.8 Mbits/sec receiver
[ 30] 0.00-10.00 sec 56.6 MBytes 47.5 Mbits/sec sender
[ 30] 0.00-10.00 sec 56.6 MBytes 47.5 Mbits/sec receiver
[ 32] 0.00-10.00 sec 48.6 MBytes 40.8 Mbits/sec sender
[ 32] 0.00-10.00 sec 48.6 MBytes 40.8 Mbits/sec receiver
[ 34] 0.00-10.00 sec 52.9 MBytes 44.4 Mbits/sec sender
[ 34] 0.00-10.00 sec 52.8 MBytes 44.3 Mbits/sec receiver
[ 36] 0.00-10.00 sec 56.6 MBytes 47.5 Mbits/sec sender
[ 36] 0.00-10.00 sec 56.6 MBytes 47.5 Mbits/sec receiver
[ 38] 0.00-10.00 sec 45.2 MBytes 38.0 Mbits/sec sender
[ 38] 0.00-10.00 sec 45.2 MBytes 38.0 Mbits/sec receiver
[ 40] 0.00-10.00 sec 56.5 MBytes 47.4 Mbits/sec sender
[ 40] 0.00-10.00 sec 56.5 MBytes 47.4 Mbits/sec receiver
[ 42] 0.00-10.00 sec 56.9 MBytes 47.7 Mbits/sec sender
[ 42] 0.00-10.00 sec 56.9 MBytes 47.7 Mbits/sec receiver
[SUM] 0.00-10.00 sec 1.08 GBytes 929 Mbits/sec sender
[SUM] 0.00-10.00 sec 1.08 GBytes 928 Mbits/sec receiver
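Doing the math on the sums: 4 streams gave ~279 Mbps total (~70 Mbps per stream), while 20 streams gave ~929 Mbps total (~40-48 Mbps per stream), so it looks like each individual flow is being capped rather than the aggregate. To separate a window/latency limit from some other per-flow cap, a single-stream run with an oversized window is probably the next test; a sketch (the 4M window and 30-second duration are just example values):
iperf3 -c x.x.x.x -P 1 -w 4M -t 30
iperf3 -c x.x.x.x -P 1 -w 4M -t 30 -R
The first runs one TCP stream with a 4 MB socket buffer for 30 seconds; the second repeats it in the reverse (download) direction.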
I'm not using a VPN tunnel, I don’t have any strange routing (no PBR), and I don’t even run a routing protocol, just a single static default route. There is no traffic shaping, and I did not create any software switches. A single physical interface is connected to a Cisco switch (no errors on the ports), and it’s configured for VLAN trunking.
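For completeness, the FortiGate-side port counters and negotiated speed/duplex can be checked as well; something like this, assuming the diagnostic is available on the 61F (the second port name is a placeholder for whichever internal port carries the trunk):
diagnose hardware deviceinfo nic wan2
diagnose hardware deviceinfo nic internal1
Both show the negotiated speed/duplex along with error and drop counters for the port.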
I thought maybe the outbound policy might be slowing it down, so I’ve removed all UTM features, set the SSL inspection to “no-inspection”, and even disabled traffic logging, but it doesn’t make the slightest difference.
config firewall policy
edit 2
set name "outbound default"
set uuid 31503162-25f6-51eb-f268-20a2aadc8c96
set srcintf "Servers" "Workstations" "Lab-1"
set dstintf "Outside"
set action accept
set srcaddr "all"
set dstaddr "all"
set schedule "always"
set service "ALL"
set logtraffic disable
set logtraffic-start enable
next
end
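If it comes to it, one more thing I could toggle during a quiet window is NPU offload on that policy, to see whether handling the flows on the CPU behaves any differently. A sketch, assuming the option is exposed on this build (it pushes this policy's traffic off the NP and onto the CPU, so it's only for testing):
config firewall policy
    edit 2
        set auto-asic-offload disable
    next
end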
I’m not sure where to start looking next so any suggestions would be appreciated!
Thanks!!
What window size is your inside client using? That is likely your issue. Or actually, now that I look closer, it looks like you are using UDP on your server. Can you change it to TCP?
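For example, something along these lines, with the window set explicitly (the 1M value is just an example):
iperf3 -c x.x.x.x -P 4 -w 1M
That forces a 1 MB socket buffer per stream instead of whatever the OS default is.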
Resolution
Check the link speed on the WAN interface.
TIP: Fig. 1. Optimizing the link speed and MTU in the advanced settings of the WAN interface where the defaults fail to establish a compatible ISP connection.
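If the GUI options are not available, the CLI equivalent would be along these lines (the interface name and speed value are only examples; auto-negotiation is usually the right choice unless it misbehaves with the ISP device):
config system interface
    edit wan2
        set speed auto
    next
end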
Check the MTU.
A mismatch in the maximum transmission unit (MTU) between the firewall and the ISP device can impact bandwidth. In one of my cases, just by optimizing the MTU we were able to recover the bandwidth (from 5% to 90% of the link). The ideal condition for the test is to connect a computer directly to the firewall (Fig. 2), to rule out uncertainty from latency elsewhere in the network.
TIP: Fig. 2. Checking the MTU from a directly connected computer is my preferred way to minimize uncertainty about latency in a complex network.
A typical MTU optimization test is a ping with the -f (don't fragment) and -l (size) options, as summarized in Fig. 3. I start at an MTU of 1500 and work down to the largest size that pings successfully, then add 28 bytes (20-byte IP header + 8-byte ICMP header) to derive the MTU value I would use on the WAN interface.
TIP: Fig. 3. Ping test on a Windows computer directly connected to the firewall. Add 28 to the largest payload size that resulted in a successful ping.
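On a Windows computer the sweep looks something like this (the destination IP is only an example); 1472 is the largest payload that fits in a 1500-byte MTU once the 20-byte IP and 8-byte ICMP headers are added:
ping -f -l 1472 8.8.8.8
ping -f -l 1473 8.8.8.8
The first should succeed on a 1500-byte path; the second should report that the packet needs to be fragmented. Work the -l value down until the ping succeeds, then add 28.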
Check the DNS settings of the computer where you are testing.
On a few occasions, although you see reasonable bandwidth, web pages take a long time to load initially. The first suspect is DNS. When you type an address into a browser, the first thing that happens is a DNS lookup to get the IP. In Fig. 4, the host is trying to resolve names against local DNS servers: one on the LAN and another across a VPN tunnel. This is a security measure; however, if the local DNS is having an issue (overwhelmed, network latency, etc.), it will slow down name resolution. Just by switching to a readily available public DNS, we were able to overcome the slow page-load issue.
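A quick way to compare resolvers is to run the same lookup against the local DNS and a public one and compare how long each takes to answer (the hostname and resolver IP are only examples):
nslookup www.example.com
nslookup www.example.com 8.8.8.8
The first uses the locally configured DNS server; the second sends the same query to a public resolver.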
Regards,
Rachel Gomez
Thank you both, Graham and Rachel, for your suggestions!
The default protocol for iPerf is TCP (unless you specify -u), but I did verify that the remote server was started using TCP (the remote server is actually an EXFO MAX-880 tester).
I did test different window sizes (64K, 512K, 1M, 5M, 20M), but they made no difference at all.
The MTU on all the FortiGate interfaces is set to 1500 (the largest unfragmented ping payload is 1472 bytes, i.e. 1472 + 28 = 1500). I verified with the ISP that their Ethernet side is also configured for 1500, so there are no mismatches there.
# fnsysctl ifconfig -a wan2
wan2 Link encap:Ethernet HWaddr 04:D5:90:4A:1F:41
inet addr:x.x.x.x Bcast:x.x.x.x Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:305335156 errors:0 dropped:0 overruns:0 frame:0
TX packets:1555526559 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:156619317955 (145.9 GB) TX bytes:2312546218880 (2153.7 GB)
# fnsysctl ifconfig -a VLAN-5
VLAN-5 Link encap:Ethernet HWaddr 04:D5:90:4A:1F:43
inet addr:192.168.76.1 Bcast:192.168.76.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:360399486 errors:0 dropped:0 overruns:0 frame:0
TX packets:283937545 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:488631528196 (455.1 GB) TX bytes:328796468895 (306.2 GB)
The biggest problem we’re having is with sustained throughput. A simple speed test does show good internet speeds, but when we’re doing any kind of heavy file transfers, those speeds average ~10% of the capacity.
For instance, we have a daily transfer that runs from our on-site S3 storage to a remote location. The total size is usually 300-400 GB/day (with no file smaller than 5 GB), and it’s accomplished using rclone. Without the FortiGate, we can sustain 57 MB/s (~450 Mbps), but through the FortiGate that speed drops to ~10 MB/s (~80 Mbps).
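For reference, the transfer is a plain rclone copy. Since more parallel streams helped in the iPerf tests, raising rclone's own parallelism is something we may experiment with; a sketch with placeholder remote names and an example flag value, not our exact job:
rclone copy onsite-s3:daily offsite-s3:daily --transfers 16 --progress
--transfers controls how many files move concurrently, which translates into more parallel TCP flows through the firewall.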
Internally (between VLANs) on the FortiGate, we can sustain ~90 MB/s (~720 Mbps), and all the ports on our internal switches are clean (no errors, no drops). The speed issue only seems to happen when we’re running through the WAN interface.
My only thought at this point is that the FGT is busy servicing other traffic. It sounds like you are doing east-west inter-VLAN routing through the FGT? That's a fairly small box to be doing edge FW and internal FW duty at the same time.
Are you running these tests when there is absolutely no other traffic flowing through the gate?
This is a very small site, and the east-west traffic happens during the day. The Internet transfers happen at midnight when nobody is here, and I ran my tests with no other (or only negligible) traffic on the network.
Can you monitor CPU usage on the FGT when the transfers are happening? Is it high at all?
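Something like the following while the transfer is running would show it:
get system performance status
diagnose sys top 5 20
The first gives a CPU/memory/session snapshot; the second lists the busiest processes, refreshing every 5 seconds.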
Also, please send the output of:
diag sys session filter dst <FILE_SERVER_IP>
diag sys session filter src <INTERNAL_CLIENT_IP>
diag sys session list
Here are the sessions:
session info: proto=6 proto_state=16 duration=116 expire=0 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=/ vlan_cos=0/255
state=log may_dirty npu f00 app_valid log-start
statistic(bytes/packets/allow_err): org=658815174/439525/1 reply=2386874/45417/1 tuples=3
tx speed(Bps/kbps): 3427462/27419 rx speed(Bps/kbps): 19299/154
orgin->sink: org pre->post, reply pre->post dev=25->7/7->25 gwy=x.x.x.x/192.168.73.5
hook=post dir=org act=snat 192.168.73.5:46130->x.x.x.x:443(x.x.x.x:46130)
hook=pre dir=reply act=dnat x.x.x.x:443->x.x.x.x:46130(192.168.73.5:46130)
hook=post dir=reply act=noop x.x.x.x:443->192.168.73.5:46130(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=90:09:d0:27:90:aa
misc=0 policy_id=2 pol_uuid_idx=541 auth_info=0 chk_client_info=0 vd=0
serial=0010f4ec tos=ff/ff app_list=2000 app=47013 url_cat=0
rpdb_link_id=00000000 ngfwid=n/a
npu_state=0x4003c08 ofld-O ofld-R
npu info: flag=0x81/0x81, offload=0/0, ips_offload=0/0, epid=65/67, ipid=67/65, vlan=0x0049/0x0000
vlifid=67/65, vtag_in=0x0049/0x0000 in_npu=1/1, out_npu=1/1, fwd_en=0/0, qid=0/2
no_ofld_reason:
ofld_fail_reason(kernel, drv): none/not-established, none(0)/none(0)
npu_state_err=00/04
session info: proto=6 proto_state=11 duration=5 expire=3598 timeout=3600 flags=00000000 socktype=0 sockport=0 av_idx=0 use=4
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=/ vlan_cos=0/255
state=log may_dirty npu f00 app_valid log-start
statistic(bytes/packets/allow_err): org=16782474/11198/1 reply=5746/12/1 tuples=3
tx speed(Bps/kbps): 0/0 rx speed(Bps/kbps): 0/0
orgin->sink: org pre->post, reply pre->post dev=25->7/7->25 gwy=x.x.x.x/192.168.73.5
hook=post dir=org act=snat 192.168.73.5:46248->x.x.x.x:443(x.x.x.x:46248)
hook=pre dir=reply act=dnat x.x.x.x:443->x.x.x.x:46248(192.168.73.5:46248)
hook=post dir=reply act=noop x.x.x.x:443->192.168.73.5:46248(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
src_mac=90:09:d0:27:90:aa
misc=0 policy_id=2 pol_uuid_idx=541 auth_info=0 chk_client_info=0 vd=0
serial=0010f73b tos=ff/ff app_list=2000 app=47013 url_cat=0
rpdb_link_id=00000000 ngfwid=n/a
npu_state=0x4003c08 ofld-O ofld-R
npu info: flag=0x81/0x81, offload=8/8, ips_offload=0/0, epid=65/67, ipid=67/65, vlan=0x0049/0x0000
vlifid=67/65, vtag_in=0x0049/0x0000 in_npu=1/1, out_npu=1/1, fwd_en=0/0, qid=0/3
total session 2
I'm thinking a support ticket might be the next step.
A support ticket would be a good idea. Please report your findings back here.
Also, just a quick question: what's the latency between the client and the file share server?
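The reason I ask: a single TCP stream can only carry about one window of data per round trip, so its ceiling is roughly window / RTT. Purely as an illustration (the 20 ms RTT is an assumption, not a measurement):
1 Gbps x 0.020 s = 20,000,000 bits ≈ 2.5 MB of in-flight data needed per stream to fill the link
64 KB / 0.020 s ≈ 26 Mbits/sec ceiling for a single stream stuck at a 64 KB window
That would also line up with many parallel streams doing better than a few.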
@Brian_M - We are facing a similar issue. Did you find a resolution?