aishaqui
Contributor
Article Id 232172
Description

This article describes how to configure FortiGate as a speed test (iperf) server.

Scope

FortiGate v7.0, v7.2, v7.4.0, v7.4.1.

Solution

Use the settings below to configure FortiGate as a speed test (iPerf) server (note: this feature does NOT work in v7.4.2 and later):

 

config system global

    set speedtest-server enable

end

   

config system interface

    edit <interface name>

        append allowaccess speed-test

    next

end

Enabling Speed Test on the interface using the GUI:

 

[Screenshot: Speed Test.PNG]

Note:

FortiGate, as a speed test (iPerf) server, listens on TCP port 5201.

Starting from FortiOS 7.4.8, FortiGate, as a speed test (iPerf) server, listens on both TCP and UDP port 5201.
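
Before running a full test from a client, it can help to confirm that the speed test port is reachable at all. The sketch below is a minimal, hypothetical reachability check (10.9.1.127 is the example server address used later in this article); it only verifies that a TCP connection to port 5201 succeeds, not actual throughput:

```python
import socket

def tcp_port_open(host: str, port: int = 5201, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical address of the FortiGate speed test server):
# tcp_port_open("10.9.1.127")  # True if port 5201 is reachable
```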

 

For testing, one FortiGate can act as the iPerf client and another FortiGate as the iPerf server.

In this example, 'FGT-A' is configured as the iPerf server and 'FGT-B' as the iPerf client.

 

FGT-A (iPerf Server):

 

config system global

    set speedtest-server enable

end

 

config system interface

    edit "port1"

        set ip 10.9.1.127 255.255.240.0

        set allowaccess ping https ssh http telnet speed-test

    next

end

 

FGT-B (iPerf Client):

From 'FGT-B', run the following commands to configure and verify the traffic test settings.

Make sure the port is 5201 and the protocol is TCP:

 

FortiGate-2000E # diagnose traffictest client-intf port1

FortiGate-2000E # diagnose traffictest server-intf port1

FortiGate-2000E # diagnose traffictest port 5201

FortiGate-2000E # diagnose traffictest show

server-intf:    port1

client-intf:    port1

port:   5201

proto:  TCP

 

Run the following command to initiate the traffic test or speed test:

 

FortiGate-2000E # diagnose traffictest run -c 10.9.1.127

Connecting to host 10.9.1.127, port 5201

[ 14] local 10.9.0.167 port 1209 connected to 10.9.1.127 port 5201

[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd

[ 14]   0.00-1.00   sec   114 MBytes   955 Mbits/sec   23   1.13 MBytes      

[ 14]   1.00-2.00   sec   112 MBytes   943 Mbits/sec    0   1.25 MBytes      

[ 14]   2.00-3.00   sec   112 MBytes   939 Mbits/sec    0   1.35 MBytes      

[ 14]   3.00-4.00   sec   112 MBytes   939 Mbits/sec    0   1.43 MBytes      

[ 14]   4.00-5.00   sec   113 MBytes   945 Mbits/sec    0   1.48 MBytes      

[ 14]   5.00-6.00   sec   112 MBytes   941 Mbits/sec    0   1.52 MBytes      

[ 14]   6.00-7.00   sec   112 MBytes   943 Mbits/sec    0   1.54 MBytes      

[ 14]   7.00-8.00   sec   112 MBytes   941 Mbits/sec    0   1.55 MBytes      

[ 14]   8.00-9.00   sec   112 MBytes   940 Mbits/sec    0   1.56 MBytes      

[ 14]   9.00-10.00  sec   112 MBytes   940 Mbits/sec    0   1.56 MBytes      

- - - - - - - - - - - - - - - - - - - - - - - - -

[ ID] Interval           Transfer     Bandwidth       Retr

[ 14]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec   23             sender

[ 14]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec                  receiver

 

iperf Done
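
As a sanity check on the numbers above: iperf3 reports 'Transfer' in binary MBytes (2^20 bytes) but 'Bandwidth' in decimal Mbits (10^6 bits). A small sketch of the conversion:

```python
def iperf_mbits_per_sec(transfer_mbytes: float, interval_sec: float) -> float:
    """Convert an iperf3 'Transfer' value (MBytes, binary: 2**20 bytes) over an
    interval into a 'Bandwidth' value (Mbits/sec, decimal: 10**6 bits)."""
    bits = transfer_mbytes * (2 ** 20) * 8
    return bits / interval_sec / 1e6

# First interval of the test above: 114 MBytes over roughly 1 second
print(round(iperf_mbits_per_sec(114, 1.0)))  # 956 (iperf reported 955; the interval is not exactly 1.00 s)
```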

 

Note:

Run the following command to run the speed test in reverse (the server sends and the client receives; in this case, measuring download speed):

 

       diagnose traffictest run -R -c 10.9.1.127

 

Run the following commands to use iPerf parallel streams. Parallel streams help measure the true maximum throughput of a link:

 

diagnose traffictest run -c 10.9.1.127 -P 5   // 5 parallel streams for upload

diagnose traffictest run -R -c 10.9.1.127 -P 5  // 5 parallel streams for download
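
If the '-J' (JSON output) flag listed in the argument reference below is supported by the traffic test runner, the results can also be summarized programmatically. This sketch assumes the standard iperf3 JSON schema ('end.sum_sent' / 'end.sum_received'), in which the 'sum' figures already aggregate all parallel streams when '-P' is used; the sample payload is illustrative, not real output:

```python
import json

def summarize_iperf_json(raw: str) -> dict:
    """Extract aggregate sender/receiver throughput (Mbit/s) from iperf3 -J output."""
    data = json.loads(raw)
    return {
        "sent_mbps": data["end"]["sum_sent"]["bits_per_second"] / 1e6,
        "received_mbps": data["end"]["sum_received"]["bits_per_second"] / 1e6,
    }

# Minimal illustrative payload (real iperf3 JSON contains many more fields):
sample = ('{"end": {"sum_sent": {"bits_per_second": 943000000.0}, '
          '"sum_received": {"bits_per_second": 943000000.0}}}')
print(summarize_iperf_json(sample))  # {'sent_mbps': 943.0, 'received_mbps': 943.0}
```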

 

FGT-A:

If the sniffer is run on 'FGT-A':

 

FGT-A # diagnose sniffer packet any "port 5201" 4 0 l

interfaces=[any]

filters=[port 5201]

2022-12-03 06:57:27.907142 port1 in 10.9.0.167.17680 -> 10.9.1.127.5201: syn 3982763007

2022-12-03 06:57:27.907176 port1 out 10.9.1.127.5201 -> 10.9.0.167.17680: syn 29805291 ack 3982763008

2022-12-03 06:57:27.907228 port1 in 10.9.0.167.17680 -> 10.9.1.127.5201: ack 29805292

2022-12-03 06:57:27.907242 port1 in 10.9.0.167.17680 -> 10.9.1.127.5201: psh 3982763008 ack 29805292

2022-12-03 06:57:27.907248 port1 out 10.9.1.127.5201 -> 10.9.0.167.17680: ack 3982763045
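
The capture above shows a complete three-way handshake (syn, then syn/ack, then ack). As an illustration, a hypothetical parser over sniffer lines in this format can confirm the handshake programmatically:

```python
import re

# Match the flag that follows "src.port -> dst.port:" in a FortiGate sniffer line.
LINE = re.compile(r"\d+ -> \S+\.\d+: (syn|ack|psh|fin|rst)\b(.*)")

def handshake_complete(lines) -> bool:
    """Return True once a syn, a syn/ack, and a final ack have been seen in order."""
    saw_syn = saw_synack = False
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        flag, rest = m.groups()
        if flag == "syn" and "ack" not in rest:
            saw_syn = True
        elif flag == "syn" and "ack" in rest:
            saw_synack = True
        elif flag == "ack" and saw_syn and saw_synack:
            return True  # final ACK of the three-way handshake
    return False
```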

 

iPerf arguments:

 

Server or client:

-p, --port # server port to listen on/connect to
-f, --format [kmgtKMGT] format to report: Kbits, Mbits, Gbits, Tbits
-i, --interval # seconds between periodic throughput reports
-F, --file name xmit/recv the specified file
-A, --affinity n/n,m set CPU affinity
-B, --bind <host> bind to the interface associated with the address <host>
-V, --verbose more detailed output
-J, --json output in JSON format
--logfile f send output to a log file
--forceflush force flushing output at every interval
-d, --debug emit debugging output
-v, --version show version information and quit
-h, --help show this message and quit

 

Server-specific:


-s, --server run in server mode
-D, --daemon run the server as a daemon
-I, --pidfile file write PID file
-1, --one-off handle one client connection then exit
--rsa-private-key-path path to the RSA private key used to decrypt
authentication credentials
--authorized-users-path path to the configuration file containing the user
credentials

 

Client specific:

 

-c, --client <host> run in client mode, connecting to <host>
-u, --udp use UDP rather than TCP
--connect-timeout # timeout for control connection setup (ms)
-b, --bitrate #[KMG][/#] target bitrate in bits/sec (0 for unlimited)
(default 1 Mbit/sec for UDP, unlimited for TCP)
(optional slash and packet count for burst mode)
--pacing-timer #[KMG] set the timing for pacing, in microseconds (default 1000)
-t, --time # time in seconds to transmit for (default 10 secs)
-n, --bytes #[KMG] number of bytes to transmit (instead of -t)
-k, --blockcount #[KMG] number of blocks (packets) to transmit (instead of -t or -n)
-l, --length #[KMG] length of buffer to read or write
(default 128 KB for TCP, dynamic or 1460 for UDP)
--cport <port> bind to a specific client port (TCP and UDP, default: ephemeral port)
-P, --parallel # number of parallel client streams to run
-R, --reverse run in reverse mode (server sends, client receives)
--bidir run in bidirectional mode.
Client and server send and receive data.
-w, --window #[KMG] set window size / socket buffer size
-C, --congestion <algo> set TCP congestion control algorithm (Linux and FreeBSD only)
-M, --set-mss # set TCP/SCTP maximum segment size (MTU - 40 bytes)
-N, --no-delay set TCP/SCTP no delay, disabling Nagle's Algorithm
-4, --version4 only use IPv4
-6, --version6 only use IPv6
-S, --tos N set the IP type of service, 0-255.
The usual prefixes for octal and hex can be used,
i.e., 52, 064, and 0x34 all specify the same value.
--dscp N or --dscp val set the IP dscp value, either 0-63 or symbolic.
Numeric values can be specified in decimal,
octal and hex (see --tos above).
-L, --flowlabel N set the IPv6 flow label (only supported on Linux)
-Z, --zerocopy uses a 'zero copy' method of sending data
-O, --omit N omit the first n seconds
-T, --title str prefix every output line with this string
--extra-data str data string to include in client and server JSON
--get-server-output get results from server
--udp-counters-64bit use 64-bit counters in UDP test packets
--repeating-payload use repeating pattern in payload, instead of
randomized payload (like in iperf2)
--username username for authentication
--rsa-public-key-path path to the RSA public key used to encrypt
authentication credentials
 

When FortiGate acts as the iPerf client or server, iPerf traffic is handled by the CPU. While conducting an iPerf test, higher CPU usage may therefore be observed due to softirq.

If FortiGate fails to run a traffic test towards an external iPerf server, check whether a built-in firewall (for example, Windows Firewall) on the iPerf server is enabled. Disable it temporarily to allow the traffic test to proceed.

 

Error:

 

FGT A # di traffictest run -c 10.9.1.127
iperf3: error - unable to connect to server - server may have stopped running or use a different port, firewall issue, etc.: Connection timed out

 

FGT A # diag sniff packet any 'port 5201' 4 0 l
Using Original Sniffing Mode
interfaces=[any]
filters=[port 5201]
2025-06-17 13:17:06.508771 port1 out 10.47.4.44.11436 -> 10.9.1.127.5201: syn 2553256182
2025-06-17 13:17:07.586896 port1 out 10.47.4.44.11436 -> 10.9.1.127.5201: syn 2553256182
2025-06-17 13:17:09.666877 port1 out 10.47.4.44.11436 -> 10.9.1.127.5201: syn 2553256182
2025-06-17 13:17:13.746890 port1 out 10.47.4.44.11436 -> 10.9.1.127.5201: syn 2553256182

 

The packet sniffer shows that there is no response from the iPerf server (10.9.1.127): the client's SYN packets are retransmitted without a SYN/ACK reply.

Note

  • FortiGate devices configured as speed test servers are not designed to operate as standard iPerf servers and should not be used for comprehensive traffic testing.
  • Their primary function is for quick, basic bandwidth assessments within specific network contexts, and attempting to connect them directly for traffic testing can lead to inaccurate results or network issues.
  • For detailed and accurate performance testing, it is recommended to use dedicated iPerf servers or other specialized tools outside the FortiGate environment.

For details about an issue where iPerf connection cannot be established with a FortiGate acting as a SpeedTest Server, see Technical Tip: Unable to establish an iperf connection with a FortiGate acting as a SpeedTest Server....

 

Related articles: