FGT-Perimeter# diagnose traffictest port 5201
FGT-Perimeter# diagnose traffictest proto 0 
FGT-Perimeter# diagnose traffictest client-intf port1
Note: proto 0 selects TCP; for UDP, use proto 1.
To verify the configuration, I'll use diagnose traffictest show:
FGT-Perimeter # diag traffictest show
server-intf:    port1
client-intf:    port1
port:   5201
proto:  TCP
Looks good. Now let's actually run the test with diagnose traffictest run -c, specifying the remote host IP of 199.1.1.1:
FGT-Perimeter # diagnose traffictest run -c 199.1.1.1
Connecting to host 199.1.1.1, port 5201
[  9] local 10.10.10.178 port 13251 connected to 199.1.1.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  9]   0.00-1.00   sec   114 MBytes   957 Mbits/sec    0   2.10 MBytes
[  9]   1.00-2.00   sec   112 MBytes   945 Mbits/sec    0   3.01 MBytes
[  9]   2.00-3.00   sec   112 MBytes   944 Mbits/sec    0   3.01 MBytes
[  9]   3.00-4.00   sec   112 MBytes   943 Mbits/sec    0   3.01 MBytes
[  9]   4.00-5.00   sec   111 MBytes   934 Mbits/sec    0   3.01 MBytes
[  9]   5.00-6.00   sec   112 MBytes   944 Mbits/sec    0   3.01 MBytes
[  9]   6.00-7.00   sec   112 MBytes   944 Mbits/sec    0   3.01 MBytes
[  9]   7.00-8.00   sec   112 MBytes   944 Mbits/sec    0   3.01 MBytes
[  9]   8.00-9.00   sec   112 MBytes   942 Mbits/sec    0   3.01 MBytes
[  9]   9.00-10.00  sec   111 MBytes   935 Mbits/sec    0   3.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  9]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec    0             sender
[  9]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec                  receiver

On the Linux server side, the same test looks like this:
[root@darkstar3 ~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.10.10.178, port 22082
[  5] local 199.1.1.1 port 5201 connected to 10.10.10.178 port 13782
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   108 MBytes   904 Mbits/sec
[  5]   1.00-2.01   sec   112 MBytes   937 Mbits/sec
[  5]   2.01-3.00   sec   112 MBytes   946 Mbits/sec
[  5]   3.00-4.00   sec   112 MBytes   942 Mbits/sec
[  5]   4.00-5.00   sec   112 MBytes   941 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec
[  5]   6.00-7.00   sec   112 MBytes   941 Mbits/sec
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec
[  5]  10.00-10.06  sec  6.34 MBytes   938 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.06  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.06  sec  1.10 GBytes   938 Mbits/sec                  receiver

We can also test the opposite direction, with the remote server sending traffic towards the FortiGate, by adding the -R (reverse) option:
FGT-Perimeter # diagnose traffictest run -R -c 199.1.1.1
Connecting to host 199.1.1.1, port 5201
Reverse mode, remote host 199.1.1.1 is sending
[  9] local 10.10.10.178 port 18344 connected to 199.1.1.1  port 5201
[ ID] Interval           Transfer     Bandwidth
[  9]   0.00-1.00   sec   112 MBytes   940 Mbits/sec
[  9]   1.00-2.00   sec   111 MBytes   933 Mbits/sec
[  9]   2.00-3.00   sec   111 MBytes   931 Mbits/sec
[  9]   3.00-4.00   sec   111 MBytes   931 Mbits/sec
[  9]   4.00-5.00   sec   111 MBytes   934 Mbits/sec
[  9]   5.00-6.00   sec   111 MBytes   934 Mbits/sec
[  9]   6.00-7.00   sec   111 MBytes   933 Mbits/sec
[  9]   7.00-8.00   sec   111 MBytes   930 Mbits/sec
[  9]   8.00-9.00   sec   111 MBytes   931 Mbits/sec
[  9]   9.00-10.00  sec   111 MBytes   935 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  9]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec    0             sender
[  9]   0.00-10.00  sec  1.09 GBytes   934 Mbits/sec                  receiver
iperf Done.
Ideally, the bandwidth reached will be symmetrical and match the capacity of the link.
TCP testing has one disadvantage: the protocol is smart and adaptive. When TCP detects packet loss on a connection, it slows its sending rate down to compensate. But if there is packet loss, we very much want to see it! UDP can do exactly that: you set the bandwidth to reach, and if any packets are lost along the way, it shows. Let's run a 100 Mbit/sec test with UDP.
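For comparison, the same distinction is visible with a standalone iperf3 client on any Linux host. This is just a sketch, assuming the same server at 199.1.1.1; the FortiGate's traffictest settings map to similar standard iperf3 flags:
$ iperf3 -c 199.1.1.1 -p 5201              # TCP: rate adapts, loss shows up only indirectly as retransmits (Retr)
$ iperf3 -c 199.1.1.1 -p 5201 -u -b 100M   # UDP at a fixed 100 Mbit/sec: loss is reported explicitly as Lost/Total Datagrams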
Back on the FortiGate, we first tell it to use UDP with diagnose traffictest proto 1:
FGT-Perimeter # diagnose traffictest proto 1
proto:  UDP
All is set. Let's run the test, specifying the desired bandwidth of 100 Mbit/sec with diagnose traffictest run -b 100M:
FGT-Perimeter # diagnose traffictest run -b 100M -c 199.1.1.1
Connecting to host 199.1.1.1, port 5201
[  9] local 10.10.10.178 port 5935 connected to 199.1.1.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  9]   0.00-1.00   sec  10.8 MBytes  90.6 Mbits/sec  1382
[  9]   1.00-2.00   sec  11.9 MBytes  99.9 Mbits/sec  1525
[  9]   2.00-3.00   sec  11.9 MBytes   100 Mbits/sec  1526
[  9]   3.00-4.00   sec  11.9 MBytes  99.9 Mbits/sec  1525
[  9]   4.00-5.00   sec  11.9 MBytes   100 Mbits/sec  1526
[  9]   5.00-6.00   sec  11.9 MBytes   100 Mbits/sec  1526
[  9]   6.00-7.00   sec  11.9 MBytes   100 Mbits/sec  1526
[  9]   7.00-8.00   sec  11.9 MBytes   100 Mbits/sec  1526
[  9]   8.00-9.00   sec  11.9 MBytes   100 Mbits/sec  1526
[  9]   9.00-10.00  sec  11.9 MBytes   100 Mbits/sec  1526
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  9]   0.00-10.00  sec   118 MBytes  99.1 Mbits/sec  0.055 ms  5903/15057 (39%)
[  9] Sent 15057 datagrams
The situation is much worse now. OK, a small confession: to show you actual packet loss, I introduced a fault into the same network, which caused this poor performance; otherwise I would have gotten the same ~1000 Mbit/sec with 0% packet loss as in the TCP test. In this test we can see the Lost/Total Datagrams counter, which was completely missing in the TCP test: 5903 of the 15057 datagrams sent (39%) were lost, so only about 61% of the packets reached the remote Linux server.
A few remarks about the iperf version embedded in the FortiGate:
- The version used (in 5.x and 6.x firmware so far) is 3.0.9. This means it will not interoperate with iperf2 and its subversions.
- The tool can work as a CLIENT only, i.e. it does not accept the -s option. This means we can NOT run an iperf test between two FortiGates; one of the peers has to be a Linux/Windows server with iperf3 -s running (see the server-side example below). It does NOT mean we can test in only one direction, though - the command accepts the -R option for reverse traffic.
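For reference, a minimal server-side setup on that Linux peer could look like the lines below. This is only a sketch using standard iperf3 options (-p, -D, --logfile); --logfile needs a reasonably recent iperf3 build, and the log path is just an example:
[root@darkstar3 ~]# iperf3 -s -p 5201 -D --logfile /var/log/iperf3.log   # listen on port 5201 in the background, writing results to a log file
[root@darkstar3 ~]# tail -f /var/log/iperf3.log                          # review incoming test results later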
Another cool feature we can combine with iperf on the FortiGate is Automation: under Security Fabric -> Automation we can create a stitch with a Schedule trigger and a CLI Script action that runs the iperf test on a schedule, and then review the results on the remote server.
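A very rough CLI sketch of such a stitch is shown below. Treat it as an illustration only: the trigger, action, and stitch names are made up, the schedule is arbitrary, and the exact option names vary between FortiOS versions (newer releases, for example, attach actions to a stitch through a sub-table rather than a single set action line):
config system automation-trigger
    edit "iperf-nightly"
        set trigger-type scheduled
        set trigger-frequency daily
        set trigger-hour 2
    next
end
config system automation-action
    edit "iperf-run"
        set action-type cli-script
        set script "diagnose traffictest run -c 199.1.1.1"
    next
end
config system automation-stitch
    edit "scheduled-iperf-test"
        set trigger "iperf-nightly"
        set action "iperf-run"
    next
end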
If anyone finds it useful, write in the comments and I will show how to set it up step by step. That is all for now, keep safe.