Yurisk
SuperUser
If you haven't used the open source iperf tool before, there is a lot of info on it (see https://iperf.fr); here I will only say that it lets us generate UDP/TCP traffic between 2 hosts at any bandwidth we desire. Load testing is a sure way to pinpoint "weak links" in the network, be it equipment or cabling. So iperf is software (Linux & Windows) you install on 2 hosts, and it works as client and server - one host sends TCP or UDP traffic, and the second one (well, iperf on it) receives it, measuring jitter, packet loss and bandwidth. We use iperf to pinpoint network problems, but also to prove (to a client or to ourselves) the capacity of a line - is the claimed throughput indeed as expected? It becomes a challenge when you don't have a Windows/Linux host at the remote site to install iperf on, or you cannot afford the downtime to disconnect your Fortigate from the line and connect a laptop/server instead. Luckily, starting with FortiOS 5.2.x, all Fortigate firewalls come with iperf built in! So, let's run some tests.
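For reference, this is roughly what a stand-alone iperf3 run between two Linux hosts looks like (the host name here is just a placeholder):

# on the receiving host: start the iperf3 server
iperf3 -s
# on the sending host: run a 10-second TCP test towards the server
iperf3 -c server.example.com
# the same, but with UDP at a fixed rate of 100 Mbit/sec
iperf3 -c server.example.com -u -b 100M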
The main command here is diagnose traffictest. Let's say we want to run a TCP test from our local Fortigate (named FGT-Perimeter) to a remote host (a Linux server named DarkStar) at 199.1.1.1 (the IP is sanitized), on port 5201 (the default), where the remote iperf listens. On our Fortigate the Internet-connected interface is port1.

FGT-Perimeter# diagnose traffictest port 5201
FGT-Perimeter# diagnose traffictest proto 0
FGT-Perimeter# diagnose traffictest client-intf port1


Note: proto 0 is for TCP; for UDP it would be proto 1.

To verify the configuration I'll use diagnose traffictest show:

FGT-Perimeter # diag traffictest show
server-intf: port1
client-intf: port1
port: 5201 proto: TCP

Looks good. Now let's actually run the test with diagnose traffictest run -c, specifying the remote host IP of 199.1.1.1:
diagnose traffictest run -c 199.1.1.1
The results on our Fortigate and, below, on the remote Linux server are:


FGT-Perimeter # diagnose traffictest run -c 199.1.1.1
Connecting to host 199.1.1.1, port 5201
[ 9] local 10.10.10.178 port 13251 connected to 199.1.1.1 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 9] 0.00-1.00 sec 114 MBytes 957 Mbits/sec 0 2.10 MBytes
[ 9] 1.00-2.00 sec 112 MBytes 945 Mbits/sec 0 3.01 MBytes
[ 9] 2.00-3.00 sec 112 MBytes 944 Mbits/sec 0 3.01 MBytes
[ 9] 3.00-4.00 sec 112 MBytes 943 Mbits/sec 0 3.01 MBytes
[ 9] 4.00-5.00 sec 111 MBytes 934 Mbits/sec 0 3.01 MBytes
[ 9] 5.00-6.00 sec 112 MBytes 944 Mbits/sec 0 3.01 MBytes
[ 9] 6.00-7.00 sec 112 MBytes 944 Mbits/sec 0 3.01 MBytes
[ 9] 7.00-8.00 sec 112 MBytes 944 Mbits/sec 0 3.01 MBytes
[ 9] 8.00-9.00 sec 112 MBytes 942 Mbits/sec 0 3.01 MBytes
[ 9] 9.00-10.00 sec 111 MBytes 935 Mbits/sec 0 3.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 9] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec 0 sender
[ 9] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec receiver


On the receiving end:

[root@darkstar3 ~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.10.10.178, port 22082
[ 5] local 199.1.1.1 port 5201 connected to 10.10.10.178 port 13782
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 108 MBytes 904 Mbits/sec
[ 5] 1.00-2.01 sec 112 MBytes 937 Mbits/sec
[ 5] 2.01-3.00 sec 112 MBytes 946 Mbits/sec
[ 5] 3.00-4.00 sec 112 MBytes 942 Mbits/sec
[ 5] 4.00-5.00 sec 112 MBytes 941 Mbits/sec
[ 5] 5.00-6.00 sec 112 MBytes 941 Mbits/sec
[ 5] 6.00-7.00 sec 112 MBytes 941 Mbits/sec
[ 5] 7.00-8.00 sec 112 MBytes 941 Mbits/sec
[ 5] 8.00-9.00 sec 112 MBytes 941 Mbits/sec
[ 5] 9.00-10.00 sec 112 MBytes 941 Mbits/sec
[ 5] 10.00-10.06 sec 6.34 MBytes 938 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.06 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.06 sec 1.10 GBytes 938 Mbits/sec receiver



By default, iperf SENDS the data to the remote host, that is, in our case we tested UPLOAD for the Fortigate. To generate traffic in the opposite direction, use the -R option. Now the remote Linux host sends data to our Fortigate, and this way we test DOWNLOAD for the Fortigate:

FGT-Perimeter # diagnose traffictest run -R -c 199.1.1.1
Connecting to host 199.1.1.1, port 5201
Reverse mode, remote host 199.1.1.1 is sending
[ 9] local 10.10.10.178 port 18344 connected to 199.1.1.1 port 5201
[ ID] Interval Transfer Bandwidth
[ 9] 0.00-1.00 sec 112 MBytes 940 Mbits/sec
[ 9] 1.00-2.00 sec 111 MBytes 933 Mbits/sec
[ 9] 2.00-3.00 sec 111 MBytes 931 Mbits/sec
[ 9] 3.00-4.00 sec 111 MBytes 931 Mbits/sec
[ 9] 4.00-5.00 sec 111 MBytes 934 Mbits/sec
[ 9] 5.00-6.00 sec 111 MBytes 934 Mbits/sec
[ 9] 6.00-7.00 sec 111 MBytes 933 Mbits/sec
[ 9] 7.00-8.00 sec 111 MBytes 930 Mbits/sec
[ 9] 8.00-9.00 sec 111 MBytes 931 Mbits/sec
[ 9] 9.00-10.00 sec 111 MBytes 935 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 9] 0.00-10.00 sec 1.09 GBytes 935 Mbits/sec 0 sender
[ 9] 0.00-10.00 sec 1.09 GBytes 934 Mbits/sec receiver
iperf Done.



Hopefully, the bandwidth reached is symmetrical in both directions and in line with the link capacity.
There is a disadvantage to testing with TCP - the protocol is very smart and adaptive. When TCP detects packet loss inside the connection, it backs off and sends data more slowly to compensate. But if there is packet loss, we very much want to see it! UDP lets us do exactly that - you set the bandwidth you want to reach, and any packet loss incurred while trying to do so is reported. Let's run a 100 Mbit/sec test with UDP.

First, we have to tell the Fortigate we want UDP with diagnose traffictest proto 1:

FGT-Perimeter # diagnose traffictest proto 1
proto: UDP


All is set; let's run the test, specifying the desired bandwidth of 100 Mbit/sec with diagnose traffictest run -b 100M:

FGT-Perimeter # diagnose traffictest run -b 100M -c 199.1.1.1
Connecting to host 199.1.1.1, port 5201
[ 9] local 10.10.10.178 port 5935 connected to 199.1.1.1 port 5201
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 9] 0.00-1.00 sec 10.8 MBytes 90.6 Mbits/sec 1382
[ 9] 1.00-2.00 sec 11.9 MBytes 99.9 Mbits/sec 1525
[ 9] 2.00-3.00 sec 11.9 MBytes 100 Mbits/sec 1526
[ 9] 3.00-4.00 sec 11.9 MBytes 99.9 Mbits/sec 1525
[ 9] 4.00-5.00 sec 11.9 MBytes 100 Mbits/sec 1526
[ 9] 5.00-6.00 sec 11.9 MBytes 100 Mbits/sec 1526
[ 9] 6.00-7.00 sec 11.9 MBytes 100 Mbits/sec 1526
[ 9] 7.00-8.00 sec 11.9 MBytes 100 Mbits/sec 1526
[ 9] 8.00-9.00 sec 11.9 MBytes 100 Mbits/sec 1526
[ 9] 9.00-10.00 sec 11.9 MBytes 100 Mbits/sec 1526
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 9] 0.00-10.00 sec 118 MBytes 99.1 Mbits/sec 0.055 ms 5903/15057 (39%)
[ 9] Sent 15057 datagrams


The situation is much worse now. Ok, tiny confession - to show you actual packet loss, I introduced a fault into the same network, and that is what caused such bad performance; otherwise I would get the requested bandwidth with 0% packet loss, just as in the TCP test. In this test we can see the Lost/Total parameter, which was completely missing in the TCP test: 5903 of the 15057 datagrams sent (39%) were lost, i.e. only about 61% of the packets reached the remote Linux server.

A few remarks about the iperf version embedded in the Fortigate:

- The version used (in 5.x and 6.x firmware so far) is iperf 3.0.9. This means it will not interoperate with iperf2 and its subversions.
- The tool works as a CLIENT only, i.e. it does not accept the -s option. This means we can NOT run an iperf test between 2 Fortigates; one of the peers has to be a Linux/Windows server running iperf3 -s (a minimal server-side setup is sketched below). It does NOT mean we can test only one direction, though - the command accepts the -R option for reverse traffic.
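
On the server side, a minimal setup on the Linux peer could look like this (package manager and package name depend on your distribution, so treat it as a sketch):

# install iperf3 on the Linux peer (use apt/yum/dnf as appropriate for your distro)
yum install iperf3
# confirm it is iperf3, not iperf2 - only iperf3 will talk to the Fortigate
iperf3 --version
# listen on the default port 5201; -D detaches it into the background as a daemon
iperf3 -s -p 5201 -D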

Another cool feature we can combine with iperf on the Fortigate is Automation. Under Security Fabric -> Automation we can create a scheduled trigger with a CLI-script action that runs the iperf test on a schedule, and then look at the results on the remote server. If anyone finds it useful, write in the comments and I will show how to do it. That is all for now, keep safe.
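
To give a rough idea of what such an automation could look like in the CLI, here is a sketch of a scheduled stitch with a CLI-script action. The names and schedule are made up, the exact field names vary between FortiOS versions, and it assumes the traffictest port/proto/interface are already configured as shown above:

config system automation-trigger
    edit "iperf-weekly"
        set trigger-type scheduled
        set trigger-frequency weekly
        set trigger-weekday sunday
        set trigger-hour 3
    next
end
config system automation-action
    edit "iperf-run"
        set action-type cli-script
        set script "diagnose traffictest run -c 199.1.1.1"
        set accprofile "super_admin"
    next
end
config system automation-stitch
    edit "weekly-iperf-test"
        set trigger "iperf-weekly"
        set action "iperf-run"
    next
end

The results then show up on the remote iperf3 server's console, tagged with the source IP of the Fortigate that connected.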



6 Comments
SaulAnsb
New Contributor
Hi Yuri, nice article. I have to admit: I had no idea Fortigates had iPerf built-in since FortiOS 5.2! This looks very useful.

And I am interested in your Automation idea. Can you post details about it? Would it be possible to have several different Fortigates all run iperf tests against the same remote Server? (eg. a server hosted in Azure) Could it be a Windows or Linux server? Would there be a simple way to differentiate the results between each Fortigate? (by System/Host name? or only by IP) 

I like the idea of having all FTG devices do periodic iperf tests to a central server, maybe weekly, and then aggregate the results into a report showing which devices may be experiencing poor performance compared to last time, etc.

Thanks!
   --Saul
TeodFuic
New Contributor
Hello Yuri,

thanks for the good info on iPerf on the FortiGate. I didn't know that this was possible. We're definitely interested in getting some additional information on the automation part of these features.

Many thanks,

Teodor
aallani
New Contributor
Thanks Yuri for this post. Very useful for Fortigate customers!
perrosenlind
New Contributor
Great blog - I've always struggled a bit with the use of this feature. Now it's crystal clear! Thanks a lot!
aahmadzada
Staff

An addition to that thread.

There is a KB describing how to use it:

https://community.fortinet.com/t5/FortiGate/Technical-Tip-How-to-perform-bandwidth-tests/ta-p/197784

bam
Staff

Please mind that `diagnose traffictest` is designed to test connectivity between ports but does not provide reliable performance data (for multiple reasons, e.g. it won't use the NP hardware acceleration). For a proper throughput test, use an iperf client and server outside the FortiGate and set up a forwarding rule.
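
For example, a minimal sketch of such a forwarding rule for TCP port 5201 towards an internal iperf3 server (all IPs, interface and object names below are made-up examples) could be:

config firewall vip
    edit "vip-iperf3"
        set extintf "port1"
        set extip 203.0.113.10
        set mappedip "10.20.30.40"
        set portforward enable
        set protocol tcp
        set extport 5201
        set mappedport 5201
    next
end
config firewall policy
    edit 0
        set name "allow-iperf3"
        set srcintf "port1"
        set dstintf "port2"
        set srcaddr "all"
        set dstaddr "vip-iperf3"
        set action accept
        set schedule "always"
        set service "ALL"
    next
end

A UDP test would need a second VIP with set protocol udp; the external iperf3 client then simply targets 203.0.113.10.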