In the virtual server configuration, when the server type is set to TCP (set server-type tcp), TCP sessions are load balanced between the real servers.
- Configure the health check via CLI as follows, or via the GUI under Policy & Objects -> Health Check -> Create New:
# config firewall ldb-monitor
    edit "health-check"
        set type ping
        set interval 10
        set timeout 2
        set retry 3
    next
end
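Since this virtual server balances TCP sessions, a TCP-type health check can also be used so the monitor probes the actual service port instead of relying on ICMP. A possible variant is sketched below; the monitor name "health-check-tcp" and port 80 are illustrative assumptions:

# config firewall ldb-monitor
    edit "health-check-tcp"
        set type tcp
        set port 80
        set interval 10
        set timeout 2
        set retry 3
    next
end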
- Configure a virtual server via CLI as follows or via GUI under Policy & Objects -> Virtual Servers -> Create New:
Spoke1 # config firewall vip
    edit "tcp-server"
        set type server-load-balance
        set extip 10.109.20.85
        set extintf "wan1"
        set server-type tcp
        set monitor "health-check"
        set extport 80
        config realservers
            edit 1
                set ip 10.104.4.85
                set port 80
            next
            edit 2
                set ip 10.104.3.233
                set port 80
                set status standby
            next
        end
    next
end
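Note that with 'set status standby', the second real server only receives traffic when the active server fails the health check. If the intention is instead to distribute sessions across both servers (an assumption about the desired behavior), set its status to active:

Spoke1 # config firewall vip
    edit "tcp-server"
        config realservers
            edit 2
                set status active
            next
        end
    next
end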
- Configure a firewall policy via CLI as follows, or via the GUI under Policy & Objects -> Firewall Policy -> Create New:
Spoke1 # config firewall policy
    edit 8
        set name "vip"
        set srcintf "dmz"
        set dstintf "port1"
        set action accept
        set srcaddr "all"
        set dstaddr "tcp-server"
        set schedule "always"
        set service "ALL"
        set logtraffic all
    next
end
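The resulting objects can be reviewed from the CLI before testing; the commands below assume the example VIP name and policy ID used above:

Spoke1 # show firewall vip tcp-server
Spoke1 # show firewall policy 8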
How to get details of the real servers and perform basic troubleshooting using debugging commands:
1) The command # di firewall vip realserver list shows:
- IP of the virtual server.
- IP of the real server(s).
- Status of each real server (up or down, based on the configured health check).
- Total number of real servers.
- Number of alive real servers.
- Server type of the virtual server (HTTP, HTTPS, IMAPS, POP3S, SMTPS, SSL/TLS, TCP/UDP, and IP).
Spoke1 # di firewall vip realserver list
alloc=3
------------------------------
vf=0 name=tcp-server/2 class=4 type=0 10.109.251.56:(80-80), protocol=6
total=2 alive=1 power=1 ptr=9753543
        ip=10.104.4.85-10.104.4.85/80 adm_status=0 holddown_interval=300 max_connections=0 weight=1 option=01
        alive=1 total=1 enable=00000001 alive=00000001 power=1
        src_sz=0 id=0 status=up ks=0 us=0 events=3 bytes=656 rtt=0
        ip=10.104.3.233-10.104.3.233/80 adm_status=1 holddown_interval=300 max_connections=0 weight=1 option=01
        alive=0 total=1 enable=00000000 alive=00000000 power=0
        src_sz=0 id=0 status=down ks=0 us=0 events=3 bytes=184 rtt=0
2) The command # di firewall vip realserver healthcheck stats tracks statistics of the configured health check and also shows:
- Name of the VIP.
- IP of the real server(s).
- Mode (Active/Standby) of each real server.
- Mapped port of each real server.
Spoke1 # di firewall vip realserver healthcheck stats
vip: tcp-server
--------------------------
time since last status change: 425
num of successful checks since last status change: 75
num of failed checks since last status change: 26
num of times server up->down: 2
num of times server down->up: 4
num of times server failovers: 1
num of ping detects performed: 5584
num of failed ping detects: 3622
num of tcp detects performed: 0
num of failed tcp detects: 0
num of http detects performed: 0
num of failed http detects: 0
num of https detects performed: 0
num of failed https detects: 0
num of dns detects performed: 0
num of failed dns detects: 0

Real server status:
VIP=tcp-server
1: ip=10.104.4.85, port:80, mode:Active, health check status:UP
2: ip=10.104.3.233, port:80, mode:Standby, health check status:UP
--------------------------
3) To troubleshoot the health check of the real servers, use the following sniffers to check the flow of the health check traffic:

# di sniffer packet any "host <real server IP> and icmp" 4 0 l
# di sniffer packet any "host <real server IP> and icmp" 6 0 l
# di de dis
# di de reset
# get router info routing-table details <real server IP>
# di ip address list
# di firewall proute list
# di sys session filter clear
# di sys session filter dst <real server IP>
# di sys session filter proto 1
# di sys session list
# di de flow filter addr <real server IP>
# di de flow filter proto 1
# di de cons time enable
# di de en
# di de flow trace start 1000
Note:
If the output of the above sniffer and flow debugging commands does not help identify the root cause of the issue, collect the outputs and attach the logs to the TAC support ticket.
Related documents:
https://docs.fortinet.com/document/fortigate/6.2.12/cookbook/713497/virtual-server
https://docs.fortinet.com/document/fortigate/7.2.3/administration-guide/713497/virtual-server-load-b...
https://community.fortinet.com/t5/FortiGate/Technical-Tip-Virtual-servers-load-balancing-showing-wro...