Hi,
I found some extremely strange behavior in FortiOS 5.2.3 on our FGT 110C.
FG-GW # diagnose sys link-monitor status
Link Monitor: gwdetect-upg-1 Status: alive Create time: Sun Mar 29 01:17:53 2015
Source interface: wan1 (3)
Interval: 5, Timeout 5
Fail times: 0/5
Send times: 0
[...]
protocol: ping, state: alive
[...]
Link Monitor: gwdetect-upg-2 Status: alive Create time: Sun Mar 29 01:17:53 2015
Source interface: wan2 (10)
Interval: 5, Timeout 5
Fail times: 0/5
Send times: 0
[...]
protocol: ping, state: alive
[...]
This seems to be correct, but
FG-GW # diagnose sys link-monitor interface wan1
Interface(wan1): state(down, since Sun Mar 29 01:18:21 2015
), latency(8.96), jitters(18.33), session count(639), bandwidth(55977)
FG-GW # diagnose sys link-monitor interface wan2
Interface(wan2): state(down, since Sun Mar 29 01:18:22 2015
), latency(21.22), jitters(8.04), session count(81), bandwidth(30896)
shows the links as down. The same output is shown in the GUI under System/Monitor/Link-Monitor. Is this my mistake, or should I report a bug?
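For reference, the two monitors correspond to config roughly like this (a minimal sketch only; the server address 8.8.8.8 is a placeholder for the real ping servers, which are cut from the output above):
<code>
config system link-monitor
    edit "gwdetect-upg-1"
        set srcintf "wan1"
        set server "8.8.8.8"
        set protocol ping
        set interval 5
        set timeout 5
        set failtime 5
    next
    edit "gwdetect-upg-2"
        set srcintf "wan2"
        set server "8.8.8.8"
        set protocol ping
        set interval 5
        set timeout 5
        set failtime 5
    next
end
</code>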
It gets better. The behavior I'm seeing is that the display under System -> Monitor -> Link Monitor in the UI is always precisely the opposite of the output of "di sys link-mon stat" for the corresponding monitor. The CLI output is correct; the UI is always reversed and therefore wrong.
Hi Mario,
Nice to know it's solved for you.
In my case, my link monitors detect the failure, but they don't come back up when the probe is responding again.
I can also run "exec ping 8.8.8.8" and it works, but "diag sys link-monitor status" still shows the monitor as "dead".
I tried different firmware versions, including 5.2.6, but the result is the same.
I'll keep digging. Thanks!!
Luiz Alberto Camilo NCT São Paulo www.nct.com.br NSE-5 Expert
Are you sure that your ping server (shown as the peer in "diag sys link-monitor status") is responding correctly?
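Also keep in mind that a plain "exec ping" is sourced from whatever interface the route lookup picks, which is not necessarily the monitor's srcintf, so a working ping does not prove the monitor's probe path is fine. A quick way to test from a specific source (192.0.2.10 below is just a placeholder for the wan1 address):
<code>
execute ping-options source 192.0.2.10
execute ping 8.8.8.8
execute ping-options reset
</code>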
Check this out:
<code>
MASTER # exec router restart
MASTER # diag sys link-monitor interface wan1
Interface(wan1): state(down, since Fri Feb 12 03:15:02 2016 ), latency(0.18), jitters(0.25), session count(4), bandwidth(647)
MASTER # diag sys link-monitor interface wan1
Interface(wan1): state(up, since Fri Feb 12 03:15:02 2016 ), latency(-1.00), jitters(-1.00), session count(0), bandwidth(639)
MASTER # diag sys link-monitor interface wan1
Interface(wan1): state(up, since Fri Feb 12 03:15:02 2016 ), latency(-1.00), jitters(-1.00), session count(1), bandwidth(639)
</code>
In the test above, the destination 8.8.8.8 that I was testing was reachable. Using "exec ping 8.8.8.8" I could ping it, but the link monitor was still reporting the state as down. After "exec router restart" the status changed to up... This is very weird... I just did this test.
I'm also using:
diag debug application link-monitor -1
diag test application lnkmtd 3
diag sys link-monitor status
So far, it looks like it's not working correctly.
Luiz Alberto Camilo NCT São Paulo www.nct.com.br NSE-5 Expert
Guys,
Running an FG1000D cluster on 5.2.6, my link-monitor seems to report correctly, but it doesn't trigger a failover to the slave. What might I be missing? I ping my gateway and remove the VLAN on a switch in between, which triggers a state:down in link-monitor.
My HA config looks like this:
config system ha
set group-name "Customer"
set mode a-p
set password ENC ****
set hbdev "port32" 50
set session-pickup enable
set session-pickup-delay enable
set ha-mgmt-status enable
set ha-mgmt-interface "mgmt1"
set override enable
set priority 196
set monitor "port1" "port9"
set pingserver-monitor-interface "IPVPN"
set pingserver-failover-threshold 5
set pingserver-flip-timeout 10
end
MASTER (VPN) # diag sys link-monitor interface IPVPN
Interface(IPVPN): state(down, since Tue Feb 23 15:53:36 2016
), bandwidth(28), session count(0).
MASTER (VPN) # dia sys ha status
HA information
Statistics
traffic.local = s:0 p:46298 b:13522060
traffic.total = s:0 p:46313 b:13522416
activity.fdb = c:0 q:0
Model=1000, Mode=2 Group=0 Debug=0
nvcluster=1, ses_pickup=1, delay=1
HA group member information: is_manage_master=1.
FGT1KD3915800307, 0. Master:196 MASTER
FGT1KD3915800188, 1. Slave:128 SLAVE vcluster 1, state=work, master_ip=169.254.0.1, master_id=0:
FGT1KD3915800307, 0. Master:196 MASTER (prio=0, rev=0)
FGT1KD3915800188, 1. Slave:128 SLAVE (prio=1, rev=1)
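One thing I'm not sure about (a guess from reading the docs, not verified on this cluster): as I understand it, failover is only triggered when the combined ha-priority of the failed link monitors reaches pingserver-failover-threshold, and link-monitor ha-priority defaults to 1, so with the threshold at 5 a single failed monitor would never trip it. A rough sketch of a link-monitor entry that would match that threshold; the entry name and server address are placeholders:
<code>
config system link-monitor
    edit "IPVPN-gw"
        set srcintf "IPVPN"
        set server "192.0.2.1"
        set protocol ping
        set ha-priority 5
    next
end
</code>
Lowering pingserver-failover-threshold instead should have the same effect, if that reading is right.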
-- Bjørn Tore