Fortigate 3100D: Different Classification of the Same ICMP Traffic
We have a FortiGate 3100D running v6.2.3 and a CheckMK monitoring server. The monitoring server always sends the same ping requests to a few destinations that cannot be monitored via SNMP.
Sometimes the monitoring result shows packet loss for the monitored host. For a second opinion we monitored the same destination with another ping tool, and that comparison shows the destination host is reachable.
A look at the FortiGate sessions shows that the packet losses at CheckMK appear when the FortiGate classifies the application as "PING" and not as "...Ping" (as shown in the picture). Both flows are "accepted", but the PING session never contains any transmitted packets and stays at "0B / 0B" in the statistics.
My questions are now:
1. Why can the same traffic be recognised sometimes as "unknown service" and sometimes as "network.service"?
2. Does "Accept" in the traffic log (shown in the picture) always mean that the traffic is allowed?
3. Could there be a connection between the classification as unknown PING and the fact that no packets are transmitted?
Our fear is that wrong classification also affects other traffic, e.g. data-transfer protocols.
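For reference, how the FortiGate classified a given ICMP session can also be checked directly in the CLI session table. The destination IP below is a placeholder, and the exact syntax may differ slightly between firmware versions:

    # Show only ICMP sessions (protocol 1) to the monitored host
    diagnose sys session filter proto 1
    diagnose sys session filter dst 192.0.2.10
    diagnose sys session list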
If application control cannot reliably detect the application, it simply falls back to the object name of the service you configured: "PING" in your case, for ICMP echo.
config firewall service custom
    edit "PING"
        set category "Network Services"
        set protocol ICMP
        set icmptype 8
        unset icmpcode
    next
end
I also have various applications in our environment that generate event log entries with a packet count of 0. But in my case, even with a 0 packet count in the log files, the ping reply is still successful.
The application runs a traceroute, starting with TTL=1 and counting upwards, but it sends only one ping packet per TTL, not several.
Maybe your application is doing something similar?
Have you tried configuring a specific policy for the problematic connection and setting up packet sniffers on the incoming and outgoing interfaces? Also don't forget to disable NP acceleration (hardware offload) on that policy, otherwise offloaded packets never reach the sniffer.
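A sketch of both steps on the FortiOS CLI; the policy ID, interface name, and host IP are placeholders, so please verify the exact syntax on your firmware:

    # Disable hardware (NP) offload on the policy so all packets hit the CPU
    config firewall policy
        edit 42
            set auto-asic-offload disable
        next
    end

    # Sniff ICMP to/from the monitored host on port1 (verbosity level 4)
    diagnose sniffer packet port1 'icmp and host 192.0.2.10' 4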
After some time we found that the IDS service was hanging and pinning one CPU core at a constant 99% load. Our theory is that whenever packets were forwarded to that stuck core, they were not forwarded further. After we restarted the service, the core load returned to normal, no packets were lost, and no packets were misclassified.
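In case someone hits the same symptom: on FortiOS the per-core load can be checked and the IPS engine restarted from the CLI. These commands are from memory (verify on your version), and the restart briefly interrupts IPS inspection:

    # Show processes and CPU usage; look for an ipsengine process stuck near 99%
    diagnose sys top

    # Option 99 restarts all IPS engine processes
    diagnose test application ipsmonitor 99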