rajamanickam
Contributor

FortiAnalyzer analytics issue

Hi, we have a FortiAnalyzer in our environment and forward all logs from FortiGate to it. However, on the FortiAnalyzer, no more than 3 days of logs are indexed in the database, and archive logs are being stored for only 16 days, even though the current storage policy is 30 days for analytic logs and 90 days for archive logs. We also forward the FortiAnalyzer logs to a pair of syslog servers.

Analytic storage is around 600GB occupied (90% of allocated), but only 3 days of analytic logs are actually available, which means the data occupying the analytic storage is not getting indexed properly. After multiple rounds of troubleshooting, we were advised to upgrade the RAM from the current 16GB to 32GB. We currently receive logs at a rate of 7000 logs/sec, and as per Fortinet documentation 16GB of RAM should support up to 20000 logs/sec. Has anyone faced a similar issue, or any idea what the cause could be? We are running FortiAnalyzer 7.0.3 as a VM on an ESXi hypervisor with 1TB of storage allocated between analytic and archive logs.
Debbie_FTNT
Staff

Hey rajamanickam,

The numbers (600GB/3 days) seem very roughly correct to me based on your 7000 logs/s figure.

To elaborate:
A rough calculation for FortiAnalyzer disk space requirements is to assume each log message requires 100 bytes.

7000 logs/s means 700,000 bytes per second, so roughly 700KB/s.
This translates to very roughly 60GB per day (700KB*3600*24=60,480,000KB) in terms of raw logs.
The SQL database requires about 3 times as much space as raw logs, meaning 180GB per day, which in turn adds up to 540GB over three days, not too far off from 600GB.

Please do note: All of this is a very rough calculation, and might not be entirely accurate; the 100 byte per log figure is just a guideline, and actual log sizes (and thus disk space requirements) can vary.
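
If you want to cross-check these estimates against what your unit actually sees, the FortiAnalyzer CLI can report both figures; a rough sketch (output format varies somewhat between versions):

diagnose fortilogd lograte
diagnose log device

The first command shows the incoming log rate as fortilogd sees it (logs/sec), the second the disk space used per logging device for Analytics and Archive.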

 

From experience, 1TB is rather on the low side for the described logging volume; if you compare to hardware models designed to handle about the same volume, you will find models with 8TB or above.

There may very well be an issue with log indexing on your FortiAnalyzer (I don't know what troubleshooting was done or what issues were noted), but 1TB is almost certainly too low for the sheer amount of logs your FortiAnalyzer receives.

+++ Divide by Cucumber Error. Please Reinstall Universe and Reboot +++
rajamanickam

Hi Debbie,

Thank you for your reply.

Sorry, the 600GB is not for 3 days. The average data ingestion rate we can see on the database is around 35GB/day. There is nearly 500GB of analytic data that isn't getting indexed; only 3 days of data are indexed, so in all the reporting graphs we can only see the last 3 days of data. We rebuilt the database, rebuilt the database of the ADOM, and upgraded from 7.0.3 to 7.0.4, and the issue is still the same. As a next step we are planning to upgrade the RAM from 16GB to 32GB to see whether that fixes it. We can also see the logs below, which indicate some performance issue:

TCP: port1: Driver has suspect GRO implementation, TCP performance may be compromised.
capability: warning: `clickhouse-serv' uses 32-bit capabilities (legacy support in use)

If even the RAM upgrade doesn't help, then with no other option left we will have to rebuild the analyzer. We still haven't been able to work out what could have triggered this issue.
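
For reference, the rebuilds mentioned above were run from the CLI along these lines (a sketch from memory; the ADOM name is just a placeholder):

execute sql-local rebuild-adom <ADOM-name>
execute sql-local rebuild-db

and we watched the rebuild progress with:

diagnose sql status rebuild-db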


Regards

Raja

Cajuntank
Contributor II

I didn't have your issue, but I had a somewhat similar one where, all of a sudden, I no longer had any log insertions at all. I would receive logs, but after about a day or so they would not be written to the database at all. TAC had me run a debug on sqlplugind, which showed a Redis process being stopped for some reason. After a reboot it would work fine for a day or so, then the process would stop again. I updated to 7.0.4 and all was well from then on.
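
If it helps, the debug TAC had me run was along these lines (going from memory, so treat this as a sketch; the debug level may differ):

diagnose debug application sqlplugind 8
diagnose debug enable

(and diagnose debug disable afterwards to turn it back off). That output is what showed the Redis process stopping on my side.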

Disclaimer... I run mine on a Hyper-V VM and have always had my memory set to 32GB. Also, you did not mention what storage technology you have. I would run into oddities when my original storage was spindle SATA-based, but they went away once I migrated to SSD storage.

Edit to say, I guess I glazed over the fact that you only have 1TB of storage. Like @Debbie_FTNT mentioned, that is surely too low for your environment. I run about 45GB per day at about 4500 logs/sec and have 4TB carved out for the root ADOM.

rajamanickam

Hi @Cajuntank, for us new data is getting indexed; it is the nearly 500GB of older data that is neither getting indexed nor deleted. As I replied to Debbie's comment earlier, it's not 600GB over 3 days, it's closer to 30-35GB/day.

 

Is there a better way to optimize the logs? Yesterday I also found that some SD-WAN overlay tunnel IPs were pinging across each site and being dropped by local-in policies. We enabled ping on these overlay interfaces, which helped to some extent in reducing the number of logs/sec; see the sketch below.
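
For anyone finding this later, the change on the FortiGate side was essentially the following (the interface name is just a placeholder for the SD-WAN overlay tunnel interface):

config system interface
    edit "overlay-tunnel-1"
        set allowaccess ping
    next
end

With ping allowed on the overlay interfaces, the tunnel health-check pings are answered instead of being dropped and logged by the local-in policy.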

 

Regards

Raja

Cajuntank

You never shared what your storage medium is. I made the comment about mine originally being SATA-based to speak to the issues I ran into with high disk I/O causing things to time out and delay, and having to rebuild the database several times myself in the past (this was long before that log insertion issue). Might be something to be mindful of as well.
