Hello guys, please help me understand this.
This is a Supervisor all-in-one deployment with ClickHouse DB (FortiSIEM 7.1.x running on Rocky Linux).
As you can see, my /dev/sda disk now has 75 GB (at first it was 25 GB).
The /dev/mapper/rl-root volume on /dev/sda is quickly eating up the storage I assign to it. Why is this happening? Logs are supposed to be forwarded to the ClickHouse DB on the /dev/sde partition, so that should be the most-used storage, yet /opt is running out of space.
This is the second time I have resized the /dev/mapper/rl-root volume.
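To confirm which mount points actually sit on rl-root versus the ClickHouse disk, standard commands like these can be used (a generic sketch, nothing FortiSIEM-specific assumed):
lsblk -f                              # every block device, its filesystem and mount point
findmnt -T /opt                       # which device actually backs /opt
findmnt -T /data-clickhouse-hot-1     # which device backs the ClickHouse hot tier
df -h                                 # per-filesystem usage, so rl-root growth shows up separately from /dev/sde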
Thank you!
Hi @AEK, thank you for your reply.
[root@XXX ~]# du -sh /opt/* 2>/dev/null | sort -n
0 /opt/db-backup
0 /opt/glassfish
0 /opt/Java
0 /opt/openjdk
0 /opt/vmware
1.1G /opt/glassfish5
1.2G /opt/jsreport
13M /opt/zookeeper.tar.gz
31M /opt/rpm
40M /opt/fortiinsight-ai
40M /opt/zookeeper
48M /opt/exporter
63M /opt/archive
64G /opt/phoenix
89M /opt/selenium
399M /opt/clickhouse
417M /opt/node-rest-service
727M /opt/charting
[root@XXX ~]# du -sh /* 2>/dev/null | sort -n
0 /bin
0 /lib
0 /lib64
0 /media
0 /mnt
0 /pbin
0 /proc
0 /query
0 /querywkr
0 /sbin
0 /srv
0 /sys
2.4G /run
3.3G /fsmopt.tar.gz
4.0K /platform-info
4.0K /svn.tar.gz
6.1G /usr
8.7M /root
12K /redis.log
24K /home
27M /etc
32G /var
53G /data-clickhouse-hot-1
68G /opt
165M /data
253M /tmp
332K /svn
432K /dev
648M /boot
726M /cmdb
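Side note: sort -n does not order human-readable sizes correctly, which is why 64G appears among the megabyte entries above; GNU sort has a dedicated flag for this:
du -sh /opt/* 2>/dev/null | sort -h   # -h sorts human-readable sizes (K, M, G) in the right order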
But I really want to understand why this is happening. If /dev/mapper/rl-root is linked to /opt and eats storage this quickly, it seems I'm missing something about the initial sizing.
UPDATE: I found the following inside /opt/phoenix/cache. Can these be safely removed?
587M /opt/phoenix/cache/_MEIfjDpil
587M /opt/phoenix/cache/_MEIFsVx3B
587M /opt/phoenix/cache/_MEIg0X2zL
587M /opt/phoenix/cache/_MEIgHX101
587M /opt/phoenix/cache/_MEIgjsjuq
587M /opt/phoenix/cache/_MEIIuHILc
587M /opt/phoenix/cache/_MEIj6QqhT
587M /opt/phoenix/cache/_MEIJRt7Wd
587M /opt/phoenix/cache/_MEIjxxg3D
587M /opt/phoenix/cache/_MEILNdpfo
587M /opt/phoenix/cache/_MEIMDCga2
587M /opt/phoenix/cache/_MEIMFFd1S
587M /opt/phoenix/cache/_MEImhGWaQ
587M /opt/phoenix/cache/_MEIn4b2fh
587M /opt/phoenix/cache/_MEINe9eRD
587M /opt/phoenix/cache/_MEInpqHtW
587M /opt/phoenix/cache/_MEIoM0HCw
587M /opt/phoenix/cache/_MEIp2q0dp
... and many more like these ...
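From what I can find, directories named _MEIxxxxxx are the temporary extraction folders that PyInstaller-packaged executables create at startup; they are removed on a clean exit, so a pile-up like this suggests processes were killed before they could clean up. Assuming that is what these are, something like the following shows whether any of them are still in use before removing the stale ones (a sketch, verify first):
lsof +D /opt/phoenix/cache            # is any running process still using files in these folders?
find /opt/phoenix/cache -maxdepth 1 -type d -name '_MEI*' -mtime +1 -exec rm -rf {} +   # drop folders older than a day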
Thank you!
Hi @gwaihir
This does not look like normal behavior. My recommendation would be to raise a TAC case for this. The following link provides some useful information about the output required by TAC:
I hope it helps.
Hello @Richie_C, thank you for your reply.
I deleted all those files from ../cache/ and everything is going well.
Regards.
Hi @gwaihir,
I have the same problem: I get an /archive fullness warning on the Supervisor. I can add a disk, but it might happen again, and this is a new environment. How did you clear the cache?
Hi @adem_netsys, I deleted all files inside /opt/phoenix/cache with the prefix _M*, and I also found that some kernel crash dumps were saved in the /var/crash directory.
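Roughly, the commands look like this (a sketch; double-check the paths and keep a backup in case TAC wants the crash dumps):
du -sh /opt/phoenix/cache /var/crash   # confirm where the space is going
rm -rf /opt/phoenix/cache/_MEI*        # stale extraction folders
rm -rf /var/crash/*                    # old kernel crash dumps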
I think the problem is solved, because there are no more disk space alerts.
Regards.
Hi @gwaihir
I have the same issue with my FortiSIEM. The /dev/mapper/rl-root volume is up to 71% usage.
How can I purge it? With which command? Please, I'm waiting for your feedback.
Thank you.
Hello @Hugues1 and @adem_netsys, greetings.
So far I keep deleting those _M* files, and FortiSIEM works perfectly. I don't know if this will cause problems in the future.
Also, I noticed that this version of FortiSIEM was causing many crashes that were stored in /var/crash and consuming a huge amount of space, so I deleted those as well (after taking a backup).
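Since the files keep coming back, a small daily cleanup can keep the cache under control until the crashes themselves are fixed (a sketch only; the script name and one-day retention are my own assumptions, adjust to your environment):
#!/bin/bash
# /etc/cron.daily/phoenix-cache-cleanup - remove stale _MEI* extraction folders older than one day
find /opt/phoenix/cache -maxdepth 1 -type d -name '_MEI*' -mtime +1 -exec rm -rf {} +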
I hope this helps.
Regards.