Dear community, We are implementing new NVMe disks in our cluster and are
currently discussing the best redundancy method. As these disks are not
meant to be managed by a hardware controller, we have multiple disks
(four, in fact) that can only be mo...
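One common software-only approach for four NVMe disks without a hardware controller is Linux mdadm. A RAID 10 sketch follows; the device names and the config path are assumptions, and the commands are destructive and need root, so treat this as illustration only:

```shell
# Sketch only: wipes the listed devices; names like /dev/nvme0n1 are assumptions.
# RAID 10 over four NVMe disks: mirrored pairs, striped together
# (half the raw capacity, survives one disk per mirror pair).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
mkfs.xfs /dev/md0                         # any filesystem works here
mdadm --detail --scan >> /etc/mdadm.conf  # persist the array; path varies by distro
```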
Hi everyone, I'd just like to exchange thoughts or practices about
baseline-focused rules on FortiSIEM: at the moment, about 80% of our
Incidents are "Sudden increase in ...", as we narrowed down all the
other rules to not trigger on False Positiv...
Hello everyone, We are continuously experiencing the incident "High
performance monitoring delay from Collector or Worker SIEM Supervisor"
on our FortiSIEM platform. It is triggered as soon as the Event
Type "PH_DEV_MON_PERFMON_ALL_DEVICE_DELAY...
Dear Community support, I've had a custom avatar image for a while (I
think I set it two or three years ago) and tried to update it recently.
But my finger was too fast, so I got one of the "community avatars" now.
Now, my question is: How can I set a cust...
Hello all, We are in discussion with a customer that would like to host
FortiSIEM on-prem but is considering moving to our multi-tenant cloud
environment some day in the future. As we are just setting up the SIEM,
I would like to build the environment in a way ...
Hi @AEH, If you have three Keepers, a majority decision (one is down,
two remaining) will enable writing to the tables and therefore, yes, the
logs will be received and stored in this scenario. I was talking about
the case where you only use the Supervis...
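The majority rule described above is just integer arithmetic: a quorum is the smallest number of nodes larger than half the ensemble. A minimal sketch (the three-node count is an assumption matching the scenario discussed):

```shell
# Majority quorum for a Keeper ensemble: writes stay available
# while more than half of the keeper nodes are up.
n=3                      # total keeper nodes (assumption: three-node ensemble)
quorum=$(( n / 2 + 1 ))  # smallest strict majority
echo "keepers=$n quorum=$quorum"   # → keepers=3 quorum=2
```

With three Keepers the quorum is two, so losing one node keeps the cluster writable; losing two does not.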
Hi @AEH, Yes, having redundancy for Keepers can make sense. However,
best practice is to have separate machines (see also:
https://clickhouse.com/docs/architecture/replication). So, if you go for
redundancy, you should also go the full way of keeping...
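For orientation, a separate-machine Keeper ensemble like the one the linked ClickHouse docs describe is declared in each keeper's config; a minimal sketch (hostnames are assumptions, ports are the ClickHouse defaults):

```xml
<keeper_server>
    <tcp_port>9181</tcp_port>
    <server_id>1</server_id> <!-- unique per keeper node -->
    <raft_configuration>
        <server><id>1</id><hostname>keeper-1.example</hostname><port>9234</port></server>
        <server><id>2</id><hostname>keeper-2.example</hostname><port>9234</port></server>
        <server><id>3</id><hostname>keeper-3.example</hostname><port>9234</port></server>
    </raft_configuration>
</keeper_server>
```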
Hi @yans, In my understanding:
1) Keepers should be independent from Replicas.
2) If you have a multi-node deployment, you usually also have
Collectors. Collectors cache all the events they cannot send to the
cluster, so a reboot of a Supervisor would no...
Hi @Pantashaa, The reporting IP should be mapped automatically to the
source of this event. When running the test locally, it's always the
local address, so I assume it should insert the correct IP when the
parser is running in production. You should...
Hi @AEH, This setup should be able to handle 10k and more (assuming you
provision enough CPU and RAM for the nodes). Talking about Keepers: I
don't really see a benefit in configuring all three nodes as Keepers
(Supervisor goes down: you cannot use th...