Hi,
I want to achieve a redundant setup, using ClickHouse as the storage backend.
I reinstalled the Supervisor with a 5th disk to test this, but the 5th disk is not being used. When I choose ClickHouse during the install process, it again asks for a disk / partition, which has not been generated. The 5th disk is available, but not partitioned.
I do not want my Supervisor to be part of the ClickHouse process, but it seems the Supervisor has to be part of the ClickHouse cluster setup, at least as a Keeper (?). How, then, should I partition the disk? LVM, and then ext4 or xfs? I could not find any documentation on this.
Maybe someone can help.
Best,
Ronny
Hi Ronny,
So for the initial setup, you need to provide the 5th disk for the ClickHouse data. This is because on first install the Supervisor is treated as a standalone deployment, making it part of the Keeper cluster and a data node as well.
I suggest you provide a very small disk for this; once you have the actual Keeper and data Workers added, you can remove the Supervisor from the ClickHouse cluster.
But it still requires the data disk: the Supervisor uses the ClickHouse disk to generate reports etc.
Regards,
Goutham
Thank you so much for helping me understand the technology.
However, working with a warm and a hot tier ended with the ClickHouse database not coming up. I used two partitions, each with 4 TB of space (a tiny storage model, which allows building a bigger setup now instead of increasing and / or moving data later...). For this setup, do I need a warm tier? Or is it not even necessary?
Thanks a lot,
Ronny
I tried to re-set up the whole Supervisor, just created a single 1 TB storage, and received the error again:
Storage provision error: ClickHouse Restart Failure
I added the 5th disk, then created a GPT label, created a primary partition, formatted it with ext4, created the folder /data-clickhouse-hot-1, and mounted the partition.
During the setup process, I added the hot tier using /dev/sde1 and /data-clickhouse-hot-1, and this results in the error above.
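For reference, the manual provisioning steps described above correspond roughly to the following sketch (/dev/sde is the example device from this thread; as the staff reply later in the thread explains, FortiSIEM actually expects the raw, unpartitioned disk, so these steps turn out to be unnecessary):

```shell
DISK=/dev/sde   # example device from this thread; adjust to your environment

parted -s "$DISK" mklabel gpt                    # new GPT partition table
parted -s "$DISK" mkpart primary ext4 1MiB 100%  # one partition spanning the disk
mkfs.ext4 "${DISK}1"                             # format the partition with ext4
mkdir -p /data-clickhouse-hot-1                  # mount point for the hot tier
mount "${DISK}1" /data-clickhouse-hot-1          # mount it
```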
Do I have to use LVM?
I am confused.
There are some errors in the logs.
Next try: creating a 1 TB LVM volume, but still the same error in the logs, and the setup crashes:
pvcreate /dev/sde1                             # initialize the partition as a physical volume
vgcreate clickhouse /dev/sde1                  # create volume group "clickhouse"
lvcreate --name data -l 100%FREE clickhouse    # one logical volume using all free space
mkfs.ext4 /dev/clickhouse/data                 # format the logical volume with ext4
Then, during init, I chose ClickHouse on the Supervisor and added /dev/mapper/clickhouse-data as the one and only hot tier. It crashes again with the same error message as before.
Bug?
I do not think this kind of error report is "normal"...
Hi Ronny,
You don't have to create the partition.
This is done by FSM itself.
Please provide only the name of the disk that is intended for the ClickHouse data:
/dev/sde instead of /dev/sde1
During the initial setup, provide this disk name and then Test and Save.
/opt/phoenix/log/phoenix.log should contain the error message if there are any errors.
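Before re-running the setup, it may help to confirm the disk really has no child partitions and to watch the log while Test and Save runs. A minimal sketch (/dev/sde is the example device from this thread):

```shell
# Should list only the bare device, with no sde1 etc. underneath it:
lsblk /dev/sde

# Follow the FortiSIEM setup log for errors while Test and Save runs:
tail -f /opt/phoenix/log/phoenix.log
```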
You can format this disk and reuse it
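To return the disk to a blank state so the installer can take the whole device, one possible sketch (again using /dev/sde from this thread; if you created LVM volumes on it, remove them first with lvremove / vgremove / pvremove, and double-check the device name before wiping):

```shell
umount /data-clickhouse-hot-1 2>/dev/null || true  # unmount if still mounted
wipefs -a /dev/sde1 2>/dev/null || true            # clear the old filesystem signature, if present
wipefs -a /dev/sde                                 # clear the partition table itself
```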
Solution: do not create a partition yourself. Just give the path to the raw disk, e.g. /dev/sde, and the rest of the installation will be done by FortiSIEM itself.
Yes, this is expected.