Dear FortiSIEM community,
I would like to ask about the procedure for creating a ClickHouse cluster of 1 Supervisor and 2 Worker nodes. I understand that if I want to use replica shards, it is better to have several Keeper nodes (3 at minimum), since the Keeper is important for proper management of replicas: if you lose your single Keeper in a cluster with replicas, for example during a Supervisor reboot, you lose log ingestion for as long as the Keeper is down.

When I try to deploy such a cluster, I am able to add only one other Keeper node; if I try to add a third one, I get the error "You can add at most 1 new ClickHouse Keeper." So I deployed the cluster with two Keepers and then added the third one. As you'd expect, one of the Worker nodes ended up with its tables in read-only mode. This cannot be the proper way of forming a cluster with three Keeper nodes, can it?
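For reference, the read-only state shows up in system.replicas on the affected node; a minimal check, assuming clickhouse-client is run locally on that Worker, looks like this:

    clickhouse-client --query "SELECT database, table, is_readonly, is_session_expired FROM system.replicas WHERE is_readonly = 1"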
Thanks,
Jan
Hi @yans,
In my understanding:
1) Keepers should be independent from Replicas
2) If you have a multi-node deployment, you usually also have Collectors. Collectors cache all the events they cannot send to the cluster, so a reboot of a Supervisor would not drop any events.
Yes, you can add only one Keeper at a time. I would expect adding the Keepers (forming the root of the cluster) to happen before you define the tables/replicas. So I would start by adding the Keepers one by one (in my deployment scenario, it'd be a 3-node HA deployment of the Supervisors, which already form the 3 Keepers) and only after that configure the rest of the cluster.
You might have a look at the KBs here and the official docs; there are a lot of commands for fighting read-only databases if the cluster gets stuck in that state.
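For illustration, each Keeper can be checked for health and quorum membership with the standard four-letter-word commands (assuming the default ClickHouse Keeper client port 9181; your FortiSIEM deployment may use a different port):

    echo ruok | nc <keeper_host> 9181    # a healthy Keeper answers "imok"
    echo mntr | nc <keeper_host> 9181 | grep -E 'zk_server_state|zk_synced_followers'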
Best,
Christian
Hi @Secusaurus,
I appreciate the time you took to reply to me.
1) Keepers should be independent from Replicas
- Meaning you should never have a keeper on the same worker node where you have replicas?
2) If you have a multi-node deployment, you usually also have Collectors. Collectors cache all the events they cannot send to the cluster, so a reboot of a Supervisor would not drop any events.
- I don't think this applies in this case, because the Collector is not part of the ClickHouse cluster and does not know that the tables are read-only because of a broken quorum, does it? What you say would apply if connectivity to the Workers were unavailable; then the Collector would cache the logs. Or does the Collector stop sending logs if the Workers are broken at the ClickHouse level? There would need to be some application-level check on the Collector's side to detect that; I think it is only concerned with network connectivity.
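To illustrate the distinction: the network path to a Worker can look perfectly fine while ClickHouse on that node already reports its replicated tables as read-only (worker1 and port 9000 are just placeholders/defaults here):

    nc -zv worker1 9000    # TCP connectivity to ClickHouse is fine
    clickhouse-client --host worker1 --query "SELECT table, is_readonly FROM system.replicas WHERE is_readonly = 1"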
Yes, you can add only one Keeper at a time. I would expect adding the Keepers (forming the root of the cluster) to happen before you define the tables/replicas. So I would start by adding the Keepers one by one (in my deployment scenario, it'd be a 3-node HA deployment of the Supervisors, which already form the 3 Keepers) and only after that configure the rest of the cluster.
- You can't really do that, because there is always at least one shard defined in the ClickHouse cluster deployment, so from the GUI you cannot deploy just the Keeper nodes individually. This is what happens if you try to remove the shard in order to deploy only Keepers:
You might have a look at the KBs here and the official docs; there are a lot of commands for fighting read-only databases if the cluster gets stuck in that state.
- This is not a problem; it is fixable by running this on the Supervisor:
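Roughly along these lines (fsiem.events_replicated is just a placeholder, the actual FortiSIEM table names differ; run from the Supervisor against the node whose replica is read-only):

    # if the replica went read-only because its Keeper session was lost:
    clickhouse-client --host <affected_worker> --query "SYSTEM RESTART REPLICA fsiem.events_replicated"
    # if the replica's metadata is missing from the new Keeper ensemble:
    clickhouse-client --host <affected_worker> --query "SYSTEM RESTORE REPLICA fsiem.events_replicated"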
This made me question the whole process, because if this is the right way of deploying three keepers, then this error is inevitable.
Many thanks,
Jan