From FortiSOAR™ 6.0.0 onwards, multiple disks are supported for the FortiSOAR™ installation. An advantage of using multiple disks is that if your FortiSOAR™ system crashes, you can detach the disks that contain data and recover the data. The procedure for recovering data is described in the Recovering Data section later in this article. The procedure for extending existing disks is documented in the Deployment Troubleshooting chapter of the FortiSOAR™ "Deployment Guide."
You can add three more disks to your Virtual Machine (VM) and create separate Logical Volume Management (LVM) partitions for PostgreSQL, Elasticsearch, and FortiSOAR™ RPM data.
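Before creating the LVM layout, verify that the operating system detects the newly attached disks. The following check assumes the disks enumerate as /dev/sdb, /dev/sdc, and /dev/sdd, as in the example below; device names can differ depending on your hypervisor and disk order:
# List block devices and confirm the three new, unmounted disks are present
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT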
For example, you have added the following new disks:
/dev/sdb: A thin-provisioned disk of 500GB is recommended for PostgreSQL data.
/dev/sdc: A thin-provisioned disk of 150GB is recommended for Elasticsearch data.
/dev/sdd: A thin-provisioned disk of 20GB is recommended for FortiSOAR™ RPM data.
To partition /dev/sdb, which is the disk for PostgreSQL data, run the following commands as the root user:
pvcreate /dev/sdb
vgcreate vgdata /dev/sdb
mkdir -p /var/lib/pgsql
lvcreate -l 100%VG -n relations vgdata
mkfs.xfs /dev/mapper/vgdata-relations
mount /dev/mapper/vgdata-relations /var/lib/pgsql
echo "/dev/mapper/vgdata-relations /var/lib/pgsql xfs defaults 0 0"
>> /etc/fstab
To partition /dev/sdc, which is the disk for Elasticsearch data, run the following commands as the root user:
pvcreate /dev/sdc
vgcreate vgsearch /dev/sdc
mkdir -p /var/lib/elasticsearch
lvcreate -l 100%VG -n search vgsearch
mkfs.xfs /dev/mapper/vgsearch-search
mount /dev/mapper/vgsearch-search /var/lib/elasticsearch
echo "/dev/mapper/vgsearch-search /var/lib/elasticsearch xfs defaults 0 0" >> /etc/fstab
To partition /dev/sdd, which is the disk for FortiSOAR™ RPM data, run the following commands as the root user:
pvcreate /dev/sdd
vgcreate vgapp /dev/sdd
mkdir -p /opt
lvcreate -l 100%VG -n csapps vgapp
mkfs.xfs /dev/mapper/vgapp-csapps
mount /dev/mapper/vgapp-csapps /opt
echo "/dev/mapper/vgapp-csapps /opt xfs defaults 0 0" >> /etc/fstab
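Optionally, after all three volumes are created and mounted, you can verify the layout and validate the new /etc/fstab entries before the next reboot. These are standard commands and assume the mount points shown above:
# Display the logical volumes and confirm each is mounted at the expected path
lvs
df -h /var/lib/pgsql /var/lib/elasticsearch /opt
# Re-process /etc/fstab; a syntax error in the new entries will surface here
mount -a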
Note: Commands for recovery of data must be run as the root user.
Following is the procedure for recovering data from the disks:
1. In the /etc/fstab file, comment out the lines that contain the word vgdata or vgapp.
2. Rename the existing volume groups using the following commands:
vgrename vgdata old_vgdata
vgrename vgapp old_vgapp
3. Stop the FortiSOAR™ services using the following command:
csadm services --stop
4. Unmount the volumes using the umount /var/lib/pgsql/ && umount /opt command.
5. Deactivate the renamed volume groups using the following command:
vgchange -a n old_vgdata old_vgapp
6. Identify the disks that contain the vgdata and vgapp volume groups using the 'pvs' command.
7. Because the renamed old_vgdata is on /dev/sdb and old_vgapp is on /dev/sdd, you need to skip these disks from LVM scanning. You can skip the disks from LVM scanning by adding the skip filter in the /etc/lvm/lvm.conf file, by performing the following steps:
a. Open the /etc/lvm/lvm.conf file using the vi /etc/lvm/lvm.conf command.
b. In the "devices {" section of the lvm.conf file, add the following line:
filter = ["r|/dev/sdb|", "r|/dev/sdd|"]
8. Run the vgs command, which should display the vgdata and vgapp volume groups.
9. In the /etc/fstab file, uncomment the lines that contain the word vgdata or vgapp that were commented out in step 1.
10. Truncate the envc and cascade tables using the following command:
psql -U cyberpgsql -d das -c "truncate envc cascade;"
11. Create a script, for example temp_script_for_cluster_table_updation.sh, with the following content, and then run it using sh temp_script_for_cluster_table_updation.sh:
hardware_key=`csadm license --get-hkey`
current_hostname=`hostname`
#First find out the number of nodes available in the cluster table
number_of_nodes_in_cluster_table=`psql -U cyberpgsql -d das -tAc "select COUNT(*) from cluster;"`
if [ $number_of_nodes_in_cluster_table -eq 1 ]; then
# Only single node is available in cluster, hence directly update the nodeid.
psql -U cyberpgsql -d das -c "UPDATE cluster SET nodeid='${hardware_key}';"
csadm ha set-node-name $current_hostname
elif [ $number_of_nodes_in_cluster_table -gt 1 ]; then
# More than one node is available. Now update the nodeid where nodename in cluster table matches with current hostname
psql -U cyberpgsql -d das -c "UPDATE cluster SET nodeid='${hardware_key}' where nodename='${current_hostname}';"
else
echo "Not able to update the cluster table"
fi
#find out the current redis password
redis_password=`grep -e "^\s*requirepass\s\+" /etc/redis.conf | cut -d' ' -f2`
#encrypt the redis password
redis_password=`/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/manage_passwords.py --encrypt $redis_password`
/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/confUtil.py -f /opt/cyops/configs/database/db_config.yml -k "redis_password" -v $redis_password
/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/confUtil.py -f /opt/cyops/configs/database/db_config.yml -k "redis_password_primary" -v $redis_password
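# Start RabbitMQ and reset the cyops user's password to the value stored in rabbitmq_users.conf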
systemctl start rabbitmq-server
rabbitmq_password=`grep "cyops.rabbitmq.password" /opt/cyops/configs/rabbitmq/rabbitmq_users.conf | cut -d"=" -f2`
rabbitmqctl change_password cyops $rabbitmq_password
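# Set the Elasticsearch bootstrap password to the hardware key and store its encrypted value in db_config.yml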
elasticsearch_password=`csadm license --get-hkey`
printf $elasticsearch_password | /usr/share/elasticsearch/bin/elasticsearch-keystore add "bootstrap.password" -f
elasticsearch_password=`/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/manage_passwords.py --encrypt $elasticsearch_password`
/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/confUtil.py -f /opt/cyops/configs/database/db_config.yml -k "secret" -v $elasticsearch_password
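# Start Redis, then clear the API cache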
systemctl start redis
rm -rf /opt/cyops-api/app/cache/prod/
cd /opt/cyops-api
echo "Clear API cache"
sudo -u nginx php app/console cache:clear --env=prod --no-interaction
sudo csadm certs -l
cd /opt/cyops-api/
echo "System Update"
sudo -u nginx php app/console cybersponse:system:update -la --env=prod --force
echo "Restarting Services dependent on keys..."
sudo csadm services --restart
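# Recreate the Elasticsearch indexes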
sudo -u nginx php /opt/cyops-api/app/console cybersponse:elastic:create --env=prod
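# Run the cyops-auth reset_user utility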
/opt/cyops-auth/.env/bin/python -c "import sys; sys.path.append(\"/opt/cyops-auth\"); import utilities.reset_user as reset_user; reset_user.start()"
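Optionally, after the script completes, you can confirm that the service restarts left no systemd unit in a failed state:
# List any systemd units currently in a failed state
systemctl --failed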