Created on 04-08-2020 08:46 AM
From FortiSOAR™ 6.0.0 onwards, multiple disks are supported for the FortiSOAR™ installation. The advantage of installing FortiSOAR™ across multiple disks is that if your FortiSOAR™ system crashes, you can detach the disks that contain data and recover that data. The procedure for recovering data is described in the Recovering Data section later in this article. The procedure for extending existing disks is described in the Deployment Troubleshooting chapter of the FortiSOAR™ "Deployment Guide."
You can add three more disks to your Virtual Machine (VM) and create separate Logical Volume Management (LVM) partitioning for PostgreSQL and Elasticsearch data.
For example, you have added the following new disks:
/dev/sdb: Recommended to be a thin-provisioned disk of 500 GB for PostgreSQL data.
/dev/sdc: Recommended to be a thin-provisioned disk of 150 GB for Elasticsearch data.
/dev/sdd: Recommended to be a thin-provisioned disk of 20 GB for FortiSOAR™ RPM data.

To partition /dev/sdb, which is the disk for PostgreSQL data, run the following commands as a root user:
pvcreate /dev/sdb
vgcreate vgdata /dev/sdb
mkdir -p /var/lib/pgsql
lvcreate -l 100%VG -n relations vgdata
mkfs.xfs /dev/mapper/vgdata-relations
mount /dev/mapper/vgdata-relations /var/lib/pgsql
echo "/dev/mapper/vgdata-relations /var/lib/pgsql xfs defaults 0 0" >> /etc/fstab

To partition /dev/sdc, which is the disk for Elasticsearch data, run the following commands as a root user:
pvcreate /dev/sdc
vgcreate vgsearch /dev/sdc
mkdir -p /var/lib/elasticsearch
lvcreate -l 100%VG -n search vgsearch
mkfs.xfs /dev/mapper/vgsearch-search
mount /dev/mapper/vgsearch-search /var/lib/elasticsearch
echo "/dev/mapper/vgsearch-search /var/lib/elasticsearch xfs defaults 0 0" >> /etc/fstab

To partition /dev/sdd, which is the disk for FortiSOAR™ RPM data, run the following commands as a root user:
pvcreate /dev/sdd
vgcreate vgapp /dev/sdd
mkdir -p /opt
lvcreate -l 100%VG -n csapps vgapp
mkfs.xfs /dev/mapper/vgapp-csapps
mount /dev/mapper/vgapp-csapps /opt
echo "/dev/mapper/vgapp-csapps /opt xfs defaults 0 0" >> /etc/fstab

Note: Commands for recovery of data must be run as a root user.
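After the three mounts are in place, it is worth confirming that each one is registered in /etc/fstab so that it survives a reboot. The following is a minimal sketch, not part of the official procedure; the check_fstab_mounts helper name is our own, and it is demonstrated against a sample file so the sketch can be run anywhere. On a live system you would point it at /etc/fstab instead.

```shell
# Sketch only: check that each expected mount point appears in an fstab file.
check_fstab_mounts() {
    fstab="$1"; shift
    for mnt in "$@"; do
        # Match the mount point as a whole whitespace-delimited field.
        grep -q "[[:space:]]${mnt}[[:space:]]" "$fstab" || { echo "MISSING: $mnt"; return 1; }
        echo "OK: $mnt"
    done
}

# Build a sample fstab mirroring the three entries added above.
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/vgdata-relations /var/lib/pgsql xfs defaults 0 0
/dev/mapper/vgsearch-search /var/lib/elasticsearch xfs defaults 0 0
/dev/mapper/vgapp-csapps /opt xfs defaults 0 0
EOF

check_fstab_mounts /tmp/fstab.sample /var/lib/pgsql /var/lib/elasticsearch /opt
```

On the real appliance, replace /tmp/fstab.sample with /etc/fstab; a MISSING line indicates the corresponding echo ... >> /etc/fstab step was skipped.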
Following is the procedure for recovering data from the disks:
1. In the /etc/fstab file, comment out the lines that contain the word vgdata or vgapp.
2. Rename the existing volume groups using the following commands:
   vgrename vgdata old_vgdata
   vgrename vgapp old_vgapp
3. Stop the FortiSOAR™ services using the csadm services --stop command.
4. Unmount the partitions using the umount /var/lib/pgsql/ && umount /opt command.
5. Deactivate the renamed volume groups using the vgchange -a n old_vgdata old_vgapp command.
6. Attach the disks that contain your data to the VM and confirm that the vgdata and vgapp volume groups are visible using the 'pvs' command.
7. Since old_vgdata is on /dev/sdb and old_vgapp is on /dev/sdd, you must skip these disks from LVM scanning. You can skip the disks from LVM scanning by adding a skip filter in the /etc/lvm/lvm.conf file, by performing the following steps:
   a. Open the /etc/lvm/lvm.conf file using the vi /etc/lvm/lvm.conf command.
   b. In the "devices {" section in the lvm.conf file, add the following line:
      filter = ["r|/dev/sdb|", "r|/dev/sdd|"]
8. Run the vgs command, which should display the vgdata and vgapp volume groups.
9. In the /etc/fstab file, uncomment the lines that contain the word vgdata or vgapp that you commented out in step 1.
10. Truncate the envc table (with cascade) using the following command:
    psql -U cyberpgsql -d das -c "truncate envc cascade;"
11. Create a script named, for example, temp_script_for_cluster_table_updation.sh with the following content, and then run it using sh temp_script_for_cluster_table_updation.sh:

hardware_key=`csadm license --get-hkey`
current_hostname=`hostname`
#First find out the number of nodes available in the cluster table
number_of_nodes_in_cluster_table=`psql -U cyberpgsql -d das -tAc "select COUNT(*) from cluster;"`
if [ $number_of_nodes_in_cluster_table -eq 1 ]; then
# Only single node is available in cluster, hence directly update the nodeid.
psql -U cyberpgsql -d das -c "UPDATE cluster SET nodeid='${hardware_key}';"
csadm ha set-node-name $current_hostname
elif [ $number_of_nodes_in_cluster_table -gt 1 ]; then
# More than one node is available. Now update the nodeid where nodename in cluster table matches with current hostname
psql -U cyberpgsql -d das -c "UPDATE cluster SET nodeid='${hardware_key}' where nodename='${current_hostname}';"
else
echo "Not able to update the cluster table"
fi
#Find out the current redis password
redis_password=`grep -e "^\s*requirepass\s\+" /etc/redis.conf | cut -d' ' -f2`
#Encrypt the redis password
redis_password=`/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/manage_passwords.py --encrypt $redis_password`
/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/confUtil.py -f /opt/cyops/configs/database/db_config.yml -k "redis_password" -v $redis_password
/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/confUtil.py -f /opt/cyops/configs/database/db_config.yml -k "redis_password_primary" -v $redis_password
systemctl start rabbitmq-server
rabbitmq_password=`grep "cyops.rabbitmq.password" /opt/cyops/configs/rabbitmq/rabbitmq_users.conf | cut -d"=" -f2`
rabbitmqctl change_password cyops $rabbitmq_password
elasticsearch_password=`csadm license --get-hkey`
printf $elasticsearch_password | /usr/share/elasticsearch/bin/elasticsearch-keystore add "bootstrap.password" -f
elasticsearch_password=`/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/manage_passwords.py --encrypt $elasticsearch_password`
/opt/cyops-auth/.env/bin/python /opt/cyops/configs/scripts/confUtil.py -f /opt/cyops/configs/database/db_config.yml -k "secret" -v $elasticsearch_password
systemctl start redis
rm -rf /opt/cyops-api/app/cache/prod/
cd /opt/cyops-api
echo "Clear API cache"
sudo -u nginx php app/console cache:clear --env=prod --no-interaction
sudo csadm certs -l
cd /opt/cyops-api/
echo "System Update"
sudo -u nginx php app/console cybersponse:system:update -la --env=prod --force
echo "Restarting Services dependent on keys..."
sudo csadm services --restart
sudo -u nginx php /opt/cyops-api/app/console cybersponse:elastic:create --env=prod
/opt/cyops-auth/.env/bin/python -c "import sys; sys.path.append(\"/opt/cyops-auth\"); import utilities.reset_user as reset_user; reset_user.start()"
Copyright 2025 Fortinet, Inc. All Rights Reserved.