Description
This article explains how to minimize the downtime that occurs when migrating data from one NFS store to another, or from local storage to a new NFS server. It assumes that the new server is a Linux server; if any other type of NFS server is used, adjustments beyond the following steps will be necessary, depending on the server.
NOTE: rsync is third-party software, used here for convenience in the same way cp (copy) would be used.
Troubleshooting of rsync should be performed by a Linux administrator, as running and maintaining the necessary tasks requires Linux administration knowledge.
Refer to the rsync author's page to obtain further information for troubleshooting file transfer issues: http://rsync.samba.org/
Scope
FortiSIEM.
Solution
Option 1: Old NFS to New NFS
1) Log into the Supervisor and create a folder to mount the new storage onto:
# ssh root@<supervisor ip>
# mkdir /new_data
2) [Supervisor Only] Mount the NEW NFS on FortiSIEM
# mount -t nfs (new NFS IP):(remote new NFS path) /new_data
If a problem occurs with the mount, run the following example commands (substituting values as appropriate) to troubleshoot, and include the output in a support ticket:
# mount -v -t nfs (new NFS IP):(remote new NFS path) /new_data
# showmount -e (new NFS IP)
NOTE: Before running the rsync in step 3, it is recommended to install and use a virtual screen so that an accidental network drop does not interrupt the copy, which would make it necessary to repeat step 3: https://linuxize.com/post/how-to-use-linux-screen/
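For example, a minimal screen workflow (the session name 'migration' is illustrative) is to start a named session, run the copy inside it, and reattach if the SSH connection drops:
# yum install screen (only if screen is not already installed)
# screen -S migration
(run the rsync from step 3 inside this session; detach with Ctrl-A then D)
# screen -r migration (reattach to the session after a disconnect)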
3) [Supervisor Only] Run rsync from the old NFS location to the new NFS location:
# cd /data
# rsync --progress -av * /new_data
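Optionally, a dry run can be performed first to preview what rsync would transfer without writing anything (the -n/--dry-run flag is standard rsync behavior):
# rsync --progress -avn * /new_data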
Run the steps below after completing step 3:
4) [Supervisor and Workers] Shut down all AO Processes:
A) SSH into the supervisor and workers as root
B) Shut down all of the necessary services:
# phtools --stop all
# phstatus (make sure all ph-processes are down except phMonitor)
# service crond stop
# killall -9 phwatchdog phMonitor phxctl
Supervisor Only:
# /opt/glassfish/bin/asadmin stop-domain
v5.1.2 and below:
# service postgresql-9.1 stop
v5.2.1+, v6.1.*:
# service postgresql-9.4 stop
v6.2+:
# service postgresql-13 stop
# service httpd stop
5) [Supervisor and Workers] Check the statuses of all services to ensure they are down:
# phstatus
NOTE: For Supervisor, the ONLY service running should be Node.js.
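As an optional additional check, the process list can be inspected directly; the following example should return no output once everything is down (any leftover processes can be stopped again as in step 4):
# ps -ef | grep -E 'phwatchdog|phMonitor|phxctl' | grep -v grep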
6) [Supervisor Only] Run step 3 once more to complete the final copy:
# cd /data
# rsync --progress -av * /new_data
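If data was purged from /data between the first copy and this final pass, the two trees can drift apart. As an optional refinement, rsync's --delete flag removes files under the copied directories in /new_data that no longer exist under /data; use it carefully and only against the intended destination:
# rsync --progress -av --delete * /new_data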
7) [Supervisor Only] Check the ownership, permissions and folder structure of the '/new_data' path against '/data'.
# cd /new_data/
# ls -l
The output will appear similar to the following:
drwxr-sr-x 3 postgres postgres 4096 Dec 21 2016 archive
drwxr-sr-x 3 root admin 4096 Mar 19 17:21 backup
drwxr-sr-x 10 admin admin 4096 Jan 18 14:46 cache
drwxr-sr-x 2 postgres postgres 4096 Dec 20 2016 cmdb
drwxr-sr-x 2 admin admin 20480 Jun 12 17:02 custParser
drwxrwsr-x 2 admin admin 36864 Mar 12 11:17 eventDataResult
drwxrwsr-x 2 admin admin 4096 May 30 14:17 eventDataSum
drwxr-sr-x 22 admin admin 4096 Jun 11 12:52 eventdb
drwxr-sr-x 2 admin admin 4096 Dec 20 2016 jmxXml
drwxr-sr-x 2 admin admin 4096 Mar 19 19:00 mibXml
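A quick, illustrative way to compare the two trees is to diff the listings while ignoring the size and date columns (directory sizes legitimately differ between filesystems); no output means the permissions, owners, groups, and names match:
# diff <(ls -l /data | awk 'NR>1 {print $1,$3,$4,$NF}') <(ls -l /new_data | awk 'NR>1 {print $1,$3,$4,$NF}')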
8) [Supervisor and Workers] Unmount the old NFS server and mount the new NFS server:
# umount /new_data (Supervisor only)
# umount /data
# vi /etc/fstab
Modify the NFS mount entry to the new NFS mount:
<NEWNFSIP>:/<directory> /data nfs defaults,nfsvers=3,timeo=14,noatime,nolock 0 0
For example:
10.30.1.10:/eventDB /data nfs defaults,nfsvers=3,timeo=14,noatime,nolock 0 0
Run the following command:
# mount -a
If mounting fails, ensure that the new NFS server has the workers as part of its export list under '/etc/exports'.
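On the NFS server itself (not on FortiSIEM), the export list is defined in /etc/exports. The entry below is only an example with illustrative IPs and options; the actual path, addresses, and export options depend on the environment. After editing, re-export and confirm the list:
/eventDB 10.30.1.20(rw,sync,no_root_squash) 10.30.1.21(rw,sync,no_root_squash)
# exportfs -ra
# showmount -e localhost (the Supervisor and all Workers should appear against the exported path)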
9) [Supervisor Only] Reset the configuration database values:
v5.1.2 and below:
# service postgresql-9.1 start
v5.2.1+, v6.1.*:
# service postgresql-9.4 start
v6.2+:
# service postgresql-13 start
# psql -U phoenix phoenixdb -c "delete from ph_sys_conf where category='16'"
# /opt/phoenix/deployment/jumpbox/storage_type_to_db.sh
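Optionally, before restarting services, the same table can be queried to confirm that the script above repopulated the storage configuration (the exact rows vary by version):
# psql -U phoenix phoenixdb -c "select * from ph_sys_conf where category='16'"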
10) [Supervisor and Workers] Restart all internal services:
# service crond start
# phstatus (Watch for all the services to restart)
Optionally, reboot the system:
# shutdown -r now
Or:
# reboot
11) [Supervisor and Workers] Verify that the new NFS disk is available with the following command:
# df
Example output:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 57668868 27594080 27145380 51% /
tmpfs 8166588 256 8166332 1% /dev/shm
/dev/sda1 126931 68150 52228 57% /boot
/dev/sdb1 61923396 2238644 56539228 4% /cmdb
/dev/sdc1 61923396 187220 58590652 1% /svn
172.30.59.133:/eventDB
91792384 44505088 42617856 52% /data
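The mount type can also be confirmed directly; /data should be reported with type nfs and the new server's IP, for example:
# df -hT /data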
Option 2: Local disk Storage to NFS
1) Log into the Supervisor and create a folder to mount the new storage onto:
# ssh root@<supervisor ip>
# mkdir /new_data
2) Mount the new NFS on FortiSIEM:
# mount -t nfs (new NFS IP):(remote new NFS path) /new_data
If a problem occurs with the mount, run the following example command (substituting values as appropriate) to troubleshoot, and include the output in a support ticket:
# mount -v -t nfs (new NFS IP):(remote new NFS path) /new_data
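Once the mount succeeds, it is worth confirming that the new volume has enough free space for the data being moved before starting the copy, for example:
# du -sh /data
# df -h /new_data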
NOTE: Before running the rsync in step 3, it is recommended to install and use a virtual screen so that an accidental network drop does not interrupt the copy, which would make it necessary to repeat step 3: https://linuxize.com/post/how-to-use-linux-screen/
3) Run rsync from the local disk to the new NFS location:
# cd /data
# rsync --progress -av * /new_data
Run the following steps after completing step 3.
4) [Supervisor and Workers] Shut down all AO Processes:
A) SSH into the supervisor and workers as root
B) Shut down all of the necessary services:
# phtools --stop all
# phstatus (make sure all ph-processes are down except phMonitor)
# service crond stop
# killall -9 phwatchdog phMonitor phxctl
Supervisor only:
# /opt/glassfish/bin/asadmin stop-domain
v5.1.2 and below:
# service postgresql-9.1 stop
v5.2.1+, v6.1.*:
# service postgresql-9.4 stop
v6.2+:
# service postgresql-13 stop
# service httpd stop
5) [Supervisor and Workers] Check the statuses of all services to ensure they are down:
# phstatus
NOTE: For Supervisor, the ONLY service running should be Node.js.
6) [Supervisor Only] Run step 3 once more to complete the final copy:
# cd /data
# rsync --progress -av * /new_data
7) [Supervisor Only] Check the ownership, permissions and folder structure of the '/new_data' path against '/data'.
# cd /new_data/
# ls -l
The output will appear similar to the following:
drwxr-sr-x 3 postgres postgres 4096 Dec 21 2016 archive
drwxr-sr-x 3 root admin 4096 Mar 19 17:21 backup
drwxr-sr-x 10 admin admin 4096 Jan 18 14:46 cache
drwxr-sr-x 2 postgres postgres 4096 Dec 20 2016 cmdb
drwxr-sr-x 2 admin admin 20480 Jun 12 17:02 custParser
drwxrwsr-x 2 admin admin 36864 Mar 12 11:17 eventDataResult
drwxrwsr-x 2 admin admin 4096 May 30 14:17 eventDataSum
drwxr-sr-x 22 admin admin 4096 Jun 11 12:52 eventdb
drwxr-sr-x 2 admin admin 4096 Dec 20 2016 jmxXml
drwxr-sr-x 2 admin admin 4096 Mar 19 19:00 mibXml
8) Unmount the old local disk and mount the new NFS server:
# umount /new_data (Supervisor only)
# umount /data
# vi /etc/fstab
Change the local disk entry to the NFS server entry. First, find the mount entry:
<device disk> /data ext3 defaults,noatime 0 0
For example:
/dev/sdd /data ext3 defaults,noatime 0 0
Modify it to the NFS entry:
<NEWNFSIP>:/<directory> /data nfs defaults,nfsvers=3,timeo=14,noatime,nolock 0 0
For example:
10.30.1.10:/eventDB /data nfs defaults,nfsvers=3,timeo=14,noatime,nolock 0 0
Run the following command:
# mount -a
If mounting fails, ensure that the new NFS server has the workers as part of its export list under '/etc/exports'.
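After 'mount -a' returns without errors, /data should be served over NFS rather than the local disk. This can be confirmed as follows (example only):
# grep /data /etc/fstab
# mount | grep ' /data ' (the type should now show nfs instead of ext3)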
9) Reset the configuration database values:
v5.1.2 and below:
# service postgresql-9.1 start
v5.2.1+, v6.1.*:
# service postgresql-9.4 start
v6.2+:
# service postgresql-13 start
# psql -U phoenix phoenixdb -c "delete from ph_sys_conf where category='16'"
# /opt/phoenix/deployment/jumpbox/storage_type_to_db.sh
10) Restart all internal services:
# service crond start
# phstatus (Watch for all the services to restart)
Optionally, reboot the system:
# shutdown -r now
Or:
# reboot
11) Verify that the NEW NFS disk is available:
# df
Example output:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 57668868 27594080 27145380 51% /
tmpfs 8166588 256 8166332 1% /dev/shm
/dev/sda1 126931 68150 52228 57% /boot
/dev/sdb1 61923396 2238644 56539228 4% /cmdb
/dev/sdc1 61923396 187220 58590652 1% /svn
172.30.59.133:/eventDB
91792384 44505088 42617856 52% /data
12) [OPTIONAL] Register Workers
The most common reason to move the eventdb from the local disk to an NFS server is to provide additional workers within the FortiSIEM infrastructure. This step is only necessary if that is the intention.
A) Navigate to https://<super_address>/phoenix/.
B) Navigate to Admin > License > Nodes > Add.
C) Select the intended worker from the dropdown menu.
D) Enter the worker address under Worker IP Address.
NOTE: Adding a worker requires that the worker can connect to the NFS server that has been configured. If the NFS server does not include the worker's IP in /etc/exports, add it before adding workers (a quick connectivity check is shown after this list).
E) Navigate to Admin > Settings > Worker Upload.
F) Add the worker address under Worker Address.
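As a quick pre-check before adding each worker in the GUI, the following can be run from the worker itself; the export and the worker's access should be visible, and a temporary test mount should succeed (the /tmp/nfs_test path is only an illustrative scratch directory):
# showmount -e (new NFS IP)
# mkdir -p /tmp/nfs_test
# mount -t nfs (new NFS IP):(remote new NFS path) /tmp/nfs_test
# umount /tmp/nfs_test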