FortiSIEM Discussions
gauravpawar
New Contributor III

FortiSIEM cluster upgrade from 7.3.2 to 7.4.0 failed

I have a cluster setup of 3 Supervisors with automated HA and 2 Workers on 7.3.2. I followed the 7.4.0 cluster upgrade steps as follows:

To run the cluster upgrade:

  1. Collectors can remain up and running. Workers will be stopped via the cluster upgrade script.
  2. SSH to Licensed Supervisor as root.
  3. Create the path /opt/upgrade.
    mkdir -p /opt/upgrade
  4. Download the upgrade zip package FSM_Upgrade_All_7.4.0_build0435.zip, then upload it to the Supervisor node under the /opt/upgrade/ folder.
    Example (From Linux CLI):
    scp FSM_Upgrade_All_7.4.0_build0435.zip root@10.10.10.15:/opt/upgrade/
  5. Go to /opt/upgrade.
    cd /opt/upgrade
  6. Use 7za to extract the upgrade zip package.
    Note: 7za replaces unzip for FortiSIEM 7.1.0 and later to avert unzip security vulnerabilities.
    7za x FSM_Upgrade_All_7.4.0_build0435.zip
  7. Go to the FSM_Upgrade_All_7.4.0_build0435 directory.
    cd FSM_Upgrade_All_7.4.0_build0435
  8. Run a screen.
    screen -S upgrade
    Note: This is intended for situations where network connectivity is less than favorable. If there is any connection loss, log back into the SSH console and return to the virtual screen by using the following command.
    screen -r
  9. Run the script.
    python fsm_cluster_upgrade.py
    This will go through all the Supervisor HA and DR nodes, as well as the Worker nodes, in the correct order and upgrade them. After this finishes, you need to upgrade the Collectors. (A pre-run cluster health check sketch follows this list.)
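
For reference, the health of the Patroni database cluster can be confirmed from the licensed Supervisor before the run. A minimal sketch, reusing the pghauser account, leader hostname, and patroni.yml path that the upgrade script itself uses (all visible in the failed command in the log below):

    # Sketch: list the Patroni cluster members and their state before upgrading.
    # Account, host, and config path are taken from the upgrade script's own command.
    sudo su - pghauser -c 'ssh -o StrictHostKeyChecking=no pghauser@dbleader.fsiem.fortinet.com "sudo patronictl -c /etc/patroni/patroni.yml list"'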

After step 9, the upgrade failed with the following error:

(2025-07-10 02:12:56): [01:31:06] pre-upgrade : PRE-UPGRADE | Put Patroni cluster in maintenance mode if licensed super ...| localhost | FAILED | 3.92s
(2025-07-10 02:12:56): {
(2025-07-10 02:12:56): - stdout: ""
(2025-07-10 02:12:56): - rc: 1
(2025-07-10 02:12:56): - stderr: ""
(2025-07-10 02:12:56): - start: 2025-07-10 01:31:07.152985
(2025-07-10 02:12:56): - end: 2025-07-10 01:31:10.059735
(2025-07-10 02:12:56): - msg: non-zero return code
(2025-07-10 02:12:56): - changed: True
(2025-07-10 02:12:56): - cmd: sudo su - pghauser -c 'ssh -o StrictHostKeyChecking=no pghauser@dbleader.fsiem.fortinet.com "sudo patronictl -c /etc/patroni/patroni.yml pause 2>/dev/null"'
(2025-07-10 02:12:56): - delta: 0:00:02.906750
(2025-07-10 02:12:56): - stderr_lines: []
(2025-07-10 02:12:56): - failed_when_result: True
(2025-07-10 02:12:56): }
(2025-07-10 02:12:56): --------------------------------------------------
(2025-07-10 02:12:56): DEBUG: False
(2025-07-10 02:12:56): ERROR: Version mismatch: upgrade package version 7.4.0.0435 vs recorded version 7.3.2.0374
(2025-07-10 02:12:56): DEBUG: False
(2025-07-10 02:12:56): ERROR: Node 192.168.60.31 upgrade failed again. Please investigate and run the upgrade manually on node 192.168.60.31.
(2025-07-10 02:12:56): INFO: Not all nodes have been upgraded yet.
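
The task that fails is the ssh/patronictl call shown in the cmd field above. Because the playbook discards stderr (2>/dev/null), replaying the command by hand with the redirect removed should surface Patroni's actual error message. A sketch of that replay:

    # Replay the failing command from the log without the 2>/dev/null redirect,
    # so patronictl's real error message is visible.
    # Note: if this succeeds it really does pause the cluster; undo with
    # "patronictl -c /etc/patroni/patroni.yml resume".
    sudo su - pghauser -c 'ssh -o StrictHostKeyChecking=no pghauser@dbleader.fsiem.fortinet.com "sudo patronictl -c /etc/patroni/patroni.yml pause"'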

@Secusaurus @Anthony_E Could you please help here?

1 REPLY
Secusaurus
Contributor III

Hi @gauravpawar,

I have not done an upgrade to 7.4 anywhere yet. Therefore, I am not familiar with the cluster-upgrade Python script and have not seen a similar error before. It looks like one of the cluster members or upgrade components was expected to have finished its part of the upgrade but did not, which is where the script stops. The "Version mismatch" line suggests node 192.168.60.31 still reports 7.3.2.0374 while the package is 7.4.0.0435, so that node's upgrade likely never completed. Before opening a ticket, you could check what that node actually reports (sketch below).
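
A minimal sketch of that check, assuming the standard phshowVersion.sh location on FortiSIEM nodes (the IP is the failed node from your log):

    # Assumption: phshowVersion.sh lives in its usual /opt/phoenix/bin location.
    # Compare what the node reports against the recorded 7.3.2.0374 from the log.
    ssh root@192.168.60.31 /opt/phoenix/bin/phshowVersion.sh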

Please raise a technical support ticket with Fortinet Support.

Best,

Christian

FCX #003451 | Fortinet Advanced Partner