FortiSIEM
FortiSIEM provides Security Information and Event Management (SIEM) and User and Entity Behavior Analytics (UEBA)
kdave
Staff
Article Id 390983
Description This article describes how to resolve the AppSvr error 'Unable to build Hibernate SessionFactory', which occurs during an upgrade.
Scope FortiSIEM.
Solution

While performing an upgrade, the process fails, and the following errors appear in the upgrade logs.

 

Log location: /usr/local/upgrade/logs/ansible.log

[13:23:17] ↳ appserver : APPSERVER | Deploying EAR file into glassfish domain1 using the standard approach ...| localhost | FAILED | 1m1s

{

- stdout: Command deploy failed.

- rc: 1

- stderr: remote failure: Error occurred during deployment: Exception while preparing the app : [PersistenceUnit: phoenixDB] Unable to build Hibernate SessionFactory. Please see server.log for more details.

Exception while invoking class org.glassfish.persistence.jpa.JPADeployer prepare method : javax.persistence.PersistenceException: [PersistenceUnit: phoenixDB] Unable to build Hibernate SessionFactory

[PersistenceUnit: phoenixDB] Unable to build Hibernate SessionFactory

.....

- stderr_lines: [

- remote failure: Error occurred during deployment: Exception while preparing the app : [PersistenceUnit: phoenixDB] Unable to build Hibernate SessionFactory. Please see server.log for more details.

- Exception while invoking class org.glassfish.persistence.jpa.JPADeployer prepare method : javax.persistence.PersistenceException: [PersistenceUnit: phoenixDB] Unable to build Hibernate SessionFactory

- [PersistenceUnit: phoenixDB] Unable to build Hibernate SessionFactory

 

There are two possible root causes for this error during the upgrade. Steps to resolve each are provided below.

 

Case 1: The upgrade failed because the dbleader host entry was not present in the /etc/hosts file.

  • Make sure that the following entry exists in the /etc/hosts file. If it does not, add it manually using any available editor, such as vi:

 

vi /etc/hosts

##profile.on dbvip
X.X.X.X dbleader.fsiem.fortinet.com

 

Note: X.X.X.X is a placeholder; replace it with the actual IP address of the Supervisor.

 

After adding the entry, save the file.
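The check-and-append step above can be sketched as an idempotent snippet. This is a sketch only: the IP address is hypothetical, and a temporary copy stands in for /etc/hosts so the snippet is safe to run anywhere; on the Supervisor the target would be /etc/hosts itself.

```shell
# Sketch: add the dbleader entry only if it is not already present.
# HOSTS_FILE is a temporary stand-in for /etc/hosts; SUP_IP is hypothetical.
HOSTS_FILE=$(mktemp)
printf '127.0.0.1 localhost\n' > "$HOSTS_FILE"

SUP_IP="10.0.0.5"   # replace with the Supervisor's actual IP address
ENTRY="$SUP_IP dbleader.fsiem.fortinet.com"

# Append the entry only when no dbleader line exists yet (idempotent)
grep -q 'dbleader\.fsiem\.fortinet\.com' "$HOSTS_FILE" || \
  printf '##profile.on dbvip\n%s\n' "$ENTRY" >> "$HOSTS_FILE"

cat "$HOSTS_FILE"
```

Running the snippet a second time makes no further changes, so it is safe to re-run before resuming the upgrade.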

 

  • Resume the upgrade from the step where it failed previously.

 

cd /usr/local/upgrade

ansible-playbook post-upgrade.yml --start-at-task="<task_name>" | tee -a /usr/local/upgrade/logs/ansible_continued_upgrade.log

 

The 'task_name' can be found with the following command:

 

ansible-playbook --list-tasks post-upgrade.yml
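Alternatively, the failed task's name can be recovered directly from the upgrade log. The snippet below is a sketch that parses a sample line matching the log excerpt above; the sed expression assumes the ' : ' separator and ' ...' suffix shown in that excerpt, and a temporary file stands in for /usr/local/upgrade/logs/ansible.log.

```shell
# Sketch: extract the failed task name from the upgrade log so it can be
# passed to --start-at-task. The sample line mirrors the log excerpt above;
# on a real system, point LOG at /usr/local/upgrade/logs/ansible.log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[13:23:17] ↳ appserver : APPSERVER | Deploying EAR file into glassfish domain1 using the standard approach ...| localhost | FAILED | 1m1s
EOF

# Keep everything after the " : " separator, then drop the " ...|" suffix
TASK=$(grep 'FAILED' "$LOG" | sed 's/.* : //; s/ \.\.\..*//')
echo "$TASK"
```

The resulting value can then be quoted into `--start-at-task="$TASK"`.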

 

Case 2: AppSvr deployment failed during the upgrade due to an AppSvr password mismatch.

  • The /tmp/passwords.txt file must contain the existing Database Password.

 

cat /tmp/passwords.txt

 

  • Note down the password value, then back up and edit the domain.xml file:


cp /opt/glassfish/domains/domain1/config/domain.xml /opt/glassfish/domains/domain1/config/domain.xml.backup


vi /opt/glassfish/domains/domain1/config/domain.xml

Locate the following line:

<property name="password" value="${ALIAS=phdbpwd}"></property>

 

Change the value to the password noted above, without the angle brackets:

 

<property name="password" value="<password_as_noted_above>"></property>

Save the file.
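The edit can also be sketched with sed. This is a sketch under stated assumptions: a minimal stand-in file keeps it self-contained, and DB_PASSWORD is a hypothetical value standing in for the password noted from /tmp/passwords.txt; on the Supervisor, the target is /opt/glassfish/domains/domain1/config/domain.xml.

```shell
# Sketch: substitute the ${ALIAS=phdbpwd} placeholder with the real password.
# A minimal stand-in file is used here; DB_PASSWORD is a hypothetical value.
DOMAIN_XML=$(mktemp)
cat > "$DOMAIN_XML" <<'EOF'
<property name="password" value="${ALIAS=phdbpwd}"></property>
EOF

DB_PASSWORD='s3cretFromPasswordsTxt'   # value noted from /tmp/passwords.txt

# Back up first, then replace the alias token with the literal password
cp "$DOMAIN_XML" "$DOMAIN_XML.backup"
sed -i "s/\${ALIAS=phdbpwd}/$DB_PASSWORD/" "$DOMAIN_XML"

cat "$DOMAIN_XML"
```

Keeping the backup makes it easy to restore the alias-based configuration once the upgrade completes.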

  • Redeploy the AppSvr.


su admin
cd /opt/phoenix/deployment
./deploy-fresh.sh phoenix.ear

 

  • After AppSvr has redeployed successfully, continue the upgrade:

 

cd /usr/local/upgrade

ansible-playbook post-upgrade.yml --start-at-task="<task_name>" | tee -a /usr/local/upgrade/logs/ansible_continued_upgrade.log

The 'task_name' can be found with the following command:

ansible-playbook --list-tasks post-upgrade.yml

 

  • Once the upgrade task is completed, reboot the instance.
  • After the instance has rebooted, check that all processes are up and running.
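The post-reboot check can be sketched as a loop over expected daemons. The process names below are generic placeholders, not FortiSIEM's actual daemon list; substitute the processes expected on the node.

```shell
# Sketch: verify a list of expected processes after reboot.
# The names below are placeholders; replace them with the daemons
# expected on the node being checked.
STATUS=$(for proc in sshd crond; do
  if pgrep -x "$proc" > /dev/null 2>&1; then
    echo "$proc: running"
  else
    echo "$proc: NOT running"
  fi
done)
echo "$STATUS"
```

Any line reporting "NOT running" warrants investigation before putting the instance back into service.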