rrodrigues
New Contributor II

Scheduled firmware upgrade results in failed initialisation

Hello,

Our system was set for a scheduled firmware upgrade from 7.4.1 to 7.4.2.

There was an automated message about the upgrade, "Automatic firmware upgrade schedule changed", which showed when and what:

date=2023-12-22 time=23:08:37 devid="xxxx" devname="FG-companyname-SC" eventtime=1703315317648109599 tz="-0800" logid="xxx" type="event" subtype="system" level="notice" vd="root" logdesc="Automatic firmware upgrade schedule changed" user="system" msg="System patch-level auto-upgrade new image installation scheduled between local time Thu Dec 28 23:16:15 2023 and local time Fri Dec 29 01:00:00 2023." 
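For reference, I believe this schedule comes from the automatic patch-level upgrade feature under the FortiGuard settings. Assuming the option names are the same on 7.4 (I'm going from memory, so they may differ on your build), this is roughly how it can be reviewed or turned off from the CLI:

show full-configuration system fortiguard | grep auto-firmware

config system fortiguard
    set auto-firmware-upgrade disable
end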

 

 

In our logs there is a critical event (see screenshot) where this was actioned, in line with the timing listed in the email above, and we have just had reports that no Wi-Fi is being served through the device.

A hard reset of the FortiGate seems to resolve the issue, but it happens every time the system reboots itself after an update.

Screenshot 2024-01-02 at 17.32.03.png

 

What I find interesting is that there are no events between 28 December and 2 January.

It's worth noting that there is also a warning in the logs after this update on 28 December; I'm unsure whether it's connected, but I'll mention it nevertheless:

Local certificate Fortinet_SSL_RSA4096 will expire in 0 days.

 

Is there a known issue, or something specific I could search for, to help track down the cause of this?

AEK
SuperUser

Hello

I'm not sure but I'd say that the new firmware has probably invalidated this certificate for some reason. Keep in mind you can regenerate the default certificates.

execute vpn certificate local generate default-ssl-key-certs

 Hope this helps.
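If it helps, and assuming the factory certificate keeps its default name, you can confirm it is present again after regenerating with something like:

show vpn certificate local Fortinet_SSL_RSA4096

The new expiry date itself is easiest to check under System > Certificates in the GUI.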

AEK
rrodrigues
New Contributor II

Hi AEK

Do you think the invalid certificate could also cause the reboot/relaunch of the device to fail?

hbac
Staff

Hi @rrodrigues,

 

Did your scheduled firmware upgrade work? An expired certificate shouldn't cause a reboot. You can follow this article to regenerate the certificate: https://community.fortinet.com/t5/FortiGate/Technical-Tip-Renew-Certificate-Expired-on-FortiGate/ta-...

 

Regards, 

rrodrigues
New Contributor II

Hi hbac,

Yes, we scheduled the firmware upgrade. I'm assuming, based on previous experience with firmware updates for switches/routers, that it needed a reboot. My highlighting of the cert error was merely a question rather than an indication that this was the cause of the problem.

 

Could the failed cert stop the FortiGate from coming back online after a firmware upgrade?

 

The status LED for the device was off when we got to it, and we did a hard reset (pulled the power out and plugged it back in).

AEK

Hi

I think it is unlikely that an expired cert can cause such an issue on FGT.

Can you check for any startup error logs?

get system startup-error-log
or
diag debug config-error-log read
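Both are read-only, so they are safe to run from an SSH or console session. If you end up opening a ticket, capturing the output of

execute tac report

right after a failed boot can also be useful; it prints a large set of diagnostics to the session, so log the session to a file first.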

 

AEK
hbac

@rrodrigues,

 

As I mentioned, an expired certificate shouldn't cause a reboot. I understand that the FortiGate was not able to boot up and you had to hard reboot it. Can you confirm whether the firmware upgrade was successful or not? Please also provide the output of this command: "di deb crashlog read".
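For reference, that is the abbreviated form; the full command is:

diagnose debug crashlog read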

 

Regards, 

rrodrigues
New Contributor II

According to the logs, yes, but it's worded weirdly.

The status of the device confirms it, though.

Screenshot 2024-01-04 at 12.32.18.png

Would we still get anything relevant from the logs if the system was hard rebooted after the 28th?

Both of these commands returned nothing:

diag debug config-error-log read
get system startup-error-log

 

Here is the output from the "di deb crashlog read" command:

FG-Cloudinary-SC # di deb crashlog read
1: 2021-11-15 09:04:51 Interface x1 is brought up. process_id=221, process_name="lnkmtd"
2: 2021-11-15 09:04:51 Interface x2 is brought up. process_id=221, process_name="lnkmtd"
3: 2021-12-15 14:55:42 the killed daemon is /bin/eap_proxy: status=0x0
4: 2022-02-23 10:55:33 the killed daemon is /bin/eap_proxy: status=0x0
5: 2022-03-23 10:55:31 the killed daemon is /bin/eap_proxy: status=0x0
6: 2022-04-17 09:33:02 the killed daemon is /bin/sflowd: status=0x0
7: 2022-04-17 09:50:43 the killed daemon is /bin/sflowd: status=0x0
8: 2022-04-20 20:42:34 the killed daemon is /bin/dhcpd: status=0x0
9: 2022-04-27 02:37:21 the killed daemon is /bin/csfd: status=0x0
10: 2022-08-16 17:51:48 the killed daemon is /bin/csfd: status=0x0
11: 2022-08-16 17:51:49 the killed daemon is /bin/eap_proxy: status=0x0
12: 2022-09-19 23:05:48 the killed daemon is /bin/dhcpd: status=0x0
13: 2022-10-13 14:52:22 the killed daemon is /bin/csfd: status=0x0
14: 2022-11-11 16:13:52 the killed daemon is /bin/dhcpd: status=0x0
15: 2022-11-11 16:14:41 the killed daemon is /bin/dhcpd: status=0x0
16: 2022-12-08 14:51:40 the killed daemon is /bin/csfd: status=0x0
17: 2023-01-14 14:12:16 the killed daemon is /bin/csfd: status=0x0
18: 2023-03-03 12:11:23 the killed daemon is /bin/csfd: status=0x0
19: 2023-04-21 15:45:13 the killed daemon is /bin/sflowd: status=0x0
20: 2023-05-14 22:45:29 the killed daemon is /bin/csfd: status=0x0
21: 2023-06-02 09:15:07 the killed daemon is /bin/csfd: status=0x0
22: 2023-06-06 12:37:25 the killed daemon is /bin/sflowd: status=0x0
23: 2023-06-06 12:41:43 the killed daemon is /bin/sfupgraded: status=0x0
24: 2023-06-06 12:41:43 the killed daemon is /bin/sfupgraded: status=0x0
25: 2023-06-06 12:41:46 the killed daemon is /bin/sflowd: status=0x0
26: 2023-06-06 13:12:39 the killed daemon is /bin/csfd: status=0x0
27: 2023-06-06 13:12:39 the killed daemon is /bin/eap_proxy: status=0x0
28: 2023-06-07 14:08:13 the killed daemon is /bin/sflowd: status=0x0
29: 2023-06-08 18:24:19 the killed daemon is /bin/sflowd: status=0x0
30: 2023-06-08 18:24:29 the killed daemon is /bin/sflowd: status=0x0
31: 2023-06-24 09:35:24 the killed daemon is /bin/sflowd: status=0x0
32: 2023-06-24 09:35:24 the killed daemon is /bin/sflowd: status=0x0
33: 2023-06-24 09:35:24 the killed daemon is /bin/sflowd: status=0x0
34: 2023-06-24 09:35:31 the killed daemon is /bin/sflowd: status=0x0
35: 2023-06-24 10:05:55 the killed daemon is /bin/csfd: status=0x0
36: 2023-06-24 10:05:55 the killed daemon is /bin/eap_proxy: status=0x0
37: 2023-06-26 18:05:26 the killed daemon is /bin/eap_proxy: status=0x0
38: 2023-06-26 18:05:26 the killed daemon is /bin/csfd: status=0x0
39: 2023-06-29 11:35:11 the killed daemon is /bin/eap_proxy: status=0x0
40: 2023-06-29 11:35:11 the killed daemon is /bin/csfd: status=0x0
41: 2023-08-11 12:41:52 the killed daemon is /bin/sflowd: status=0x0
42: 2023-08-11 12:41:52 the killed daemon is /bin/sflowd: status=0x0
43: 2023-08-30 09:41:18 the killed daemon is /bin/csfd: status=0x0
44: 2023-08-30 09:41:18 the killed daemon is /bin/eap_proxy: status=0x0
45: 2023-09-06 07:56:48 Interface ha1 is brought down. process_id=9647, process_name="httpsd"
46: 2023-09-06 07:56:48 Interface ha2 is brought down. process_id=9647, process_name="httpsd"
47: 2023-09-06 07:56:48 Interface mgmt is brought down. process_id=9647, process_name="httpsd"
48: 2023-09-06 07:56:48 Interface port1 is brought down. process_id=9647, process_name="httpsd"
49: 2023-09-06 07:56:49 Interface port2 is brought down. process_id=9647, process_name="httpsd"
50: 2023-09-06 07:56:49 Interface port3 is brought down. process_id=9647, process_name="httpsd"
51: 2023-09-06 07:56:49 Interface port4 is brought down. process_id=9647, process_name="httpsd"
52: 2023-09-06 07:56:49 Interface port5 is brought down. process_id=9647, process_name="httpsd"
53: 2023-09-06 07:56:49 Interface port6 is brought down. process_id=9647, process_name="httpsd"
54: 2023-09-06 07:56:49 Interface port7 is brought down. process_id=9647, process_name="httpsd"
55: 2023-09-06 07:56:49 Interface port8 is brought down. process_id=9647, process_name="httpsd"
56: 2023-09-06 07:56:49 Interface port9 is brought down. process_id=9647, process_name="httpsd"
57: 2023-09-06 07:56:50 Interface port10 is brought down. process_id=9647, process_name="httpsd"
58: 2023-09-06 07:56:50 Interface port11 is brought down. process_id=9647, process_name="httpsd"
59: 2023-09-06 07:56:50 Interface port12 is brought down. process_id=9647, process_name="httpsd"
60: 2023-09-06 07:56:50 Interface port13 is brought down. process_id=9647, process_name="httpsd"
61: 2023-09-06 07:56:50 Interface port14 is brought down. process_id=9647, process_name="httpsd"
62: 2023-09-06 07:56:50 Interface port15 is brought down. process_id=9647, process_name="httpsd"
63: 2023-09-06 07:56:50 Interface port16 is brought down. process_id=9647, process_name="httpsd"
64: 2023-09-06 07:56:50 Interface port17 is brought down. process_id=9647, process_name="httpsd"
65: 2023-09-06 07:56:51 Interface port18 is brought down. process_id=9647, process_name="httpsd"
66: 2023-09-06 07:56:51 Interface port19 is brought down. process_id=9647, process_name="httpsd"
67: 2023-09-06 07:56:51 Interface port20 is brought down. process_id=9647, process_name="httpsd"
68: 2023-09-06 07:56:51 Interface wan1 is brought down. process_id=9647, process_name="httpsd"
69: 2023-09-06 07:56:51 Interface wan2 is brought down. process_id=9647, process_name="httpsd"
70: 2023-09-06 08:57:34 the killed daemon is /bin/sflowd: status=0x0
71: 2023-09-06 08:57:34 the killed daemon is /bin/sflowd: status=0x0
72: 2023-09-06 08:57:34 the killed daemon is /bin/sflowd: status=0x0
73: 2023-10-03 09:57:27 the killed daemon is /bin/csfd: status=0x0
74: 2023-10-13 08:39:57 the killed daemon is /bin/sflowd: status=0x0
75: 2023-10-13 08:39:57 the killed daemon is /bin/sflowd: status=0x0
76: 2023-10-13 08:40:03 the killed daemon is /bin/sflowd: status=0x0
77: 2023-10-13 09:10:48 the killed daemon is /bin/csfd: status=0x0
78: 2023-10-13 09:10:48 the killed daemon is /bin/eap_proxy: status=0x0
79: 2023-11-13 09:29:35 the killed daemon is /bin/sflowd: status=0x0
80: 2023-11-13 09:49:09 the killed daemon is /bin/csfd: status=0x0
81: 2023-11-13 09:49:09 the killed daemon is /bin/eap_proxy: status=0x0
82: 2023-12-14 09:48:40 the killed daemon is /bin/csfd: status=0x0
83: 2024-01-02 09:09:49 the killed daemon is /bin/sfupgraded: status=0x0
84: 2024-01-02 09:09:49 the killed daemon is /bin/sfupgraded: status=0x0
85: 2024-01-02 09:09:51 the killed daemon is /bin/sfupgraded: status=0x0
86: 2024-01-02 09:09:53 the killed daemon is /bin/sfupgraded: status=0x0
87: 2024-01-02 09:09:55 the killed daemon is /bin/sfupgraded: status=0x0
88: 2024-01-02 09:09:56 the killed daemon is /bin/sflowd: status=0x0
89: 2024-01-02 09:42:08 the killed daemon is /bin/csfd: status=0x0
90: 2024-01-02 09:42:08 the killed daemon is /bin/eap_proxy: status=0x0
Crash log interval is 3600 seconds
Max crash log line number: 16384

 

hbac

@rrodrigues,

 

If there are no crash logs or event logs from that time, you will need to connect to the console port while the issue is occurring to see what's going on. Please refer to https://community.fortinet.com/t5/FortiGate/Technical-Tip-How-to-connect-to-the-FortiGate-console-po...
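Assuming a standard FortiGate console port, the serial settings are 9600 baud, 8 data bits, no parity, 1 stop bit, no flow control. From a Linux/macOS machine that is something like:

screen /dev/ttyUSB0 9600

(the device name depends on your USB-to-serial adapter), or PuTTY in serial mode with the same settings on Windows. Log the session to a file so the boot messages from the next scheduled upgrade are captured.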

 

Regards, 
