Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
ITGuy11
New Contributor

When is 5.4.1 going to drop?

Is there an ETA as to when 5.4.1 is going to drop?  I have a brand new 300D that I am waiting to put into production as soon as 5.4.1 is ready.

2 Solutions
FGTuser
New Contributor III

by end of next week (April 15)


kallbrandt

That number of clashes is nothing to worry about, I'd say. On the LB VDOM I mentioned earlier, the log shows six-digit numbers of clashes. The clash counter is reset at reboot, by the way, and is not related to the current number of sessions; it is just an ongoing counter.

 

To my knowledge, all restarts of applications with restart option 11 (segmentation fault) in FortiOS are logged as crashes. That doesn't have to mean anything bad per se. The OS recycles processes all the time using option 15 (graceful restart); when that doesn't work, it moves on to a restart with option 11, which generates a log entry in the syslog. The recycling goes on constantly, buffers need to be cleared, etc. However, constant restarting of the same application can also point to various problems: memory leaks, buffer overflows, etc.
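
For reference, option 11 and option 15 are just the standard POSIX signal numbers (SIGSEGV and SIGTERM). A quick illustration on an ordinary Linux shell, nothing FortiOS-specific:

sleep 300 &            # start a dummy background process
kill -15 $!            # signal 15 (SIGTERM): polite request to shut down gracefully
sleep 300 &
kill -11 $!            # signal 11 (SIGSEGV): the process dies as if it had segfaulted

The practical difference is that a signal-15 shutdown is handled cleanly, while a signal-11 termination looks exactly like a segmentation fault, which is why those events end up counted as crashes.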

 

I checked your log, but I can't see anything other than the PID and some weird ASCII characters as the application name. It does look kind of odd.

 

Check your logs and keep track of whether the application crash log entries correlate with odd behaviour in the firewall: sudden reboots, functions and features stopping or not working.

 

What does "diagnose debug crashlog read" say?

 

Also, run "diagnose sys top" a few times during the day. Do you have processes in Z (zombie) or D (uninterruptible sleep) state?
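
(For anyone not used to reading process states: Z is a zombie that has exited but not been reaped, and D is uninterruptible sleep, usually a process stuck waiting on I/O. "diagnose sys top" shows the state in its own column; the rough equivalent on an ordinary Linux box would be:

ps -eo pid,stat,comm | awk 'NR==1 || $2 ~ /^[DZ]/'   # list only processes in D or Z state

A process that stays in D for long periods, or a growing pile of Z entries, is worth correlating with the crash-log timestamps.)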

 

Richie

NSE7
rcarreras
New Contributor III

Upgrade 1 ( Fortigate 60D 5.4.0 --> 5.4.1 ) --> OK

Upgrade 2 ( Fortigate 60D 5.2.7 + FortiAP's --> 5.4.1 ) --> OK

 

 

SecurityPlus

Upgrade 1 ( FortiWiFi 60D 5.4.0 --> 5.4.1 ) --> OK

omega

Can you please post your partition layout after a successful upgrade?

fnsysctl df -h

 

Thanks

abelio
Valued Contributor

Hello,

just curious: 5.4.1 is available for 80C units, but the next "C" series model supported is the 600C.

What is so special about the 80C only (and not about the 110C, for instance)?

 

from 5.4.1 release notes:

FortiOS 5.4.1 supports the following models: FortiGate FG-30D, FG-30E, FG-30D-POE, FG-50E, FG-51E, FG-60D, FG-60D-POE, FG-70D, FG-70D-POE, FG-80C, FG-80CM, FG-80D, FG-90D, FGT-90D-POE, FG-92D, FG-94D-POE, FG-98D-POE, FG-100D, FG-140D, FG-140D-POE, FG-200D, FG-200D-POE, FG-240D, FG-240D-POE, FG-280D-POE, FG-300D, FG-400D, FG-500D, FG-600C, FG-600D, FG-800C, FG-800D, FG-900D, FG-1000C, FG-1000D, FG-1200D, FG-1500D, FG-1500DT, FG-3000D, FG-3100D, FG-3200D, FG-3240C, FG-3600C, FG-3700D, FG-3700DX, FG-3810D, FG-3815D, FG-5001C, FG-5001D.

regards / Abel
seth57
New Contributor

Hello

 

FGT92D upgraded -> no problem during the upgrade.

Problem found: SNMP interface names changed.

Unable to identify them because the order does not match the up/down status in the GUI.

 

[root@centreon-svr plugins]# ./check_centreon_snmp_traffic -H x.x.x.x -C XXXXX -v 2c -s
Interface 1 ::  :: up
Interface 2 ::  :: up
Interface 3 ::  :: down
Interface 4 ::  :: up
Interface 5 ::  :: up
Interface 6 ::  :: up
Interface 7 ::  :: up
Interface 8 ::  :: up
Interface 9 ::  :: up
Interface 10 ::  :: up
Interface 11 ::  :: up
Interface 12 ::  :: down
Interface 13 ::  :: down
Interface 14 ::  :: down
Interface 15 ::  :: down
Interface 16 ::  :: down
Interface 17 ::  :: down
Interface 18 ::  :: down
Interface 19 ::  :: down
Interface 20 ::  :: down
Interface 21 ::  :: up
Interface 22 ::  :: up
Interface 23 ::  :: up

 

EDIT: solved by querying the SNMP index instead of the interface name.
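
For anyone who hits the same thing: assuming the FortiGate exposes the standard IF-MIB (it did in my case), you can cross-check index against name with a plain snmpwalk before repointing the plugin:

snmpwalk -v 2c -c XXXXX x.x.x.x IF-MIB::ifDescr        # lists each ifIndex with its interface name
snmpwalk -v 2c -c XXXXX x.x.x.x IF-MIB::ifOperStatus   # up/down status per ifIndex

Polling by index is what fixed it here; the names reported after the upgrade no longer lined up with the GUI.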

NSE6
AragoN
New Contributor

Upgrade 1 ( Fortigate 800C 5.4.0 --> 5.4.1 ) --> OK

qxu_FTNT

The root cause of the FGT-60D issue is that the second (flash) disk was not formatted, which somehow caused /var/log to be mounted onto one of the first two partitions, which should only be used for booting.

 

To avoid this completely in future upgrades, please follow the steps below:

 

1. Format the boot device from the BIOS (if you have already encountered the problem). This puts the first flash into a clean state.

 

2. Burn the image.

 

3. Once the unit has booted up, run "exe disk format 16". This will format the second flash disk.
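
Roughly, the CLI sequence looks like this (the "16" reference number comes from "execute disk list" and may differ on other units, so treat this as a sketch rather than exact commands for every model):

execute disk list        # find the reference number of the second, unformatted flash disk
execute disk format 16   # format it; the unit reboots
fnsysctl df -h           # after the reboot, /var/log should show up on its own partition

Formatting wipes anything stored on that disk, so pull off any logs you want to keep first.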

 

omega
New Contributor

On a standalone unit this again gives me

 

/dev/sda1                247.9M      37.2M     197.8M  16% /data
/dev/sda3                  3.2G      70.5M       2.9G   2% /data2
/dev/sdb1                  7.4G     145.1M       6.9G   2% /var/log

Should there be an sda2?

 

How would that work in an HA cluster? Will the cluster fail due to a hard-disk status mismatch?

The unit reboots when format is executed.

 

 And it still shows

Log hard disk: Not available

Is that correct for a 60D without a real disk?

JohnLuo_FTNT

Hi Omega,

 

Either sda1+sda3 or sda2+sda3, not both. HA would have no problem even if the two units differ on this.

 

And, "Log hard disk: Not available" is correct for 60D on v5.4.1

Holy

Upgrade 60D > OK

 

fnsysctl df -h
Filesystem                 Size       Used  Available Use% Mounted on
rootfs                   885.4M      88.2M     797.2M  10% /
tmpfs                    885.4M      88.2M     797.2M  10% /
none                       1.3G      34.8M       1.3G   2% /tmp
none                       1.3G      64.0K       1.3G   0% /dev/shm
none                       1.3G      19.5M       1.3G   1% /dev/cmdb
/dev/sda1                247.9M      47.8M     187.2M  20% /data
/dev/sda3                 11.8G       2.2G       8.9G  20% /data2
/dev/sda3                 11.8G       2.2G       8.9G  20% /var/log
/dev/sda3                 11.8G       2.2G       8.9G  20% /var/storage/Internal-0-4821026924CF4E90
 
 

NSE 8
NSE 1 - 7