Hosemacht
Contributor II

FortiOS 6.0.5 is out!

Many bug fixes, but no TLS 1.3 mentioned.

 

https://docs.fortinet.com.../fortios-release-notes

sudo apt-get-rekt

1 Solution
seadave

I understand you, but my German is very bad!

 

So I just got off the phone with the TAC.  Did lots of testing.  We thought the issue was with our FAZ, but after some more digging we decided to look at the miglogd process.

 

If you log into the CLI and run:

 

diag sys top-summary

 

Look for the miglogd process and note its process ID (PID):

 

[style="background-color: #ffff00;"]197 [/style]    320M    0.0  2.0    73  09:19.30  miglogd [x5]

 

 Press "q" to quit the monitoring of sys top.

 

Now that you have the PID of the miglogd process, enter the following to kill and restart it:

 

diag sys kill 11 197
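(Side note: if you just want to bounce the daemon and don't need it to dump a backtrace, I believe the logging daemon can also be restarted without sending a raw signal; assuming your build supports it, something like:

diag test application miglogd 99

Signal 11 as above deliberately segfaults the process, which is why the backtrace shows up in the crashlog below.)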

 

Note that in my case the PID was 197, as highlighted above.  Once we did this, we saw the following in the crashlog:

 

diag debug crashlog read

 

509: 2019-06-05 13:36:40 <00197> application miglogd
510: 2019-06-05 13:36:40 <00197> *** signal 11 (Segmentation fault) received ***
511: 2019-06-05 13:36:40 <00197> Register dump:
512: 2019-06-05 13:36:40 <00197> RAX: fffffffffffffffc RBX: 0000000000ed19e0
513: 2019-06-05 13:36:40 <00197> RCX: ffffffffffffffff RDX: 0000000000000400
514: 2019-06-05 13:36:40 <00197> R08: 0000000000000000 R09: 0000000000000008
515: 2019-06-05 13:36:40 <00197> R10: 00000000000006d6 R11: 0000000000000246
516: 2019-06-05 13:36:40 <00197> R12: 00007fff4adca950 R13: 0000000000000000
517: 2019-06-05 13:36:40 <00197> R14: 00007fff4adca950 R15: 0000000015477b10
518: 2019-06-05 13:36:40 <00197> RSI: 00000000151a9550 RDI: 000000000000000a
519: 2019-06-05 13:36:40 <00197> RBP: 00007fff4adca5f0 RSP: 00007fff4adca5b8
520: 2019-06-05 13:36:40 <00197> RIP: 00007f1ff42bbba0 EFLAGS: 0000000000000246
521: 2019-06-05 13:36:40 <00197> CS: 0033 FS: 0000 GS: 0000
522: 2019-06-05 13:36:40 <00197> Trap: 0000000000000000 Error: 0000000000000000
523: 2019-06-05 13:36:40 <00197> OldMask: 0000000000000000
524: 2019-06-05 13:36:40 <00197> CR2: 0000000000000000
525: 2019-06-05 13:36:40 <00197> stack: 0x7fff4adca5b8 - 0x7fff4adcb800
526: 2019-06-05 13:36:40 <00197> Backtrace:
527: 2019-06-05 13:36:40 <00197> [0x7f1ff42bbba0] => /usr/lib/x86_64-linux-gnu/libc.so.6
528: 2019-06-05 13:36:40 (epoll_pwait+0x00000020) liboffset 000f4ba0
529: 2019-06-05 13:36:40 <00197> [0x01dfb8d1] => /bin/miglogd
530: 2019-06-05 13:36:40 <00197> [0x00ed25fa] => /bin/miglogd
531: 2019-06-05 13:36:40 <00197> [0x0042ea44] => /bin/miglogd
532: 2019-06-05 13:36:40 <00197> [0x0043529f] => /bin/miglogd
533: 2019-06-05 13:36:40 <00197> [0x004321e8] => /bin/miglogd
534: 2019-06-05 13:36:40 <00197> [0x004326de] => /bin/miglogd
535: 2019-06-05 13:36:40 <00197> [0x00434584] => /bin/miglogd
536: 2019-06-05 13:36:40 <00197> [0x00434ee7] => /bin/miglogd
537: 2019-06-05 13:36:40 <00197> [0x7f1ff41e7eaa] => /usr/lib/x86_64-linux-gnu/libc.so.6
538: 2019-06-05 13:36:40 (__libc_start_main+0x000000ea) liboffset 00020eaa
539: 2019-06-05 13:36:40 <00197> [0x0042b7da] => /bin/miglogd
540: 2019-06-05 13:38:40 the killed daemon is /bin/miglogd: status=0x0
Crash log interval is 3600 seconds
miglogd crashed 1 times. The last crash was at 2019-06-05 13:36:40

 

After that, DNS log records started flowing and populating in FAZ again!  We are not sure of the cause or how to replicate it, but it appears to be resolved.  We will monitor to see if the logs keep flowing as they should.
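If it helps anyone verify the pipeline end to end, I believe you can also generate sample log entries from the CLI and then check that they arrive in FAZ:

diag log test

(That should spit out test records across several log categories; treat it as a quick sanity check rather than a full test.)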


23 REPLIES
ddskier

Any further feedback on 6.0.5?   Does the community feel that this is stable enough?  (SSLVPN, BGP, AV, etc.)

-DDSkier FCNSA, FCNSP FortiGate 400D, (2) 200D, (12) 100D, (2) 60D

James_G

Patched a couple of FGT50e units that were hitting conserve mode on 6.0.4; after 48 hours memory is still at 37% on the units, so it's looking good.

 

Will be scheduling patching of the rest of the estate from 6.0.4 to 6.0.5.
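For anyone else keeping an eye on conserve mode after the upgrade, memory can be spot-checked from the CLI; output varies a bit by model, so treat these as rough pointers:

get system performance status
diag hardware sysinfo memory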

streeb2021

We have hit an issue with matching on multiple RADIUS fortinet-groups returned from a FortiAuthenticator instance for SSL VPN users. Basically, 6.0.5 appears to accept only one group and ignore the rest. FTNT has reproduced it on their side and tied it to known bug 0554529, seen in 6.2.0 and fixed in 6.2.1.
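For context, this is roughly how the group matching is set up on our side; the server and group names below are placeholders rather than our real config:

config user radius
    edit "FAC-RADIUS"
        set server "10.0.0.10"
        set secret <shared-secret>
    next
end
config user group
    edit "vpn-admins"
        set member "FAC-RADIUS"
        config match
            edit 1
                set server-name "FAC-RADIUS"
                set group-name "VPN-Admins"
            next
        end
    next
    edit "vpn-users"
        set member "FAC-RADIUS"
        config match
            edit 1
                set server-name "FAC-RADIUS"
                set group-name "VPN-Users"
            next
        end
    next
end

The problem shows up when FortiAuthenticator returns more than one Fortinet-Group-Name value for the same user: only the first group seems to match in 6.0.5.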

 

tanr
Valued Contributor II

Hi streeb2021.

So the bug is only if a single user gets multiple fortinet-groups returned? 

Wanted to clarify as I'm planning to move us from 5.6.9 to 6.0.5 soon.

ddskier

tanr wrote:

Hi streeb2021.

So the bug is only if a single user gets multiple fortinet-groups returned? 

Wanted to clarify as I'm planning to move us from 5.6.9 to 6.0.5 soon.

I'm interested too, as 5.6.9 has a new vulnerability for which the only known resolution is to go to 6.0+.

https://fortiguard.com/psirt/FG-IR-19-034

-DDSkier FCNSA, FCNSP FortiGate 400D, (2) 200D, (12) 100D, (2) 60D

Rami
New Contributor

ddskier wrote:

tanr wrote:

Hi streeb2021.

So the bug is only if a single user gets multiple fortinet-groups returned? 

Wanted to clarify as I'm planning to move us from 5.6.9 to 6.0.5 soon.

I'm interested too, as 5.6.9 has a new vulnerability for which the only known resolution is to go to 6.0+.

https://fortiguard.com/psirt/FG-IR-19-034

How come I have no option to upgrade to 6.0.5 from 5.6.9? Only versions with invalid upgrade paths appear.

Is there something I am doing wrong?

tanr
Valued Contributor II

Are you checking the valid upgrade paths shown by the widget at https://support.fortinet.com/Download/FirmwareImages.aspx?

seadave
Contributor III

We moved from 6.0.3 to 6.0.5 on an A-P HA pair of 501Es.  We did this mainly because of the advisories for the SSL VPN vulnerabilities:

 

FortiOS system file leak through SSL VPN via specially crafted HTTP resource requests

https://fortiguard.com/psirt/FG-IR-18-384

 

Unauthenticated SSL VPN users password modification

https://fortiguard.com/psirt/FG-IR-18-389

 

We are using most features related to advanced routing and NGFW proxy mode, but not switch/wifi controller, Device ID, or spam filtering.  Two things I have seen:

 

1. My security rating went from +85 to -425. It seems mainly to be related to logging, which ties into our other issue below.

2. We were logging all DNS traffic via a DNS filter applied to traffic from our DNS servers. This worked fine in 6.0.3 but has stopped cold in 6.0.5. I have tried disabling/re-enabling the filter, but still no logs; FAZ shows the last event received right after the update. Using Splunk and a timechart I can clearly see a decrease in the volume and in specific types of logs. (A rough sketch of how the DNS logging is set up is at the end of this post.)

 

Release notes indicate there is a similar bug (412649: In NGFW policy mode, FortiGate does not create web filter logs),

 

but we still see web filter logs. We see a decrease in all logs, but mainly in these subtypes:

 

app-ctrl

dns-query

dns-response

 

I have opened ticket 3316524 with the TAC.
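For anyone who wants to compare, this is roughly how the DNS logging is wired up on our side; the profile name and policy ID are placeholders, not our exact config:

config dnsfilter profile
    edit "dns-log"
        set log-all-domain enable
    next
end
config firewall policy
    edit 42
        set utm-status enable
        set dnsfilter-profile "dns-log"
        set logtraffic all
    next
end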

Hosemacht

We moved our A-A cluster from 5.6.8 to 6.0.5 last week; no issues regarding DNS filter logs.

The only issue I saw was decreased FortiAP performance after we moved the APs to 6.0.5 as well.

The solution was to disable on-wire rogue AP scanning.
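Going from memory, the on-wire scan is part of the WIDS profile applied to the AP radios, so it should be something along these lines (profile name is a placeholder):

config wireless-controller wids-profile
    edit "default-wids"
        set rogue-scan disable
    next
end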

 

Maybe you mixed something up regarding NGFW policy and DNS filter: if you did not set your cluster to flow mode, you cannot create NGFW policies.

Anyway, this has been a known issue since 5.6, I guess, and it is still an issue even in 6.2.

 

 

sudo apt-get-rekt

seadave

Hmm.  That is interesting.  It was working fine for us in 6.0.3.  We had a very clean config, as we had to move from a 500D to a 501E: we upgraded our 500D from 5.6 to 6.0.3, then diffed and imported policy stanzas into the 501E before deploying in HA.  We monitor our logs in Splunk.  Before upgrading to 6.0.5 we were logging ~2 GB/day; now it is ~1 GB/day.  Will see what the TAC says, as I have sent them our config.

 

I don't understand this statement:

 

"Maybe you mixed something up regarding NGFW Policy and DNS Filter, if you did not set your Cluster in FLOW mode you cannot create NGFW Policies."

 

We do not use flow mode and have always used NGFW policies.  We are in A-P and not A-A mode, so perhaps that is the difference?
