Many bugfixes, but no TLS 1.3 mentioned.
https://docs.fortinet.com.../fortios-release-notes
sudo apt-get-rekt
I understand you, but my German is very bad!
So I just got off the phone with the TAC. Did lots of testing. We thought the issue was with our FAZ, but after doing some more checking we decided to check the miglogd process.
If you log into the CLI and run:
diag sys top-summary
look for the miglogd process and note its process ID (PID):
197    320M   0.0   2.0   73   09:19.30   miglogd [x5]
Press "q" to quit the monitoring of sys top.
Now that you have the PID of the miglogd process, enter the following to kill and restart it:
diag sys kill 11 197
Note in my case the PID was 197 as highlighted above. Once we did this, we saw this in the crashlog:
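If you would rather bounce all of the miglogd workers (the [x5] shown above) instead of killing the parent PID, some people use the hidden fnsysctl wrapper for this. Treat the following as an assumption for your platform and firmware; the documented diag sys kill above is the safer route:

fnsysctl killall miglogd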
diag debug crashlog read
509: 2019-06-05 13:36:40 <00197> application miglogd
510: 2019-06-05 13:36:40 <00197> *** signal 11 (Segmentation fault) received ***
511: 2019-06-05 13:36:40 <00197> Register dump:
512: 2019-06-05 13:36:40 <00197> RAX: fffffffffffffffc RBX: 0000000000ed19e0
513: 2019-06-05 13:36:40 <00197> RCX: ffffffffffffffff RDX: 0000000000000400
514: 2019-06-05 13:36:40 <00197> R08: 0000000000000000 R09: 0000000000000008
515: 2019-06-05 13:36:40 <00197> R10: 00000000000006d6 R11: 0000000000000246
516: 2019-06-05 13:36:40 <00197> R12: 00007fff4adca950 R13: 0000000000000000
517: 2019-06-05 13:36:40 <00197> R14: 00007fff4adca950 R15: 0000000015477b10
518: 2019-06-05 13:36:40 <00197> RSI: 00000000151a9550 RDI: 000000000000000a
519: 2019-06-05 13:36:40 <00197> RBP: 00007fff4adca5f0 RSP: 00007fff4adca5b8
520: 2019-06-05 13:36:40 <00197> RIP: 00007f1ff42bbba0 EFLAGS: 0000000000000246
521: 2019-06-05 13:36:40 <00197> CS: 0033 FS: 0000 GS: 0000
522: 2019-06-05 13:36:40 <00197> Trap: 0000000000000000 Error: 0000000000000000
523: 2019-06-05 13:36:40 <00197> OldMask: 0000000000000000
524: 2019-06-05 13:36:40 <00197> CR2: 0000000000000000
525: 2019-06-05 13:36:40 <00197> stack: 0x7fff4adca5b8 - 0x7fff4adcb800
526: 2019-06-05 13:36:40 <00197> Backtrace:
527: 2019-06-05 13:36:40 <00197> [0x7f1ff42bbba0] => /usr/lib/x86_64-linux-gnu/libc.so.6
528: 2019-06-05 13:36:40 (epoll_pwait+0x00000020) liboffset 000f4ba0
529: 2019-06-05 13:36:40 <00197> [0x01dfb8d1] => /bin/miglogd
530: 2019-06-05 13:36:40 <00197> [0x00ed25fa] => /bin/miglogd
531: 2019-06-05 13:36:40 <00197> [0x0042ea44] => /bin/miglogd
532: 2019-06-05 13:36:40 <00197> [0x0043529f] => /bin/miglogd
533: 2019-06-05 13:36:40 <00197> [0x004321e8] => /bin/miglogd
534: 2019-06-05 13:36:40 <00197> [0x004326de] => /bin/miglogd
535: 2019-06-05 13:36:40 <00197> [0x00434584] => /bin/miglogd
536: 2019-06-05 13:36:40 <00197> [0x00434ee7] => /bin/miglogd
537: 2019-06-05 13:36:40 <00197> [0x7f1ff41e7eaa] => /usr/lib/x86_64-linux-gnu/libc.so.6
538: 2019-06-05 13:36:40 (__libc_start_main+0x000000ea) liboffset 00020eaa
539: 2019-06-05 13:36:40 <00197> [0x0042b7da] => /bin/miglogd
540: 2019-06-05 13:38:40 the killed daemon is /bin/miglogd: status=0x0
Crash log interval is 3600 seconds
miglogd crashed 1 times. The last crash was at 2019-06-05 13:36:40
After that, DNS records started flowing and populating in FAZ again! We are not sure of the cause or how to replicate it, but it appears to be resolved. We will monitor to see if the logs keep flowing as they should now.
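If you want to confirm logs are really reaching the FortiAnalyzer again rather than just watching the GUI, two standard checks (assuming FAZ logging is configured on the unit) are:

execute log fortianalyzer test-connectivity
diag log test

test-connectivity should report the FAZ as registered and connected, and diag log test generates dummy log entries of each type that should then show up on the FortiAnalyzer side shortly afterwards.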
Hey there,
the bug about the NGFW policy web filter logs you mentioned earlier only applies to NGFW policy mode.
So if you go to System -> Settings and choose flow-based inspection mode, you can then select between profile-based and policy-based.
In policy-based mode you can create so-called "NGFW" policies, where you can also match specific applications and URL categories alongside the usual options.
If you use such policies, no web filter logs are generated, and that is what this bug is about.
If your FortiGate is in proxy mode, you should not be affected by this bug.
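If you want to check or change this from the CLI instead of the GUI, the per-VDOM settings on 6.0 look roughly like this (a sketch only; on 5.6 the inspection mode is still a global setting, so confirm against your firmware):

config system settings
    set inspection-mode flow
    set ngfw-mode policy-based
end

Running get | grep -i mode inside that same config context shows which combination the VDOM is currently using.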
Sorry for my English, but it's early in the morning and our coffee maker is broken.
Good info to have.
Was there anything obvious (besides not getting DNS records) that led you to miglogd?
Any new deployment we are doing is going out on 6.0.x; the existing platforms are still on 5.6 for now - we will probably look at upgrading within the next 6 months.
We use it and it works well for us. A lot of 200Es and under, some 600Ds, and 1500Ds, with about 20 Gbps of aggregate traffic.
That being said, YMMV and please talk to your SE about its stability for the features you use.
Fortinet designed and built FortiOS v6.0.1 to deliver the advanced protection and performance that standalone products simply cannot match. The services work together as a system to provide better visibility and mitigation of the latest network and application threats, stopping attacks before damage can occur.
Supported software:
- Fortinet FortiGate-VMX v6.0.1
- VMware vSphere v6.0/6.5/6.7
- VMware NSX v6.3.x/6.4.0/6.4.1
- NetX library version 6.4.0-7564187
For more information on the additional supported software, see the VMware Compatibility Guide.