Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
gcraenen
New Contributor

Several problems: high memory and CPU usage blocking WAN connection after upgrade to 6.2

Hi,

 

After upgrading from 6.0.4 to 6.2 I have problems with the WAN connection dropping out. I'm getting this message in the FortiOS GUI:

 

Conserve mode activated due to high memory usage

 

I have tried to downgrade to 6.0.4 but can't: the error message says it failed because it cannot download the file from FortiGuard. Help.

5 Solutions
SMabille
Contributor

Hi,

 

Download the image from the Fortinet support website and upload/apply it from your browser.
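
If the browser upload also fails, restoring the image over TFTP from the CLI is another option. A minimal sketch, assuming a TFTP server reachable from the FortiGate (the file name and server IP are placeholders):

execute restore image tftp <image-file.out> <tftp-server-ip>

The unit reboots into the restored image once the transfer completes.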

 

Best regards,

Stephane


Dave_Hall
Honored Contributor

This is by no means a fix, but a workaround is to have the FGT perform a daily reboot.

 

config system global
    set daily-restart enable
    set restart-time <time value>
end

NSE4/FMG-VM64/FortiAnalyzer-VM/6.0 (FWF30E/FW92D/FGT200D/FGT101E/FGT81E)/ FAP220B/221C


Adrian_Lewis

Vinicius wrote:

Do you know when will the 6.2.2 be released?

No specific dates are available, only target dates, which can slip if any issues are identified. The current target, though, from what I hear, is this week.


ISOffice

Hi all,

 

We upgraded our 100D appliances to 6.2.2 a week ago and noticed a slight improvement in GUI performance when viewing logs in Log & Report. However, when filters were applied the CPU once again spiked to 90+% with multiple instances of the 'log_se' process running.

I have an ongoing support call logged with Fortinet and their TAC Engineer (cheers Kevin!!) suggested that I fail over to the 'slave' appliance to see if the issue could be replicated there. To my surprise, it wasn't. The GUI was responsive, I could apply filters when viewing logs, and the CPU didn't spike. The only difference between the appliances was that the 'master' appliance had used 75% of its disk for log storage and the 'slave' appliance only 2% (we are running active-passive). When viewing logs (with filters applied) on the 'slave' appliance the 'log_se' process didn't even appear when I ran 'diag sys top'.

Long story short, on failing back to the 'master' appliance I deleted all logs on the local disk (execute log delete-all) and can now view logs, with multiple filters, quite easily and without overly stressing the CPU. We syslog traffic logs in real-time to a third party product, so it isn't critical for us to retain logs on the appliance's hard disk for extended periods of time.

Querying local disk logs, particularly when there are many of them, possibly results in high CPU usage due to the heavy I/O required to do so.
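
For anyone who wants to check whether local disk logs are the culprit on their own unit, the rough sequence we followed was simply the two commands above (clearing logs is irreversible, so make sure they are shipped off-box first):

diag sys top (watch for the 'log_se' process appearing while a filtered log view is open in the GUI)
execute log delete-all (removes all logs stored on the local disk)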

Whilst this may not be relevant to many others in this discussion, I thought I would share our experience anyway.

 

Best regards,

 

John P


tanr
Valued Contributor II

@gcraenen, did you open a support ticket with TAC?  I know they've fixed a number of bugs between 6.2.0 and 6.2.3, some specific to ipsengine, but if your specific issue and repro case hasn't been reported to them then it's unlikely to have been fixed.

 

Not saying that 6.2.3 is stable enough, though!  We usually wait till the .4 releases to start testing them for possible production use.  I'm reasonably hopeful.


79 Replies
Break16
New Contributor

Still no fix? This happened again today! The bug was reported over 4 weeks ago!!! What bad service.

andrewbailey

Hello,

 

If it's any help, I've raised a ticket and had confirmation it is a known issue due to be resolved in 6.2.1 (as others have reported).

 

The release date of 6.2.1 (I'm told) is late May/early June, so perhaps a couple of weeks away.

 

In the meantime, as rkhair suggested, the workaround "diagnose test application ipsmonitor 99" does seem to work well, and support stated it is a valid workaround.

 

I have set up an "automation stitch" to run this command hourly (which seems OK for my 60E; you may prefer a different interval) and that seems to be working well. It does spike CPU usage, so it perhaps needs some thought/review before using it in a production network.

 

Essentially the "automation stitch" just executes that command as a script regularly to keep the memory issues under control.
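
For reference, a rough sketch of what such a stitch looks like in the 6.2 CLI (the object names are placeholders and exact option names may vary between builds, so check them against your own unit):

config system automation-trigger
    edit "ips-restart-trigger"
        set trigger-type scheduled
        set trigger-frequency hourly
    next
end
config system automation-action
    edit "ips-restart-action"
        set action-type cli-script
        set script "diagnose test application ipsmonitor 99"
    next
end
config system automation-stitch
    edit "ips-restart-hourly"
        set status enable
        set trigger "ips-restart-trigger"
        set action "ips-restart-action"
    next
end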

 

That might be a useful way to keep things under control until the 6.2.1 release is available.

 

Hope that's useful.

 

Kind Regards,

 

 

Andy.

 

 

 

thuynh_FTNT

Thanks Andy/rkhair. That is very helpful. Hi Break16, sorry about the inconvenience. Like Andy said, we identified the root cause and added a fix as soon as it was reported. However, you will need to upgrade the firmware to FortiOS 6.2.1 (scheduled to release soon) to pick up the fix. For now please use the workaround per Andy's suggestion.

jonoarm

Yep, same here. Gone back to 6.0.5.

goodj
New Contributor

This happened to me today. Updated this past week from 5.6 to 6.2 on a 1500D HA cluster.

anujdalal
New Contributor

Hi,

 

Having the same issue on Azure FGVM02 (memory leak?). I have 2 running in active/active HA. I removed one of them from the Azure Load Balancer back-end pool ("cluster") at 64% memory usage. Even with close to no traffic going through it, the memory usage stayed at 64% constantly. The usage gradually climbs when the ipsengine is in use. diagnose sys top shows ipsengine using lots of memory and not releasing it.
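
A rough way to watch this yourself (the 5-second refresh and 20-line count are just example values):

diagnose sys top 5 20

The right-hand columns show CPU and memory per process; ipsengine entries whose memory keeps growing and never shrinks point to the leak. Press q to quit.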

 

As for downgrading the firmware, I get the same error message, but downloading the image from https://support.fortinet.com/Download/VMImages.aspx seems to do the trick.

 

Thanks.

toren

Hi,

 

Have the same problem on a 61E. I restart the FW every 2-3 days as it goes into "Conserve mode".

 

Just had a call with support and was told to change the inspection mode on web-filter-enabled security policies from flow-based to proxy-based. That's a workaround until FortiOS 6.2.1 is released, which according to support will happen in early July.
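
In 6.2 the inspection mode is set per firewall policy, so the change support described looks roughly like this (the policy ID is a placeholder; verify the option on your own build):

config firewall policy
    edit <policy-id>
        set inspection-mode proxy
    next
end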

 

Also, diagnose test application ipsmonitor 99 can be used instead of a restart to drop memory consumption.

mattf
New Contributor

I'm going to post this here in the hope that it helps others while we not so patiently wait for a fix.

 

If none of the above has helped you, work out where the memory leak is on your own FortiGates by using the following command: get sys perf top. On our firewalls, it's actually the WAD process that has the memory leak: there are 4 consecutive wad processes amounting to 40% of the total memory usage. This is determined by looking at the right-most column in the results.

 

Press ctrl + c to stop the "sys perf" report.

 

Use diagnose test application wad 99 to restart the process that is causing your memory leak. *wad can be interchanged with any other process name responsible for the memory usage.
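
Putting that together, the rough sequence is (wad is just our case; substitute whichever process tops the memory column on your unit):

get sys perf top (right-most column is memory usage; Ctrl+C to stop)
diagnose test application wad 99 (restarts the wad worker processes and frees the leaked memory)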

marlonjohn
New Contributor

Happening to me also after going from 6.0 to 6.2 on a FortiGate 60E.

lladereche
New Contributor

I've created this auto-script to restart ipsmonitor every 6 hours:

 

config system auto-script
    edit "memoryscript"
        set interval 21600
        set repeat 0
        set start auto
        set script "diagnose test application ipsmonitor 99"
    next
end

 

Thanks!
