plindgren
New Contributor

FortiGate process "wad" consuming 62% of memory.

I get the "CFG_CMDBAPI_ERR" error when I try to make changes on my FortiGate, which is otherwise operational. I tried the command "diag test application ipsmonitor 99" but it did not work. So I used the command "diag sys top 1" to see what was hogging all that memory, and I found a process named "wad" that uses 62% of the memory. However, this machine is in production and I don't know what the process does, and I can't seem to find it documented anywhere. Can I kill it? What does it do? Is there a process reference for FortiOS out there somewhere?
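(For anyone landing here from a search: a minimal sketch of how to narrow this down from the CLI, using only the command already mentioned in this post. The process values below are illustrative, not taken from the original poster's unit.)

# diagnose sys top 1
wad          190     S      2.7    62.0
newcli      3967     R      0.5     0.2
ipsengine    214     S <    0.3     5.1

Columns are: process name, PID, state, CPU%, MEM%. Press 'm' to sort by memory, 'c' to sort by CPU, 'q' to quit.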
" This solution needs to be idiot proof!" " Why? Do you plan on hiring idiots?"
" This solution needs to be idiot proof!" " Why? Do you plan on hiring idiots?"
james_heyworth

OK, we could more than likely revert to 5.6.5 to remediate temporarily. However, I'd prefer steps forward rather than back.

 

wolfgang_cernohorsky
New Contributor

We had the same problem with version 5.6.6 on our FortiGate 600Ds. The good news is that, as of a few days ago, release 5.6.7 is available, and according to the release notes, "High memory usage on WAD" should be fixed.

 

Hope this helps,

Wolfgang

Toshi_Esumi

We're experiencing the same on one of our 1500Ds, which has been running 5.6.6 for about a month. Before that it was running 5.4.8. We're going to kill wad tonight and then schedule an upgrade to 5.6.7 soon.

Toshi_Esumi
SuperUser

Does anyone know what triggers this wad memory usage escalation, which was fixed with 5.6.7?

When I checked it before killing the processes (fnsysctl killall wad) tonight, I found memory usage had come back down to a normal level of about 50% two hours earlier. Until then it had kept growing steadily for the last two weeks, based on our memory usage monitoring tool. I asked some coworkers if they did something to it, but nobody seems to have done anything. It's very strange.
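(For reference, the kill-and-verify sequence described above looks roughly like this; restarting wad through its test menu, as described further down in this thread, is the gentler option.)

# get system performance status    <- note the memory usage before the kill
# fnsysctl killall wad             <- wad should be respawned automatically
# diagnose sys top 1               <- confirm the wad MEM% has dropped back down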

sdlengua

Upgraded my 600Cs to 5.6.7 last week. Same issue. Memory was up to 82% this morning and I had to kill several WAD processes. Apparently not fixed in 5.6.7, just FYI. Calling in a support ticket to see what's going on. Very frustrating. We are performing full SSL inspection, proxy-based.

Toshi_Esumi

Thank you for the info. Please open a ticket with TAC to report that it's still not fixed. We're skipping 5.6.7 and possibly even the upcoming 5.6.8, since this wad memory problem hasn't come back so far and another issue we've been waiting on a fix for won't be in 5.6.8.

AtiT
Valued Contributor

Hello, we upgraded our FGT-1500D A-P cluster to FortiOS 5.6.7 seven days ago. At this moment the memory is OK, but usage is still increasing a little bit. We will see.

 

I would recommend not killing the wad process; it is better to restart it. Always restart processes when there is a command for that.

Check the overall CPU and memory status:

# diagnose sys top-summary

CPU [||||||||||||                          ]  30.7%
Mem [||||||||||||||||||||||||||||          ]  71.0%  11460M/16064M
Processes: 20 (running=4 sleeping=239)

  PID      RSS  CPU% ^MEM%    FDS      TIME+  NAME
* 262       6G  60.6  38.3  41675   48:06.38  wad [x14]
  247       1G  54.0   8.2    985   17:53.21  ipsmonitor [x12]
  287       1G   5.0   7.1    930   21:00.71  cw_acd
  256     645M  34.9   4.0    263   59:32.15  sslvpnd [x12]
  272     349M   2.5   2.2     20   51:14.84  urlfilter
  241     213M   4.8   1.3     61   19:52.33  miglogd [x7]
  244     147M   1.7   0.9     24   00:44.52  httpsd [x5]
  .....

 

Use the "m" key on your keyboard to sort the process groups according to memory usage. Use "q" key on your keyboard to quit the diagnose command.

 

Show the running processes:

# diagnose sys top

Use the "m" key to sort the processes according to memory usage and the "q" key to quit the diagnose command.

 

Locate your wad process and its process ID; let's say for now:

wad    351    S    2.7    9.0

Here 351 is the process ID.

 

Now reset and enable debugging:

# diagnose debug reset
# diagnose debug enable

List all your wad processes and locate your process ID (pid):

# diagnose test application wad 1000
Process [0]: WAD manager type=manager(0) pid=262 diagnosis=yes.
Process [1]: type=dispatcher(1) index=0 pid=345 state=running
             diagnosis=no debug=enable valgrind=unsupported/disabled
Process [2]: type=wanopt(2) index=0 pid=346 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [3]: type=worker(3) index=0 pid=347 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [4]: type=worker(3) index=1 pid=348 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [5]: type=worker(3) index=2 pid=349 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [6]: type=worker(3) index=3 pid=350 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [7]: type=worker(3) index=4 pid=351 state=running   <=============================
             diagnosis=no debug=enable valgrind=supported/disabled
Process [8]: type=worker(3) index=5 pid=352 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [9]: type=worker(3) index=6 pid=353 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [10]: type=worker(3) index=7 pid=354 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [11]: type=worker(3) index=8 pid=355 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [12]: type=worker(3) index=9 pid=356 state=running
             diagnosis=no debug=enable valgrind=supported/disabled
Process [13]: type=informer(4) index=0 pid=344 state=running
             diagnosis=no debug=enable valgrind=unsupported/disabled

Now access your wad process (enter the process menu). The 4-digit argument is built like this:
2xxx - the wad process argument always starts with 2.
x3xx - the wad process type, i.e. the number shown in brackets () (dispatcher, worker, informer, etc.).
xx04 - the index number of the process, as two digits. If the index is one digit, put a 0 before the index.

 

So it looks like this in our case, for the wad process with pid=351:

# diagnose test application wad 2304
Set diagnosis process: type=worker index=4 pid=351

 

Now you are working on your wad process and you can list the available options by issuing:

# diagnose test application wad

WAD process 351 test usage:
 1: display process status
 2: display total memory usage.
 99: restart all WAD processes
 1000: List all WAD processes.
 1001: dispaly debug level name and values
 1002: dispaly status of WANOpt storages
..... etc. There are a lot of them.

 

What you need is to restart the process, so use 99:

# diagnose test application wad 99

 

Reset and disable your debug:

# diagnose debug disable
# diagnose debug reset

 

Now you can check the CPU and memory again with the command:

# diagnose sys top-summary

Compare the results with the output at the beginning of this post.

More work but better than just killing processes.
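(To recap the whole sequence from this post in one place, using the example pid/index from above; 2304 = 2 + worker type 3 + index 04, so adjust the last two digits to your own worker's index.)

# diagnose sys top-summary            <- record the wad RSS / MEM% before
# diagnose debug reset
# diagnose debug enable
# diagnose test application wad 1000  <- list all wad processes, note type and index
# diagnose test application wad 2304  <- select the worker with index 4 (pid 351 here)
# diagnose test application wad 99    <- restart all WAD processes
# diagnose debug disable
# diagnose debug reset
# diagnose sys top-summary            <- compare with the first output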

AtiT

vdp
New Contributor

Running 6.2 firmware, and experiencing the same wad high mem usage issue (conserve mode activated).

I guess Fortinet didn't fix this issue.

The only answer I got from Support is "buy a bigger device".
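(If the unit is actually hitting conserve mode, it may be worth checking the current state and thresholds before escalating; as far as I know this diagnostic is available on recent 5.x and 6.x builds.)

# diagnose hardware sysinfo conserve   <- shows whether conserve mode is on and the memory thresholds
# diagnose sys top 1                   <- press 'm' to see which daemons are holding the memory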

 

JaapHoetmer
New Contributor III

Hi there. I am experiencing the same issue on a 100E live-standby cluster: 89% memory usage, with the WAD process consuming 44% of total memory. Running v6.2.0 build0866 (GA).

 

Two WAD processes were consuming 16.5% of memory each, and two additional WAD processes 7% and 4.7% of memory. I have restarted these processes using the instructions provided above, and that has fixed the issue for now. But it will probably return, as the same situation occurred yesterday morning.

 

 

Kind regards, Jaap
figge

The release notes for 5.6.8 state that this WAD memory issue is resolved.

Can anyone confirm this? Have you upgraded to 5.6.8 and gotten rid of the bug?
