Support Forum
The Forums are a place to find answers on a range of Fortinet products from peers and product experts.
jmlux
New Contributor III

Session timeout/TTL expiration counter not updated?

Hi,

 

"diag sys session list" shows this:

 

 

session info: proto=6 proto_state=01 duration=722 expire=28077 timeout=28800 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=3
origin-shaper=
reply-shaper=
per_ip_shaper=
ha_id=0 policy_dir=0 tunnel=/ vlan_cos=0/7
state=log may_dirty npu synced none log-start
statistic(bytes/packets/allow_err): org=1031/6/1 reply=659/6/1 tuples=2
orgin->sink: org pre->post, reply pre->post dev=31->41/41->31 gwy=192.168.x.x/192.168.y.y
hook=pre dir=org act=noop 172.x.x.x:61697->192.x.x.x:1521(0.0.0.0:0)
hook=post dir=reply act=noop 192.x.x.x:1521->172.x.x.x:61697(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
misc=0 policy_id=1 auth_info=0 chk_client_info=0 vd=4
serial=00863f17 tos=ff/ff ips_view=0 app_list=0 app=0
dd_type=0 dd_mode=0
npu_state=0x003000
npu info: flag=0x81/0x81, offload=8/8, ips_offload=0/0, epid=128/129, ipid=129/128, vlan=0x801e/0x801c
vlifid=129/128, vtag_in=0x001e/0x001c in_npu=1/1, out_npu=1/1, fwd_en=0/0

After that I perform some activity. Using Wireshark, I can clearly see traffic on the corresponding source/destination IP and port (PSH/ACK, etc.).

However the session expiry timer does not seem to be updated:

 


session info: proto=6 proto_state=01 duration=895 expire=27904 timeout=28800 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=3
origin-shaper=
reply-shaper=
per_ip_shaper=
ha_id=0 policy_dir=0 tunnel=/ vlan_cos=0/7
state=log may_dirty npu synced none log-start
statistic(bytes/packets/allow_err): org=1031/6/1 reply=659/6/1 tuples=2
orgin->sink: org pre->post, reply pre->post dev=31->41/41->31 gwy=192.168.x.x/192.168.y.y
hook=pre dir=org act=noop 172.x.x.x:61697->192.x.x.x:1521(0.0.0.0:0)
hook=post dir=reply act=noop 192.x.x.x:1521->172.x.x.x:61697(0.0.0.0:0)
pos/(before,after) 0/(0,0), 0/(0,0)
misc=0 policy_id=1 auth_info=0 chk_client_info=0 vd=4
serial=00863f17 tos=ff/ff ips_view=0 app_list=0 app=0
dd_type=0 dd_mode=0
npu_state=0x003000
npu info: flag=0x81/0x81, offload=8/8, ips_offload=0/0, epid=128/129, ipid=129/128, vlan=0x801e/0x801c
vlifid=129/128, vtag_in=0x001e/0x001c in_npu=1/1, out_npu=1/1, fwd_en=0/0

 

Why can that be?

 

Bye,

Marki

 

1 Solution
ede_pfau
SuperUser

But the FGT doesn't seem to count any traffic bytes between both snapshots. Could be a misleading status because of NP-offloading.

Does the session really expire then (you could test that after setting a lower value for the session TTL)?
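For testing with a lower TTL, one option would be the global session-ttl default (a sketch; the 300-second value is illustrative, and a per-policy `set session-ttl` under `config firewall policy` would be an alternative if only one rule should be affected):

```
config system session-ttl
    set default 300
end
```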


Ede

"Kernel panic: Aiee, killing interrupt handler!"

14 REPLIES
jmlux
New Contributor III

I tried with a telnet session with the TTL set to 300 seconds. The counter does not seem to be reset if you generate traffic right after the connection has been established. Strange. Or there is some lag in the update, but I don't think so, because the counter seems to be either updated immediately or never, just not during the first 20 seconds or so.

 

The connection in question had indeed been offloaded previously. It seems to behave similarly, but the initial period during which no update occurs seems to be much longer than 20 seconds. Its timeout is also higher (28800 s), though.

emnoc
Esteemed Contributor III

I don't see the confusion

 

The timeout in the 1st line is being misunderstood: that is the TTL value assigned (default 3600). The expire count is what counts down once the session is idle. The duration is the total duration of the traffic session until it closes or hits the session limits.

 

If you want to see that behavior and get a better understanding, craft a config sys session-ttl entry for something like port 22 with a ridiculously low value like 900 sec, then ssh into the FGT and do nothing in that ssh session; you will see the behavior clearly.
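That suggestion might look roughly like this in the CLI (a sketch based on the standard `config system session-ttl` port syntax; the entry index 1 is arbitrary):

```
config system session-ttl
    config port
        edit 1
            set protocol 6
            set start-port 22
            set end-port 22
            set timeout 900
        next
    end
end
```

Then ssh to the FortiGate, stay idle, and watch the expire value count down in `diag sys session list`.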

 

PCNSE 

NSE 

StrongSwan  

jmlux
New Contributor III

emnoc wrote:

I don't see the confusion

I don't know what you mean. As I said, I generated traffic between the two diag snapshots I showed, but the expire timer was not reset.

 

I also made some experiments and posted them 3 minutes before your post :) (https://forum.fortinet.com/FindPost/134022)

jmlux
New Contributor III

Only after about 5000 seconds on the connection with timeout=28800 was I able to reset the expire timer with traffic:

duration=5101 expire=27784
duration=5106 expire=28798

 

emnoc
Esteemed Contributor III

Are you sure the timer is just for the traffic you're monitoring and not something else? Did you use a diag sys session filter on the session traffic in question?
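Such a filter might look like this for the Oracle session above (a sketch; the filter keywords follow common FortiOS `diag sys session filter` usage, and port 1521 is taken from the session output in the question):

```
diag sys session filter clear
diag sys session filter proto 6
diag sys session filter dport 1521
diag sys session list
```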

 

I would still do what I suggested and build a session-ttl for a traffic profile that's hardly used, maybe ssh, then ssh into the firewall and monitor. Try to use something with no keepalives and monitor.

 

You should see the duration increase and the expire value count down unless traffic resets it (i.e. keepalives, pushed data, etc.).

 

Ken

PCNSE 

NSE 

StrongSwan  

emnoc
Esteemed Contributor III

To add to that: if you select very low-usage traffic, you can run a diag sniffer packet in a second window, look for traffic on that session, and see what happens when traffic does or does not flow.
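For example, something along these lines (a sketch; the interface `any` and verbosity level 4 are assumptions, and the filter matches the Oracle port from the session output above):

```
diag sniffer packet any 'tcp and port 1521' 4
```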

 

PCNSE 

NSE 

StrongSwan  

jmlux
New Contributor III

Ok, so in order to sniff I had to disable auto-asic-offload on the FortiGate. What do we see here:

• When offloading is disabled, the expire counter is correctly reset every time.
• When offloading is enabled, it is not correctly reset, at least not during the early stages of the connection.

As a consequence, while connection offloading is in use, the actual/experienced TTL will be shorter than what is configured, because traffic occurring early after the connection is established is somehow not taken into account...

What do you think?
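For reference, the offload toggle mentioned above is set per firewall policy; disabling it might look roughly like this (a sketch; policy ID 1 is taken from the policy_id=1 field in the session output earlier in the thread):

```
config firewall policy
    edit 1
        set auto-asic-offload disable
    next
end
```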

ede_pfau
SuperUser

Nice debugging. I think the latency at the start is related to the offloading, and its duration is relative to the TTL of that policy (apparently >20%).

One thing still missing is to check whether the session really expires because of the missing reset, or whether this is just a lack of timely updates to the GUI or the counters. Maybe you could look into this, too. If the session expires, ongoing traffic would open a new session, so the session ID would change. Or you could directly observe the session setup in 'diag deb flow'.

     

IMHO Fortinet should be made aware of this, and one way would be to open a ticket. Combine it with a note that either the displayed values are stale (but the TTL mechanism as such works correctly), or there is a lack of internal updating (which would mean a shorter TTL than specified/wanted). I'd think the former is the case.


Ede

"Kernel panic: Aiee, killing interrupt handler!"