maxiboom
New Contributor II

FGSP Session synchronization

Hello everyone,

 

I have just configured FGSP between a pair of FGCP clusters. I have tested several scenarios: NAT sync, asymmetric traffic sync, session failover, DNAT sync, etc. Everything works fine, except for one problem.

With asymmetric traffic, when I reboot an FGSP peer, sessions are not synchronized back to that peer after the reboot, so I get traffic loss.

I found the "down-intfs-before-sess-sync" option in the documentation, but it doesn't help.

Should session sync back to a rebooted peer work at all, or where have I misconfigured?

Session sync runs over an L3 link; FortiOS 7.0.3.

 

Configuration:

 

Peer1:

config system ha
  set group-id 10
  set group-name "Cluster"
  set mode a-p
  set hbdev "port8" 0
  set session-pickup enable
  set session-pickup-connectionless enable
  set session-pickup-expectation enable
  set override disable
  set priority 200

end


config system cluster-sync
  edit 1
     set peerip 192.168.50.200
     set syncvd "root"
     set down-intfs-before-sess-sync "OUT1" "OUT2"
  next

end

 

config system standalone-cluster
  set standalone-group-id 1
  set group-member-id 1
end

 

Peer2:

config system ha
  set group-id 20
  set group-name "Cluster"
  set mode a-p
  set hbdev "port8" 0
  set session-pickup enable
  set session-pickup-connectionless enable
  set session-pickup-expectation enable
  set override disable
  set priority 200

end


config system cluster-sync
  edit 1
     set peerip 192.168.150.200
     set syncvd "root"
     set down-intfs-before-sess-sync "OUT1" "OUT2"
  next

end

 

config system standalone-cluster
  set standalone-group-id 1
  set group-member-id 2
end
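Once both peers are up again, the FGSP peering and sync counters can be sanity-checked from the CLI. A quick check (FortiOS 7.0 syntax; exact output varies by build):

  diagnose sys ha standalone-peers
  diagnose sys session sync

The first command shows detected FGSP peers and per-peer session send/receive counters; the second shows overall session synchronization statistics.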

1 Solution
maxiboom
New Contributor II

Hello,

 

Yes, it was a problem with the virtualization environment; in production it works without issues.


5 REPLIES
ESCHAN_FTNT
Staff

Hi maxiboom, could you provide the output of the command "diag sys ha stand"?

maxiboom

Hi @ESCHAN_FTNT ,

 

peer1:
Group=1, ID=1
Detected-peers=1
Kernel standalone-peers: num=1.
peer0: vfid=0, peerip:port = 192.168.50.200:708, standalone_id=2
session-type: send=1537, recv=76
packet-type: send=0, recv=0
Kernel standalone dev_base:
standalone_id=0:
standalone_id=1:
phyindex=0: mac=50:00:00:09:00:00, linkfail=1
phyindex=1: mac=50:00:00:09:00:01, linkfail=1
phyindex=2: mac=50:00:00:09:00:02, linkfail=1
phyindex=3: mac=50:00:00:09:00:03, linkfail=1
phyindex=4: mac=50:00:00:09:00:04, linkfail=1
phyindex=5: mac=50:00:00:09:00:05, linkfail=1
phyindex=6: mac=50:00:00:09:00:06, linkfail=1
phyindex=7: mac=50:00:00:09:00:07, linkfail=1
phyindex=8: mac=50:00:00:09:00:08, linkfail=1
phyindex=9: mac=50:00:00:09:00:09, linkfail=1
phyindex=10: mac=50:00:00:09:00:0a, linkfail=1
phyindex=11: mac=50:00:00:09:00:0b, linkfail=1
standalone_id=2:
phyindex=0: mac=00:09:0f:09:00:00, linkfail=1
phyindex=1: mac=00:09:0f:09:00:01, linkfail=1
phyindex=2: mac=00:09:0f:09:00:02, linkfail=1
phyindex=3: mac=00:09:0f:09:00:03, linkfail=1
phyindex=4: mac=00:09:0f:09:00:04, linkfail=1
phyindex=5: mac=00:09:0f:09:00:05, linkfail=1
phyindex=6: mac=00:09:0f:09:00:06, linkfail=1
phyindex=7: mac=00:09:0f:09:00:07, linkfail=1
phyindex=8: mac=00:09:0f:09:00:08, linkfail=1
phyindex=9: mac=00:09:0f:09:00:09, linkfail=1
phyindex=10: mac=00:09:0f:09:00:0a, linkfail=1
phyindex=11: mac=00:09:0f:09:00:0b, linkfail=1
standalone_id=3:
standalone_id=4:
standalone_id=5:
standalone_id=6:
standalone_id=7:
standalone_id=8:
standalone_id=9:
standalone_id=10:
standalone_id=11:
standalone_id=12:
standalone_id=13:
standalone_id=14:
standalone_id=15:

 

peer2:

Group=1, ID=2
Detected-peers=1
Kernel standalone-peers: num=1.
peer0: vfid=0, peerip:port = 192.168.150.200:708, standalone_id=1
session-type: send=30, recv=47
packet-type: send=0, recv=0
Kernel standalone dev_base:
standalone_id=0:
standalone_id=1:
phyindex=0: mac=00:09:0f:09:0a:00, linkfail=1
phyindex=1: mac=00:09:0f:09:0a:01, linkfail=1
phyindex=2: mac=00:09:0f:09:0a:02, linkfail=1
phyindex=3: mac=00:09:0f:09:0a:03, linkfail=1
phyindex=4: mac=00:09:0f:09:0a:04, linkfail=1
phyindex=5: mac=00:09:0f:09:0a:05, linkfail=1
phyindex=6: mac=00:09:0f:09:0a:06, linkfail=1
phyindex=7: mac=00:09:0f:09:0a:07, linkfail=1
phyindex=8: mac=00:09:0f:09:0a:08, linkfail=1
phyindex=9: mac=00:09:0f:09:0a:09, linkfail=1
phyindex=10: mac=00:09:0f:09:0a:0a, linkfail=1
phyindex=11: mac=00:09:0f:09:0a:0b, linkfail=1
standalone_id=2:
phyindex=0: mac=50:00:00:13:00:00, linkfail=1
phyindex=1: mac=50:00:00:13:00:01, linkfail=1
phyindex=2: mac=50:00:00:13:00:02, linkfail=1
phyindex=3: mac=50:00:00:13:00:03, linkfail=1
phyindex=4: mac=50:00:00:13:00:04, linkfail=1
phyindex=5: mac=50:00:00:13:00:05, linkfail=1
phyindex=6: mac=50:00:00:13:00:06, linkfail=1
phyindex=7: mac=50:00:00:13:00:07, linkfail=1
phyindex=8: mac=50:00:00:13:00:08, linkfail=1
phyindex=9: mac=50:00:00:13:00:09, linkfail=1
phyindex=10: mac=50:00:00:13:00:0a, linkfail=1
phyindex=11: mac=50:00:00:13:00:0b, linkfail=1
standalone_id=3:
standalone_id=4:
standalone_id=5:
standalone_id=6:
standalone_id=7:
standalone_id=8:
standalone_id=9:
standalone_id=10:
standalone_id=11:
standalone_id=12:
standalone_id=13:
standalone_id=14:
standalone_id=15:

 

ESCHAN_FTNT

Hi maxiboom, I don't see any particular issue here. Two things you could try, though:

1. Run "execute sync-session" on the rebooted peer. This command is intended for pure FGSP only; I am not sure whether it works in an FGCP-over-FGSP setup.

2. Find the "sessionsync" daemon and try killing it on the rebooted peer so that it restarts and re-establishes sync.
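For option 2, a rough sketch of restarting the daemon from the CLI (the PID placeholder is illustrative; read the actual PID from the process list):

  diagnose sys top 2 50
  diagnose sys kill 11 <pid_of_sessionsync>

Signal 11 makes the daemon exit with a crash log, and it should be respawned automatically, forcing it to re-sync with the peer.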

 

If it still doesn't work, then I would suggest opening a TAC ticket for further investigation. Cheers.

maxiboom

Hello @ESCHAN_FTNT 

 

1. I didn't find this command.

2. It didn't help.

 

I suppose it may be a problem with the virtualization environment, since I have been testing in EVE-NG. I will test this in a real environment later and post the results. Thanks for your help.

 
