KordiaRG
New Contributor

Management-ip not accessible on slave node in cluster

Hi all,

 

I'm running a pair of 60Es on 5.6.3 as an HA cluster.  I've set up a VLAN interface for management in the root VDOM, given it an IP, and also given each member a management-ip in the same subnet.  For example, the VLAN interface is VL100-MGMT with IP 10.0.100.10/24, and the two nodes have management-ips of 10.0.100.11/24 and 10.0.100.12/24 respectively.
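(Roughly, reconstructed from the description above, so treat the exact lines as illustrative:)

config system interface
    edit "VL100-MGMT"
        set vdom "root"
        set ip 10.0.100.10 255.255.255.0
        set management-ip 10.0.100.11 255.255.255.0
    next
end

with node B carrying "set management-ip 10.0.100.12 255.255.255.0" instead.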

 

I can access the VIP (10.0.100.10) fine.  I can access node A's management-ip (10.0.100.11) fine.  However, I cannot access node B's management IP.  A diag sniffer shows no traffic for .12 going to node B.  The cluster appears otherwise healthy and a diag sys ha status looks good.
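(The commands in question would be something like the following; the host filter is just one illustrative way to watch for traffic to .12:)

diagnose sys ha status
diagnose sniffer packet VL100-MGMT 'host 10.0.100.12' 4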

 

Looking at the ARP table on the gateway, I see that all three of these addresses have entries, but they all point to the same virtual MAC:

 

10.0.100.10 = 00:09:0f:09:00:03

10.0.100.11 = 00:09:0f:09:00:03

10.0.100.12 = 00:09:0f:09:00:03

 

The odd thing is, I have an almost identical config on another cluster of 60Es (same version), and there it works fine.  On that cluster, the gateway's ARP table shows node B's management-ip with node B's hardware address, which seems sensible...

 

10.0.100.10 = 00:09:0f:09:00:03

10.0.100.11 = 00:09:0f:09:00:03

10.0.100.12 = 90:6c:ac:0a:0b:0c

 

Anyone else seen this?  A bug?

I was about to log a ticket, but annoyingly this is the 14th site in a national rollout, which means it was purchased around 9 months ago and the support contract auto-started and then expired a week ago :(

 

Thanks,

Richard

14 REPLIES
Toshi_Esumi
SuperUser

We had a similar issue with 5.4.? some time ago, although we don't use VRRP on mgmt interfaces between the a-p pair. It was fixed in a later release, and we don't see the issue any more, at least with 5.4.8.

emnoc
Esteemed Contributor III

Not sure what you're doing, but are you trying to set ha-direct and define an interface only for the two nodes?

e.g.

 

config system ha
    set group-id 1
    set group-name "socpuppetsgrp"
    set mode a-p
    set ha-mgmt-status enable
    set ha-mgmt-interface "mgmt1"
    set ha-mgmt-interface-gateway 10.10.1.111
    set override disable
end

 

With that, mgmt1 is no longer part of the root vdom and you use it as a dedicated management interface. You do the same on each node in the cluster and define the correct address and gateway for each.
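For example (a minimal sketch; the interface name matches the config above, the address is illustrative):

config system interface
    edit "mgmt1"
        set ip 10.10.1.11 255.255.255.0
        set allowaccess ping https ssh
    next
end

and the same block on the other node with its own address, since (as I understand it) the reserved management interface settings are not synchronized between cluster members.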

 

Ken Felix

 

PCNSE
NSE
StrongSwan
KordiaRG
New Contributor

Hi Ken,

 

This is using the new management-ip directive that was introduced in 5.6.  I'm not using the HA reserved management interface feature.  I use my root vdom purely for device management and it connects only to my management VLAN 100 (no other interfaces in root vdom). 

 

config system interface
    edit "VL100-MGMT"
        set vdom "root"
        set management-ip 10.0.100.12 255.255.255.0
        set ip 10.0.100.10 255.255.255.0
        set allowaccess ping https ssh snmp http fgfm
        set device-identification enable
        set role lan
        set snmp-index 27
        set interface "internal2"
        set vlanid 100
    next
end

 

Rich

KordiaRG

Thanks Toshi.  We don't use VRRP either - this is using the management-ip feature which appeared in v5.6.

 

Toshi_Esumi

I didn't know about the new feature. But then it's likely either a bug or an unintended setting, which TAC would be able to tell you. Check the release notes of 5.6.4 and 5.6.5; a fix might be in there.

emnoc
Esteemed Contributor III

This is not a bug; you can only get to this via the cluster node that holds the layer-3 RIB and that address. Is 10.0.100.12 pingable?

 

What does diag sniffer show you?

 

What does diag debug flow show?

 

 

Also, do you have any issues with setting a dedicated management interface?

 

 

PCNSE
NSE
StrongSwan
KordiaRG
New Contributor

emnoc wrote:

This is not a bug; you can only get to this via the cluster node that holds the layer-3 RIB and that address. Is 10.0.100.12 pingable?

 

Not 100% sure what you mean, but from node A (active) I can ping 10.0.100.10 (the VIP) and 10.0.100.11 (its management-ip), but NOT 10.0.100.12 (B's management-ip).

 

From node B the results are the same: I can ping 10.0.100.10 and 10.0.100.11, but not 10.0.100.12.

 

So oddly, node B cannot ping its own management-ip.

 

emnoc wrote:

What does diag sniffer show you?

 

Whilst running a ping from my workstation 10.254.0.163 to 10.0.100.12...

 

From node A:

FW01A (root) # diagnose sniffer packet VL100-MGMT 'port not 514 and port not 22 and port not 161 and port not 53 and ip' 4 30
interfaces=[VL100-MGMT]
filters=[port not 514 and port not 22 and port not 161 and port not 53 and ip]
1.103906 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
6.104118 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
11.106247 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
16.105633 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
21.105117 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
26.105037 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
31.106346 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
36.104207 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
41.103817 VL100-MGMT -- 10.254.0.163 -> 10.0.100.12: icmp: echo request
^C
18 packets received by filter
0 packets dropped by kernel
 

 

From node B:

 

FW01B (root) $ diagnose sniffer packet VL100-MGMT ip 4 30
interfaces=[VL100-MGMT]
filters=[ip]
^C
0 packets received by filter
0 packets dropped by kernel

 

emnoc wrote:

What does diag debug flow show?

 

TBH, I couldn't get this to work the way I normally do.  The show console command isn't accepted, and I get no output on either node, so I suspect it's not printing to my SSH session.
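For reference, the sequence I'd normally run is along these lines (from memory, so treat the exact 5.6 syntax as unverified); my guess is that the old 'show console enable' step has been replaced by plain 'diagnose debug enable' in 5.6:

diagnose debug flow filter addr 10.0.100.12
diagnose debug flow show function-name enable
diagnose debug enable
diagnose debug flow trace start 100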

 

emnoc wrote:

Also, do you have any issues with setting a dedicated management interface?

 

It's just how we designed this template.  I generally don't like the dedicated interface approach, so I was glad they introduced this 'management-ip' option. I've not delved into how the FG manages virtual MACs, ARP etc., but from what I see I assume the 00:09:0f:xx:xx:xx MACs are virtual MACs that are answered by the active member.

 

In which case, referring back to my first post, from another device on the same VLAN / subnet as 10.0.100.12/24, I do get ARP entries for .12.  The associated MAC is the 00:09:0f:09:00:03 address.  However, on another installation where this *IS* working fine, I see the permanent MAC of node B associated with .12 in the ARP table.  Which I think makes sense: management traffic for the non-active node should go to the MAC of that node, not the cluster's virtual MAC.
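(For what it's worth, one way to compare the virtual and burnt-in MACs on a member, command from memory so verify the exact name on 5.6, is:

diagnose hardware deviceinfo nic internal2

and check Current_HWaddr, which should be the 00:09:0f:... virtual MAC, against Permanent_HWaddr.)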

 

Rich

ede_pfau

You might try to set the originating IP address before you ping:

exec ping-options source a.b.c.d

 

Otherwise, it might pick some IP address from a 'nearby' FGT port which just won't fit.
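For example, with the addresses from this thread (illustrative):

exec ping-options source 10.0.100.11
exec ping 10.0.100.12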

Ede Kernel panic: Aiee, killing interrupt handler!
emnoc
Esteemed Contributor III

On FW01B, log in and do the following:

 

 

  get router info routing connect

  diag ip arp list

 

Do you see anything?

 

 

PCNSE
NSE
StrongSwan