echo
Contributor II

mgmt interface configuration

I have used mgmt ports on FortiGates in the past without problems: I have two HA clusters, and each unit has its own IP in one and the same network. I used NAT in a firewall rule to get access to the cluster that was not the main cluster. In this configuration I could manage each of the four devices separately, which has been useful and sometimes necessary to repair the HA when it broke. That was on 5.4.

 

After upgrading to 6.4 I see that something has changed. Recently I restored a broken HA cluster and noticed that the mgmt1 interface shows its address with a red background and an "overlapping address" warning. Yes, I needed another VLAN interface in the main cluster, in the same mgmt subnet, to make the NAT in the firewall rule work.

 

Is it possible to get the management working without a NAT rule? There's information here: https://docs.fortinet.com/document/fortigate/6.4.4/administration-guide/313152/out-of-band-managemen...

 

But one thing is unclear and even confusing: what is the gateway in the "management interface reservation" configuration? Where is it? It is not shown in the diagram. Is it a random IP in the same network that doesn't even have to exist? If overlapping subnets are not allowed, it can't be in the same unit/VDOM if it is meant to be a real address. (Do I need a separate FGT to manage the cluster?...) Also, there is no explanation of how the 10.11.101.100 in that diagram works, the address that is common to both units and is used to configure the new separate addresses for the units. And the explanation for "Destination subnet", which is "Optionally, enter a Destination subnet to indicate the destinations that should use the defined gateway.", doesn't really tell me what it is and what it is used for. Then there is the "set ha-direct enable" option, but no good explanation of what it is and what purpose it serves.

 

Has anybody gotten the management of HA cluster members working without overlapping subnets (in one of the VDOMs of the same device) and without a firewall rule with NAT? What is the secret here?

Debbie_FTNT
Staff

Hey echo,

wow, a lot of questions about HA :).

Ok, to break it down a bit:

- if you configure an HA management interface, this interface is technically considered to be in a different (hidden) VLAN

-> the HA management interface does NOT use the same routing table/local-in policies/other interface configuration you may have in place

-> setting the gateway in the management interface (this is in the HA configuration; worded a bit confusingly, I agree) essentially tells the FortiGate what gateway to use for traffic from the HA interface

-> this can be with specified subnets (the FortiGate will have routes to those subnets via the HA management interface and the defined gateway), or essentially a default route via the HA interface; these settings (gateway/specified subnets) are only used for HA management traffic (see the CLI sketch below)
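To make that concrete, here is roughly how the reservation looks in the CLI on 6.4. This is only a sketch with placeholder values (mgmt1, the gateway and the destination subnet are examples), and as far as I recall the GUI's "Destination subnet" corresponds to the dst option, so double-check the option names and syntax on your firmware:

config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "mgmt1"      # reserved management port; its IP is not synchronized between units
            set gateway 10.0.0.254     # gateway used only for traffic sourced from this interface
            # set dst 192.168.50.0/24  # optional: only these destinations use the gateway above;
                                       # with no dst, this acts like a default route for mgmt traffic
        next
    end
end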

A screenshot of the HA management interface settings in the GUI:

[Debbie_FTNT_0-1656675730513.png]

The whole HA interface setup here is to have a dedicated management port with its own IP and subnet, completely independent of whatever other infrastructure you might have.

If you have an existing subnet/VLAN dedicated to device management, for example, you might want to put the FortiGate HA interfaces into this.

 

Regarding the diagram:

- port2 with IP 10.11.101.100 is a shared (non-HA-mgmt) interface, like the LAN interface of the FortiGate (and port1, 172.20.120.141, would be the shared WAN interface)

-> in an active/passive setup, the primary FortiGate would respond on those two interfaces, port1 and port2, and the secondary would NOT

- port8 is the HA management interface, with unique IPs for each FortiGate (in this case, as an overlapping subnet to port2, but this is not required!)

 

Once you have dedicated HA interfaces configured on both units (you might need to configure this on the secondary via CLI, as outlined in the documentation you linked), you should be able to access the GUI of each unit independently via its HA management interface IP.
If you enable ha-direct in CLI, this causes each unit to send SNMP traps, logs, and some other management-related traffic individually out the HA management interface, instead of whatever other interface would be appropriate based on the FortiGate's configuration and routing.
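
For reference, a rough CLI sketch of the steps just described; the interface name, addresses and the unit index/admin name in "execute ha manage" are placeholders, so adjust them to your setup:

# On the primary: give the reserved interface its own (non-synchronized) IP
config system interface
    edit "mgmt1"
        set ip 10.0.0.101 255.255.255.0
        set allowaccess ping https ssh
    next
end

# Optional: send management-originated traffic out the reserved interface
config system ha
    set ha-direct enable
end

# Jump to the secondary over the HA link and give it a different IP on the same port
execute ha manage 1 admin
config system interface
    edit "mgmt1"
        set ip 10.0.0.102 255.255.255.0
    next
end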

 

Does this clear up the confusion?

+++ Divide by Cucumber Error. Please Reinstall Universe and Reboot +++
echo

Thank you for the explanation. My questions about it are as follows.

1. "... what gateway to use for traffic from the HA interface". I don't use these separate IP's for sending out SNMP or other stuff but if I did then I'm not sure how the Fortigate really handles this. Usually the gateway should be in the same subnet, not in some other. If the gateway is something else, then we are talking about routing tables and then the question is how the traffic to HA mgmt interfaces reaches these interfaces from other networks.

2. I understood about 10.11.101.100 in the article's diagram: I use an IP the same way to actually manage the cluster (the active/primary device responds to it).

3. As for ha-direct, I understand it now, thank you.

4. For port8 as the mgmt interface, I still don't understand. If I use unique IPs in a unique network and put those cables into their own VLAN, how do I get there from another management network? Where should the gateway for that network be? I can't believe that I should need another (small) FGT that operates as the gateway to that mgmt network. So is that "gateway" in the HA mgmt config (seen above) ALSO used for getting access to those IPs?

I made a test: I changed the network of the currently overlapping VLAN interface to something else, so that the four devices (2 different HA clusters) have their own IPs and the main FGT cluster no longer has an interface in that subnet. Then I set the gateway address in the HA mgmt config. I removed NAT from the firewall rule and added a route saying that the separate HA mgmt network is behind a certain network interface. Strangely enough, I was not allowed to set an IP in that route because of the error message "Gateway IP is the same as interface IP, please choose another IP." Why that is, I don't understand. After that there was no access to the mgmt interfaces anymore, even though the firewall rule matched. So I tried diag debug flow. That showed that the traffic went to the wrong VLAN, the one whose gateway I had specified in the HA mgmt config. Of course. So to get the mgmt working, the "gateway" in the HA mgmt config seems to be unnecessary (unusable for that purpose). So I removed the route, put NAT back into the firewall rule, changed the VLAN interface's IP back to what it was before (in the same subnet where those mgmt IPs are) and got access to the different mgmt IPs back that way, as it was before. So in total, no success in trying to get rid of the NATted firewall rule and the overlapping-subnet error in the configuration of the separate units. I guess if that "gateway" field also worked for incoming traffic, so that the separate mgmt network would be reachable behind a certain existing interface, then maybe it would work. And that's why I asked this question in the first place: does anybody have a working solution without NAT and an overlapping subnet (and without a separate mgmt FGT device to reach those mgmt IPs)?

Debbie_FTNT

Hey echo,

maybe I can explain a bit clearer with an example:

 

- a large existing network infrastructure (multiple switches/routers/etc)

- a dedicated subnet for the management interfaces of these devices, let's say 10.0.0.0/24; this would be to connect to management interfaces, SNMP traffic, and other management related stuff, but NO user traffic or similar

- other traffic (VoIP, user traffic...) is in other subnets, for example 192.168.0.0/24

- at least one of the routers (NOT the FortiGate, at least in this example) would serve as gateway between management subnet and other subnets (with IP 10.0.0.254 for example)

- FortiGate would have WAN interfaces and LAN interfaces in 192.168.0.0 subnet (and serve as gateway between them)

- FortiGate would have dedicated HA management interfaces in 10.0.0.0 subnet (.101 for primary, .102 for secondary for example)

-> the gateway to be configured on the HA interface setting would be 10.0.0.254

-> with this, the FortiGate units would be accessible individually on 10.0.0.101 and 10.0.0.102 (and would send return traffic via 10.0.0.254 as defined gateway)
-> cluster primary (but not secondary) would also be accessible via 192.168.0.0 subnet
-> with ha-direct enabled, the cluster units would send traffic to SNMP servers or logging solutions out the HA interface (10.0.0.101 or .102) and, if the destination is not in the same subnet, use the gateway 10.0.0.254 to accomplish this (see the sketch below)
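
As a concrete illustration of that last point, a sketch (the syslog server 10.0.0.50 is purely hypothetical, and the addresses are examples only); with ha-direct enabled, log traffic like this would be sourced from the HA management interface and leave via 10.0.0.254 if the server is not in 10.0.0.0/24:

config system ha
    set ha-direct enable          # management-originated traffic (SNMP, syslog, ...) uses the HA mgmt interface
end
config log syslogd setting
    set status enable
    set server "10.0.0.50"        # hypothetical syslog server in the management subnet's reach
end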

 

The idea behind the dedicated HA management interfaces is that, if you already have a setup with a dedicated management subnet (or are looking to build one), the FortiGate HA interfaces can tie into it, making each unit accessible by itself and keeping management traffic separate from user/application/other traffic.

 

Addendum:

- another of the FortiGate interfaces could serve as the gateway to the management subnet, if the FortiGate should also function as the router between the management subnet and the other subnets.

-> to continue the example from above: port1 on the FortiGate is the LAN interface with 192.168.0.254/24, wan1 is the WAN interface with a public IP, port2 is the HA management interface with 10.0.0.101/24 (and 10.0.0.102 on the other node), and port3 is the gateway for that management subnet with 10.0.0.254/24 (other switches/routers/etc. could also have their management IPs in the 10.0.0.0/24 subnet, and the FortiGate would serve as gateway to those management interfaces, including the cluster nodes' own interfaces)
-> cabling would be something like: port2 (HA management) on both FortiGates goes to a switch, and from that switch a cable goes back to port3 (gateway for the management subnet) on the FortiGates; a rough CLI sketch follows.
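
A sketch of that addendum in CLI terms, assuming port1 is the LAN and port3 is the FortiGate-side gateway of the management subnet (interface names, addresses and services are placeholders); note that the policy needs no NAT, because port3 routes into 10.0.0.0/24 directly:

# port3 acts as the gateway of the management subnet
config system interface
    edit "port3"
        set ip 10.0.0.254 255.255.255.0
        set allowaccess ping
    next
end

# allow admins on the LAN to reach the management subnet, no NAT needed
config firewall policy
    edit 0
        set name "lan-to-mgmt"
        set srcintf "port1"
        set dstintf "port3"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "PING" "HTTPS" "SSH"
        set nat disable
    next
end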

 

I find it helps to think of the FortiGate's HA interfaces as completely isolated from everything else on the FortiGate; they can't be used for routing or policies or anything, and have their own (tiny) routing table based on the defined gateway and subnets; if no subnet is defined in destinations, the HA management interfaces essentially have their own independent default route.

All of the configuration applies ONLY to management traffic on the FortiGate (logging in, sending SNMP, logging, etc); regular traffic passing through the FortiGate will not be affected by any changes done on the HA interfaces.

 

Sorry for the wall of text. I hope that clarifies it?

TL;DR: no, you do not need a separate FortiGate to get to the HA management interfaces, but yes, you technically need a gateway (another router such as a second FortiGate, or the FortiGate itself in a slightly odd loop) if you want to use the HA management interfaces for out-of-band (as in, separate-subnet) access.

+++ Divide by Cucumber Error. Please Reinstall Universe and Reboot +++
echo

Thanks for the efforts to clarify!

The first part of the reply above seems to require another device for mgmt, which I'd rather avoid. Getting the mgmt out-of-band has not been a goal for me (so far). The addendum is closer, because there the same FGT routes traffic to the separate mgmt network (10.0.0.0/24). I basically have the cabling already as described.


It looks like the HA mgmt interfaces are not completely isolated from everything else: if they were, I wouldn't get the warning about the subnet overlapping with an existing VLAN interface in one of the VDOMs (root in my case). I guess that even if, instead of a VLAN, I had port3 for that purpose as in the description above (10.0.0.254), I'd get the same error in the GUI when adding the IP to mgmt1, saying it overlaps with the network on port3. By the way, I tried this with an IP from the network assigned to a software switch interface that I happened to have: the overlap error on the mgmt1 interface (when trying to use an IP from that software switch's network) didn't point to the correct interface (the ssw) but to a completely unrelated one... Seems like a bug. That other interface was even a VLAN, not the ssw or another physical port.


So if I'd like to get rid of the overlap error in the GUI/configuration, I should either use "set allow-subnet-overlap enable" in the root VDOM (I don't know if this helps at all; the error really concerns global, but the setting is not available there), or use a VRF with leaked routes (which seems too difficult, since I have no experience with VRFs and I'm not sure it would help). I was thinking of using a separate mgmt VDOM for those mgmt addresses, but the mgmt1 port can't be added to another VDOM, and moving that overlapping VLAN interface to another VDOM (and then adding a route to the mgmt network pointing at the VDOM link) wouldn't help either, because of the same overlapping error. It looks like what I did years ago using NAT is the only possible way, without another device, to get the different mgmt IPs working. But with 6.4, and possibly with earlier 6.x versions, this can't be configured anymore because the GUI shows its warnings and prevents it (maybe modifying the configuration file directly would work, but why go that far).
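
For completeness, the command I mean sits under the VDOM's system settings; a sketch only, and I have not verified that it actually suppresses this particular warning:

config vdom
    edit root
        config system settings
            set allow-subnet-overlap enable   # allows overlapping subnets within this VDOM
        end
    next
end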

 

On the other hand, the referenced article at docs.fortinet.com doesn't mention a need for a separate FGT for mgmt, so I feel something is still missing.

Toshi_Esumi
SuperUser

Since Debbie has dissected all the questions, I have only a comment on the design. You have at least four FGT devices in multiple clusters. With a network of that size, you must have many other L3 devices to route your management traffic to each FGT's management port. You shouldn't rely on one of the FGTs to route/NAT your access. The regular setup for management interfaces is to give each FGT a unique IP, set the GW outside, and route access via the GW device(s).

Also, a terminal server (or several) is necessary to access each console port when a unit doesn't even boot up correctly, unless all of them are located locally.

 

Toshi

echo

In my case I don't want to have a separate FGT for management. As for console access, it already works the way you described (via a serial/console switch). It is very important to have that, to see exactly what happens when one of the members boots. And not only during boot: in some cases other errors appear there that are not shown in the system logs (maybe newer FortiOS versions show those in the system log too, I haven't checked).

Toshi_Esumi

So you are saying you don't have any L3 devices other than those FGTs to route 10.0.0.100/29, with .101 and .102 for the first cluster's MGMT interfaces and .103 and .104 for the second cluster's? Nowadays most switches can do that with a separate VLAN.

 

<edit> I miscalculated the subnet boundary. It should have been something like 10.0.0.96/28; then the GW on the switch side is .110, so that each device can take .101-.104. </edit>

 

Toshi

echo

Yes, we have switches that can route, but we haven't used them for routing, to keep the whole design as simple as possible. I have to think about what it would mean in our environment to use that routing and what else would need to be configured. As for the subnet and mask: I understood what you mean. Thank you for the idea; I didn't think of the switches when you first mentioned them.

echo

I thought about routing on one of our switches. I feel I'd better not do that unless I can test it, and building a test environment seems all but impossible at the moment. If the switch starts accepting and making routing decisions, what happens to the rest of the traffic? Will it get stuck? Will the switch need a default route? And which one, considering the different VLANs? I have never done this and have too many questions about it, so I'd better not go this way this time. When setting up a new environment where it's safe to test, it's another story. But thank you for the hint!
