infrasigrp
New Contributor II

Azure SDN fabric connector

Hello,

 

We've set up an FGVM cluster in our Azure tenant, based on the Fortinet GitHub template https://github.com/fortinet/azure-templates/tree/main/FortiGate/AvailabilityZones/Active-Passive-ELB...

 

I originally set up the SDN connector to create firewall objects. To do this, following the documentation, I gave the "Reader" permission on the subscription to the two FGVM virtual machines in Azure. It worked for a while and I could create dynamic objects correctly.
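For anyone reproducing this setup, assigning that role with the Azure CLI would look roughly like the following; the resource group, VM name, and subscription ID are placeholders:

# Get the principal ID of the FGVM's system-assigned managed identity (placeholder names)
az vm identity show --resource-group rg-fgvm --name fgvm-a --query principalId -o tsv

# Grant that identity Reader on the subscription (repeat for the second FGVM)
az role assignment create --assignee <principalId> --role Reader --scope "/subscriptions/<subscription-id>"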

 

After a few reboots and a few days of operation, the connector stopped working. Following this KB: https://docs.fortinet.com/document/fortigate-public-cloud/7.0.0/azure-administration-guide/985498/tr... here is the debug log I got:

 

azd sdn connector AzureSDN prepare to update
azd sdn connector AzureSDN start updater process 881
azd sdn connector AzureSDN start updating
azd updater process 881 is updating
azd updater process 881 is updating
curl DNS lookup failed: management.azure.com
azd api failed, url = https://management.azure.com/subscriptions?api-version=2018-06-01, rc = -1,
azd failed to list subscriptions
azd failed to get ip addr list
azd reap child pid: 881
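
For context, this output was collected with the azd debug commands from that guide, along the lines of:

diagnose debug application azd -1
diagnose debug enable
diagnose test application azd 4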

 

"curl DNS lookup failed" : i don't understand, since a "ping management.azure.com" resolves correctly the address:


fgvm-appliance # exec ping management.azure.com
PING arm-frontdoor-prod.trafficmanager.net (40.79.131.240): 56 data bytes

 

The two DNS servers configured on the FGVM are reachable...
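
For reference, that can be checked with something like the following, with <dns-server-ip> being each configured server:

show system dns
execute ping <dns-server-ip>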

Here is the SDN connector configuration (default from the GitHub template):

 

config system sdn-connector
edit "AzureSDN"
set type azure
set ha-status enable
set update-interval 30
next
end

 

On the traffic side, if I try to traceroute that resolved IP, 40.79.131.240 (I know it is only one of multiple IPs, but it's representative), the packet goes out via the WAN interface, from local. I can't see any hop after that; the traffic goes to the Azure external load balancer and the internet.

 

# execute traceroute 40.79.131.240
id=20085 trace_id=1 func=print_pkt_detail line=5783 msg="vd-root:0 received a packet(proto=1, [redacted:IP of WAN interface]:33727->40.79.131.240:2048) from local. type=8, code=0, id=33727, seq=1."
id=20085 trace_id=1 func=init_ip_session_common line=5955 msg="allocate a new session-000053df"
traceroute to 40.79.131.240 (40.79.131.240), 32 hops max, 3 probe packets per hop, 84 byte packets
1 *id=20085 trace_id=2 func=print_pkt_detail line=5783 msg="vd-root:0 received a packet(proto=1, [redacted:IP of WAN interface]:33727->40.79.131.240:2048) from local. type=8, code=0, id=33727, seq=2."
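
The interleaved "id=20085 ... func=print_pkt_detail" lines are debug flow output running at the same time; a trace like that is typically started with:

diagnose debug flow filter addr 40.79.131.240
diagnose debug flow trace start 10
diagnose debug enable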

 

The default route points out of the WAN interface of the FGVM (port1); this is the default from the GitHub template.

 

config router static
edit 1
set gateway [redacted: external load-balancer IP]
set device "port1"
next
end

 

Any ideas?

15 REPLIES
infrasigrp

Hello @DanielCastillo,

If I understand correctly, when the cluster is configured to allow direct access to each node, as is the case with the GitHub "active-passive-ELB-ILB-AZ" template, the management interface and its routing are configured directly in the HA configuration:

[screenshot: HA configuration showing the reserved management interface and its gateway]
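
In CLI terms, that part of the configuration corresponds roughly to the following; the port name and gateway are placeholders for whatever the template deploys:

config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "port4"
            set gateway <management-subnet-gateway>
        next
    end
end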

I still don't understand the link between the SDN connector and the management interface. How do you force the SDN connector to use the management interface for its traffic?

 

 

kitzin
New Contributor

I had the same issue, but on our on-prem cluster (7.0.6).

I turned off "management interface reservation" in the HA configuration and the fabric connector started to work again.
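
For reference, in the CLI that should correspond to:

config system ha
    set ha-mgmt-status disable
end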

 

As @DanielCastillo mentioned, it seems to use the hidden management VDOM for communication, so if you don't have a default route for it or can't route to Azure from it, it won't work.

 

We're currently not using the individual management interfaces, so I could just turn it off for now, but with air-gapped out-of-band management that could be rough. I'm not sure if this should be considered a bug, but we should at least be able to configure the source interface, IMO.

infrasigrp
New Contributor II

Hello @kitzin @DanielCastillo 

 

I can confirm that I have a default route on the FortiGate via Azure, through the external load balancer. I need to keep individual access to each FortiGate; there is a public IP bound to the corresponding NIC in Azure, which is the default configuration recommended by Fortinet in the Azure ARM templates https://github.com/fortinet/azure-templates/blob/main/FortiGate/Active-Passive-ELB-ILB/README.md. Nothing particular is said there about the HA configuration for the SDN connector to work correctly... I also don't know how many people actually use these templates...

 

Another connector crash today:

azd api failed, url = https://management.azure.com/subscriptions/[redacted]/providers/Microsoft.Network/applicationGateways?api-version=2018-06-01, rc = 503 {"error":{"code":"ServerTimeout","message":"The request timed out."

then... the FortiGate deletes its cached dynamic addresses, causing network interruptions...


 

infrasigrp

About the SDN connector configuration, I just noticed that there are two interesting settings, "nic" and "route-table", which I cannot configure with a "set". Any information about those? :)

[screenshot: CLI view of the sdn-connector settings]
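
It looks like they are sub-tables rather than single values, so they would be entered with "config" instead of "set". A rough sketch of what I mean (all names here are placeholders, and the exact attributes may differ by FortiOS version):

config system sdn-connector
    edit "AzureSDN"
        config nic
            edit "<azure-nic-name>"
                config ip
                    edit "<ip-configuration-name>"
                        set public-ip "<public-ip-resource-name>"
                    next
                end
            next
        end
        config route-table
            edit "<route-table-name>"
                config route
                    edit "<route-name>"
                        set next-hop "<next-hop-ip>"
                    next
                end
            next
        end
    next
end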

 

freddelm
New Contributor II

Did anyone find a resolution to this issue? Still experiencing it on 7.0.12 on Azure.

tebby
New Contributor

I have been having this issue with one of our customers. No errors when running 

diag debug application azd -1
diag debug enable
diag test application azd 4

or 

diag debug application azd -1
diag debug enable
diag test application azd 99

However, the FGT was not able to pull in the resources related to newly created public IP addresses or fail over the IPs between FGTs.

The fix was to add the subscription ID and resource group name in the below section. 

 

config system sdn-connector
edit "AzureSDN"
set type azure
set ha-status enable
set subscription-id "xxx"
set resource-group "piz-test"
next
end

 

I hadn't needed these two settings previously, but as soon as I added them and ran

diag debug application azd -1
diag debug enable
diag test application azd 99

the FGT pulled in the new objects, and when performing a manual failover the public IPs moved from the primary to the secondary (now active) firewall.

I learned about the additional commands from the following link:

https://community.fortinet.com/t5/FortiGate-Cloud/Technical-Tip-A-first-steps-troubleshooting-guide-... 

 
