infrasigrp
New Contributor II

Azure SDN fabric connector

Hello,

 

We've set up an FGVM cluster in our Azure tenant, based on the Fortinet GitHub template https://github.com/fortinet/azure-templates/tree/main/FortiGate/AvailabilityZones/Active-Passive-ELB...

 

I originally set up the SDN connector to create firewall objects. To do this, following the documentation, I gave the two FGVM virtual machines the "Reader" permission on the subscription in Azure. It worked for a while, and I could create dynamic objects correctly.

 

After some reboots and a few days of operation, the connector stopped working. Following this KB: https://docs.fortinet.com/document/fortigate-public-cloud/7.0.0/azure-administration-guide/985498/tr... here is the debug log I got:

 

azd sdn connector AzureSDN prepare to update
azd sdn connector AzureSDN start updater process 881
azd sdn connector AzureSDN start updating
azd updater process 881 is updating
azd updater process 881 is updating
curl DNS lookup failed: management.azure.com
azd api failed, url = https://management.azure.com/subscriptions?api-version=2018-06-01, rc = -1,
azd failed to list subscriptions
azd failed to get ip addr list
azd reap child pid: 881

 

"curl DNS lookup failed" : i don't understand, since a "ping management.azure.com" resolves correctly the address:


fgvm-appliance # exec ping management.azure.com
PING arm-frontdoor-prod.trafficmanager.net (40.79.131.240): 56 data bytes

 

The two DNS servers set up on the FGVM are reachable...
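One way to double-check this is to run the built-in sniffer while the connector updates, to see whether any DNS queries for management.azure.com actually leave the FortiGate (verbosity 4 shows the interface name; the filter below is just a sketch):

```
# Capture all DNS traffic on any interface, verbosity 4 (headers + interface)
diagnose sniffer packet any 'port 53' 4
```

If no query for management.azure.com appears here while azd logs "curl DNS lookup failed", the lookup is failing before a packet is ever sent.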

Here is the SDN connector configuration (default from github template):

 

config system sdn-connector
    edit "AzureSDN"
        set type azure
        set ha-status enable
        set update-interval 30
    next
end

 

On the traffic side, if I try to traceroute that load-balancer IP, 40.79.131.240 (I know this is only one of multiple IPs, but it's representative), the packet goes out through the WAN interface, from local. I can't trace anything after that; it goes to the Azure external load balancer and the internet.

 

#execute traceroute 40.79.131.240
id=20085 trace_id=1 func=print_pkt_detail line=5783 msg="vd-root:0 received a packet(proto=1, [redacted:IP of WAN interface]:33727->40.79.131.240:2048) from local. type=8, code=0, id=33727, seq=1."
id=20085 trace_id=1 func=init_ip_session_common line=5955 msg="allocate a new session-000053df"
traceroute to 40.79.131.240 (40.79.131.240), 32 hops max, 3 probe packets per hop, 84 byte packets
1 *id=20085 trace_id=2 func=print_pkt_detail line=5783 msg="vd-root:0 received a packet(proto=1, [redacted:IP of WAN interface]:33727->40.79.131.240:2048) from local. type=8, code=0, id=33727, seq=2."

 

The default route is via the WAN interface of the FGVM (port1); it's the default from the GitHub template.

 

config router static
    edit 1
        set gateway [redacted: external load-balancer IP]
        set device "port1"
    next
end

 

Any ideas?

infrasigrp
New Contributor II

Hello everyone,

 

Some updates about our issue,

I noticed that when I set up the FortiGuard DNS, it works again.

 

The system DNS is configured with a source IP and interface, so that I can create the appropriate rules between the FGT and our DNS servers. It seems to work correctly, since I can ping everything from the FortiGate and see it resolve correctly.
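For reference, pinning the system DNS to a source IP and egress interface looks roughly like this (the addresses and "port2" are placeholders, not the actual values from this deployment):

```
config system dns
    set primary 10.0.0.10              # placeholder: internal DNS server
    set secondary 10.0.0.11            # placeholder: secondary DNS server
    set source-ip 10.0.1.5             # placeholder: source IP for DNS queries
    set interface-select-method specify
    set interface "port2"              # placeholder: egress interface
end
```

Note that these settings govern system DNS lookups; as described below, the SDN connector's lookups did not appear to honor them.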


But the behavior of the SDN connector seems to be different: I tried to capture all the traffic between the FGT DNS source IP and my DNS server. I can see the packets containing the requests from my Azure servers, but nothing containing "management.azure.com" or "graph.microsoft.com".

 

Do you know if there is another DNS resolution mechanism used by SDN connectors to call the APIs? I couldn't see any network-related parameters in the SDN configuration...

 

Thanks in advance,

Arnaud

infrasigrp
New Contributor II

Hello everyone,

 

I confirmed that no DNS traffic comes from the private IP I've set up. The following is the source IP configured for internal FGT services:

infrasigrp_0-1643035379532.png

It seems that the source-ip setting is not taken into account...

This is the config:

infrasigrp_1-1643035511618.png

Any ideas?

Hassan09
Staff

Hello,

 

It looks like you have disabled public IPs on the MGMT interface. MGMT interfaces must also be able to access the internet to interact with the Azure Management API.

Try running these commands and check whether resolution is working:

 

#exec enter vsys_hamgmt

#exe ping www.google.com

 

HA
infrasigrp
New Contributor II

Hello Hassan,

No, I haven't disabled public IPs on the dedicated mgmt interfaces.

infrasigrp_0-1643299570596.png

As well as:

infrasigrp_1-1643299674413.png

Regards

Arnaud

 

DanielCastillo
New Contributor

I had the same issue:

DanielCastillo_0-1652819445599.png

Moreover, the FortiGate lost connectivity; all the interfaces (except the heartbeat) were brought down:

DanielCastillo_1-1652819582993.png

 

 

infrasigrp

Still an issue today. The SDN connector sometimes cannot contact Azure AD:

infrasigrp_0-1664525688029.png

I'll definitely fall back on static / FQDN objects... a shame.

DanielCastillo

I resolved this. You must ALWAYS use the dedicated management interface, so the FortiGate can contact the DNS server and Azure. The dedicated interface doesn't depend on the Azure-FortiGate SDN failover logic, so it keeps working.

infrasigrp

Hello @DanielCastillo 

How do you force the management interface for the SDN connector? There is no "source-ip" or interface selection:

infrasigrp_0-1664539168698.png

 

 

DanielCastillo

When you use this interface, you can define a default route in the HA configuration. The behavior I noticed is: when a FortiGate boots, all the interfaces are down except the management one, so every route in the routing table is useless, and the FortiGate uses the default route from the HA config and the management interface (the only one that doesn't depend on the Azure-FortiGate SDN failover logic).
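For reference, a dedicated HA management interface with its own gateway is configured like this ("port4" and the gateway address are placeholders, not the actual values from this deployment):

```
config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "port4"      # placeholder: dedicated mgmt interface
            set gateway 10.0.4.1       # placeholder: gateway of the mgmt subnet
        next
    end
end
```

Because this gateway lives in the HA config rather than the routing table, it remains usable even while the data-plane interfaces are down during failover.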
