Hello,
We've set up an FGVM cluster on our Azure tenant, based on the Fortinet GitHub template https://github.com/fortinet/azure-templates/tree/main/FortiGate/AvailabilityZones/Active-Passive-ELB...
I originally set up the SDN connector to create firewall objects and, following the documentation, gave the "Reader" permission on the subscription to the two FGVM virtual machines in Azure. It worked for a while and I could create dynamic objects correctly.
After a few reboots and a few days in production, the connector stopped working. Following this KB: https://docs.fortinet.com/document/fortigate-public-cloud/7.0.0/azure-administration-guide/985498/tr... here is the debug log I got:
azd sdn connector AzureSDN prepare to update
azd sdn connector AzureSDN start updater process 881
azd sdn connector AzureSDN start updating
azd updater process 881 is updating
azd updater process 881 is updating
curl DNS lookup failed: management.azure.com
azd api failed, url = https://management.azure.com/subscriptions?api-version=2018-06-01, rc = -1,
azd failed to list subscriptions
azd failed to get ip addr list
azd reap child pid: 881
"curl DNS lookup failed": I don't understand this, since a "ping management.azure.com" resolves the address correctly:
fgvm-appliance # exec ping management.azure.com
PING arm-frontdoor-prod.trafficmanager.net (40.79.131.240): 56 data bytes
The two DNS servers set up on the FGVM are reachable...
Here is the SDN connector configuration (default from the GitHub template):
config system sdn-connector
edit "AzureSDN"
set type azure
set ha-status enable
set update-interval 30
next
end
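For reference, the azd debug output quoted earlier can be captured with the SDN connector debug commands described in the linked KB (exact flags may vary by FortiOS version; this is how I gathered it):

```
diagnose debug application azd -1
diagnose debug enable
# wait for the next connector update cycle (update-interval is 30s here)
diagnose debug disable
diagnose debug reset
```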
On the traffic side, if I try to traceroute that load-balancer IP 40.79.131.240... (I know this is one of multiple IPs, but it's representative), the packet goes out the WAN interface, from local. I can't trace anything past that; it goes through the Azure external load balancer and out to the internet.
#execute traceroute 40.79.131.240
id=20085 trace_id=1 func=print_pkt_detail line=5783 msg="vd-root:0 received a packet(proto=1, [redacted:IP of WAN interface]:33727->40.79.131.240:2048) from local. type=8, code=0, id=33727, seq=1."
id=20085 trace_id=1 func=init_ip_session_common line=5955 msg="allocate a new session-000053df"
traceroute to 40.79.131.240 (40.79.131.240), 32 hops max, 3 probe packets per hop, 84 byte packets
1 *id=20085 trace_id=2 func=print_pkt_detail line=5783 msg="vd-root:0 received a packet(proto=1, [redacted:IP of WAN interface]:33727->40.79.131.240:2048) from local. type=8, code=0, id=33727, seq=2."
The default route is via the WAN interface of the FGVM (port1); it's the default from the GitHub template.
config router static
edit 1
set gateway [redacted: external load-balancer IP]
set device "port1"
next
end
Any ideas?
Hello everyone,
Some updates on our issue:
I noticed that when I set up the FortiGuard DNS servers, it works again.
The system DNS is configured with a source IP and interface, so that I can create the appropriate rules between the FGT and our DNS servers. It seems to work correctly, since I can ping everything from the FortiGate and see names resolved correctly.
But the behavior of the SDN connector seems to be different: I tried to capture all the traffic between the FGT DNS source IP and my DNS server. I can see the packets containing the requests from my Azure servers, but nothing containing "management.azure.com" or "graph.microsoft.com".
Do you know if there is another DNS resolution mechanism used by SDN connectors to call the APIs? I couldn't see any network-related parameters in the SDN connector configuration...
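For context, the DNS source-IP setup described above corresponds roughly to this kind of configuration (the addresses and interface name below are placeholders, not our real values):

```
config system dns
    set primary 10.1.0.4
    set secondary 10.1.0.5
    set source-ip 10.0.1.10
    set interface-select-method specify
    set interface "port2"
end
```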
Thanks in advance,
Arnaud
Hello everyone,
I confirmed that no DNS traffic comes from the private IP I've set up. The following is the source IP configured for internal FGT services:
It seems that the source-ip setting is not taken into account...
This is the config:
Any ideas?
Hello,
It looks like you have disabled public IPs on the MGMT interface. MGMT interfaces must also be able to access the internet to interact with the Azure management API.
Try running these commands and check whether resolution is working:
#exec enter vsys_hamgm
#exe ping www.google.com
Hello Hassan,
No, I haven't disabled public IPs on the dedicated mgmt interfaces.
As well as:
Regards
Arnaud
I had the same issue:
Moreover, the FortiGate lost connectivity; all the interfaces (except the heartbeat) were brought down:
Still an issue today. The SDN connector sometimes cannot contact Azure AD:
I'll definitely fall back on static / FQDN objects... a shame.
I resolved this. You must always use the dedicated management interface, so the FortiGate can contact the DNS server and Azure. The dedicated interface doesn't depend on the Azure SDN failover mechanism, so it keeps working.
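In case it helps anyone doing the same fallback: a static FQDN address object can stand in for the dynamic SDN objects, along these lines (the object name and FQDN here are just examples):

```
config firewall address
    edit "azure-mgmt-fqdn"
        set type fqdn
        set fqdn "management.azure.com"
    next
end
```

Note that FQDN objects still rely on the FortiGate's own DNS resolution working, so they wouldn't have helped while the DNS lookups themselves were failing.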
Hello @DanielCastillo
How do you force the management interface for the SDN connector? There is no "source-ip" or interface selection:
When you use this interface, you can configure a default route in the HA configuration. The behavior I noticed is: when a FortiGate boots, all the interfaces are down except the management one, so every route in the routing table is useless, and the FortiGate uses the default route from the HA config and the management interface (the only one that doesn't depend on the Azure-FortiGate SDN mechanism).
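If it's useful to others, the dedicated HA management interface and its own gateway are configured under "config system ha"; a sketch (the interface name and gateway below are placeholders for your environment):

```
config system ha
    set ha-mgmt-status enable
    config ha-mgmt-interfaces
        edit 1
            set interface "port4"
            set gateway 10.0.4.1
        next
    end
end
```

This gateway lives outside the normal routing table, which is why it keeps working while the data-plane interfaces are down.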