Cybersecurity Forum


CaroKiel
New Contributor II

Connection between VMware ESXi and FG 240D

Hello,

I have a question about configuring interfaces in a HA environment.

As you can see in the attachment, we have a three-node ESX cluster that hosts (among others) two VMs: a DC and a web server. The cluster also uses two distributed virtual switches, one for LAN and one for the DMZ. The DvSwitch "LAN" is connected to a hardware switch that also has a leg connected to the Fortinet cluster (2x FG240D, active-passive).

The DvSwitch "DMZ" consists of two GBit-NICs per ESX that are directly connected to the Fortinet cluster. Details can be seen in the attachment.

Unfortunately, I'm actually not able to ping the web server from the DC and vice versa.

The FG cluster has routes into both subnets, but it can only ping the DC; the web server is not answering. And yes, there are rules configured to allow any traffic between the two subnets for testing purposes.

Does anyone have an idea what my problem might be related to?

Any help is appreciated; if you need more details, just let me know.

All the best from Germany,

Caroline

3 REPLIES
Not applicable

I believe you are in a scenario where the FortiGates act like two different switches, so information about how your FGT cluster is running would help (whether it is FGCP, and whether it is active-active or active-passive; I will assume active-passive so it is simpler to debug). As far as I know, FGCP creates a separate L2 domain per FGT when running in NAT/route mode (your FortiGates are not running in transparent mode, right?) - http://kb.fortinet.com/kb/viewContent.do?externalId=FD31396
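Just so we are on the same page, a plain FGCP active-passive setup on both units looks roughly like this (group name, password, heartbeat ports and monitored ports are only placeholders, adjust them to your 240Ds):

config system ha
    # sketch only - group name, password and port lists are placeholders
    set mode a-p
    set group-name "FGT-HA"
    set password <your-ha-password>
    set hbdev "port15" 50 "port16" 50
    set monitor "port1" "port2"
end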

So, if I presumed correctly, let me now make some presumptions about the VMware side ;)

I am not sure what the default NIC teaming setting in VMware is, but IP Hash (static LAG) and LACP will not work because of the split L2 domains described above, so the only options you can use are the ones that do not rely on LACP or beacons. VMware beacon probing in particular can be tricky with a FortiGate cluster, because the beacons would have to traverse from an interface on one cluster member to the corresponding interface on the other cluster member – slide #9 has a comparison table of the options - http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/support/landing-pages/virtual-suppo...
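If you want to double-check what the DMZ port group is using right now, a quick PowerCLI query like this should show it (the port group name "DMZ" is only my guess from your drawing):

# PowerCLI sketch - show load balancing / failure detection / uplink order
# for the DMZ port group ("DMZ" is an assumed name, use your real one)
Get-VDPortgroup -Name "DMZ" | Get-VDUplinkTeamingPolicy | Format-List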

But if I am right about the beacon issue, your only remaining option is to use explicit failover order and make it match the FortiGate active/passive cluster membership (i.e. put the vmnic that is connected to the active FGT cluster member higher in the failover order).
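In PowerCLI that change would look roughly like this (the port group and uplink names are assumptions, use the ones from your dvSwitch, and list the uplink that goes to the active FGT member as the active one):

# PowerCLI sketch - explicit failover order, link-status detection (no beacons),
# only the uplink towards the active FGT member set as active.
# "DMZ", "dvUplink1" and "dvUplink2" are assumed names.
Get-VDPortgroup -Name "DMZ" | Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -LoadBalancingPolicy ExplicitFailover `
        -FailoverDetectionPolicy LinkStatus `
        -ActiveUplinkPort "dvUplink1" -StandbyUplinkPort "dvUplink2"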

Anyway, there are variables that you still need to test/check, such as failback (how the VMware NIC teaming will behave when the FGT master unit returns to operation after a failure/reboot, and whether the FortiGate matches that behaviour), so you may need to adjust the FGT override/preemption setting to force this match.
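On the FortiGate side, the knob I mean is the HA override/priority, roughly like this (the priority values are just examples):

config system ha
    # sketch - only on the unit that should normally be master;
    # enable this only if it matches the failback behaviour you set on VMware
    set override enable
    set priority 200
end
# on the second unit keep a lower priority, e.g. "set priority 100"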

Additional information, such as whether the web server can ping the FortiGate's IP address on the DMZ interface while running on each of the VMware hosts, including tests where you disconnect/disable different DMZ interfaces on the VMware hosts, would help to determine how the VMware hosts are sending the DMZ traffic through the FortiGates.
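From the FortiGate CLI you can also test in the other direction and watch what actually arrives on the DMZ interface, for example (the interface name "dmz" and the addresses are placeholders for your real ones):

# sketch - source the ping from the DMZ interface address, then sniff ICMP
# ("dmz", 192.168.20.1 and 192.168.20.10 are placeholder names/addresses)
execute ping-options source 192.168.20.1
execute ping 192.168.20.10
diagnose sniffer packet dmz 'host 192.168.20.10 and icmp' 4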

Please let me know if this helps; if you feel comfortable sharing additional information, I will be glad to help further.

Regards,

Felicio Santos, CAPM
HP MASE FlexNetwork v1, MCITP 2008 SRV, ENT, ENT Messaging
FTNT NSE4 / MCSE NT,2000,2003 / MCSA 2000,2003,2008+SEC,Office365 / Network+

CaroKiel
New Contributor II

Dear Felicio,

thank you very much for your detailed explanations. I reconfigured the VMware side as per your recommendations and it's just working now.

Thank you again, you really saved my week!

All the best to you,

Caroline

r_fantini
New Contributor

If FW01 is directly connected to ESX01, I think you have connectivity only when Web01 is running on top of the ESX01 host.

In the other cases I suppose the ping fails. I suggest creating a VLAN on the physical switch for the DMZ (or using another physical switch), then connecting the hosts and the two firewalls to that switch to make sure everything works fine.
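Just as an illustration, on a Cisco-style switch that DMZ VLAN could look like this (VLAN ID and port numbers are examples only):

! sketch - dedicated DMZ VLAN on the physical switch (IDs/ports are examples)
vlan 20
 name DMZ
interface GigabitEthernet1/0/10
 description ESX01 DMZ uplink
 switchport mode access
 switchport access vlan 20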

Also keep in mind that you are building an infrastructure with redundancy (you have bought two firewalls), so the better way is to have a core switch stack (or an equivalent) carrying both VLANs, and to connect the firewalls and the ESX hosts distributed across these two switches, so you don't have any single point of failure.


Best Regards

Roberto