Support Forum
SecurityPlus
Contributor II

Switch Trunk Group - Hyper-V Server

I’m relatively new to Hyper-V Server and to managing FortiSwitches via a FortiGate. My environment is:

FortiGate 60F on firmware 7.0.9

Pair of S224E switches on firmware 7.2.2 connected via fortilink

Dell PowerEdge T740 server running Windows Server 2022 with the Hyper-V role, hosting two Windows Server 2022 VMs

The host server has four NIC ports: two 10G and two 1G.

 

I would like to use a trunk across the two FortiSwitches, with one Ethernet connection from each switch going to the two 10G ports on the Dell PowerEdge server, for greater reliability and potentially higher network throughput.

 

I created a trunk group on the FortiSwitches via the FortiGate as follows:

  1. Click New, then Trunk Group
  2. Name the trunk group
  3. MC-LAG: click Enabled
  4. Mode: click Active LACP
  5. Select 2 ports from the right-hand list, click OK
  6. Click OK
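For reference, the same trunk can also be created from the FortiGate CLI. This is a sketch of the managed-switch config; the switch serial, trunk name, and port names below are placeholders for your own, and for MC-LAG the trunk is defined on each peer switch:

```
config switch-controller managed-switch
    edit "S224EXXXXXXXXXX"          # placeholder FortiSwitch serial
        config ports
            edit "NicTeam-Trunk"    # placeholder trunk name
                set type trunk
                set mode lacp-active
                set mclag enable
                set members "port23" "port24"   # placeholder member ports
            next
        end
    next
end
```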

I then created a NIC Team in the Windows Server Host as follows:

  1. Name the team, e.g. NicTeam-01
  2. Verify the desired team members are checked
  3. Teaming mode: LACP
  4. Load balancing mode: Dynamic (verify correct setting)
  5. Standby adapter: None (verify correct setting)
  6. Click OK
  7. Click Refresh on Local Server to view the updated NIC team
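The steps above can also be done in one line of PowerShell. A sketch, assuming placeholder adapter names "NIC1" and "NIC2":

```powershell
# LBFO team matching the GUI steps above ("NIC1"/"NIC2" are placeholder adapter names)
New-NetLbfoTeam -Name "NicTeam-01" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```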

When I tried to create a Hyper-V virtual switch using the host's Virtual Switch Manager, NicTeam-01 was not listed. The only item I found that might relate to it is titled Microsoft Network Adapter Multiplexor Driver. Is this the proper name for the NIC team?

 

When I click OK to create the Virtual Switch, I get the following error:

Error applying Virtual Switch Properties changes

Failed while adding virtual Ethernet switch connections.

Attaching a virtual switch to an LBFO team is deprecated. Switch Embedded Teaming (SET) is an inbox replacement for this functionality. For more information on LBFO deprecation please see https://aka.ms/LBFODeprecation. To override this block, use the AllowNetLbfoTeams option in New-VMSwitch.
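The error itself names the override option. A sketch of what that would look like (the switch name is a placeholder, and note that LBFO remains deprecated even with the override):

```powershell
# Override the LBFO block when creating the virtual switch (not recommended long term)
New-VMSwitch -Name "vSwitch-01" -NetAdapterName "NicTeam-01" -AllowNetLbfoTeams $true
```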

 

I have done some research on this, but I can’t determine a proper way forward. Any suggestions?

12 REPLIES
Mohamed_Gaber
Contributor

When configuring LACP links, both links should be connected to a single switch or to stacked switches. Otherwise, you have to use switch-independent aggregation.

Mohamed Gaber
Cell : +201001615878
E-mail : mohamed.gaber@alkancit.com
gfleming

Please make sure you are answering with accurate information. LACP can be formed across two independent switches using MC-LAG, which the OP is using. Switch Independent mode should not be used in this case.

Cheers,
Graham
Mohamed_Gaber

Is MC-LAG supported on all switches?

Mohamed Gaber
Cell : +201001615878
E-mail : mohamed.gaber@alkancit.com
gfleming

Yes, on all models above the FSW-1XX series. The OP has 224E switches.

Cheers,
Graham
SecurityPlus
Contributor II

Thank you for the suggestions. I will give these changes a try. Appreciate it very much.

SecurityPlus
Contributor II

I changed the trunk group to use two ports on only one of the switches, with MC-LAG enabled and the mode set to Active LACP.

 

I set the Microsoft Server NIC team to team the two NICs set aside in the FortiSwitch as a trunk group, with Teaming mode: Switch Independent, Load balancing mode: Dynamic, and Standby adapter: None. When I clicked OK to add this as a virtual switch, I got the same error as before. Any further suggestions?

SecurityPlus
Contributor II

Any other ideas or suggestions for avoiding the "Attaching a virtual switch to an LBFO team is deprecated" error message? If possible, I would like to use a trunk group/team between the FortiSwitches and the Windows Server 2022 Hyper-V host.
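The replacement the error message points at, Switch Embedded Teaming (SET), is created directly on the physical NICs with no LBFO team in between. Note that SET only supports switch-independent teaming (no LACP), so the FortiSwitch ports would not be configured as an LACP trunk in that case. A sketch, with placeholder adapter and switch names:

```powershell
# Create a SET-enabled virtual switch directly on the two 10G NICs (no LBFO team)
New-VMSwitch -Name "SETswitch-01" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

# Optionally choose the load-balancing algorithm (HyperVPort or Dynamic)
Set-VMSwitchTeam -Name "SETswitch-01" -LoadBalancingAlgorithm Dynamic
```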

gfleming

It sounds like your MC-LAG configuration may not be complete. Have you followed the docs?


It sounds like a Standalone FortiGate Unit with Dual-Homed FortiSwitch Units topology. Refer to documentation here: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/780635/switch-redundancy-with-m...

 

For more details on setting up the ports and interfaces (especially the ICL) see this doc: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/801208/transitioning-from-a-for...

 

More info regarding the server connections: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/801194/deploying-mclag-topologi...

 

Also, for Hyper-V-specific configuration and troubleshooting, this is not the right forum, as we are not Microsoft folks. Do you have another FortiSwitch you can test the LACP configuration with?

Cheers,
Graham
SecurityPlus

Thanks Graham for these suggestions and the reference documentation.

 

I may well be incorrect, but I think that when the switches were installed a few years ago, an older FortiOS version, the prior FortiGate model, or some other factor prevented MC-LAG support. For whatever reason, switch 1 connects to FortiGate port A and switch 2 connects to FortiGate port B. Both A and B are interface members of a FortiLink (hardware switch).

 

Is MC-LAG generally considered the best approach when using two or more FortiSwitches managed by one FortiGate?

 

If so, would this be a good document to follow to convert the current topology to MC-LAG:

https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/801208/transitioning-from-a-for...

 

I realize that troubleshooting Hyper-V is not the focus of this forum. I started here wondering whether an incorrect FortiSwitch configuration might be the root cause of the problem.
