I’m relatively new to Hyper-V and to managing FortiSwitches via a FortiGate.
FortiGate 60F on firmware 7.0.9
Pair of S224E switches on firmware 7.2.2, connected via FortiLink
Dell PowerEdge T740 server running Windows Server 2022 with the Hyper-V role, hosting 2 Windows Server 2022 VMs
The host server has 4 NIC ports: two 10G and two 1G
Would like to use a trunk on the FortiSwitches, with one Ethernet connection coming from each switch going to the two 10G ports on the Dell PowerEdge server. Would like to do this for greater reliability and potentially greater networking speed.
I created a Trunk Group via the FortiGate on the FortiSwitches as follows:
I then created a NIC Team in the Windows Server Host as follows:
When I tried to create a Hyper-V virtual switch using the host's Virtual Switch Manager, I didn't see NicTeam-01 listed. The only item I found that might relate to NicTeam-01 is titled "Microsoft Network Adapter Multiplexor Driver". Is this the proper name for this NIC Team?
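For what it's worth, "Microsoft Network Adapter Multiplexor Driver" is the interface description Windows assigns to an LBFO team's virtual interface, so that entry is the team. A quick way to confirm from an elevated PowerShell prompt (a sketch; "NicTeam-01" is the team name from the post above):

```powershell
# Show the LBFO team, its member NICs, and its virtual team interface.
Get-NetLbfoTeam -Name "NicTeam-01"       # team status and teaming mode
Get-NetLbfoTeamNic -Team "NicTeam-01"    # the team's virtual interface

# The team interface shows up as a "Multiplexor" adapter.
Get-NetAdapter | Where-Object InterfaceDescription -like "*Multiplexor*"
```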
When I click OK to create the Virtual Switch, I get the following error:
Error applying Virtual Switch Properties changes
Failed while adding virtual Ethernet switch connections.
Attaching a virtual switch to an LBFO team is deprecated. Switch Embedded Teaming (SET) is an inbox replacement for this functionality. For more information on LBFO deprecation please see https://aka.ms/LBFODeprecation. To override this block, use the AllowNetLbfoTeams option in New-VMSwitch.
I have done some research on this, but I can’t determine a proper way forward. Any suggestions?
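The deprecation message itself names the supported path: instead of creating an LBFO team first and binding a virtual switch to it, build the team inside the virtual switch with Switch Embedded Teaming (SET). A minimal sketch, assuming the two 10G adapters are named "NIC1" and "NIC2" (placeholders; list yours with `Get-NetAdapter`):

```powershell
# Create a SET-based virtual switch directly from the physical adapters.
# No LBFO team is needed -- SET does the teaming inside the vSwitch.
New-VMSwitch -Name "vSwitch-SET" `
    -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true `
    -AllowManagementOS $true

# SET is switch-independent only; optionally choose the load-balancing
# algorithm (HyperVPort or Dynamic).
Set-VMSwitchTeam -Name "vSwitch-SET" -LoadBalancingAlgorithm Dynamic
```

Note that SET operates only in switch-independent mode (it does not speak LACP), so the FortiSwitch ports it connects to should be ordinary access/trunk ports, not members of an LACP aggregate.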
When configuring LACP links, they should be connected to one switch or to stacked switches. You have to use switch-independent aggregation instead.
Please make sure you are answering with accurate information. LACP can be formed across two independent switches using MC-LAG, which the OP is using. Switch Independent mode should not be used in this case.
Is MC-LAG supported on all switches?
On all switches above the FSW-1XX series, yes. The OP has the 224E.
Thank you for the suggestions. I will give these changes a try. Appreciate it very much.
I changed the Trunk Group to use two ports on only one of the switches, set MC-LAG to Enabled, and set Mode to Active LACP.
I set the Microsoft Server NIC Team to team the two NICs set aside as a trunk group on the FortiSwitch, with Teaming mode: Switch Independent, Load balancing mode: Dynamic, Standby adapter: None. When I clicked OK to add this as a virtual switch, I again got the same error as earlier. Any further suggestions?
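The error is triggered by the presence of an LBFO team itself, regardless of which teaming mode it uses, so changing the team's settings will not clear it. As the error text hints, the two ways out look roughly like this (switch and adapter names are placeholders):

```powershell
# Option A (not recommended, deprecated path): override the LBFO block,
# as the error message suggests.
New-VMSwitch -Name "vSwitch-LBFO" -NetAdapterName "NicTeam-01" `
    -AllowNetLbfoTeams $true

# Option B (recommended): remove the LBFO team and build a SET switch
# from the freed physical adapters instead.
Remove-NetLbfoTeam -Name "NicTeam-01" -Confirm:$false
New-VMSwitch -Name "vSwitch-SET" -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true
```

If you go with SET, remember it is switch-independent only, so the corresponding FortiSwitch ports should not be configured as an LACP trunk.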
Any other ideas or suggestions for avoiding the "Attaching a virtual switch to an LBFO team is deprecated" error message? If possible, I would like to use a trunk group/team between the FortiSwitch and the Windows Server 2022 Hyper-V host.
It sounds like possibly your MC-LAG configuration is not complete. Have you followed the docs?
It sounds like a Standalone FortiGate Unit with Dual-Homed FortiSwitch Units topology. Refer to documentation here: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/780635/switch-redundancy-with-m...
For more details on setting up the ports and interfaces (especially the ICL) see this doc: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/801208/transitioning-from-a-for...
More info regarding the server connections: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/801194/deploying-mclag-topologi...
Also, for Hyper-V-specific configuration and troubleshooting, this is not the right forum, as we are not Microsoft folks. Do you have another FortiSwitch you can test the LACP configuration with?
Thanks Graham for these suggestions and the reference documentation.
I may well be incorrect, but I think that when the switches were installed a few years ago, an older FortiOS version, the prior FortiGate model, or another factor meant MC-LAG was not supported. For whatever reason, switch 1 connects to FortiGate port A and switch 2 connects to FortiGate port B. Both A and B are interface members of a FortiLink (Hardware Switch).
Is MC-LAG generally considered best practice when using two or more FortiSwitches managed by one FortiGate?
If Yes, would this be a good document to follow to convert from the current topology to MCLAG:
I realize that troubleshooting Hyper-V is not the focus of this forum. I started here because I wondered whether an incorrect FortiSwitch configuration might be the root cause of the problem.