Switch Trunk Group - Hyper-V Server
I’m relatively new to Hyper-V Server and to managing FortiSwitches via a FortiGate. My setup:
- FortiGate 60F on firmware 7.0.9
- Pair of S224E switches on firmware 7.2.2, connected via FortiLink
- Dell PowerEdge T740 server running Windows Server 2022 with the Hyper-V role, hosting two Windows Server 2022 VMs
- The host server has four NIC ports: two 10G and two 1G
I would like to create a trunk on the FortiSwitches, with one Ethernet connection from each switch going to the two 10G ports on the Dell PowerEdge server, for greater reliability and potentially greater network throughput.
I created a trunk group on the FortiSwitches via the FortiGate as follows (a rough CLI equivalent is sketched after these steps):
- Click New, Trunk Group
- Name the trunk group
- MC-LAG: Enabled
- Mode: Active LACP
- Select two ports from the right-hand pane, click OK
- Click OK
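For reference, my understanding is that the rough CLI equivalent from the FortiGate looks something like the following; the switch serial, trunk name, and member ports are placeholders, not my actual values:

```
config switch-controller managed-switch
    edit "S224EXXXXXXXXXX"
        config ports
            edit "NicTeam-Trunk"
                set type trunk
                set mode lacp-active
                set mclag enable
                set members "port23" "port24"
            next
        end
    next
end
```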
I then created a NIC team on the Windows Server host as follows (a rough PowerShell equivalent follows the steps):
- Name the team, e.g. NicTeam-01
- Verify the desired team members are checked
- Select Teaming mode: LACP
- Load balancing mode: Dynamic (verify correct setting)
- Standby adapter: None (verify correct setting)
- Click OK
- Click Refresh on Local Server to view the updated NIC team settings
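The GUI steps above correspond roughly to this PowerShell; “NIC1” and “NIC2” are placeholder adapter names, not the real ones on my host:

```powershell
# Rough equivalent of the GUI teaming steps above (placeholder adapter names).
# Use Get-NetAdapter to find the actual adapter names.
New-NetLbfoTeam -Name "NicTeam-01" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```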
When I tried to create a Hyper-V virtual switch using the host’s Virtual Switch Manager, I didn’t see NicTeam-01 listed. The only item I found that might relate to it is titled “Microsoft Network Adapter Multiplexor Driver”. Is that the proper name for this NIC team?
When I click OK to create the Virtual Switch, I get the following error:
Error applying Virtual Switch Properties changes
Failed while adding virtual Ethernet switch connections.
Attaching a virtual switch to an LBFO team is deprecated. Switch Embedded Teaming (SET) is an inbox replacement for this functionality. For more information on LBFO deprecation please see https://aka.ms/LBFODeprecation. To override this block, use the AllowNetLbfoTeams option in New-VMSwitch.
I have done some research on this, but I can’t determine a proper way forward. Any suggestions?
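For anyone reading later, my understanding of the SET alternative the error points to is roughly the following; adapter and switch names are placeholders, and from what I have read SET only supports switch-independent teaming, not LACP:

```powershell
# Sketch of the Switch Embedded Teaming (SET) replacement named in the error.
# "NIC1"/"NIC2" and the switch name are placeholders.
# Note: a SET team is switch-independent; it does not negotiate LACP with the switch.
New-VMSwitch -Name "vSwitch-SET" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Optional: pick the load-balancing algorithm for the SET team
Set-VMSwitchTeam -Name "vSwitch-SET" -LoadBalancingAlgorithm Dynamic
```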
Labels: FortiGate, FortiSwitch
When configuring LACP links, they should be connected to one switch or to stacked switches. You have to use switch-independent aggregation.
Cell : +201001615878
E-mail : mohamed.gaber@alkancit.com
Please make sure you are answering with accurate information. LACP can be formed across two independent switches using MC-LAG, which is what the OP is using. Switch Independent mode should not be used in this case.
Graham
Is MC-LAG supported on all switches?
Cell : +201001615878
E-mail : mohamed.gaber@alkancit.com
On all switches above the FSW-1XX series, yes. The OP has 224E switches.
Graham
Thank you for the suggestions. I will give these changes a try. Appreciate it very much.
I changed the trunk group to use two ports on only one of the switches, with MC-LAG enabled and Mode set to Active LACP.
I set the Windows Server NIC team to team the two NICs set aside in the FortiSwitch trunk group, with Teaming mode: Switch Independent, Load balancing mode: Dynamic, and Standby adapter: None. When I clicked OK to add this as a virtual switch, I again got the same error as earlier. Any further suggestions?
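From what I can tell, the block is on binding a virtual switch to any LBFO team, regardless of teaming mode, so Switch Independent would not clear the error. This is roughly how I am checking the team, and how I would remove it if I go the SET route (team name as created above):

```powershell
# Check the existing LBFO team and its members
Get-NetLbfoTeam
Get-NetLbfoTeamMember

# If moving to SET, remove the LBFO team so the physical NICs are free
# to be bound to a new virtual switch (team name assumed from above)
Remove-NetLbfoTeam -Name "NicTeam-01"
```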
Any other ideas or suggestions for avoiding the “attaching a virtual switch to an LBFO team is deprecated” error message? If possible, I would like to use a trunk group/team between the FortiSwitches and the Windows Server 2022 Hyper-V host.
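One path, if keeping an LBFO team turns out to be a hard requirement, seems to be the override that the error text itself mentions; the syntax below is my best reading of it and the names are placeholders:

```powershell
# Override sketch using the AllowNetLbfoTeams option named in the error message.
# "NicTeam-01" is the LBFO team interface and "vSwitch-LBFO" is a placeholder name;
# the error text presents this as an override of the block, not a recommended path.
New-VMSwitch -Name "vSwitch-LBFO" -NetAdapterName "NicTeam-01" `
    -AllowNetLbfoTeams $true -AllowManagementOS $true
```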
It sounds like possibly your MC-LAG configuration is not complete. Have you followed the docs?
It sounds like a Standalone FortiGate Unit with Dual-Homed FortiSwitch Units topology. Refer to documentation here: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/780635/switch-redundancy-with-m...
For more details on setting up the ports and interfaces (especially the ICL) see this doc: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/801208/transitioning-from-a-for...
More info regarding the server connections: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/801194/deploying-mclag-topologi...
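In case it saves a lookup, the ICL piece from the FortiGate CLI is roughly the following; the switch serial and trunk name are placeholders, and the linked docs are the authority on the exact steps:

```
config switch-controller managed-switch
    edit "S224EXXXXXXXXX1"
        config ports
            edit "ISL-to-peer-trunk"
                set mclag-icl enable
            next
        end
    next
end
```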
Also, for Hyper-V-specific configuration and troubleshooting this is not the right forum, as we are not Microsoft folks. Do you have another FortiSwitch you can test the LACP configuration with?
Graham
Thanks Graham for these suggestions and the reference documentation.
I may well be incorrect, but I think that when the switches were installed a few years ago, an older FortiOS version, the prior FortiGate model, or some other factor meant MC-LAG was not supported. For whatever reason, switch 1 connects to FortiGate port A and switch 2 connects to FortiGate port B. Both A and B are interface members of a FortiLink (hardware switch).
Is MC-LAG generally considered the best approach when using two or more FortiSwitches managed by one FortiGate?
If yes, would this be a good document to follow to convert from the current topology to MC-LAG:
I realize that troubleshooting Hyper-V is not the focus of the forum. I started here wondering if an incorrect FortiSwitch configuration might be the root cause of the problem.
