I’m relatively new to Hyper-V and to managing FortiSwitches via a FortiGate. My environment:
FortiGate 60F on firmware 7.0.9
Pair of S224E switches on firmware 7.2.2, connected via FortiLink
Dell PowerEdge T740 server running Windows Server 2022 with the Hyper-V role, hosting 2 Windows Server 2022 VMs
The host server has 4 NIC ports: 2 x 10G and 2 x 1G
I would like to create a trunk on the FortiSwitches (via the FortiGate), with one Ethernet connection from each switch going to the 2 x 10G ports on the server, for greater reliability and potentially higher throughput.
I created a Trunk Group via the FortiGate on the FortiSwitches as follows:
I then created a NIC Team in the Windows Server Host as follows:
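For reference, this is roughly the PowerShell equivalent of what I did in the NIC Teaming GUI (the adapter names are placeholders for my two 10G ports, and the LACP/Dynamic settings are just an example of how such a team is commonly configured):

    # Roughly what the Server Manager NIC Teaming wizard does under the hood.
    # "10G-1" and "10G-2" are placeholders for the host's two 10G adapter names.
    New-NetLbfoTeam -Name "NicTeam-01" -TeamMembers "10G-1","10G-2" `
        -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic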
When I tried to create a Hyper-V virtual switch using the host's Virtual Switch Manager, I didn't see NicTeam-01 listed. The only item I found that might relate to it is titled Microsoft Network Adapter Multiplexor Driver. Is that the proper name for this NIC team?
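For anyone checking the same thing, the team and its member NICs can be listed from PowerShell (a sketch; only the NicTeam-01 name is taken from my setup):

    # List the LBFO team and the physical NICs that belong to it
    Get-NetLbfoTeam
    Get-NetLbfoTeamNic -Team "NicTeam-01"

    # Check which adapter carries the "Microsoft Network Adapter Multiplexor Driver"
    # interface description
    Get-NetAdapter | Select-Object Name, InterfaceDescription, Status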
When I click OK to create the Virtual Switch, I get the following error:
Error applying Virtual Switch Properties changes
Failed while adding virtual Ethernet switch connections.
Attaching a virtual switch to an LBFO team is deprecated. Switch Embedded Teaming (SET) is an inbox replacement for this functionality. For more information on LBFO deprecation please see https://aka.ms/LBFODeprecation. To override this block, use the AllowNetLbfoTeams option in New-VMSwitch.
I have done some research on this, but I can’t determine a proper way forward. Any suggestions?
OK, you have to either use MCLAG or not use MCLAG. In the configuration you posted, you are enabling MCLAG for the trunk group. If your switches aren't configured for MCLAG, then you can't enable MCLAG on the trunk group. And if the trunk's ports are on two different switches, a LAG won't work unless you are using MCLAG.
I can't really answer whether MCLAG is best or which docs you should follow, because I don't have an accurate picture of your current physical topology or the other requirements on your network. MCLAG provides redundancy at the switch level (if SWA goes down, connections remain up on SWB) as well as bandwidth aggregation while both SWA and SWB are healthy. You can achieve redundancy with STP too, and in your case it might be simpler to just leverage STP, unless you know for sure you need more than 10 Gbps at the server interface.
I wouldn't go changing anything until you've identified your current state and fully understand what you need to do to get to your desired state.
Based on what you've told me, I assume your current topology may be this: https://docs.fortinet.com/document/fortiswitch/7.2.1/fortilink-guide/801204/single-fortigate-unit-ma... (two switches in a ring "stack" with one active link and one passive link going to the FGT).
Or it could be this:
I'm sorry to disappear; there has been some sickness in our group and general busyness. I will review the very helpful feedback and hopefully follow up further next week.
Switch Embedded Teaming (SET) is the inbox replacement for the deprecated LBFO teams. It isn't a separate feature you install; you create it directly with the New-VMSwitch cmdlet by passing it the physical adapters. Alternatively, you can keep the LBFO team and override the block with the AllowNetLbfoTeams parameter of New-VMSwitch, but that path remains deprecated. Follow the documentation provided by Microsoft for configuring and managing SET. Thanks
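As a rough sketch (the switch and adapter names below are just examples), the two options look like this. Note that SET teams the physical NICs itself, so the existing LBFO team isn't used at all, and SET only supports switch-independent teaming (no LACP), which also affects how the FortiSwitch trunk should be configured:

    # Option 1 (recommended): a SET virtual switch built directly on the physical NICs.
    # Remove the LBFO team first so the 10G ports are unbound:
    #   Remove-NetLbfoTeam -Name "NicTeam-01"
    New-VMSwitch -Name "vSwitch-SET" -NetAdapterName "10G-1","10G-2" `
        -EnableEmbeddedTeaming $true -AllowManagementOS $true

    # Option 2 (deprecated): bind the virtual switch to the existing LBFO team
    # by overriding the block, as the error message suggests.
    New-VMSwitch -Name "vSwitch-LBFO" -NetAdapterName "NicTeam-01" `
        -AllowNetLbfoTeams $true -AllowManagementOS $true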