We have a /24 LAN configured on a vlan interface. It has been requested to segregate one host on the LAN so it can only reach other LAN hosts via defined policies.
Is it possible to do this without changing the host's IP address, subnet, and default gateway?
For instance, is there any way that two VLANs can be treated as one interface in terms of their subnet, with traffic between them controlled by policies?
We have a lot of flexibility with VLAN configuration. We do not currently use zones but I do not believe this would help.
The managed switches support private VLANs, but this is not an option since we have multiple switches on the LAN (private VLAN isolation is restricted to the local switch only).
From what I can see this is not possible, but it is certainly worth asking.
If there are no other options we'll just assign a new IP range to the host and proceed with a regular layer 3 solution. But it would be very nice to do this somehow "in the background". And it would be very useful elsewhere - Oh, all those stray devices I could isolate!
I have this kind of setup on one of my boxes. I believe this is referred to as a VLAN translation setup. Here is a quick, simplified overview...
You have your "normal" unsecured, everything-else VLAN. You then also have one or more "secured" VLANs depending on your segregation needs. You have a FortiGate operating in transparent mode connected to a switch. On that switch, devices are plugged into ports that are ONLY allowed to use one VLAN. So your users and upstream routers are connected to VLAN 1, a web server is on VLAN 2, and a database is on VLAN 3 (for example). The web server's switch port is only allowed VLAN 2, the database port is only allowed VLAN 3, and all other ports are only allowed VLAN 1. You then have your FortiGate plugged into the switch on a trunk port that is allowed VLANs 1, 2, and 3. You create rules that state users can connect from VLAN 1 to the web server on VLAN 2 on the appropriate ports. The web server can connect to the backend database from VLAN 2 to VLAN 3 on the appropriate ports, but users on VLAN 1 cannot talk to the database due to the implicit deny.
What happens is that in transparent mode, the firewall bridges the VLAN interfaces, learning which MAC addresses sit behind each one. So your web server on VLAN 2 cannot see any other devices directly, because only it and the firewall sit on VLAN 2. The firewall forwards frames toward the MAC addresses it has learned on the other VLANs, so traffic can flow, and the same holds true in reverse for users reaching the web server, as long as the appropriate firewall policies are in place.
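A minimal FortiOS CLI sketch of that layout, assuming a standalone transparent-mode unit with the trunk on port1. The interface names, VLAN IDs, management IP, and the "web-server" address object are all illustrative, not from the original post; check your firmware's CLI reference for exact syntax.

```
# Put the unit in transparent mode (standalone, not VDOM).
config system settings
    set opmode transparent
    set manageip 192.168.1.254/24
end

# One VLAN subinterface per segment, all on the trunk port.
# Interfaces in the same forward-domain are bridged together.
config system interface
    edit "vlan1-users"
        set interface "port1"
        set vlanid 1
    next
    edit "vlan2-web"
        set interface "port1"
        set vlanid 2
    next
    edit "vlan3-db"
        set interface "port1"
        set vlanid 3
    next
end

# Users -> web server on HTTP/HTTPS only; anything not
# matched by a policy hits the implicit deny.
config firewall policy
    edit 1
        set srcintf "vlan1-users"
        set dstintf "vlan2-web"
        set srcaddr "all"
        set dstaddr "web-server"
        set service "HTTP" "HTTPS"
        set action accept
        set schedule "always"
    next
end
```

A second policy from vlan2-web to vlan3-db on the database port would follow the same pattern.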
Doing it this way allows you to scale your setup: you can just add more ports to the secured VLAN and protect more servers behind the firewall without ever changing any IP settings. The caveats are that machines on the same VLAN have unrestricted access to each other, and you will eventually need aggregated interfaces to make sure you have enough throughput. You can also do this in a cheap fashion by directly connecting the devices to the physical interfaces on the firewall and forgetting about VLANs, but that doesn't scale.
If you already have a NAT-mode FortiGate, you need to enable VDOMs and create a new transparent VDOM, which adds some complexity. You could also just build a transparent-mode configuration from the ground up if that is the device's only function.
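If you do go the VDOM route, the rough CLI sequence looks like the sketch below. The VDOM name and management IP are illustrative, and the exact commands vary by firmware, so treat this as a starting point rather than a recipe.

```
# Enable multi-VDOM mode (this logs you out; on older firmware the
# equivalent is "set vdom-admin enable" under config system global).
config system global
    set vdom-mode multi-vdom
end

# Create the new VDOM and switch it to transparent mode.
config vdom
    edit "transparent-lan"
        config system settings
            set opmode transparent
            set manageip 192.168.1.254/24
        end
    next
end

# From the global context, move the trunk interface
# (and thus its VLAN subinterfaces) into the new VDOM.
config global
    config system interface
        edit "port1"
            set vdom "transparent-lan"
        next
    end
end
```

Your existing NAT-mode configuration stays in the root VDOM, untouched.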
CISSP, NSE4
VDOMs add up on smaller devices because, in effect, you spawn worker processes for each VDOM separately. It also becomes a chore to synchronize address and custom service lists between them. The primary reasons for VDOM separation are to have a firewall that operates in transparent and NAT mode at the same time, or administrative separation. You can set minimum and maximum values for certain resources like VPN tunnels, policies, and sessions, but not really a maximum/minimum on CPU/memory, so resource management on smaller boxes can be tricky if you're using a lot of UTM functions.
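For reference, those per-VDOM resource caps live under `system vdom-property` in the global context. A hedged sketch, with the VDOM name and numbers purely illustrative (the value order and available fields differ between firmware versions, so verify against your CLI reference):

```
config global
    config system vdom-property
        edit "transparent-lan"
            # Each setting takes a pair of values (limit / guaranteed).
            set session 50000 10000
            set firewall-policy 200 50
            set ipsec-phase1 10 2
        next
    end
end
```

Note there is no equivalent knob for CPU or memory, which is the limitation mentioned above.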
CISSP, NSE4
If you do activate VDOMs on your device, I would be very interested in knowing how much of a cpu/memory/throughput performance hit your Fortinet takes after VDOM implementation.
I am interested because I have to decide how to segment my own FortiWiFi 60D network, and I want to optimize both multi-departmental segmentation and packet traffic flow. (VLANs are very popular and VLAN tagging is very efficient, but what is the VDOM load on the Fortinet hardware, and does it impose an overall performance penalty?)
A two-VDOM configuration, with one VDOM loaded up with multiple VLANs, could become a very desirable configuration if Fortinet performance remains robust and real-time monitoring and occasional packet captures are not impacted adversely.
If you go this route, please let us all know what it was like.
Good luck,
Spock out...
Thanks Kenundrum for your fast reply to my VDOM vs VLAN performance query!
Journeyman doesn't say what Fortinet device he is running, so, hopefully, he will see your posting and dig a little deeper into multiple VDOM resource utilization if, like me, he has a less powerful Fortinet deployed. (Better to consider a change like this before taking the plunge rather than finding out later that routing performance could dive into a shallow puddle on an overloaded box.)
Question: is there an in-depth KB article on VDOM vs. VLAN implementation that relates the network architecture choice to Fortinet device CPU/RAM/firmware version or model numbers? (In other words, how can we know before "experimenting" with a production configuration?) Some guidelines would be helpful and would probably reduce the number of "Oh God!" moments in the network admin community.
Spock out...
@Spock, yes this is a 60D cluster. We intend to swap to 60E for asset management rather than performance reasons.
Reading through this discussion I'm pretty sure we can accommodate all the issues raised and have a stable and well-behaved solution. That said there is some risk in this approach and we have not yet made our decision to proceed.
We have a small test area where we can evaluate our more adventurous improvements, but beyond verifying basic functionality this one will be hard to test as some important LAN elements are missing from the test environment.
Happy to post back with results if we proceed.
For those interested in the results of this solution, we will not proceed for our immediate requirement and will implement plain boring layer 2 & 3 segregation.