1) Enable multicast forwarding:

conf system settings
    set multicast-forward enable
end

2) Configure the multicast-policy:

conf firewall multicast-policy
    edit 1
        set srcintf internal1
        set dstintf internal2
    end

3) Configure the firewall policy that allows the traffic. [Also considering conf global\tp-mc-skip-policy, which will skip firewall policies for multicast traffic.]

4) Configure the multicast group join on internal1:

conf vdom
    edit root
        conf router multicast
            conf interface
                edit internal1
                    conf join-group
                        edit address 239.100.100.101
                        edit address 239.100.111.99
                        edit address 239.100.111.100
                    end
end

It's unclear how to join-group to multiple groups. Thanks! Matt

[edit] I'll obviously have to test this with the prod multicast infrastructure, but for now, iperf will do the trick!
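For the multiple-group question, one way it could look, assuming join-group is a table keyed by the group address (I haven't verified this, so check the CLI reference for your FortiOS build; interface and group addresses are taken from the post above):

conf router multicast
    conf interface
        edit internal1
            conf join-group
                edit 239.100.100.101
                next
                edit 239.100.111.99
                next
                edit 239.100.111.100
                next
            end
        next
    end
end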
I've always hated static joins on Cisco routers when working in the financial sector with 200+ groups at any given time, due to the extra load it places. I don't think I'm concerned at that level, nor do I have multiple hops to be concerned with. It's simply:

host A > data to multicast group > VLAN200 on switch A > FortiGate internal1 > FortiGate internal2 > VLAN100 on switch A > host B

I think the above will work, but I need to test. I did some sniffing with iperf and it seemed to be fine. I was trying to use soni's win32 port of PackETH, without any luck (it wasn't writing the IGMP JOIN membership query packets properly).
"$-6 groups on a firewall probably would not make a big impact but it is a process that would run at interval X"

I really don't understand what you mean by "$-6 groups". In fact, I don't understand what you were trying to express in this sentence at all. Are you saying that "there will be latency introduced to the packet arrival time by processing these packets through firewall policies"? If so, I'm already prepared to troubleshoot the jitter/drift that may occur. Our middleware infrastructure can easily measure this (it's already built into the apps, hooray!).
"If you do igmp snooping enable and have a PIM/IGMP querier, the modern switches typically know this and respect or trust that multicast router and local active subscriptions."

PIM and IGMP querier are the same thing? I don't think they are, but I could be wrong. Maybe I don't understand well, but are you saying that the switch is already acting as a multicast router (likely speaking PIM)?
"On the IGMP create and joins issues, does your switch automatically forward traffic to a host within the same vlan and locally? and without a querier?"

If I did not have the IGMP querier running, then the IGMP snooping wouldn't work. Also, I do have "drop packets from unknown" set (as previously defined). If I had both of these things disabled, the multicast traffic would turn into broadcast traffic and hit every single node on the switch. The devs really didn't split the traffic up well enough (in my opinion), so the multicast groups are quite chatty toward the hosts that are subscribed. But this has little effect.
"One other issue with L3 mcast interfaces & why they are so much better: let's say you have joins on the FGT and bridging enabled, but no subscriber within the other L3 subnet; the forwarder process will always send that group, regardless of a true subscriber within the other network subnet."

Yes, since the FortiGate interface (internal1 in my case) is subscribed to the multicast group, it will receive packets pushed to that group. You are also saying that the FortiGate will then push the packets through to the other interface (internal2) and try to publish them to the multicast group. But there won't be any group members there, so my switch will drop the traffic thanks to the "drop packets from unknown" setting. I do see that this will cause undesirable traffic. :'(
"A true multicast routing would expire the flow upon no active subscription."

I am not clear on how to implement PIM routing, but would like to, since that last bit is not great, specifically because the firewall will have to inspect all those packets. The interface internal1 would still have to join the multicast group. Are you saying that's not correct? That the PIM router will dynamically join the group when it hears an IGMP JOIN request from a client on internal2? Thanks very much for your input, it's been invaluable! Thanks, Matt
"I really don't understand what you mean by '$-6 groups'. In fact, I don't understand what you were trying to express in this sentence at all. Are you saying that 'there will be latency introduced to the packet arrival time by processing these packets through firewall policies'? If so, I'm already prepared to troubleshoot the jitter/drift that may occur. Our middleware infrastructure can easily measure this (it's already built into the apps, hooray!)."

That should have been "5 to 6 groups". 5 to 6 groups won't make that much impact on the CPU, but if you place a lot of joins, that sooner or later becomes yet another process that tacks onto CPU and memory consumption. Since FWs are just that, firewalls, and not mainly routers, be advised of the impact this could or could not cause; that is all I'm warning you about.
"PIM and IGMP querier are the same thing? I don't think they are, but I could be wrong. Maybe I don't understand well, but are you saying that the switch is already acting as a multicast router (likely speaking PIM)?"

Totally wrong here; IGMP is group membership, PIM is Protocol Independent Multicast. They are 2 different functions, with different discovery and destination groups: 224.0.0.1 vs 224.0.0.13, for example.

13.0.0.224.in-addr.arpa domain name pointer pim-routers.mcast.net.
1.0.0.224.in-addr.arpa domain name pointer all-systems.mcast.net.

They both deal with multicast, but they are 2 unique beasts and play 2 unique roles. In some of the bigger networks I've worked on, you can enable and disable IGMP and PIM independently of each other. With a Cisco router it's automatic (you enable PIM... IGMP query is also enabled). FWIW: before PIM we used DVMRP or mrouted-OSPF, but typically anything modern since 2000 is PIM aware. IGMP has three versions, v1, v2 and v3; v3 is a unique version that provides both group and sender subscription matching. I would suggest you read up on IGMP and PIM from the RFC point of view. On the latter point, IGMP-snooping switches are not multicast routers all of the time; let me repeat, IGMP snooping deals with IGMP group subscriptions. As I type, I have a ton of L2 Cisco switches, all IGMP-snooping aware and enabled, but they are not multicast routers by a long shot.
"Yes, since the FortiGate interface (internal1 in my case) is subscribed to the multicast group, it will receive packets pushed to that group. You are also saying that the FortiGate will then push the packets through to the other interface (internal2) and try to publish them to the multicast group. But there won't be any group members there, so my switch will drop the traffic thanks to the 'drop packets from unknown' setting. I do see that this will cause undesirable traffic."

Thanks for that clarification, and that is what I was going towards.
"The interface internal1 would still have to join the multicast group. Are you saying that's not correct? That the PIM router will dynamically join the group when it hears an IGMP JOIN request from a client on internal2?"

Kinda correct. 1st off, PIM is not an IGMP querier process or function. 2nd, when you enable PIM on that interface, you get an IGMP querier by default and don't need the static joins you are doing. So the L3 interface would query for any active subscriptions, and if any are heard and a valid group is present, then it would AUTOMATICALLY forward the traffic. Everything else you are planning to do to control 5-6 groups could be eliminated by just enabling multicast routing between the 2 subnets and maybe one fwpolicy & a few fwpolicy-address entries. Since the subnets are local to the fw appliance, the RPF checks will pass, and as long as a fwpolicy is present then... bingo, you have senders and receivers aware and receiving the groups. I really think you're creating a lot of unwarranted work for nothing and overthinking this, imho.
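For reference, a minimal sketch of what that simpler approach might look like on the FortiGate, using the interface names from the earlier posts. The exact options vary by FortiOS version, and the srcaddr/dstaddr of "all" assumes the default "all" multicast-address object exists on your build, so treat this as a starting point rather than a verified config (sparse-mode will also want an RP, which is what the next post gets into):

conf router multicast
    set multicast-routing enable
    conf interface
        edit internal1
            set pim-mode sparse-mode
        next
        edit internal2
            set pim-mode sparse-mode
        next
    end
end

conf firewall multicast-policy
    edit 1
        set srcintf internal1
        set dstintf internal2
        set srcaddr all
        set dstaddr all
        set action accept
    next
end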
config router multicast
    set multicast-routing enable
    config interface
        edit port1
            set pim-mode sparse-mode
            set rp-candidate enable
            set rp-candidate-group multicast_port1
            set rp-candidate-priority 15
        end
end

...where multicast_port1 contains the address list of each of the multicast groups I would like to route? The concept you've described sounds appealing: internal1 will only JOIN the multicast group when a host off internal2 JOINs the group, but from what I read through the multicast tech note, it does not seem easy or even possible (without an RP configured). Am I missing something simple? Thanks, Matt
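If it helps, on the FortiOS releases I'm familiar with, rp-candidate-group takes the name of a router access-list rather than a firewall address group, so multicast_port1 would be defined along these lines (group addresses copied from the join-group list earlier in the thread; double-check the object type and exact syntax in your version's CLI reference before using this):

config router access-list
    edit multicast_port1
        config rule
            edit 1
                set prefix 239.100.100.101 255.255.255.255
                set exact-match enable
            next
            edit 2
                set prefix 239.100.111.99 255.255.255.255
                set exact-match enable
            next
            edit 3
                set prefix 239.100.111.100 255.255.255.255
                set exact-match enable
            next
        end
    next
end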