mbrowndcm
New Contributor III

Multicast " bridging" and joining multiple groups

Hello, I am interested in bridging multicast traffic across a FortiGate. It appears to be either very simple or very complicated. I have reviewed the technote on multicast, and I am unsure whether I need to worry about configuring PIM routing. If not, then I believe I would have to configure the join-group on the interface that is on the same VLAN as the multicast traffic/group. Here is a diagram for reference. Has anyone configured multicast "bridging"? Am I missing anything? Thanks, Matt
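
For reference, a minimal sketch of the non-PIM "bridging" approach that comes up later in this thread: enable multicast forwarding and add a multicast firewall policy. This is illustrative only, not a verified config; the interface names and group address are simply the ones used elsewhere in the thread.

 ## sketch only -- verify option names against your FortiOS release
 config system settings
  set multicast-forward enable
 end

 config firewall multicast-policy
  edit 1
   set srcintf internal1
   set dstintf internal2
   set dstaddr 239.100.112.112 255.255.255.255
  end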
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
1 Solution
mbrowndcm
New Contributor III

[I guess I needed to take the weekend to sleep on this... I've corrected my questions as necessary.] Is there any way that I can configure the interface near the sender to dynamically register as a router for certain multicast groups only when receiving nodes JOIN the groups on the interface near the receivers? It doesn't seem that I can leverage functionality as an RP for this, correct? Like a JOIN or PRUNE PIM packet? Is it standard to just allow multicast to always hit the interface? I'm concerned with maintaining the lowest possible latency between the two subnets, so it is mildly concerning that dynamic JOIN/PRUNE doesn't seem possible. But it is also quite possible I'm missing the point. Any assistance is appreciated. Thanks, Matt
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]


" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
17 Replies
mbrowndcm
New Contributor III

Excellent! My switch is sending the following query:

 IGMP v2, Type 0x11 (membership query), destination 224.0.0.1 (all-hosts.mcast.net)

This is not all-routers.mcast.net, obviously. Will I then need to set up a querier on the FortiGate, or should I simply expect that the PIM hello packets will be sniffed by the switch?
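
For orientation, a rough sketch of where the per-interface IGMP knobs live in the FortiOS CLI; the interval values below are placeholders, and the comment reflects the behaviour described in the next reply rather than anything verified here.

 ## sketch only -- interval values are placeholders
 config router multicast
  set multicast-routing enable
  config interface
   edit internal1
    set pim-mode sparse-mode
    ## enabling PIM on the interface also makes the FortiGate
    ## participate in IGMP on that segment (see emnoc's reply below)
    config igmp
     set version 2
     set query-interval 125
     set query-max-response-time 10
    end
   end
  end
 end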
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
emnoc
Esteemed Contributor III

No. On the FortiGate, as soon as you enable PIM, that should turn on IGMP. The switch is inspecting the IGMP queries and couldn't care less about the PIM querier. By default, Cisco and most other switch vendors query at a rate of one every minute (60 secs). e.g. (a typical Cisco router):

 Internet address is 10.8.11.3/28
 IGMP is enabled on interface
 Current IGMP host version is 2
 Current IGMP router version is 2
 IGMP query interval is 60 seconds
 IGMP querier timeout is 120 seconds
 IGMP max query response time is 10 seconds
 Last member query count is 2
 Last member query response interval is 1000 ms
 Inbound IGMP access group is not set
 IGMP activity: 0 joins, 0 leaves
 Multicast routing is enabled on interface
 Multicast TTL threshold is 0
 Multicast designated router (DR) is 0.0.0.0
 IGMP querying router is 10.8.11.3 (this system)
 No multicast groups joined by this system
 IGMP snooping is globally disabled
 IGMP snooping is disabled on this interface
 IGMP snooping fast-leave (for v2) is disabled and querier is disabled
 IGMP snooping explicit-tracking is enabled
 IGMP snooping last member query response interval is 1000 ms
 IGMP snooping report-suppression is enabled
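
One way to confirm what each side is actually sending is to sniff for the well-known groups on the FortiGate itself. A sketch, reusing an interface name from this thread: 224.0.0.1 carries IGMP general queries and 224.0.0.13 carries PIM hellos.

 ## watch for IGMP general queries and PIM hellos on the receiver-side interface
 ## (verbosity 4 prints the packet header plus the interface name)
 diag sniffer packet internal2 'host 224.0.0.1 or host 224.0.0.13' 4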

PCNSE 

NSE 

StrongSwan  

mbrowndcm
New Contributor III

Thanks emnoc. Here is the result. We'll be using iperf to test multicast from vlan200 to vlan100, interconnected through the FortiGate's internal1 and internal2 (respectively).

 Start sniffing on a host that is not subscribed to a multicast group on the 192.168.200.0 subnet (try sharkbox, 192.168.200.101): ip.addr == 239.100.112.112
 Start sniffing on a host that is not subscribed to a multicast group on the 192.168.100.0 subnet (try arabell's machine): ip.addr == 239.100.112.112
 Start a sniffer on the internal1 interface of the FortiGate: diag sniffer packet internal1 'host 239.100.112.112'
 Start a sniffer on the internal2 interface of the FortiGate: diag sniffer packet internal2 'host 239.100.112.112'
conf vdom
 edit root
 
 config router multicast
 set multicast-routing enable
 set route-limit 20
 set route-threshold 20
 end
 ##*******ERROR LISTED BELOW
 
 config system settings 
 set multicast-forward enable 
 end
 
 config firewall multicast-policy
 edit 15
 set srcaddr 192.168.200.101 255.255.255.255
 set srcintf internal1 
 set dstaddr 239.100.112.112 255.255.255.0
 set dstintf internal2 
 end
 
 
 config router multicast
 conf interface
 edit internal1
 set dr-priority 1
 set hello-interval 65323
 set pim-mode sparse-mode
 set passive enable
 end
 
 conf interface
 edit internal2
 set dr-priority 1
 set hello-interval 65323
 set pim-mode sparse-mode
 set passive enable
 end
 
TEST 1: On a host on 192.168.100.0 (try arabell's machine) run:

 iperf -s -u -B 239.100.112.112 -i 1

On the host 192.168.200.101, run the following to generate traffic:

 iperf -c 239.100.112.112 -u -T 32 -t 3 -i 1

RESULT: a single packet is delivered to internal1, internal2, and the client at 192.168.18.0/24 once. On repeated attempts to send packets, they are not received by internal1 (etc.).

Expected result:
 1) no multicast traffic should arrive at sharkbox
 2) multicast traffic should be visible on internal1
 3) multicast traffic should be visible on internal2

********ERROR
NY_Internet (root) # conf router multicast
 NY_Internet (multicast) # set multicast-routing enable
 NY_Internet (multicast) # set route-limit 20
 defaulting route-threshold to route-limit
 NY_Internet (multicast) # end
 The current number of installed multicast routes is 3247833. The route limit can not be set lower than the number of installed multicast routes.
 object set operator error, -7, roll back the setting
 Command fail. Return code -7
When I set the route-limit first and then set multicast-routing enable, the error wasn't reported.

Conclusion: this method did not work. I have a feeling it is related to the FortiGate not learning the multicast groups by performing queries. What do you think?
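
As an aside, if the goal is just to get the group forwarded toward internal2 regardless of what IGMP is doing downstream, the join-group setting mentioned in the first post is the usual place to look. A sketch only, using the group address from this thread; verify the syntax on your firmware.

 ## sketch only -- statically subscribe the receiver-side interface to the test group
 config router multicast
  config interface
   edit internal2
    config join-group
     edit 239.100.112.112
     end
   end
  end
 end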
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
emnoc
Esteemed Contributor III

What does your multicast route table look like? IIRC this will show you that, plus IGMP query information:

 get router info multicast

Also, what do wireshark or diag sniffer show for IGMP subscriptions being sent from the two interfaces (vlan100 & vlan200), and the same for any show ip igmp snooping output from the switch?

And lastly:

 config firewall multicast-policy
  edit 15
  set srcaddr 192.168.200.101 255.255.255.255
  set srcintf internal1
  set dstaddr 239.100.112.112 255.255.255.0
  set dstintf internal2
  end

and then your testing:

 on a host on 192.168.100.0 (try arabell's machine) run: iperf -s -u -B 239.100.112.112 -i 1
 on the host 192.168.200.101 run the following to generate traffic: iperf -c 239.100.112.112 -u -T 32 -t 3 -i 1

I think that's not going to work with that fwpolicy. Your client is on 192.168.200.x if I'm reading this correctly, but your fwpolicy is allowing from src net 192.168.200.x, or am I missing something? You need to specify 192.168.100.x or swap sender/client around with that current policy. Also, I would increase the iperf time values to -t 60 with -i 1; you're only letting this run for 1 sec, if I recall my iperf switches right.

Lastly, is this a Windows or Unix host (please say Unix/Linux)? IGMP could be broken on some Windows versions, so I would wireshark my client reports to ensure it's really sending IGMP when using iperf. If it's a Unix/Linux OS, execute a netstat -ng to see the actual group subscriptions, e.g.:

 root@venus01 ~ # netstat -ng
 IPv6/IPv4 Group Memberships
 Interface RefCnt Group
 --------------- ------ ---------------------
 netstat: no support for `AF INET (igmp)' on this system.

Also, as an alternative: if the show ip igmp snooping output shows zero information, I would then suspect a client-OS issue with regards to IGMP.

On the set route-limit issue, I'm stumped and not sure what's going on; ignore it for now. Hope the above helps.
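
Putting those suggestions together, a sketch of the adjusted test, assuming Linux hosts and reusing the hostnames already mentioned in this thread:

 ## on the receiver (arabell's machine, 192.168.100.x): bind to the group and listen
 iperf -s -u -B 239.100.112.112 -i 1

 ## confirm the receiver actually joined the group (Linux group memberships)
 netstat -gn

 ## on the sender (sharkbox, 192.168.200.101): run for 60 seconds with a multicast TTL of 32
 iperf -c 239.100.112.112 -u -T 32 -t 60 -i 1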

PCNSE 

NSE 

StrongSwan  

mbrowndcm
New Contributor III

Finally picked this back up. TTL = 1 on the multicast packets. I bet this has always been the case, and thanks to a response you posted here, I sniffed the traffic and saw that TTL = 1. Today and yesterday I had been testing poorly, simply using iperf as a client to connect to a multicast group that was already being populated with data, not using the iperf client. I am unsure what I did the last time this thread was updated.
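
For anyone hitting the same wall: a TTL of 1 means the packets die at the first routed hop, so they can never cross the FortiGate. A sketch of how to check and work around that with the tools already used in this thread; the tcpdump flags assume a Linux host in the source VLAN.

 ## confirm the TTL of the source's multicast packets (tcpdump -v prints the IP TTL)
 tcpdump -v -n host 239.100.112.112

 ## when generating test traffic with iperf, set a multicast TTL high enough to be routed
 iperf -c 239.100.112.112 -u -T 32 -t 60 -i 1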
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
emnoc
Esteemed Contributor III

Glad you got it working. Multicast packet TTLs will always kill you. Cisco has a neat trick of inspecting the TTL value in its mpacket show command to find low TTLs or duplicate packets. We can do the same with the diag sniffer.
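
A sketch of the diag sniffer equivalent, assuming verbosity level 3 is available on this build; at that level the raw headers are printed in hex, so the IPv4 TTL byte can be read directly from the dump.

 ## capture the multicast stream with headers in hex so the IP TTL can be inspected
 diag sniffer packet internal1 'host 239.100.112.112' 3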

PCNSE 

NSE 

StrongSwan  

mbrowndcm
New Contributor III

[I guess I needed to take the weekend to sleep on this... I've corrected my questions as necessary.] Is there any way that I can configure the interface near the sender to dynamically register as a router for certain multicast groups only when receiving nodes JOIN the groups on the interface near the receivers? It doesn't seem that I can leverage functionality as an RP for this, correct? Like a JOIN or PRUNE PIM packet? Is it standard to just allow multicast to always hit the interface? I'm concerned with maintaining the lowest possible latency between the two subnets, so it is mildly concerning that dynamic JOIN/PRUNE doesn't seem possible. But it is also quite possible I'm missing the point. Any assistance is appreciated. Thanks, Matt
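
For what it's worth, the dynamic behaviour asked about here is what PIM sparse mode is meant to provide: traffic is only pulled toward an interface after an IGMP join is seen there. A sketch of where that sits in the CLI, assuming a static RP and reusing the interface names from this thread; the RP address below is a placeholder, not taken from this setup.

 ## sketch only -- 10.0.0.1 is a placeholder RP address
 config router multicast
  set multicast-routing enable
  config pim-sm-global
   config rp-address
    edit 1
     set ip-address 10.0.0.1
    next
   end
  end
  config interface
   edit internal1
    set pim-mode sparse-mode
   next
   edit internal2
    set pim-mode sparse-mode
   next
  end
 end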
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
" …you would also be running into the trap of looking for the answer to a question rather than a solution to a problem." - [link=http://blogs.msdn.com/b/oldnewthing/archive/2013/02/13/10393162.aspx]Raymond Chen[/link]
CharlesK
New Contributor

Hi,

Did you ever get this working?

I am having the same issue and have no resolution yet.

Thanks,

Charles K

 
