Hi Fellows,
I have configured a simple IPsec tunnel from a hub to one spoke, hoping to add more spokes later on.
The IPsec tunnel is dial-up based with peertype any. The tunnel is up, and I can ping the local subnet behind the hub from the remote subnet behind the spoke, but not in the opposite direction.
I assigned IPs to the tunnel interfaces, and I can ping in only one direction [spoke to hub but not hub to spoke].
I triple-checked the static routes and firewall policies and all look fine.
Am I missing anything?
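For context, the hub-side dial-up phase1 and tunnel-interface settings look roughly like this (interface names, addresses, and the proposal are placeholders, not the exact config; remote-ip syntax varies slightly by FortiOS version):

# placeholder sketch of the hub-side dial-up tunnel
config vpn ipsec phase1-interface
    edit "to_spoke"
        set type dynamic
        set interface "wan1"
        set peertype any
        set proposal aes256-sha256
        set psksecret <preshared_key>
    next
end
config system interface
    edit "to_spoke"
        set ip 10.10.10.1 255.255.255.255
        set remote-ip 10.10.10.2 255.255.255.252
        set allowaccess ping
    next
end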
SPOKE TO HUB PING
# execute ping 192.168.2.11
PING 192.168.2.11 (192.168.2.11): 56 data bytes
64 bytes from 192.168.2.11: icmp_seq=0 ttl=255 time=4.6 ms
64 bytes from 192.168.2.11: icmp_seq=1 ttl=255 time=4.4 ms
64 bytes from 192.168.2.11: icmp_seq=2 ttl=255 time=4.4 ms
^C
--- 192.168.2.11 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 4.4/4.4/4.6 ms
=====================
HUB TO SPOKE PING
====================
# execute ping 192.168.3.25
PING 192.168.3.25 (192.168.3.25): 56 data bytes
^C
--- 192.168.3.25 ping statistics ---
131 packets transmitted, 0 packets received, 100% packet loss
==================================
If ping from the spoke to the hub works, routing is working fine for both directions. Besides, if you assign a tunnel interface IP, you must have been forced to configure "remote-ip" as well. Ping between them doesn't involve routing because those are /32 connected routes.
Did you enable "ping" on the spoke tunnel interface? Check in the CLI with "show system interface <tunnel_Interface_name>". It should show something like "set allowaccess ping https ssh" if allowed.
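On the spoke it should come back something like this ("to_hub" and the addresses are just example placeholders):

show system interface to_hub
config system interface
    edit "to_hub"
        set ip 10.10.10.2 255.255.255.255
        set remote-ip 10.10.10.1 255.255.255.255
        set allowaccess ping
        set type tunnel
        set interface "wan1"
    next
end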
Toshi
Created on 11-04-2024 02:14 PM Edited on 11-04-2024 02:19 PM
Or not, if it's a "dialup"/"dynamic" tunnel. I need to see your phase1-interface config in the CLI on both sides to determine that, under "config vpn ipsec phase1-interface".
Another possibility is that NAT is enabled on the spoke-side policy, IPsec-interface -> LAN-interface.
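You can check and fix that on the spoke with something like this (the policy ID is just an example; use the ID of your IPsec -> LAN policy):

config firewall policy
    edit 5
        # 5 = the IPsec-interface -> LAN-interface policy ID on your box
        set nat disable
    next
end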
Toshi
Ping is definitely enabled on the tunnel interfaces on both sides, and NAT is off on all 4 firewall policies.
Then you have to do the sniffer and flow debug as @arahman is asking.
When you do that, you might need to disable ASIC offloading on the policy for that direction. The CLI command is "set auto-asic-offload disable".
I would suggest sniffing on the tunnel interface on the hub side first; I'm assuming the traffic is going into the tunnel. Then you need to sniff on the spoke side. If you see it coming in but no replies going back to the tunnel, that's when I would run the flow debug on the spoke side.
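For example (the policy ID is a placeholder; "to_spoke" is the hub's tunnel interface):

config firewall policy
    edit 3
        # 3 = the hub LAN -> to_spoke policy ID on your box
        set auto-asic-offload disable
    next
end
diagnose sniffer packet to_spoke 'icmp' 4 0 l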
Toshi
Hi, can you please capture the sniffer output and also the debug flow while doing the ping in the direction that is not working? Thanks.
di sniffer packet any ' host <source ip> and host <destination ip> and icmp ' 4 0 l
and also the debug flow
di de di
di de reset
di de flow show iprope en
di de flow show func en
di de flow filter addr <source ip addr> <destination ip addr> and
di de flow filter proto 1
di de flow trace start 300
di de en
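In full, those abbreviations expand to roughly:

diagnose debug disable
diagnose debug reset
diagnose debug flow show iprope enable
diagnose debug flow show function-name enable
diagnose debug flow filter addr <source ip addr> <destination ip addr> and
diagnose debug flow filter proto 1
diagnose debug flow trace start 300
diagnose debug enable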
Created on 11-06-2024 02:17 PM Edited on 11-06-2024 03:11 PM
id=65308 trace_id=1138 func=print_pkt_detail line=5795 msg="vd-root:0 received a packet(proto=1, 192.168.1.2:2->192.168.2.1:2048) tun_id=0.0.0.0 from hub_Internal. type=8, code=0, id=2, seq=33541."
id=65308 trace_id=1138 func=resolve_ip_tuple_fast line=5883 msg="Find an existing session, id-000c4924, original direction"
id=65308 trace_id=1138 func=npu_handle_session44 line=1199 msg="Trying to offloading session from hub_Internal to to_spoke, skb.npu_flag=00000400 ses.state=00000204 ses.npu_state=0x01040000"
id=65308 trace_id=1138 func=fw_forward_dirty_handler line=436 msg="state=00000204, state2=00000001, npu_state=01040000"
id=65308 trace_id=1138 func=ipsecdev_hard_start_xmit line=669 msg="enter IPSec interface to_spoke, tun_id=0.0.0.0"
id=65308 trace_id=1139 func=print_pkt_detail line=5795 msg="vd-root:0 received a packet(proto=1, 192.168.1.2:2->192.168.2.1:2048) tun_id=0.0.0.0 from hub_Internal. type=8, code=0, id=2, seq=33542."
id=65308 trace_id=1139 func=resolve_ip_tuple_fast line=5883 msg="Find an existing session, id-000c4924, original direction"
id=65308 trace_id=1139 func=npu_handle_session44 line=1199 msg="Trying to offloading session from hub_Internal to to_spoke, skb.npu_flag=00000400 ses.state=00000204 ses.npu_state=0x01040000"
id=65308 trace_id=1139 func=fw_forward_dirty_handler line=436 msg="state=00000204, state2=00000001, npu_state=01040000"
id=65308 trace_id=1139 func=ipsecdev_hard_start_xmit line=669 msg="enter IPSec interface to_spoke, tun_id=0.0.0.0"
Created on 11-06-2024 02:39 PM Edited on 11-06-2024 02:40 PM
As you can see, the packets to 192.168.2.1 and 192.205.236.1 went into the tunnel "to_spoke" in the "original direction". But neither of them is 192.168.3.25, and more importantly no replies are coming back.
If those two subnets are supposed to be on the other side, you need to do the same on the other-side FGT, as I said before.
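On the spoke side, something along these lines would show whether the ICMP arrives and whether a reply ever goes back into the tunnel (the interface names are whatever yours are called):

diagnose sniffer packet <spoke_tunnel_interface> 'icmp' 4 0 l
diagnose sniffer packet <spoke_lan_interface> 'icmp' 4 0 l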
Toshi
Created on 11-06-2024 02:45 PM Edited on 11-06-2024 02:50 PM
Wait a sec. In your original post, 192.168.2.x was on the HUB side. Why does the packet to 192.168.2.1 go into "to_spoke"?
Which FGT did you run this flow debug on?
What are the subnets on the HUB side and the spoke side?
And those packets are coming from "hub_internal". I'm assuming you took this on the HUB FGT, and it's a VLAN or other interface. Then the subnet doesn't match.
Toshi
My bad, I had to change the config again, as I didn't have the hardware available to me all the time for testing.
192.168.2.x is behind the spoke
192.168.1.x is behind the hub
192.168.2.x pings 192.168.1.1 (hub_internal is the hub LAN interface) and 192.168.1.2 (a machine connected to the hub LAN interface).
And yes, hub_internal is configured as the .1 address on a VLAN interface.
"to_spoke" is the ipsec tunnel