graffle
New Contributor

Can't ping from Internal to SSL-VPN

Hi folks,

 

I'm a bit new to this, so hoping someone can help.  I have our SSL VPN set up and working decently well: remote clients can access the (single) internal network's resources, and split tunneling out to external resources (e.g. AWS) also works.  So that's working well.  The part I'm struggling with is getting the internal network to access VPN clients.

 

I have a policy set up as such:

-----------------------------------

 

Incoming Interface:  internal

Outgoing Interface: SSL-VPN tunnel interface (ssl.root)

Source: all

Destination: VPN Range

Schedule: always

Service: All

Action: Allow

 

(This is in addition to the regular SSL-VPN -> internal/wan policies that are working as expected right now)

 

Internal IP Range: 10.0.1.0/24

VPN IP Range : 10.0.2.0/24
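
(For reference, here's a rough CLI sketch of that internal -> ssl.root policy.  The address-object name, policy name, and ID below are my own placeholders, not necessarily what the GUI created:)

    config firewall address
        edit "VPN_Range"
            set subnet 10.0.2.0 255.255.255.0
        next
    end
    config firewall policy
        edit 0
            set name "internal-to-sslvpn"
            set srcintf "internal"
            set dstintf "ssl.root"
            set srcaddr "all"
            set dstaddr "VPN_Range"
            set schedule "always"
            set service "ALL"
            set action accept
        next
    end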

 

Ping from SSL VPN to Internal is fine (e.g. 10.0.2.123 -> 10.0.1.123)

Ping from Internal to SSL VPN times out (e.g. 10.0.1.123 -> 10.0.2.123)

 

When I ping from internal to the SSL VPN resource, I can see in FortiClient that the resource is receiving/sending data, and the firewall log on the client (Windows 10) also shows the ICMP allowed and received:

 

2019-11-10 11:21:48 ALLOW ICMP xxx.xxx.xxx.xxx 10.0.2.2 - - 0 - - - - 8 0 - RECEIVE

 

I'm at a bit of a loss, not sure how to proceed from here.  Any help would be really appreciated.

 

Best,

 

Graf

7 Solutions
Toshi_Esumi
SuperUser

I would run sniffing (diag sniffer packet) on the FGT to see if the ping replies are actually coming back from the client. That would tell you whether the problem is on the client side or the FGT side.
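
For example, something like this (verbosity level 4 also prints the interface each packet was seen on; the client IP below is just the example address from your post):

    diag sniffer packet ssl.root 'icmp' 4
    diag sniffer packet any 'icmp and host 10.0.2.123' 4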


emnoc
Esteemed Contributor III

The tunnel adapter should always be a /32. What does the routing show? And what happens when you run { diag sniffer packet ssl.root "icmp" }?

 

Ken Felix

PCNSE 

NSE 

StrongSwan  


Toshi_Esumi

This doesn't explain the symptom, but why do you have NAT enabled on the ssl.root -> internal policy?


Toshi_Esumi

Get rid of the NAT first. Then, as I think Ken asked, check the routing table on the client machine: "route print" on Windows, "netstat -nr" on a Mac.
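
For example, on the Windows client (using the addresses from this thread):

    route print -4

Then look at the 10.0.1.0 / 255.255.255.0 entry and check which gateway/interface it points at; the reply to your ping can only come back if the route through the tunnel adapter wins.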


Toshi_Esumi

The remote machine's local subnet is the same 10.0.1.0/24 as the FGT's local subnet. That's why the response to 10.0.1.100 doesn't come back through the tunnel. If that's the real subnet at the remote location, you need to change it. Otherwise, you need to set up a tricky NAT for the local machine (10.0.1.100) to something else, and inject that IP/subnet into the client's routing table by adjusting the split-tunnel subnets in the SSL VPN config on the FGT.


Toshi_Esumi

Ok, I was wrong. I was misreading the routing table.

Network Destination    Netmask          Gateway     Interface   Metric
10.0.1.0               255.255.255.0    On-link     10.0.1.43      306
10.0.1.0               255.255.255.0    10.0.2.3    10.0.2.2         1

The lower the metric, the higher the precedence, so the returning packets should still come back through the tunnel. But this is potentially a problem anyway, because this machine can't even reach a local printer while the tunnel is up.

 

But I have had some bad experiences with Windows machines when the local subnet and a remote subnet reached through the tunnel are the same. I always avoid the conflict. To verify, you can create a new subnet like 10.255.0.0/24 on the FGT, set up the split tunnel to include this subnet, then try pinging the client from inside of that subnet. I think you would get a reply with this setup.
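
A rough sketch of that test from the CLI (the loopback name, address-object names, and the portal name "full-access" are just guesses for your setup, and note that "set split-tunneling-routing-address" replaces the existing list, so include your original subnets too):

    config system interface
        edit "lo-test"
            set vdom "root"
            set type loopback
            set ip 10.255.0.1 255.255.255.0
            set allowaccess ping
        next
    end
    config firewall address
        edit "test-subnet-10.255.0.0"
            set subnet 10.255.0.0 255.255.255.0
        next
    end
    config vpn ssl web portal
        edit "full-access"
            set split-tunneling-routing-address "Internal_10.0.1.0" "test-subnet-10.255.0.0"
        next
    end

Then reconnect the client and ping it sourcing from the new subnet:

    exec ping-options source 10.255.0.1
    exec ping 10.0.2.2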


Toshi_Esumi

Now everything looks normal. Remote access from the client into the internal network still works even after the changes you've made so far, right? Then I don't have any other idea why this wouldn't work. Are you really sure the Windows firewall is not blocking the ping on that machine? With testing like this, I always double-check by turning it off completely, because it comes back on every time a Windows update happens.
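
If you'd rather not turn it off completely, an explicit inbound ICMP allow rule is enough to rule the Windows firewall out (the rule name is arbitrary):

    netsh advfirewall firewall add rule name="Allow ICMPv4-In" protocol=icmpv4:8,any dir=in action=allow
    netsh advfirewall show allprofiles state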


18 REPLIES
Toshi_Esumi
SuperUser

I would run sniffing (diag sniffer packet) on the FGT to see if the ping replies are actually coming back from the client. That would tell you whether the problem is on the client side or the FGT side.

graffle

Hi Toshi,

 

Thank you for the response.  I ran the sniffer while pinging between the two machines; here's what I got back:

 

# diag sniffer packet internal 'icmp' 1 5
interfaces=[internal]
filters=[icmp]
1.791606 10.0.1.100 -> 10.0.2.3: icmp: echo request
2.791165 10.0.1.100 -> 10.0.2.3: icmp: echo request
3.792441 10.0.1.100 -> 10.0.2.3: icmp: echo request
4.802223 10.0.1.100 -> 10.0.2.3: icmp: echo request
5.812401 10.0.1.100 -> 10.0.2.3: icmp: echo request

 

(no replies coming back).  The machine will respond to pings on its regular network (192.168.1.0/24), but not from the internal network. I'm at a bit of a loss.

 

(for context, my end goal is to be able to Remote Desktop into VPN'd machines, but the internal network can't connect to them)

graffle
New Contributor

Found this thread:

https://forum.fortinet.com/tm.aspx?m=145375

 

And from there disabled the NAT on SSL-VPN --> internal, and it works now =)
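
(In case it helps anyone else, the CLI equivalent is just the nat flag on the ssl.root -> internal policy; the policy ID below is a placeholder for whatever yours is:)

    config firewall policy
        edit 2
            set nat disable
        next
    end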

 

Thanks!

graffle

Ok, next problem :)  The remote computer (on SSL-VPN) won't respond to pings unless I manually set its subnet mask to 255.255.0.0.  However, each time it reconnects to the VPN, the mask is changed back to 255.255.255.255.  I understand that this is expected, and the handful of threads I've found suggested adding a static route to allow the remote computer to respond to pings, but nothing has worked so far.  Any ideas?

emnoc
Esteemed Contributor III

The tunnel adapter should always be a /32. What does the routing show? And what happens when you run { diag sniffer packet ssl.root "icmp" }?

 

Ken Felix

PCNSE 

NSE 

StrongSwan  

graffle
New Contributor

Hi Ken,

 

Output of diag sniffer:

 

diag sniffer packet ssl.root 'icmp'
interfaces=[ssl.root]
filters=[icmp]
pcap_lookupnet: ssl.root: no IPv4 address assigned
0.142874 10.0.1.100 -> 10.0.2.2: icmp: echo request
1.147461 10.0.1.100 -> 10.0.2.2: icmp: echo request
2.155535 10.0.1.100 -> 10.0.2.2: icmp: echo request
3.163188 10.0.1.100 -> 10.0.2.2: icmp: echo request
4.168851 10.0.1.100 -> 10.0.2.2: icmp: echo request
5.173410 10.0.1.100 -> 10.0.2.2: icmp: echo request

....

graffle

Gah, I really have no idea how to get this to work.  If I force the subnet mask on the remote VPN machine to /16, the pings are returned.  If it goes back to /32 (which, as I understand it, is correct), they're lost.  I've tried adding a static route:

10.0.2.0/24 | ssl.root

but that didn't change anything.  Still no response from the remote machine.  What am I missing here? :\
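
(This is roughly what I added, from the CLI; the sequence number is just whatever the FGT assigned:)

    config router static
        edit 0
            set dst 10.0.2.0 255.255.255.0
            set device "ssl.root"
        next
    end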

graffle
New Contributor

Digging some more, I have the same problem as the guy from this thread:

https://forum.fortinet.com/tm.aspx?m=110871

 

Best I can tell we're set up the same, but just in case, this is my setup:

 

VPN/SSL-VPN Settings:

   Tunnel Mode Client Settings:  Custom IP Ranges (10.0.2.0/24)

 

VPN/SSL-VPN Portals:

   Enabled Tunnel Mode

         Routing Address: 

               10.0.1.0/24  # Internal 

               10.0.2.0/24  # SSL-VPN

 

   Source IP Pools:

          10.0.2.0/24 # SSL-VPN

 

Policy and Objects:

    internal -> ssl.root: 

          (src) 10.0.1.0/24 

          (dst) 10.0.2.0/24

          NAT disabled

          Service: ALL

          Action: ACCEPT

 

    ssl.root -> internal:

          (src) 10.0.2.0/24

          (dst) 10.0.1.0/24

          NAT enabled

          Service: ALL

          Action: ACCEPT

    

Network:

     Static Routes:

              (dst) 0.0.0.0/0 | (gateway) xxx.xxx.xxx.xxx | (interface) wan1

              (dst) 10.0.2.0/24 | (gateway) [empty] | (interface) ssl.root
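
(For completeness, this is roughly how I think the SSL-VPN parts above map to the CLI; the portal name and address-object names are my best guesses at what the GUI created:)

    config firewall address
        edit "SSLVPN_TUNNEL_ADDR1"
            set type iprange
            set start-ip 10.0.2.1
            set end-ip 10.0.2.254
        next
    end
    config vpn ssl settings
        set tunnel-ip-pools "SSLVPN_TUNNEL_ADDR1"
    end
    config vpn ssl web portal
        edit "full-access"
            set tunnel-mode enable
            set split-tunneling enable
            set split-tunneling-routing-address "Internal_10.0.1.0" "SSLVPN_TUNNEL_ADDR1"
            set ip-pools "SSLVPN_TUNNEL_ADDR1"
        next
    end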

 

I think what I'm missing is that while the ping reaches the remote machine from internal over ssl.root (e.g. 10.0.1.100 --> 10.0.2.2), the reply isn't coming back to the internal machine.  Best I can tell, the policy from ssl.root -> internal and the split-tunnel route to 10.0.1.0/24 should cover that?

Toshi_Esumi

This doesn't explain the symptom, but why do you have NAT enabled on the ssl.root -> internal policy?
