Environment
PostgreSQL = 15
Patroni = patroni-2.1.4-1.rhel8.x86_64 & patroni-etcd-2.1.4-1.rhel8.x86_64
OS = RHEL 8.x
Detail
We have a two-node PostgreSQL cluster. On top of the cluster a load balancer (HAProxy) is installed. The cluster uses Patroni for HA-related activities. Through its REST API, Patroni tells HAProxy which node is the Active/Leader of the cluster, so HAProxy knows the active node and sends traffic only to it.
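For context, this is the usual Patroni + HAProxy wiring: the load balancer probes Patroni's REST API (port 8121 in this setup) as an HTTP health check and only routes to the node that answers 200. A sketch along the lines of Patroni's standard HAProxy template, with addresses and ports taken from this thread:

```
listen postgres
    bind *:5432
    option httpchk GET /
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg1 192.168.3.78:5432 check port 8121
    server pg2 192.168.3.79:5432 check port 8121
```

This is the behavior a replacement load balancer (e.g. a FortiGate virtual server) would need to reproduce.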
My query: does the Fortinet firewall support a PostgreSQL cluster, and can we use the Fortinet firewall in place of HAProxy in the environment described above?
As mentioned in a previous response, to make it work for intra-interface traffic (i.e. Trust to Trust) you must disable "Preserve Client IP" in the Server Load Balance config and you must enable NAT on the firewall policy.
Hi Graham, thanks for your prompt responses. I tried following your suggestion but had no success. Today I engaged Fortinet support and they could not resolve the query either. Support has taken my configuration and is checking it in a lab scenario. I will post an update if I get any success from Fortinet support.
Does Fortinet have any official documentation specifically related to PostgreSQL?
No, it would be impossible for Fortinet to document every server technology out there. It's up to server operators and admins to understand how their servers function.
Question for you: what is the output of "curl http://192.168.3.78:5432" and "curl http://192.168.3.79:5432"?
curl -vv http://192.168.3.78:5432
* Rebuilt URL to: http://192.168.3.78:5432/
* Trying 192.168.3.78...
* TCP_NODELAY set
* Connected to 192.168.3.78 (192.168.3.78) port 5432 (#0)
> GET / HTTP/1.1
> Host: 192.168.3.78:5432
> User-Agent: curl/7.61.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 192.168.3.78 left intact
curl: (52) Empty reply from server
curl -vv http://192.168.3.78:8121
* Rebuilt URL to: http://192.168.3.78:8121/
* Trying 192.168.3.78...
* TCP_NODELAY set
* Connected to 192.168.3.78 (192.168.3.78) port 8121 (#0)
> GET / HTTP/1.1
> Host: 192.168.3.78:8121
> User-Agent: curl/7.61.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: BaseHTTP/0.6 Python/3.6.8
< Date: Thu, 26 Jan 2023 19:17:51 GMT
< Content-Type: application/json
<
* Closing connection 0
{"state": "running", "postmaster_start_time": "2023-01-12 19:28:36.824401+05:00", "role": "master", "server_version": 150001, "xlog": {"location": 86103592}, "timeline": 11, "replication": [{"usename": "replicator", "application_name": "ISC-DGB-2", "client_addr": "192.168.3.79", "state": "streaming", "sync_state": "async", "sync_priority": 0}], "dcs_last_seen": 1674760670, "database_system_identifier": "7178857672087907798", "patroni": {"version": "2.1.4", "scope": "PostgreSQL--HA"}}
curl -vv http://192.168.3.79:5432
* Rebuilt URL to: http://192.168.3.79:5432/
* Trying 192.168.3.79...
* TCP_NODELAY set
* Connected to 192.168.3.79 (192.168.3.79) port 5432 (#0)
> GET / HTTP/1.1
> Host: 192.168.3.79:5432
> User-Agent: curl/7.61.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 192.168.3.79 left intact
curl: (52) Empty reply from server
curl -vv http://192.168.3.79:8121
* Rebuilt URL to: http://192.168.3.79:8121/
* Trying 192.168.3.79...
* TCP_NODELAY set
* Connected to 192.168.3.79 (192.168.3.79) port 8121 (#0)
> GET / HTTP/1.1
> Host: 192.168.3.79:8121
> User-Agent: curl/7.61.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
< Server: BaseHTTP/0.6 Python/3.6.8
< Date: Thu, 26 Jan 2023 19:19:57 GMT
< Content-Type: application/json
<
* Closing connection 0
{"state": "running", "postmaster_start_time": "2023-01-26 23:02:25.784164+05:00", "role": "replica", "server_version": 150001, "xlog": {"received_location": 86103592, "replayed_location": 86103592, "replayed_timestamp": "2023-01-26 23:04:54.269609+05:00", "paused": false}, "timeline": 11, "dcs_last_seen": 1674760790, "database_system_identifier": "7178857672087907798", "patroni": {"version": "2.1.4", "scope": "PostgreSQL--HA"}}
As I described in my earlier posts: first the node's Active/Master role is verified via an HTTP 200 status code in response to a GET / request on port 8121, and only then is the connection to that backend server established on port 5432, i.e. the PostgreSQL port. Please check the pictures I posted earlier.
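In other words, the routing decision reduces to the REST API status code: Patroni answers GET / with 200 on the leader and 503 on a replica, exactly as the curl captures above show. A minimal sketch of that decision logic (plain Python, no FortiGate specifics):

```python
def is_leader(status_code: int) -> bool:
    """Patroni's REST API answers GET / with 200 on the leader
    and 503 on a replica, so picking the writable node is just
    a status-code comparison."""
    return status_code == 200

# From the captures above: .78 answered 200, .79 answered 503.
print(is_leader(200))  # True  -> route client traffic here
print(is_leader(503))  # False -> skip this node
```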
Yes, I know the status code is what we are looking for. I was curious whether any data is sent in the response, so we could use that in the health check. Can you make the server send data in the response to the GET request? Can you customize it? That would help, so you can leverage the HTTP health check in FortiGate.
Otherwise we can use the JSON output on the other port to match on, but that might be a bit more complicated. I'm not sure if you can just use "role": "master" as the matched content or if you'd need the entire JSON string...
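To illustrate the difference between matching the JSON field and matching a bare substring, here is a small sketch (bodies trimmed from the port-8121 responses captured above):

```python
import json

# Trimmed versions of the two Patroni responses captured above.
leader_body = '{"state": "running", "postmaster_start_time": "2023-01-12", "role": "master"}'
replica_body = '{"state": "running", "postmaster_start_time": "2023-01-26", "role": "replica"}'

def role_is_master(body: str) -> bool:
    """Parse the JSON and check the actual role field."""
    return json.loads(body).get("role") == "master"

print(role_is_master(leader_body))   # True
print(role_is_master(replica_body))  # False

# A bare substring match on "master" misfires, because
# "postmaster_start_time" appears in BOTH responses:
print("master" in leader_body, "master" in replica_body)  # True True
```

A content-match health check needs a string with the same uniqueness property as the parsed role field.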
Created on 01-26-2023 12:34 PM Edited on 01-26-2023 12:42 PM
I tried the ways below one by one, but with no success:
1- I entered "master" in the Match Content box and left the URL box empty, but had no success.
2- I created two health checks, because I only have the IP and no index.htm to use, so the situation was as below; I added both checks to the Virtual Server. (no success)
3- I changed the URL to just "/". I saw this somewhere on the internet. (no success)
4- I repeated points 2 and 3 with port 5432, but still no success.
Created on 01-26-2023 02:40 PM Edited on 01-26-2023 02:41 PM
The URL should only be what comes after the server host. The FortiGate will use the single health check on all of the servers in your "real servers" list, so the URL field should be whatever goes after the IP address; in your case that's just "/". Or you can probably leave it blank, I reckon.
So again, just one health check, and assign it to your Virtual Server object.
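For reference, the equivalent CLI would be a single HTTP monitor on port 8121 attached to the virtual server, roughly like this (object names are placeholders; verify the exact option names against your FortiOS version):

```
config firewall ldb-monitor
    edit "patroni-leader"
        set type http
        set port 8121
        set http-get "/"
    next
end
config firewall vip
    edit "pg-vip"
        set monitor "patroni-leader"
    next
end
```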
No success. Will update after the support response.
Just realized: if you are using "master" as your match content, it will treat both servers as alive, since "postmaster" exists in both responses. You need to find a unique string that only the master shows. It looks like "replication" appears only in the master's response.
Also, can you confirm you enabled NAT on the firewall policy and disabled "Preserve Client IP" on the load balancer object?
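For the avoidance of doubt, the NAT side of that is the standard policy toggle; only the NAT-relevant lines are sketched here, with placeholder interface names:

```
config firewall policy
    edit 0
        set srcintf "trust"
        set dstintf "trust"
        set action accept
        set nat enable
    next
end
```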
1- After changing the network type to TCP under the virtual server, only one backend server is able to accept a telnet connection, even though the other is configured as Standby under Real Server.
2- However, after I changed both backend servers to Active-Active, only the Leader/Active node accepts a telnet connection. I verified this after a failover as well.
3- The environment works well when I configure the firewall policy across different zones, e.g. DMZ to Trust. However, it does not work Trust to Trust. I am still in the lab and will keep updating with results.
[Screenshots attached: Health Check, Virtual Server, Firewall Policy]