ERROR: too many dynamic shared memory segments

2019-09-11 Thread Nicola Contu
Hello,
We are running postgres 11.5, and in the last two weeks we did:

- upgraded postgres to 11.5 from 11.4
- increased shared_buffers to 1/3 of memory
- increased effective_cache_size to 160GB from 120GB
- increased checkpoint_completion_target to 0.9 from 0.7
- increased checkpoint_timeout to 1h
- increased work_mem to 2GB from 600MB (it can be set up to 4GB)

Since then, we have seen an increasing number of these errors:

ERROR: too many dynamic shared memory segments

Is there any relation to these parameters, or to the pgsql 11.5 version?

Any help would be appreciated.

Thank you,
Nicola


Re: ERROR: too many dynamic shared memory segments

2019-09-11 Thread Pavel Stehule
Hi

st 11. 9. 2019 v 9:48 odesílatel Nicola Contu 
napsal:

> Hello,
> We are running postgres 11.5 and in the last two weeks we did :
>
> - upgrade of postgres to 11.5 from 11.4
> - increased shared_buffer to 1/3 of the memory
> - increased effective_cache_size = 160GB from 120
> - increased checkpoint_completion_target = 0.9 from 0.7
> - increased checkpoint_timeout = 1h
> - increased work_mem = 2GB (this can be set up to 4GB) from 600MB
>
> Since that, in the last two weeks we saw an increment of this error :
>
> ERROR: too many dynamic shared memory segments
>
> Is there any relation between these parameters or the pgsql 11.5 version?
>

I expect it is related to the increase in work_mem - maybe a parallel hash
join was used.

Regards

Pavel

>
> Any help can be appreciated.
>
> Thank you,
> Nicola
>


Re: ERROR: too many dynamic shared memory segments

2019-09-11 Thread Nicola Contu
Hello,
We did not see any other errors in the logs, just that one.
Unfortunately we had problems installing updates on this machine and have
not installed any for a few months.

Do you think that could be the issue? We are running CentOS 7.

I will look into those parameters as well.

Thanks for your feedback


Il giorno mer 11 set 2019 alle ore 09:56 Mickael Franc (mickaelf) <
micka...@cisco.com> ha scritto:

> Hello,
>
> Maybe a kernel parameter such as "kernel.shmmax" or "kernel.shmall" needs
> changing to allow the kernel to provide a large shared memory segment.
> Did you see any other errors in the logs? Have you upgraded your kernel in
> the last two weeks?
>
> Best,
> --
> *De :* Nicola Contu 
> *Envoyé :* mercredi 11 septembre 2019 09:47
> *À :* pgsql-generallists.postgresql.org <
> pgsql-general@lists.postgresql.org>
> *Cc :* Alessandro Aste 
> *Objet :* ERROR: too many dynamic shared memory segments
>
> Hello,
> We are running postgres 11.5 and in the last two weeks we did :
>
> - upgrade of postgres to 11.5 from 11.4
> - increased shared_buffer to 1/3 of the memory
> - increased effective_cache_size = 160GB from 120
> - increased checkpoint_completion_target = 0.9 from 0.7
> - increased checkpoint_timeout = 1h
> - increased work_mem = 2GB (this can be set up to 4GB) from 600MB
>
> Since that, in the last two weeks we saw an increment of this error :
>
> ERROR: too many dynamic shared memory segments
>
> Is there any relation between these parameters or the pgsql 11.5 version?
>
> Any help can be appreciated.
>
> Thank you,
> Nicola
>


Exporting and importing table having blob datatype.

2019-09-11 Thread Daulat Ram
Hello Team,

We have a large table with a blob column. Could anyone please suggest how we can
export it from an Oracle DB in parts via ora2pg and then import it into Postgres?

Thanks,
Daulat
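No reply appears in this digest, but for what it's worth, ora2pg can export data in batches; the fragment below is an untested sketch, the DSN and table name are placeholders, and the values should be tuned to the table:

```ini
# ora2pg.conf (fragment) -- hypothetical connection settings
ORACLE_DSN   dbi:Oracle:host=oracle.example.com;sid=ORCL
ORACLE_USER  scott
ORACLE_PWD   tiger

TYPE         COPY            # export table data as PostgreSQL COPY
ALLOW        BLOB_TABLE      # limit the export to the one large table
DATA_LIMIT   1000            # rows fetched per batch
BLOB_LIMIT   100             # smaller batches for rows carrying BLOBs
LONGREADLEN  10485760        # max LOB bytes read per field
```

The resulting output file can then be loaded into Postgres with psql.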


Re: ERROR: too many dynamic shared memory segments

2019-09-11 Thread Nicola Contu
If the error persists I will try reverting work_mem.
Thanks a lot

Il giorno mer 11 set 2019 alle ore 10:10 Pavel Stehule <
pavel.steh...@gmail.com> ha scritto:

> Hi
>
> st 11. 9. 2019 v 9:48 odesílatel Nicola Contu 
> napsal:
>
>> Hello,
>> We are running postgres 11.5 and in the last two weeks we did :
>>
>> - upgrade of postgres to 11.5 from 11.4
>> - increased shared_buffer to 1/3 of the memory
>> - increased effective_cache_size = 160GB from 120
>> - increased checkpoint_completion_target = 0.9 from 0.7
>> - increased checkpoint_timeout = 1h
>> - increased work_mem = 2GB (this can be set up to 4GB) from 600MB
>>
>> Since that, in the last two weeks we saw an increment of this error :
>>
>> ERROR: too many dynamic shared memory segments
>>
>> Is there any relation between these parameters or the pgsql 11.5 version?
>>
>
> I expect it can be related to increasing work_mem - maybe parallel hash
> join was used
>
> Regards
>
> Pavel
>
>>
>> Any help can be appreciated.
>>
>> Thank you,
>> Nicola
>>
>


Re: pgbouncer with ldap

2019-09-11 Thread Ayub M
Achilleas, for this setup to work, are changes to postgresql.conf and
pg_hba.conf needed? I am trying to implement this on AWS RDS Aurora, where
these files are not accessible.

On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 9/9/19 12:41 μ.μ., Laurenz Albe wrote:
> > Christoph Moench-Tegeder wrote:
> >>> It has hba and via hba file one can specify ldap connections
> >>>
> >>> https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
> >> https://pgbouncer.github.io/config.html#hba-file-format
> >> "Auth-method field: Only methods supported by PgBouncer’s auth_type
> >> are supported", and "ldap" is not supported.
> >> When there's no ldap support in pgbouncer, there's no ldap support
> >> in pgbouncer.
> > To throw in something less tautological:
> >
> > PgBouncer supports PAM authentication, so if you are on UNIX,
> > you could use PAM's LDAP module to do what you want.
> Right, I had written a blog about it :
>
> https://severalnines.com/database-blog/one-security-system-application-connection-pooling-and-postgresql-case-ldap
>
> However, I always wished (since my first endeavors with pgbouncer) it was
> less complicated.
> >
> > Yours,
> > Laurenz Albe
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>
>


Re: pg_dump problems: [archiver (db)] query failed: ERROR: relation "pg_opfamily" does not exist

2019-09-11 Thread Erwin Moller

On 8/20/2019 5:11 PM, Adrian Klaver wrote:

On 8/20/19 8:02 AM, Adrian Klaver wrote:

On 8/20/19 7:37 AM, Erwin Moller wrote:





That is a bummer.
Since I have no idea how to safely install an older version of 
pg_dump alongside my current 10.10 install, I will wait for the patch.


A quick test showed that:

1) Go here:
https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgresql-11/

2) Grab
https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgresql-11/postgresql-client-11_11.4-1.pgdg18.10%2b1_amd64.deb 



3) Using Ark extract pg_dump from data.tar.xz at path
/usr/lib/postgresql/11/bin


Just to be clear I did the above in my home directory so as not to 
overwrite the installed programs.


Also probably not a bad idea to rename the below to something like 
pg_dump_11_4 to keep things straight.





./pg_dump -V
pg_dump (PostgreSQL) 11.4 (Ubuntu 11.4-1.pgdg18.10+1)






I do not have an 8.1 instance to try against, so I could not test that 
part 


Thanks Adrian,

It took some time for me to get back to this, but thanks to your 
suggestions I got it working.
Since I am not too familiar with the inner workings of Ubuntu, some of 
your suggestions took me a while longer to parse, but with success. ;-)


In case some other unskilled soul like me happens upon this problem, here 
is what I did in baby steps:

I didn't want to grab the version you suggested (11.4), since I run 
10.10, so to be safe I used the 10.9 client tools.

Found it here:
https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgresql-10/postgresql-client-10_10.9-1.pgdg18.10+1_amd64.deb



I created a new directory in my home:
mkdir pg_dump_10.9
cd pg_dump_10.9
wget https://apt.postgresql.org/pub/repos/apt/pool/main/p/postgresql-10/postgresql-client-10_10.9-1.pgdg18.10+1_amd64.deb


Now, how to extract pg_dump? After a short Google search I did the following:
dpkg-deb -R ./postgresql-client-10_10.9-1.pgdg18.10+1_amd64.deb tmp

Then I changed into the right (new) directory:
cd tmp/usr/lib/postgresql/10/bin

There it was: pg_dump from 10.9. Yes!
I moved it to my home-dir, and renamed it as you suggested (to avoid 
confusion).


Worked right from there. :-)

I was expecting all kinds of dependencies I would have to resolve, but it 
worked right out of the box.


Thanks Adrian.

Regards,
Erwin Moller


RE: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-11 Thread Kumar, Virendra


Re: Running a Simple Update Statement Fails, Second Time Succeeds.

2019-09-11 Thread Tom Lane
"Kumar, Virendra"  writes:
> Just got confirmation from developer that they are not modifying any type.
> We are on 11.5 RHEL.

Interesting.  Can you exhibit a self-contained test case?

regards, tom lane




Re: Recover data from aborted transactions

2019-09-11 Thread Luca Ferrari
On Tue, Sep 10, 2019 at 7:18 PM Aaron Spike  wrote:
> I'm pretty sure that the records I'm looking for are part of one of these 
> aborted transactions. From what I read online, it seems that data from 
> uncommitted transactions exists in the Write-Ahead Logs. Is there anyway to 
> access this data?

Interesting, what did you read? I don't think the information logged for
an aborted transaction is as complete as for a committed one.

Luca




How to reformat output of "age()" function

2019-09-11 Thread David Gauthier
How can I change the default output of the "age" function to be, for
example, in minutes?

E.g.
dvdb=> select age('2019-09-11 09:00:00','2019-09-09 11:00:00');
      age
-----------------
 1 day 22:00:00
(1 row)

I want the equivalent of that time delta in minutes.

Thanks in Advance


Re: How to reformat output of "age()" function

2019-09-11 Thread Ron

On 9/11/19 10:38 AM, David Gauthier wrote:
How can I change the default output of the "age" function to be, for 
example, in minutes?


E.g.
dvdb=> select age('2019-09-11 09:00:00','2019-09-09 11:00:00');
      age

 1 day 22:00:00
(1 row)

I want the equivalent of that time delta in minutes.


This SO answer might help: https://stackoverflow.com/a/24930139/1543618

--
Angular momentum makes the world go 'round.


Re: How to reformat output of "age()" function

2019-09-11 Thread Basques, Bob (CI-StPaul)
SELECT EXTRACT(EPOCH FROM age('2019-09-11 09:00:00','2019-09-09 
11:00:00')::INTERVAL)/60;

A nice explanation and even a slick function are here:

https://stackoverflow.com/questions/3460805/postgresql-format-interval-as-minutes

bobb



On Sep 11, 2019, at 10:38 AM, David Gauthier 
mailto:davegauthie...@gmail.com>> wrote:



How can I change the default output of the "age" function to be, for example, 
in minutes?

E.g.
dvdb=> select age('2019-09-11 09:00:00','2019-09-09 11:00:00');
  age

 1 day 22:00:00
(1 row)

I want the equivalent of that time delta in minutes.

Thanks in Advance
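For the exact timestamps in David's question, the same subtract-and-divide arithmetic can be checked outside the database; a minimal Python sketch:

```python
from datetime import datetime

# The two timestamp literals from the question (no time zone involved).
end = datetime(2019, 9, 11, 9, 0, 0)
start = datetime(2019, 9, 9, 11, 0, 0)

# age() reported "1 day 22:00:00"; as a plain delta that is 2760 minutes.
minutes = (end - start).total_seconds() / 60
print(minutes)  # 2760.0
```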



Re: How to reformat output of "age()" function

2019-09-11 Thread Francisco Olarte
David:

On Wed, Sep 11, 2019 at 5:39 PM David Gauthier  wrote:
> How can I change the default output of the "age" function to be, for example, 
> in minutes?
> E.g.
> dvdb=> select age('2019-09-11 09:00:00','2019-09-09 11:00:00');
>   age
> 
>  1 day 22:00:00
> (1 row)
> I want the equivalent of that time delta in minutes.

Some answers have already been posted, but also consider what you are
doing. Intervals have three components (months, days, seconds) for a
reason: ignoring leap seconds, not all days have 24h (daylight saving
time changes) and not all months have 28/29/30/31 days. IIRC, interval
normalization for epoch assumes all months have 30 days and all days
have 24 hours.

If you want to know the elapsed minutes between two timestamps, it
might be better to do it directly: extract the epoch from both
(seconds), subtract, and divide by 60, truncating/rounding if you need to.

This is what happens in one case on my timezone ( Europe/Madrid ):


test=# select '2019.11.20 20:00:00'::timestamptz as end, '2019.06.10
10:00:00'::timestamptz as start;
          end           |         start
------------------------+------------------------
 2019-11-20 20:00:00+01 | 2019-06-10 10:00:00+02
(1 row)

test=# select age('2019.11.20 20:00:00'::timestamptz, '2019.06.10
10:00:00'::timestamptz) as interval;
        interval
-------------------------
 5 mons 10 days 10:00:00
(1 row)

test=# select extract(epoch from age('2019.11.20
20:00:00'::timestamptz, '2019.06.10 10:00:00'::timestamptz)) as
interval_seconds;
 interval_seconds
------------------
         13860000
(1 row)

test=# select extract(epoch from '2019.11.20 20:00:00'::timestamptz) -
extract(epoch from '2019.06.10 10:00:00'::timestamptz) as
elapsed_seconds;
 elapsed_seconds
-----------------
        14122800
(1 row)

Francisco Olarte.




REVOKE DROP rights

2019-09-11 Thread Miles Elam
Is there any way to prevent a user from dropping a table when that user has
create rights? I'd like to allow that user to be able to create and delete
their own tables but not specific shared tables.

Is the only way to put the shared tables into a different schema?

Thanks in advance


Re: kind of a bag of attributes in a DB . . .

2019-09-11 Thread Albretch Mueller
On 9/10/19, Adrian Klaver  wrote:
> If there is no rhyme or reason to the metadata I am not sure how you
> could come up with an efficient search strategy. Seems it would be a
> brute search over everything.

 Not exactly. Say some things have colours but no weight. You could
still group them as being "weighty" and record how heavy they
are; with the colourful ones you could specify the colours, and then see
if there is some correlation between weights and colours ...

 lbrtchx




Re: REVOKE DROP rights

2019-09-11 Thread Tom Lane
Miles Elam  writes:
> Is there any way to prevent a user from dropping a table when that user has
> create rights? I'd like to allow that user to be able to create and delete
> their own tables but not specific shared tables.

I think maybe you didn't read the manual closely.  Creation privileges
cover the right to create an object (in a given database or
schema), but only the creator/owner has the right to drop a particular
object once it exists.

We do grant the owner of a schema or database the right to drop objects
within it, since they could surely achieve that result by dropping the
whole schema or database.  But merely having create privilege doesn't
extend to that.

So basically you want a shared schema that is owned by some trusted
role, and your less-trusted roles have create (and usage!) on that
schema.

regards, tom lane
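The arrangement Tom describes can be sketched as follows (the role and schema names are hypothetical):

```sql
-- A trusted role owns the shared schema; less-trusted roles get
-- CREATE and USAGE there but can only drop objects they own.
CREATE ROLE app_owner;
CREATE ROLE app_user LOGIN;

CREATE SCHEMA shared AUTHORIZATION app_owner;
GRANT USAGE, CREATE ON SCHEMA shared TO app_user;

-- app_owner creates the protected table; app_user can be granted DML
-- on it but cannot DROP it, since app_user is not its owner.
SET ROLE app_owner;
CREATE TABLE shared.protected (id int PRIMARY KEY);
GRANT SELECT, INSERT, UPDATE, DELETE ON shared.protected TO app_user;
RESET ROLE;
```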




Re: REVOKE DROP rights

2019-09-11 Thread Miles Elam
Makes sense. Thanks!

On Wed, Sep 11, 2019 at 1:43 PM Tom Lane  wrote:

> Miles Elam  writes:
> > Is there any way to prevent a user from dropping a table when that user
> has
> > create rights? I'd like to allow that user to be able to create and
> delete
> > their own tables but not specific shared tables.
>
> I think maybe you didn't read the manual closely.  Creation privileges
> cover the right to create an object (in a given database or
> schema), but only the creator/owner has the right to drop a particular
> object once it exists.
>
> We do grant the owner of a schema or database the right to drop objects
> within it, since they could surely achieve that result by dropping the
> whole schema or database.  But merely having create privilege doesn't
> extend to that.
>
> So basically you want a shared schema that is owned by some trusted
> role, and your less-trusted roles have create (and usage!) on that
> schema.
>
> regards, tom lane
>


Re: How to reformat output of "age()" function

2019-09-11 Thread David Gauthier
Thanks a lot!

On Wed, Sep 11, 2019 at 12:34 PM Francisco Olarte wrote:

> [quoted text trimmed]


Re: kind of a bag of attributes in a DB . . .

2019-09-11 Thread Adrian Klaver

On 9/11/19 9:46 AM, Albretch Mueller wrote:

On 9/10/19, Adrian Klaver  wrote:

If there is no rhyme or reason to the metadata I am not sure how you
could come up with an efficient search strategy. Seems it would be a
brute search over everything.


  Not exactly. Say some things have colours but no weight. You could
still group them as being "weighty" and record how heavy they
are; with the colourful ones you could specify the colours, and then see
if there is some correlation between weights and colours ...


It would help to see some sample data, otherwise any answer would be 
pure speculation.




  lbrtchx




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to reformat output of "age()" function

2019-09-11 Thread Adrian Klaver

On 9/11/19 9:34 AM, Francisco Olarte wrote:

David:

On Wed, Sep 11, 2019 at 5:39 PM David Gauthier  wrote:

How can I change the default output of the "age" function to be, for example, 
in minutes?
E.g.
dvdb=> select age('2019-09-11 09:00:00','2019-09-09 11:00:00');
   age

  1 day 22:00:00
(1 row)
I want the equivalent of that time delta in minutes.


Some answers have already been posted, but also consider what you are
doing. Intervals have three components, months, days, seconds for a
reason, ( ignoring leap seconds for now ) not all days have 24h (
daylight saving time changes ) and not all months have 28/29/30/31
days. IIRC interval normalization for epoch assumes all months have 30
days, all days have 24 hours.

If you want to know the elapsed minutes between two timestamps, it
might be better to do it directly, extract the epoch from both (
seconds ), substract, divide by 60 truncating/rounding if you need to.

This is what happens in one case on my timezone ( Europe/Madrid ):


test=# select '2019.11.20 20:00:00'::timestamptz as end, '2019.06.10
10:00:00'::timestamptz as start;
   end   | start
+
  2019-11-20 20:00:00+01 | 2019-06-10 10:00:00+02
(1 row)

test=# select age('2019.11.20 20:00:00'::timestamptz, '2019.06.10
10:00:00'::timestamptz) as interval;
 interval
-
  5 mons 10 days 10:00:00
(1 row)

test=# select extract(epoch from age('2019.11.20
20:00:00'::timestamptz, '2019.06.10 10:00:00'::timestamptz)) as
interval_seconds;
  interval_seconds
 ------------------
          13860000
(1 row)

test=# select extract(epoch from '2019.11.20 20:00:00'::timestamptz) -
extract(epoch from '2019.06.10 10:00:00'::timestamptz) as
elapsed_seconds;
  elapsed_seconds
-
 14122800
(1 row)


Epoch is not the issue, age() is. Leave age() out of it:

set timezone = 'Europe/Madrid';

test=# select extract(epoch from '2019.11.20 20:00:00'::timestamptz) -
test-# extract(epoch from '2019.06.10 10:00:00'::timestamptz) as
test-# elapsed_seconds;
 elapsed_seconds
-----------------
        14122800


test=# select extract(epoch from '2019.11.20 20:00:00'::timestamptz 
-'2019.06.10 10:00:00'::timestamptz) as

elapsed_seconds;
 elapsed_seconds
-----------------
        14122800
(1 row)




Francisco Olarte.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: ERROR: too many dynamic shared memory segments

2019-09-11 Thread Thomas Munro
On Wed, Sep 11, 2019 at 11:20 PM Nicola Contu  wrote:
> If the error persist I will try to revert the work_mem.
> Thanks a lot

Hi Nicola,

It's hard to say exactly what the cause of the problem is in your case
and how to avoid it, without knowing what your query plans look like.
PostgreSQL allows 64 + 2 * max_connections segments to exist at a time,
and it needs a number of them that depends on work_mem (in the case of
Parallel Hash Join and Parallel Bitmap Index Scan), and also depends
on the number of Gather nodes that appear in the plan, which in some
unusual cases can result from partitioning.

I've seen people reaching this error by running a lot of parallel
queries concurrently.  If that's the cause, then you can definitely
get some relief by turning work_mem down, or by turning
max_connections up (even though you don't want to allow more
connections -- because it influences the formula for deciding on the
DSM segment limit).  We should probably adjust some of the internal
constants to give us more slots, to avoid that problem, as discussed
here:

https://www.postgresql.org/message-id/flat/CA%2BhUKGL6H2BpGbiF7Lj6QiTjTGyTLW_vLR%3DSn2tEBeTcYXiMKw%40mail.gmail.com

I've also seen people reaching this error by somehow coming up with
plans that had a very large number of Gather nodes in them,
corresponding to partitions; that's probably a bad plan (it'd
presumably be better to terminate parallelism higher up in the plan,
but these plans do seem to exist in the wild; I don't recall exactly
why).  I think we need a bit of a redesign so that if there are
multiple Gather nodes, they share the same main DSM segment, instead
of blowing through this limit.

-- 
Thomas Munro
https://enterprisedb.com
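The slot arithmetic Thomas describes can be sketched in a few lines; the constants 64 and 2 come from his description of the internal formula and may differ across PostgreSQL versions:

```python
# Sketch of the DSM segment limit per Thomas's description:
# 64 fixed slots plus 2 per allowed backend connection.

def dsm_segment_limit(max_connections: int) -> int:
    """Approximate number of dynamic shared memory segment slots."""
    return 64 + 2 * max_connections

# With the default max_connections = 100, the limit would be 264 slots.
# Each concurrent parallel query consumes several slots, so many such
# queries at once can exhaust the table, producing this error.
print(dsm_segment_limit(100))  # 264
```

This also shows why raising max_connections relieves the pressure even when you do not want more connections: it directly enlarges the slot table.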




update returning order by syntax error question

2019-09-11 Thread raf
Hi,

postgresql-9.6.15

I just tried something like:

  select * from
  (update tblname t set ... where ... returning ...)
  order by ...;

assuming it would work but it didn't.
That's OK. I found on stackoverflow
that a CTE can be used to do it:

  with u as
  (update tblname t set ... where ... returning ...)
  select * from u order by ...;

What surprises me is the syntax error:

  ERROR:  syntax error at or near "t"
  LINE 2:  tblname t
                   ^
If the syntax was invalid because an update returning
statement can't appear in a from clause, I'd expect the
error to be at the token "update".

It's almost as if the parser sees "update" as a possible
table name (rather than a reserved word) and "tblname"
as the alias for that table, and is expecting a comma
or left/right/full etc. when it sees the "t".

Anyone care to explain why the error is what it is?
It's no big deal. I'm just curious.

cheers,
raf





Re: update returning order by syntax error question

2019-09-11 Thread Luca Ferrari
On Thu, Sep 12, 2019 at 5:45 AM raf  wrote:
>   ERROR:  syntax error at or near "t"
>   LINE 2:  tblname t

This works on 9.6.9 and 11.4:

luca=> with u as ( update t_all set id = 5 where id <= 5 returning *)
select * from u;
 id | ref_id
----+--------
  5 |      1
(1 row)

luca=> select version();
                          version
------------------------------------------------------------------
 PostgreSQL 9.6.9 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
7.2.0-8ubuntu3.2) 7.2.0, 64-bit
(1 row)


However, I know for sure that UPDATE has some restrictions on table
aliases (at least, they are not handled quite as in a SELECT), so the
problem could be in the real query you are executing.
It works with or without the ORDER BY.

Luca




Re: pgbouncer with ldap

2019-09-11 Thread Achilleas Mantzios

On 11/9/19 2:47 μ.μ., Ayub M wrote:

Achilleas, for this setup to work are changes to postgresql.conf and 
pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where 
these files are not accessible.


Those files are needed in any case if you work with PostgreSQL. Unfortunately 
I have no experience with Aurora; we have been building from source for ages.

On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 9/9/19 12:41 μ.μ., Laurenz Albe wrote:
> Christoph Moench-Tegeder wrote:
>>> It has hba and via hba file one can specify ldap connections
>>>
>>> https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
>> https://pgbouncer.github.io/config.html#hba-file-format
>> "Auth-method field: Only methods supported by PgBouncer’s auth_type
>> are supported", and "ldap" is not supported.
>> When there's no ldap support in pgbouncer, there's no ldap support
>> in pgbouncer.
> To throw in something less tautological:
>
> PgBouncer supports PAM authentication, so if you are on UNIX,
> you could use PAM's LDAP module to do what you want.
Right, I had written a blog about it :

https://severalnines.com/database-blog/one-security-system-application-connection-pooling-and-postgresql-case-ldap

However, I always wished (since my first endeavors with pgbouncer) it was 
less complicated.
>
> Yours,
> Laurenz Albe


-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt
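For reference, the PAM route Laurenz and Achilleas describe can be sketched as below; this is an untested fragment, the host and database names are placeholders, and it assumes PgBouncer was built with PAM support and uses a PAM service named "pgbouncer":

```ini
; pgbouncer.ini (fragment)
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = pam          ; delegate authentication to PAM

; /etc/pam.d/pgbouncer -- hand the check off to the LDAP module:
;   auth    required    pam_ldap.so
;   account required    pam_ldap.so
```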