Re: [GENERAL] table dependencies

2015-09-07 Thread Raymond O'Donnell
On 06/09/2015 22:59, FarjadFarid(ChkNet) wrote:
> No worries. 
> 
> I found a way. 
> 

Would you share it, for the archives?

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] SPI_execute error handling

2015-09-07 Thread James Harper
The docs for SPI_execute at http://www.postgresql.org/docs/9.4/static/spi.html 
say:

"
Note that if a command invoked via SPI fails, then control will not be returned 
to your procedure. Rather, the transaction or subtransaction in which your 
procedure executes will be rolled back. (This might seem surprising given that 
the SPI functions mostly have documented error-return conventions. Those 
conventions only apply for errors detected within the SPI functions themselves, 
however.) It is possible to recover control after an error by establishing your 
own subtransaction surrounding SPI calls that might fail. This is not currently 
documented because the mechanisms required are still in flux.
"

So when my background worker executes "SELECT * FROM blah" and "blah" doesn't
exist, I get:

2015-09-07 18:14:41 AEST [1958-44] ERROR:  relation "blah" does not exist at 
character 15
2015-09-07 18:14:41 AEST [1958-45] QUERY:  SELECT * FROM "blah"
2015-09-07 18:14:41 AEST [1889-18] LOG:  worker process: tds handler (PID 1958) 
exited with exit code 1
2015-09-07 18:14:41 AEST [1889-19] LOG:  unregistering background worker "tds 
handler"

And control is never returned to my worker, as expected.

How can I get control returned to my worker, so that I can give the client
program a sensible error? I already start a transaction with
StartTransactionCommand(), but maybe that isn't the subtransaction that the
cryptic documentation above refers to?

thanks

James




Re: [GENERAL] SPI_execute error handling

2015-09-07 Thread Pavel Stehule
2015-09-07 11:04 GMT+02:00 James Harper :

> The docs for SPI_execute at
> http://www.postgresql.org/docs/9.4/static/spi.html say:
>
> "
> Note that if a command invoked via SPI fails, then control will not be
> returned to your procedure. Rather, the transaction or subtransaction in
> which your procedure executes will be rolled back. (This might seem
> surprising given that the SPI functions mostly have documented error-return
> conventions. Those conventions only apply for errors detected within the
> SPI functions themselves, however.) It is possible to recover control after
> an error by establishing your own subtransaction surrounding SPI calls that
> might fail. This is not currently documented because the mechanisms
> required are still in flux.
> "
>
> so when my background worker executes "SELECT * FROM blah" and "blah"
> doesn't exist, i get:
>
> 2015-09-07 18:14:41 AEST [1958-44] ERROR:  relation "blah" does not exist
> at character 15
> 2015-09-07 18:14:41 AEST [1958-45] QUERY:  SELECT * FROM "blah"
> 2015-09-07 18:14:41 AEST [1889-18] LOG:  worker process: tds handler (PID
> 1958) exited with exit code 1
> 2015-09-07 18:14:41 AEST [1889-19] LOG:  unregistering background worker
> "tds handler"
>
> And control is never returned to my worker, as expected.
>
> How can I get control returned back to my worker so that I can give the
> client program a sensible error? I already create a transaction like
> StartTransactionCommand(), but then maybe that isn't the subtransation that
> the cryptic documentation above refers to??
>

Hi

When I need to handle a similar situation, I borrow code from plpgsql; look at
the function exec_stmt_block. Similar code is in plpgsql_check:
https://github.com/okbob/plpgsql_check/blob/master/plpgsql_check.c

Regards

Pavel
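[For the archives: the pattern that plpgsql's exec_stmt_block uses boils down to wrapping the SPI call in its own subtransaction. A rough, untested sketch follows, reduced to the SPI case; variable names are illustrative, and it assumes the usual headers (postgres.h, executor/spi.h, access/xact.h, utils/resowner.h). The subtransaction and error-handling calls are real PostgreSQL APIs.]

```c
MemoryContext oldcontext = CurrentMemoryContext;
ResourceOwner oldowner = CurrentResourceOwner;

/* Start a subtransaction so an error only rolls back this much */
BeginInternalSubTransaction(NULL);
/* BeginInternalSubTransaction switches contexts; switch back */
MemoryContextSwitchTo(oldcontext);

PG_TRY();
{
    /* ret is SPI_OK_SELECT on success */
    int ret = SPI_execute("SELECT * FROM blah", true, 0);
    (void) ret;

    /* Success: commit the subtransaction and carry on */
    ReleaseCurrentSubTransaction();
    MemoryContextSwitchTo(oldcontext);
    CurrentResourceOwner = oldowner;
}
PG_CATCH();
{
    ErrorData *edata;

    /* Copy the error out before the subtransaction's memory goes away */
    MemoryContextSwitchTo(oldcontext);
    edata = CopyErrorData();
    FlushErrorState();

    /* Abort the subtransaction; the outer transaction stays usable */
    RollbackAndReleaseCurrentSubTransaction();
    MemoryContextSwitchTo(oldcontext);
    CurrentResourceOwner = oldowner;

    /* Report edata->message to the client, then FreeErrorData(edata) */
}
PG_END_TRY();
```

After the PG_CATCH branch runs, control is back in the worker and the outer transaction started with StartTransactionCommand() is still valid.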




>
> thanks
>
> James
>
>


Re: [GENERAL] table dependencies

2015-09-07 Thread FarjadFarid(ChkNet)
Sure.

I would have, but I'm not sure it would add much value, as I have a
customised network adaptor and SQL classes covering several different
databases.

Best Regards


Farjad


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Raymond O'Donnell
Sent: 07 September 2015 08:45
To: FarjadFarid(ChkNet); 'pgsql-general'
Subject: Re: [GENERAL] table dependencies

On 06/09/2015 22:59, FarjadFarid(ChkNet) wrote:
> No worries. 
> 
> I found a way. 
> 

Would you share it, for the archives?

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie







Re: [GENERAL] table dependencies

2015-09-07 Thread Gavin Flower

On 07/09/15 19:44, Raymond O'Donnell wrote:

On 06/09/2015 22:59, FarjadFarid(ChkNet) wrote:

No worries.

I found a way.


Would you share it, for the archives?

Ray.



I think I can do it relatively simply, in a reasonably general fashion.

If it is of real interest let me know, and I'll see if I can find time
to try and implement it.



Cheers,
Gavin




Re: [GENERAL] SPI_execute error handling

2015-09-07 Thread James Harper
> Hi
> 
> 
> When I need to solve similar situation, I take a code from plpgsql.
> look on function exec_stmt_block. Similar code is in plpgsql_check
> https://github.com/okbob/plpgsql_check/blob/master/plpgsql_check.c
> 

Thanks. Got it sorted!

James




Re: [GENERAL] bdr admin role

2015-09-07 Thread Ray Stell



On 9/6/15 10:55 PM, Craig Ringer wrote:

On 4 September 2015 at 21:46, Ray Stell  wrote:

FATAL:  role "postgresql" does not exist

It works if I init with "-U postgresql"

The demo works fine if
I use the string "postgresql" as the admin role.  It looks like you might
want to build -U support into bdr at some point.

It's not at all clear to me how this is happening. The string
"postgresql" isn't hardcoded by BDR anywhere that it could conceivably
be passed as a username for a connection string, so I'm not sure
what's going on.

I've never used "postgresql" as the default superuser name. Typically
I use "postgres".

I don't see anything particularly informative in the main PostgreSQL
sources that could explain it either. We pass "postgresql" as the
PGSQL_PAM_SERVICE and use it as the default RADIUS identifier, but I
fail to see how those could get passed as the login role identifier.

Are you running it under a unix user named "postgresql", by any chance?

Yes.





[GENERAL] Partitioning and constraint exclusion

2015-09-07 Thread Jayadevan M
Hello ,

I have a parent table and 6 child tables (partitions). The child tables
have check constraints defined in the form
CHECK (myuid in (123,456,..)).
myuid is bigint, the constraints for the 6 child tables are definitely
mutually exclusive. The number of values in the list ranges from 2-10 for 5
of the child tables. For the 6th child table, the list is 2500+ elements.
When I try explain/explain analyze for even a simple query like

select * from parent where myuid in (123,456,789)

the child table with 2500+ elements always gets scanned. I have an index on
the column and that does get used. But why doesn't the planner just use
constraint exclusion and avoid scanning that child altogether? Has anyone
faced a similar issue?

Thanks,
Jayadevan


Re: [GENERAL] BDR global sequences in two machine failover

2015-09-07 Thread Giovanni Maruzzelli
On Sep 7, 2015 5:05 AM, "Craig Ringer"  wrote:
>
> On 7 September 2015 at 00:18, Giovanni Maruzzelli  wrote:
> > Hello,
> >
> > Typical HA situation.
> >
> > I have master-master, two only machines, one active and one passive
> > (standby) with floating IP.
> > I write to only one machine at time, the one with the floating IP.
>
> This is a deployment that is better suited to the typical approach
> with an active node, a standby streaming replica, and failover. Tools
> like repmgr help with this.
>

Craig, thanks a lot for your answer!

My use case is keeping the internal state of some load-balanced servers that
need to act as one (e.g. a cluster of VoIP servers).

Last update wins is ok.

If I do not use global sequences, and I use uuid as primary keys, would BDR
be a correct choice?

BDR is appealing not only because of new toy coolness, but also because of
possible geodistribution and the seemingly sheer simplicity of
installation/management.

Btw, congratulations on the feat!

-giovanni




> > When one machine is down I can no more refill sequence allocated chunk (eg:
> > next pool of values)...
>
> Global sequence allocation requires a quorum of half the nodes plus
> one. So in a 2-node system that means both nodes.
>
> > How do you deal with this?
>
> Don't use a 2-node multi-master asynchronous replication system as an
> active/standby failover system.
>
> (BTW, newer BDR versions allow you to increase the preallocated chunk
> size, but that's just kicking the ball down the road a bit).
>
> > Seems that BDR global sequences will not be good for master-master failover.
>
> It's fine with more nodes. You have bigger worries, though, due to the
> *asynchronous* nature of the replication. You don't know if the peer
> node(s) have received all the changes from the master that failed. Not
> only that, but if it comes back online later, it'll replay those
> changes, and they might get discarded if more recent updates have
> since been applied to those rows, resulting in lost updates. See the
> documentation on multi-master conflicts and last-update-wins.
>
> This is very good behaviour for append-mostly applications, apps that
> are designed to work well with last-update-wins resolution, etc. It's
> really not what you want for some apps, though, and is extremely bad
> for a few workloads like apps that try to generate gapless sequences
> using counter tables. You *must* review the application if you're
> going to deploy it against a BDR system ... or any other asynchronous
> replication based solution.
>
> You can't just deploy a multi-master system like this and treat it as
> a single node. The very design choices that make it tolerant of
> latency and network partitions also means you have to think much more
> about how the application interacts with the system.
>
> With normal streaming replication you can make it synchronous, so
> there's no such concern. Or you can use it asynchronously, and accept
> that you'll lose some transactions, but you'll at least know (if you
> monitor replica lag) how big a time window you lose, and on failover
> you'll be making the decision to discard those transactions.  There
> are no multi-master conflicts to be concerned with, and failover
> becomes a simple (albeit painful) known quantity.
>
> > So, when you consumed the preallocated chunk (default to 15000 values), your
> > surviving machine will no more be able to insert into a table with a serial
> > column with underlying BDR global sequence.
> Correct.
>
> If you don't mind being tied to a fixed limit on the number of nodes
> you can instead use step/offset local sequences.
>
> > We're back to changing the start and increment of each sequence that
> > underly the "serial" field in each table.
> > And must do so differently for each node (only two in a master-master
> > failover).
>
> Correct.
>
> > Is there any workaround?
>
> Keep it simple. Use streaming replication and a hot standby.
>
> > For "traditional" (non BDR) serial, there is a way to set into
configuration
> > what will be START and INCREMENT of all sequences?
>
> No.
>
> > Or each serial sequence must be individually ALTERed for each serial
> > column in each table?
>
> Yes.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services


Re: [GENERAL] table dependencies

2015-09-07 Thread Melvin Davidson
All,
fwiw,  I once wrote a plpgsql function to assist in generating slony set
adds.
It grabs all the tables in pg_class and sorts them by foreign key count.
You can pull the main query logic from it and modify to suit your needs.

I've attached for your convenience.


On Mon, Sep 7, 2015 at 6:27 AM, Gavin Flower 
wrote:

> On 07/09/15 19:44, Raymond O'Donnell wrote:
>
>> On 06/09/2015 22:59, FarjadFarid(ChkNet) wrote:
>>
>>> No worries.
>>>
>>> I found a way.
>>>
>>> Would you share it, for the archives?
>>
>> Ray.
>>
>>
>> I think I can do it relatively simply, in a reasonable general fashion.
>
> if it is of real interest let me know, and I'll see if I can find time to
> try and implement  it.
>
>
> Cheers,
> Gavin
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
-- Function: slony_generate_set_adds(integer, text, integer)

-- DROP FUNCTION slony_generate_set_adds(integer, text, integer);

CREATE OR REPLACE FUNCTION slony_generate_set_adds(integer, text, integer)
  RETURNS void AS
$BODY$

DECLARE

p_set_id        ALIAS FOR $1;
p_schema        ALIAS FOR $2;
p_start_seq     ALIAS FOR $3;

v_record        record;
v_schema        TEXT;
v_relname       TEXT;
v_seq           TEXT;
v_prefix_digits INT4;
v_ctr           INT4;

v_msg           TEXT;

BEGIN

v_prefix_digits := 3;
v_ctr   := p_start_seq;

CREATE TEMP TABLE t_slony_set_add
( slonik_command TEXT )
ON COMMIT DROP;

--SELECT p_set_id::text || LPAD(v_ctr::text, v_prefix_digits, '0') INTO v_seq;
SELECT v_ctr INTO v_seq;

-- 2008-06-12
-- Sorting by OID replaced by sort by FK count

FOR v_record IN SELECT n.nspname, t.relname , 0 as count
  FROM pg_class t
  JOIN pg_namespace n ON n.oid = t.relnamespace
 WHERE relkind = 'r'
   AND relname NOT LIKE 'pg_%'
   AND relname NOT LIKE 'sql_%'
   AND n.nspname = p_schema
   AND t.OID NOT IN 
(SELECT conrelid
   FROM pg_constraint
  WHERE contype = 'f')
UNION
   SELECT n.nspname, t.relname, count(c.conname) as count
     FROM pg_class t
     JOIN pg_constraint c ON (c.conrelid = t.OID AND c.contype = 'f')
 JOIN pg_namespace n ON n.oid = t.relnamespace
WHERE relkind = 'r'
  AND t.relname NOT LIKE 'pg_%'
  AND t.relname NOT LIKE 'sql_%'
  AND n.nspname != 'rollback'
  AND CASE WHEN p_schema IS NOT NULL 
THEN n.nspname = p_schema
ELSE TRUE
END
GROUP BY n.nspname, t.relname
ORDER BY 3, 2

LOOP

SELECT 'SET ADD TABLE (SET ID=' || p_set_id || ', ORIGIN=1, ID='
    || v_seq || ', FULLY QUALIFIED NAME='''
    || v_record.nspname || '.' || v_record.relname || ''', comment='''
    || v_record.relname || ' fks->' || v_record.count::text || ''');' INTO v_msg;

INSERT INTO t_slony_set_add 
( slonik_command )
VALUES
( v_msg);

RAISE NOTICE '%', v_msg;

v_ctr := v_ctr +1;
--SELECT p_set_id::text || LPAD(v_ctr::text, v_prefix_digits, '0') INTO v_seq;
SELECT v_ctr INTO v_seq;

END LOOP;

v_prefix_digits := v_prefix_digits + 1;
--v_ctr := 1;
v_ctr   := p_start_seq;

--SELECT p_set_id::text || LPAD(v_ctr::text, v_prefix_digits, '0') INTO v_seq;
SELECT v_ctr INTO v_seq;

FOR v_record IN SELECT n.nspname, c.relname 
  FROM pg_class c
 LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
 LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
 WHERE c.relkind = 'S'::"char"
   AND CASE WHEN p_schema IS NOT NULL 
THEN n.nspname = p_schema
ELSE TRUE
END
  ORDER BY c.oid

LOOP

SELECT 'SET ADD SEQUENCE (SET ID=' || p_set_id || ', ORIGIN=1, ID='
    || v_seq || ', FULLY QUALIFIED NAME='''
    || v_record.nspname || '.' || v_record.relname || ''', comment='''
    || v_record.relname || ''');' INTO v_msg;

RAISE NOTICE '%', v_msg;
INSERT INTO t_slony_set_add 
( slonik_command )
VALUES
( v_msg);

v_ctr := v_ctr +1;

--SELECT p_set_id::text || LPAD(v_ctr::text, v_prefix_digits, '0') INTO v_seq;
SELECT v_ctr INTO v_seq;

END LOOP;

COPY t_slony_set_add 
--TO '/tmp/slony_set_adds.txt';
TO 'F:\temp\slony_set_adds.txt';

RETURN;

END;
$BODY$
  LANGUAGE plpgsql;

-- SAMPLE CALL
-- SELECT slony_generate_set_adds(1, 'public', 1)

Re: [GENERAL] Partitioning and constraint exclusion

2015-09-07 Thread Melvin Davidson
First, what is the PostgreSQL version?

Next, in postgresql.conf, what is the value of constraint_exclusion ?

On Mon, Sep 7, 2015 at 8:55 AM, Jayadevan M 
wrote:

> Hello ,
>
> I have a parent table and 6 child tables (partitions). The child tables
> have check constraints defined in the form
> CHECK (myuid in (123,456,..)).
> myuid is bigint, the constraints for the 6 child tables are definitely
> mutually exclusive. The number of values in the list ranges from 2-10 for 5
> of the child tables. For the 6th child table, the list is 2500+ elements.
> When I try explain/explain analyze for even a simple query like
>
> select * from parent where myuid in (123,456,789)
>
> the child table with 2500+ elements gets always scanned. I have an index
> on the column and that does get used. But why doesn't the planner just use
> constraint exclusion and not go for the index scan? Anyone faced a similar
> issue?
>
> Thanks,
> Jayadevan
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] table dependencies

2015-09-07 Thread FarjadFarid(ChkNet)
Thanks Melvin. Much appreciate it. 

 

 

From: Melvin Davidson [mailto:melvin6...@gmail.com] 
Sent: 07 September 2015 14:38
To: Gavin Flower
Cc: r...@iol.ie; FarjadFarid(ChkNet); pgsql-general
Subject: Re: [GENERAL] table dependencies

 

All, 
fwiw,  I once wrote a plpgsql function to assist in generating slony set adds.

It grabs all the tables in pg_class and sorts them by foreign key count.

You can pull the main query logic from it and modify to suit your needs.

I've attached for your convenience.

 

 

On Mon, Sep 7, 2015 at 6:27 AM, Gavin Flower  
wrote:

On 07/09/15 19:44, Raymond O'Donnell wrote:

On 06/09/2015 22:59, FarjadFarid(ChkNet) wrote:

No worries.

I found a way.

Would you share it, for the archives?

Ray.



I think I can do it relatively simply, in a reasonable general fashion.

if it is of real interest let me know, and I'll see if I can find time to try 
and implement  it.


Cheers,
Gavin








-- 

Melvin Davidson
I reserve the right to fantasize.  Whether or not you 
wish to share my fantasy is entirely up to you.   
 



Re: [GENERAL] Partitioning and constraint exclusion

2015-09-07 Thread David G. Johnston
On Mon, Sep 7, 2015 at 8:55 AM, Jayadevan M 
wrote:

> Hello ,
>
> I have a parent table and 6 child tables (partitions). The child tables
> have check constraints defined in the form
> CHECK (myuid in (123,456,..)).
> myuid is bigint, the constraints for the 6 child tables are definitely
> mutually exclusive. The number of values in the list ranges from 2-10 for 5
> of the child tables. For the 6th child table, the list is 2500+ elements.
> When I try explain/explain analyze for even a simple query like
>
> select * from parent where myuid in (123,456,789)
>
> the child table with 2500+ elements gets always scanned. I have an index
> on the column and that does get used. But why doesn't the planner just use
> constraint exclusion and not go for the index scan? Anyone faced a similar
> issue?
>
>
IIRC the planner doesn't understand overlaps, so having a definition of:

IN (1,2,3,4,5); or, nearly equivalently, = ANY(ARRAY[1,2,3,4,5])

and a request for:

IN (1,3,5) / = ANY(ARRAY[1,3,5]) is going to get you nowhere with the
planner.

I am not sure, but I doubt it is intelligent enough to recognize the
functional expression even if all of the values are present. "Simple
equality" (
http://www.postgresql.org/docs/9.4/interactive/ddl-partitioning.html) this
is not.

David J.
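[For illustration, a minimal setup in the shape the thread describes; table and values are invented. Constraint exclusion can prove a single equality against an IN-list CHECK, which is the case that does prune:]

```sql
-- Hypothetical parent with two inheritance children carrying IN-list checks
CREATE TABLE parent (myuid bigint);
CREATE TABLE child1 (CHECK (myuid IN (123, 456))) INHERITS (parent);
CREATE TABLE child2 (CHECK (myuid IN (789, 1011))) INHERITS (parent);

SET constraint_exclusion = partition;

-- A single equality can be tested against each child's IN-list,
-- so the plan should touch only child1:
EXPLAIN SELECT * FROM parent WHERE myuid = 123;

-- Whether an IN-list in the query prunes against a very large
-- IN-list in the CHECK is exactly what is in question here:
EXPLAIN SELECT * FROM parent WHERE myuid IN (123, 789);
```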


[GENERAL] jsonb value retrieval performance

2015-09-07 Thread Tom Smith
Hello:

Does JSONB storage have some internal indexing (like a hashmap)
to speed up looking up a value given a key?
I have a jsonb doc with two levels of keys
({parentKey: {childKey: value}});
there are maybe 2000 parent keys per doc and 100 child keys per parent key,
and I am trying to get a value via jsonb->parentKey->childKey.
It seems to be very slow.
Would it actually be faster to use top-level keys only and parse the value at
the client side?

Thanks
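[For reference, the two access styles being compared look roughly like this; the table and key names are invented:]

```sql
-- Server-side extraction of one nested value:
SELECT doc -> 'parentKey' ->> 'childKey' FROM docs WHERE id = 1;
-- equivalently, with a path operator:
SELECT doc #>> '{parentKey,childKey}' FROM docs WHERE id = 1;

-- Client-side alternative: fetch just the parent object (or the whole
-- document) and parse it in the application:
SELECT doc -> 'parentKey' FROM docs WHERE id = 1;
```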


Re: [GENERAL] Partitioning and constraint exclusion

2015-09-07 Thread Vick Khera
On Mon, Sep 7, 2015 at 4:48 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> IIRC ​The planner doesn't understand​
>
> ​overlaps so having a definition of:
>
> IN (1,2,3,4,5); or nearly equivalently = ANY(ARRAY[1,2,3,4,5]))​
>
> and a request for:
>
> IN (1,3,5) / = ANY(ARRAY[1,3,5]) is going to get you nowhere with the
> planner.
>

The partition code does not execute your CHECK condition; it only tests to
see if the query includes it explicitly. For example, if you have a split
on "id % 100 = 59" for a table, then looking for id = 13059 does you no
good; you have to search for "id = 13059 AND id % 100 = 59" to invoke the
table exclusions.


Re: [GENERAL] Partitioning and constraint exclusion

2015-09-07 Thread Jayadevan M
On Mon, Sep 7, 2015 at 7:12 PM, Melvin Davidson 
wrote:

> First, what is the PostgresSQL version ??
>

9.3.6

>
> Next, in postgresql.conf, what is the value of constraint_exclusion ?
>

partition

In response to the other possible issues pointed out - the planner is
indeed *skipping the rest of the child tables* (the ones with just a few
values in the check constraint). That is why I feel the number of elements
in the check constraint on this particular child table is causing it to be
scanned. The query ends up scanning the table where the data will be found
plus the table with 2500+ values in the check constraint. Am I missing
something?
I tried changing the filter from myuid in (1,2,3) to myuid = 1 or myuid = 2
or ..., but it did not improve the plan. One Index Cond became three Index
Conds.


Thanks,
Jayadevan


Re: [GENERAL] BDR global sequences in two machine failover

2015-09-07 Thread Craig Ringer
On 7 September 2015 at 20:56, Giovanni Maruzzelli  wrote:

> If I do not use global sequences, and I use uuid as primary keys, would BDR
> be a correct choice?

For something like a VoIP service where eventual consistency is
usually OK and geographic redundancy with latency tolerance and
partition tolerance is needed, yes, it could make a lot of sense.

You could use UUID keys or use normal sequences with different offsets
on the nodes. UUID will probably be easier to manage.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
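[The step/offset approach mentioned above can be sketched for a two-node setup like this; the sequence name is invented, and this must be repeated for every serial column, with a different RESTART value on each node:]

```sql
-- Node 1 generates odd values:
ALTER SEQUENCE mytable_id_seq INCREMENT BY 2 RESTART WITH 1;

-- Node 2 generates even values:
ALTER SEQUENCE mytable_id_seq INCREMENT BY 2 RESTART WITH 2;
```

Note the trade-off from the thread: the increment caps the number of nodes you can ever add without renumbering.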


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] bdr admin role

2015-09-07 Thread Craig Ringer
On 7 September 2015 at 20:34, Ray Stell  wrote:
>
>
> On 9/6/15 10:55 PM, Craig Ringer wrote:
>>
>> On 4 September 2015 at 21:46, Ray Stell  wrote:
>>
>> FATAL:  role "postgresql" does not exist
>
> It works if I init with "-U postgresql"
>>>
>>> The demo works fine if
>>> I use the string "postgresql" as the admin role.  It looks like you might
>>> want to build -U support into bdr at some point.
>>
>> It's not at all clear to me how this is happening. The string
>> "postgresql" isn't hardcoded by BDR anywhere that it could conceivably
>> be passed as a username for a connection string, so I'm not sure
>> what's going on.
>>
>> I've never used "postgresql" as the default superuser name. Typically
>> I use "postgres".
>>
>> I don't see anything particularly informative in the main PostgreSQL
>> sources that could explain it either. We pass "postgresql" as the
>> PGSQL_PAM_SERVICE and use it as the default RADIUS identifier, but I
>> fail to see how those could get passed as the login role identifier.
>>
>> Are you running it under a unix user named "postgresql", by any chance?
>
> Yes.

Hm, ok. So somewhere a connection string lacks a 'user=' entry, and
libpq is defaulting to the operating-system user name, which doesn't
exist as a database user.

I don't see this in my tests. Can you confirm the exact BDR version
you're testing/using? "git rev-parse --short HEAD" and "git branch"
please.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

