Primary key gist index?

2018-03-14 Thread Jeremy Finzel
Hello!  From all that I can tell, it is not possible to use a btree_gist
index as a primary key.  If so, why not?  I have a table with this gist
index which truly ought to be its primary key.  as_of_date is of the
daterange type:

EXCLUDE USING gist (id WITH =, as_of_date WITH &&)

Any direction here would be much appreciated.

Right now, I am forced to create a redundant unique btree index on (id,
lower(as_of_date)) in order to have a primary key on the table.
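For context, here is a minimal sketch of such a table (table name and the extra column are assumptions, not taken from the thread): the exclusion constraint prevents two rows with the same id and overlapping date ranges, while the extra unique index exists only to give the table a primary-key-like constraint.

```sql
-- Hypothetical reconstruction of the setup described above.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE example_history (
    id         integer   NOT NULL,
    as_of_date daterange NOT NULL,
    -- No two rows may share an id with overlapping validity ranges:
    EXCLUDE USING gist (id WITH =, as_of_date WITH &&)
);

-- The redundant unique btree index mentioned above, needed only
-- because PRIMARY KEY cannot be backed by a GiST index:
CREATE UNIQUE INDEX example_history_pk_idx
    ON example_history (id, lower(as_of_date));
```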

Thanks!
Jeremy


Re: Primary key gist index?

2018-03-14 Thread Adrian Klaver

On 03/14/2018 06:19 AM, Jeremy Finzel wrote:
Hello!  From all that I can tell, it is not possible using a btree_gist 
index as a primary key.  If so, why not?  I have a table with this gist 


https://www.postgresql.org/docs/10/static/btree-gist.html

"In general, these operator classes will not outperform the equivalent 
standard B-tree index methods, and they lack one major feature of the 
standard B-tree code: the ability to enforce uniqueness."


index which truly ought to be its primary key.  as_of_date is of range 
date type:


EXCLUDE USING gist (id WITH =, as_of_date WITH &&)

Any direction here would be much appreciated.

Right now, I am forced to create a redundant btree index UNIQUE, btree 
(id, lower(as_of_date)) in order to have a primary key on the table.


Thanks!
Jeremy



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-14 Thread chris


I believe it's postgresql-9.4.1208.jre7.jar
On 03/13/2018 05:48 PM, Adrian Klaver wrote:

On 03/13/2018 04:46 PM, chris wrote:

I'm sorry that took a few days but I am running;


Postgresql-9.4

and

jre7.jar


What we are looking for is the JDBC driver you are using.



Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go 
about debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is in an event of a failover, to 
first try to connect to the master. If that fails to then reach 
out to the standby.


I looked online and found the suggested way to do this, but it's 
not working.


This is the JDBC line we have:

jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db

Here is the error we are getting:

2018-03-07 13:54:36,994 ERROR:com.zaxxer.hikari.util.PropertyElf 
- failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.lang.NumberFormatException: for input string: 
"5432,10.16.10.13:5432"


Lets try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In 
other words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set 
right.

Thanks in advance for the help.



wiki Disk Usage, table size: ERROR: could not open relation with OID 0

2018-03-14 Thread Hans Schou
Hi

I got the message
  ERROR: could not open relation with OID 0
when running the "General Table Size Information" from
https://wiki.postgresql.org/wiki/Disk_Usage

With this patch applied the query runs, but it only returns tables that have a TOAST table:
@@ -12,5 +12,6 @@
   FROM pg_class c
   LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
   WHERE relkind = 'r'
+  AND reltoastrelid!=0
   ) a
 ) a;

But I guess it was supposed to give the size of all tables.

I'm running version 9.1.9 so it should be working according to the wiki.

The original statement:

SELECT *, pg_size_pretty(total_bytes) AS total
, pg_size_pretty(index_bytes) AS INDEX
, pg_size_pretty(toast_bytes) AS toast
, pg_size_pretty(table_bytes) AS TABLE
  FROM (
  SELECT *, total_bytes-index_bytes-COALESCE(toast_bytes,0) AS
table_bytes FROM (
  SELECT c.oid,nspname AS table_schema, relname AS TABLE_NAME
  , c.reltuples AS row_estimate
  , pg_total_relation_size(c.oid) AS total_bytes
  , pg_indexes_size(c.oid) AS index_bytes
  , pg_total_relation_size(reltoastrelid) AS toast_bytes
  FROM pg_class c
  LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE relkind = 'r'
  ) a) a;
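The error apparently comes from calling pg_total_relation_size() on reltoastrelid when it is 0, i.e. for tables that have no TOAST table. Rather than filtering those tables out of the result, a guard keeps every table in the output. This is a sketch of such a fix, not the wiki's official version:

```sql
SELECT *, pg_size_pretty(total_bytes) AS total
    , pg_size_pretty(index_bytes) AS INDEX
    , pg_size_pretty(toast_bytes) AS toast
    , pg_size_pretty(table_bytes) AS TABLE
  FROM (
  SELECT *, total_bytes-index_bytes-COALESCE(toast_bytes,0) AS
table_bytes FROM (
      SELECT c.oid, nspname AS table_schema, relname AS table_name
          , c.reltuples AS row_estimate
          , pg_total_relation_size(c.oid) AS total_bytes
          , pg_indexes_size(c.oid) AS index_bytes
          -- Only ask for the TOAST size when a TOAST table exists;
          -- reltoastrelid = 0 means there is none:
          , CASE WHEN reltoastrelid <> 0
                 THEN pg_total_relation_size(reltoastrelid)
            END AS toast_bytes
      FROM pg_class c
      LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE relkind = 'r'
  ) a
) a;
```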


Any help much appreciated.

./best regards


Re: JDBC connectivity issue

2018-03-14 Thread Adrian Klaver

On 03/14/2018 07:51 AM, chris wrote:


I believe its Postgresql-9.4.1208.jre7.jar


Pretty sure solving this is going to require knowing exactly what driver 
is in use. The failover syntax looks to be fairly recent, so being off 
by a little on the driver version can make a big difference.


Or you could do as this post suggests:

https://www.postgresql.org/message-id/CADK3HHJgdio_TZ-fpk4rguWaA-wWZFNZrjBft_T4jLBK_E_c8w%40mail.gmail.com

That is install driver version 42.2.1.
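For reference, with a 42.x driver the multi-host failover URL takes the form below (hosts and database name are from the thread; the targetServerType parameter comes from the pgJDBC connection documentation, and newer releases spell the value primary instead of master). Note also that the stack trace shows HikariCP decomposing the URL into serverName/portNumber properties of PGSimpleDataSource, which does not understand host lists, so configuring Hikari with a plain jdbcUrl instead of a dataSourceClassName may be part of the fix:

```
jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db?targetServerType=master
```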


On 03/13/2018 05:48 PM, Adrian Klaver wrote:

On 03/13/2018 04:46 PM, chris wrote:

I'm sorry that took a few days but I am running;


Postgresql-9.4

and

jre7.jar


What we are looking for is the JDBC driver you are using.



Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go 
about debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is in an event of a failover, to 
first try to connect to the master. If that fails to then reach 
out to the standby.


I looked online and found the suggested way to do this, but it's 
not working.


This is the JDBC line we have:

jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db

Here is the error we are getting:

2018-03-07 13:54:36,994 ERROR:com.zaxxer.hikari.util.PropertyElf 
- failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.lang.NumberFormatException: for input string: 
"5432,10.16.10.13:5432"


Lets try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In 
other words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set 
right.

Thanks in advance for the help.



--
Adrian Klaver
adrian.kla...@aklaver.com



Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Alexander Farber
Good afternoon,

A PostgreSQL 10.3 table contains JSON data like:

[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8, "row": 12,
"value": 10, "letter": "B"}, {"col": 9, "row": 12, "value": 1, "letter":
"C"}, {"col": 10, "row": 12, "value": 2, "letter": "D"}]

Please suggest how to extract only the "letter" values and concatenate
them to a string like "ABCD"?

I suppose at the end I should use the ARRAY_TO_STRING function, but which
JSON function should I use for extracting the "letter" values to an array?

I keep looking at
https://www.postgresql.org/docs/10/static/functions-json.html but haven't
found a good one yet

Thank you
Alex


Re: Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Ivan E. Panchenko

Hi Alex,

SELECT  string_agg(x->>'letter','') FROM json_array_elements(

'[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8, "row": 
12, "value": 10, "letter": "B"}, {"col": 9, "row": 12, "value": 1, 
"letter": "C"}, {"col": 10, "row": 12, "value": 2, "letter": "D"}]'::json


) x;

Regards,

Ivan Panchenko
Postgres Professional
the Russian PostgreSQL Company

On 14.03.2018 19:27, Alexander Farber wrote:

Good afternoon,

A PostgreSQL 10.3 table contains JSON data like:

[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8, "row": 
12, "value": 10, "letter": "B"}, {"col": 9, "row": 12, "value": 1, 
"letter": "C"}, {"col": 10, "row": 12, "value": 2, "letter": "D"}]


Please suggest, how to extract only the "letter" values and 
concatenate them to a string like "ABCD"?


I suppose at the end I should use the ARRAY_TO_STRING function, but 
which JSON function to use for extracting the "letter" values to an array?


I keep looking at 
https://www.postgresql.org/docs/10/static/functions-json.html but 
haven't found a good one yet


Thank you
Alex






Re: Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Alexander Farber
Thank you, Ivan! I am trying to apply your suggestion to my table -

On Wed, Mar 14, 2018 at 5:34 PM, Ivan E. Panchenko <
i.panche...@postgrespro.ru> wrote:

> Hi Alex,
>
> SELECT  string_agg(x->>'letter','') FROM json_array_elements(
>
> '[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8, "row": 12,
> "value": 10, "letter": "B"}, {"col": 9, "row": 12, "value": 1, "letter":
> "C"}, {"col": 10, "row": 12, "value": 2, "letter": "D"}]'::json
>
> ) x;
>
>
# select * from words_moves where gid=656 order by played desc limit 3;
 mid  | action | gid | uid  |            played             | tiles                                                | score
------+--------+-----+------+-------------------------------+------------------------------------------------------+-------
 1353 | swap   | 656 |    7 | 2018-03-14 17:22:18.430082+01 | "ЙНРР"                                               |     ¤
 1352 | play   | 656 | 1199 | 2018-03-14 16:55:45.898379+01 | [{"col": 4, "row": 7, "value": 2, "letter": "С"}, {"col": 5, "row": 7, "value": 3, "letter": "У"}] |    19
 1351 | play   | 656 |    7 | 2018-03-14 16:38:48.132546+01 | [{"col": 9, "row": 11, "value": 5, "letter": "Ж"}, {"col": 9, "row": 13, "value": 2, "letter": "М"}] |    16
(3 rows)

by trying the following:

#  select string_agg(x->>'letter', ' ') from (select
jsonb_array_elements(tiles) from words_moves where gid=656 and
action='play' order by played desc limit 5) x;
ERROR:  42883: operator does not exist: record ->> unknown
LINE 1: select string_agg(x->>'letter', ' ') from (select jsonb_arra...
   ^
HINT:  No operator matches the given name and argument type(s). You might
need to add explicit type casts.
LOCATION:  op_error, parse_oper.c:728

I am probably missing something obvious?

Regards
Alex


Re: Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Adrian Klaver

On 03/14/2018 10:02 AM, Alexander Farber wrote:

Thank you, Ivan! I am trying to apply your suggestion to my table -

On Wed, Mar 14, 2018 at 5:34 PM, Ivan E. Panchenko 
<i.panche...@postgrespro.ru> wrote:


Hi Alex,

SELECT  string_agg(x->>'letter','') FROM json_array_elements(

'[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8,
"row": 12, "value": 10, "letter": "B"}, {"col": 9, "row": 12,
"value": 1, "letter": "C"}, {"col": 10, "row": 12, "value": 2,
"letter": "D"}]'::json

) x;


# select * from words_moves where gid=656 order by played desc limit 3;
 mid  | action | gid | uid  |            played             | tiles                                                | score
------+--------+-----+------+-------------------------------+------------------------------------------------------+-------
 1353 | swap   | 656 |    7 | 2018-03-14 17:22:18.430082+01 | "ЙНРР"                                               |     ¤
 1352 | play   | 656 | 1199 | 2018-03-14 16:55:45.898379+01 | [{"col": 4, "row": 7, "value": 2, "letter": "С"}, {"col": 5, "row": 7, "value": 3, "letter": "У"}] |    19
 1351 | play   | 656 |    7 | 2018-03-14 16:38:48.132546+01 | [{"col": 9, "row": 11, "value": 5, "letter": "Ж"}, {"col": 9, "row": 13, "value": 2, "letter": "М"}] |    16
(3 rows)

by trying the following:

#  select string_agg(x->>'letter', ' ') from (select 
jsonb_array_elements(tiles) from words_moves where gid=656 and 
action='play' order by played desc limit 5) x;

ERROR:  42883: operator does not exist: record ->> unknown
LINE 1: select string_agg(x->>'letter', ' ') from (select jsonb_arra...
    ^
HINT:  No operator matches the given name and argument type(s). You 
might need to add explicit type casts.

LOCATION:  op_error, parse_oper.c:728

I am probably missing something obvious?


Do you still have non-arrays in the tile field?:

https://www.postgresql.org/message-id/CAADeyWgYKKaArJb6JK_xEtSO%3D7aeNaYqBu_ef-D5W7s8EFPfpQ%40mail.gmail.com



Regards
Alex





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Adrian Klaver

On 03/14/2018 10:12 AM, Adrian Klaver wrote:

On 03/14/2018 10:02 AM, Alexander Farber wrote:

Thank you, Ivan! I am trying to apply your suggestion to my table -

On Wed, Mar 14, 2018 at 5:34 PM, Ivan E. Panchenko 
<i.panche...@postgrespro.ru> wrote:


    Hi Alex,

    SELECT  string_agg(x->>'letter','') FROM json_array_elements(

    '[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8,
    "row": 12, "value": 10, "letter": "B"}, {"col": 9, "row": 12,
    "value": 1, "letter": "C"}, {"col": 10, "row": 12, "value": 2,
    "letter": "D"}]'::json

    ) x;


# select * from words_moves where gid=656 order by played desc limit 3;
 mid  | action | gid | uid  |            played             | tiles                                                | score
------+--------+-----+------+-------------------------------+------------------------------------------------------+-------
 1353 | swap   | 656 |    7 | 2018-03-14 17:22:18.430082+01 | "ЙНРР"                                               |     ¤
 1352 | play   | 656 | 1199 | 2018-03-14 16:55:45.898379+01 | [{"col": 4, "row": 7, "value": 2, "letter": "С"}, {"col": 5, "row": 7, "value": 3, "letter": "У"}] |    19
 1351 | play   | 656 |    7 | 2018-03-14 16:38:48.132546+01 | [{"col": 9, "row": 11, "value": 5, "letter": "Ж"}, {"col": 9, "row": 13, "value": 2, "letter": "М"}] |    16
(3 rows)

by trying the following:

#  select string_agg(x->>'letter', ' ') from (select 
jsonb_array_elements(tiles) from words_moves where gid=656 and 
action='play' order by played desc limit 5) x;

ERROR:  42883: operator does not exist: record ->> unknown
LINE 1: select string_agg(x->>'letter', ' ') from (select jsonb_arra...
    ^
HINT:  No operator matches the given name and argument type(s). You 
might need to add explicit type casts.

LOCATION:  op_error, parse_oper.c:728

I am probably missing something obvious?


Do you still have non-arrays in the tile field?:

https://www.postgresql.org/message-id/CAADeyWgYKKaArJb6JK_xEtSO%3D7aeNaYqBu_ef-D5W7s8EFPfpQ%40mail.gmail.com 


I should have looked closer before answering, yes there are:

1353 | swap   | 656 |7 | 2018-03-14 17:22:18.430082+01 | "ЙНРР"






Regards
Alex








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Primary key gist index?

2018-03-14 Thread Jeremy Finzel
On Wed, Mar 14, 2018 at 8:33 AM, Adrian Klaver wrote:

> On 03/14/2018 06:19 AM, Jeremy Finzel wrote:
>
>> Hello!  From all that I can tell, it is not possible using a btree_gist
>> index as a primary key.  If so, why not?  I have a table with this gist
>>
>
> https://www.postgresql.org/docs/10/static/btree-gist.html
>
> "In general, these operator classes will not outperform the equivalent
> standard B-tree index methods, and they lack one major feature of the
> standard B-tree code: the ability to enforce uniqueness."
>
>
> index which truly ought to be its primary key.  as_of_date is of range
>> date type:
>>
>> EXCLUDE USING gist (id WITH =, as_of_date WITH &&)
>>
>> Any direction here would be much appreciated.
>>
>> Right now, I am forced to create a redundant btree index UNIQUE, btree
>> (id, lower(as_of_date)) in order to have a primary key on the table.
>>
>> Thanks!
>> Jeremy
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>

Thank you for the ref.  But I don't understand how an exclusion constraint
does not have "the ability to enforce uniqueness" unless they just mean
that is the case "under the covers of postgres".  That is exactly what it
does, right?  By the definition of the exclusion index I have above, there
cannot be more than one row with the same id and as_of_date values.

Thanks,
Jeremy


Re: Primary key gist index?

2018-03-14 Thread Paul Jungwirth

On 03/14/2018 06:19 AM, Jeremy Finzel wrote:
Hello!  From all that I can tell, it is not possible using a btree_gist 
index as a primary key.  If so, why not?  I have a table with this gist 
index which truly ought to be its primary key.  as_of_date is of range 
date type:


EXCLUDE USING gist (id WITH =, as_of_date WITH &&)


I'm curious why you need a primary key on this table, especially if the 
exclusion constraint is already preventing duplicate/overlapping records?


Technically I think an exclusion constraint (or at least this one) 
fulfills the formal requirements of a primary key (is unique, isn't 
null), but maybe there are other primary-key duties it doesn't meet, 
like defining foreign keys that reference it. I've been on-and-off 
building an extension for temporal foreign keys at [1]. That is pretty 
new, but perhaps it will be useful/interesting to you. And if you have 
any feedback, I'd love to hear it!


But anyway, maybe if you shared why the table needs a real PRIMARY KEY, 
people here can suggest something.


[1] https://github.com/pjungwir/time_for_keys

Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Primary key gist index?

2018-03-14 Thread Tom Lane
Paul Jungwirth  writes:
> On 03/14/2018 06:19 AM, Jeremy Finzel wrote:
>> Hello!  From all that I can tell, it is not possible using a btree_gist 
>> index as a primary key.  If so, why not?  I have a table with this gist 
>> index which truly ought to be its primary key.  as_of_date is of range 
>> date type:
>> 
>> EXCLUDE USING gist (id WITH =, as_of_date WITH &&)

> Technically I think an exclusion constraint (or at least this one) 
> fulfills the formal requirements of a primary key (is unique, isn't 
> null), but maybe there are other primary-key duties it doesn't meet, 
> like defining foreign keys that reference it.

I think a key point is that an exclusion constraint might or might not
provide behavior that could be construed as uniqueness.  The logic for
PRIMARY KEY hasn't got nearly enough knowledge to tell whether particular
operators used in a particular way in a GIST index will behave in a way
that would support calling that a primary key.  b-tree indexes, on the
other hand, have basically only one behavior, so they're easy.

Also, as you mention, extrapolating behavior that's not really equality
to situations like foreign keys gets pretty interesting pretty fast.
An exclusion constraint using && might ensure that no two values in the
column are identical, but it would not be enough to ensure that a proposed
FK row can't && with more than one PK row.  So what then?

regards, tom lane



Re: Primary key gist index?

2018-03-14 Thread Adrian Klaver

On 03/14/2018 11:10 AM, Jeremy Finzel wrote:


On Wed, Mar 14, 2018 at 8:33 AM, Adrian Klaver 
<adrian.kla...@aklaver.com> wrote:


On 03/14/2018 06:19 AM, Jeremy Finzel wrote:

Hello!  From all that I can tell, it is not possible using a
btree_gist index as a primary key.  If so, why not?  I have a
table with this gist


https://www.postgresql.org/docs/10/static/btree-gist.html


"In general, these operator classes will not outperform the
equivalent standard B-tree index methods, and they lack one major
feature of the standard B-tree code: the ability to enforce uniqueness."


index which truly ought to be its primary key.  as_of_date is of
range date type:

EXCLUDE USING gist (id WITH =, as_of_date WITH &&)

Any direction here would be much appreciated.

Right now, I am forced to create a redundant btree index UNIQUE,
btree (id, lower(as_of_date)) in order to have a primary key on
the table.

Thanks!
Jeremy



-- 
Adrian Klaver

adrian.kla...@aklaver.com 


Thank you for the ref.  But I don't understand how an exclusion 
constraint does not have "the ability to enforce uniqueness" unless they 
just mean that is the case "under the covers of postgres".  That is 
exactly what it does, right?  By the definition of the exclusion index I 
have above, there cannot be more than one row with the same id and 
as_of_date values.


https://www.postgresql.org/docs/10/static/sql-createtable.html#SQL-CREATETABLE-EXCLUDE

"The EXCLUDE clause defines an exclusion constraint, which guarantees 
that if any two rows are compared on the specified column(s) or 
expression(s) using the specified operator(s), not all of these 
comparisons will return TRUE. If all of the specified operators test for 
equality, this is equivalent to a UNIQUE constraint, although an 
ordinary unique constraint will be faster. ..."


I have always taken the above to mean that while it is possible to create 
an EXCLUDE constraint that enforces uniqueness, that enforcement is internal 
to the EXCLUDE constraint and is not knowable to Postgres when it is looking 
for a UNIQUE index. Probably because an EXCLUDE can be non-unique.





Thanks,
Jeremy



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Primary key gist index?

2018-03-14 Thread Steven Lembark
On Wed, 14 Mar 2018 11:28:59 -0700
Paul Jungwirth  wrote:

> > EXCLUDE USING gist (id WITH =, as_of_date WITH &&)  

> null), but maybe there are other primary-key duties it doesn't meet, 
> like defining foreign keys that reference it. I've been on-and-off 

The PK does provide a unique index.
It may be worth adding an index on

id WHERE upper(as_of_date) = 'infinity'

for faster searches of current ids (assuming, of course, that there
is an infinite value for the upper bound of as_of_date).
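Spelled out as DDL, that partial index might look like this (table name assumed; it presumes "current" rows are stored with an upper bound of infinity rather than an unbounded range, since upper() of an unbounded range is NULL):

```sql
-- Partial index covering only the "current" row per id:
CREATE INDEX example_history_current_idx
    ON example_history (id)
    WHERE upper(as_of_date) = 'infinity';
```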


You will also need a PK if the table is used in foreign key 
constraints -- the other table has to ref this table's PK.
-- 
Steven Lembark   1505 National Ave
Workhorse Computing Rockford, IL 61103
lemb...@wrkhors.com+1 888 359 3508



Re: Primary key gist index?

2018-03-14 Thread Paul Jungwirth
On 03/14/2018 11:44 AM, Tom Lane wrote:

> Also, as you mention, extrapolating behavior that's not really equality
> to situations like foreign keys gets pretty interesting pretty fast.
> An exclusion constraint using && might ensure that no two values in the
> column are identical, but it would not be enough to ensure that a proposed
> FK row can't && with more than one PK row.  So what then?


This is (perhaps) a digression from the OP's question, but in temporal 
databases it is fine if the child record's as_of_time overlaps with 
multiple records from the parent table. In fact that would be pretty 
normal. What's required is that the child's as_of_time is completely 
covered by the "sum" of the as_of_times of the parent records with a 
matching ID. For example:


houses
id  as_of_timeappraisal
1   [2016-01-01, 2017-01-01)  $200k
1   [2017-01-01, 2018-01-01)  $230k

rooms
id  as_of_timehouse_id
1   [2016-01-01, 2018-01-01)  1

If you had a "temporal foreign key" from rooms to houses, that example 
should be valid. I understand that's not possible today with normal 
foreign keys, and maybe it's too specific to be desirable as a 
general-purpose feature, but that's what the github extension I linked 
to tries to offer.


--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Ivan E. Panchenko

On 14.03.2018 20:02, Alexander Farber wrote:

Thank you, Ivan! I am trying to apply your suggestion to my table -

On Wed, Mar 14, 2018 at 5:34 PM, Ivan E. Panchenko 
<i.panche...@postgrespro.ru> wrote:


Hi Alex,

SELECT  string_agg(x->>'letter','') FROM json_array_elements(

'[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8,
"row": 12, "value": 10, "letter": "B"}, {"col": 9, "row": 12,
"value": 1, "letter": "C"}, {"col": 10, "row": 12, "value": 2,
"letter": "D"}]'::json

) x;


# select * from words_moves where gid=656 order by played desc limit 3;
 mid  | action | gid | uid  |            played             | tiles                                                | score
------+--------+-----+------+-------------------------------+------------------------------------------------------+-------
 1353 | swap   | 656 |    7 | 2018-03-14 17:22:18.430082+01 | "ЙНРР"                                               |     ¤
 1352 | play   | 656 | 1199 | 2018-03-14 16:55:45.898379+01 | [{"col": 4, "row": 7, "value": 2, "letter": "С"}, {"col": 5, "row": 7, "value": 3, "letter": "У"}] |    19
 1351 | play   | 656 |    7 | 2018-03-14 16:38:48.132546+01 | [{"col": 9, "row": 11, "value": 5, "letter": "Ж"}, {"col": 9, "row": 13, "value": 2, "letter": "М"}] |    16
(3 rows)

by trying the following:

#  select string_agg(x->>'letter', ' ') from (select 
jsonb_array_elements(tiles) from words_moves where gid=656 and 
action='play' order by played desc limit 5) x;

ERROR:  42883: operator does not exist: record ->> unknown
LINE 1: select string_agg(x->>'letter', ' ') from (select jsonb_arra...
   ^
HINT:  No operator matches the given name and argument type(s). You 
might need to add explicit type casts.

LOCATION:  op_error, parse_oper.c:728

I am probably missing something obvious?

Yes, here x is the alias for the record, not for the json field. So you 
need to write the query like:


select string_agg(x->>'letter', ' ')
from (
   select jsonb_array_elements(tiles) x
   from words_moves
   where gid=656 and action='play'
   order by played desc limit 5
) y;



Regards
Alex



Regards,
Ivan


Re: Primary key gist index?

2018-03-14 Thread Jeremy Finzel
On Wed, Mar 14, 2018 at 1:29 PM Paul Jungwirth 
wrote:

> On 03/14/2018 06:19 AM, Jeremy Finzel wrote:
> > Hello!  From all that I can tell, it is not possible using a btree_gist
> > index as a primary key.  If so, why not?  I have a table with this gist
> > index which truly ought to be its primary key.  as_of_date is of range
> > date type:
> >
> > EXCLUDE USING gist (id WITH =, as_of_date WITH &&)
>
> I'm curious why you need a primary key on this table, especially if the
> exclusion constraint is already preventing duplicate/overlapping records?
>
> Technically I think an exclusion constraint (or at least this one)
> fulfills the formal requirements of a primary key (is unique, isn't
> null), but maybe there are other primary-key duties it doesn't meet,
> like defining foreign keys that reference it. I've been on-and-off
> building an extension for temporal foreign keys at [1]. That is pretty
> new, but perhaps it will be useful/interesting to you. And if you have
> any feedback, I'd love to hear it!
>
> But anyway, maybe if you shared why the table needs a real PRIMARY KEY,
> people here can suggest something.
>
> [1] https://github.com/pjungwir/time_for_keys
>
> Yours,
>
> --
> Paul  ~{:-)
> p...@illuminatedcomputing.com


Because many extensions require primary keys. I also infer primary keys for
various purposes.





Re: Best options for new PG instance

2018-03-14 Thread pinker
Bugzilla from scher...@proteus-tech.com wrote
> Oh - and lots of memory is always good no matter what as others have said.

I'm probably "the others" here. I have already seen really large
installations, with as much as 6TB of RAM. Dealing with that is a completely
different universe of problems, because of NUMA: you cannot really have that
much RAM without multiple sockets, because every processor has a maximum
memory capacity. What's next - those processors need to communicate with
each other and with the hardware, and those algorithms aren't perfect yet
(I would rather say they are underdeveloped).

so - more memory is a good rule of thumb, but the sky isn't the limit :)



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Need followup setup instructions to postgresql-10.3-1-linux-x64.run

2018-03-14 Thread pinker
You probably need to change pg_hba.conf: set the authentication method to
trust for your user, reload the server with pg_ctl, go to psql and change
the password. More details can be found here:

https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html
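A hedged sketch of that procedure (the data directory path and user name are assumptions; remember to restore the original authentication method afterwards):

```
# In pg_hba.conf, temporarily allow password-less local login:
# TYPE  DATABASE  USER      METHOD
local   all       postgres  trust

# Then, in a shell:
pg_ctl reload -D /path/to/data
psql -U postgres -c "ALTER USER postgres PASSWORD 'new-password';"
```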



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: JDBC connectivity issue

2018-03-14 Thread chris

Oh sorry for getting the wrong information.

How would I get the information on what driver is currently installed?


On 03/14/2018 09:44 AM, Adrian Klaver wrote:

On 03/14/2018 07:51 AM, chris wrote:


I believe its Postgresql-9.4.1208.jre7.jar


Pretty sure solving this is going to require knowing exactly what 
driver is in use. The failover syntax looks to be fairly recent, so 
being off by a little on the driver version can make a big difference.


Or you could do as this post suggests:

https://www.postgresql.org/message-id/CADK3HHJgdio_TZ-fpk4rguWaA-wWZFNZrjBft_T4jLBK_E_c8w%40mail.gmail.com 



That is install driver version 42.2.1.


On 03/13/2018 05:48 PM, Adrian Klaver wrote:

On 03/13/2018 04:46 PM, chris wrote:

I'm sorry that took a few days but I am running;


Postgresql-9.4

and

jre7.jar


What we are looking for is the JDBC driver you are using.



Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go 
about debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is in an event of a failover, 
to first try to connect to the master. If that fails to then 
reach out to the standby.


I looked online and found the suggested way to do this, but it's 
not working.


This is the JDBC line we have:

jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db

Here is the error we are getting:

2018-03-07 13:54:36,994 
ERROR:com.zaxxer.hikari.util.PropertyElf - failed to set 
properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.lang.NumberFormatException: for input string: 
"5432,10.16.10.13:5432"


Lets try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In 
other words it is not seeing the ',' as a separator for the two 
IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set 
right.

Thanks in advance for the help.


PgBackrest questions

2018-03-14 Thread chiru r
Hi,

I am testing pgBackRest and I have a few questions.

Please help if anyone has tested these cases.

1. I used the postgres user to perform backups and restores with the
pgBackRest tool.
Trust authentication in the pg_hba.conf file works without issues.

If I use md5 authentication in pg_hba.conf and put the postgres user
password in the .pgpass file, the pgbackrest backup fails.
Where do I need to specify the password for the postgres user in order for
the pgBackRest backups to succeed?

2. Will all restores through pgBackRest refer to the WAL
archive files under the archive//* directory, which are taken from
pgbackrest?

3. What is the default compression level in pgbackrest?

Thanks,
Chiranjeevi


Re: JDBC connectivity issue

2018-03-14 Thread Adrian Klaver

On 03/14/2018 01:47 PM, chris wrote:

Oh sorry for getting the wrong information.


At this point not sure whether it is wrong information or not.



How would I get the information on what driver is currently installed?


I am not a Java programmer, so I am not the best person to answer this. 
Still, I would think the place to start would be the connection code 
itself. Another way would be examining CLASSPATH:


https://jdbc.postgresql.org/documentation/head/classpath.html




On 03/14/2018 09:44 AM, Adrian Klaver wrote:

On 03/14/2018 07:51 AM, chris wrote:


I believe its Postgresql-9.4.1208.jre7.jar


Pretty sure solving this is going to require knowing exactly what 
driver is in use. The failover syntax looks to be fairly recent, so 
being off by a little on the driver version can make a big difference.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PgBackrest questions

2018-03-14 Thread Stephen Frost
Greetings,

* chiru r (chir...@gmail.com) wrote:
> I am testing Pgbackrest and I have few questions.

Great!

> 1. I used postures user to perform backups and restores with Pgbackrest
> tool.
> The Trust authentication in pg_hba.conf file is working without issues.

Please don't use 'trust'.

> If I use md5 authentication in pg_hba_conf file and postgres user password
> .pgpass file, the pgbackrest backup is failing.

There really shouldn't ever be a need to use md5 authentication with
pgbackrest.  Instead, I'd strongly suggest you use 'peer'.  The 'peer'
method is perfectly safe as it depends on the authentication which
Unix provides, but it doesn't require a password or any of the
associated complications.  Note that 'trust' should *not* be used and
I'm glad to see that you're looking for alternatives to using 'trust'.
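As a sketch, the pg_hba.conf line enabling peer authentication for local
Unix-socket connections by the postgres user looks like this (standard
pg_hba.conf field layout):

```
# TYPE  DATABASE  USER      METHOD
local   all       postgres  peer
```

Peer auth checks that the operating-system user of the connecting process
matches the requested database user, so the backup must run as the
postgres OS account.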

> 2. Do all restores through pgbackrest refer to the WAL
> archive files under the archive//* directory, which were taken from
> pgbackrest?

When pgbackrest performs a restore, it will write out the recovery.conf
file for you which includes the restore command to pull the WAL from the
repo and stanza configured.  You shouldn't need to worry about where
those files are, specifically, coming from (and it's even possible that
it might change in the future...).  Is there a specific reason you're
asking?
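For illustration, the restore_command that pgbackrest writes into
recovery.conf takes roughly this form (the stanza name 'demo' is a
placeholder):

```
restore_command = 'pgbackrest --stanza=demo archive-get %f "%p"'
```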

> 3. What is the default compression level in pgbackrest?

pgbackrest will use different compression levels depending on what
you're doing.  If your repo is compressed (the default), then zlib level
6 is used.  If you're storing your repo as uncompressed, then pgbackrest
will use level 3 compression for the network transfer.

Note that, in any case, pgbackrest only performs compression once- at
the source.  If the repo is compressed, then the data is compressed to
level 6 at the source and then streamed directly out to disk in that
compressed form.  If the repo is uncompressed, then the data is
compressed to level 3 at the source and then decompressed before going
into the repo.
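These levels can be tuned in pgbackrest.conf; a minimal sketch showing the
defaults described above (option names assumed from the pgBackRest
configuration reference):

```
[global]
# zlib level for a compressed repo (the default repo mode)
compress-level=6
# zlib level for network transfer when the repo is uncompressed
compress-level-network=3
```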

Thanks!

Stephen




Re: Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Alexander Farber
Thank you -

On Wed, Mar 14, 2018 at 8:41 PM, Ivan E. Panchenko <
i.panche...@postgrespro.ru> wrote:

> Yes, here x is the alias for the record, not for the json field. So you
> need to write the query like
>
> select string_agg(x->>'letter', ' ')
> from (
>select jsonb_array_elements(tiles) x
>from words_moves
>where gid=656 and action='play'
>order by played desc limit 5
> ) y;
>
>
This has worked perfectly:

words=> select string_agg(x->>'letter', ' ')
words-> from (
words(>select jsonb_array_elements(tiles) x
words(>from words_moves
words(>where gid=656 and action='play'
words(>order by played desc limit 5
words(> ) y;
 string_agg
------------
 А Н Т Щ П
(1 row)
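One caveat worth adding: the ordering of an outer string_agg over a
subquery is not formally guaranteed. A sketch against the same words_moves
table that makes the order explicit by carrying the played column out of
the subquery and ordering inside the aggregate:

```sql
-- Order the concatenation explicitly inside the aggregate.
SELECT string_agg(x->>'letter', ' ' ORDER BY played DESC)
FROM (
    SELECT played, jsonb_array_elements(tiles) AS x
    FROM words_moves
    WHERE gid = 656 AND action = 'play'
    ORDER BY played DESC
    LIMIT 5
) y;
```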


Re: wiki Disk Usage, table size: ERROR: could not open relation with OID 0

2018-03-14 Thread Michael Paquier
On Wed, Mar 14, 2018 at 04:17:54PM +0100, Hans Schou wrote:
> I got the message
>   ERROR: could not open relation with OID 0
> when running the "General Table Size Information" from
> https://wiki.postgresql.org/wiki/Disk_Usage

I cannot see this failure on latest HEAD on a database running the
regression database.  This is an elog() message by the way, which is
something to report internal errors, so users should never be able to
face it.

> But I guess it was supposed to give size of all tables.
> 
> I'm running version 9.1.9 so it should be working according to the
> wiki.

You should update and upgrade.  9.1 has fallen out of community support
1 year and a half ago, and 9.1.9 is utterly outdated.
--
Michael




Re: wiki Disk Usage, table size: ERROR: could not open relation with OID 0

2018-03-14 Thread Tom Lane
Michael Paquier  writes:
> On Wed, Mar 14, 2018 at 04:17:54PM +0100, Hans Schou wrote:
>> I got the message
>> ERROR: could not open relation with OID 0
>> when running the "General Table Size Information" from
>> https://wiki.postgresql.org/wiki/Disk_Usage

>> I'm running version 9.1.9 so it should be working according to the
>> wiki.

> You should update and upgrade.  9.1 has fallen out of community support
> 1 year and a half ago, and 9.1.9 is utterly outdated.

The query does fail on < 9.2, because on rows with no reltoastrelid
it will call pg_total_relation_size(0), and we didn't make those
functions forgiving of bogus OIDs until 9.2.

Given that pre-9.2 is well out of support I didn't feel like complicating
the query to handle that; what I did do was change the labeling to say
"works with >= 9.2" instead of "works with >= 9.0".  But hey, it's a wiki;
if you feel more ambitious, edit away.
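For anyone stuck on a pre-9.2 server, a guard along these lines avoids
calling the sizing functions with OID 0 (a sketch, not the wiki query
itself):

```sql
-- Only look up TOAST size when the table actually has a TOAST relation,
-- so pg_total_relation_size() is never called with a bogus OID.
SELECT c.relname,
       pg_total_relation_size(c.oid) AS total_bytes,
       CASE WHEN c.reltoastrelid <> 0
            THEN pg_total_relation_size(c.reltoastrelid)
            ELSE 0
       END AS toast_bytes
FROM pg_class c
WHERE c.relkind = 'r';
```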

regards, tom lane



Re: PgBackrest questions

2018-03-14 Thread chiru r
Hi Stephen,

Thank you very much for your quick reply.


On Wed, Mar 14, 2018 at 6:17 PM, Stephen Frost  wrote:

> Greetings,
>
> * chiru r (chir...@gmail.com) wrote:
> > I am testing Pgbackrest and I have few questions.
>
> Great!
>
> > 1. I used the postgres user to perform backups and restores with
> > Pgbackrest tool.
> > The Trust authentication in pg_hba.conf file is working without issues.
>
> Please don't use 'trust'.
>
> > If I use md5 authentication in pg_hba_conf file and postgres user
> password
> > .pgpass file, the pgbackrest backup is failing.
>
> There really shouldn't ever be a need to use md5 authentication with
> pgbackrest.  Instead, I'd strongly suggest you use 'peer'.  The 'peer'
> method is perfectly safe as it depends on the authentication which
> Unix provides, but it doesn't require a password or any of the
> associated complications.  Note that 'trust' should *not* be used and
> I'm glad to see that you're looking for alternatives to using 'trust'.
>
> Is there any alternative method other than peer and trust to use with the
pgbackrest tool?


> > 2. Do all restores through pgbackrest refer to the WAL
> > archive files under the archive//* directory, which were taken
> > from
> > pgbackrest?
>
> When pgbackrest performs a restore, it will write out the recovery.conf
> file for you which includes the restore command to pull the WAL from the
> repo and stanza configured.  You shouldn't need to worry about where
> those files are, specifically, coming from (and it's even possible that
> it might change in the future...).  Is there a specific reason you're
> asking?
>
> I am concerned about WAL archives because I am planning to transfer my
backups to tape as soon as pgbackrest completes the backup on disk.
The restore of backups is going to be on a different server, so I am
concerned about recovery.

Ex: I always restore my production backups into the DEV environment.

I am also planning to *remove* the full, diff, and incremental backups on
disk as soon as they are transferred to tape. Is there any issue, since I
am removing the backups myself instead of through the pgbackrest tool?

4. I observed the *backup.info* and *backup.info.copy* files under the
stanza directory. I compared both files and did not see any difference.
What is the reason to keep two files with the same contents in the same
directory?

5. The *backup.manifest* and *backup.manifest.copy* files exist under each
backup directory (full/diff/incremental). What is the reason to keep two
files in the same directory?
What is the difference between the backup.manifest file under the
*backup.history* directory and the one under each backup directory?

Thanks,
Chiranjeevi