Concatenation of values in hierarchical data

2018-04-03 Thread Mr. Baseball 34
I have the data below, returned from a PostgreSQL table using this SQL:

SELECT ila.treelevel,
       ila.app,
       ila.lrflag,
       ila.ic,
       ila.price,
       ila.treelevel - 1 AS parent,
       ila.seq
FROM indexlistapp ila
WHERE ila.indexlistid IN (SELECT il.indexlistid
                          FROM indexlist il
                          WHERE il.model = '$model'
                            AND '$year' BETWEEN il.beginyear AND il.endyear
                            AND il.type = '$part')
ORDER BY ila.seq;

Data Returned

level  app              lrflag  ic             price     parent  seq  indexlistid
----------------------------------------------------------------------------------
1      'Front'          null    null           null      0       27   439755
2      'V-Series'       null    null           null      1       28   439755
3      'opt J56'        null    null           null      2       29   439755
4      'R.'             'R'     '536-01132AR'  '693.00'  3       30   439755
4      'L.'             'L'     '536-01133AL'  '693.00'  3       31   439755
3      'opt J63'        null    null           null      2       32   439755
4      'R.'             'R'     '536-01130R'   null      3       33   439755
4      'L.'             'L'     '536-01131L'   null      3       34   439755
2      'exc. V-Series'  null    null           null      1       35   439755
3      'opt JE5'        null    null           null      2       36   439755
4      'AWD'            null    null           null      3       37   439755
5      'R.'             null    '536-01142'    null      4       38   439755
5      'L.'             null    '536-01143'    null      4       39   439755
4      'RWD'            null    null           null      3       40   439755
5      'R.'             null    '536-01143'    null      4       41   439755
5      'L.'             null    '536-01142'    null      4       42   439755
3      'opt J55'        null    null           null      2       43   439755
4      'AWD'            null    null           null      3       44   439755
5      'R.'             null    '536-01036'    null      4       45   439755
5      'L.'             null    '536-01037'    null      4       46   439755
4      'RWD'            null    null           null      3       47   439755
5      'R.'             null    '536-01037'    null      4       48   439755
5      'L.'             null    '536-01036'    null      4       49   439755
1      'Rear'           null    null           null      0       260  439765
2      'Base'           null    null           null      1       261  439765
3      'opt JE5'        null    null           null      2       262  439765
4      'R.'             'R'     '536-01038R'   null      3       263  439765
4      'L.'             'L'     '536-01039L'   null      3       264  439765
3      'opt J55'        null    null           null      2       265  439765
4      'R.'             'R'     '536-01042R'   null      3       266  439765
4      'L.'             'L'     '536-01043L'   null      3       267  439765
2      'V-Series'       null    null           null      1       268  439765
3      'R.'             'R'     '536-01134AR'  '403.00'  2       269  439765
3      'L.'             'L'     '536-01135AL'  '466.00'  2       270  439765

matching data from indexlist

model  type   beginyear  endyear  indexlistid
---------------------------------------------
'CTS'  '536'  2009       2010     439755
'CTS'  '536'  2009       2010     439765

There are primary keys on indexlist (on indexlistid) and indexlistapp (on
indexlistid), but there is no foreign key pointing to the other table; the
indexlistid in indexlist points directly to the indexlistid in indexlistapp.
The parent column is simply calculated from the treelevel (treelevel - 1), and
the tree is built entirely from the seq and treelevel.

I need the data to be returned in this format:

app                                    price    ic
----------------------------------------------------------------
'Front-V-Series-opt J56-R.',           '$693',  '536-01132AR'
'Front-V-Series-opt J56-L.',           '$693',  '536-01132AL'
'Front-V-Series-opt J63-R.',           null,    '536-01130R'
'Front-V-Series-opt J63-L.',           null,    '536-01131L'
'Front-exc. V-Series-opt JE5-AWD-R.',  null,    '536-01142'
'Front-exc. V-Series-opt JE5-AWD-L.',  null,    '536-01143'
'Front-exc. V-Series-opt JE5-RWD-R.',  null,    '536-01143'
'Front-exc. V-Series-opt JE5-RWD-L.',  null,    '536-01142'
'Front-exc. V-Series-opt J55-AWD-R.',  null,    '536-01136'
'Front-exc. V-Series-opt J55-AWD-L.',  null,    '536-01137'
'Front-exc. V-Series-opt J55-RWD-R.',  null,    '536-01137'
'Front-exc. V-Series-opt J55-RWD-L.',  null,
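
One possible way to produce that output (a sketch, not part of the original
question) is a recursive CTE over the rows returned by the query above,
assuming they are available as a relation named idx (for example a temporary
table, or a CTE wrapping the SELECT), and assuming a node's parent is the
nearest preceding row (by seq) whose treelevel is one less:

WITH RECURSIVE tree AS (
    -- roots: treelevel 1 starts each path
    SELECT i.seq, i.treelevel, i.app::text AS path, i.ic, i.price
    FROM idx i
    WHERE i.treelevel = 1
  UNION ALL
    -- attach each row to the nearest preceding row one level up
    SELECT c.seq, c.treelevel, t.path || '-' || c.app, c.ic, c.price
    FROM idx c
    JOIN tree t
      ON c.treelevel = t.treelevel + 1
     AND c.seq > t.seq
     AND NOT EXISTS (SELECT 1
                     FROM idx x
                     WHERE x.treelevel <= t.treelevel
                       AND x.seq > t.seq
                       AND x.seq < c.seq)
)
SELECT path AS app, price, ic
FROM tree
WHERE ic IS NOT NULL      -- keep only the leaf rows that carry a part number
ORDER BY seq;

Formatting the price as '$693' would be a separate presentation step
(e.g. with to_char).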

Re: Autovacuum Problem

2018-04-03 Thread Kein Name
> VACUUM <> VACUUM FULL
> Normally running VACUUM via autovacuum should help reuse free space but
not actually return it to the filesystem / OS (unless it happens to be the
last blocks in the data file(s)).
> Ppl in normal/average type of installations/workloads no longer (since
8.2) run VACUUM (or VACUUM FULL) by hand, except in extra ordinary cases
where e.g. need for free space is urgent.

Hmm... so when is the space released to the system if a manual VACUUM FULL
is not run? Is there any way to "force" this?

2018-04-03 8:49 GMT+02:00 Achilleas Mantzios :

> On 03/04/2018 09:36, Kein Name wrote:
>
>> However no space seems to be freed to the system.
>>
>> Is there any way a bloody newbie can debug this behaviour?
>>
>
> VACUUM <> VACUUM FULL
> Normally running VACUUM via autovacuum should help reuse free space but
> not actually return it to the filesystem / OS (unless it happens to be the
> last blocks in the data file(s)).
> Ppl in normal/average type of installations/workloads no longer (since
> 8.2) run VACUUM (or VACUUM FULL) by hand, except in extra ordinary cases
> where e.g. need for free space is urgent.
>
>
> Any help is appreciated!
>>
>> Thanks
>> Stefan
>>
>>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>


Re: Autovacuum Problem

2018-04-03 Thread Achilleas Mantzios

On 03/04/2018 10:00, Kein Name wrote:

> VACUUM <> VACUUM FULL
> Normally running VACUUM via autovacuum should help reuse free space but not 
actually return it to the filesystem / OS (unless it happens to be the last blocks 
in the data file(s)).
> Ppl in normal/average type of installations/workloads no longer (since 8.2) 
run VACUUM (or VACUUM FULL) by hand, except in extra ordinary cases where e.g. 
need for free space is urgent.

H then when is the space released to the system when the manual VACUUM FULL run is 
not done? Any way to "force" this?

Why would you want that? Do you have any control over the application? Any 
"special" patterns used in the app?


2018-04-03 8:49 GMT+02:00 Achilleas Mantzios <ach...@matrix.gatewaynet.com>:

On 03/04/2018 09:36, Kein Name wrote:

However no space seems to be freed to the system.

Is there any way a bloody newbie can debug this behaviour?


VACUUM <> VACUUM FULL
Normally running VACUUM via autovacuum should help reuse free space but not 
actually return it to the filesystem / OS (unless it happens to be the last 
blocks in the data file(s)).
Ppl in normal/average type of installations/workloads no longer (since 8.2) 
run VACUUM (or VACUUM FULL) by hand, except in extra ordinary cases where e.g. 
need for free space is urgent.


Any help is appreciated!

Thanks
Stefan


-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Autovacuum Problem

2018-04-03 Thread Kein Name
> Why would you want that? Do you have any control over the application?
Any "special" patterns used in the app?

Drive is running full :/
Sadly I have no control over, or knowledge of, the application.

I have now tuned the autovacuum parameters for the critical tables so that
autovacuum runs more often and uses more resources while doing so, but I am
unsure whether this helps with the bloating.
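
For reference, per-table autovacuum settings can be set like this (a sketch
only; the table name and values are placeholders, not the ones used here):

ALTER TABLE my_big_table SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- vacuum after ~1% of the rows are dead
    autovacuum_vacuum_cost_limit   = 2000   -- let autovacuum do more work per cycle
);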

2018-04-03 9:39 GMT+02:00 Achilleas Mantzios :

> On 03/04/2018 10:00, Kein Name wrote:
>
> > VACUUM <> VACUUM FULL
> > Normally running VACUUM via autovacuum should help reuse free space but
> not actually return it to the filesystem / OS (unless it happens to be the
> last blocks in the data file(s)).
> > Ppl in normal/average type of installations/workloads no longer (since
> 8.2) run VACUUM (or VACUUM FULL) by hand, except in extra ordinary cases
> where e.g. need for free space is urgent.
>
> H then when is the space released to the system when the manual VACUUM
> FULL run is not done? Any way to "force" this?
>
> Why would you want that? Do you have any control over the application? Any
> "special" patterns used in the app?
>
>
> 2018-04-03 8:49 GMT+02:00 Achilleas Mantzios <ach...@matrix.gatewaynet.com>:
>
>> On 03/04/2018 09:36, Kein Name wrote:
>>
>>> However no space seems to be freed to the system.
>>>
>>> Is there any way a bloody newbie can debug this behaviour?
>>>
>>
>> VACUUM <> VACUUM FULL
>> Normally running VACUUM via autovacuum should help reuse free space but
>> not actually return it to the filesystem / OS (unless it happens to be the
>> last blocks in the data file(s)).
>> Ppl in normal/average type of installations/workloads no longer (since
>> 8.2) run VACUUM (or VACUUM FULL) by hand, except in extra ordinary cases
>> where e.g. need for free space is urgent.
>>
>>
>> Any help is appreciated!
>>>
>>> Thanks
>>> Stefan
>>>
>>>
>> --
>> Achilleas Mantzios
>> IT DEV Lead
>> IT DEPT
>> Dynacom Tankers Mgmt
>>
>>
>>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>


Re: Autovacuum Problem

2018-04-03 Thread Achilleas Mantzios

On 03/04/2018 10:54, Kein Name wrote:

> Why would you want that? Do you have any control over the application? Any 
"special" patterns used in the app?

Drive is running full :/
Sadly I have no control and knowledge whatsoever over/about the application.

I tuned the autovacuum parameters now for the critical tables, to have it run 
more often and using greater ressources while doing so but I am unsure if 
this helps with the bloating.


You could try setting log_statement=all for a (short) period of typical 
"heavy-ish" write activity to see what the app is doing.
IMHO you must prepare for larger storage. After that, monitor disk usage 
periodically to see whether it stabilizes, or at least whether the rate of 
increase drops.
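
One way to do that temporarily (a sketch, not from the original mail):

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- ... watch the server log during a period of heavy write activity, then revert:
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();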



2018-04-03 9:39 GMT+02:00 Achilleas Mantzios <ach...@matrix.gatewaynet.com>:

On 03/04/2018 10:00, Kein Name wrote:

> VACUUM <> VACUUM FULL
> Normally running VACUUM via autovacuum should help reuse free space but 
not actually return it to the filesystem / OS (unless it happens to be the last 
blocks in the data file(s)).
> Ppl in normal/average type of installations/workloads no longer (since 
8.2) run VACUUM (or VACUUM FULL) by hand, except in extra ordinary cases where 
e.g. need for free space is urgent.

H then when is the space released to the system when the manual VACUUM FULL run 
is not done? Any way to "force" this?

Why would you want that? Do you have any control over the application? Any 
"special" patterns used in the app?


2018-04-03 8:49 GMT+02:00 Achilleas Mantzios <ach...@matrix.gatewaynet.com>:

On 03/04/2018 09:36, Kein Name wrote:

However no space seems to be freed to the system.

Is there any way a bloody newbie can debug this behaviour?


VACUUM <> VACUUM FULL
Normally running VACUUM via autovacuum should help reuse free space but 
not actually return it to the filesystem / OS (unless it happens to be the last 
blocks in the data file(s)).
Ppl in normal/average type of installations/workloads no longer (since 
8.2) run VACUUM (or VACUUM FULL) by hand, except in extra ordinary cases where 
e.g. need for free space is urgent.


Any help is appreciated!

Thanks
Stefan


-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Seems like there is an issue with reltuples showing twice the number of rows

2018-04-03 Thread Ranjith Ramachandra
I am relying on reltuples in my web app to get fast row counts.

This was recommended by this article to get fast approx row counts:
https://wiki.postgresql.org/wiki/Count_estimate


However, for some tables I am getting twice the actual row count when I do
this. I did some more research and came up with this query.

select reltuples, n_live_tup, n_dead_tup
  from pg_stat_user_tables join pg_class using (relname)
 where relname =
'main_csv_0f247511f5c247c8813ba3cec90c6ae1_yf34fbb38d';


it returns


 reltuples  | n_live_tup | n_dead_tup
-------------+------------+------------
2.7209e+06 |1360448 |1360448


If I run analyze main_csv_0f247511f5c247c8813ba3cec90c6ae1_yf34fbb38d

and I run the same query again,

  reltuples  | n_live_tup | n_dead_tup
-------------+------------+------------
 1.36045e+06 |1360448 |1360448

But after some time the value goes back to being double the actual count. This
is causing me a lot of problems, since this inaccuracy does not make any sense
to me.

Any help would be appreciated.

FYI, I also asked the same question on Stack Overflow, since I am new to the
Postgres mailing lists.

https://stackoverflow.com/questions/49625259/postgres-reltuples-seems-to-return-twice-the-number-of-values



-- 
~Ranjith


Re: dblink: could not send query: another command is already in progress

2018-04-03 Thread Laurenz Albe
Thiemo Kellner wrote:
> On 03/30/18 11:14, Laurenz Albe wrote:
> > You have to consume the result before you can send the next query.
> 
> I changed implementation but still get the same error but now different 
> context. I tried to retrieve the result but I failed
> 
> I committed the last code to its project repository at SourceForge 
> https://sourceforge.net/p/pglogger/code/HEAD/tree/
> 
> and put the code to pastebin.com
> Test calls: https://pastebin.com/xfUp9NAL
> function WRITE_MESSAGE: https://pastebin.com/LDjE0Czx
> function WRITE_MESSAGE_TO_TABLE: https://pastebin.com/vkBkehZF
> 
> [...]
> psql:testing/test.pg_sql:42: ERROR:  function 
> WRITE_MESSAGE_TO_TABLE(i_function => text, i_message => text, i_level => 
> text, i_present_user => name, i_session_user => name, 
> i_transaction_timestamp => timestamp with time zone, i_transaction_id => 
> bigint, i_server_pid => bigint, i_remote_address => inet, i_remote_port 
> => integer) does not exist
> HINT:  No function matches the given name and argument types. You might 
> need to add explicit type casts.
> CONTEXT:  Error occurred on dblink connection named 
> "PGLOGGER_CONNECTION": could not execute query.
> SQL statement "SELECT * from 
> dblink_get_result(V_DBLINK_CONNECTION_NAME) as t(ID bigint)"
> PL/pgSQL function write_message(text,text) line 126 at PERFORM
> psql:testing/test.pg_sql:42: STATEMENT:  select WRITE_MESSAGE('Test 
> WRITE_MESSAGE 2nd call', 'NOTICE');

I couldn't dig through all your code, but two things seemed suspicious:

> if (
> V_DBLINK_CONNECTION_NAMES is null
>  or V_DBLINK_CONNECTION_NAME != any(V_DBLINK_CONNECTION_NAMES)

I don't think you want "!=ANY" there.  Shouldn't that be "<>ALL" ?
"<>ANY" will *always* be true if there are at least two different entries.


The other thing is that you seem to call "dblink_get_result" on any existing
connection before use.  But you can only call the function if there is a
result outstanding.

One of your error messages above seems to suggest that this is your problem.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



How to install pgTAP on CentOS machine

2018-04-03 Thread Raghavendra Rao J S V
Hi,

How do I install pgTAP on a CentOS machine? I tried to install it but no luck.
Please guide me to proceed further.

-- 
Regards,
Raghavendra Rao J S V


Re: Autovacuum Problem

2018-04-03 Thread Vitaliy Garnashevich


However no space seems to be freed to the system.

Is there any way a bloody newbie can debug this behaviour?


In our experience, autovacuum is able to contain bloating of table data, 
but not bloating of indexes.


You could see where the bloating is by running the following queries:

CREATE EXTENSION IF NOT EXISTS pgstattuple;

select c.relname, (pgstattuple(c.relname)).*
from pg_class c
join pg_namespace n on (n.oid = c.relnamespace and n.nspname = 'public')
where c.relkind = 'r'
order by c.reltuples desc;

select c.relname, (pgstatindex(c.relname)).*
from pg_class c
inner join pg_namespace n on (n.oid = c.relnamespace and n.nspname = 'public')
where c.relkind = 'i'
order by c.reltuples desc;


The first SELECT query gives some information about table data, the 
second SELECT query is about indexes. The queries will take quite some 
time to run, because they will have to scan through every data page on 
disk. Look at the following columns:


pgstattuple.free_percent - the percentage of the table's allocated disk 
space that does not currently contain any live data (describes table data 
bloat).


pgstatindex.avg_leaf_density - the percentage of leaf-page space that 
actually holds index data (describes index bloat). A freshly built index is 
around 90% (the default fillfactor). It's not good when it drops too much 
(e.g. to 50% or 10%).


Look at the biggest tables/indexes first. In our case, out of 1000+ tables 
there are only a few which contribute most of the DB size bloat. In general, 
though, that will depend on your specific case and on the write patterns of 
your application. You can then run VACUUM FULL for those specific tables, or 
REINDEX TABLE/INDEX for specific tables/indexes. VACUUM FULL or REINDEX TABLE 
may be faster than running REINDEX INDEX for several indexes of the same 
table. I do not know of any efficient way to prevent bloating of indexes.
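
For example (the table and index names below are placeholders):

VACUUM FULL some_bloated_table;         -- rewrites the table; takes an ACCESS EXCLUSIVE lock
REINDEX TABLE some_bloated_table;       -- rebuilds all indexes of that table
REINDEX INDEX some_bloated_table_pkey;  -- or rebuild a single index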


Regards,
Vitaliy





Re: Seems like there is an issue with reltuples showing twice the number of rows

2018-04-03 Thread Tomas Vondra
On 04/03/2018 11:14 AM, Ranjith Ramachandra wrote:
> I am relying on reltuples on my web app to get fast row counts.
> 
> This was recommended by this article to get fast approx row
> counts: https://wiki.postgresql.org/wiki/Count_estimate
> 
> 
> However for some table I am getting twice as many values when I try to
> do this. I did some more research and came up with this query.
> 
> select reltuples, n_live_tup, n_dead_tup
>               from pg_stat_user_tables join pg_class using (relname)
>              where relname =
> 'main_csv_0f247511f5c247c8813ba3cec90c6ae1_yf34fbb38d';
> 
> 
> it returns
> 
> 
>  reltuples  | n_live_tup | n_dead_tup
> -++
> 2.7209e+06 |    1360448 |    1360448
> 
> 
> If I run analyze main_csv_0f247511f5c247c8813ba3cec90c6ae1_yf34fbb38d
> 
> and I run the same query again,
> 
>   reltuples  | n_live_tup | n_dead_tup
> -++
>  1.36045e+06 |    1360448 |    1360448
> 
> But after some time the value goes back to being double the value. This
> is causing me a lot of problems since this inaccuracy does not make any
> sense to me.
> 

There was a difference between VACUUM and ANALYZE in handling recently
dead rows (essentially deleted rows that can't be removed yet), causing
similar changes to reltuples. Essentially if you do VACUUM and ANALYZE,
it may set reltuples to rather different estimates. That is fixed now
and should be in the next minor release.

Hard to say if this is a case of that, but judging by the number of dead
tuples chances are it is.

It's probably better to use n_live_tup instead, though. I'd say that's
closer to the "live tuples" definition.
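
For example (a sketch using the table name from the original post):

SELECT n_live_tup
FROM pg_stat_user_tables
WHERE relname = 'main_csv_0f247511f5c247c8813ba3cec90c6ae1_yf34fbb38d';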

regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: How to install pgTAP on CentOS machine

2018-04-03 Thread Adrian Klaver

On 04/03/2018 02:34 AM, Raghavendra Rao J S V wrote:

Hi,

How do I install pgTAP on a CentOS machine? I tried to install it but no luck.


What did you do to install and where did it fail?


Please guide me to proceed further.

--
Regards,
Raghavendra Rao J S V




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Seems like there is an issue with reltuples showing twice the number of rows

2018-04-03 Thread Tom Lane
Tomas Vondra  writes:
> On 04/03/2018 11:14 AM, Ranjith Ramachandra wrote:
>> it returns
>>  reltuples  | n_live_tup | n_dead_tup
>> -------------+------------+------------
>> 2.7209e+06 |    1360448 |    1360448
>> 
>> If I run analyze main_csv_0f247511f5c247c8813ba3cec90c6ae1_yf34fbb38d
>> and I run the same query again,
>>   reltuples  | n_live_tup | n_dead_tup
>> -------------+------------+------------
>>  1.36045e+06 |    1360448 |    1360448
>> 
>> But after some time the value goes back to being double the value.

> There was a difference between VACUUM and ANALYZE in handling recently
> dead rows (essentially deleted rows that can't be removed yet), causing
> similar changes to reltuples. Essentially if you do VACUUM and ANALYZE,
> it may set reltuples to rather different estimates. That is fixed now
> and should be in the next minor release.

No, I think this is the *other* thing we fixed recently: VACUUM thinks
it should set reltuples to total tuples (live + dead) whereas ANALYZE
counts only live tuples.  We did not risk back-patching that.

The question I'd ask about this case is why is there persistently 100%
bloat?  Those dead tuples should've gotten reclaimed by autovacuum.
Perhaps an open prepared transaction, or some such?

> It's probably better to use n_live_tup instead, though. I'd say that's
> closer to the "live tuples" definition.

Yeah, you might be better off looking at that, particularly since it
updates on-the-fly not just after a vacuum or analyze.

regards, tom lane



Re: How to get an inclusive interval when using daterange

2018-04-03 Thread hmidi slim
I tried it and I got the same result.


Re: How to get an inclusive interval when using daterange

2018-04-03 Thread Adrian Klaver

On 04/03/2018 07:35 AM, hmidi slim wrote:

I tried it and I got the same result.


Tried what?

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Seems like there is an issue with reltuples showing twice the number of rows

2018-04-03 Thread Ranjith Ramachandra
List,

OP here. Thank you for replying. This confirms my suspicion that it might have
to do with ANALYZE/VACUUM.

Some debug info.

1. Loaded a CSV to fill the table with data.
2. Performed VACUUM ANALYZE on this table after loading.
3. I do not see any reason for dead rows, because I have not updated data in
this table. But I may not understand dead rows correctly.
4. I can reproduce this problem on multiple machines with Postgres 9.6.8
installed.


I cannot wait for the next minor update, since my prod would have to get
updated. My current strategy is to use n_live_tup; on my local machine it
seems to work fine.

Thinking about it, I could even build another mechanism for keeping track of
row counts: a background process that periodically does a hard count with
count(*).

But I am happy to provide any other debug info if needed. I will reply within
24 hours at most.

This is what I had found earlier before I contacted the list. Relevant?

1. https://www.postgresql.org/message-id/20180312231417.484d64c0%40engels
2.
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=81b9b5ce490a645bde8df203ec4a3b2903d88f31
3.
https://www.postgresql.org/message-id/151956654251.6915.675951950408204404.p...@coridan.postgresql.org



On Tue 3 Apr, 2018, 19:49 Tom Lane,  wrote:

> Tomas Vondra  writes:
> > On 04/03/2018 11:14 AM, Ranjith Ramachandra wrote:
> >> it returns
> >>  reltuples  | n_live_tup | n_dead_tup
> >> -------------+------------+------------
> >> 2.7209e+06 |1360448 |1360448
> >>
> >> If I run analyze main_csv_0f247511f5c247c8813ba3cec90c6ae1_yf34fbb38d
> >> and I run the same query again,
> >>   reltuples  | n_live_tup | n_dead_tup
> >> -------------+------------+------------
> >>  1.36045e+06 |1360448 |1360448
> >>
> >> But after some time the value goes back to being double the value.
>
> > There was a difference between VACUUM and ANALYZE in handling recently
> > dead rows (essentially deleted rows that can't be removed yet), causing
> > similar changes to reltuples. Essentially if you do VACUUM and ANALYZE,
> > it may set reltuples to rather different estimates. That is fixed now
> > and should be in the next minor release.
>
> No, I think this is the *other* thing we fixed recently: VACUUM thinks
> it should set reltuples to total tuples (live + dead) whereas ANALYZE
> counts only live tuples.  We did not risk back-patching that.
>
> The question I'd ask about this case is why is there persistently 100%
> bloat?  Those dead tuples should've gotten reclaimed by autovacuum.
> Perhaps an open prepared transaction, or some such?
>
> > It's probably better to use n_live_tup instead, though. I'd say that's
> > closer to the "live tuples" definition.
>
> Yeah, you might be better off looking at that, particularly since it
> updates on-the-fly not just after a vacuum or analyze.
>
> regards, tom lane
>


Re: is pg_advisory_lock() suitable for long runs

2018-04-03 Thread Merlin Moncure
On Sat, Mar 31, 2018 at 1:49 PM, Radoslav Nedyalkov
 wrote:
> Hi all,
> it's very simple and intuitive case but let me describe first.
> 1. session 1 calls pg_advisory_lock(1234) and succeeds.
> 2. session 2 calls pg_advisory_lock(1234) and stops on waiting.
> All fine BUT pid for session2 appears already with backend_xmin in
> pg_stat_activity
> which means vacuum won't be able to remove rows younger than session2
> backend_xmin.
>
> Well, we planned to use pg_advisory_lock() as a boot phase in a hot-standby
> appserver
> and apparently this will be problematic as the session2 might wait for
> weeks.
>
> Any thoughts ? Do we miss something ?

Holding a transaction open for weeks is generally not a good idea, at
all.  Advisory locks were invented very specifically to allow
application locks to be held without involving long running
transactions.  Holding a session open for weeks might be ok, but any
blocked lockers ought to time out and try another course of action.
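
For example, a non-blocking variant (a sketch, not from the original setup):

SELECT pg_try_advisory_lock(1234);  -- returns immediately: true if acquired, false otherwise
-- the application can then retry later or take another course of action,
-- instead of a backend sitting in a lock wait for weeks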

merlin



Re: How to get an inclusive interval when using daterange

2018-04-03 Thread hmidi slim
Hi,
I tried insert into availability values ('product x',
'[2018-02-02,2018-03-01]'::daterange); and I got the same result as with
insert into availability values ('product x', daterange('2018-02-02',
'2018-03-01', '[]'));


Re: How to get an inclusive interval when using daterange

2018-04-03 Thread Paul Jungwirth

On 04/03/2018 09:40 AM, hmidi slim wrote:
I tried insert into availability values ('product x', 
'[2018-02-02,2018-03-01]'::daterange); and I got the same result such 
as insert into availability values ('product x', daterange('2018-02-02', 
'2018-03-01', '[]').


Yes, those are equivalent ways of constructing the same daterange.
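
A quick check (not in the original mail) shows they are literally the same value:

SELECT '[2018-02-02,2018-03-01]'::daterange
     = daterange('2018-02-02', '2018-03-01', '[]');  -- true
SELECT daterange('2018-02-02', '2018-03-01', '[]');  -- displayed as [2018-02-02,2018-03-02)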

If you really want a closed/closed daterange, you'll need to create your 
own type. I don't really recommend that, but you can do it. (Using 
close/open is the standard because it's so convenient for 
combining/comparing ranges.)


It's easy to create a type without a canonical function, e.g.:

CREATE FUNCTION date_minus(date1 date, date2 date)
RETURNS float AS $$
SELECT cast(date1 - date2 as float);
$$ LANGUAGE sql immutable;

CREATE TYPE daterange2 AS range
(subtype = date, subtype_diff = date_minus);

Then you can say:

SELECT daterange2('2018-01-01', '2018-03-01', '[]');

This is not great though, because without a canonical function Postgres 
doesn't know that [x,y] is equal to [x,y+1). If you prefer to have a 
canonical function, you'll need to write one in C. (It's easy but you 
won't be able to install it on a managed service like AWS RDS.) It might 
help to read these and the code they link to (The second one is by me.):


https://stackoverflow.com/questions/29895077/how-to-create-a-new-date-range-type-with-included-upper-bound-in-postgres

https://illuminatedcomputing.com/posts/2016/06/inet-range/

--
Paul  ~{:-)
p...@illuminatedcomputing.com



single sql, multiple schemas, one result set

2018-04-03 Thread PegoraroF10
Suppose a DB with dozens of schemas, all with the same structure.
DB
  Schema1
Table1
Table2
  Schema2
Table1
Table2
  Schema3
Table1
Table2
Then we want to execute a SQL statement on specific schemas, and the result of
it could be a UNION ALL. So, what would a function look like that runs that SQL
on each schema and returns just one result set?

Then I would use something like ...
select * from SQLFromMultipleShema(Array['Schema1','Schema3'],'select count(*)
from Table2');




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: single sql, multiple schemas, one result set

2018-04-03 Thread Guyren Howe
Make a view that joins all the things, with a column providing the name of the 
schema that they came from.
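
A minimal sketch of that idea, reusing the schema/table names from the example
(not tested against the original schema):

CREATE VIEW all_table2 AS
    SELECT 'Schema1'::text AS schema_name, * FROM Schema1.Table2
    UNION ALL
    SELECT 'Schema2'::text, * FROM Schema2.Table2
    UNION ALL
    SELECT 'Schema3'::text, * FROM Schema3.Table2;

-- then, for example:
SELECT schema_name, count(*)
FROM all_table2
WHERE schema_name IN ('Schema1', 'Schema3')
GROUP BY schema_name;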

> On Apr 3, 2018, at 10:47 , PegoraroF10  wrote:
> 
> Suppose a DB with dozens of schemas with same structure.
> DB
>  Schema1
>Table1
>Table2
>  Schema2
>Table1
>Table2
>  Schema3
>Table1
>Table2
> Then we want to execute a SQL on specific schemas and the result of it could
> be a UNION ALL. So, how could be a function that runs that SQL on each
> schema and results just on result set ?
> 
> Then I would use something like ...
> select * from SQLFromMultipleShema(Array['Schema1,Schema3'],'select count(*)
> from Table2');
> 
> 
> 
> 
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
> 



Re: single sql, multiple schemas, one result set

2018-04-03 Thread Rob Sargent



On 04/03/2018 11:47 AM, PegoraroF10 wrote:

Suppose a DB with dozens of schemas with same structure.
DB
   Schema1
 Table1
 Table2
   Schema2
 Table1
 Table2
   Schema3
 Table1
 Table2
Then we want to execute a SQL on specific schemas and the result of it could
be a UNION ALL. So, how could be a function that runs that SQL on each
schema and results just on result set ?

Then I would use something like ...
select * from SQLFromMultipleShema(Array['Schema1,Schema3'],'select count(*)
from Table2');




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html


Should work if you add a tag to every table referenced in the sql sent in:

select * from SQLFromMultipleShema(Array['Schema1','Schema3'],'select count(*)
from SCHEMA_NAME.Table2')

Then, in a loop over the schema name array, replace SCHEMA_NAME as you build up 
a union of your injected sql.

Or perhaps loop over the schema name array, resetting search_path as you build 
up a temp table from the result of the sql passed in as an argument.

Pretty sure you're going to end up needing "dynamic sql".
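
For instance, a minimal sketch of the dynamic-SQL route (not from the thread;
it assumes the caller's statement contains the SCHEMA_NAME placeholder
described above and returns a single bigint column):

CREATE OR REPLACE FUNCTION sql_from_multiple_schemas(p_schemas text[], p_sql text)
RETURNS TABLE(schema_name text, result bigint)
LANGUAGE plpgsql AS $$
DECLARE
    v_schema text;
BEGIN
    FOREACH v_schema IN ARRAY p_schemas LOOP
        -- run the caller's statement once per schema as a scalar subquery
        RETURN QUERY EXECUTE
            format('SELECT %L::text, (%s)::bigint',
                   v_schema,
                   replace(p_sql, 'SCHEMA_NAME', quote_ident(v_schema)));
    END LOOP;
END;
$$;

-- example call:
-- SELECT * FROM sql_from_multiple_schemas(ARRAY['schema1','schema3'],
--     'select count(*) from SCHEMA_NAME.table2');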







LDAP Bind Password

2018-04-03 Thread Kumar, Virendra
Hi There,

Is anybody aware of how to encrypt the bind password for LDAP authentication in 
pg_hba.conf? Anonymous bind is disabled in our organization, so we have to use a 
bind ID and password, but keeping them as plaintext in pg_hba.conf defeats the 
security purpose. We want to either encrypt the password or authenticate without 
binding. Any insight into this is appreciated.

Regards,
Virendra



This message is intended only for the use of the addressee and may contain
information that is PRIVILEGED AND CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified that any
dissemination of this communication is strictly prohibited. If you have
received this communication in error, please erase all copies of the message
and its attachments and notify the sender immediately. Thank you.


Re: dblink: could not send query: another command is already in progress

2018-04-03 Thread Thiemo Kellner



On 04/03/18 11:28, Laurenz Albe wrote:

[...]
psql:testing/test.pg_sql:42: ERROR:  function
WRITE_MESSAGE_TO_TABLE(i_function => text, i_message => text, i_level =>
text, i_present_user => name, i_session_user => name,
i_transaction_timestamp => timestamp with time zone, i_transaction_id =>
bigint, i_server_pid => bigint, i_remote_address => inet, i_remote_port
=> integer) does not exist
HINT:  No function matches the given name and argument types. You might
need to add explicit type casts.
CONTEXT:  Error occurred on dblink connection named
"PGLOGGER_CONNECTION": could not execute query.
SQL statement "SELECT * from
dblink_get_result(V_DBLINK_CONNECTION_NAME) as t(ID bigint)"
PL/pgSQL function write_message(text,text) line 126 at PERFORM
psql:testing/test.pg_sql:42: STATEMENT:  select WRITE_MESSAGE('Test
WRITE_MESSAGE 2nd call', 'NOTICE');


I couldn't dig through all your code, but two things seemed suspicious:


Thanks for looking into it all the same.


if (
 V_DBLINK_CONNECTION_NAMES is null
  or V_DBLINK_CONNECTION_NAME != any(V_DBLINK_CONNECTION_NAMES)


I don't think you want "!=ANY" there.  Shouldn't that be "<>ALL" ?
"<>ANY" will *always* be true if there are at least two different entries.


In my simple test it worked as expected in that respect. But I did use it the 
wrong way, as I thought that ! would invert the result of the ANY operation as 
a whole. I changed it to != ALL but the error persists.



The other thing is that you seem to call "dblink_get_result" on any existing
connection before use.  But you can only call the function if there is a
result outstanding.


I call dblink_get_result only if I do not open a dblink connection, i.e. only 
on the second and following function calls. I added more NOTICE output to the 
code, showing that dblink_send_query has been called once before the first 
call of dblink_get_result. I changed my query to reflect the bigint return 
value of the called function write_message_to_table. The error persists.


I put a new pastebin https://pastebin.com/7R45R7qK and committed the 
changes.


--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
SIP/iptel.org: thiemo.kellner
Öffentlicher PGP-Schlüssel:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC

Re: dblink: could not send query: another command is already in progress

2018-04-03 Thread Laurenz Albe
Thiemo Kellner wrote:
> > The other thing is that you seem to call "dblink_get_result" on any existing
> > connection before use.  But you can only call the function if there is a
> > result outstanding.
> 
> I call dblink_get_result only if I do not open a dblink connection, i. 
> e. only on second and following function calls. I put more notice output 
> into the code showing that dblink_send_query has been called once before 
> the first call of dblink_get_result. I changed my query to reflect 
> return bigint value of the called function write_message_to_table. Error 
> persists.

That's not going to work.
You can only call dblink_get_result if there is an outstanding result.
You cannot call it on an idle connection, that will cause an error.

You should write your code so that whenever dblink_send_query has been
called, you also call dblink_get_result, regardless of whether the query has
been cancelled or not, so that you always leave the connection in
the "ready for query" state.
Then you don't have to clean up.
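
For illustration, the pairing could look like this inside a PL/pgSQL block
(a sketch only; the query text and the column definition list are
placeholders, not your actual code):

PERFORM dblink_send_query('PGLOGGER_CONNECTION', 'SELECT 42');
-- ... later, before the connection is used again, consume the outstanding result:
PERFORM * FROM dblink_get_result('PGLOGGER_CONNECTION') AS t(id bigint);
-- one more call, which returns an empty set, is needed before the
-- connection is back in the "ready for query" state:
PERFORM * FROM dblink_get_result('PGLOGGER_CONNECTION') AS t(id bigint);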

Of course you could also ignore the error you get on dblink_get_result,
but that is ugly.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com