Re: Reconnecting a slave to a newly-promoted master

2018-07-10 Thread Michael Paquier
On Mon, Jul 09, 2018 at 05:58:53PM -0700, Shawn Mulloney wrote:
> There are three PostgreSQL machines: A, B, and C. B and C are slaves off of
> the master, A. A fails, and B is promoted to being the new master. Can C
> just be pointed at A and have it "just work"?

In your question I am pretty sure that you mean "B" instead of "A", as
"A" has died and is off the grid.  The answer to that question is
perhaps no: C could have replayed WAL ahead of B, in which case you
would end up with a corrupted C instance if you reconnected it directly
to the promoted B.
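
(A sketch of how one might check for that, assuming both standbys are
reachable and run PostgreSQL 10 or later; on 9.x the function is
pg_last_xlog_replay_location():)

    -- run on each standby: compare how far each one has replayed
    SELECT pg_last_wal_replay_lsn();

If C has replayed past the point where B's timeline diverged, C must be
rewound with pg_rewind or rebuilt from a fresh base backup before it can
follow B.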
--
Michael




Re: Create DDL trigger to catch which column was altered

2018-07-10 Thread Łukasz Jarych
Is it not possible?

Jacek

Mon, 9 Jul 2018 at 13:38 Łukasz Jarych wrote:

> Hi Guys,
>
> I am using SQL like the one below to track DDL changes:
>
>> CREATE TABLE track_ddl
>> (
>>   event text,
>>   command text,
>>   ddl_time timestamptz,
>>   usr text
>> );
>>
>> CREATE OR REPLACE FUNCTION track_ddl_function()
>> RETURNS event_trigger
>> AS
>> $$
>> BEGIN
>>   INSERT INTO track_ddl values(tg_tag, tg_event, now(), session_user);
>>   RAISE NOTICE 'DDL logged';
>> END
>> $$ LANGUAGE plpgsql SECURITY DEFINER;
>>
>> CREATE EVENT TRIGGER track_ddl_event ON ddl_command_start
>> WHEN TAG IN ('CREATE TABLE', 'DROP TABLE', 'ALTER TABLE')
>> EXECUTE PROCEDURE track_ddl_function();
>>
>> CREATE TABLE event_check(i int);
>> SELECT * FROM track_ddl;
>
>
> Create and drop table work OK. But when I am altering a table I would like
> to know the new values and old values, like when I am catching DML changes:
>
> CREATE OR REPLACE FUNCTION change_trigger() RETURNS trigger AS $$
> BEGIN
>     IF TG_OP = 'INSERT'
>     THEN
>         INSERT INTO logging.t_history (tabname, schemaname, operation, who, new_val)
>         VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP, current_user, row_to_json(NEW));
>         RETURN NEW;
>     ELSIF TG_OP = 'UPDATE'
>     THEN
>         INSERT INTO logging.t_history (tabname, schemaname, operation, who, new_val, old_val)
>         VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP, current_user, row_to_json(NEW), row_to_json(OLD));
>         RETURN NEW;
>     ELSIF TG_OP = 'DELETE'
>     THEN
>         INSERT INTO logging.t_history (tabname, schemaname, operation, who, old_val)
>         VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP, current_user, row_to_json(OLD));
>         RETURN OLD;
>     END IF;
> END;
> $$ LANGUAGE 'plpgsql' SECURITY DEFINER;
>>
>>
> Is it possible?
> Or can I write a function which will tell me all the new values in the new
> columns?
>
> I was trying to change the SQL like this:
>
>> CREATE TABLE track_ddl
>> (
>>   event text,
>>   command text,
>>   ddl_time timestamptz,
>>   usr json
>> );
>>
>> CREATE OR REPLACE FUNCTION track_ddl_function()
>> RETURNS event_trigger
>> AS
>> $$
>> BEGIN
>>   INSERT INTO track_ddl values(tg_tag, tg_event, now(), row_to_json(NEW));
>>   RAISE NOTICE 'DDL logged';
>> END
>> $$ LANGUAGE plpgsql SECURITY DEFINER;
>
> but this is not working.
>
> Please help,
> Jacek
>
>


Re: Create event trigger

2018-07-10 Thread Łukasz Jarych
Not possible?

Jacek

Mon, 9 Jul 2018 at 13:03 Łukasz Jarych wrote:

> Hi,
>
> I have a small database and I am tracking changes using a trigger:
>
> CREATE TRIGGER t BEFORE INSERT OR UPDATE OR DELETE ON t_trig
> FOR EACH ROW EXECUTE PROCEDURE change_trigger();
>
> Is it possible to create a general trigger for all tables?
> Like an event trigger?
>
> It would be very helpful for me.
> Now I have to set up this trigger on each table.
>
> Best,
> Jacek
>


Re: Create event trigger

2018-07-10 Thread Guillaume Lelarge
2018-07-10 10:56 GMT+02:00 Łukasz Jarych:

> Not possible?
>
Nope, you need to set up the trigger on each table.



-- 
Guillaume.


Re: Create event trigger

2018-07-10 Thread Łukasz Jarych
Thank you very much, Guillaume.

Do you know of any function to do it automatically?

Best,
Jacek

Tue, 10 Jul 2018 at 11:25 Guillaume Lelarge wrote:

> 2018-07-10 10:56 GMT+02:00 Łukasz Jarych:
>
>> Not possible?
>>
> Nope, you need to set up the trigger on each table.
>


Re: Create event trigger

2018-07-10 Thread Guillaume Lelarge
2018-07-10 11:28 GMT+02:00 Łukasz Jarych:

> Thank you very much, Guillaume.
>
> Do you know of any function to do it automatically?
>
>
Nope, but it should be easy to write a shell script or a DO script to do it.
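
For the archives, such a DO script might look roughly like this (a sketch
only, assuming the change_trigger() function from the earlier post and that
every ordinary table in the public schema should get the trigger; trigger
names are per-table, so the same name "t" works everywhere):

    DO $$
    DECLARE
        r record;
    BEGIN
        -- loop over ordinary tables in public; adjust the filter as needed
        FOR r IN
            SELECT c.oid::regclass AS tbl
            FROM pg_class c
            JOIN pg_namespace n ON n.oid = c.relnamespace
            WHERE c.relkind = 'r'
              AND n.nspname = 'public'
        LOOP
            EXECUTE format(
                'CREATE TRIGGER t BEFORE INSERT OR UPDATE OR DELETE ON %s
                 FOR EACH ROW EXECUTE PROCEDURE change_trigger()', r.tbl);
        END LOOP;
    END
    $$;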


-- 
Guillaume.


Re: Create event trigger

2018-07-10 Thread Łukasz Jarych
Maybe yes,

but for me, while I am learning, it is not...

Best,
Jacek

Tue, 10 Jul 2018 at 11:29 Guillaume Lelarge wrote:

> 2018-07-10 11:28 GMT+02:00 Łukasz Jarych:
>
>> Do you know of any function to do it automatically?
>>
> Nope, but it should be easy to write a shell script or a DO script to do
> it.
>
> --
> Guillaume.


Re: Create event trigger

2018-07-10 Thread Thomas Kellerer
Łukasz Jarych wrote on 09.07.2018 at 13:03:
> I have a small database and I am tracking changes using a trigger:
>
> CREATE TRIGGER t BEFORE INSERT OR UPDATE OR DELETE ON t_trig
> FOR EACH ROW EXECUTE PROCEDURE change_trigger();
>
> Is it possible to create a general trigger for all tables?
> Like an event trigger?
>
> It would be very helpful for me.
> Now I have to set up this trigger on each table.

Maybe pgaudit is a better solution? 

https://www.pgaudit.org/
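
(For reference, a minimal pgaudit setup might look like this -- a sketch,
assuming the extension package is already installed; 'ddl' logs all DDL
statements:)

    # postgresql.conf
    shared_preload_libraries = 'pgaudit'
    pgaudit.log = 'ddl'

    -- then, in the database:
    CREATE EXTENSION pgaudit;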




Re: correcting tablespaces inside data folder

2018-07-10 Thread Duarte Carreira
Hi.
On Fri, Jul 6, 2018 at 6:30 PM Duarte Carreira  wrote:

>
>
> On Fri, Jul 6, 2018 at 6:15 PM Magnus Hagander 
> wrote:
>
>> (please don't top-post. It makes it hard to follow discussions)
>>
>> On Fri, Jul 6, 2018 at 7:05 PM, Duarte Carreira 
>> wrote:
>>
>>> Magnus,
>>> You mean changing the symlinks inside pg_tblspc?
>>>
>>>
>> Yes. As long as the server is shut down, you can modify those symlinks.
>>
>
> Ok, I'll try on my laptop and see how it goes.
> Thanks.
>

Update: everything seems to be working fine.
On Windows you can't change the symlinks (called junctions on Windows). You
have to delete them with rd symlink_id, and then recreate them pointing to
the new location with:
mklink /J symlink_id path_to_new_location

You have to be especially careful to recreate the symlinks correctly!
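
For the archives, the full sequence might look like this (a sketch; the
tablespace OID and target path are hypothetical):

    rem run with the server shut down, from inside the data directory
    cd /d C:\pgdata\pg_tblspc
    rem rd removes only the junction, not the data it points to
    rd 16384
    mklink /J 16384 "D:\tablespaces\mydata"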

Thanks again.
Duarte

>>> On Fri, Jul 6, 2018 at 5:49 PM Magnus Hagander 
>>> wrote:
>>>
 On Fri, Jul 6, 2018 at 6:42 PM, Duarte Carreira 
 wrote:

> Hello.
>
> Yes I'm one of those guys who only recently realized the mess of
> having tablespaces inside the data directory... now I want to use
> pg_upgrade and it will make things even worse...
>
> Does anyone have a best approach to this problem? Fastest/safest?
>
> pg 9.3.x on win
>
>
 If you can afford to shut the server down, the easiest is to shut it
 down, move the tablespaces (with mv, and as long as you stay within the
 partition it should be almost instant), update the symlinks to point to the
 new location, and start it up again.

 --
  Magnus Hagander
  Me: https://www.hagander.net/ 
  Work: https://www.redpill-linpro.com/ 



Re: Split daterange into sub periods

2018-07-10 Thread hmidi slim
Based on your example I updated it to get the results that I want:
create table hmidi(
    id serial primary key,
    product_id integer,
    d daterange
);

insert into hmidi(product_id, d) values(15, '[2018-11-01, 2018-11-01]');
insert into hmidi(product_id, d) values(15, '[2018-11-03, 2018-11-04]');

Then I updated your query:
with month as (
    select distinct s::date, hmidi.product_id
    from generate_series('2018-11-01'::date,
                         '2018-11-05'::date, '1day'::interval) s
    cross join hmidi
    order by s::date
),
tmp as (
    select month.s, month.product_id,
           case when (hmidi.d @> month.s)
                then 1
                else null
           end as covered
    from month inner join hmidi on hmidi.product_id = month.product_id
    group by month.product_id, month.s, hmidi.d, hmidi.product_id
),
tmp2 as (
    select *,
           coalesce((sum(case when covered = 1 then 1 else NULL end) over
               (partition by product_id order by s)) + 1, 1) as p
    from tmp
    group by product_id, s, covered
)
select product_id,
       daterange(min(s), max(s)) as range
from tmp2
where covered is null
  and product_id = 15
group by p, product_id;

I got these results:

 15 | [2018-11-01,2018-11-02)
 15 | empty
 15 | [2018-11-04,2018-11-05)

However I should get:

 15 | [2018-11-02,2018-11-02]
 15 | [2018-11-05,2018-11-05]

I haven't mastered the usage of window functions such as OVER and PARTITION
BY. I tried to resolve the problem by myself, but in vain. Could you clarify
for me what is wrong with my query?
Thanks.


2018-07-05 16:39 GMT+01:00 Andreas Kretschmer:

>
>
> On 05.07.2018 15:49, hmidi slim wrote:
>
> Hi,
> I'm looking for splitting a daterange into many subperiods following this
> example:
>
> Base Date: [2018-01-01, 2018-01-31]
> overlapped_periods:
> 1- [ 2018-01-04, 2018-01-06]
> 2- [ 2018-01-09, 2018-01-12]
> 3- [ 2018-01-18, 2018-01-19]
>
> I try to get such a result:
> 1- [ 2018-01-01, 2018-01-03]
> 2- [ 2018-01-07, 2018-01-08]
> 3- [ 2018-01-13, 2018-01-17]
> 4- [ 2018-01-20, 2018-01-31]
>
> The operator '-' does not support this :
>
> SELECT daterange('2018-01-01', '2018-01-31', '[]') - daterange('2018-01-04', 
> '2018-01-06', '[]');
>
>
> I got this error:
>
> ERROR:  result of range difference would not be contiguous
>
> Are there any operators to make the split of a daterange?
>
> andreas@[local]:5432/test# \d hmidi
>        Table "public.hmidi"
>  Column |   Type    | Collation | Nullable | Default
> --------+-----------+-----------+----------+---------
>  id     | integer   |           | not null |
>  d      | daterange |           |          |
> Indexes:
>     "hmidi_pkey" PRIMARY KEY, btree (id)
>
> andreas@[local]:5432/test# insert into hmidi values
> (1,'[2018-01-04,2018-01-06]');INSERT 0 1
> andreas@[local]:5432/test# insert into hmidi values
> (2,'[2018-01-09,2018-01-12]');INSERT 0 1
> andreas@[local]:5432/test# insert into hmidi values
> (3,'[2018-01-18,2018-01-19]');INSERT 0 1
> andreas@[local]:5432/test# with month as (
>     select s::date
>     from generate_series('2018-01-01'::date, '2018-01-31'::date, '1day'::interval) s
> ),
> tmp as (
>     select month.s,
>            case when hmidi.d @> month.s then 1 else NULL end as covered
>     from month left join hmidi on month.s <@ hmidi.d
> ),
> tmp2 as (
>     select *,
>            coalesce((sum(case when covered = 1 then 1 else NULL end) over (order by s)) + 1, 1) as p
>     from tmp
> )
> select p, min(s), max(s) from tmp2 where covered is null group by p order by p;
>  p  |min |max
> ++
>   1 | 2018-01-01 | 2018-01-03
>   4 | 2018-01-07 | 2018-01-08
>   8 | 2018-01-13 | 2018-01-17
>  10 | 2018-01-20 | 2018-01-31
> (4 rows)
>
>
> Regards, Andreas
> --
> 2ndQuadrant Deutschland
>


Re: Reporting bug on pgAdmin 4.3

2018-07-10 Thread Adrian Klaver

On 07/09/2018 09:40 PM, a wrote:

> Hi
>
> I'm doing a normal query on pgAdmin, my server platform is win server
> 2008, my laptop is win10, both using pgsql 10.
>
> The results of the query shows 8488 rows are selected, which is correct.

How did you determine this, in pgAdmin or somewhere else?

> However, when I drag down in data output window, it shows more than
> 10 rows.

The rows all have data?

> Further test shows that each time they are different, both in number and
> in actual data.
>
> The paste and copy results are the same, even without drag down actions.

So where did you paste and copy from, and how?

> ODBC works fine, I haven't tried the client.

By client do you mean psql or something else?

> This problem only occurs on my computer; while trying on the server, it
> works fine.

Do you mean using pgAdmin works on the server?

> Can anyone provide any method to fix this?

Not enough information at this time to arrive at a fix.

> Thanks
>
> Shore



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Create DDL trigger to catch which column was altered

2018-07-10 Thread Adrian Klaver

On 07/10/2018 01:56 AM, Łukasz Jarych wrote:

> Is it not possible?


AFAIK it is not possible, per:

https://www.postgresql.org/message-id/3385.1518828768%40sss.pgh.pa.us
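
The closest you can get (a sketch, not column-level old/new values) is to
record which object each DDL command touched, using
pg_event_trigger_ddl_commands() in a ddl_command_end trigger, reusing the
track_ddl table from the original post:

    CREATE OR REPLACE FUNCTION track_ddl_detail()
    RETURNS event_trigger AS
    $$
    DECLARE
        r record;
    BEGIN
        -- logs the command tag and the identity of each affected object;
        -- event triggers do not expose column-level old/new values
        FOR r IN SELECT * FROM pg_event_trigger_ddl_commands()
        LOOP
            INSERT INTO track_ddl
            VALUES (r.command_tag, r.object_identity, now(), session_user);
        END LOOP;
    END
    $$ LANGUAGE plpgsql;

    CREATE EVENT TRIGGER track_ddl_detail_event ON ddl_command_end
    EXECUTE PROCEDURE track_ddl_detail();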



Jacek




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Create event trigger

2018-07-10 Thread Adrian Klaver

On 07/10/2018 02:30 AM, Łukasz Jarych wrote:

> Maybe yes,
>
> but for me, while I am learning, it is not...


How do you do CREATE TABLE now, ad hoc in the client or via scripts?

If via scripts you could create a template script for the trigger and 
then just fill in the table name as needed.


A function to add the trigger would be more involved and I do not have 
the time at the moment to create an example. Will see if I can come up 
with something later.




Best,
Jacek



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Reporting bug on pgAdmin 4.3

2018-07-10 Thread a
Hi, since I'm not at the test environment, my answers may not be very exact.

1. The data is set to load from the Oracle server at 0 am on each work day.
There are 3 ways to determine the right results:
    a. data from the Oracle server;
    b. there is a statement summary in the query history;
    c. use a select count() statement

2. Yes, they do all have data, but I did not have time to test them since I
was at work. Furthermore, each time there will be differences in the number
of rows.

3. I click the top left cell in the data output window, so it selects all
the data. I paste it into Excel.

4. Yep, client means psql.

5. Yes, I installed pgAdmin 4.3 on the server too. While executing the same
statement on the server using remote desktop (Windows), it shows the correct
results.

6. I may do some more tests when I have time.

7. If there are any more questions, please let me know.


Thanks a lot!


Shore





Disabling/Enabling index before bulk loading

2018-07-10 Thread Ravi Krishna


We recently did a test of COPY and found that on large tables (47 million
rows, 20GB of raw data) the difference between COPY with 16 indexes and COPY
without any index is 1:14. That is, COPY is 14 times slower when data is
ingested with all indexes in place, as opposed to running COPY first without
indexes and then creating all the indexes.

I googled for earlier postings on this and it looks like this has been asked
before too.

This is what I am thinking of doing:

1 - Extract the index definitions and save them as SQL somewhere, either a
file or a table.
2 - Drop all indexes.
3 - Ingest data via COPY.
4 - Recreate all indexes saved in (1).

Is there a generic SQL script or tool to accomplish (1)?

thanks


Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Thomas Kellerer
Ravi Krishna wrote on 10.07.2018 at 16:08:
> 
> We recently did a test on COPY and found that on large tables (47 million 
> rows , 20GB of raw data) the 
> difference in COPY with 16 indexes and COPY without any index is 1:14. That 
> is, COPY is 14 times slower 
> when data is ingested with all indexes as opposed to COPY first without index 
> and then create all index.
> 
> I googled for earlier posting on this and it looks like this has been asked 
> before too.  
> 
> This is what I am thinking to do:
> 
> 1 - Extract index definition and save it as a SQL somewhere, either a file or 
> a table.
> 2 - Drop all indexes.
> 3 - Ingest data via COPY
> 4 - Recreate all indexes saved in (1).
> 
> Is there a generic sql or script or tool to accomplish (1).

You can do this with a script like this:

-- First store the definitions of all the indexes in a table:

create table index_backup 
as
select *
from pg_indexes
where schemaname = 'public'
  and tablename = 'the_table';

-- now drop all the indexes:
do
$$
declare
  l_rec record;
begin
  for l_rec in select schemaname, indexname from index_backup
  loop
    execute format('drop index %I.%I', l_rec.schemaname, l_rec.indexname);
  end loop;
end;
$$;

-- now insert the data

...

-- and restore all indexes

do
$$
declare
  l_rec record;
begin
  for l_rec in select indexdef from index_backup
  loop
     execute l_rec.indexdef;
  end loop;
end;
$$;




Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Adrian Klaver

On 07/10/2018 07:08 AM, Ravi Krishna wrote:


We recently did a test on COPY and found that on large tables (47 million rows 
, 20GB of raw data) the
difference in COPY with 16 indexes and COPY without any index is 1:14. That is, 
COPY is 14 times slower
when data is ingested with all indexes as opposed to COPY first without index 
and then create all index.


Did you include the time to CREATE INDEX after the COPY or is the 1:14 
only for the COPY stage?




I googled for earlier posting on this and it looks like this has been asked 
before too.

This is what I am thinking to do:

1 - Extract index definition and save it as a SQL somewhere, either a file or a 
table.
2 - Drop all indexes.
3 - Ingest data via COPY
4 - Recreate all indexes saved in (1).

Is there a generic sql or script or tool to accomplish (1).

thanks




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Ravi Krishna
> 
> Did you include the time to CREATE INDEX after the COPY or is the 1:14 only 
> for the COPY stage?

Yes.

Time taken to load 47 million rows with all 16 indexes intact: 14+ hrs.
Time taken to load the same after dropping the indexes, loading, and finally
creating the 16 indexes: 1 hr 40 min.

Frankly I am surprised by this staggering difference.





Refresh materialized view deadlocking?

2018-07-10 Thread Xiao Ba
I have a materialized view that gets refreshed concurrently every 10
seconds (because it contains time-sensitive stuff). At times it appears to
get backed up, so there are multiple refreshes queued, and then it gets
stuck waiting for a lock. Does anybody know if this is expected behavior? Am
I supposed to manually ensure that only one refresh is being requested at a
time?

Thanks,
Xiao


Re: Refresh materialized view deadlocking?

2018-07-10 Thread Akshaya Acharya
Hi Xiao

From
https://www.postgresql.org/docs/current/static/sql-refreshmaterializedview.html,
for the CONCURRENTLY option:
"Even with this option only one REFRESH at a time may run against any one
materialized view."

I would guess yes.
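
One common way to guarantee that (a sketch, assuming the refresh is driven
from a psql-based scheduler on psql 10 or newer; the lock key 42 and the
view name are hypothetical):

    -- skip this cycle if a previous refresh still holds the lock
    SELECT pg_try_advisory_lock(42) AS got_lock \gset
    \if :got_lock
        REFRESH MATERIALIZED VIEW CONCURRENTLY my_view;
        SELECT pg_advisory_unlock(42);
    \endif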

Regards, Akshaya

On Tue, 10 Jul 2018 at 23:24, Xiao Ba  wrote:

> I have a materialized view that gets refreshed concurrently every 10
> seconds (b/c it contains time sensitive stuff). At times it appears to get
> backed up so there are multiple refreshes queued and then gets stuck
> waiting for a lock? Does anybody know if this is expected behavior? Am I
> supposed to be manually managing to ensure that only one refresh is being
> requested at a time?
>
> Thanks,
> Xiao
>


max_standby_streaming_delay not enforced

2018-07-10 Thread Patrick Hemmer
I ran into an issue the other day on one of my database clusters where
it appears a replica had a transaction open for almost 24 hours. This
was causing stale data on the replica (as observed by queries against
the db), as well as tuple bloat on the master (due to
`hot_standby_feedback=on` and vacuum not cleaning them up).
The part that is confusing me is that I have
`max_standby_streaming_delay=30s`, and from all the documentation I can
find, this should have resulted in the transaction being killed.

The replica has since been restarted, so I cannot gather any information
about current state. However I have monitoring and logging on many of
the postgres stats, so I might be able to retrieve some information if
requested.
I have also tried reproducing the issue in a lab, but have so far been
unable to.

Are there any scenarios where a replica might still delay replaying the
transaction log past the value configured in `max_standby_streaming_delay`?

For version information, I am running PostgreSQL 9.6.3 on CentOS/7
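
(For anyone watching for a recurrence, a quick lag check on the standby --
a sketch, valid on 9.6; it is only meaningful while the primary has write
activity:)

    -- time since the last transaction replayed on this standby
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;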

-Patrick


Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Vick Khera
On Tue, Jul 10, 2018 at 1:13 PM, Ravi Krishna  wrote:

> >
> > Did you include the time to CREATE INDEX after the COPY or is the 1:14
> only for the COPY stage?
>
> Yes.
>
> Time taken to load 47 mil rows with all 16 indexes intact:  14+ hrs
> Time taken to load the same after dropping index and then loading and
> finally creating 16 indexes: 1 hr 40 min
>


https://fle.github.io/temporarily-disable-all-indexes-of-a-postgresql-table.html


Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Ravi Krishna
> https://fle.github.io/temporarily-disable-all-indexes-of-a-postgresql-table.html

This does not work in RDS.  In order to update system catalog tables
(pg_index), one needs privileges that are denied in RDS. In RDS terminology,
the user must belong to the role rdsadmin, and that role is reserved only
for AWS internal users.

Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Nikolay Samokhvalov
On Tue, Jul 10, 2018 at 12:26 PM Ravi Krishna wrote:

> https://fle.github.io/temporarily-disable-all-indexes-of-a-postgresql-table.html
>
> This does not work in RDS.  In order to update system catalog tables
> (pg_index), one needs privileges that are denied in RDS. In RDS
> terminology, the user must belong to the role rdsadmin, and that role is
> reserved only for AWS internal users.

Right, but pl/pgsql works fine there, so you can back up & drop all indexes
and then restore them using `do $$ ... $$ language plpgsql;`, as Thomas
Kellerer already suggested.

One note here: there is some old confusion between the concepts of unique
indexes and unique constraints in Postgres (see
https://www.postgresql.org/message-id/flat/CANNMO%2BKHkkDg-FBi0_78ADmfLiT9kODmz%2B8m6fR6f5kPL-n_ZQ%40mail.gmail.com#cfb3a9eaed8649d7d24ad7944ccb37cf),
so there is a chance that after such a backup/restore procedure you'll get a
logically correct but physically different state.


Re: Reporting bug on pgAdmin 4.3

2018-07-10 Thread Adrian Klaver

On 07/10/2018 07:01 AM, a wrote:
Hi, since I'm not with the test environment so there may be not very 
exact answers.


1, The data was set to load from oracle server at 0 am of each work day. 
There are 3 ways to determine the right results:

     a. data from oracle server;
     b. there is going to be a statement summary in query history;
     c. use select count() statement

2, Yes they do all have data, but I do not have time to test them since 
I was at work. Furthermore, each time there will be differences in 
number of rows.


3, I click the top left cell in data output window, so it will select 
all the data. Paste them to excel.


4, Yep, client means psql.

5, Yes I installed pgadmin 4.3 on the server too. While executing the 
same statement on the server using remote desktop (windows), it shows 
the correct results.


6, I may do some more tests when I have time.

7, Anymore questions, please let me know


I tried to replicate using pgAdmin4 3.1 and could not. I connected to a 
remote database and in the Query Tool ran a query that fetched 9733 
rows. That is what showed up in the data grid. So:


1) What version of pgAdmin4 3.x are you on? If you are on 3.0 I would 
suggest upgrading to 3.1 to see if that helps.


2) I suspect the reason you are seeing it on the client instance of 
pgAdmin vs the server version is down to the grid not refreshing 
properly while waiting on data to arrive from the server.


3) If the problem persists in 3.1 I would check/file an issue here:

https://redmine.postgresql.org/

You will need a Postgres community account to access.






--
Adrian Klaver
adrian.kla...@aklaver.com



Open Source tool to deploy/promote PostgreSQL DDL

2018-07-10 Thread Hustler DBA
Hi Community,
A client of mine is looking for an open source tool to deploy and promote
PostgreSQL DDL changes through database environments as part of the SDLC.
What tools (open source) do the community members use? I normally use
scripts, but they want something open source.

Thanks,
Neil Barrett


Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-10 Thread Adrian Klaver

On 07/10/2018 03:13 PM, Hustler DBA wrote:

Hi Community,
A client of mine is looking for an open source tool to deploy and 
promote PostgreSQL DDL changes through database environments as part of 
SDLC. What tools (open source) does the community members use? I 


https://sqitch.org/


normally use scripts, but they want something open source.


The above is based on scripts. Will that work for the client or do they 
want a GUI tool?
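
For reference, a typical sqitch workflow looks something like this (a
sketch; the project name and database targets are hypothetical):

    sqitch init myproj --engine pg            # start a new sqitch project
    sqitch add widgets -n 'Add widgets'       # creates deploy/revert/verify scripts
    sqitch deploy db:pg://user@dev-host/app   # apply pending changes
    sqitch status db:pg://user@dev-host/app   # show what is deployed where
    sqitch revert db:pg://user@dev-host/app   # roll back via the revert scripts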




Thanks,
Neil Barrett



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-10 Thread Rich Shepard

On Tue, 10 Jul 2018, Hustler DBA wrote:


A client of mine is looking for an open source tool to deploy and promote
PostgreSQL DDL changes through database environments as part of SDLC. What
tools (open source) does the community members use? I normally use
scripts, but they want something open source.


Neil,

  I'm far from a professional DBA, but scripts are certainly open source
because they're text files.

  To track changes for almost everything I highly recommend Git for version
control. It's distributed and can handle most types of files. I use it for
tracking coding projects as well as reports and other text documents that
are edited and revised prior to release.

Rich



Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Tim Cross


Ravi Krishna  writes:

> We recently did a test on COPY and found that on large tables (47 million 
> rows , 20GB of raw data) the 
> difference in COPY with 16 indexes and COPY without any index is 1:14. That 
> is, COPY is 14 times slower 
> when data is ingested with all indexes as opposed to COPY first without index 
> and then create all index.
>

This is very system dependent. On our system, when we tested a similar
approach, we found that the time saved by dropping the indexes
before the copy was lost when rebuilding the indexes afterwards. In fact, it
ended up being slightly slower.

I suspect a lot depends on the number and types of indexes you
have. Your example had a lot more indexes than ours. We were loading 22
batches with 2.5M records per batch. While copy was significantly faster
than transaction-based inserts (minutes vs hours), the differences between
indexes and no indexes were measured in minutes. We only had 3 or 4
indexes.

> I googled for earlier posting on this and it looks like this has been asked 
> before too.  
>
> This is what I am thinking to do:
>
> 1 - Extract index definition and save it as a SQL somewhere, either a file or 
> a table.
> 2 - Drop all indexes.
> 3 - Ingest data via COPY
> 4 - Recreate all indexes saved in (1).
>
> Is there a generic sql or script or tool to accomplish (1).
>

We are loading data via Javascript using the pg and pg-copy-streams modules.
It is pretty straightforward to drop the indexes and recreate them
afterwards via SQL, so we didn't look for a tool as such.

As data is only inserted into this table and only by this process, we
also turned off autovacuum for this table, performing vacuum and analyze
manually after each load.
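
A sketch of that per-table setting (the table name is hypothetical):

    -- disable autovacuum for just this table while bulk loading
    ALTER TABLE big_table SET (autovacuum_enabled = false);
    -- ... load ...
    VACUUM ANALYZE big_table;
    ALTER TABLE big_table RESET (autovacuum_enabled);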

Tim

-- 
Tim Cross



Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Melvin Davidson
> We recently did a test on COPY and found that on large tables (47 million
> rows, 20GB of raw data) the
> difference in COPY with 16 indexes...

I am very suspicious of why you need 16 indexes. Are you sure all those
indexes are actually being utilized?

Try executing the attached query. You may find some are really not needed.

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!
SELECT n.nspname as schema,
       i.relname as table,
       i.indexrelname as index,
       i.idx_scan,
       i.idx_tup_read,
       i.idx_tup_fetch,
       pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' ||
           quote_ident(i.relname))) AS table_size,
       pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' ||
           quote_ident(i.indexrelname))) AS index_size,
       pg_get_indexdef(idx.indexrelid) as idx_definition
  FROM pg_stat_all_indexes i
  JOIN pg_class c ON (c.oid = i.relid)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_index idx ON (idx.indexrelid = i.indexrelid)
 WHERE i.idx_scan = 0
   AND n.nspname <> 'pg_catalog'
   AND NOT idx.indisprimary
   AND NOT idx.indisunique
 ORDER BY 1, 2, 3;




Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-10 Thread Hustler DBA
Thanks Adrian and Rich,
I will propose sqitch to the client, but I think they want something with a
GUI frontend.

They want to deploy database changes, track which environments the change
was deployed to, be able to roll back a change (with a rollback script),
and track when and if the change was rolled back and in which
environment/database... so pretty much deployment-and-tracking software
with a GUI frontend.

In the past, for doing database deployments to Oracle, I created a tool
using PHP (frontend/backend), MySQL (repository to track deployments and
store deployment logs) and scripted the release scripts for deployment and
rollback, and had my tool manage the scripts up through the environments.
The client is "looking" for something more open source for PostgreSQL. Do we
have anything similar to this?

Neil

On Tue, Jul 10, 2018 at 6:22 PM, Rich Shepard 
wrote:

> On Tue, 10 Jul 2018, Hustler DBA wrote:
>
> A client of mine is looking for an open source tool to deploy and promote
>> PostgreSQL DDL changes through database environments as part of SDLC. What
>> tools (open source) does the community members use? I normally use
>> scripts, but they want something open source.
>>
>
> Neil,
>
>   I'm far from a professional DBA, but scripts are certainly open source
> because they're text files.
>
>   To track changes for almoste everything I highly recommend Git for
> version
> control. It's distributed and can handle most types of files. I use it for
> tracking coding projects and well as report and other text documents that
> are edited and revised prior to release.
>
> Rich
>
>


Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-10 Thread Gavin Flower

On 11/07/18 11:04, Hustler DBA wrote:

Thanks Adrian and Rich,
I will propose sqitch to the client, but I think they want something 
with a GUI frontend.


They want to deploy database changes, track which environments the 
change was deployed to, be able to rollback a change (with a rollback 
script), track when and if the change was rolled back and in which 
environment/database... so pretty much a deployment and tracking GUI 
software with a frontend.


In the past, for doing database deployments to Oracle, I created a 
tool using PHP (frontend/backend), MySQL (repository to track 
deployments and store deployment logs) and scripted the release 
scripts for deployment and rollback, and had my tool manage the 
scripts up the environments. The client is "looking" for something 
more open source for PostgreSQL. Do we have anything similar to this?


Neil

On Tue, Jul 10, 2018 at 6:22 PM, Rich Shepard 
mailto:rshep...@appl-ecosys.com>> wrote:


On Tue, 10 Jul 2018, Hustler DBA wrote:

A client of mine is looking for an open source tool to deploy
and promote
PostgreSQL DDL changes through database environments as part
of SDLC. What
tools (open source) does the community members use? I normally use
scripts, but they want something open source.


Neil,

  I'm far from a professional DBA, but scripts are certainly open
source
because they're text files.

Being text files has nothing to do with being Open Source!  I could
send you a BASH script or the Java source code of a program under a
proprietary licence.

On the other hand, a script could be open source; it all depends
on the licence!




  To track changes for almoste everything I highly recommend Git
for version
control. It's distributed and can handle most types of files. I
use it for
tracking coding projects and well as report and other text
documents that
are edited and revised prior to release.

Rich



Hi Neil,

Please bottom post, as that is the convention in these lists. This 
convention allows people to read the history, before reading the reply.  
Alternatively, you can intersperse your comments if that makes the 
context  easier to follow.  You can also omit large chunks that are no 
longer relevant, replacing them with "[...]".


Note that using scripts makes it easier to automate and to document,
plus it gives you far more control.  With PostgreSQL I use psql, as it
is easier to use than any GUI tool.  I use an editor to create SQL
scripts and execute them from psql.  Note that you can use psql to
execute SQL from within a BASH script.
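
(A minimal sketch of that, with hypothetical host, database, and file
names:)

    #!/bin/bash
    # run a DDL script atomically; stop on the first error
    psql -v ON_ERROR_STOP=1 --single-transaction \
         -h dbhost -d mydb -f deploy/001_create_tables.sql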


Scripts, once working and tested, can be reused and stored in git.  This
is not something you can do with actions in a GUI!



Cheers,
Gavin




Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Ravi Krishna
> I am very suspicious of why you need 16 indexes. Are you sure all those
> indexes are actually being utilized?
> Try executing the attached query. You may find some are really not needed.

This is a DATAMART application, and the indexes are there to satisfy a large
number of possible queries from Tableau. Keep in mind this table has 200+
cols.
That said, it is possible that a few of them may be unnecessary.  I will run
your script and check that out.

thanks for the script.



Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-10 Thread Rob Sargent




On 07/10/2018 05:04 PM, Hustler DBA wrote:

Thanks Adrian and Rich,
I will propose sqitch to the client, but I think they want something 
with a GUI frontend.


They want to deploy database changes, track which environments the 
change was deployed to, be able to rollback a change (with a rollback 
script), track when and if the change was rolled back and in which 
environment/database... so pretty much a deployment and tracking GUI 
software with a frontend.


In the past, for doing database deployments to Oracle, I created a 
tool using PHP (frontend/backend), MySQL (repository to track 
deployments and store deployment logs) and scripted the release 
scripts for deployment and rollback, and had my tool manage the 
scripts up the environments. The client is "looking" for something 
more open source for PostgreSQL. Do we have anything similar to this?


Neil

Open a GitHub, GitLab, or SourceForge repo, switch to Postgres, and make
yours open source.




Re: Create event trigger

2018-07-10 Thread Adrian Klaver

On 07/10/2018 02:30 AM, Łukasz Jarych wrote:

> Maybe yes,
>
> but for me, while I am learning, it is not...


Here is the example function. It is a minimal example, but it should serve
as a starting point:


CREATE OR REPLACE FUNCTION public.add_trigger(tbl_name character varying)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
BEGIN
    EXECUTE 'CREATE TRIGGER ' || quote_ident(tbl_name || '_change') ||
            ' AFTER UPDATE ON ' || quote_ident(tbl_name) ||
            ' EXECUTE PROCEDURE ts_update()';
END;
$function$;

create table trg_test(id int, fld_1 varchar);

test=> \d trg_test
                 Table "public.trg_test"
 Column |       Type        | Collation | Nullable | Default
--------+-------------------+-----------+----------+---------
 id     | integer           |           |          |
 fld_1  | character varying |           |          |

test=> select add_trigger('trg_test');

test=> \d trg_test
                 Table "public.trg_test"
 Column |       Type        | Collation | Nullable | Default
--------+-------------------+-----------+----------+---------
 id     | integer           |           |          |
 fld_1  | character varying |           |          |
Triggers:
    trg_test_change AFTER UPDATE ON trg_test FOR EACH STATEMENT EXECUTE PROCEDURE ts_update()






Best,
Jacek


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Reporting bug on pgAdmin 4.3

2018-07-10 Thread a
Hey, I fixed the problem by using the web pgAdmin, along with an upgrade to
4.3.1.

I had installed pgAdmin 4.3 under .\PostgreSQL\10\pgadmin, so it started up
in a window instead of a web viewer.

I deleted it, re-installed 4.3.1, and started it in a web browser; the
problem is fixed.


Thanks for your kind reply and help


Shore

