Replication pull instead of push

2018-03-07 Thread PegoraroF10
We have several servers, our masters, installed in several cities. We need a
centralized database (our standby) where we can run our reports.
The usual way of doing Postgres replication is creating a
publication/subscription process.
But then the master server must be reachable, and sometimes it isn't, just
because its IP changed or because we need to configure its router to reach the
Postgres server properly.

Finally, my question: is there a way to pull the data that needs to be
replicated from the master into the slave?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Replication pull instead of push

2018-03-07 Thread PegoraroF10
My question was: how to pull data from the Master into the Slave.
Postgres has a Publication/Subscription model in which the Slave connects to
the Master and gets the data.
I want something where the Master server collects the data that needs to be
replicated and sends that data to the Slave.

Is that possible?
If there is no extension or statement that does that, how can I do it
myself?
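
One do-it-yourself direction, sketched below with postgres_fdw, so that the master initiates the connection and "pushes" rows to the central server. All server names, credentials, and table/column names here are hypothetical placeholders; this is not a built-in push-replication feature:

```sql
-- On each master: write into the central server through a foreign table,
-- so the connection is opened from the master's side.
create extension if not exists postgres_fdw;

create server central foreign data wrapper postgres_fdw
  options (host 'central.example.com', dbname 'reports');

create user mapping for current_user server central
  options (user 'replicator', password 'secret');

create foreign table central_orders (id integer, total numeric)
  server central
  options (schema_name 'public', table_name 'orders');

-- ship the rows; postgres_fdw supports INSERT on foreign tables
insert into central_orders
select id, total from orders;
```

A trigger or a scheduled job on the master could then run the final INSERT whenever new rows need to be shipped.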






Replication push instead of pull

2018-03-07 Thread PegoraroF10
We have several servers, our masters, installed in several cities. We need a
centralized database where we can run our reports.

So, the Postgres publication/subscription model always requires opening the
Master server so the Slave can get its data.

Is there a way for the Master to push data to the Slave instead of the Slave
pulling data from the Master?








single sql, multiple schemas, one result set

2018-04-03 Thread PegoraroF10
Suppose a DB with dozens of schemas with the same structure.
DB
  Schema1
Table1
Table2
  Schema2
Table1
Table2
  Schema3
Table1
Table2
Then we want to execute a SQL statement on specific schemas, and the result
could be a UNION ALL of the per-schema results. So, what would a function look
like that runs that SQL on each schema and returns just one result set?

Then I would use something like:
select * from SQLFromMultipleSchema(Array['Schema1','Schema3'],
'select count(*) from Table2');
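
A minimal sketch of such a function, assuming the hypothetical name SQLFromMultipleSchema from the example and that the passed query returns a single bigint column (like a count); a fully generic version would need RETURNS SETOF RECORD plus a column definition list at call time:

```sql
create or replace function SQLFromMultipleSchema(schemas text[], sql text)
returns table(schema_name text, result bigint)
language plpgsql as $$
declare
  s text;
begin
  foreach s in array schemas loop
    -- make unqualified table names resolve inside schema s;
    -- "set local" lasts until the end of the current transaction
    execute format('set local search_path to %I', s);
    return query execute format('select %L::text, (%s)', s, sql);
  end loop;
end $$;
```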







pg_basebackup or dump for starting replication process

2018-04-05 Thread PegoraroF10
For replication purposes only, is there any difference between pg_basebackup
and a dump to copy data from the Master to the Slave?
The docs say that pg_basebackup can be used both for point-in-time recovery
and as the starting point for a log-shipping or streaming-replication standby
server.

We are using just a dump; are there any caveats to using it for logical
replication?






Problem with trigger makes Detail record be invalid

2018-04-19 Thread PegoraroF10
I'm using Postgres 10 on Ubuntu.

Suppose a simple Master/Detail structure like this:

create table Master(ID integer primary key, Name text);
create table Detail(ID integer primary key, Master_ID integer, OtherInfo text);
alter table Detail add constraint FKMasterDetail foreign key (Master_ID)
references Master(ID) on update cascade on delete cascade;

Then insert some records:
insert into Master(ID, Name) values(1,'First'), (2,'Second');
insert into Detail(ID, Master_ID, OtherInfo)
values(1,1,'Detail Information'), (2,2,'Detail Information2');

Then, if I delete from Master, it deletes from Detail too. Fine.
delete from Master where ID=1;

But now, suppose I have a complex trigger before update or delete that runs
on the Detail table:
create function DoAComplexJobOnDetail() returns trigger as $$
begin
  -- Do lots of things, then
  Return new; -- This is the error, because I'm returning new even for delete
end;$$ language plpgsql;
create trigger DetailDoAComplexJob before update or delete on Detail
for each row execute procedure DoAComplexJobOnDetail();

Then try to delete the other Master record. It is deleted from Master, but the
Detail record is not, and it obviously becomes invalid because of the foreign
key.
delete from Master where ID=2;
select * from Master; -- shows no records.
select * from Detail; -- shows one record pointing to Master_ID=2, which
doesn't exist anymore.

Is this a bug, or is it my responsibility to check that trigger result?
If that trigger responds incorrectly, I think no data change should be
executed at all.
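
For reference, a corrected version of the trigger function above, branching on TG_OP so a DELETE returns OLD (during a DELETE, NEW is NULL, and returning NULL from a BEFORE trigger silently cancels the row operation):

```sql
create or replace function DoAComplexJobOnDetail() returns trigger as $$
begin
  -- Do lots of things, then:
  if TG_OP = 'DELETE' then
    return old;  -- NEW is NULL on DELETE; returning it cancels the delete
  end if;
  return new;
end; $$ language plpgsql;
```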






Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread PegoraroF10
I know my trigger is incorrect. I know that I can use TG_OP to know which
operation is being performed.
My question is:
> Is this a bug, or is it my responsibility to check that trigger result?

I think it's a bug, because if something went wrong on the detail deletion and
it was rolled back, how could the parent record be deleted?






Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread PegoraroF10
My point of view is that there was a partial rollback, just on the Detail
table. If I've done a delete from Master and I have a foreign key to it with
the cascade option, either all records should be deleted or none should; that
is my point.

Did you see that the Master table has no records and the Detail table has one
record?
I think you'll agree that we have a detail record with no master, so it's
unusable, right?

I'm not able to do a backup/restore of this database because that record
doesn't match.






Re: Problem with trigger makes Detail record be invalid

2018-04-19 Thread PegoraroF10
Correct, that delete did a partial commit, and that is absolutely
unacceptable.
I know I wrote that trigger incorrectly, but referential integrity is
mandatory.
Imagine if I have a database crash and need to restore as soon as possible:
how much time would I spend removing those records from a backup to get the
entire database restored properly?

Well, I'll review all my triggers. And I have hundreds of them.






Re: Problem with trigger makes Detail record be invalid

2018-04-20 Thread PegoraroF10
Well, talking about responsibilities, I think one of the responsibilities of a
mature database is that it only accepts the data it was configured for. If you
try to store a timestamp in an integer field, or a huge numeric value in a
smallint field, Postgres will block you because that operation is not
acceptable.
So, it's not acceptable to break referential integrity either, is it?






Re: execute block like Firebird does

2018-05-30 Thread PegoraroF10
Some time ago I posted this thread because we had lots of these execute
blocks to be translated to Postgres. Now, continuing on the same matter, I
would like to discuss the same topic: basically, calling the server one time
only instead of several times.
Usually we want to get some values from the server and then insert or update
some records based on those returned values. Each of these calls costs time,
and this is the point I would like to discuss.

How can one send a script to the server and get one or more values back from
that execution?

You'll probably answer that I could solve this with a function. But suppose
those executions are dynamic, depending on business rules or other
constraints.

So, is it possible to change how a DO block is run, so that it can return one
or more values?
It would be like:
DO returns(ID Integer, Description Text) as
$$
begin
  select ... 
  insert ...
  select ... into ID, Description
end
$$

This way it would be possible to build that script on the client, call the
server just one time, and get a result from that execution, exactly the way an
"execute block" does on Firebird.

Is that possible, or is there another way to call the server just one time and
get values back without creating a function for each call?

What do you think about changing how the DO structure is run so it can return
results? For version 12, what do you think?
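
One workaround that already works today for simple cases is chaining the statements with data-modifying CTEs, so the insert and the final select travel in a single round trip (table and column names below are hypothetical):

```sql
-- one round trip: insert, then return values derived from the insert
with ins as (
  insert into orders(customer_id, total)
  values (42, 99.90)
  returning id
)
select ins.id, c.name as description
from ins
join customers c on c.id = 42;
```

This covers only what can be expressed as a single SQL statement, not arbitrary procedural logic.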






Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread PegoraroF10
Well, I think it won't be as easy as you said. Those tables have dependencies.
If I try to alter the type, it gives me ERROR: cannot alter type of a column
used in a trigger definition. I dropped all the triggers of that table and
then it gives me ERROR: cannot alter type of a column used by a view or rule.
So, if I need to drop everything to change this, I think it's better to just
create an empty structure and copy the entire database's data into it.

And why did we use domains? Because we came from Firebird, and in that
database an integer and a domain based on an integer are the same, so we
converted to Postgres that way.

Thanks







Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread PegoraroF10
Adrian, on Firebird, if you create a table or procedure with fields or params
declared with domains, they can be used with those domains or with their base
type. On Postgres, I32 is not equal to integer.

create procedure myproc(id i32) returns(x i32) as ... 

select * from myproc(cast(5 as integer)) -- works on Firebird. On Postgres it
doesn't, because that procedure or function expects an I32 and not an
integer.







Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread PegoraroF10
Sorry, the example I was thinking of was this one, which works on Firebird
(using its syntax, obviously):

create function myproc(id integer) returns I32 language sql as 'select $1';

On Postgres: ERROR: return type mismatch in function declared to return i32

What I mean is that Firebird treats I32 and integer as the same; Postgres
doesn't.
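
A sketch of how to make that function compile on Postgres: cast the expression to the domain type inside the body, since a SQL-language function's result must match the declared return type exactly:

```sql
create domain i32 as integer;

-- the explicit ::i32 cast satisfies the declared return type
create function myproc(id integer) returns i32
language sql as 'select $1::i32';

select myproc(5);
```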







Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread PegoraroF10
- Because we don't need to grant users rights on sequences;
- Nobody will be able to change the values of PK fields, because we would like
to have GENERATED ALWAYS on those PK fields.







Too slow to create new schema and their tables, functions, triggers.

2019-07-23 Thread PegoraroF10
We have 190 identical schemas in a single database. Now, when we create a new
one, with exactly the same structure as the previous ones, it takes 20 or 30
minutes to finish. The usual time for that script to finish was 30 seconds.

Basically, my script creates an entire structure for a new customer:
- Create schema TempSchema; -- just to be sure that nobody will connect until
it finishes
- create tables (100), constraints, functions, etc.
- import data using a data-only restore.
- rename TempSchema to the production name.

In the second step, when creating tables, functions and so on, it sometimes
takes a minute just to create one table or one function.

So, does this happen because we have almost 200 schemas in that database? I'm
almost sure it does.
What do I need to do to make my script run as before? Do I need to reindex?
Vacuum? Or am I reaching a limit on the number of schemas in a Postgres
database?







Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-24 Thread PegoraroF10
I had not vacuumed or reindexed my database for the last 30 days, and that was
my problem.
It works fine if I do a REINDEX DATABASE before adding the new schema.
Well, I'll try reindexing just the system catalogs before adding a new schema
to see if that is enough.
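
If only the catalogs are bloated, a REINDEX limited to the system catalogs may be enough (the database name below is a placeholder; it must be the database you are connected to):

```sql
-- rebuilds only the indexes on system catalogs
reindex (verbose) system mydb;
```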









Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-25 Thread PegoraroF10
So, what should I tune in autovacuum?
My script was running strangely. The Postgres log shows the time spent
creating functions; the same happens when creating triggers and tables.
Sometimes it's very fast and sometimes...

statement: create or replace function valoresdfe...    0 mins 1.135 secs
statement: create or replace function dadosorigem...   0 mins 0.055 secs
statement: CREATE OR REPLACE FUNCTION SONU...          0 mins 0.013 secs
statement: create or replace function contatoscampa... 2 mins 13.492 secs
statement: create or replace function FORMATARTELEF... 0 mins 0.013 secs
statement: create or replace function ChecaVar         0 mins 0.012 secs
statement: CREATE or replace FUNCTION criatrigge...    1 mins 16.42 secs

So, is this a vacuum problem? What do I need to configure?
And again, if I do a REINDEX DATABASE before creating the schema, it works
perfectly.







Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-25 Thread PegoraroF10
Nope, there is not one message near those statements.
I haven't changed anything in postgresql.conf related to autovacuum.







Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-26 Thread PegoraroF10
Running that SQL:

name                                 setting
autovacuum                           on
autovacuum_analyze_scale_factor      0.1
autovacuum_analyze_threshold         50
autovacuum_freeze_max_age            2
autovacuum_max_workers               3
autovacuum_multixact_freeze_max_age  4
autovacuum_naptime                   60
autovacuum_vacuum_cost_delay         20
autovacuum_vacuum_cost_limit         -1
autovacuum_vacuum_scale_factor       0.2
autovacuum_vacuum_threshold          50
autovacuum_work_mem                  -1








Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-26 Thread PegoraroF10
Nope, that schema and its entire structure did not exist.







Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-26 Thread PegoraroF10
select count(*), count(*) filter (where last_autovacuum is not null)
from pg_stat_all_tables;

count   count
36605   1178

But which tables should I check to see whether vacuum ran on them?
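
One way to spot the catalogs autovacuum has been skipping (a sketch):

```sql
-- system catalogs with the most dead tuples and their last vacuum times
select relname, n_dead_tup, last_vacuum, last_autovacuum
from pg_stat_sys_tables
where schemaname = 'pg_catalog'
order by n_dead_tup desc
limit 10;
```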







Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-26 Thread PegoraroF10
pg_stat_all_tables
count   count
37158   807

pg_stat_sys_tables
count   count
16609   223

pg_stat_user_tables
count   count
20549   584








Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-26 Thread PegoraroF10
Correct, it seems to be something wrong with the system tables. Maybe our
script is re-running something that is already there; we will check.

select * from pg_stat_sys_tables where schemaname = 'pg_catalog';

relname       n_live_tup  n_dead_tup
pg_attrdef    3699        1095
pg_index      4756        1183
pg_sequence   20827       1482
pg_statistic  171699      27101
pg_trigger    221319      20718
pg_shdepend   225017      22337
pg_attribute  883023      164153
pg_depend     1553586     142960

and for all of them last_vacuum is null and last_autovacuum is too old.

So, is it better to configure autovacuum properly for these tables, or should
I run vacuum periodically?
Obviously I'll check our script too.
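
If autovacuum cannot keep up, one option is to vacuum the hottest catalogs manually from a scheduled job, for example:

```sql
-- manual maintenance of the catalogs churned by schema creation
vacuum (verbose, analyze) pg_catalog.pg_class;
vacuum (verbose, analyze) pg_catalog.pg_attribute;
vacuum (verbose, analyze) pg_catalog.pg_depend;
vacuum (verbose, analyze) pg_catalog.pg_shdepend;
```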







why toasted is created when domain is used ?

2019-08-05 Thread PegoraroF10
select count(*) Total_Tables,
       count(*) filter (where t2.oid is not null) Toasted_Tables,
       count(*) filter (where t2.reltuples > 0) Toasted_with_records
from pg_class t1
left join pg_class t2 on t1.reltoastrelid = t2.oid and t2.relkind = 't'
where t1.relkind = 'r';

total_tables  toasted_tables  toasted_with_records
11564         9521            263

Looking at the structure of my tables, only a few of them really need
toasting, so it's OK to have a few toast tables with records. The question is:
why were those toast tables created?
Then, doing a simple test using one of those tables...

create table with_toast(
id_pk i32, Field2 i32 not null, Field3 i16 not null, Field4 numeric(7, 2),
Field5 i32 default public.DefaultValueForField(),
Field6 datahora default CURRENT_TIMESTAMP);

create table without_toast(
id_pk integer, Field2 integer not null, Field3 smallint not null,
Field4 numeric(7, 2),
Field5 integer default public.DefaultValueForField(),
Field6 timestamp default CURRENT_TIMESTAMP);

As you can see, these tables don't need a toast table and will probably never
have records in them.

select t1.oid, t1.relname, t1.relkind, t1.oid, t2.oid, t2.relname
from pg_class t1
left join pg_class t2 on t1.reltoastrelid = t2.oid and t2.relkind = 't'
inner join pg_namespace ns on ns.oid = t1.relnamespace
where t1.relkind = 'r' and t1.relname ~* 'toast';

oid      relname        relkind  oid      oid      relname
9375155  with_toast     r        9375155  9375160  pg_toast_9375155
9375167  without_toast  r        9375167  NULL     NULL

So, the question is: when I create a table using domains, a toast table is
created for it. Why?







Re: why toasted is created when domain is used ?

2019-08-05 Thread PegoraroF10
Well, I used only domains whose base types have fixed sizes.

create domain i32 as integer;
create domain i16 as smallint;
create domain datahora as timestamp;
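
To double-check that the domains really resolve to fixed-length base types, something like:

```sql
-- list the domains and the base types they resolve to
select typname, format_type(typbasetype, typtypmod) as base_type
from pg_type
where typtype = 'd'  -- 'd' = domain
  and typname in ('i32', 'i16', 'datahora');
```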







slow queries on system tables

2019-08-15 Thread PegoraroF10
The driver I'm using runs some queries on the system tables, and obviously I
cannot change them because they are built into the driver. These two queries
are usually fast, but sometimes they take 8 or 15 times longer to run. The
question is: can I change something on these tables? Can I create an index?
Can I load them into cache? What can I do to speed them up if I cannot change
the way these queries are written?

Extracted from pg_stat_statements:
Query 1 - calls: 20 times, min_time ~ 20ms, mean_time ~ 50ms, max_time ~ 480ms
SELECT current_database() AS TABLE_CATALOG, n.nspname AS TABLE_SCHEMA,
       t.relname AS TABLE_NAME, t.oid AS TABLE_OID,
       current_database() AS INDEX_CATALOG, n.nspname AS INDEX_SCHEMA,
       c.relname AS INDEX_NAME, c.oid AS INDEX_OID,
       a.attname AS COLUMN_NAME, a.attnum AS COLUMN_POSITION,
       indoption[index] & $1 AS DESCENDING, indisunique::integer AS UNIQUE
FROM (SELECT generate_series($2, indnatts - $3), indrelid, indexrelid, indkey,
             indoption, indisunique, indisprimary
      FROM pg_index i) i(index, indrelid, indexrelid, indkey, indoption,
                         indisunique, indisprimary)
INNER JOIN pg_class c ON c.oid = indexrelid
INNER JOIN pg_class t ON t.oid = indrelid
INNER JOIN pg_namespace n ON n.oid = t.relnamespace
INNER JOIN pg_attribute a ON a.attrelid = indrelid AND a.attnum = indkey[index]
WHERE n.nspname LIKE $4 AND t.relname LIKE $5
ORDER BY indisprimary::integer DESC, n.nspname, t.relname, c.relname, a.attnum

Query 2 - calls: 2 times, min_time ~ 70ms, mean_time ~ 95ms, max_time ~ 170ms.
SELECT pg_attribute.attname
FROM pg_index, pg_class, pg_attribute
WHERE upper(pg_class.relname) = $1
  AND indrelid = pg_class.oid
  AND pg_attribute.attrelid = pg_class.oid
  AND pg_attribute.attnum = any(pg_index.indkey)
  AND indisprimary








Re: slow queries on system tables

2019-08-15 Thread PegoraroF10
The last values for autovacuum and autoanalyze are null.

Thanks, I'll change that and see if it solves our problem.







Re: slow queries on system tables

2019-08-16 Thread PegoraroF10
Well, not exactly.

The values for autovacuum and autoanalyze are null because I have not
configured them for the system tables yet, but I'm doing vacuum manually once
a week. My question now is why those selects vary that much.

Almost every time they take 20ms, but 2 or 3% of the time they take 500ms.
Why?
Are the system tables not cached? What is Postgres doing during that time to
produce those response times?
Can I help Postgres in any way?
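
If the slow runs are cold-cache effects, the pg_prewarm extension (assuming it is available on the server) can pull the hot catalogs into shared buffers, for example after a restart:

```sql
create extension if not exists pg_prewarm;

-- load the catalogs the driver queries into shared buffers
select pg_prewarm('pg_catalog.pg_attribute');
select pg_prewarm('pg_catalog.pg_class');
select pg_prewarm('pg_catalog.pg_index');
```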







Re: slow queries on system tables

2019-08-16 Thread PegoraroF10
As I told you before, these queries cannot be changed because the driver
creates them.
As I cannot change them, how can I help Postgres run them faster? Just that.

My log_min_duration_statement = 500ms, and I find dozens of this SQL in the
log daily, but I'm sure this SQL runs thousands of times a day.

duration: 566.415 ms  execute PRSTMTST1590201604/PORTALST1590201604:
SELECT current_database() AS TABLE_CATALOG, n.nspname AS TABLE_SCHEMA,
       t.relname AS TABLE_NAME, t.oid AS TABLE_OID,
       current_database() AS INDEX_CATALOG, n.nspname AS INDEX_SCHEMA,
       c.relname AS INDEX_NAME, c.oid AS INDEX_OID,
       a.attname AS COLUMN_NAME, a.attnum AS COLUMN_POSITION,
       indoption[index] & 1 AS DESCENDING, indisunique::integer AS UNIQUE
FROM (SELECT generate_series(0, indnatts - 1), indrelid, indexrelid, indkey,
             indoption, indisunique, indisprimary
      FROM pg_index i) i(index, indrelid, indexrelid, indkey, indoption,
                         indisunique, indisprimary)
INNER JOIN pg_class c ON c.oid = indexrelid
INNER JOIN pg_class t ON t.oid = indrelid
INNER JOIN pg_namespace n ON n.oid = t.relnamespace
INNER JOIN pg_attribute a ON a.attrelid = indrelid AND a.attnum = indkey[index]
WHERE n.nspname LIKE 'f10_0275, public' AND t.relname LIKE 'sys_var'
ORDER BY indisprimary::integer DESC, n.nspname, t.relname, c.relname, a.attnum

QUERY PLAN
Sort  (cost=260362.69..260362.69 rows=1 width=470) (actual
time=59.526..59.527 rows=0 loops=1)
  Sort Key: ((i.indisprimary)::integer) DESC, n.nspname, t.relname,
c.relname, a.attnum
  Sort Method: quicksort  Memory: 25kB
  ->  Nested Loop  (cost=1003.58..260362.68 rows=1 width=470) (actual
time=59.519..59.519 rows=0 loops=1)
->  Nested Loop  (cost=1003.44..260360.98 rows=1 width=238) (actual
time=5.251..58.872 rows=424 loops=1)
  ->  Gather  (cost=1003.02..260360.42 rows=1 width=174) (actual
time=5.232..72.894 rows=424 loops=1)
Workers Planned: 1
Workers Launched: 1
->  Nested Loop  (cost=3.02..259360.32 rows=1 width=174)
(actual time=8.357..48.200 rows=212 loops=2)
  Join Filter: (t.oid = a.attrelid)
  ->  Hash Join  (cost=2.59..259357.51 rows=4
width=138) (actual time=8.322..46.124 rows=212 loops=2)
Hash Cond: (i.indrelid = t.oid)
->  ProjectSet  (cost=0.00..258260.16
rows=51008000 width=66) (actual time=0.011..33.557 rows=53694 loops=2)
  ->  Parallel Seq Scan on pg_index i 
(cost=0.00..2710.08 rows=51008 width=64) (actual time=0.007..6.804
rows=43564 loops=2)
->  Hash  (cost=2.44..2.44 rows=12 width=72)
(actual time=7.614..7.615 rows=210 loops=2)
  Buckets: 1024  Batches: 1  Memory
Usage: 30kB
  ->  Index Scan using
pg_class_relname_nsp_index on pg_class t  (cost=0.42..2.44 rows=12 width=72)
(actual time=0.029..7.556 rows=210 loops=2)
Index Cond: ((relname >=
'sys'::name) AND (relname < 'syt'::name))
Filter: (relname ~~
'sys_account'::text)
Rows Removed by Filter: 1696
  ->  Index Scan using
pg_attribute_relid_attnum_index on pg_attribute a  (cost=0.42..0.69 rows=1
width=70) (actual time=0.009..0.009 rows=1 loops=424)
Index Cond: ((attrelid = i.indrelid) AND
(attnum = (i.indkey)[(generate_series(0, (i.indnatts - 1)))]))
  ->  Index Scan using pg_class_oid_index on pg_class c 
(cost=0.42..0.56 rows=1 width=68) (actual time=0.005..0.005 rows=1
loops=424)
Index Cond: (oid = i.indexrelid)
->  Index Scan using pg_namespace_oid_index on pg_namespace n 
(cost=0.14..0.92 rows=1 width=68) (actual time=0.001..0.001 rows=0
loops=424)
  Index Cond: (oid = t.relnamespace)
  Filter: (nspname ~~ 'f10_0511, public'::text)
  Rows Removed by Filter: 1
Planning Time: 1.176 ms
Execution Time: 76.031 ms








Re: slow queries on system tables

2019-08-23 Thread PegoraroF10
Coming back to my problem: today it happened again, that query came back slow.
Explaining again: autovacuum and autoanalyze for the pg_* tables are not
configured one by one, so they use the defaults of scale_factor = 0.2 and
threshold = 50. Today, when the speed problem came back, the
pg_stat_sys_tables view showed that autovacuum ran 415 times on the pg_class
table just this week and that there were 75187 updates on it. How can this
table have been updated 75000 times in only a week? What kind of statement
does an update on this table? And why did autovacuum run 415 times but
autoanalyze only once?

relname: pg_class
seq_scan 5395512, seq_tup_read 38598962913, idx_scan 616674711,
idx_tup_fetch 2219978581, n_tup_ins 2243, n_tup_upd 75187, n_tup_del 625,
n_tup_hot_upd 44008, n_live_tup 169891,
n_dead_tup and n_mod_since_analyze (run together in the archive): 56457435,
last_vacuum NULL, last_autovacuum 2019-08-22 19:58:48.497317,
last_analyze NULL, last_autoanalyze 2019-08-22 13:03:02.770579,
vacuum_count 0, autovacuum_count 415, analyze_count 0, autoanalyze_count 1





Re: slow queries on system tables

2019-08-23 Thread PegoraroF10
https://explain.depesz.com/s/5Rrd

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT current_database() AS TABLE_CATALOG, n.nspname AS TABLE_SCHEMA,
       t.relname AS TABLE_NAME, t.oid AS TABLE_OID,
       current_database() AS INDEX_CATALOG, n.nspname AS INDEX_SCHEMA,
       c.relname AS INDEX_NAME, c.oid AS INDEX_OID,
       a.attname AS COLUMN_NAME, a.attnum AS COLUMN_POSITION,
       indoption[index] & 1 AS DESCENDING, indisunique::integer AS UNIQUE
FROM (SELECT generate_series(0, indnatts - 1), indrelid, indexrelid, indkey,
             indoption, indisunique, indisprimary
      FROM pg_index i) i(index, indrelid, indexrelid, indkey, indoption,
                         indisunique, indisprimary)
INNER JOIN pg_class c ON c.oid = indexrelid
INNER JOIN pg_class t ON t.oid = indrelid
INNER JOIN pg_namespace n ON n.oid = t.relnamespace
INNER JOIN pg_attribute a ON a.attrelid = indrelid AND a.attnum = indkey[index]
WHERE n.nspname LIKE 'f10_0275, public' AND t.relname LIKE 'sys_var'
ORDER BY indisprimary::integer DESC, n.nspname, t.relname, c.relname, a.attnum;







Re: slow queries on system tables

2019-08-23 Thread PegoraroF10
autovacuum_analyze_threshold = 50
autovacuum_analyze_scale_factor = 0.1







Re: slow queries on system tables

2019-08-23 Thread PegoraroF10
This week we added just 5 new customers. Every schema has 100 tables, 300
indices, and 400 triggers.
I cannot imagine our script doing 75000 updates just by adding those schemas.








Re: slow queries on system tables

2019-08-24 Thread PegoraroF10
These values were collected before vacuum analyze:

select (select reltuples from pg_Class where relName = t.relName), *
from pg_stat_sys_tables t
where schemaname = 'pg_catalog'
  and relname in ('pg_class','pg_attribute','pg_index');

pg_attribute: reltuples 930475, seq_scan 22462, seq_tup_read 61582178,
  idx_scan 646833988, idx_tup_fetch 2785143399, n_tup_ins 25381,
  n_tup_upd 776, n_tup_del 15287, n_tup_hot_upd 566, n_live_tup 10094,
  n_dead_tup 15497, n_mod_since_analyze 41444, all last_* NULL,
  vacuum_count 0, autovacuum_count 0, analyze_count 0, autoanalyze_count 0

pg_class: reltuples 169186, seq_scan 5541140, seq_tup_read 39631813757,
  idx_scan 638041068, idx_tup_fetch 2316793709, n_tup_ins 2243,
  n_tup_upd 75187, n_tup_del 625, n_tup_hot_upd 44008, n_live_tup 169891,
  n_dead_tup and n_mod_since_analyze (run together in the archive): 56457435,
  last_vacuum NULL, last_autovacuum 2019-08-22 19:58:48.497317,
  last_analyze NULL, last_autoanalyze 2019-08-22 13:03:02.770579,
  vacuum_count 0, autovacuum_count 415, analyze_count 0, autoanalyze_count 1

pg_index: reltuples 87387, seq_scan 5263825, seq_tup_read 33638625440,
  idx_scan 246528384, idx_tup_fetch 590160274, n_tup_ins 1142, n_tup_upd 0,
  n_tup_del 208, n_tup_hot_upd 0, n_live_tup 934, n_dead_tup 208,
  n_mod_since_analyze 1350, all last_* NULL,
  vacuum_count 0, autovacuum_count 0, analyze_count 0, autoanalyze_count 0

Then:
vacuum analyze pg_catalog.pg_index;
vacuum analyze pg_catalog.pg_attribute;
vacuum analyze pg_catalog.pg_class;

pg_attribute: reltuples 940569, seq_scan 22550, seq_tup_read 61582570,
  idx_scan 646919000, idx_tup_fetch 2785430403, n_tup_ins 25381,
  n_tup_upd 776, n_tup_del 15287, n_tup_hot_upd 566, n_live_tup 940569,
  n_dead_tup 0, n_mod_since_analyze 0,
  last_vacuum 2019-08-24 09:01:26.334472, last_autovacuum NULL,
  last_analyze 2019-08-24 09:01:27.169883, last_autoanalyze NULL,
  vacuum_count 1, autovacuum_count 0, analyze_count 1, autoanalyze_count 0

pg_class: reltuples 150903, seq_scan 5542002, seq_tup_read 39640547448,
  idx_scan 63819, idx_tup_fetch 2316941739, n_tup_ins 2243, n_tup_upd 75187,
  n_tup_del 625, n_tup_hot_upd 44008, n_live_tup 150903, n_dead_tup 0,
  n_mod_since_analyze 0, last_vacuum 2019-08-24 09:01:33.130845,
  last_autovacuum 2019-08-22 19:58:48.497317,
  last_analyze 2019-08-24 09:01:33.441355,
  last_autoanalyze 2019-08-22 13:03:02.770579,
  vacuum_count 1, autovacuum_count 415, analyze_count 1, autoanalyze_count 1

pg_index: reltuples 88321, seq_scan 5263994, seq_tup_read 33639781274,
  idx_scan 246576056, idx_tup_fetch 590229532, n_tup_ins 1142, n_tup_upd 0,
  n_tup_del 208, n_tup_hot_upd 0, n_live_tup 88321, n_dead_tup 0,
  n_mod_since_analyze 0, last_vacuum 2019-08-24 09:01:18.080615,
  last_autovacuum NULL, last_analyze 2019-08-24 09:01:18.275907,
  last_autoanalyze NULL,
  vacuum_count 1, autovacuum_count 0, analyze_count 1, autoanalyze_count 0

And this is the explain before:
https://explain.depesz.com/s/fUNB
and after:
https://explain.depesz.com/s/1FZR




Operator is not unique

2019-09-24 Thread PegoraroF10
If I do:
select 1::NUMERIC + 1::BIGINT;

I get:
[42725] ERROR: operator is not unique: numeric + bigint
Hint: Could not choose a best candidate operator. You might need to add
explicit type casts.

Does this error mean I have more than one way to evaluate that expression?
Did I create that operator erroneously?
If so, how can I see what I did and how to fix it?

I tried, but didn't find any answer:
select * from pg_operator
join LATERAL (select typname::text from pg_type
              where pg_type.oid = oprleft) l(oprleft_name) on TRUE
join LATERAL (select typname::text from pg_type
              where pg_type.oid = oprright) r(oprright_name) on TRUE
join LATERAL (select nspname from pg_namespace
              where pg_namespace.oid = oprnamespace) n(namespace) on TRUE
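
A simpler way to look for objects you created yourself: anything made after initdb has an OID of 16384 or higher, so a sketch like this lists candidate user-added operators and casts:

```sql
-- operators outside pg_catalog (i.e. likely user-created)
select oprnamespace::regnamespace as schema, oprname,
       oprleft::regtype as left_type, oprright::regtype as right_type
from pg_operator
where oprnamespace <> 'pg_catalog'::regnamespace;

-- casts added after initdb (user OIDs start at 16384)
select castsource::regtype, casttarget::regtype, castcontext
from pg_cast
where oid >= 16384;
```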







Re: Operator is not unique

2019-09-24 Thread PegoraroF10
I don't know if I did.

PostgreSQL 11.5 (Ubuntu 11.5-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled
by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609, 64-bit







Re: Operator is not unique

2019-09-24 Thread PegoraroF10
Nope, it seems I didn't create anything.

SELECT * FROM pg_operator WHERE oid = '+(numeric, bigint)'::regoperator;
ERROR: operator does not exist: +(numeric, bigint)

OK, I can cast and it works. But why does it work without casting for you and
not for me?







Re: Operator is not unique

2019-09-25 Thread PegoraroF10
Doing an inner join with pg_proc I get this result. So I created none of
these casts. The prorettype column is different; is that the problem?

select format_type(castsource, NULL), format_type(casttarget, NULL), castfunc,
       castcontext, castmethod, pr.*
from pg_cast
inner join pg_proc pr on castfunc = pr.oid
where (castsource = 'numeric'::regtype or castsource = 'bigint'::regtype)
  and format_type(casttarget, NULL) in ('numeric','bigint')
order by castsource;

castsource  casttarget  castfunc  castcontext  castmethod  proname  prorettype  prosrc
bigint      numeric     1781      i            f           numeric  1700        int8_numeric
numeric     bigint      1779      a            f           int8     20          numeric_int8
numeric     numeric     1703      i            f           numeric  1700        numeric





Re: Operator is not unique

2019-09-25 Thread PegoraroF10
 format_type | format_type      | castfunc | castcontext | castmethod
 bigint      | smallint         | 714      | a           | f
 bigint      | integer          | 480      | a           | f
 bigint      | real             | 652      | i           | f
 bigint      | double precision | 482      | i           | f
 bigint      | numeric          | 1781     | i           | f
 bigint      | regclass         | 1287     | i           | f
 bigint      | regtype          | 1287     | i           | f
 bigint      | regconfig        | 1287     | i           | f
 bigint      | regdictionary    | 1287     | i           | f
 bigint      | regrole          | 1287     | i           | f
 bigint      | regnamespace     | 1287     | i           | f
 bigint      | bit              | 2075     | e           | f
 bigint      | money            | 3812     | a           | f
 bigint      | oid              | 1287     | i           | f
 bigint      | regproc          | 1287     | i           | f
 bigint      | regprocedure     | 1287     | i           | f
 bigint      | regoper          | 1287     | i           | f
 bigint      | regoperator      | 1287     | i           | f
 numeric     | bigint           | 1779     | a           | f
 numeric     | smallint         | 1783     | a           | f
 numeric     | integer          | 1744     | a           | f
 numeric     | real             | 1745     | i           | f
 numeric     | double precision | 1746     | i           | f
 numeric     | money            | 3824     | a           | f
 numeric     | numeric          | 1703     | i           | f





steps of a sql

2019-09-27 Thread PegoraroF10
Some days ago one of my customers complained that one of his queries was
taking too much time. The postgres log looks really strange: that query
doesn't usually appear as PARSE, sometimes only as EXECUTE, because it
almost always takes less than 500 ms. But on that day something happened.
What was that, a PARSE time of 10 minutes?

Command_Tag  Message
PARSE        duration: 552607.263 ms  parse PRSTMTST13817971293: WITH ...
SELECT       duration: 1211.195 ms    execute PRSTMTST13955596862/PORTALST13955596862: WITH ...
PARSE        duration: 673398.585 ms  parse PRSTMTST13857292877: WITH ...
SELECT       duration: 2742.588 ms    execute PRSTMTST13817971293/PORTALST13817971293: WITH ...
SELECT       duration: 1318.766 ms    execute PRSTMTST13844185613/PORTALST13844185613: WITH ...

I didn't find in the docs what the execution steps of a query are, and
obviously why that SQL took so long. These 5 examples are the same query,
so it doesn't make sense to wait 10 minutes for the parse and just 1 or 2
seconds for the execute. So the question is: how can I know whether I'm
running a simple query or an extended query, as the docs explain on
https://www.postgresql.org/docs/11/protocol-flow.html, and what can I do to
find out what happened and keep it from happening again?







Re: Operator is not unique

2019-09-27 Thread PegoraroF10
oprname  oprkind  oprleft  oprright  oprresult  oprcode
+        l        0        20        20         int8up
+        l        0        21        21         int2up
+        l        0        23        23         int4up
+        l        0        700      700        float4up
+        l        0        701      701        float8up
+        l        0        1700     1700       numeric_uplus
+        b        20       20       20         int8pl
+        b        20       21       20         int82pl
+        b        20       23       20         int84pl
+        b        20       869      869        int8pl_inet
+        b        21       20       20         int28pl
+        b        21       21       21         int2pl
+        b        21       23       23         int24pl
+        b        23       20       20         int48pl
+        b        23       21       23         int42pl
+        b        23       23       23         int4pl
+        b        23       1082     1082       integer_pl_date
+        b        600      600      600        point_add
+        b        602      600      602        path_add_pt
+        b        602      602      602        path_add
+        b        603      600      603        box_add
+        b        700      700      700        float4pl
+        b        700      701      701        float48pl
+        b        701      700      701        float84pl
+        b        701      701      701        float8pl
+        b        702      703      702        timepl
+        b        718      600      718        circle_add_pt
+        b        790      790      790        cash_pl
+        b        869      20       869        inetpl
+        b        1034     1033     1034       aclinsert
+        b        1082     23       1082       date_pli
+        b        1082     1083     1114       datetime_pl
+        b        1082     1186     1114       date_pl_interval
+        b        1082     1266     1184       datetimetz_pl
+        b        1083     1082     1114       timedate_pl
+        b        1083     1186     1083       time_pl_interval
+        b        1114     1186     1114       timestamp_pl_interval
+        b        1184     1186     1184       timestamptz_pl_interval
+        b        1186     1082     1114       interval_pl_date
+        b        1186     1083     1083       interval_pl_time
+        b        1186     1114     1114       interval_pl_timestamp
+        b        1186     1184     1184       interval_pl_timestamptz
+        b        1186     1186     1186       interval_pl
+        b        1186     1266     1266       interval_pl_timetz
+        b        1266     1082     1184       timetzdate_pl
+        b        1266     1186     1266       timetz_pl_interval
+        b        1700     1700     1700       numeric_add
+        b        1700     2283     2283       pg_catalog.day_inc
+        b        2283     1700     2283       pg_catalog.day_inc
+        b        3831     3831     3831       range_union
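Reading the listing, the only non-standard entries are the two
pg_catalog.day_inc rows, whose operand type oids are 1700 (numeric) and
2283 (anyelement). If those really are unwanted leftovers, a possible
cleanup (an untested sketch derived from the oids above, not a step anyone
in the thread has confirmed) would be:

```sql
-- 1700 = numeric, 2283 = anyelement (from the pg_operator listing above)
DROP OPERATOR pg_catalog.+ (numeric, anyelement);
DROP OPERATOR pg_catalog.+ (anyelement, numeric);
```

Dropping them should remove the ambiguity that made numeric + bigint
resolvable through more than one operator.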





Re: Operator is not unique

2019-09-30 Thread PegoraroF10
SELECT * FROM pg_extension;
extname, extversion
plpgsql,1.0
pg_stat_statements,1.6
pg_trgm,1.4
tablefunc,1.0
unaccent,1.1
pageinspect,1.7








Re: Operator is not unique

2019-09-30 Thread PegoraroF10
This select gives me:

ERROR: more than one function named "pg_catalog.day_inc"







Re: Operator is not unique

2019-09-30 Thread PegoraroF10
Your select returns no records, but if I use
WHERE p.proname ~ 'day_inc'
instead of
WHERE p.proname OPERATOR(pg_catalog.~) '^(pg_catalog.day_inc)$'
I get:


   Schema   |  Name   | Result data type |       Argument data types       | Type
------------+---------+------------------+---------------------------------+------
 pg_catalog | day_inc | anyelement       | adate anyelement, ndays numeric | func
 pg_catalog | day_inc | anyelement       | ndays numeric, adate anyelement | func










Performance on JSONB select

2019-10-02 Thread PegoraroF10
select * from MyTable where
  ((JsonBField->>'status' = 'descartada' and
    To_Date(JsonBField->'descartada'->>'data', 'yyyy-mm-dd') > Current_Date) or
   (JsonBField->>'status' = 'contrato' and
    To_Date(JsonBField->'contrato'->>'data', 'yyyy-mm-dd') > Current_Date - 7) or
   (JsonBField->>'status' = 'naoatribuido'));

Considering this table has a million records, I would like to use a proper
index, but how can I use an index with the > operator?
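One possible approach (an untested sketch, not from the thread; the
function and index names are made up) is a partial expression index per
status branch, so the > comparison becomes an ordinary btree range scan.
Note that to_date() is only STABLE, so it has to be wrapped in a function
declared IMMUTABLE before it can appear in an index expression, which is
safe only because the dates here always use one fixed ISO format:

```sql
-- Hypothetical immutable wrapper, so the expression is indexable:
CREATE FUNCTION iso_date(text) RETURNS date
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT to_date($1, 'YYYY-MM-DD') $$;

-- One partial btree index per status branch of the query:
CREATE INDEX mytable_descartada_data_idx
    ON MyTable (iso_date(JsonBField->'descartada'->>'data'))
    WHERE JsonBField->>'status' = 'descartada';

CREATE INDEX mytable_contrato_data_idx
    ON MyTable (iso_date(JsonBField->'contrato'->>'data'))
    WHERE JsonBField->>'status' = 'contrato';
```

With the query rewritten to compare iso_date(...) > some_date, each branch
can use its own partial index.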








Re: Performance on JSONB select

2019-10-02 Thread PegoraroF10
OK, my select performs better now, but I had to create 8 indexes to speed
up my query.

I would love to create just one index using GIN(JsonBField jsonb_ops), but
on version 11 I cannot use operators like > and <.

I see in the docs that version 12 has jsonpath filter expression elements,
and they include > and <. So the question is: will it be possible to have
just one index and use it with jsonpath filter expressions?







Re: Performance on JSONB select

2019-10-05 Thread PegoraroF10
I think I solved my problem. I didn't know it was possible, but I've
created an almost dynamic index: it indexes the status and the
corresponding datevalue of the object matching that status value.

Postgres, I love you.

Just one B-Tree index with 2 fields: Status, and the DateTime of the
respective Status object. My JSON always has a status and a corresponding
object for that status with other values, so if it has that status I know
whether there is a Date Value in its corresponding object.

{
"status": "visitadescartada",
"contrato": {},
"atribuido": {"datevalue": "2019-09-05 14:47:11.149095-03"},
"trabalhando": {},
"visitaagendada": {"datevalue": "2019-09-05
15:06:24.255548-03","caption": "Agendado"},
"visitadescartada": {"datevalue": "2019-09-12
11:47:17.45782-03","caption": "Desagendado"},
"digitacaodescartada": {}
}

create index IndexByStatusAndDate on MyTable (
(JsonBField->>$$status$$),
castimmutabletimestamp(JsonBField->(JsonBField->>$$status$$)->>'datevalue')) 
where (JsonBField ? $$status$$);

select * from MyTable where (JsonBField ? $$status$$) and
case when JsonBField->>$$status$$=$$cadastrodescartado$$ then
castimmutabletimestamp(JsonBField->(JsonBField->>$$status$$)->>'datevalue')
> castimmutabletimestamp($$2019-10-01$$)
 when JsonBField->>$$status$$=$$visitadescartada$$ then
castimmutabletimestamp(JsonBField->(JsonBField->>$$status$$)->>'datevalue')
> castimmutabletimestamp($$2019-10-01$$)
 when JsonBField->>$$status$$=$$contrato$$ then
castimmutabletimestamp(JsonBField->(JsonBField->>$$status$$)->>'data') >
castimmutabletimestamp($$2019-10-01$$)
 when JsonBField->>$$status$$=$$naoatribuido$$ then True end;

And performance now is great because I´m using both fields on index.







Re: Performance on JSONB select

2019-10-05 Thread PegoraroF10
I said it was almost dynamic because it uses the DateValue from one object
or another, depending on the value of the status key.







Re: Performance on JSONB select

2019-10-05 Thread PegoraroF10
No, no, no, the partial index is the where clause of it, and I'm not
talking about that.
I'm talking about datevalue, one or the other of which will be used,
depending on the status value.

For this record the index entry will be status plus the datevalue from the
"visitadescartada" object:
{
  "status": "visitadescartada",
  "atribuido": {"datevalue": "2019-09-05 14:47:11"},
  "visitadescartada": {"datevalue": "2019-09-12 11:47:17"},
  "digitacaodescartada": {}
}

And here the index entry will be status plus the datevalue from the
"atribuido" object:
{
  "status": "atribuido",
  "atribuido": {"datevalue": "2019-09-05 14:47:11"},
  "visitadescartada": {"datevalue": "2019-09-12 11:47:17"},
  "digitacaodescartada": {}
}




Re: Performance on JSONB select

2019-10-08 Thread PegoraroF10
Sorry, I told you that status was always populated, but it isn't; I need
that filter on the index because not all records use that json structure.
When the json is not null then yes, Status is always there.
Maybe 20 or 25% of my records have the json populated, so I really need
that filter.

Yes, that solution is fine. Selectivity is fine, speed is fine, and the
index is "almost dynamic".







Changing PK on replicated database

2019-10-16 Thread PegoraroF10
I have replication using Publication/Subscription, configured with REPLICA
IDENTITY DEFAULT, so it uses PK values to do the replication. Then I
imported data into a new schema, and it has several tables containing a
record with PK = 0. Replication works, but my application doesn't, because
it needs to see pk values > 0.

So I have to change those records with 0 in their pk to some other value.
What is the best way to do that?
If I just change the pk value on the master, how will the data of that
record be replicated?
That record will be sent to the replica as an update, but that PK doesn't
exist on the replica server, so ...

Or do I need to update them manually on both the Master and Replica
servers?

I didn't find any info about this in the docs, which is why I'm posting
about it.
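For what it's worth, a possible procedure (an untested sketch; the table,
column and subscription names are made up, and this assumes the row with
pk = 0 exists on both sides):

```sql
-- With REPLICA IDENTITY DEFAULT the update message carries the old PK,
-- so if pk = 0 is present on both sides a plain update on the master
-- should apply cleanly on the replica:
UPDATE some_schema.some_table SET pk = 1001 WHERE pk = 0;

-- If the two sides have diverged, one option is to pause replication,
-- fix both copies by hand, then resume:
ALTER SUBSCRIPTION my_sub DISABLE;
-- ... run the same UPDATE on the replica, then on the master ...
ALTER SUBSCRIPTION my_sub ENABLE;
```

While the subscription is disabled, WAL is retained on the publisher, so
the pause should be kept short.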







Re: Changing PK on replicated database

2019-10-17 Thread PegoraroF10
Correct, those records are on the replica too. I'm just asking about the
best way to update those keys and all their dependent tables.
If I change them first on the master they will not be replicated, because
it will be an update whose pk will not be found on the replica, correct?
If so, do I need to update manually on the replica first and on the master
later?
And on the replica server, will all FKs be updated if I change that PK? On
the replica, triggers are not fired; do foreign key cascades still work?
If I do it this way, when I change that PK on the master will I get a
warning on the replica server because that PK does not exist anymore?

The only question is: what are the correct steps when you need to change a
PK on a replicated database, just that.







Re: Changing PK on replicated database

2019-10-17 Thread PegoraroF10
Very good, the records were changed and everything is ok.
Well, sorry for this stupid question, but some time ago we did something
similar and our replica stopped, so I was just afraid of that happening
again.







security on user for replication

2019-11-11 Thread PegoraroF10
We use replication with publication/subscription. It's ok, works fine.
But if I go to my replica server and do select * from pg_subscription,
the subconninfo field exposes all the connection properties: host, port,
user, password and dbname are all available.
The documentation says the user for replication is equivalent to a
superuser and must have the login attribute. If this user has all that
power, and with that select on the replica all that info is available ...
How can I hide that info from users connected to my replica server? Or,
is it possible to have a replication user that is not a superuser, or one
with NOLOGIN?
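Worth checking first: the docs say that access to the subconninfo column of
pg_subscription is revoked from normal users precisely because it can
contain plain-text passwords, so only superusers should see it. A quick way
to verify on the replica (a sketch; the role name is made up):

```sql
-- Create an ordinary role and see what it can read (role is hypothetical):
CREATE ROLE report_user LOGIN;
SET ROLE report_user;

SELECT subname FROM pg_subscription;               -- should work
SELECT subname, subconninfo FROM pg_subscription;  -- should fail with a
                                                   -- permission-denied error
RESET ROLE;
```

If ordinary users on the replica can read subconninfo, they are probably
connecting as superusers, which is the real thing to fix.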




naming triggers for execution

2019-11-15 Thread PegoraroF10
From the create trigger page of the docs:
"If multiple triggers of the same kind are defined for the same event, they
will be fired in alphabetical order by name."
But suppose we have two triggers: one called on every table for auditing
purposes, for example, and another for a process specific to that table.
Considering that the auditing one needs to run last, how can I be sure it
will?

It would be cool if we could define a position for triggers, like SQL
Server, which can pin a trigger to the first or last position. But as we
cannot do that, what is the way you use?
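The usual workaround, given the alphabetical-order rule quoted above, is to
encode the position in the trigger name with a sortable prefix. A sketch
(table, trigger and function names are all made up):

```sql
-- Fires first: its name sorts before the audit trigger's name
CREATE TRIGGER trg_10_set_pk
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE set_pk();

-- Fires last among the BEFORE triggers on this event: auditing
CREATE TRIGGER trg_90_audit
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE audit_row();
```

Leaving gaps in the numbering (10, 20, ..., 90) leaves room to slot new
triggers in between later without renaming anything.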








Re: naming triggers for execution

2019-11-15 Thread PegoraroF10
Well, my way of doing auditing is done on the replica, so it's a little
different. On the production server I do, in a before update trigger:

  IF (tg_op = 'UPDATE') AND (new.* IS DISTINCT FROM old.*) THEN
    new.userauditing := User_ID;
    new.datetimeauditing := current_timestamp;
  END IF;
  RETURN new;

Then, on the replica server I do:

  IF (tg_op = 'INSERT') THEN
    insert into auditingtable ... with insert data
  ELSIF (tg_op = 'UPDATE') AND (new.datetimeauditing IS DISTINCT FROM
         old.datetimeauditing) THEN
    insert into auditingtable ... with old and new data
  ELSIF (tg_op = 'DELETE') THEN
    insert into auditingtable ... with old data
  END IF;

That trigger on the replica is configured to run on the replica with
ENABLE REPLICA TRIGGER.

With this approach I'm sure nothing will be audited if nothing was changed,
and additionally all auditing is done on the replica, which frees the
production server for production work and not auditing.

But, independently of my auditing being different from yours, what do you
do when you have two triggers on the same event on the same table?
Another example: imagine you have a function which creates your PK, but
another trigger needs that pk value to do something. Both run before
insert, but the trigger which creates the PK needs to run first. How can
you make sure this happens?







Re: naming triggers for execution

2019-11-15 Thread PegoraroF10
I know it's possible, but it's ugly.
A trigger called zaudittable looks really strange.







replace single char for string using regexp_replace

2019-12-29 Thread PegoraroF10
I have to replace the ; , " and ' chars in a string, putting in their
corresponding ascii values.

I have
Replace(Replace(Replace(Replace($$Text,with;On'It"$$,',','chr(59)'),';','chr(44)'),'"','chr(34)'),'''','chr(39)')
It works, but I would like to call just one Replace.

I tried this but it didn't work:
select
regexp_replace($$Text,with;On'It"$$,$$,|;|"|'$$,$$chr(59)|chr(44)|chr(34)|chr(39)$$,'g');

, -> chr(59)
; -> chr(44)
" -> chr(34)
' -> chr(39)
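regexp_replace() cannot pick a different replacement string per match, so
the single-call version has to be built another way. One untested sketch
(using the same character-to-string mapping listed in the post): split the
string into characters, map each one through a CASE, and re-aggregate in
order:

```sql
SELECT string_agg(
         CASE c WHEN ',' THEN 'chr(59)'
                WHEN ';' THEN 'chr(44)'
                WHEN '"' THEN 'chr(34)'
                WHEN '''' THEN 'chr(39)'
                ELSE c END,
         '' ORDER BY ord)
FROM regexp_split_to_table($$Text,with;On'It"$$, '')
     WITH ORDINALITY AS t(c, ord);
```

Splitting on the empty pattern yields one row per character, and WITH
ORDINALITY preserves the original character positions for the ORDER BY.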








Options for Postgres FDW

2020-01-30 Thread PegoraroF10
I'm trying to insert/update some tables using FDW, and I'm having problems
when I want to use default values and triggers.

For columns with default values: if I drop that column from the foreign
table definition, inserts are fine, but then I cannot use that column in
updates/selects because it doesn't exist in the foreign table definition.

For triggers: is there a way to define the search_path for that
server/connection? Suppose I've used "import foreign schema
CustomerSchema"; when I do some insert/update on any table of this schema,
and that table calls a trigger which needs another table, the trigger
doesn't see that table because the remote session is using only the
pg_catalog schema. Is the only way to solve this to set search_path at the
start of every trigger?
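One possible fix (a sketch, not verified against every postgres_fdw
version; the server name is made up): postgres_fdw passes libpq connection
settings through the foreign server definition, and libpq's options
keyword can set run-time parameters for the remote session, including
search_path:

```sql
-- Pin the remote session's search_path at connection time
-- (server name hypothetical):
ALTER SERVER customer_srv
    OPTIONS (ADD options '-c search_path=customerschema,public');
```

If the option already exists, SET replaces ADD. This way triggers on the
remote side resolve unqualified table names without each trigger having to
set search_path itself.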







Re: Options for Postgres FDW

2020-01-30 Thread PegoraroF10
Additionally, is there a list of all options for create foreign server and
create foreign table?







How to discover what table is

2020-03-08 Thread PegoraroF10
I have a publication/subscription replication.
This week I started to see this message in the log of the replica server.

Message: "duplicate key value violates unique constraint "pksyslookup""
Detail: "Key (lookup_id)=(56) already exists."

and on the production server the message is
"logical decoding found consistent point at 5D5/5CD64ED0"

I have 200 schemas in that database and every schema has that table, so
how can I discover which table is getting that violation?
Is there a way to know which WAL is being processed, and to get the
command I should run to discover which table should be updated?
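A way to narrow it down (a sketch, not from the thread): the constraint
name is in the error, so the system catalogs can list every table, one per
schema, that owns a constraint with that name; the offending one is
whichever already contains lookup_id = 56:

```sql
-- Run on the replica: all tables owning a constraint named pksyslookup
SELECT n.nspname            AS schema_name,
       c.conrelid::regclass AS table_name
FROM pg_constraint c
JOIN pg_namespace n ON n.oid = c.connamespace
WHERE c.conname = 'pksyslookup';
```

Each candidate can then be checked for the duplicated key value by hand.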








Re: How to discover what table is

2020-03-08 Thread PegoraroF10
Correct, which schema that table belongs to.







Re: How to discover what table is

2020-03-10 Thread PegoraroF10
Now I have the same problem with a different message.
I added a table in all schemas and did a refresh publication. When
postgres sees a new table on the publisher it goes to the replica and
truncates that table to start copying. OK, but I have 300 schemas, so how
can I know which schema that table belongs to?

In the log of the replica server ...

Message
cannot truncate a table referenced in a foreign key constraint
Detail
Table "rel_indicacao" references "cad_digitacao".
Hint
Truncate table "rel_indicacao" at the same time, or use TRUNCATE ...
CASCADE.








Re: How to discover what table is

2020-03-10 Thread PegoraroF10
built in logical replication







Re: How to discover what table is

2020-03-11 Thread PegoraroF10
Well, for now it's solved, but I'll explain what happened so it can be
handled better in future.
Suppose that on the Master you have a database with hundreds of schemas
with the same structure, so table Customer occurs 500 times in that DB,
and that database is replicated with a publication/subscription "for all
tables" model, just that. To add new schemas to this database you go to
the replica, create them with structure only, go to the master and create
them with data, then go back to the replica and refresh the publication.
But now imagine that one of those schemas was created on the replica with
data. This is where the problem occurs, and the message only says that
the PK of table Customer is duplicated; it is not easy to find which
table, because that table and that key exist in 90% of my schemas. If the
error message just told us which schema the table belongs to, it would be
great.







Reading WALs

2020-03-14 Thread PegoraroF10
On my built-in logical replication I see that it sometimes takes a long
time to catch up with the master. The question is: how can I see what is
being done on the replica? I know I can have a WAL reader; I tried
WAL2JSON, and it listens to all the data coming from the master, but as I
understood it, when I use that plugin the data is consumed, so it is not
replicated.

Is that correct? Can I see what is coming to the replica server and still
apply that change to the server as usual?







Re: Reading WALs

2020-03-14 Thread PegoraroF10
pg_stat_subscription does not answer what I want. This view just shows
that replication is being processed, but not exactly what it's doing.
Is it replicating a huge table which an erroneous process updated several
times?
Is the replication process rebuilding some index, and how much time is
that taking?
Which WAL, table and operation is the one being processed?

So I would like to have a listener on the replica server just to see all
the WAL info that is coming in, just that.
Then I can go back to production and see what could run better: the
application, triggers and so on.
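One way to look at pending changes without consuming anything (a sketch;
the slot name is made up, and it assumes wal2json is installed on the
publisher, where replication slots live) is a second logical slot used
only for inspection. The peek variant returns changes without advancing
the slot, unlike the get variant:

```sql
-- On the publisher: create an inspection-only slot with wal2json
SELECT pg_create_logical_replication_slot('peek_slot', 'wal2json');

-- Look at the pending changes without consuming them
SELECT lsn, xid, data
FROM pg_logical_slot_peek_changes('peek_slot', NULL, NULL);

-- Drop it when done, so it does not hold back WAL removal
SELECT pg_drop_replication_slot('peek_slot');
```

This leaves the subscription's own slot untouched, so replication keeps
applying changes as usual while you inspect the stream.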







Bug on version 12 ?

2020-05-15 Thread PegoraroF10
select To_Json(Current_Timestamp);
returns "2020-05-15T14:49:44.266342+00:00" on both version 11.7 and 12.3.

So I have lots of JSONs which have timestamps in them.

select
  JS ->> 'mydate'::text,
  to_timestamp((JS ->> 'mydate'), 'YYYY-MM-DD HH24:MI:SS')
from (select '{"somefield": true, "otherfield": true, "mydate":
"2020-04-02T18:26:50.941531-03:00"}'::jsonb) j(JS)

This SQL works fine on 11.7 but not on 12.3.

ERROR: invalid value "T1" for "HH24"
Detail: Value must be an integer.
Where: SQL function "castimmutabletimestamp" statement 1

Is that a version 12 bug or a server configuration issue?
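Not from the thread, but worth noting: ISO 8601 strings like these do not
need a to_timestamp() template at all; a plain cast understands the T
separator and the UTC offset on both versions:

```sql
SELECT '2020-04-02T18:26:50.941531-03:00'::timestamptz;

SELECT (JS ->> 'mydate')::timestamptz
FROM (SELECT '{"mydate": "2020-04-02T18:26:50.941531-03:00"}'::jsonb) j(JS);
```

The cast sidesteps the stricter template matching entirely, at the cost of
depending on the DateStyle setting instead of an explicit format.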







Re: Bug on version 12 ?

2020-05-15 Thread PegoraroF10
Both are ISO, MDY







Re: Bug on version 12 ?

2020-05-15 Thread PegoraroF10
Ok Tom, but then you cannot go back and forth, like this ...

select to_timestamp(jsonb_build_object('mydate',
current_timestamp)->>'mydate', 'YYYY-MM-DD HH24:MI:SS');

works on 11.7 but not on 12.3.









Re: Bug on version 12 ?

2020-05-15 Thread PegoraroF10
I understand, but the problem is that I cannot just migrate from 11 to 12;
I have to carefully verify all my code before migration.







Re: Server goes to Recovery Mode when run a SQL

2020-08-12 Thread PegoraroF10
Hi Michael, so sorry for the delay. I never had that problem again, so I
thought it was solved, but it seems not. We have two different SQLs that
push the server into recovery mode. This SQL is the harder one, because
you'll have to wait some time to get the error. Just run this SQL, open
htop and wait; you'll see your memory growing and growing until ...

PostgreSQL 11.5 (Ubuntu 11.5-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled
by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit

But the same problem occurs on version 10.

with feriados as (
  SELECT dia, repete
  FROM (VALUES ('2014-10-11'::DATE, FALSE), ('2014-10-13',
FALSE),('2014-10-14', FALSE), ('2014-10-15', FALSE), ('2014-10-16', FALSE),
   ('2014-10-17', FALSE), ('2014-12-19', FALSE), ('2014-12-20',FALSE),
('2014-12-22', FALSE), ('2014-12-23', FALSE),
   ('2014-12-24', FALSE), ('2014-12-26', FALSE), ('2014-12-27',FALSE),
('2014-12-29', FALSE), ('2014-12-30', FALSE),
   ('2014-12-31', FALSE), ('2015-01-02', FALSE), ('2015-01-03',FALSE),
('2015-01-04', FALSE), ('2015-02-16', FALSE),
   ('2015-02-17', FALSE), ('2015-04-03', FALSE), ('2015-04-04',FALSE),
('2015-06-04', FALSE), ('2015-12-18', FALSE),
   ('2015-12-19', FALSE), ('2015-12-21', FALSE), ('2015-12-22',FALSE),
('2015-12-23', FALSE), ('2015-12-24', FALSE),
   ('2015-12-26', FALSE), ('2015-12-28', FALSE), ('2015-12-29',FALSE),
('2015-12-30', FALSE), ('2015-12-31', FALSE),
   ('2016-01-02', FALSE), ('2016-01-04', FALSE), ('2016-01-05',FALSE),
('2016-01-06', FALSE), ('2016-01-07', FALSE),
   ('2016-01-08', FALSE), ('2016-01-09', FALSE), ('2016-02-08',FALSE),
('2016-03-09', FALSE), ('2016-03-25', FALSE),
   ('2016-03-26', FALSE), ('2016-05-26', FALSE), ('2016-12-24',FALSE),
('2016-12-28', FALSE), ('2016-12-29', FALSE),
   ('2016-12-30', FALSE), ('2016-12-31', FALSE), ('2017-01-02',FALSE),
('2017-01-03', FALSE), ('2017-01-04', FALSE),
   ('2017-01-05', FALSE), ('2017-01-06', FALSE), ('2017-01-07',FALSE),
('2017-02-25', FALSE), ('2017-02-27', FALSE),
   ('2017-03-09', FALSE), ('2017-04-14', FALSE), ('2017-04-15',FALSE),
('2017-06-15', FALSE), ('2017-09-30', FALSE),
   ('2017-12-18', FALSE), ('2017-12-19', FALSE), ('2017-12-20',FALSE),
('2017-12-21', FALSE), ('2017-12-22', FALSE),
   ('2017-12-23', FALSE), ('2017-12-26', FALSE), ('2017-12-27',FALSE),
('2017-12-28', FALSE), ('2017-12-29', FALSE),
   ('2017-12-30', FALSE), ('2018-01-02', FALSE), ('2018-01-03',FALSE),
('2018-01-04', FALSE), ('2018-01-05', FALSE),
   ('2018-01-06', FALSE), ('2018-01-07', FALSE), ('2018-02-12',FALSE),
('2018-02-13', FALSE), ('2018-03-30', FALSE),
   ('2018-03-31', FALSE), ('2018-04-30', FALSE), ('2018-05-31',FALSE),
('2018-10-15', FALSE), ('2018-12-18', FALSE),
   ('2018-12-19', FALSE), ('2018-12-20', FALSE), ('2018-12-21',FALSE),
('2018-12-22', FALSE), ('2018-12-24', FALSE),
   ('2018-12-26', FALSE), ('2018-12-27', FALSE), ('2018-12-28',FALSE),
('2018-12-29', FALSE), ('2018-12-31', FALSE),
   ('2019-01-01', TRUE), ('2019-01-02', FALSE), ('2019-01-03',FALSE),
('2019-01-04', FALSE), ('2019-01-05', FALSE),
   ('2019-01-07', FALSE), ('2019-02-09', TRUE), ('2019-03-09',TRUE),
('2019-04-21', TRUE), ('2019-05-01', TRUE),
   ('2019-09-07', TRUE), ('2019-10-12', TRUE), ('2019-11-02',
TRUE),('2019-11-15', TRUE), ('2019-12-19', TRUE),
   ('2019-12-20', TRUE), ('2019-12-21', TRUE), ('2019-12-22',
TRUE),('2019-12-23', TRUE), ('2019-12-25', TRUE),
   ('2019-12-26', TRUE), ('2019-12-27', TRUE)) x (dia, repete)
), materias as (
  SELECT * from (VALUES
(593, 11091, 'AAC - Ética e Segurança Digital', 9, 120,
'2019/01/30'::DATE, '2019/01/30'::DATE, 60, 120, 0),
(593, 11085, 'AAC - Computação nas Nuvens', 12, 60, NULL, NULL, 60, 120,
60)
  ) x (turma_id, materia_id, materia, sequencia, previsto, dataini, datafim,
tempoatividade, minutosaula, minutosrestantes)
), col_diasaula(turma_id, dia, tempoaula) as(select * from (values(593, 3,
time '02:00')) as x(turma_id, dia, tempoaula)
), aulasporsemana as (
   select * from (values (593,1)) x (turma_id, quantidade)
), assistidas (turma_id, sequencia, conteudo_id, conteudo, prevista,
aularealizada, tempoatividade, aulasNoDia, dia, minutoaula,
minutosassistidos, cargaconteudo, cargarestante) as (
  SELECT
materias.turma_id,
materias.sequencia,
materias.materia_id,
materias.materia,
coalesce(realizada.prevista, 1),
realizada.aularealizada,
materias.tempoatividade,
(realizada.minutosassistidos / materias.tempoatividade),
realizada.dia,
materias.minutosaula,
realizada.minutosassistidos,
materias.previsto,
coalesce(materias.previsto - (row_number() OVER AulaDaMateria *
realizada.minutosassistidos), materias.previsto)
  FROM materias
  LEFT JOIN LATERAL (
SELECT TRUE, tsrange(col_aula.data, (col_aula.data + ( col_aula.tempo
||' minute')::INTERVAL)) dia, 0 prevista, (extract(EPOCH FROM
col_aula.tempo) / 60) minutosassistidos
FROM (VALUES
 (593, 11091, TIMESTAMP '2019-01-30 19:00:00', TIME '02:00'),
 (593, 11091, '2019-02

Re: Server goes to Recovery Mode when run a SQL

2020-08-14 Thread PegoraroF10
This is the way you can test pushing your server into recovery mode
immediately. With this one you'll get the problem instantly, unlike the
last one, where you have to wait a long time for it to happen.

A master-detail relation with an additional table of configurations.
Usually the master has 20 or 40 thousand records and the detail has 10 or
20 times more. The configuration table holds key-value records, both text,
in which I store values of any type, just prefixing them with S: for
string, B: for boolean and so on.

Tests were done on ...
PostgreSQL 11.5 (Ubuntu 11.5-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled
by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit

drop table if exists ctr;create table ctr(Contrato_ID integer primary key,
Matricula Date, Status smallint);
drop table if exists rec;create table rec(Receber_ID serial primary key,
Contrato_ID integer, Parcela smallint, TipoRecebimento smallint, Vencimento
Date, Quitacao Date, Valor Numeric, Calculo Numeric, ValorPago Numeric);
drop table if exists var;create table var(Var_ID serial primary key, Name
text, VarValue text);

Populate master detail tables with some random values ...

with Contratos(Contrato_ID, Matricula, Status, Parcelas, ValorParcela, Quitadas) as (
  select g,
         (Current_Date - (trunc(random()*2000)||' day')::interval)::Date,
         trunc(random()*9)+1,
         greatest(trunc(random()*20)::integer, 11),
         (random()*200+100)::numeric(15,2),
         least(trunc(random()*20)::integer, 5)
  from generate_series(1,2,1) g),
Receber(Contrato_ID, Parcela, TipoRecebimento, Vencimento, Valor) as (
  select Contrato_ID, Linha, trunc(random()*10), Matricula + ('1 month'::interval * linha), ValorParcela
  from Contratos
  join lateral (select Linha from generate_series(1,Parcelas,1) Linha) g on true),
insContratos as (
  insert into CTR select Contrato_ID, Matricula, Status from Contratos returning *),
insReceber as (
  insert into REC(Contrato_ID, Parcela, TipoRecebimento, Vencimento, Quitacao, Valor, Calculo, ValorPago)
  select R.Contrato_ID, Parcela, TipoRecebimento, Vencimento,
         case when Parcela <= Quitadas then Vencimento else null end,
         Valor, Valor*.9,
         case when Parcela <= Quitadas then Valor else null end
  from Receber R inner join Contratos using(Contrato_ID)
  returning *)
select (select count(*) from insContratos), count(*) from insReceber;

Populate the configuration table ... (this table usually has 2 or 3 thousand records, but the problem already occurred with just these)

insert into var(Name, VarValue) select * from (Values
('/Config/StatusVisitaObrigaRetorno','S:2,17'),
('/Config/TemWhatsApp','B:True'),
('/Config/TempoRetornarAtriuido','S:03:00'),
('/Config/TempoRetornarTrabalhando','S:25'),
('/Config/Timezone','S:America/Sao_Paulo'),
('/Config/TipoFonteFunilVendas','I:7'),
('/Config/TipoRecebimentoCancelamento','S:6,7,10'),
('/Config/TipoRecebimentoPadraoCobranca','S:4'),
('/Config/TitularIdadeMinima','F:18'),
('/Config/TreinamentoCaixa','B:True'),
('/Config/TreinamentoNaoAlunos','S:Palestra'),
('/Config/TurmaContrato','B:False')) x;

This one works because I used a CTE

explain (analyze, buffers, verbose, costs) with Vars(VarValue) as (select
string_to_array(substring(VarValue from 3),',')::smallint[] from Var where
Name = '/Config/TipoRecebimentoCancelamento')
select C.contrato_id,
NullIf(count(Receber_ID) filter (where ((status <> 6) or (TipoRecebimento =
any(VarValue))) and (Vencimento < current_date) and (Quitacao is null)),0),
NullIf(sum(valor) filter (where ((status <> 6) or (TipoRecebimento =
any(VarValue))) and (Vencimento < current_date) and (Quitacao is null)),0),
NullIf(sum(calculo) filter (where ((status <> 6) or (TipoRecebimento =
any(VarValue))) and (Vencimento < current_date) and (Quitacao is null)),0),
NullIf(min(Vencimento::date) filter (where ((status <> 6) or
(TipoRecebimento = any(VarValue))) and (Quitacao is null)), null),
NullIf(count(Receber_ID) filter (where ((status <> 6) or (TipoRecebimento =
any(VarValue))) and (Quitacao is null)),0),
NullIf(sum(valor) filter (where ((status <> 6) or (TipoRecebimento =
any(VarValue))) and (Quitacao is null)),0),
NullIf(sum(Valor),0),
NullIf(sum(ValorPago),0)
from ctr C left join rec using(Contrato_ID) cross join Vars group by
C.Contrato_ID;

This is the one that sends my server into recovery mode.

explain (analyze, buffers, verbose, costs) select C.contrato_id,
NullIf(count(Receber_ID) filter (where ((status <> 6) or (TipoRecebimento in
(select substring(VarValue from 3) from Var where Name =
'/Config/TipoRecebimentoCancelamento'))) and (Vencimento < current_date) and
(Quitacao is null)),0),
NullIf(sum(valor) filter (where ((status <> 6) or (TipoRecebimento in
(select substring(VarValue from 3) from Var where Name =
'/Config/TipoRecebimentoCancelamento'))) and (Vencimento < current_date) and
(Quitacao is null)),0),
NullIf(sum(calculo) filter (where ((status <> 6) or (TipoRecebimento in
(select substring(VarValue from 3) from Var where Name =
'/Config/TipoRecebimentoCancelamento'))) and (Vencimento < current_date) and
(Quitaca

Re: Server goes to Recovery Mode when run a SQL

2020-08-14 Thread PegoraroF10
OK, it works if I cast TipoRecebimento to text.

(TipoRecebimento::text in (select substring(VarValue from 3) from Var where
Name = '/Config/TipoRecebimentoCancelamento'))

But now, how can I know which custom operator it's trying to use? Because
some day I could get into trouble again because of that.
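To answer my own question, one way to find every custom comparison operator the planner could pick is to look in the system catalogs. A sketch: list all non-built-in operators named '=' together with the function that implements them.

```sql
-- List every user-defined '=' operator and its implementing function,
-- skipping the built-in catalogs.
SELECT n.nspname            AS schema,
       o.oprname            AS operator,
       o.oprleft::regtype   AS left_type,
       o.oprright::regtype  AS right_type,
       o.oprcode            AS implementing_function
FROM pg_operator o
JOIN pg_namespace n ON n.oid = o.oprnamespace
WHERE o.oprname = '='
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```

With the operators from this thread defined, this returns the two public.= entries (text = i32 and i32 = text) pointing at fntextonumero.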



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Re: Server goes to Recovery Mode when run a SQL

2020-08-14 Thread PegoraroF10
Correct: on yours it gives an error message, but on mine it tries to execute
and goes into recovery.

So, how can I know which custom operator is being used in that comparison?







Re: Server goes to Recovery Mode when run a SQL

2020-08-14 Thread PegoraroF10
It works; it shouldn't, but it works. It returns false.

And I created these two operators a long time ago.

CREATE OPERATOR public.= (
FUNCTION = public.fntextonumero,
LEFTARG = text,
RIGHTARG = public.i32
);
CREATE OPERATOR public.= (
FUNCTION = public.fntextonumero,
LEFTARG = public.i32,
RIGHTARG = text
);

Dropped them and it works as expected, giving me the same message as yours.

DROP OPERATOR public.= (text,public.i32);
DROP OPERATOR public.= (public.i32,text);








Re: Server goes to Recovery Mode when run a SQL

2020-08-14 Thread PegoraroF10
And the operator function, just so you know, is only a conversion to text:

CREATE FUNCTION public.fntextonumero(ftext text, finteger public.i32)
RETURNS boolean
LANGUAGE sql STABLE
AS $$
  SELECT ftext = finteger::text;
$$;

CREATE FUNCTION public.fntextonumero(finteger public.i32, ftext text)
RETURNS boolean
LANGUAGE sql STABLE
AS $$
  SELECT ftext = finteger::text;
$$;








Re: Server goes to Recovery Mode when run a SQL

2020-08-14 Thread PegoraroF10
Well, I didn't know exactly where the problem was, so ...
The entire script to reproduce the problem is:

create domain public.i32 as integer;

CREATE FUNCTION public.fntextonumero(ftext text, finteger public.i32)
RETURNS boolean
LANGUAGE sql STABLE
AS $$
  SELECT ftext = finteger::text;
$$;
CREATE FUNCTION public.fntextonumero(finteger public.i32, ftext text)
RETURNS boolean
LANGUAGE sql STABLE
AS $$
  SELECT ftext = finteger::text;
$$;

CREATE OPERATOR public.= (
FUNCTION = public.fntextonumero,
LEFTARG = text,
RIGHTARG = public.i32
);
CREATE OPERATOR public.= (
FUNCTION = public.fntextonumero,
LEFTARG = public.i32,
RIGHTARG = text
);

Then run the script I sent you this morning.
This way I think you can reproduce the problem. Is this a Postgres problem?
Well, I created these operators because Firebird accepts comparisons between
numeric and text values, and since we converted from it I found a way to keep
comparing them. The problem occurred now because of the IN operator.
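For what it's worth, a sketch of how the same Firebird-style comparison can be done without any custom operator, by casting explicitly at the call site (plain examples, no custom objects involved):

```sql
-- Explicit casts keep the built-in operators in charge:
SELECT '5' = 5::text;        -- built-in text = text, returns true
SELECT 5 = '5'::integer;     -- built-in integer = integer, returns true

-- And the IN form from the original query stays on built-in text = text:
-- TipoRecebimento::text IN (SELECT substring(VarValue FROM 3) FROM Var ...)
```

Custom cross-type '=' operators also change how IN subqueries are resolved, which is exactly where this thread's crash surfaced, so explicit casts are the safer convention.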







Cluster and Vacuum Full

2020-10-05 Thread PegoraroF10
I have tables in a master-detail relationship, and my program usually loads
all detail records of a master record. So I configured CLUSTER on all those
detail tables to use the index that is the relation with the master table.
With that I load fewer pages into shared buffers, because all detail records
are on sequential pages; that's fine.

I know that to keep this working I have to run CLUSTER periodically, OK too.

But today, instead of CLUSTER, I just ran VACUUM FULL. Since VACUUM FULL
recreates the table entirely, like CLUSTER does, I thought it would use that
index too, but it didn't. I didn't check what order the records ended up in,
but it was not the one I had configured with CLUSTER.

Is that expected, or should VACUUM obey the cluster configuration?
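For reference, a minimal sketch of the difference (the table and index names here are hypothetical): VACUUM FULL rewrites the table but pays no attention to the clustering index; only CLUSTER rewrites the rows in index order.

```sql
-- Mark the detail table to cluster on its master-FK index (hypothetical names):
ALTER TABLE detail CLUSTER ON detail_master_id_idx;

CLUSTER detail;      -- rewrites rows in detail_master_id_idx order
CLUSTER;             -- re-clusters every table that has a clustering index set
VACUUM FULL detail;  -- also rewrites the table, but in physical order, not index order
```

So the expected maintenance command for this setup is the periodic bare CLUSTER, not VACUUM FULL.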







Primary Key index with Include

2018-11-28 Thread PegoraroF10
As you know, version 11 gives us the possibility to include additional
columns in an index.
So, if I recreate all my primary key indexes to contain a description column,
like ...

CREATE UNIQUE INDEX CONCURRENTLY Products_pk ON Products(Product_ID)
INCLUDE(Name);
ALTER TABLE Products ADD CONSTRAINT Products_pk PRIMARY KEY(Product_ID)
USING INDEX Products_pk

Then, when I do

select Item_ID, Product_ID, Name from Items inner join Products
using(Product_ID)

I thought it would do an index-only scan on the Products table, but it does
not. Why?
What do I need to do to get an index-only scan when fetching Product_ID and
Name from table Products?






Transition Tables doesn´t have OID

2018-12-01 Thread PegoraroF10
I´m trying to use transition tables for auditing purposes.

create trigger MyTableAudit_UPD after update on MyTable referencing old
table as Transition_old new table as Transition_new for each statement
execute procedure AuditTable();

create or replace function AuditTable() returns trigger language plpgsql as
$$
begin
  if (TG_OP = 'UPDATE') then
    insert into audittable(table_name, oid, audit_action, user_id,
      table_schema, values_old, values_new)
    select TG_TABLE_NAME, Transition_new.oid, TG_OP, CURRENT_USER,
      TG_TABLE_SCHEMA, row_to_json(Transition_old.*)::jsonb,
      row_to_json(Transition_new.*)::jsonb
    from Transition_new
    inner join Transition_old on Transition_new.OID = Transition_old.OID;
  elsif (TG_OP = 'DELETE') then
    insert into audittable(table_name, oid, audit_action, user_id,
      table_schema, values_old)
    select TG_TABLE_NAME, Transition_old.oid, TG_OP, CURRENT_USER,
      TG_TABLE_SCHEMA, row_to_json(Transition_old.*)::jsonb
    from Transition_old;
  end if;
  return null;
end;
$$;

[42703] ERROR: column transition_new.oid does not exist
Where: PL/pgSQL function audittable() line 14 at SQL statement

I would like to use the OID value because we sometimes change our primary
keys, and the OID doesn't change.

So, is there a way to get the OID in transition tables?






Re: Transition Tables doesn´t have OID

2018-12-01 Thread PegoraroF10
Yes, MyTable has an OID column:

select oid, ID, Name from MyTable limit 3

oid id  name
279515  1104Carol 
279516  1106Dalva 
279517  11008354Melissa 







Re: Transition Tables doesn´t have OID

2018-12-01 Thread PegoraroF10
OK, then help me find alternatives to it.
As I said, sometimes we change our PK, so using OID would be a smart way to
join the old and new transition tables, and we want to use transition tables
because a per-statement trigger is a lot faster than per-row triggers for
auditing purposes.

So, what's the alternative? One trigger for each row, just for PK changes
(which will occur rarely), and another trigger for each statement to do the
logging?






Server goes to Recovery Mode when run a SQL

2019-02-03 Thread PegoraroF10
I have a complex query which puts my server in recovery mode every time I run
it. I don't need to say that recovery mode is a situation you don't want your
server to go into.

If I'm using some subselects I'll get that situation:

with 
StatusTrabalhando(Intkey) as 
  (select Intkey from sys_Lookup where Grupo = $$StatusDigitacao$$ and
Intkey in (select unnest(string_to_array(substring(VarValue from 3),$$,$$))
from sys_Var where Name =
$$/Company/StatusDigitacaoReceptivoTrabalhando$$)), 
StatusAgendados(Intkey) as 
  (select Intkey from sys_Lookup where Grupo = $$StatusVisita$$ and Intkey
in (select unnest(string_to_array(substring(VarValue from 3),$$,$$)) from
sys_Var where Name = $$/Company/StatusDigitacaoReceptivoAgendados$$) and 
  Intkey not in (select unnest(string_to_array(substring(VarValue from
3),$$,$$)) from sys_Var where Name =
$$/Company/StatusDigitacaoReceptivoDescartarAgendados$$)), 
Todos(Digitacao_ID, Fonte_ID, Curso_ID, Digitador_ID, Pesquisador_ID,
Telemarketing_ID, Nome, Curso, Telefone, Celular, Nascimento, Sexo,
Escolaridade, Cadastro, Email, Idade, Obs, Extra1, Extra2, Extra3, Extra4,
Colegio, Serie, Turma, Turno, AnoLetivo, Pai, Mae, Sequencia, Status,
lkStatus, lkFonte, lkDigitador, lkPesquisador, lkTelemarketing,
lkEscolaridade, Endereco, Bairro, Cidade, Estado, CEP, Repetido, Impresso,
Etiqueta, WhatsApp, VisitaNome, VisitaStatus, Comercial, Midia, Prioridade,
DataHoraAlteracao, Fluxo) ...

But if I run those subselects previously, get those values and put the
results into my statement, then it works:

with 
StatusDigitacaoReceptivoDescartarAgendados(Intkey) as (select
unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where
Name = $$/Company/StatusDigitacaoReceptivoDescartarAgendados$$), 
StatusDigitacaoReceptivoAgendados(IntKey) as (select
unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where
Name = $$/Company/StatusDigitacaoReceptivoAgendados$$), 
StatusDigitacaoReceptivoTrabalhando(IntKey) as (select
unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where
Name = $$/Company/StatusDigitacaoReceptivoTrabalhando$$) 
select (select IntKey from StatusDigitacaoReceptivoDescartarAgendados),
(select IntKey from StatusDigitacaoReceptivoAgendados), (select IntKey from
StatusDigitacaoReceptivoTrabalhando);

Then I get these results (8, 14 and 17), replace those subselects with these
values, and run; now it runs fine:

with 
StatusTrabalhando(Intkey) as (select Intkey from sys_Lookup where Grupo =
$$StatusDigitacao$$ and Intkey in (8)), 
StatusAgendados(Intkey) as (select Intkey from sys_Lookup where Grupo =
$$StatusVisita$$ and Intkey in (14) and Intkey not in (17)), 
Todos(Digitacao_ID, Fonte_ID, Curso_ID, Digitador_ID, Pesquisador_ID,
Telemarketing_ID, Nome, Curso, Telefone, Celular, Nascimento, Sexo,
Escolaridade, Cadastro, Email, Idade, Obs, Extra1, Extra2, Extra3, Extra4,
Colegio, Serie, Turma, Turno, AnoLetivo, Pai, Mae, Sequencia, Status,
lkStatus, lkFonte, lkDigitador, lkPesquisador, lkTelemarketing,
lkEscolaridade, Endereco, Bairro, Cidade, Estado, CEP, Repetido, Impresso,
Etiqueta, WhatsApp, VisitaNome, VisitaStatus, Comercial, Midia, Prioridade,
DataHoraAlteracao, Fluxo)

My original query, which worked for some days and then went into recovery
mode, was:

with 
StatusTrabalhando(Intkey) as 
  (select Intkey from sys_Lookup where Grupo = $$StatusDigitacao$$ and
Intkey in (select unnest(string_to_array(substring(VarValue from 3),$$,$$))
from sys_Var where Name =
$$/Company/StatusDigitacaoReceptivoTrabalhando$$)), 
StatusAgendados(Intkey) as 
  (select Intkey from sys_Lookup where Grupo = $$StatusVisita$$ and Intkey
in (select unnest(string_to_array(substring(VarValue from 3),$$,$$)) from
sys_Var where Name = $$/Company/StatusDigitacaoReceptivoAgendados$$) and 
  Intkey not in (select unnest(string_to_array(substring(VarValue from
3),$$,$$)) from sys_Var where Name =
$$/Company/StatusDigitacaoReceptivoDescartarAgendados$$)), 
Todos(Digitacao_ID, Fonte_ID, Curso_ID, Digitador_ID, Pesquisador_ID,
Telemarketing_ID, Nome, Curso, Telefone, Celular, Nascimento, Sexo,
Escolaridade, Cadastro, Email, Idade, Obs, Extra1, Extra2, Extra3, Extra4,
Colegio, Serie, Turma, Turno, AnoLetivo, Pai, Mae, Sequencia, Status,
lkStatus, lkFonte, lkDigitador, lkPesquisador, lkTelemarketing,
lkEscolaridade, Endereco, Bairro, Cidade, Estado, CEP, Repetido, Impresso,
Etiqueta, WhatsApp, VisitaNome, VisitaStatus, Comercial, Midia, Prioridade,
DataHoraAlteracao, Fluxo) as 
  (select D.Digitacao_ID, D.Fonte_ID, D.Curso_ID, Digitador_ID,
D.Pesquisador_ID, D.Telemarketing_ID, D.Nome, Curso.Descricao Curso,
D.Telefone, D.Celular, D.Nascimento, D.Sexo, D.Escolaridade, D.Cadastro,
D.Email, D.Idade, D.Obs, D.Extra1, D.Extra2, D.Extra3, D.Extra4, D.Colegio,
D.Serie, D.Turma, D.Turno, D.AnoLetivo, D.Pai, D.Mae, D.Sequencia, D.Status,
Status.Descricao lkStatus, Fonte.Fonte lkFonte, DigitadorPessoa.Apeli

Re: Server goes to Recovery Mode when run a SQL

2019-02-03 Thread PegoraroF10
I'm using Postgres 10 on Ubuntu in a Google VM (8 cores, 32 GB RAM, 250 GB
SSD) and the DB is 70 GB.






Re: Server goes to Recovery Mode when run a SQL

2019-02-03 Thread PegoraroF10
Just to be readable ...

And ... the server log has only "the database system is in recovery mode"
every time I run that query.

I have a complex query which puts my server in recovery mode every time I
run it.
I don't need to say that recovery mode is a situation you don't want your
server to go into.
If I'm using some subselects I'll get that situation:

with 
StatusTrabalhando(Intkey) as 
  (select Intkey from sys_Lookup where Grupo = $$StatusDigitacao$$ and
Intkey in (select unnest(string_to_array(substring(VarValue from 3),$$,$$))
from sys_Var where Name =
$$/Company/StatusDigitacaoReceptivoTrabalhando$$)), 
StatusAgendados(Intkey) as 
  (select Intkey from sys_Lookup where Grupo = $$StatusVisita$$ and Intkey
in (select unnest(string_to_array(substring(VarValue from 3),$$,$$)) from
sys_Var where Name = $$/Company/StatusDigitacaoReceptivoAgendados$$) and 
Intkey not in (select unnest(string_to_array(substring(VarValue from
3),$$,$$)) from sys_Var where Name =
$$/Company/StatusDigitacaoReceptivoDescartarAgendados$$)), 
Todos(Digitacao_ID, Fonte_ID, Curso_ID, Digitador_ID, Pesquisador_ID,
Telemarketing_ID, Nome, Curso, Telefone, Celular, Nascimento, Sexo,
Escolaridade, Cadastro, Email, Idade, Obs, Extra1, Extra2, Extra3, Extra4,
Colegio, 
  Serie, Turma, Turno, AnoLetivo, Pai, Mae, Sequencia, Status, lkStatus,
lkFonte, lkDigitador, lkPesquisador, lkTelemarketing, lkEscolaridade,
Endereco, Bairro, Cidade, Estado, CEP, Repetido, Impresso, Etiqueta,
WhatsApp, 
  VisitaNome, VisitaStatus, Comercial, Midia, Prioridade, DataHoraAlteracao,
Fluxo) ... 

But if I run those subselects previously, get the result values and put them
into my statement, then it works.
with 
StatusDigitacaoReceptivoDescartarAgendados(Intkey) as (select
unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where
Name = $$/Company/StatusDigitacaoReceptivoDescartarAgendados$$), 
StatusDigitacaoReceptivoAgendados(IntKey) as (select
unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where
Name = $$/Company/StatusDigitacaoReceptivoAgendados$$), 
StatusDigitacaoReceptivoTrabalhando(IntKey) as (select
unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where
Name = $$/Company/StatusDigitacaoReceptivoTrabalhando$$) 
select (select IntKey from StatusDigitacaoReceptivoDescartarAgendados),
(select IntKey from StatusDigitacaoReceptivoAgendados), (select IntKey from
StatusDigitacaoReceptivoTrabalhando); 

Then I get these results (8, 14 and 17), replace those subselects with these
values, and run; now it runs fine.
with 
StatusTrabalhando(Intkey) as (select Intkey from sys_Lookup where Grupo =
$$StatusDigitacao$$ and Intkey in (8)), 
StatusAgendados(Intkey) as (select Intkey from sys_Lookup where Grupo =
$$StatusVisita$$ and Intkey in (14) and Intkey not in (17)), 
Todos(Digitacao_ID, Fonte_ID, Curso_ID, Digitador_ID, Pesquisador_ID,
Telemarketing_ID, Nome, Curso, Telefone, Celular, Nascimento, Sexo,
Escolaridade, Cadastro, Email, Idade, Obs, Extra1, Extra2, Extra3, Extra4,
Colegio, 
  Serie, Turma, Turno, AnoLetivo, Pai, Mae, Sequencia, Status, lkStatus,
lkFonte, lkDigitador, lkPesquisador, lkTelemarketing, lkEscolaridade,
Endereco, Bairro, Cidade, Estado, CEP, Repetido, Impresso, Etiqueta,
WhatsApp, 
  VisitaNome, VisitaStatus, Comercial, Midia, Prioridade, DataHoraAlteracao,
Fluxo) 

My original query, which worked for some days and then went into recovery
mode, was ...
with 
StatusTrabalhando(Intkey) as 
  (select Intkey from sys_Lookup where Grupo = $$StatusDigitacao$$ and
Intkey in (select unnest(string_to_array(substring(VarValue from 3),$$,$$))
from sys_Var where Name =
$$/Company/StatusDigitacaoReceptivoTrabalhando$$)), 
StatusAgendados(Intkey) as 
  (select Intkey from sys_Lookup where Grupo = $$StatusVisita$$ and Intkey
in (select unnest(string_to_array(substring(VarValue from 3),$$,$$)) from
sys_Var where Name = $$/Company/StatusDigitacaoReceptivoAgendados$$) and 
  Intkey not in (select unnest(string_to_array(substring(VarValue from
3),$$,$$)) from sys_Var where Name =
$$/Company/StatusDigitacaoReceptivoDescartarAgendados$$)), 
Todos(Digitacao_ID, Fonte_ID, Curso_ID, Digitador_ID, Pesquisador_ID,
Telemarketing_ID, Nome, Curso, Telefone, Celular, Nascimento, Sexo,
Escolaridade, Cadastro, Email, Idade, Obs, Extra1, Extra2, Extra3, Extra4,
Colegio, 
  Serie, Turma, Turno, AnoLetivo, Pai, Mae, Sequencia, Status, lkStatus,
lkFonte, lkDigitador, lkPesquisador, lkTelemarketing, lkEscolaridade,
Endereco, Bairro, Cidade, Estado, CEP, Repetido, Impresso, Etiqueta,
WhatsApp, 
  VisitaNome, VisitaStatus, Comercial, Midia, Prioridade, DataHoraAlteracao,
Fluxo) as 
  (select D.Digitacao_ID, D.Fonte_ID, D.Curso_ID, Digitador_ID,
D.Pesquisador_ID, D.Telemarketing_ID, D.Nome, Curso.Descricao Curso,
D.Telefone, D.Celular, D.Nascimento, D.Sexo, D.Escolaridade, D.Cadastro,
D.Email, D.Idade, 
  D.Obs, D.Extra1, D.Extra2, D.Extra3, D.Extra4, D.Colegio, D.Serie,
D.Turma, D.Turno, D.AnoLetivo, D.Pai, D.

Re: Server goes to Recovery Mode when run a SQL

2019-02-04 Thread PegoraroF10
Nothing was change, Postgres 10, Ubuntu 16.04 and Schema was the same, before
and after that problem.
Well, that database is replicated and on replicated server that problem
doesn´t occur.






Re: Server goes to Recovery Mode when run a SQL

2019-02-04 Thread PegoraroF10
About replication ... Logical Replication with CREATE
PUBLICATION/SUBSCRIPTION.

Yes, some DDL commands were ran on that server but none of them were related
with that select.
Let me explain better. We have a single server with a single database on it.
Each customer has its own schema and connects to it to work exclusively on
that schema. So, sometimes we add some customer or change something on an
old one. But this DDL change we could ran is not related with that schema we
are talking. Schemas can have different structures but that schema which
puts my server on recovery mode was not changed.

Maybe something related happened some days ago. When we start a new customer
we add a schema, put all their tables on it and it´s ok. Our server has
today 90 schemas and each schema has 100 tables, resulting in 9000 tables.
Some days ago we added 5 new customers on same day, so we added 500 tables.
Then, when we did that some selects on system tables were very very slow and
that was only solved when we did a REINDEX DATABASE. Even REINDEX SYSTEM did
not solve. Is this problem related with recovery mode of my server ?
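As an aside, a sketch for checking whether the catalog indexes grew unusually large after that burst of DDL (the sizes reported are whatever your catalog currently holds):

```sql
-- Ten largest pg_catalog indexes; a sudden jump after mass table creation
-- is the kind of bloat that REINDEX DATABASE would have cleaned up.
SELECT relname,
       pg_size_pretty(pg_relation_size(oid)) AS index_size
FROM pg_class
WHERE relnamespace = 'pg_catalog'::regnamespace
  AND relkind = 'i'
ORDER BY pg_relation_size(oid) DESC
LIMIT 10;
```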






Re: Server goes to Recovery Mode when run a SQL

2019-02-08 Thread PegoraroF10
Well, now we have two queries which stop our Postgres server completely.
The problem occurs on versions 10.6 and 11.1, and on both servers the
problem is the same.

Linux logs of the old crash:

Feb  1 18:39:53 fx-cloudserver kernel: [  502.405788] show_signal_msg: 5 callbacks suppressed
Feb  1 18:39:53 fx-cloudserver kernel: [  502.405791] postgres[10195]: segfault at 24 ip 555dc6a71cb0 sp 7ffc5f91db38 error 4 in postgres[555dc69b4000+6db000]

Postgres log of the old crash:

2019-02-01 18:39:53.229 -02,,,1523,,5c54aca9.5f3,3,,2019-02-01 18:31:37 -02,,0,LOG,0,"server process (PID 10195) was terminated by signal 11: Segmentation fault","Failed process was running: WITH StatusTrabalhando(Intkey) AS (SELECT Intkey FROM sys_Lookup WHERE Grupo = $$StatusDigitacao$$ AND Intkey in (SELECT unnest(string_to_array(substring(VarValue FROM 3),$$,$$)) FROM sys_Var WHERE Name = $$/Company/StatusDigitacaoReceptivoTrabalhando$$)), StatusAgendados(Intkey) AS (SELECT Intkey FROM sys_Lookup WHERE Grupo = $$StatusVisita$$ AND Intkey in (SELECT unnest(string_to_array(substring(VarValue FROM 3),$$,$$)) FROM sys_Var WHERE Name = $$/Company/StatusDigitacaoReceptivoAgendados$$) AND Intkey NOT in (SELECT unnest(string_to_array(substring(VarValue FROM 3),$$,$$)) FROM sys_Var WHERE Name = $$/Company/StatusDigitacaoReceptivoDescartarAgendados$$)), Todos(Digitacao_ID, Fonte_ID, Curso_ID, Digitador_ID, Pesquisador_ID, Telemarketing_ID, Nome, Curso, Telefone, Celular, Nascimento, Sexo, Escolaridade, Cadastro, Email, Idade, Obs, Extra1, Extra2, Extra3, Extra4, Colegio, Serie, Turma, Turno, AnoLetivo, Pai, Mae, Sequencia, Status"""
2019-02-01 18:39:53.229 -02,,,1523,,5c54aca9.5f3,4,,2019-02-01 18:31:37 -02,,0,LOG,0,"terminating any other active server processes",""

Postgres logs of the new crash:

2019-02-08 17:21:16.650 -02,,,16321,,5c4f39b8.3fc1,25625,,2019-01-28 15:19:52 -02,,0,LOG,0,"server process (PID 10321) was terminated by signal 11: Segmentation fault","Failed process was running: with StatusTrabalhando(Intkey) as (select Intkey from sys_Lookup where Grupo = $$StatusDigitacao$$ and Intkey in (select unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where Name = $$/Company/StatusDigitacaoReceptivoTrabalhando$$)), StatusAgendados(Intkey) as (select Intkey from sys_Lookup where Grupo = $$StatusVisita$$ and Intkey in (select unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where Name = $$/Company/StatusDigitacaoReceptivoAgendados$$) and Intkey not in (select unnest(string_to_array(substring(VarValue from 3),$$,$$)) from sys_Var where Name = $$/Company/StatusDigitacaoReceptivoDescartarAgendados$$)), Todos(Digitacao_ID, Fonte_ID, Curso_ID, Digitador_ID, Pesquisador_ID, Telemarketing_ID, Nome, Curso, Telefone, Celular, Nascimento, Sexo, Escolaridade, Cadastro, Email, Idade, Obs, Extra1, Extra2, Extra3, Extra4, Colegio, Serie, Turma, Turno, AnoLetivo, Pai, Mae, Sequencia, Status, lkStatus, lkFonte, lkDigitador, lkPesquisador"""
2019-02-08 17:21:16.650 -02,,,16321,,5c4f39b8.3fc1,25626,,2019-01-28 15:19:52 -02,,0,LOG,0,"terminating any other active server processes",""
2019-02-08 17:21:16.650 -02,"postgres","f10db",10138,"74.125.92.65:44342",5c5dcd7d.279a,2,"idle",2019-02-08 16:42:05 -02,107/0,0,WARNING,57P02,"terminating connection because of crash of another server process","The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.","In a moment you should be able to reconnect to the database and repeat your command.",,,""
2019-02-08 17:21:16.650 -02,"postgres","f10db",9667,"173.194.101.228:63516",5c5dbdee.25c3,2,"idle",2019-02-08 15:35:42 -02,20/0,0,WARNING,57P02,"terminating connection because of crash of another server process","The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.","In a moment you should be able to reconnect to the database and repeat your command.",,,""
2019-02-08 17:21:16.650 -02,"postgres","f10db",10096,"74.125.92.68:50186",5c5dcd77.2770,2,"idle",2019-02-08 16:41:59 -02,48/0,0,WARNING,57P02,"terminating connection because of crash of another server process","The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.","In a moment you should be able to reconnect to the database and repeat your command.",,,""
2019-02-08 17:21:16.651 -02,"postgres","f10db",9696,"74.125.115.163:48542",5c5dbe07.25e0,2,"idle",2019-02-08 15:36:07 -02,49/0,0,WARNING,57P02,"terminating connection because of crash of another server process","The postmaster has commanded this server process to roll back the cur

Re: Server goes to Recovery Mode when run a SQL

2019-02-13 Thread PegoraroF10
Here is a SQL statement which will trigger that recovery mode. You can run
it on any database, because we built it entirely FROM VALUES.
This is the one whose memory use grows and grows until all memory and swap
space are gone, and then the problem occurs. The other SQL, which gives us
the same problem immediately, we couldn't replay without our entire
database; even if we extract just that schema to a new DB it doesn't go into
recovery mode. We will think a little more about how to create something you
can test.

with feriados as (
  SELECT dia, repete
  FROM (VALUES
    ('2014-10-11'::DATE, FALSE), ('2014-10-13', FALSE), ('2014-10-14', FALSE), ('2014-10-15', FALSE), ('2014-10-16', FALSE),
    ('2014-10-17', FALSE), ('2014-12-19', FALSE), ('2014-12-20', FALSE), ('2014-12-22', FALSE), ('2014-12-23', FALSE),
    ('2014-12-24', FALSE), ('2014-12-26', FALSE), ('2014-12-27', FALSE), ('2014-12-29', FALSE), ('2014-12-30', FALSE),
    ('2014-12-31', FALSE), ('2015-01-02', FALSE), ('2015-01-03', FALSE), ('2015-01-04', FALSE), ('2015-02-16', FALSE),
    ('2015-02-17', FALSE), ('2015-04-03', FALSE), ('2015-04-04', FALSE), ('2015-06-04', FALSE), ('2015-12-18', FALSE),
    ('2015-12-19', FALSE), ('2015-12-21', FALSE), ('2015-12-22', FALSE), ('2015-12-23', FALSE), ('2015-12-24', FALSE),
    ('2015-12-26', FALSE), ('2015-12-28', FALSE), ('2015-12-29', FALSE), ('2015-12-30', FALSE), ('2015-12-31', FALSE),
    ('2016-01-02', FALSE), ('2016-01-04', FALSE), ('2016-01-05', FALSE), ('2016-01-06', FALSE), ('2016-01-07', FALSE),
    ('2016-01-08', FALSE), ('2016-01-09', FALSE), ('2016-02-08', FALSE), ('2016-03-09', FALSE), ('2016-03-25', FALSE),
    ('2016-03-26', FALSE), ('2016-05-26', FALSE), ('2016-12-24', FALSE), ('2016-12-28', FALSE), ('2016-12-29', FALSE),
    ('2016-12-30', FALSE), ('2016-12-31', FALSE), ('2017-01-02', FALSE), ('2017-01-03', FALSE), ('2017-01-04', FALSE),
    ('2017-01-05', FALSE), ('2017-01-06', FALSE), ('2017-01-07', FALSE), ('2017-02-25', FALSE), ('2017-02-27', FALSE),
    ('2017-03-09', FALSE), ('2017-04-14', FALSE), ('2017-04-15', FALSE), ('2017-06-15', FALSE), ('2017-09-30', FALSE),
    ('2017-12-18', FALSE), ('2017-12-19', FALSE), ('2017-12-20', FALSE), ('2017-12-21', FALSE), ('2017-12-22', FALSE),
    ('2017-12-23', FALSE), ('2017-12-26', FALSE), ('2017-12-27', FALSE), ('2017-12-28', FALSE), ('2017-12-29', FALSE),
    ('2017-12-30', FALSE), ('2018-01-02', FALSE), ('2018-01-03', FALSE), ('2018-01-04', FALSE), ('2018-01-05', FALSE),
    ('2018-01-06', FALSE), ('2018-01-07', FALSE), ('2018-02-12', FALSE), ('2018-02-13', FALSE), ('2018-03-30', FALSE),
    ('2018-03-31', FALSE), ('2018-04-30', FALSE), ('2018-05-31', FALSE), ('2018-10-15', FALSE), ('2018-12-18', FALSE),
    ('2018-12-19', FALSE), ('2018-12-20', FALSE), ('2018-12-21', FALSE), ('2018-12-22', FALSE), ('2018-12-24', FALSE),
    ('2018-12-26', FALSE), ('2018-12-27', FALSE), ('2018-12-28', FALSE), ('2018-12-29', FALSE), ('2018-12-31', FALSE),
    ('2019-01-01', TRUE), ('2019-01-02', FALSE), ('2019-01-03', FALSE), ('2019-01-04', FALSE), ('2019-01-05', FALSE),
    ('2019-01-07', FALSE), ('2019-02-09', TRUE), ('2019-03-09', TRUE), ('2019-04-21', TRUE), ('2019-05-01', TRUE),
    ('2019-09-07', TRUE), ('2019-10-12', TRUE), ('2019-11-02', TRUE), ('2019-11-15', TRUE), ('2019-12-19', TRUE),
    ('2019-12-20', TRUE), ('2019-12-21', TRUE), ('2019-12-22', TRUE), ('2019-12-23', TRUE), ('2019-12-25', TRUE),
    ('2019-12-26', TRUE), ('2019-12-27', TRUE)
  ) x (dia, repete)),
materias as (
  SELECT * from (VALUES
    (593, 11091, 'AAC - Ética e Segurança Digital', 9, 120, '2019/01/30'::DATE, '2019/01/30'::DATE, 60, 120, 0),
    (593, 11085, 'AAC - Computação nas Nuvens', 12, 60, NULL, NULL, 60, 120, 60)
  ) x (turma_id, materia_id, materia, sequencia, previsto, dataini, datafim, tempoatividade, minutosaula, minutosrestantes)),
aulasporsemana as (
  select * from (values (593,1)) x (turma_id, quantidade)),
assistidas(turma_id, sequencia, conteudo_id, conteudo, prevista, aularealizada, tempoatividade, aulasNoDia, dia, minutoaula, minutosassistidos, cargaconteudo, cargarestante) as (
  SELECT materias.turma_id,
         materias.sequencia,
         materias.materia_id,
         materias.materia,
         coalesce(realizada.prevista, 1),
         realizada.aularealizada,
         materias.tempoatividade,
         (realizada.minutosassistidos / materias.tempoatividade),
         realizada.dia,
         materias.minutosaula,
         realizada.minutosassistidos,
         materias.previsto,
         coalesce(materias.previsto - (row_number() OVER AulaDaMateria * realizada.minutosassistidos), materias.previsto)
  FROM materias
  LEFT JOIN LATERAL (
    SELECT TRUE, tsrange(col_aula.data, (col_aula.data + (col_aula.tempo || ' minute')::INTERVAL)) dia, 0 prevista, (extract(EPOCH FROM col_aula.tempo) / 60) minutosassistidos
    FROM (VALUES (593, 11091, TIMESTAMP '2019-01-30 19:00:00', TIME '0

Re: Server goes to Recovery Mode when run a SQL

2019-02-13 Thread PegoraroF10
It is a SQL built FROM VALUES, so why do you need tables?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Server goes to Recovery Mode when run a SQL

2019-02-13 Thread PegoraroF10
Sorry, there was a missing part of the CTE;
that constant should be at the beginning of it.

with feriados as (
  SELECT dia, repete
  FROM (
VALUES ('2014-10-11'::DATE, FALSE), ('2014-10-13', FALSE),
('2014-10-14', FALSE), ('2014-10-15', FALSE), ('2014-10-16', FALSE),
   ('2014-10-17', FALSE), ('2014-12-19', FALSE), ('2014-12-20',
FALSE), ('2014-12-22', FALSE), ('2014-12-23', FALSE),
   ('2014-12-24', FALSE), ('2014-12-26', FALSE), ('2014-12-27',
FALSE), ('2014-12-29', FALSE), ('2014-12-30', FALSE),
   ('2014-12-31', FALSE), ('2015-01-02', FALSE), ('2015-01-03',
FALSE), ('2015-01-04', FALSE), ('2015-02-16', FALSE),
   ('2015-02-17', FALSE), ('2015-04-03', FALSE), ('2015-04-04',
FALSE), ('2015-06-04', FALSE), ('2015-12-18', FALSE),
   ('2015-12-19', FALSE), ('2015-12-21', FALSE), ('2015-12-22',
FALSE), ('2015-12-23', FALSE), ('2015-12-24', FALSE),
   ('2015-12-26', FALSE), ('2015-12-28', FALSE), ('2015-12-29',
FALSE), ('2015-12-30', FALSE), ('2015-12-31', FALSE),
   ('2016-01-02', FALSE), ('2016-01-04', FALSE), ('2016-01-05',
FALSE), ('2016-01-06', FALSE), ('2016-01-07', FALSE),
   ('2016-01-08', FALSE), ('2016-01-09', FALSE), ('2016-02-08',
FALSE), ('2016-03-09', FALSE), ('2016-03-25', FALSE),
   ('2016-03-26', FALSE), ('2016-05-26', FALSE), ('2016-12-24',
FALSE), ('2016-12-28', FALSE), ('2016-12-29', FALSE),
   ('2016-12-30', FALSE), ('2016-12-31', FALSE), ('2017-01-02',
FALSE), ('2017-01-03', FALSE), ('2017-01-04', FALSE),
   ('2017-01-05', FALSE), ('2017-01-06', FALSE), ('2017-01-07',
FALSE), ('2017-02-25', FALSE), ('2017-02-27', FALSE),
   ('2017-03-09', FALSE), ('2017-04-14', FALSE), ('2017-04-15',
FALSE), ('2017-06-15', FALSE), ('2017-09-30', FALSE),
   ('2017-12-18', FALSE), ('2017-12-19', FALSE), ('2017-12-20',
FALSE), ('2017-12-21', FALSE), ('2017-12-22', FALSE),
   ('2017-12-23', FALSE), ('2017-12-26', FALSE), ('2017-12-27',
FALSE), ('2017-12-28', FALSE), ('2017-12-29', FALSE),
   ('2017-12-30', FALSE), ('2018-01-02', FALSE), ('2018-01-03',
FALSE), ('2018-01-04', FALSE), ('2018-01-05', FALSE),
   ('2018-01-06', FALSE), ('2018-01-07', FALSE), ('2018-02-12',
FALSE), ('2018-02-13', FALSE), ('2018-03-30', FALSE),
   ('2018-03-31', FALSE), ('2018-04-30', FALSE), ('2018-05-31',
FALSE), ('2018-10-15', FALSE), ('2018-12-18', FALSE),
   ('2018-12-19', FALSE), ('2018-12-20', FALSE), ('2018-12-21',
FALSE), ('2018-12-22', FALSE), ('2018-12-24', FALSE),
   ('2018-12-26', FALSE), ('2018-12-27', FALSE), ('2018-12-28',
FALSE), ('2018-12-29', FALSE), ('2018-12-31', FALSE),
   ('2019-01-01', TRUE), ('2019-01-02', FALSE), ('2019-01-03',
FALSE), ('2019-01-04', FALSE), ('2019-01-05', FALSE),
   ('2019-01-07', FALSE), ('2019-02-09', TRUE), ('2019-03-09',
TRUE), ('2019-04-21', TRUE), ('2019-05-01', TRUE),
   ('2019-09-07', TRUE), ('2019-10-12', TRUE), ('2019-11-02', TRUE),
('2019-11-15', TRUE), ('2019-12-19', TRUE),
   ('2019-12-20', TRUE), ('2019-12-21', TRUE), ('2019-12-22', TRUE),
('2019-12-23', TRUE), ('2019-12-25', TRUE),
   ('2019-12-26', TRUE), ('2019-12-27', TRUE)
   ) x (dia, repete)
), materias as (
  SELECT * from (VALUES
(593, 11091, 'AAC - Ética e Segurança Digital', 9, 120,
'2019/01/30'::DATE, '2019/01/30'::DATE, 60, 120, 0),
(593, 11085, 'AAC - Computação nas Nuvens', 12, 60, NULL, NULL, 60, 120,
60)
  ) x (turma_id, materia_id, materia, sequencia, previsto, dataini, datafim,
tempoatividade, minutosaula, minutosrestantes)
)


, col_diasaula(turma_id, dia, tempoaula) as (
  select * from (values (593, 3, time '02:00')
  ) as x (turma_id, dia, tempoaula)
), aulasporsemana as (
   select * from (values (593,1)) x (turma_id, quantidade)
), assistidas (turma_id, sequencia, conteudo_id, conteudo, prevista,
aularealizada, tempoatividade, aulasNoDia, dia, minutoaula,
minutosassistidos, cargaconteudo, cargarestante) as (
  SELECT
materias.turma_id,
materias.sequencia,
materias.materia_id,
materias.materia,
coalesce(realizada.prevista, 1),
realizada.aularealizada,
materias.tempoatividade,
(realizada.minutosassistidos / materias.tempoatividade),
realizada.dia,
materias.minutosaula,
realizada.minutosassistidos,
materias.previsto,
coalesce(materias.previsto - (row_number() OVER AulaDaMateria *
realizada.minutosassistidos), materias.previsto)
  FROM materias
  LEFT JOIN LATERAL (
SELECT TRUE, tsrange(col_aula.data, (col_aula.data + ( col_aula.tempo ||
' minute')::INTERVAL)) dia, 0 prevista, (extract(EPOCH FROM col_aula.tempo)
/ 60) minutosassistidos
FROM (VALUES
 (593, 11091, TIMESTAMP '2019-01-30 19:00:00', TIME '02:00'),
 (593, 11091, '2019-02-06 19:00:00', '01:00')
) col_aula (turma_id, materia_id, data, tempo)
WHERE col_aula.materia_id = materias.materia_id AND col_aula.turma_id =
materias.turma_id
ORDER BY col_aula.data, se

Refresh Publication takes hours and doesn´t finish

2019-02-28 Thread PegoraroF10
We use logical replication from a PG version 10.6 to a 11.2; both are on Ubuntu
16.04. We have a hundred schemas with more or less a hundred tables each, so
the number of tables is about 10,000. All replication is OK, but when we try to
do a REFRESH SUBSCRIPTION because we added a new schema, it takes hours and
doesn't finish. Then, if I go to our master server and do a select * from
pg_publication_tables, it doesn't respond either. So, analysing the source of
the view pg_publication_tables ...
create view pg_publication_tables as
  SELECT p.pubname, n.nspname AS schemaname, c.relname AS tablename
  FROM pg_publication p,
       (pg_class c JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
  WHERE (c.oid IN (SELECT pg_get_publication_tables.relid
                   FROM pg_get_publication_tables((p.pubname)::text)
                        pg_get_publication_tables (relid)));
If we run both statements of that view separately:

SELECT string_agg(pg_get_publication_tables.relid::text, ',')
FROM pg_get_publication_tables(('MyPublication')::text)
     pg_get_publication_tables (relid);

then put all the OIDs retrieved into the IN of the view:

select * from pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.oid IN (
  -- OIDs list
);

it responds immediately.

So, the questions are: can we change this view to select faster? Will just
rewriting that view to a better select solve it? Is this view used by
REFRESH SUBSCRIPTION? We think yes, because neither refresh subscription
nor a select from the view responds, so ...
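To see where the time goes, one can compare the plan of the shipped view with
an equivalent join form directly (a sketch; it assumes the publication really
is named MyPublication, as in the statement above):

```sql
-- Shipped view: the IN (...) subquery is correlated with pg_publication,
-- so pg_get_publication_tables() output is probed for every pg_class row.
EXPLAIN ANALYZE
SELECT * FROM pg_publication_tables;

-- Join form: pg_get_publication_tables() is evaluated once per publication
-- and its relids are joined against pg_class / pg_namespace.
EXPLAIN ANALYZE
SELECT n.nspname, c.relname
FROM pg_get_publication_tables('MyPublication') t (relid)
JOIN pg_class c ON c.oid = t.relid
JOIN pg_namespace n ON n.oid = c.relnamespace;
```

If the second plan is orders of magnitude faster, the bottleneck is the view
body, not the function itself.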




PK and FK using Hash index

2019-03-22 Thread PegoraroF10
On versions 10 and 11 the hash index was redesigned and now seems to be better
than btree.
Its speed, size and memory consumption are better, so ...
Why is it not possible to have all PKs and FKs just using hash indexes? The only
things I need on a PK and FK are search and join, and a hash index handles both
very well.

I know a hash index doesn't support unique constraints, so I cannot create a
primary key on it.
But I think it would be great, wouldn't it?

Is there something wrong with my reasoning?
If not, is there a way to have a PK/FK structure using hash indexes?
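There is no direct support for hash-backed primary keys, but a common
workaround (sketched below with a hypothetical table) is to keep the btree
primary key for uniqueness and add a separate hash index for equality lookups
and joins:

```sql
-- Hypothetical table: the PK itself must stay btree (uniqueness needs it),
-- but equality searches and joins can use an extra hash index.
create table orders (
  order_id    bigint primary key,   -- btree, enforces uniqueness
  customer_id bigint not null
);

-- Hash indexes support only the equality operator (=),
-- which is all that a plain lookup or join condition needs.
create index orders_customer_hash on orders using hash (customer_id);
```

The planner will pick the hash index for `WHERE customer_id = ...` predicates;
the btree PK still backs uniqueness and any range scans.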






Re: Refresh Publication takes hours and doesn´t finish

2019-05-20 Thread PegoraroF10
I asked this some time ago but got no responses, so I ask you again.
pg_publication_tables is a view that is used by refresh publication, but as
we have 15,000 tables it takes hours and doesn't complete. If I change that
view I get an immediate result. The questions are: Can I change that view?
Is there any trouble in changing those system views?

Original View is ...
create view pg_catalog.pg_publication_tables as
  SELECT p.pubname, n.nspname AS schemaname, c.relname AS tablename
  FROM pg_publication p,
       (pg_class c JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
  WHERE (c.oid IN (SELECT pg_get_publication_tables.relid
                   FROM pg_get_publication_tables((p.pubname)::text)
                        pg_get_publication_tables(relid)));
This way it takes 45 minutes to respond.

I changed it to ...
create or replace view pg_catalog.pg_publication_tables as
  SELECT p.pubname, n.nspname AS schemaname, c.relname AS tablename
  FROM pg_publication p
  INNER JOIN pg_get_publication_tables(p.pubname) pt ON true
  INNER JOIN pg_class c ON pt.relid = c.oid
  INNER JOIN pg_namespace n ON (n.oid = c.relnamespace);
This one takes just one or two seconds.
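One way to check that the replacement actually took effect (a sketch; note
that replacing a pg_catalog view normally requires superuser privileges):

```sql
-- Print the definition the server currently has for the system view;
-- after a successful CREATE OR REPLACE VIEW the join-based body should
-- show up here, with no server restart (and certainly no initdb) needed.
SELECT pg_get_viewdef('pg_catalog.pg_publication_tables'::regclass, true);
```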







Re: Re: Refresh Publication takes hours and doesn´t finish

2019-05-21 Thread PegoraroF10
I cannot, because we created the publication FOR ALL TABLES.







Re: Refresh Publication takes hours and doesn´t finish

2019-05-21 Thread PegoraroF10
What exactly does "restart Postgres" mean? We tried just restarting the
service, but when we then tried to refresh the publication the old view was
still used, because it took 2 hours and gave us a timeout.

I found some people saying that I need to initdb, but does initdb mean
recreating my database entirely, or just reinstalling my Postgres server?







Trigger bug ?

2019-05-22 Thread PegoraroF10
We created an ordinary trigger which seemed not to be firing, or not to be
working properly. Then we put some raise notices in it and saw that it was
reporting NULL values. But why? It is an after-insert trigger on a primary
key column, which cannot be null.

create function MyFunction() returns trigger as $$
begin
  raise '%', new.MyPK;
end$$ language plpgsql;

create trigger MyTrigger after insert on MyTable execute procedure
MyFunction();

We had forgotten FOR EACH ROW/STATEMENT when we created our trigger.

In the docs { ROW | STATEMENT } is written between braces, which we read as
mandatory. So why did it accept our command to create that trigger?
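For reference, a sketch of the trigger as it was probably intended: without
FOR EACH ROW the trigger defaults to a statement-level trigger, and NEW is
null there. Marking it row-level makes NEW available (table and column names
as in the post above):

```sql
create or replace function MyFunction() returns trigger as $$
begin
  raise notice '%', new.MyPK;  -- NEW is populated only in row-level triggers
  return null;                 -- return value is ignored for AFTER triggers
end $$ language plpgsql;

create trigger MyTrigger
  after insert on MyTable
  for each row                 -- without this, the default is FOR EACH STATEMENT
  execute procedure MyFunction();
```

That also answers why the original command was accepted: the FOR EACH clause
is optional in CREATE TRIGGER, and omitting it silently gives a statement-level
trigger rather than an error.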






