wrong message when trying to create an already existing index

2018-03-10 Thread legrand legrand
Hello,
When trying to create an already existing index (in pg 9.5):

SQL> create index if not exists NEWINDEX on SCHEMA.TABLE(COL);
  > NOTICE:  relation "NEWINDEX" already exists, skipping

the message speaks about a relation (and not an index).

Would it be possible for this message to report the correct object type?
Regards
PAscal






Re: wrong message when trying to create an already existing index

2018-03-10 Thread legrand legrand
I thought those messages were using the relation's relkind:
  r = ordinary table, 
  i = index, 
  S = sequence, 
  t = TOAST table, 
  v = view, 
  m = materialized view, 
  c = composite type, 
  f = foreign table, 
  p = partitioned table
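
For reference (a quick sketch; the relation name is illustrative), relkind
can be read directly from pg_class:

select relname, relkind
from pg_class
where relname = 'newindex';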

Wouldn't that be easier for beginners to read?

Regards
PAscal






Re: wrong message when trying to create an already existing index

2018-03-10 Thread legrand legrand
OK, noted!
Thank you for the quick answers.

Regards
PAscal






Re: wrong message when trying to create an already existing index

2018-03-10 Thread legrand legrand
> regression=# create index mv1 on t1 (f1);
...
> ERROR:  materialized view "mv1" already exists

is in fact the one I prefer ;^)

I come from a DBMS world where tables and indexes do not share the same
namespace, and I have to change my mind!

Thank you, Tom, for pointing that out.

Regards
PAscal







pg_stat_statements: password in command is not obfuscated

2018-03-23 Thread legrand legrand
Hello,

It seems that passwords used in commands are not removed when caught by
pg_stat_statements
(they are not "normalized", being utility statements).

Example:

alter role tt with password '123';

select query from public.pg_stat_statements
where query like '%password%';

query
-----------------------------------
alter role tt with password '123';

Do you think it's a bug?

Regards
PAscal






Re: Postgres Foreign Data Wrapper and DB2 LUW

2018-03-27 Thread legrand legrand
Hello,
I'm not sure there is a dedicated FDW for DB2 LUW,
but you may try ODBC_FDW (and maybe JDBC_FDW),
as described here:
https://wiki.postgresql.org/wiki/Foreign_data_wrappers
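
As a rough sketch (the option names below are purely illustrative and vary
between FDW versions; check the README of the one you install):

CREATE EXTENSION odbc_fdw;

CREATE SERVER db2_server
  FOREIGN DATA WRAPPER odbc_fdw
  OPTIONS (dsn 'DB2LUW');          -- assumed ODBC DSN on the PG host

CREATE USER MAPPING FOR CURRENT_USER
  SERVER db2_server
  OPTIONS (odbc_UID 'db2user', odbc_PWD 'secret');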

Regards
PAscal






pg_stat_statements : how to catch non successfully finished statements ?

2018-04-26 Thread legrand legrand
Hello all,

I was wondering if there is a hook to collect unsuccessfully finished SQL
statements (timed out, cancelled, killed, or simply errored) in
pg_stat_statements?

Thanks in advance
Regards
PAscal






Re: pg_stat_statements : how to catch non successfully finished statements ?

2018-04-28 Thread legrand legrand
OK, I see ...

This hook should be used only for ERROR (not WARNING or NOTICE ...),
and the only really interesting piece of information is
  ErrorData->internalquery; /* text of internally-generated query */

This doesn't make it possible to rebuild the link to the queryId
(which is based on the parse tree, not available here).

So the only solution is to add queryId to ErrorData in this hook,
or to create a new hook fired on ERROR and containing queryId?

Regards
PAscal






Re: pg_stat_statements : how to catch non successfully finished statements ?

2018-04-29 Thread legrand legrand
OK, no need for a new hook.
I'll try to implement what you suggest here, but this clearly exceeds my
(poor) development skills.

As you also noticed, in addition to collecting these SQL statement counters
for the corresponding queryId, I would have been interested in its planId
... but that's another subject.

Regards
PAscal







Re: pg_stat_statements : how to catch non successfully finished statements ?

2018-05-06 Thread legrand legrand
Progress report on this subject:

1/ Some clarifications:

What is expected here is to update pgss counters for ALL the queries that
have been executed, taking into account queries finished in SUCCESS and
those finished with ERROR.

The main interest is to catch queries that are cancelled or interrupted
by a timeout after a long running period, which can be hours in BI
reporting environments.

Errors during parsing are not caught, because there is no hook available
before parsing, and because without a parse tree available, the queryId
cannot be calculated. This is not a problem yet, because the objective is
not to count errors.

There was an off-list remark saying that cumulating SUCCESS and ERROR
counters for the same query could be a problem for those monitoring AVG
indicators (which would become smaller than the SUCCESS-only ones).

One proposal is to add a boolean flag (success) to the key of pgss:

dbid | userid | queryid | success | calls | total_time | ...
-----+--------+---------+---------+-------+------------+----
   1 |      1 |     123 | t       |   100 |    100 000 |
   1 |      1 |     123 | f       |    10 |  1 000 000 |

2/ Modifying pgss_ExecutorRun (as suggested by Tom Lane) with:


PG_CATCH();
{
    /* Added part to get counters on errors */
    EState *estate;

    if (queryDesc->totaltime && pgss_enabled())
    {
        estate = queryDesc->estate;
        InstrStopNode(queryDesc->totaltime, estate->es_processed);
        InstrEndLoop(queryDesc->totaltime);

        pgss_store(queryDesc->sourceText,
                   queryDesc->plannedstmt->queryId,
                   queryDesc->plannedstmt->stmt_location,
                   queryDesc->plannedstmt->stmt_len,
                   queryDesc->totaltime->total * 1000.0, /* convert to msec */
                   queryDesc->estate->es_processed,
                   &queryDesc->totaltime->bufusage,
                   NULL);
    }

    nested_level--;
    PG_RE_THROW();
}
PG_END_TRY();


This permits catching simple queries in pgss (to be enhanced for utility
statements, pl/pgsql, parallel queries, ...).
Would such code have a chance of being validated?

Feedback is welcome.
Regards
PAscal






Re: pg_stat_statements : how to catch non successfully finished statements ?

2018-05-09 Thread legrand legrand
Hello,

Here is a patch that:
- adds a new GUC: pg_stat_statements.track_errors, boolean (defaults to true),
- captures DML, DDL and PL/pgSQL commands that finish in error into pgss.

There is always a risk that new code used in PG_CATCH (mainly pgss_store)
raises an error itself.
I'm not able to tell when that could occur or what the impact would be ...

See the attached files:
pgss_with_errors.patch
pgss_with_errors.txt

Regards
PAscal






Re: Help in Postgresql

2018-05-23 Thread legrand legrand
Hello Moohanad,

Did you check pg_stat_statements?
https://www.postgresql.org/docs/10/static/pgstatstatements.html
It is based on postgres hooks and will give you: db, user, query id, and
query text.
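
For example (a minimal sketch; pg_stat_statements must first be listed in
shared_preload_libraries):

CREATE EXTENSION pg_stat_statements;

-- top 10 statements by cumulated execution time
SELECT userid, dbid, queryid, calls, total_time, query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;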

There are many developments trying to add start/end time, planId and plan
text, as described in
http://www.postgresql-archive.org/Poc-pg-stat-statements-with-planid-td6014027.html

Remark: this extension doesn't track statements finished in error or by
timeout.
 
Regards
PAscal






Re: Return select statement with sql case statement

2018-07-04 Thread legrand legrand
Hello,

Sorry, your description is not clear ...
Why do you use a GROUP BY on product without an aggregation function (min,
max, sum, ...)?

Where is numberOfPremiumDays defined?


Maybe using UNION ALL can solve your problem:

select numberOfPremiumDays,
       product_id,
       premium_price,
       period_price
from product, PremiumDays
where occupation_type_id = 1
  and numberOfPremiumDays = date_part('day',
      ('2018-11-05'::timestamp) - ('2018-11-01'::timestamp))
UNION ALL
select numberOfPremiumDays,
       product_id,
       classic_price,
       period_price
from product1, PremiumDays
where occupation_type_id = 1
  and numberOfPremiumDays != date_part('day',
      ('2018-11-05'::timestamp) - ('2018-11-01'::timestamp))

Regards
PAscal






Re: pg_stat_statements in the query column I am not getting the exact query

2018-08-19 Thread legrand legrand
Hi,
This is the expected behavior: SQL statements with different parameter
values are normalized and share one entry in pg_stat_statements ...
There is no parameter to change this. If you really want the query
parameters, you need to have a look at statement logging, but be careful:
this can generate a huge volume of log files depending on your workload ...

See log_statement at
https://www.postgresql.org/docs/9.5/static/runtime-config-logging.html
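
For example (a sketch; superuser required, and mind the log volume):

-- log every statement, with its literal parameter values
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();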

Regards
PAscal






[proposal] pg_stat_statements: extension timing instrumentation

2018-09-29 Thread legrand legrand


Haven't you ever been surprised by the difference between the duration of a
query seen in psql (with \timing) and the duration found for the same query
in pgss?

It seems that writing the query text to file the first time takes some
time, and that a high execution rate on the same query can generate lock
waits, as can pgss being under pressure with too many distinct queries to
store compared to the maximum number of statements allowed.

All those "waits" are co-located in the pgss_store function. What about
adding a pgss_time counter in pgss to measure the duration of pgss_store
itself (a GUC could be added to enable/disable this)?

Knowing that there are also plans to add a planning counter in PG12, I
would suggest adding:

plan_time,
exec_time,
pgss_time

the sum of those 3 counters being total_time.

This could help in investigating write contention on the pg_stat_statements
query file, helping to decide whether the maximum number of queries should
be increased, ...

A prototype is available on demand.

Regards
PAscal






Re: survey: pg_stat_statements total_time and entry deallocation

2018-10-05 Thread legrand legrand
Hello,

What about adding a log message for each entry_dealloc() execution?
It could be useful when deciding to increase pg_stat_statements.max.

Is there any rule regarding an acceptable max value? I'm playing in test
with a value of 20 000 without any problem; could it be extended to 100 000?
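
For example (a sketch; this GUC can only take effect at server start):

ALTER SYSTEM SET pg_stat_statements.max = 20000;
-- then restart PostgreSQL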

In a system with pressure on the number of pgss entries and regular
entry_dealloc() executions, low-frequency entries are evicted first, and
this should still be the case for new small queries with usage based on
total_time.

Maybe there is a third way, which would be to evict queries based on the
"oldest modification time" ...
This would guarantee that the latest queries are kept long enough to be
collected by aggregation tools.

Regards
PAscal






no queryId in post_parse_analyze hook when row is locked

2018-10-11 Thread legrand legrand
Hello,

While playing with the pgsentinel extension,
it seems that the post_parse_analyze hook doesn't return query->queryId
(initialized by a similar hook in pg_stat_statements)
when the current row is locked.

Is that possible?

More details found at
https://github.com/pgsentinel/pgsentinel/issues/19

Thanks in advance
Regards
PAscal






Re: [proposal] pg_stat_statements: extension timing instrumentation

2018-10-21 Thread legrand legrand
Another way is to use "waits" reporting ...
something like:

pgss_store(...)
{
    pgstat_report_wait_start(PG_WAIT_EXTENSION);
    ...
    pgstat_report_wait_end();
}

This gives waits of type "Extension", name "Extension", when time is spent
in this part of pgss.

This can help those working with waits sampling
as found in pg_stat_activity, pg_wait_sampling or pgsentinel.
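
For example (a sampling sketch), such waits would show up in
pg_stat_activity like this:

SELECT pid, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Extension';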

Note: some of the contention is already reported as "LWLock" /
"pg_stat_statements".

Regards
PAscal







Re: no queryId in post_parse_analyze hook when row is locked

2018-10-21 Thread legrand legrand
The problem in pgsentinel has been identified and should be fixed soon.

It has nothing to do with the post_parse_analyze hook.

Sorry for the noise.

Regards
PAscal






Re: Partitioning an existing table - pg10.6

2019-07-05 Thread legrand legrand
Hello,

Trying to do what I suggested, I understood that it doesn't work ;o(

Logical replication can only work between two distinct databases,
and it seems that the replicated table name must be the same as its source ...

Regards
PAscal







Re: Questions about Partitioned Tables and Indexes

2019-10-02 Thread legrand legrand
Hi,

What you proposed seems correct to me.
I don't know how to list the indexes belonging to a partitioned index.

You can check if your indexes are valid using:

select i.relname as indexname, i.relkind, t.relname as tablename,
       t.relkind, idx.indisvalid
from pg_class i
join pg_index idx on idx.indexrelid = i.oid
join pg_class t on t.oid = idx.indrelid
where t.relname like 'test%';


To be really sure, try EXPLAIN with statements that should use those
indexes, like:

postgres=# explain select * from test_part where date_key > 0;
                                      QUERY PLAN
--------------------------------------------------------------------------------------
 Append  (cost=6.53..43.80 rows=614 width=60)
   ->  Bitmap Heap Scan on test_201908  (cost=6.53..20.37 rows=307 width=60)
         Recheck Cond: (date_key > '0'::numeric)
         ->  Bitmap Index Scan on test_201908_pkey  (cost=0.00..6.45 rows=307 width=0)
               Index Cond: (date_key > '0'::numeric)
   ->  Bitmap Heap Scan on test_201909  (cost=6.53..20.37 rows=307 width=60)
         Recheck Cond: (date_key > '0'::numeric)
         ->  Bitmap Index Scan on test_201909_pkey  (cost=0.00..6.45 rows=307 width=0)
               Index Cond: (date_key > '0'::numeric)
(9 rows)

or

postgres=# explain select * from test_part where metric > 0;
                                      QUERY PLAN
--------------------------------------------------------------------------------------
 Append  (cost=6.53..43.80 rows=614 width=60)
   ->  Bitmap Heap Scan on test_201908  (cost=6.53..20.37 rows=307 width=60)
         Recheck Cond: (metric > '0'::numeric)
         ->  Bitmap Index Scan on test_idx1_201908  (cost=0.00..6.45 rows=307 width=0)
               Index Cond: (metric > '0'::numeric)
   ->  Bitmap Heap Scan on test_201909  (cost=6.53..20.37 rows=307 width=60)
         Recheck Cond: (metric > '0'::numeric)
         ->  Bitmap Index Scan on test_201909_metric_idx  (cost=0.00..6.45 rows=307 width=0)
               Index Cond: (metric > '0'::numeric)
(9 rows)

Regards
PAscal







Re: pg_stat_statements : how to catch non successfully finished statements ?

2020-04-30 Thread legrand legrand
Tom Lane-2 wrote
> legrand legrand writes:
>> So the only solution is to add queryId to ErrorData in this hook
>> or to create a new hook fired on ERROR and containing queryId?
> 
> I see no particular need for a new hook.  What's needed here is for
> pgss_ExecutorRun (and maybe some other existing functions in
> pg_stat_statements) to go ahead and record the statement when they catch
> an error thrown out of standard_ExecutorRun, rather than just updating
> the module's nested_level variable and re-throwing.
> 
> The hard part here is that you have to be really careful what you do in
> a PG_CATCH block, because the only thing you know for sure about the
> backend's state is that it's not good.  Catalog fetches are right out,
> and anything that might itself throw an error had best be avoided as
> well.  (Which, among other things, means that examining executor state
> would be a bad idea, and I'm not even sure you'd want to traverse the plan
> tree.)
> 
> I'm not convinced that it's practical for pg_stat_statements to make a new
> shared hashtable entry under those constraints.  But figuring out how to
> minimize the risks around that is the stumbling block, not lack of a hook.
> 
>   regards, tom lane

As far as I have tested this with *cancelled* queries (Cancel,
pg_cancel_backend(), statement_timeout, ...), I haven't found any problem.

Would limiting the PG_CATCH block to those *cancelled* queries,
and *no other error*, be an alternate solution?

If yes, is there a way to identify the reason for the error when entering
the PG_CATCH block (and can you point me to an example)?
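
A minimal sketch of what I imagine (assuming the usual CopyErrorData()
pattern is safe at this point; please correct me if not):

MemoryContext oldcontext = CurrentMemoryContext;   /* saved before PG_TRY */

PG_TRY();
{
    /* ... run the statement ... */
}
PG_CATCH();
{
    ErrorData *edata;

    /* CopyErrorData() must not run in ErrorContext, so switch back first */
    MemoryContextSwitchTo(oldcontext);
    edata = CopyErrorData();

    if (edata->sqlerrcode == ERRCODE_QUERY_CANCELED)
    {
        /* cancelled (Cancel, pg_cancel_backend(), statement_timeout, ...):
         * update the pgss counters here */
    }

    FreeErrorData(edata);
    PG_RE_THROW();
}
PG_END_TRY();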

Thanks in advance.

Regards
PAscal







Re: pg_stat_statements : how to catch non successfully finished statements ?

2020-05-01 Thread legrand legrand
Tom Lane-2 wrote
> legrand legrand writes:
>> Tom Lane-2 wrote
>>> The hard part here is that you have to be really careful what you do in
>>> a PG_CATCH block, because the only thing you know for sure about the
>>> backend's state is that it's not good.  Catalog fetches are right out,
>>> and anything that might itself throw an error had best be avoided as
>>> well.  (Which, among other things, means that examining executor state
>>> would be a bad idea, and I'm not even sure you'd want to traverse the
>>> plan
>>> tree.)
>>> I'm not convinced that it's practical for pg_stat_statements to make a
>>> new
>>> shared hashtable entry under those constraints.  But figuring out how to
>>> minimize the risks around that is the stumbling block, not lack of a
>>> hook.
> 
>> As far as I have tested this with *cancelled* queries (Cancel,
>> pg_cancel_backend(), statement_timeout, ...), I haven't found any
>> problem.
>> Would limiting the PG_CATCH block to those *cancelled* queries,
>> and *no other error*, be an alternate solution?
> 
> I do not see that that would make one iota of difference to the risk that
> the executor state tree is inconsistent at the instant the error is
> thrown.  You can't test your way to the conclusion that it's safe, either
> (much less that it'd remain safe); your test cases surely haven't hit
> every CHECK_FOR_INTERRUPTS call in the backend.
> 
>   regards, tom lane


New try:

Considering that the executor state tree is limited to QueryDesc->estate,
that would mean that rows processed cannot be trusted, but that
queryid, buffers and *duration* (the most important one)
can still be used?

Knowing that shared hashtable entries are now (in PG13) created at
planning time, there is no need to create a new one on execution error:
just update the counters (the current ones, or new columns like "errors",
"total_error_time", ... added to the pg_stat_statements view).

Is that better?
 
Regards
PAscal







Re: Migration from Oracle to Postgres

2020-05-05 Thread legrand legrand
Jean Claude wrote
> Hi guys,
> 
> I hope you're doing well.
> 
> Simple question
> What are the best methods to migrate a huge table from Oracle to Postgres
> ?
> 
> Cheers

Hello,

Depending on the datatypes, and if the databases are co-located on the same
server, you may try oracle_fdw (except for LOBs, where it is slow),
or a sqlplus extract from Oracle followed by a load with the COPY command
in PostgreSQL.
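
A rough sketch of the oracle_fdw route (all names are illustrative, and the
column list must match the Oracle table):

CREATE EXTENSION oracle_fdw;

CREATE SERVER ora_src FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//orahost:1521/ORCL');

CREATE USER MAPPING FOR CURRENT_USER SERVER ora_src
  OPTIONS (user 'scott', password 'tiger');

CREATE FOREIGN TABLE big_table_src (
  id      integer,
  payload text
) SERVER ora_src OPTIONS (schema 'SCOTT', table 'BIG_TABLE');

-- then fill the local table
INSERT INTO big_table SELECT * FROM big_table_src;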

Regards
PAscal







Re: pg_stat_sql_plans ALPHA released

2020-06-01 Thread legrand legrand
A new version of pg_stat_sql_plans (version 0.2) has been released.

Main changes are:
- planid is built at planning time, making it reusable by cached plans,
- the current queryid and planid are exposed per pid in pg_stat_activity,
- planning counters are now calculated per (queryid, planid),
- it includes a specific 'minimal' explain plan format, for performance,
- ...

It needs PG13 (as it uses the new planner hook's query_string) and is
still available at https://github.com/legrandlegrand/pg_stat_sql_plans

Regards
PAscal







Re: SV: Using Postgres jdbc driver with Oracle SQL Developer

2020-07-06 Thread legrand legrand
Try replacing hostname with hostname/Database? in the connection settings.
Don't forget the trailing '?'.

Regards
PAscal








Re: simple reporting tools for postgres in aws

2020-12-07 Thread legrand legrand
Hello,

If you know Oracle, maybe you also know SQL Developer's reporting features;
as SQL Developer can connect to Postgres,
it can be used for free (that is what Oracle claims, but it should be
double-checked):

https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldev/r40/Chart/12cChart.html

Regards
PAscal









Re: Suggestion: provide a "TRUNCATE PARTITION" command

2021-01-08 Thread legrand legrand
Hello,

Maybe a naïve plpgsql function, as proposed in
https://www.postgresql-archive.org/Partitionning-support-for-Truncate-Table-WHERE-td5933642.html
may be an answer.
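
For example (a naïve sketch, not the patch from that thread; table and key
names are hypothetical, and the tableoid lookup only works if the partition
holds at least one row):

do $$
declare
    v_child regclass;
begin
    -- resolve the partition storing a given key value via tableoid
    select tableoid::regclass into v_child
    from sales                      -- partitioned table
    where country_id = 42           -- partition key value
    limit 1;

    if v_child is not null then
        execute format('TRUNCATE TABLE %s', v_child);
    end if;
end $$;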

Regards
PAscal







Re: Suggestion: provide a "TRUNCATE PARTITION" command

2021-01-08 Thread legrand legrand
What is interesting here with TRUNCATE ... WHERE (and with the proposed
plpgsql) is to offer the end user a way to perform a transparent truncate
or delete that is totally independent of the partitioning scheme (if any,
or even if it has changed).







Re: How to get partition info for a partition table?

2018-10-27 Thread legrand legrand
See
https://www.postgresql.org/docs/9.5/static/app-psql.html
for psql options and commands:

psql -d ... -U ...
\d+ measurement_year_month

...
Partition key: RANGE (date_part('year'::text, logdate), date_part('month'::text, logdate))
Number of partitions: 0

If you want to know how postgresql retrieves that information,
try the -E option (ECHO_HIDDEN):

psql -d ... -U ... -E
\d+ measurement_year_month
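
Alternatively (a sketch reusing the example table name above), the same
information can be read directly from the catalogs:

-- partition key definition
SELECT pg_get_partkeydef('measurement_year_month'::regclass);

-- list of partitions
SELECT inhrelid::regclass AS partition
FROM pg_inherits
WHERE inhparent = 'measurement_year_month'::regclass;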






Re: editable spreadsheet style interface

2018-10-31 Thread legrand legrand
Tim Clarke-2 wrote
> On 30/10/2018 20:32, Martin Mueller wrote:
>>
>> I have used Aqua Data Studio for several years. Jetbrains recently 
>> released a similar product. Academic licensing is affordable (~ $200 a 
>> year) and very cheap if considered in terms of the time it saves you.
>>
>> *From:* David Gauthier
>> *Date:* Tuesday, October 30, 2018 at 2:06 PM
>> *To:* pgsql-general
>> *Subject: *editable spreadsheet style interface
>>
>> I think I know the answer to this one but I'll ask anyway...
>>
>> Is there a spreadsheet style interface to a PG DB where users can...
>>
>> - lock records
>>
>> - edit records
>>
>> - submit changes (transaction)
>>
>> Is there any after-market tool for PG that does something like this ?
>>
> 
> Have a look here https://www.postgresql.org/download/product-categories/ 
> under "Administration/Development". I'm using pgAdmin right now.
> 
> 
> Tim Clarke

SQLeo http://sqleo.sourceforge.net/index.html is in that list (free), and
it can:
- insert, update and delete records,
- choose the update/delete key (if there is no pk),
- view the changes in sql format,
- apply them to the db,
- commit or rollback (if not in auto-commit mode)

(without locking)

see
http://sqleo.sourceforge.net/guide/English_SQLeo_AdvancedHelp.htm#_Toc467095120

Regards
PAscal






Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-14 Thread legrand legrand
Ravi Krishna-9 wrote
> Well your information needs some update.
> 
> - On AIX, IBM had no issues selling Oracle, a rival to DB2.
> - IBM Global Services, a consulting unit was the single biggest sales
> force for Oracle Installations outside
>   Oracle. In other words, they ended up using Oracle for projects done by
> IGM-GS more than DB2.
>   This was true some 10 yrs ago. Oracle use to point it out , with glee.
> - Many years ago I was informed that Enterprise DB was funded by IBM. 
> Don't know how much it is true.

There is (was?) an IBM DB2 Oracle compatibility feature, and IBM was
interested in EDB's Oracle compatibility one:
https://www.enterprisedb.com/fr/news/enterprisedb-and-ibm%C2%AE-collaborate-integrate-technology-new-version-db2

But IBM didn't buy EDB, and that will make things even more difficult for
Oracle ;o)

Regards
PAscal  






Re: Syndicating PostgreSQL mailing list to Discourse

2018-11-21 Thread legrand legrand
Hello,

What you are proposing with https://rubytalk.org/ seems very interesting.

It offers a quick view on mobile of the "latest posts for all sites" in one
click, and many other grouping/filtering options (which the PostgreSQL
website misses), for users who don't use a fat-client mailing list system
(like me).

This seems even better than nabble www.postgresql-archive.org, which is
mobile friendly, and even *even* better as you don't include ads.

As you can see, Pg community members are very frightened by the option that
would permit "seamlessly interacting with the mailing list" (there are many
demands asking nabble to remove it:
https://www.postgresql-archive.org/postgresql-archive-org-td6035059i20.html#a6059185).

Maybe you will get a better answer if you propose a purely read-only
mailing list system:
- without any possibility to reply from your site,
- promising NO ads for ever (and explaining how you cover the running
costs),
- ...

Are there any other mobile users here to vote for this solution (maybe
adding other restrictions)?

Regards
PAscal






Re: pg_stat_statements: can we extend the object names to the qualified names?

2018-11-27 Thread legrand legrand
A part of the answer would be to store the result of
explain (verbose on) select count(*) from t1;
in pg_stat_statements for the corresponding query...

VERBOSE ON gives the "qualified names":

                          QUERY PLAN
---------------------------------------------------------------
 Aggregate  (cost=19.38..19.39 rows=1 width=8)
   Output: count(*)
   ->  Seq Scan on s1.t1  (cost=0.00..17.50 rows=750 width=0)
         Output: a1, a2, a3, a4, a5, a6


The extension pg_store_plans
https://github.com/ossc-db/pg_store_plans
can do it.

The auto_explain extension can also help.

There are also some attempts to extend pg_stat_statements
with plans; see
https://www.postgresql-archive.org/FEATURE-PATCH-pg-stat-statements-with-plans-td5940964.html

Regards
PAscal









Re: Change from 9.6 to 11?

2018-12-20 Thread legrand legrand
Hello, this looks as if some data were missing in a joined table ...

Could you compare the results of
EXPLAIN ANALYZE
for that statement
between both databases?

And maybe share them?

Regards
PAscal






Re: (Again) Column Store on PostGreSQL

2019-01-04 Thread legrand legrand
Hi,

VOPS is one of the prototypes being worked on;
see

https://www.postgresql.org/message-id/4fb855c3-22b9-444f-21bf-114fa23cc...@postgrespro.ru

https://github.com/postgrespro/vops

Regards
PAscal






Re: Adding LEFT JOIN to a query has increased execution time 10 times

2019-01-06 Thread legrand legrand
Alexander Farber wrote
> Good evening, thank you for the useful hints!
> 
> With the further help of the IRC folks the query has been optimized (7-10
> seconds -> 0.3 second) by adding the following indices:
> 
> CREATE INDEX ON words_games (player1, COALESCE(finished, 'INFINITY'));
> CREATE INDEX ON words_games (player2, COALESCE(finished, 'INFINITY'));
> CREATE INDEX ON words_moves (gid, played DESC);
> CREATE INDEX ON words_social (uid, stamp DESC);
> CREATE INDEX ON words_geoip USING SPGIST (block);
> 
> and switching to LEFT JOIN LATERAL for finding the most recent records in
> words_moves and words_social tables:
> 
> [...]
> 
>  Planning time: 0.587 ms
>  Execution time: 0.367 ms
> (36 rows)
> 
> I was told that it still could be improved (by rearranging WHERE clauses?)
> 
> Regards
> Alex

Hi Alexander,

It seems that you have done a very nice tuning exercise with this query,
which now finishes in less than 1 ms!!!

And I have learned about the LEFT JOIN LATERAL syntax too!

As you didn't speak about DML activity, may I suggest you take some time to
monitor the application before continuing to optimize this query?

Take time to check that:
- the result is ok,
- performance is stable,
- there is no regression on other queries,
- inserts, updates, deletes and copy are still fast,
- the sizes of the added objects are coherent and stable,
- query complexity stays manageable,
- there is no other application part to optimize,
- ...

Regards
PAscal







Re: oracle_fwd - is it safe or not?

2019-01-31 Thread legrand legrand
Hi,

We have used it to "archive" 2 Oracle databases (8i and 9i)
to pg 9.5 on windows (for a target of more than 250 GB).

We also use it to monitor our Oracle 11g databases,
storing some performance / capacity-planning data
(like ASH, AWR, ...) in Postgres.

In all cases, developer support was premium.

Regards
PAscal







Re: oracle_fwd - is it safe or not?

2019-01-31 Thread legrand legrand
lup wrote
> On 1/31/19 12:48 PM, legrand legrand wrote:
>>
>> In all cases, developer support was premium.
>>
> Can you please expand on "support was premium".  I'm not sure if that 
> was the level of support purchased, or perhaps an indication that 
> support was heavily used. Or have I missed it entirely?

Sorry for not being clear:
we had many questions (and one bug);
all answers (and the one fix)
were always fast and of high quality.

Regards
PAscal 






pg_stat_statements doesn't track commit from pl/pgsql blocks

2019-02-17 Thread legrand legrand
Hello,

It seems that pgss doesn't track COMMIT (nor ROLLBACK) commands issued from
pl/pgsql blocks.
Using psql on version 11.1:

select pg_stat_statements_reset();
do $$ begin commit; end $$;
select calls,query from pg_stat_statements;

...

I don't know how difficult it would be to fix,
but this could help in troubleshooting.

Regards
PAscal






RE: pg_stat_statements doesn't track commit from pl/pgsql blocks

2019-02-22 Thread legrand legrand

forgot to cc pgsql-general list


From: legrand legrand
Sent: Friday, February 22, 2019 20:26
To: Bruce Momjian
Subject: RE: pg_stat_statements doesn't track commit from pl/pgsql blocks

Hello Bruce,

Thank you for taking the time to answer.
Yes, I was expecting something like

SELECT calls, query FROM pg_stat_statements;
 calls |               query
-------+-----------------------------------
     1 | select pg_stat_statements_reset()
     1 | do $$ begin commit; end $$
     1 | commit

as I didn't find any other place to track this 'commit' information.

nb: I don't "want" anything, this is just an open question,
and I'm perfectly able to hear that it's not expected or not easy to
implement.

Regards
PAscal




Re: Display View Columns and Their Source Tables and Columns

2019-04-21 Thread legrand legrand
Hi,

I don't know if it is possible ...
The only way I found seems to be to use pg_depend and pg_rewrite,
as described here:
https://pgdba.org/post/2018/04/dependency_ladder/

Regards
PAscal







Re: Partitioning an existing table - pg10.6

2019-07-03 Thread legrand legrand
Hello,

I didn't test it myself, but maybe using logical replication could help ...
See https://www.postgresql.org/docs/10/logical-replication.html

Operations:
- create the partitioned table,
- feed it using replication,
- when in sync: stop the app, stop replication, rename the tables,
- maybe you can replicate from the partitioned table back to the
non-partitioned one as a rollback plan,
- ...

To be tested carefully.
Regards
PAscal








declarative partition by list, "other values" syntax

2017-11-20 Thread legrand legrand
Hello,

I was trying to create a table partitioned by list,
with one special partition for the "other values".

For example:

create table wiki_data_part(
   category varchar(20),
   tim_id bigint,
   pag_id bigint,
   requests int,
   size bigint
)
PARTITION BY LIST (category);

CREATE TABLE wiki_data_part_a PARTITION OF wiki_data_part
FOR VALUES IN ('ang.q','ace.mw', ...);

CREATE TABLE wiki_data_part_b PARTITION OF wiki_data_part
FOR VALUES IN ('bs.s','bh.mw','bar','br.d', ...);

How could I create a partition containing the other values (like MINVALUE,
MAXVALUE for range partitions)?

CREATE TABLE wiki_data_part_others PARTITION OF wiki_data_part
FOR VALUES IN (others);

I could not find it in the doc:
https://www.postgresql.org/docs/10/static/sql-createtable.html

Thanks in advance
Regards
PAscal







One Partition by list is always chosen by planner

2017-11-20 Thread legrand legrand
Hello,

After creating a table wiki_data_part with
partition by list (category);

and creating partitions like
CREATE TABLE wiki_data_part_a PARTITION OF wiki_data_part
FOR VALUES IN ('ang.q',...,'arc');
CREATE TABLE wiki_data_part_b PARTITION OF wiki_data_part
FOR VALUES IN ('bs.s',...,'bg.n');

copy wiki_data_part from ...;

analyze wiki_data_part;

explain select * from wiki_data_part where category='en';

| Append  (cost=0.00..21595.75 rows=4 width=102)
|   ->  Seq Scan on wiki_data_part_e  (cost=0.00..21578.00 rows=1 width=102)
|         Filter: ((category)::text = 'en'::text)
|   ->  Seq Scan on wiki_data_part_s  (cost=0.00..17.75 rows=3 width=102)
|         Filter: ((category)::text = 'en'::text)

The partition wiki_data_part_s (which has more than 100 values in its list)
is always scanned, even when the WHERE predicates are not in its values
list ...

The problem occurs on
PostgreSQL 10.0, compiled by Visual C++ build 1800, 64-bit
even without data loaded.

If this is a problem with the maximum number of values, maybe it could be
added to the doc?


See attached: wiki_data_wrong_part_s_chosen.sql






Re: One Partition by list is always chosen by planner

2017-11-21 Thread legrand legrand
Yes, it is:

show constraint_exclusion;
 partition

And if I explain the same query with another filter:

explain select * from wiki_data_part where category='fr';

| Append  (cost=0.00..14010.76 rows=291609 width=48)
|   ->  Seq Scan on wiki_data_part_f  (cost=0.00..9975.04 rows=291339 width=48)
|         Filter: ((category)::text = 'fr'::text)
|   ->  Seq Scan on wiki_data_part_s  (cost=0.00..4035.72 rows=270 width=50)
|         Filter: ((category)::text = 'fr'::text)

wiki_data_part_s is always chosen in the plan.






Re: One Partition by list is always chosen by planner

2017-11-22 Thread legrand legrand
Hello,

Thank you for this answer.
I just wanted to add a little information.

This table has in fact around 20 partitions.

explain select * from wiki_data_part where category='fr'
returns only 2 partitions (meaning that constraint_exclusion works).

The partition that is always scanned has more than 100 distinct values in
its partition list.

After splitting this partition into 2 partitions, the problem is gone ...

For me this looks as if the planner is not able to check partitions with
too many values.
There is no error in the query result, just a limited impact on
performance.

Regards
PAscal







Re: declarative partition by list, "other values" syntax

2017-11-26 Thread legrand legrand
Found it in pg 11devel:

CREATE TABLE wiki_data_part_a PARTITION OF wiki_data_part DEFAULT;







Re: seq vs index scan in join query

2017-11-29 Thread legrand legrand
Hi,

Could you give us the partition definitions (range values) and the index
definitions for the result table?

Regards
PAscal






Partition pruning / agg push down for star schema in pg v11

2017-11-29 Thread legrand legrand
Hello,

Working on Oracle migration POCs, I'm very interested in the v11
declarative partitioning optimizations.

I have a typical star schema, with dimension tables "product", "calendar"
and "country", and a fact table "sales".
This fact table is partitioned by time (range by month) and country (list).

Will a query like:

select product.name, calendar.month, sum(sales.net_price)
from sales
 inner join product  on (product.id  = sales.product_id)
 inner join country  on (country.id  = sales.country_id)
 inner join calendar on (calendar.id = sales.calendar_id)
where
 country.name = 'HERE'
 and calendar.year = '2017'
group by product.name, calendar.month

be able to identify the needed partitions?

nb: the query has predicates on the dimension tables, not on the columns
used for fact table partitioning:
- country.name vs sales.country_id,
- calendar.year vs sales.calendar_id.


Second question: will some aggregation be pushed down to the fact table?

Something like:

select product.name, calendar.month, agg.sum_net
from
  (select product_id, calendar_id, sum(net_price) as sum_net
   from sales
     inner join country  on (country.id  = sales.country_id)
     inner join calendar on (calendar.id = sales.calendar_id)
   where
     country.name = 'HERE'
     and calendar.year = '2017'
   group by product_id, calendar_id) agg
 inner join product  on (product.id  = agg.product_id)
 inner join calendar on (calendar.id = agg.calendar_id)
group by product.name, calendar.month

Thanks in advance (commitfest or patch references are welcome).

Regards
PAscal  








Re: PostgreSQL and Data Warehouse optimizations

2017-12-01 Thread legrand legrand
Hi,

Parallel queries are now available, and development is very active on
declarative partitioning (which arrived with pg 10);
many new features are currently being developed for pg 11 (for example
hash partitioning, partition-wise join/aggregate ... mostly optimizations
of partition usage);
see https://commitfest.postgresql.org/16/

There is also much work on sharding (using FDWs with remote partitions)
and POCs on compression, vectorization ...

I'm like you, very interested in data warehouse / datamart / OLAP queries
...
I've asked about
http://www.postgresql-archive.org/Partition-pruning-agg-push-down-for-star-schema-in-pg-v11-td5994678.html
which seems to me a kind of prerequisite before trying to migrate ...

Sure, pg 11 will be great for us ;o)

Regards
PAscal






Re: Partition pruning / agg push down for star schema in pg v11

2017-12-03 Thread legrand legrand
Adding partitioning on the dimension tables, with the same keys as those
used in the fact table, gives any star schema a good chance to use
partition-wise join / aggregate plans.

I will test it soon.
Regards
PAscal






Re: Partition pruning / agg push down for star schema in pg v11

2017-12-05 Thread legrand legrand
Partition pruning doesn't work in this case,
as described at
http://www.postgresql-archive.org/Partition-pruning-for-Star-Schema-td5995163.html#a5995168

Partition-wise join works for ONLY ONE dimension table (which is better
than nothing).






Re: Does the delete+insert is fater and less cpu consumer than update?

2017-12-14 Thread legrand legrand
If you use partitioned tables
and just want to reload the data of some partitions,

you can truncate those partitions directly,
or develop a procedure like the one described here:

https://www.postgresql.org/message-id/am4pr03mb171327323dcd2069a532756190...@am4pr03mb1713.eurprd03.prod.outlook.com
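
For example (with a hypothetical partition name), each partition is itself
a table, so it can be truncated directly:

TRUNCATE TABLE sales_2017_12;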

Regards
PAscal






Re: Bulk Insert/Update Scenario

2018-01-04 Thread legrand legrand
Hi,

Check the documentation chapter "Populating a Database":
https://www.postgresql.org/docs/current/populate.html

It explains how to create a staging table,
load it using the COPY command,
and then INSERT / UPDATE the target tables (using ON CONFLICT if needed).
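
A minimal sketch (table and column names are hypothetical, and the upsert
assumes a unique constraint on target(id)):

-- staging table shaped like the target
CREATE TEMP TABLE stage (LIKE target INCLUDING DEFAULTS);

-- bulk load (use \copy from psql if the file lives on the client side)
COPY stage FROM '/tmp/data.csv' WITH (FORMAT csv);

-- upsert into the target
INSERT INTO target
SELECT * FROM stage
ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value;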

You can also investigate:
- the file_fdw extension (which permits using text files as tables),
- the pg_bulkload extension (which permits loading data the way Oracle's
loader does).

Regards
PAscal


