RE: EXTERNAL: Re: Database name with semicolon

2018-06-28 Thread Joby John
Hi Pavel,

Thanks for coming back to me.

The version of the Postgres server I am connecting to is 9.5.3.
Yes, I tried putting the database name in double quotes with no luck.
Please note we are using PostgreSQL Unicode ODBC driver (version 10.2) to 
communicate with the server.

Regards,
Joby John

From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
Sent: 27 June 2018 15:26
To: Joby John 
Cc: pgsql-gene...@postgresql.org
Subject: EXTERNAL: Re: Database name with semicolon

Hi

2018-06-27 15:22 GMT+02:00 Joby John <Joby.John@nccgroup.trust>:
Hi,

We have a Windows application which communicates with a Postgres database via 
the PostgreSQL ODBC drivers for Windows.
One of our customers has a database with a semicolon in its name (e.g. “db; 
name”), and our application fails to establish a connection to that database, 
reporting that it could not find the database.
We tried all known possibilities, such as enclosing the DB name in quotes, curly 
brackets, etc., but none solved the problem.

I wonder whether there is any way to connect to such a database. If not, it is 
probably better not to support semicolons within database names at all.
Not sure whether I need to submit this as a bug report. Any help is much 
appreciated.

Modern PostgreSQL has no problems with special characters:

postgres=# create database "bad name ";
CREATE DATABASE
postgres=# \q
[pavel@nemesis ~]$ psql 'bad name '
psql (11beta1)
Type "help" for help.

bad name =#

What version of Postgres are you using?

Did you try double quotes?

Regards

Pavel



Thanks and Regards,
Joby John

Joby John
Software Developer
NCC Group
Kings Court, Kingston Road, Leatherhead, KT22 7SL

Telephone: +44 1372 383 800
Mobile:
Website: www.nccgroup.trust
Twitter: @NCCGroupplc




This email is sent for and on behalf of NCC Group. NCC Group is the trading 
name of NCC Services Limited (Registered in England CRN: 2802141). The ultimate 
holding company is NCC Group plc (Registered in England CRN: 4627044). This 
email may be confidential and/or legally privileged.




plperl and plperlu language extensions

2018-06-28 Thread Niles Oien
Hi,

I am running postgres 10.4 on CentOS 7.5. I am having trouble getting the
plperl language extension going :

netdrms02.nispdc.nso.edu:5432 postgres@nsocu=# CREATE EXTENSION plperl;
ERROR:  58P01: could not open extension control file
"/usr/pgsql-10/share/extension/plperl.control": No such file or directory
LOCATION:  parse_extension_control_file, extension.c:494

Sure enough that file is not there :

[production@netdrms02 ~]$ ls /usr/pgsql-10/share/extension/
plpgsql--1.0.sql  plpgsql.control  plpgsql--unpackaged--1.0.sql


Yet plperl would seem to be installed :

# yum list | grep postgres | grep perl
postgresql-plperl.x86_64      9.2.23-3.el7_4        base
postgresql10-plperl.x86_64    10.4-1PGDG.rhel7      pgdg10

I need to do this for both plperl and plperlu. I think there is perhaps
something I don't have installed, but I can't tell what. I apologize if
this is obvious; the language extensions are not something I deal with
regularly.

If it helps, here is the result of "select version();" :
PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-28), 64-bit

Thanks for considering this,

Niles.


-- 
Niles Oien, National Solar Observatory, Boulder Colorado USA


Re: Example setup for Odyssey connection pooler?

2018-06-28 Thread Pablo Hendrickx

Thank you, this was helpful.


On 27-06-18 15:17, Adrian Klaver wrote:

On 06/27/2018 06:07 AM, Pablo Hendrickx wrote:

Please reply to list also.
Ccing list.


Yes that's the config file. How do I start Odyssey as a service?


There are some example scripts:

https://github.com/yandex/odyssey/tree/master/scripts



If this does not help you might want to contact the folks at Yandex at 
the email listed here:


https://github.com/yandex



opensou...@yandex-team.ru



On 27-06-18 14:53, Adrian Klaver wrote:

On 06/27/2018 04:37 AM, Pablo Hendrickx wrote:

Hi Postgres!

The installation instructions for Odyssey are very clear and 
helpful, but there are no setup instructions to be found for this 
multithreading tool.

Can someone provide a simple example setup?


?:

--

Met vriendelijke groeten,

Pablo Hendrickx
Open Source DBA

+32 489 73 09 37
pablo.hendri...@exitas.be 

Quality. Passion. Personality

www.exitas.be  | Veldkant 31 | 2550 Kontich




help understanding create statistic

2018-06-28 Thread Luca Ferrari
Hi all,
in order to better understand this feature of 10, I've created a table
like this:

CREATE TABLE expenses(
   ...
   day date,
   year int,
   CHECK( year = EXTRACT( year FROM day ) )
);

so that I can ensure 'year' and 'day' are tied together:

SELECT
count(*) FILTER( WHERE year = 2016 ) AS by_year,
count(*) FILTER( WHERE EXTRACT( year FROM day ) = 2016 ) AS by_day
FROM expenses;
-[ RECORD 1 ]-
by_year | 8784
by_day  | 8784

Then I created a statistic:

CREATE STATISTICS stat_day_year ( dependencies )
ON day, year
FROM expenses;

select * from pg_statistic_ext ;
-[ RECORD 1 ]---+-
stxrelid| 42833
stxname | stat_day_year
stxnamespace| 2200
stxowner| 16384
stxkeys | 3 5
stxkind | {f}
stxndistinct|
stxdependencies | {"3 => 5": 1.00}

Now, having an index on the extract year of day as follows:

CREATE INDEX idx_year
ON expenses ( EXTRACT( year FROM day ) );

why is the planner not choosing to use such index on a 'year' raw query?

EXPLAIN SELECT * FROM expenses
WHERE year = 2016;
  QUERY PLAN
---
 Gather  (cost=1000.00..92782.34 rows=8465 width=32)
   Workers Planned: 2
   ->  Parallel Seq Scan on expenses  (cost=0.00..90935.84 rows=3527 width=32)
 Filter: (year = 2016)

The row count is correct, but I was expecting it to use the same
index that a query like "WHERE EXTRACT( year FROM day ) = 2016" triggers.

Even altering the year column to NOT NULL does not show any change, and
this is the plan obtained after turning off seq_scan (to see the costs):

EXPLAIN ANALYZE SELECT * FROM expenses
WHERE year = 2016;
QUERY PLAN
--
 Seq Scan on expenses  (cost=100.00..1127402.44 rows=8451
width=32) (actual time=972.734..2189.300 rows=8784 loops=1)
   Filter: (year = 2016)
   Rows Removed by Filter: 4991216


Am I misunderstanding this functional dependency?

Thanks,
Luca



Analyze plan of foreign data wrapper

2018-06-28 Thread Mathieu PUJOL
Hello!
I am writing a foreign data wrapper. To avoid returning data for a column
that is not used, I parse the 'targetlist' and 'qual' of the Plan.
I am able to find Var nodes, but I can't figure out how I could know whether a
node refers to a column of my foreign table.
I guess that the field 'location' is the solution, but I don't understand
how to use it to retrieve the target table OID and column index.
I hope someone can help me.
Thanks in advance
-- 
Mathieu PUJOL
Backend Leader
Tel : 05.81.33.13.36
REAL FUSIO - 3D Computer Graphics
3 rue Dieudonné Costes 31700 Blagnac
mathieu.pu...@realfusio.com - http://www.realfusio.com


Re: plperl and plperlu language extensions

2018-06-28 Thread Luca Ferrari
On Thu, Jun 28, 2018 at 9:26 AM Niles Oien  wrote:
>
> Yet plperl would seem to be installed :
>
> # yum list | grep postgres | grep perl
> postgresql-plperl.x86_64      9.2.23-3.el7_4      base
> postgresql10-plperl.x86_64    10.4-1PGDG.rhel7    pgdg10
>

Have you tried reinstalling the package?
Is it the correct package for the version 10 distribution?

Luca



Re: help understanding create statistic

2018-06-28 Thread David Rowley
On 28 June 2018 at 21:38, Luca Ferrari  wrote:
> CREATE INDEX idx_year
> ON expenses ( EXTRACT( year FROM day ) );
>
> why is the planner not choosing to use such index on a 'year' raw query?
>
> EXPLAIN SELECT * FROM expenses
> WHERE year = 2016;

The expression in the where clause must match the indexed expression.
You'd need to add an index on just (year) for that to work.
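A minimal sketch of that suggestion (the index name is illustrative):

```sql
-- A plain index on the raw column matches "WHERE year = 2016" directly.
CREATE INDEX idx_year_raw ON expenses (year);
```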

> Am I misunderstaing this functional dependency?

Yeah, the statistics are just there to drive the planner's costing.
They won't serve as proof for anything else.

All you've done by creating those stats is to allow better estimates
for queries such as:

SELECT * FROM expenses WHERE day = '2018-06-28' and year = 2018;

> stxdependencies | {"3 => 5": 1.00}

It would appear that "3" is the attnum for day and "5" is for year. All
that tells the planner is that, in the records sampled during analyze,
each "day" value had almost exactly one year.

There's nothing then to stop you going and adding a record with the
day '2017-01-01' and the year 2018. The stats will remain the same
until you analyze the table again.

If those stats didn't exist, the planner would have multiplied the
selectivity estimates of each item in the WHERE clause individually.
So if about 10% of records had year=2018, and 0.1% had '2018-06-28',
then the selectivity would have been 0.1 * 0.001. With a functional
dependency of 1, the selectivity just becomes 0.001.
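That arithmetic can be sketched in Python. The selectivities are the illustrative ones above; the row count of roughly 5M is inferred from the EXPLAIN ANALYZE output earlier in the thread:

```python
# Sketch of how the planner combines clause selectivities.
total_rows = 5_000_000

sel_year = 0.10   # illustrative: fraction of rows with year = 2018
sel_day = 0.001   # illustrative: fraction of rows with day = '2018-06-28'

# Without extended statistics, the clauses are assumed independent.
independent_estimate = total_rows * sel_year * sel_day

# With a functional dependency of degree 1.0 (day fully determines year),
# the year clause adds no filtering beyond the day clause.
dependent_estimate = total_rows * sel_day

print(round(independent_estimate))  # 500
print(round(dependent_estimate))    # 5000
```

The dependency-aware estimate is 10x larger here, which is what keeps the planner from badly underestimating correlated predicates.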

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



JSONB arrays

2018-06-28 Thread Victor Noagbodji
Hey people,

I ended up with the following to check for (or, similarly, to return) the 
intersection of two lists of values in a JSONB object:

array_length(
array(
select jsonb_array_elements_text(col1)
intersect
select jsonb_array_elements_text(col2)
), 1) > 0

Is there a better way?

Thanks


Index Gin Creation is taking long time..

2018-06-28 Thread Anto Aravinth
Hello,

I'm trying to create an index:

create index search_idx on so2 using gin (to_tsvector('english',posts));

Looks like it has been running for at least 8 hours :( In total I have 47M
records in so2. Not sure why it's taking so long.

Any idea or tips to debug while the index creation is going on?

Thanks,
Anto.


RE: Index Gin Creation is taking long time..

2018-06-28 Thread Steven Winfield



From: Anto Aravinth [mailto:anto.aravinth@gmail.com]
Sent: 28 June 2018 11:34
To: pgsql-generallists.postgresql.org
Subject: Index Gin Creation is taking long time..

Hello,

I'm trying to create an index:


create index search_idx on so2 using gin (to_tsvector('english',posts));

Looks like it has been running for at least 8 hours :( In total I have 47M records 
in so2. Not sure why it's taking so long.

Any idea or tips to debug while the index creation is going on?

Thanks,
Anto.



· Try increasing maintenance_work_mem and restarting? 
https://www.postgresql.org/docs/10/static/gin-tips.html

· “top -p <pid>” can show you if you are CPU- or I/O-bound.

· “perf top -p <pid>” on the backend that is doing the index creation 
is a good way to see where it is spending time.
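The first suggestion can be sketched as follows (the value is illustrative; size it to the memory you can spare):

```sql
-- Give this session's index build more memory, per the GIN tips page.
SET maintenance_work_mem = '2GB';
CREATE INDEX search_idx ON so2 USING gin (to_tsvector('english', posts));
```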

Good luck!

Steven.

This email is confidential. If you are not the intended recipient, please 
advise us immediately and delete this message. 
The registered name of Cantab- part of GAM Systematic is Cantab Capital 
Partners LLP. 
See - http://www.gam.com/en/Legal/Email+disclosures+EU for further information 
on confidentiality, the risks of non-secure electronic communication, and 
certain disclosures which we are required to make in accordance with applicable 
legislation and regulations. 
If you cannot access this link, please notify us by reply message and we will 
send the contents to you.

GAM Holding AG and its subsidiaries (Cantab – GAM Systematic) will collect and 
use information about you in the course of your interactions with us. 
Full details about the data types we collect and what we use this for and your 
related rights is set out in our online privacy policy at 
https://www.gam.com/en/legal/privacy-policy. 
Please familiarise yourself with this policy and check it from time to time for 
updates as it supplements this notice.


Re: Database name with semicolon

2018-06-28 Thread joby.john@nccgroup.trust
> See if something like below works:
> {db; name}



Hi,

Thanks for the responses.

I tried putting curly brackets around the value of database like you
mentioned.
{db; name}
But the server was treating the brackets as part of the database name and so
did not work.
I tried double quotes like 
"db; name" which also didn't work.

Not sure what else I can try, or whether the ODBC driver supports database
names with a semicolon at all.
I know that for the password we can enclose the string in curly brackets to
escape special characters, but that does not appear to work for the database
name.

Regards,
Joby 



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



RE: JSONB arrays

2018-06-28 Thread Steven Winfield
  From: Victor Noagbodji [mailto:vnoagbo...@amplify-nation.com]
  Sent: 28 June 2018 11:28
  To: pgsql-general
  Subject: JSONB arrays

  Hey people,

  I ended up with the following to check for (or, similarly, to return) the 
intersection of two lists of values in a JSONB object:

  array_length(
  array(
  select jsonb_array_elements_text(col1)
  intersect
  select jsonb_array_elements_text(col2)
  ), 1) > 0

  Is there a better way?

  Thanks


Try the @> operator:

select '[1, 2, 3, 4]'::jsonb @> '[4, 2]'::jsonb
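One caveat, sketched here with Python sets standing in for the JSONB arrays: `@>` tests containment (every element of the right-hand array must appear in the left-hand one), which is stricter than the non-empty-intersection test in the original query:

```python
# Modeling JSONB arrays as Python sets to contrast the two predicates.
col1 = {"1", "2", "3", "4"}
col2 = {"4", "2"}
col3 = {"4", "9"}

# "@>"-style containment: col1 must contain every element of the other array.
print(col1 >= col2)       # True
print(col1 >= col3)       # False ("9" is missing from col1)

# The original query only asked for a non-empty intersection, a weaker test.
print(bool(col1 & col3))  # True (they share "4")
```

So `@>` is the better operator only when the semantics really are "contains all of these values".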


Steve.



Re: Database name with semicolon

2018-06-28 Thread Thiemo Kellner

Quoting "joby.john@nccgroup.trust":


Not sure what else I can try or whether the ODBC driver supports database
name with a semicolon at all.


Not knowing much about ODBC, how about single quotes?

And if you pass the entire URL, would it be possible to pass the values to 
some kind of function instead?


Kind regards Thiemo




This message was sent using IMP, the Internet Messaging Program.




Re: plperl and plperlu language extensions

2018-06-28 Thread Adrian Klaver

On 06/27/2018 08:27 AM, Niles Oien wrote:


Hi,

I am running postgres 10.4 on CentOS 7.5. I am having trouble getting 
the plperl language extension going :


netdrms02.nispdc.nso.edu:5432  
postgres@nsocu=# CREATE EXTENSION plperl;
ERROR:  58P01: could not open extension control file 
"/usr/pgsql-10/share/extension/plperl.control": No such file or directory

LOCATION:  parse_extension_control_file, extension.c:494

Sure enough that file is not there :

[production@netdrms02 ~]$ ls /usr/pgsql-10/share/extension/
plpgsql--1.0.sql  plpgsql.control  plpgsql--unpackaged--1.0.sql


Yet plperl would seem to be installed :

# yum list | grep postgres | grep perl
postgresql-plperl.x86_64                  9.2.23-3.el7_4               base
postgresql10-plperl.x86_64                10.4-1PGDG.rhel7             pgdg10


So where did you get the packages to install Postgres itself?:

a) The CentOS repos

b) The PGDG repos



I need to do this for both plperl and plperlu. I think there is perhaps 
something I don't have installed, but I can't tell what? I apologize if 
this is obvious, the language extensions are not something I deal with 
regularly.


If it helps, here is the result of "select version();" :
PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 
20150623 (Red Hat 4.8.5-28), 64-bit


Thanks for considering this,

Niles.


--
Niles Oien, National Solar Observatory, Boulder Colorado USA



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Sv: Re: Sv: Re: CTE optimization fence

2018-06-28 Thread Alvaro Herrera
On 2018-Jun-27, Andreas Joseph Krogh wrote:

> I know. I hate the INLINE proposal and hope default-behaviour will be like in 
> other DBs, inline like sub-query as default. GUC for preserving fence is what 
> I 
> hope will happen.

I am probably biased ;-) but I like my proposal #1 in this email
https://www.postgresql.org/message-id/20170503173305.fetj4tz7kd56tjlr%40alvherre.pgsql
where you have to mark queries where the current behavior is desired
(WITH MATERIALIZED), and leave all other queries alone.  I don't
remember any dissent on that.
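Under that proposal, the fence would be opt-in, roughly as below (syntax from the proposal, not released at the time of this thread; the table, column, and function names are illustrative):

```sql
-- Only the explicitly marked CTE keeps the optimization fence;
-- unmarked CTEs would be inlined like subqueries.
WITH w AS MATERIALIZED (
    SELECT * FROM big_table WHERE expensive_predicate(x)
)
SELECT * FROM w WHERE key = 123;
```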

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: could not read block 0 in file : read only 0 of 8192 bytes when doing nasty on immutable index function

2018-06-28 Thread Andres Freund
Hi,

Luca, it'd be cool if you could not top-quote on this list - Thanks!

Others - this seems like a potentially more longstanding bug, see the
bottom of the mail.

Peter, looks like you might be involved specifically.


On 2018-06-28 08:26:58 +0200, Luca Ferrari wrote:
> Got it: it happens if you drop and recreate the index. It shows up
> either setting max_parallel_maintanance_workers to zero or a greater
> value.
> testdb=> create table t( pk serial, t text );
> CREATE TABLE
> testdb=> insert into t( t ) values( 'hello' ), ('world');
> INSERT 0 2
> testdb=> create or replace function f_fake( i int )
> returns text
> as $body$
> declare
>   v_t text;
> begin
>select t into strict v_t
>from t where pk = i limit 1;
>return v_t;
> exception
>   when no_data_found then return 'a';
> end
> $body$
> language plpgsql immutable;
> CREATE FUNCTION
> testdb=> create index idx_fake on t ( f_fake( pk ) );
> CREATE INDEX
> testdb=> drop index idx_fake;
> DROP INDEX
>
> testdb=> create index idx_fake on t ( f_fake( pk ) );
> 2018-06-28 10:23:18.275 CEST [892] ERROR:  could not read block 0 in
> file "base/16392/16538": read only 0 of 8192 bytes
> 2018-06-28 10:23:18.275 CEST [892] CONTEXT:  SQL statement "select t
>  from t where pk = i limit 1"
> PL/pgSQL function f_fake(integer) line 5 at SQL statement
> 2018-06-28 10:23:18.275 CEST [892] STATEMENT:  create index idx_fake
> on t ( f_fake( pk ) );
> ERROR:  could not read block 0 in file "base/16392/16538": read only 0
> of 8192 bytes
CONTEXT:  SQL statement "select t from t where pk =
> i limit 1"
> PL/pgSQL function f_fake(integer) line 5 at SQL statement

The backtrace for the error is:

#0  mdread (reln=0x55c314f5e580, forknum=MAIN_FORKNUM, blocknum=0, 
buffer=0x7f010eaa8b80 "") at 
/home/andres/src/postgresql/src/backend/storage/smgr/md.c:785
#1  0x55c31416d8aa in smgrread (reln=0x55c314f5e580, forknum=MAIN_FORKNUM, 
blocknum=0, buffer=0x7f010eaa8b80 "")
at /home/andres/src/postgresql/src/backend/storage/smgr/smgr.c:628
#2  0x55c31412cfb3 in ReadBuffer_common (smgr=0x55c314f5e580, 
relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=0, mode=RBM_NORMAL, 
strategy=0x0,
hit=0x7ffe19a7e8eb) at 
/home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:890
#3  0x55c31412c8d6 in ReadBufferExtended (reln=0x7f0111a552f0, 
forkNum=MAIN_FORKNUM, blockNum=0, mode=RBM_NORMAL, strategy=0x0)
at /home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:664
#4  0x55c31412c7b3 in ReadBuffer (reln=0x7f0111a552f0, blockNum=0) at 
/home/andres/src/postgresql/src/backend/storage/buffer/bufmgr.c:596
#5  0x55c313db47c8 in _bt_getbuf (rel=0x7f0111a552f0, blkno=0, access=1) at 
/home/andres/src/postgresql/src/backend/access/nbtree/nbtpage.c:736
#6  0x55c313db4297 in _bt_getrootheight (rel=0x7f0111a552f0) at 
/home/andres/src/postgresql/src/backend/access/nbtree/nbtpage.c:608
#7  0x55c3140a3179 in get_relation_info (root=0x55c314f782a8, 
relationObjectId=121577, inhparent=false, rel=0x55c314f78bd0)
at /home/andres/src/postgresql/src/backend/optimizer/util/plancat.c:412
#8  0x55c3140a8894 in build_simple_rel (root=0x55c314f782a8, relid=1, 
parent=0x0) at 
/home/andres/src/postgresql/src/backend/optimizer/util/relnode.c:221
#9  0x55c31406e183 in add_base_rels_to_query (root=0x55c314f782a8, 
jtnode=0x55c314f77e40)
at /home/andres/src/postgresql/src/backend/optimizer/plan/initsplan.c:113
#10 0x55c31406e1c9 in add_base_rels_to_query (root=0x55c314f782a8, 
jtnode=0x55c314f77da8)
at /home/andres/src/postgresql/src/backend/optimizer/plan/initsplan.c:121
#11 0x55c314072b33 in query_planner (root=0x55c314f782a8, 
tlist=0x55c314f78798, qp_callback=0x55c31407874b ,
qp_extra=0x7ffe19a7ec40) at 
/home/andres/src/postgresql/src/backend/optimizer/plan/planmain.c:144
#12 0x55c314075cb7 in grouping_planner (root=0x55c314f782a8, 
inheritance_update=false, tuple_fraction=1)
at /home/andres/src/postgresql/src/backend/optimizer/plan/planner.c:1901
#13 0x55c3140743c6 in subquery_planner (glob=0x55c314f78210, 
parse=0x55c314f77988, parent_root=0x0, hasRecursion=false, tuple_fraction=0)
at /home/andres/src/postgresql/src/backend/optimizer/plan/planner.c:966
#14 0x55c3140730a1 in standard_planner (parse=0x55c314f77988, 
cursorOptions=256, boundParams=0x55c314f76620)
at /home/andres/src/postgresql/src/backend/optimizer/plan/planner.c:405
#15 0x55c314072e3a in planner (parse=0x55c314f77988, cursorOptions=256, 
boundParams=0x55c314f76620)
at /home/andres/src/postgresql/src/backend/optimizer/plan/planner.c:263
#16 0x55c31416ffeb in pg_plan_query (querytree=0x55c314f77988, 
cursorOptions=256, boundParams=0x55c314f76620)
at /home/andres/src/postgresql/src/backend/tcop/postgres.c:809
#17 0x55c31417011e in pg_plan_queries (querytrees=0x55c314f77928, 
cursorOptions=256, boundParams=0x55c314f76620)
at /home/andres/src/postgresql/src/

Re: Database name with semicolon

2018-06-28 Thread Pavel Stehule
2018-06-28 12:10 GMT+02:00 joby.john@nccgroup.trust <
Joby.John@nccgroup.trust>:

> > See if something like below works:
> > {db; name}
>
>
>
> Hi,
>
> Thanks for the responses.
>
> I tried putting curly brackets around the value of database like you
> mentioned.
> {db; name}
> But the server was treating the brackets as part of the database name and
> so
> did not work.
> I tried double quotes like
> "db; name" which also didn't work.
>

please, can you try

'"db; name"' .. double quotes nested in apostrophes



> Not sure what else I can try or whether the ODBC driver supports database
> name with a semicolon at all.
> I know that in the case of password we can put the string enclosed within
> curly brackets to escape special characters but not appears to be working
> for the database.
>
> Regards,
> Joby
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>


incorrect resource manager data checksum in record

2018-06-28 Thread Devin Christensen
I've been seeing this issue in multiple separate hot standby replication
chains of PostgreSQL servers (5 so far). There are 4 servers in each chain
(some running Ubuntu 14.04 and others Ubuntu 16.04, and PostgreSQL >= 10.1
and <= 11). We also have a mix of ext4 and zfs file systems. Here are the
details for each chain:

First chain
===
dc1-pg105 (pg 10.1, ub 14.04.5) (primary)
   |
   V
dc1-pg205 (pg 10.3, ub 16.04.4)
   |
   V
dc2-pg105 (pg 10.1, ub 14.04.5) <-- error first occurs here
   |
   V
dc2-pg205 (pg 10.3, ub 16.04.4) <-- and also affects this node


Second chain
===
dc1-pg106 (pg 10.1, ub 14.04.5, ext4) (primary)
   |
   V
dc1-pg206 (pg 10.3, ub 16.04.4, zfs)
   |
   V
dc2-pg106 (pg 10.1, ub 14.04.5, ext4) <-- error first occurs here
   |
   V
dc2-pg206 (pg 10.3, ub 16.04.4, zfs) <-- and also affects this node


Third chain
===
dc1-pg107 (pg 10.1, ub 14.04.5, ext4) (primary)
   |
   V
dc1-pg207 (pg 10.3, ub 16.04.4, zfs)
   |
   V
dc2-pg107 (pg 10.1, ub 14.04.5, ext4) <-- error first occurs here
   |
   V
dc2-pg207 (pg 10.3, ub 16.04.4, zfs) <-- and also affects this node


Fourth chain
===
dc1-pg108 (pg 10.3, ub 16.04.4, ext4) (primary)
   |
   V
dc1-pg208 (pg 10.3, ub 16.04.4, zfs)
   |
   V
dc2-pg108 (pg 10.3, ub 16.04.4, ext4) <-- error first occurs here
   |
   V
dc2-pg208 (pg 10.3, ub 16.04.4, zfs) <-- and also affects this node


Fifth chain
===
dc1-pg110 (pg 10.3, ub 16.04.4, ext4) (primary)
   |
   V
dc1-pg210 (pg 10.3, ub 16.04.4, zfs)
   |
   V
dc2-pg110 (pg 10.3, ub 16.04.4, ext4) <-- error first occurs here
   |
   V
dc2-pg210 (pg 10.3, ub 16.04.4, zfs) <-- and also affects this node


The pattern is the same, regardless of ubuntu or postgresql versions. I'm
concerned this is somehow a ZFS corruption bug, because the error always
occurs downstream of the first ZFS node and ZFS is a recent addition. I
don't know enough about what this error means, and haven't found much
online. When I restart the nodes affected, replication resumes normally,
with no known side-effects that I've discovered so far, but I'm no longer
confident that the data downstream from the primary is valid. Really not
sure how best to start tackling this issue, and hoping to get some
guidance. The error is infrequent. We have 11 total replication chains, and
this error has occurred on 5 of those chains in approximately 2 months.


Re: plperl and plperlu language extensions

2018-06-28 Thread Adrian Klaver

On 06/28/2018 12:21 PM, Niles Oien wrote:


I got it working, though I found it a bit odd. Although "yum list" 
showed postgresql10-plperl.x86_64 as being installed, in fact :


I don't use RH based systems much so I failed to catch it earlier. From 
your earlier post:


# yum list | grep postgres | grep perl
postgresql-plperl.x86_64      9.2.23-3.el7_4        base
postgresql10-plperl.x86_64    10.4-1PGDG.rhel7      pgdg10



http://yum.baseurl.org/wiki/YumCommands

"   yum list

By default 'yum list' without any options will list all packages in 
all the repositories and all the packages installed on your system. 
Note: 'yum list all' and 'yum list' give the same output.

"

So you were seeing packages in the repos that were not installed yet 
as well as any that were installed.


If I am following you need to use:

yum list installed

to see only installed packages.



yum install postgresql10-plperl

went ahead as if it was not installed, and I was now able to create the 
language extensions.


Thanks, all,

Niles.


On Thu, Jun 28, 2018 at 8:42 AM, Niles Oien wrote:




Thanks for responding!

I installed Postgres 10 from the PG repos. I believe the system came
with Postgres 9.2 on it but I never used that. Postgres 10 is first
in the path.



On Thu, Jun 28, 2018 at 7:03 AM, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:

On 06/27/2018 08:27 AM, Niles Oien wrote:


Hi,

I am running postgres 10.4 on CentOS 7.5. I am having
trouble getting the plperl language extension going :

netdrms02.nispdc.nso.edu:5432 postgres@nsocu=# CREATE EXTENSION plperl;
ERROR:  58P01: could not open extension control file
"/usr/pgsql-10/share/extension/plperl.control": No such file
or directory
LOCATION:  parse_extension_control_file, extension.c:494

Sure enough that file is not there :

[production@netdrms02 ~]$ ls /usr/pgsql-10/share/extension/
plpgsql--1.0.sql  plpgsql.control  plpgsql--unpackaged--1.0.sql


Yet plperl would seem to be installed :

# yum list | grep postgres | grep perl
postgresql-plperl.x86_64                  9.2.23-3.el7_4               base
postgresql10-plperl.x86_64                10.4-1PGDG.rhel7             pgdg10



So where did you get the packages to install Postgres itself?:

a) The CentOS repos

b) The PGDG repos



I need to do this for both plperl and plperlu. I think there
is perhaps something I don't have installed, but I can't
tell what? I apologize if this is obvious, the language
extensions are not something I deal with regularly.

If it helps, here is the result of "select version();" :
PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc
(GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit

Thanks for considering this,

Niles.


-- 
Niles Oien, National Solar Observatory, Boulder Colorado USA




-- 
Adrian Klaver

adrian.kla...@aklaver.com 




-- 
Niles Oien, National Solar Observatory, Boulder Colorado USA





--
Niles Oien, National Solar Observatory, Boulder Colorado USA



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Analyze plan of foreign data wrapper

2018-06-28 Thread Laurenz Albe
Mathieu PUJOL wrote:
> I am writing a foreign data wrapper. To avoid returning data for a column 
> that is not used, I parse the 'targetlist' and 'qual' of the Plan.
> I am able to find Var nodes, but I can't figure out how I could know whether 
> a node refers to a column of my foreign table.

For a Var v, v->varattno contains the attribute number of the column.
That is the same as the attnum column in pg_attribute.

If v->varattno == 0, it is a whole-row reference, like in

   SELECT mytab FROM mytab;
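Since varattno matches pg_attribute.attnum, one way to inspect the mapping from the SQL side is the query below (the table name is illustrative):

```sql
-- Column numbers as the planner sees them; Var->varattno equals attnum.
SELECT attname, attnum
FROM pg_attribute
WHERE attrelid = 'mytab'::regclass
  AND attnum > 0
  AND NOT attisdropped;
```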

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: incorrect resource manager data checksum in record

2018-06-28 Thread Thomas Munro
On Fri, Jun 29, 2018 at 5:44 AM, Devin Christensen 
wrote:
> The pattern is the same, regardless of ubuntu or postgresql versions. I'm
> concerned this is somehow a ZFS corruption bug, because the error always
> occurs downstream of the first ZFS node and ZFS is a recent addition. I
> don't know enough about what this error means, and haven't found much
> online. When I restart the nodes affected, replication resumes normally,
> with no known side-effects that I've discovered so far, but I'm no longer
> confident that the data downstream from the primary is valid. Really not
> sure how best to start tackling this issue, and hoping to get some
guidance.
> The error is infrequent. We have 11 total replication chains, and this
error
> has occurred on 5 of those chains in approximately 2 months.

It's possible and sometimes expected to see that error when there has been
a crash, but you didn't mention that.  From your description it sounds like
it's happening in the middle of streaming, right?  My first thought was
that the filesystem change is surely a red herring.  But... I did find this
similar complaint that involves an ext4 primary and a btrfs replica:

https://dba.stackexchange.com/questions/116569/postgresql-docker-incorrect-resource-manager-data-checksum-in-record-at-46f-6

I'm having trouble imagining how the filesystem could be triggering a
problem though (unless ZoL is dramatically less stable than on other
operating systems, "ZFS ate my bytes" seems like a super unlikely theory).
Perhaps by being slower, it triggers a bug elsewhere?  We did have a report
recently of ZFS recycling WAL files very slowly (presumably because when it
moves the old file to become the new file, it finishes up slurping it back
into memory even though we're just going to overwrite it, and it can't see
that because our writes don't line up with the ZFS record size, possibly
unlike ye olde write-in-place 4k block filesystems, but that's just my
guess).  Does your machine have ECC RAM?

-- 
Thomas Munro
http://www.enterprisedb.com


Re: incorrect resource manager data checksum in record

2018-06-28 Thread Devin Christensen
> From your description it sounds like it's happening in the middle of
streaming, right?

Correct. None of the instances in the chain experience a crash. Most of the
time I see the "incorrect resource manager data checksum in record" error,
but I've also seen it manifested as:

invalid magic number 8813 in log segment 0003AEC2009C, offset
15335424

> I did find this similar complaint that involves an ext4 primary and a
btrfs replica:

It is interesting that my issue occurs on the first hop from ZFS to ext4. I
have not seen any instances of this happening going from the ext4 primary
to the first ZFS replica.

> We did have a report recently of ZFS recycling WAL files very slowly

Do you know what version of ZFS that affected? We're currently on 0.6.5.6,
but could upgrade to 0.7.5 on Ubuntu 18.04

> Does your machine have ECC RAM?

Yes, all the servers have registered ECC RAM.

---

I'm considering changing the replication configuration from:

ext4 -> zfs -> ext4 -> zfs

to

ext4 -> zfs -> zfs -> ext4

If the issue only occurs downstream of ZFS, this will give me twice as many
chances for it to occur, and I would expect to see some instances where
only the last ext4 node is affected, and some where both the last ZFS and
the last ext4 nodes are affected. Not sure how much it helps, but at least I
might be able to collect more data until I find a reliable way to reproduce.

---

FYI, I'd be happy to discuss paid consulting if this is an issue in your
wheelhouse and that's something you're interested in.


On Thu, Jun 28, 2018 at 4:13 PM Thomas Munro 
wrote:

> On Fri, Jun 29, 2018 at 5:44 AM, Devin Christensen 
> wrote:
> > The pattern is the same, regardless of Ubuntu or PostgreSQL versions. I'm
> > concerned this is somehow a ZFS corruption bug, because the error always
> > occurs downstream of the first ZFS node and ZFS is a recent addition. I
> > don't know enough about what this error means, and haven't found much
> > online. When I restart the nodes affected, replication resumes normally,
> > with no known side-effects that I've discovered so far, but I'm no longer
> > confident that the data downstream from the primary is valid. Really not
> > sure how best to start tackling this issue, and hoping to get some
> > guidance.
> > The error is infrequent. We have 11 total replication chains, and this
> > error has occurred on 5 of those chains in approximately 2 months.
>
> It's possible and sometimes expected to see that error when there has been
> a crash, but you didn't mention that.  From your description it sounds like
> it's happening in the middle of streaming, right?  My first thought was
> that the filesystem change is surely a red herring.  But... I did find this
> similar complaint that involves an ext4 primary and a btrfs replica:
>
>
> https://dba.stackexchange.com/questions/116569/postgresql-docker-incorrect-resource-manager-data-checksum-in-record-at-46f-6
>
> I'm having trouble imagining how the filesystem could be triggering a
> problem though (unless ZoL is dramatically less stable than on other
> operating systems, "ZFS ate my bytes" seems like a super unlikely theory).
> Perhaps by being slower, it triggers a bug elsewhere?  We did have a report
> recently of ZFS recycling WAL files very slowly (presumably because when it
> moves the old file to become the new file, it finishes up slurping it back
> into memory even though we're just going to overwrite it, and it can't see
> that because our writes don't line up with the ZFS record size, possibly
> unlike ye olde write-in-place 4k block filesystems, but that's just my
> guess).  Does your machine have ECC RAM?
>
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>


Re: incorrect resource manager data checksum in record

2018-06-28 Thread Thomas Munro
On Fri, Jun 29, 2018 at 1:14 PM, Devin Christensen  wrote:
>> From your description it sounds like it's happening in the middle of
>> streaming, right?
>
> Correct. None of the instances in the chain experience a crash. Most of the
> time I see the "incorrect resource manager data checksum in record" error,
> but I've also seen it manifested as:
>
> invalid magic number 8813 in log segment 0003AEC2009C, offset
> 15335424

I note that that isn't at a segment boundary.  Is that also the case
for the other error?
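For what it's worth, a quick sketch of that boundary arithmetic (assuming the default 16MB WAL segment size; a build with a nonstandard --with-wal-segsize would change the numbers):

```python
# Sketch: check whether a reported WAL error offset falls on a segment
# boundary. Assumes the default 16MB WAL segment size.
WAL_SEG_SIZE = 16 * 1024 * 1024  # 16777216 bytes

def at_segment_boundary(offset: int) -> bool:
    """True only when the offset is the very start of a segment."""
    return offset % WAL_SEG_SIZE == 0

# Offset from the "invalid magic number" error above:
offset = 15335424
print(at_segment_boundary(offset))   # False: mid-segment
print(WAL_SEG_SIZE - offset)         # 1441792 bytes short of the boundary
```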

One theory would be that there is a subtle FS cache coherency problem
between writes and reads of a file from different processes
(causality), on that particular stack.  Maybe not too many programs
pass data through files with IPC to signal progress in this kinda
funky way, but that'd certainly be a violation of POSIX if it didn't
work correctly and I think people would know about that so I feel a
bit silly suggesting it.  To follow that hypothesis to the next step:
I suppose it succeeds after you restart because it requests the whole
segment again and gets a coherent copy all the way down the chain.
Another idea would be that our flush pointer tracking and IPC is
somehow subtly wrong and that's exposed by different timing leading to
incoherent reads, but I feel like we would know about that by now too.

I'm not really a replication expert, so I could be missing something
simple here.  Anyone?

>> I did find this similar complaint that involves an ext4 primary and a
>> btrfs replica:
>
> It is interesting that my issue occurs on the first hop from ZFS to ext4. I
> have not seen any instances of this happening going from the ext4 primary to
> the first ZFS replica.

I happen to have a little office server that uses ZFS so I left it
chugging through a massive pgbench session with a chain of 3 replicas
while I worked on other stuff, and didn't see any problems (no ext4
involved though, this is a FreeBSD box).  I also tried
--wal-segsize=1MB (a feature of 11) to get some more frequent
recycling happening just in case it was relevant.

>> We did have a report recently of ZFS recycling WAL files very slowly
>
> Do you know which versions of ZFS that affected? We're currently on 0.6.5.6,
> but could upgrade to 0.7.5 on Ubuntu 18.04.

I think that issue is fundamental/all versions, and has something to do
with the record size (if you have 128KB ZFS records and someone writes
8KB, it probably needs to read a whole 128KB record in, whereas with
ext4 et al you have 4KB blocks and the OS can very often skip reading
it in because it can see you're entirely overwriting blocks), and
possibly the COW design too (I dunno).  Here's the recent thread,
which points back to an older one, from some Joyent guys who I gather
are heavy ZFS users:

https://www.postgresql.org/message-id/flat/CACPQ5FpEY9CfUF6XKs5sBBuaOoGEiO8KD4SuX06wa4ATsesaqg%40mail.gmail.com
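A back-of-the-envelope sketch of that record-size effect (the sizes are assumptions for illustration: 128KB ZFS recordsize, 8KB WAL writes, 4KB ext4 blocks; actual behavior also depends on compression and ARC state):

```python
# Sketch: why a small write can force a large read on ZFS (copy-on-write,
# big records) but not on ext4 (small write-in-place blocks).
ZFS_RECORDSIZE = 128 * 1024   # assumed ZFS recordsize
EXT4_BLOCK = 4 * 1024         # typical ext4 block size
WAL_WRITE = 8 * 1024          # PostgreSQL WAL write size

# An 8KB write only partially overwrites a 128KB record, so ZFS must read
# the whole record back in before rewriting it (read-modify-write).
read_amplification = ZFS_RECORDSIZE // WAL_WRITE
print(read_amplification)  # 16x more data read than written

# The same write exactly covers two 4KB ext4 blocks, so the kernel can
# skip reading them: the blocks are entirely overwritten.
print(WAL_WRITE % EXT4_BLOCK == 0)  # True
```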

There was a ZoL bug that made headlines recently but that was in 0.7.7
so not relevant to your case.

-- 
Thomas Munro
http://www.enterprisedb.com



Windows 10 got stuck with PostgreSQL at starting up. Adding delay lets it avoid.

2018-06-28 Thread TAKATSUKA Haruka
I ran into a problem with PostgreSQL 9.3.x on Windows 10.
I would like to add new delay code as an official build option.

Windows 10 sometimes (approximately once in 300 tries) hung up
during OS startup. The logs say it happened while the PostgreSQL
service was starting. When the OS stopped, some postgres auxiliary
processes had started and some had not started yet.

The Windows dump says some threads of the postgres auxiliary processes
are waiting on OS-level locks and the logon processes' threads are
also waiting on a lock. The MS help desk said that an OS-level
deadlock in PostgreSQL caused the OS freeze. I think that is a strange
story. But, in fact, it did not happen in repeated tests when I removed
PostgreSQL from the initial auto-starting services.

I tweaked PostgreSQL 9.3.x (the newest from the repository) to add
a 0.5 or 3.0 second delay after each sub-process starts.
And then the hang-up was gone. This test patch is attached.
It is implemented only for Windows. Also, I did not use the existing
pg_usleep because it contains locking code (e.g. WaitForSingleObject
and Enter/LeaveCriticalSection).

Although the Windows OS may have some problems, I think we should have
a means to avoid them. Could PostgreSQL accept such delay code
as a build-time option controlled by preprocessor variables?


Thanks,
Takatsuka Haruka
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index d6fc2ed..ff03ebd 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -398,6 +398,30 @@ extern int optreset;   /* might not be declared by system headers */
 static DNSServiceRef bonjour_sdref = NULL;
 #endif
 
+#define USE_AFTER_AUX_FORK_SLEEP 3000
+
+#ifdef USE_AFTER_AUX_FORK_SLEEP
+#ifndef WIN32
+#define AFTER_AUX_FORK_SLEEP()
+#else
+#define AFTER_AUX_FORK_SLEEP() do { SleepEx(USE_AFTER_AUX_FORK_SLEEP, FALSE); } while(0)
+#endif
+#else
+#define AFTER_AUX_FORK_SLEEP()
+#endif
+
+#define USE_AFTER_BACKEND_FORK_SLEEP 500
+
+#ifdef USE_AFTER_BACKEND_FORK_SLEEP
+#ifndef WIN32
+#define AFTER_BACKEND_FORK_SLEEP()
+#else
+#define AFTER_BACKEND_FORK_SLEEP() do { SleepEx(USE_AFTER_BACKEND_FORK_SLEEP, FALSE); } while(0)
+#endif
+#else
+#define AFTER_BACKEND_FORK_SLEEP()
+#endif
+
 /*
  * postmaster.c - function prototypes
  */
@@ -1709,6 +1733,7 @@ ServerLoop(void)
 */
StreamClose(port->sock);
ConnFree(port);
+   AFTER_BACKEND_FORK_SLEEP();
}
}
}
@@ -2801,11 +2826,20 @@ reaper(SIGNAL_ARGS)
 * situation, some of them may be alive already.
 */
if (!IsBinaryUpgrade && AutoVacuumingActive() && 
AutoVacPID == 0)
+   {
AutoVacPID = StartAutoVacLauncher();
+   AFTER_AUX_FORK_SLEEP(); 
+   }
if (XLogArchivingActive() && PgArchPID == 0)
+   {
PgArchPID = pgarch_start();
+   AFTER_AUX_FORK_SLEEP();
+   }
if (PgStatPID == 0)
+   {
PgStatPID = pgstat_start();
+   AFTER_AUX_FORK_SLEEP();
+   }
 
/* some workers may be scheduled to start now */
maybe_start_bgworker();
@@ -5259,6 +5293,7 @@ StartChildProcess(AuxProcType type)
/*
 * in parent, successful fork
 */
+   AFTER_AUX_FORK_SLEEP();
return pid;
 }
 


Re: Windows 10 got stuck with PostgreSQL at starting up. Adding delay lets it avoid.

2018-06-28 Thread Thomas Kellerer
TAKATSUKA Haruka wrote on 29.06.2018 at 08:03:
> I ran into a problem with PostgreSQL 9.3.x on Windows 10.
> I would like to add new delay code as an official build option.
> 
> Windows 10 sometimes (approximately once in 300 tries) hung up
> during OS startup. The logs say it happened while the PostgreSQL
> service was starting. When the OS stopped, some postgres auxiliary
> processes had started and some had not started yet.
> 
> The Windows dump says some threads of the postgres auxiliary processes
> are waiting on OS-level locks and the logon processes' threads are
> also waiting on a lock. The MS help desk said that an OS-level
> deadlock in PostgreSQL caused the OS freeze. I think that is a strange
> story. But, in fact, it did not happen in repeated tests when I removed
> PostgreSQL from the initial auto-starting services.
> 
> I tweaked PostgreSQL 9.3.x (the newest from the repository) to add
> a 0.5 or 3.0 second delay after each sub-process starts.
> And then the hang-up was gone. This test patch is attached.
> It is implemented only for Windows. Also, I did not use the existing
> pg_usleep because it contains locking code (e.g. WaitForSingleObject
> and Enter/LeaveCriticalSection).
> 
> Although the Windows OS may have some problems, I think we should have
> a means to avoid them. Could PostgreSQL accept such delay code
> as a build-time option controlled by preprocessor variables?

Did you try setting the service to "delayed start"?