Re: jsonpath syntax extensions

2022-03-21 Thread Erik Rijkers



Op 21-03-2022 om 21:13 schreef Greg Stark:

Hm. Actually... These changes were split off from the JSON_TABLE
patches? Are they still separate or have they been merged into those
other patches since? I see the JSON_TABLE thread is getting more
comments; do those reviews include these patches?



They are separate.

FWIW, I've done all my JSON_PATH testing both without and with these 
syntax extensions (but I've done no code review.)  I like these 
extensions but as you say -- there seems to be not much interest.



Erik


On Mon, 21 Mar 2022 at 16:09, Greg Stark  wrote:


This patch seems to be getting ignored. Like David I'm a bit puzzled
because it doesn't seem like an especially obscure or difficult patch
to review. Yet it's been multiple years without even a superficial
"does it meet the coding requirements" review let alone a design
review.

Can we get a volunteer to at least give it a quick once-over? I don't
think it's ideal to be doing this in the last CF but neither is it
very appetizing to just shift it to the next CF without a review after
two years...

On Thu, 27 Feb 2020 at 10:58, Nikita Glukhov  wrote:


Hi, hackers!

Attached patches implement several useful jsonpath syntax extensions.
I already published them two years ago in the original SQL/JSON thread,
but then after creation of separate threads for SQL/JSON functions and
JSON_TABLE I forgot about them.

A brief description of the patches:

1. Introduced new jsonpath modifier 'pg' which is used for enabling
PostgreSQL-specific extensions.  This feature was already proposed in the
discussion of jsonpath's like_regex implementation.

2. Added support for raw jbvObject and jbvArray JsonbValues inside the jsonpath
engine.  Currently, jsonpath can operate on JSON arrays and objects only in
jbvBinary form.  But with the introduction of array and object constructors in
patches #4 and #5, raw in-memory jsonb containers can appear in the jsonpath engine.
In some places we can iterate through jbvArrays; in others we need to encode
jbvArrays and jbvObjects into jbvBinary.

3. SQL/JSON sequence construction syntax. A simple comma-separated list can be
used to concatenate single values or sequences into a single resulting sequence.

  SELECT jsonb_path_query('[1, 2, 3]', 'pg $[*], 4, 2 + 3');
   jsonb_path_query
  --
   1
   2
   3
   4
   5

  SELECT jsonb_path_query('{ "a": [1, 2, 3], "b": [4, 5] }',
 'pg ($.a[*], $.b[*]) ? (@ % 2 == 1)');
jsonb_path_query
  --
   1
   3
   5


Patches #4-#6 implement ECMAScript-like syntax constructors and accessors:

4. Array construction syntax.
This can also be seen as enclosing a sequence constructor in brackets.

  SELECT jsonb_path_query('[1, 2, 3]', 'pg [$[*], 4, 2 + 3]');
   jsonb_path_query
  --
   [1, 2, 3, 4, 5]

With this feature, jsonb_path_query_array() becomes somewhat redundant.
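
For example, with this patch applied these two queries should give the same
result (the second one, of course, relies on the new array constructor):

  SELECT jsonb_path_query_array('[1, 2, 3]', '$[*]');
   jsonb_path_query_array
  -------------------------
   [1, 2, 3]

  SELECT jsonb_path_query('[1, 2, 3]', 'pg [$[*]]');
   jsonb_path_query
  ------------------
   [1, 2, 3]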


5. Object construction syntax.  It is useful for constructing derived objects
from the interesting parts of the original object.  (But this is not sufficient
to "project" each object in an array; an item method like '.map()' would be
needed for that.)

  SELECT jsonb_path_query('{"b": 2}', 'pg { a : 1, b : $.b, "x y" : $.b + 3 }');
  jsonb_path_query
  ---
   { "a" : 1, "b": 3, "x y": 5 }

Fields with empty values are simply skipped regardless of lax/strict mode:

  SELECT jsonb_path_query('{"a": 1}', 'pg { b : $.b, a : $.a ? (@ > 1) }');
   jsonb_path_query
  --
   {}


6. Object subscription syntax.  This gives us the ability to specify at runtime
which key to extract.  The syntax is the same as the ordinary array subscription
syntax.

  -- non-existent $.x is simply skipped in lax mode
  SELECT jsonb_path_query('{"a": "b", "b": "c"}', 'pg $[$.a, "x", "a"]');
   jsonb_path_query
  --
   "c"
   "b"

  SELECT jsonb_path_query('{"a": "b", "b": "c"}', 'pg $[$fld]', '{"fld": "b"}');
   jsonb_path_query
  --
   "c"

--
Nikita Glukhov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




--
greg









wrong link in acronyms.sgml

2020-12-02 Thread Erik Rijkers

Hi

I just noticed that in

  https://www.postgresql.org/docs/13/acronyms.html
(i.e., doc/src/sgml/acronyms.sgml)

there is under lemma 'HOT' a link with URL:

  
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD


Is this deliberate? Surely pointing 13-docs into HEAD-docs is wrong?

I see no better place to point it to than:

  
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;h=68c6709aa885d39f4a77c85b6c9a7c937c1a1518;hb=refs/heads/REL_13_STABLE



I made a patch to point there.

But such links with a hash are probably not maintainable; I don't know 
whether such a URL can be constructed without the hash. Probably not.


Otherwise it may be better to remove the link altogether and just have 
the acronym lemma: 'HOT' and explanation 'Heap-Only Tuple'.
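
In acronyms.sgml that would make the entry look roughly like this (a sketch,
assuming the variablelist structure used for the other entries):

 <varlistentry>
  <term><acronym>HOT</acronym></term>
  <listitem>
   <para>
    Heap-Only Tuples
   </para>
  </listitem>
 </varlistentry>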



Erik Rijkers





--- ./doc/src/sgml/acronyms.sgml.orig	2020-12-02 09:49:53.922841584 +0100
+++ ./doc/src/sgml/acronyms.sgml	2020-12-02 10:20:53.042665809 +0100
@@ -300,7 +300,7 @@
 
  
-  url="https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;hb=HEAD";>Heap-Only
+  url="https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/heap/README.HOT;h=68c6709aa885d39f4a77c85b6c9a7c937c1a1518;hb=refs/heads/REL_13_STABLE";>Heap-Only
   Tuples
  
 


Re: pgsql: Add key management system

2020-12-25 Thread Erik Rijkers

On 2020-12-25 16:19, Bruce Momjian wrote:


Add key management system
doc/src/sgml/database-encryption.sgml |  97 +


Attached are a few typos.

I also noticed that this option does not occur in the initdb --help:

  -u  --copy-encryption-keys

Was that deliberate?


Thanks,

Erik Rijkers






--- ./doc/src/sgml/database-encryption.sgml.orig	2020-12-25 19:11:55.809303009 +0100
+++ ./doc/src/sgml/database-encryption.sgml	2020-12-25 19:22:22.558936395 +0100
@@ -13,7 +13,7 @@
   log from being able to access the data stored in those files.
   For example, when using cluster file encryption, users who have read
   access to the cluster directories for backup purposes will not be able
-  to decrypt the data stored in the these files.
+  to decrypt the data stored in these files.
  
 
  
@@ -24,7 +24,7 @@
   Key one is used to encrypt write-ahead log (WAL) files.  Two different
   keys are used so that primary and standby servers can use different zero
   (heap/index/temp) keys, but the same one (WAL) key, so that these keys
-  can eventually be rotated by switching the primary to the standby as
+  can eventually be rotated by switching the primary to the standby
   and then changing the WAL key.
  
 
@@ -68,7 +68,7 @@
During the initdb process, if
--cluster-key-command is specified, two data-level
encryption keys are created.   These two keys are then encrypted with
-   the key enryption key (KEK) supplied by the cluster key command before
+   the key encryption key (KEK) supplied by the cluster key command before
being stored in the database directory.  The key or passphrase that
derives the key must be supplied from the terminal or stored in a
trusted key store, such as key vault software, hardware security module.
@@ -87,7 +87,7 @@
   
 
   
-   The data encryption keys are randomly generated and are of 128, 192,
+   The data encryption keys are randomly generated and are 128, 192,
or 256-bits in length.  They are encrypted by the key encryption key
(KEK) using Advanced Encryption Standard (AES256)
encryption in Galois/Counter Mode (GCM), which also


Re: proposal: schema variables

2020-12-25 Thread Erik Rijkers

On 2020-12-26 05:52, Pavel Stehule wrote:
so 19. 12. 2020 v 7:57 odesílatel Pavel Stehule 


napsal:
[schema-variables-20201222.patch.gz (~]


Hi

only rebase



rebase and comments fixes



Hi Pavel,

This file is exactly the same as the file you sent Tuesday. Is that a 
mistake?









Re: proposal: schema variables

2021-01-08 Thread Erik Rijkers

On 2021-01-08 07:20, Pavel Stehule wrote:

Hi

just rebase

[schema-variables-20200108.patch]


Hey Pavel,

My gcc 8.3.0 compile says:
(on debian 10/Buster)

utility.c: In function ‘CreateCommandTag’:
utility.c:2332:8: warning: this statement may fall through 
[-Wimplicit-fallthrough=]

tag = CMDTAG_SELECT;
^~~
utility.c:2334:3: note: here
   case T_LetStmt:
   ^~~~


compile, check, check-world, runs without further problem.

I also changed a few typos/improvements in the documentation, see 
attached.


One thing I wasn't sure of: I have assumed that
  ON TRANSACTIONAL END RESET

should be
  ON TRANSACTION END RESET

and changed it accordingly, please double-check.


Erik Rijkers
--- doc/src/sgml/ref/create_variable.sgml.orig	2021-01-08 17:40:20.181823036 +0100
+++ doc/src/sgml/ref/create_variable.sgml	2021-01-08 17:59:46.976127524 +0100
@@ -16,7 +16,7 @@
 
  
   CREATE VARIABLE
-  define a new permissioned typed schema variable
+  define a schema variable
  
 
  
@@ -29,24 +29,24 @@
   Description
 
   
-   The CREATE VARIABLE command creates a new schema variable.
+   The CREATE VARIABLE command creates a schema variable.
Schema variables, like relations, exist within a schema and their access is
controlled via GRANT and REVOKE commands.
-   Their changes are non-transactional by default.
+   Changing a schema variable is non-transactional by default.
   
 
   
The value of a schema variable is local to the current session. Retrieving
a variable's value returns either a NULL or a default value, unless its value
is set to something else in the current session with a LET command. By default,
-   the content of variable is not transactional, alike regular variables in PL languages.
+   the content of a variable is not transactional. This is the same as in regular variables in PL languages.
   
 
   
-   Schema variables are retrieved by the regular SELECT SQL command.
-   Their value can be set with the LET SQL command.
-   Notably, while schema variables share properties with tables, they cannot be updated
-   with UPDATE commands.
+   Schema variables are retrieved by the SELECT SQL command.
+   Their value is set with the LET SQL command.
+   While schema variables share properties with tables, their value cannot be updated
+   with an UPDATE command.
   
  
 
@@ -76,7 +76,7 @@
 name
 
  
-  The name (optionally schema-qualified) of the variable to create.
+  The name, optionally schema-qualified, of the variable.
  
 

@@ -85,7 +85,7 @@
 data_type
 
  
-  The name (optionally schema-qualified) of the data type of the variable to be created.
+  The name, optionally schema-qualified, of the data type of the variable.
  
 

@@ -105,7 +105,7 @@
 NOT NULL
 
  
-  The NOT NULL clause forbid to set the variable to
+  The NOT NULL clause forbids setting the variable to
   a null value. A variable created as NOT NULL and without an explicitly
   declared default value cannot be read until it is initialized by a LET
   command. This obliges the user to explicitly initialize the variable
@@ -118,22 +118,22 @@
 DEFAULT default_expr
 
  
-  The DEFAULT clause assigns a default data to
-  schema variable.
+  The DEFAULT clause can be used to assign a default value to
+  a schema variable.
  
 

 

-ON COMMIT DROP, ON TRANSACTIONAL END RESET
+ON COMMIT DROP, ON TRANSACTION END RESET
 
  
   The ON COMMIT DROP clause specifies the behaviour
-  of temporary schema variable at transaction commit. With this clause the
+  of a temporary schema variable at transaction commit. With this clause the
       variable is dropped at commit time. The clause is only allowed
-      for temporary variables. The ON TRANSACTIONAL END RESET
+      for temporary variables. The ON TRANSACTION END RESET
   clause enforces the variable to be reset to its default value when
-  the transaction is either commited or rolled back.
+  the transaction is committed or rolled back.
  
 

@@ -145,7 +145,7 @@
   Notes
 
   
-   Use DROP VARIABLE command to remove a variable.
+   Use the DROP VARIABLE command to remove a variable.
   
  
 
--- doc/src/sgml/ref/discard.sgml.orig	2021-01-08 18:02:25.837531779 +0100
+++ doc/src/sgml/ref/discard.sgml	2021-01-08 18:40:09.973630164 +0100
@@ -104,6 +104,7 @@
 DISCARD PLANS;
 DISCARD TEMP;
 DISCARD SEQUENCES;
+DISCARD VARIABLES;
 
 

--- doc/src/sgml/ref/drop_variable.sgml.orig	2021-01-08 18:05:28.643147771 +0100
+++ doc/src/sgml/ref/drop_variable.sgml	2021-01-08 18:07:17.876113523 +0100
@@ -16,7 +16,7 @@
 
  
   DROP VARIABLE
-  removes a schema variable
+  remove a schema variable
  
 
  
@@ -52,7 +52,7 @@
 name
 
  
-  The name (optionally schema-qualified) of a schema variable.
+  The name, optionally schema-qualified, of a schema variable.

logical replication seems broken

2021-02-12 Thread Erik Rijkers
Hello,

I am seeing errors in replication in a test program that I've been running for 
years with very little change (since 2017, really [1]).

The symptom:
HEAD-replication fails (most of the time) when cascading 3 instances (master+2 
replicas).

HEAD-replication works with 2 instances (master+replica).
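
Schematically, the cascade is just a chain of publications and subscriptions,
something like this (ports and names here are only illustrative):

  -- instance 1 (master)
  CREATE PUBLICATION pub1 FOR ALL TABLES;

  -- instance 2 (replica 1): subscribe to instance 1, publish again
  CREATE SUBSCRIPTION sub1
    CONNECTION 'host=localhost port=6515 dbname=testdb'
    PUBLICATION pub1;
  CREATE PUBLICATION pub2 FOR ALL TABLES;

  -- instance 3 (replica 2): subscribe to instance 2
  CREATE SUBSCRIPTION sub2
    CONNECTION 'host=localhost port=6516 dbname=testdb'
    PUBLICATION pub2;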

I have also compiled a server on top of 4ad31bb2ef25 (avoiding some recent 
changes) - and this server runs the same test program without failure; so I 
think the culprit might be somewhere in those changes.  Or (always possible) 
there might be something my testing does wrong - but then again, I do this test 
a few times every week, and it never fails.

This weekend I can dig into it some more (make a self-sufficient example) but I 
thought I'd mention it already; perhaps one of you sees the light immediately...


Erik Rijkers

[1] 
https://www.postgresql.org/message-id/flat/3897361c7010c4ac03f358173adbcd60%40xs4all.nl




Re: logical replication seems broken

2021-02-13 Thread Erik Rijkers
> On 02/13/2021 11:49 AM Amit Kapila  wrote:
> 
> On Fri, Feb 12, 2021 at 10:00 PM  wrote:
> >
> > > On 02/12/2021 1:51 PM Amit Kapila  wrote:
> > >
> > > On Fri, Feb 12, 2021 at 6:04 PM Erik Rijkers  wrote:
> > > >
> > > > I am seeing errors in replication in a test program that I've been 
> > > > running for years with very little change (since 2017, really [1]).
> >
> > Hi,
> >
> > Here is a test program.  Careful, it deletes stuff.  And it will need some 
> > changes:
> >
> 
> Thanks for sharing the test. I think I have found the problem.
> Actually, it was an existing code problem exposed by the commit
> ce0fdbfe97. In pgoutput_begin_txn(), we were sometimes sending the
> prepare_write ('w') message but then the actual message was not being
> sent. This was the case when we didn't find the origin of a txn. This
> can happen after that commit because we have now started using origins
> for tablesync workers as well and those origins are removed once the
> tablesync workers are finished. We might want to change the behavior
> related to the origin messages as indicated in the comments but for
> now, fixing the existing code.
> 
> Can you please test if the attached fixes the problem at your end as well?

> [fix_origin_message_1.patch]

I compiled just now a binary from HEAD, and a binary from HEAD+patch

HEAD is still broken; your patch rescues it, so yes, fixed.

Maybe a test (check or check-world) should be added to run a second replica?  
(Assuming that would have caught this bug)
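
A rough sketch of what such a test could look like, using the existing TAP
framework (node, table and subscription names are made up, and a real test
would also have to wait for the initial table sync):

  use strict;
  use warnings;
  use PostgresNode;
  use TestLib;
  use Test::More tests => 1;

  my @nodes;
  foreach my $name ('node_a', 'node_b', 'node_c')
  {
      my $node = get_new_node($name);
      $node->init(allows_streaming => 'logical');
      $node->start;
      $node->safe_psql('postgres', 'CREATE TABLE t (i int PRIMARY KEY)');
      push @nodes, $node;
  }
  my ($node_a, $node_b, $node_c) = @nodes;

  # chain: node_a -> node_b -> node_c
  $node_a->safe_psql('postgres', 'CREATE PUBLICATION pub_a FOR TABLE t');
  $node_b->safe_psql('postgres',
      "CREATE SUBSCRIPTION sub_b CONNECTION '"
        . $node_a->connstr('postgres') . "' PUBLICATION pub_a");
  $node_b->safe_psql('postgres', 'CREATE PUBLICATION pub_b FOR TABLE t');
  $node_c->safe_psql('postgres',
      "CREATE SUBSCRIPTION sub_c CONNECTION '"
        . $node_b->connstr('postgres') . "' PUBLICATION pub_b");

  # write on the top node, check that it arrives at the bottom node
  $node_a->safe_psql('postgres', 'INSERT INTO t SELECT generate_series(1, 100)');
  $node_a->wait_for_catchup('sub_b');
  $node_b->wait_for_catchup('sub_c');

  is($node_c->safe_psql('postgres', 'SELECT count(*) FROM t'),
      '100', 'changes replicated through the cascade');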


Thanks,

Erik Rijkers
 






> 
> -- 
> With Regards,
> Amit Kapila.




Re: Additional Chapter for Tutorial

2020-10-30 Thread Erik Rijkers

On 2020-10-30 11:57, Jürgen Purtz wrote:

On 26.10.20 15:53, David G. Johnston wrote:

Removing -docs as moderation won’t let me cross-post.



Hi,

I applied 0009-architecture-vs-master.patch to head
and went through architecture.sgml (only that file),
then produced the attached .diff


And I wrote down some separate items:

1.
'Two Phase Locking' and 'TPL' should be, I think,
'Two-Phase Commit'. Please someone confirm.
(no changes made)

2.
Comparing xid to a sequence just because they similarly 'count up' seems a 
bad idea.

(I don't think it's always true in the case of sequences)
(no changes made)

3.
'accesses' seems a somewhat strange word; most of the time just 'access' 
may be better.  Not sure - native speaker wanted. (no changes made)


4.
'heap', in postgres, often (always?) means files. But more generally, 
the term is associated with memory.  Therefore I think it would be good 
to explicitly use 'heap file' at least once in the beginning, to 
make clear that heap implies 'safely written away to disk'.  Again, I'm 
not quite sure whether my understanding is correct - I have made no changes 
in this regard.




Erik Rijkers
--- doc/src/sgml/architecture.sgml.orig	2020-10-30 15:19:54.469275256 +0100
+++ doc/src/sgml/architecture.sgml	2020-10-30 17:28:24.835233482 +0100
@@ -19,19 +19,18 @@
 In the case of PostgreSQL, the server
 launches a single process for each client connection, referred to as a
 Backend process.
-Those Backend processes handle the client's requests by acting on the
+Such a Backend process handles the client's requests by acting on the
 Shared Memory.
 This leads to other activities (file access, WAL, vacuum, ...) of the
 Instance. The
 Instance is a group of server-side processes acting on a common
-Shared Memory. Notably, PostgreSQL does not utilize application
-threading within its implementation.
+Shared Memory. PostgreSQL does not utilize threading.

 

-The first step in an Instance start is the start of the
+The first step when an Instance starts is the start of the
 Postmaster.
-He loads the configuration files, allocates Shared Memory, and
+It loads the configuration files, allocates Shared Memory, and
 starts the other processes of the Instance:
 Background Writer,
 Checkpointer,
@@ -66,32 +65,32 @@

 When a client application tries to connect to a
 database,
-this request is handled initially by the Postmaster. He
+this request is handled initially by the Postmaster. It
 starts a new Backend process and instructs the client
 application to connect to it. All further client requests
-go to this process and are handled by it.
+are handled by this process.

 

 Client requests like SELECT or
 UPDATE usually lead to the
-necessity to read or write some data. This is carried out
+necessity to read or write data. This is carried out
 by the client's backend process. Reads involve a page-level
-cache housed in Shared Memory (for details see:
+cache, located in Shared Memory (for details see:
 ) for the benefit of all processes
-in the instance. Writes also involve this cache, in additional
+in the instance. Writes also use this cache, in addition
 to a journal, called a write-ahead-log or WAL.

 

-Shared Memory is limited in size. Thus, it becomes necessary
+Shared Memory is limited in size and it can become necessary
 to evict pages. As long as the content of such pages hasn't
 changed, this is not a problem. But in Shared Memory also
 write actions take place. Modified pages are called dirty
 pages or dirty buffers and before they can be evicted they
-must be written back to disk. This happens regularly by the
+must be written to disk. This happens regularly by the
 Background Writer and the Checkpointer process to ensure
-that the disk version of the pages are kept up-to-date.
+that the disk version of the pages are up-to-date.
 The synchronisation from RAM to disk consists of two steps.

 
@@ -109,7 +108,7 @@
 Shared Memory. The parallel running WAL Writer process
 reads them and appends them to the end of the current
 WAL file.
-Such sequential writes are much faster than writes to random
+Such sequential writes are faster than writes to random
 positions of heap and index files. All WAL records created
 out of one dirty page must be transferred to disk before the
 dirty page itself can be transferred to disk in the second step.
@@ -119,19 +118,19 @@
 Second, the transfer of dirty buffers from Shared Memory to
 files must take place. This is the primary task of the
 Background Writer process. Because I/O activities can block
-other processes significantly, it starts

Re: Additional Chapter for Tutorial

2020-11-01 Thread Erik Rijkers

On 2020-11-01 16:38, Jürgen Purtz wrote:

On 30.10.20 17:45, Erik Rijkers wrote:


And I wrote down some separate items:

1.
'Two Phase Locking' and 'TPL' should be, I think,
'Two-Phase Commit'. Please someone confirm.
(no changes made)

Erik Rijkers


All suggestions so far are summarized in the attached patch with the
following exceptions:

- 'Two Phase Locking' is the intended term.


OK, so what is 'Two Phase Locking'?  The term is not explained, and not 
used anywhere else in the manual.  You propose to introduce it here, in 
the tutorial.  I don't know what it means, and I am not really a 
beginner.


'Two Phase Locking' should be explained somewhere, and how it relates 
(or not) to Two-Phase Commit (2PC), don't you agree?



Erik Rijkers


Re: Additional Chapter for Tutorial

2020-11-02 Thread Erik Rijkers

On 2020-11-02 09:26, Jürgen Purtz wrote:


OLD:

    A first approach to implement protections against concurrent
    access to the same data may be the locking of critical
    rows. Two such techniques are:
    Optimistic Concurrency Control (OCC)
    and Two Phase Locking (2PL).
    PostgreSQL implements a third, more
    sophisticated technique: Multiversion Concurrency
    Control (MVCC). The crucial advantage of MVCC ...

Proposal:

    A first approach to implement protections against concurrent
    access to the same data may be the locking of critical
    rows.
    PostgreSQL implements a more
    sophisticated technique which avoids any locking:
Multiversion Concurrency
    Control (MVCC). The crucial advantage of MVCC ...

Any thoughts or other suggestions?



Yes, just leave it out. Much better, as far as I'm concerned.

Erik






Re: Split copy.c

2020-11-02 Thread Erik Rijkers

On 2020-11-02 10:03, Heikki Linnakangas wrote:

While looking at the parallel copy patches, it started to annoy me how
large copy.c is. It confuses my little head. (Ok, it's annoyed me many
times in the past, but I haven't done anything about it.)



[0001-Split-copy.c-into-...o.c-and-copyfrom.c.patch]


There seems to be an oversight regarding contrib/file_fdw. (debian 10, gcc 
8.3.0)


After:

./configure  
--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.split_copy 
--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.split_copy/bin.fast 
--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.split_copy/lib.fast 
--with-pgport=6973 --quiet --enable-depend --with-libxml 
--with-libxslt --with-zlib --with-openssl  --enable-tap-tests  
--with-extra-version=_split_copy_1102_90d8



... these errors+warnings from contrib/file_fdw:


-- [2020.11.02 10:31:53 split_copy/1] make contrib
file_fdw.c:108:2: error: unknown type name ‘CopyState’
  CopyState cstate;   /* COPY execution state */
  ^
file_fdw.c: In function ‘fileBeginForeignScan’:
file_fdw.c:658:2: error: unknown type name ‘CopyState’; did you mean 
‘CopyToState’?

  CopyState cstate;
  ^
  CopyToState
file_fdw.c:680:10: warning: passing argument 3 of ‘BeginCopyFrom’ from 
incompatible pointer type [-Wincompatible-pointer-types]

  filename,
  ^~~~
In file included from file_fdw.c:24:
../../src/include/commands/copy.h:60:76: note: expected ‘Node *’ {aka 
‘struct Node *’} but argument is of type ‘char *’
 extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, 
Node *whereClause,
  
~~^~~
file_fdw.c:681:10: error: incompatible type for argument 4 of 
‘BeginCopyFrom’

  is_program,
  ^~
In file included from file_fdw.c:24:
../../src/include/commands/copy.h:61:23: note: expected ‘const char *’ 
but argument is of type ‘_Bool’

   const char *filename,
   ^~~~
In file included from ../../src/include/access/tupdesc.h:19,
 from ../../src/include/access/htup_details.h:19,
 from file_fdw.c:18:
../../src/include/nodes/pg_list.h:65:19: warning: passing argument 6 of 
‘BeginCopyFrom’ from incompatible pointer type 
[-Wincompatible-pointer-types]

 #define NIL  ((List *) NULL)
  ~^~
file_fdw.c:683:10: note: in expansion of macro ‘NIL’
  NIL,
  ^~~
In file included from file_fdw.c:24:
../../src/include/commands/copy.h:62:48: note: expected 
‘copy_data_source_cb’ {aka ‘int (*)(void *, int,  int)’} but argument is 
of type ‘List *’ {aka ‘struct List *’}
   bool is_program, copy_data_source_cb data_source_cb, List 
*attnamelist, List *options);

^~
file_fdw.c:678:11: error: too few arguments to function ‘BeginCopyFrom’
  cstate = BeginCopyFrom(NULL,
   ^
In file included from file_fdw.c:24:
../../src/include/commands/copy.h:60:22: note: declared here
 extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, 
Node *whereClause,

  ^
file_fdw.c: In function ‘fileIterateForeignScan’:
file_fdw.c:714:20: warning: cast to pointer from integer of different 
size [-Wint-to-pointer-cast]

  errcallback.arg = (void *) festate->cstate;
^
file_fdw.c:731:30: warning: passing argument 1 of ‘NextCopyFrom’ makes 
pointer from integer without a cast [-Wint-conversion]

  found = NextCopyFrom(festate->cstate, NULL,
   ~~~^~~~
In file included from file_fdw.c:24:
../../src/include/commands/copy.h:64:40: note: expected ‘CopyFromState’ 
{aka ‘struct CopyFromStateData *’} but argument is of type ‘int’

 extern bool NextCopyFrom(CopyFromState cstate, ExprContext *econtext,

  ~~^~
file_fdw.c: In function ‘fileReScanForeignScan’:
file_fdw.c:751:21: warning: passing argument 1 of ‘EndCopyFrom’ makes 
pointer from integer without a cast [-Wint-conversion]

  EndCopyFrom(festate->cstate);
  ~~~^~~~
In file included from file_fdw.c:24:
../../src/include/commands/copy.h:63:39: note: expected ‘CopyFromState’ 
{aka ‘struct CopyFromStateData *’} but argument is of type ‘int’

 extern void EndCopyFrom(CopyFromState cstate);
 ~~^~
file_fdw.c:755:17: warning: passing argument 3 of ‘BeginCopyFrom’ from 
incompatible pointer type [-Wincompatible-pointer-types]

  festate->filename,
  ~~~^~
In file included from file_fdw.c:24:
../../src/include/commands/copy.h:60:76: note: expected ‘Node *’ {aka 
‘struct Node *’} but argument is of type ‘char *’
 extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, 
Node *whereClause,
  
~~^~~
file_fdw.c:756:

Re: Split copy.c

2020-11-02 Thread Erik Rijkers

On 2020-11-02 12:19, Heikki Linnakangas wrote:

On 02/11/2020 11:36, Erik Rijkers wrote:

On 2020-11-02 10:03, Heikki Linnakangas wrote:
While looking at the parallel copy patches, it started to annoy me 
how
large copy.c is. It confuses my little head. (Ok, it's annoyed me 
many

times in the past, but I haven't done anything about it.)



[0001-Split-copy.c-into-...o.c-and-copyfrom.c.patch]


There seems to be an oversight of contrib/file_fdw. (debian 10, gcc
8.3.0)


Ah yeah, I missed updating file_fdw. Here's a new patch.


Something still not quite right in the compile-with-assert:

-- [2020.11.02 12:49:12 split_copy/0] make core: make --quiet -j 4   
(speed 0=debug_assertions  speed 1=fast)



In file included from ../../../src/include/postgres.h:46,
 from copyto.c:15:
copyto.c: In function ‘BeginCopyTo’:
copyto.c:477:11: error: ‘is_from’ undeclared (first use in this 
function); did you mean ‘is_program’?

   Assert(!is_from);
   ^~~
../../../src/include/c.h:790:9: note: in definition of macro ‘Assert’
   if (!(condition)) \
 ^
copyto.c:477:11: note: each undeclared identifier is reported only once 
for each function it appears in

   Assert(!is_from);
   ^~~
../../../src/include/c.h:790:9: note: in definition of macro ‘Assert’
   if (!(condition)) \
 ^
make[3]: *** [../../../src/Makefile.global:921: copyto.o] Error 1
make[2]: *** [common.mk:39: commands-recursive] Error 2
make[2]: *** Waiting for unfinished jobs
make[1]: *** [Makefile:42: all-backend-recurse] Error 2
make: *** [GNUmakefile:11: all-src-recurse] Error 2







Re: Additional Chapter for Tutorial

2020-11-07 Thread Erik Rijkers

On 2020-11-07 13:24, Jürgen Purtz wrote:



Because there have been no more comments in the last days I created a
consolidated patch. It contains Erik's suggestion and some tweaks for
the text size within graphics.

[0011-architecture.patch]


Hi,

I went through architecture.sgml once more; some proposed changes 
attached.


And in some .svg files I noticed 'jungest' which should be 'youngest', I 
suppose.

I did not change them, but below is the file list from  grep -l 'jung'.

./doc/src/sgml/images/freeze-ink.svg
./doc/src/sgml/images/freeze-ink-svgo.svg
./doc/src/sgml/images/freeze-raw.svg
./doc/src/sgml/images/wraparound-ink.svg
./doc/src/sgml/images/wraparound-ink-svgo.svg
./doc/src/sgml/images/wraparound-raw.svg


Thanks,

Erik


--- doc/src/sgml/architecture.sgml.orig	2020-11-07 14:05:50.188396026 +0100
+++ doc/src/sgml/architecture.sgml	2020-11-07 20:04:27.890983873 +0100
@@ -24,7 +24,7 @@
 This leads to other activities (file access, WAL, vacuum, ...) of the
 Instance. The
 Instance is a group of server-side processes acting on a common
-Shared Memory. PostgreSQL does not utilize threading.
+Shared Memory. PostgreSQL does not use threading.

 

@@ -78,7 +78,7 @@
 cache, located in Shared Memory (for details see:
 ) for the benefit of all processes
 in the instance. Writes also use this cache, in addition
-to a journal, called a write-ahead-log or WAL.
+to a journal, called the write-ahead-log or WAL.

 

@@ -90,20 +90,20 @@
 must be written to disk. This happens regularly by the
 Checkpointer and the Background Writer processes to ensure
 that the disk version of the pages are up-to-date.
-The synchronisation from RAM to disk consists of two steps.
+The synchronization from RAM to disk consists of two steps.

 

 First, whenever the content of a page changes, a
 WAL record
-is created out of the delta-information (difference between the
+is created from the delta-information (difference between the
 old and the new content) and stored in another area of
 Shared Memory. The parallel running WAL Writer process
 reads them and appends them to the end of the current
 WAL file.
 Such sequential writes are faster than writes to random
 positions of heap and index files. All WAL records created
-out of one dirty page must be transferred to disk before the
+from one dirty page must be transferred to disk before the
 dirty page itself can be transferred to disk in the second step.

 
@@ -123,22 +123,22 @@
 Checkpoints.
 A Checkpoint is a point in time when all older dirty buffers,
 all older WAL records, and finally a special Checkpoint record
-are written and flushed to disk. Heap and index files
-on the one hand and WAL files on the other hand are in sync.
-Previous WAL is no longer required. In other words,
+are written and flushed to disk. Heap and index files,
+and WAL files are now in sync.
+Older WAL is no longer required. In other words,
 a possibly occurring recovery, which integrates the delta
 information of WAL into heap and index files, will happen
-by replaying only WAL past the last recorded checkpoint.
-This limits the amount of WAL which needs to be replayed
+by replaying only WAL past the last-recorded checkpoint.
+This limits the amount of WAL to be replayed
 during recovery in the event of a crash.

 

-While the Checkpointer ensures that the database system can crash
-and restart itself in a valid state, the administrator needs
+While the Checkpointer ensures that the database system can,
+after a crash, restart itself in a valid state, the administrator needs
 to handle the case where the heap or other files become
 corrupted (and possibly the locally written WAL, though that is
-less common). The options and details are covered extensively
+less common). Options and details are covered
 in the backup and restore section ().
 For our purposes here, just note that the WAL Archiver process
 can be enabled and configured to run a script on filled WAL
@@ -153,7 +153,7 @@
 

 The Logger writes text lines about serious and less serious
-events which can happen during database access, e.g., wrong
+events that may happen during database access, e.g., wrong
 password, no permission, long-running queries, etc.

 
@@ -262,7 +262,7 @@
 all tablespace names, and all
 role names are automatically
 available throughout the cluster, independent from
-the database or schema in which they where defined originally.
+the database or schema in which they were defined originally.
 
 shows the relation between the object types.

@@ -410,13 +410,11 @@

 

-A first approach to implement protections against concurrent
-access to the same data may be the locking of critical rows.
-PostgreSQL

Re: Supporting = operator in gin/gist_trgm_ops

2020-11-14 Thread Erik Rijkers

On 2020-11-14 06:30, Alexander Korotkov wrote:


[v4-0001-Handle-equality...in-contrib-pg_trgm.patch (~]

I'm going to push this if no objections.



About the sgml, in doc/src/sgml/pgtrgm.sgml :


Beginning in PostgreSQL 14, these indexes 
also support equality operator (simple comparison operators are not 
supported).


should be:

Beginning in PostgreSQL 14, these indexes 
also support the equality operator (simple comparison operators are not 
supported).


(added 'the')


And:

Although these indexes might have lower the performance of equality 
operator

search than regular B-tree indexes.

should be (I think - please check the meaning)

Although these indexes might have a lower performance with equality 
operator

search than with regular B-tree indexes.


I am not sure I understood this last sentence correctly. Does this mean 
the slower trgm index might be chosen over the faster btree?
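
For instance (just a sketch, table and index names made up), one could compare
the plans with something like:

  CREATE EXTENSION pg_trgm;
  CREATE TABLE words (w text);
  INSERT INTO words SELECT md5(g::text) FROM generate_series(1, 100000) g;
  CREATE INDEX words_btree_idx ON words (w);
  CREATE INDEX words_trgm_idx  ON words USING gin (w gin_trgm_ops);
  ANALYZE words;
  EXPLAIN SELECT * FROM words WHERE w = 'foo';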



Thanks,

Erik Rijkers





Re: Supporting = operator in gin/gist_trgm_ops

2020-11-14 Thread Erik Rijkers

On 2020-11-14 12:53, Julien Rouhaud wrote:
On Sat, Nov 14, 2020 at 6:07 PM Alexander Korotkov 
 >


   Note that those indexes may not be as afficient as regulat B-tree 
indexes

   for equality operator.



'afficient as regulat'  should be
'efficient as regular'


Sorry to be nitpicking - it's the one thing I'm really good at :P

Erik




Re: Supporting = operator in gin/gist_trgm_ops

2020-11-15 Thread Erik Rijkers

On 2020-11-15 06:55, Alexander Korotkov wrote:


> Sorry to be nitpicking - it's the one thing I'm really good at :P


Hi Alexander,

The last touch... (you forgot the missing 'the')

thanks!

Erik Rijkers
--- doc/src/sgml/pgtrgm.sgml.orig	2020-11-15 08:00:54.607816533 +0100
+++ doc/src/sgml/pgtrgm.sgml	2020-11-15 08:17:23.243316332 +0100
@@ -421,7 +421,7 @@
trigram-based index searches for LIKE, ILIKE,
~ and ~* queries.  Beginning in
PostgreSQL 14, these indexes also support
-   equality operator (simple comparison operators are not supported).
+   the equality operator (simple comparison operators are not supported).
Note that those indexes may not be as efficient as regular B-tree indexes
for equality operator.
   


doc CREATE INDEX

2020-11-16 Thread Erik Rijkers

This one seems to have fallen by the wayside.

Erik
--- doc/src/sgml/ref/create_index.sgml.orig	2020-11-16 13:04:29.923760413 +0100
+++ doc/src/sgml/ref/create_index.sgml	2020-11-16 13:04:54.260093095 +0100
@@ -746,7 +746,7 @@
   
 
   
-   The regularly system collects statistics on all of a table's
+   The system regularly collects statistics on all of a table's
columns.  Newly-created non-expression indexes can immediately
use these statistics to determine an index's usefulness.
For new expression indexes, it is necessary to run 

Re: Additional Chapter for Tutorial - arch-dev.sgml

2020-11-20 Thread Erik Rijkers

On 2020-11-15 19:45, Jürgen Purtz wrote:




(smallish) Changes to arch-dev.sgml

Erik
--- ./doc/src/sgml/arch-dev.sgml.orig	2020-11-20 19:14:21.576775798 +0100
+++ ./doc/src/sgml/arch-dev.sgml	2020-11-20 21:43:10.435370787 +0100
@@ -7,7 +7,7 @@
Author

 This chapter originated as part of
-, Stefan Simkovics'
+ Stefan Simkovics'
 Master's Thesis prepared at Vienna University of Technology under the direction
 of O.Univ.Prof.Dr. Georg Gottlob and Univ.Ass. Mag. Katrin Seyr.

@@ -17,10 +17,7 @@
This chapter gives an overview of the internal structure of the
backend of PostgreSQL.  After having
read the following sections you should have an idea of how a query
-   is processed. This chapter does not aim to provide a detailed
-   description of the internal operation of
-   PostgreSQL, as such a document would be
-   very extensive. Rather, this chapter is intended to help the reader
+   is processed.  This chapter is intended to help the reader
understand the general sequence of operations that occur within the
backend from the point at which a query is received, to the point
at which the results are returned to the client.
@@ -30,8 +27,8 @@
The Path of a Query
 

-Here we give a short overview of the stages a query has to pass in
-order to obtain a result.
+Here we give a short overview of the stages a query has to pass 
+to obtain a result.

 

@@ -125,7 +122,7 @@
 use a supervisor process (also
 master process) that spawns a new
 server process every time a connection is requested. This supervisor
-process is called postgres and listens at a
+process is called postgres (formerly 'postmaster') and listens at a
 specified TCP/IP port for incoming connections. Whenever a request
 for a connection is detected the postgres
 process spawns a new server process. The server tasks
@@ -230,7 +227,7 @@
 
  A detailed description of bison or
  the grammar rules given in gram.y would be
- beyond the scope of this paper. There are many books and
+ beyond the scope of this manual. There are many books and
  documents dealing with flex and
  bison. You should be familiar with
  bison before you start to study the
@@ -343,8 +340,8 @@

 
  In some situations, examining each possible way in which a query
- can be executed would take an excessive amount of time and memory
- space. In particular, this occurs when executing queries
+ can be executed would take an excessive amount of time and memory.
+ In particular, this occurs when executing queries
  involving large numbers of join operations. In order to determine
  a reasonable (not necessarily optimal) query plan in a reasonable amount
  of time, PostgreSQL uses a Genetic
@@ -411,7 +408,7 @@
 merge join: Each relation is sorted on the join
 attributes before the join starts. Then the two relations are
 scanned in parallel, and matching rows are combined to form
-join rows. This kind of join is more
+join rows. This kind of join is
 attractive because each relation has to be scanned only once.
 The required sorting might be achieved either by an explicit sort
 step, or by scanning the relation in the proper order using an
@@ -442,7 +439,7 @@
  If the query uses fewer than 
  relations, a near-exhaustive search is conducted to find the best
  join sequence.  The planner preferentially considers joins between any
- two relations for which there exist a corresponding join clause in the
+ two relations for which there exists a corresponding join clause in the
  WHERE qualification (i.e., for
  which a restriction like where rel1.attr1=rel2.attr2
  exists). Join pairs with no join clause are considered only when there


Re: proposal: schema variables

2021-01-14 Thread Erik Rijkers

On 2021-01-14 07:35, Pavel Stehule wrote:

[schema-variables-20210114.patch.gz]



Build is fine. My (small) list of tests run OK.

I did notice a few more documentation peculiarities:


'The PostgreSQL has schema variables'  should be
'PostgreSQL has schema variables'


A link to the LET command should be added to the 'See Also' of the 
CREATE VARIABLE, ALTER VARIABLE, and DROP VARIABLE pages. (After all, 
the LET command is the most interesting)
Similarly, an ALTER VARIABLE link should be added to the 'See Also' 
section of LET.



Somehow, the sgml in the doc files causes overly large spacing in the html; 
for example:

I copy from the LET html:
   'if that is defined.  If no explicit'
   (6 spaces between 'defined.' and 'If')
Can you have a look?  Sorry - I can't find the cause.  It occurs in a 
few more places in the newly added sgml/html.  The unwanted spaces are 
visible also in the pdf.

(firefox 78.6.1, debian)


Thanks,

Erik Rijkers






Re: proposal: schema variables

2021-01-18 Thread Erik Rijkers

On 2021-01-18 10:59, Pavel Stehule wrote:



and here is the patch
[schema-variables-20200118.patch.gz ]



One small thing:

The drop variable synopsis is:

DROP VARIABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]

In the text following it, 'RESTRICT' is not documented. When used it 
does not give an error but I don't see how it 'works'.



Erik







cfbot building docs - serving results

2021-01-19 Thread Erik Rijkers

Hi Thomas,

I am wondering if the cfbot at the moment is building the docs 
(html+pdf), for the patches that it tests.  I suppose that it does?  If 
so, what happens with the resulting (doc)files? To /dev/null?   They are 
not available as far as I can see.  Would it be feasible to make them 
available, either serving the html, or to make docs html+pdf a 
downloadable zipfile?


(it would also be useful to be able see at a glance somewhere if the 
patch contains sgml-changes at all...)



Thanks,

Erik Rijkers




Re: SQL/JSON: functions

2021-01-23 Thread Erik Rijkers

On 2021-01-20 03:49, Nikita Glukhov wrote:


[0001-Add-common-SQL-JSON-clauses-v52.patch.gz]
[0002-SQL-JSON-constructors-v52.patch.gz]
[0003-IS-JSON-predicate-v52.patch.gz]
[0004-SQL-JSON-query-functions-v52.patch.gz]
[0005-SQL-JSON-functions-for-json-type-v52.patch.gz]
[0006-GUC-sql_json-v52.patch.gz]


Hi,

I read through the file func.sgml (only that file) and put the 
errors/peculiarities in the attached diff.  (Small stuff; typos really)



Your patch includes a CREATE TABLE my_films + INSERT, to run the 
examples against.  I think this is a great idea and we should do it more 
often.


But, the table has a text column to contain the subsequently inserted 
json values. The insert runs fine, but it turns out that some later 
example queries only run against a jsonb column.  So I propose to 
change:

  CREATE TABLE my_films (js text);
to:
  CREATE TABLE my_films (js jsonb);

This change is not yet included in the attached file.  An alternative 
would be to cast the text column to jsonb in the example queries (js::jsonb).
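
For instance, the JSON_QUERY example further down would then read:

  SELECT JSON_QUERY(js::jsonb, '$.favorites[*].kind' ERROR ON ERROR) FROM my_films;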



I also noticed that some error messages in the sgml file differ from what 
actually happens:



   SELECT JSON_QUERY(js, '$.favorites[*].kind' ERROR ON ERROR) FROM 
my_films_jsonb;
   (table 'my_films_jsonb' is the same as your 'my_films', but with js 
as a jsonb column)


manual says: "ERROR: more than one SQL/JSON item"
 in reality: "ERROR: JSON path expression in JSON_QUERY should return 
singleton item without wrapper"
and:   "HINT: use WITH WRAPPER clause to wrap SQL/JSON item 
sequence into array"



Thanks,

Erik Rijkers



--- ./doc/src/sgml/func.sgml.orig	2021-01-20 14:52:35.564407275 +0100
+++ ./doc/src/sgml/func.sgml	2021-01-23 11:09:11.582465755 +0100
@@ -16968,7 +16968,7 @@
  
 
  
-   All SQL/JSON functions fall into one of the two groups.
+   All SQL/JSON functions fall into one of two groups.
Constructor functions
generate JSON data from values of SQL types.
Query functions
@@ -17034,7 +17034,7 @@
  Description
 
  
-  JSON function generates a JSON
+  The JSON function generates a JSON
   from a text data.
  
 
@@ -17049,7 +17049,7 @@
 
  
   String expression that provides the JSON text data.
-  Accepted any character strings (text, char, etc.)
+  Accepts any character strings (text, char, etc.)
   or binary strings (bytea) in UTF8 encoding.
   For null input, SQL null value is returned.
  
@@ -17110,7 +17110,7 @@
 
  Notes
  
-  Alternatively, you can construct JSON values simply
+  Alternatively, you can construct JSON values by simply
   using PostgreSQL-specific casts to 
   json and jsonb types.
  
@@ -17118,7 +17118,7 @@
 
  Examples
  
-  Construct a JSON the provided strings:
+  Construct a JSON using the provided strings:
  
 
 SELECT JSON('{ "a" : 123, "b": [ true, "foo" ], "a" : "bar" }');
@@ -17173,7 +17173,7 @@
   JSON.
   For null input, SQL  null
   (not a JSON null) value is returned.
-  For any scalar other than a number, a Boolean, the text representation
+  For any scalar other than a number or a Boolean, the text representation
   will be used, with escaping as necessary to make it a valid
   JSON string value.
   For details, see
@@ -17208,7 +17208,7 @@
 
  Examples
  
-  Construct a JSON from the provided values various types:
+  Construct a JSON from the provided values of various types:
  
 
 SELECT JSON_SCALAR(123.45);
@@ -17250,7 +17250,7 @@
  Description
 
  
-  JSON_OBJECT function generates a JSON
+  The JSON_OBJECT function generates a JSON
   object from SQL or JSON data.
  
 
@@ -17463,7 +17463,7 @@
  Description
 
  
-  JSON_OBJECTAGG function aggregates the provided data
+  The JSON_OBJECTAGG function aggregates the provided data
   into a JSON object. You can use this function to combine values
   stored in different table columns into pairs. If you specify a GROUP BY
   or an ORDER BY clause, this function returns a separate JSON object
@@ -17689,7 +17689,7 @@
  Description
 
  
-  JSON_ARRAY function constructs a JSON array from
+  The JSON_ARRAY function constructs a JSON array from
   the provided SQL or JSON data.
  
 
@@ -17855,7 +17855,7 @@
  Description
 
  
-  JSON_ARRAYAGG function aggregates the provided SQL
+  The JSON_ARRAYAGG function aggregates the provided SQL
   or JSON data into a JSON array.
  
 
@@ -18038,7 +18038,7 @@
  Description
 
  
-  JSON_EXISTS function checks whether the provided
+  The JSON_EXISTS function checks whether the provided

Re: Routine usage information schema tables

2021-02-09 Thread Erik Rijkers
> On 02/09/2021 3:06 PM Peter Eisentraut  
> wrote:
> 
>  
> Several information schema views track dependencies between 
> functions/procedures and objects used by them.  These had not been

> [0001-Routine-usage-information-schema-tables.patch]

Spotted one typo:

included here ony if
included here only if


Erik Rijkers




Re: Schema variables - new implementation for Postgres 15

2021-12-18 Thread Erik Rijkers

Op 19-12-2021 om 07:23 schreef Pavel Stehule:



I am sending new versions of patches.

I hope I solved all Tomas's objections.

1. The schema variables were renamed to session variables
2. I fixed issues related to creating, dropping variables under 
subtransactions + regress tests

3. I fixed issues in pg_dump + regress tests



> [0001-schema-variables-20211219.patch]
> [0002-schema-variables-20211219.patch]

Hi Pavel,

I get an error during test 'session_variables'.

(on the upside, my own little testsuite runs without error)

thanks,

Erik Rijkers









diff -U3 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out
 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out
--- 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/expected/session_variables.out
   2021-12-19 07:48:40.422821574 +0100
+++ 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.schema_variables/src/test/regress/results/session_variables.out
2021-12-19 08:00:29.286820957 +0100
@@ -32,11 +32,11 @@
 SET ROLE TO DEFAULT;
 -- check output
 \dV+ var1
-List of variables
-  Schema  | Name |  Type   | Is nullable | Is mutable | Default | Owner | 
Transactional end action |   Access privileges| Description 
---+--+-+-++-+---+--++-
- svartest | var1 | numeric | t   | t  | | pavel |  
| pavel=SW/pavel+| 
-  |  | | || |   |  
| var_test_role=SW/pavel | 
+   List of 
variables
+  Schema  | Name |  Type   | Is nullable | Is mutable | Default |  Owner   | 
Transactional end action | Access privileges | Description 
+--+--+-+-++-+--+--+---+-
+ svartest | var1 | numeric | t   | t  | | aardvark |   
   | aardvark=SW/aardvark +| 
+  |  | | || |  |   
   | var_test_role=SW/aardvark | 
 (1 row)
 
 REVOKE ALL ON VARIABLE var1 FROM var_test_role;
@@ -745,10 +745,10 @@
 CREATE VARIABLE var1 AS int DEFAULT 100;
 COMMENT ON VARIABLE var1 IS 'some variable comment';
 \dV+ var1
-   List of 
variables
-  Schema  | Name |  Type   | Is nullable | Is mutable | Default | Owner | 
Transactional end action | Access privileges |  Description  
---+--+-+-++-+---+--+---+---
- svartest | var1 | integer | t   | t  | 100 | pavel |  
|   | some variable comment
+List of 
variables
+  Schema  | Name |  Type   | Is nullable | Is mutable | Default |  Owner   | 
Transactional end action | Access privileges |  Description  
+--+--+-+-++-+--+--+---+---
+ svartest | var1 | integer | t   | t  | 100 | aardvark |   
   |   | some variable comment
 (1 row)
 
 DROP VARIABLE var1;


Re: Schema variables - new implementation for Postgres 15

2021-12-19 Thread Erik Rijkers

> [0001-schema-variables-20211219-2.patch]
> [0002-schema-variables-20211219-2.patch]

Hi Pavel,

You said earlier
> 1. The schema variables were renamed to session variable

But I still see:
$ grep -Eic 'schema variable' postgres.html
15

(postgres.html from 'make postgres.html')

So that rename doesn't seem finished.


Erik







biblio.sgml dead link

2022-01-04 Thread Erik Rijkers

On master, doc/src/sgml/biblio.sgml  has a biblioentry for a pdf from ISO:

"Information technology — Database languages — SQL Technical Reports —
Part SQL Notation support 6: (JSON) for JavaScript Object"

That pdf was a 2017 edition, but the url now points to a .zip that no 
longer exists.


The replacement is a ~200 euro pdf (2021).  I'd be thankful if someone 
would send the pdf to me; maybe I can update my JSON tests.


And we should remove that entry from the bibliography (or have it point 
to the new page [1]).



Erik Rijkers


[1] https://www.iso.org/standard/78937.html




Re: support for MERGE

2022-01-14 Thread Erik Rijkers

Op 13-01-2022 om 13:43 schreef Alvaro Herrera:

Apologies, there was a merge failure and I failed to notice.  Here's the
correct patch.

(This is also in github.com/alvherre/postgres/tree/merge-15)



[20220113/v6-0001-MERGE-SQL-Command-following-SQL-2016.patch]


Good morning,


I got into this crash (may be the same as Jaime's):

#-
# bash

t1=table1
t2=table2

psql -qX << SQL
drop table if exists $t1 cascade;
drop table if exists $t2 cascade;
create table $t1 as select /*cast(id::text as jsonb) js,*/ id from 
generate_series(1,20) as f(id);
create table $t2 as select /*cast(id::text as jsonb) js,*/ id from 
generate_series(1,20) as f(id);

delete from $t1 where id % 2 = 1;
delete from $t2 where id % 2 = 0;
(   select 't1 - target', count(*) t1 from $t1
  union all select 't2 - source', count(*) t2 from $t2 ) order by 1;

merge into $t1 as t1 using $t2 as t2 on t1.id = t2.id
when not matched and (t2.id > 10) and (t1.id > 10)
 then  do nothing
when not matched then  insert values ( id )
when matched then  do nothing ;

(   select 't1 - target', count(*) t1 from $t1
  union all select 't2 - source', count(*) t2 from $t2 ) order by 1 ;

SQL
#-

Referencing alias 't1' in the WHEN NOT MATCHED member seems to be what 
triggers his crash: when I remove the phrase 'and (t1.id > 10)', the 
statement finishes correctly.



And I don't know if it is related but if I use this phrase:

when not matched and (id > 10)

I get:

ERROR:  column "id" does not exist
LINE 2: when not matched and id > 0 -- (t2.id > 10) and (t1.id > 10)
 ^
HINT:  There is a column named "id" in table "t1", but it cannot be 
referenced from this part of the query.


Is that hint correct? Seems a bit strange.


Thanks,

Erik Rijkers




Re: support for MERGE

2022-01-14 Thread Erik Rijkers




Op 13-01-2022 om 13:43 schreef Alvaro Herrera:

Apologies, there was a merge failure and I failed to notice.  Here's the
correct patch.

(This is also in github.com/alvherre/postgres/tree/merge-15)



[v6-0001-MERGE-SQL-Command-following-SQL-2016]


I read though the MERGE-docs; some typos:

'For example, given MERGE foo AS f'
'For example, given MERGE INTO foo AS f'

'that clause clause'
'that clause'

'This is same as'
'This is the same as'

'for certain type of action'
'for certain types of action'

'The MERGE allows the user'
'MERGE allows the user'
(from the paragraph 'Read Committed Isolation Level'.  Likely 
copied from the paragraph above: 'The DELETE'; but there it refers to an 
earlier mention of DELETE.)



Erik Rijkers




Re: proposal: schema variables - doc

2021-03-17 Thread Erik Rijkers


> On 2021.03.13. 07:01 Pavel Stehule  wrote:
> Hi
> fresh rebase
> [schema-variables-20210313.patch.gz]


Hi Pavel,

I notice that the phrase 'schema variable' is not in the index at the end 
('bookindex.html').  Not good.

It is also not in the index at the front of the manual - also not good.

Maybe these two (front and back index) can be added?


If a user searches the pdf, the first occurrence he finds is at:

  43.13.2.4. Global variables and constants
  (in itself that occurrence/mention is all right, but is should not be the 
first find, I think)

(I think earlier versions of the patch had an entry in the 'contents', i.e., at 
the front of the manual.)  I think it would be good to have it in the 
front index, pointing to either LET or CREATE VARIABLE, or maybe even to a 
small introductory paragraph somewhere else (again, I seem to remember that 
there was one in an earlier patch version).


Of the new commands that this patch brings, 'LET' is the most immediately 
illuminating for a user (even though a CREATE VARIABLE has to be done first).  
There is an entry 'LET' in the index (good), but it would be better if the 
phrase 'schema variable' also occurred with that LET entry.  (I don't know if 
that's possible.)


Then, in the CREATE VARIABLE paragraphs it says
   'Changing a schema variable is non-transactional by default.'

I think that, unless there exists a mode where schema vars can be made 
transactional, 'by default' should be deleted (and there is no such 
'transactional mode' for schema variables, is there?).  The 'Description' also 
has such a 'By default' which is better removed for the same reason.


In the CREATE VARIABLE page the example is:

CREATE VARIABLE var1 AS integer;
SELECT var1;

I suggest to make that

CREATE VARIABLE var1 AS date;
LET var1 = (select current_date);
SELECT var1;

So that the example immediately shows an application of functionality.


Thanks,

Erik Rijkers













> 
> Pavel




Re: pl/pgsql feature request: shorthand for argument and local variable references

2021-03-24 Thread Erik Rijkers
> On 2021.03.24. 08:09 Pavel Stehule  wrote:
> [...]
> [plpgsql-routine-label-20210324.patch]


Hi,

In that sgml

"the functions' arguments"

should be

"the function's arguments"


Erik




Re: truncating timestamps on arbitrary intervals

2021-03-24 Thread Erik Rijkers
> On 2021.03.24. 16:38 Peter Eisentraut  
> wrote:

> 
> Committed.
> 

'In cases full units' seems strange.

Not a native speaker but maybe the attached changes are improvements?


Erik Rijkers--- ./doc/src/sgml/func.sgml.orig	2021-03-24 18:16:01.269515354 +0100
+++ ./doc/src/sgml/func.sgml	2021-03-24 18:18:31.695819520 +0100
@@ -9907,13 +9907,13 @@

 

-In cases full units (1 minute, 1 hour, etc.), it gives the same result as
+In case of full units (1 minute, 1 hour, etc.), it gives the same result as
 the analogous date_trunc call, but the difference is
 that date_bin can truncate to an arbitrary interval.

 

-The stride interval cannot contain units of month
+The stride interval cannot contain units of a month
 or larger.

   


Re: SQL/JSON: JSON_TABLE

2021-03-26 Thread Erik Rijkers
> On 2021.03.26. 21:28 Andrew Dunstan  wrote:
> On 3/25/21 8:10 AM, David Steele wrote:
> > On 1/20/21 8:42 PM, Nikita Glukhov wrote:
> >> Thank you for review.
> >>
> >> Attached 45th version of the patches. "SQL/JSON functions" patch
> >> corresponds to
> >> v52 patch set posted in the separate thread.
> >
> > Another rebase needed (http://cfbot.cputube.org/patch_32_2902.log),
> > marked Waiting on Author.
> >
> > I can see that Álvaro suggested that the patch be split up so it can
> > be reviewed and committed in pieces. It looks like you've done that to
> > some extent, but I wonder if more can be done. In particular, it looks
> > like that first patch could be broken up -- at lot.
> >
> >
> 
> I've rebased this. Note that the large first patch is just the
> accumulated patches from the 'SQL/JSON functions' thread, and should be
> reviewed there. Only patches 2 thru 4 should be reviewed here. In fact
> there are no changes at all in those patches from the previous set other
> than a little patch fuzz. The only substantial changes are in patch 1,
> which had bitrotted. However, I'm posting a new series to keep the
> numbering in sync.
> 
> If the cfbot is happy I will set back to 'Needs review'

> 0001-SQL-JSON-functions-v46.patch
> 0002-JSON_TABLE-v46.patch
> 0003-JSON_TABLE-PLAN-DEFAULT-clause-v46.patch
> 0004-JSON_TABLE-PLAN-clause-v46.patch


Hi,

The four v46 patches apply fine, but on compile I get (debian/gcc):

make --quiet -j 4
make[3]: *** No rule to make target 'parse_jsontable.o', needed by 
'objfiles.txt'.  Stop.
make[3]: *** Waiting for unfinished jobs
make[2]: *** [parser-recursive] Error 2
make[2]: *** Waiting for unfinished jobs
make[1]: *** [all-backend-recurse] Error 2
make: *** [all-src-recurse] Error 2
common.mk:39: recipe for target 'parser-recursive' failed
Makefile:42: recipe for target 'all-backend-recurse' failed
GNUmakefile:11: recipe for target 'all-src-recurse' failed


Erik




Re: SQL/JSON: JSON_TABLE

2021-03-30 Thread Erik Rijkers
> On 2021.03.27. 02:12 Nikita Glukhov  wrote:
> 
> Attached 47th version of the patches.
> 
[..]
> 
> I have added forgotten files and fixed the first patch.
> 
> [0001-SQL-JSON-functions-v47.patch]
> [0002-JSON_TABLE-v47.patch]
> [0003-JSON_TABLE-PLAN-DEFAULT-clause-v47.patch]
> [0004-JSON_TABLE-PLAN-clause-v47.patch]

Hi,

Apply, build all fine.  It also works quite well, and according to 
specification, as far as I can tell.

But today I ran into:

ERROR:  function ExecEvalJson not in llvmjit_types.c

I think that it is caused by:

set enable_bitmapscan = off;

(I installed llvm a few days ago. llvm-3.9-dev on this debian stretch).


This is the test sql I concocted, which runs fine with enable_bitmapscan on 
(the default):

select jt1.* 
from myjsonfile100k as t(js, id) 
  , json_table(
  t.js
   , '$' columns (
      "lastname"   text    path  '$. "lastname" '
  , "firstname"  text    path  '$. "firstname"'
  , "date"       text    path  '$. "date" '
  , "city"       text    path  '$. "city" '
  , "country"    text    path  '$. "country"  '
  , "name 0(1)"  text    path  '$. "array"[0] '
  , "name 4(5)"  text    path  '$. "array"[4] '
  , "names"  text[]  path  '$. "array"'
  , "randfloat"  float   path  '$. "random float" '
)
) as jt1
where  js @> ('[ { "city": "Santiago de Cuba" } ]')
   and js[0]->>'firstname' = 'Gilda'
;
ERROR:  function ExecEvalJson not in llvmjit_types.c

That statement only errors out if the table is large enough. I have no time now 
to make a sample table but if no-one understands the problem off-hand, I'll try 
to construct such a table later this week (the one I'm using is large, 1.5 GB).
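For the record, a table of roughly the right shape could be generated like this 
(a sketch, not the actual table I used; only the column names match the query 
above):

  create table myjsonfile100k (js jsonb, id serial);
  insert into myjsonfile100k (js)
  select jsonb_build_array(jsonb_build_object(
           'lastname',     'name_'  || i,
           'firstname',    'first_' || i,
           'date',         '2021-03-30',
           'city',         'Santiago de Cuba',
           'country',      'Cuba',
           'array',        jsonb_build_array('a', 'b', 'c', 'd', 'e'),
           'random float', random() * 100))
  from generate_series(1, 100000) as i;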


Erik Rijkers




Re: SQL/JSON: JSON_TABLE

2021-03-30 Thread Erik Rijkers


> On 2021.03.30. 22:25 Nikita Glukhov  wrote:
> 
>  
> On 30.03.2021 19:56, Erik Rijkers wrote:
> 
> >> On 2021.03.27. 02:12 Nikita Glukhov  wrote:
> >>
> >> Attached 47th version of the patches.
> > Hi,
> >
> > Apply, build all fine.  It also works quite well, and according to 
> > specification, as far as I can tell.
> >
> > But today I ran into:
> >
> > ERROR:  function ExecEvalJson not in llvmjit_types.c
> >
> > I think that it is caused by:
> >
> > set enable_bitmapscan = off;
> >
> > (I installed llvm a few days ago. llvm-3.9-dev on this debian stretch).
> >
> >
> > This is the test sql I concocted, which runs fine with enable_bitmapscan on 
> > (the default):
> >
> > select jt1.*
> > from myjsonfile100k as t(js, id)
> >, json_table(
> >t.js
> > , '$' columns (
> >  "lastname"   text    path  '$. "lastname" '
> >, "firstname"  text    path  '$. "firstname"'
> >, "date"       text    path  '$. "date" '
> >, "city"       text    path  '$. "city" '
> >, "country"    text    path  '$. "country"  '
> >, "name 0(1)"  text    path  '$. "array"[0] '
> >, "name 4(5)"  text    path  '$. "array"[4] '
> >, "names"  text[]  path  '$. "array"'
> >, "randfloat"  float   path  '$. "random float" '
> >  )
> > ) as jt1
> > where  js @> ('[ { "city": "Santiago de Cuba" } ]')
> > and js[0]->>'firstname' = 'Gilda'
> > ;
> > ERROR:  function ExecEvalJson not in llvmjit_types.c
> >
> > That statement only errors out if the table is large enough. I have no time 
> > now to make a sample table but if no-one understands the problem off-hand, 
> > I'll try to construct such a table later this week (the one I'm using is 
> > large, 1.5 GB).
> 
> Thank you for testing.
> 
> 
> I think you can try to add 3 missing functions references to the end of
> src/backend/jit/llvm/llvmjit_types.c:
> 
>   void       *referenced_functions[] =
>   {
>   ...
>   ExecEvalXmlExpr,
> +    ExecEvalJsonConstructor,
> +    ExecEvalIsJsonPredicate,
> +    ExecEvalJson,
>   MakeExpandedObjectReadOnlyInternal,
>   ...
>   };
> 
> 
> If this fixes problem, I will add this to the new version of the patches.

It does almost fix it, but there is a typo in the above:
+  ExecEvalIsJsonPredicate should be changed to:
+  ExecEvalJsonIsPredicate.

With that change the problem vanishes.

Thanks!

Erik Rijkers








> 
> -- 
> Nikita Glukhov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company




Re: fix old confusing JSON example

2021-04-03 Thread Erik Rijkers
> On 2021.04.03. 14:01 Erik Rijkers  wrote:
>  
> Hello,
> 
> Attached is a small but confusing mistake in the json documentation (a @@ 
> instead of @?) that has been there since version 12.  (It took me quite some 
> time to figure that out while testing with the recent SQL/JSON patches -- 
> which I initially blamed).
>  
> To be applied from 12, 13, and master.

Oops, sent to wrong list.

Let me add some arguments for the change:

The original text is:
--
Also, GIN index supports @@ and @? operators, which perform jsonpath matching.

  SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == "qui"';
  SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] ? (@ == "qui")';

--
So, that gives information on two operators, and then gives one example query 
for each.  Clearly, the second example was meant to illustrate a where-clause 
with the  @?  operator.

Small change to prevent great confusion (I'll admit it took me far too long to 
understand this).
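To spell the difference out (my understanding, using standalone values instead 
of the doc's api table): @@ checks the result of a jsonpath predicate, while @? 
checks whether the path returns any item, so a filter expression belongs with @?.

  select '{"tags": ["qui", "quo"]}'::jsonb @@ '$.tags[*] == "qui"';        --> true
  select '{"tags": ["qui", "quo"]}'::jsonb @? '$.tags[*] ? (@ == "qui")';  --> true

  -- a filter expression combined with @@ does not yield true/false -- hence the confusion:
  select '{"tags": ["qui", "quo"]}'::jsonb @@ '$.tags[*] ? (@ == "qui")';  --> null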

thanks,

Erik Rijkers


> 
> Thanks,
> 
> Erik Rijkers

--- doc/src/sgml/json.sgml.orig	2021-04-03 13:47:29.484510936 +0200
+++ doc/src/sgml/json.sgml	2021-04-03 13:47:53.028866752 +0200
@@ -489,7 +489,7 @@
 SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == "qui"';
 
 
-SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] ? (@ == "qui")';
+SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @? '$.tags[*] ? (@ == "qui")';
 
 GIN index extracts statements of following form out of
 jsonpath: accessors_chain = const.


hba.c:3160:18: warning: comparison of unsigned enum expression

2021-04-07 Thread Erik Rijkers
Recently (last day or so), I get this warning from gcc 10.2:

-
hba.c:3160:18: warning: comparison of unsigned enum expression < 0 is always 
false [-Wtautological-compare]
if (auth_method < 0 || USER_AUTH_LAST < auth_method)
~~~ ^ ~
1 warning generated.
-

Erik




Re: hba.c:3160:18: warning: comparison of unsigned enum expression

2021-04-07 Thread Erik Rijkers
> On 2021.04.07. 13:57 Michael Paquier  wrote:
> 
>  
> On Wed, Apr 07, 2021 at 01:24:01PM +0200, Magnus Hagander wrote:
> > On Wed, Apr 7, 2021 at 1:01 PM Erik Rijkers  wrote:
> > > Recently (last day or so), I get this warning from gcc 10.2:

> [gcc-hba-warning.patch]

FWIW, this fixes the warning.

(and no, I don't think I am using special gcc settings..)

Erik




Re: SQL/JSON: JSON_TABLE

2021-04-12 Thread Erik Rijkers
> On 2021.03.27. 02:12 Nikita Glukhov  wrote:
> Attached 47th version of the patches.

We're past feature freeze for 14 and alas, JSON_TABLE has not made it.

I have tested quite a bit with it and because I didn't find any trouble with 
functionality or speed, I wanted to at least mention that here once.

I looked at v47, these files
> [0001-SQL-JSON-functions-v47.patch]
> [0002-JSON_TABLE-v47.patch]
> [0003-JSON_TABLE-PLAN-DEFAULT-clause-v47.patch]
> [0004-JSON_TABLE-PLAN-clause-v47.patch]
> [manual_addition_fixed.patch]  # for this see [1], [2]

   (v47 doesn't apply anymore, as cfbot shows, but instances can still be built 
on top of 6131ffc43ff from 30 march 2021)

I hope it will fare better next round, version 15.

Thanks,

Erik Rijkers

[1] 
https://www.postgresql.org/message-id/69eefc5a-cabc-8dd3-c689-93da038c0d6a%40postgrespro.ru
[2] 
https://www.postgresql.org/message-id/19181987.22943.1617141503618%40webmailclassic.xs4all.nl


> -- 
> Nikita Glukhov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company




Re: Postgres perl module namespace

2021-10-19 Thread Erik Rijkers

Op 19-10-2021 om 20:54 schreef Andrew Dunstan:




Discussion has gone quiet and the tree is now relatively quiet, so now
seems like a good time to do this. See attached patches.



> [0001-move-perl-test-modules-to-PostgreSQL-Test-namespace.patch ]
> [0002-move-PostgreSQL-Test-PostgresVersion-up-in-the-names.patch]


Those patches gave some complaints about 
PostgreSQL/Test/PostgresVersion.pm being absent, so I added this 
deletion.  I'm not sure that's correct, but it enabled the build, and 
check-world ran without errors.



Erik Rijkers
--- src/test/perl/Makefile.orig2	2021-10-19 21:40:38.388116778 +0200
+++ src/test/perl/Makefile	2021-10-19 21:40:52.208346619 +0200
@@ -23,13 +23,11 @@
 	$(INSTALL_DATA) $(srcdir)/PostgreSQL/Test/SimpleTee.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/SimpleTee.pm'
 	$(INSTALL_DATA) $(srcdir)/PostgreSQL/Test/RecursiveCopy.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/RecursiveCopy.pm'
 	$(INSTALL_DATA) $(srcdir)/PostgreSQL/Test/Cluster.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/Cluster.pm'
-	$(INSTALL_DATA) $(srcdir)/PostgreSQL/Test/PostgresVersion.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/PostgresVersion.pm'
 
 uninstall:
 	rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/Utils.pm'
 	rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/SimpleTee.pm'
 	rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/RecursiveCopy.pm'
 	rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/Cluster.pm'
-	rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/PostgresVersion.pm'
 
 endif


Re: [PATCH] Proposal for HIDDEN/INVISIBLE column

2021-10-27 Thread Erik Rijkers

Op 27-10-2021 om 16:33 schreef Gilles Darold:


- Fix documentation about some sgml tag broken.


About ROWTYPE generating an error when SELECT * INTO or RETURNING * INTO
is used with unexpanded column, I have kept things like that because it
is the normal behavior. I have checked on others database engine and
this is the same.1



And finally I found the reason of the diff on compression.out and 
collate.linux.utf8.out, new version v5 of the patch attached.



> [ 0001-hidden-column-v5.patch ]


This warning during compile from gcc 11.2:

pg_dump.c: In function ‘dumpTableSchema’:
pg_dump.c:16327:56: warning: comparison of constant ‘0’ with boolean 
expression is always true [-Wbool-compare]

16327 | if (tbinfo->attisunexpanded[j] >= 0)
  |^~

Otherwise, build, make check, check-world are OK.  Also the pdf builds OK.

Thanks,

Erik Rijkers










Re: [PATCH] Proposal for HIDDEN/INVISIBLE column

2021-10-28 Thread Erik Rijkers

Op 27-10-2021 om 18:02 schreef Gilles Darold:


Otherwise, build, make check, check-world are OK.  Also the pdf builds
ok.


Thanks Erik, new version v6 attached.


Hi,

Another small thing: the test_decoding module was overlooked, I think. 
Below is output from make check-world (this error does not occur in master)



Erik


== running regression test queries==
test ddl  ... FAILED 1210 ms
test xact ... ok   22 ms
test rewrite  ... ok  176 ms
test toast... ok  292 ms
test permissions  ... ok   24 ms
test decoding_in_xact ... ok   23 ms
test decoding_into_rel... ok   33 ms
test binary   ... ok   16 ms
test prepared ... ok   21 ms
test replorigin   ... ok   23 ms
test time ... ok   22 ms
test messages ... ok   26 ms
test spill... ok 2407 ms
test slot ... ok  424 ms
test truncate ... ok   21 ms
test stream   ... ok   31 ms
test stats... ok 1097 ms
test twophase ... ok   46 ms
test twophase_stream  ... ok   28 ms
== shutting down postmaster   ==

===
 1 of 19 tests failed.
===

The differences that caused some tests to fail can be viewed in the
file 
"/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding/regression.diffs". 
 A copy of the test summary that you see
above is saved in the file 
"/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding/regression.out".


../../src/makefiles/pgxs.mk:451: recipe for target 'check' failed
make[2]: *** [check] Error 1
make[2]: Leaving directory 
'/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding'

Makefile:94: recipe for target 'check-test_decoding-recurse' failed
make[1]: *** [check-test_decoding-recurse] Error 2
make[1]: Leaving directory 
'/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib'

GNUmakefile:71: recipe for target 'check-world-contrib-recurse' failed
make: *** [check-world-contrib-recurse] Error 2


diff -U3 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding/expected/ddl.out
 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding/results/ddl.out
--- 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding/expected/ddl.out
 2021-10-27 22:50:11.953480787 +0200
+++ 
/home/aardvark/pg_stuff/pg_sandbox/pgsql.hide_column/contrib/test_decoding/results/ddl.out
  2021-10-28 00:50:05.019304167 +0200
@@ -438,12 +438,12 @@
 WITH (user_catalog_table = true)
 ;
 \d+ replication_metadata
- Table 
"public.replication_metadata"
-  Column  |  Type   | Collation | Nullable | Default   
   | Storage  | Stats target | Description 
---+-+---+--+--+--+--+-
- id   | integer |   | not null | 
nextval('replication_metadata_id_seq'::regclass) | plain|  | 
- relation | name|   | not null |   
   | plain|  | 
- options  | text[]  |   |  |   
   | extended |  | 
+   Table 
"public.replication_metadata"
+  Column  |  Type   | Collation | Nullable | Default   
   | Expanded | Storage  | Stats target | Description 
+--+-+---+--+--+--+--+--+-
+ id   | integer |   | not null | 
nextval('replication_metadata_id_seq'::regclass) |  | plain|
  | 
+ relation | name|   | not null |   
   |  | plain|  | 
+ options  | text[]  |   |  |   
   |  | extended |  | 
 Indexes:
 "replication_metadata_pkey" PRIMARY KEY, btree (id)
 Options: user_catalog_table=true
@@ -452,12 +452,12 @@
 VALUES ('foo', ARRAY['a', 'b']);
 ALTER TABLE replication_metadata RESET (user_catalog_table);
 \d+ replication_metadata
- Table 
"public.replication_metadata"
-  Column  |  Type   | Collation | Nullable | Default   
   | 

Re: 2021-11-11 release announcement draft

2021-11-09 Thread Erik Rijkers

Op 09-11-2021 om 17:25 schreef Jonathan S. Katz:


Attached please find a draft of the release announcement for 2021-11-11. 


"overflowed-subsraction"  should probably be
"overflowed-subtraction"


Erik





Re: WIP: Covering + unique indexes.

2018-03-28 Thread Erik Rijkers

On 2018-03-28 16:59, Anastasia Lubennikova wrote:

Here is the new version of the patch set.


I can't get these to apply:

patch -b -l -F 25 -p 1 < 
/home/aardvark/download/pgpatches/0110/covering_indexes/20180328/0001-Covering-core-v8.patch



1 out of 19 hunks FAILED -- saving rejects to file 
src/backend/utils/cache/relcache.c.rej



$ cat src/backend/utils/cache/relcache.c.rej
--- src/backend/utils/cache/relcache.c
+++ src/backend/utils/cache/relcache.c
@@ -542,7 +542,7 @@
attp = (Form_pg_attribute) 
GETSTRUCT(pg_attribute_tuple);


if (attp->attnum <= 0 ||
-   attp->attnum > relation->rd_rel->relnatts)
+   attp->attnum > 
RelationGetNumberOfAttributes(relation))
elog(ERROR, "invalid attribute number %d for 
%s",
 attp->attnum, 
RelationGetRelationName(relation));






Erik Rijkers




Re: WIP: Covering + unique indexes.

2018-04-05 Thread Erik Rijkers

On 2018-04-05 00:09, Alexander Korotkov wrote:

Hi!

Thank you for review!  Revised patchset is attached.
[0001-Covering-core-v12.patch]
[0002-Covering-btree-v12.patch]
[0003-Covering-amcheck-v12.patch]
[0004-Covering-natts-v12.patch]


Really nice performance gains.

I read through the docs and made some changes.  I hope it counts as 
an improvement.


It would probably also be a good idea to add the term "covering index" 
somewhere, at least in the documentation's index; the term currently does not 
occur anywhere.  (This doc patch does not add it.)
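For anyone new to the feature, the basic idea is (a minimal sketch, not taken 
from the patch):

  create table t (a int, b int, c int);
  create unique index t_a_b_idx on t using btree (a, b) include (c);
  -- c lives only in the leaf tuples: it can be returned by an
  -- index-only scan, but takes no part in uniqueness or tree navigation
  select a, b, c from t where a = 1 and b = 2;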


thanks,

Erik Rijkers
--- doc/src/sgml/ref/create_index.sgml.orig	2018-04-05 14:36:00.904617793 +0200
+++ doc/src/sgml/ref/create_index.sgml	2018-04-05 15:49:03.778805965 +0200
@@ -148,31 +148,27 @@
   INCLUDE
   

-An optional INCLUDE clause allows to specify the
-list of columns which will be included in the non-key part of the index.
-Columns listed in this clause cannot co-exist as index key columns,
-and vice versa.  The INCLUDE columns exist solely to
+The optional INCLUDE clause specifies a
+list of columns which will be included as a non-key part in the index.
+Columns listed in this clause cannot also be present as index key columns.
+The INCLUDE columns exist solely to
 allow more queries to benefit from index-only scans
-by including specified columns into the index.  Values of these columns
+by including the values of the specified columns in the index.  These values
 would otherwise have to be obtained by reading the table's heap.
-Having these columns in the INCLUDE clause
-in some cases allows PostgreSQL to skip
-the heap read completely.

 

-In the UNIQUE indexes, uniqueness is only enforced
+In UNIQUE indexes, uniqueness is only enforced
 for key columns.  Columns listed in the INCLUDE
-clause have no influence to uniqueness enforcement.  Other constraints
+clause have no effect on uniqueness enforcement.  Other constraints
 (PRIMARY KEY and EXCLUDE) work the same way.

 

-Columns listed in the INCLUDE clause doesn't need
-appropriate operator class to exist.  Therefore,
-INCLUDE clause if useful to add non-key index
-columns, whose data types don't have operator classes defined for
-given access method.
+Columns listed in the INCLUDE clause don't need
+appropriate operator classes; the clause can contain non-key index
+columns whose data types don't have operator classes defined for
+a given access method.

 

@@ -182,12 +178,12 @@
 

 Currently, only the B-tree index access method supports this feature.
-In B-tree indexes, values of columns listed in the
-INCLUDE clause are included into leaf tuples which
-are linked to the heap tuples, but aren't included into pivot tuples
+In B-tree indexes, the values of columns listed in the
+INCLUDE clause are included in leaf tuples which
+are linked to the heap tuples, but are not included into pivot tuples
 used for tree navigation.  Therefore, moving columns from the list of
 key columns to the INCLUDE clause can slightly
-reduce index size and improve tree branching factor.
+reduce index size and improve the tree branching factor.

   
  


Re: WIP: Covering + unique indexes. (the good and the bad)

2018-04-07 Thread Erik Rijkers

On 2018-04-06 20:08, Alexander Korotkov wrote:


[0001-Covering-v15.patch]



After some more testing I notice there is also a down-side/slow-down to 
this patch that is not so bad but more than negligible, and I don't 
think it has been mentioned (but I may have missed something in this 
thread that's now been running for 1.5 years, not to mention the 
tangential btree-thread(s)).


I attach my test-program, which compares master (this morning) with 
covered_indexes (warning: it takes a while to generate the used tables).


The test tables are created as:
  create table $t (c1 int, c2 int, c3 int, c4 int);
  insert into $t (select x, 2*x, 3*x, 4 from generate_series(1, 
$rowcount) as x);
  create unique index ${t}uniqueinclude_idx on $t using btree (c1, c2) 
include (c3, c4);


or for HEAD, just:
  create unique index ${t}unique_idx on $t using btree (c1, c2);


Here is typical output (edited a bit to prevent email-mangling):

test1:
-- explain analyze select c1, c2 from nt0___1 where c1 < 1   
-- 250x
unpatched 6511: 100M rows Execution Time:  (normal/normal)  98 %  exec 
avg: 2.44
  patched 6976: 100M rows Execution Time: (covered/normal) 108 %  exec 
avg: 2.67
   test1 patched / 
unpatched: 109.49 %


test4:
-- explain analyze select c1, c2 from nt0___1 where c1 < 1 
and c3 < 20
unpatched 6511: 100M rows Execution Time:  (normal/normal)  95 %exec 
avg: 1.56
  patched 6976: 100M rows Execution Time: (covered/normal)  60 %exec 
avg: 0.95
   test4 patched / 
unpatched:  60.83 %



So the main good thing is that 60%, a good improvement -- but that ~109% 
(a slow-down) is also quite repeatable.


(there are more goodies in the patch (like improved insert speed) 
but I just wanted to draw attention to this particular slow-down too)


I took all timings from explain analyze versions of the statements, on 
the assumption that that would be quite comparable to 'normal' querying. 
(please let me know if that introduces error).



# \dti+ nt0___1*
                                List of relations
 Schema |           Name           | Type  |  Owner   |  Table  |  Size
--------+--------------------------+-------+----------+---------+---------
 public | nt0___1                  | table | aardvark |         | 4224 MB
 public | nt0___1uniqueinclude_idx | index | aardvark | nt0___1 | 3004 MB



(for what it's worth, I'm in favor of getting this patch into v11 
although I can't say I followed the technical details too much)



thanks,


Erik Rijkers



#!/bin/env perl
#!/opt/perl-5.26/bin/perl
use strict;
use warnings;
use DBI;
use Time::HiRes qw/tv_interval gettimeofday/;
use Getopt::Long;
$| = 1;
our $PGPORT_VANILLA  =  6511;
our $PGPORT_COVERING_INDEXES =  6976;
our $SQL_REPEAT = 251;

main();
exit;

sub size_unit { 
  1_000_000
}
sub main {

  my $size   = 100; #rowcount in millions; this $size variable determines the table used

  GetOptions ("size=i" => \$size)   or die("Error in command line arguments\n");

  my $dbh_patched  = connectdb_covering_indexes();
  my $dbh_vanilla  = connectdb_vanilla();
  my $port_patched = check_debug_state( $dbh_patched  );
  my $port_vanilla = check_debug_state( $dbh_vanilla );

  # create tables on patched instance
  for my $n (1, 10, 100) { # , 250 ) {
my $rowcount = $n * size_unit();
create_tables($dbh_patched, $port_patched, $rowcount, my $overwrite = 0);
  }

  # create tables on vanilla instance
  for my $n (1, 10, 100) { # , 250 ) {
my $rowcount = $n * size_unit();
create_tables($dbh_vanilla, $port_vanilla, $rowcount, my $overwrite = 0);
  }

# print sprintf("-- Perl %vd\n", $^V)
#, "-- ", $dbh_vanilla->selectrow_arrayref( "select version()" )->[0], "\n"
#, "-- ", $dbh_patched->selectrow_arrayref( "select version()" )->[0], "\n" ;

  my $c1 = 1; ##  5000 + int(rand(5000)) + 1;
  my $c3 =20; ##   20 + int(rand(  30)) + 1;

  #  $c1 =  5000; ##  5000 + int(rand(5000)) + 1;
  #  $c3 =20; ##   20 + int(rand(  30)) + 1;

  # enable to vary WHERE-clause a little bit:
  if (0) {
 $c1 = 5000 + int(rand(5000)) + 1;
 $c3 =   20 + int(rand(  30)) + 1;
  }
  

  my $vanilla = test1($dbh_vanilla, $port_vanilla, $size, $c1);
  my $patched = test1($dbh_patched, $port_patched, $size, $c1);
  print " "x84, sprintf( "%6.2f %%  <- test1, patched / unpatched\n",  ((average($patched) * 100) / average($vanilla)) );

# test2($dbh_vanilla, $port_vanilla, $size, $c1);
# test2($dbh_patched, $port_patched, $size, $c1);

# test3($dbh_vanilla, $port_vanilla, $size, $c1, $c3);
# test3($dbh_patched, $port_patched, $size, $c1, $c3);

  $v

Re: WIP: Covering + unique indexes. (the good and the bad)

2018-04-07 Thread Erik Rijkers

On 2018-04-07 14:27, Alexander Korotkov wrote:

On Sat, Apr 7, 2018 at 2:57 PM, Erik Rijkers  wrote:


On 2018-04-06 20:08, Alexander Korotkov wrote:


[0001-Covering-v15.patch]

After some more testing I notice there is also a down-side/slow-down 
to
this patch that is not so bad but more than negligible, and I don't 
think

it has been mentioned (but I may have missed something in this thread
that's now been running for 1.5 year, not to mention the tangential
btree-thread(s)).

I attach my test-program, which compares master (this morning) with
covered_indexes (warning: it takes a while to generate the used 
tables).


The test tables are created as:
  create table $t (c1 int, c2 int, c3 int, c4 int);
  insert into $t (select x, 2*x, 3*x, 4 from generate_series(1, 
$rowcount)

as x);
  create unique index ${t}uniqueinclude_idx on $t using btree (c1, c2)
include (c3, c4);

or for HEAD, just:
  create unique index ${t}unique_idx on $t using btree (c1, c2);



Do I understand correctly that you compare unique index on (c1, c2) 
with
master to unqiue index on (c1, c2) include (c3, c4) with patched 
version?

If so then I think it's wrong to say about down-side/slow-down of this
patch based on this comparison.
Patch *does not* cause slowdown in this case.  Patch provides user a 
*new
option* which has its advantages and disadvantages.  And what you 
compare
is advantages and disadvantages of this option, not slow-down of the 
patch.

In the case you compare *the same* index on master and patched version,
then it's possible to say about slow-down of the patch.


OK, I take your point -- you are right.  Although my measurement was (I 
think) correct, my comparison was not (as Teodor wrote, not quite 
'fair').


Sorry, I should have thought that message through better.  The somewhat 
longer time is indeed just a disadvantage of this new option, to be 
balanced against the advantages that are pretty clear too.



Erik Rijkers



Re: don't see materialized views in information_schema

2019-09-11 Thread Erik Rijkers

On 2019-09-11 08:14, Pavel Stehule wrote:

Hi

[matviews not showing up in information_schema.tables]

Is it expected behave? Tested on master branch.


I think it is; it has been like this all along.

( matviews are in pg_matviews. )
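For example:

  -- materialized views are listed here, not in information_schema.tables:
  select schemaname, matviewname from pg_matviews;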





Re: Define jsonpath functions as stable

2019-09-17 Thread Erik Rijkers

On 2019-09-17 17:38, Jonathan S. Katz wrote:

On 9/16/19 6:39 PM, Jonathan S. Katz wrote:
[regex.patch]


A few things/typos caught my eye:

1.
'implementation' seems the wrong word in sentence:

"Several other parts of the SQL standard
also define LIKE_REGEX equivalents that refer
to this implementation, including the
SQL/JSON path like_regex filter."

As I understand this text, 'concept' seems better.
I'd drop 'also', too.

2.
'whereas the POSIX will those'  should be
'whereas POSIX will regard those'
 or maybe 'read those'

3.
+ The SQL/JSON standard borrows its definition for how regular 
expressions
+ from the LIKE_REGEX operator, which in turns 
uses the

+ XQuery standard.
That sentence needs the verb 'work', no?  'for how regular expressions 
work [..]'

Or alternatively drop 'how'.


thanks,

Erik Rijkers






patch tester symbols

2019-03-06 Thread Erik Rijkers

Hi,

Context: I'm trying to compile the jsonpath v36 patches (these apply 
OK), and on top of those the jsonfunctions and jsontable patch series.


That fails for me (on 
0001-Implementation-of-SQL-JSON-path-language-v36.patch), and now I'm 
wondering why that does not agree with what the patch-tester page shows 
( http://commitfest.cputube.org/ ).


The patch-tester page does not explain what the colors and symbols mean. 
 Of course one can guess that 'red' and 'cross' are bad, and 'green' and 
'check' are good.


But some questions remain:
- Some symbols' color is 'filled-in' (solid), and some are not.  What 
does that mean?

- For each patch there are three symbols; what do those three stand for?
- I suppose there is a regular schedule of apply and compile of each 
patch.  How often does it happen?  Can I see how recent a particular 
reported state is?


Can you throw some light on this?

thanks,

Erik Rijkers



Re: patch tester symbols

2019-03-06 Thread Erik Rijkers

On 2019-03-06 13:00, Nikita Glukhov wrote:

On 06.03.2019 13:11, Erik Rijkers wrote:


Context: I'm trying to compile the jsonpath v36 patches (these apply 
OK), and on top of those the jsonfunctions and jsontable patch series.



Patch 0001-Implementation-of-SQL-JSON-path-language-v36.patch from the
"SQL/JSON: functions" patch set is a simply squash of all 6 jsonpath
patches on which this patch set depends. I included it into this patch
set just for testing in patch-tester (I guess patch-tester cannot
handle patch dependencies). If you apply SQL/JSON patch sets step by
step, then you need to apply only patch 0002 from "SQL/JSON:
functions" and "SQL/JSON: JSON_TABLE".


Ah, that explains it. I suppose I should have guessed that.

Applied in that way it built fine (apply, compile, check-world OK).

Thank you,

Erik Rijkers



Re: [HACKERS] proposal: schema variables

2019-03-24 Thread Erik Rijkers

On 2019-03-24 06:57, Pavel Stehule wrote:

Hi

rebase against current master



I ran into this:

(schema 'varschema2' does not exist):

drop variable varschema2.testv cascade;
ERROR:  schema "varschema2" does not exist
create variable if not exists testv as text;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


(both statements are needed to force the crash)


thanks,

Erik Rijkers





Re: [HACKERS] proposal: schema variables

2019-03-25 Thread Erik Rijkers

On 2019-03-24 10:32, Pavel Stehule wrote:

ne 24. 3. 2019 v 10:25 odesílatel Erik Rijkers  napsal:


On 2019-03-24 06:57, Pavel Stehule wrote:
> Hi
>
> rebase against current master

I ran into this:

(schema 'varschema2' does not exist):

drop variable varschema2.testv cascade;
ERROR:  schema "varschema2" does not exist
create variable if not exists testv as text;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost


(both statements are needed to force the crash)



I cannot to reproduce it.
 [backtrace and stuff]


Sorry, I don't have the wherewithal to get more info but I have repeated 
this now on 4 different machines (debian jessie/stretch; centos).


I did notice that sometimes those two offending lines
"
  drop variable varschema2.testv cascade;
  create variable if not exists testv as text;
"
have to be repeated a few times (never more than 4 or 5 times) before 
the crash occurs (signal 11: Segmentation fault).



Erik Rijkers





Re: [HACKERS] generated columns

2019-03-30 Thread Erik Rijkers

On 2019-01-16 22:40, Erik Rijkers wrote:


If you add a generated column to a file_fdw foreign table, it works OK
with VIRTUAL (the default) but with STORED it adds an empty column,
silently.  I would say it would make more sense to get an error.


VIRTUAL is gone, but that other issue is still there:  STORED in a 
file_fdw foreign table still silently creates the column which then 
turns out to be useless on SELECT, with an error like:


"ERROR:  column some_column_name is a generated column
DETAIL:  Generated columns cannot be used in COPY."

Maybe it'd be possible to get an error earlier, i.e., while trying to 
create such a useless column?
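To be concrete, the kind of DDL I mean is roughly this (a sketch; names and 
file made up):

  create extension if not exists file_fdw;
  create server csv_srv foreign data wrapper file_fdw;
  create foreign table ft (a int, b int)
     server csv_srv options (filename '/tmp/ft.csv', format 'csv');
  -- accepted silently, although the column can never be filled:
  alter foreign table ft
     add column b_twice int generated always as (b * 2) stored;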



thanks,

Erik Rijkers










Re: FETCH FIRST clause PERCENT option

2019-08-19 Thread Erik Rijkers

On 2019-08-19 01:33, Ryan Lambert wrote:
The following review has been posted through the commitfest 
application:

make installcheck-world:  tested, passed
Implements feature:   tested, passed
Spec compliant:   not tested
Documentation:tested, passed

The latest patch [1] and the cleanup patch [2] apply cleanly to the
latest master (5f110933).  I reviewed the conversation and don't see
any outstanding questions or concerns.  Updating status to ready for
committer.

[1] > 
https://www.postgresql.org/message-id/attachment/103028/percent-incremental-v6.patch
[2] > 
https://www.postgresql.org/message-id/attachment/103157/percent-incremental-v6-comment-cleanup.patch


Ryan Lambert

The new status of this patch is: Ready for Committer


Hi,

(running with those two patches applied)

  select * from onek where thousand < 5 order by thousand fetch first -1 
percent rows only

is correctly caught (with "ERROR:  PERCENT must not be negative") but:

  select * from  onek where thousand < 5 order by thousand fetch first 
101 percent rows only
is not. It doesn't return, and cannot be Ctrl-C'ed out of, which I guess 
is another bug?



thanks,

Erik Rijkers







Re: FETCH FIRST clause PERCENT option

2019-08-19 Thread Erik Rijkers

On 2019-08-19 11:18, Surafel Temesgen wrote:


[..]

[percent-incremental-v7.patch]


Thanks.


Another little thing, not sure it's a bug:

limit interprets its argument by rounding up or down as one would 
expect:


table onek limit 10.4;  --> gives 10 rows
table onek limit 10.6;  --> gives 11 rows

but  FETCH count PERCENT  does not do that; it always rounds up.

select * from (table onek limit 100) f fetch first 10.4 percent rows 
only; --> gives 11 rows
select * from (table onek limit 100) f fetch first 10.6 percent rows 
only; --> gives 11 rows


I see that it's documented in the .sgml to behave as it does, but it 
seems wrong to me;

shouldn't that 10.4-percent-query yield 10 rows instead of 11?


thanks,

Erik Rijkers












Re: row filtering for logical replication

2019-09-01 Thread Erik Rijkers

On 2019-09-01 02:28, Euler Taveira wrote:
Em dom, 3 de fev de 2019 às 07:14, Andres Freund  
escreveu:


As far as I can tell, the patch has not been refreshed since. So I'm
marking this as returned with feedback for now. Please resubmit once
ready.


I fix all of the bugs pointed in this thread. I decide to disallow



0001-Remove-unused-atttypmod-column-from-initial-table-sy.patch
0002-Store-number-of-tuples-in-WalRcvExecResult.patch
0003-Refactor-function-create_estate_for_relation.patch
0004-Rename-a-WHERE-node.patch
0005-Row-filtering-for-logical-replication.patch
0006-Print-publication-WHERE-condition-in-psql.patch
0007-Publication-where-condition-support-for-pg_dump.patch
0008-Debug-for-row-filtering.patch


Hi,

The first 4 of these apply without error, but I can't get 0005 to apply. 
This is what I use:


patch --dry-run -b -l -F 5 -p 1 < 
/home/aardvark/download/pgpatches/0130/logrep_rowfilter/20190901/0005-Row-filtering-for-logical-replication.patch



checking file doc/src/sgml/catalogs.sgml
Hunk #1 succeeded at 5595 (offset 8 lines).
checking file doc/src/sgml/ref/alter_publication.sgml
checking file doc/src/sgml/ref/create_publication.sgml
checking file src/backend/catalog/pg_publication.c
checking file src/backend/commands/publicationcmds.c
Hunk #1 succeeded at 352 (offset 8 lines).
Hunk #2 succeeded at 381 (offset 8 lines).
Hunk #3 succeeded at 539 (offset 8 lines).
Hunk #4 succeeded at 570 (offset 8 lines).
Hunk #5 succeeded at 601 (offset 8 lines).
Hunk #6 succeeded at 626 (offset 8 lines).
Hunk #7 succeeded at 647 (offset 8 lines).
Hunk #8 succeeded at 679 (offset 8 lines).
Hunk #9 succeeded at 693 (offset 8 lines).
checking file src/backend/parser/gram.y
checking file src/backend/parser/parse_agg.c
checking file src/backend/parser/parse_expr.c
Hunk #4 succeeded at 3571 (offset -2 lines).
checking file src/backend/parser/parse_func.c
Hunk #1 succeeded at 2516 (offset -13 lines).
checking file src/backend/replication/logical/tablesync.c
checking file src/backend/replication/logical/worker.c
checking file src/backend/replication/pgoutput/pgoutput.c
Hunk #1 FAILED at 12.
Hunk #2 succeeded at 60 (offset 2 lines).
Hunk #3 succeeded at 336 (offset 2 lines).
Hunk #4 succeeded at 630 (offset 2 lines).
Hunk #5 succeeded at 647 (offset 2 lines).
Hunk #6 succeeded at 738 (offset 2 lines).
1 out of 6 hunks FAILED
checking file src/include/catalog/pg_publication.h
checking file src/include/catalog/pg_publication_rel.h
checking file src/include/catalog/toasting.h
checking file src/include/nodes/nodes.h
checking file src/include/nodes/parsenodes.h
Hunk #1 succeeded at 3461 (offset -1 lines).
Hunk #2 succeeded at 3486 (offset -1 lines).
checking file src/include/parser/parse_node.h
checking file src/include/replication/logicalrelation.h
checking file src/test/regress/expected/publication.out
Hunk #1 succeeded at 116 (offset 9 lines).
checking file src/test/regress/sql/publication.sql
Hunk #1 succeeded at 69 with fuzz 1 (offset 9 lines).
checking file src/test/subscription/t/013_row_filter.pl


perhaps that can be fixed?

thanks,

Erik Rijkers




Re: row filtering for logical replication

2019-09-02 Thread Erik Rijkers

On 2019-09-02 01:43, Euler Taveira wrote:
Em dom, 1 de set de 2019 às 06:09, Erik Rijkers  
escreveu:


The first 4 of these apply without error, but I can't get 0005 to 
apply.

This is what I use:


Erik, I generate a new patch set with patience diff algorithm. It
seems it applies cleanly.



It did apply cleanly, thanks.

But I can't get it to correctly do the partial replication in the 
attached pgbench-script (similar versions of which script I also used 
for earlier versions of the patch, last year).
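For context, the kind of row-filter setup involved is roughly this (a sketch 
with made-up values, assuming the patch's WHERE-clause syntax):

  -- publisher:
  create publication pub_accounts
     for table pgbench_accounts where (aid between 40000 and 40004);
  -- subscriber (same empty tables created there beforehand):
  create subscription sub_accounts
     connection 'port=6525 dbname=postgres'
     publication pub_accounts;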


There are complaints in the log (both pub and sub) like:
ERROR:  trying to store a heap tuple into wrong type of slot

I have no idea what causes that.

I attach a zip:

$ unzip -l logrep_rowfilter.zip
Archive:  logrep_rowfilter.zip
  Length  DateTimeName
-  -- -   
17942  2019-09-03 00:47   logfile.6525
10412  2019-09-03 00:47   logfile.6526
 6913  2019-09-03 00:47   logrep_rowfilter_2_nodes.sh
 3371  2019-09-03 00:47   output.txt
- ---
38638 4 files

That bash script runs 2 instances (as compiled on my local setup so it 
will not run as-is) and tries for one minute to get a slice of the 
pgbench_accounts table replicated.  One minute is short but I wanted 
short logfiles; I have tried the same up to 20 minutes without the 
replication completing.  I'll try even longer but in the meantime I hope 
you can figure out why these errors occur.



thanks,


Erik Rijkers




Re: row filtering for logical replication

2019-09-02 Thread Erik Rijkers

On 2019-09-03 05:32, Euler Taveira wrote:
Em ter, 3 de set de 2019 às 00:16, Alexey Zagarin  
escreveu:


There are complaints in the log (both pub and sub) like:
ERROR: trying to store a heap tuple into wrong type of slot

I have no idea what causes that.

Yeah, I've seen that too. It was fixed by Alexey Kondratov, in line 
955 of 0005-Row-filtering-for-logical-replication.patch it should be 
&TTSOpsHeapTuple instead of &TTSOpsVirtual.


Ops... exact. That was an oversight while poking with different types 
of slots.


OK, I'll consider Alexey Kondratov's set of patches as the current 
state-of-the-art then.  (They still apply.)


I found a problem where I'm not sure it's a bug:

The attached bash script does a test by setting up pgbench tables on 
both master and replica, and then sets up logical replication for a 
slice of pgbench_accounts. Then it does a short pgbench run, and loops 
until the results become identical (ok) (or breaks out after a certain 
time (NOK=not ok)).


It turns out this did not work until I added a wait state after the 
CREATE SUBSCRIPTION.  It always fails without the wait state, and always 
works with the wait state.
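For clarity, one way to implement such a wait on the subscriber side (a sketch; 
not the exact code in my script):

  -- after CREATE SUBSCRIPTION, poll this until it returns true,
  -- i.e. until all relations of the subscription are in the 'ready' state:
  select count(*) = 0 as synced
    from pg_subscription_rel
   where srsubstate <> 'r';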


Do you agree this is a bug?


thanks (also to both Alexeys :))


Erik Rijkers


PS
by the way, this script won't run as-is on other machines; it has stuff 
particular to my local setup.



#!/bin/bash

# postgres binary compiled with 
#
# pgpatches/0130/logrep_rowfilter/20190902/v2-0001-Remove-unused-atttypmod-column-from-initial-table.patch
# pgpatches/0130/logrep_rowfilter/20190902/v2-0002-Store-number-of-tuples-in-WalRcvExecResult.patch   
# pgpatches/0130/logrep_rowfilter/20190902/v2-0003-Refactor-function-create_estate_for_relation.patch 
# pgpatches/0130/logrep_rowfilter/20190902/v2-0004-Rename-a-WHERE-node.patch  
# pgpatches/0130/logrep_rowfilter/20190902/v2-0005-Row-filtering-for-logical-replication.patch
# pgpatches/0130/logrep_rowfilter/20190902/v2-0006-Print-publication-WHERE-condition-in-psql.patch
# pgpatches/0130/logrep_rowfilter/20190902/v2-0007-Publication-where-condition-support-for-pg_dump.patch  
# pgpatches/0130/logrep_rowfilter/20190902/v2-0008-Debug-for-row-filtering.patch  
# pgpatches/0130/logrep_rowfilter/20190902/v2-0009-Add-simple-BDR-test-for-row-filtering.patch
 

unset PGDATABASE PGPORT PGSERVICE
export PGDATABASE=postgres

root_dir=/tmp/cascade/logrep_rowfilter

mkdir -p $root_dir

BIN=$HOME/pg_stuff/pg_installations/pgsql.logrep_rowfilter/bin

export PATH=$BIN:$PATH

  initdb=$BIN/initdb
postgres=$BIN/postgres
  pg_ctl=$BIN/pg_ctl
baseport=6525
   port1=$(( $baseport + 0 )) 
   port2=$(( $baseport + 1 ))
 appname=rowfilter

num_instances=2
scale=1  where="where (aid between 4 and 5-1)"
 #  scale=10 where="where (aid between 40 and 40+5-1)"
  clients=64
 duration=20
 wait=10  BASTA_COUNT=40  #   7200   #  wait seconds in total  

if [[ -d $root_dir/instance1 ]]; then rm -rf $root_dir/instance1; fi
if [[ -d $root_dir/instance2 ]]; then rm -rf $root_dir/instance2; fi
if [[ -d $root_dir/instance1 ]]; then exit ; fi
if [[ -d $root_dir/instance2 ]]; then exit ; fi

devel_file=/tmp/bugs
echo filterbug>$devel_file

for n in `seq 1 $num_instances`
do
  instance=instance$n
  server_dir=$root_dir/$instance
  data_dir=$server_dir/data
  port=$(( $baseport + $n -1 ))
  logfile=$server_dir/logfile.$port
  echo "-- $initdb --pgdata=$data_dir --encoding=UTF8 --pwfile=$devel_file "
   $initdb --pgdata=$data_dir --encoding=UTF8 --pwfile=$devel_file  &> /dev/null 
  ( $postgres  -D $data_dir -p $port \
--wal_level=logical --logging_collector=on \
--client_min_messages=warning \
--log_directory=$server_dir --log_filename=logfile.${port} \
--log_replication_commands=on & ) &> /dev/null
done 

echo "sleep 3s"
sleep 3

echo "
  drop table if exists pgbench_accounts;
  drop table if exists pgbench_branches;
  drop table if exists pgbench_tellers;
  drop table if exists pgbench_history;" | psql -qXp $port1 \
&& echo "
  drop table if exists pgbench_accounts;
  drop table if exists pgbench_branches;
  drop table if exists pgbench_tellers;
  drop table if exists pgbench_history;" | psql -qXp $port2 \
&& pgbench -p $port1 -qis $scale \
&& echo "alter table pgbench_history add column hid serial primary key;" \
  | psql -q1Xp $port1 && pg_dump -F c -p $port1 \
 --exclude-table-data=pgbench_history  \
 --exclude-table-data=pgbench_accounts \
 --exclude-table-data=pgbench_branches \
 --exclude-table-data=pgbench_tellers  \
   -t pgbench_history -t pgbench_accounts \
   -t pgbench_branches -t pgbench_tellers \
  | pg_restore -1 -p $port2 -d postgres


pub1=pub_${port1}_to_${port2}
sub1=sub_${port2}_fr_${port1}
echo -ne "
cr

bug? import foreign schema forgets to import column description

2018-01-04 Thread Erik Rijkers

Hi all,

Using  IMPORT FOREIGN SCHEMA  I notice that the column descriptions of 
the remote tables are not taken along to the imported schema.  I am 
using postgres_fdw to 'import' a schema with foreign tables (but the 
same omission happens with ordinary tables as well).
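A minimal way to see it (a sketch; server and schema names made up):

  -- on the remote database:
  create table remote_schema.t (id integer);
  comment on column remote_schema.t.id is 'primary identifier';

  -- on the local database (remote_srv being a postgres_fdw server):
  import foreign schema remote_schema from server remote_srv into local_schema;

  -- the column comment did not come along:
  \d+ local_schema.t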


This is a bug, no?  Surely the description strings  should be available 
at the receiving end.


Do you agree this is an oversight that should be fixed?


thanks,


Erik Rijkers





Re: pgsql: Centralize json and jsonb handling of datetime types

2018-01-16 Thread Erik Rijkers

On 2018-01-17 01:29, Andrew Dunstan wrote:

Centralize json and jsonb handling of datetime types

[...]

https://git.postgresql.org/pg/commitdiff/cc4feded0a31d2b732d4ea68613115cb720e624e

Modified Files
--
src/backend/utils/adt/date.c  |   6 +--
src/backend/utils/adt/json.c  | 122 
--

src/backend/utils/adt/jsonb.c |  70 
src/include/utils/date.h  |   4 +-
src/include/utils/jsonapi.h   |   2 +
5 files changed, 109 insertions(+), 95 deletions(-)


Latest gcc 7.2.0 compile shows these warnings (I suppose these come from 
this commit):


Compiling core:

In file included from gram.y:63:0:
../../../src/include/utils/date.h:76:41: warning: ‘struct pg_tm’ 
declared inside parameter list will not be visible outside of this 
definition or declaration

 extern int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec);
 ^
../../../src/include/utils/date.h:77:46: warning: ‘struct pg_tm’ 
declared inside parameter list will not be visible outside of this 
definition or declaration
 extern int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, 
int *tzp);

  ^
In file included from formatting.c:92:0:
../../../../src/include/utils/date.h:76:41: warning: ‘struct pg_tm’ 
declared inside parameter list will not be visible outside of this 
definition or declaration

 extern int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec);
 ^
../../../../src/include/utils/date.h:77:46: warning: ‘struct pg_tm’ 
declared inside parameter list will not be visible outside of this 
definition or declaration
 extern int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, 
int *tzp);



... while contrib adds:

In file included from btree_gin.c:12:0:
../../src/include/utils/date.h:76:41: warning: ‘struct pg_tm’ declared 
inside parameter list will not be visible outside of this definition or 
declaration

 extern int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec);
 ^
../../src/include/utils/date.h:77:46: warning: ‘struct pg_tm’ declared 
inside parameter list will not be visible outside of this definition or 
declaration
 extern int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, 
int *tzp);

  ^
In file included from btree_utils_num.c:9:0:
../../src/include/utils/date.h:76:41: warning: ‘struct pg_tm’ declared 
inside parameter list will not be visible outside of this definition or 
declaration

 extern int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec);
 ^
../../src/include/utils/date.h:77:46: warning: ‘struct pg_tm’ declared 
inside parameter list will not be visible outside of this definition or 
declaration
 extern int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, 
int *tzp);

  ^
In file included from btree_time.c:9:0:
../../src/include/utils/date.h:76:41: warning: ‘struct pg_tm’ declared 
inside parameter list will not be visible outside of this definition or 
declaration

 extern int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec);
 ^
../../src/include/utils/date.h:77:46: warning: ‘struct pg_tm’ declared 
inside parameter list will not be visible outside of this definition or 
declaration
 extern int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, 
int *tzp);

  ^
In file included from btree_date.c:9:0:
../../src/include/utils/date.h:76:41: warning: ‘struct pg_tm’ declared 
inside parameter list will not be visible outside of this definition or 
declaration

 extern int time2tm(TimeADT time, struct pg_tm *tm, fsec_t *fsec);
 ^
../../src/include/utils/date.h:77:46: warning: ‘struct pg_tm’ declared 
inside parameter list will not be visible outside of this definition or 
declaration
 extern int timetz2tm(TimeTzADT *time, struct pg_tm *tm, fsec_t *fsec, 
int *tzp);



No errors, and 'make check' and 'make check-world' are both OK.


thanks,

Erik Rijkers



Re: Add RANGE with values and exclusions clauses to the Window Functions

2018-01-27 Thread Erik Rijkers

On 2018-01-27 11:49, Oliver Ford wrote:

On Sat, Jan 27, 2018 at 7:40 AM, Erik Rijkers  wrote:

On 2018-01-27 00:35, Oliver Ford wrote:


Attached patch implements an extensible version of the RANGE with
values clause. It doesn't actually add any more type support than was


[...]


I've tested that the existing regression tests in previous versions
still pass, and also added new tests for descending mode.



Hi,

Regression tests only succeed for assert-disabled compiles; they fail 
when

assert-enabled:

I used (Centos 6.9):


Could you please try the attached version? It works for me with asserts 
enabled.




[0001-window-frame-v8.patch]

Yes, that fixed it, thanks.


Problem seems to be with an existing Assert in catcache.c:1545:

Assert(nkeys > 0 && nkeys < cache->cc_nkeys);

The "<" needs to be "<=" (and is changed in the attached patch).
AFAICT this was never a problem before purely because no code before
this patch called SearchSysCacheList4, so they always called with
fewer keys than the number available. But it's surely correct to
assert that the number of keys supplied is less than or equal to, not
less than, the number of keys in the cache.




Re: Add RANGE with values and exclusions clauses to the Window Functions

2018-01-30 Thread Erik Rijkers

On 2018-01-30 17:08, Oliver Ford wrote:

On Tue, Jan 30, 2018 at 10:48 AM, Oliver Ford  wrote:


I will send out v10 soon with the desc functions removed and the
EXCLUDE_NO_OTHERS define removed.


Here it is. Exclude No Others is still in the parser, but does
nothing. All desc functions are removed, replaced with a sortByAsc
bool. It no longer changes catversion.


There must be a small difference here but I don't even see it...

Sorry to be bothering you with these tiny things :)

thanks,

Erik Rijkers




Re: Add RANGE with values and exclusions clauses to the Window Functions

2018-01-30 Thread Erik Rijkers

Once more trying to attach the  regression.diffs


On 2018-01-30 17:31, Erik Rijkers wrote:

On 2018-01-30 17:08, Oliver Ford wrote:
On Tue, Jan 30, 2018 at 10:48 AM, Oliver Ford  
wrote:


I will send out v10 soon with the desc functions removed and the
EXCLUDE_NO_OTHERS define removed.


Here it is. Exclude No Others is still in the parser, but does
nothing. All desc functions are removed, replaced with a sortByAsc
bool. It no longer changes catversion.


There must be a small difference here but I don't even see it...

Sorry to be bothering you with these tiny things :)

thanks,

Erik Rijkers
*** /var/data1/pg_stuff/pg_sandbox/pgsql.frame_range/src/test/regress/expected/window.out	2018-01-30 17:13:41.463633724 +0100
--- /var/data1/pg_stuff/pg_sandbox/pgsql.frame_range/src/test/regress/results/window.out	2018-01-30 17:16:59.607871830 +0100
***
*** 1172,1178 
  (10 rows)
  
  SELECT pg_get_viewdef('v_window');
!  pg_get_viewdef
  ---
SELECT i.i, +
   sum(i.i) OVER (ORDER BY i.i ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS sum_rows+
--- 1172,1178 
  (10 rows)
  
  SELECT pg_get_viewdef('v_window');
! pg_get_viewdef 
  ---
SELECT i.i, +
   sum(i.i) OVER (ORDER BY i.i ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS sum_rows+
***
*** 1198,1204 
  (10 rows)
  
  SELECT pg_get_viewdef('v_window');
!   pg_get_viewdef 
  -
SELECT i.i,   +
   sum(i.i) OVER (ORDER BY i.i GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS sum_rows+
--- 1198,1204 
  (10 rows)
  
  SELECT pg_get_viewdef('v_window');
!  pg_get_viewdef  
  -
SELECT i.i,   +
   sum(i.i) OVER (ORDER BY i.i GROUPS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS sum_rows+

==



Re: [Proposal] Global temporary tables

2020-01-19 Thread Erik Rijkers

On 2020-01-19 18:04, 曾文旌(义从) wrote:

2020年1月14日 下午9:20,Pavel Stehule  写道:
út 14. 1. 2020 v 14:09 odesílatel 曾文旌(义从) <mailto:wenjing@alibaba-inc.com>> napsal:



[global_temporary_table_v4-pg13.patch ]


Hi,

This patch doesn't quite apply for me:

patching file src/backend/access/common/reloptions.c
patching file src/backend/access/gist/gistutil.c
patching file src/backend/access/hash/hash.c
Hunk #1 succeeded at 149 (offset 3 lines).
patching file src/backend/access/heap/heapam_handler.c
patching file src/backend/access/heap/vacuumlazy.c
patching file src/backend/access/nbtree/nbtpage.c
patching file src/backend/access/table/tableam.c
patching file src/backend/access/transam/xlog.c
patching file src/backend/catalog/Makefile
Hunk #1 FAILED at 44.
1 out of 1 hunk FAILED -- saving rejects to file 
src/backend/catalog/Makefile.rej

[...]
   (The rest applies without errors)

src/backend/catalog/Makefile.rej contains:


--- src/backend/catalog/Makefile
+++ src/backend/catalog/Makefile
@@ -44,6 +44,8 @@ OBJS = \
storage.o \
toasting.o

+OBJS += storage_gtt.o
+
 BKIFILES = postgres.bki postgres.description postgres.shdescription

 include $(top_srcdir)/src/backend/common.mk


Can you have a look?


thanks,

Erik Rijkers










Re: [HACKERS] generated columns

2019-04-02 Thread Erik Rijkers

On 2019-04-02 14:43, Peter Eisentraut wrote:

On 2019-04-01 10:52, Peter Eisentraut wrote:

On 2019-03-31 05:49, Erik Rijkers wrote:

STORED in a
file_fdw foreign table still silently creates the column which then
turns out to be useless on SELECT, with an error like:

"ERROR:  column some_column_name is a generated column
DETAIL:  Generated columns cannot be used in COPY."

Maybe it'd be possible to get an error earlier, i.e., while trying to
create such a useless column?


I'll look into it.


I've been trying to create a test case for file_fdw for this, but I'm
not getting your result.  Can you send a complete test case?



Ah, I had not noticed before: with an asterisk ('select * from table') 
one gets no error, just empty values.


An actual error seems to occur when one mentions the 
generated-column-name explicitly in the select-list.


select "id", "Ratio Log2 GEN" from ;
"
ERROR:  column "Ratio Log2 GEN" is a generated column
DETAIL:  Generated columns cannot be used in COPY.
"

That's from a quick test here at work; maybe that gives you enough info.

If that doesn't make it repeatable (for you) I'll make a more complete 
example this evening (from home).






Re: [HACKERS] generated columns

2019-04-02 Thread Erik Rijkers

On 2019-04-02 15:36, Erik Rijkers wrote:

On 2019-04-02 14:43, Peter Eisentraut wrote:

On 2019-04-01 10:52, Peter Eisentraut wrote:

On 2019-03-31 05:49, Erik Rijkers wrote:

STORED in a
file_fdw foreign table still silently creates the column which then
turns out to be useless on SELECT, with an error like:

"ERROR:  column some_column_name is a generated column
DETAIL:  Generated columns cannot be used in COPY."

Maybe it'd be possible to get an error earlier, i.e., while trying 
to

create such a useless column?


I'll look into it.


I've been trying to create a test case for file_fdw for this, but I'm
not getting your result.  Can you send a complete test case?


attached is run_ft.sh, which creates a text file (/tmp/pg_head.txt),
then sets it up as a foreign table, and adds a generated column.

Then it runs a successful select, followed by an error-producing select.

Some selects are successful but some fail.  I'm not sure why it sometimes 
fails (it's not just the explicitness of the generated-column name like 
I suggested earlier).




My output of run_ft.sh is below.


$ ./run_ft.sh
create schema if not exists "tmp";
CREATE SCHEMA
create server if not exists "tmpserver" foreign data wrapper file_fdw;
CREATE SERVER
drop   foreign table if exists tmp.pg_head cascade;
DROP FOREIGN TABLE
create foreign table   tmp.pg_head (
"Gene"  text,
"Ratio H/L normalized Exp1" numeric
)
server tmpserver
options (
delimiter E'\t'
  , format 'csv'
  , header 'TRUE'
  , filename  '/tmp/pg_head.txt'
);
CREATE FOREIGN TABLE
alter foreign table tmp.pg_head
   add column "Ratio H/L normalized Exp1 Log2 (Generated column)" 
numeric generated always as (case when "Ratio H/L normalized Exp1" > 0 
then log(2, "Ratio H/L normalized Exp1") else  null end) stored

;
ALTER FOREIGN TABLE
-- this is OK (although the generated-column values are all empty/null)
select
 "Gene"
   , "Ratio H/L normalized Exp1"
   , "Ratio H/L normalized Exp1 Log2 (Generated column)"
from tmp.pg_head
limit 3 ;
  Gene  | Ratio H/L normalized Exp1 | Ratio H/L normalized Exp1 Log2 
(Generated column)

+---+---
 Dhx9   |   NaN |
 Gapdh  |   0.42288 |
 Gm8797 |   0.81352 |
(3 rows)

-- but this fails
select
"Gene"
   , "Ratio H/L normalized Exp1 Log2 (Generated column)"
from tmp.pg_head
limit 3 ;
ERROR:  column "Ratio H/L normalized Exp1 Log2 (Generated column)" is a 
generated column

DETAIL:  Generated columns cannot be used in COPY.


#!/bin/bash

echo "Gene	Ratio H/L normalized Exp1
Dhx9	NaN
Gapdh	0.42288
Gm8797	0.81352
Aldh2	0.89913
Ccdc12	NaN
Hip1	NaN
Hist1h2aa	0.66911
Tpm2	0.57535
Fasn	NaN
Aldoa	0.61898
Unc13b	NaN
Wrn	0.0050816
Psma1	NaN
Ldha	0.90211
Numa1	NaN" > /tmp/pg_head.txt

psql -Xa << FT_SETUP_TXT

create schema if not exists "tmp";
create server if not exists "tmpserver" foreign data wrapper file_fdw;
drop   foreign table if exists tmp.pg_head cascade;
create foreign table   tmp.pg_head (
"Gene"  text,
"Ratio H/L normalized Exp1" numeric
)
server tmpserver
options (
delimiter E'\t'
  , format 'csv'
  , header 'TRUE'
  , filename  '/tmp/pg_head.txt'
);

alter foreign table tmp.pg_head
   add column "Ratio H/L normalized Exp1 Log2 (Generated column)" numeric generated always as (case when "Ratio H/L normalized Exp1" > 0 then log(2, "Ratio H/L normalized Exp1") else  null end) stored
;

FT_SETUP_TXT

psql -qXa << SQL_TXT

-- this is OK (although the generated-column values are all empty/null)
select 
 "Gene" 
   , "Ratio H/L normalized Exp1"
   , "Ratio H/L normalized Exp1 Log2 (Generated column)"
from tmp.pg_head
limit 3 ;

-- but this fails
select 
"Gene"
   , "Ratio H/L normalized Exp1 Log2 (Generated column)"
from tmp.pg_head
limit 3 ;

SQL_TXT





Re: PostgreSQL 12 Beta 1 press release draft

2019-05-21 Thread Erik Rijkers

On 2019-05-22 05:39, Jonathan S. Katz wrote:


Speaking of feedback, please provide me with your feedback on the
technical correctness of this announcement so I can incorporate changes
prior to the release.


Here are a few changes.

Main change: generated columns exist only in the STORED variety. VIRTUAL
will hopefully be added later.
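For reference, the supported syntax looks like this (just an illustrative
sketch, table and column names made up):

  create table item (
      price    numeric,
      quantity integer,
      total    numeric generated always as (price * quantity) stored
  );

A VIRTUAL variant (computed when the column is read) is what is still missing.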



thanks,

Erik Rijkers
--- 12beta1.md.orig	2019-05-22 06:33:16.286099932 +0200
+++ 12beta1.md	2019-05-22 06:48:24.279966057 +0200
@@ -30,12 +30,12 @@
 Additionally, PostgreSQL 12 adds the ability to rebuild indexes concurrently,
 which lets you perform a [`REINDEX`](https://www.postgresql.org/docs/devel/sql-reindex.html) operation
 without blocking any writes to the index. The inclusion of this feature should
-help with length index rebuilds that could cause potential downtime evens when
-administration a PostgreSQL database in a production environment.
+help with lengthy index rebuilds that could cause potential downtime when
+administrating a PostgreSQL database in a production environment.
 
 PostgreSQL 12 extends the abilities of several of the specialized indexing
 mechanisms. The ability to create covering indexes, i.e. the `INCLUDE` clause
-that was introduced in PostgreSQL 11, have now been added to GiST indexes.
+that was introduced in PostgreSQL 11, has now been added to GiST indexes.
 SP-GiST indexes now support the ability to perform K-nearest neighbor (K-NN)
 queries for data types that support the distance (`<->`) operation.
 
@@ -49,7 +49,7 @@
 
 Common table expressions (aka `WITH` queries) can now be automatically inlined
 in a query if they are a) not recursive, b) do not have any side-effects and
-c) are only referenced once in a later part of a query. These removes a known
+c) are only referenced once in a later part of a query. This removes a known
 "optimization fence" that has existed since the introduction of the `WITH`
 clause in PostgreSQL 8.4
 
@@ -88,7 +88,7 @@
 PostgreSQL 12 now supports case-insensitive and accent-insensitive collations
 for ICU provided collations, also known as "[nondeterministic collations](https://www.postgresql.org/docs/devel/collation.html#COLLATION-NONDETERMINISTIC)".
 When used, these collations can provide convenience for comparisons and sorts,
-but can also lead to a performance penalty depending as a collation may need to
+but can also lead to a performance penalty as a collation may need to
 make additional checks on a string.
 
 ### Most-common Value Statistics
@@ -102,10 +102,9 @@
 
 PostgreSQL 12 lets you create [generated columns](https://www.postgresql.org/docs/devel/ddl-generated-columns.html)
 that compute their values based on the contents of other columns. This feature
-provides two types of generated columns:
-
-- Stored generated columns, which are computed on inserts and updated and are saved on disk
-- Virtual generated columns, which are computed only when a column is read as part of a query
+provides only one type of generated column: Stored generated columns, which are computed on inserts
+and updated and are saved on disk. Virtual generated columns (computed only when a column
+is read as part of a query) are not yet implemented.
 
 ### Pluggable Table Storage Interface
 
@@ -128,7 +127,7 @@
 
 ### Authentication
 
-GSSAPI now supports client and server-side encryption and can be specified in
+GSSAPI now supports client- and server-side encryption and can be specified in
 the [`pg_hba.conf`](https://www.postgresql.org/docs/devel/auth-pg-hba-conf.html)
 file using the `hostgssenc` and `hostnogssenc` record types. PostgreSQL 12 also
 allows for LDAP servers to be discovered based on `DNS SRV` records if


Re: [HACKERS] proposal: schema variables

2018-10-23 Thread Erik Rijkers

[schema-variables-20181007-01.patch.gz]


Hi,

I tried to test your schema-variables patch but got stuck here instead
(after applying successfully on top of e6f5d1acc):


make[2]: *** No rule to make target '../../../src/include/catalog/pg_variable.h', needed by 'bki-stamp'.  Stop.
make[1]: *** [submake-catalog-headers] Error 2
make[1]: *** Waiting for unfinished jobs
make: *** [submake-generated-headers] Error 2
Makefile:141: recipe for target 'submake-catalog-headers' failed
src/Makefile.global:370: recipe for target 'submake-generated-headers' failed



thanks,

Erik Rijkers



Re: [HACKERS] generated columns

2018-10-30 Thread Erik Rijkers

On 2018-10-30 09:35, Peter Eisentraut wrote:


[v5-0001-Generated-columns.patch ]


Hi,

I couldn't get this to apply to current head.

I tried:

patch --dry-run --ignore-whitespace -p 0 -F 5 < 
v5-0001-Generated-columns.patch


and varied both the -p and -F parameters to no avail. Am I doing it wrong?


--- 8< ---
$ patch --ignore-whitespace -p 0 -F 5 < v5-0001-Generated-columns.patch
(Stripping trailing CRs from patch; use --binary to disable.)
can't find file to patch at input line 81
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--
|From dae07c731d80021bf78b8d89a8eb14408dbd023a Mon Sep 17 00:00:00 2001
|From: Peter Eisentraut 
|Date: Mon, 29 Oct 2018 17:46:12 +0100
|Subject: [PATCH v5] Generated columns
[...]
| src/test/regress/parallel_schedule|   2 +-
| src/test/regress/serial_schedule  |   1 +
| src/test/regress/sql/create_table_like.sql|  14 +
| src/test/regress/sql/generated.sql| 408 ++
| 60 files changed, 2731 insertions(+), 92 deletions(-)
| create mode 100644 src/test/regress/expected/generated.out
| create mode 100644 src/test/regress/sql/generated.sql
|
|diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
|index 9edba96fab..567913c3b6 100644
|--- a/doc/src/sgml/catalogs.sgml
|+++ b/doc/src/sgml/catalogs.sgml
--
File to patch:
--- 8< ---


Thanks,

Erik Rijkers



Re: [HACKERS] generated columns

2018-10-31 Thread Erik Rijkers

On 2018-10-31 09:15, Simon Riggs wrote:

On Wed, 31 Oct 2018 at 07:58, Erikjan Rijkers  wrote:



I have also noticed that logical replication isn't possible on tables
with a generated column.  That's a shame but I suppose that is as
expected.



Couldn't see anything like that in the patch. Presumably unintended
consequence. The generated value needs to be in WAL, so decoding it should
be trivial.



These log messages occur when attempting logical replication:

( table t1 has no generated columns; replicates fine.
  table t2 has one generated column; replication fails: see below )
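
(For illustration only -- the exact table definitions aren't in this mail;
something like the following, with i2 a stored generated column, would match
the log output below:

  create table t1 (i int primary key);
  create table t2 (i int primary key, i2 int generated always as (i * 2) stored);
)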

LOG:  database system is ready to accept connections
LOG:  logical replication apply worker for subscription "sub1" has started
LOG:  logical replication table synchronization worker for subscription "sub1", table "t1" has started
LOG:  logical replication table synchronization worker for subscription "sub1", table "t2" has started
LOG:  logical replication table synchronization worker for subscription "sub1", table "t1" has finished

ERROR:  column "i2" is a generated column
DETAIL:  Generated columns cannot be used in COPY.
LOG:  background worker "logical replication worker" (PID 22252) exited with exit code 1





Virtual columns wouldn't need to be replicated.

I guess we might choose to replicate generated cols as a value, or leave
them out and let them be generated on the downstream side. The default
should be to just treat them as a value.

--
Simon Riggs                http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: row filtering for logical replication

2018-11-01 Thread Erik Rijkers

On 2018-11-01 01:29, Euler Taveira wrote:

Em qua, 28 de fev de 2018 às 20:03, Euler Taveira
 escreveu:

The attached patches add support for filtering rows in the publisher.



I ran pgbench-over-logical-replication with a WHERE-clause and could not 
get this to do a correct replication.  Below is the output of the 
attached test program.



$ ./logrep_rowfilter.sh
--
/home/aardvark/pg_stuff/pg_installations/pgsql.logrep_rowfilter/bin.fast/initdb 
--pgdata=/tmp/cascade/instance1/data --encoding=UTF8 --pwfile=/tmp/bugs

--
/home/aardvark/pg_stuff/pg_installations/pgsql.logrep_rowfilter/bin.fast/initdb 
--pgdata=/tmp/cascade/instance2/data --encoding=UTF8 --pwfile=/tmp/bugs

--
/home/aardvark/pg_stuff/pg_installations/pgsql.logrep_rowfilter/bin.fast/initdb 
--pgdata=/tmp/cascade/instance3/data --encoding=UTF8 --pwfile=/tmp/bugs

sleep 3s
dropping old tables...
creating tables...
generating data...
10 of 10 tuples (100%) done (elapsed 0.09 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done.
create publication pub_6515_to_6516;
alter publication pub_6515_to_6516 add table pgbench_accounts where (aid 
between 4 and 6-1) ; --> where 1

alter publication pub_6515_to_6516 add table pgbench_branches;
alter publication pub_6515_to_6516 add table pgbench_tellers;
alter publication pub_6515_to_6516 add table pgbench_history;
create publication pub_6516_to_6517;
alter publication pub_6516_to_6517 add table pgbench_accounts ; -- where 
(aid between 4 and 6-1) ; --> where 2

alter publication pub_6516_to_6517 add table pgbench_branches;
alter publication pub_6516_to_6517 add table pgbench_tellers;
alter publication pub_6516_to_6517 add table pgbench_history;

create subscription pub_6516_from_6515 connection 'port=6515 
application_name=rowfilter'

   publication pub_6515_to_6516 with(enabled=false);
alter subscription pub_6516_from_6515 enable;
create subscription pub_6517_from_6516 connection 'port=6516 
application_name=rowfilter'

   publication pub_6516_to_6517 with(enabled=false);
alter subscription pub_6517_from_6516 enable;
-- pgbench -p 6515 -c 16 -j 8 -T 5 -n postgres#  scale 1
transaction type: 
scaling factor: 1
query mode: simple
number of clients: 16
number of threads: 8
duration: 5 s
number of transactions actually processed: 80
latency average = 1178.106 ms
tps = 13.581120 (including connections establishing)
tps = 13.597443 (excluding connections establishing)

   accounts  branches   tellers   history
   - - - -
6515   6546b1f0f 2d328ed28 7406473b0 7c1351523e8c07347b
6516   6546b1f0f 2d328ed28 d41d8cd98 d41d8cd98e7235f541
6517   f7c0791c8 d9c63e471 d41d8cd98 d41d8cd9830892eea1   NOK

6515   6546b1f0f 2d328ed28 7406473b0 7c1351523e8c07347b
6516   6546b1f0f 2d328ed28 7406473b0 5a54cf7c5191ae1af3
6517   6546b1f0f 2d328ed28 7406473b0 5a54cf7c5191ae1af3   NOK

6515   6546b1f0f 2d328ed28 7406473b0 7c1351523e8c07347b
6516   6546b1f0f 2d328ed28 7406473b0 5a54cf7c5191ae1af3
6517   6546b1f0f 2d328ed28 7406473b0 5a54cf7c5191ae1af3   NOK

[...]

I let that run for 10 minutes or so but the pgbench_history md5 values
(of ports 6516 and 6517) no longer change, which shows that they are, and
remain, different from the original pgbench_history table in 6515.



When there is a where-clause, this *always* goes wrong.

Without a where-clause all logical replication tests were OK.  Perhaps 
the error is not in our patch but something in logical replication.


Attached is the test program (it will need some tweaking of PATHs,
PG variables (PGPASSFILE), etc.).  This is the same program I used in
March when you first posted a version of this patch, although the error is
different.



thanks,


Erik Rijkers





#!/bin/bash

# postgres binary compiled with 
#
# 20181101
#   0001-Remove-unused-atttypmod-column-from-initial-table-sy.patch
#   0002-Store-number-of-tuples-in-WalRcvExecResult.patch
#   0003-Refactor-function-create_estate_for_relation.patch
#   0004-Rename-a-WHERE-node.patch
#   0005-Row-filtering-for-logical-replication.patch
#   0006-Print-publication-WHERE-condition-in-psql.patch
#   0007-Publication-where-condition-support-for-pg_dump.patch
#   0008-Debug-for-row-filtering.patch
#

unset PGDATABASE PGPORT PGSERVICE
export PGDATABASE=postgres

scale=1

root_dir=/tmp/cascade

BIN=$HOME/pg_stuff/pg_installations/pgsql.logrep_rowfilter/bin.fast

export PATH=$BIN:$PATH

  initdb=$BIN/initdb
postgres=$BIN/postgres
  pg_ctl=$BIN/pg_ctl
baseport=6515
 appname=rowfilter

if [[ -d $root_dir/instance1 ]]; then rm -rf $root_dir/instance1; fi
if [[ -d $root_dir/instance2 ]]; then rm -rf $root_dir/instance2; fi
if [[ -d $root_dir/instance3 ]]; then rm -rf $root_dir/instance3; fi
if [[ -d $root_dir/instance1 ]]; then exit ; fi
if [[ -d $root_dir/instance2 ]]; then exit ; fi
if [[ -d $root_dir/instance3 ]]; then exit ; fi

devel_file=/tmp/bugs
echo filterbug>$devel_file

Re: row filtering for logical replication

2018-11-01 Thread Erik Rijkers

On 2018-11-01 08:56, Erik Rijkers wrote:

On 2018-11-01 01:29, Euler Taveira wrote:

Em qua, 28 de fev de 2018 às 20:03, Euler Taveira
 escreveu:

The attached patches add support for filtering rows in the publisher.



I ran pgbench-over-logical-replication with a WHERE-clause and could
not get this to do a correct replication.  Below is the output of the
attached test program.


$ ./logrep_rowfilter.sh


I have noticed that the failure to replicate correctly can be avoided by 
putting a wait state of (on my machine) at least 3 seconds between the 
setting up of the subscription and the start of pgbench.  See the bash 
program I attached in my previous mail.  The bug can be avoided by a 
'sleep 5' just before the start of the actual pgbench run.


So it seems this bug is due to some timing error in your patch (or 
possibly in logical replication itself).



Erik Rijkers





Re: row filtering for logical replication

2018-11-01 Thread Erik Rijkers

On 2018-11-02 02:59, Euler Taveira wrote:
Em qui, 1 de nov de 2018 às 05:30, Erik Rijkers  
escreveu:

> I ran pgbench-over-logical-replication with a WHERE-clause and could
> not get this to do a correct replication.  Below is the output of the
> attached test program.
>
>
> $ ./logrep_rowfilter.sh


Erik, thanks for testing.


So it seems this bug is due to some timing error in your patch (or
possibly in logical replication itself).


It is a bug in the new synchronization code. I'm doing some code
cleanup/review and will post a new patchset after I finish it. If you
want to give it a try again, apply the following patch.

diff --git a/src/backend/replication/logical/tablesync.c
b/src/backend/replication/logical/tablesync.c
index e0eb73c..4797e0b 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
[...]



That does indeed fix it.

Thank you,

Erik Rijkers




tickling the lesser contributor's withering ego

2018-11-04 Thread Erik Rijkers

I wouldn't mind if this page:
  https://www.postgresql.org/community/contributors/

contained a link to (contributors v11):
  
https://www.postgresql.org/docs/11/static/release-11.html#id-1.11.6.5.6


and to (contributors v10)
  
https://www.postgresql.org/docs/current/static/release-11.html#id-1.11.6.5.6


In fact, I think I would like it.

I hope you agree.


thanks,


Erik Rijkers




prewarm compiler warnings

2018-11-23 Thread Erik Rijkers

gcc-6.3.0 complains about this:

autoprewarm.c: In function ‘autoprewarm_main’:
autoprewarm.c:210:9: warning: variable ‘rc’ set but not used 
[-Wunused-but-set-variable]

   int   rc;

The warning comes only in master.

The warnings are a bit annoying; the attached silences them by removing 
those assignments.


Thanks,

Erik Rijkers


--- ./contrib/pg_prewarm/autoprewarm.c.orig	2018-11-23 13:34:08.237110218 +0100
+++ ./contrib/pg_prewarm/autoprewarm.c	2018-11-23 13:34:38.333492968 +0100
@@ -207,8 +207,6 @@
 	/* Periodically dump buffers until terminated. */
 	while (!got_sigterm)
 	{
-		int			rc;
-
 		/* In case of a SIGHUP, just reload the configuration. */
 		if (got_sighup)
 		{
@@ -219,7 +217,7 @@
 		if (autoprewarm_interval <= 0)
 		{
 			/* We're only dumping at shutdown, so just wait forever. */
-			rc = WaitLatch(&MyProc->procLatch,
+			WaitLatch(&MyProc->procLatch,
 		   WL_LATCH_SET | WL_EXIT_ON_PM_DEATH,
 		   -1L,
 		   PG_WAIT_EXTENSION);
@@ -248,7 +246,7 @@
 			}
 
 			/* Sleep until the next dump time. */
-			rc = WaitLatch(&MyProc->procLatch,
+			WaitLatch(&MyProc->procLatch,
 		   WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
 		   delay_in_ms,
 		   PG_WAIT_EXTENSION);


psql --csv and other parameters

2018-11-27 Thread Erik Rijkers

I don't know if this really is a bug but it seems wrong to me:

psql -A --csv -Xc "select * from pg_namespace order by 1"

gives a different result from

psql --csv -A -Xc "select * from pg_namespace order by 1"


I would say both should give the same result, and
that result should be the same as from:

psql --csv -Xc "select * from pg_namespace order by 1"


Thanks,

Erik Rijkers





Re: pgbench - allow to specify scale as a size

2018-02-17 Thread Erik Rijkers

On 2018-02-17 10:20, Fabien COELHO wrote:

After Karel Moppel piece on pgbench scale/size conversion, it occured
to me that having this as an option would be nice.

https://www.cybertec-postgresql.com/en/a-formula-to-calculate-pgbench-scaling-factor-for-target-db-size/

Here is a attempt at extending --scale so that it can be given a size.

  pgbench -i --scale=124G ...

The approximated database size is also shown in the end-of-run summary.



[pgbench-scale-size-1.patch]


Seems a nice addition but something isn't quite right; with '-s 50' (no
unit) I get 'scale 50 too small':


$ pgbench -is 50
scale 50 too small, rounded to 1
dropping old tables...
creating tables...
generating data...
10 of 10 tuples (100%) done (elapsed 0.13 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done.

echo '\dti+ pgbench_accounts*' | psql -qX

                                     List of relations
 Schema |         Name          | Type  |  Owner   |      Table       |  Size   | Description
--------+-----------------------+-------+----------+------------------+---------+-------------
 public | pgbench_accounts      | table | aardvark |                  | 13 MB   |
 public | pgbench_accounts_pkey | index | aardvark | pgbench_accounts | 2208 kB |
(2 rows)


thanks,

Erik Rijkers



Re: row filtering for logical replication

2018-03-01 Thread Erik Rijkers

On 2018-03-01 00:03, Euler Taveira wrote:

The attached patches add support for filtering rows in the publisher.



001-Refactor-function-create_estate_for_relation.patch
0002-Rename-a-WHERE-node.patch
0003-Row-filtering-for-logical-replication.patch



Comments?


Very, very useful.  I really do hope this patch survives the 
late-arrival-cull.


I built this functionality into a test program I have been using and in 
simple cascading replication tests it works well.


I did find what I think is a bug (a bug easy to avoid but also easy to 
run into):
The test I used was to cascade 3 instances (all on one machine) from 
A->B->C

I ran a pgbench session in instance A, and used:
  in A: alter publication pub0_6515 add table pgbench_accounts where 
(aid between 4 and 6-1);

  in B: alter publication pub1_6516 add table pgbench_accounts;

The above worked well, but when I did the same but used the filter in 
both publications:
  in A: alter publication pub0_6515 add table pgbench_accounts where 
(aid between 4 and 6-1);
  in B: alter publication pub1_6516 add table pgbench_accounts where 
(aid between 4 and 6-1);


then the replication only worked for (pgbench-)scale 1 (hence: very 
little data); with larger scales it became slow (taking many minutes 
where the above had taken less than 1 minute), and ended up using far 
too much memory (or blowing up/crashing altogether).  Something not 
quite right there.


Nevertheless, I am much in favour of acquiring this functionality as 
soon as possible.



Thanks,


Erik Rijkers



Re: row filtering for logical replication

2018-03-01 Thread Erik Rijkers

On 2018-03-01 16:27, Erik Rijkers wrote:

On 2018-03-01 00:03, Euler Taveira wrote:

The attached patches add support for filtering rows in the publisher.



001-Refactor-function-create_estate_for_relation.patch
0002-Rename-a-WHERE-node.patch
0003-Row-filtering-for-logical-replication.patch



Comments?


Very, very useful.  I really do hope this patch survives the 
late-arrival-cull.


I built this functionality into a test program I have been using and
in simple cascading replication tests it works well.

I did find what I think is a bug (a bug easy to avoid but also easy to
run into):
The test I used was to cascade 3 instances (all on one machine) from 
A->B->C

I ran a pgbench session in instance A, and used:
  in A: alter publication pub0_6515 add table pgbench_accounts where
(aid between 4 and 6-1);
  in B: alter publication pub1_6516 add table pgbench_accounts;

The above worked well, but when I did the same but used the filter in
both publications:
  in A: alter publication pub0_6515 add table pgbench_accounts where
(aid between 4 and 6-1);
  in B: alter publication pub1_6516 add table pgbench_accounts where
(aid between 4 and 6-1);

then the replication only worked for (pgbench-)scale 1 (hence: very
little data); with larger scales it became slow (taking many minutes
where the above had taken less than 1 minute), and ended up using far
too much memory (or blowing up/crashing altogether).  Something not
quite right there.

Nevertheless, I am much in favour of acquiring this functionality as
soon as possible.



Attached is 'logrep_rowfilter.sh', a demonstration of the above-described
bug.


The program runs initdb for 3 instances in /tmp (using ports 6515, 6516, 
and 6517) and sets up logical replication from 1->2->3.


It can be made to work by removing the where-clause on the second 'create
publication' (i.e., comment out the $where2 variable).




Thanks,


Erik Rijkers
#!/bin/sh

# postgres binary with 
#
#  0001-Refactor-function-create_estate_for_relation.patch
#  0002-Rename-a-WHERE-node.patch
#  0003-Row-filtering-for-logical-replication.patch
#

unset PGDATABASE PGPORT PGSERVICE
export PGDATABASE=postgres

scale=10

root_dir=/tmp/cascade

BIN=$HOME/pg_stuff/pg_installations/pgsql.logrep_rowfilter/bin.fast

export PATH=$BIN:$PATH

  initdb=$BIN/initdb
postgres=$BIN/postgres
  pg_ctl=$BIN/pg_ctl
baseport=6515

if [[ -d $root_dir/instance1 ]]; then rm -rf $root_dir/instance1; fi
if [[ -d $root_dir/instance2 ]]; then rm -rf $root_dir/instance2; fi
if [[ -d $root_dir/instance3 ]]; then rm -rf $root_dir/instance3; fi
if [[ -d $root_dir/instance1 ]]; then exit ; fi
if [[ -d $root_dir/instance2 ]]; then exit ; fi
if [[ -d $root_dir/instance3 ]]; then exit ; fi

devel_file=/tmp/bugs
echo filterbug>$devel_file

num_instances=3

for n in `seq 1 $num_instances`
do
  instance=instance$n
  server_dir=$root_dir/$instance
  data_dir=$server_dir/data
  port=$(( 6515 + $n -1 ))
  logfile=$server_dir/logfile.$port
  echo "-- $initdb --pgdata=$data_dir --encoding=UTF8 --pwfile=$devel_file "
   $initdb --pgdata=$data_dir --encoding=UTF8 --pwfile=$devel_file  &> /dev/null
  ( $postgres  -D $data_dir -p $port \
--wal_level=logical --logging_collector=on \
--client_min_messages=warning \
--log_directory=$server_dir --log_filename=logfile.${port} \
--log_replication_commands=on & ) &> /dev/null
done 

echo "sleep 3s"
sleep 3

echo "
  drop table if exists pgbench_accounts;
  drop table if exists pgbench_branches;
  drop table if exists pgbench_tellers;
  drop table if exists pgbench_history;" | psql -qXp 6515 \
&& echo "
  drop table if exists pgbench_accounts;
  drop table if exists pgbench_branches;
  drop table if exists pgbench_tellers;
  drop table if exists pgbench_history;" | psql -qXp 6516 \
&& pgbench -p 6515 -qis $scale \
&& echo "alter table pgbench_history add column hid serial primary key;" \
  | psql -q1Xp 6515 && pg_dump -F c -p 6515 \
 --exclude-table-data=pgbench_history  \
 --exclude-table-data=pgbench_accounts \
 --exclude-table-data=pgbench_branches \
 --exclude-table-data=pgbench_tellers  \
   -t pgbench_history -t pgbench_accounts \
   -t pgbench_branches -t pgbench_tellers \
  | pg_restore -1 -p 6516 -d postgres

appname=rowfilter

   where="where (aid between 4 and 6-1)"
  where2="where (aid between 4 and 6-1)"

echo "
create publication pub1;
alter publication pub1 add table pgbench_accounts $where ; --> where 1
alter publication pub1 add table pgbench_branches;
alter publication pub1 add table pgbench_tellers;
alter publication pub1 add table pgbench_history;
" | psql -p 6515 -aqtAX

if [[ $num_instances -eq 3 ]]; then

  pg_dump -F c -p 6515 \
 --exclude-table-data=pgbench_history  \
 --exclude-table-data=pgbench_accounts \
 --excl

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2018-03-02 Thread Erik Rijkers

On 2018-03-03 01:55, Tomas Vondra wrote:

Hi there,

attached is an updated patch fixing all the reported issues (a bit more
about those below).


Hi,

0007-Track-statistics-for-streaming-spilling.patch  won't apply.  All 
the other patches apply ok.


patch complains with:

patching file doc/src/sgml/monitoring.sgml
patching file src/backend/catalog/system_views.sql
Hunk #1 succeeded at 734 (offset 2 lines).
patching file src/backend/replication/logical/reorderbuffer.c
patching file src/backend/replication/walsender.c
patching file src/include/catalog/pg_proc.h
Hunk #1 FAILED at 2903.
1 out of 1 hunk FAILED -- saving rejects to file 
src/include/catalog/pg_proc.h.rej

patching file src/include/replication/reorderbuffer.h
patching file src/include/replication/walsender_private.h
patching file src/test/regress/expected/rules.out
Hunk #1 succeeded at 1861 (offset 2 lines).

Attached the produced reject file.


thanks,

Erik Rijkers
--- src/include/catalog/pg_proc.h
+++ src/include/catalog/pg_proc.h
@@ -2903,7 +2903,7 @@
 DESCR("statistics: information about currently active backends");
 DATA(insert OID = 3318 (  pg_stat_get_progress_info			  PGNSP PGUID 12 1 100 0 0 f f f f t t s r 1 0 2249 "25" "{25,23,26,26,20,20,20,20,20,20,20,20,20,20}" "{i,o,o,o,o,o,o,o,o,o,o,o,o,o}" "{cmdtype,pid,datid,relid,param1,param2,param3,param4,param5,param6,param7,param8,param9,param10}" _null_ _null_ pg_stat_get_progress_info _null_ _null_ _null_ ));
 DESCR("statistics: information about progress of backends running maintenance command");
-DATA(insert OID = 3099 (  pg_stat_get_wal_senders	PGNSP PGUID 12 1 10 0 0 f f f f f t s r 0 0 2249 "" "{23,25,3220,3220,3220,3220,1186,1186,1186,23,25}" "{o,o,o,o,o,o,o,o,o,o,o}" "{pid,state,sent_lsn,write_lsn,flush_lsn,replay_lsn,write_lag,flush_lag,replay_lag,sync_priority,sync_state}" _null_ _null_ pg_stat_get_wal_senders _null_ _null_ _null_ ));
+DATA(insert OID = 3099 (  pg_stat_get_wal_senders	PGNSP PGUID 12 1 10 0 0 f f f f f t s r 0 0 2249 "" "{23,25,3220,3220,3220,3220,1186,1186,1186,23,25,20,20,20,20,20,20}" "{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}" "{pid,state,sent_lsn,write_lsn,flush_lsn,replay_lsn,write_lag,flush_lag,replay_lag,sync_priority,sync_state,spill_txns,spill_count,spill_bytes,stream_txns,stream_count,stream_bytes}" _null_ _null_ pg_stat_get_wal_senders _null_ _null_ _null_ ));
 DESCR("statistics: information about currently active replication");
 DATA(insert OID = 3317 (  pg_stat_get_wal_receiver	PGNSP PGUID 12 1 0 0 0 f f f f f f s r 0 0 2249 "" "{23,25,3220,23,3220,23,1184,1184,3220,1184,25,25}" "{o,o,o,o,o,o,o,o,o,o,o,o}" "{pid,status,receive_start_lsn,receive_start_tli,received_lsn,received_tli,last_msg_send_time,last_msg_receipt_time,latest_end_lsn,latest_end_time,slot_name,conninfo}" _null_ _null_ pg_stat_get_wal_receiver _null_ _null_ _null_ ));
 DESCR("statistics: information about WAL receiver");


libpq.sgml: "server ejectes GSS" -> server rejects GSS

2024-04-08 Thread Erik Rijkers

Typo. fix:

-attempted first. If the server ejectes GSS encryption, SSL is
+attempted first. If the server rejects GSS encryption, SSL is

Erik
--- doc/src/sgml/libpq.sgml.orig	2024-04-09 06:28:36.254541932 +0200
+++ doc/src/sgml/libpq.sgml	2024-04-09 06:30:55.818541454 +0200
@@ -1820,7 +1820,7 @@

 Note that if gssencmode is set
 to prefer, a GSS connection is
-attempted first. If the server ejectes GSS encryption, SSL is
+attempted first. If the server rejects GSS encryption, SSL is
 negotiated over the same TCP connection using the traditional postgres
 protocol, regardless of sslnegotiation. In other
 words, the direct SSL handshake is not used, if a TCP connection has


Re: remaining sql/json patches

2024-03-04 Thread Erik Rijkers

Op 3/4/24 om 10:40 schreef Amit Langote:

Hi Jian,

Thanks for the reviews and sorry for the late reply. Replying to all
emails in one.


> [v40-0001-Add-SQL-JSON-query-functions.patch]
> [v40-0002-Show-function-name-in-TableFuncScan.patch]
> [v40-0003-JSON_TABLE.patch]

In my hands (applying with patch), the patches, esp. 0001, do not apply. 
 But I see the cfbot builds without problem so maybe just ignore these 
FAILED lines.  Better get them merged - so I can test there...


Erik


checking file doc/src/sgml/func.sgml
checking file src/backend/catalog/sql_features.txt
checking file src/backend/executor/execExpr.c
Hunk #1 succeeded at 48 with fuzz 2 (offset -1 lines).
Hunk #2 succeeded at 88 (offset -1 lines).
Hunk #3 succeeded at 2419 (offset -1 lines).
Hunk #4 succeeded at 4195 (offset -1 lines).
checking file src/backend/executor/execExprInterp.c
Hunk #1 succeeded at 72 (offset -1 lines).
Hunk #2 succeeded at 180 (offset -1 lines).
Hunk #3 succeeded at 485 (offset -1 lines).
Hunk #4 succeeded at 1560 (offset -1 lines).
Hunk #5 succeeded at 4242 (offset -1 lines).
checking file src/backend/jit/llvm/llvmjit_expr.c
checking file src/backend/jit/llvm/llvmjit_types.c
checking file src/backend/nodes/makefuncs.c
Hunk #1 succeeded at 856 (offset -1 lines).
checking file src/backend/nodes/nodeFuncs.c
Hunk #1 succeeded at 233 (offset -1 lines).
Hunk #2 succeeded at 517 (offset -1 lines).
Hunk #3 succeeded at 1019 (offset -1 lines).
Hunk #4 succeeded at 1276 (offset -1 lines).
Hunk #5 succeeded at 1617 (offset -1 lines).
Hunk #6 succeeded at 2381 (offset -1 lines).
Hunk #7 succeeded at 3429 (offset -1 lines).
Hunk #8 succeeded at 4164 (offset -1 lines).
checking file src/backend/optimizer/path/costsize.c
Hunk #1 succeeded at 4878 (offset -1 lines).
checking file src/backend/optimizer/util/clauses.c
Hunk #1 succeeded at 50 (offset -3 lines).
Hunk #2 succeeded at 415 (offset -3 lines).
checking file src/backend/parser/gram.y
checking file src/backend/parser/parse_expr.c
checking file src/backend/parser/parse_target.c
Hunk #1 succeeded at 1988 (offset -1 lines).
checking file src/backend/utils/adt/formatting.c
Hunk #1 succeeded at 4465 (offset -1 lines).
checking file src/backend/utils/adt/jsonb.c
Hunk #1 succeeded at 2159 (offset -4 lines).
checking file src/backend/utils/adt/jsonfuncs.c
checking file src/backend/utils/adt/jsonpath.c
Hunk #1 FAILED at 68.
Hunk #2 succeeded at 1239 (offset -1 lines).
1 out of 2 hunks FAILED
checking file src/backend/utils/adt/jsonpath_exec.c
Hunk #1 succeeded at 229 (offset -5 lines).
Hunk #2 succeeded at 2866 (offset -5 lines).
Hunk #3 succeeded at 3751 (offset -5 lines).
checking file src/backend/utils/adt/ruleutils.c
Hunk #1 succeeded at 474 (offset -1 lines).
Hunk #2 succeeded at 518 (offset -1 lines).
Hunk #3 succeeded at 8303 (offset -1 lines).
Hunk #4 succeeded at 8475 (offset -1 lines).
Hunk #5 succeeded at 8591 (offset -1 lines).
Hunk #6 succeeded at 9808 (offset -1 lines).
Hunk #7 succeeded at 9858 (offset -1 lines).
Hunk #8 succeeded at 10039 (offset -1 lines).
Hunk #9 succeeded at 10909 (offset -1 lines).
checking file src/include/executor/execExpr.h
checking file src/include/nodes/execnodes.h
checking file src/include/nodes/makefuncs.h
checking file src/include/nodes/parsenodes.h
checking file src/include/nodes/primnodes.h
checking file src/include/parser/kwlist.h
checking file src/include/utils/formatting.h
checking file src/include/utils/jsonb.h
checking file src/include/utils/jsonfuncs.h
checking file src/include/utils/jsonpath.h
checking file src/interfaces/ecpg/preproc/ecpg.trailer
checking file src/test/regress/expected/sqljson_queryfuncs.out
checking file src/test/regress/parallel_schedule
checking file src/test/regress/sql/sqljson_queryfuncs.sql
checking file src/tools/pgindent/typedefs.list




Re: Schema variables - new implementation for Postgres 15 (typo)

2022-12-13 Thread Erik Rijkers

Op 14-12-2022 om 05:54 schreef Pavel Stehule:

Hi

fresh rebase


typo alert:

v20221214-0003-LET-command.patch contains

errmsg("target session varible is of type %s"

('varible' should be 'variable')

Erik




convey privileges -> confer privileges

2023-01-06 Thread Erik Rijkers

Can we change 'convey' to 'confer' in these recent doc changes?

Maybe 'convey a privilege' isn't exactly wrong but it leaves you 
wondering what exactly is meant.


Thanks,

Erik

--- doc/src/sgml/ref/createuser.sgml.orig	2023-01-05 21:37:35.803839575 +0100
+++ doc/src/sgml/ref/createuser.sgml	2023-01-05 21:38:14.700390046 +0100
@@ -47,7 +47,7 @@
CREATEROLE privilege.
Being a superuser implies the ability to bypass all access permission
checks within the database, so superuser access should not be granted
-   lightly. CREATEROLE also conveys
+   lightly. CREATEROLE also confers
very extensive privileges.
   
 
--- doc/src/sgml/user-manag.sgml.orig	2023-01-05 21:30:14.905548605 +0100
+++ doc/src/sgml/user-manag.sgml	2023-01-05 21:34:48.945471335 +0100
@@ -207,10 +207,10 @@
 SECURITY LABEL commands.


-However, CREATEROLE does not convey the ability to
-create SUPERUSER roles, nor does it convey any
+However, CREATEROLE does not confer the ability to
+create SUPERUSER roles, nor does it confer any
 power over SUPERUSER roles that already exist.
-Furthermore, CREATEROLE does not convey the power
+Furthermore, CREATEROLE does not confer the power
 to create REPLICATION users, nor the ability to
 grant or revoke the REPLICATION privilege, nor the
 ability to modify the role properties of such users.  However, it does


logrep stuck with 'ERROR: int2vector has too many elements'

2023-01-15 Thread Erik Rijkers

Hello,

Logical replication sometimes gets stuck with
  ERROR:  int2vector has too many elements

I can't find the exact circumstances that cause it but it has something 
to do with many columns (or adding many columns) in combination with 
perhaps generated columns.


This replication test, in a slightly different form, used to work. This 
is also suggested by the fact that the attached runs without errors in 
REL_15_STABLE but gets stuck in HEAD.


What it does: it initdbs and runs two instances, primary and replica. On the
primary, 'pgbench -is1' is run, and many columns, including one generated
column, are added to all 4 pgbench tables. This is then
pg_dump/pg_restore'd to the replica, and a short pgbench is run. The
result tables on primary and replica are compared for the final result.
(To run it you will need some tweaks to directory and connection parms.)
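
(A rough sketch of the kind of setup the script builds -- this is not the
actual logrepbug.sh; the column count, names and connection string are
placeholders:

  -- on the primary, after pgbench -is1:
  alter table pgbench_accounts add column c1 integer default 0;
  -- ... repeated for ~100 columns ...
  alter table pgbench_accounts
      add column g1 numeric generated always as (abalance * 2) stored;
  create publication pub_all for all tables;

  -- on the replica, after restoring the schema:
  create subscription sub_all connection 'port=... dbname=postgres'
      publication pub_all;
)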


I ran it on both v15 and v16 for 25 runs: with the parameters as given 
15 has no problem while 16 always got stuck with the int2vector error. 
(15 can actually be pushed up to the max of 1600 columns per table 
without errors)


Both REL_15_STABLE and 16devel built from recent master on Debian 10, 
gcc 12.2.0.


I hope someone understands what's going wrong.

Thanks,

Erik Rijkers

logrepbug.sh
Description: application/shellscript


Re: logrep stuck with 'ERROR: int2vector has too many elements'

2023-01-15 Thread Erik Rijkers

On 1/15/23 12:33, Alvaro Herrera wrote:

On 2023-Jan-15, Erik Rijkers wrote:


Hello,

Logical replication sometimes gets stuck with
   ERROR:  int2vector has too many elements


Weird.  This error comes from int2vectorin which amusingly only wants to
read up to FUNC_MAX_ARGS values in the array (100 in the default config,
but it can be changed in pg_config_manual.h).  I wonder how come we
haven't noticed this before ... surely we use int2vector's for other
things than function argument lists nowadays.

At the same time, I don't understand why it fails in 16 but not in 15.
Maybe something changed in the way we process the column lists in 16?


I wrote it as a comment in the script, but that's maybe vague, so let me be
more explicit: 16 also accepts many columns, up to 1600, without error,
as long as that is not combined with generated column(s) as in the
script. It seems the combination quickly becomes problematic. Adding
just 50 columns plus a generated column is still OK, but 100 is already
too high (see the ADD_COLUMNS variable in my script).


Weird indeed.


Erik




Re: SQL/JSON revisited

2023-03-28 Thread Erik Rijkers

Op 3/27/23 om 20:54 schreef Alvaro Herrera:

Docs amended as I threatened.  Other than that, this has required more


> [v12-0001-SQL-JSON-constructors.patch]
> [v12-0001-delta-uniqueifyJsonbObject-bugfix.patch]

In doc/src/sgml/func.sgml, some minor stuff:

'which specify the data type returned'  should be
'which specifies the data type returned'

In the json_arrayagg() description, it says:
'If ABSENT ON NULL is specified, any NULL values are omitted.'
That's true, but as omitting NULL values is the default (i.e., also 
without that clause) maybe it's better to say:

'Any NULL values are omitted unless NULL ON NULL is specified'
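
A quick illustration of that default (a sketch only, assuming the syntax from
this patch; output formatting approximate):

  SELECT json_arrayagg(x) FROM (VALUES (1), (NULL), (2)) t(x);
  -- [1, 2]            NULLs omitted by default (ABSENT ON NULL)

  SELECT json_arrayagg(x NULL ON NULL) FROM (VALUES (1), (NULL), (2)) t(x);
  -- [1, null, 2]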


I've found no bugs in functionality.

Thanks,

Erik Rijkers




Re: SQL/JSON revisited

2023-03-29 Thread Erik Rijkers

Op 3/29/23 om 12:27 schreef Alvaro Herrera:

On 2023-Mar-28, Erik Rijkers wrote:


In the json_arrayagg() description, it says:
'If ABSENT ON NULL is specified, any NULL values are omitted.'
That's true, but as omitting NULL values is the default (i.e., also without
that clause) maybe it's better to say:
'Any NULL values are omitted unless NULL ON NULL is specified'


Doh, somehow I misread your report and modified the json_object()
documentation instead after experimenting with it (so now the
ABSENT/NULL ON NULL clause is inconsistenly described everywhere).
Would you mind submitting a patch fixing this mistake?


I think the json_object text was OK.  Attached are some changes where 
they were needed IMHO.


Erik



... and pushed it now, after some more meddling.

I'll rebase the rest of the series now.
--- doc/src/sgml/func.sgml.orig	2023-03-29 12:45:45.013598284 +0200
+++ doc/src/sgml/func.sgml	2023-03-29 14:24:41.966456134 +0200
@@ -15830,10 +15830,10 @@
  Constructs a JSON array from either a series of
  value_expression parameters or from the results
  of query_expression,
- which must be a SELECT query returning a single column. If
- ABSENT ON NULL is specified, NULL values are ignored.
- This is always the case if a
- query_expression is used.
+ which must be a SELECT query returning a single column. 
+ If the input is a series of value_expressions, NULL values are omitted
+ unless NULL ON NULL is specified.  If a query_expression is used NULLs
+ are always ignored.
 
 
  json_array(1,true,json '{"a":null}')
@@ -20310,13 +20310,14 @@
  ORDER BY sort_expression 
  { NULL | ABSENT } ON NULL 
  RETURNING data_type  FORMAT JSON  ENCODING UTF8   )
+json


 Behaves in the same way as json_array
 but as an aggregate function so it only takes one
 value_expression parameter.
-If ABSENT ON NULL is specified, any NULL
-values are omitted.
+NULL values are omitted unless NULL ON NULL
+is specified.
 If ORDER BY is specified, the elements will
 appear in the array in that order rather than in the input order.



Re: SQL/JSON revisited (documentation)

2023-04-11 Thread Erik Rijkers

Hi,

IS JSON is documented as:

expression IS [ NOT ] JSON
  [ { VALUE | SCALAR | ARRAY | OBJECT } ]
  [ { WITH | WITHOUT } UNIQUE [ KEYS ] ]

which is fine but 'VALUE' is nowhere mentioned
(except in the commit-message as: IS JSON [VALUE] )

Unless I'm mistaken, 'VALUE' does indeed not change an IS JSON statement,
so to document it we could simply insert this line (as in the attached):


"The VALUE key word is optional noise."

Somewhere in its text in func.sgml, which is now:

"This predicate tests whether expression can be parsed as JSON, possibly 
of a specified type.  If SCALAR or ARRAY or OBJECT is specified, the 
test is whether or not the JSON is of that particular type. If WITH 
UNIQUE KEYS is specified, then any object in the expression is also 
tested to see if it has duplicate keys."
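
For example (a sketch; VALUE being the noise word in question, results as I
expect them):

  SELECT '123'::text IS JSON;           -- true
  SELECT '123'::text IS JSON VALUE;     -- same thing
  SELECT '123'::text IS JSON SCALAR;    -- true
  SELECT '123'::text IS JSON OBJECT;    -- false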



Erik Rijkers
--- doc/src/sgml/func.sgml.orig 2023-04-12 06:16:40.517722315 +0200
+++ doc/src/sgml/func.sgml  2023-04-12 06:30:56.410837805 +0200
@@ -16037,6 +16037,7 @@

 This predicate tests whether expression can 
be
 parsed as JSON, possibly of a specified type.
+The VALUE key word is optional noise.
 If SCALAR or ARRAY or
 OBJECT is specified, the
 test is whether or not the JSON is of that particular type. If


Re: 2023-05-11 release announcement draft

2023-05-06 Thread Erik Rijkers

Op 5/7/23 om 05:37 schreef Jonathan S. Katz:
Attached is a draft of the release announcement for the upcoming update 
release on May 11, 2023.


Please provide any suggestions, corrections, or notable omissions no 
later than 2023-05-11 0:00 AoE.


'leak in within a'  should be
'leak within a'

Erik




Re: SQL/JSON: JSON_TABLE

2022-03-26 Thread Erik Rijkers

Op 25-03-2022 om 21:30 schreef Andrew Dunstan:


On 3/22/22 10:55, Daniel Gustafsson wrote:

On 22 Mar 2022, at 16:31, Andrew Dunstan  wrote:
I'm planning on pushing the functions patch set this week and json-table
next week.

My comments from 30827b3c-edf6-4d41-bbf1-298181874...@yesql.se are yet to be
addressed (or at all responded to) in this patchset.  I'll paste the ones which
still apply to make it easier:




I think I have fixed all those. See attached. I haven't prepared a new
patch set for SQL/JSON functions because there's just one typo to fix,
but I won't forget it. Please let me know if there's anything else you see.


At this stage I think I have finished with the actual code, and I'm
concentrating on improving the docs a bit.


> [ v59 ]


FWIW, I went through func.sgml (of v59) once.


Erik Rijkers
--- doc/src/sgml/func.sgml.orig	2022-03-25 22:17:13.908660140 +0100
+++ doc/src/sgml/func.sgml	2022-03-26 12:08:46.593271826 +0100
@@ -17673,8 +17673,8 @@
  Description
 
  
-  JSON function generates a JSON
-  from a text data.
+  The JSON function generates JSON
+  from text data.
  
 
 
@@ -17688,7 +17688,7 @@
 
  
   String expression that provides the JSON text data.
-  Accepted any character strings (text, char, etc.)
+  Accepts any character strings (text, char, etc.)
   or binary strings (bytea) in UTF8 encoding.
   For null input, SQL null value is returned.
  
@@ -17757,7 +17757,7 @@
 
  Examples
  
-  Construct a JSON the provided strings:
+  Construct JSON using the provided strings:
  
 
 SELECT JSON('{ "a" : 123, "b": [ true, "foo" ], "a" : "bar" }');
@@ -17794,8 +17794,8 @@
  Description
 
  
-  JSON_SCALAR function generates a scalar
-  JSON from a SQL data.
+  The JSON_SCALAR function generates scalar
+  JSON from SQL data.
  
 
 
@@ -17808,11 +17808,11 @@
 
 
  
-  Expression that provides the data for constructing a
+  Expression that provides the data for constructing
   JSON.
   For null input, SQL  null
-  (not a JSON null) value is returned.
-  For any scalar other than a number, a Boolean, the text representation
+  (not JSON null) value is returned.
+  For any scalar other than a number or a Boolean, the text representation
   will be used, with escaping as necessary to make it a valid
   JSON string value.
   For details, see
@@ -17847,7 +17847,7 @@
 
  Examples
  
-  Construct a JSON from the provided values various types:
+  Construct JSON from provided values of various type:
  
 
 SELECT JSON_SCALAR(123.45);
@@ -18753,7 +18753,7 @@
 
 -- Strict mode with ERROR on ERROR clause
 SELECT JSON_EXISTS(jsonb '{"a": [1,2,3]}', 'strict $.a[5]' ERROR ON ERROR);
-ERROR: Invalid SQL/JSON subscript
+ERROR:  jsonpath array subscript is out of bounds
 (1 row)
 
 
@@ -18795,11 +18795,11 @@
  Description
 
   
-   JSON_VALUE function extracts a value from the provided
+   The JSON_VALUE function extracts a value from the provided
JSON data and converts it to an SQL scalar.
If the specified JSON path expression returns more than one
SQL/JSON item, an error occurs. To extract
-   an SQL/JSON array or object, use .
+   an SQL/JSON array or object, see .
   
 
 
@@ -18885,19 +18885,19 @@
  
 
 
-SELECT JSON_VALUE('"123.45"', '$' RETURNING float);
+SELECT JSON_VALUE('"123.45"'::jsonb, '$' RETURNING float);
  json_value
 
  123.45
 (1 row)
 
-SELECT JSON_VALUE('123.45', '$' RETURNING int ERROR ON ERROR);
+SELECT JSON_VALUE('123.45'::jsonb, '$' RETURNING int ERROR ON ERROR);
  json_value
 
 123
 (1 row)
 
-SELECT JSON_VALUE('"03:04 2015-02-01"', '$.datetime("HH24:MI -MM-DD")' RETURNING date);
+SELECT JSON_VALUE('"03:04 2015-02-01"'::jsonb, '$.datetime("HH24:MI -MM-DD")' RETURNING date);
  json_value 
 
  2015-02-01
@@ -18907,10 +18907,10 @@
 ERROR:  invalid input syntax for integer: "123.45"
 
 SELECT JSON_VALUE(jsonb '[1]', 'strict $' ERROR ON ERROR);
-ERROR: SQL/JSON scalar required
+ERROR:  JSON path expression in JSON_VALUE should return singleton scalar item
 
 SELECT JSON_VALUE(jsonb '[1,2]', 'strict $[*]' ERROR ON ERROR);
-ERROR: more than one SQL/JSON item
+ERROR:  JSON path expression in JSON_VALUE should return singleton scalar item
 
 
  
@@ -18920,13 +18920,13 @@
  
 
 SELECT JSON_VALUE(jsonb '[1]', 'strict $' ERROR ON ERROR);
-ERROR:

TRAP: FailedAssertion("HaveRegisteredOrActiveSnapshot()", File: "toast_internals.c", Line: 670, PID: 19403)

2022-03-27 Thread Erik Rijkers

Hi,

On master I got a FailedAssertion("HaveRegisteredOrActiveSnapshot()"
on an assert-enabled instance and with (I think) data over a certain length.

I whittled it down to the attached bash (careful - it drops stuff).  It 
has 5 tsv-data lines (one long line) that COPY slurps into a table.  The 
middle, third line causes the problem, later on.  Shortening the long 
line to somewhere below 2000 characters fixes it again.


More info in the attached .sh file.

If debug-assert is 'off', the problem does not occur. (REL_14_STABLE 
also does not have the problem, assertions or not)


thanks,

Erik Rijkers



bugsnapshot.sh
Description: application/shellscript


Re: TRAP: FailedAssertion("HaveRegisteredOrActiveSnapshot()", File: "toast_internals.c", Line: 670, PID: 19403)

2022-03-29 Thread Erik Rijkers

Op 29-03-2022 om 12:50 schreef Matthias van de Meent:

On Tue, 29 Mar 2022 at 11:10, Kyotaro Horiguchi  wrote:


At Tue, 29 Mar 2022 17:06:21 +0900 (JST), Kyotaro Horiguchi 
 wrote in

At Mon, 28 Mar 2022 18:36:46 +0900 (JST), Kyotaro Horiguchi 
 wrote in
Then, I found that portal->holdSnapshot is that.  I came up with the
attached.  It does the follows:

1. Teach PlannedStmtRequiresSnapshot() to return true for FetchStmt.

2. Use holdSnapshot in RunFromStore if any.


The rerpducer is reduced to as small as the following.

CREATE TABLE t (a text);
INSERT INTO t VALUES('some random text');
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM t;
FETCH ALL IN c;

But I haven't come up with a reasonable way to generate the 'some
random text' yet.


I gave up and took a straightforward way to generate one.

I don't like that it uses a fixed length for the random text, but
anyway it works for now...


A shorter (?) reproducer might be the following, which forces any
value for 'a' to be toasted and thus triggering the check in
init_toast_snapshot regardless of value length:

CREATE TABLE t (a text);
ALTER TABLE t ALTER COLUMN a SET STORAGE EXTERNAL;
INSERT INTO t VALUES ('toast');
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM t;
FETCH ALL IN c;


Excellent. That indeed immediately forces the error.

(and the patch prevents it)

Thanks!




Enjoy,

-Matthias





TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-16 Thread Erik Rijkers

Hi,

I get this crash running the attached test program. On my slow-disked 
and old desktop it occurs once in 20 or so runs (it is yet another 
installment of an old test that runs pgbench with logical replication).


15devel compiled from d3609dd25.

(The bash deletes stuff, and without my environment it will need some 
tweaking)


Thanks,

Erik Rijkers





statbug.sh
Description: application/shellscript


Re: TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-18 Thread Erik Rijkers

Op 19-04-2022 om 02:15 schreef Kyotaro Horiguchi:

At Mon, 18 Apr 2022 10:57:02 +0200, Erikjan Rijkers  wrote in

Hm.  Just now I've recompiled and retried and after 5 runs got the
same crash.  Then tried on another machine (also old, I'm afraid),
and built 1a8b11053 and ran the same thing.  That failed on the first
try, and made core dump from which I extracted:


Thanks!


gdb ~/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres
core-postgres-6-500-500-8289-1650269886 -ex bt -ex q

#2 0x00973fcb in ExceptionalCondition
#(conditionName=conditionName@entry=0xb20d76 "tabstat->trans == trans",
#errorType=errorType@entry=0x9c7c2b "FailedAssertion",
 fileName=fileName@entry=0xb20d0b "pgstat_relation.c",
 lineNumber=lineNumber@entry=508) at assert.c:69
#3 0x0086b77f in AtEOXact_PgStat_Relations
#(xact_state=xact_state@entry=0x26f0b50, isCommit=isCommit@entry=true)
#at pgstat_relation.c:508


Could you read tabstat, *tabstat, trans, *trans here?


To be honest I'm not sure how to, but I gave it a try:

GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from 
/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres...done.

[New LWP 21839]

warning: Can't read pathname for load map: Input/output error.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: logical replication worker for 
subscription 16411   '.

Program terminated with signal 6, Aborted.
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
#1  0x00357d633cd5 in abort () from /lib64/libc.so.6
#2  0x0097400b in ExceptionalCondition 
(conditionName=conditionName@entry=0xb20df6 "tabstat->trans == trans", 
errorType=errorType@entry=0x9c7cab "FailedAssertion",
fileName=fileName@entry=0xb20d8b "pgstat_relation.c", 
lineNumber=lineNumber@entry=508) at assert.c:69
#3  0x0086b7bf in AtEOXact_PgStat_Relations 
(xact_state=xact_state@entry=0x2d9ab50, isCommit=isCommit@entry=true) at 
pgstat_relation.c:508
#4  0x0086ec4f in AtEOXact_PgStat (isCommit=isCommit@entry=true, 
parallel=parallel@entry=false) at pgstat_xact.c:54

#5  0x005bd2a3 in CommitTransaction () at xact.c:2360
#6  0x005be5d5 in CommitTransactionCommand () at xact.c:3048
#7  0x007ee76b in apply_handle_commit_internal 
(commit_data=commit_data@entry=0x7fffb90aa8e0) at worker.c:1532
#8  0x007efb09 in apply_handle_commit (s=0x7fffb90aaa80) at 
worker.c:845

#9  apply_dispatch () at worker.c:2473
#10 0x007f11e7 in LogicalRepApplyLoop (last_received=74695984) 
at worker.c:2757

#11 start_apply () at worker.c:3526
#12 0x007f179f in ApplyWorkerMain () at worker.c:3782
#13 0x007bdbb3 in StartBackgroundWorker () at bgworker.c:858
#14 0x007c3251 in do_start_bgworker (rw=) at 
postmaster.c:5802

#15 maybe_start_bgworkers () at postmaster.c:6026
#16 0x007c3b75 in sigusr1_handler 
(postgres_signal_arg=) at postmaster.c:5191

#17 
#18 0x00357d6e1683 in __select_nocancel () from /lib64/libc.so.6
#19 0x007c41e6 in ServerLoop () at postmaster.c:1757
#20 0x007c5c4b in PostmasterMain () at postmaster.c:1465
#21 0x00720d0e in main (argc=11, argv=0x2cbf590) at main.c:202
(gdb) f 3
#3  0x0086b7bf in AtEOXact_PgStat_Relations 
(xact_state=xact_state@entry=0x2d9ab50, isCommit=isCommit@entry=true) at 
pgstat_relation.c:508

508 Assert(tabstat->trans == trans);
(gdb) p tabstat
$1 = 
(gdb) p *tabstat
value has been optimized out
(gdb) p trans
$2 = 
(gdb) p *trans
value has been optimized out
(gdb)





#4 0x0086ec0f in AtEOXact_PgStat (isCommit=isCommit@entry=true,
#parallel=parallel@entry=false) at pgstat_xact.c:54
#5  0x005bd2a3 in CommitTransaction () at xact.c:2360
#6  0x005be5d5 in CommitTransactionCommand () at xact.c:3048
#7 0x007ee72b in apply_handle_commit_internal
#(commit_data=commit_data@entry=0x7ffe4606a7a0) at worker.c:1532
#8 0x007efac9 in apply_handle_commit (s=0x7ffe4606a940) at
#worker.c:845
#9  apply_dispatch () at worker.c:2473
#10 0x007f11a7 in LogicalRepApplyLoop (last_received=74454600)
#at worker.c:2757
#11 start_apply () at worker.c:3526
#12 0x007f175f in ApplyWorkerMain () at worker.c:3782


regards.






Re: TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-19 Thread Erik Rijkers

Op 19-04-2022 om 11:25 schreef Kyotaro Horiguchi:

Thanks Erik.

At Tue, 19 Apr 2022 07:00:30 +0200, Erik Rijkers  wrote in

Op 19-04-2022 om 02:15 schreef Kyotaro Horiguchi:

Could you read tabstat, *tabstat, trans, *trans here?


To be honest I'm not sure how to, but I gave it a try:




I rebuilt newest master (a62bff74b135)  with

export CUSTOM_COPT="-O0 -g"

The 12th run of statbug.sh crashed and gave a corefile.


GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from 
/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin/postgres...done.

[New LWP 25058]

warning: Can't read pathname for load map: Input/output error.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: logical replication worker for 
subscription 16411   '.

Program terminated with signal 6, Aborted.
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
#1  0x00357d633cd5 in abort () from /lib64/libc.so.6
#2  0x00b3bada in ExceptionalCondition (conditionName=0xd389a1 
"tabstat->trans == trans", errorType=0xd388b2 "FailedAssertion", 
fileName=0xd388a0 "pgstat_relation.c", lineNumber=508) at assert.c:69
#3  0x009bf5dc in AtEOXact_PgStat_Relations 
(xact_state=0x31b1b50, isCommit=true) at pgstat_relation.c:508
#4  0x009c4107 in AtEOXact_PgStat (isCommit=true, 
parallel=false) at pgstat_xact.c:54

#5  0x00583764 in CommitTransaction () at xact.c:2360
#6  0x00584354 in CommitTransactionCommand () at xact.c:3048
#7  0x0090b34e in apply_handle_commit_internal 
(commit_data=0x7ffd024b5940) at worker.c:1532
#8  0x0090a287 in apply_handle_commit (s=0x7ffd024b59b0) at 
worker.c:845

#9  0x0090ce3a in apply_dispatch (s=0x7ffd024b59b0) at worker.c:2473
#10 0x0090d41c in LogicalRepApplyLoop (last_received=74680880) 
at worker.c:2757

#11 0x0090e974 in start_apply (origin_startpos=0) at worker.c:3526
#12 0x0090f156 in ApplyWorkerMain (main_arg=0) at worker.c:3782
#13 0x008c7623 in StartBackgroundWorker () at bgworker.c:858
#14 0x008d1557 in do_start_bgworker (rw=0x30ff0a0) at 
postmaster.c:5802

#15 0x008d1903 in maybe_start_bgworkers () at postmaster.c:6026
#16 0x008d09ba in sigusr1_handler (postgres_signal_arg=10) at 
postmaster.c:5191

#17 
#18 0x00357d6e1683 in __select_nocancel () from /lib64/libc.so.6
#19 0x008cc6c1 in ServerLoop () at postmaster.c:1757
#20 0x008cc0aa in PostmasterMain (argc=11, argv=0x30d6590) at 
postmaster.c:1465

#21 0x007c9256 in main (argc=11, argv=0x30d6590) at main.c:202
(gdb) f 3
#3  0x009bf5dc in AtEOXact_PgStat_Relations 
(xact_state=0x31b1b50, isCommit=true) at pgstat_relation.c:508

508 Assert(tabstat->trans == trans);
(gdb) p tabstat
$1 = (PgStat_TableStatus *) 0x319e630
(gdb) p *tabstat
$2 = {t_id = 2139062143, t_shared = 127, trans = 0x7f7f7f7f7f7f7f7f, 
t_counts = {t_numscans = 9187201950435737471, t_tuples_returned = 
9187201950435737471, t_tuples_fetched = 9187201950435737471,
t_tuples_inserted = 9187201950435737471, t_tuples_updated = 
9187201950435737471, t_tuples_deleted = 9187201950435737471, 
t_tuples_hot_updated = 9187201950435737471, t_truncdropped = 127,
t_delta_live_tuples = 9187201950435737471, t_delta_dead_tuples = 
9187201950435737471, t_changed_tuples = 9187201950435737471, 
t_blocks_fetched = 9187201950435737471, t_blocks_hit = 9187201950435737471},

  relation = 0x7f7f7f7f7f7f7f7f}
(gdb) p trans
$3 = (PgStat_TableXactStatus *) 0x31b1ba8
(gdb) p *trans
$4 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0, 
truncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop 
= 0, deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,

  parent = 0x319e630, next = 0x31b1ab8}
(gdb)



Looks like we're one step further, no?


Erik



(gdb) p tabstat
$1 = 


Great, that's it!  But unfortunately the variables are optimized out.  Could
you reproduce the crash with an -O0 binary?  With that you will be able to
see the variables.  You can rebuild with that option as follows.

$ make clean; make install CUSTOM_COPT="-O0 -g"

You could dump the whole xact_state chain from the current core file,
but the result would only give an obscure hint for diagnosis.
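
For example, from frame 3 of the existing core it would look roughly like
this (just a sketch of the kind of commands; the member names are the ones
visible in the backtrace above, and you would keep following ->next until
it is NULL):

(gdb) f 3
(gdb) p *xact_state
(gdb) p *xact_state->first
(gdb) p *xact_state->first->parent
(gdb) p *xact_state->first->next
(gdb) p *xact_state->first->next->parent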

regards.






Re: TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-19 Thread Erik Rijkers

Op 19-04-2022 om 19:36 schreef Andres Freund:

Hi,

On 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:

The 12th run of statbug.sh crashed and gave a corefile.


I ran through quite a few iterations by now, without reproducing :(

I guess there's some timing issue and you're hitting it on your system
due to the slower disks.



Program terminated with signal 6, Aborted.
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00357d6324f5 in raise () from /lib64/libc.so.6
#1  0x00357d633cd5 in abort () from /lib64/libc.so.6
#2  0x00b3bada in ExceptionalCondition (conditionName=0xd389a1
"tabstat->trans == trans", errorType=0xd388b2 "FailedAssertion",
fileName=0xd388a0 "pgstat_relation.c", lineNumber=508) at assert.c:69
#3  0x009bf5dc in AtEOXact_PgStat_Relations (xact_state=0x31b1b50,
isCommit=true) at pgstat_relation.c:508
#4  0x009c4107 in AtEOXact_PgStat (isCommit=true, parallel=false) at
pgstat_xact.c:54
#5  0x00583764 in CommitTransaction () at xact.c:2360
#6  0x00584354 in CommitTransactionCommand () at xact.c:3048
#7  0x0090b34e in apply_handle_commit_internal
(commit_data=0x7ffd024b5940) at worker.c:1532
#8  0x0090a287 in apply_handle_commit (s=0x7ffd024b59b0) at
worker.c:845
#9  0x0090ce3a in apply_dispatch (s=0x7ffd024b59b0) at worker.c:2473
#10 0x0090d41c in LogicalRepApplyLoop (last_received=74680880) at
worker.c:2757
#11 0x0090e974 in start_apply (origin_startpos=0) at worker.c:3526
#12 0x0090f156 in ApplyWorkerMain (main_arg=0) at worker.c:3782
#13 0x008c7623 in StartBackgroundWorker () at bgworker.c:858
#14 0x008d1557 in do_start_bgworker (rw=0x30ff0a0) at
postmaster.c:5802
#15 0x008d1903 in maybe_start_bgworkers () at postmaster.c:6026
#16 0x008d09ba in sigusr1_handler (postgres_signal_arg=10) at
postmaster.c:5191
#17 <signal handler called>
#18 0x00357d6e1683 in __select_nocancel () from /lib64/libc.so.6
#19 0x008cc6c1 in ServerLoop () at postmaster.c:1757
#20 0x008cc0aa in PostmasterMain (argc=11, argv=0x30d6590) at
postmaster.c:1465
#21 0x007c9256 in main (argc=11, argv=0x30d6590) at main.c:202
(gdb) f 3
#3  0x009bf5dc in AtEOXact_PgStat_Relations (xact_state=0x31b1b50,
isCommit=true) at pgstat_relation.c:508
508 Assert(tabstat->trans == trans);
(gdb) p tabstat
$1 = (PgStat_TableStatus *) 0x319e630
(gdb) p *tabstat
$2 = {t_id = 2139062143, t_shared = 127, trans = 0x7f7f7f7f7f7f7f7f,
t_counts = {t_numscans = 9187201950435737471, t_tuples_returned =
9187201950435737471, t_tuples_fetched = 9187201950435737471,
 t_tuples_inserted = 9187201950435737471, t_tuples_updated =
9187201950435737471, t_tuples_deleted = 9187201950435737471,
t_tuples_hot_updated = 9187201950435737471, t_truncdropped = 127,
 t_delta_live_tuples = 9187201950435737471, t_delta_dead_tuples =
9187201950435737471, t_changed_tuples = 9187201950435737471,
t_blocks_fetched = 9187201950435737471, t_blocks_hit = 9187201950435737471},
   relation = 0x7f7f7f7f7f7f7f7f}
(gdb) p trans
$3 = (PgStat_TableXactStatus *) 0x31b1ba8
(gdb) p *trans
$4 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0,
truncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop = 0,
deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,
   parent = 0x319e630, next = 0x31b1ab8}
(gdb)


Could you print out
p xact_state
p *xact_state
p xact_state->first
p *xact_state->first

Do you have the server log file for the failed run / instance?



(gdb) p xact_state
$5 = (PgStat_SubXactStatus *) 0x31b1b50

(gdb) p *xact_state
$6 = {nest_level = 1, prev = 0x0, pending_drops = {head = {prev = 
0x31b1b60, next = 0x31b1b60}}, pending_drops_count = 0, first = 0x31b1ba8}


(gdb) p xact_state->first
$7 = (PgStat_TableXactStatus *) 0x31b1ba8

(gdb) p *xact_state->first
$8 = {tuples_inserted = 1, tuples_updated = 0, tuples_deleted = 0, 
truncdropped = false, inserted_pre_truncdrop = 0, updated_pre_truncdrop 
= 0, deleted_pre_truncdrop = 0, nest_level = 1, upper = 0x0,

  parent = 0x319e630, next = 0x31b1ab8}
(gdb)


The logfile is attached.


Erik



Greetings,

Andres Freund

2022-04-19 13:33:42.944 CEST [24981] LOG:  starting PostgreSQL 
15devel_HEAD_20220419_1308_a62bff74b135 on x86_64-pc-linux-gnu, compiled by gcc 
(GCC) 8.2.0, 64-bit
2022-04-19 13:33:42.945 CEST [24981] LOG:  listening on IPv4 address 
"127.0.0.1", port 6526
2022-04-19 13:33:43.010 CEST [24981] LOG:  listening on Unix socket 
"/tmp/.s.PGSQL.6526"
2022-04-19 13:33:43.085 CEST [24991] LOG:  database system was shut down at 
2022-04-19 13:33:41 CEST
2022-04-19 13:33:43.181 CEST [24981] LOG:  database system is ready to accept 
connections
2022-04-19 13:33:51.040 CEST [25047] LOG:  received replication command: 
CREATE_REPLICATION_SLOT "pub_6527_from_6526" LOGICAL pgoutput (SNAPSHOT 
'nothing')
2022-04-19 13

Re: TRAP: FailedAssertion("tabstat->trans == trans", File: "pgstat_relation.c", Line: 508

2022-04-20 Thread Erik Rijkers

Op 20-04-2022 om 06:54 schreef Kyotaro Horiguchi:

At Tue, 19 Apr 2022 10:55:26 -0700, Andres Freund  wrote in

Hi,

On 2022-04-19 10:36:24 -0700, Andres Freund wrote:

On 2022-04-19 13:50:25 +0200, Erik Rijkers wrote:

The 12th run of statbug.sh crashed and gave a corefile.


I ran through quite a few iterations by now, without reproducing :(

I guess there's some timing issue and you're hitting it on your system
due to the slower disks.


Ah. I found the issue. The new pgstat_report_stat(true) call in
LogicalRepApplyLoop()'s "timeout" section doesn't check if we're in a
transaction. And the transactional stats code doesn't handle that (never
has).

I think all that's needed is a if (IsTransactionState()) around that
pgstat_report_stat().


if (!IsTransactionState()) ?


It might be possible to put an assertion into pgstat_report_stat(), but
I need to look at the process exit code to see if it is.
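
As a rough sketch of that idea only (an illustration, not a committed
change; whether it would hold on every existing call path, e.g. around
process exit, is exactly what would need checking), it could be a
one-line check near the top of pgstat_report_stat():

	/*
	 * Hypothetical: a forced flush must not run while transactional stats
	 * are still pending; otherwise the commit-time code can end up
	 * following a pointer into freed memory, as in the trace above.
	 */
	Assert(!force || !IsTransactionState());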


Inserting a sleep in pgoutput_commit_txn reproduced this: it crashes with
the same stack trace and a similar variable state.

diff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c
index b197bfd565..def4d751d3 100644
--- a/src/backend/replication/pgoutput/pgoutput.c
+++ b/src/backend/replication/pgoutput/pgoutput.c
@@ -568,6 +568,7 @@ pgoutput_commit_txn(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,
 		return;
 	}
 
+	sleep(2);
 	OutputPluginPrepareWrite(ctx, true);
 	logicalrep_write_commit(ctx->out, txn, commit_lsn);
 	OutputPluginWrite(ctx, true);

The following actually works for this.

diff --git a/src/backend/replication/logical/worker.c b/src/backend/replication/logical/worker.c
index 4171371296..f4e5359513 100644
--- a/src/backend/replication/logical/worker.c
+++ b/src/backend/replication/logical/worker.c
@@ -2882,10 +2882,11 @@ LogicalRepApplyLoop(XLogRecPtr last_received)
 			send_feedback(last_received, requestReply, requestReply);
 
 			/*
-			 * Force reporting to ensure long idle periods don't lead to
-			 * arbitrarily delayed stats.
+			 * Force reporting to ensure long out-of-transaction idle periods
+			 * don't lead to arbitrarily delayed stats.
 			 */
-			pgstat_report_stat(true);
+			if (!IsTransactionState())
+				pgstat_report_stat(true);
 		}
 	}
 


Yes, that seems to fix it: I applied the latter patch and ran my program
250 times without errors. Then I removed it again and it gave the error
within 15 runs.


thanks!

Erik



regards.






event trigger sgml touch-up

2023-10-16 Thread Erik Rijkers

Some small (grammatical) changes in event-trigger.sgml

(Also one deletion of a 'community we', which I think is just confusing
for readers outside the PostgreSQL community.)



Erik

--- doc/src/sgml/event-trigger.sgml.orig	2023-10-16 17:16:00.017452340 +0200
+++ doc/src/sgml/event-trigger.sgml	2023-10-16 17:22:32.965450992 +0200
@@ -40,7 +40,7 @@
  The login event occurs when an authenticated user logs
  into the system. Any bug in a trigger procedure for this event may
  prevent successful login to the system. Such bugs may be fixed by
- setting  is set to false
+ setting  to false
  either in a connection string or configuration file. Alternative is
  restarting the system in single-user mode (as event triggers are
  disabled in this mode). See the  reference
@@ -49,8 +49,8 @@
  To prevent servers from becoming inaccessible, such triggers must avoid
  writing anything to the database when running on a standby.
  Also, it's recommended to avoid long-running queries in
- login event triggers.  Notes that, for instance,
- cancelling connection in psql wouldn't cancel
+ login event triggers.  Note that, for instance,
+ cancelling a connection in psql wouldn't cancel
  the in-progress login trigger.

 
@@ -1359,7 +1359,7 @@
 END IF;
 
 -- The checks below cannot be performed on standby servers so
--- ensure the database is not in recovery before we perform any
+-- ensure the database is not in recovery before performing any
 -- operations.
 SELECT pg_is_in_recovery() INTO rec;
 IF rec THEN


Re: remaining sql/json patches

2023-11-10 Thread Erik Rijkers

Hi,

At the moment, what is the patchset to be tested?  The latest SQL/JSON 
server I have is from September, and it has become unclear to me what 
belongs to the SQL/JSON patchset.  It seems to me that cfbot erroneously 
shows green because it successfully compiles later detail patches (i.e., 
not the SQL/JSON set itself).  Please correct me if I'm wrong and it is 
in fact possible to derive from cfbot the set of patches to use to build 
the latest SQL/JSON server.


Thanks!

Erik




  1   2   3   >