pg_upgrade operation failed if table created in --single user mode

2021-12-14 Thread tushar

Hi,

Please refer to this scenario, where the pg_upgrade operation fails if 
the table was created in single-user mode.


PG v13
--connect to PG v13 using single user mode  ( ./postgres --single -D 
/tmp/data13 postgres )

--create table ( backend> create table r(n int); )
--exit  ( ctrl + D)

-- Perform pg_upgrade (PG v13 -> PG v15) (./pg_upgrade -d data13 -D 
data15 -b /usr/psql-12/bin -B .)


It fails with these messages:

Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster
  postgres
*failure*

Consult the last few lines of "pg_upgrade_dump_14174.log" for
the probable cause of the failure.
Failure, exiting

--cat pg_upgrade_dump_14174.log
--
--
--
--
pg_restore: creating TABLE "public.r"
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 200; 1259 14180 TABLE r edb
pg_restore: error: could not execute query: ERROR:  pg_type array OID 
value not set when in binary upgrade mode

Command was:
-- For binary upgrade, must preserve pg_type oid
SELECT 
pg_catalog.binary_upgrade_set_next_pg_type_oid('14181'::pg_catalog.oid);



-- For binary upgrade, must preserve pg_class oids and relfilenodes
SELECT 
pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('14180'::pg_catalog.oid);
SELECT 
pg_catalog.binary_upgrade_set_next_heap_relfilenode('14180'::pg_catalog.oid);


CREATE TABLE "public"."r" (
    "n" integer
);

-- For binary upgrade, set heap's relfrozenxid and relminmxid
UPDATE pg_catalog.pg_class
SET relfrozenxid = '492', relminmxid = '1'
WHERE oid = '"public"."r"'::pg_catalog.regclass;
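
For reference, the failing TOC entry above shows that the table was created 
with OID 14180, i.e. below FirstNormalObjectId (16384). A quick way to check 
which OIDs the table received on the old cluster (this query is not part of 
the original report, just a hypothetical check):

SELECT oid, relname, reltype FROM pg_catalog.pg_class WHERE relname = 'r';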

Is this expected?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: preserving db/ts/relfilenode OIDs across pg_upgrade (was Re: storing an explicit nonce)

2021-12-14 Thread tushar

On 12/14/21 2:35 AM, Robert Haas wrote:

I spent much of today reviewing 0001. Here's an updated version, so
far only lightly tested. Please check whether I've broken anything.

Thanks Robert. I tested upgrades from v9.6/12/13/v14 -> v15 (with the patch); 
things are working fine, i.e. table/index relfilenodes are preserved and do 
not change after pg_upgrade.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: preserving db/ts/relfilenode OIDs across pg_upgrade (was Re: storing an explicit nonce)

2021-12-15 Thread tushar

On 12/15/21 12:09 AM, tushar wrote:

I spent much of today reviewing 0001. Here's an updated version, so
far only lightly tested. Please check whether I've broken anything.

Thanks Robert. I tested upgrades from v9.6/12/13/v14 -> v15 (with the patch); 
things are working fine, i.e. table/index relfilenodes are preserved and do 
not change after pg_upgrade.
I also covered tablespace OID testing scenarios, and those OIDs are preserved 
after pg_upgrade as well.
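
For anyone repeating this kind of check, a sketch of the kind of catalog 
query that can be run on the old cluster before pg_upgrade and on the new 
cluster afterwards, and compared (the object names here are placeholders, 
not from the original test):

SELECT c.oid, c.relname, c.relfilenode, c.reltablespace
FROM pg_catalog.pg_class c
WHERE c.relname IN ('t1', 't1_idx');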


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: refactoring basebackup.c

2021-12-27 Thread tushar

On 11/22/21 11:05 PM, Jeevan Ladhe wrote:

Please find the lz4 compression patch here that basically has:

Thanks. Could you please rebase your patch? It is failing at my end:

[edb@centos7tushar pg15_lz]$ git apply /tmp/v8-0001-LZ4-compression.patch
error: patch failed: doc/src/sgml/ref/pg_basebackup.sgml:230
error: doc/src/sgml/ref/pg_basebackup.sgml: patch does not apply
error: patch failed: src/backend/replication/Makefile:19
error: src/backend/replication/Makefile: patch does not apply
error: patch failed: src/backend/replication/basebackup.c:64
error: src/backend/replication/basebackup.c: patch does not apply
error: patch failed: src/include/replication/basebackup_sink.h:285
error: src/include/replication/basebackup_sink.h: patch does not apply

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2021-12-28 Thread tushar

On 12/28/21 1:11 PM, Jeevan Ladhe wrote:
You need to apply Robert's v10 version patches 0002, 0003 and 0004, 
before applying the lz4 patch(v8 version).

Thanks, I am able to apply it now.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2022-01-03 Thread tushar

On 11/22/21 11:05 PM, Jeevan Ladhe wrote:

Please find the lz4 compression patch here that basically has:

One small issue: in "pg_basebackup --help", we are not displaying the
lz4 value under the --server-compression option.


[edb@tusharcentos7-v14 bin]$ ./pg_basebackup --help | grep 
server-compression

  --server-compression=none|gzip|gzip[1-9]

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2022-01-03 Thread tushar

On 11/22/21 11:05 PM, Jeevan Ladhe wrote:

Please find the lz4 compression patch here that basically has:
Please refer to this scenario, where --server-compression is only compressing 
the base backup into lz4 format but not the pg_wal directory:

[edb@centos7tushar bin]$ ./pg_basebackup -Ft --server-compression=lz4 
-Xstream -D foo


[edb@centos7tushar bin]$ ls foo
backup_manifest  base.tar.lz4  pg_wal.tar

The same is valid for gzip as well, if --server-compression is set to gzip:

[edb@centos7tushar bin]$ ./pg_basebackup -Ft --server-compression=gzip4 
-Xstream -D foo1


[edb@centos7tushar bin]$ ls foo1
backup_manifest  base.tar.gz  pg_wal.tar

If this scenario is valid, then both files should be in lz4 format; otherwise 
we should get an error, something like "not a valid option".

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2022-01-05 Thread tushar

On 1/4/22 8:07 PM, Robert Haas wrote:

Before sending an email like this, it would be a good idea to read the
documentation for the --server-compression option.

Sure, Thanks Robert.

One scenario where I feel the error message is confusing; if this is not 
supported at all, then the error message needs to be a little clearer.


If we use -z (or -Z) with -t, we get this error:
[edb@centos7tushar bin]$  ./pg_basebackup -t server:/tmp/test0 -Xfetch -z
pg_basebackup: error: only tar mode backups can be compressed
Try "pg_basebackup --help" for more information.

but after removing the -z option, the backup is in tar mode anyway:

[edb@centos7tushar bin]$  ./pg_basebackup -t server:/tmp/test0 -Xfetch
[edb@centos7tushar bin]$ ls /tmp/test0
backup_manifest  base.tar

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2022-01-05 Thread tushar
On Tue, Dec 28, 2021 at 1:12 PM Jeevan Ladhe 
wrote:

> Hi Tushar,
>
> You need to apply Robert's v10 version patches 0002, 0003 and 0004, before
> applying the lz4 patch(v8 version).
> Please let me know if you still face any issues.
>

Thanks, Jeevan.
I tested the --server-compression option using different other options of
pg_basebackup, and also checked that -t/--server-compression from a v15
pg_basebackup throws an error if the server version is v14 or below. Things
are looking good to me.
Two open issues:
1) The lz4 value is missing for --server-compression in "pg_basebackup --help".
2) The error message needs improvement when using -t server with -z/-Z.

regards,


Re: extensible options syntax for replication parser?

2021-10-13 Thread tushar

On 10/5/21 10:26 PM, Robert Haas wrote:

Hearing no further comments, I've gone ahead and committed these
patches. I'm still slightly nervous that I may have missed some issue,
but I think at this point having the patches in the tree is more
likely to turn it up than any other course of action.
I have tested a couple of scenarios with pg_basebackup / pg_receivewal / 
pg_recvlogical / publication (wal_level=logical) and subscription, etc., 
against HEAD (with the patches), plus cross-version testing. Things look 
good to me and no breakage was found.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: Minimal logical decoding on standbys

2019-03-01 Thread tushar

Hi,

While testing this feature, I found that if lots of inserts happen on 
the master cluster, then pg_recvlogical does not show the DATA 
information for the logical replication slot created on the SLAVE.


Please refer to this scenario:

1)
Create a Master cluster with wal_level=logical and create a logical 
replication slot -
 SELECT * FROM pg_create_logical_replication_slot('master_slot', 
'test_decoding');


2)
Create a Standby  cluster using pg_basebackup ( ./pg_basebackup -D 
slave/ -v -R)  and create logical replication slot -
SELECT * FROM pg_create_logical_replication_slot('standby_slot', 
'test_decoding');


3)
X terminal - start  pg_recvlogical  , provide port= ( slave 
cluster)  and specify slot=standby_slot
./pg_recvlogical -d postgres  -p  -s 1 -F 1  -v --slot=standby_slot  
--start -f -


Y terminal - start  pg_recvlogical  , provide port=5432 ( master 
cluster)  and specify slot=master_slot
./pg_recvlogical -d postgres  -p 5432 -s 1 -F 1  -v --slot=master_slot  
--start -f -


Z terminal - run pgbench against the Master cluster (./pgbench -i -s 10 
postgres)


I am able to see the DATA information on the Y terminal but not on X.

However, the same data is visible by running the query below on the SLAVE cluster:

SELECT * FROM pg_logical_slot_get_changes('standby_slot', NULL, NULL);

Is this expected?

regards,
tushar

On 12/17/2018 10:46 PM, Petr Jelinek wrote:

Hi,

On 12/12/2018 21:41, Andres Freund wrote:

I don't like the approach of managing the catalog horizon via those
periodically logged catalog xmin announcements.  I think we instead
should build ontop of the records we already have and use to compute
snapshot conflicts.  As of HEAD we don't know whether such tables are
catalog tables, but that's just a bool that we need to include in the
records, a basically immeasurable overhead given the size of those
records.

IIRC I was originally advocating adding that xmin announcement to the
standby snapshot message, but this seems better.


If we were to go with this approach, there'd be at least the following
tasks:
- adapt tests from [2]
- enforce hot-standby to be enabled on the standby when logical slots
   are created, and at startup if a logical slot exists
- fix issue around btree_xlog_delete_get_latestRemovedXid etc mentioned
   above.
- Have a nicer conflict handling than what I implemented here.  Craig's
   approach deleted the slots, but I'm not sure I like that.  Blocking
   seems more appropriately here, after all it's likely that the
   replication topology would be broken afterwards.
- get_rel_logical_catalog() shouldn't be in lsyscache.[ch], and can be
   optimized (e.g. check wal_level before opening rel etc).


Once we have this logic, it can be used to implement something like
failover slots on top, by having a mechanism that occasionally
forwards slots on standbys using pg_replication_slot_advance().


Looking at this from the failover slots perspective. Wouldn't blocking
on conflict mean that we stop physical replication on catalog xmin
advance when there is lagging logical replication on primary? It might
not be too big deal as in that use-case it should only happen if
hs_feedback was off at some point, but just wanted to point out this
potential problem.



--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Re: Minimal logical decoding on standbys

2019-03-04 Thread tushar
:65061 bid[integer]:1 
abalance[integer]:0 filler[character]:' '
table public.pgbench_accounts: INSERT: aid[integer]:65062 bid[integer]:1 
abalance[integer]:0 filler[character]:' '
table public.pgbench_accounts: INSERT: aid[integer]:65063 bid[integer]:1 
abalance[integer]:0 filler[character]:' '
table public.pgbench_accounts: INSERT: aid[integer]:65064 bid[integer]:1 
abalance[integer]:0 filler[character]:' '


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: Minimal logical decoding on standbys

2019-03-04 Thread tushar

On 03/04/2019 04:54 PM, tushar wrote:
.)Perform pg_basebackup using --slot=decoding_standby  with option -R 
. modify port= , start the server 


Set primary_slot_name = 'decoding_standby' in the postgresql.conf file 
of the slave.
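
For completeness, a minimal sketch of the relevant standby settings for this 
setup (the port value is hypothetical, since the original message omits it; 
primary_conninfo is already written by pg_basebackup -R):

# slave/postgresql.conf (sketch)
port = 5433                      # hypothetical standby port
hot_standby = on
hot_standby_feedback = on        # needed on the standby, as discussed later in the thread
primary_slot_name = 'decoding_standby'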


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Re: Minimal logical decoding on standbys

2019-03-05 Thread tushar

On 03/04/2019 10:57 PM, Andres Freund wrote:

Note that hot_standby_feedback=on needs to be set on a standby, not on
the primary (although it doesn't do any harm there).


Right. This parameter was enabled on both the master and the slave.

Is anyone able to reproduce this issue?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Server Crash in logical decoding if used inside --single mode

2019-03-06 Thread tushar

Hi,

Steps to reproduce on master sources:

.) Perform initdb ( ./initdb -D data)

.) set wal_level=logical in postgresql.conf file

.) Connect in single-user mode (./postgres --single -D data postgres)


.)Create logical replication slot followed by select * from 
pg_logical_slot_get_changes


backend>  SELECT * FROM pg_create_logical_replication_slot('m7', 
'test_decoding','f');

     1: slot_name    (typeid = 19, len = 64, typmod = -1, byval = f)
     2: lsn    (typeid = 3220, len = 8, typmod = -1, byval = t)
    
2019-03-06 17:27:42.080 GMT [1132] LOG:  logical decoding found 
consistent point at 0/163B9C8
2019-03-06 17:27:42.080 GMT [1132] DETAIL:  There are no running 
transactions.
2019-03-06 17:27:42.080 GMT [1132] STATEMENT:   SELECT * FROM 
pg_create_logical_replication_slot('m7', 'test_decoding','f');


     1: slot_name = "m7"    (typeid = 19, len = 64, typmod = -1, byval = f)
     2: lsn = "0/163BA00"    (typeid = 3220, len = 8, typmod = -1, 
byval = t)

    
backend> select * from pg_logical_slot_get_changes('m7',null,null);
     1: lsn    (typeid = 3220, len = 8, typmod = -1, byval = t)
     2: xid    (typeid = 28, len = 4, typmod = -1, byval = t)
     3: data    (typeid = 25, len = -1, typmod = -1, byval = f)
    
2019-03-06 17:28:04.979 GMT [1132] LOG:  starting logical decoding for 
slot "m7"
2019-03-06 17:28:04.979 GMT [1132] DETAIL:  Streaming transactions 
committing after 0/163BA00, reading WAL from 0/163B9C8.
2019-03-06 17:28:04.979 GMT [1132] STATEMENT:  select * from 
pg_logical_slot_get_changes('m7',null,null);


2019-03-06 17:28:04.979 GMT [1132] LOG:  logical decoding found 
consistent point at 0/163B9C8
2019-03-06 17:28:04.979 GMT [1132] DETAIL:  There are no running 
transactions.
2019-03-06 17:28:04.979 GMT [1132] STATEMENT:  select * from 
pg_logical_slot_get_changes('m7',null,null);


TRAP: FailedAssertion("!(slot != ((void *)0) && slot->active_pid != 0)", 
File: "slot.c", Line: 428)

Aborted (core dumped)

Stack trace -

(gdb) bt
#0  0x003746e325e5 in raise () from /lib64/libc.so.6
#1  0x003746e33dc5 in abort () from /lib64/libc.so.6
#2  0x008a96ad in ExceptionalCondition (conditionName=<optimized out>, errorType=<optimized out>, fileName=<optimized out>, lineNumber=<optimized out>)

    at assert.c:54
#3  0x00753253 in ReplicationSlotRelease () at slot.c:428
#4  0x00734dbd in pg_logical_slot_get_changes_guts 
(fcinfo=0x2771e48, confirm=true, binary=false) at logicalfuncs.c:355
#5  0x00640aa5 in ExecMakeTableFunctionResult 
(setexpr=0x27704f8, econtext=0x27703a8, argContext=<optimized out>, expectedDesc=0x2792c10, randomAccess=false)

    at execSRF.c:233
#6  0x00650c43 in FunctionNext (node=0x2770290) at 
nodeFunctionscan.c:95
#7  0x0063fbad in ExecScanFetch (node=0x2770290, accessMtd=0x650950 <FunctionNext>, recheckMtd=0x6501d0 <FunctionRecheck>) at execScan.c:93
#8  ExecScan (node=0x2770290, accessMtd=0x650950 <FunctionNext>, recheckMtd=0x6501d0 <FunctionRecheck>) at execScan.c:143
#9  0x006390a7 in ExecProcNode (queryDesc=0x276fc28, direction=<optimized out>, count=0, execute_once=144) at ../../../src/include/executor/executor.h:241
#10 ExecutePlan (queryDesc=0x276fc28, direction=<optimized out>, count=0, execute_once=144) at execMain.c:1643
#11 standard_ExecutorRun (queryDesc=0x276fc28, direction=<optimized out>, count=0, execute_once=144) at execMain.c:362
#12 0x0079e30b in PortalRunSelect (portal=0x27156b8, forward=<optimized out>, count=0, dest=<optimized out>) at pquery.c:929
#13 0x0079f671 in PortalRun (portal=0x27156b8, 
count=9223372036854775807, isTopLevel=true, run_once=true, 
dest=0xa826a0, altdest=0xa826a0, completionTag=0x7ffc04af2690 "")

    at pquery.c:770
#14 0x0079ba7b in exec_simple_query (query_string=0x27236f8 
"select * from pg_logical_slot_get_changes('m7',null,null);\n") at 
postgres.c:1215
#15 0x0079d044 in PostgresMain (argc=<optimized out>, argv=<optimized out>, dbname=0x26c9010 "postgres", username=<optimized out>) at postgres.c:4256

#16 0x006874eb in main (argc=5, argv=0x26a7e20) at main.c:224
(gdb) q

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Re: Minimal logical decoding on standbys

2019-03-07 Thread tushar
There is another issue, where I am getting an error while executing 
"pg_logical_slot_get_changes" on the SLAVE.


Master (running on port=5432) -  run "make installcheck"  after setting  
PATH=regress/ folder
Slave (running on port=)  -  Connect to regression database and 
select pg_logical_slot_get_changes


[centos@mail-arts bin]$ ./psql postgres -p  -f t.sql
You are now connected to database "regression" as user "centos".
 slot_name |    lsn
---+---
 m61   | 1/D437AD8
(1 row)

psql:t.sql:3: ERROR:  could not resolve cmin/cmax of catalog tuple

[centos@mail-arts bin]$ cat t.sql
\c regression
SELECT * from   pg_create_logical_replication_slot('m61', 'test_decoding');
select * from pg_logical_slot_get_changes('m61',null,null);

regards,

On 03/04/2019 10:57 PM, Andres Freund wrote:

Hi,

On 2019-03-04 16:54:32 +0530, tushar wrote:

On 03/01/2019 11:16 PM, Andres Freund wrote:

So, if I understand correctly you do *not* have a physical replication
slot for this standby? For the feature to work reliably that needs to
exist, and you need to have hot_standby_feedback enabled. Does having
that fix the issue?

Ok, This time around  - I performed like this -

.)Master cluster (set wal_level=logical and hot_standby_feedback=on in
postgresql.conf) , start the server and create a physical replication slot

Note that hot_standby_feedback=on needs to be set on a standby, not on
the primary (although it doesn't do any harm there).

Thanks,

Andres



--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Re: Minimal logical decoding on standbys

2019-03-13 Thread tushar
x00827c42 in BackendStartup (port=0x2afe3b0) at 
postmaster.c:4090

#24 0x00824036 in ServerLoop () at postmaster.c:1703
#25 0x008238ec in PostmasterMain (argc=3, argv=0x2ad6d00) at 
postmaster.c:1376

#26 0x00748aab in main (argc=3, argv=0x2ad6d00) at main.c:228
(gdb)

regards,


On 03/07/2019 09:03 PM, tushar wrote:
There is another issue, where I am getting an error while executing 
"pg_logical_slot_get_changes" on the SLAVE.


Master (running on port=5432) -  run "make installcheck"  after 
setting  PATH=PGDATABASE=postgres from regress/ folder
Slave (running on port=)  -  Connect to regression database and 
select pg_logical_slot_get_changes


[centos@mail-arts bin]$ ./psql postgres -p  -f t.sql
You are now connected to database "regression" as user "centos".
 slot_name |    lsn
---+---
 m61   | 1/D437AD8
(1 row)

psql:t.sql:3: ERROR:  could not resolve cmin/cmax of catalog tuple

[centos@mail-arts bin]$ cat t.sql
\c regression
SELECT * from   pg_create_logical_replication_slot('m61', 
'test_decoding');

select * from pg_logical_slot_get_changes('m61',null,null);

regards,

On 03/04/2019 10:57 PM, Andres Freund wrote:

Hi,

On 2019-03-04 16:54:32 +0530, tushar wrote:

On 03/01/2019 11:16 PM, Andres Freund wrote:

So, if I understand correctly you do *not* have a physical replication
slot for this standby? For the feature to work reliably that needs to
exist, and you need to have hot_standby_feedback enabled. Does having
that fix the issue?

Ok, This time around  - I performed like this -

.)Master cluster (set wal_level=logical and hot_standby_feedback=on in
postgresql.conf) , start the server and create a physical 
replication slot

Note that hot_standby_feedback=on needs to be set on a standby, not on
the primary (although it doesn't do any harm there).

Thanks,

Andres





--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




[sqlsmith] Failed assertion at relnode.c

2019-03-14 Thread tushar
g as c8,
  ref_0.description as c9,
  ref_4.opfname as c10,
  sample_0.stxname as c11
    from
  pg_catalog.pg_statistic_ext as sample_0 tablesample 
system (2.7)

    where cast(null as point) @ cast(null as polygon)
    limit 105) as subq_0
  where (true)
    and ((ref_4.opfname is NULL)
  and (cast(null as int8) < ref_0.objsubid))) as subq_1
where subq_1.c5 is not NULL
limit 79;

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




[PG 9.6]make is failing for test_decoding contrib module.

2019-03-29 Thread tushar

Hi,

Found by one of my colleagues, Kashif Jeeshan: in PG 9.6, make is 
failing for the test_decoding contrib module.


[centos@centos-cpula test_decoding]$ make
gcc -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute 
-Wformat-security -fno-strict-aliasing -fwrapv -O2 -fPIC -I. -I. 
-I../../src/include -D_GNU_SOURCE   -c -o test_decoding.o test_decoding.c

In file included from ../../src/include/postgres.h:48,
 from test_decoding.c:13:
../../src/include/utils/elog.h:71:28: error: utils/errcodes.h: No such 
file or directory

In file included from ../../src/include/replication/slot.h:15,
 from ../../src/include/replication/logical.h:12,
 from test_decoding.c:23:
../../src/include/storage/lwlock.h:129:33: error: storage/lwlocknames.h: 
No such file or directory

test_decoding.c: In function ‘pg_decode_startup’:
test_decoding.c:127: error: ‘ERRCODE_INVALID_PARAMETER_VALUE’ undeclared 
(first use in this function)
test_decoding.c:127: error: (Each undeclared identifier is reported only 
once

test_decoding.c:127: error: for each function it appears in.)
make: *** [test_decoding.o] Error 1
[centos@centos-cpula test_decoding]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [PG 9.6]make is failing for test_decoding contrib module.

2019-03-29 Thread tushar

On 03/29/2019 06:12 PM, Robert Haas wrote:

On Fri, Mar 29, 2019 at 8:24 AM tushar  wrote:

Found by one of the my colleague - Kashif Jeeshan ,  in PG 9.6 - make is

Kashif Jeeshan?
:-) Actually, he is also working on logical replication on standbys 
testing whenever he has some bandwidth (on/off). He found one issue, and 
I suggested he check the behavior on PG 9.6/PG 10; while doing so, he got 
this issue when he ran make against test_decoding.

test_decoding.c:127: error: for each function it appears in.)
make: *** [test_decoding.o] Error 1
[centos@centos-cpula test_decoding]$

I think your tree is not clean, or you haven't built the server
correctly first.  If this were actually broken, the buildfarm would be
red:

https://buildfarm.postgresql.org/cgi-bin/show_status.pl

Try 'git clean -dfx'.


Yes, you are right.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [PG 9.6]make is failing for test_decoding contrib module.

2019-03-29 Thread tushar

On 03/29/2019 06:12 PM, Robert Haas wrote:

Kashif Jeeshan?


Ohh, Please read - Kashif Zeeshan.  Sorry for the typo.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





SSL Connection still showing TLSv1.3 even it is disabled in ssl_ciphers

2019-08-05 Thread tushar
1:!TLSv1.3
(1 row)

postgres=#

A cipher which has been rejected should not be displayed in the message.

Is this expected?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



ERROR: could not map dynamic shared memory segment

2018-02-07 Thread tushar

Hi ,

I am getting "ERROR:  could not map dynamic shared memory segment" in the 
log file.


Please refer to this scenario, on the latest V11/V10 sources:

 set parallel_setup_cost=0;
 set parallel_tuple_cost=0;
 set max_parallel_workers_per_gather=4;
 create table r(n int);
insert into r values (generate_series(1,100));
insert into r values (generate_series(100,200));
analyze r;

postgres=# select * from r where n < (select n from r where n<=1 
limit 6644);

ERROR:  more than one row returned by a subquery used as an expression

in the log file -

2018-02-07 10:28:27.615 GMT [20569] ERROR:  more than one row returned 
by a subquery used as an expression
2018-02-07 10:28:27.615 GMT [20569] STATEMENT:  select * from r where n 
< (select n from r where n<=1 limit 6644);
2018-02-07 10:28:27.616 GMT [20586] ERROR:  could not map dynamic shared 
memory segment
2018-02-07 10:28:27.616 GMT [20587] ERROR:  could not map dynamic shared 
memory segment
2018-02-07 10:28:27.617 GMT [20559] LOG:  background worker "parallel 
worker" (PID 20586) exited with exit code 1
2018-02-07 10:28:27.617 GMT [20559] LOG:  background worker "parallel 
worker" (PID 20587) exited with exit code 1


Is this something already reported ?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Re: [HACKERS] Block level parallel vacuum

2019-12-02 Thread tushar

On 11/27/19 11:13 PM, Masahiko Sawada wrote:

Thank you for reviewing this patch. All changes you made looks good to me.

I thought I already have posted all v34 patches but didn't, sorry. So
I've attached v35 patch set that incorporated your changes and it
includes Dilip's patch for gist index (0001). These patches can be
applied on top of the current HEAD and make check should pass.
Regards,
While testing this feature against the v35 patches (minus 004) on master, 
I am getting a crash when a user connects to the server in single-user mode 
and tries to perform vacuum (parallel 1). Output:


[tushar@localhost bin]$ ./postgres --single -D data/  postgres
2019-12-03 12:49:26.967 +0530 [70300] LOG:  database system was 
interrupted; last known up at 2019-12-03 12:48:51 +0530
2019-12-03 12:49:26.987 +0530 [70300] LOG:  database system was not 
properly shut down; automatic recovery in progress
2019-12-03 12:49:26.990 +0530 [70300] LOG:  invalid record length at 
0/29F1638: wanted 24, got 0

2019-12-03 12:49:26.990 +0530 [70300] LOG:  redo is not required

PostgreSQL stand-alone backend 13devel
backend>
backend> vacuum full;
backend> vacuum (parallel 1);
TRAP: FailedAssertion("IsUnderPostmaster", File: "dsm.c", Line: 444)
./postgres(ExceptionalCondition+0x53)[0x8c6fa3]
./postgres[0x785ced]
./postgres(GetSessionDsmHandle+0xca)[0x49304a]
./postgres(InitializeParallelDSM+0x74)[0x519d64]
./postgres(heap_vacuum_rel+0x18d3)[0x4e47e3]
./postgres[0x631d9a]
./postgres(vacuum+0x444)[0x632f14]
./postgres(ExecVacuum+0x2bb)[0x63369b]
./postgres(standard_ProcessUtility+0x4cf)[0x7b312f]
./postgres[0x7b02c6]
./postgres[0x7b0dd3]
./postgres(PortalRun+0x162)[0x7b1b02]
./postgres[0x7ad874]
./postgres(PostgresMain+0x1002)[0x7aebf2]
./postgres(main+0x1ce)[0x48188e]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f4fe6908505]
./postgres[0x481b6a]
Aborted (core dumped)

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: Minimal logical decoding on standbys

2019-04-09 Thread tushar



On 03/13/2019 08:40 PM, tushar wrote:

Hi ,

I am getting a server crash on the standby while executing the 
pg_logical_slot_get_changes function; please refer to this scenario


Master cluster (./initdb -D master)
set wal_level='hot_standby' in the master/postgresql.conf file
start the server , connect to  psql terminal and create a physical 
replication slot ( SELECT * from 
pg_create_physical_replication_slot('p1');)


perform pg_basebackup using --slot 'p1'  (./pg_basebackup -D slave/ -R 
--slot p1 -v))
set wal_level='logical' , hot_standby_feedback=on, 
primary_slot_name='p1' in slave/postgresql.conf file
start the server , connect to psql terminal and create a logical 
replication slot (  SELECT * from 
pg_create_logical_replication_slot('t','test_decoding');)


run pgbench ( ./pgbench -i -s 10 postgres) on master and select 
pg_logical_slot_get_changes on Slave database


postgres=# select * from pg_logical_slot_get_changes('t',null,null);
2019-03-13 20:34:50.274 IST [26817] LOG:  starting logical decoding 
for slot "t"
2019-03-13 20:34:50.274 IST [26817] DETAIL:  Streaming transactions 
committing after 0/6C60, reading WAL from 0/6C28.
2019-03-13 20:34:50.274 IST [26817] STATEMENT:  select * from 
pg_logical_slot_get_changes('t',null,null);
2019-03-13 20:34:50.275 IST [26817] LOG:  logical decoding found 
consistent point at 0/6C28
2019-03-13 20:34:50.275 IST [26817] DETAIL:  There are no running 
transactions.
2019-03-13 20:34:50.275 IST [26817] STATEMENT:  select * from 
pg_logical_slot_get_changes('t',null,null);
TRAP: FailedAssertion("!(data == tupledata + tuplelen)", File: 
"decode.c", Line: 977)

server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: 2019-03-13 
20:34:50.276 IST [26809] LOG:  server process (PID 26817) was 
terminated by signal 6: Aborted



Andres, do you think this is an issue which needs to be fixed?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: Minimal logical decoding on standbys

2019-04-11 Thread tushar

On 04/10/2019 09:39 PM, Andres Freund wrote:

  Have you reproduced this with Amit's latest version?


Yes, it is very much reproducible.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[pg_rewind] cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory

2019-05-27 Thread tushar

Hi,

I am getting the below error after performing pg_rewind, when I try to 
start the new slave (which earlier was my master) against PG v12 Beta1.

"
cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory
2019-05-27 18:55:47.387 IST [25500] LOG:  entering standby mode
cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory
cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory
"

Steps to reproduce -
=
0)mkdir /tmp/archive_dir1
1)Master Setup -> ./initdb -D master  , add these parameters in 
postgresql.conf file -

"
wal_level = hot_standby
wal_log_hints = on
max_wal_senders = 2
wal_keep_segments = 64
hot_standby = on
archive_mode=on
archive_command='cp %p /tmp//archive_dir1/%f'
port=5432
"
Start the server  (./pg_ctl -D master start)
Connect to psql terminal - create table/ insert few rows

2)Slave Setup  ->  ./pg_basebackup -PR -X stream -c fast -h 127.0.0.1 -U 
centos -p 5432 -D slave


add these parameters in postgresql.conf file -
"
primary_conninfo = 'user=centos host=127.0.0.1 port=5432'
promote_trigger_file = '/tmp/s1.txt'
restore_command='cp %p /tmp/archive_dir1/%f'
port=5433
"
Start Slave  (./pg_ctl -D slave start)

3) Touch the trigger file (touch /tmp/s1.txt) -> standby.signal is gone 
from the standby directory and I am now able to insert rows on the standby server.

4)stop master ( ./pg_ctl -D master stop)
5)Perform pg_rewind
[centos@mail-arts bin]$ ./pg_rewind -D master/ 
--source-server="host=localhost port=5433 user=centos password=edb 
dbname=postgres"

pg_rewind: servers diverged at WAL location 0/3003538 on timeline 1
pg_rewind: rewinding from last common checkpoint at 0/260 on timeline 1

pg_rewind: Done!

6)Create standby.signal file on master directory ( touch standby.signal)

7)Modify old master/postgresql.conf file -
primary_conninfo = 'user=centos host=127.0.0.1 port=5433'
promote_trigger_file = '/tmp/s1.txt'
restore_command='cp %p /tmp/archive_dir1/%f'
port=5432

8)Try to start the new slave/old master -

[centos@mail-arts bin]$ ./pg_ctl -D m1/ start
waiting for server to start2019-05-27 18:55:47.237 IST [25499] LOG:  
starting PostgreSQL 12beta1 on x86_64-pc-linux-gnu, compiled by gcc 
(GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit
2019-05-27 18:55:47.237 IST [25499] LOG:  listening on IPv6 address 
"::1", port 5432
2019-05-27 18:55:47.237 IST [25499] LOG:  listening on IPv4 address 
"127.0.0.1", port 5432
2019-05-27 18:55:47.239 IST [25499] LOG:  listening on Unix socket 
"/tmp/.s.PGSQL.5432"
2019-05-27 18:55:47.259 IST [25500] LOG:  database system was 
interrupted while in recovery at log time 2019-05-27 18:53:45 IST
2019-05-27 18:55:47.259 IST [25500] HINT:  If this has occurred more 
than once some data might be corrupted and you might need to choose an 
earlier recovery target.

cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory
2019-05-27 18:55:47.387 IST [25500] LOG:  entering standby mode
cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory
cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory
cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory
2019-05-27 18:55:47.402 IST [25500] LOG:  redo starts at 0/228
cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory
2019-05-27 18:55:47.410 IST [25500] LOG:  invalid record length at 
0/301E740: wanted 24, got 0
2019-05-27 18:55:47.413 IST [25509] FATAL:  the database system is 
starting up
2019-05-27 18:55:47.413 IST [25508] FATAL:  could not connect to the 
primary server: FATAL:  the database system is starting up

cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory
cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory
2019-05-27 18:55:47.424 IST [25513] FATAL:  the database system is 
starting up
2019-05-27 18:55:47.425 IST [25512] FATAL:  could not connect to the 
primary server: FATAL:  the database system is starting up

cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory
.cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory

Is there anything I need to change/add to make it work?

Thanks.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





pg_logdir_ls function throwing an error if log_filename name is not default

2018-10-24 Thread tushar

Hi ,

Please refer this small testcase -

Open postgresql.conf file and set -

log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log1'
log_filename = 'abcd-%Y-%m-%d_%H%M%S.log'

Restart the server, connect to the psql terminal, create the adminpack 
extension, and run the pg_logdir_ls function:


postgres=# create extension adminpack;
CREATE EXTENSION
postgres=# SELECT filename, filetime FROM pg_logdir_ls() AS A(filetime 
timestamp, filename text) ORDER BY filetime ASC;
ERROR:  the log_filename parameter must equal 
'postgresql-%Y-%m-%d_%H%M%S.log'

postgres=#

postgres=# show log_filename ;
  log_filename
-
 abcd-%Y-%m-%d_%H%M%S.log
(1 row)

postgres=#

Is this expected? If so, is there any workaround?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Server crash in pg_replication_slot_advance function

2018-02-16 Thread tushar
 postgres.c:4144

#15 0x0080d11a in BackendRun (port=0x2063d00) at postmaster.c:4412
#16 0x0080c88e in BackendStartup (port=0x2063d00) at 
postmaster.c:4084

#17 0x00808cbb in ServerLoop () at postmaster.c:1757
#18 0x00808379 in PostmasterMain (argc=3, argv=0x203aeb0) at 
postmaster.c:1365

#19 0x0073d3d0 in main (argc=3, argv=0x203aeb0) at main.c:228
(gdb)

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




After an error - pg_replication_slot is dropped

2018-02-16 Thread tushar

Hi,

Please refer to this straightforward scenario against the latest sources of v11.

[centos@centos-cpula bin]$ ./psql  postgres
psql (11devel)
Type "help" for help.

postgres=#  SELECT * FROM 
pg_create_logical_replication_slot('regression_slot1', 'test_decoding', 
true);

    slot_name |    lsn
--+---
 regression_slot1 | 0/40001E8
(1 row)

postgres=#
postgres=# select * from pg_replication_slots;
    slot_name |    plugin | slot_type | datoid | database | 
temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | 
confirmed_flush_lsn

--+---+---++--+---+++--+--+-+-
 regression_slot1 | test_decoding | logical   |  13220 | postgres | 
t | t  |  28015 |  |  557 | 0/40001B0 | 
0/40001E8

(1 row)

--Try to create the same slot again; getting an error, which is expected
postgres=#
postgres=#  SELECT * FROM 
pg_create_logical_replication_slot('regression_slot1', 'test_decoding', 
true);

ERROR:  replication slot "regression_slot1" already exists
postgres=#

--No slot found
postgres=# select * from pg_replication_slots;
 slot_name | plugin | slot_type | datoid | database | temporary | 
active | active_pid | xmin | catalog_xmin | restart_lsn | 
confirmed_flush_lsn

---++---++--+---+++--+--+-----+-
(0 rows)

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Re: Server crash in pg_replication_slot_advance function

2018-02-16 Thread tushar

On 02/16/2018 03:25 PM, amul sul wrote:

  Attached patch proposes a required fix.


Thanks, the issue seems to be fixed with this patch; now I am getting an 
expected error: ERROR:  cannot move slot to 0/290, minimum is 0/298


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Server Crash while executing pg_replication_slot_advance (second time)

2018-02-16 Thread tushar
ryDesc=0x2cce878, 
direction=ForwardScanDirection, count=0, execute_once=1 '\001') at 
execMain.c:304
#16 0x008afeeb in PortalRunSelect (portal=0x2d0e838, forward=1 
'\001', count=0, dest=0x2d63628) at pquery.c:932
#17 0x008afb79 in PortalRun (portal=0x2d0e838, 
count=9223372036854775807, isTopLevel=1 '\001', run_once=1 '\001', 
dest=0x2d63628, altdest=0x2d63628,

    completionTag=0x7ffd6a436480 "") at pquery.c:773
#18 0x008a9ba0 in exec_simple_query (query_string=0x2ca9438 
"SELECT end_lsn FROM pg_replication_slot_advance('regression_slot1', 
'0/600');") at postgres.c:1120
#19 0x008ade34 in PostgresMain (argc=1, argv=0x2cd4ec0, 
dbname=0x2cd4d20 "postgres", username=0x2ca5f58 "centos") at postgres.c:4144

#20 0x0080d11a in BackendRun (port=0x2cccd00) at postmaster.c:4412
#21 0x0080c88e in BackendStartup (port=0x2cccd00) at 
postmaster.c:4084

#22 0x00808cbb in ServerLoop () at postmaster.c:1757
#23 0x00808379 in PostmasterMain (argc=3, argv=0x2ca3eb0) at 
postmaster.c:1365

#24 0x0073d3d0 in main (argc=3, argv=0x2ca3eb0) at main.c:228
(gdb)

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: After an error - pg_replication_slot is dropped

2018-02-16 Thread tushar

On 02/16/2018 04:02 PM, Petr Jelinek wrote:

It's because you are creating temporary slot. Temporary slots are
removed on error, that's a documented behavior.


Thanks for pointing that out, but it looks like weird behavior: even a 
small mistake can remove the slot. A temporary table doesn't go away 
automatically after an error, does it?
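
A minimal sketch of the contrast (assumes wal_level=logical and the 
test_decoding plugin, as in the scenario above; the object names are 
placeholders):

-- a temp table created outside an explicit transaction survives a later error
CREATE TEMP TABLE t_demo(n int);
SELECT * FROM no_such_table;          -- ERROR
SELECT * FROM t_demo;                 -- still exists

-- a temporary replication slot is dropped as soon as the session hits an error
SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding', true);
SELECT * FROM no_such_table;          -- ERROR
SELECT * FROM pg_replication_slots;   -- demo_slot is gone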


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Re: Server Crash while executing pg_replication_slot_advance (second time)

2018-03-05 Thread tushar

Hi,

There is another similar issue, where I am getting an error:

postgres=# select * from pg_replication_slots ;
    slot_name |    plugin | slot_type | datoid | database | 
temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | 
confirmed_flush_lsn

--+---+---++--+---+++--+--+-+-
 regression_slot2 | test_decoding | logical   |  13223 | postgres | 
f | f  |    |  |  569 | 0/2B000370  | 
0/2B0003A8

(1 row)

postgres=#  SELECT * FROM 
pg_replication_slot_advance('regression_slot2',pg_switch_wal());

    slot_name |  end_lsn
--+
 regression_slot2 | 0/2B0003C0
(1 row)

postgres=#  SELECT * FROM 
pg_replication_slot_advance('regression_slot2',pg_switch_wal());

ERROR:  invalid record length at 0/2B0003C0: wanted 24, got 0
postgres=#

regards,

On 02/16/2018 04:04 PM, tushar wrote:


Hi,

Getting another server crash against the latest sources of v11 while 
executing pg_replication_slot_advance a second time. This issue is 
also reproducible with the patch given at 
<https://www.postgresql.org/message-id/CAAJ_b9721pXZST4tGs%2BNPbjRxXF7gmrEomVhTW_Pa2QFD7Lr1A%40mail.gmail.com>


Test-Case scenario-

postgres=#  SELECT * FROM 
pg_create_logical_replication_slot('regression_slot1', 
'test_decoding', true);

    slot_name |    lsn
--+---
 regression_slot1 | 0/4000258
(1 row)

postgres=# select * from pg_replication_slots;
    slot_name |    plugin | slot_type | datoid | database | 
temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | 
confirmed_flush_lsn

--+---+---++--+---+++--+--+-+-
 regression_slot1 | test_decoding | logical   |  13220 | postgres | 
t | t  |   8756 |  |  557 | 0/4000220   | 
0/4000258

(1 row)

postgres=# select pg_switch_wal();
 pg_switch_wal
---
 0/4000270
(1 row)

postgres=# select pg_switch_wal();
 pg_switch_wal
---
 0/500
(1 row)

postgres=# SELECT end_lsn FROM 
pg_replication_slot_advance('regression_slot1', '0/500');

  end_lsn
---
 0/500
(1 row)

postgres=# select * from pg_replication_slots;
    slot_name |    plugin | slot_type | datoid | database | 
temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | 
confirmed_flush_lsn

--+---+---++--+---+++--+--+-+-
 regression_slot1 | test_decoding | logical   |  13220 | postgres | 
t | t  |   8756 |  |  557 | 0/4000220   | 
0/500

(1 row)

postgres=# select pg_switch_wal();
 pg_switch_wal
---
 0/578
(1 row)

postgres=# select pg_switch_wal();
 pg_switch_wal
---
 0/600
(1 row)

postgres=# SELECT end_lsn FROM 
pg_replication_slot_advance('regression_slot1', '0/600');

server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!> \q

Stack trace -

Core was generated by `postgres: centos postgres [local] 
SELECT  '.

Program terminated with signal 6, Aborted.
#0  0x003746e325e5 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install 
glibc-2.12-1.192.el6.x86_64 keyutils-libs-1.4-5.el6.x86_64 
krb5-libs-1.10.3-57.el6.x86_64 libcom_err-1.41.12-22.el6.x86_64 
libselinux-2.0.94-7.el6.x86_64 openssl-1.0.1e-48.el6_8.4.x86_64 
zlib-1.2.3-29.el6.x86_64

(gdb) bt
#0  0x003746e325e5 in raise () from /lib64/libc.so.6
#1  0x003746e33dc5 in abort () from /lib64/libc.so.6
#2  0x00a11f8a in ExceptionalCondition (conditionName=0xad5f00 
"!(((RecPtr) % 8192 >= (((uintptr_t) ((sizeof(XLogPageHeaderData))) + 
((8) - 1)) & ~((uintptr_t) ((8) - 1)",
    errorType=0xad5eed "FailedAssertion", fileName=0xad5ee0 
"xlogreader.c", lineNumber=243) at assert.c:54
#3  0x0055df1d in XLogReadRecord (state=0x2d859c0, 
RecPtr=83886080, errormsg=0x7ffd6a4359f8) at xlogreader.c:243
#4  0x008516bc in pg_logical_replication_slot_advance 
(startlsn=83886080, moveto=100663296) at slotfuncs.c:370
#5  0x00851a21 in pg_replication_slot_advance 
(fcinfo=0x7ffd6a435bb0) at slotfuncs.c:488
#6  0x006defdc in ExecMakeTableFunctionResult 
(setexpr=0x2d5e260, econtext=0x2d5df58, argContext=0x2da97f0, 
expectedDesc=0x2d5f780, randomAccess=0 '\000') at execSRF.c:231
#7  0x006f1782 in FunctionNext (node=0x2d5de40) at 
no

cannot drop replication slot if server is running in single-user mode

2018-03-06 Thread tushar

Hi,

I found that if the server is running in single-user mode, we can create 
a replication slot there but cannot drop it.


backend> SELECT * FROM pg_create_physical_replication_slot('p');
2018-03-06 13:20:03.441 GMT [14869] LOG:  statement: SELECT * FROM 
pg_create_physical_replication_slot('p');


     1: slot_name    (typeid = 19, len = 64, typmod = -1, byval = f)
     2: lsn    (typeid = 3220, len = 8, typmod = -1, byval = t)
    
     1: slot_name = "p"    (typeid = 19, len = 64, typmod = -1, byval = f)
    
backend> select pg_drop_replication_slot('p');
2018-03-06 13:20:24.390 GMT [14869] LOG:  statement: select 
pg_drop_replication_slot('p');


     1: pg_drop_replication_slot    (typeid = 2278, len = 4, typmod = 
-1, byval = t)

    
2018-03-06 13:20:24.391 GMT [14869] ERROR:  epoll_ctl() failed: Bad file 
descriptor
2018-03-06 13:20:24.391 GMT [14869] STATEMENT:  select 
pg_drop_replication_slot('p');


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company




Getting an error if we provide --enable-tap-tests switch on SLES 12

2023-01-04 Thread tushar
Hi,

We found that if we provide the *--enable-tap-tests* switch at the time of
configuring the PG sources, we get the below error
"
checking for Perl modules required for TAP tests... Can't locate IPC/Run.pm
in @INC (you may need to install the IPC::Run module) (@INC contains:
/usr/lib/perl5/site_perl/5.18.2/x86_64-linux-thread-multi
/usr/lib/perl5/site_perl/5.18.2
/usr/lib/perl5/vendor_perl/5.18.2/x86_64-linux-thread-multi
/usr/lib/perl5/vendor_perl/5.18.2
/usr/lib/perl5/5.18.2/x86_64-linux-thread-multi /usr/lib/perl5/5.18.2
/usr/lib/perl5/site_perl .) at ./config/check_modules.pl line 11.

BEGIN failed--compilation aborted at ./config/check_modules.pl line 11.

configure: error: Additional Perl modules are required to run TAP tests
"

It looks like this is happening because the perl-IPC-Run package is not
available on SLES 12, whereas perl-IPC-Run3 is available.

Srinu (my teammate) found that IPC::Run is hard-coded in 
config/check_modules.pl, and if we replace Run with Run3 it works (patch 
attached, created by Srinu).

Do we have any better option that works without this workaround?

regards,


perl.patch
Description: Binary data


Re: Getting an error if we provide --enable-tap-tests switch on SLES 12

2023-01-06 Thread tushar

On 1/5/23 2:40 AM, Andres Freund wrote:


Have you actually tested running the tests with the patch applied?

Yes, but I am getting errors like:
t/006_edb_current_audit_logfile.pl .. Can't locate IPC/Run.pm in @INC 
(you may need to install the IPC::Run module) (@INC contains: 
/home/runner/edbas/src/bin/pg_ctl/../../../src/test/perl 
/home/runner/edbas/src/bin/pg_ctl



Do we have any better option to work without this workaround?

You could install the module via cpan :/.



Yes, will try to install.
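
For example, one hypothetical way to install the missing module on the build 
host (assumes network access and a configured CPAN client):

cpan IPC::Run
# or, if cpanminus is available:
cpanm IPC::Run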

Thanks Andres.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: CREATEROLE users vs. role properties

2023-01-19 Thread tushar

On 1/19/23 4:47 AM, Nathan Bossart wrote:

This seems like a clear improvement to me.  However, as the attribute
system becomes more sophisticated, I think we ought to improve the error
messages in user.c.  IMHO messages like "permission denied" could be
greatly improved with some added context.
I observed this behavior, where a role has createrole but is still unable 
to pass it on to another user.


postgres=# create role abc1 login createrole;
CREATE ROLE
postgres=# create user test1;
CREATE ROLE
postgres=# \c - abc1
You are now connected to database "postgres" as user "abc1".
postgres=> alter role test1 with createrole ;
ERROR:  permission denied
postgres=>

This was working previously without the patch.

Is this expected behavior?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: CREATEROLE users vs. role properties

2023-01-19 Thread tushar

On 1/19/23 3:05 PM, tushar wrote:
which was working previously without patch. 

My bad, I was testing against PG v15, but this issue is not
reproducible on master (without the patch).

As you mentioned, "This implements the standard idea that you can't give 
permissions you don't have (but you can give the ones you do have)", but here 
the role has the createrole privilege and yet cannot pass it on to another 
user. Is this expected?


postgres=# create role fff with createrole;
CREATE ROLE
postgres=# create role xxx;
CREATE ROLE
postgres=# set role fff;
SET
postgres=> alter role xxx with createrole;
ERROR:  permission denied
postgres=>

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: almost-super-user problems that we haven't fixed yet

2023-01-19 Thread tushar

On 1/19/23 2:44 AM, Nathan Bossart wrote:

On Wed, Jan 18, 2023 at 02:51:38PM -0500, Robert Haas wrote:

Should (nfree < SuperuserReservedBackends) be using <=, or am I confused?

I believe < is correct.  At this point, the new backend will have already
claimed a proc struct, so if the number of remaining free slots equals the
number of reserved slots, it is okay.


What's the deal with removing "and no new replication connections will
be accepted" from the documentation? Is the existing documentation
just wrong? If so, should we fix that first? And maybe delete
"non-replication" from the error message that says "remaining
connection slots are reserved for non-replication superuser
connections"? It seems like right now the comments say that
replication connections are a completely separate pool of connections,
but the documentation and the error message make it sound otherwise.
If that's true, then one of them is wrong, and I think it's the
docs/error message. Or am I just misreading it?

I think you are right.  This seems to have been missed in ea92368.  I moved
this part to a new patch that should probably be back-patched to v12.

On that note, I wonder if it's worth changing the "sorry, too many clients
already" message to make it clear that max_connections has been reached.
IME some users are confused by this error, and I think it would be less
confusing if it pointed to the parameter that governs the number of
connection slots.  I'll create a new thread for this.

There is one typo in the doc changes: it mentions 
"pg_use_reserved_backends", but I think it is supposed to be 
"pg_use_reserved_connections", under Table 22.1, Predefined Roles.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: almost-super-user problems that we haven't fixed yet

2023-01-19 Thread tushar
On Thu, Jan 19, 2023 at 6:28 PM tushar 
wrote:

> On 1/19/23 2:44 AM, Nathan Bossart wrote:
> > On Wed, Jan 18, 2023 at 02:51:38PM -0500, Robert Haas wrote:
> >> Should (nfree < SuperuserReservedBackends) be using <=, or am I
> confused?
> > I believe < is correct.  At this point, the new backend will have already
> > claimed a proc struct, so if the number of remaining free slots equals
> the
> > number of reserved slots, it is okay.
> >
> >> What's the deal with removing "and no new replication connections will
> >> be accepted" from the documentation? Is the existing documentation
> >> just wrong? If so, should we fix that first? And maybe delete
> >> "non-replication" from the error message that says "remaining
> >> connection slots are reserved for non-replication superuser
> >> connections"? It seems like right now the comments say that
> >> replication connections are a completely separate pool of connections,
> >> but the documentation and the error message make it sound otherwise.
> >> If that's true, then one of them is wrong, and I think it's the
> >> docs/error message. Or am I just misreading it?
> > I think you are right.  This seems to have been missed in ea92368.  I
> moved
> > this part to a new patch that should probably be back-patched to v12.
> >
> > On that note, I wonder if it's worth changing the "sorry, too many
> clients
> > already" message to make it clear that max_connections has been reached.
> > IME some users are confused by this error, and I think it would be less
> > confusing if it pointed to the parameter that governs the number of
> > connection slots.  I'll create a new thread for this.
> >
> There is  one typo , for the doc changes, it is  mentioned
> "pg_use_reserved_backends" but i think it supposed to be
> "pg_use_reserved_connections"
> under Table 22.1. Predefined Roles.
>
> and in the error message too

[edb@centos7tushar bin]$ ./psql postgres -U r2

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed:
FATAL:  remaining connection slots are reserved for roles with privileges
of pg_use_reserved_backends
[edb@centos7tushar bin]$

regards,


Re: backup manifests

2020-03-12 Thread tushar

On 3/9/20 10:46 PM, Robert Haas wrote:

Seems like expected behavior to me. We could consider providing a more
descriptive error message, but there's no way for it to work.


Right, the error message needs to be more user-friendly.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-03-13 Thread tushar

On 3/9/20 10:01 PM, 曾文旌(义从) wrote:

Fixed in global_temporary_table_v18-pg13.patch.


Thanks Wenjing.

I am getting the error "ERROR:  could not open file 
"base/13589/t3_16440": No such file or directory" if 
max_active_global_temporary_table is set to 0.


Please refer to this scenario:

postgres=# create global temp table  tab1 (n int ) with ( 
on_commit_delete_rows='true');

CREATE TABLE
postgres=# insert into tab1 values (1);
INSERT 0 1
postgres=# select * from tab1;
 n
---
(0 rows)

postgres=# alter system set max_active_global_temporary_table=0;
ALTER SYSTEM
postgres=# \q
[tushar@localhost bin]$ ./pg_ctl -D data/ restart -c -l logs123

waiting for server to start.... done
server started

[tushar@localhost bin]$ ./psql postgres
psql (13devel)
Type "help" for help.

postgres=# insert into tab1 values (1);
ERROR:  could not open file "base/13589/t3_16440": No such file or directory
postgres=#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: backup manifests

2020-03-13 Thread tushar

On 3/12/20 8:16 PM, tushar wrote:

Seems like expected behavior to me. We could consider providing a more
descriptive error message, but there's no way for it to work.


Right, the error message needs to be more user-friendly. 


One scenario which I feel should error out even if the -s option is 
specified:


Create a base backup directory (./pg_basebackup data1).
Connect as the root user and restrict the permissions on the pg_hba.conf file 
in the backup (chmod 004 pg_hba.conf).


run pg_validatebackup -

[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data1
pg_validatebackup: error: could not open file "pg_hba.conf": Permission 
denied


run pg_validatebackup  with switch -s

[centos@tushar-ldap-docker bin]$ ./pg_validatebackup data1 -s
pg_validatebackup: backup successfully verified

Here the file is not accessible, so I think it should throw an error (the 
same one as above) instead of blindly skipping it.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [Proposal] Global temporary tables

2020-03-16 Thread tushar

Hi Wenjing,

I have created a global temp table in session X, but I am not able to drop 
it from session Y.


X session - ( connect to psql terminal )
postgres=# create global temp table foo(n int);
CREATE TABLE
postgres=# select * from foo;
 n
---
(0 rows)


Y session - ( connect to psql terminal )
postgres=# drop table foo;
ERROR:  can not drop relation foo when other backend attached this 
global temp table


The table has been created, so I think a user should be able to drop it 
from another session as well, without exiting session X.


regards,

On 3/16/20 1:35 PM, 曾文旌(义从) wrote:



2020年3月16日 下午2:23,Prabhat Sahu <mailto:prabhat.s...@enterprisedb.com>> 写道:


Hi Wenjing,
Please check the below scenario, where the Foreign table on GTT not 
showing records.


postgres=# create extension postgres_fdw;
CREATE EXTENSION
postgres=# do $d$
    begin
        execute $$create server fdw foreign data wrapper postgres_fdw 
options (host 'localhost',dbname 'postgres',port 
'$$||current_setting('port')||$$')$$;

    end;
$d$;
DO
postgres=# create user mapping for public server fdw;
CREATE USER MAPPING

postgres=# create table lt1 (c1 integer, c2 varchar(50));
CREATE TABLE
postgres=# insert into lt1 values (1,'c21');
INSERT 0 1
postgres=# create foreign table ft1 (c1 integer, c2 varchar(50)) 
server fdw options (table_name 'lt1');

CREATE FOREIGN TABLE
postgres=# select * from ft1;
 c1 | c2
+-
  1 | c21
(1 row)

postgres=# create global temporary table gtt1 (c1 integer, c2 
varchar(50));

CREATE TABLE
postgres=# insert into gtt1 values (1,'gtt_c21');
INSERT 0 1
postgres=# create foreign table f_gtt1 (c1 integer, c2 varchar(50)) 
server fdw options (table_name 'gtt1');

CREATE FOREIGN TABLE

postgres=# select * from gtt1;
 c1 |   c2
+-
  1 | gtt_c21
(1 row)

postgres=# select * from f_gtt1;
 c1 | c2
+
(0 rows)

--


I understand that postgres_fdw works similarly to dblink.
postgres_fdw access to the table requires a new connection.
The data in the GTT table is empty in the newly established connection.
Because GTT shares structure but not data between connections.

Try local temp table:
create temporary table ltt1 (c1 integer, c2 varchar(50));

insert into ltt1 values (1,'gtt_c21');

create foreign table f_ltt1 (c1 integer, c2 varchar(50)) server fdw 
options (table_name 'ltt1');


select * from ltt1;
 c1 |   c2
+-
  1 | gtt_c21
(1 row)

select * from l_gtt1;
ERROR:  relation "l_gtt1" does not exist
LINE 1: select * from l_gtt1;


Wenjing



With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com





--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: backup manifests

2020-03-16 Thread tushar

On 3/14/20 2:04 AM, Robert Haas wrote:

OK. Done in the attached version


Thanks. Verified.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-03-25 Thread tushar

On 3/17/20 9:15 AM, 曾文旌(义从) wrote:

reindex GTT is already supported

Please check global_temporary_table_v20-pg13.patch


Please refer this scenario -


postgres=# create global temp table co(n int) ;
CREATE TABLE

postgres=# create index fff on co(n);
CREATE INDEX

Case 1-
postgres=# reindex table  co;
REINDEX

Case -2
postgres=# reindex database postgres ;
WARNING:  global temp table "public.co" skip reindexed
REINDEX
postgres=#

Case 2 should work the same way as Case 1.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-03-25 Thread tushar

On 3/17/20 9:15 AM, 曾文旌(义从) wrote:

Please check global_temporary_table_v20-pg13.patch


There is a typo in the error message

postgres=# create global temp table test(a int ) 
with(on_commit_delete_rows=true) on commit delete rows;
ERROR:  can not defeine global temp table with on commit and with clause 
at same time

postgres=#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-03-27 Thread tushar

On 3/27/20 10:55 AM, 曾文旌 wrote:

Hi Wenjing,
This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope 
you have prepared the patch on top of some previous commit.

Could you please rebase the patch which we can apply on HEAD ?

Yes, It looks like the built-in functions are in conflict with new code.



This error message looks wrong to me:

postgres=# reindex table concurrently t ;
ERROR:  cannot create indexes on global temporary tables using 
concurrent mode

postgres=#

A better message would be:

ERROR:  cannot reindex global temporary tables concurrently

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [Proposal] Global temporary tables

2020-03-27 Thread tushar

On 3/27/20 10:55 AM, 曾文旌 wrote:

Hi Wenjing,
This patch(gtt_v21_pg13.patch) is not applicable on PG HEAD, I hope 
you have prepared the patch on top of some previous commit.

Could you please rebase the patch which we can apply on HEAD ?

Yes, It looks like the built-in functions are in conflict with new code.


In the below scenario, pg_dump is failing:

test=# CREATE database foo;
CREATE DATABASE
test=# \c foo
You are now connected to database "foo" as user "tushar".
foo=# CREATE GLOBAL TEMPORARY TABLE bar(c1 bigint, c2 bigserial) on 
commit PRESERVE rows;

CREATE TABLE
foo=# \q

[tushar@localhost bin]$ ./pg_dump -Fp foo > /tmp/rf2
pg_dump: error: query to get data of sequence "bar_c2_seq" returned 0 
rows (expected 1)

[tushar@localhost bin]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: refactoring basebackup.c

2022-01-19 Thread tushar
oint starting: force wait
2022-01-19 21:27:51.687 IST [30229] LOG:  checkpoint complete: wrote 0 
buffers (0.0%); 0 WAL file(s) added, 1 removed, 0 recycled; write=0.001 
s, sync=0.001 s, total=0.046 s; sync files=0, longest=0.000 s, 
average=0.000 s; distance=16383 kB, estimate=16383 kB

pg_basebackup: checkpoint completed

NOTICE:  WAL archiving is not enabled; you must ensure that all required 
WAL segments are copied through other means to complete the backup

pg_basebackup: base backup completed
[edb@centos7tushar bin]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2022-01-25 Thread tushar

On 1/22/22 12:03 AM, Robert Haas wrote:

I committed the base backup target patch yesterday, and today I
updated the remaining code in light of Michael Paquier's commit
5c649fe153367cdab278738ee4aebbfd158e0546. Here is the resulting patch.

Thanks Robert,  I tested against the latest PG Head and found a few issues -

A) Getting a syntax error if -z is used along with -t

[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/data902 -z -Xfetch
pg_basebackup: error: could not initiate base backup: ERROR:  syntax error

OR

[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/t2 
--compress=server-gzip:9 -Xfetch -v -z

pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: error: could not initiate base backup: ERROR:  syntax error

B) No information about "client-gzip" or "server-gzip" is shown under the "--compress" option in the ./pg_basebackup --help output.


C) The -R option is silently ignored

[edb@centos7tushar bin]$  ./pg_basebackup  -Z 4  -v  -t server:/tmp/pp 
-Xfetch -R

pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/3028 on timeline 1
pg_basebackup: write-ahead log end point: 0/3100
pg_basebackup: base backup completed
[edb@centos7tushar bin]$

Go to the /tmp/pp folder and extract the archive: there is no "standby.signal" file, and if we start a cluster against this data directory, it will not be in standby mode.


If this is not supported, then I think we should throw an error.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2022-01-27 Thread tushar

On 1/27/22 2:15 AM, Robert Haas wrote:

The attached patch should fix this, too.

Thanks, the issues seem to be fixed now.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2022-01-27 Thread tushar

On 1/27/22 10:17 PM, Robert Haas wrote:

Cool. I committed that patch.
Thanks. Please refer to this scenario, where the compression level is set to 0 for server-gzip but the output is still compressed:


[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/11 --gzip 
--compress=0 -Xnone

NOTICE:  all required WAL segments have been archived
[edb@centos7tushar bin]$ ls /tmp/11
16384.tar  backup_manifest  base.tar


[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/10 --gzip 
--compress=server-gzip:0 -Xnone

NOTICE:  all required WAL segments have been archived
[edb@centos7tushar bin]$ ls /tmp/10
16384.tar.gz  backup_manifest  base.tar.gz

0 means no compression, so the output should not be compressed if we specify server-gzip:0, and both of the above scenarios should match, shouldn't they?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2022-01-28 Thread tushar

On 1/27/22 11:12 PM, Robert Haas wrote:

Well what's weird here is that you are using both --gzip and also
--compress. Those both control the same behavior, so it's a surprising
idea to specify both. But I guess if someone does, we should make the
second one fully override the first one. Here's a patch to try to do
that.

Right, the current behavior was:

[edb@centos7tushar bin]$ ./pg_basebackup  -t server:/tmp/y101 --gzip -Z 
none  -Xnone

pg_basebackup: error: cannot use compression level with method none
Try "pg_basebackup --help" for more information.

and even this did not match the PG v14 behavior, e.g.
 ./pg_basebackup -Ft -z -Z none -D /tmp/test1  (working in PG v14 but
throwing the above error on PG HEAD),


so we were breaking backward compatibility somewhere.

Now with your patch this seems to be working fine:

[edb@centos7tushar bin]$ ./pg_basebackup  -t server:/tmp/y101 --gzip -Z none  -Xnone
NOTICE:  WAL archiving is not enabled; you must ensure that all required 
WAL segments are copied through other means to complete the backup

[edb@centos7tushar bin]$ ls /tmp/y101
backup_manifest  base.tar

OR

[edb@centos7tushar bin]$  ./pg_basebackup  -t server:/tmp/y0p -Z none  -Xfetch -z

[edb@centos7tushar bin]$ ls /tmp/y0p
backup_manifest  base.tar.gz

But what about server-gzip:0? Should it still produce compressed output?

[edb@centos7tushar bin]$  ./pg_basebackup  -t server:/tmp/1 
--compress=server-gzip:0  -Xfetch

[edb@centos7tushar bin]$ ls /tmp/1
backup_manifest  base.tar.gz

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: refactoring basebackup.c

2022-02-15 Thread tushar

On 2/15/22 6:48 PM, Jeevan Ladhe wrote:

Please find the attached updated version of patch for ZSTD server side
Thanks, Jeevan, I again tested with the attached patch, and as mentioned 
the crash is fixed now.


Also, I tested different compression levels with gzip vs. zstd against a data directory of 29GB and found these results:



./pg_basebackup  -t server:/tmp/ 
--compress=server-zstd:  -Xnone -n -N --no-estimate-size -v


--compress=server-zstd:1 =  compress directory size is  1.3GB
--compress=server-zstd:4 = compress  directory size is  1.3GB
--compress=server-zstd:7 = compress  directory size is  1.2GB
--compress=server-zstd:12 = compress directory size is 1.2GB


===
./pg_basebackup  -t server:/tmp/ 
--compress=server-gzip:  -Xnone -n -N --no-estimate-size -v


--compress=server-gzip:1 =  compress directory size is  1.8GB
--compress=server-gzip:4 = compress  directory size is  1.6GB
--compress=server-gzip:9 = compress  directory size is  1.6GB
===

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: almost-super-user problems that we haven't fixed yet

2023-01-19 Thread tushar
On Thu, Jan 19, 2023 at 6:50 PM tushar 
wrote:

> and in the error message too
>
> [edb@centos7tushar bin]$ ./psql postgres -U r2
>
> psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed:
> FATAL:  remaining connection slots are reserved for roles with privileges
> of pg_use_reserved_backends
> [edb@centos7tushar bin]$
>


I think there is also a need to improve the error message when non-superusers are not able to connect due to slot unavailability.
--Connect to psql terminal, create a user
create user t1;

--set these GUC parameters in postgresql.conf and restart the server

max_connections = 3 # (change requires restart)

superuser_reserved_connections = 1  # (change requires restart)

reserved_connections = 1

In one psql terminal, connect as the superuser:  ./psql postgres
In another psql terminal, try to connect as user t1:  ./psql postgres -U t1
The error message is:

psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed:
FATAL:  remaining connection slots are reserved for roles with privileges
of pg_use_reserved_backends



That is misleading, because the superuser is still able to connect.

Probably in this case the message should be something like:

"remaining connection slots are reserved for roles with privileges of
pg_use_reserved_connections and for superusers" or something better.
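For what it's worth, here is my rough reading of the slot arithmetic under the settings above (my own back-of-the-envelope accounting, not taken from the patch):

    max_connections                = 3
    superuser_reserved_connections = 1   (superusers only)
    reserved_connections           = 1   (pg_use_reserved_connections members and superusers)
    slots left for ordinary roles  = 3 - 1 - 1 = 1

With the superuser session from the previous step already holding that one ordinary slot, t1 is refused while a superuser can still get in through the reserved slots, which is why the current wording reads misleading.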

regards,


Re: almost-super-user problems that we haven't fixed yet

2023-01-20 Thread tushar

On 1/19/23 6:28 PM, tushar wrote:


There is one typo in the doc changes: they mention "pg_use_reserved_backends", but I think it is supposed to be "pg_use_reserved_connections"

under Table 22.1. Predefined Roles.


Thanks, this is fixed now with the latest patches.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: CREATEROLE users vs. role properties

2023-01-23 Thread tushar
On Thu, Jan 19, 2023 at 8:34 PM Robert Haas  wrote:

> On Thu, Jan 19, 2023 at 6:15 AM tushar 
> wrote:
> > postgres=# create role fff with createrole;
> > CREATE ROLE
> > postgres=# create role xxx;
> > CREATE ROLE
> > postgres=# set role fff;
> > SET
> > postgres=> alter role xxx with createrole;
> > ERROR:  permission denied
> > postgres=>
>
> Here fff would need ADMIN OPTION on xxx to be able to make modifications
> to it.
>
> See
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb


Thanks, Robert, that was helpful.

Please refer to this scenario, where I am able to give the createrole privilege but not the replication privilege to a role:

postgres=# create role t1 createrole;
CREATE ROLE
postgres=# create role t2 replication;
CREATE ROLE
postgres=# create role t3;
CREATE ROLE
postgres=# grant t3 to t1,t2 with admin option;
GRANT ROLE
postgres=# set session authorization t1;
SET

postgres=> alter role t3 createrole;
ALTER ROLE
postgres=> set session authorization t2;
SET
postgres=> alter role t3 replication;
ERROR:  permission denied

This same behavior was observed in v14 as well, but why am I able to grant createrole and not replication?

regards,


Re: CREATEROLE users vs. role properties

2023-01-24 Thread tushar
On Mon, Jan 23, 2023 at 10:28 PM Robert Haas  wrote:

>
> In previous releases, you needed to have CREATEROLE in order to be
> able to perform user management functions. In master, you still need
> CREATEROLE, and you also need ADMIN OPTION on the role. In this
> scenario, only t1 meets those requirements with respect to t3, so only
> t1 can manage t3. t2 can SET ROLE to t3 and grant membership in t3,
> but it can't set role properties on t3 or change t3's password or
> things like that, because the ability to make user management changes
> is controlled by CREATEROLE.
>
ok.

>
> The patch is only intended to change behavior in the case where you
> possess both CREATEROLE and also ADMIN OPTION on the target role (but
> not SUPERUSER). In that scenario, it intends to change whether you can
> give or remove the CREATEDB, REPLICATION, and BYPASSRLS properties
> from a user.
>

Right. Neha and I have tested different scenarios using the createdb/replication/bypassrls and other privilege properties on roles. We also checked pg_dumpall/pg_basebackup, and everything looks fine.

regards,


pg_upgrade is failed for 'plpgsql_call_handler' handler

2021-06-03 Thread tushar

Hi,

In one of my testing scenarios, I found that pg_upgrade fails for the 'plpgsql_call_handler' handler.


Steps to reproduce (on any supported version of PG):

Perform initdb ( ./initdb -D  d1 ;  ./initdb -D d2)

Start the d1 cluster (./pg_ctl -D d1 start), connect to postgres (./psql postgres), and create this language:


postgres=# CREATE TRUSTED LANGUAGE plspl_sm HANDLER plpgsql_call_handler;
CREATE LANGUAGE

stop the server (./pg_ctl -D d1 stop)

Perform pg_upgrade ( ./pg_upgrade -d d1 -D d2 -b . -B .)

It will fail with these messages:

pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 825; 2612 16384 PROCEDURAL LANGUAGE plspl_sm edb
pg_restore: error: could not execute query: ERROR:  could not open 
extension control file 
"/home/edb/pg14/pg/edbpsql/share/postgresql/extension/plspl_sm.control": 
No such file or directory

Command was: CREATE OR REPLACE PROCEDURAL LANGUAGE "plspl_sm";

Is this expected?
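For additional context, the language created this way is recorded only in pg_language and has no backing extension, which a quick catalog check shows (a sketch using standard catalog columns, nothing specific to this scenario):

postgres=# select lanname, lanpltrusted, lanplcallfoid::regproc from pg_language where lanname = 'plspl_sm';
postgres=# select extname from pg_extension where extname like 'plspl%';  -- returns no rows here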

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2021-07-12 Thread tushar

On 7/8/21 9:26 PM, Robert Haas wrote:

Here at last is a new version.
Please refer to this scenario, where a backup target using --server-compression closes the server connection unexpectedly if we don't provide the --no-manifest option.

[tushar@localhost bin]$ ./pg_basebackup --server-compression=gzip4  -t 
server:/tmp/data_1  -Xnone
NOTICE:  WAL archiving is not enabled; you must ensure that all required 
WAL segments are copied through other means to complete the backup
pg_basebackup: error: could not read COPY data: server closed the 
connection unexpectedly

    This probably means the server terminated abnormally
    before or while processing the request.

If we try the same scenario with -Ft, it works:

[tushar@localhost bin]$ ./pg_basebackup --server-compression=gzip4  -Ft 
-D data_0 -Xnone
NOTICE:  WAL archiving is not enabled; you must ensure that all required 
WAL segments are copied through other means to complete the backup

[tushar@localhost bin]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[v15 beta] pg_upgrade failed if earlier executed with -c switch

2022-06-03 Thread tushar

Hi,

While performing pg_upgrade with the v15 Beta binaries/source, I got the below error:

could not create directory "d2/pg_upgrade_output.d": File exists
Failure, exiting


Steps to reproduce:
Using v15 Beta sources:
Initialize a cluster ( ./initdb -D d1)
Initialize another cluster ( ./initdb -D d2)
Run pg_upgrade with the -c option ( ./pg_upgrade -d d1 -D d2 -b . -B . -c -v)
Run pg_upgrade without the -c option ( ./pg_upgrade -d d1 -D d2 -b . -B .)
--
--
--
Error


This behavior was not there in earlier released versions, I guess. Is it the expected behavior from now on?

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: replacing role-level NOINHERIT with a grant-level option

2022-07-11 Thread tushar
On Sat, Jul 9, 2022 at 1:27 AM Robert Haas  wrote:

> On Tue, Jul 5, 2022 at 8:04 AM Robert Haas  wrote:
> > On Sun, Jul 3, 2022 at 1:17 PM Nathan Bossart 
> wrote:
> > > If by "bolder" you mean "mark [NO]INHERIT as
> deprecated-and-to-be-removed
> > > and begin emitting WARNINGs when it and WITH INHERIT DEFAULT are
> used," I
> > > think it's worth consideration.  I suspect it will be hard to sell
> removing
> > > [NO]INHERIT in v16 because it would introduce a compatibility break
> without
> > > giving users much time to migrate.  I could be wrong, though.
> >
> > It's a fair point. But, if our goal for v16 is to do something that
> > could lead to an eventual deprecation of [NO]INHERIT, I still think
> > removing WITH INHERIT DEFAULT from the patch set is probably a good
> > idea.
>
> So here is an updated patch with that change.
>
>
Thanks, Robert. I created a few objects with different privileges on v14.4, e.g.:


postgres=# \dp+ atest2
                                          Access privileges
 Schema |  Name  | Type  |                Access privileges               | Column privileges | Policies
--------+--------+-------+------------------------------------------------+-------------------+----------
 public | atest2 | table | regress_priv_user1=arwdDxt/regress_priv_user1 +|                   |
        |        |       | regress_priv_user2=r/regress_priv_user1       +|                   |
        |        |       | regress_priv_user3=w/regress_priv_user1       +|                   |
        |        |       | regress_priv_user4=a/regress_priv_user1       +|                   |
        |        |       | regress_priv_user5=D/regress_priv_user1        |                   |
(1 row)




and found that after pg_upgrade there is no change in the privileges on v16 (with the patch).


One scenario where the syntax generated by pg_dumpall is wrong:

postgres=# create user u1;
CREATE ROLE
postgres=# create group g1 with user u1;
CREATE ROLE
postgres=# grant g1 to u1 with admin option, inherit false;
GRANT ROLE
postgres=#

Perform pg_dumpall. This is the syntax that comes out:

"
-- Role memberships
--

GRANT g1 TO u1 WITH ADMIN OPTION WITH INHERIT FALSE GRANTED BY edb;
"

If we run this syntax in psql, there is an error:

postgres=# GRANT g1 TO u1 WITH ADMIN OPTION WITH INHERIT FALSE GRANTED BY edb;

ERROR:  syntax error at or near "WITH"
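For comparison, this is the form pg_dumpall presumably intends here, matching the comma-separated option list that worked interactively above (just a sketch of the expected output, not what the patched pg_dumpall currently emits):

GRANT g1 TO u1 WITH ADMIN OPTION, INHERIT FALSE GRANTED BY edb;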



regards,


Re: replacing role-level NOINHERIT with a grant-level option

2022-07-12 Thread tushar

On 7/11/22 11:01 PM, Robert Haas wrote:

Oops. Here is a rebased version of v3 which aims to fix this bug.

Thanks, Issue seems to be fixed with this patch.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: replacing role-level NOINHERIT with a grant-level option

2022-07-14 Thread tushar

On 7/11/22 11:01 PM, Robert Haas wrote:

Oops. Here is a rebased version of v3 which aims to fix this bug.

I found one issue where pg_upgrade is failing

On PG v14.4, create these objects:

create user u1 with superuser;
create user u3;
create group g2 with user  u1;

Now try to perform pg_upgrade using v16 (with the patch); it fails with these messages:


[edb@centos7tushar bin]$ tail -10 
dc2/pg_upgrade_output.d/20220714T195919.494/log/pg_upgrade_utility.log

--
--
CREATE ROLE "u3";
CREATE ROLE
ALTER ROLE "u3" WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN 
NOREPLICATION NOBYPASSRLS;

ALTER ROLE
GRANT "g2" TO "u1" WITH  GRANTED BY "edb";
psql:dc2/pg_upgrade_output.d/20220714T195919.494/dump/pg_upgrade_dump_globals.sql:47: 
ERROR:  syntax error at or near "BY"

LINE 1: GRANT "g2" TO "u1" WITH  GRANTED BY "edb";
         ^
This issue is not reproducible on PG v16 (without patch).

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: replacing role-level NOINHERIT with a grant-level option

2022-07-20 Thread tushar

On 7/19/22 12:56 AM, Robert Haas wrote:

Another good catch. Here is v5 with a fix for that problem.

Thanks, the issue is fixed now.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: replacing role-level NOINHERIT with a grant-level option

2022-07-28 Thread tushar

On 7/19/22 12:56 AM, Robert Haas wrote:

Another good catch. Here is v5 with a fix for that problem.
Here is one scenario in which I have NOT granted INHERIT FALSE explicitly, but the REVOKE command still changes the current state:

postgres=# create group foo;
CREATE ROLE
postgres=# create user bar in group foo;
CREATE ROLE
postgres=# revoke inherit option for foo from bar;
REVOKE ROLE

[edb@centos7tushar bin]$ ./pg_dumpall > /tmp/a11

[edb@centos7tushar bin]$ cat /tmp/a11 |grep 'inherit false' -i
GRANT foo TO bar WITH INHERIT FALSE GRANTED BY edb;

I think this REVOKE command should be ignored and the inherit option should remain 'TRUE', as it was before?
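A quick way to see what actually gets recorded for the membership (a sketch, assuming the grant-level inherit flag is exposed as an inherit_option column in pg_auth_members, as this patch series appears to do):

select roleid::regrole as granted_role, member::regrole as member, admin_option, inherit_option
  from pg_auth_members
 where roleid = 'foo'::regrole;
-- after the REVOKE above, inherit_option shows up as false for bar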

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: replacing role-level NOINHERIT with a grant-level option

2022-07-28 Thread tushar

On 7/28/22 8:03 PM, Robert Haas wrote:

No, it seems to me that's behaving as intended. REVOKE BLAH OPTION ...
is intended to be a way of switching an option off.
OK, thanks, Robert. I tested a couple more scenarios, like pg_upgrade/pg_dumpall and grant/revoke with admin option/inherit, and things look good to me.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: extensible options syntax for replication parser?

2021-09-22 Thread tushar

On 9/23/21 1:25 AM, Robert Haas wrote:

postgres=# create subscription test CONNECTION 'host=127.0.0.1 user=postgres' 
PUBLICATION test with (create_slot = true);
ERROR:  could not create replication slot "test": ERROR:  syntax error

Thanks. I have attempted to fix these problems in the attached version.


I checked, and it looks like the issue is still not fixed with the v7-* patches:

postgres=#  create subscription test CONNECTION 'host=127.0.0.1 user=edb 
dbname=postgres' PUBLICATION p with (create_slot = true);

ERROR:  could not create replication slot "test": ERROR:  syntax error

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: extensible options syntax for replication parser?

2021-09-24 Thread tushar

On 9/23/21 8:35 PM, Robert Haas wrote:

Thanks. Looks like that version had some stupid mistakes. Here's a new one.


Thanks, the reported issue seems to be fixed now for HEAD w/patch (publication) to HEAD w/patch (subscription), but I am still getting the same error if we try v12 (publication) to HEAD w/patch (subscription). I checked that there is no such issue for v12 (publication) to v14 RC1 (subscription).


postgres=#  create subscription sub123s CONNECTION 'host=127.0.0.1 
user=edb  port= dbname=postgres' PUBLICATION pp with (slot_name = 
from_v14);

ERROR:  could not create replication slot "from_v14": ERROR: syntax error
postgres=#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: extensible options syntax for replication parser?

2021-09-24 Thread tushar

On 9/24/21 10:36 PM, Robert Haas wrote:

I am not able to reproduce this failure. I suspect you made a mistake
in testing, because my test case before sending the patch was
basically the same as yours, except that I was testing with v13. But I
tried again with v12 and it seems fine:


Right, on a clean setup I am also not able to reproduce this issue. Thanks for checking at your end.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: extensible options syntax for replication parser?

2021-09-24 Thread tushar

On 9/24/21 10:36 PM, Robert Haas wrote:

Here's v9, fixing the issue reported by Fujii Masao.


Please refer to this scenario, where the publication is on v14 RC1 and the subscription is on HEAD (w/patch):


--create a subscription with parameter two_phase=1 on HEAD

postgres=# CREATE SUBSCRIPTION r1015 CONNECTION 'dbname=postgres 
host=localhost port=5454' PUBLICATION p WITH (two_phase=1);

NOTICE:  created replication slot "r1015" on publisher
CREATE SUBSCRIPTION
postgres=#

--check on 14RC1

postgres=# select two_phase from pg_replication_slots where 
slot_name='r105';

 two_phase
---
 f
(1 row)

So are we silently ignoring this parameter because it is not supported on v14 RC/HEAD? And if yes, then why don't we just throw a message like

ERROR:  unrecognized subscription parameter: "two_phase"

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: extensible options syntax for replication parser?

2021-09-24 Thread tushar

On 9/24/21 11:57 PM, tushar wrote:
postgres=# select two_phase from pg_replication_slots where 
slot_name='r105'; 


Correction -Please read  'r105' to 'r1015'

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





two_phase commit parameter used in subscription for a publication which is on < 15.

2021-09-27 Thread tushar
Hi,

Please refer to this scenario
CASE 1- HEAD (publication)-> HEAD (Subscription)
CASE 2 - PG 14 (Publication) ->HEAD (Subscription)

Test-case -
Publication  = create table t(n int); create publication p for table t;
Subscription = create table t(n int);
create subscription  s connection 'dbname=postgres host=localhost '
PUBLICATION p WITH (two_phase=1);

Result-
CASE 1-
postgres=# select two_phase from pg_replication_slots where slot_name='s';
 two_phase
---
 t
(1 row)


CASE 2 -
postgres=# select two_phase from pg_replication_slots where slot_name='s';
 two_phase
---
 f
(1 row)

So are we silently ignoring this parameter because it is not supported on v14? And if yes, then why don't we just throw a message like
ERROR:  unrecognized subscription parameter: "two_phase"
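On the subscriber side, the requested state can also be checked in the catalog (a sketch, assuming the subtwophasestate column in pg_subscription, where 'd' = disabled, 'p' = pending and 'e' = enabled):

select subname, subtwophasestate from pg_subscription where subname = 's';
-- compare this with the two_phase flag reported by pg_replication_slots on the publisher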

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company


Re: extensible options syntax for replication parser?

2021-09-27 Thread tushar

On 9/25/21 12:08 AM, Robert Haas wrote:

two_phase is new in v15, something you could also find out by checking
the documentation. Now if the patch changes the way two_phase
interacts with older versions, that's a bug in the patch and we should
fix it. But if the same issue exists without the patch then I'm not
sure why you are raising it here rather than on the thread where that
feature was developed.


Right, the issue is reproducible on HEAD as well. I should have checked that; sorry about it.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: extensible options syntax for replication parser?

2021-09-27 Thread tushar

On 9/27/21 9:29 AM, Ajin Cherian wrote:

And in case you do see a problem, I request you create a seperate
thread for this. I didn't want to derail this patch.


It would be great if we threw an error rather than silently ignoring this parameter. I opened a separate email thread for this:


https://www.postgresql.org/message-id/CAC6VRoY3SAFeO7kZ0EOVC6mX%3D1ZyTocaecTDTh209W20KCC_aQ%40mail.gmail.com

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2021-07-19 Thread tushar

On 7/8/21 9:26 PM, Robert Haas wrote:

Here at last is a new version.
If I try to perform pg_basebackup using the "-t server" option against localhost vs. a remote machine, I can see a difference in backup size.

The data directory size is:

[edb@centos7tushar bin]$ du -sch data/
578M    data/
578M    total

-h=localhost

[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/all_data2 -h localhost   -Xnone --no-manifest -P -v

pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
NOTICE:  all required WAL segments have been archived
329595/329595 kB (100%), 1/1 tablespace
pg_basebackup: base backup completed

[edb@centos7tushar bin]$ du -sch /tmp/all_data2
322M    /tmp/all_data2
322M    total
[edb@centos7tushar bin]$

-h=remote

[edb@centos7tushar bin]$ ./pg_basebackup -t server:/tmp/all_data2 -h <remote-host> -Xnone --no-manifest -P -v

pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
NOTICE:  all required WAL segments have been archived
170437/170437 kB (100%), 1/1 tablespace
pg_basebackup: base backup completed

[edb@0 bin]$ du -sch /tmp/all_data2
167M    /tmp/all_data2
167M    total
[edb@0 bin]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: refactoring basebackup.c

2021-07-19 Thread tushar

On 7/16/21 12:43 PM, Dilip Kumar wrote:

I think the problem is that bbsink_gzip_end_archive() is not
forwarding the end request to the next bbsink.  The attached patch so
fix it.


Thanks, Dilip. The reported issue seems to be fixed now with your patch:

[edb@centos7tushar bin]$ ./pg_basebackup --server-compression=gzip4  -t 
server:/tmp/data_2 -v  -Xnone -R

pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
NOTICE:  all required WAL segments have been archived
pg_basebackup: base backup completed
[edb@centos7tushar bin]$

OR

[edb@centos7tushar bin]$ ./pg_basebackup   -t server:/tmp/pv1 -Xnone   
--server-compression=gzip4 -r 1024  -P

NOTICE:  all required WAL segments have been archived
23133/23133 kB (100%), 1/1 tablespace
[edb@centos7tushar bin]$

Please refer to this scenario, where the -R option works with '-t server' but not with -Ft.


--not working

[edb@centos7tushar bin]$ ./pg_basebackup --server-compression=gzip4  
-Ft  -D ccv   -Xnone  -R --no-manifest

pg_basebackup: error: unable to parse archive: base.tar.gz
pg_basebackup: only tar archives can be parsed
pg_basebackup: the -R option requires pg_basebackup to parse the archive
pg_basebackup: removing data directory "ccv"

--working

[edb@centos7tushar bin]$ ./pg_basebackup --server-compression=gzip4 -t   
server:/tmp/ccv    -Xnone  -R --no-manifest

NOTICE:  all required WAL segments have been archived
[edb@centos7tushar bin]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: refactoring basebackup.c

2021-07-22 Thread tushar

On 7/19/21 8:29 PM, Dilip Kumar wrote:

I am not sure why this is working, from the code I could not find if
the backup target is server then are we doing anything with the -R
option or we are just silently ignoring it


OK, in another scenario I can see "-t server" working with the "--server-compression" option but not with -z or -Z:


"-t  server" with option "-z"  / or (-Z )

[tushar@localhost bin]$ ./pg_basebackup -t server:/tmp/dataN -Xnone  -z  
--no-manifest -p 9033

pg_basebackup: error: only tar mode backups can be compressed
Try "pg_basebackup --help" for more information.

tushar@localhost bin]$ ./pg_basebackup -t server:/tmp/dataNa -Z 1    
-Xnone  --server-compression=gzip4  --no-manifest -p 9033

pg_basebackup: error: only tar mode backups can be compressed
Try "pg_basebackup --help" for more information.

"-t server" with "server-compression"  (working)

[tushar@localhost bin]$ ./pg_basebackup -t server:/tmp/dataN -Xnone  
--server-compression=gzip4  --no-manifest -p 9033
NOTICE:  WAL archiving is not enabled; you must ensure that all required 
WAL segments are copied through other means to complete the backup

[tushar@localhost bin]$

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: replacing role-level NOINHERIT with a grant-level option

2022-08-24 Thread tushar

On 8/24/22 12:28 AM, Robert Haas wrote:

This patch needed to be rebased pretty extensively after commit
ce6b672e4455820a0348214be0da1a024c3f619f. Here is a new version.

Thanks, Robert, I have retested this patch with my previous scenarios
and things look good to me.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Patch] ALTER SYSTEM READ ONLY

2020-06-17 Thread tushar

On 6/16/20 7:25 PM, amul sul wrote:
Attached patch proposes the $Subject feature, which forces the system into read-only mode, where inserting write-ahead log will be prohibited until ALTER SYSTEM READ WRITE is executed.


Thanks Amul.

1) ALTER SYSTEM

postgres=# alter system read only;
ALTER SYSTEM
postgres=# alter  system reset all;
ALTER SYSTEM
postgres=# create table t1(n int);
ERROR:  cannot execute CREATE TABLE in a read-only transaction

Initially I thought that after firing 'ALTER SYSTEM RESET ALL' it would be back to normal.


Can't we have a syntax like "ALTER SYSTEM SET read_only = 'true';",

so that the ALTER SYSTEM command syntax stays the same for all cases?

postgres=# \h alter system
Command: ALTER SYSTEM
Description: change a server configuration parameter
Syntax:
ALTER SYSTEM SET configuration_parameter { TO | = } { value | 'value' | 
DEFAULT }


ALTER SYSTEM RESET configuration_parameter
ALTER SYSTEM RESET ALL

How are we going to justify this in the help output for ALTER SYSTEM?
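To make the comparison concrete, this is roughly how a GUC-style form could sit next to the existing syntax (purely a hypothetical sketch on my side, not something the patch implements):

ALTER SYSTEM SET read_only = 'true';    -- instead of ALTER SYSTEM READ ONLY
ALTER SYSTEM SET read_only = 'false';   -- instead of ALTER SYSTEM READ WRITE
ALTER SYSTEM RESET read_only;           -- back to the default, i.e. read write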

2) When I connected to postgres in single-user mode, I was not able to set the system to read only:


[edb@tushar-ldap-docker bin]$ ./postgres --single -D data postgres


PostgreSQL stand-alone backend 14devel
backend> alter system read only;
ERROR:  checkpointer is not running

backend>

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Patch] ALTER SYSTEM READ ONLY

2020-06-24 Thread tushar

On 6/22/20 11:59 AM, Amul Sul wrote:

2. Now skipping the startup checkpoint if the system is read-only mode, as
discussed [2].


I am not able to run pg_checksums after shutting down my server in read-only mode.


Steps -

1.initdb (./initdb -k -D data)
2.start the server(./pg_ctl -D data start)
3.connect to psql (./psql postgres)
4.Fire query (alter system read only;)
5.shutdown the server(./pg_ctl -D data stop)
6.pg_checksums

[edb@tushar-ldap-docker bin]$ ./pg_checksums -D data
pg_checksums: error: cluster must be shut down
[edb@tushar-ldap-docker bin]$

Result (when the server was not in read-only mode):

[edb@tushar-ldap-docker bin]$ ./pg_checksums -D data
Checksum operation completed
Files scanned:  916
Blocks scanned: 2976
Bad checksums:  0
Data checksum version: 1

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-07 Thread tushar

On 4/7/20 2:27 PM, 曾文旌 wrote:
Vacuum full GTT, cluster GTT is already 
supported in global_temporary_table_v24-pg13.patch.


Here, it is skipping the GTT:

postgres=# \c foo
You are now connected to database "foo" as user "tushar".
foo=# create global temporary table  g123( c1 int) ;
CREATE TABLE
foo=# \q
[tushar@localhost bin]$ ./vacuumdb --full  foo
vacuumdb: vacuuming database "foo"
WARNING:  skipping vacuum global temp table "g123" because storage is 
not initialized for current session


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-08 Thread tushar

On 4/7/20 2:27 PM, 曾文旌 wrote:
Vacuum full GTT, cluster GTT is already 
supported in global_temporary_table_v24-pg13.patch.

Please refer to the below scenario, where pg_upgrade is failing:
1)Server is up and running (./pg_ctl -D data status)
2)Stop the server ( ./pg_ctl -D data stop)
3) Connect to the server using single-user mode ( ./postgres --single -D data postgres) and create a global temp table

[tushar@localhost bin]$ ./postgres --single -D data1233 postgres

PostgreSQL stand-alone backend 13devel
backend> create global temp table t(n int);

--Press Ctl+D to exit

4) Perform initdb ( ./initdb -D data123)
5) Run pg_upgrade
[tushar@localhost bin]$ ./pg_upgrade -d data -D data123 -b . -B .
--
--
--
Restoring database schemas in the new cluster
  postgres
*failure*
Consult the last few lines of "pg_upgrade_dump_13592.log" for
the probable cause of the failure.
Failure, exiting

Log file content:

[tushar@localhost bin]$ tail -20   pg_upgrade_dump_13592.log
pg_restore: error: could not execute query: ERROR:  pg_type array OID 
value not set when in binary upgrade mode

Command was:
-- For binary upgrade, must preserve pg_type oid
SELECT 
pg_catalog.binary_upgrade_set_next_pg_type_oid('13594'::pg_catalog.oid);



-- For binary upgrade, must preserve pg_class oids
SELECT 
pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('13593'::pg_catalog.oid);


CREATE GLOBAL TEMPORARY TABLE "public"."t" (
    "n" integer
)
WITH ("on_commit_delete_rows"='false');

-- For binary upgrade, set heap's relfrozenxid and relminmxid
UPDATE pg_catalog.pg_class
SET relfrozenxid = '0', relminmxid = '0'
WHERE oid = '"public"."t"'::pg_catalog.regclass;

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Vacuum o/p with (full 1, parallel 0) option throwing an error

2020-04-08 Thread tushar

Hi,

I just came across this scenario where VACUUM with the (full 1, parallel 0) option is not working.


--working

postgres=# vacuum (parallel 1, full 0 ) foo;
VACUUM
postgres=#

--Not working

postgres=# vacuum (full 1, parallel 0 ) foo;
ERROR:  cannot specify both FULL and PARALLEL options

I think it should work.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-09 Thread tushar

On 4/7/20 2:27 PM, 曾文旌 wrote:
Vacuum full GTT, cluster GTT is already 
supported in global_temporary_table_v24-pg13.patch.


Hi Wenjing,

Please refer to this scenario, where the reindex message does not appear the next time (after reconnecting to the database) for a GTT.


A)
--normal table
postgres=# create table nt(n int primary key);
CREATE TABLE
--GTT table
postgres=# create global temp table gtt(n int primary key);
CREATE TABLE
B)
--Reindex  , normal table
postgres=# REINDEX (VERBOSE) TABLE  nt;
INFO:  index "nt_pkey" was reindexed
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
REINDEX
--reindex GTT table
postgres=# REINDEX (VERBOSE) TABLE  gtt;
INFO:  index "gtt_pkey" was reindexed
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
REINDEX
C)
--Reconnect  to database
postgres=# \c
You are now connected to database "postgres" as user "tushar".
D) again perform step B)

postgres=# REINDEX (VERBOSE) TABLE  nt;
INFO:  index "nt_pkey" was reindexed
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
REINDEX
postgres=# REINDEX (VERBOSE) TABLE  gtt;   <-- message  not coming
REINDEX

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-13 Thread tushar

On 4/13/20 1:57 PM, 曾文旌 wrote:

[tushar@localhost bin]$ tail -20   pg_upgrade_dump_13592.log
pg_restore: error: could not execute query: ERROR:  pg_type array OID value not 
set when in binary upgrade mode

I found that the regular table also has this problem, I am very unfamiliar with 
this part, so I opened another email to consult this problem.


ohh. Thanks.

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [Proposal] Global temporary tables

2020-04-13 Thread tushar

On 4/9/20 6:26 PM, 曾文旌 wrote:

On 4/7/20 2:27 PM, 曾文旌 wrote:

Vacuum full GTT, cluster GTT is already supported in 
global_temporary_table_v24-pg13.patch.

Here , it is skipping GTT

postgres=# \c foo
You are now connected to database "foo" as user "tushar".
foo=# create global temporary table  g123( c1 int) ;
CREATE TABLE
foo=# \q
[tushar@localhost bin]$ ./vacuumdb --full  foo
vacuumdb: vacuuming database "foo"
WARNING:  skipping vacuum global temp table "g123" because storage is not 
initialized for current session

The message was inappropriate at some point, so I removed it.


Thanks, Wenjing. Please see if the below behavior is correct.

X terminal -

postgres=# create global temp table foo1(n int);
CREATE TABLE
postgres=# insert into foo1 values (generate_series(1,10));
INSERT 0 10
postgres=# vacuum full ;
VACUUM

Y Terminal -

[tushar@localhost bin]$ ./vacuumdb -f  postgres
vacuumdb: vacuuming database "postgres"
WARNING:  global temp table oldest relfrozenxid 3276 is the oldest in 
the entire db

DETAIL:  The oldest relfrozenxid in pg_class is 3277
HINT:  If they differ greatly, please consider cleaning up the data in 
global temp table.
WARNING:  global temp table oldest relfrozenxid 3276 is the oldest in 
the entire db

DETAIL:  The oldest relfrozenxid in pg_class is 3277
HINT:  If they differ greatly, please consider cleaning up the data in 
global temp table.



--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



While restoring -getting error if dump contain sql statements generated from generated.sql file

2020-04-14 Thread tushar

Hi ,

We have an SQL file called 'generated.sql' under the src/test/regress/sql folder. If we run this file in psql, take a dump, and try to restore it into another database, we get errors like:

psql:/tmp/x:434: ERROR:  column "b" of relation "gtest1_1" is a 
generated column

psql:/tmp/x:441: ERROR:  cannot use column reference in DEFAULT expression

I copied these SQL statements from the dump file:

postgres=# CREATE TABLE public.gtest30 (
postgres(# a integer,
postgres(# b integer
postgres(# );
CREATE TABLE
postgres=#
postgres=# CREATE TABLE public.gtest30_1 (
postgres(# )
postgres-# INHERITS (public.gtest30);
CREATE TABLE
postgres=# ALTER TABLE ONLY public.gtest30_1 ALTER COLUMN b SET DEFAULT 
(a * 2);

ERROR:  cannot use column reference in DEFAULT expression
postgres=#

Steps to reproduce -

connect to psql - ( ./psql postgres)
create database ( create database x;)
connect to database x (\c x )
execute generated.sql file (\i ../../src/test/regress/sql/generated.sql)
take the dump of x db (./pg_dump -Fp x > /tmp/t.dump)
create another database  (create database y;)
Connect to y db (\c y)
execute plain dump sql file (\i /tmp/t.dump)

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-20 Thread tushar

On 4/20/20 2:59 PM, 曾文旌 wrote:

Please check my new patch.


Thanks, Wenjing. Please refer to the below scenario, which ends with the error: ERROR:  could not read block 0 in file "base/16466/t4_16472": read only 0 of 8192 bytes


Steps to reproduce

Connect to a psql terminal and create a table ( create global temp table t2 (n int primary key) on commit delete rows; )

Exit from the psql terminal and execute ./clusterdb -t t2 -d postgres -v
Connect to a psql terminal again and execute the below SQL statements one by one:
(
cluster verbose t2 using t2_pkey;
cluster verbose t2 ;
alter table t2 add column i int;
cluster verbose t2 ;
cluster verbose t2 using t2_pkey;
create unique index ind on t2(n);
create unique index concurrently  ind1 on t2(n);
select * from t2;
)
The last SQL statement will throw this error: ERROR:  could not read block 0 in file "base/16466/t4_16472": read only 0 of 8192 bytes


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[IBM z Systems] Getting server crash when jit_above_cost =0

2020-04-21 Thread tushar
rMain (argc=argc@entry=3, 
argv=argv@entry=0xa8c00cc0) at postmaster.c:1377
#26 0x800811f4 in main (argc=, argv=0xa8c00cc0) 
at main.c:228


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



[IBM z Systems] Rpm package issues.

2020-04-21 Thread tushar

HI,

I tried to install PG v11 and v12 on IBM z/OS using the YUM command, following https://wiki.postgresql.org/wiki/YUM_Installation_on_z_Systems


Found these 2 issues:

1) rpm packages  are failing  due to " Package .  is not signed "

PG v12 -
Package postgresql12-contrib-12.1-2PGDG.rhel7.s390x.rpm is not signed
PG v11
Package postgresql11-libs-11.6-2PGDG.rhel7.s390x.rpm is not signed

2) RPM packages are NOT updated; they still show versions 11.6 and 12.1, whereas the latest released versions are 11.7 and 12.2.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [IBM z Systems] Getting server crash when jit_above_cost =0

2020-04-21 Thread tushar

On 4/22/20 2:40 AM, Thomas Munro wrote:

I'm not sure if you're seeing the same problem or another similar one,
but I know that Andres got a patch along those lines into llvm.  Maybe
you could try on a more recent llvm release?
Thanks a lot, Thomas. It is working fine with llvm-toolset-7.0; it looks like the issue is with llvm-toolset-6.0.
Yesterday when we installed llvm-toolset-7  (yum install 
llvm-toolset-7.0), there was no llvm-config available under 
/opt/rh/llvm-toolset-7.0/root/usr/bin/

So we chose llvm-toolset-6 with PG v12. Today we fired the same yum command again using an asterisk, and now all the required files have been placed under the llvm-toolset-7 directory and things look fine.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-24 Thread tushar

On 4/22/20 2:49 PM, 曾文旌 wrote:


I provide the TRUNCATE tablename DROP to clear the data in the GTT and 
delete the storage files.
This feature requires the current transaction to commit immediately 
after it finishes truncate.



Thanks, Wenjing. Please refer to this scenario:

postgres=# create global temp table testing (a int);
CREATE TABLE
postgres=# begin;
BEGIN
postgres=*# truncate testing;  -- working   [1]
TRUNCATE TABLE
postgres=*# truncate testing drop;
ERROR:  Truncate global temporary table cannot run inside a transaction block    -- that is, it throws an error about something which I did successfully in [1]

postgres=!#

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





[pg_dump] 'create index' statement is failing due to search_path is empty

2020-04-28 Thread tushar

Hi ,

While testing something else, I found one scenario where restoring a pg_dump output fails.

Below is the standalone scenario -

--connect to a psql terminal and create 2 databases

postgres=# create database db1;
CREATE DATABASE
postgres=# create database db2;
CREATE DATABASE

--Connect to database db1 and run the below SQL statements (taken from the vacuum.sql file)


\c db1

create  temp table vaccluster (i INT PRIMARY KEY);
ALTER TABLE vaccluster CLUSTER ON vaccluster_pkey;
CLUSTER vaccluster;

CREATE FUNCTION do_analyze() RETURNS VOID VOLATILE LANGUAGE SQL
    AS 'ANALYZE pg_am';
CREATE FUNCTION wrap_do_analyze(c INT) RETURNS INT IMMUTABLE LANGUAGE SQL
    AS 'SELECT $1 FROM do_analyze()';
CREATE INDEX ON vaccluster(wrap_do_analyze(i));
INSERT INTO vaccluster VALUES (1), (2);

--Take the  dump of db1 database  ( ./pg_dump -Fp db1 > /tmp/dump.sql)

--Restore the dump file into db2 database

You are now connected to database "db2" as user "tushar".
db2=# \i /tmp/dump.sql
SET
SET
SET
SET
SET
 set_config


(1 row)

SET
SET
SET
SET
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
SET
SET
CREATE TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
psql:/tmp/dump.sql:71: ERROR:  function do_analyze() does not exist
LINE 1: SELECT $1 FROM do_analyze()
   ^
HINT:  No function matches the given name and argument types. You might 
need to add explicit type casts.

QUERY:  SELECT $1 FROM do_analyze()
CONTEXT:  SQL function "wrap_do_analyze" during inlining
db2=#

Workaround:

Add "reset search_path;" before the 'create index' statement in the dump.sql file.
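The dump sets an empty search_path (the set_config line near the top of the restore output above), and with an empty search_path the unqualified do_analyze() call inside wrap_do_analyze cannot be resolved when the function is inlined. The same interaction can be seen in a tiny standalone sketch (my own example with hypothetical schema/function names s1, f1 and f2, not taken from the dump):

create schema s1;
create function s1.f1() returns int language sql as 'select 1';
set search_path = s1;
create function s1.f2() returns int immutable language sql as 'select f1()';
select s1.f2();      -- works, f1() is resolved via search_path
set search_path = '';
select s1.f2();      -- fails: function f1() does not exist, the same failure mode as in the dump
reset search_path;
select s1.f2();      -- works again, which is why the workaround helps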

--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-04-29 Thread tushar

On 4/29/20 8:52 AM, 曾文旌 wrote:

Fixed the error message to make the expression more accurate. In v33.


Thanks, Wenjing.

Please refer to this scenario, where I get an error while performing CLUSTER:


1)

X terminal -

postgres=# create global temp table f(n int);
CREATE TABLE

Y Terminal -

postgres=# create index index12 on f(n);
CREATE INDEX
postgres=# \q

X terminal -

postgres=# reindex index  index12;
REINDEX
postgres=#  cluster f using index12;
ERROR:  cannot cluster on invalid index "index12"
postgres=# drop index index12;
DROP INDEX

If this is expected, could we try to make the error message simpler, if possible?


Another issue:

X terminal -

postgres=# create global temp table f11(n int);
CREATE TABLE
postgres=# create index ind1 on f11(n);
CREATE INDEX
postgres=# create index ind2 on f11(n);
CREATE INDEX
postgres=#

Y terminal -

postgres=# drop table f11;
ERROR:  cannot drop index ind2 or global temporary table f11
HINT:  Because the index is created on the global temporary table and 
other backend attached it.

postgres=#

It only mentions the ind2 index, but what about ind1, and what if there are lots of indexes? I think we should not mix index information into the error while dropping the table, and vice versa.


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company





Re: [Proposal] Global temporary tables

2020-02-25 Thread tushar

Hi ,

A pg_upgrade scenario is failing if the database contains a global temporary table.


=
centos@tushar-ldap-docker bin]$ ./psql postgres
psql (13devel)
Type "help" for help.

postgres=# create global temporary table  t(n int);
CREATE TABLE
postgres=# \q
===

run pg_upgrade -

[centos@tushar-ldap-docker bin]$ ./pg_upgrade -d /tmp/t1/ -D /tmp/t2 -b 
. -B .

Performing Consistency Checks
-
Checking cluster versions    ok
Checking database user is the install user   ok
Checking database connection settings   ok
Checking for prepared transactions ok
Checking for reg* data types in user tables ok
--
--
If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
--
Analyzing all rows in the new cluster    ok
Freezing all rows in the new cluster  ok
Deleting files from new pg_xact    ok
--
--
Restoring database schemas in the new cluster
ok
Copying user relation files
  /tmp/t1/base/13585/16384
error while copying relation "public.t": could not open file 
"/tmp/t1/base/13585/16384": No such file or directory

Failure, exiting

regards,

On 2/25/20 7:06 PM, Prabhat Sahu wrote:

Hi All,

Please check the below findings on GTT.
-- Scenario 1:
Under "information_schema", We are not allowed to create "temporary 
table", whereas we can CREATE/DROP "Global Temporary Table", is it 
expected ?


postgres=# create temporary table information_schema.temp1(c1 int);
ERROR:  cannot create temporary relation in non-temporary schema
LINE 1: create temporary table information_schema.temp1(c1 int);
                               ^

postgres=# create global temporary table information_schema.temp1(c1 int);
CREATE TABLE

postgres=# drop table information_schema.temp1 ;
DROP TABLE

-- Scenario 2:
Here I am getting the same error message in both the below cases.
We may add a "global" keyword with GTT related error message.

postgres=# create global temporary table gtt1 (c1 int unique);
CREATE TABLE
postgres=# create temporary table tmp1 (c1 int unique);
CREATE TABLE

postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );
ERROR:  constraints on temporary tables may reference only temporary 
tables


postgres=# create global temporary table gtt2 (c1 int references 
tmp1(c1) );
ERROR:  constraints on temporary tables may reference only temporary 
tables


Thanks,
Prabhat Sahu

On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing@alibaba-inc.com> wrote:





On February 24, 2020, at 5:44 PM, Prabhat Sahu <prabhat.s...@enterprisedb.com> wrote:

On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing@alibaba-inc.com> wrote:

Hi,
I have started testing the "Global temporary table" feature,
That's great, I see hope.
from "gtt_v11-pg13.patch". Below is my findings:

-- session 1:
postgres=# create global temporary table gtt1(a int);
CREATE TABLE

-- session 2:
postgres=# truncate gtt1 ;
ERROR:  could not open file "base/13585/t3_16384": No such
file or directory

is it expected?

Oh ,this is a bug, I fixed it.

Thanks for the patch.
I have verified the same, Now the issue is resolved with v12 patch.

Kindly confirm the below scenario:

postgres=# create global temporary table gtt1 (c1 int unique);
CREATE TABLE

postgres=# create global temporary table gtt2 (c1 int references
gtt1(c1) );
ERROR:  referenced relation "gtt1" is not a global temp table

postgres=# create table tab2 (c1 int references gtt1(c1) );
ERROR:  referenced relation "gtt1" is not a global temp table

Thanks,
Prabhat Sahu


GTT supports foreign key constraints
in global_temporary_table_v13-pg13.patch


    Wenjing





--

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com



--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [Proposal] Global temporary tables

2020-02-25 Thread tushar

Hi,

I have created two global temporary tables like this:

Case 1-
postgres=# create global temp table foo(n int) with (on_commit_delete_rows='true');

CREATE TABLE

Case 2-
postgres=# create global temp table bar1(n int) on commit delete rows;
CREATE TABLE


But if I try to do the same with only the 'temp' keyword, Case 2 works fine but I get this error for Case 1:


postgres=# create   temp table foo1(n int) with 
(on_commit_delete_rows='true');

ERROR:  regular table cannot specifie on_commit_delete_rows
postgres=#

postgres=#  create   temp table bar1(n int) on commit delete rows;
CREATE TABLE

I think this error message needs to be clearer.

regards,
tushar

On 2/25/20 7:19 PM, Pavel Stehule wrote:



On Tue, Feb 25, 2020 at 14:36, Prabhat Sahu <prabhat.s...@enterprisedb.com> wrote:


Hi All,

Please check the below findings on GTT.
-- Scenario 1:
Under "information_schema", We are not allowed to create
"temporary table", whereas we can CREATE/DROP "Global Temporary
Table", is it expected ?


It is OK for me. Temporary tables should be created only in their own schema. For GTT there is no risk of collision, so it can be created in any schema where the necessary access rights exist.


Pavel


postgres=# create temporary table information_schema.temp1(c1 int);
ERROR:  cannot create temporary relation in non-temporary schema
LINE 1: create temporary table information_schema.temp1(c1 int);
                               ^

postgres=# create global temporary table
information_schema.temp1(c1 int);
CREATE TABLE

postgres=# drop table information_schema.temp1 ;
DROP TABLE

-- Scenario 2:
Here I am getting the same error message in both the below cases.
We may add a "global" keyword with GTT related error message.

postgres=# create global temporary table gtt1 (c1 int unique);
CREATE TABLE
postgres=# create temporary table tmp1 (c1 int unique);
CREATE TABLE

postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );
ERROR:  constraints on temporary tables may reference only
temporary tables

postgres=# create global temporary table gtt2 (c1 int references
tmp1(c1) );
ERROR:  constraints on temporary tables may reference only
temporary tables

Thanks,
Prabhat Sahu

On Tue, Feb 25, 2020 at 2:25 PM 曾文旌(义从) <wenjing@alibaba-inc.com> wrote:




On February 24, 2020, at 5:44 PM, Prabhat Sahu <prabhat.s...@enterprisedb.com> wrote:

On Fri, Feb 21, 2020 at 9:10 PM 曾文旌(义从) <wenjing@alibaba-inc.com> wrote:

Hi,
I have started testing the "Global temporary table" feature,
That's great, I see hope.
from "gtt_v11-pg13.patch". Below is my findings:

-- session 1:
postgres=# create global temporary table gtt1(a int);
CREATE TABLE

-- session 2:
postgres=# truncate gtt1 ;
ERROR:  could not open file "base/13585/t3_16384": No
such file or directory

is it expected?

Oh ,this is a bug, I fixed it.

Thanks for the patch.
I have verified the same, Now the issue is resolved with v12
patch.

Kindly confirm the below scenario:

postgres=# create global temporary table gtt1 (c1 int unique);
CREATE TABLE

postgres=# create global temporary table gtt2 (c1 int
references gtt1(c1) );
ERROR:  referenced relation "gtt1" is not a global temp table

postgres=# create table tab2 (c1 int references gtt1(c1) );
ERROR:  referenced relation "gtt1" is not a global temp table

Thanks,
Prabhat Sahu


GTT supports foreign key constraints
in global_temporary_table_v13-pg13.patch


Wenjing





    -- 


With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com



--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company



Re: [Proposal] Global temporary tables

2020-03-02 Thread tushar

On 2/27/20 9:43 AM, 曾文旌(义从) wrote:

-- Scenario 2:
Here I am getting the same error message in both the below cases.
We may add a "global" keyword with GTT related error message.

postgres=# create global temporary table gtt1 (c1 int unique);
CREATE TABLE
postgres=# create temporary table tmp1 (c1 int unique);
CREATE TABLE

postgres=# create temporary table tmp2 (c1 int references gtt1(c1) );
ERROR:  constraints on temporary tables may reference only temporary 
tables


postgres=# create global temporary table gtt2 (c1 int references 
tmp1(c1) );
ERROR:  constraints on temporary tables may reference only temporary 
tables

Fixed in global_temporary_table_v15-pg13.patch



Thanks Wenjing.

The below scenario is not working, i.e. even when 'on_commit_delete_rows' is true, the rows are NOT removed after commit.


postgres=#  create global  temp table foo1(n int) with 
(on_commit_delete_rows='true');

CREATE TABLE
postgres=#
postgres=# begin;
BEGIN
postgres=*# insert into foo1 values (9);
INSERT 0 1
postgres=*# insert into foo1 values (9);
INSERT 0 1
postgres=*# select * from foo1;
 n
---
 9
 9
(2 rows)

postgres=*# commit;
COMMIT
postgres=# select * from foo1;   -- after commit there should be 0 rows, as on_commit_delete_rows is 'true'

 n
---
 9
 9
(2 rows)

postgres=# \d+ foo1
                                          Table "public.foo1"
 Column |  Type   | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+---------+---------+--------------+-------------
 n      | integer |           |          |         | plain   |              |

Access method: heap
Options: on_commit_delete_rows=true

postgres=#

But if the user creates the table this way, then it works as expected:

postgres=#  create global temp table foo2(n int) on commit delete rows;
CREATE TABLE
postgres=# begin; insert into foo2 values (9); insert into foo2 values 
(9); commit; select * from foo2;

BEGIN
INSERT 0 1
INSERT 0 1
COMMIT
 n
---
(0 rows)

postgres=#

I guess the problem is something with this syntax: create global temp table foo1(n int) with (on_commit_delete_rows='true');
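One way to see where the setting ends up is to look at the stored reloptions (a sketch using standard pg_class columns):

postgres=# select relname, relpersistence, reloptions from pg_class where relname in ('foo1', 'foo2');
-- if foo1 shows on_commit_delete_rows=true here while its rows still survive commit,
-- then the reloption form is being stored but not acted upon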


--
regards,tushar
EnterpriseDB  https://www.enterprisedb.com/
The Enterprise PostgreSQL Company


