stale statistics on postgres 14

2022-02-28 Thread Jaime Casanova
Hi everyone,

I have been doing some tests on a little server (at least compared to
some others around). It has 128 cores (64 physical), 128GB of RAM and,
against my will, a 4-disk (10k RPM) RAID5.

I have been using HammerDB to test versions from 9.6 to 14, currently on
Windows 10, and I have been recording the changes in performance.

HammerDB results show two numbers, NOPM and TPM; the second one is
calculated using the statistics.

Sadly, on 14 I saw a lower TPM number while NOPM stayed around the
average (at least, the average since 11).

The reason the TPM number dropped on 14 is that it is based
on the statistics[1], which of course are stale[2].

I consider this a regression because no other PostgreSQL version had
this problem on the same machine and the same OS. Is there anything I can
do to track down what caused this regression?

[1] Query from HammerDB to get the TPM number:
select sum(xact_commit + xact_rollback) 
  from pg_stat_database

[2] Message from the log, stating the obvious:
LOG:  0: using stale statistics instead of current ones because
stats collector is not responding

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Removing unneeded self joins

2022-02-28 Thread Jaime Casanova
On Thu, Jul 15, 2021 at 05:49:11PM +0300, Andrey Lepikhov wrote:
> On 6/7/21 13:49, Hywel Carver wrote:
> > On Mon, Jul 5, 2021 at 2:20 PM Andrey Lepikhov
> >  wrote:
> > Looking through the email chain, a previous version of this patch added
> > ~0.6% to planning time in the worst case tested - does that meet the
> > "essentially free" requirement?
> I think these tests didn't fully cover the possible use cases. It will
> depend on the number of relations in the query. For a JOIN of partitioned
> tables, for example, the overhead could grow. But in the context of overall
> planning time this overhead will be small until there is a large number of
> relations.
> Also, we made this feature optional to solve possible problems.
> Rebased on 768ea9bcf9
> 

I ran some tests on a machine with 16 cores and 32GB of RAM,
so we can see whether this is an improvement.

This is what I found:

+-----------------------+----------+-----------+-----------+--------+-----------+-------+
| test                  | mode     |    master |   enabled |      % |  disabled |     % |
+-----------------------+----------+-----------+-----------+--------+-----------+-------+
| pgbench read only     | standard |  64418.13 |  63942.94 |  -0.74 |  62231.38 | -3.39 |
| pgbench read only     | prepared | 108463.51 | 107002.13 |  -1.35 | 100960.83 | -6.92 |
| pgbench read only     | extended |  55409.65 |  56427.63 |   1.84 |  55927.62 |  0.93 |
+-----------------------+----------+-----------+-----------+--------+-----------+-------+
| pgbench read/write    | standard |   9374.91 |   9135.21 |  -2.56 |   8840.68 | -5.70 |
| pgbench read/write    | prepared |  11849.86 |  11672.23 |  -1.50 |  11393.39 | -3.85 |
| pgbench read/write    | extended |   7976.80 |   7947.07 |  -0.37 |   7788.99 | -2.35 |
+-----------------------+----------+-----------+-----------+--------+-----------+-------+
| select non optimize 1 | standard |     80.97 |     81.29 |   0.40 |     81.30 |  0.41 |
| select non optimize 1 | prepared |     81.29 |     81.28 |  -0.01 |     80.89 | -0.49 |
| select non optimize 1 | extended |     81.07 |     80.81 |  -0.32 |     80.98 | -0.11 |
+-----------------------+----------+-----------+-----------+--------+-----------+-------+
| select optimized 1    | standard |     15.84 |     13.90 | -12.25 |     15.80 | -0.25 |
| select optimized 1    | prepared |     15.24 |     13.82 |  -9.32 |     15.55 |  2.03 |
| select optimized 1    | extended |     15.38 |     13.89 |  -9.69 |     15.59 |  1.37 |
+-----------------------+----------+-----------+-----------+--------+-----------+-------+
| select optimized 2    | standard |  10204.91 |  10818.39 |   6.01 |  10261.07 |  0.55 |
| select optimized 2    | prepared |  13284.06 |  15579.33 |  17.28 |  13116.22 | -1.26 |
| select optimized 2    | extended |  10143.43 |  10645.23 |   4.95 |  10142.77 | -0.01 |
+-----------------------+----------+-----------+-----------+--------+-----------+-------+
| select shoe           | standard |   5645.28 |   5661.71 |   0.29 |   6180.60 |  9.48 |
| select shoe           | prepared |   9660.45 |   9602.37 |  -0.60 |   9894.82 |  2.43 |
| select shoe           | extended |   5666.47 |   5634.10 |  -0.57 |   5757.26 |  1.60 |
+-----------------------+----------+-----------+-----------+--------+-----------+-------+

Obviously the pgbench runs use the standard scripts. The numbers are
not clear to me: I can see improvements with the patch in only one
case and, for some reason, if I disable the patch
(enable_self_join_removal='off') I still see a regression in the normal
cases and, curiously, an improvement in one case.

I'm attaching the queries. I used the users table that is defined
down-thread, loaded with ~200k rows using:

insert into users 
select seq, case when random() < 0.2 then null else random() * 1000 end, 
   random() * 1 
  from generate_series(1, 100) seq 
  on conflict (nullable_int) do nothing;
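
The users table itself is defined down-thread. For reference only, here is
a definition consistent with that INSERT (the ON CONFLICT clause implies a
unique constraint on nullable_int; the other column names and types are my
guess):

-- hypothetical sketch; the real definition is in the referenced thread
create table users (
    id           integer,
    nullable_int integer unique,
    val          numeric
);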

For master I just dumped the data from the table and loaded it. I'm also
attaching the queries I used.

After these tests, I'm not convinced this is actually providing anything
performance-wise, at least not in its current state.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL


select_nonoptimize1.sql
Description: application/sql


select_optimize1.sql
Description: application/sql


select_optimize2.sql
Description: application/sql


select_shoe.sql
Description: application/sql


Re: enable_incremental_sort changes query behavior

2020-12-15 Thread Jaime Casanova
On Tue, Dec 1, 2020 at 4:08 AM Anastasia Lubennikova
 wrote:
>
> On 01.12.2020 03:08, James Coleman wrote:
> > On Tue, Nov 3, 2020 at 4:39 PM Tomas Vondra
> >  wrote:
> >> I've pushed the 0001 part, i.e. the main fix. Not sure about the other
> >> parts (comments and moving the code back to postgres_fdw) yet.
> > I noticed the CF entry [1] got moved to the next CF; I'm thinking this
> > entry should be marked as committed since the fix for the initial bug
> > reported on this thread has been pushed. We have the parallel safety
> > issue outstanding, but there's a separate thread and patch for that,
> > so I'll make a new CF entry for that.
> >
> > I can mark it as committed, but I'm not sure how to "undo" (or if
> > that's desirable) the move to the next CF.
> >
> > Thoughts?
> >
> > James
> >
> > 1: https://commitfest.postgresql.org/30/2754/
> >
> >
> Oops...
> I must have rushed with this one, thank you for noticing.
> I don't see how to move it back either. I think it's fine to mark it as
> Committed where it is now.
>

BTW, I still see this one marked as "Needs review".


--
Jaime Casanova
Professional PostgreSQL: Soporte 24x7 y capacitación




Re: SQL-standard function body

2021-03-04 Thread Jaime Casanova
On Tue, Mar 2, 2021 at 12:13 PM Peter Eisentraut
 wrote:
>
> On 11.02.21 09:02, Peter Eisentraut wrote:
> >> Here is an updated patch to get it building again.
> >
> > Another updated patch to get things building again.  I've also fixed the
> > last TODO I had in there in qualifying function arguments as necessary
> > in ruleutils.c.
>
> Updated patch to resolve merge conflict.  No changes in functionality.

Hi,

I was running some tests with this patch and found this problem:

"""
CREATE OR REPLACE FUNCTION public.make_table()
 RETURNS void
 LANGUAGE sql
BEGIN ATOMIC
  CREATE TABLE created_table AS SELECT * FROM int8_tbl;
END;
ERROR:  unrecognized token: "?"
CONTEXT:  SQL function "make_table"
"""

Attached is a backtrace from the point where the error is thrown.
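
For contrast, a body that contains only a plain query does not go through
this path; the failure seems specific to utility statements (here CREATE
TABLE ... AS) inside BEGIN ATOMIC. A minimal sketch, assuming that is the
case:

"""
CREATE FUNCTION count_int8_tbl()
 RETURNS bigint
 LANGUAGE sql
BEGIN ATOMIC
  SELECT count(*) FROM int8_tbl;
END;
"""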

-- 
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL
#0  nodeRead (
token=0x55c62af712b4 "? :resultRelation 0 :hasAggs false :hasWindowFuncs 
false :hasTargetSRFs false :hasSubLinks false :hasDistinctOn false 
:hasRecursive false :hasModifyingCTE false :hasForUpdate false :hasRowSecurity 
fal"..., tok_len=1)
at read.c:421
__errno_location = 
result = 0x55c62af712b3
type = 104
__func__ = "nodeRead"
#1  0x55c62a706491 in _readQuery () at readfuncs.c:255
local_node = 0x55c62af71b20
token = 0x55c62af712a7 ":utilityStmt ? :resultRelation 0 :hasAggs false 
:hasWindowFuncs false :hasTargetSRFs false :hasSubLinks false :hasDistinctOn 
false :hasRecursive false :hasModifyingCTE false :hasForUpdate false :hasRo"...
length = 12
#2  0x55c62a711588 in parseNodeString () at readfuncs.c:2701
return_value = 0xf424308e6
token = 0x55c62af71273 "QUERY :commandType 5 :querySource 0 :canSetTag 
true :utilityStmt ? :resultRelation 0 :hasAggs false :hasWindowFuncs false 
:hasTargetSRFs false :hasSubLinks false :hasDistinctOn false :hasRecursive 
fal"...
length = 5
__func__ = "parseNodeString"
#3  0x55c62a705b74 in nodeRead (
token=0x55c62af71272 "{QUERY :commandType 5 :querySource 0 :canSetTag true 
:utilityStmt ? :resultRelation 0 :hasAggs false :hasWindowFuncs false 
:hasTargetSRFs false :hasSubLinks false :hasDistinctOn false :hasRecursive 
fa"..., tok_len=1)
at read.c:334
result = 0x55c62af71273
type = 103
__func__ = "nodeRead"
#4  0x55c62a705e8d in nodeRead (
token=0x55c62af71272 "{QUERY :commandType 5 :querySource 0 :canSetTag true 
:utilityStmt ? :resultRelation 0 :hasAggs false :hasWindowFuncs false 
:hasTargetSRFs false :hasSubLinks false :hasDistinctOn false :hasRecursive 
fa"..., tok_len=1)
at read.c:400
l = 0x0
result = 0x55c62af71272
type = 102
__func__ = "nodeRead"
#5  0x55c62a705e8d in nodeRead (
token=0x55c62af71271 "({QUERY :commandType 5 :querySource 0 :canSetTag true 
:utilityStmt ? :resultRelation 0 :hasAggs false :hasWindowFuncs false 
:hasTargetSRFs false :hasSubLinks false :hasDistinctOn false :hasRecursive 
f"..., tok_len=1)
at read.c:400
l = 0x0
result = 0x277
type = 102
__func__ = "nodeRead"
#6  0x55c62a705700 in stringToNodeInternal (
str=0x55c62af71270 "(({QUERY :commandType 5 :querySource 0 :canSetTag true 
:utilityStmt ? :resultRelation 0 :hasAggs false :hasWindowFuncs false 
:hasTargetSRFs false :hasSubLinks false :hasDistinctOn false :hasRecursive 
"..., 
restore_loc_fields=false) at read.c:74
retval = 0x7ffd9bafc9f0
save_strtok = 0x55c62af712b5 " :resultRelation 0 :hasAggs false 
:hasWindowFuncs false :hasTargetSRFs false :hasSubLinks false :hasDistinctOn 
false :hasRecursive false :hasModifyingCTE false :hasForUpdate false 
:hasRowSecurity fals"...
#7  0x55c62a705732 in stringToNode (
str=0x55c62af71270 "(({QUERY :commandType 5 :querySource 0 :canSetTag true 
:utilityStmt ? :resultRelation 0 :hasAggs false :hasWindowFuncs false 
:hasTargetSRFs false :hasSubLinks false :hasDistinctOn false :hasRecursive 
"...) at read.c:91
No locals.
#8  0x55c62a501a7d in fmgr_sql_validator (fcinfo=0x7ffd9bafcb00) at 
pg_proc.c:895
n = 0x55c62af065a0
funcoid = 23208
tuple = 0x7f95f02b9348
proc = 0x7f95f02b9390
raw_parsetree_list = 0x55c62af065a0
querytree_list = 0x55c62aa27bf2 
lc = 0x7ffd9bafcb20
isnull = false
tmp = 140281956242456
prosrc = 0x55c62af065a0 "\240e\360*\306U"
callback_arg = {proname = 0x7f95f02b9394 "make_table", prosrc = 0x0}
sqlerrcontext = {previous = 0x0, callback = 0x55c62a501c77 
, 
  arg = 0x7ffd9bafca50}
haspolyarg = false
i = 0
__f

Re: SQL-standard function body

2021-03-14 Thread Jaime Casanova
On Tue, Mar 9, 2021 at 7:27 AM Peter Eisentraut
 wrote:
>
>
> I see.  The problem is that we don't have serialization and
> deserialization support for most utility statements.  I think I'll need
> to add that eventually.  For now, I have added code to prevent utility
> statements.  I think it's still useful that way for now.
>

Great! thanks!

I found another problem when using CASE expressions:

CREATE OR REPLACE FUNCTION foo_case()
RETURNS boolean
LANGUAGE SQL
BEGIN ATOMIC
select case when random() > 0.5 then true else false end;
END;

Apparently the END of the CASE expression is interpreted as the END of
the function body.
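
Until that is handled, a workaround is to avoid a bare END inside the body.
For this particular example the CASE reduces to a plain comparison, so the
following (same semantics, my rewrite) should parse:

CREATE OR REPLACE FUNCTION foo_case()
RETURNS boolean
LANGUAGE SQL
BEGIN ATOMIC
  -- equivalent to the CASE above, with no END that can be mistaken
  -- for the end of the function body
  select random() > 0.5;
END;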

-- 
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL




Re: FETCH FIRST clause PERCENT option

2021-01-26 Thread Jaime Casanova
On Mon, Jan 25, 2021 at 6:39 AM Kyotaro Horiguchi
 wrote:
>
> Sorry for the delay. I started to look at this.
>
> At Fri, 25 Sep 2020 12:25:24 +0300, Surafel Temesgen  
> wrote in
> > Hi Michael
> > On Thu, Sep 24, 2020 at 6:58 AM Michael Paquier  wrote:
> >
> > > On Mon, Aug 10, 2020 at 01:23:44PM +0300, Surafel Temesgen wrote:
> > > > I also Implement PERCENT WITH TIES option. patch is attached
> > > > i don't start a new tread because the patches share common code
> > >
> > > This fails to apply per the CF bot.  Could you send a rebase?
>
> This still applies on the master HEAD.
>
> percent-incremental-v11.patch
>
> The existing nodeLimit passes the slot of the subnode to the
> caller, but this patch changes that behavior. You added a new function
> to tuplestore.c so as not to store a minimal tuple into the slot passed
> from the subnode, but we should refrain from scribbling on the slot
> passed from the subnode. Instead, the PERCENT path of the limit node
> should use its own ResultTupleSlot for the purpose. See nodeSort for a
> concrete example.
>
>
> +   LIMIT_OPTION_PER_WITH_TIES, /* FETCH FIRST... PERCENT WITH TIES */
>
> That name is a bit hard to read. We should spell it with complete
> words.
>
> case LIMIT_INWINDOW:
> ...
> +   if (IsPercentOption(node->limitOption) && 
> node->backwardPosition
> ...
> +   if (IsPercentOption(node->limitOption) && 
> node->reachEnd)
> ...
> +   if (IsPercentOption(node->limitOption))
>
> I think we can use separate lstate state for each condition above
> since IsPercentOption() gives a constant result through the execution
> time. For example, LIMIT_PERCENT_TUPLESLOT_NOT_FILLED and
> LIMIT_PERCENT_TUPLESLOT_FILLED and some derived states similar to the
> non-percent path.  I *feel* that makes code simpler.
>
> What do you think about this?
>
> regards.
>
> --
> Kyotaro Horiguchi
> NTT Open Source Software Center
>
>

Hi,

I was testing this and found that it doesn't work well with
subselects; this query makes the server crash:

"""
select * from pg_type where exists (select 1 from pg_authid fetch
first 10 percent rows only);
"""

postgres was compiled with these options:

"""
CFLAGS="-ggdb -O0 -g3 -fno-omit-frame-pointer" ./configure
--prefix=/opt/var/pgdg/14dev/percent --enable-debug --enable-cassert
--with-pgport=54314 --enable-depend
"""

Attached is the stack trace.
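
For comparison, a plain row-count limit in the same position should be
fine; the failing assertion (limit->consttype == INT8OID in
simplify_EXISTS_query, visible in the trace) suggests the PERCENT form
leaves something other than an int8 Const as the limit count:

"""
select * from pg_type where exists (select 1 from pg_authid fetch
first 10 rows only);
"""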

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {0, 140724437079104, 2, 6, 6458975, 94647092334544, 
4611686018427388799, 139701007526566, 0, 281470681751456, 0, 0, 0, 0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f0eacb55535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {0, 0, 0, 0, 0, 139701005283317, 2, 3760846768312216920, 
7018409443653412194, 94647092334544, 
  7003717752975484000, 0, 3976935554753230848, 140724437079344, 0, 
140724437080208}}, sa_flags = -1101967408, sa_restorer = 0x0}
sigs = {__val = {32, 0 }}
#2  0x5614beb8d44e in ExceptionalCondition (conditionName=0x5614bed22e7b 
"limit->consttype == INT8OID", errorType=0x5614bed22b76 "FailedAssertion", 
fileName=0x5614bed22c44 "subselect.c", lineNumber=1594)
at assert.c:69
No locals.
#3  0x5614be8df87d in simplify_EXISTS_query (root=0x5614bfecf4b8, 
query=0x5614bfe038e0) at subselect.c:1594
node = 0x5614bfed0190
limit = 0x5614bfed0190
#4  0x5614be8df49b in convert_EXISTS_sublink_to_join (root=0x5614bfecf4b8, 
sublink=0x5614bfe036e0, under_not=false, available_rels=0x5614bfecf1d8) at 
subselect.c:1420
result = 0x5614bfecf940
parse = 0x5614bfe039f8
subselect = 0x5614bfe038e0
whereClause = 0x5614bfecf8b8
rtoffset = 22036
varno = 22036
clause_varnos = 0x18
upper_varnos = 0x20
#5  0x5614be8e3d8d in pull_up_sublinks_qual_recurse (root=0x5614bfecf4b8, 
node=0x5614bfe036e0, jtlink1=0x7ffcf6155a48, available_rels1=0x5614bfecf1d8, 
jtlink2=0x0, available_rels2=0x0) at prepjointree.c:458
sublink = 0x5614bfe036e0
j = 0x0
child_rels = 0x5614bfecf940
#6  0x5614be8e38cc in pull_up_sublinks_jointree_recurse 
(root=0x5614bfecf4b8, jtnode=0x5614bfecf3d0, relids=0x7ffcf6155ab0) at 
prepjointree.c:275
frelids = 0x5614bfecf1d8
newf = 0x5614bfecf928
jtlink = 0x

FailedAssertion in heap_index_delete_tuples at heapam.c:7220

2021-01-26 Thread Jaime Casanova
Hi,

${subject} happened while executing ${attached query} on the regression
database, using 14dev (commit
d5a83d79c9f9b660a6a5a77afafe146d3c8c6f46) and produced ${attached
stack trace}.

Sadly, just loading the regression database and executing this query is
not enough to reproduce it. Not sure what else I can do to help with this
one.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {0, 140734235450160, 2, 6, 6456150, 94891112648656, 
4611686018427388799, 140604537363110, 0, 281470681751457, 0, 0, 0, 0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7fe10b488535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x3feb0800, 
sa_sigaction = 0x3feb0800}, sa_mask = {__val = {0, 4605253854145256240, 
0, 13781012405103410633, 0, 140604535119861, 2, 
  7220455706289059512, 7018409646405740088, 0, 7003723482445100592, 
0, 3166617362634023936, 140734235450400, 684527088, 140734235451264}}, sa_flags 
= 0, sa_restorer = 0x28cd0df0}
sigs = {__val = {32, 0 }}
#2  0x564d8f7748be in ExceptionalCondition (conditionName=0x564d8f800559 
"delstate->ndeltids > 0", errorType=0x564d8f7fe869 "FailedAssertion", 
fileName=0x564d8f7fe948 "heapam.c", lineNumber=7220)
at assert.c:69
No locals.
#3  0x564d8f176825 in heap_index_delete_tuples (rel=0x7fe1022847c8, 
delstate=0x7fff3e1ca120) at heapam.c:7220
latestRemovedXid = 0
blkno = 4294967295
buf = 0
page = 0x0
maxoff = 0
priorXmax = 2446144864
prefetch_state = {cur_hblkno = 4294967295, next_item = 0, ndeltids = 0, 
deltids = 0x564d91cd3748}
prefetch_distance = 10
SnapshotNonVacuumable = {snapshot_type = SNAPSHOT_NON_VACUUMABLE, xmin 
= 0, xmax = 2446145024, xip = 0x216, xcnt = 0, subxip = 0x, 
subxcnt = 1042063264, suboverflowed = 255, 
  takenDuringRecovery = 127, copied = false, curcid = 1042063408, 
speculativeToken = 32767, vistest = 0x564d8fab1c10 , 
active_count = 2446145040, regd_count = 22093, ph_node = {
first_child = 0x7fff3e1ca070, next_sibling = 0x7fff3e1ca060, 
prev_or_parent = 0x564d91ddda08}, whenTaken = 462, lsn = 18446744071860729372, 
snapXactCompletionCount = 0}
finalndeltids = 0
nblocksaccessed = 0
nblocksfavorable = 0
curtargetfreespace = 0
lastfreespace = 0
actualfreespace = 0
bottomup_final_block = false
#4  0x564d8f195780 in table_index_delete_tuples (rel=0x7fe1022847c8, 
delstate=0x7fff3e1ca120) at ../../../../src/include/access/tableam.h:1240
No locals.
#5  0x564d8f195e6a in index_compute_xid_horizon_for_tuples 
(irel=0x564d924f6428, hrel=0x7fe1022847c8, ibuf=807, itemnos=0x7fff3e1ca190, 
nitems=0) at genam.c:331
delstate = {bottomup = false, bottomupfreespace = 0, ndeltids = 0, 
deltids = 0x564d91cd3748, status = 0x564d91cd3768}
latestRemovedXid = 0
ipage = 0x7fe102fc5f00 ""
itup = 0x7fff3e1ca630
#6  0x564d8f13e568 in gistprunepage (rel=0x564d924f6428, 
page=0x7fe102fc5f00 "", buffer=807, heapRel=0x7fe1022847c8) at gist.c:1669
deletable = {1, 0, 0, 0, 46976, 37580, 22016, 0, 41520, 15900, 32767, 
0, 42544, 15900, 32767, 0, 42752, 15900, 0, 0, 2674, 2890, 32737, 0, 1, 0, 
25580, 15290, 41984, 48994, 5558, 11250, 42752, 15900, 
  32767, 0, 52880, 36628, 22093, 0, 46976, 37580, 22093, 0, 13728, 
37325, 22093, 0, 32512, 765, 32737, 0, 25640, 37455, 22093, 0, 0, 0, 2, 0, 
65535, 255, 0, 0, 41472, 15900, 32767, 0, 14468, 566, 
  32737, 0, 41536, 15900, 0, 256, 14468, 566, 32737, 0, 13736, 37325, 
22093, 0, 25640, 37455, 22093, 0, 0, 0, 0, 0, 0, 512, 32737, 0, 41584, 15900, 
32767, 0, 47989, 36700, 22093, 0, 41712, 15900, 0, 
  256, 14468, 566, 32737, 0, 41632, 15900, 32767, 0, 56219, 36700, 
22093, 0, 0, 0, 0, 0, 14464, 566, 32737, 0, 0, 0, 3, 0, 21856, 174, 0, 8192, 
41680, 15900, 32767, 0, 56472, 36700, 22093, 0, 11968, 
  37108, 22093, 0, 0, 0, 0, 0, 14480, 566, 32737, 0, 14464, 566, 32737, 
0, 41712, 15900, 32767, 0, 54083, 36639, 22093, 0, 40672, 765, 32737, 0, 25640, 
37455, 22093, 0, 0, 0, 0, 0, 0, 512, 32737, 0, 
  41776, 15900, 0, 256, 14468, 566, 32737, 0, 41776, 15900, 32767, 0, 
47349, 36700, 22093, 0...}
ndeletable = 0
offnum = 292
maxoff = 291
latestRemovedXid = 0
#7  0x564d8f139e3e in gistplacetopage (rel=0x564d924f6428, freespace=0, 
giststate=0x564d92ccb780, buffer=807, itup=0x7fff3e1ca6e0, ntup=1, oldoffnum=0, 
newblkno=0x0, leftchildbuf=0, 
splitinfo=0x7fff3e1ca690, markfollowright=true, heapRel=0x7fe1022847c8, 
is_build=false) at gist.c:274
blkno = 4
page = 0x7fe102fc5f00 ""
is_leaf = true
   

Re: FailedAssertion in heap_index_delete_tuples at heapam.c:7220

2021-01-27 Thread Jaime Casanova
On Wed, Jan 27, 2021 at 2:09 AM Peter Geoghegan  wrote:
>
> On Tue, Jan 26, 2021 at 10:52 PM Jaime Casanova
>  wrote:
> > ${subject} happened while executing ${attached query} on the regression
> > database, using 14dev (commit
> > d5a83d79c9f9b660a6a5a77afafe146d3c8c6f46) and produced ${attached
> > stack trace}.
>
> I see the bug: gistprunepage() calls
> index_compute_xid_horizon_for_tuples() (which ultimately calls the
> heapam.c callback for heap_index_delete_tuples()) with an empty array,
> which we don't expect. The similar code within _hash_vacuum_one_page()
> already only calls index_compute_xid_horizon_for_tuples() when
> ndeletable > 0.
>
> The fix is obvious: Bring gistprunepage() in line with
> _hash_vacuum_one_page(). I'll go push a fix for that now.
>

Thanks

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




protect pg_stat_statements_info() for being used without the library loaded

2021-01-27 Thread Jaime Casanova
Hi,

Attached is a small patch for ${subject}
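
For reference, a quick way to exercise the check the patch adds (assuming
the extension objects are installed but pg_stat_statements is not listed
in shared_preload_libraries):

CREATE EXTENSION pg_stat_statements;
SELECT * FROM pg_stat_statements_info();
-- with the patch this now fails cleanly with:
-- ERROR:  pg_stat_statements must be loaded via shared_preload_libraries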

--
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 72a117fc19..62cccbfa44 100644
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ b/contrib/pg_stat_statements/pg_stat_statements.c
@@ -1898,6 +1898,11 @@ pg_stat_statements_info(PG_FUNCTION_ARGS)
 	Datum		values[PG_STAT_STATEMENTS_INFO_COLS];
 	bool		nulls[PG_STAT_STATEMENTS_INFO_COLS];
 
+	if (!pgss || !pgss_hash)
+		ereport(ERROR,
+(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
+
 	/* Build a tuple descriptor for our result type */
 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
 		elog(ERROR, "return type must be a row type");


Assertion fail with window function and partitioned tables

2021-01-28 Thread Jaime Casanova
Hi,

Just found another crash.

It seems that commit a929e17e5a8c9b751b66002c8a89fdebdacfe194 broke something.
Attached is a minimal case and the stack trace.
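
The minimal case, as captured in the query_string of frame #10 of the
attached stack trace, is:

select pg_catalog.min(100) over (partition by ref_1.a)
  from ref_1
 where (ref_1.a <= (select foo from generate_series(1, 10) foo order by 1 limit 1));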

--
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {0, 140736891297696, 2, 6, 6453766, 94092477853648, 
4611686018427388799, 140041917086374, 0, 281470681751456, 0, 0, 0, 0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f5e0c806535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {0, 0, 0, 0, 0, 140041914843125, 2, 3691039874016042664, 
7018409654210421561, 94092477853648, 
  7003722382933471536, 0, 6953716221396971264, 140736891297936, 0, 
140736891298800}}, sa_flags = -1665667120, sa_restorer = 0x0}
sigs = {__val = {32, 0 }}
#2  0x55939d1f5f6e in ExceptionalCondition (conditionName=0x55939d392fe2 
"!bms_is_empty(present_parts)", errorType=0x55939d392e73 "FailedAssertion", 
fileName=0x55939d392f44 "partprune.c", lineNumber=588)
at assert.c:69
No locals.
#3  0x55939cf87d2e in make_partitionedrel_pruneinfo (root=0x55939e84ac40, 
parentrel=0x55939e86c1a0, relid_subplan_map=0x55939e875d78, 
partrelids=0x55939e874990, prunequal=0x55939e875d20, 
matchedsubplans=0x7fffdc69a578) at partprune.c:588
subpart = 0x55939e86c1a0
nparts = 1
relid_map = 0x55939e876228
pinfo = 0x55939e875320
present_parts = 0x0
subplan_map = 0x55939e8761e8
subpart_map = 0x55939e876208
lc__state = {l = 0x55939e876190, i = 0}
targetpart = 0x55939e86c1a0
pinfolist = 0x55939e876190
doruntimeprune = true
relid_subpart_map = 0x55939e875da0
subplansfound = 0x0
lc = 0x55939e8761a8
rti = -2
i = 1
#4  0x55939cf8757c in make_partition_pruneinfo (root=0x55939e84ac40, 
parentrel=0x55939e86c1a0, subpaths=0x55939e874bf0, 
partitioned_rels=0x55939e874d38, prunequal=0x55939e875d20) at partprune.c:274
partrelids = 0x55939e874990
pinfolist = 0x55939e875d20
matchedsubplans = 0x0
lc__state = {l = 0x55939e874d38, i = 0}
pruneinfo = 0x55939cec2640 
allmatchedsubplans = 0x0
relid_subplan_map = 0x55939e875d78
lc = 0x55939e874d50
prunerelinfos = 0x0
i = 2
#5  0x55939cf213d6 in create_append_plan (root=0x55939e84ac40, 
best_path=0x55939e874ca0, flags=6) at createplan.c:1250
prunequal = 0x55939e875d20
plan = 0x55939e84a130
tlist = 0x55939e875928
orig_tlist_length = 1
tlist_was_changed = false
pathkeys = 0x55939e871288
subplans = 0x55939e875cc8
subpaths = 0x0
rel = 0x55939e86c1a0
partpruneinfo = 0x0
nodenumsortkeys = 1
nodeSortColIdx = 0x55939e8731f8
nodeSortOperators = 0x55939e875980
nodeCollations = 0x55939e8759a0
nodeNullsFirst = 0x55939e8759c0
__func__ = "create_append_plan"
#6  0x55939cf1ff5b in create_plan_recurse (root=0x55939e84ac40, 
best_path=0x55939e874ca0, flags=6) at createplan.c:405
plan = 0x0
__func__ = "create_plan_recurse"
#7  0x55939cf2385f in create_windowagg_plan (root=0x55939e84ac40, 
best_path=0x55939e8754c0) at createplan.c:2454
plan = 0x0
wc = 0x55939e82a4c0
numPart = 1
numOrder = 0
subplan = 0x55930001
tlist = 0x55939e82a748
partNumCols = 21907
partColIdx = 0x0
partOperators = 0x0
partCollations = 0x55930001
ordNumCols = 0
ordColIdx = 0x0
ordOperators = 0x55939e82a558
ordCollations = 0x55939e75f468
lc = 0x0
#8  0x55939cf201f8 in create_plan_recurse (root=0x55939e84ac40, 
best_path=0x55939e8754c0, flags=1) at createplan.c:492
plan = 0x55939e875558
__func__ = "create_plan_recurse"
#9  0x55939cf1fe1a in create_plan (root=0x55939e84ac40, 
best_path=0x55939e8754c0) at createplan.c:333
plan = 0x55939e8754c0
__func__ = "create_plan"
#10 0x55939cf32737 in standard_planner (parse=0x55939e75f468, 
query_string=0x55939e75d6d0 "select pg_catalog.min(100) over (partition by 
ref_1.a)\n  from ref_1 \n where (ref_1.a <= (select foo from generate_series(1, 
10) foo order by 1 limit 1)) ", cursorOptions=256, 
boundParams=0x0) at planner.c:409
result = 0x55939d231027 
glob = 0x55939e82a558
tuple_fraction = 0
root = 0x55939e84ac40
final_rel = 0x55939e875558
best_path = 0x55939e8754c0
top_plan = 0x7fffdc69aa50
lp = 0x7f5e035e6f90
lr = 0x55939cec2157 
#11 0x55939cf324a8 in planner (parse=0x55939e75f468, 
query_string=0x

Re: Assertion fail with window function and partitioned tables

2021-01-29 Thread Jaime Casanova
On Thu, Jan 28, 2021 at 9:45 PM Jaime Casanova
 wrote:
>
> Hi,
>
> Just found another crash.
>
> Seems that commit a929e17e5a8c9b751b66002c8a89fdebdacfe194 broke something.
> Attached is a minimal case and the stack trace.
>

Hi,

It seems this is the same issue Andreas reported in
https://www.postgresql.org/message-id/87sg8tqhsl@aurora.ydns.eu so
consider this one noise.


--
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL



--




typo in "Determine XID horizons" comment in procarray.c

2021-02-03 Thread Jaime Casanova
Hi,

Second paragraph of this comment (procarray.c:1604) says:
* See the definition of ComputedXidHorizonsResult for the various computed

It should say ComputeXidHorizonsResult (it has an extra "d" in Computed)

--
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Is it correct to update db state in control file as "shutting down" during end-of-recovery checkpoint?

2022-01-09 Thread Jaime Casanova
On Thu, Dec 09, 2021 at 07:41:52AM +0530, Bharath Rupireddy wrote:
> On Wed, Dec 8, 2021 at 11:02 PM Bossart, Nathan  wrote:
> >
> > On 12/8/21, 3:29 AM, "Bharath Rupireddy" 
> >  wrote:
> > > Thanks for your thoughts. I'm fine either way, hence attaching two
> > > patches here with and I will leave it for the committer 's choice.
> > > 1) v1-0001-Add-DB_IN_END_OF_RECOVERY_CHECKPOINT-state-for-co.patch --
> > > adds new db state DB_IN_END_OF_RECOVERY_CHECKPOINT for control file.
> > > 2) v1-0001-Skip-control-file-db-state-updation-during-end-of.patch --
> > > just skips setting db state to DB_SHUTDOWNING and DB_SHUTDOWNED in
> > > case of end-of-recovery checkpoint so that the state will be
> > > DB_IN_CRASH_RECOVERY which then changes to DB_IN_PRODUCTION.
> >
> > I've bumped this one to ready-for-committer.  For the record, my
> > preference is the second patch (for the reasons discussed upthread).
> > Both patches might benefit from a small comment or two, too.
> 
> Thanks. I've added a comment to the patch
> v2-0001-Skip-control-file-db-state-updation-during-end-of.patch. The
> other patch remains the same as the new state
> DB_IN_END_OF_RECOVERY_CHECKPOINT introduced there says it all.
> 

AFAIU it is one patch or the other but not both, isn't it?

This habit of putting two conflicting versions of a patch on the same
thread causes http://cfbot.cputube.org/ to fail.

Now, I do think that the second patch, the one that just skips updating
the state in the control file, is the way to go. The other patch adds too
much complexity for a small return.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: pg_upgrade parallelism

2022-01-11 Thread Jaime Casanova
On Wed, Nov 17, 2021 at 08:04:41PM +, Jacob Champion wrote:
> On Wed, 2021-11-17 at 14:44 -0500, Jaime Casanova wrote:
> > I'm trying to add more parallelism by copying individual segments
> > of a relfilenode in different processes. Does anyone one see a big
> > problem in trying to do that? I'm asking because no one did it before,
> > that could not be a good sign.
> 
> I looked into speeding this up a while back, too. For the use case I
> was looking at -- Greenplum, which has huge numbers of relfilenodes --
> spinning disk I/O was absolutely the bottleneck and that is typically
> not easily parallelizable. (In fact I felt at the time that Andres'
> work on async I/O might be a better way forward, at least for some
> filesystems.)
> 
> But you mentioned that you were seeing disks that weren't saturated, so
> maybe some CPU optimization is still valuable? I am a little skeptical
> that more parallelism is the way to do that, but numbers trump my
> skepticism.
> 

Sorry for being unresponsive for so long. I did add a new --jobs-per-disk
option; this is a simple patch I made for the customer, and I ignored all
the WIN32 parts because I don't know anything about that area. I wanted
to complete that part, but it has been in the same state for two months now.

AFAIU, it seems there is a different struct for the parameters of the
function that will be called on the thread.

I also decided to create a new reap_*_child() function to use with the
new parameter.

Now, the customer went from copying 25TB in 6 hours to 4h 45min, which is
an improvement of about 20%!


> > - why do we read()/write() at all? Is there not a faster way of copying
> >   the file? I'm asking because I don't actually know.
> 
> I have idly wondered if something based on splice() would be faster,
> but I haven't actually tried it.
> 

I tried and got no better result.

> But there is now support for copy-on-write with the clone mode, isn't
> there? Or are you not able to take advantage of it?
> 

That's sadly not possible because those are different disks, and yes, I
know that's something pg_upgrade normally doesn't allow, but it is not
difficult to make it happen.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
>From 0d04f79cb51d6be0ced9c6561cfca5bfe18c4bdd Mon Sep 17 00:00:00 2001
From: Jaime Casanova 
Date: Wed, 15 Dec 2021 12:14:44 -0500
Subject: [PATCH] Add --jobs-per-disk option to allow multiple processes per
 tablespace

This option is independent of the --jobs one. It will fork new processes
to copy the different segments of a relfilenode in parallel.
---
 src/bin/pg_upgrade/option.c  |  8 ++-
 src/bin/pg_upgrade/parallel.c| 93 
 src/bin/pg_upgrade/pg_upgrade.h  |  4 ++
 src/bin/pg_upgrade/relfilenode.c | 59 +++-
 4 files changed, 139 insertions(+), 25 deletions(-)

diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index 66fe16964e..46b1913a42 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -54,6 +54,7 @@ parseCommandLine(int argc, char *argv[])
 		{"link", no_argument, NULL, 'k'},
 		{"retain", no_argument, NULL, 'r'},
 		{"jobs", required_argument, NULL, 'j'},
+		{"jobs-per-disks", required_argument, NULL, 'J'},
 		{"socketdir", required_argument, NULL, 's'},
 		{"verbose", no_argument, NULL, 'v'},
 		{"clone", no_argument, NULL, 1},
@@ -103,7 +104,7 @@ parseCommandLine(int argc, char *argv[])
 	if (os_user_effective_id == 0)
 		pg_fatal("%s: cannot be run as root\n", os_info.progname);
 
-	while ((option = getopt_long(argc, argv, "d:D:b:B:cj:kNo:O:p:P:rs:U:v",
+	while ((option = getopt_long(argc, argv, "d:D:b:B:cj:J:kNo:O:p:P:rs:U:v",
  long_options, &optindex)) != -1)
 	{
 		switch (option)
@@ -132,6 +133,10 @@ parseCommandLine(int argc, char *argv[])
 user_opts.jobs = atoi(optarg);
 break;
 
+			case 'J':
+user_opts.jobs_per_disk = atoi(optarg);
+break;
+
 			case 'k':
 user_opts.transfer_mode = TRANSFER_MODE_LINK;
 break;
@@ -291,6 +296,7 @@ usage(void)
 	printf(_("  -d, --old-datadir=DATADIR old cluster data directory\n"));
 	printf(_("  -D, --new-datadir=DATADIR new cluster data directory\n"));
 	printf(_("  -j, --jobs=NUM                number of simultaneous processes or threads to use\n"));
+	printf(_("  -J, --jobs-per-disk=NUM       number of simultaneous processes or threads to use per tablespace\n"));
 	printf(_("  -k, --link                    link instead of copying files to new cluster\n"));
 	printf(_("  -N, --no-sync   

Re: support for MERGE

2022-01-11 Thread Jaime Casanova
On Wed, Dec 22, 2021 at 11:35:56AM +, Simon Riggs wrote:
> On Mon, 15 Nov 2021 at 22:45, Alvaro Herrera  wrote:
> >
> > On 2021-Nov-15, Alvaro Herrera wrote:
> >
> > > Thanks everyone for the feedback.  I attach a version with the fixes
> > > that were submitted, as well as some additional changes:
> >
> > Attachment failure.
> 
> I rebased this, please check.
> 

Hi,

I found two crashes; actually, I found them in the original patch Álvaro
sent in November, but I just checked that they still exist.

I configured with:

CFLAGS="-ggdb -Og -g3 -fno-omit-frame-pointer" ./configure 
--prefix=/opt/var/pgdg/15/merge --enable-debug --enable-depend --enable-cassert 
--with-llvm --enable-tap-tests --with-pgport=54315

And tested on the regression database.

Attached are the SQL files for the crashes and their respective stack traces.
FWIW, the second crash doesn't appear to be caused by the MERGE patch,
but I cannot trigger it any other way.


-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL


merge1.sql
Description: application/sql
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140726414547632, 2, 6, 5309120, 
94249822928768, 4611686018427388799, 140446649031334, 0, 281470681751456, 0, 0, 
0, 0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7fbc4867c535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {0, 0, 0, 0, 0, 140446646788085, 2, 4062868317402242624, 
7017002064575424051, 94249829619760, 
  7003715557358071819, 674, 12862997737215894016, 140726414547872, 
140726414547888, 140726414548720}}, sa_flags = 22, sa_restorer = 0x7ffd6bf31bb0}
sigs = {__val = {32, 0 }}
#2  0x55b83f88c8b5 in ExceptionalCondition 
(conditionName=conditionName@entry=0x55b83f9d61c8 "resultnum >= 0 && resultnum 
< resultslot->tts_tupleDescriptor->natts", 
errorType=errorType@entry=0x55b83f8e700b "FailedAssertion", 
fileName=fileName@entry=0x55b83f9d5830 "execExprInterp.c", 
lineNumber=lineNumber@entry=674) at assert.c:69
No locals.
#3  0x55b83f58ea5f in ExecInterpExpr (state=0x55b840b0ac18, 
econtext=0x55b840b0aa10, isnull=0x7ffd6bf31fff) at execExprInterp.c:674
resultnum = 
op = 0x55b840b0adb0
resultslot = 0x55b840b0ab40
innerslot = 0x55b840b08660
outerslot = 0x0
scanslot = 0x0
dispatch_table = {0x55b83f58e5c4 , 0x55b83f58e5e4 
, 0x55b83f58e616 , 0x55b83f58e648 
, 0x55b83f58e67a , 
  0x55b83f58e6db , 0x55b83f58e73c 
, 0x55b83f58e79d , 0x55b83f58e7bc 
, 0x55b83f58e7db , 
  0x55b83f58e7fa , 0x55b83f58e815 
, 0x55b83f58e8b7 , 0x55b83f58e959 
, 0x55b83f58e9fb , 
  0x55b83f58ea5f , 0x55b83f58eaeb 
, 0x55b83f58eb0c , 0x55b83f58eb39 
, 0x55b83f58eb92 , 
  0x55b83f58ebad , 0x55b83f58ebc8 
, 0x55b83f58ebcf , 0x55b83f58ec16 
, 0x55b83f58ec4c , 
  0x55b83f58ec53 , 0x55b83f58ec9a 
, 0x55b83f58ecd0 , 0x55b83f58ecf5 
, 0x55b83f58ed43 , 
  0x55b83f58ed64 , 0x55b83f58ed9a 
, 0x55b83f58edd0 , 0x55b83f58ee10 
, 0x55b83f58ee3f , 
  0x55b83f58ee6e , 0x55b83f58ee89 
, 0x55b83f58eea4 , 0x55b83f58eecb 
, 0x55b83f58ef0d , 
  0x55b83f58ef4f , 0x55b83f58ef76 
, 0x55b83f58ef91 , 0x55b83f58efac 
, 0x55b83f58efc5 , 
  0x55b83f58f053 , 0x55b83f58f08b 
, 0x55b83f58f1c9 , 0x55b83f58f24a 
, 0x55b83f58f2ba , 
  0x55b83f58f323 , 0x55b83f58f33a 
, 0x55b83f58f345 , 0x55b83f58f35c 
, 0x55b83f58f373 , 
  0x55b83f58f38e , 0x55b83f58f3a5 
, 0x55b83f58f467 , 0x55b83f58f511 
, 0x55b83f58f528 , 
  0x55b83f58f540 , 0x55b83f58f558 
, 0x55b83f58f570 , 0x55b83f58f5a8 
, 0x55b83f58f5aa , 
  0x55b83f58f5aa , 0x55b83f58f00c 
, 0x55b83f58f604 , 0x55b83f58f618 
, 0x55b83f58f5c0 , 
  0x55b83f58f5d8 , 0x55b83f58f5ec 
, 0x55b83f58f62c , 0x55b83f58f643 
, 0x55b83f58f69d , 
  0x55b83f58f6b4 , 0x55b83f58f715 
, 0x55b83f58f730 , 0x55b83f58f75b 
, 0x55b83f58f7d9 , 
  0x55b83f58f829 , 0x55b83f58f874 
, 0x55b83f58f8e4 , 0x55b83f58f9f6 
, 0x55b83f58faef , 
  0x55b83f58fbe1 , 0x55b83f58fd30 
, 0x55b83f58fe61 , 0x55b83f58ff85 
, 0x55b83f58ffa0 , 
  0x55b83f58ffbb }
#4  0x55b83f58ae18 in ExecInterpExprStillValid (state=0x55b840b0ac18, 
econtext=0x55b840b0aa10, isNull=0x7ffd6bf31fff) at execExprInterp.c:1824
No locals.
#5  0x55b83f597f7b in ExecEvalExprSwitchContext (isNull=0x7ffd6bf31fff, 
econtext=0x55b840b0aa10, state=0x55b840b0ac18) at 
../../../src/include/executor/executor.h:339
retDatum = 
oldContext = 0x55b840ad7b40
retDatum = 
oldContext = 
#6  ExecProject (projInfo=0x55b840b0ac10) at 
../../../src/include/executor/executor.h:373
econtext = 0x55b840b0aa10
state = 0x55b840b0

Re: SQL-standard function body

2021-03-23 Thread Jaime Casanova
On Fri, Mar 19, 2021 at 8:49 AM Peter Eisentraut
 wrote:
>
> Right.  Here is a new patch with that fix added and a small conflict
> resolved.

Great.

It seems print_function_sqlbody() (src/backend/utils/adt/ruleutils.c:3292)
is not protected against receiving a function that doesn't have a
standard SQL body; instead it has an assert that gets hit with something
like this:

CREATE FUNCTION foo() RETURNS int LANGUAGE SQL AS $$ SELECT 1 $$;
SELECT pg_get_function_sqlbody('foo'::regproc);
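
For contrast, calling it on a function that does have a standard-SQL body
returns the body as expected (a quick sketch; the function name is just
illustrative):

CREATE FUNCTION foo2() RETURNS int LANGUAGE SQL
BEGIN ATOMIC
  SELECT 1;
END;
SELECT pg_get_function_sqlbody('foo2'::regproc);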

--
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL




Re: [HACKERS] Custom compression methods

2021-03-24 Thread Jaime Casanova
On Fri, Mar 19, 2021 at 2:44 PM Robert Haas  wrote:
>
> I committed the core patch (0003) with a bit more editing.  Let's see
> what the buildfarm thinks.
>

I think this is bbe0a81db69bd10bd166907c3701492a29aca294, right?
It introduced a new assertion failure; steps to reproduce:

"""
create table t1 (col1 text, col2 text);
create unique index on t1 ((col1 || col2));
insert into t1 values((select array_agg(md5(g::text))::text from
generate_series(1, 256) g), version());
"""

Attached is a backtrace from current HEAD

-- 
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140726735034080, 2, 6, 6515006, 
94155769016304, 4611686018427388799, 140459852548774, 0, 281470681751456, 0, 0, 
0, 0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7fbf5b657535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {0, 0, 0, 0, 0, 140459850305525, 2, 3630241276386869248, 
7003431909949847861, 94155769016304, 
  7003772758621645568, 0, 1523958137291918592, 140726735034320, 0, 
140726735035184}}, sa_flags = 1495953392, sa_restorer = 0x0}
sigs = {__val = {32, 0 }}
#2  0x55a25992a983 in ExceptionalCondition (conditionName=0x55a2599a9248 
"CompressionMethodIsValid(cmethod)", errorType=0x55a2599a91d2 
"FailedAssertion", fileName=0x55a2599a91c0 "toast_internals.c", 
lineNumber=56) at assert.c:69
No locals.
#3  0x55a2592c0709 in toast_compress_datum (value=94155801611904, cmethod=0 
'\000') at toast_internals.c:56
tmp = 0x0
valsize = 1528257472
cmid = TOAST_INVALID_COMPRESSION_ID
__func__ = "toast_compress_datum"
#4  0x55a2592b9532 in index_form_tuple (tupleDescriptor=0x7fbf524881a8, 
values=0x7ffd7f0d5e10, isnull=0x7ffd7f0d5df0) at indextuple.c:106
cvalue = 140726735035664
att = 0x7fbf524881c0
tp = 0x25b19e830 
tuple = 0x55a25b1a4608
size = 94155801511432
data_size = 0
hoff = 0
i = 0
infomask = 0
hasnull = false
tupmask = 0
numberOfAttributes = 1
untoasted_values = {94155801611904, 94155771761321, 140726735035376, 
94155801506904, 94155779258656, 140726735035488, 94155779258656, 
140726735035488, 0, 140459848948121, 140726735035488, 
  94155779258656, 140726735035520, 0, 140726735035520, 94155769016304, 
94155801611904, 8589934592, 94155801512576, 94155801512544, 94155801511432, 
94155801506904, 94155771775058, 94155801352832, 0, 
  94155779258656, 6426549472, 94155771784441, 94155801487408, 0, 0, 0}
untoasted_free = {false, false, false, false, false, false, false, 
false, false, 4, false, false, false, false, false, false, 8, 69, 26, 91, 162, 
85, false, false, 192, 91, 23, 91, 162, 85, false, 
  false}
__func__ = "index_form_tuple"
#5  0x55a259355563 in btinsert (rel=0x7fbf5248b3f0, values=0x7ffd7f0d5e10, 
isnull=0x7ffd7f0d5df0, ht_ctid=0x55a25b19e860, heapRel=0x7fbf524840a8, 
checkUnique=UNIQUE_CHECK_YES, indexUnchanged=false, 
indexInfo=0x55a25b17d968) at nbtree.c:196
result = false
itup = 0x55a2593eb96d 
#6  0x55a259341253 in index_insert (indexRelation=0x7fbf5248b3f0, 
values=0x7ffd7f0d5e10, isnull=0x7ffd7f0d5df0, heap_t_ctid=0x55a25b19e860, 
heapRelation=0x7fbf524840a8, checkUnique=UNIQUE_CHECK_YES, 
indexUnchanged=false, indexInfo=0x55a25b17d968) at indexam.c:193
__func__ = "index_insert"
#7  0x55a259551d6b in ExecInsertIndexTuples (resultRelInfo=0x55a25b17d0a8, 
slot=0x55a25b19e830, estate=0x55a25b175ce0, update=false, noDupErr=false, 
specConflict=0x0, arbiterIndexes=0x0)
at execIndexing.c:416
applyNoDupErr = false
checkUnique = UNIQUE_CHECK_YES
indexRelation = 0x7fbf5248b3f0
indexInfo = 0x55a25b17d968
indexUnchanged = false
satisfiesConstraint = false
tupleid = 0x55a25b19e860
result = 0x0
i = 0
numIndices = 1
relationDescs = 0x55a25b17d900
heapRelation = 0x7fbf524840a8
indexInfoArray = 0x55a25b17d920
econtext = 0x55a25b1a2f88
values = {94155801611904, 94155801320384, 23002747632, 94155801505976, 
94155801505952, 94155801320384, 140726735036016, 94155776090255, 
94155800974064, 94155801505976, 140726735036048, 94155801320384, 
  140726735036048, 94155769089800, 5801161712, 94155801505976, 
140726735036192, 94155769471556, 0, 0, 94155801603232, 140459695947944, 
140726735036224, 6764221304040416, 1, 23351616, 140459705036672, 
  856226916400, 94155801505976, 2325076142599, 549755813894, 0}
isnull = {false, true, false, false, false, false, false,

cursor already in use, UPDATE RETURNING bug?

2021-03-30 Thread Jaime Casanova
Hi,

Just noted an interesting behaviour when using a cursor in a function
called from UPDATE ... RETURNING (note that INSERT ... RETURNING has no
problem).

I have seen this problem in all versions I tested (9.4 through master).
Steps to reproduce:

Prepare the test:
```
create table t1 as select random() * foo i from generate_series(1, 100) foo;
create table t2 as select random() * foo i from generate_series(1, 100) foo;

CREATE OR REPLACE FUNCTION cursor_bug()
 RETURNS integer
 LANGUAGE plpgsql
AS $function$
declare
  c1 cursor (p1 int) for select count(*) from t1 where i = p1;
  n int4;
begin
  open c1 (77);
  fetch c1 into n;
  return n;
end $function$
;
```

-- this ends fine
insert into t2 values(5) returning cursor_bug() as c1;
 c1

  0
(1 row)

-- this fails
update t2 set i = 5 returning cursor_bug() as c1;
ERROR:  cursor "c1" already in use
CONTEXT:  PL/pgSQL function cursor_bug() line 6 at OPEN

-- 
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL




Re: cursor already in use, UPDATE RETURNING bug?

2021-03-31 Thread Jaime Casanova
On Wed, Mar 31, 2021 at 7:50 AM Ashutosh Bapat
 wrote:
>
> On Wed, Mar 31, 2021 at 6:09 AM Jaime Casanova
>
> >
> > -- this fails
> > update t2 set i = 5 returning cursor_bug() as c1;
> > ERROR:  cursor "c1" already in use
> > CONTEXT:  PL/pgSQL function cursor_bug() line 6 at OPEN
>
> but that's called as many times as the number of rows in t2 in the same
> transaction. The first row will go fine. For the second row it will
> find c1 is already open. Shouldn't cursor_bug() close c1 at the end?
> Is it intended to be kept open when the function finishes? May be you
> are expecting it to be closed automatically when the function
> finishes. But that's not what is documented at
> https://www.postgresql.org/docs/13/plpgsql-cursors.html.
>

Now that I see it again after sleeping, I can see you're right! Sorry
for the noise.
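
For the record, adding a CLOSE before returning, as suggested, makes the
UPDATE ... RETURNING case work as well; a sketch of the corrected function:

CREATE OR REPLACE FUNCTION cursor_bug()
 RETURNS integer
 LANGUAGE plpgsql
AS $function$
declare
  c1 cursor (p1 int) for select count(*) from t1 where i = p1;
  n int4;
begin
  open c1 (77);
  fetch c1 into n;
  -- the bound cursor's portal is not closed automatically when the
  -- function returns, so close it explicitly
  close c1;
  return n;
end $function$;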

-- 
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL




Crash in BRIN minmax-multi indexes

2021-03-31 Thread Jaime Casanova
Hi,

Just found $SUBJECT involving time with time zone and a subselect. I
still haven't narrowed it down to a minimal table/index schema, but if
you run this query on the regression database it will crash.

```
update public.brintest_multi set
  timetzcol = (select tz from generate_series('2021-01-01'::timestamp
with time zone, '2021-01-31', '5 days') tz limit 1)
;
```

Attached is a backtrace. Let me know if you need more information.

--
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140736618859920, 2, 6, 6570493, 
93847671406576, 4611686018427388799, 140140941466278, 0, 
281470681751456, 0, 0, 0, 0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f751ad08535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {0, 0, 0, 0, 0, 140140939223029, 
  2, 7004563297194213396, 7003431879952131129, 93847671406576, 
7003772732868633408, 0, 18123550304647508992, 
  140736618860160, 93847715200152, 140736618861024}}, sa_flags = 
-1658978320, sa_restorer = 0x555a9fba3c98}
sigs = {__val = {32, 0 }}
#2  0x555a9d872557 in ExceptionalCondition (conditionName=0x555a9d8ec6c3 
"delta >= 0", 
errorType=0x555a9d8ebf74 "FailedAssertion", fileName=0x555a9d8ebf60 
"brin_minmax_multi.c", lineNumber=2095) at assert.c:69
No locals.
#3  0x555a9d1e9933 in brin_minmax_multi_distance_timetz 
(fcinfo=0x7fffcc2c9250) at brin_minmax_multi.c:2095
delta = -678500
ta = 0x555a9fb99720
tb = 0x555a9fb9c768
#4  0x555a9d87c6f1 in FunctionCall2Coll (flinfo=0x555a9fbd7c88, 
collation=0, arg1=93847715157792, arg2=93847715170152)
at fmgr.c:1163
fcinfodata = {fcinfo = {flinfo = 0x555a9fbd7c88, context = 0x0, 
resultinfo = 0x0, fncollation = 0, isnull = false, 
nargs = 2, args = 0x7fffcc2c9270}, 
  fcinfo_data = "\210|\275\237ZU", '\000' , "\002\000 
\227\271\237ZU\000\000\000\000\000\000\000\000\000\000hǹ\237ZU\000\000\000\000\000@\317#\270A"}
fcinfo = 0x7fffcc2c9250
result = 4735574381219020800
__func__ = "FunctionCall2Coll"
#5  0x555a9d1e85b7 in build_distances (distanceFn=0x555a9fbd7c88, 
colloid=0, eranges=0x555a9fba3b50, neranges=10)
at brin_minmax_multi.c:1350
a1 = 93847715157792
a2 = 93847715170152
r = 4735574381219020800
i = 4
ndistances = 9
distances = 0x555a9fba3c68
#6  0x555a9d1e912f in compactify_ranges (bdesc=0x555a9fbd72a8, 
ranges=0x555a9fc02df8, max_values=32)
at brin_minmax_multi.c:1820
cmpFn = 0x555a9fbd7cc0
distanceFn = 0x555a9fbd7c88
eranges = 0x555a9fba3b50
neranges = 10
distances = 0x555a0010
ctx = 0x555a9fba3a30
oldctx = 0x555a9fb9b9f0
#7  0x555a9d1ea409 in brin_minmax_multi_serialize (bdesc=0x555a9fbd72a8, 
src=93847715589624, dst=0x555a9fb9bea8)
at brin_minmax_multi.c:2333
ranges = 0x555a9fc02df8
s = 0x555a9fb9d688
#8  0x555a9d1f0601 in brin_form_tuple (brdesc=0x555a9fbd72a8, blkno=0, 
tuple=0x555a9fb9bb10, size=0x7fffcc2c94e8)
at brin_tuple.c:165
datumno = 1
values = 0x555a9fb9c108
nulls = 0x555a9fb9c920
anynulls = true
rettuple = 0x89fc97d00
keyno = 15
idxattno = 15
phony_infomask = 0
phony_nullbitmap = 0x555a9fb9c9a0 "\177\177\177~\177\177\177\177"
len = 93847671475524
hoff = 2048
data_len = 93847715170728
i = 32767
untoasted_values = 0x555a9fb9c220
nuntoasted = 0
#9  0x555a9d1e09c1 in brininsert (idxRel=0x555a9fc97dd0, 
values=0x7fffcc2c9640, nulls=0x7fffcc2c9620, heaptid=0x555a9fb92ea8, 
heapRel=0x555a9fc83540, checkUnique=UNIQUE_CHECK_NO, indexUnchanged=false, 
indexInfo=0x555a9fbac900) at brin.c:281
lp = 0x7f751a1abfb8
origsz = 1592
newsz = 93847674317391
page = 0x7f751a1abf80 "\004"
origtup = 0x555a9fb9c9c0
newtup = 0x7fffcc2c9520
samepage = false
need_insert = true
off = 9
brtup = 0x7f751a1ac1d8
dtup = 0x555a9fb9bb10
pagesPerRange = 1
origHeapBlk = 0
heapBlk = 0
bdesc = 0x555a9fbd72a8
revmap = 0x555a9fb93248
buf = 16301
tupcxt = 0x555a9fb9b9f0
oldcxt = 0x555a9fbafa80
autosummarize = false
__func__ = "brininsert"
#10 0x555a9d280a1e in index_insert (indexRelation=0x555a9fc97dd0, 
values=0x7fffcc2c9640, isnull=0x7fffcc2c9620, 
heap_t_ctid=0x555a9fb92ea8, heapRelation=0x555a9fc83540, 
checkUnique=UNIQUE_CHECK_NO, indexUnchange

Re: Crash in BRIN minmax-multi indexes

2021-03-31 Thread Jaime Casanova
On Wed, Mar 31, 2021 at 5:25 PM Tomas Vondra
 wrote:
>
> Hi,
>
> I think I found the issue - it's kinda obvious, really. We need to
> consider the timezone, because the "time" parts alone may be sorted
> differently. The attached patch should fix this, and it also fixes a
> similar issue in the inet data type.
>

ah! yeah! obvious... if you say so ;)

> As for why the regression tests did not catch this, it's most likely
> because the data is likely generated in "nice" ordering, or something
> like that. I'll see if I can tweak the ordering to trigger these issues
> reliably, and I'll do a bit more randomized testing.
>
> There's also the question of rounding errors, which I think might cause
> random assert failures (but in practice it's harmless, in the worst case
> we'll merge the ranges a bit differently).
>
>

I can confirm this fixes the crash in the query I showed and the original case.

-- 
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL




Re: Crash in BRIN minmax-multi indexes

2021-03-31 Thread Jaime Casanova
On Wed, Mar 31, 2021 at 6:19 PM Jaime Casanova
 wrote:
>
> On Wed, Mar 31, 2021 at 5:25 PM Tomas Vondra
>  wrote:
> >
> > Hi,
> >
> > I think I found the issue - it's kinda obvious, really. We need to
> > consider the timezone, because the "time" parts alone may be sorted
> > differently. The attached patch should fix this, and it also fixes a
> > similar issue in the inet data type.
> >
>
> ah! yeah! obvious... if you say so ;)
>
> > As for why the regression tests did not catch this, it's most likely
> > because the data is likely generated in "nice" ordering, or something
> > like that. I'll see if I can tweak the ordering to trigger these issues
> > reliably, and I'll do a bit more randomized testing.
> >
> > There's also the question of rounding errors, which I think might cause
> > random assert failures (but in practice it's harmless, in the worst case
> > we'll merge the ranges a bit differently).
> >
> >
>
> I can confirm this fixes the crash in the query I showed and the original 
> case.
>

But I found another, similar issue.

```
update public.brintest_multi set
  intervalcol = (select pg_catalog.avg(intervalcol) from public.brintest_bloom)
;
```

BTW, I can reproduce it just by executing "make installcheck" and then
immediately executing that query.

--
Jaime Casanova
Director de Servicios Profesionales
SYSTEMGUARDS - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140732635565408, 2, 6, 6574673, 
94040556822512, 4611686018427388799, 140208685218470, 0, 281470681751456, 0, 0, 
0, 0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f84e0a82535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {0, 0, 0, 0, 0, 140208682975221, 2, 3978988772531372052, 
7003432107538527544, 94040556822512, 
  7003767243900412736, 0, 11642884921704132352, 140732635565648, 0, 
140732635566512}}, sa_flags = -2047090704, sa_restorer = 0x0}
sigs = {__val = {32, 0 }}
#2  0x558786650fb6 in ExceptionalCondition (conditionName=0x5587866cb6c3 
"delta >= 0", errorType=0x5587866caf74 "FailedAssertion", 
fileName=0x5587866caf60 "brin_minmax_multi.c", lineNumber=2166)
at assert.c:69
No locals.
#3  0x558785fc7cd4 in brin_minmax_multi_distance_interval 
(fcinfo=0x7ffedec05220) at brin_minmax_multi.c:2166
delta = -82532.55908594
ia = 0x558787f13738
ib = 0x558787efdc20
result = 0x558787f207e0
__func__ = "brin_minmax_multi_distance_interval"
#4  0x55878665b3ae in FunctionCall2Coll (flinfo=0x558787efbbf0, 
collation=0, arg1=94040589678392, arg2=94040589589536) at fmgr.c:1163
fcinfodata = {fcinfo = {flinfo = 0x558787efbbf0, context = 0x0, 
resultinfo = 0x0, fncollation = 0, isnull = false, nargs = 2, args = 
0x7ffedec05240}, 
  fcinfo_data = "\360\273U", '\000' , 
"\002\000\070\067\361\207\207U\000\000\000\000\000\000\000\000\000\000 
\334U\000\000\000\a\362\207\207U\000"}
fcinfo = 0x7ffedec05220
result = 94040589731280
__func__ = "FunctionCall2Coll"
#5  0x558785fc65b7 in build_distances (distanceFn=0x558787efbbf0, 
colloid=0, eranges=0x558787f206f0, neranges=5) at brin_minmax_multi.c:1350
a1 = 94040589678392
a2 = 94040589589536
r = 94040556857001
i = 0
ndistances = 4
distances = 0x558787f20788
#6  0x558785fc712f in compactify_ranges (bdesc=0x558787efb428, 
ranges=0x558787f12720, max_values=32) at brin_minmax_multi.c:1820
cmpFn = 0x558787efbc28
distanceFn = 0x558787efbbf0
eranges = 0x558787f206f0
neranges = 5
distances = 0x10055870008
ctx = 0x558787f205d0
oldctx = 0x558787efd560
#7  0x558785fc8483 in brin_minmax_multi_serialize (bdesc=0x558787efb428, 
src=94040589674272, dst=0x558787efda10) at brin_minmax_multi.c:2354
ranges = 0x558787f12720
s = 0x558787efe7a8
#8  0x558785fce67b in brin_form_tuple (brdesc=0x558787efb428, blkno=3, 
tuple=0x558787efd680, size=0x7ffedec054b8) at brin_tuple.c:165
datumno = 1
values = 0x558787efe0f8
nulls = 0x558787efdc48
anynulls = false
rettuple = 0x7d7924400
keyno = 14
idxattno = 14
phony_infomask = 0
phony_nullbitmap = 0x558787efe210 "\177\177\177~\177\177\177\177"
len = 94040556891582
hoff = 1024
data_len = 94040589589704
i = 32766
untoasted_values = 0x558787efe230
nuntoasted = 0
#9  0x558785fbe9c1 in brininsert (idxRel=0x7f84d79244b0, 
val

Re: Crash in BRIN minmax-multi indexes

2021-04-03 Thread Jaime Casanova
On Thu, Apr 01, 2021 at 03:22:59PM +0200, Tomas Vondra wrote:
> On 4/1/21 3:09 PM, Zhihong Yu wrote:
> > Hi,
> > Can you try this patch ?
> > 
> > Thanks
> > 
> > diff --git a/src/backend/access/brin/brin_minmax_multi.c
> > b/src/backend/access/brin/brin_minmax_multi.c
> > index 70109960e8..25d6d2e274 100644
> > --- a/src/backend/access/brin/brin_minmax_multi.c
> > +++ b/src/backend/access/brin/brin_minmax_multi.c
> > @@ -2161,7 +2161,7 @@ brin_minmax_multi_distance_interval(PG_FUNCTION_ARGS)
> >      delta = 24L * 3600L * delta;
> > 
> >      /* and add the time part */
> > -    delta += result->time / (float8) 100.0;
> > +    delta += (result->time + result->zone * USECS_PER_SEC) / (float8)
> > 100.0;
> > 
> 
> That won't work, because Interval does not have a "zone" field, so this
> won't even compile.
> 
> The problem is that interval comparisons convert the value using 30 days
> per month (see interval_cmp_value), but the formula in this function
> uses 31. So either we can tweak that (seems to fix it for me), or maybe
> just switch to interval_cmp_value directly.
> 

Changing the formula to use 30-day months fixed it.
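
A quick SQL illustration of why 30 is the right factor here: interval
comparisons already treat one month as 30 days, so a distance formula
using 31 can disagree with the comparator.

SELECT '1 month'::interval = '30 days'::interval;   -- true
SELECT '1 month'::interval < '31 days'::interval;   -- true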

And I found another issue; this time it involves autovacuum, which makes
it a little more complicated to reproduce.

Currently the only stable way to reproduce it is using pgbench:

pgbench -i postgres
psql -c "CREATE INDEX ON pgbench_history USING brin (tid 
int4_minmax_multi_ops);" postgres
pgbench -c2 -j2 -T 300 -n postgres

Attached is a backtrace

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140730649395040, 2, 6, 6603273, 
9461368344, 
4611686018427388799, 139855637088934, 0, 281470681751456, 0, 0, 0, 
0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f32ad593535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {
  0, 0, 0, 0, 0, 139855634845685, 2, 7147272211649118304, 
7003431887686743910, 
  9461368344, 7003770563910163344, 0, 17199369337215936768, 
140730649395280, 0, 
  140730649396144}}, sa_flags = -144564240, sa_restorer = 0x0}
sigs = {__val = {32, 0 }}
#2  0x560cf7cbc5d0 in ExceptionalCondition (conditionName=0x560cf7d370c9 
"DatumGetBool(r)", 
errorType=0x560cf7d36f74 "FailedAssertion", fileName=0x560cf7d36f60 
"brin_minmax_multi.c", 
lineNumber=455) at assert.c:69
No locals.
#3  0x560cf7628767 in AssertCheckExpandedRanges (bdesc=0x560cf9914e08, 
colloid=0, attno=1, 
attr=0x7f32a442d858, ranges=0x560cf991bbf0, nranges=10) at 
brin_minmax_multi.c:455
r = 0
minval = 10
maxval = 10
i = 9
eq = 0x560cf9914ca0
lt = 0x560cf9914c40
#4  0x560cf762d17d in brin_minmax_multi_union (fcinfo=0x7ffe685dc520)
at brin_minmax_multi.c:2779
bdesc = 0x560cf9914e08
col_a = 0x560cf9923f88
col_b = 0x560cf9939898
colloid = 0
serialized_a = 0x560cf993d020
serialized_b = 0x560cf993d0b0
ranges_a = 0x560cf9923d48
ranges_b = 0x560cf9913be8
attno = 1
attr = 0x7f32a442d858
eranges = 0x560cf991bbf0
neranges = 10
cmpFn = 0x560cf9914c40
distanceFn = 0x560cf991aee0
distances = 0x560cf7d38d27
ctx = 0x560cf991bad0
oldctx = 0x560cf9923b00
#5  0x560cf7cc6ae2 in FunctionCall3Coll (flinfo=0x560cf991afb0, 
collation=0, 
arg1=94613726645768, arg2=94613726707592, arg3=94613726795928) at 
fmgr.c:1188
fcinfodata = {fcinfo = {flinfo = 0x560cf991afb0, context = 0x0, 
resultinfo = 0x0, 
fncollation = 0, isnull = false, nargs = 3, args = 0x7ffe685dc540}, 
  fcinfo_data = "\260\257\221\371\fV", '\000' , 
"\177\003\000\bN\221\371\fV\000\000\000;\222\371\fV\000\000\210?\222\371\fV\000\000\000\227\223\371\v\022\000\000\230\230\223\371\fV\000\000\000\257\221\371\fV\000"}
fcinfo = 0x7ffe685dc520
result = 17072012032
__func__ = "FunctionCall3Coll"
#6  0x560cf762541d in union_tuples (bdesc=0x560cf9914e08, a=0x560cf9923f60, 
b=0x560cf993d078)
at brin.c:1618
unionFn = 0x560cf991afb0
col_a = 0x560cf9923f88
col_b = 0x560cf9939898
opcinfo = 0x560cf9914bf0
keyno = 0
db = 0x560cf9939870
cxt = 0x560cf9939750
oldcxt = 0x560cf9923b00
#7  0x560cf7624eb2 in summarize_range (indexInfo=0x560cf99243a8, 
state=0x560cf9924e08, 
heapRel=0x7f32a442dfb8, heapBlk=460, heapNumBlks=463) at brin.c:1427
n

use AV worker items infrastructure for GIN pending list's cleanup

2021-04-04 Thread Jaime Casanova
Hi,

When AV worker items were introduced 4 years ago, it was suggested that
they could be used for other things, like cleaning the pending list of a
GIN index when it reaches gin_pending_list_limit instead of making a
user-visible operation pay the price.
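For reference, this is the operation that currently has to be run by hand
(or from a cron job) if you want to keep it out of the clients' way; a
minimal sketch, assuming an index named some_gin_idx:

"""
-- flush the pending list of one GIN index explicitly
select gin_clean_pending_list('some_gin_idx'::regclass);
"""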

That never happened though. So, here is a little patch for that.

Should I add an entry for this in the next commitfest?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c
index e0d9940946..305326119c 100644
--- a/src/backend/access/gin/ginfast.c
+++ b/src/backend/access/gin/ginfast.c
@@ -459,7 +459,17 @@ ginHeapTupleFastInsert(GinState *ginstate, GinTupleCollector *collector)
 	 * pending list not forcibly.
 	 */
 	if (needCleanup)
-		ginInsertCleanup(ginstate, false, true, false, NULL);
+	{
+		bool		recorded;
+
+		recorded = AutoVacuumRequestWork(AVW_GINCleanPendingList, 
+	RelationGetRelid(ginstate->index), InvalidBlockNumber);
+		if (!recorded)
+			ereport(LOG,
+	(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+	 errmsg("request for GIN clean pending list for index \"%s\" was not recorded",
+			RelationGetRelationName(ginstate->index))));
+	}
 }
 
 /*
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 23ef23c13e..ee981640e3 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -2680,6 +2680,10 @@ perform_work_item(AutoVacuumWorkItem *workitem)
 	ObjectIdGetDatum(workitem->avw_relation),
 	Int64GetDatum((int64) workitem->avw_blockNumber));
 break;
+			case AVW_GINCleanPendingList:
+DirectFunctionCall1(gin_clean_pending_list,
+	ObjectIdGetDatum(workitem->avw_relation));
+break;
 			default:
 elog(WARNING, "unrecognized work item found: type %d",
 	 workitem->avw_type);
@@ -3291,6 +3295,10 @@ autovac_report_workitem(AutoVacuumWorkItem *workitem,
 			snprintf(activity, MAX_AUTOVAC_ACTIV_LEN,
 	 "autovacuum: BRIN summarize");
 			break;
+		case AVW_GINCleanPendingList:
+			snprintf(activity, MAX_AUTOVAC_ACTIV_LEN,
+	 "autovacuum: GIN pending list cleanup");
+			break;
 	}
 
 	/*
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index aacdd0f575..2cbdde41c4 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -22,7 +22,8 @@
  */
 typedef enum
 {
-	AVW_BRINSummarizeRange
+	AVW_BRINSummarizeRange,
+	AVW_GINCleanPendingList
 } AutoVacuumWorkItemType;
 
 


Re: use AV worker items infrastructure for GIN pending list's cleanup

2021-04-05 Thread Jaime Casanova
On Mon, Apr 05, 2021 at 10:41:22AM -0300, Euler Taveira wrote:
> On Mon, Apr 5, 2021, at 3:31 AM, Jaime Casanova wrote:
> > When AV worker items where introduced 4 years ago, i was suggested that
> > it could be used for other things like cleaning the pending list of GIN
> > index when it reaches gin_pending_list_limit instead of making user
> > visible operation pay the price.
> > 
> > That never happened though. So, here is a little patch for that.
> > 
> > Should I add an entry for this on next commitfest?
> +1. It slipped through the cracks along the years. It is even suggested in the
> current docs since the fast update support.
> 
> https://www.postgresql.org/docs/current/gin-tips.html
> 

Interesting; that comment maybe needs to be rewritten. I would go for
removing the first paragraph under the gin_pending_list_limit entry
completely.

> 
> Could you provide a link from the previous discussion?
> 

It happened here:
https://www.postgresql.org/message-id/flat/20170301045823.vneqdqkmsd4as4ds%40alvherre.pgsql

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




document that brin's autosummarize parameter is off by default

2021-04-05 Thread Jaime Casanova
Hi everyone,

Just noticed that the default value of the autosummarize reloption for
brin indexes is not documented, or at least not well documented.

I added the default value in create_index.sgml, where other options
mention their own defaults, and also made a little change in brin.sgml to
make it clearer that it is disabled by default (at least the way it
was written made no sense to me, but it could be that my English is
not that good).
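For completeness, a minimal example of turning it on (table and column
names are made up):

"""
create index on measurements using brin (logdate) with (autosummarize = on);
"""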

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml
index d2f12bb605..5ac5d783d7 100644
--- a/doc/src/sgml/brin.sgml
+++ b/doc/src/sgml/brin.sgml
@@ -78,8 +78,8 @@
brin_summarize_new_values(regclass) functions;
automatically when VACUUM processes the table;
or by automatic summarization executed by autovacuum, as insertions
-   occur.  (This last trigger is disabled by default and can be enabled
-   with the autosummarize parameter.)
+   occur (the latter is disabled by default and can be enabled
+   with the autosummarize parameter).
Conversely, a range can be de-summarized using the
brin_desummarize_range(regclass, bigint) function,
which is useful when the index tuple is no longer a very good
diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml
index cc484d5b39..9d92a6b837 100644
--- a/doc/src/sgml/ref/create_index.sgml
+++ b/doc/src/sgml/ref/create_index.sgml
@@ -563,7 +563,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] 
 
  Defines whether a summarization run is invoked for the previous page
- range whenever an insertion is detected on the next one.
+ range whenever an insertion is detected on the next one. The default 
+ is OFF. 
 
 



maximum columns for brin bloom indexes

2021-04-08 Thread Jaime Casanova
Hi everyone,

When testing brin bloom indexes I noticed that we need to reduce the
PAGES_PER_RANGE parameter of the index to allow more columns in it.

Sadly, this could be a problem if you create the index before the table
grows: once it reaches some number of rows (I see the error with as few
as 1000 rows) it starts to error out.

create table t1(i int, j int);

-- uses default PAGES_PER_RANGE=128
create index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;

insert into t1 
select random()*1000, random()*1000 from generate_series(1, 
1000);
ERROR:  index row size 8968 exceeds maximum 8152 for index "t1_i_j_idx"

If instead you create the index with a smaller PAGES_PER_RANGE it goes
fine; in this case it works once you reduce it to 116 or below.

create index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) 
with (pages_per_range=116);


So, for example:
two int columns - PAGES_PER_RANGE should be max 116
three int columns - PAGES_PER_RANGE should be max 77
one int and one timestamp - PAGES_PER_RANGE should be max 121 

and so on

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: maximum columns for brin bloom indexes

2021-04-08 Thread Jaime Casanova
On Thu, Apr 08, 2021 at 12:18:36PM +0200, Tomas Vondra wrote:
> On 4/8/21 9:08 AM, Jaime Casanova wrote:
> > Hi everyone,
> > 
> > When testing brin bloom indexes I noted that we need to reduce the
> > PAGES_PER_RANGE parameter of the index to allow more columns on it.
> > 
> > Sadly, this could be a problem if you create the index before the table
> > grows, once it reaches some number of rows (i see the error as early as
> > 1000 rows) it starts error out.
> > 
> > create table t1(i int, j int);
> > 
> > -- uses default PAGES_PER_RANGE=128
> > create index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;
> > 
> > insert into t1 
> > select random()*1000, random()*1000 from generate_series(1, 
> > 1000);
> > ERROR:  index row size 8968 exceeds maximum 8152 for index "t1_i_j_idx"
> > 
> > if instead you create the index with a minor PAGES_PER_RANGE it goes
> > fine, in this case it works once you reduce it to at least 116
> > 
> > create index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) 
> > with (pages_per_range=116);
> > 
> > 
> > so, for having:
> > two int columns - PAGES_PER_RANGE should be max 116
> > three int columns - PAGES_PER_RANGE should be max 77
> > one int and one timestamp - PAGES_PER_RANGE should be max 121 
> > 
> > and so on
> > 
> 
> No, because this very much depends on the number if distinct values in
> the page page range, which determines how well the bloom filter
> compresses. You used 1000, but that's just an arbitrary value and the
> actual data might have any other value. And it's unlikely that all three
> columns will have the same number of distinct values.
>

OK, that makes sense. Still, I see a few odd things: 

"""
drop table if exists t1;
create table t1(i int, j int);
create index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;

-- This one will succeed, I guess because it has fewer distinct
-- values
insert into t1
select random()*20, random()*100 from generate_series(1, 1000);

-- succeed
insert into t1
select random()*20, random()*100 from generate_series(1, 10);

-- succeed
insert into t1
select random()*200, random()*1000 from generate_series(1, 1000);

-- succeed
insert into t1
select random()*200, random()*1000 from generate_series(1, 1000);

-- succeeds? This is the case that has been causing problems before
insert into t1
select random()*1000, random()*1000 from generate_series(1, 1000);
"""

Maybe this makes sense, but it looks random to me. If it does make sense,
this is something we should document better. 

Let's try another combination:

"""
drop table if exists t1;
create table t1(i int, j int);
create index on t1 using brin(i int4_bloom_ops, j int4_bloom_ops ) ;

-- this fails again
insert into t1
select random()*1000, random()*1000 from generate_series(1, 1000);

-- and this starts to fail now, but this worked before
insert into t1
select random()*20, random()*100 from generate_series(1, 1000);
"""

> Of course, this also depends on the false positive rate.
> 

How does the false positive rate work?
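(To make sure I'm asking about the right knob: I mean the opclass
parameter the bloom opclasses take; a sketch, assuming the parameter
name and its 0.01 default as shipped in 14:)

"""
-- a larger false positive rate should mean a smaller per-range filter
create index on t1 using brin (i int4_bloom_ops(false_positive_rate = 0.05));
"""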

> FWIW I doubt people are using multi-column BRIN indexes very often.
> 

true. 

Another question: should we allow creating a multi-column BRIN index
that uses different opclasses?

CREATE INDEX ON t1 USING brin (i int4_bloom_ops, j int4_minmax_ops);

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-11 Thread Jaime Casanova
On Mon, 3 Aug 2020 at 13:56, Daniel Verite  wrote:
>
>  Hi,
>
> As a follow-up to bug #16570 [1] and other previous discussions
> on the mailing-lists, I'm checking out PG13 beta for Windows
> from:
>  https://www.enterprisedb.com/postgresql-early-experience
> and it ships with the same obsolete ICU 53 that was used
> for PG 10,11,12.
> Besides not having the latest Unicode features and fixes, ICU 53
> ignores the BCP 47 tags syntax in collations used as examples
> in Postgres documentation, which leads to confusion and
> false bug reports.
> The current version is ICU 67.
>

Hi,

Sadly, that is managed by EDB and not by the community.

You can try 
https://www.2ndquadrant.com/en/resources/postgresql-installer-2ndquadrant/
which uses ICU 62.2; it is not the latest, but it should allow you to
follow the examples in the documentation.

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: EDB builds Postgres 13 with an obsolete ICU version

2020-08-12 Thread Jaime Casanova
On Tue, 11 Aug 2020 at 13:45, Thomas Kellerer  wrote:
>
> Jaime Casanova schrieb am 11.08.2020 um 20:39:
> >> As a follow-up to bug #16570 [1] and other previous discussions
> >> on the mailing-lists, I'm checking out PG13 beta for Windows
> >> from:
> >>   https://www.enterprisedb.com/postgresql-early-experience
> >> and it ships with the same obsolete ICU 53 that was used
> >> for PG 10,11,12.
> >> Besides not having the latest Unicode features and fixes, ICU 53
> >> ignores the BCP 47 tags syntax in collations used as examples
> >> in Postgres documentation, which leads to confusion and
> >> false bug reports.
> >> The current version is ICU 67.
> >>
> >
> > Sadly, that is managed by EDB and not by the community.
> >
> > You can try 
> > https://www.2ndquadrant.com/en/resources/postgresql-installer-2ndquadrant/
> > which uses ICU-62.2, is not the latest but should allow you to follow
> > the examples in the documentation.
>
>
> One of the reasons I prefer the EDB builds is, that they provide a ZIP file 
> without the installer overhead.
> Any chance 2ndQuadrant can supply something like that as well?
>

I don't think so; an unattended install mode is the closest.

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [PATCH] Proposal for HIDDEN/INVISIBLE column

2021-10-14 Thread Jaime Casanova
On Thu, Oct 14, 2021 at 01:16:45PM +0200, Gilles Darold wrote:
> Hi,
> 
> 
> Here is a proposal to implement HIDDEN columns feature in PostgreSQL.
> 

Great! Actually, I find this very useful, especially for those people
using big fields (geometry, files, large texts).

> The user defined columns are always visible in the PostgreSQL. If user
> wants to hide some column(s) from a SELECT * returned values then the
> hidden columns feature is useful. Hidden column can always be used and
> returned by explicitly referring it in the query.
> 
> I agree that views are done for that or that using a SELECT * is a bad
> practice

And a common one, even if we want to think otherwise. I have found it
at almost every customer where I have had the bad luck of getting to see
code or SELECTs.

Not counting that sometimes we have columns added for optimization,
like what Dave said about hiding a ts_vector column.

Another use case I can think of is not covered in this patch, but it
could be (I hope!), or even if not I would like opinions on the idea:
what about a boolean GUC log_hidden_column that emits a LOG message when
a hidden column is referenced directly?

The intention is to mark a to-be-deleted column as HIDDEN and then check
the logs to see whether it is still being used somewhere. I know of
systems that carry the baggage of deprecated columns only because nobody
knows whether some application is still using them.
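A purely hypothetical sketch of the workflow I have in mind (none of this
syntax or output exists, it is only to illustrate the idea):

"""
-- hypothetical GUC, hypothetical table
set log_hidden_column = on;
select legacy_code from customers;
-- and then something like this would show up in the log:
-- LOG:  hidden column "legacy_code" of relation "customers" referenced directly
"""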

I know this would be extending your original proposal, and I understand
if you decide it is not first-patch material. 

Anyway, a +1 to your proposal. 

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




to be a multirange or not be, that's the question

2021-11-06 Thread Jaime Casanova
OK, the subject is a bit philosophical, but this message I just found is
quite confusing.

"""
regression=# select cast(null as anyrange) &> cast(null as anymultirange);
ERROR:  argument declared anymultirange is not a multirange type but type 
anymultirange
"""

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




pg_upgrade parallelism

2021-11-17 Thread Jaime Casanova
Hi,

Currently the docs about pg_upgrade say:

"""

 The --jobs option allows multiple CPU cores to be used
 for copying/linking of files and to dump and reload database schemas
 in parallel;  a good place to start is the maximum of the number of
 CPU cores and tablespaces.  This option can dramatically reduce the
 time to upgrade a multi-database server running on a multiprocessor
 machine.

"""

This makes the user think that the --jobs option could use all CPU
cores, which is not true, or that it has anything to do with multiple
databases, which is true only to some extent.

What that option really improves is upgrading servers with multiple
tablespaces. Of course, if --link or --clone is used pg_upgrade is still
very fast, but with the --copy option it is not what one would expect.

As an example, a customer with a 25 TB database, 40 cores, and lots of
RAM used --jobs=35 and got only 7 processes (they have 6 tablespaces),
and the disks were not used at maximum speed either. They expected 35
processes copying lots of files at the same time.

So, first I would like to improve the documentation. What about
something like the attached? 

Now, a couple of questions:

- In src/bin/pg_upgrade/file.c, copyFile() defines a buffer that
  determines how many bytes are used per read()/write() call when
  copying the relfilenode segments. It is defined as (50 * BLCKSZ),
  which is 400 KB. Isn't this too small?

- Why do we read()/write() at all? Isn't there a faster way of copying
  the file? I'm asking because I don't actually know.

I'm trying to add more parallelism by copying individual segments
of a relfilenode in different processes. Does anyone see a big
problem with trying to do that? I'm asking because no one has done it
before, which might not be a good sign.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml
index 20efdd7..74eaaee 100644
--- a/doc/src/sgml/ref/pgupgrade.sgml
+++ b/doc/src/sgml/ref/pgupgrade.sgml
@@ -406,10 +406,10 @@ NET STOP postgresql-&majorversion;
 
  The --jobs option allows multiple CPU cores to be used
  for copying/linking of files and to dump and reload database schemas
- in parallel;  a good place to start is the maximum of the number of
- CPU cores and tablespaces.  This option can dramatically reduce the
- time to upgrade a multi-database server running on a multiprocessor
- machine.
+ in parallel;  a good place to start is the maximum of: the number of
+ CPU cores or tablespaces.  This option can dramatically reduce the
+ time to upgrade a server with multiple tablespaces running on a 
+ multiprocessor machine.
 
 
 


Partitioned tables and covering indexes

2018-04-10 Thread Jaime Casanova
Hi,

Trying covering indexes on partitioned tables, I get this error:
"""
postgres=# create index on t1_part (i) include (t);
ERROR:  cache lookup failed for opclass 0
"""

To reproduce:

create table t1_part (i int, t text) partition by hash (i);
create table t1_part_0 partition of t1_part for values with (modulus 2, remainder 0);
create table t1_part_1 partition of t1_part for values with (modulus 2, remainder 1);
insert into t1_part values (1, repeat('abcdefquerty', 20));

create index on t1_part (i) include (t);

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Partitioned tables and covering indexes

2018-04-10 Thread Jaime Casanova
On 10 April 2018 at 10:36, Teodor Sigaev  wrote:
>> Does the attached fix look correct?  Haven't checked the fix with
>> ATTACH
>> PARTITION though.
>>
>>
>> Attached patch seems to fix the problem.  However, I would rather get
>> rid of modifying stmt->indexParams.  That seems to be more logical
>> for me.  Also, it would be good to check some covering indexes on
>> partitioned tables.  See the attached patch.
>
> Seems right way, do not modify incoming object and do not copy rather large
> and deep nested structure as suggested by Amit.
>
> But it will  be better to have a ATTACH PARTITION test too.
>

The patch worked for me; I also tried some combinations using ATTACH
PARTITION and found no problems.

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: PostgreSQL and Real Application Testing (RAT)

2019-08-27 Thread Jaime Casanova
On Tue, 27 Aug 2019 at 05:47, ROS Didier  wrote:

> Hi
>
>
>
> In my business, one of the things blocking the migration from Oracle to
> PostgreSQL is not having the equivalent of Oracle Real Application Testing .
>
> This product captures a charge in production and replay it in a test
> environment.
>
> this allows to know the impacts of a migration to a newer version, the
> creation of an index..
>
> is there an equivalent in the PostgreSQL community?
>

I recently used https://github.com/laurenz/pgreplay to re-execute the
queries sent to a 9.1 server against an 11 server. It was very useful to
find queries that are affected by changes in the default values of GUCs.

Normally a query that works in an old version will work in a new one,
but this is useful to catch the few that don't, if any.

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: PostgreSQL and Real Application Testing (RAT)

2019-08-27 Thread Jaime Casanova
On Tue, 27 Aug 2019 at 19:33, Nikolay Samokhvalov 
wrote:

> On Tue, Aug 27, 2019 at 3:47 AM ROS Didier  wrote:
>
>> Hi
>>
>>
>>
>> In my business, one of the things blocking the migration from Oracle to
>> PostgreSQL is not having the equivalent of Oracle Real Application Testing .
>>
>> This product captures a charge in production and replay it in a test
>> environment.
>>
>> this allows to know the impacts of a migration to a newer version, the
>> creation of an index..
>>
>> is there an equivalent in the PostgreSQL community?
>>
>> if not, do you think it's technically possible to do it ?
>>
>> who would be interested in this project ?
>>
>
> Replaying workload might or might not apply well to your case.
>
> There are several major difficulties if you want to replay workload:
>
> 1) How to "record" workload. You need to write all your queries to the
> Postgres log. Three problems here:
>   1a) pgreplay expects log_statements = 'all' while you might prefer
> dealing with log_min_duration_statement instead. This is a minor issue
> though, quite easy to solve with preprocessing.
>   1b) under heavy load, log_min_duration_statement = 0 (or log_statements
> = 'all') will lead to performance degradation or even downtime. Possible
> solutions are: write to memory, or don't write at all but send over the
> network.
>   1c) ideally, recoding just queries is not enough. To replay workload "as
> is", we need to replay queries with known plans. There is no easy solution
> to this problem in the Postgres ecosystem yet.
>
>
Why? I prefer queries to take advantage of new plans. For example, if
I'm migrating from 9.5 to 9.6+ I would prefer that, when replaying, the
queries use parallel plans, so I can quickly see whether that would
somehow be a problem (for example, by using more CPU than before).

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


[no subject]

2018-02-13 Thread Jaime Casanova
Hi,

Compiling with CFLAGS="-ggdb -Og -g3 -fno-omit-frame-pointer" as
recommended in https://wiki.postgresql.org/wiki/Developer_FAQ#Compile-time

My compiler gives me this message

"""
logtape.c: In function ‘ltsConcatWorkerTapes’:
logtape.c:462:48: warning: ‘tapeblocks’ may be used uninitialized in
this function [-Wmaybe-uninitialized]
  lts->nBlocksAllocated = lt->offsetBlockNumber + tapeblocks;
  ~~^~~~
"""

Can we please initialize the tapeblocks variable in the
ltsConcatWorkerTapes() function?

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



tapeblocks is uninitialized in logtape.c

2018-02-13 Thread Jaime Casanova
On 13 February 2018 at 21:07, Jaime Casanova
 wrote:
> Hi,
>
> Compiling with CFLAGS="-ggdb -Og -g3 -fno-omit-frame-pointer" as
> recommended in https://wiki.postgresql.org/wiki/Developer_FAQ#Compile-time
>
> My compiler gives me this message
>
> """
> logtape.c: In function ‘ltsConcatWorkerTapes’:
> logtape.c:462:48: warning: ‘tapeblocks’ may be used uninitialized in
> this function [-Wmaybe-uninitialized]
>   lts->nBlocksAllocated = lt->offsetBlockNumber + tapeblocks;
>   ~~^~~~
> """
>
> Can we please, initialize tapeblocks variable in
> ltsConcatWorkerTapes() function?
>

Sorry, I sent it without a subject...


-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: tapeblocks is uninitialized in logtape.c

2018-02-14 Thread Jaime Casanova
On 13 February 2018 at 23:10, Tom Lane  wrote:
> Jaime Casanova  writes:
>>> My compiler gives me this message
>>> logtape.c: In function ‘ltsConcatWorkerTapes’:
>>> logtape.c:462:48: warning: ‘tapeblocks’ may be used uninitialized in
>>> this function [-Wmaybe-uninitialized]
>>> lts->nBlocksAllocated = lt->offsetBlockNumber + tapeblocks;
>
> FWIW, I'm not seeing that.  What compiler are you using exactly?
>

gcc version 6.3.0 20170516 (Debian 6.3.0-18)

> (There are one or two other places where I see "may be used uninitialized"
> complaints from certain older gcc versions.  Not sure how excited we
> should be about removing such warnings.)
>

I tried to fix the ones in your 2nd attachment, but I'm not really sure
what value to initialize the typename array in objectaddress.c and the
bool values in pgbench.c with (I initialized both of them to NULL).

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c
index de869e00ff..5bea073a2b 100644
*** a/src/backend/access/transam/xloginsert.c
--- b/src/backend/access/transam/xloginsert.c
***
*** 584,590  XLogRecordAssemble(RmgrId rmid, uint8 info,
  		if (include_image)
  		{
  			Page		page = regbuf->page;
! 			uint16		compressed_len;
  
  			/*
  			 * The page needs to be backed up, so calculate its hole length
--- 584,590 
  		if (include_image)
  		{
  			Page		page = regbuf->page;
! 			uint16		compressed_len = 0;
  
  			/*
  			 * The page needs to be backed up, so calculate its hole length
diff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c
index b4c2467710..3fa5b6e1db 100644
*** a/src/backend/catalog/objectaddress.c
--- b/src/backend/catalog/objectaddress.c
***
*** 1576,1583  get_object_address_opf_member(ObjectType objtype,
  	ObjectAddress address;
  	ListCell   *cell;
  	List	   *copy;
! 	TypeName   *typenames[2];
! 	Oid			typeoids[2];
  	int			membernum;
  	int			i;
  
--- 1576,1583 
  	ObjectAddress address;
  	ListCell   *cell;
  	List	   *copy;
! 	TypeName   *typenames[2] = { NULL };
! 	Oid			typeoids[2] = { InvalidOid };
  	int			membernum;
  	int			i;
  
diff --git a/src/backend/utils/sort/logtape.c b/src/backend/utils/sort/logtape.c
index 66bfcced8d..d6794bf3de 100644
*** a/src/backend/utils/sort/logtape.c
--- b/src/backend/utils/sort/logtape.c
***
*** 411,417  ltsConcatWorkerTapes(LogicalTapeSet *lts, TapeShare *shared,
  	 SharedFileSet *fileset)
  {
  	LogicalTape *lt = NULL;
! 	long		tapeblocks;
  	long		nphysicalblocks = 0L;
  	int			i;
  
--- 411,417 
  	 SharedFileSet *fileset)
  {
  	LogicalTape *lt = NULL;
! 	long		tapeblocks = 0L;
  	long		nphysicalblocks = 0L;
  	int			i;
  
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 31ea6ca06e..8b0c8bcaa8 100644
*** a/src/bin/pgbench/pgbench.c
--- b/src/bin/pgbench/pgbench.c
***
*** 1614,1620  evalLazyFunc(TState *thread, CState *st,
  			 PgBenchFunction func, PgBenchExprLink *args, PgBenchValue *retval)
  {
  	PgBenchValue a1, a2;
! 	bool ba1, ba2;
  
  	Assert(isLazyFunc(func) && args != NULL && args->next != NULL);
  
--- 1614,1620 
  			 PgBenchFunction func, PgBenchExprLink *args, PgBenchValue *retval)
  {
  	PgBenchValue a1, a2;
! 	bool ba1 = NULL, ba2 = NULL;
  
  	Assert(isLazyFunc(func) && args != NULL && args->next != NULL);
  
***
*** 1935,1941  evalStandardFunc(
  			/* logical operators */
  		case PGBENCH_NOT:
  			{
! bool b;
  if (!coerceToBool(&vargs[0], &b))
  	return false;
  
--- 1935,1941 
  			/* logical operators */
  		case PGBENCH_NOT:
  			{
! bool b = NULL;
  if (!coerceToBool(&vargs[0], &b))
  	return false;
  


Re: This seems like very unfriendly behaviour

2019-05-25 Thread Jaime Casanova
On Sat, 25 May 2019 at 08:35, Dave Cramer  wrote:
>
> How do I get rid of this slot ?
>
> select pg_drop_replication_slot('mysub');
> ERROR:  replication slot "mysub" is active for PID 13065
> test_database=# select * from pg_subscription;
>  subdbid | subname | subowner | subenabled | subconninfo | subslotname | 
> subsynccommit | subpublications
> -+-+--++-+-+---+-
> (0 rows)
>
> test_database=# select * from pg_publication;
>  pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete | 
> pubtruncate
> -+--+--+---+---+---+-
> (0 rows)
>

Can you check "select * from pg_stat_replication"?

Also, what PID is being reported in pg_replication_slots for this slot?
Do you see a process in pg_stat_activity for that PID? In the OS?

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: unique indexes on partitioned tables

2018-02-19 Thread Jaime Casanova
On 12 February 2018 at 15:26, Alvaro Herrera  wrote:
> Hello,
>
> Thanks, Peter, Jesper, Amit, for reviewing the patch.  Replying to
> all review comments at once:
>

[... v5 of patch attached ...]

Hi Álvaro,

Attached is a tiny patch (on top of yours) that silences two "variable
uninitialized" warnings.

Also, I noted that if you:

"""
create table t1(i int) partition by hash (i);
create table t1_0 partition of t1 for values with (modulus 2, remainder 0);
create table t1_1 partition of t1 for values with (modulus 2, remainder 1);
create unique index on t1(i);
alter table t1 add primary key using index t1_i_idx ;
"""

the ALTER TABLE ... ADD PRIMARY KEY does not recurse to partitions, which
maybe is perfectly fine because I'm using USING INDEX, but it feels like
an oversight to me
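A quick way to see it, in case it helps (a sketch; it just lists the
constraints that exist on the parent and the two partitions):

"""
select conrelid::regclass as rel, conname, contype
from pg_constraint
where conrelid in ('t1'::regclass, 't1_0'::regclass, 't1_1'::regclass);
"""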

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 19233b68cb..677e9cabf8 100644
*** a/src/backend/commands/tablecmds.c
--- b/src/backend/commands/tablecmds.c
***
*** 14177,14183  AttachPartitionEnsureIndexes(Relation rel, Relation attachrel)
  		 */
  		for (i = 0; i < list_length(attachRelIdxs); i++)
  		{
! 			Oid		cldConstrOid;
  
  			/* does this index have a parent?  if so, can't use it */
  			if (has_superclass(RelationGetRelid(attachrelIdxRels[i])))
--- 14177,14183 
  		 */
  		for (i = 0; i < list_length(attachRelIdxs); i++)
  		{
! 			Oid		cldConstrOid = InvalidOid;
  
  			/* does this index have a parent?  if so, can't use it */
  			if (has_superclass(RelationGetRelid(attachrelIdxRels[i])))
***
*** 14475,14481  ATExecAttachPartitionIdx(List **wqueue, Relation parentIdx, RangeVar *name)
  		int			i;
  		PartitionDesc partDesc;
  		Oid			constraintOid,
! 	cldConstrId;
  
  		/*
  		 * If this partition already has an index attached, refuse the operation.
--- 14475,14481 
  		int			i;
  		PartitionDesc partDesc;
  		Oid			constraintOid,
! 	cldConstrId = InvalidOid;
  
  		/*
  		 * If this partition already has an index attached, refuse the operation.


Re: [WIP] Allow pg_upgrade to copy segments of the same relfilenode in parallel

2022-03-27 Thread Jaime Casanova
On Mon, Mar 21, 2022 at 05:34:31PM -0700, Andres Freund wrote:
> Hi,
> 
> On 2022-02-01 21:57:00 -0500, Jaime Casanova wrote:
> > This patch adds a new option (-J num, --jobs-per-disk=num) in 
> > pg_upgrade to speed up copy mode. This generates upto ${num} 
> > processes per tablespace to copy segments of the same relfilenode 
> > in parallel.
> > 
> > This can help when you have many multi gigabyte tables (each segment 
> > is 1GB by default) in different tablespaces (each tablespace in a 
> > different disk) and multiple processors.
> > 
> > In a customer's database (~20Tb) it went down from 6h to 4h 45min.
> > 
> > It lacks documentation and I need help with WIN32 part of it, I created
> > this new mail to put the patch on the next commitfest.
> 
> The patch currently fails on cfbot due to warnings, likely related due to the
> win32 issue: 
> https://cirrus-ci.com/task/4566046517493760?logs=mingw_cross_warning#L388
> 
> As it's a new patch submitted to the last CF, hasn't gotten any review yet and
> misses some platform support, it seems like there's no chance it can make it
> into 15?
> 

Hi,

Because I have zero experience on the Windows side of this, I will take
some time to complete that part.

Should we move this to the next commitfest (and make 16 the target for
this)?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




JSON constructors and window functions

2022-04-01 Thread Jaime Casanova
I got a crash running the below query on the regression database:

"""
select pg_catalog.json_object_agg_unique(10,
cast(ref_0.level2_no as int4)) 
over (partition by ref_0.parent_no 
order by ref_0.level2_no)
from public.transition_table_level2 as ref_0;
"""

Attached is the backtrace.

PS: I'm cc'ing Andrew and Nikita because my feeling is that this is
f4fb45d15c59d7add2e1b81a9d477d0119a9691a's responsibility.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  0x55625fb793a5 in hash_search (hashp=0x5562614a7e20, 
keyPtr=0x7fff3bedf520, 
action=HASH_ENTER, foundPtr=0x7fff3bedf51f) at dynahash.c:959
No locals.
#1  0x55625fa57675 in json_unique_check_key (cxt=0x556261467d18, 
key=0x556261467d9c "\"10\"", object_id=0) at json.c:998
entry = {key = 0x556261467d9c "\"10\"", key_len = 4, object_id = 0}
found = false
#2  0x55625fa57ac2 in json_object_agg_transfn_worker 
(fcinfo=0x7fff3bedf600, 
absent_on_null=false, unique_keys=true) at json.c:1152
key = 0x556261467d9c "\"10\""
aggcontext = 0x556261467be0
oldcontext = 0x556261460310
state = 0x556261467d00
out = 0x556261467d58
arg = 10
skip = false
key_offset = 12
__func__ = "json_object_agg_transfn_worker"
#3  0x55625fa57c04 in json_object_agg_unique_transfn 
(fcinfo=0x7fff3bedf600) at json.c:1198
No locals.
#4  0x55625f7c14ab in advance_windowaggregate (winstate=0x55626145fef8, 
perfuncstate=0x55626147bb68, peraggstate=0x556261477da0) at 
nodeWindowAgg.c:345
fcinfodata = {fcinfo = {flinfo = 0x556261477db0, context = 
0x55626145fef8, 
resultinfo = 0x0, fncollation = 0, isnull = false, nargs = 3, 
args = 0x7fff3bedf620}, 
  fcinfo_data = "\260}GabU\000\000\370\376EabU", '\000' , "U\003\000\000}FabU\000\000\000\376EabU\000\000\n", '\000' , 
"U\001\000\002\000\000\000\000\000\000\000\000\254t\263;\177\000\000\200[i\263;\177\000\000\004t\370\252Vf\000\000\340\366\355;\000\000@\000X\236P\253;\177\000\000\220\366\355;\377\177\000\000\276\240\227_bU\000\000m\264\323_\000\000@\000\260\366\355;\377\177\000\000\340\366\355;\377\177\000\000\335\031\230_bU\000\000\300\366\355;\377\177\000\000@\236P\253;\177\000\000\000\000\000\000\001\000\004\206X\236P\253;\177\000\000\340\366\355;\377\177\000\000"...}
fcinfo = 0x7fff3bedf600
wfuncstate = 0x55626146fba8
numArguments = 2
newVal = 93881027165152
arg = 0x0
i = 3
oldContext = 0x55626145fba0
econtext = 0x556261460310
filter = 0x0
__func__ = "advance_windowaggregate"
#5  0x55625f7c26b0 in eval_windowaggregates (winstate=0x55626145fef8) at 
nodeWindowAgg.c:964
ret = 1
peraggstate = 0x556261477da0
wfuncno = 0
numaggs = 1
numaggs_restart = 0
i = 0
aggregatedupto_nonrestarted = 1
oldContext = 0x2000
econtext = 0x556261460400
agg_winobj = 0x55626147dfa0
agg_row_slot = 0x55626146e0d0
temp_slot = 0x55626146e1e8
__func__ = "eval_windowaggregates"
#6  0x55625f7c511e in ExecWindowAgg (pstate=0x55626145fef8) at 
nodeWindowAgg.c:2207
winstate = 0x55626145fef8
econtext = 0x556261460400
i = 1
numfuncs = 1
__func__ = "ExecWindowAgg"
#7  0x55625f767494 in ExecProcNode (node=0x55626145fef8)
at ../../../src/include/executor/executor.h:259
No locals.
#8  0x55625f76a153 in ExecutePlan (estate=0x55626145fcc0, 
planstate=0x55626145fef8, 
use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, 
numberTuples=0, 
direction=ForwardScanDirection, dest=0x55626145c9a8, execute_once=true) at 
execMain.c:1636
slot = 0x55626147dff8
current_tuple_count = 1
#9  0x55625f767b3a in standard_ExecutorRun (queryDesc=0x5562613abac0, 
direction=ForwardScanDirection, count=0, execute_once=true) at 
execMain.c:363
estate = 0x55626145fcc0
operation = CMD_SELECT
dest = 0x55626145c9a8
sendTuples = true
oldcontext = 0x5562613ab9a0
__func__ = "standard_ExecutorRun"
#10 0x55625f767950 in ExecutorRun (queryDesc=0x5562613abac0, 
direction=ForwardScanDirection, 
count=0, execute_once=true) at execMain.c:307
No locals.
#11 0x55625f9cdf5a in PortalRunSelect (portal=0x5562613ef590, forward=true, 
count=0, 
dest=0x55626145c9a8) at pquery.c:924
queryDesc = 0x5562613abac0
direction = ForwardScanDirection
nprocessed = 1005453488
__func__ = "PortalRunSelect"
#12 0x55625f9cdc05 in PortalRun (portal=0x5562613ef590, 
count=92233720368547

Failed Assertion about PolymorphicType

2020-04-04 Thread Jaime Casanova
Hi,

Running sqlsmith on master, I got an assertion failure at parse_coerce.c:2049.

This is a minimal query to reproduce it in an empty database; I also
attached the stack trace:

 """
select
  pg_catalog.array_in(
cast(pg_catalog.regoperatorout(
  cast(cast(null as regoperator) as regoperator)) as cstring),
cast((select pronamespace from pg_catalog.pg_proc limit 1 offset 1)
   as oid),
cast(subq_1.pid as int4)) as c0
from pg_catalog.pg_stat_progress_analyze as subq_1
 """

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
#0  0x7f2509b5a337 in raise () from /lib64/libc.so.6
No symbol table info available.
#1  0x7f2509b5ba28 in abort () from /lib64/libc.so.6
No symbol table info available.
#2  0x00a98887 in ExceptionalCondition (conditionName=0xbc9e05 
"!IsPolymorphicType(rettype)", errorType=0xbc9885 "FailedAssertion",
fileName=0xbc9901 "parse_coerce.c", lineNumber=2049) at assert.c:67
No locals.
#3  0x006126e1 in enforce_generic_type_consistency 
(actual_arg_types=0x7ffc0442adb0, declared_arg_types=0x25dea70, nargs=3, 
rettype=2277, allow_poly=false)
at parse_coerce.c:2049
have_poly_anycompatible = false
have_poly_unknowns = false
elem_typeid = 0
array_typeid = 0
range_typeid = 0
anycompatible_typeid = 0
anycompatible_array_typeid = 0
anycompatible_range_typeid = 0
anycompatible_range_typelem = 0
have_anynonarray = false
have_anyenum = false
have_anycompatible_nonarray = false
have_anycompatible_array = false
have_anycompatible_range = false
n_poly_args = 0
n_anycompatible_args = 0
anycompatible_actual_types = {71478080, 32764, 10973563, 0 , 750, 0, 177595336, 1, 39212672, 0, 177596648, 32549, 39037672, 0, 750, 
0, 0, 0,
  71478064, 32764, 11362386, 0, 177596648, 32549, 39037608, 0, 
71478112, 32764, 10974281, 0, 71478112, 32764, 177596648, 32549, 750, 0, 
177596584, 32549,
  71478144, 32764, 11082588, 0, 750, 0, 177596648, 32549, 71478432, 
32764, 6432778, 0, 38864840, 1, 71478704, 32764, 1, 3, 0, 0, 39704032, 0, 
38861280, 0, 0,
  0, 39136672, 0, 0, 0, 16, 0, 71478272, 32764, 10963018, 1, 71478368, 
32764, 39707920, 0, 16, 0, 38857776, 0, 20, 2476801540, 38864840, 1, 177596720, 
32549,
  177596648, 32549}
__func__ = "enforce_generic_type_consistency"
#4  0x00620ce2 in ParseFuncOrColumn (pstate=0x25b74a0, 
funcname=0x250f9e0, fargs=0x25dd5e0, last_srf=0x0, fn=0x25108f8, 
proc_call=false, location=11)
at parse_func.c:660
is_column = false
agg_order = 0x0
agg_filter = 0x0
agg_within_group = false
agg_star = false
agg_distinct = false
func_variadic = false
over = 0x0
could_be_projection = false
rettype = 2277
funcid = 750
l = 0x0
first_arg = 0x25dd588
nargs = 3
nargsplusdefs = 3
actual_arg_types = {2275, 26, 23, 0, 39702592, 0, 39548064, 0, 0, 0, 
39548064, 0, 39550048, 0, 177593384, 1, 39548624, 0, 39709168, 0, 71479120, 
32764,
  6392460, 0, 23, 0, 39261696, 0, 38864744, 0, 39548064, 0, 23, 0, 
39061000, 0, 2, 4294967295, 23, 23, 39709168, 0, 39548064, 0, 38865000, 0, 
39548064, 0,
  39708136, 0, 1, 0, 39548064, 0, 39708048, 0, 0, 0, 38864840, 0, 0, 0, 
6404911, 4294967295, 177593448, 32549, 39548064, 0, 71479008, 32764, 0, 0, 
177593448,
  32549, 39037672, 0, 38864616, 0, 38864800, 0, 39702592, 0, 71479128, 
32764, 71479040, 227, 1, 2, 4294967295, 23, 39709168, 0, 23, 23, 0, 0, 
71479200, 32764,
  6355260, 0}
declared_arg_types = 0x25dea70
argnames = 0x0
argdefaults = 0x0
retval = 0x617d5d 
retset = false
nvargs = 0
vatype = 0
fdresult = FUNCDETAIL_NORMAL
aggkind = 0 '\000'
pcbstate = {pstate = 0x25b74a0, location = 11, errcallback = {previous 
= 0x0, callback = 0x624111 , arg = 0x7ffc0442ad60}}
__func__ = "ParseFuncOrColumn"
#5  0x0061ae22 in transformFuncCall (pstate=0x25b74a0, fn=0x25108f8) at 
parse_expr.c:1558
last_srf = 0x0
targs = 0x25dd5e0
args = 0x0
#6  0x00617f31 in transformExprRecurse (pstate=0x25b74a0, 
expr=0x25108f8) at parse_expr.c:265
result = 0x40
__func__ = "transformExprRecurse"
#7  0x00617af9 in transformExpr (pstate=0x25b74a0, expr=0x25108f8, 
exprKind=EXPR_KIND_SELECT_TARGET) at parse_expr.c:155
result = 0x25dd0f0
sv_expr_kind = EXPR_KIND_NONE
#8  0x0062e56e in transformTargetEntry (pstate=0x25b74a0, 
node=0x25108f8, expr=0x0, exprKind=EXPR_KIND_SELECT_TARGET, colname=0x2510950 
"c0", resjunk=false)
at parse_target

segmentation fault using currtid and partitioned tables

2020-04-05 Thread Jaime Casanova
Hi,

Another one caught by sqlsmith. On the regression database, run this
query (using any non-partitioned table works fine):

"""
select currtid('pagg_tab'::regclass::oid, '(0,156)'::tid) >= '(1,158)'::tid;
"""

This works on 11 (well, it gives an error because the file doesn't
exist) but crashes the server on 12+.

Attached is the stack trace from master.

-- 
Jaime Casanova  www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
#0  0x00a3a3f1 in table_beginscan_tid (rel=0x7ff8ad4d01b8, 
snapshot=0x1336430) at ../../../../src/include/access/tableam.h:842
flags = 8
#1  0x00a3b1e3 in currtid_byreloid (fcinfo=0x130c298) at tid.c:384
reloid = 37534
tid = 0x128f498
result = 0x135b960
rel = 0x7ff8ad4d01b8
aclresult = ACLCHECK_OK
snapshot = 0x1336430
scan = 0x10
#2  0x006fc889 in ExecInterpExpr (state=0x130c100, econtext=0x130be00, 
isnull=0x7fff1f9659e7) at execExprInterp.c:698
fcinfo = 0x130c298
args = 0x130c2b8
nargs = 2
d = 140733723337632
op = 0x130c2f0
resultslot = 0x130c008
innerslot = 0x0
outerslot = 0x0
scanslot = 0x0
dispatch_table = {0x6fc140 , 0x6fc165 
, 0x6fc19b , 0x6fc1d4 
,
  0x6fc20d , 0x6fc296 , 
0x6fc31f , 0x6fc3a8 , 0x6fc3d7 
,
  0x6fc406 , 0x6fc435 , 
0x6fc463 , 0x6fc50a , 0x6fc5b1 
,
  0x6fc658 , 0x6fc6b3 , 
0x6fc756 , 0x6fc78c ,
  0x6fc7f9 , 0x6fc8c8 , 
0x6fc8f6 , 0x6fc924 ,
  0x6fc92f , 0x6fc997 , 
0x6fc9f0 , 0x6fc9fb ,
  0x6fca63 , 0x6fcabc , 
0x6fcaf4 , 0x6fcb69 ,
  0x6fcb94 , 0x6fcbdf , 
0x6fcc2d , 0x6fcc88 ,
  0x6fccca , 0x6fcd0f , 
0x6fcd3d , 0x6fcd6b ,
  0x6fcda5 , 0x6fce08 , 
0x6fce6b , 0x6fcea5 ,
  0x6fced3 , 0x6fcf01 , 
0x6fcf31 , 0x6fd01d ,
  0x6fd073 , 0x6fd25b , 
0x6fd34e , 0x6fd432 ,
  0x6fd50e , 0x6fd535 , 
0x6fd55c , 0x6fd583 ,
  0x6fd5aa , 0x6fd5d8 , 
0x6fd5ff , 0x6fd745 ,
  0x6fd849 , 0x6fd870 , 
0x6fd89e , 0x6fd8cc ,
  0x6fd8fa , 0x6fd950 , 
0x6fd977 , 0x6fd99e ,
  0x6fcfa7 , 0x6fda1a , 
0x6fda41 , 0x6fd9c5 ,
  0x6fd9f3 , 0x6fda68 , 
0x6fda8f , 0x6fdb2d ,
  0x6fdb54 , 0x6fdbf2 , 
0x6fdc20 , 0x6fdc4e ,
  0x6fdc89 , 0x6fdd3c , 
0x6fddca , 0x6fde51 ,
  0x6fdede , 0x6fdffe , 
0x6fe0e8 , 0x6fe1bc ,
  0x6fe2d9 , 0x6fe3c0 , 
0x6fe491 , 0x6fe4bf ,
  0x6fe4ed }
#3  0x006fe563 in ExecInterpExprStillValid (state=0x130c100, 
econtext=0x130be00, isNull=0x7fff1f9659e7) at execExprInterp.c:1807
No locals.
#4  0x0074a22c in ExecEvalExprSwitchContext (state=0x130c100, 
econtext=0x130be00, isNull=0x7fff1f9659e7) at 
../../../src/include/executor/executor.h:313
retDatum = 18446462598752812812
oldContext = 0x130b990
#5  0x0074a295 in ExecProject (projInfo=0x130c0f8) at 
../../../src/include/executor/executor.h:347
econtext = 0x130be00
state = 0x130c100
slot = 0x130c008
isnull = false
#6  0x0074a47b in ExecResult (pstate=0x130bce8) at nodeResult.c:136
node = 0x130bce8
outerTupleSlot = 0x779642 
outerPlan = 0x0
econtext = 0x130be00
#7  0x00712656 in ExecProcNodeFirst (node=0x130bce8) at 
execProcnode.c:444
No locals.
#8  0x00707085 in ExecProcNode (node=0x130bce8) at 
../../../src/include/executor/executor.h:245
No locals.
#9  0x00709b39 in ExecutePlan (estate=0x130bab0, planstate=0x130bce8, 
use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0,
direction=ForwardScanDirection, dest=0x12b30c0, execute_once=true) at 
execMain.c:1646
slot = 0x0
current_tuple_count = 0
#10 0x007076a9 in standard_ExecutorRun (queryDesc=0x12ae280, 
direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:364
estate = 0x130bab0
operation = CMD_SELECT
dest = 0x12b30c0
sendTuples = true
oldcontext = 0x12ae160
__func__ = "standard_ExecutorRun"
#11 0x007074d1 in ExecutorRun (queryDesc=0x12ae280, 
direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:308
No locals.
#12 0x00921d77 in PortalRunSelect (portal=0x12f3e70, forward=true, 
count=0, dest=0x12b30c0) at pquery.c:912
queryDesc = 0x12ae280
direction = ForwardScanDirection
nprocessed = 0
__func__ = "PortalRunSelect"
#13 0x00921a30 in PortalRun (portal=0x12f3e70, 
count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x12b30c0, 
altdest=0x12b30c0, qc=0x7fff1f965d60)
at pquery.c:756
_save_exception_stack = 0x7fff1f965e70
_save_context_stack = 0x0
_local_sigjmp_buf = {{__jmpbuf = {0, 60194379429867500, 4706656, 
140733723337632, 0

[WIP] Allow pg_upgrade to copy segments of the same relfilenode in parallel

2022-02-01 Thread Jaime Casanova
Hi,

This patch adds a new option (-J num, --jobs-per-disk=num) to
pg_upgrade to speed up copy mode. It generates up to ${num}
processes per tablespace to copy segments of the same relfilenode
in parallel.

This can help when you have many multi-gigabyte tables (each segment
is 1GB by default) in different tablespaces (each tablespace on a
different disk) and multiple processors.

In a customer's database (~20 TB) it went down from 6h to 4h 45min.

It lacks documentation and I need help with the WIN32 part of it; I
created this new mail to put the patch in the next commitfest.

Original thread: 
https://www.postgresql.org/message-id/flat/YZVbtHKYP02AZDIO%40ahch-to

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
>From 0d04f79cb51d6be0ced9c6561cfca5bfe18c4bdd Mon Sep 17 00:00:00 2001
From: Jaime Casanova 
Date: Wed, 15 Dec 2021 12:14:44 -0500
Subject: [PATCH] Add --jobs-per-disk option to allow multiple processes per
 tablespace

This option is independent of the --jobs one. It will fork new processes
to copy the different segments of a relfilenode in parallel.
---
 src/bin/pg_upgrade/option.c  |  8 ++-
 src/bin/pg_upgrade/parallel.c| 93 
 src/bin/pg_upgrade/pg_upgrade.h  |  4 ++
 src/bin/pg_upgrade/relfilenode.c | 59 +++-
 4 files changed, 139 insertions(+), 25 deletions(-)

diff --git a/src/bin/pg_upgrade/option.c b/src/bin/pg_upgrade/option.c
index 66fe16964e..46b1913a42 100644
--- a/src/bin/pg_upgrade/option.c
+++ b/src/bin/pg_upgrade/option.c
@@ -54,6 +54,7 @@ parseCommandLine(int argc, char *argv[])
 		{"link", no_argument, NULL, 'k'},
 		{"retain", no_argument, NULL, 'r'},
 		{"jobs", required_argument, NULL, 'j'},
+		{"jobs-per-disk", required_argument, NULL, 'J'},
 		{"socketdir", required_argument, NULL, 's'},
 		{"verbose", no_argument, NULL, 'v'},
 		{"clone", no_argument, NULL, 1},
@@ -103,7 +104,7 @@ parseCommandLine(int argc, char *argv[])
 	if (os_user_effective_id == 0)
 		pg_fatal("%s: cannot be run as root\n", os_info.progname);
 
-	while ((option = getopt_long(argc, argv, "d:D:b:B:cj:kNo:O:p:P:rs:U:v",
+	while ((option = getopt_long(argc, argv, "d:D:b:B:cj:J:kNo:O:p:P:rs:U:v",
  long_options, &optindex)) != -1)
 	{
 		switch (option)
@@ -132,6 +133,10 @@ parseCommandLine(int argc, char *argv[])
 user_opts.jobs = atoi(optarg);
 break;
 
+			case 'J':
+user_opts.jobs_per_disk = atoi(optarg);
+break;
+
 			case 'k':
 user_opts.transfer_mode = TRANSFER_MODE_LINK;
 break;
@@ -291,6 +296,7 @@ usage(void)
 	printf(_("  -d, --old-datadir=DATADIR old cluster data directory\n"));
 	printf(_("  -D, --new-datadir=DATADIR new cluster data directory\n"));
 	printf(_("  -j, --jobs=NUMnumber of simultaneous processes or threads to use\n"));
+	printf(_("  -J, --jobs-per-disk=NUM   number of simultaneous processes or threads to use per tablespace\n"));
 	printf(_("  -k, --linklink instead of copying files to new cluster\n"));
 	printf(_("  -N, --no-sync do not wait for changes to be written safely to disk\n"));
 	printf(_("  -o, --old-options=OPTIONS old cluster options to pass to the server\n"));
diff --git a/src/bin/pg_upgrade/parallel.c b/src/bin/pg_upgrade/parallel.c
index ee7364da3b..82f698a9ab 100644
--- a/src/bin/pg_upgrade/parallel.c
+++ b/src/bin/pg_upgrade/parallel.c
@@ -17,6 +17,9 @@
 #include "pg_upgrade.h"
 
 static int	parallel_jobs;
+static int	current_jobs = 0;
+
+static bool  reap_subchild(bool wait_for_child);
 
 #ifdef WIN32
 /*
@@ -277,6 +280,60 @@ win32_transfer_all_new_dbs(transfer_thread_arg *args)
 #endif
 
 
+
+/*
+ * parallel_process_relfile_segment()
+ *
+ * Copy or link file from old cluster to new one.  If vm_must_add_frozenbit
+ * is true, visibility map forks are converted and rewritten, even in link
+ * mode.
+ */
+void
+parallel_process_relfile_segment(FileNameMap *map, const char *type_suffix, bool vm_must_add_frozenbit, const char *old_file, const char *new_file)
+{
+#ifndef WIN32
+	pid_t		child;
+#else
+	HANDLE		child;
+	transfer_thread_arg *new_arg;
+#endif
+	if (user_opts.jobs <= 1 || user_opts.jobs_per_disk <= 1)
+		process_relfile_segment(map, type_suffix, vm_must_add_frozenbit, old_file, new_file);
+	else
+	{
+		/* parallel */
+
+		/* harvest any dead children */
+		while (reap_subchild(false) == true)
+			;
+
+		/* must we wait for a dead child? allow at most jobs_per_disk children per tablespace */
+		if (current_jobs >= user_opts.jobs_per_disk)
+			reap_subchild(true);
+
+		/* set this before we start the job */
+		current_jobs++;
+
+		/* Ensure stdio state is quiesced before

Re: 2022-01 Commitfest

2022-02-02 Thread Jaime Casanova
On Thu, Feb 03, 2022 at 01:28:53AM +0800, Julien Rouhaud wrote:
> 
> My understanding of "Returned with Feedback" is that the patch implements
> something wanted, but as proposed won't be accepted without a major redesign 
> or
> something like that.  Not patches that are going through normal "review /
> addressing reviews" cycles.  And definitely not bug fixes either.
> 
> If we close all patches that had a review just because they weren't perfect in
> their initial submission, we're just going to force everyone to re-register
> their patch for every single commit fest.  I don't see that doing anything
> apart from making sure that everyone stops contributing.
> 

I had the same problem last time; "Returned with feedback" didn't feel
right in some cases.

After reading this, I started to wish there was some kind of guide about
this, and of course the wiki has such a guide (outdated, yes, but
something to start with).

https://wiki.postgresql.org/wiki/CommitFest_Checklist#Sudden_Death_Overtime

This needs some love (it still mentions rrreviewers, for example), but
if we updated it and put a clear definition of the states there, maybe
it could help with CF management.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: 2022-01 Commitfest

2022-02-02 Thread Jaime Casanova
On Wed, Feb 02, 2022 at 01:00:18PM -0500, Tom Lane wrote:
> 
> Anyway, thanks to Julien for doing this mostly-thankless task
> this time!
> 

Agreed, great work!

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: use AV worker items infrastructure for GIN pending list's cleanup

2021-05-14 Thread Jaime Casanova
On Sat, May 15, 2021 at 08:12:51AM +0200, Joel Jacobson wrote:
> On Mon, Apr 5, 2021, at 16:47, Jaime Casanova wrote:
> > On Mon, Apr 05, 2021 at 10:41:22AM -0300, Euler Taveira wrote:
> > > On Mon, Apr 5, 2021, at 3:31 AM, Jaime Casanova wrote:
> > > > When AV worker items where introduced 4 years ago, i was suggested that
> > > > it could be used for other things like cleaning the pending list of GIN
> > > > index when it reaches gin_pending_list_limit instead of making user
> > > > visible operation pay the price.
> > > > 
> > > > That never happened though. So, here is a little patch for that.
> > > > 
> > > > Should I add an entry for this on next commitfest?
> > > +1. It slipped through the cracks along the years. It is even suggested 
> > > in the
> > > current docs since the fast update support.
> > > 
> > > https://www.postgresql.org/docs/current/gin-tips.html
> > > 
> > 
> > Interesting, that comment maybe needs to be rewritten. I would go for
> > remove completely the first paragraph under gin_pending_list_limit entry
> 
> Thanks for working on this patch.
> 
> I found this thread searching for "gin_pending_list_limit" in pg hackers 
> after reading an interesting article found via the front page of Hacker News: 
> "Debugging random slow writes in PostgreSQL" 
> (https://iamsafts.com/posts/postgres-gin-performance/).
> 
> I thought it could be interesting to read about a real user story where this 
> patch would be helpful.
> 

A customer here has 20+ GIN indexes on a big, heavily used table, and
every time one of the indexes reaches gin_pending_list_limit (because of
an insert or update) a user feels the impact.

So, currently we have a cron job running periodically and checking
pending list sizes, to process the indexes before the limit gets hit by
a user operation. The index still gets processed and locked, but the
fact that it doesn't happen in the user's face makes the process less
noticeable and, in the minds of users, faster.
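Roughly, the check that cron job does looks like this (a sketch, assuming
the pgstattuple extension and a made-up index name):

"""
-- pending_pages / pending_tuples show how close the pending list is
-- to gin_pending_list_limit
select * from pgstatginindex('big_table_tsv_idx'::regclass);

-- and when it grows too much, flush it outside the user's path
select gin_clean_pending_list('big_table_tsv_idx'::regclass);
"""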

This patch will provide the same facility: the process will happen in
the background.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Crash in BRIN minmax-multi indexes

2022-09-28 Thread Jaime Casanova
On Sun, Apr 04, 2021 at 07:52:50PM +0200, Tomas Vondra wrote:
> On 4/4/21 7:25 AM, Jaime Casanova wrote:
> > 
> > pgbench -i postgres
> > psql -c "CREATE INDEX ON pgbench_history USING brin (tid 
> > int4_minmax_multi_ops);" postgres
> > pgbench -c2 -j2 -T 300 -n postgres
> > 
> 
> Fixed and pushed too.
> 
> Turned out to be a silly bug in forgetting to remember the number of
> ranges after deduplication, which sometimes resulted in assert failure.
> It's a bit hard to trigger because concurrency / good timing is needed
> while summarizing the range, requiring a call to "union" function. Just
> running the pgbench did not trigger the issue for me, I had to manually
> call the brin_summarize_new_values().
> 
> For the record, I did a lot of testing with data randomized in various
> ways - the scripts are available here:
> 
> https://github.com/tvondra/brin-randomized-tests
> 
> It was focused on discovering issues in the distance functions, and
> comparing results with/without the index. Maybe the next step should be
> adding some changes to the data, which might trigger more issues like
> this one.
> 

Just found one more occurrence of this one, with the index below, while an
autovacuum was running:

"""
CREATE INDEX bt_f8_heap_seqno_idx 
ON public.bt_f8_heap 
USING brin (seqno float8_minmax_multi_ops);
"""

Attached is a backtrace.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140737005547296, 2, 6, 6807829, 
94364324073680, 4611686018427388799, 140153051314854, 0, 281470681751456, 0, 0, 
0, 
0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f77ec9a4535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {0, 0, 0, 0, 0, 140153048813557, 2, 
  3631085705129160672, 7018350057190602853, 94364324073680, 
7003715780713122240, 0, 12346699825684934912, 140737005547536, 
  94364359032568, 140737005548400}}, sa_flags = -402386736, 
sa_restorer = 0x55d2ea197ef8}
sigs = {__val = {32, 0 }}
#2  0x55d2e870514a in ExceptionalCondition (conditionName=0x55d2e87843b0 
"neranges >= 2", errorType=0x55d2e8783f14 "FailedAssertion", 
fileName=0x55d2e8783f00 "brin_minmax_multi.c", lineNumber=1338) at 
assert.c:69
No locals.
#3  0x55d2e804966f in build_distances (distanceFn=0x55d2ea1b2c88, 
colloid=0, eranges=0x55d2ea1bf7c0, neranges=1) at brin_minmax_multi.c:1338
i = 21970
ndistances = -367317728
distances = 0x55d2ea1b2c88
#4  0x55d2e804c595 in brin_minmax_multi_union (fcinfo=0x7fffe338f4c0) at 
brin_minmax_multi.c:2841
bdesc = 0x55d2ea1b2e88
col_a = 0x55d2ea196fb8
col_b = 0x55d2ea1bb7c8
colloid = 0
serialized_a = 0x55d2ea1b1dc0
serialized_b = 0x55d2ea1b1df8
ranges_a = 0x55d2ea198798
ranges_b = 0x55d2ea196d78
attno = 1
attr = 0x7f77e3693e88
eranges = 0x55d2ea1bf7c0
neranges = 1
cmpFn = 0x55d2ea1b2cc0
distanceFn = 0x55d2ea1b2c88
distances = 0x55d2e8785c27
ctx = 0x55d2ea1bf6a0
oldctx = 0x55d2ea196b30
#5  0x55d2e87100e0 in FunctionCall3Coll (flinfo=0x55d2ea1adbb0, 
collation=0, arg1=94364359143048, arg2=94364359028664, arg3=94364359178184)
at fmgr.c:1171
fcinfodata = {fcinfo = {flinfo = 0x55d2ea1adbb0, context = 0x0, 
resultinfo = 0x0, fncollation = 0, isnull = false, nargs = 3, 
args = 0x7fffe338f4e0}, 
  fcinfo_data = "\260\333\032\352\322U", '\000' , 
"\177\003\000\210.\033\352\322U\000\000\000k\031\352\322U\000\000\270o\031\352\322U\000\000\000\266\033\352\v\022\000\000ȷ\033\352\322U\000\000\000\333\032\352\322U\000"}
fcinfo = 0x7fffe338f4c0
result = 16812436272
__func__ = "FunctionCall3Coll"
#6  0x55d2e804451d in union_tuples (bdesc=0x55d2ea1b2e88, a=0x55d2ea196f90, 
b=0x55d2ea1b1cd0) at brin.c:1643
unionFn = 0x55d2ea1adbb0
col_a = 0x55d2ea196fb8
col_b = 0x55d2ea1bb7c8
opcinfo = 0x55d2ea1b2c70
keyno = 0
db = 0x55d2ea1bb7a0
cxt = 0x55d2ea1bb680
oldcxt = 0x55d2ea196b30
#7  0x55d2e8043fb2 in summarize_range (indexInfo=0x55d2ea1989b0, 
state=0x55d2ea198ac8, heapRel=0x7f77e3692330, heapBlk=5760, heapNumBlks=7281)
at brin.c:1452
newtup = 0x55d2ea1b1cd0
newsize = 40
didupdate = false
samepage = false
phbuf = 14589
phtup = 0x55d2ea1b1cd0
phsz = 40
offset = 16
scanNumBlks = 128
__func__ = "summarize_ran

Re: Crash in BRIN minmax-multi indexes

2022-10-03 Thread Jaime Casanova
On Mon, Oct 03, 2022 at 07:53:34PM +0200, Tomas Vondra wrote:
> On 9/29/22 08:53, Jaime Casanova wrote:
> > ...
> > 
> > Just found one more ocurrance of this one with this index while an
> > autovacuum was running:
> > 
> > """
> > CREATE INDEX bt_f8_heap_seqno_idx 
> > ON public.bt_f8_heap 
> > USING brin (seqno float8_minmax_multi_ops);
> > """
> > Attached is a backtrace.
> 
> Thanks for the report!
> 
> I think I see the issue - brin_minmax_multi_union does not realize the
> two summaries could have just one range each, and those can overlap so
> that merge_overlapping_ranges combines them into a single one.
> 
> This is harmless, except that the assert int build_distances is overly
> strict. Not sure if we should just remove the assert or only compute the
> distances with (neranges>1).
> 
> Do you happen to have the core dump? It'd be useful to look at ranges_a
> and ranges_b, to confirm this is indeed what's happening.
> 

I do have it.

(gdb) p *ranges_a
$4 = {
  typid = 701,
  colloid = 0,
  attno = 0,
  cmp = 0x0,
  nranges = 0,
  nsorted = 1,
  nvalues = 1,
  maxvalues = 32,
  target_maxvalues = 32,
  values = 0x55d2ea1987c8
}
(gdb) p *ranges_b
$5 = {
  typid = 701,
  colloid = 0,
  attno = 0,
  cmp = 0x0,
  nranges = 0,
  nsorted = 1,
  nvalues = 1,
  maxvalues = 32,
  target_maxvalues = 32,
  values = 0x55d2ea196da8
}

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Crash in BRIN minmax-multi indexes

2022-10-03 Thread Jaime Casanova
On Mon, Oct 03, 2022 at 10:29:38PM +0200, Tomas Vondra wrote:
> On 10/3/22 21:25, Jaime Casanova wrote:
> > On Mon, Oct 03, 2022 at 07:53:34PM +0200, Tomas Vondra wrote:
> >> On 9/29/22 08:53, Jaime Casanova wrote:
> >>> ...
> >>>
> >>> Just found one more ocurrance of this one with this index while an
> >>> autovacuum was running:
> >>>
> >>> """
> >>> CREATE INDEX bt_f8_heap_seqno_idx 
> >>> ON public.bt_f8_heap 
> >>> USING brin (seqno float8_minmax_multi_ops);
> >>> """
> >>> Attached is a backtrace.
> >>
> >> Thanks for the report!
> >>
> >> I think I see the issue - brin_minmax_multi_union does not realize the
> >> two summaries could have just one range each, and those can overlap so
> >> that merge_overlapping_ranges combines them into a single one.
> >>
> >> This is harmless, except that the assert int build_distances is overly
> >> strict. Not sure if we should just remove the assert or only compute the
> >> distances with (neranges>1).
> >>
> >> Do you happen to have the core dump? It'd be useful to look at ranges_a
> >> and ranges_b, to confirm this is indeed what's happening.
> >>
> > 
> > I do have it.
> > 
> > (gdb) p *ranges_a
> > $4 = {
> >   typid = 701,
> >   colloid = 0,
> >   attno = 0,
> >   cmp = 0x0,
> >   nranges = 0,
> >   nsorted = 1,
> >   nvalues = 1,
> >   maxvalues = 32,
> >   target_maxvalues = 32,
> >   values = 0x55d2ea1987c8
> > }
> > (gdb) p *ranges_b
> > $5 = {
> >   typid = 701,
> >   colloid = 0,
> >   attno = 0,
> >   cmp = 0x0,
> >   nranges = 0,
> >   nsorted = 1,
> >   nvalues = 1,
> >   maxvalues = 32,
> >   target_maxvalues = 32,
> >   values = 0x55d2ea196da8
> > }
> > 
> 
> Thanks. That mostly confirms my theory. I'd bet that this
> 
> (gdb) p ranges_a->values[0]
> (gdb) p ranges_b->values[0]
> 
> will print the same value. 
> 

You're right, they are the same value:

(gdb) p ranges_a->values[0]
$1 = 4679532294229561068
(gdb) p ranges_b->values[0]
$2 = 4679532294229561068
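
For what it's worth, the relaxed check you mention could look roughly like
the sketch below. This is just an illustration based on the backtrace, not
the actual fix, and the exact signature and return type of
build_distances() here are my assumptions.

"""
/*
 * Sketch only (not the committed fix): when the two summaries get merged
 * into a single range there are no distances to compute, so relax the
 * assert and bail out early.
 */
static DistanceValue *
build_distances(FmgrInfo *distanceFn, Oid colloid,
                ExpandedRange *eranges, int neranges)
{
    Assert(neranges >= 1);      /* was: Assert(neranges >= 2) */

    if (neranges == 1)
        return NULL;            /* single range, nothing to measure */

    /* ... existing code building the (neranges - 1) distances ... */
}
"""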

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: New committers: Daniel Gustafsson and John Naylor

2021-07-01 Thread Jaime Casanova
On Wed, Jun 30, 2021 at 04:43:58PM -0400, Tom Lane wrote:
> The Core Team would like to extend our congratulations to
> Daniel Gustafsson and John Naylor, who have accepted invitations
> to become our newest Postgres committers.
> 
> Please join me in wishing them much success and few bugs.
> 

Congrats, Daniel and John. Keep up the good work!

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: doc: BRIN indexes and autosummarize

2022-07-04 Thread Jaime Casanova
On Mon, Jul 04, 2022 at 09:38:42PM +0200, Alvaro Herrera wrote:
> What about this?
> 
> -- 
> Álvaro Herrera PostgreSQL Developer  —  https://www.EnterpriseDB.com/
> "Java is clearly an example of money oriented programming"  (A. Stepanov)

> diff --git a/doc/src/sgml/brin.sgml b/doc/src/sgml/brin.sgml
> index caf1ea4cef..0a715d41c7 100644
> --- a/doc/src/sgml/brin.sgml
> +++ b/doc/src/sgml/brin.sgml
> @@ -73,31 +73,55 @@
> summarized range, that range does not automatically acquire a summary
> tuple; those tuples remain unsummarized until a summarization run is
> invoked later, creating initial summaries.
> -   This process can be invoked manually using the
> -   brin_summarize_range(regclass, bigint) or
> -   brin_summarize_new_values(regclass) functions;
> -   automatically when VACUUM processes the table;
> -   or by automatic summarization executed by autovacuum, as insertions
> -   occur.  (This last trigger is disabled by default and can be enabled
> -   with the autosummarize parameter.)
> -   Conversely, a range can be de-summarized using the
> -   brin_desummarize_range(regclass, bigint) function,
> -   which is useful when the index tuple is no longer a very good
> -   representation because the existing values have changed.
> +  
> +

I feel that somewhere in this paragraph it should be mentioned that it is
off by default.

otherwise, +1

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




CFM for september commitfest

2021-08-26 Thread Jaime Casanova
Hi everyone,

Do we already have a ${subject}? Otherwise I could offer myself.
If anyone agrees, this would be my first time as CFM, so I would
appreciate some help.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: 2021-09 Commitfest

2021-09-01 Thread Jaime Casanova
On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:
> It is now 2021-09-01 Anywhere On Earth so I’ve set the September commitfest to
> In Progress and opened the November one for new entries.  Jaime Casanova has
> volunteered for CFM [0], so let’s help him close the 284 still open items in
> the queue.
> 

Thank you Daniel for editing the commitfest entries, that's something I
cannot do.

And you're right, we have 284 patches in the queue (excluding committed, 
returned with feedback, withdrawn and rejected)... 18 of them for more than
10 commitfests!

Needs review: 192. 
Waiting on Author: 68. 
Ready for Committer: 24

If you have a patch in this commitfest, please check in
http://commitfest.cputube.org/ if your patch still applies and passes
tests. 

Thanks to all of you for your great work!

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: WIP: System Versioned Temporal Table

2021-09-01 Thread Jaime Casanova
Hi,

This doesn't pass tests because a file is missing. Could you fix that
and send the patch again, please?

On Tue, Aug 10, 2021 at 7:20 AM Simon Riggs 
wrote:

> On Wed, 14 Jul 2021 at 12:48, vignesh C  wrote:
>
> > The patch does not apply on Head anymore, could you rebase and post a
> > patch. I'm changing the status to "Waiting for Author".
>
> OK, so I've rebased the patch against current master to take it to v15.
>
> I've then worked on the patch some myself to make v16 (attached),
> adding these things:
>
> * Add code, docs and test to remove the potential anomaly where
> endtime < starttime, using the sqlstate 2201H as pointed out by Vik
> * Add code and tests to handle multiple changes in a transaction
> correctly, according to SQL Std
> * Add code and tests to make Foreign Keys behave correctly, according to
> SQL Std
> * Fixed nascent bug in relcache setup code
> * Various small fixes from Japin's review - thanks! I've used
> starttime and endtime as default column names
> * Additional tests and docs to show that the functionality works with
> or without PKs on the table
>
> I am now satisfied that the patch does not have any implementation
> anomalies in behavioral design, but it is still a long way short in
> code architecture.
>
> There are various aspects still needing work. This is not yet ready
> for Commit, but it is appropriate now to ask for initial design
> guidance on architecture and code placement by a Committer, so I am
> setting this to Ready For Committer, in the hope that we get the
> review in SeptCF and a later version can be submitted for later commit
> in JanCF. With the right input, this patch is about a person-month
> away from being ready, assuming we don't hit any blocking issues.
>
> Major Known Issues
> * SQLStd says that it should not be possible to update historical
> rows, but those tests show we fail to prevent that and there is code
> marked NOT_USED in those areas
> * The code is structured poorly around
> parse-analyze/rewriter/optimizer/executor and that needs positive
> design recommendations, rather than critical review
> * Joins currently fail because of the botched way WHERE clauses are
> added, resulting in duplicate names
> * Views probably don't work, but there are no tests
> * CREATE TABLE (LIKE foo) doesn't correctly copy across all features -
> test for that added, with test failure accepted for now
> * ALTER TABLE is still incomplete and also broken; I suggest we remove
> that for the first version of the patch to reduce patch size for an
> initial commit.
>
> Minor Known Issues
> * Logical replication needs some minor work, no tests yet
> * pg_dump support looks like it exists and might work easily, but
> there are no tests yet
> * Correlation names don't work in FROM clause - shift/reduce errors
> from double use of AS
> * Add test and code to prevent triggers referencing period cols in the
> WHEN clause
> * No tests yet to prove you can't set various parameters/settings on
> the period time start/end cols
> * Code needs some cleanup in a few places
> * Not really sure what value is added by
> lock-update-delete-system-versioned.spec
>
> * IMHO we should make the PK definition use "endtime DESC", so that
> the current version is always the first row found in the PK for any
> key, since historical indexes will grow bigger over time
>
> There are no expected issues with integration with MERGE, since SQLStd
> explains how to handle that.
>
> Other reviews are welcome.
>
> --
> Simon Riggs    http://www.EnterpriseDB.com/
>


--


Re: 2021-09 Commitfest

2021-09-01 Thread Jaime Casanova
On Wed, Sep 01, 2021 at 06:32:38PM +0200, Magnus Hagander wrote:
> On Wed, Sep 1, 2021 at 4:26 PM Jaime Casanova
>  wrote:
> >
> > On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:
> > > It is now 2021-09-01 Anywhere On Earth so I’ve set the September 
> > > commitfest to
> > > In Progress and opened the November one for new entries.  Jaime Casanova 
> > > has
> > > volunteered for CFM [0], so let’s help him close the 284 still open items 
> > > in
> > > the queue.
> > >
> >
> > Thank you Daniel for editing the commitfest entries, that's something I
> > cannot do.
> 
> I've added cf admin permissions to you as well now.
> 

I have the power! Mwahahaha!
Eh! I mean, thanks Magnus ;)

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Numeric x^y for negative x

2021-09-01 Thread Jaime Casanova
On Fri, Aug 06, 2021 at 09:27:03PM +0100, Dean Rasheed wrote:
> On Fri, 6 Aug 2021 at 21:26, Tom Lane  wrote:
> >
> > Dean Rasheed  writes:
> > > On Fri, 6 Aug 2021 at 17:15, Tom Lane  wrote:
> > >> Looks plausible by eyeball (I've not tested).
> >
> > > So, I have back-branch patches for this ready to go. The question is,
> > > is it better to push now, or wait until after next week's releases?
> >
> > I'd push now, given we have a failing buildfarm member.
> >
> > Admittedly, there may be nobody else using that compiler out in
> > the real world, but we don't know that.
> >
> 
> OK. Will do.
> 

Hi Dean,

It seems you already committed this, but it's still marked as "Ready for
Committer" in the commitfest app.

Are we waiting for something else, or can we mark it as committed?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: shared-memory based stats collector

2021-09-02 Thread Jaime Casanova
On Mon, Jul 26, 2021 at 06:27:54PM -0700, Andres Freund wrote:
> Hi,
> 
> On 2021-07-26 17:52:01 +0900, Kyotaro Horiguchi wrote:
> > > > Yeah, thank you very much for checking that. However, this patch is
> > > > now developed in Andres' GitHub repository.  So I'm at a loss what to
> > > > do for the failure..
> > > 
> > > I'll post a rebased version soon.
> > 
> > (Sorry if you feel being hurried, which I didn't meant to.)
> 
> No worries!
> 
> I had intended to post a rebase by now. But while I did mostly finish
> that (see [1]) I unfortunately encountered a new issue around
> partitioned tables, see [2]. Currently I'm hoping for a few thoughts on
> that thread about the best way to address the issues.
> 
> Greetings,
> 
> Andres Freund
> 
> [1] https://github.com/anarazel/postgres/tree/shmstat
> [2] 
> https://www.postgresql.org/message-id/20210722205458.f2bug3z6qzxzpx2s%40alap3.anarazel.de
> 
> 

Hi Andres,

Are you planning to post a rebase soon?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: SQL:2011 application time

2021-09-04 Thread Jaime Casanova
On Sat, Jul 03, 2021 at 10:46:55AM -0700, Paul A Jungwirth wrote:
> On Fri, Jul 2, 2021 at 2:39 PM Paul A Jungwirth
>  wrote:
> >
> > On Wed, Jun 30, 2021 at 10:39 AM Paul A Jungwirth
> >  wrote:
> > > Here is a set of patches to add SQL:2011 application-time support (aka
> > > valid-time).
> >
> > Here is a small fix to prevent `FOR PORTION OF valid_at FROM MAXVALUE
> > TO foo` or `FROM foo TO MINVALUE`. I rebased on latest master too.
> 
> Here is a patch set that cleans up the catalog docs for pg_period. The
> columns have changed since that was written, and also we use a
> different sgml structure on those pages now. Note pg_period still
> contains a couple essentially-unused columns, perislocal and
> perinhcount. Those are intended for supporting table inheritance, so
> I've left them in.
> 

Hi Paul,

Thanks for working on this. It would be a great improvement.

I wanted to test the patches, but:

patch 01: applies but doesn't compile; the compile errors are attached.
patch 04: does not apply cleanly.

Please fix and resend.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
parse_utilcmd.c: In function ‘generateClonedIndexStmt’:
parse_utilcmd.c:1730:2: warning: ISO C90 forbids mixed declarations and code 
[-Wdeclaration-after-statement]
  Period *p = makeNode(Period);
  ^~
tablecmds.c: In function ‘ATPrepCmd’:
tablecmds.c:4637:24: error: incompatible type for argument 1 of 
‘ATSimplePermissions’
    ATSimplePermissions(rel, ATT_TABLE);
    ^~~
tablecmds.c:403:48: note: expected ‘AlterTableType’ {aka 
‘enum AlterTableType’} but argument is of type ‘Relation’ {aka 
‘struct RelationData *’}
 static void ATSimplePermissions(AlterTableType cmdtype, Relation rel, int 
allowed_targets);
 ~~~^~~
tablecmds.c:308:23: warning: passing argument 2 of ‘ATSimplePermissions’ 
makes pointer from integer without a cast [-Wint-conversion]
 #define  ATT_TABLE0x0001
   ^~
tablecmds.c:4637:29: note: in expansion of macro ‘ATT_TABLE’
    ATSimplePermissions(rel, ATT_TABLE);
     ^
tablecmds.c:403:66: note: expected ‘Relation’ {aka ‘struct 
RelationData *’} but argument is of type ‘int’
 static void ATSimplePermissions(AlterTableType cmdtype, Relation rel, int 
allowed_targets);
 ~^~~
tablecmds.c:4637:4: error: too few arguments to function 
‘ATSimplePermissions’
    ATSimplePermissions(rel, ATT_TABLE);
    ^~~
tablecmds.c:403:13: note: declared here
 static void ATSimplePermissions(AlterTableType cmdtype, Relation rel, int 
allowed_targets);
 ^~~
tablecmds.c:4645:24: error: incompatible type for argument 1 of 
‘ATSimplePermissions’
    ATSimplePermissions(rel, ATT_TABLE);
    ^~~
tablecmds.c:403:48: note: expected ‘AlterTableType’ {aka 
‘enum AlterTableType’} but argument is of type ‘Relation’ {aka 
‘struct RelationData *’}
 static void ATSimplePermissions(AlterTableType cmdtype, Relation rel, int 
allowed_targets);
 ~~~^~~
tablecmds.c:308:23: warning: passing argument 2 of ‘ATSimplePermissions’ 
makes pointer from integer without a cast [-Wint-conversion]
 #define  ATT_TABLE0x0001
   ^~
tablecmds.c:4645:29: note: in expansion of macro ‘ATT_TABLE’
    ATSimplePermissions(rel, ATT_TABLE);
     ^
tablecmds.c:403:66: note: expected ‘Relation’ {aka ‘struct 
RelationData *’} but argument is of type ‘int’
 static void ATSimplePermissions(AlterTableType cmdtype, Relation rel, int 
allowed_targets);
 ~^~~
tablecmds.c:4645:4: error: too few arguments to function 
‘ATSimplePermissions’
    ATSimplePermissions(rel, ATT_TABLE);
    ^~~
tablecmds.c:403:13: note: declared here
 static void ATSimplePermissions(AlterTableType cmdtype, Relation rel, int 
allowed_targets);
 ^~~
tablecmds.c: In function ‘alter_table_type_to_string’:
tablecmds.c:6179:2: warning: enumeration value ‘AT_AddPeriod’ not handled 
in switch [-Wswitch]
  switch (cmdtype)
  ^~
tablecmds.c:6179:2: warning: enumeration value ‘AT_DropPeriod’ not handled 
in switch [-Wswitch]
tablecmds.c: In function ‘ATExecDropPeriod’:
tablecmds.c:7968:23: error: incompatible type for argument 1 of 
‘ATSimplePermissions’
   ATSimplePermissions(rel, ATT_TABLE);
   ^~~
tablecmds.c:6325:36: note: expected ‘AlterTableType’ {aka

Re: use AV worker items infrastructure for GIN pending list's cleanup

2021-09-08 Thread Jaime Casanova
On Mon, May 17, 2021 at 01:46:37PM +0900, Masahiko Sawada wrote:
> On Mon, Apr 5, 2021 at 3:31 PM Jaime Casanova
>  wrote:
> >
> > Hi,
> >
> > When AV worker items where introduced 4 years ago, i was suggested that
> > it could be used for other things like cleaning the pending list of GIN
> > index when it reaches gin_pending_list_limit instead of making user
> > visible operation pay the price.
> >
> > That never happened though. So, here is a little patch for that.
> 
> Thank you for working on this.
> 
> I like the idea of cleaning the GIN pending list using by autovacuum
> work item. But with the patch, we request and skip the pending list
> cleanup if the pending list size exceeds gin_pending_list_limit during
> insertion. But autovacuum work items are executed after an autovacuum
> runs. So if many insertions happen before executing the autovacuum
> work item, we will end up greatly exceeding the threshold
> (gin_pending_list_limit) and registering the same work item again and
> again. Maybe we need something like a soft limit and a hard limit?
> That is, if the pending list size exceeds the soft limit, we request
> the work item. OTOH, if it exceeds the hard limit
> (gin_pending_list_limit) we cleanup the pending list before insertion.
> We might also need to have autovacuum work items ignore the work item
> if the same work item with the same arguments is already registered.
> In addition to that, I think we should avoid the work item for
> cleaning the pending list from being executed if an autovacuum runs on
> the gin index before executing the work item.
> 

Thanks for your comments on this. I have been working on a rebased
version, but ENOTIME right now.
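
For my own notes while I rebase, the soft/hard-limit idea you describe
could be sketched roughly like this. It is pure pseudo-code and every name
in it is hypothetical, not the patch's actual API:

"""
/* pseudo-code sketch; all names are hypothetical */
if (pending_list_size > gin_pending_list_limit)
    clean_pending_list_now();               /* hard limit: clean before the insert */
else if (pending_list_size > soft_limit &&
         !work_item_already_queued(indexoid))
    request_autovacuum_work_item(indexoid); /* soft limit: defer to autovacuum */
"""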

I will mark this one as "Returned with feedback" and resubmit for
November.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: PoC Refactor AM analyse API

2021-09-08 Thread Jaime Casanova
On Fri, Feb 19, 2021 at 12:06:12PM +1000, Denis Smirnov wrote:
> Thanks for your review, Heikki.
> 
> I have made the changes you have requested.
> 
> 1. All modifications interconnected with column projection were reverted 
> (they should be added in https://commitfest.postgresql.org/31/2922 if the 
> current patch would be merged one day).
> 2. I have returned PROGRESS_ANALYZE_* states.
> 3. qsort() was moved into heapam_acquire_sample_rows(). Also a comment was 
> added, that the acquire_sample_rows() AM function must return the tuples in a 
> physical table order.
> 4. table_beginscan_analyze() was removed as a redundant function.
> 5. acquire_sample_rows() comment about reservoir was changed.
> 

Hi Denis,

This doesn't apply anymore because of c6fc50c; can you submit a rebased
patch?

Please note that the patch should be submitted with a .patch extension
instead of .txt, so that the CI at http://commitfest.cputube.org/ can
run automatic tests on it.

Regards,

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: LogwrtResult contended spinlock

2021-09-08 Thread Jaime Casanova
On Tue, Feb 02, 2021 at 08:19:19PM -0300, Alvaro Herrera wrote:
> Hello,
> 
> So I addressed about half of your comments in this version merely by
> fixing silly bugs.  The problem I had which I described as
> "synchronization fails" was one of those silly bugs.
> 

Hi Álvaro,

Are we waiting for another version of the patch based on Andres'
comments, or is this version good enough for testing?

BTW, current patch still applies cleanly.

regards,

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Asymmetric partition-wise JOIN

2021-09-09 Thread Jaime Casanova
On Thu, Sep 09, 2021 at 09:50:46AM +, Aleksander Alekseev wrote:
> It looks like this patch needs to be updated. According to 
> http://cfbot.cputube.org/ it applies but doesn't pass any tests. Changing the 
> status to save time for reviewers.
> 
> The new status of this patch is: Waiting on Author

Just to give some more information to work with: I found that this patch
makes postgres crash with a segmentation fault.

"""
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x556e37ef1b55 in bms_equal (a=0x7f6e37a9c5b0, b=0x7f6e37a9c5b0) at 
bitmapset.c:126
126 if (shorter->words[i] != longer->words[i])
"""

Attached are the query that triggers the crash and the backtrace.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL


bug.sql
Description: application/sql
#0  0x556e37ef1b55 in bms_equal (a=0x7f6e37a9c5b0, b=0x7f6e37a9c5b0) at 
bitmapset.c:126
shorter = 0x7f6e37a9c5b0
longer = 0x7f6e37a9c5b0
shortlen = 2139062143
longlen = 32622
i = 138057
#1  0x556e37fc9f5d in find_param_path_info (rel=0x556e3944c490, 
required_outer=0x7f6e37a9c5b0) at relnode.c:1634
ppi = 0x7f6e37a9cc08
lc__state = {l = 0x7f6e37a9cc40, i = 0}
lc = 0x7f6e37a5f898
#2  0x556e37fbeaeb in reparameterize_path_by_child (root=0x556e394340e0, 
path=0x7f6e37a9c6c8, child_rel=0x7f6e37a77ac0) at pathnode.c:4205
new_path = 0x7f6e37a89408
new_ppi = 0x7f6e37a77d30
old_ppi = 0x7f6e37a9cc08
required_outer = 0x7f6e37a9c5b0
#3  0x556e37f66545 in try_nestloop_path (root=0x556e394340e0, 
joinrel=0x7f6e37a88a80, outer_path=0x7f6e37a788e8, inner_path=0x7f6e37a9c6c8, 
pathkeys=0x0, jointype=JOIN_LEFT, extra=0x7ffde36f3e50)
at joinpath.c:662
required_outer = 0x0
workspace = {startup_cost = 46733.4674, total_cost = 
1195882.0625, run_cost = 1149148.595, inner_run_cost = 0.020375816993464052, 
inner_rescan_run_cost = 0.020375816993464052, 
  outer_rows = 6.9224208015025731e-310, inner_rows = 
6.9224208087218603e-310, outer_skip_rows = 6.9224208013235237e-310, 
inner_skip_rows = 4.640852265067508e-310, numbuckets = 960708832, 
  numbatches = 21870, inner_rows_total = 6.9224208047606396e-310}
innerrel = 0x556e3944c490
outerrel = 0x7f6e37a77ac0
innerrelids = 0x556e3944c2c8
outerrelids = 0x7f6e37a77d30
inner_paramrels = 0x7f6e37a9c5b0
outer_paramrels = 0x0
#4  0x556e37f67bf9 in match_unsorted_outer (root=0x556e394340e0, 
joinrel=0x7f6e37a88a80, outerrel=0x7f6e37a77ac0, innerrel=0x556e3944c490, 
jointype=JOIN_LEFT, extra=0x7ffde36f3e50) at joinpath.c:1702
innerpath = 0x7f6e37a9c6c8
mpath = 0x0
lc2__state = {l = 0x7f6e37a9d0a0, i = 1}
lc2 = 0x7f6e37a9d0c0
outerpath = 0x7f6e37a788e8
merge_pathkeys = 0x0
lc1__state = {l = 0x7f6e37a786c8, i = 0}
save_jointype = JOIN_LEFT
nestjoinOK = true
useallclauses = false
inner_cheapest_total = 0x7f6e37a9c3b0
matpath = 0x7f6e37a89370
lc1 = 0x7f6e37a786e0
__func__ = "match_unsorted_outer"
#5  0x556e37f65c14 in add_paths_to_joinrel (root=0x556e394340e0, 
joinrel=0x7f6e37a88a80, outerrel=0x7f6e37a77ac0, innerrel=0x556e3944c490, 
jointype=JOIN_LEFT, sjinfo=0x7f6e37a88630, 
restrictlist=0x7f6e37a88a28) at joinpath.c:291
extra = {restrictlist = 0x7f6e37a88a28, mergeclause_list = 
0x7f6e37a88e28, inner_unique = true, sjinfo = 0x7f6e37a88630, semifactors = 
{outer_match_frac = 0.00039215686274509802, match_count = 2550}, 
  param_source_rels = 0x0}
mergejoin_allowed = true
lc = 0x0
joinrelids = 0x7f6e37a886e8
#6  0x556e37f6a3d9 in populate_joinrel_with_paths (root=0x556e394340e0, 
rel1=0x7f6e37a77ac0, rel2=0x556e3944c490, joinrel=0x7f6e37a88a80, 
sjinfo=0x7f6e37a88630, restrictlist=0x7f6e37a88a28)
at joinrels.c:825
__func__ = "populate_joinrel_with_paths"
#7  0x556e37f6bca5 in extract_asymmetric_partitionwise_subjoin 
(root=0x556e394340e0, joinrel=0x7f6e37a86bd8, append_path=0x7f6e37a7b6c8, 
inner_rel=0x556e3944c490, jointype=JOIN_LEFT, extra=0x7ffde36f4150)
at joinrels.c:1617
child_path = 0x7f6e37a788e8
child_joinrelids = 0x7f6e37a885e0
child_sjinfo = 0x7f6e37a88630
child_rel = 0x7f6e37a77ac0
parent_relids = 0x7f6e37a88608
child_joinrel = 0x7f6e37a88a80
child_restrictlist = 0x7f6e37a88a28
lc__state = {l = 0x7f6e37a7b8a8, i = 0}
result = 0x0
lc = 0x7f6e37a7b8c0
#8  0x556e37f6bfb8 in try_asymmetric_partitionwise_join 
(root=0x556e394340e0, joinrel=0x7f6e37a86bd8, outer_rel=0x7f6e37a65f68, 
inner_rel=0x556e3944c490, jointype=JOIN_LEFT, extra=0x7ffde36f4150)
at joinrels.c:1713
_save_exception_stack = 0x7ffde36f4b80
 

Re: Feedback on table expansion hook (including patch)

2021-09-09 Thread Jaime Casanova
On Wed, May 12, 2021 at 10:19:17PM +0900, Amit Langote wrote:
> (Sorry about being very late to this thread.)
> 
> > Would it be unreasonable of us to ask for a worked-out example making
> > use of the proposed hook?  That'd go a long way towards resolving the
> > question of whether you can do anything useful without duplicating
> > lots of code.
> >
> > I've also been wondering, given the table-AM projects that are
> > going on, whether we shouldn't refactor things to give partitioned
> > tables a special access method, and then shove most of the planner
> > and executor's hard-wired partitioning logic into access method
> > callbacks.  That would make it a lot more feasible for extensions
> > to implement custom partitioning-like behavior ... or so I guess.
> 
> Interesting proposition...
> 

Since there is no clear definition here, we seem to be expecting an
example of how the hook will be used, and there has been no activity
since May.

I suggest we move this to Returned with feedback, which I'll do in a
couple of hours.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Toast compression method options

2021-09-09 Thread Jaime Casanova
On Mon, Jul 19, 2021 at 01:24:03PM +0530, Dilip Kumar wrote:
> On Sun, Jul 18, 2021 at 9:15 PM Dilip Kumar  wrote:
> >
> > On Wed, Jul 14, 2021 at 5:35 PM vignesh C  wrote:
> > >
> > > On Thu, May 6, 2021 at 7:24 PM Dilip Kumar  wrote
> > >
> > > The patch does not apply on Head anymore, could you rebase and post a
> > > patch. I'm changing the status to "Waiting for Author".
> >
> > Okay, I will rebase and send it by next week.
> 
> I have rebased the patch.
> 

Hi,

This doesn't apply cleanly or compile.
Are you planning to send a rebased version?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Toast compression method options

2021-09-10 Thread Jaime Casanova
On Fri, Sep 10, 2021 at 10:54:04AM +0530, Dilip Kumar wrote:
> On Fri, 10 Sep 2021 at 10:40 AM, Jaime Casanova <
> jcasa...@systemguards.com.ec> wrote:
> 
> > On Mon, Jul 19, 2021 at 01:24:03PM +0530, Dilip Kumar wrote:
> > > On Sun, Jul 18, 2021 at 9:15 PM Dilip Kumar 
> > wrote:
> > > >
> > > > On Wed, Jul 14, 2021 at 5:35 PM vignesh C  wrote:
> > > > >
> > > > > On Thu, May 6, 2021 at 7:24 PM Dilip Kumar 
> > wrote
> > > > >
> > > > > The patch does not apply on Head anymore, could you rebase and post a
> > > > > patch. I'm changing the status to "Waiting for Author".
> > > >
> > > > Okay, I will rebase and send it by next week.
> > >
> > > I have rebased the patch.
> > >
> >
> > Hi,
> >
> > This doesn't apply cleanly nor compile.
> > Are you planning to send a rebased version?
> 
> 
> I will do that early next week.
> 

Great! I'm marking the patch as "Waiting on Author".
Thanks for continuing to work on this.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Numeric x^y for negative x

2021-09-10 Thread Jaime Casanova
On Thu, Sep 02, 2021 at 07:27:09AM +0100, Dean Rasheed wrote:
> On Thu, 2 Sept 2021 at 00:39, Jaime Casanova
>  wrote:
> >
> > Hi Dean,
> >
> > It seems you already committed this. But it's still as "Ready for
> > committer" in the commitfest app.
> >
> > Are we waiting for something else or we can mark it as committed?
> >
> 
> It's mostly done, but there is one more corner case where it loses
> precision. I'll post an update shortly.
> 

Great! I'm marking this as "waiting on author".

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: WIP: System Versioned Temporal Table

2021-09-10 Thread Jaime Casanova
On Tue, Aug 10, 2021 at 01:20:14PM +0100, Simon Riggs wrote:
> On Wed, 14 Jul 2021 at 12:48, vignesh C  wrote:
> 
> > The patch does not apply on Head anymore, could you rebase and post a
> > patch. I'm changing the status to "Waiting for Author".
> 
> OK, so I've rebased the patch against current master to take it to v15.
> 
> I've then worked on the patch some myself to make v16 (attached),
> adding these things:
> 

Hi Simon,

This one no longer applies or compiles.
Can we expect a rebase soon?


-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: SQL:2011 application time

2021-09-10 Thread Jaime Casanova
On Mon, Sep 06, 2021 at 12:52:37PM -0700, Paul A Jungwirth wrote:
> On Sat, Sep 4, 2021 at 12:56 PM Jaime Casanova
>  wrote:
> >
> > patch 01: does apply but doesn't compile, attached the compile errors.
> > patch 04: does not apply clean.
> 
> Thanks for taking a look! I've rebased & made it compile again. v7 attached.
> 

patch 01: applies but gives a compile warning (which is fixed by patch 02):
"""
parse_utilcmd.c: In function ‘generateClonedIndexStmt’:
parse_utilcmd.c:1730:2: warning: ISO C90 forbids mixed declarations and code 
[-Wdeclaration-after-statement]
  Period *p = makeNode(Period);
  ^~
"""

patch 03: produces these compile errors:

analyze.c: In function ‘transformForPortionOfBound’:
analyze.c:1171:3: warning: ISO C90 forbids mixed declarations and code 
[-Wdeclaration-after-statement]
   A_Const*n2 = makeNode(A_Const);
   ^~~
analyze.c:1172:10: error: ‘union ValUnion’ has no member named ‘type’
   n2->val.type = T_Null;
  ^
analyze.c:1172:18: error: ‘T_Null’ undeclared (first use in this function)
   n2->val.type = T_Null;
  ^~
analyze.c:1172:18: note: each undeclared identifier is reported only once for 
each function it appears in



-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: 2021-09 Commitfest

2021-09-10 Thread Jaime Casanova
On Wed, Sep 01, 2021 at 09:26:33AM -0500, Jaime Casanova wrote:
> On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:
> > It is now 2021-09-01 Anywhere On Earth so I’ve set the September commitfest 
> > to
> > In Progress and opened the November one for new entries.  Jaime Casanova has
> > volunteered for CFM [0], so let’s help him close the 284 still open items in
> > the queue.
> > 
> 
> Thank you Daniel for editing the commitfest entries, that's something I
> cannot do.
> 
> And you're right, we have 284 patches in the queue (excluding committed, 
> returned with feedback, withdrawn and rejected)... 18 of them for more than
> 10 commitfests!
> 
> Needs review: 192. 
> Waiting on Author: 68. 
> Ready for Committer: 24
> 

Hi everyone,

In the first 10 days of this commitfest some numbers have moved, mostly
thanks to Daniel Gustafsson and good work from committers:

Needs review: 171.
Waiting on Author: 79.
Ready for Committer: 15.

How can we make progress on the "Needs review" queue? It's just too long!

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Polyphase merge is obsolete

2021-09-10 Thread Jaime Casanova
On Wed, Jul 14, 2021 at 06:04:14PM +0300, Heikki Linnakangas wrote:
> On 14/07/2021 15:12, vignesh C wrote:
> > On Sat, Jan 23, 2021 at 3:49 AM Heikki Linnakangas  wrote:
> > > Here's an updated version that fixes one bug:
> > > 
> > > The CFBot was reporting a failure on the FreeBSD system [1]. It turned
> > > out to be an out-of-memory issue caused by an underflow bug in the
> > > calculation of the size of the tape read buffer size. With a small
> > > work_mem size, the memory left for tape buffers was negative, and that
> > > wrapped around to a very large number. I believe that was not caught by
> > > the other systems, because the other ones had enough memory for the
> > > incorrectly-sized buffers anyway. That was the case on my laptop at
> > > least. It did cause a big slowdown in the 'tuplesort' regression test
> > > though, which I hadn't noticed.
> > > 
> > > The fix for that bug is here as a separate patch for easier review, but
> > > I'll squash it before committing.
> > 
> > The patch does not apply on Head anymore, could you rebase and post a
> > patch. I'm changing the status to "Waiting for Author".
> 
> Here's a rebased version. I also squashed that little bug fix from previous
> patch set.
> 

Hi,

This patch does not apply; can you submit a rebased version?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




new regexp_*(text, text, int) functions crash

2021-09-11 Thread Jaime Casanova
Hi,

BTW, this only happens when the third parameter is large. Here is an
example that consistently crashes here:

select regexp_count('jaime.casanova', 'valid', 102481);

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Schema variables - new implementation for Postgres 15

2021-09-11 Thread Jaime Casanova
On Fri, Sep 10, 2021 at 10:06:04AM +0200, Pavel Stehule wrote:
> Hi
> 
> čt 9. 9. 2021 v 12:21 odesílatel Erik Rijkers  napsal:
> 
> >  > [schema-variables-20210909.patch]
> >
> > Hi Pavel,
> >
> > The patch applies and compiles fine but 'make check' for the
> > assert-enabled fails on 131 out of 210 tests.
> >
> > (while compiling HEAD checks run without errors for both assert-disabled
> > and assert-enabled)
> >
> >
> 
> Please, check, attached patch. I fixed a routine for processing a list of
> identifiers - now it works with the identifier's node more sensitive.
> Previous implementation of strVal was more tolerant.
> 

Hi Pavel,

Just noticed that there is no support for REASSIGN OWNED BY:

"""
regression=# create variable random_number numeric;
CREATE VARIABLE
regression=# alter variable random_number owner to jcm;
ALTER VARIABLE
regression=# reassign owned by jcm to jaime;
ERROR:  unexpected classid 9222
"""


TEMP variables are not schema variables? At least they are not attached to
the schema one expects:

"""
regression=# create temp variable random_number numeric ;
CREATE VARIABLE
regression=# \dV
                                        List of variables
  Schema   |     Name      |  Type   | Is nullable | Is mutable | Default |  Owner   | Transactional end action
-----------+---------------+---------+-------------+------------+---------+----------+--------------------------
 pg_temp_4 | random_number | numeric | t           | t          |         | jcasanov |
(1 row)

regression=# select public.random_number;
ERROR:  missing FROM-clause entry for table "public"
LINE 1: select public.random_number;
   ^
"""

There was a comment that TEMP variables should be DECLAREd instead of
CREATEd; I guess that is because they have similar behaviour. At least,
I would like to see similar messages when using the ON COMMIT DROP
option on a TEMP variable:

"""
regression=# create temp variable random_number numeric on commit drop;
CREATE VARIABLE
regression=# \dV
Did not find any schema variables.
regression=# declare q cursor  for select 1;
ERROR:  DECLARE CURSOR can only be used in transaction blocks
"""

About that, why not use the syntax ON COMMIT RESET instead of inventing
ON TRANSACTION END RESET? It seems better because you already use
ON COMMIT DROP.

I will test this patch more tomorrow. Great work, very complete.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Confusing messages about index row size

2021-09-11 Thread Jaime Casanova
Hi everyone,

I tried an old test that at some point crashed the database... that crash
is already fixed.

So now it gives a good ERROR message:

"""
postgres=# create table t1 (col1 text, col2 text);
CREATE TABLE
postgres=# create unique index on t1 ((col1 || col2));
CREATE INDEX
postgres=# insert into t1 values((select array_agg(md5(g::text))::text from
postgres(# generate_series(1, 256) g), version());
ERROR:  index row requires 8552 bytes, maximum size is 8191
"""

Great, so I reduced the index row size:

"""
postgres=# insert into t1 values((select array_agg(md5(g::text))::text from 
generate_series(1, 200) g), version());
ERROR:  index row size 6704 exceeds btree version 4 maximum 2704 for index 
"t1_expr_idx"
DETAIL:  Index row references tuple (0,1) in relation "t1".
HINT:  Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text 
indexing.
"""

So, which is it? Can the index row size be up to 8191 bytes, or can it not
be greater than 2704?

regards,

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Schema variables - new implementation for Postgres 15

2021-09-12 Thread Jaime Casanova
On Sun, Sep 12, 2021 at 05:38:42PM +0200, Pavel Stehule wrote:
> Hi
> 
> > """
> > regression=# create temp variable random_number numeric on commit drop;
> > CREATE VARIABLE
> > regression=# \dV
> > Did not find any schema variables.
> > regression=# declare q cursor  for select 1;
> > ERROR:  DECLARE CURSOR can only be used in transaction blocks
> > """
> >
> 
> I have different result
> 
> postgres=# create temp variable random_number numeric on commit drop;
> CREATE VARIABLE
> postgres=# \dV
>  List of variables
> ┌┬───┬─┬─┬┬─┬───┬──┐
> │ Schema │ Name  │  Type   │ Is nullable │ Is mutable │ Default │
> Owner │ Transactional end action │
> ╞╪═══╪═╪═╪╪═╪═══╪══╡
> │ public │ random_number │ numeric │ t   │ t  │ │
> tom2  │  │
> └┴───┴─┴─┴┴─┴───┴──┘
> (1 row)
> 
> 
> 

Hi, 

Thanks, I will test the rebased version.
BTW, that is not the temp variable. You can tell because of the schema and
the lack of a "Transactional end action". That is a normal non-temp
variable that was created before. A TEMP variable with ON COMMIT DROP
created outside an explicit transaction will disappear immediately, like a
cursor does in the same situation.


-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




brin multi minmax crash for inet value

2021-09-12 Thread Jaime Casanova
Hi Tomas,

Just noticed that this query crashes the server. Execute it in the
regression database:

"""
update brintest_multi set inetcol = '192.168.204.50/0'::inet;
"""

Attached is the backtrace. Let me know if you need anything else to
track it down.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140730302333136, 2, 6, 435, 
94894241509376, 
4611686018427388799, 140219046775462, 0, 281470681751456, 0, 0, 0, 
0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f874a40f535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {
  0, 0, 0, 0, 0, 140219044532213, 2, 7292513299284754848, 
7003713384994060596, 
  94894241509376, 7003719963994494672, 0, 3387215930523559936, 
14073030276, 0, 
  140730302334240}}, sa_flags = 1234071552, sa_restorer = 0x0}
sigs = {__val = {32, 0 }}
#2  0x564e49f91849 in ExceptionalCondition (
conditionName=0x564e4a00f6e4 "(delta >= 0) && (delta <= 1)", 
errorType=0x564e4a00ef74 "FailedAssertion", fileName=0x564e4a00ef60 
"brin_minmax_multi.c", 
lineNumber=2368) at assert.c:69
No locals.
#3  0x564e498f16db in brin_minmax_multi_distance_inet 
(fcinfo=0x7ffe53ae05c0)
at brin_minmax_multi.c:2368
delta = -1.1641532182693481e-08
i = -1
len = 4
addra = 0x564e4b6ee950 ""
addrb = 0x564e4b6ee970 ""
ipa = 0x564e4b6c8828
ipb = 0x564e4b6c5a78
lena = 0
lenb = 0
#4  0x564e49f9bcc4 in FunctionCall2Coll (flinfo=0x564e4b60c640, 
collation=0, 
arg1=94894272841768, arg2=94894272830072) at fmgr.c:1160
fcinfodata = {fcinfo = {flinfo = 0x564e4b60c640, context = 0x0, 
resultinfo = 0x0, 
fncollation = 0, isnull = false, nargs = 2, args = 0x7ffe53ae05e0}, 
  fcinfo_data = "@\306`KNV", '\000' , 
"\002\000(\210lKNV\000\000\000\000\000\000\000\000\000\000xZlKNV\000\000\000\350nKNV\000"}
fcinfo = 0x7ffe53ae05c0
result = 94894272996608
__func__ = "FunctionCall2Coll"
#5  0x564e498ef621 in build_distances (distanceFn=0x564e4b60c640, 
colloid=0, 
eranges=0x564e4b6ee620, neranges=11) at brin_minmax_multi.c:1352
a1 = 94894272841768
a2 = 94894272830072
r = 94894241543955
i = 0
ndistances = 10
distances = 0x564e4b6ee838
#6  0x564e498f0199 in compactify_ranges (bdesc=0x564e4b60ca58, 
ranges=0x564e4b6da6c0, 
max_values=32) at brin_minmax_multi.c:1822
cmpFn = 0x564e4b60c678
distanceFn = 0x564e4b60c640
eranges = 0x564e4b6ee620
neranges = 11
distances = 0x564e0008
ctx = 0x564e4b6ee500
oldctx = 0x564e4b6c4b80
#7  0x564e498f1735 in brin_minmax_multi_serialize (bdesc=0x564e4b60ca58, 
src=94894272915136, 
dst=0x564e4b6c5030) at brin_minmax_multi.c:2386
ranges = 0x564e4b6da6c0
s = 0x564e4b6c6110
#8  0x564e498f795b in brin_form_tuple (brdesc=0x564e4b60ca58, blkno=0, 
tuple=0x564e4b6c4ca0, 
size=0x7ffe53ae0858) at brin_tuple.c:165
datumno = 1
values = 0x564e4b6c5298
nulls = 0x564e4b6c5ad0
anynulls = true
rettuple = 0x741231600
keyno = 9
idxattno = 9
phony_infomask = 0
phony_nullbitmap = 0x564e4b6c5b28 "\177\177\177~\177\177\177\177"
len = 94894241579157
hoff = 1024
data_len = 94894272830256
i = 32766
untoasted_values = 0x564e4b6c53b0
nuntoasted = 0
#9  0x564e498e79d1 in brininsert (idxRel=0x7f87412316b0, 
values=0x7ffe53ae09b0, 
nulls=0x7ffe53ae0990, heaptid=0x564e4b6b49d0, heapRel=0x7f874122f288, 
checkUnique=UNIQUE_CHECK_NO, indexUnchanged=false, 
indexInfo=0x564e4b6b8710) at brin.c:281
lp = 0x7f8741a3cfb8
origsz = 952
newsz = 94894244461288
page = 0x7f8741a3cf80 ""
origtup = 0x564e4b6c5b48
newtup = 0x7ffe53ae0890
samepage = false
need_insert = true
off = 9
brtup = 0x7f8741a3d108
dtup = 0x564e4b6c4ca0
pagesPerRange = 1
origHeapBlk = 0
heapBlk = 0
bdesc = 0x564e4b60ca58
revmap = 0x564e4b6b8aa8
buf = 114
tupcxt = 0x564e4b6c4b80
oldcxt = 0x564e4b6b2bc0
autosummarize = false
__func__ = "brininsert"
#10 0x564e49989165 in index_insert (indexRelation=0x7f87412316b0, 
values=0x7ffe53ae09b0, 
isnull=0x7ffe53ae0990, heap_t_ctid=0x564e4b6b49d0, 
heapRelation=0x7f874122f288, 
checkUnique=UNIQUE_CHECK_NO, indexUnchanged=false, 
indexI

Re: brin multi minmax crash for inet value

2021-09-12 Thread Jaime Casanova
On Sun, Sep 12, 2021 at 08:23:44PM -0500, Justin Pryzby wrote:
> On Sun, Sep 12, 2021 at 07:44:47PM -0500, Jaime Casanova wrote:
> > Hi Tomas,
> > 
> > Just noted that this query crash the server. Execute it in the
> > regression database:
> 
> If I'm not wrong, this is the crash fixed by e1fbe1181 in April.
> 
> Could you check what HEAD your server is compiled from ?
> 

That was with yesterday's HEAD, but with today's HEAD the same update
works fine.

Maybe there is something else happening here; I will try to investigate
tomorrow.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Polyphase merge is obsolete

2021-09-15 Thread Jaime Casanova
On Sat, Sep 11, 2021 at 01:35:27AM -0500, Jaime Casanova wrote:
> On Wed, Jul 14, 2021 at 06:04:14PM +0300, Heikki Linnakangas wrote:
> > On 14/07/2021 15:12, vignesh C wrote:
> > > On Sat, Jan 23, 2021 at 3:49 AM Heikki Linnakangas  
> > > wrote:
> > > > Here's an updated version that fixes one bug:
> > > > 
> > > > The CFBot was reporting a failure on the FreeBSD system [1]. It turned
> > > > out to be an out-of-memory issue caused by an underflow bug in the
> > > > calculation of the size of the tape read buffer size. With a small
> > > > work_mem size, the memory left for tape buffers was negative, and that
> > > > wrapped around to a very large number. I believe that was not caught by
> > > > the other systems, because the other ones had enough memory for the
> > > > incorrectly-sized buffers anyway. That was the case on my laptop at
> > > > least. It did cause a big slowdown in the 'tuplesort' regression test
> > > > though, which I hadn't noticed.
> > > > 
> > > > The fix for that bug is here as a separate patch for easier review, but
> > > > I'll squash it before committing.
> > > 
> > > The patch does not apply on Head anymore, could you rebase and post a
> > > patch. I'm changing the status to "Waiting for Author".
> > 
> > Here's a rebased version. I also squashed that little bug fix from previous
> > patch set.
> > 
> 
> Hi,
> 
> This patch does not apply, can you submit a rebased version?
> 

BTW, I'm marking this one as "Waiting on Author".

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




right join with partitioned table crash

2021-09-15 Thread Jaime Casanova
Hi everyone,

Here's another crash caught by sqlsmith.

"""
drop table if exists fkpart3_pk5 cascade;
drop table if exists inet_tbl;

create table fkpart3_pk5 (
a integer not null primary key
)
partition by range (a);

create table fkpart3_pk51 partition of fkpart3_pk5
for values from (4000) to (4500);

create table inet_tbl (
c cidr,
i inet
);

select
1 as c0
from
(select null::integer as c9,
ref_0.a as c24
   from fkpart3_pk5 as ref_0
) as subq_0
right join public.inet_tbl as sample_0 on (cast(null as cidr) = c)
where subq_0.c9 <= subq_0.c24
"""


Attached the backtrace.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {4194304, 140736805622640, 2, 6, 556, 
94745029079040, 
4611686018427388799, 139894076431014, 0, 281470681751456, 0, 0, 0, 
0, 0, 0}}
pid = 
tid = 
ret = 
#1  0x7f3ba0830535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, 
sa_mask = {__val = {
  0, 0, 0, 0, 0, 139894074187765, 2, 7075777569425871984, 
7003713384980111928, 
  94745029079040, 7003719963994515424, 0, 9449567279706640128, 
140736805622880, 0, 
  140736805623744}}, sa_flags = -1949470720, sa_restorer = 0x0}
sigs = {__val = {32, 0 }}
#2  0x562b8c3818a0 in ExceptionalCondition (
conditionName=0x562b8c527e38 "!bms_overlap(baserel->relids, 
required_outer)", 
errorType=0x562b8c527a26 "FailedAssertion", fileName=0x562b8c527abf 
"relnode.c", 
lineNumber=1313) at assert.c:69
No locals.
#3  0x562b8c0ef355 in get_baserel_parampathinfo (root=0x562b8d3ebd00, 
baserel=0x562b8d3e8ec0, required_outer=0x562b8d40e350) at relnode.c:1313
ppi = 0x562b8d3e8ec0
joinrelids = 0x562b8d40f468
pclauses = 0x562b8d40e490
rows = 6.9533220764192298e-310
lc = 0x562b8d40f4c0
#4  0x562b8c0debbd in create_append_path (root=0x562b8d3ebd00, 
rel=0x562b8d3e8ec0, 
subpaths=0x562b8d40f468, partial_subpaths=0x0, pathkeys=0x0, 
required_outer=0x562b8d40e350, 
parallel_workers=0, parallel_aware=false, rows=-1) at pathnode.c:1270
pathnode = 0x562b8d40f4c0
l = 0x0
#5  0x562b8c06ff7c in add_paths_to_append_rel (root=0x562b8d3ebd00, 
rel=0x562b8d3e8ec0, 
live_childrels=0x562b8d40f060) at allpaths.c:1610
required_outer = 0x562b8d40e350
lcr = 0x0
l__state = {l = 0x562b8d40f1c0, i = 0}
subpaths = 0x562b8d40f468
subpaths_valid = true
partial_subpaths = 0x562b8d40f110
pa_partial_subpaths = 0x562b8d40f168
pa_nonpartial_subpaths = 0x0
partial_subpaths_valid = true
pa_subpaths_valid = true
all_child_pathkeys = 0x0
all_child_outers = 0x562b8d40f1c0
l = 0x562b8d40f1d8
partial_rows = 500
#6  0x562b8c06f47b in set_append_rel_pathlist (root=0x562b8d3ebd00, 
rel=0x562b8d3e8ec0, 
rti=4, rte=0x562b8d3e7df0) at allpaths.c:1269
parentRTindex = 4
live_childrels = 0x562b8d40f060
l = 0x0
#7  0x562b8c06e33a in set_rel_pathlist (root=0x562b8d3ebd00, 
rel=0x562b8d3e8ec0, rti=4, 
rte=0x562b8d3e7df0) at allpaths.c:481
__func__ = "set_rel_pathlist"
#8  0x562b8c06e023 in set_base_rel_pathlists (root=0x562b8d3ebd00) at 
allpaths.c:353
rel = 0x562b8d3e8ec0
rti = 4
#9  0x562b8c06dd72 in make_one_rel (root=0x562b8d3ebd00, 
joinlist=0x562b8d40c8a8)
at allpaths.c:223
rel = 0x8d3ebd00
rti = 6
total_pages = 20
#10 0x562b8c0aab06 in query_planner (root=0x562b8d3ebd00, 
qp_callback=0x562b8c0b07c1 , qp_extra=0x7fffd74e5cc0) 
at planmain.c:276
parse = 0x562b8d3e72f0
joinlist = 0x562b8d40c8a8
final_rel = 0x562b8d3ed520
__func__ = "query_planner"
#11 0x562b8c0ad2bf in grouping_planner (root=0x562b8d3ebd00, 
tuple_fraction=0)
at planner.c:1447
sort_input_targets = 0x562b8d3edec8
sort_input_target_parallel_safe = false
grouping_target = 0x562b8d3ebc70
scanjoin_target = 0x562b8d3edc40
activeWindows = 0x0
qp_extra = {activeWindows = 0x0, groupClause = 0x0}
sort_input_targets_contain_srfs = 0x0
have_grouping = false
wflists = 0x0
gset_data = 0x0
sort_input_target = 0x562b8d3edc98
grouping_targets = 0x562b8c017eee 
grouping_target_parallel_safe = false
scanjoin_targets = 0x562b8d315790
scanjoin_target_parallel_safe = false
grouping_targets_contain_srfs = 0x7fffd74e5d00
scanjoin_targets_contain_srfs = 0x0
scanjoin_target_same_exprs = false
pa

Re: Parallel Full Hash Join

2021-09-20 Thread Jaime Casanova
On Fri, Jul 30, 2021 at 04:34:34PM -0400, Melanie Plageman wrote:
> On Sat, Jul 10, 2021 at 9:13 AM vignesh C  wrote:
> >
> > On Mon, May 31, 2021 at 10:47 AM Greg Nancarrow  wrote:
> > >
> > > On Sat, Mar 6, 2021 at 12:31 PM Thomas Munro  
> > > wrote:
> > > >
> > > > On Tue, Mar 2, 2021 at 11:27 PM Thomas Munro  
> > > > wrote:
> > > > > On Fri, Feb 12, 2021 at 11:02 AM Melanie Plageman
> > > > >  wrote:
> > > > > > I just attached the diff.
> > > > >
> > > > > Squashed into one patch for the cfbot to chew on, with a few minor
> > > > > adjustments to a few comments.
> > > >
> > > > I did some more minor tidying of comments and naming.  It's been on my
> > > > to-do-list to update some phase names after commit 3048898e, and while
> > > > doing that I couldn't resist the opportunity to change DONE to FREE,
> > > > which somehow hurts my brain less, and makes much more obvious sense
> > > > after the bugfix in CF #3031 that splits DONE into two separate
> > > > phases.  It also pairs obviously with ALLOCATE.  I include a copy of
> > > > that bugix here too as 0001, because I'll likely commit that first, so


Hi Thomas,

Do you intend to commit 0001 soon? Especially if this applies to 14, it
should be committed in the next few days.

> > > > I rebased the stack of patches that way.  0002 includes the renaming I
> > > > propose (master only).  Then 0003 is Melanie's patch, using the name
> > > > SCAN for the new match bit scan phase.  I've attached an updated
> > > > version of my "phase diagram" finger painting, to show how it looks
> > > > with these three patches.  "scan*" is new.
> > >

0002: my only concern is that this will cause unnecessary pain when
backpatching future code... but since I don't do that myself, I'll leave
that to the experts.

0003: I'm testing this now, not at a big scale, just to try to find
problems.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Use simplehash.h instead of dynahash in SMgr

2021-09-24 Thread Jaime Casanova
On Tue, Jun 22, 2021 at 02:15:26AM +1200, David Rowley wrote:
[...]
> 
> I've come up with a new hash table implementation that I've called
> generichash.   It works similarly to simplehash in regards to the

Hi David,

Are you planning to work on this in this CF?
This is marked as "Ready for committer" but it doesn't apply anymore.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: error_severity of brin work item

2021-09-24 Thread Jaime Casanova
On Wed, Mar 10, 2021 at 08:24:50AM -0500, David Steele wrote:
> On 12/1/20 5:25 PM, Justin Pryzby wrote:
> > On Tue, Dec 01, 2020 at 03:57:24PM -0300, Alvaro Herrera wrote:
> > 
> > > > Another idea is if perform_work_item() were responsible for discarding
> > > > relations which disappear.  Currently it does this, which is racy since 
> > > > it
> > > > holds no lock.
> > > 
> > > That has the property that it remains contained in autovacuum.c, but no
> > > other advantages I think.
> > 
> > It has the advantage that it moves all the try_open stuff out of brin.
> > 
> > I started implementing this, and then realized that the try_open stuff 
> > *has* to
> > be in the brin_summarize function, to handle the case that someone passes a
> > non-index, since it's SQL exposed.
> > So maybe we should use your LockOid patch now, and refactor in the future 
> > if we
> > add additional work-item types.
> 
> Thoughts on this, Álvaro? I can see that the first version of this patch was
> not ideal but the rework seems to have stalled. Since it is a bug perhaps it
> would be better to get something in as Justin suggests?
> 

Hi Álvaro,

Do you plan to work on this for this CF?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Evaluate expression at planning time for two more cases

2021-09-26 Thread Jaime Casanova
On Tue, Jul 06, 2021 at 01:09:56PM +1200, David Rowley wrote:
> On Tue, 9 Mar 2021 at 05:13, Ibrar Ahmed  wrote:
> > It was a minor change therefore I rebased the patch, please take a look.
> 
[...]
> 
> This is not really an acceptable way to determine the notnull
> attribute value. Andy Fan proposes a much better way in [1].
> RelOptInfo is meant to cache the required Relation data that we need
> during query planning. IIRC, Andy's patch correctly uses this and does
> so in an efficient way.
> 
> In any case, as you can see there's a bit of other work going on to
> smarten up the planner around NULL value detection. The UniqueKeys
> patch requires this and various other things have come up that really
> should be solved.
> 
> Surafel, I'd suggest we return this patch with feedback and maybe you
> could instead help reviewing the other patches in regards to the NOT
> NULL tracking and maybe come back to this once the dust has settled
> and everyone is clear on how we determine if a column is NULL or not.
> 
> Let me know your thoughts.
> 

Hi Surafel, 

We haven't seen an answer from you on this.
I'm marking the patch as "Returned with feedback" as was suggested.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Identify missing publications from publisher while create/alter subscription.

2021-09-27 Thread Jaime Casanova
On Thu, Aug 26, 2021 at 07:49:49PM +0530, vignesh C wrote:
> On Thu, Jul 15, 2021 at 5:57 PM vignesh C  wrote:
> >
> > On Tue, Jul 6, 2021 at 8:09 PM vignesh C  wrote:
> > >
> > > On Wed, Jun 30, 2021 at 8:23 PM vignesh C  wrote:
> > > >
> > > > On Sun, Jun 6, 2021 at 11:55 AM vignesh C  wrote:
> > > > >
> > > > > On Fri, May 7, 2021 at 6:44 PM vignesh C  wrote:
> > > > > >
> > > > > > Thanks for the comments, the attached patch has the fix for the 
> > > > > > same.
> > > > >
> 
> The patch was not applying on the head, attached patch which is rebased on 
> HEAD.
> 

Hi,

I'm testing this patch now. It doesn't apply cleanly, but the conflict is
in the documentation part, so while a rebase would be good it doesn't
prevent me from testing.

A couple of questions:

+check_publications(WalReceiverConn *wrconn, List *publications,
+  bool validate_publication)
[...]
+connect_and_check_pubs(Subscription *sub, List *publications,
+  bool validate_publication)

I wonder why validate_publication is passed as an argument just to
return if it's false; why not just test it before calling those
functions? Maybe it's just a matter of style.
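
To make that concrete, here is a minimal sketch of what I mean (only the
signatures above come from the patch; the function body and the
caller-side flag below are made up for illustration):

/* as posted (hypothetical body): the callee receives the flag and bails out early */
static void
check_publications(WalReceiverConn *wrconn, List *publications,
                   bool validate_publication)
{
    if (!validate_publication)
        return;
    /* ... build the query, run it on the publisher, report missing publications ... */
}

/* alternative: test the flag at the call site and drop the parameter */
if (validate_publication)
    check_publications(wrconn, publications);

Either shape works, of course; the second just keeps the callee unaware
of the option handling.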

+get_publications_str(List *publications, StringInfo dest, bool quote_literal)

What's the purpose of the quote_literal argument?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: FETCH FIRST clause PERCENT option

2021-09-28 Thread Jaime Casanova
On Thu, Mar 25, 2021 at 11:15:10AM -0400, David Steele wrote:
> On 1/26/21 11:29 AM, Surafel Temesgen wrote:
> > 
> > On Mon, Jan 25, 2021 at 2:39 PM Kyotaro Horiguchi
> > mailto:horikyota@gmail.com>> wrote:
> > 
> > Sorry for the dealy. I started to look this.
> > 
> > Hi kyotaro,
> > Thanks for looking into this but did we agree to proceed
> > on this approach? I fear that it will be the west of effort if
> > Andrew comes up with the patch for his approach.
> > Andrew Gierth: Are you working on your approach?
> 
> [Added Andrew to recipients]
> 
> Andrew, would you care to comment?
> 

Hi everyone,

This is still marked as "Needs review", but I think it should be marked
as "Returned with feedback" or "Waiting on author".

Suggestions?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




jsonb crash

2021-09-29 Thread Jaime Casanova
Hi,

I found a crash (segmentation fault) in jsonb.
This is the best I could do to reduce the query:

"""
select  
  75 as c1
from 
  public.pagg_tab_ml as ref_0,
  lateral (select  
ref_0.a as c5 
  from generate_series(1, 300) as sample_0
  fetch first 78 rows only
  ) as subq_0
where case when (subq_0.c5 < 2) 
   then cast(null as jsonb) 
   else cast(null as jsonb) 
  end ? ref_0.c
"""

And because it needs pagg_tab_ml (a table created by the core regression
tests), it should be run against a regression database.
This affects at least 14 and 15.

Attached is the backtrace.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL
#0  0x556895e8b402 in getJsonbOffset (jc=0x5568971c2dd4, index=) at jsonb_util.c:146
offset = 0
i = 3158063
#1  0x556895e8ba44 in JsonbIteratorNext (it=it@entry=0x7fff141bac68, 
val=val@entry=0x7fff141bac40, skipNested=skipNested@entry=false) at 
jsonb_util.c:926
__func__ = "JsonbIteratorNext"
#2  0x556895e89f47 in jsonb_hash (fcinfo=0x7fff141baca0) at jsonb_op.c:274
jb = 0x5568971c2dd0
it = 0x5568971c5958
v = {type = jbvObject, val = {numeric = 0x556800303030, boolean = 48, 
string = {len = 3158064, val = 0x7fff141bac70 ""}, array = {nElems = 3158064, 
elems = 0x7fff141bac70, rawScalar = 136}, object = {
  nPairs = 3158064, pairs = 0x7fff141bac70}, binary = {len = 
3158064, data = 0x7fff141bac70}, datetime = {value = 93905168117808, typid = 
337357936, typmod = 32767, tz = -1782147448}}}
r = 
hash = 0
__func__ = "jsonb_hash"
#3  0x556895f56bba in FunctionCall1Coll 
(flinfo=flinfo@entry=0x5568971c2d38, collation=, arg1=) at fmgr.c:1138
fcinfodata = {fcinfo = {flinfo = 0x5568971c2d38, context = 0x0, 
resultinfo = 0x0, fncollation = 100, isnull = false, nargs = 1, args = 
0x7fff141bacc0}, 
  fcinfo_data = "8-\034\227hU", '\000' , 
"d\000\000\000\000\000\001\000x\277A\213\321\177\000\000\000'\033\227hU\000"}
fcinfo = 0x7fff141baca0
result = 
__func__ = "FunctionCall1Coll"
#4  0x556895c8830e in MemoizeHash_hash (tb=tb@entry=0x5568971c9490, 
key=key@entry=0x0) at nodeMemoize.c:175
hkey = 
i = 0
mstate = 
pslot = 0x5568971c2ca0
hashkey = 0
numkeys = 2
hashfunctions = 0x5568971c2d38
collations = 0x5568971a10e0
#5  0x556895c88b4f in memoize_insert (found=0x7fff141badbf, key=0x0, 
tb=0x5568971c9490) at ../../../src/include/lib/simplehash.h:758
hash = 
hash = 
#6  cache_lookup (mstate=mstate@entry=0x55689718c488, 
found=found@entry=0x7fff141badbf) at nodeMemoize.c:423
key = 
entry = 
oldcontext = 
#7  0x556895c89893 in ExecMemoize (pstate=0x55689718c488) at 
nodeMemoize.c:609
entry = 
outerslot = 
found = false
node = 0x55689718c488
outerNode = 
slot = 
__func__ = "ExecMemoize"
#8  0x556895c62dc5 in ExecProcNodeFirst (node=0x55689718c488) at 
execProcnode.c:463
No locals.
#9  0x556895c918f1 in ExecProcNode (node=0x55689718c488) at 
../../../src/include/executor/executor.h:257
No locals.
#10 ExecNestLoop (pstate=0x55689718aba8) at nodeNestloop.c:160
node = 0x55689718aba8
nl = 0x5568971a1158
innerPlan = 0x55689718c488
outerPlan = 0x55689718adb0
outerTupleSlot = 
innerTupleSlot = 
joinqual = 0x0
otherqual = 0x0
econtext = 0x55689718acc0
lc = 
#11 0x556895c62dc5 in ExecProcNodeFirst (node=0x55689718aba8) at 
execProcnode.c:463
No locals.
#12 0x556895c5af50 in ExecProcNode (node=0x55689718aba8) at 
../../../src/include/executor/executor.h:257
No locals.
#13 ExecutePlan (estate=estate@entry=0x55689718a8a0, planstate=0x55689718aba8, 
use_parallel_mode=, operation=operation@entry=CMD_SELECT, 
sendTuples=sendTuples@entry=true, 
numberTuples=numberTuples@entry=0, direction=ForwardScanDirection, 
dest=0x5568971a2938, execute_once=true) at execMain.c:1551
slot = 
current_tuple_count = 0
#14 0x556895c5bbf1 in standard_ExecutorRun (queryDesc=0x5568971772f0, 
direction=ForwardScanDirection, count=0, execute_once=) at 
execMain.c:361
estate = 0x55689718a8a0
operation = CMD_SELECT
dest = 0x5568971a2938
sendTuples = true
oldcontext = 0x5568971771d0
__func__ = "standard_ExecutorRun"
#15 0x556895c5bcba in ExecutorRun 
(queryDesc=queryDesc@entry=0x5568971772f0, 
direction=direction@entry=ForwardScanDirection, count=count@entry=0, 
execute_once=) at execMain.c:305
No locals.
#16 0x556895e17fe9 in PortalRunSelect (portal=portal@entry=0x5568971138c0, 
forward=forward@entry=true, count=0, count@entry=9223372036854775807, 
dest=dest@entry=0x5568971a2938) at pquery.c:919
quer

Re: Diagnostic comment in LogicalIncreaseXminForSlot

2021-09-30 Thread Jaime Casanova
On Tue, Sep 07, 2021 at 11:14:23AM +0530, Amit Kapila wrote:
> On Mon, Sep 6, 2021 at 5:29 PM Ashutosh Bapat
>  wrote:
> >
> > Yeah, I agree. Sorry for missing that.
> >
> > The updated patch looks good to me.
> >
> 
> Pushed!
> 

This patch is still marked as "Needs review"!
Should we change it to Committed, or is something else expected
about it?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Numeric x^y for negative x

2021-09-30 Thread Jaime Casanova
On Mon, Sep 13, 2021 at 07:29:13PM +0100, Dean Rasheed wrote:
> On Mon, 13 Sept 2021 at 17:51, Alvaro Herrera  wrote:
> >
> > I came here just to opine that there should be a comment about there not
> > being a clamp to the maximum scale.  For example, log_var says "Set the
> > scales .. so that they each have more digits ..." which seems clear
> > enough; I think the new comment is a bit on the short side.
> >
> 
> OK, that's a fair point. Updated version attached.
> 

Hi Dean,

Are you planning to commit this soon?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: FETCH FIRST clause PERCENT option

2021-09-30 Thread Jaime Casanova
On Tue, Jan 26, 2021 at 07:29:11PM +0300, Surafel Temesgen wrote:
> On Mon, Jan 25, 2021 at 2:39 PM Kyotaro Horiguchi 
> wrote:
> 
> > Sorry for the dealy. I started to look this.
> >
> >
> Hi kyotaro,
> Thanks for looking into this but did we agree to proceed
> on this approach? I fear that it will be the west of effort if
> Andrew comes up with the patch for his approach.
> Andrew Gierth: Are you working on your approach?
> 

Hi Surafel,

I'm marking this one as "returned with feedback". Then you or Andrew can
submit a new version of the patch for the next CF.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: 2021-09 Commitfest

2021-10-01 Thread Jaime Casanova
On Sat, Sep 11, 2021 at 12:52:10AM -0500, Jaime Casanova wrote:
> On Wed, Sep 01, 2021 at 09:26:33AM -0500, Jaime Casanova wrote:
> > On Wed, Sep 01, 2021 at 03:10:32PM +0200, Daniel Gustafsson wrote:
> > > It is now 2021-09-01 Anywhere On Earth so I’ve set the September 
> > > commitfest to
> > > In Progress and opened the November one for new entries.  Jaime Casanova 
> > > has
> > > volunteered for CFM [0], so let’s help him close the 284 still open items 
> > > in
> > > the queue.
> > > 
> > 
> > Thank you Daniel for editing the commitfest entries, that's something I
> > cannot do.
> > 
> > And you're right, we have 284 patches in the queue (excluding committed, 
> > returned with feedback, withdrawn and rejected)... 18 of them for more than
> > 10 commitfests!
> > 
> > Needs review: 192. 
> > Waiting on Author: 68. 
> > Ready for Committer: 24
> > 
> 
> Hi everyone,
> 
> On the first 10 days of this commitfest some numbers have moved, mostly
> thanks to Daniel Gustafsson and good work from committers:
> 
> Needs review: 171. 
> Waiting on Author: 79. 
> Ready for Committer: 15.
> 

Hi,

During this commitfest around 40 patches were committed; some patches
were already committed at the beginning.

 Committed: 55.

In the last few hours Michael Paquier made a pass over the patch queue,
and even after that we still have a lot of patches open.

 Needs review: 131. 
 Waiting on Author: 47. 
 Ready for Committer: 12. 

I understand this CF was in the middle of the release of 14 and that
affected things too.

Anyway we need to advance to a close, so I need help with:

- what should we do with WoA patches? moving them to the Next CF?
- How can we reduce the number of Needs Review patches? Some of them
  have been silent for more than a month!

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Using indexUnchanged with nbtree

2021-10-01 Thread Jaime Casanova
On Thu, Jul 01, 2021 at 09:22:38AM -0700, Peter Geoghegan wrote:
> On Thu, Jul 1, 2021 at 8:23 AM Simon Riggs  
> wrote:
> > Definitely some good ideas here.
> 
> I have been meaning to come up with some kind of solution to the
> problem of "self-blocking" LP_DEAD bit setting within the
> kill_prior_tuple mechanism. It's hard to argue against that.
> 
> > I'm out of time to do anything for this CF, so I've moved this back to 
> > later CF.
> >
> > I'm planning to work on this more, but I won't try to fold in all of
> > your ideas above. Not cos they are bad ones, just there is enough room
> > for 2-4 related patches here.
> 
> I'm a little concerned about relying on the indexUnchanged flag like
> this. It is currently just supposed to be a hint, but your proposal
> makes it truly critical. Currently the consequences are no worse than
> the risk that we'll maybe waste some cycles on the occasional useless
> bottom-up index deletion pass. With your patch it absolutely cannot be
> falsely set (though it should still be okay if it is falsely unset).
> 
> Of course it should be correct (with or without this new
> optimization), but the difference still matters. And so I think that
> there ought to be a clear benefit to users from the new optimization,
> that justifies accepting the new risk. Some kind of benchmark showing
> an improvement in latency and/or throughput. Something like that.
> Doesn't have to be a huge improvement.
> 

Hi Simon,

This has been stalled since July, and based on Peter's comment I feel we
should mark this as RwF, which I'm doing now.

Please feel free to resubmit it for the next Commitfest.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: Numeric x^y for negative x

2021-10-01 Thread Jaime Casanova
On Fri, Oct 01, 2021 at 07:56:33AM +0100, Dean Rasheed wrote:
> On Thu, 30 Sept 2021 at 18:25, Jaime Casanova
>  wrote:
> >
> > Are you planning to commit this soon?
> >
> 
> Yes, I'll take a look at it next week.
> 

Hi Dean,

Great! I'll move the CF entry to the Next Commitfest so we can move it to
a closable state.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: shared-memory based stats collector

2021-10-01 Thread Jaime Casanova
On Thu, Sep 02, 2021 at 10:20:50AM -0500, Jaime Casanova wrote:
> On Mon, Jul 26, 2021 at 06:27:54PM -0700, Andres Freund wrote:
> > Hi,
> > 
> > On 2021-07-26 17:52:01 +0900, Kyotaro Horiguchi wrote:
> > > > > Yeah, thank you very much for checking that. However, this patch is
> > > > > now developed in Andres' GitHub repository.  So I'm at a loss what to
> > > > > do for the failure..
> > > > 
> > > > I'll post a rebased version soon.
> > > 
> > > (Sorry if you feel being hurried, which I didn't meant to.)
> > 
> > No worries!
> > 
> > I had intended to post a rebase by now. But while I did mostly finish
> > that (see [1]) I unfortunately encountered a new issue around
> > partitioned tables, see [2]. Currently I'm hoping for a few thoughts on
> > that thread about the best way to address the issues.
> > 
> > Greetings,
> > 
> > Andres Freund
> > 
> > [1] https://github.com/anarazel/postgres/tree/shmstat
> > [2] 
> > https://www.postgresql.org/message-id/20210722205458.f2bug3z6qzxzpx2s%40alap3.anarazel.de
> > 
> > 
> 
> Hi Andres,
> 
> Are you planning to post a rebase soon?
> 

Hi,

We haven't heard about this since July, so I will mark it as RwF.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: [PATCH] ProcessInterrupts_hook

2021-10-01 Thread Jaime Casanova
On Tue, Jun 29, 2021 at 01:32:26PM +0800, Craig Ringer wrote:
> On Sat, 20 Mar 2021 at 03:46, Tom Lane  wrote:
> 
> > Robert Haas  writes:
> > > On Fri, Mar 19, 2021 at 3:25 PM Tom Lane  wrote:
> > >> I'm not very comfortable about the idea of having the postmaster set
> > >> child processes' latches ... that doesn't sound terribly safe from the
> > >> standpoint of not allowing the postmaster to mess with shared memory
> > >> state that could cause it to block or crash.  If we already do that
> > >> elsewhere, then OK, but I don't think we do.
> >
> > > It should be unnecessary anyway. We changed it a while back to make
> > > any SIGUSR1 set the latch 
> >
> > Hmm, so the postmaster could send SIGUSR1 without setting any particular
> > pmsignal reason?  Yeah, I suppose that could work.  Or we could recast
> > this as being a new pmsignal reason.
> >
> 
> I'd be fine with either way.
> 
> I don't expect to be able to get to working on a concrete patch for this
> any time soon, so I'll be leaving it here unless someone else needs to pick
> it up for their extension work. The in-principle agreement is there for
> future work anyway.

Hi Craig,

There is still a CF entry for this. Should we close it as withdrawn, or
maybe RwF?

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: 2021-09 Commitfest

2021-10-01 Thread Jaime Casanova
On Fri, Oct 01, 2021 at 08:53:23AM -0500, Jaime Casanova wrote:
> 
> Anyway we need to advance to a close, so I need help with:
> 
> - what should we do with WoA patches? moving them to the Next CF?

Correcting myself, we cannot move WoA patches. So we should just close
them with RwF.

Barring objections I will do that in the next couple of hours.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: 2021-09 Commitfest

2021-10-01 Thread Jaime Casanova
On Fri, Oct 01, 2021 at 01:34:45PM -0400, Tom Lane wrote:
> Jaime Casanova  writes:
> > Correcting myself, we cannot move WoA patches. So we should just close
> > them with RwF.
> 
> Uh, really?  I don't think that's been common practice in the past.
> I thought we generally just pushed everything forward to the next CF
> with the same status.
> 

Actually I thought the same thing, but found that I couldn't.

I just tried again to get the error message: "A patch in status Waiting on
Author cannot be moved to next commitfest."

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL




Re: 2021-09 Commitfest

2021-10-01 Thread Jaime Casanova
On Fri, Oct 01, 2021 at 01:43:23PM -0400, Andrew Dunstan wrote:
> 
> On 10/1/21 1:31 PM, Jaime Casanova wrote:
> > On Fri, Oct 01, 2021 at 08:53:23AM -0500, Jaime Casanova wrote:
> >> Anyway we need to advance to a close, so I need help with:
> >>
> >> - what should we do with WoA patches? moving them to the Next CF?
> > Correcting myself, we cannot move WoA patches. So we should just close
> > them with RwF.
> >
> > Barring objections I will do that in the next couple of hours.
> >
> 
> Isn't the usual procedure to change their status, move them, and then
> change it back again? ISTR something like that when I managed a CF.
> 

Really?! That sounds tedious!
I will do that, but we should improve that process.

-- 
Jaime Casanova
Director de Servicios Profesionales
SystemGuards - Consultores de PostgreSQL



