> On Nov 11, 2025, at 09:50, Yugo Nagata <[email protected]> wrote:
> 
> On Fri, 7 Nov 2025 18:33:17 +0900
> Fujii Masao <[email protected]> wrote:
> 
>> I plan to commit the patch soon, but let's keep discussing and
>> investigating the case you mentioned afterward!
> 
> I'm sorry for the late reply and for not joining the discussion earlier.
> 
> I've spent some time investigating the code in pgbench and libpq, and
> it seems to me that your commit looks fine.
> 
> However, I found another issue related to the --continue-on-error option,
> where an assertion failure occurs in the following test case:
> 
> $ cat pgbench_error.sql 
> \startpipeline
> select 1/0;
> \syncpipeline
> select 1;
> \syncpipeline
> select 1;
> \syncpipeline
> select 1;
> \endpipeline
> 
> $ pgbench -f pgbench_error.sql -M extended --continue-on-error -T 1
> pgbench (19devel)
> starting vacuum...end.
> pgbench: pgbench.c:3594: discardUntilSync: Assertion `res == ((void *)0)' failed.
> 
> Even after removing the Assert(), we get the following error:
> 
> pgbench: error: client 0 aborted: failed to exit pipeline mode for rolling back the failed transaction
> 
> This happens because discardUntilSync() does not expect that a PGRES_TUPLES_OK
> may be received after \syncpipeline, and also fails to discard all
> PGRES_PIPELINE_SYNC results when multiple \syncpipeline commands are used.
> 
> I've attached a patch to fix this.
> If a PGRES_PIPELINE_SYNC is followed by something other than PGRES_PIPELINE_SYNC
> or NULL, it means that another PGRES_PIPELINE_SYNC will eventually follow after
> some other results. In this case, we should reset the receive_sync flag and
> continue discarding results.
> 
> I think this fix should be back-patched, since this is not a bug introduced by
> --continue-on-error. The same assertion failure occurs in the following test
> case, where transactions are retried after a deadlock error:
> 
> $ cat deadlock.sql 
> \startpipeline
> select * from a order by i for update;
> select 1;
> \syncpipeline
> select 1;
> \syncpipeline
> select 1;
> \syncpipeline
> select 1;
> \endpipeline
> 
> $ cat deadlock2.sql 
> \startpipeline
> select * from a order by i desc for update;
> select 1;
> \syncpipeline
> select 1;
> \syncpipeline
> select 1;
> \syncpipeline
> select 1;
> \endpipeline
> 
> $ pgbench -f deadlock.sql -f deadlock2.sql -j 2 -c 2 -M extended  
> 
> Regards,
> Yugo Nagata
> 
> -- 
> Yugo Nagata <[email protected]>
> <0001-Make-sure-discardUntilSync-discards-until-the-last-s.patch>

Hi Yugo-san,

I was also debugging this patch for another purpose when I saw your email, so I
tried to reproduce the problem with your script.

I think on the master branch we can simply fix the problem by calling
discardAvailableResults(st) before discardUntilSync(st), like this:

```
					/* Read and discard until a sync point in pipeline mode */
					if (PQpipelineStatus(st->con) != PQ_PIPELINE_OFF)
					{
						discardAvailableResults(st);	/* <=== Add this line */
						if (!discardUntilSync(st))
						{
							st->state = CSTATE_ABORTED;
							break;
						}
					}
```

But this approach is not suitable for back-patching.

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/





