Hello,
Here is the third version of the patch for pgbench, thanks to Fabien Coelho's
comments. As in the previous one, transactions with serialization and
deadlock failures are rolled back and retried until they end successfully or
their number of tries reaches the maximum.
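For readers skimming the thread, the behaviour described above amounts to a
loop of roughly this shape. This is only a minimal libpq sketch with an
invented table, statement and cap, not code from the patch; serialization
failures report SQLSTATE 40001 and deadlocks 40P01:

    #include <string.h>
    #include <libpq-fe.h>

    #define MAX_TRIES 5             /* hypothetical cap on attempts */

    /* Returns 1 if the transaction eventually committed, 0 otherwise. */
    static int
    run_with_retries(PGconn *conn)
    {
        for (int tries = 1; tries <= MAX_TRIES; tries++)
        {
            PGresult   *res = PQexec(conn,
                "BEGIN ISOLATION LEVEL REPEATABLE READ;"
                "UPDATE accounts SET balance = balance - 1 WHERE id = 1;"
                "UPDATE accounts SET balance = balance + 1 WHERE id = 2;"
                "COMMIT;");
            const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
            int         ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
            int         retryable = sqlstate &&
                (strcmp(sqlstate, "40001") == 0 ||  /* serialization_failure */
                 strcmp(sqlstate, "40P01") == 0);   /* deadlock_detected */

            PQclear(res);
            if (ok)
                return 1;           /* the transaction ended successfully */
            if (!retryable)
                return 0;           /* any other error: give up, as before */
            /* Discard the aborted transaction and retry from the start. */
            PQclear(PQexec(conn, "ROLLBACK"));
        }
        return 0;                   /* number of tries reached the maximum */
    }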
Here is some partial review.
Hi,
Hello!
Just had a need for this feature, and took it for a short test
drive. So some comments:
- it'd be useful to display a retry percentage of all transactions,
similar to what's displayed for failed transactions.
- it'd be useful to also conveniently display the number of retried
transactions.
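Both numbers are cheap to produce once retries are counted; for
illustration, with invented counter values:

    #include <stdio.h>

    int
    main(void)
    {
        /* Hypothetical counters accumulated during a run. */
        long    xacts = 10000;      /* transactions processed */
        long    retried = 137;      /* transactions retried at least once */
        long    retries = 202;      /* total retries over all transactions */

        printf("number of transactions retried: %ld (%.3f %%)\n",
               retried, 100.0 * retried / xacts);
        printf("number of retries: %ld\n", retries);
        return 0;
    }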
On Fri, Aug 11, 2017 at 10:50 PM, Andres Freund wrote:
> On 2017-07-21 19:32:02 +0300, Marina Polyakova wrote:
> > Here is the third version of the patch for pgbench, thanks to Fabien
> > Coelho's comments. As in the previous one, transactions with serialization
> > and deadlock failures are rolled back and retried until they end
> > successfully or their number of tries reaches the maximum.
Hi,
On 2017-07-21 19:32:02 +0300, Marina Polyakova wrote:
> Here is the third version of the patch for pgbench, thanks to Fabien Coelho's
> comments. As in the previous one, transactions with serialization and
> deadlock failures are rolled back and retried until they end successfully or
> their number of tries reaches the maximum.
Hello again!
Here is the third version of the patch for pgbench, thanks to Fabien
Coelho's comments. As in the previous one, transactions with serialization
and deadlock failures are rolled back and retried until they end
successfully or their number of tries reaches the maximum.
Differences from the previous version:
Well, the short version may be to only do a full transaction retry, to
document that for now savepoints are not handled, and to leave that
for future work if the need arises.
I agree with you.
For progress the output must be short and readable, and probably we do
not care about whether retries ca
Ok, fine. My point was just to check before proceeding.
And I'm very grateful for that :)
--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
And I'm not sure that we should do all the stuff for savepoint rollbacks,
because:
- as I see it now, it only makes sense for the deadlock failures;
- if there's a failure, which savepoint should we roll back to before starting
the execution again?
ISTM that it is the point of having savepoint in the
If the answer is no, then implement something in pgbench directly.
The structure of the variables is different and so is their container
structure, so I think that the answer is no.
Ok, fine. My point was just to check before proceeding.
--
Fabien.
Given that the number of variables of a pgbench script is expected to
be pretty small, I'm not sure that the sorting stuff is worth the
effort.
I think it is good insurance in case there are many variables.
My suggestion is really to look at both implementations and to answer
the question "should
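To make the trade-off concrete, the "sorting stuff" mentioned above could
look like the sketch below: keep the array ordered by name and use bsearch()
for lookup (the names and types here are illustrative, not the patch's).
With a handful of variables a linear scan is indeed just as fast, which is
the point being made:

    #include <stdlib.h>
    #include <string.h>

    typedef struct Variable
    {
        char   *name;               /* variable name, unique in the array */
        char   *value;              /* value kept as text, as pgbench does */
    } Variable;

    typedef struct Variables
    {
        Variable   *vars;           /* array kept sorted by name */
        int         nvars;
    } Variables;

    static int
    compareVariableNames(const void *v1, const void *v2)
    {
        return strcmp(((const Variable *) v1)->name,
                      ((const Variable *) v2)->name);
    }

    /* O(log n) lookup instead of a linear scan. */
    static Variable *
    lookupVariable(Variables *vs, const char *name)
    {
        Variable    key;

        key.name = (char *) name;
        return bsearch(&key, vs->vars, vs->nvars,
                       sizeof(Variable), compareVariableNames);
    }

Insertion would append the new entry and qsort(), or memmove() it into place.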
Not necessarily? It depends on where the locks triggering the issue
are set; if they are all set after the savepoint it could work on a
second attempt.
Don't you mean the deadlock failures, where rollback to a savepoint can
really help?
Yes, I mean deadlock failures can roll back to a savepoint and work on a
second attempt.
Note that there is something for psql (src/bin/psql/variable.c) which
may or may not be shared. It should be checked before possibly
recoding the same thing.
Thank you very much for pointing out this file! As I checked, it is a
different structure: there it is a simple list, while in pgbench we
Hello Marina,
Not necessarily? It depends on where the locks triggering the issue
are set; if they are all set after the savepoint it could work on a
second attempt.
Don't you mean the deadlock failures, where rollback to a savepoint can
really help?
Yes, I mean deadlock failures can roll back to a savepoint and work on a
second attempt.
I was not convinced by the overall memory management around variables
to begin with, and it is even less so with their new copy management.
Maybe having a clean "Variables" data structure could help improve the
situation.
Note that there is something for psql (src/bin/psql/variable.c) which
may or may not be shared.
On 13-07-2017 19:32, Fabien COELHO wrote:
Hello,
Hi!
[...] I didn't make rollbacks to savepoints after the failure because
they cannot help for serialization failures at all: after rollback to
savepoint a new attempt will always be unsuccessful.
Not necessarily? It depends on where the locks triggering the issue are
set; if they are all set after the savepoint it could work on a second
attempt.
I was not convinced by the overall memory management around variables
to begin with, and it is even less so with their new copy management.
Maybe having a clean "Variables" data structure could help improve the
situation.
Ok!
Note that there is something for psql (src/bin/psql/variable.c) which
may or may not be shared.
Hello,
[...] I didn't make rollbacks to savepoints after the failure because
they cannot help for serialization failures at all: after rollback to
savepoint a new attempt will always be unsuccessful.
Not necessarily? It depends on where the locks triggering the issue are
set; if they are all set after the savepoint it could work on a second
attempt.
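The difference being discussed can be shown with a sketch (invented
statements, not from the patch): after a deadlock, the blocking transaction
is gone and a retry from the savepoint can succeed; after a snapshot-related
serialization failure, the savepoint keeps the transaction's snapshot, so
the retry hits the same conflict:

    #include <string.h>
    #include <libpq-fe.h>

    static void
    try_savepoint_retry(PGconn *conn)
    {
        PGresult   *res;
        const char *code;

        PQclear(PQexec(conn, "BEGIN ISOLATION LEVEL REPEATABLE READ"));
        PQclear(PQexec(conn, "SAVEPOINT s"));

        res = PQexec(conn,
                     "UPDATE accounts SET balance = balance - 1 WHERE id = 1");
        code = PQresultErrorField(res, PG_DIAG_SQLSTATE);

        if (code && strcmp(code, "40P01") == 0)      /* deadlock_detected */
        {
            /* The other transaction was killed or finished; re-issuing
             * the UPDATE after this rollback may now succeed. */
            PQclear(PQexec(conn, "ROLLBACK TO SAVEPOINT s"));
        }
        else if (code && strcmp(code, "40001") == 0) /* serialization_failure */
        {
            /* Rolling back to s does not help: the transaction keeps its
             * snapshot, so the retried UPDATE conflicts again; only a full
             * ROLLBACK plus a new BEGIN gets a fresh snapshot. */
        }
        PQclear(res);
    }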
Another detail I forgot about this point: there may be a memory leak
in the variables copies; ISTM that the "variables" array is never freed.
I was not convinced by the overall memory management around variables
to begin with, and it is even less so with their new copy management.
Maybe having a clean "Variables" data structure could help improve the
situation.
Here is a round of comments on the current version of the patch:
Thank you very much again!
There is a latent issue about what a transaction is. For pgbench, a
transaction is a full script execution.
For PostgreSQL, it is a statement or a BEGIN/END block, several of
which may appear in a script.
LastBeginState -> RetryState? I'm not sure why this state is a pointer in
CState. Embedding the struct by value would avoid malloc/free cycles. Index
"-1" may be used to tell that it is not set, if necessary.
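What this seems to suggest, with invented field names and reusing the
illustrative Variables struct from the sketch earlier in the thread, is
something like:

    /* Everything needed to restart the transaction from its BEGIN. */
    typedef struct RetryState
    {
        int         command;    /* index of the BEGIN command, -1 if unset */
        Variables   variables;  /* client variables as of that BEGIN */
        unsigned short random_state[3]; /* RNG state, so random choices
                                         * replay identically on retry */
    } RetryState;

    typedef struct CState
    {
        /* ... existing client-state fields ... */
        RetryState  retry;      /* by value: no malloc/free per retry */
    } CState;

    /* Mark "no retry state saved" without any allocation. */
    static void
    clearRetryState(CState *st)
    {
        st->retry.command = -1;
    }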
Another detail I forgot about this point: there may be a memory leak
in the variables copies; ISTM that the "variables" array is never freed.
Hello Marina,
There's the second version of my patch for pgbench. Now transactions
with serialization and deadlock failures are rolled back and retried
until they end successfully or their number of attempts reaches the maximum.
In detail:
- You can set the maximum number of attempts by the appropriate benchmarking
option;
Hello everyone!
There's the second version of my patch for pgbench. Now transactions
with serialization and deadlock failures are rolled back and retried
until they end successfully or their number of attempts reaches the maximum.
In detail:
- You can set the maximum number of attempts by the appropriate benchmarking
option;
On Thu, Jun 15, 2017 at 10:16 PM, Andres Freund wrote:
> On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:
> > Advanced options:
> > - mostly for testing built-in scripts: you can set the default transaction
> > isolation level by the appropriate benchmarking option (-I);
>
> I'm less convinced
The number of retries and maybe failures should be counted, maybe with
some adjustable maximum, as suggested.
If we fix the maximum number of attempts, the maximum number of failures
for one script execution is bounded above by
(number_of_transactions_in_script * maximum_number_of_attempts).
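For example, a script containing 3 transaction blocks run with at most 5
attempts per transaction can produce no more than 3 * 5 = 15 serialization
or deadlock failures per script execution.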
The current error handling is either "close connection" or maybe in
some cases even "exit". If this is changed, then the client may
continue execution in some unforeseen state and behave unexpectedly.
We'll see.
Thanks, now I understand this.
ISTM that the retry implementation should be impleme
Hello Marina,
I agree that improving the error handling ability of pgbench is a good
thing, although I'm not sure about the implications...
Could you tell me a little more exactly: what implications are you worried
about?
The current error handling is either "close connection" or maybe in
some cases even "exit".
Hello Marina,
Hello, Fabien!
A few comments about the submitted patches.
Thank you very much for them!
I agree that improving the error handling ability of pgbench is a good
thing, although I'm not sure about the implications...
Could you tell me a little more exactly: what implications are you worried
about?
Hello Marina,
A few comments about the submitted patches.
I agree that improving the error handling ability of pgbench is a good
thing, although I'm not sure about the implications...
About the "retry" discussion: I agree that retry is the relevant option
from an application point of view.
> To be clear, part of "retrying from the beginning" means that if a
> result from one statement is used to determine the content (or
> whether to run) a subsequent statement, that first statement must be
> run in the new transaction and the results evaluated again to
> determine what to use for the la
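To make that concrete, here is a sketch (invented table and amounts, not
Kevin's code) where the first statement's result decides whether the second
one runs; a correct retry re-executes the SELECT inside the new transaction
instead of reusing the stale value:

    #include <stdlib.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* 1 when the result is a serialization (40001) or deadlock (40P01)
     * failure, i.e. the whole transaction should be retried. */
    static int
    is_retryable(const PGresult *res)
    {
        const char *code = PQresultErrorField(res, PG_DIAG_SQLSTATE);

        return code && (strcmp(code, "40001") == 0 ||
                        strcmp(code, "40P01") == 0);
    }

    static void
    withdraw_if_funds(PGconn *conn)
    {
        for (;;)
        {
            PGresult   *res;
            int         failed;
            double      balance = 0.0;

            PQclear(PQexec(conn, "BEGIN ISOLATION LEVEL SERIALIZABLE"));

            /* Step 1: this result decides what step 2 does... */
            res = PQexec(conn, "SELECT balance FROM accounts WHERE id = 1");
            if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
                balance = atof(PQgetvalue(res, 0, 0));
            failed = is_retryable(res);
            PQclear(res);

            /* Step 2: ...so on a retry, step 1 must run again in the new
             * transaction; reusing the old balance could overdraw the
             * account. */
            if (!failed && balance >= 100.0)
            {
                res = PQexec(conn,
                    "UPDATE accounts SET balance = balance - 100 WHERE id = 1");
                failed = is_retryable(res);
                PQclear(res);
            }

            if (!failed)
            {
                res = PQexec(conn, "COMMIT");
                failed = is_retryable(res); /* SSI may fail at COMMIT too */
                PQclear(res);
                if (!failed)
                    return;
            }
            PQclear(PQexec(conn, "ROLLBACK")); /* retry from the beginning */
        }
    }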
On Fri, Jun 16, 2017 at 5:31 AM, Marina Polyakova wrote:
> And thank you very much for your explanation how and why transactions with
> failures should be retried! I'll try to implement all of it.
To be clear, part of "retrying from the beginning" means that if a
result from one statement is used to determine the content (or
whether to run) a subsequent statement, that first statement must be
run in the new transaction and the results evaluated again.
>> P.S. Does this use case (do not retry transaction with serialization or
>> deadlock failure) is most interesting or failed transactions should be
>> retried (and how much times if there seems to be no hope of success...)?
>
> I can't quite parse that sentence, could you restate?
The way I read
P.S. Does this use case (do not retry transaction with serialization
or deadlock failure) is most interesting or failed transactions should be
retried (and how much times if there seems to be no hope of success...)?
I can't quite parse that sentence, could you restate?
The way I read it was
Hi,
Hello!
I think that's a good idea and sorely needed.
Thanks, I'm very glad to hear it!
- if there were these failures during script execution this
"transaction" is marked appropriately in logs;
- numbers of "transactions" with these failures are printed in
progress, in aggregation logs
On Thu, Jun 15, 2017 at 4:18 PM, Alvaro Herrera wrote:
> Kevin Grittner wrote:
> As far as I understand her proposal, it is exactly the opposite -- if a
> transaction fails, it is discarded. And this P.S. note is asking
> whether this is a good idea, or would we prefer that failing
> transactions be retried.
On Fri, Jun 16, 2017 at 9:18 AM, Alvaro Herrera wrote:
> Kevin Grittner wrote:
>> On Thu, Jun 15, 2017 at 2:16 PM, Andres Freund wrote:
>> > On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:
>
>> >> P.S. Does this use case (do not retry transaction with serialization or
>> >> deadlock failure) is most interesting or failed transactions should be
>> >> retried (and how much times if there seems to be no hope of success...)?
Kevin Grittner wrote:
> On Thu, Jun 15, 2017 at 2:16 PM, Andres Freund wrote:
> > On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:
> >> P.S. Does this use case (do not retry transaction with serialization or
> >> deadlock failure) is most interesting or failed transactions should be
> >> retried (and how much times if there seems to be no hope of success...)?
On Thu, Jun 15, 2017 at 2:16 PM, Andres Freund wrote:
> On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:
>> I suggest a patch where pgbench client sessions are not disconnected because
>> of serialization or deadlock failures and these failures are mentioned in
>> reports.
>
> I think that's a good idea and sorely needed.
Hi,
On 2017-06-14 11:48:25 +0300, Marina Polyakova wrote:
> Now in pgbench we can test only transactions with Read Committed isolation
> level because client sessions are disconnected forever on serialization
> failures. There were some proposals and discussions about it (see message
> here [1] and
Sounds like a good idea.
Thank you!
Please add to the next CommitFest
Done: https://commitfest.postgresql.org/14/1170/
and review
somebody else's patch in exchange for having your own patch reviewed.
Of course, I remember about it.
--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
On Wed, Jun 14, 2017 at 4:48 AM, Marina Polyakova wrote:
> Now in pgbench we can test only transactions with Read Committed isolation
> level because client sessions are disconnected forever on serialization
> failures. There were some proposals and discussions about it (see message
> here [1] and