Hello Karl,
Doc patch, against master. It documents the base64 format used by
encode() and decode().
It is already documented. This enhances the documentation, though.
Builds for me.
For me as well. Looks ok.
Attached: doc_base64_v1.patch
References RFC2045 section 6.8 to define base64.
Did you consider re
Hello Karl,
Attached: doc_base64_v3.patch
I'm ok with referencing the historical MIME RFC.
"RFC2045 section 6.8" -> "RFC 2045 Section 6.8"
you can link to the RFC directly with:
https://tools.ietf.org/html/rfc2045#section-6.8 (RFC 2045 Section 6.8)
--
Fabien.
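For reference, in the DocBook SGML used by the PostgreSQL documentation such a direct link is usually written with a <ulink> element; the sketch below shows the general shape only (the surrounding markup and exact placement are assumed, not taken from the actual patch):

```sgml
<ulink url="https://tools.ietf.org/html/rfc2045#section-6.8">RFC 2045
Section 6.8</ulink>
```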
Attached: doc_base64_v7.patch
Patch applies cleanly, doc compiles, navigation tested and ok.
"... section 6.8" -> "... Section 6.8" (capital S).
"The string and the binary encode and decode functions..." sentence looks
strange to me, especially with the English article that I do not really
Hello David,
This patch hasn't received any review in a while and I'm not sure if that's
because nobody is interested or the reviewers think it does not need any more
review.
It seems to me that this patch as implemented does not quite satisfy anyone.
I think we need to hear something from
Hello Ryohei-san,
About socket_timeout:
From: Fabien COELHO
are actually finished. I cannot say that I'm thrilled by that behavior. ISTM
that libpq
should at least attempt to cancel the query
Would you please tell me why you think so?
As I understand the "client-side timeout
Hello Karl,
I registered as a reviewer in the CF app.
"The string and the binary encode and decode functions..." sentence
looks strange to me, especially with the English article that I do
not really master, so maybe it is ok. I'd have written something more
straightforward, eg: "Functions en
Attached is a rebase.
--
Fabien.

diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 5df54a8e57..1db6b75823 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -6032,6 +6032,96 @@ main(int argc, char **argv)
return exit_code;
}
+/* display the progr
Bonjour Michaël,
Here is a partial review:
- 0001 is a patch to refactor the routine for the control file
update. I have made it backend-aware, and we ought to be careful with
error handling, use of fds and such, something that v4 was not very
careful about.
This refactoring patch is ok for
Hello Robert,
wrote:
The main purpose of this parameter is to avoid the client waiting for the DB
server indefinitely, not reducing the server's burden.
This means the end-user does not wait, which is most important.
+1. If the server fails to detect that the client has gone away,
that's the serv
Bonjour Michaël,
Yes, that would be nice, for now I have focused. For pg_resetwal yes
we could do it easily. Would you like to send a patch?
I probably can do that before next Monday. I'll prioritize reviewing the
latest instance of this patch, though.
This seems contradictory to me: you
Hallo Michael,
I would not round a value over 100% down to 100, because then you would get
1560/1492 MB (100%, X MB/s)
which is kind of silly.
No, we cap the total_size to current_size so you won't see that (but
total_size will potentially gradually increase). pg_basebackup has the
same beha
Michaël-san,
Now the set of patches is:
- 0001, add --enable and --disable. I have tweaked a bit the patch so
as "action" is replaced by "mode" which is more consistent with other
tools like pg_ctl. pg_indent was also complaining about one of the
new enum structures.
Patch applies cleanly,
I do not think it is a good thing that two commands can write to the data
directory at the same time, really.
We don't prevent either a pg_resetwal and a pg_basebackup to run in
parallel. That would be... Interesting.
Yep, I'm trying again to suggest that this kind of thing should be
pre
For whatever it is worth, the patch looks good to me.
A minor nitpick would be to use a verb in the part:
`cost when the parameter in (0, 1)`
maybe:
`cost when the parameter's value is in (0, 1)` or similar.
Looks ok.
Apart from that, I would suggest that the patch could be moved to
Hello,
Yep. That is the issue I think is preventable by fsyncing updated data
*then* writing & syncing the control file, and that should be done by
pg_checksums.
Well, pg_rewind works similarly: control file gets updated and then
the whole data directory gets flushed.
So it is basically pro
Hallo Michael,
I propose we re-read the control file for the enable case after we
finished operating on all files and (i) check the instance is still
offline and (ii) update the checksums version from there. That should be
a small but worthwhile change that could be done anyway.
That looks li
Hello Robert,
Also, I do not see the downside of sending a cancel query before severing
the connection. If it is not processed, too bad, but if it is then it is
for the better.
If the network connection is dead, which is the situation the patch
intends to detect,
Hmmm... ISTM that we are n
Hello Fabien-san.
The 2nd patch is 700 KB; I think that there is an involuntary file copy.
If the user reconnects, eg "\c db", the setting is lost. The
re-connection handling should probably take care of this parameter, and maybe
others.
I think your opinion is reasonable, but it seems not
Hi. I think that your patch is responsible for the added option at least.
I agree that the underlying issue that other parameters should probably
also be reused, which would be a bug fix, does not belong to this thread.
This doesn't seem to be a bug. \connect just establishes a new connection,
echo 'select 1' > select.sql
while /bin/true; do
pgbench -n -f select.sql -R 1000 -j 8 -c 8 -T 1 > /dev/null 2>&1;
date;
done;
Indeed. I'll look at it over the weekend.
So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or
one of the other commits touching this part of
Bonjour Michaël,
If the block size the tool is compiled with does not match the data
folder block size, then users would get incorrect checksum failures,
Or worse, incorrect checksum writing under "enabling"?
Initial proposal:
"%s: data directory block size %d is different to compiled-in
Hello,
Fabien, I was wondering whether you can apply TCP_USER_TIMEOUT patch and
continue discussion about 'socket_timeout'?
I can apply nothing, I'm just a small-time reviewer.
Committers on the thread are Michaël-san and Robert, however I'm not sure
that they are very sensitive to "please
Hello Tomas,
So I guess this is a bug in 12788ae49e1933f463bc59a6efe46c4a01701b76, or
one of the other commits touching this part of the code.
I could not reproduce this issue on head, but I confirm on 11.2.
AFAICS on head it's fixed by 3bac77c48f166b9024a5ead984df73347466ae12
Thanks for
Something like "%s: database folder is incompatible" for the first
line sounds kind of better per the feedback gathered. And then on the
second line:
"The database cluster was initialized with block size %u, but
pg_checksums was compiled with block size %u."
Ok. "%s" progname instead of "pg_
Bonjour Michaël-san,
Yes, that would be nice, for now I have focused. For pg_resetwal yes
we could do it easily. Would you like to send a patch?
Here is a proposal for "pg_resetwal".
The implementation basically removes a lot of copy paste and calls the
new update_controlfile function ins
I'm not sure that prefixing the two lines with the comment line is very
elegant, I'd suggest to put spaces, and would still try to shorten the
second sentence, maybe:
I suggest to keep two lines, and only prefix the first one.
As you feel. For me the shorter the better, though, if the infor
Michaël-san,
The issue here is that trying to embed directly the fsync routines
from file_utils.c into pg_resetwal.c messes up the inclusions because
pg_resetwal.c includes backend-side includes, which themselves touch
fd.h :(
In short your approach avoids some extra mess with the include
depe
You have one error at the end of update_controlfile(), where close()
could issue a frontend-like error for the backend, calling exit() on the
way. That's not good. (No need to send a new patch, I'll fix it
myself.)
Indeed. I meant to merge the "if (close(fd))", but ended merging the error
Bonjour Michaël,
Here are my notes about the fixes:
Thanks for the fixes.
- pg_resetwal got broken because the path to the control file was
incorrect. Running tests of pg_upgrade or the TAP tests of
pg_resetwal showed the failure.
Hmmm… I thought I had done that with "make check-world":-
Hallo Peter,
Updated patch. I have squashed the two previously separate patches
together in this one.
Ok.
I do not understand the value of the SAVEPOINT in the tests.
The purpose of the SAVEPOINT in the test is that it exercises
different switch cases in CommitTransactionCommand() an
I have rebased it now.
Thanks. Will look at it.
If all of the above are involved, the line would look as follows.
[=== ] ( 63% of 12.53 GB, 179 MB/s, ETC 26s)
# Note that this is just an opinion.
(pg_checksum runs fast at the beginning so ETC behaves s
Bonjour Michaël,
Please find attached an updated patch set, I have rebased that stuff
on top of my recent commits to refactor the control file updates.
Patch applies cleanly, compiles, make check-world seems ok, doc build ok.
It would help if the patch includes a version number. I assume tha
Ok, this might not work, because of the following, less likely, race
condition:
postmaster opens control file RW
pg_checksums moves control file, postmaster's open file handle follows
...
So ISTM that we really need some locking to have something clean.
We are talking about complicating a m
Hallo Andres,
And you're basically adding it because Fabien doesn't like
postmaster.pid and wants to invent another lockout mechanism in this
thread.
I did not suggest to rename the control file, but as it is already done by
another command it did not look like a bad idea in itself, or at le
Hallo Andres,
[...]
pg_upgrade in link mode intentionally wants to *permanently* disable a
cluster. And it explicitly writes a log message about it. That's not a
case to draw inference from for this case.
Ok. My light knowledge of pg_upgrade's inner workings does not extend to this
level of prec
Michaël-san,
In short, you keep the main feature with:
- No tweaks with postmaster.pid.
- Rely just on the control file indicating an instance shutdown
cleanly.
- No tweaks with the system ID.
- No renaming of the control file.
Hmmm… so nothing:-)
I think that this feature is useful, in comp
Michaël-san,
I think that a clear warning not to run any cluster command in parallel,
under pain of possible cluster corruption, and possibly other caveats about
replication, should appear in the documentation.
I still have the following extra documentation in my notes:
Ok, it should have b
Bonjour Michaël,
On Wed, Mar 20, 2019 at 05:46:32PM +0100, Fabien COELHO wrote:
I think that the motivation/risks should appear before the solution. "As xyz
..., ...", or there at least the logical link should be outlined.
It is not clear for me whether the following sentences, w
Anyway, as this stuff is very useful for upgrade scenarios
a-la-pg_upgrade, for backup validation and as it does not produce false
positives, I would really like to get something committed for v12 in its
simplest form...
Fine with me, the detailed doc is not a showstopper and can be improve
Bonjour Michaël,
Does that look fine to you?
Mostly.
Patch v9 part 1 applies cleanly, compiles, global and local check ok, doc
build ok.
On write(), the error message is not translatable whereas it is for all
others.
I agree that a BIG STRONG warning is needed about not to start the cl
Done the switch for this case. For pg_rewind actually I think that this
is an area where its logic could be improved a bit. So first the data
folder is synced, and then the control file is updated.
Attached is a quick patch about "pg_rewind", so that the control file is
updated after every
Hello Christoph,
- pg_log(PG_PROGRESS, "syncing target data directory\n");
- syncTargetDirectory();
Doesn't the control file still need syncing?
Indeed it does, and it is done in update_controlfile if the last argument
is true. Basically update_controlfile latest version alway
Bonjour Michaël,
On Sat, 23 Mar 2019, Michael Paquier wrote:
On Fri, Mar 22, 2019 at 03:18:26PM +0100, Fabien COELHO wrote:
Attached is a quick patch about "pg_rewind", so that the control file
is updated after everything else is committed to disk.
Could you start a new thread
Hola Álvaro,
What I'm going to do now is to write a patch to remove the \cset part of
the commit and post it, intending to push at some point next week.
Per your request, here is a patch which removes \cset from pgbench. Sigh.
--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sg
What I'm going to do now is to write a patch to remove the \cset part of
the commit and post it, intending to push at some point next week.
Per your request, here is a patch which removes \cset from pgbench. Sigh.
Again, only the removal is a little deeper. This lifts the constraint
about n
Bonjour Michaël,
Here is an attempt at improving the Notes.
Mostly it is a reordering from more important (cluster corruption) to less
important (if interrupted a restart is needed), some reordering from
problem to solutions instead of solution/problem/solution, some sentence
simplification.
Hello Tom,
I started to look through this, and the more I looked the more unhappy
I got that we're having this discussion at all. The zipfian support
in pgbench is seriously over-engineered and under-documented. As an
example, I was flabbergasted to find out that the end-of-run summary
stati
Hello again,
I started to look through this, and the more I looked the more unhappy
I got that we're having this discussion at all. The zipfian support
in pgbench is seriously over-engineered and under-documented. As an
example, I was flabbergasted to find out that the end-of-run summary
stat
What is the point of that, and if there is a point, why is it nowhere
mentioned in pgbench.sgml?
The attached patch simplifies the code by erroring on cache overflow, instead
of the LRU replacement strategy and unhelpful final report. The above lines
are removed.
Same, but without the comp
What is the point of that, and if there is a point, why is it nowhere
mentioned in pgbench.sgml?
The attached patch simplifies the code by erroring on cache overflow,
instead of the LRU replacement strategy and unhelpful final report.
The above lines are removed.
Eh? Do I understand correct
Hello Tomas,
What would a user do with this information, and how would they know
what to do?
Sure, but it was unclear what to do. Extending the cache to avoid that
would look like over-engineering.
That seems like a rather strange argument. What exactly is so complex on
resizing the cache
Hallo Peter,
In "xact.c", maybe I'd assign blockState in the else branch, instead of
overriding it?
I think it was better the way it is, since logically the block state is
first set, then set again after the new transaction starts.
Ok.
About the static _SPI_{commit,rollback} functions: I
Hello Tom & Tomas,
If the choice is between reporting the failure to the user, and
addressing the failure, surely the latter would be the default option?
Particularly if the user can't really address the issue easily
(recompiling psql is not very practical solution).
I remain of the opinion t
Hello Tom,
If this is done, some people with zipfian distribution that currently
work might be unhappy.
After giving it some thought, I think that this cannot be fully fixed for
12.
Just to clarify --- my complaint about "over engineering" referred to
the fact that a cache exists at all; f
Bonjour Michaël,
The attached patch reorders the cluster fsyncing and control file changes in
"pg_rewind" so that the later is done after all data are committed to disk,
so as to reflect the actual cluster status, similarly to what is done by
"pg_checksums", per discussion in the thread about o
I pushed the refactoring now, to extract the progress reporting to a separate
function. That seems like an improvement independently from the rest of the
patch.
Sure.
I'll take another look at the rest, probably tomorrow.
Ok.
Attached is the remainder of the patch rebased to current head
Bonjour Michaël,
Here is an attempt at improving the Notes. [...]
So, the ordering of the notes for each paragraph is as follows: 1)
Replication issues when mixing different checksum setups across nodes.
2) Consistency of the operations if killed. 3) Don't start Postgres
while the operatio
Hello Raymond,
Would it make sense to check for pgss_enabled in the post_parse_analyze_hook
function?
Probably.
- /* Safety check... */
- if (!pgss || !pgss_hash)
+ /* Safety check... and ensure that pgss is enabled before we do any work */
+ if (!pgss || !pgss_hash
Hello Ryohei-san,
About the backend v11 patch.
Patch applies cleanly, though compiles with a warning.
Make check ok, although the feature is not tested.
I'm okay with exposing this parameter.
Documentation:
ISTM this is not about libpq connections but about TCP connections. There
can be non
Hello Heikki,
I stared at the new test case for a while, and I must say it looks very
cryptic. It's not exactly this patch's fault - all the pgbench tests are
cryptic -
Perl is cryptic. Regexprs are cryptic.
but I think we need to do something about that before adding any more
tests. I'm
Hello Heikki,
Indeed, yet again, I forgot the attachement:-(
I stared at the new test case for a while, and I must say it looks very
cryptic. It's not exactly this patch's fault - all the pgbench tests are
cryptic -
Perl is cryptic. Regexprs are cryptic.
but I think we need to do something
Hallo Michael,
About patch v12:
Patch applies cleanly, compiles. make check ok, but feature is not tested.
Doc build ok.
Although I'm in favor of it, I'm not sure that the signal thing will make
it for 12. Maybe it is worth compromising, doing a simple version for now,
and resubmitting th
I still think that the speed should compute a double to avoid integer
rounding errors within the computation. ISTM that rounding should only be
done for display in the end.
Ok, also done this way. I decided to print only full MB and not any
further digits as those don't seem very relevant.
Hello Raymond,
Note that this does not mean that the patch should not be applied, it
looks like an oversight, but really I do not have the performance
degradation you are suggesting.
I appreciate your input and I want to come up with a canonical test that
makes this contention more obvious
Hallo Michael,
Or anything which converts to double early.
Are you sure, seeing elapsed is a double already?
Argh, I missed that. You are right that a double elapsed is enough for the
second part. However, with
+ current_speed = (current_size / MEGABYTES) / (elapsed / 1000.0);
the
Otherwise a very minor comment: I'd invert !force and the computations in
the return condition to avoid the computations when not needed.
The force is only ever true right at the end of the program so it would
not save anything really and detract from the main gist of that
statement, so I lef
Hallo Michael,
but I'd advise that you split it in (1) progress and (2) signal
toggling so that the first part is more likely to make it before 12
freeze.
Ok, done so in the attached.
Fine.
I think that it is good to show the overall impact of the signal stuff, in
particular the fact that
Hello Ryohei-san,
I have further remarks after Kirk-san's extensive review of these patches.
* About TCP interface v18.
For homogeneity with the surrounding cases, ISTM that "TCP_user_timeout"
should be "TCP-user-timeout".
* About TCP backend v19 patch
I still disagree with "on other syst
Bonjour Michaël,
Getting to know the total size and the current size are the two
important factors that matter when it comes to do progress reporting
in my opinion. I have read the patch, and I am not really convinced
by the need to show the progress report based on an interval of 250ms
as we
Bonjour Michaël,
I do not think that it matters. I like to see things moving, and the
performance impact is null.
Another point is that this bloats the logs redirected to a file by a factor
of 4 compared to the initial proposal. I am not sure that this helps
anybody much.
Hmmm. Progress is more an
Hello Peter,
About the pg_dump code, I'm wondering whether it is worth generating
LOCALE as it breaks backward compatibility (eg dumping a new db to load it
into an older version).
We don't really care about backward compatibility here. Moving forward,
with ICU and such, we don't want to hav
Find attached updated patches which also work against old servers.
I can't check that for sure.
* About toast table addition v7:
Patch applies cleanly, compiles, make check ok, no doc.
This addition shows the main table of a toast table, which is useful.
Field relnamespace oid in pg_class
Hello Thomas,
Function nbits(), which was previously discussed, has been simplified by
using the function pg_popcount64().
Hi Fabien, Suzuki-san,
I am not smart enough to commit this
I'm in no hurry:-)
or judge its value for benchmarking,
I think that it is valuable given that we have
Hello devs,
While doing some performance tests and reviewing patches, I needed to
create partitioned tables. Given the current syntax, this is time
consuming.
The attached patch adds two options to create a partitioned "account"
table in pgbench.
It allows to answer quickly simple questio
Pushed with minor fiddling with the toast-table code, and rather
more significant hacking on the partitioned-index code. Notably,
0003 had broken output of Tablespace: footers for everything except
indexes.
Argh, sorry for the review miss.
It's possibly not Justin's fault that that wasn't
Hello Andrew,
Unfortunately, this isn't portable, as I've just discovered at the cost
of quite a bit of time. In particular, you can't assume expr is present
and in the path on Windows. The Windows equivalent would be something like:
\setshell two\
@set /a c = 1 + :one && echo %c%
H
Hello Simon,
While doing some performance tests and reviewing patches, I needed to
create partitioned tables. Given the current syntax, this is time
consuming.
Good idea. I wonder why we didn't have it already.
Probably because I did not have to create partitioned table for some
testing:-)
Hello Yoshikazu,
I could not reproduce this issue on head, but I confirm on 11.2.
I could reproduce the hang on 11.4.
Attached is a fix to apply on pg11.
I confirm the hang doesn't happen after applying your patch.
Ok, thanks for the feedback.
+ /* under throttling we
Bonjour Daniel,
I kind of agree as well, but I was pretty sure that someone would complain
if the current behavior was changed.
If queries in a compound statement must be kept silent,
they can be converted to CTEs or DO-blocks to produce the
same behavior without having to configure anything
# and look at latency:
# no parts = 0.071 ms
# 1 hash = 0.071 ms (did someone optimize this case?!)
# 2 hash ~ 0.126 ms (+ 0.055 ms)
# 50 hash ~ 0.155 ms
# 100 hash ~ 0.178 ms
# 150 hash ~ 0.232 ms
# 200 hash ~ 0.279 ms
# overhead ~ (0.050 + [0.0005-0.0008] * nparts) ms
Hello Yoshikazu,
[...] So I'll mark this ready for committer.
Ok, thanks for the review.
--
Fabien.
Bonsoir Daniel,
FYI you forgot to remove that bit:
+ "SINGLELINE|SINGLESTEP|SHOW_ALL_RESULTS"))
Indeed. I found another such instance in "help.c".
Also copydml does not seem to be exercised with combined
queries, so do we need this chunk:
--- a/src/test/regress/sql/copydml.sql
Yep,
Hello Kyotaro-san,
Attached a v2 for the always-show-all-results variant. Thanks for the
debug!
I have some comments on this patch.
I'm +1 for always output all results without having knobs.
That makes 4 opinions expressed in favor of this change of behavior, and none
against.
Documentatio
Hello devs,
As pointed out by Kyotaro Horiguchi in
https://www.postgresql.org/message-id/20190726.131704.86173346.horikyota@gmail.com
FETCH_COUNT does not work with combined queries, and probably has never
worked since 2006.
What seems to happen is that ExecQueryUsingCursor is hardcoded
Hello Peter,
I have committed this with some additions.
Thanks for the push. It was really a pain to write a small libpq app
without navigation.
Also, due to some mysterious problems with the PDF toolchain I had to
remove some links. Your script would find those, so I won't list them
he
FETCH_COUNT does not work with combined queries, and probably has never
worked since 2006.
For the record, this bug has already been reported & discussed by Daniel
Vérité a few months back, see:
https://www.postgresql.org/message-id/flat/a0a854b6-563c-4a11-bf1c-d6c6f924004d%40manitou-mail.o
Attached v3 fixes strcasecmp non portability on windows, per postgresql
patch tester.
--
Fabien.

diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 816f9cc4c7..3e8e292e39 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -306,6 +306,
c.h defines a C Min macro conflicting with LLVM's new class
llvm::ElementCount's Min member
Really? Well, we will hardly be the only code they broke with that.
I think we can just wait for them to reconsider.
FYI This is now on LLVM's release_90 branch, due out on August 28.
Maybe we should co
Hello Thomas,
I would just #undef Min for our small number of .cpp files that
include LLVM headers. It's not as though you need it in C++, which
has std::min() from <algorithm>.
Like so. Fixes the problem for me (llvm-devel-9.0.d20190712).
Hmmm. Not so nice, but if it works, why not, at least the i
Hello Tom,
They should be fully independent anyway, so the order should
not matter?
On what grounds do you claim that's true anywhere, let alone
everywhere?
I mean that the intersection of Postgres realm, a database written in C,
and LLVM realm, a compiler written in C++, should not inter
Thanks for the feedback. Attached v3 with further documentation updates.
Attached v4 also fixes pg_stat_statements non regression tests, per pg
patch tester travis run.
--
Fabien.

diff --git a/contrib/pg_stat_statements/expected/pg_stat_statements.out b/contrib/pg_stat_statements/expected/pg
Bonjour Michaël,
Fabien, are you planning to send an updated patch? This stuff has
value.
Yep, I will try for this week.
--
Fabien.
Let's just commit the #undef so that seawasp is green and back to
being ready to tell us if something else breaks.
+1. I was afraid that working around this would be impossibly
painful ... but if it just takes one judiciously placed #undef,
let's do that and not argue about it.
Done.
Sea
Hello Kyotaro-san,
Thanks. I looked this more closely.
Indeed! Thanks for this detailed review.
+ * Marshal the COPY data. Either subroutine will get the
+ * connection out of its COPY state, then call PQresultStatus()
+ * once and report any error.
This comment doesn't explain what the r
My 0.02 €
It seems entirely crazy that encode() and decode() are no longer in the
same section, likewise that convert_from() and convert_to() aren't
documented together anymore.
Awkward, yes. But findable if you know what the categories are.
I suppose there could be 3 different categories:
Hello Jeremy,
Comments, notes?
One oddity about pg_relation_size and pg_table_size is that they can be
easily blocked by user activity. In fact it happens to us often in
reporting environments and we have instead written different versions of
them that avoid the lock contention and still gi
Hello Tom,
I'm also highly dubious about labeling this script "standard TPC-B",
when it resolves only some of the reasons why our traditional script
is not really TPC-B. That's treading on being false advertising.
IANAL, but it may not even be permissible to claim that we have
implemented "
Hello Tom,
[ shrug... ] TBH, the proposed patch does not look to me like actual
benchmark kit; it looks like a toy. Nobody who was intent on making their
benchmark numbers look good would do a significant amount of work in a
slow, ad-hoc interpreted language. I also wonder to what extent the
According to TPC themselves, "In contrast to TPC-A, TPC-B is not an
OLTP benchmark. Rather, TPC-B can be looked at as a database stress
test..." [1]. Sounds like classic pgbench to me.
Not sure where that leaves this patch. What problem is it actually
trying to solve?
That there is no builti