Hello Andrew & Michaël,
My 0.02€:
There's a whole lot wrong with this code. To start with, why is that
unchecked eval there?
Yep. The idea was that other tests would go on being collected, e.g. if the
file is not found, but it should have been checked anyway.
And why is it reading in log fil
Seems quite unnecessary. We haven't found that to be an issue elsewhere
in the code where slurp_file is used. And in the present case we know
the file exists because we got its name from list_files().
Agreed. That's a trade-off between a hard failure mid-test and a
failure while letting the w
However, if slurp_file fails it raises an exception and aborts the
whole TAP unexpectedly, which is pretty unclean. So I'd suggest to
keep the eval, as attached. I tested it by changing the file name so
that the slurp fails.
Hello Tom,
moonjelly just reported an interesting failure [1].
I noticed. I was planning to have a look at it, thanks for digging!
It seems that with the latest bleeding-edge gcc, this code is
misoptimized:
else if (imax - imin < 0 || (imax - imin) + 1 < 0)
+# Check for functions that libpq must not call.
+# (If nm doesn't exist or doesn't work on shlibs, this test will silently
+# do nothing, which is fine.)
+.PHONY: check-libpq-refs
+check-libpq-refs: $(shlib)
+ @! nm -A -g -u $< 2>/dev/null | grep -e abort -e exit
"abort" and "exit" cou
The failure still represents a gcc bug, because we're using -fwrapv which
should disable that assumption.
Ok, I'll report it.
Done at https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101254
--
Fabien.
Hello Tom,
The failure still represents a gcc bug, because we're using -fwrapv which
should disable that assumption.
Ok, I'll report it.
Done at https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101254
Fixed at r12-1916-ga96d8d67d0073a7031c0712bc3fb7759417b2125
https://gcc.gnu.org/git/gitweb.
Bonjour Michaël,
Okay, I have extracted this part from your patch, and back-patched
this fix down to 11. The comments were a good addition, so I have
kept them. I have also made the second regex of check_pgbench_logs()
pickier with the client ID value expected, as it can only be 0.
Attached
Hello Thomas,
I've added an entry on the open item on the wiki. I'm unsure about who the
owner should be.
There is already an item: "Incorrect time maths in pgbench".
Argh *shoot*, I went over the list too quickly, looking for "log" as a
keyword.
Fabien, thanks for the updated patch, I
Fabien, thanks for the updated patch, I'm looking at it.
After looking at it again, here is an update which ensures 64 bits on
epoch_shift computation.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 4aeccd93af..7750b5d660 100644
--- a/src/bin/pgbench/pgben
Hello Yugo-san,
Thanks for the update!
Patch seems to apply cleanly with "git apply", but does not compile on my
host: "undefined reference to `conditional_stack_reset'".
However it works better when using the "patch". I'm wondering why git
apply fails silently…
Hmm, I don't know why your c
Hello Thomas,
After looking at it again, here is an update which ensures 64 bits on
epoch_shift computation.
The code in pgbench 13 aggregates into buckets that begin on the
boundaries of wall clock seconds, because it is triggered by changes
in time_t. In the current patch, we aggregate data
Hello David,
This patch adds the concept of "multiconnect" to pgbench (better
terminology welcome).
Good. I was thinking of adding such capability, possibly for handling
connection errors and reconnecting…
The basic idea here is to allow connections made with pgbench to use
different auth
Hello Greg,
Some quick feedback about the patch and the arguments.
Filling: having an empty string/NULL has been bothering me for some time.
However there is a significant impact on the client/server network stream
while initializing or running queries, which means that pgbench older
perfor
Although this patch is marked RFC, the cfbot shows it doesn't
even compile on Windows. I think you missed updating Mkvcbuild.pm.
Indeed. Here is a blind attempt at fixing the build, I'll check later to
see whether it works. It would help me if the cfbot results were
integrated into the cf a
Hello Tom,
Indeed. Here is a blind attempt at fixing the build, I'll check later to
see whether it works. It would help me if the cfbot results were
integrated into the cf app.
Hmm, not there yet per cfbot, not sure why not.
I'll investigate.
Anyway, after taking a very quick look at the
Hello Dean,
I haven't looked at the patch in detail, but one thing I object to is
the code to choose a random integer in an arbitrary range.
Thanks for bringing up this interesting question!
Currently, this is done in pgbench by getrand(), which has its
problems.
Yes. That is one of the m
Hello Tom,
I went to commit this, figuring that it was a trivial bit of code
consolidation, but as I looked around in common.c I got rather
unhappy with the inconsistent behavior of things. Examining
the various places that implement "echo"-related logic, we have
the three places this patch pr
"-- # QUERY\n%s\n\n"
Attached an attempt along those lines. I found another duplicate of the
ascii-art printing in another function.
Completion queries seem to be out of the echo/echo hidden feature.
Incredible, there is a (small) impact on regression tests for the \gexec
case. All oth
Hello Dean,
It may be true that the bias is of the same magnitude as FP multiply,
but it is not of the same nature. With FP multiply, the
more-likely-to-be-chosen values are more-or-less evenly distributed
across the range, whereas modulo concentrates them all at one end,
making it more lik
Hello Dean & Tom,
Here is a v4, which:
- moves the stuff to common and fully removes random/srandom (Tom)
- includes a range generation function based on the bitmask method (Dean)
but iterates with splitmix so that the state always advances once (Me)
--
Fabien.
diff --git a/contrib/file_fd
Here is a v4, which:
- moves the stuff to common and fully removes random/srandom (Tom)
- includes a range generation function based on the bitmask method (Dean)
but iterates with splitmix so that the state always advances once (Me)
And a v5 where an unused test file does also compile if we
Hello Yura,
1. PostgreSQL source uses `uint64` and `uint32`, but not
`uint64_t`/`uint32_t`
2. I don't see why pg_prng_state could not be `typedef uint64
pg_prng_state[2];`
It could, but I do not see that as desirable. From an API design point of
view we want something clean and abstract, a
1. PostgreSQL source uses `uint64` and `uint32`, but not
`uint64_t`/`uint32_t`
Indeed you are right. Attached v6 does that as well.
--
Fabien.
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index 2c2f149fb0..146b524076 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/cont
Hello Tatsuo-san,
So overall gain by the patch is around 15%, whereas the last test before
the commit was 14%. It seems the patch is still beneficial after the
commit.
Yes, that's good!
I had a quick look again, and about the comment:
/*
* If partitioning is not enabled and server ver
Hello Dean,
- moves the stuff to common and fully removes random/srandom (Tom)
- includes a range generation function based on the bitmask method (Dean)
but iterates with splitmix so that the state always advances once (Me)
At the risk of repeating myself: do *not* invent your own sc
Now suppose we want a random number in the range [0,6). This is what
happens with your algorithm for each of the possible prng() return
values:
prng() returns 0 -- OK
prng() returns 1 -- OK
prng() returns 2 -- OK
prng() returns 3 -- OK
prng() returns 4 -- OK
prng() returns 5 -- OK
prng()
The important property of determinism is that if I set a seed, and then
make an identical set of calls to the random API, the results will be
identical every time, so that it's possible to write tests with
predictable/repeatable results.
Hmmm… I like my stronger determinism definition more t
Hello Yura,
I believe most "range" values are small, much smaller than UINT32_MAX.
In this case, according to [1] fastest method is Lemire's one (I'd take
original version from [2]) [...]
Yep.
I share your point that the range is more often 32 bits.
However, I'm not enthusiastic about combin
Hello Yura,
However, I'm not enthusiastic about combining two methods depending on
the range; the function looks complex enough without that, so I would
suggest not to take this option. Also, the decision process adds to
the average cost, which is undesirable.
Given 99.99% cases will be in the
Hello Dean,
Whilst it has been interesting learning and discussing all these
different techniques, I think it's probably best to stick with the
bitmask method, rather than making the code too complex and difficult
to follow.
Yes.
The bitmask method has the advantage of being very simple, eas
Finally, I think it would be better to treat the upper bound of the
range as inclusive.
This bothered me as well, but the usual approach seems to use range as the
number of values, so I was hesitant to depart from that. I'm still
hesitant to go that way.
Yeah, that bothered me too.
For exam
Hello Yura,
Given 99.99% cases will be in the likely case, branch predictor should
eliminate decision cost.
Hmmm. ISTM that a branch predictor should predict that unknown < small
should probably be false, so a hint should be given that it is really
true.
Why? Branch predictor is history ba
Hello Thomas,
Thanks! This doesn't seem to address the complaint, though. Don't
you need to do something like this? (See also attached.)
+initStats(&aggs, start - (start + epoch_shift) % 100);
ISTM that this is: (start + epoch_shift) / 100 * 100
That should reproduce wha
Hello Hannu,
I'm not sure we have transaction lasts for very short time that
nanoseconds matters.
Nanoseconds may not matter yet, but they could be handy when for
example we want to determine the order of parallel query executions.
We are less than an order of magnitude away from being able
Hello Thomas,
Isn't it better if we only have to throw away the first one?).
This should be the user decision to drop it or not, not the tool
producing it, IMO.
Let me try this complaint again. [...]
I understand your point.
For me silently removing the last bucket is not right because
Works for me: patch applies, global and local check ok. I'm fine with it.
I hoped we were done here but I realised that your check for 1-3 log
lines will not survive the harsh environment of the build farm.
Adding sleep(2) before the final doLog() confirms that. I had two
ideas:
1. Give up
Hello again,
I hoped we were done here but I realised that your check for 1-3 log
lines will not survive the harsh environment of the build farm.
Adding sleep(2) before the final doLog() confirms that. I had two
ideas:
So I think we should do 1 for now. Objections or better ideas?
At lea
Hello Vignesh,
I am changing the status to "Needs review" as the review is not
completed for this patch and also there are some tests failing, that
need to be fixed:
test test_extdepend ... FAILED 50 ms
Indeed,
Attached v4 simplifies the format and fixes this one.
I ran c
Hello,
Of course, users themselves should be careful of problematic scripts, but it
would be better if pgbench itself avoided problems when it can, beforehand.
Or, we should terminate the last cycle of the benchmark, regardless of whether
it is retrying or not, if -T expires. This will make pgbench behave
Hello Thomas,
I committed the code change without the new TAP tests, because I
didn't want to leave the open item hanging any longer.
Ok. Good.
As for the test, ... [...]
Argh, so there are no tests that would have caught the regressions:-(
... I know it can fail, and your v18 didn't f
Hello,
Thanks for the catch and the proposed fix! Indeed, on errors the timing is
not updated appropriately.
ISTM that the best course is to update the elapsed time whenever a result
is obtained, so that a sensible value is always available.
See attached patch which is a variant of Richard
Probably it would be appropriate to add a test case. I'll propose something
later.
committed with a test
Thanks!
--
Fabien.
Hello Tomas,
At one of the pgcon unconference sessions a couple days ago, I presented
a bunch of benchmark results comparing performance with different
data/WAL block size. Most of the OLTP results showed significant gains
(up to 50%) with smaller (4k) data pages.
You wrote something about SS
Hello Robert,
I think for this purpose we should limit ourselves to algorithms
whose output size is, at minimum, 64 bits, and ideally, a multiple of
64 bits. I'm sure there are plenty of options other than the ones that
btrfs uses; I mentioned them only as a way of jump-starting the
discussion.
Just a note/reminder that "seawasp" has been unhappy for some days now
because of yet another change in the unstable API provided by LLVM:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2022-06-23%2023%3A18%3A17
llvmjit.c:1115:50: error: use of undeclared identifier 'L
Hello Thomas,
llvmjit.c:1233:81: error: too few arguments to function call, expected 3,
have 2
ref_gen =
LLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);
Ah yes, I hadn't seen that one yet. That function grew a "Dispose"
argument, which we can just pass NU
I came to the same conclusions and went with Option 1 (see patch). Mainly
because most code in utils/adt is organized by type and this way it is
clear, that this is a thin wrapper around pg_prng.
Small patch update. I realized the new functions should live in
array_userfuncs.c (rather than ar
Hello,
Thank you for your feedback. I attached a patch, that addresses most of your
points.
I'll look into it. It would help if the patch could include a version
number at the end.
Should the exchange be skipped when i == k?
The additional branch is actually slower (on my machine, test
Bonjour Michaël,
Good. I was thinking of adding such capability, possibly for handling
connection errors and reconnecting…
round-robin and random make sense. I am wondering how round-robin
would work with -C, though? Would you just reuse the same connection
string as the one chosen at the s
Hello David,
round-robin and random make sense. I am wondering how round-robin
would work with -C, though? Would you just reuse the same connection
string as the one chosen at the starting point.
Well, not necessarily, but this is debatable.
My expectation for such a behavior would be tha
Ok. That makes sense. The output reports "including connections
establishing" and "excluding connections establishing" regardless with
-C, so we should measure delays in the same way.
On second thought, it's more reasonable and less confusing not to
measure the disconnection delays at all? Si
I would think we should leave it as it is for pg13 and before to not surprise
users.
Ok. Thank you for your opinion. I also agree with not changing the behavior of
long-stable branches, and I think this is the same opinion as Fujii-san.
Attached is the patch to fix to measure disconnection del
Hello Fujii-san,
ISTM that the patch changes pgbench so that it can skip counting
some skipped transactions here even for realistic rates under -T.
Of course, which would happen very rarely. Is this understanding right?
Yes. The point is to get out of the scheduling loop when time has expire
Hello Fujii-san,
Stop counting skipped transactions under -T as soon as the timer is
exceeded. Because otherwise it can take a very long time to count all of
them especially when quite a lot of them happen with unrealistically
high rate setting in -R, which would prevent pgbench from ending
Hello Aleksander,
Attached a v10 which is some kind of compromise where the interface uses
inclusive min and max bounds, so that all values can be reached.
Just wanted to let you know that cfbot [1] doesn't seem to be happy with
the patch. Apparently, some tests are failing. To be honest, I d
Hallo Peter,
It turns out that your v8 patch still has the issue complained about in [0].
The issue is that after COMMIT AND CHAIN, the internal savepoint is gone, but
the patched psql still thinks it should be there and tries to release it,
which leads to errors.
Indeed. Thanks for the cat
Hello again,
Just wanted to let you know that cfbot [1] doesn't seem to be happy with
the patch. Apparently, some tests are failing. To be honest, I didn't
invest too much time into investigating this. Hopefully, it's not a big
deal.
[1]: http://cfbot.cputube.org/
Indeed. I wish that these r
[1]: http://cfbot.cputube.org/
Indeed. I wish that these results would be available from the cf interface.
Attached a v11 which might improve things.
Not enough. Here is a v12 which might improve things further.
Not enough. Here is a v13 which might improve things further more.
--
Fabi
[1]: http://cfbot.cputube.org/
Indeed. I wish that these results would be available from the cf
interface.
Attached a v11 which might improve things.
Not enough. Here is a v12 which might improve things further.
Not enough. Here is a v13 which might improve things further more.
Not en
Hello Tom,
Just FTR, I strongly object to your removal of process-startup srandom()
calls.
Ok. The point of the patch is to replace and unify the postgres underlying
PRNG, so there was some logic behind this removal.
Those are not only setting the seed for our own use, but also ensuring
t
Just FTR, I strongly object to your removal of process-startup srandom()
calls.
Ok. The point of the patch is to replace and unify the postgres underlying
PRNG, so there was some logic behind this removal.
FTR, this was triggered by your comment on Jul 1:
[...] I see that you probably did
Attached v15 also does call srandom if it is there, and fixes yet another
remaining random call.
I think that I have now removed all references to "random" from pg source.
However, the test still fails on windows, because the linker does not find
a global variable when compiling extensions,
I guess the declaration needs PGDLLIMPORT.
Indeed, thanks!
Attached v16 adds that.
--
Fabien.
diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c
index d19f73127c..b250ae912b 100644
--- a/contrib/amcheck/verify_nbtree.c
+++ b/contrib/amcheck/verify_nbtree.c
@@ -32,6
Hello Peter,
Attached v9 integrates your tests and makes them work.
Attached v11 is a rebase.
--
Fabien.
diff --git a/contrib/pg_stat_statements/expected/pg_stat_statements.out b/contrib/pg_stat_statements/expected/pg_stat_statements.out
index b52d187722..0cf4a37a5f 100644
--- a/contrib/pg_st
Hello Tom,
As I threatened in another thread, I've looked through all of the
oldest commitfest entries to see which ones should maybe be tossed,
on the grounds that they're unlikely to ever get committed so we
should stop pushing them forward to the next CF.
psql - add SHOW_ALL_RESULTS opt
Attached v4 simplifies the format and fixes this one.
I think this goes way way overboard in terms of invasiveness. There's no
need to identify individual call sites of PSQLexec. [...]
ISTM that having the information was useful for the user who actually
asked for psql to show hidden queri
The patch does not apply on Head anymore, could you rebase and post a
patch. I'm changing the status to "Waiting for Author".
Ok. I noticed. The patch got significantly broken by the watch pager
commit. I also have to enhance the added tests (per Peter request).
--
Fabien.
Hello Yugo-san,
[...] One way to avoid these errors is to send Parse messages before
pipeline mode starts. I attached a patch to fix to prepare commands at
starting of a script instead of at the first execution of the command.
What do you think?
ISTM that moving prepare out of command ex
I attached the updated patch.
# About pgbench error handling v15
Patches apply cleanly. Compilation, global and local tests ok.
- v15.1: refactoring is a definite improvement.
Good, even if it is not very useful (see below).
While restructuring, maybe predefined variables could be ma
The patch does not apply on Head anymore, could you rebase and post a
patch. I'm changing the status to "Waiting for Author".
Ok. I noticed. The patch got significantly broken by the watch pager
commit. I also have to enhance the added tests (per Peter request).
I wrote a test to check psql
Hello Pavel,
The newly added PSQL_WATCH_PAGER feature which broke the patch does not
seem to be tested anywhere, this is tiring:-(
Do you have any idea how this can be tested?
The TAP patch sent by Peter on this thread is a very good start.
It requires some pager that doesn't use blocking
Hello,
Minimally for PSQL_WATCH_PAGER, the pager should exit after some time, but
before it has to repeat data reading. Otherwise psql will hang.
Sure. The "pager.pl" script I sent exits after reading a few lines.
can be solution to use special mode for psql, when psql will do write to
Ok. I noticed. The patch got significantly broken by the watch pager
commit. I also have to enhance the added tests (per Peter request).
I wrote a test to check psql query cancel support. I checked that it fails
against the patch that was reverted. Maybe this is useful.
Here is the update
I tested manually for the pager feature, which mostly works, although
"pspg --stream" does not seem to expect two tables, or maybe there is
a way to switch between these that I have not found.
pspg doesn't support this feature.
Sure. Note that it is not a feature yet:-)
ISTM that having som
Bonjour Michaël,
My 0.02€:
- pgbench has its own parsing routines for int64 and double, with
an option to skip errors. That's not surprising in itself, but, for
strtodouble(), errorOK is always true, meaning that the error strings
are dead.
Indeed. However, there are "atof" calls for option
Bonjour Michaël-san,
The semantics for fatal vs error is that an error is somehow handled while a
fatal is not. If the log message is followed by a cold exit, ISTM that
fatal is the right call, and I cannot help it if other commands do not do that.
ISTM more logical to align other commands to fat
Hello,
I do not understand your disagreement. Do you disagree about the
expected semantics of fatal? Then why provide fatal if it should not
be used? What is the expected usage of fatal?
I disagree about the fact that pgbench uses pg_log_fatal() in ways
that other binaries don't do.
Sure
[...] Thoughts?
For pgbench it is definitely ok to add the exit. For others the added
exits look reasonable, but I do not know them intimately enough to be sure
that it is the right thing to do in all cases.
All that does not seem to enter into the category of things worth
back-patching,
Hello Yugo-san,
I attached v2 patch including the documentation and some comments
in the code.
I've looked at this patch.
I'm unclear whether it does what it says: "exit immediately on abort", I
would expect a cold call to "exit" (with a clear message obviously) when
the abort occurs.
C
Hello Yugo-san,
There are cases where "goto done" is used where some error like
"invalid socket: ..." happens. I would like to make pgbench exit in
such cases, too, so I chose to handle errors below the "done:" label.
However, we can change it to call "exit" instead of "goto done" at each
place
Hello Yugo-san,
Some feedback about v2.
There is some dead code (&& false) which should be removed.
Maybe it should check that cancel is not NULL before calling PQcancel?
I think this is already checked as below, but am I missing something?
+ if (all_state[i].cancel != NULL)
+
Hello Yugo-san,
Currently, the psql's test of query cancelling (src/bin/psql/t/020_cancel.pl)
gets the PPID of a running psql by using "\!" meta command, and sends SIGINT to
the process by using "kill". However, IPC::Run provides signal() routine that
sends a signal to a running process, so I
Hello Yugo-san,
I attached the updated patch v3 including changes above, a test,
and fix of the typo you pointed out.
I'm sorry but the test in the previous patch was incorrect.
I attached the correct one.
About pgbench exit on abort v3:
Patch does not "git apply", but is ok with "patch"
Hello devs,
Pgbench is managing clients I/Os manually with select or poll. Much of
this could be managed by libevent.
Pros:
1. libevent is portable, stable, and widely used (eg Chromium, Memcached,
PgBouncer).
2. libevent implements more I/O wait methods, which may be more efficient on
so
Pgbench is managing clients I/Os manually with select or poll. Much of this
could be managed by libevent.
Or maybe libuv (used by nodejs?).
From preliminary testing libevent seems not too good at fine-grained time
management, which is used for throttling, whereas libuv advertised that it
is
Bonjour Michaël,
On Sun, Aug 13, 2023 at 11:22:33AM +0200, Fabien COELHO wrote:
Test run is ok on my Ubuntu laptop.
I have a few comments about this patch.
Argh, sorry!
I looked at what was removed (a lot) from the previous version, not what
was remaining and should also have been
Hello Thomas,
Pgbench is managing clients I/Os manually with select or poll. Much of this
could be managed by libevent.
Or maybe libuv (used by nodejs?).
From preliminary testing libevent seems not too good at fine-grained time
management, which is used for throttling, whereas libuv advertised
Interesting. In my understanding this also needs to make Latch
frontend-friendly?
It could be refactored to support a different subset of event types --
maybe just sockets, no latches and obviously no 'postmaster death'.
But figuring out how to make latches work between threads might also
be
4. libevent development seems sluggish; the last bugfix was published 3 years ago,
version
2.2 has been baking for years, but the development seems lively (+100
contributors).
Ugh, I would stay away from something like that. Would we become
hostage to an undelivering group? No thanks.
Ok.
Hello Nathan,
1. so I don't have to create the script and function manually each
time I want to test mainly the database (instead of the
client-database system)
2. so that new users of PostgreSQL can easily see how much better OLTP
workloads perform when packaged up as a server-side function
[...] Further changes are already needed for their "main" branch (LLVM
17-to-be), so this won't quite be enough to shut seawasp up.
For information, the physical server which was hosting my 2 bf animals
(seawasp and moonjelly) has given up rebooting after a power cut a few
weeks/months ago,
About pgbench exit on abort v4:
Patch applies cleanly, compiles, local make check ok, doc looks ok.
This looks ok to me.
--
Fabien.
Hello Nathan,
I'm unclear about what variety of scripts could be provided given the
tables made available with pgbench. ISTM that other scenarios would involve
both an initialization and associated scripts, and any proposal would be
barred because it would open the door to anything.
Why's
Hello Dave,
I am running pgbench with the following
pgbench -h localhost -c 100 -j 100 -t 2 -S -s 1000 pgbench -U pgbench
--protocol=simple
Without pgbouncer I get around 5k TPS
with pgbouncer I get around 15k TPS
Looking at the code connection initiation time should not be part of the
calcu
Hello Yugo-san,
In thread #0, setup_cancel_handler is called before the loop, the
CancelRequested flag is set when Ctrl+C is issued. In the loop, cancel
requests are sent when the flag is set only in thread #0. SIGINT is
blocked in other threads, but the threads will exit after their query
are
Yugo-san,
Some feedback about v1 of this patch.
Patch applies cleanly, compiles.
There are no tests, could there be one? ISTM that one could be done with a
"SELECT pg_sleep(...)" script?
The global name "all_state" is quite ambiguous, what about "client_states"
instead? Or maybe it could
Hello Bruce,
Hmmm. This seems to suggest that interacting with something outside
should be an option.
Our goal is not to implement every possible security idea someone has,
because we will never finish, and the final result would be too complex
to be usable.
Sure. I'm trying to propose some
Hello,
This patch was marked as ready for committer, but clearly there's an
ongoing discussion about what should be the default behavior, if this
breaks existing apps etc. So I've marked it as "needs review" and moved
it to the next CF.
The issue is that root (aka Tom) seems to be against the
Hello Justin,
Rebased onto 7b48f1b490978a8abca61e9a9380f8de2a56f266 and renumbered OIDs.
Some feedback about v18, seen as one patch.
Patch applies cleanly & compiles. "make check" is okay.
pg_stat_file() and pg_stat_dir_files() now return a char type, as well as
the function which call th