Hello Robert,
Done now. Meanwhile, two more machines have reported the mysterious message:
sh: ./configure: not found
...that first appeared on spurfowl a few hours ago. The other two
machines are eelpout and elver, both of which list Thomas Munro as a
maintainer. spurfowl lists Stephen Fros
BTW it's now visible at:
https://www.postgresql.org/docs/devel/glossary.html
Awesome! Linking between defs and to relevant sections is great.
BTW, I'm in favor of "an SQL" because I pronounce it "ess-kew-el", but I
guess that people who say "sequel" would prefer "a SQL". Failing that, I'm
Hi Corey,
ISTM that occurrences of these words elsewhere in the documentation should
link to the glossary definitions?
Yes, that's a big project. I was considering writing a script to compile
all the terms as search terms, paired with their glossary ids, and then
invoke git grep to identify
Attached is an attempt at improving things. I have added an explicit note and
hijacked an existing example to better illustrate the purpose of the
function.
A significant part of the complexity of the patch is the overflow-handling
implementation of (a * b % c) for 64-bit integers.
Howeve
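For context, an overflow-safe (a * b) % c over 64-bit integers is typically built from shift-and-add multiplication. The sketch below is a generic illustration of that technique using unsigned arithmetic, not the patch's actual implementation (the helper name mul_mod is made up):

```c
#include <stdint.h>

/* Hypothetical sketch: compute (a * b) % m for 64-bit operands without
 * needing a 128-bit intermediate, by shift-and-add multiplication.
 * Every intermediate value stays below m, so nothing overflows. */
static uint64_t
mul_mod(uint64_t a, uint64_t b, uint64_t m)
{
	uint64_t	result = 0;

	a %= m;
	while (b > 0)
	{
		if (b & 1)
		{
			/* result = (result + a) % m without overflow */
			if (result >= m - a)
				result -= m - a;
			else
				result += a;
		}
		/* a = (2 * a) % m without overflow */
		if (a >= m - a)
			a -= m - a;
		else
			a += a;
		b >>= 1;
	}
	return result;
}
```

A plain `(a * b) % m` silently wraps once the product exceeds 2^64; the shift-and-add form trades a loop of at most 64 iterations for correctness.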
Hello,
Do I need to precede those with some recursive chmod commands? Perhaps
the client should refuse to run if there is still something left after
these.
I think the latter would be a very good idea, just so that this sort of
failure is less obscure. Not sure about whether a recursive chm
Hallo Peter,
Attached v14 moves the status extraction before the possible clear. I've
added a couple of results = NULL after such calls in the code.
In the psql.sql test file, the test I previously added concluded with \set
ECHO none, which was a mistake that I have now fixed. As a result,
command = SELECT pg_terminate_backend(pg_backend_pid());
result 1 status = PGRES_FATAL_ERROR
error message = "FATAL: terminating connection due to administrator command
"
result 2 status = PGRES_FATAL_ERROR
error message = "FATAL: terminating connection due to administrator command
server clo
Hello Peter,
It would be better to do without. Also, it makes one wonder how others
are supposed to use this multiple-results API properly, if even psql can't
do it without extending libpq. Needs more thought.
Fine with me! Obviously I'm okay if libpq is repaired instead of writing
strange
Speaking of buildfarm breakage, seawasp has been failing for the
past several days. It looks like bleeding-edge LLVM has again
changed some APIs we depend on. First failure is here:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2022-01-28%2000%3A17%3A48
Indeed.
I'm so
Hello Andres,
I'm doubtful that tracking development branches of LLVM is a good
investment. Their development model is to do changes in-tree much more than we
do. Adjusting to API changes the moment they're made will often end up with
further changes to the same / related lines. Once they open
Hello Tom,
I concur with Fabien's analysis: we report the FATAL message from
the server during the first PQgetResult, and then the second call
discovers that the connection is gone and reports "server closed
the connection unexpectedly". Those are two independent events
(in libpq's view anywa
Hello Ian,
cfbot reports the patch no longer applies. As CommitFest 2022-11 is
currently underway, this would be an excellent time to update the patch.
Attached a v5 which is just a rebase.
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 40e6a50a7f
Hello devs,
I want to abort a psql script. How can I do that? The answer seems to be
\quit, but it is not so simple:
- when the current script is from a terminal, you exit psql, OK
- when the current script is from a file (-f, <), you exit psql, OK
- when the current script is included
Hello Tom,
- when the current script is included from something,
you quit the current script and proceed after the \i of next -f, BAD
Question: is there any way to really abort a psql script from an included
file?
Under what circumstances would it be appropriate for a script to take
Hello David,
vagrant@vagrant:~$ /usr/local/pgsql/bin/psql -v ON_ERROR_STOP=1 -f two.psql
-f three.psql postgres
?column?
--
2
(1 row)
?column?
--
3
(1 row)
(there is a \quit at the end of two.psql)
Yep, that summarizes my issues!
ON_ERROR_STOP is only of SQL e
Hello David,
Question: is there any way to really abort a psql script from an
included file?
Under what circumstances would it be appropriate for a script to take
it on itself to decide that? It has no way of knowing what the next -f
option is or what the user intended.
Can we add an exit
Hmm. The refactoring is worth it as much as the differentiation
between QUERY and INTERNAL QUERY as the same pattern is repeated 5
times.
Now some of the output generated by test_extdepend gets a bit
confusing:
+-- QUERY:
+
+
+-- QUERY:
That's not entirely this patch's fault. Still that's n
Now some of the output generated by test_extdepend gets a bit
confusing:
+-- QUERY:
+
+
+-- QUERY:
That's not entirely this patch's fault. Still that's not really
intuitive to see the output of a query that's just a blank spot..
Hmmm.
What about adding an explicit \echo before these empty o
Hello Peter,
I had noticed that most getopt() or getopt_long() calls had their letter
lists in pretty crazy orders. There might have been occasional attempts
at grouping, but those then haven't been maintained as new options were
added. To restore some sanity to this, I went through and ord
Hello Tom,
That's a lot IMO, so my vote would be to discard this feature for now
and revisit it properly in the 15 dev cycle, so that resources are
redirected into more urgent issues (13 open items as of the moment of
writing this email).
I don't wish to tell people which open issues they ough
It would be useful to test replicating clusters with a (switch|fail)over
procedure.
Interesting idea but in general a failover takes some time (like a few
minutes), and it will strongly affect TPS. I think in the end it just
compares the failover time.
Or are you suggesting to ignore the time
Hello Pavel,
1. print the server version in the output of pgbench. Now only the
client-side version is printed
It is easy enough and makes sense. Maybe only if it differs from the
client-side version?
2. can we generate some output in a structured format - XML, JSON?
It is obviously possible,
Hello Pavel,
This is not a simple question. Personally I prefer to show this info every
time, although it can be redundant. Just as a check and for simpler
automatic processing.
When I run pgbench, I usually work with more releases together, so the
server version is important info.
Ok. Y
Finally it is unclear how to add such a feature with minimal impact on the
source code.
It is a question whether this is possible without more changes or without a
compatibility break :( Probably not. All output should be centralized.
Yes and no.
For some things we could have "void report_somety
Hello Thomas,
Since seawasp's bleeding-edge clang moved to "20210226", it failed
every run except 4, and a couple of days ago it moved to "20210508"
and it's still broken.
Indeed, I have noticed that there is an issue, but the investigation
is not very high on my current too deep pg-u
If you don't care about Ubuntu "apport" on this system (something for
sending crash/bug reports to developers with a GUI), you could
uninstall it (otherwise it overwrites the core_pattern every time it
restarts, no matter what you write in your sysctl.conf, apparently),
and then sudo sysctl -w
And of course this time it succeeded :-)
Hmmm. ISTM that failures are on and off every few attempts.
Just by the way, I noticed it takes ~40 minutes to compile. Is there
a reason you don't install ccache and set eg CC="ccache
/path/to/clang", CXX="ccache /path/to/clang++", CLANG="ccache
/p
On 2021-05-11 12:16:44 +1200, Thomas Munro wrote:
OK we got the SIGABRT this time, but still no backtrace. If the
kernel's core_pattern is "core", gdb is installed, then considering
that the buildfarm core_file_glob is "core*" and the script version is
recent (REL_12), then I'm out of ideas.
Hello Andres,
Unless perhaps the hard rlimit for -C is set? ulimit -c -H should show
that.
Possibly. I have just added "ulimit -c unlimited" in the script; we should
see the effect on the next round.
If it's the hard limit, that won't help, because the hard limit can only
be increased by a priv
Possibly. I have just added "ulimit -c unlimited" in the script; we should
see the effect on the next round.
for def5b065 it ended on the contrib ltree test:
2021-05-12 20:12:52.528 CEST [3042602:410] pg_regress/ltree LOG:
disconnection: session time: 0:00:13.426 user=buildfarm
database=co
Hello Andres,
It finally failed with a core on 8f72bba, in llvm_shutdown, AFAICS in a
free while doing malloc-related housekeeping.
My guess is that there is an actual memory corruption somewhere. It is
not obvious whether it is in bleeding-edge llvm or bleeding-edge postgres
though.
The is
The issue is non-deterministically triggered in contrib checks, either in
int or ltree, but not elsewhere. This suggests issues specific to these
modules, or triggered by these modules. Hmmm…
Hmm, yeah. A couple of different ways that ltreetest fails without crashing:
https://buildfarm.postg
Forgot to post the actual values:
r = 2563421694876090368
r = 2563421694876090365
Smells a bit like a precision problem in the workings of pg_erand48(),
but as soon as I saw floating point numbers I closed my laptop and ran
for the door.
Yup. This test has a touching, but entirel
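As a side note on the precision suspicion: the two values above differ by 3, but a C double has a 53-bit mantissa, so near 2^61 adjacent representable doubles are 512 apart and both integers collapse to the same double. A small illustrative helper (not from the thread):

```c
#include <stdint.h>

/* Illustrative helper: do two distinct 64-bit integers become
 * indistinguishable once converted to double (53-bit mantissa)? */
static int
collides_as_double(int64_t a, int64_t b)
{
	return a != b && (double) a == (double) b;
}
```

With the reported values, `collides_as_double(2563421694876090368, 2563421694876090365)` is true, which is exactly the kind of comparison a float-based test can get wrong.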
Confirmed, thanks for looking. I can reproduce it on my machine with
-m32. It's somewhat annoying that the buildfarm didn't pick it up
sooner :-(
On Wed, 19 May 2021 at 08:28, Michael Paquier wrote:
On Wed, May 19, 2021 at 09:06:16AM +0200, Fabien COELHO wrote:
I see two simple
Hello Dean,
Or, (3) remove this test? I am not quite sure what there is to gain
with this extra test considering all the other tests with permute()
already present in this script.
Yes, I think removing the test is the best option. It was originally
added because there was a separate code pat
Or, (3) remove this test? I am not quite sure what there is to gain
with this extra test considering all the other tests with permute()
already present in this script.
Yes, I think removing the test is the best option. It was originally
added because there was a separate code path for larger
We know that seawasp was okay as of
configure: using compiler=clang version 13.0.0
(https://github.com/llvm/llvm-project.git
f22d3813850f9e87c5204df6844a93b8c5db7730)
and not okay as of
configure: using compiler=clang version 13.0.0
(https://github.com/llvm/llvm-project.git
0e8f5e4a686483
Hello pg-devs,
I have given a go at proposing a replacement for rand48.
POSIX 1988 (?) rand48 is an LCG PRNG designed to generate 32-bit integers
or floats based on a 48-bit state on 16- or 32-bit architectures. LCG
cycles on the low bits, which can be quite annoying. Given that we run on
6
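The low-bit cycling is easy to demonstrate: with the POSIX drand48 constants (multiplier 0x5DEECE66D, increment 0xB, both odd), the lowest bit of the state flips on every step, a cycle of length 2. A minimal sketch (the helper name is illustrative):

```c
#include <stdint.h>

/* One step of the POSIX rand48 LCG: x' = (a*x + c) mod 2^48.
 * Since a and c are both odd, x' = x + 1 (mod 2): the low bit
 * merely alternates, which is the trivial cycle complained about. */
static uint64_t
lcg48_next(uint64_t x)
{
	return (x * UINT64_C(0x5DEECE66D) + UINT64_C(0xB))
		& ((UINT64_C(1) << 48) - 1);
}
```

This is why drand48-family implementations return only the high bits of the state; the low bits are far too regular to use directly.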
Hello Tomas,
I have given a go at proposing a replacement for rand48.
So what is the motivation for replacing rand48? Speed, quality of produced
random numbers, features rand48 can't provide, or what?
Speed can only be near rand48, see below. Quality (eg no trivial cycles,
does not fail t
Hello Andrey,
- NOT to invent a new design!
A radical version of this argument would be to use the de-facto standard and
ubiquitous MT19937.
Indeed, I started considering this one for this reason, obviously.
Though, I suspect, it's not optimal solution to the date.
"not optimal" does not do
Hello Aleksander,
- better software engineering
- similar speed (slightly slower)
- better statistical quality
- quite small state
- soundness
Personally, I think your patch is great.
Thanks for having a look!
Speaking of speed, I believe we should consider the performance of
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed
Although the patch looks OK I would like to keep the stat
Hello Yura,
didn't measure impact on raw performance yet.
Must be done. There c/should be a guc to control this behavior if the
performance impact is noticeable.
--
Fabien.
While working on something in "psql/common.c" I noticed some triplicated
code, including a long translatable string. This minor patch refactors
it into a single function.
--
Fabien.
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index 7a95465111..4fd80ec6bb 100644
--- a/src/bin/psql/com
With our current PRNG infrastructure it doesn't cost much to have
a separate PRNG for every purpose. I don't object to having
array_shuffle() and array_sample() share one PRNG, but I don't
think it should go much further than that.
Thanks for your thoughts, Tom. I have a couple of questions.
Bonjour Daniel,
Good catch! Thanks for the quick fix!
As usual, what is not tested does not:-(
Attached is a TAP test to check for the expected behavior with multiple
command \g.
--
Fabien.
diff --git a/src/bin/psql/t/001_basic.pl b/src/bin/psql/t/001_basic.pl
index f447845717..c81feadd4e 100644
serinus has been complaining about the new gcd functions in 13~:
moonjelly, which also runs a bleeding-edge gcc, started to fail the same
way at about the same time. Given that none of our code in that area
has changed, it's hard to think it's anything but a broken compiler.
Maybe somebod
Hello Michael,
The cause is that the time unit is changed to usec but the patch
forgot to convert agg_interval into the same unit in doLog. I was tempted
to change it to pg_time_usec_t, but it seems better for the unit to be
the same as other similar variables like duration.
As the option
Bonjour Michaël,
Here is an updated patch. While having a look at Kyotaro-san's patch, I
noticed that the aggregate stuff did not print the last aggregate. I think
that it is a side effect of switching the precision from per-second to
per-µs. I've made an attempt at also fixing that, which seems
Bonjour Michaël,
+ /* flush remaining stats */
+ if (!logged && latency == 0.0)
+ logAgg(logfile, agg);
You are right, this is missing the final stats. Why the choice of
latency here for the check?
For me zero latency really says that there
Hello Hayato-san,
I played with pgbench using wrong parameters,
That's good:-)
and I found a bug candidate.
1. Do initdb and start.
2. Initialize schema and data with "scale factor" = 1.
3. execute following command many times:
$ pgbench -c 101 -j 10 postgres
Then, sometimes the negative " ini
Hello Peter,
My overly naive trust in non-regression tests to catch any issues has been
largely proven wrong. Three key features do not have a single test. Sigh.
I'll have some time to look at it over next week-end, but not before.
I have reverted the patch and moved the commit fest entry t
+ while ((next = agg->start_time + agg_interval *
INT64CONST(100)) <= now)
I can find similar code converting "seconds" to "us" using casting, like
end_time = threads[0].create_time + (int64) 100 * duration;
or
next_report = last_report + (int64) 100 * progress
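The `(int64)` cast in these snippets is what keeps the seconds-to-microseconds conversion out of 32-bit arithmetic; without it, the multiplication overflows for intervals over about 2147 seconds. A hedged sketch of the idea (names are illustrative, not pgbench's actual code):

```c
#include <stdint.h>

/* Convert a duration in seconds to microseconds. The cast must be
 * applied to an operand *before* the multiplication: writing
 * "(int64_t) (seconds * 1000000)" would still overflow in 32-bit
 * int arithmetic first. */
static int64_t
seconds_to_usec(int seconds)
{
	return (int64_t) seconds * 1000000;
}
```

For example, an aggregation interval of one hour (3600 s) is 3,600,000,000 µs, which already exceeds INT32_MAX.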
Hello Yugo-san,
For example, when I use a large rate (-R) for throttling and a
small latency limit (-L) value with a duration (-T), pgbench
got stuck.
$ pgbench -T 5 -R 1 -L 1;
Indeed, it does not get out of the catchup loop for a long time because
even scheduling takes more time
I attached a patch for this fix.
The patch mostly works for me, and I agree that the bench should not get
stuck in a loop for any parameters, even when "crazy" parameters are given…
However I'm not sure this is the right way to handle this issue.
The catch-up loop can be dropped and the automaton ca
Wouldn't it be better to comment it like any other function?
Sure. Attached.
--
Fabien.
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index 9a00499510..00e5bf290b 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -523,6 +523,18 @@ PrintTiming(double elapsed_msec)
Hello Yugo-san,
TState has a field called "conn_duration" and this is, the comment says,
"cumulated connection and deconnection delays". This value is summed over
threads and reported as "average connection time" under -C/--connect.
If this option is not specified, the value is never used.
Y
Hmmm. Possibly. Another option could be not to report anything after some
errors. I'm not sure, because it would depend on the use case. I guess the
command returned an error status as well.
I did not know any use cases and decisions, but I vote to report nothing when
an error occurs.
I woul
Hello Michaël,
I think we don't have to call doLog() before logAgg(). If we call doLog(),
we will count an extra transaction that is not actually processed because
accumStats() is called within it.
Yes, calling both is weird.
The motivation to call doLog is to catch up zeros on slow rates, so
Attached a v3 which adds a boolean to distinguish recording vs flushing.
Better with the attachment… sorry for the noise.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index d7479925cb..3b27ffebf8 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgb
Michaël-san, Yugo-san,
I am fine with this version, but I think it would be better if we have
a comment explaining what "tx" is for.
Yes. Done.
Also, how about adding Assert(tx) instead of using "else if (tx)" because
we are assuming that tx is always true when agg_interval is not used, rig
+ * The function behaviors changes depending on sample_rate (a fraction of
+ * transaction is reported) and agg_interval (transactions are aggregated
+ * over the interval and reported once).
The first part of this sentence has incorrect grammar.
Indeed. v5 attempts to improve comments.
Hello Yugo-san,
When the connection breaks while the bench has already started,
maybe it makes more sense to proceed,
The result would be incomplete also in this case. However, the reason why
it is worth proceeding is that such information is still useful for users,
or we don't want to waste the
Hello Greg,
I have a lot of community oriented work backed up behind this right now, so
I'm gonna be really honest. This time-rework commit in its current form
makes me uncomfortable at this point in the release schedule. The commit
has already fought through two rounds of platform specific
pg_time_now(). This uses INSTR_TIME_SET_CURRENT in it, but this macro
can call clock_gettime(CLOCK_MONOTONIC[_RAW], ) instead of gettimeofday
or clock_gettime(CLOCK_REALTIME, ). When CLOCK_MONOTONIC[_RAW] is used,
clock_gettime doesn't return epoch time. Therefore, we can use
INSTR_TIME_SET_CURR
Hello Yugo-san,
I think we can fix this issue by using gettimeofday for logging as before
this was changed. I attached the patch.
I cannot say that I'm thrilled by having multiple tv stuff back in several
places. I can be okay with one, though. What about the attached? Does it
make sense?
At
Hello,
I cannot say that I'm thrilled by having multiple tv stuff back in several
places. I can be okay with one, though. What about the attached? Does it
make sense?
+1 The patch rounds down sd->start_time from ms to s but it seems to
me a degradation.
Yes, please we should not use time.
Hello Thomas,
Before I could get to startup timing I noticed the pgbench logging
output was broken via commit 547f04e7 "Improve time logic":
https://www.postgresql.org/message-id/E1lJqpF-00064e-C6%40gemulon.postgresql.org
It does suck that we broke the logging and that it took 3 months for
a
Wouldn't it be better to put all those fixes into the same bag?
Attached.
--
Fabien.
Wouldn't it be better to put all those fixes into the same bag?
Attached.
Even better if the patch is not empty.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index d7479925cb..3df92bdd2b 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@
Hello Yugo-san,
By the way, the issue of initial connection errors reported in this thread
will be fixed by the patch attached in my previous post (a major part was
written by you :-)
That does not, on its own, ensure that it is bug free:-)
). Is this acceptable for you?
I disagree on two
Second, currently the *only* function to change the client state is
advanceConnectionState, so it can be checked there and any bug is only
there. We had issues before when several functions were doing updates,
and it was a mess to understand what was going on. I really want that it
stays that
I found you forgot to fix printProgressReport().
Indeed.
Also, according to the document, interval_start in Aggregated Logging
seems to be printed in seconds instead of ms.
Indeed. I'm unsure about what we should really want there, but for a beta
bug fix I agree that it must simply compl
Is there an identified issue beyond the concrete example Greg gave of
the timestamps?
AFAICS, there is a patch which fixes all known issues linked to pgbench
logging. Other issues may exist, but the "broken"
area was quite specific. There are also some TAP tests on pgb
Hello Greg,
I think the only thing you and I disagree on is that you see a "first
issue in a corner case" where I see a process failure that is absolutely
vital for me to improve.
Hmmm. I agree that improvements are needed, but for me there are simply a
few missing (removed) TAP tests which
Hello Thomas,
I prepared a draft revert patch for discussion, just in case it comes
in handy. This reverts "pgbench: Improve time logic.", but "pgbench:
Synchronize client threads." remains (slightly rearranged).
I had a quick look.
I had forgotten that this patch also fixed the long-runni
Hello,
Doing this means we regard any state other than CSTATE_FINISHED as
aborted. So, the current goto's to done in threadRun are effectively
aborting some or all of the clients running on the thread. So for
example the following place:
pgbench.c:6713
/* must be something wrong */
/* must be something wrong */
pg_log_error("%s() failed: %m", SOCKET_WAIT_METHOD);
goto done;
Should say something like "thread %d aborted: %s() failed: ...".
After having a look, there are already plenty of such cases. I'd say not to
change anything for beta, and think of it
Hello Tom,
One point here is that printing the server version requires
access to a connection, which printResults() hasn't got
because we already closed all the connections by that point.
I solved that by printing the banner during the initial
connection that gets the scale factor, does vacuum
Hello Tom,
Why not move the printVersion call right after the connection is
created, at line 6374?
I started with that, and one of the 001_pgbench_with_server.pl
tests fell over --- it was expecting no stdout output before a
"Perhaps you need to do initialization" failure. If you don't
mind
It'd sure be nice if seawasp stopped spamming the buildfarm failure log,
too.
There was a silent API breakage (same API, different behavior, how nice…)
in llvm main that Andres figured out, which will have to be fixed at some
point, so this is a reminder that it is still a todo… Not sure when
Note that if no connections are available, then you do not get the
version, which may be a little bit strange. Attached v3 prints out the
local version in that case. Not sure whether it is worth the effort.
I'm inclined to think that the purpose of that output is mostly
to report the server v
Hello Tom,
So I'm not very confident that the noise will go away quickly, sorry.
Could you please just shut down the animal until that's dealt with?
Hmmm… Obviously I can.
However, please note that the underlying logic of "a test is failing,
let's just remove it" does not sound right to m
Hello Tom,
Could you please just shut down the animal until that's dealt with?
The test is failing because there is a problem, and shutting down the test
to improve a report does not in any way help to fix it; it just helps to
hide it.
Our buildfarm is run for the use of the Postgres projec
There was a silent API breakage (same API, different behavior, how nice…)
in llvm main that Andres figured out, which will have to be fixed at some
point, so this is a reminder that it is still a todo…
If it were *our* todo, that would be one thing; but it isn't.
Over on the other thread[1] we
Upon reflection, that amounts to the same thing really, so yeah,
scratch that plan. I'll defer until after that (and then I'll be
leaning more towards the revert option).
Sigh. I do not understand anything about the decision process.
If you do revert, please consider NOT reverting the tps
Bonjour Michaël,
If this were core server code threatening data integrity I would be
inclined to be more strict, but after all pgbench is a utility program,
and I think we can allow a little more latitude.
+1. Let's be flexible here. It looks better to not rush a fix, and
we still have som
Hello Yugo-san,
Thanks a lot for continuing this work started by Marina!
I'm planning to review it for the July CF. I've just added an entry there:
https://commitfest.postgresql.org/33/3194/
--
Fabien.
Hello,
+# note: this test is time sensitive, and may fail on a very
+# loaded host.
+# note: --progress-timestamp is not tested
+my $delay = pgbench(
+ '-T 2 -P 1 -l --aggregate-interval=1 -S -b se@2'
+ . ' --rate=20 --latency-limit=1000 -j ' . $nthreads
+ . ' -c 3 -r',
Hello Yugo-san:
# About v12.1
This is a refactoring patch, which creates a separate structure for
holding variables. This will come in handy in the next patch. There is also
a benefit from a software engineering point of view, so it has merit on
its own.
## Compilation
Patch applies cleanl
Bonjour Michaël,
Could it be possible to document the intention of the test and its
coverage then? With the current patch, one has to guess at the
intention behind this case.
Ok, see attached.
+check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,
+ qr{^\d{10,} \d{1,2}
Ola Álvaro,
... or, actually, even better would be to use a TODO block, so that the
test is run and reports its status, but if it happens not to succeed it
will not cause the whole test to fail. That way you'll accumulate some
evidence that may serve to improve the test in the future until it
Bonjour Michaël,
Using grep() with "$re" results in all the fields matching. Using on
the contrary "/$re/" in grep(), like list_files(), would only match
the first one, which is correct.
Ok, good catch. Perl is kind of a strange language.
With this issue fixed, I have bumped into what looks
Hello Simon,
Indeed.
There is already a "ready" patch in the queue, see:
https://commitfest.postgresql.org/33/3034/
--
Fabien.
(What were we thinking in allowing this in the first place?)
Temporary debug leftovers that got through, I'd say.
Thanks Michaël for the clean up!
--
Fabien.
Hello Thomas,
Seawasp should turn green on its next run.
Hopefully.
It is not scheduled very soon because Tom complained about the induced
noise in one buildfarm report, so I set the check to once a week.
I changed it to start a run in a few minutes. I've rescheduled to once a
day after
Seawasp should turn green on its next run.
It did!
--
Fabien.
Hello Yugo-san,
I'm wondering whether we could use "vars" instead of "variables" as a
struct field name and function parameter name, so that it is shorter and
more distinct from the type name "Variables". What do you think?
The struct "Variables" has a field named "vars" which is an array of
Hello Yugo-san,
# About v12.2
## Compilation
Patch seems to apply cleanly with "git apply", but does not compile on my
host: "undefined reference to `conditional_stack_reset'".
However it works better when using "patch". I'm wondering why git
apply fails silently…
When compiling ther