Artur Zakirov wrote:
> I've tested the patch and it seems it works as expected.
Thanks for looking at this!
> It seems it isn't necessary to handle "\." within
> "CopyAttributeOutCSV()" (file "src/backend/commands/copyto.c")
> anymore.
It's still useful to produce CSV data that can be s
Tom Lane wrote:
> > [ v6-0001-Support-backslash-dot-on-a-line-by-itself-as-vali.patch ]
>
> I did some more work on the docs and comments, and pushed that.
Thanks!
> Returning to my upthread thought that
>
> >>> I think we should fix it so that \. that's not alone on a line
> >>> throw
Andrey M. Borodin wrote:
> I'm sending amendments addressing your review as a separate step in patch
> set. Step 1 of this patch set is identical to v39.
Some comments about the implementation of monotonicity:
+/*
+ * Get the current timestamp with nanosecond precision for UUID generati
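The quoted patch fragment concerns fetching a high-precision timestamp for UUID (v7) generation. As a minimal sketch of the monotonicity idea being reviewed — not the patch's actual code — a generator can bump its stored timestamp whenever the system clock has not advanced, so successive values always sort in generation order:

```python
import os
import time

_last_ms = 0  # timestamp used by the previous call

def uuid7_monotonic() -> bytes:
    """Illustrative monotonic UUIDv7-style generator (hypothetical sketch,
    not PostgreSQL's implementation).  The first 48 bits carry Unix time in
    milliseconds; if the clock has not advanced since the previous call, the
    stored value is bumped by one millisecond to force strict ordering."""
    global _last_ms
    ms = time.time_ns() // 1_000_000
    if ms <= _last_ms:              # clock stalled or stepped backwards
        ms = _last_ms + 1           # force a strictly increasing timestamp
    _last_ms = ms

    value = bytearray(16)
    value[0:6] = ms.to_bytes(6, "big")   # 48-bit millisecond timestamp
    value[6:16] = os.urandom(10)         # random tail
    value[6] = (value[6] & 0x0F) | 0x70  # version 7 nibble
    value[8] = (value[8] & 0x3F) | 0x80  # RFC 4122 variant bits
    return bytes(value)
```

Because the leading 48 bits strictly increase per call, byte-wise comparison of the generated values matches generation order even when many UUIDs are produced within the same millisecond.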
Andrey M. Borodin wrote:
> I've addressed all items, except formatting a table...
Sorry for not following up sooner.
To illustrate my point upthread that was left unaddressed, let's say
I have a server with an incorrect date in the future.
A session generates a UUID:
postgres=# select
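The concern above can be illustrated outside psql: because a UUIDv7-style value embeds its generation timestamp in the leading bits, a value stamped by a server with a clock set in the future will keep sorting after values generated later under the correct time. The helper below is hypothetical, used only to demonstrate the ordering:

```python
import os

def uuid7_at(unix_ms: int) -> bytes:
    """Build a UUIDv7-style value for a given Unix timestamp in
    milliseconds (illustrative helper, not PostgreSQL's code)."""
    value = bytearray(16)
    value[0:6] = unix_ms.to_bytes(6, "big")  # 48-bit millisecond timestamp
    value[6:16] = os.urandom(10)             # random tail
    value[6] = (value[6] & 0x0F) | 0x70      # version 7 nibble
    value[8] = (value[8] & 0x3F) | 0x80      # RFC 4122 variant bits
    return bytes(value)

YEAR_MS = 365 * 24 * 3600 * 1000
now_ms = 1_700_000_000_000                # some "correct" reference time
bad = uuid7_at(now_ms + YEAR_MS)          # stamped under a future clock
good = uuid7_at(now_ms + 60_000)          # generated a minute later, correct clock
assert bad > good                         # the earlier-created row sorts last
```

Once the clock is corrected, every newly generated UUID still compares below the wrongly stamped one until real time catches up with the bogus timestamp.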
Oleg Tselebrovskiy wrote:
> I've discovered a bug with string comparison using modified ICU
> collations
> Using a direct comparison and sorting values gives different results
>
> The easiest way to reproduce is the following:
>
> postgres=# create collation "en-US-u-kr-latn-dig
Michael Paquier wrote:
> Perhaps an \extended command that behaves outside a pipeline makes
> sense to force the use of queries without parameters to use the
> extended mode, but I cannot get much excited about the concept knowing
> all the meta-commands we have now (not talking about the
Anthonin Bonnefoy wrote:
> 0002: Allows ';' to send a query using extended protocol when within a
> pipeline by using PQsendQueryParams
It's a nice improvement!
> with 0 parameters. It is not
> possible to send parameters with extended protocol this way and
> everything will be propagate
Christoph Berg wrote:
> Perhaps this form could be improved by changing `\copy (select) to file`
> to something like `select \gcopy (to file)`. That might make :expansion
> in the "select" part easier to handle.
In this direction (COPY TO), it was already taken care of by
commit 6d3ede5f1
Jelte Fennema-Nio wrote:
> As an example you can copy paste this tiny script:
>
> \startpipeline
> select pg_sleep(5) \bind \g
> \endpipeline
>
> And then it will show these "extra argument ... ignored" warnings
>
> \startpipeline: extra argument "select" ignored
> \startpipeline: extra
Anthonin Bonnefoy wrote:
> Another possible option would be to directly send the command without
> requiring an additional meta-command, like "SELECT 1 \bind". However,
> this would make it more painful to introduce new parameters, plus it
> makes the \bind and \bind_named inconsistent as
Anthonin Bonnefoy wrote:
> > What is the reasoning here behind this restriction? \gx is a wrapper
> > of \g with expanded mode on, but it is also possible to call \g with
> > expanded=on, bypassing this restriction.
>
> The issue is that \gx enables expanded mode for the duration of the
Anthonin Bonnefoy wrote:
> So if I understand correctly, you want to automatically convert a
> simple query into an extended query when we're within a pipeline. That
> would be doable with:
>
> --- a/src/bin/psql/common.c
> +++ b/src/bin/psql/common.c
> @@ -1668,7 +1668,16 @@ ExecQueryAnd
Hi,
On large scripts, pgbench happens to consume a lot of CPU time.
For instance, with a script consisting of 50,000 "SELECT 1;" statements,
I see "pgbench -f 50k-select.sql" taking about 5.8 secs of CPU time,
out of a total time of 6.7 secs. When run with perf, this profile shows up:
81,10% pgbench pgben
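The test case can be regenerated mechanically; a small sketch producing the 50,000-line script from the report (the pgbench invocation itself needs a running server, so it is shown only as a comment; the filename matches the message):

```python
from pathlib import Path

# Write 50,000 copies of "SELECT 1;" to the script file from the report.
script = Path("50k-select.sql")
script.write_text("SELECT 1;\n" * 50_000)
print(script, "has", len(script.read_text().splitlines()), "lines")

# With a PostgreSQL server running, the timing and profile can then be
# reproduced with (requires pgbench in PATH):
#   time pgbench -n -f 50k-select.sql -t 1
#   perf record -- pgbench -n -f 50k-select.sql -t 1
```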
Tom Lane wrote:
> > I got nerd-sniped by this question and spent some time looking into
> > it.
Thank you for the patch! LGTM.
Best regards,
--
Daniel Vérité
https://postgresql.verite.pro/
Tom Lane wrote:
> > I see "pgbench -f 50k-select.sql" taking about 5.8 secs of CPU time,
> > out of a total time of 6.7 secs. When run with perf, this profile shows up:
>
> You ran only a single execution of a 50K-line script? This test
> case feels a little bit artificial. Having said
Jeff Davis wrote:
> The main challenge is backwards compatibility. Users of FTS would need
> to recreate all of their tsvectors and indexes dependent on them. It's
> even possible that some users only have tsvectors and don't store the
> original data in the database, which would further c
Jeff Davis wrote:
> Even if it's not a collatable type, it should use the database
> collation rather than going straight to libc. Again, is that something
> that can ever be fixed or are we just stuck with libc semantics for
> full text search permanently, even if you initialize the clust
Jeff Davis wrote:
> I have attached a patch 0001 that
> fixes a misleading hint, but it's still not great.
+1 for the patch
> When using ICU or the builtin provider, it still requires coming up
> with some valid locale name for LC_COLLATE and LC_CTYPE
No, since the following invocation
David G. Johnston wrote:
> > It's \pset null for boolean values
> >
>
> v1, Ready aside from bike-shedding the name.
An annoying weakness of this approach is that it cannot detect
booleans inside arrays, composite types, or COPY output,
meaning that the translation of t/f is incomplete.
Jeff Davis wrote:
> * The libc C.UTF-8 locale was a reasonable default (though not a
> natural language collation). But now that we have C.UTF-8 available
> from the builtin provider, then we should encourage that instead of
> relying on the slower, platform-specific libc implementation.