On Mon, Nov 11, 2024 at 8:36 AM Ba Jinsheng wrote:
> The default configurations of PostgreSQL incur the error: "ERROR: could not
> resize shared memory segment "/PostgreSQL.3539600020" to 2097152 bytes: No
> space left on device"
No comment on your optimiser experiments for now, but for this e
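For readers hitting the quoted error: one common cause (my assumption here, not stated in the thread) is a too-small /dev/shm, e.g. inside a container, so parallel query's dynamic shared memory segments cannot be resized. A sketch of the usual remedy on Linux:

```shell
# Remount /dev/shm with a larger size (value is illustrative; needs root).
mount -o remount,size=2G /dev/shm

# In Docker, the equivalent is starting the container with a bigger shm:
#   docker run --shm-size=2g ...
```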
On Mon, Oct 7, 2024 at 9:02 AM Shiv Iyer wrote:
>- As the string length increases, the performance degrades exponentially
> when using special characters. This is due to the collation’s computational
> complexity for each additional character comparison.
That's a pretty interesting observat
On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou wrote:
> So would it make sense for postgres to perform reads in bigger blocks? Is it
> easy-ish to implement (where would one look for that)? Or must the I/O unit be
> tied to postgres' page size?
FYI as of last week we can do a little bit of t
On Sat, Sep 16, 2023 at 7:42 AM Tom Lane wrote:
> Sadly, this proves very little about Linux's behavior. glibc's idea
> of en_US involves some very complicated multi-pass sort rules.
> AFAICT from the FreeBSD sort(1) man page, FreeBSD defines en_US
> as "same as C except case-insensitive", wherea
On Thu, Jul 13, 2023 at 6:50 AM Dimitrios Apostolou wrote:
> Interesting and kind of sad that the last update on the wiki page is from
> 2021. What is the latest prototype? I'm not sure I'm up to the task of
> putting my database to the test. ;-)
It works pretty well, certainly well enough to tr
On Wed, Jul 12, 2023 at 5:12 AM Thomas Munro wrote:
> "gathering"
(Oops, for reads, that's "scattering". As in scatter/gather I/O but I
picked the wrong one...).
On Wed, Jul 12, 2023 at 1:11 AM Dimitrios Apostolou wrote:
> Note that I suspect my setup being related, (btrfs compression behaving
> suboptimally) since the raw device can give me up to 1GB/s rate. It is however
> evident that reading in bigger chunks would mitigate such setup
> inefficiencies.
On Fri, May 5, 2023 at 8:37 AM ProfiVPS Support wrote:
> I feel like ANYTHING would be better than this. Even risking losing _some_
> of the latest data in case of a server crash (if it crashes we lose data
> anyways until restart, ofc we could have HA I know and we will when there'll
> be a
On Tue, Apr 18, 2023 at 2:43 PM peter plachta wrote:
> I was trying to understand whether there are any known workarounds for random
> access + index vacuums. Are my vacuum times 'normal' ?
Ah, it's not going to help on the old versions you mentioned, but for
what it's worth: I remember noticing
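One generic knob relevant to index vacuum cost (my addition, not from the truncated reply): a larger maintenance_work_mem lets VACUUM collect more dead TIDs per pass, so each index is scanned fewer times.

```sql
-- Sketch: session-level setting before a manual VACUUM; the value and the
-- table name are illustrative.
SET maintenance_work_mem = '1GB';
VACUUM (VERBOSE) my_table;
```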
On Thu, Feb 2, 2023 at 1:54 PM Alex Kaiser wrote:
> Thanks for the explanation. Yes, that is the query plan I was imagining. I do
> see how chopping it up could result in an unfair distribution. But my counter
> to that would be that wouldn't chopping it up still be better than not. If
> things
On Wed, Feb 1, 2023 at 6:39 PM Alex Kaiser wrote:
> select * from testing where id in (1608377,5449811, ... <1000 random ids>
> ,4654284,3558460);
>
> Essentially I have a list of 1000 ids and I would like the rows for all of
> those ids.
>
> This seems like it would be pretty easy to paralleliz
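A common rewrite of such a query (an assumption on my part, not something proposed in the quoted mail) is to pass the ids as a single array rather than a 1000-item IN list, which also makes the statement parameterizable:

```sql
-- Placeholder ids; in practice the array would be a single bound parameter.
SELECT * FROM testing WHERE id = ANY (ARRAY[1608377, 5449811, 4654284, 3558460]);
```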
On Tue, Dec 20, 2022 at 8:59 AM Frits Jalvingh wrote:
> @ranier
> These files ONLY exist during the query. They get deleted as soon as the
> query terminates, by Postgres itself. Once the query terminates pgsql_tmp is
> completely empty. Considering what Thomas said (and the actual occurrence of
On Mon, Dec 19, 2022 at 1:51 PM Thomas Munro wrote:
> It's really the limit for a single file
Oops, sorry I take that back. It should be per process.
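Since the limit is per process, a query that uses parallel workers can consume more than the nominal value in total, because each worker process gets its own allowance. A hedged sketch:

```sql
-- temp_file_limit caps temporary file space per process, not per query;
-- with N parallel workers the effective total can approach (N+1) times this.
SET temp_file_limit = '100GB';  -- value is illustrative
```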
On Mon, Dec 19, 2022 at 9:11 AM Justin Pryzby wrote:
> On Sun, Dec 18, 2022 at 06:29:41PM +0100, Frits Jalvingh wrote:
> > Just to be clear: my real question is: why is temp_file_limit not
> > working at the specified size? Because this is my real problem: when a
> > query is dying like this it wi
On Wed, Aug 24, 2022 at 3:06 PM Tom Lane wrote:
> Thomas Munro writes:
> > Oh, one comment there is actually obsolete now AFAIK. Unless there is
> > some reason to think personality(ADDR_NO_RANDOMIZE) might not work in
> > some case where sysctl -w kernel.randomize_va_spa
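For context, the two Linux mechanisms under discussion can be exercised like this (a sketch; the paths are placeholders, the flags are the standard ones, not taken from the patch):

```shell
# Per-process: run a child with ASLR disabled via personality(ADDR_NO_RANDOMIZE).
setarch "$(uname -m)" -R /usr/local/pgsql/bin/postgres -D /path/to/data

# System-wide equivalent (needs root):
sysctl -w kernel.randomize_va_space=0
```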
On Tue, Aug 23, 2022 at 2:42 PM Thomas Munro wrote:
> > 0002 isn't quite related, but while writing 0001 I noticed a nearby
> > use of /proc/sys/... which I thought should be converted to sysctl.
> > IMO /proc/sys pretty much sucks, at least for documentation purposes,
>
On Tue, Aug 23, 2022 at 3:53 PM Tom Lane wrote:
> Thomas Munro writes:
> > On Tue, Aug 23, 2022 at 4:57 AM Tom Lane wrote:
> > +service the requests, with those clients receiving unhelpful
> > +connection failure errors such as Resource temporarily
> > +unavailable.
On Tue, Aug 23, 2022 at 4:57 AM Tom Lane wrote:
> 0001 adds a para about how to raise the listen queue length.
+service the requests, with those clients receiving unhelpful
+connection failure errors such as Resource temporarily
+unavailable.
LGTM but I guess I would add "... or Conn
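The tuning being documented boils down to raising the kernel's listen-queue limit; a sketch of the relevant sysctls (values illustrative, both need root):

```shell
# Linux: cap on the backlog passed to listen(2).
sysctl -w net.core.somaxconn=1024

# FreeBSD: the equivalent knob (older spelling: kern.ipc.somaxconn).
sysctl -w kern.ipc.soacceptqueue=1024
```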
On Mon, Aug 22, 2022 at 2:18 PM Tom Lane wrote:
> Thomas Munro writes:
> > On Mon, Aug 22, 2022 at 12:20 PM Tom Lane wrote:
> >> Hmm. It'll be awhile till the 128 default disappears entirely
> >> though, especially if assorted BSDen use that too. Probably
>
On Mon, Aug 22, 2022 at 12:20 PM Tom Lane wrote:
> Thomas Munro writes:
> > Yeah retrying doesn't seem that nice. +1 for a bit of documentation,
> > which I guess belongs in the server tuning part where we talk about
> > sysctls, perhaps with a link somewhere n
On Mon, Aug 22, 2022 at 10:55 AM Tom Lane wrote:
> Not sure what I think at this point about making libpq retry after
> EAGAIN. It would make sense for this particular undocumented use
> of EAGAIN, but I'm worried about others, especially the documented
> reason. On the whole I'm inclined to lea
On Mon, Aug 22, 2022 at 9:48 AM Tom Lane wrote:
> It's also pretty unclear why the kernel would want to return
> EAGAIN instead of letting the nonblock connection path do the
> waiting, which is why I'm suspecting a bug rather than designed
> behavior.
Could it be that it fails like that if the l
On Fri, Nov 19, 2021 at 6:03 AM Robert Creager wrote:
> Which would be better? Discard plans or forcing custom plans? Seems like
> wrapping a copy might be better than the Postgres.conf change as that would
> affect all statements. What kind of performance hit would we be taking with
> that
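The two options Robert weighs map onto these commands (both are standard PostgreSQL features; plan_cache_mode requires v12 or later):

```sql
DISCARD PLANS;                            -- drop this session's cached plans once
SET plan_cache_mode = force_custom_plan;  -- or: replan on every execution (v12+)
```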
On Thu, Nov 18, 2021 at 1:18 PM Robert Creager wrote:
> So, how do I go about capturing more information for the big brains (you
> guys) to help figure this out? I have all our resources at mine (and hence
> your) disposal.
As a workaround, does it help if you issue DISCARD PLANS before your
C
On Thu, Nov 18, 2021 at 8:28 AM Tom Lane wrote:
> Justin Pryzby writes:
> > It shows that the process is running FK triggers.
>
> Indeed, and doing a seqscan therein. Normally I'd suppose that
> this reflects a lack of an index, but RI_FKey_check should always
> be doing something that matches t
On Wed, Nov 17, 2021 at 11:51 AM Thomas Munro wrote:
> It's the right output format, but isn't /pid == '$PID'/ only going to
> match one single process called "postgres"? Maybe /execname ==
> "postgres"/ to catch them all?
Oh, duh, it's the top CPU one. Makes sense. Never mind :-)
On Wed, Nov 17, 2021 at 11:40 AM Robert Creager wrote:
> Presuming this is the type of output you are expecting:
>
> CPU     ID                    FUNCTION:NAME
>   0  58709                        :tick-10s
>
>
>               postgres`AtEOXact_LargeObject+0x11
>               postgres`CommitTrans
On Tue, Nov 16, 2021 at 5:43 PM Robert Creager wrote:
> One CPU is pegged, the data has been sent over STDIN, so Postgres is not
> waiting for more, there are no other queries running using this select:
So PostgreSQL is eating 100% CPU, with no value shown in
wait_event_type, and small numbers o
On Fri, Mar 12, 2021 at 10:09 PM Luca Ferrari wrote:
>         fdatasync               16269.365 ops/sec      61 usecs/op
>         fsync                    8471.429 ops/sec     118 usecs/op
> Non-sync'ed 8kB writes:
>         write                  278484.510 ops/sec
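Output like the quoted numbers comes from pg_test_fsync; a hedged usage sketch:

```shell
# Measure sync-method throughput against a test file on the target filesystem.
pg_test_fsync -f /tmp/pg_test_fsync.out -s 5   # -s: seconds per test
```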
On Tue, Feb 23, 2021 at 5:49 AM Luca Ferrari wrote:
> I'm running a virtual machine with FreeBSD 12.2, PostgreSQL 12.5 and
> UFS as filesystem.
> I was experimenting with fsync = off and pgbench, and I see no
> particular difference in tps having fsync enabled or disabled.
> Now, the same tiny tes
On Fri, Aug 21, 2020 at 9:58 AM Jim Jarvie wrote:
> However, as I scale up the number of concurrent connections, I see a spike in
> CPU (to 100% across 80 cores) when the SELECT FOR UPDATE SKIP LOCKED executes
> and the select processes wait for multiple minutes (10-20 minutes) before
> complet
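For reference, the pattern contending here is the usual queue-pop shape (table and column names invented, not from the report):

```sql
SELECT id FROM jobs
WHERE  status = 'pending'
ORDER  BY id
FOR UPDATE SKIP LOCKED
LIMIT  100;
```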
To reproduce this I was mostly
running the same query (based on my guess at what ingredients are
needed), but perhaps it requires a particular allocation pattern that
will require more randomness to reach... hmm.
--
Thomas Munro
http://www.enterprisedb.com
same query in a tight loop for 8 hours, and then
not seen it again in the past 3 weeks. Clearly there is an issue needing
a fix here, but I don't yet know what it is.
--
Thomas Munro
http://www.enterprisedb.com
he bug reported as #15585 (ie some
type of rare corruption). Are you able to reproduce this
consistently? Can you please show the query plan?
--
Thomas Munro
http://www.enterprisedb.com
ill reproduce it on current master, REL_11_STABLE or REL_10_STABLE?
--
Thomas Munro
http://www.enterprisedb.com
org/message-id/flat/87sh48ffhb@news-spur.riddles.org.uk
[1] https://www.postgresql.org/docs/10/static/parallel-safety.html
--
Thomas Munro
http://www.enterprisedb.com
) I just cloned https://github.com/citusdata/citus.git and
skimmed through "git diff origin/release-7.4..origin/release-7.5", and
nothing is jumping out at me. Can you still see the problem with
Citus 7.4?
--
Thomas Munro
http://www.enterprisedb.com
's using multiple Gather[Merge] nodes and parallel
bitmap heap scans? Was it a regular backend process or a parallel
worker process (or a Citus worker process, if that is a thing?) that
raised the error?
--
Thomas Munro
http://www.enterprisedb.com
A workaround until 10.2
would be to reduce max_parallel_workers_per_gather to 0 to prevent
parallelism completely for this query.
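That workaround, expressed as a session-level setting (a standard GUC):

```sql
SET max_parallel_workers_per_gather = 0;  -- disable parallel query for this session
```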
--
Thomas Munro
http://www.enterprisedb.com