On 2024-Sep-10, James Pang wrote:
> There are no foreign keys, but there is one session that did transactions
> to tables with savepoints, one savepoint per SQL statement in the same
> transaction. But the sessions running the "SELECT" query do not use
> savepoints; there are just a lot of sessions running the same query and
On 2024-Sep-10, Amine Tengilimoglu wrote:
> Hi,
>
> I encountered this in a project we migrated to PostgreSQL
> before, and unfortunately, it’s a situation that completely degrades
> performance. We identified the cause as savepoints being used excessively
> and without control. O
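For illustration, a minimal sketch (table and statements are made up) of the one-savepoint-per-statement pattern being described; each SAVEPOINT opens a subtransaction, and once a single transaction has accumulated more than 64 of them the per-backend subxid cache overflows, forcing concurrent sessions to look up XIDs in pg_subtrans, which is SLRU-backed:

  BEGIN;
  SAVEPOINT s1;                                      -- opens subtransaction 1
  UPDATE some_table SET val = val + 1 WHERE id = 1;  -- hypothetical statement
  RELEASE SAVEPOINT s1;                              -- does not reclaim the subxid
  SAVEPOINT s2;                                      -- opens subtransaction 2
  UPDATE some_table SET val = val + 1 WHERE id = 2;
  RELEASE SAVEPOINT s2;
  -- ... repeated for every statement; past 64 subtransactions the lookups
  -- spill out of the backend's cache and into pg_subtrans
  COMMIT;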
On 2024-Sep-10, James Pang wrote:
> Hi experts,
> we have a PostgreSQL v14.8 database; thousands of backends hang
> on MultiXactOffsetSLRU at the same time, all of these sessions running the
> same query "SELECT ". From the OS and the PostgreSQL slow log, we found all
> of these queries in "BIND"
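As a first diagnostic step on a release like 14, something along these lines shows which SLRU the backends are hammering (a sketch; pg_stat_slru exists since PostgreSQL 13):

  -- high blks_read relative to blks_hit on MultiXactOffset/MultiXactMember
  -- means the SLRU working set no longer fits its buffers
  SELECT name, blks_hit, blks_read, blks_zeroed
    FROM pg_stat_slru
   ORDER BY blks_read DESC;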
On 2024-Mar-01, James Pang wrote:
> one question:
> do we need to increase all SLRU buffers together -- MULTIXACT, XACT,
> Subtrans, COMMIT TS -- for example, double all of them based on their
> existing size?
No need.
> or only increase Subtrans, or Subtrans and multixact?
Just increase the
Hi Chema,
On 2024-Feb-26, Chema wrote:
> Dear pgsqlers,
>
> I'm trying to optimize simple queries on two tables (tenders & items) with
> a couple million records. Besides the resulting records, the app also
> displays the count of total results. Doing count() takes as much time as
> the other
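A common workaround when the count only needs to be approximate is to show the planner's estimate instead of running count(*) over the whole result; a sketch, assuming the table name tenders from the message:

  -- fast, approximate row count from the statistics kept by ANALYZE/autovacuum;
  -- not transactionally exact
  SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
   WHERE relname = 'tenders';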
On 2024-Feb-02, James Pang (chaolpan) wrote:
> Possible to increase Subtrans SLRU buffer size ?
Not at present -- you need to recompile after changing
NUM_SUBTRANS_BUFFERS in src/include/access/subtrans.h,
NUM_MULTIXACTOFFSET_BUFFERS and NUM_MULTIXACTMEMBER_BUFFERS in
src/include/access/multix
On 2023-Sep-14, bruno da silva wrote:
> This problem is more acute when the FK Table stores a small number of rows
> like types or codes.
Right, because the likelihood of multiple transactions creating
new references to the same row is higher.
> I think in those cases an enum type should be used
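For illustration (names are made up): when many concurrent transactions insert children that reference the same parent row, each FK check takes a KEY SHARE lock on that parent row, and as soon as two such locks coexist the row's xmax has to become a multixact:

  -- session 1
  BEGIN;
  INSERT INTO orders (id, status_code) VALUES (1, 'NEW');  -- FK to status(code)

  -- session 2, before session 1 commits
  BEGIN;
  INSERT INTO orders (id, status_code) VALUES (2, 'NEW');
  -- both transactions now hold FOR KEY SHARE on the same status row,
  -- so a new multixact is created to record the two lockers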
On 2023-Sep-13, bruno da silva wrote:
> I just had an outage on postgres 14 due to multixact members limit exceeded.
Sadly, that's not as uncommon as we would like.
> So the documentation says "There is a separate storage area which holds the
> list of members in each multixact, which also uses
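For monitoring, a rough sketch of tracking multixact ID consumption (member-space usage itself has no SQL-level counter; compare the age against autovacuum_multixact_freeze_max_age):

  SELECT datname, datminmxid, mxid_age(datminmxid) AS multixact_age
    FROM pg_database
   ORDER BY multixact_age DESC;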
I came here to talk about partitionwise join, but then noticed you have
already thought of that:
On 2023-Jun-18, nicolas paris wrote:
> Note that both plans come from the same hash-partitioned table with
> 100 parts, with a unique index on list_id + hash_key. For strategy
> 2.1, I turned o
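For reference, partitionwise join is disabled by default and only applies when both sides are partitioned the same way; a sketch of toggling and checking it, reusing the column names from the message:

  SET enable_partitionwise_join = on;   -- off by default (GUC added in PG 11)
  -- hypothetical join on the partitioning columns of both tables
  EXPLAIN (COSTS OFF)
  SELECT *
    FROM t1 JOIN t2 USING (list_id, hash_key);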
On 2022-Nov-29, Mladen Gogala wrote:
> Hmmm, I think I will run pgbench with and without JIT on and see the
> difference.
I doubt you'll notice anything, because the pgbench queries will be far
below the JIT cost, so nothing will get JIT compiled at all. Or are you
planning on using a custom set
On 2022-Nov-28, Mladen Gogala wrote:
> As for JIT, I've recently asked that question myself. I was told that
> PostgreSQL with LLVM enabled performs approximately 25% better than without
> it.
Hmm, actually, normally you're better off turning JIT off, because it's
very common to diagnose cases of
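A sketch of how to check whether JIT actually fires for a given query, and how to switch it off if it turns out to be a net loss (table name is hypothetical):

  -- the plan output gains a "JIT:" section with compile times once the total
  -- plan cost exceeds jit_above_cost (default 100000)
  EXPLAIN (ANALYZE) SELECT count(*) FROM some_big_table;

  SET jit = off;                 -- current session only
  ALTER SYSTEM SET jit = off;    -- instance-wide, followed by SELECT pg_reload_conf();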
On 2022-Nov-28, Mladen Gogala wrote:
> You'll probably be glad to learn that we have hints now.
What hints are you talking about? As I understand, we still don't have
Oracle-style query hints.
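(For completeness: core PostgreSQL still only offers the enable_* planner GUCs; the closest thing to Oracle-style hints is the third-party pg_hint_plan extension, which reads hints from a leading comment. A sketch with made-up names, not an endorsement:)

  /*+ IndexScan(orders orders_pkey) */   -- only works with pg_hint_plan loaded
  SELECT * FROM orders WHERE id = 42;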
--
Álvaro Herrera               Breisgau, Deutschland — https://www.EnterpriseDB.com/
On 2022-Apr-14, Benjamin Tingle wrote:
> It doesn't help if I partition temp_data by textfield beforehand either
> (using the same scheme as the target table). It still opts to concatenate
> all of temp_data, hash it, then perform a sequential scan against the
> target partitions.
Does it still d
On 2021-Jul-22, l...@laurent-hasson.com wrote:
> Yes, agreed Peter... The "lower priority" issue was mentioned, but not
> in terms of the applicability of the fix overall. Personally, I would
> prefer going the size_t route vs int/long/int64 in C/C++/. Of course,
> as a user, I'd love a patch on V
On 2021-Jul-08, Justin Pryzby wrote:
> If I'm not wrong, this is the same thing you asked 2 weeks ago.
>
> If so, why not continue the conversation on the same thread, and why not
> reference the old thread ?
>
> I went to the effort to find the old conversation.
> https://www.postgresql.org/mess
Hello
On 2021-Apr-13, Andres Freund wrote:
> > The concerns that had come to my mind were more along the lines
> > of things like pg_dump requiring a larger footprint in the shared
> > lock table. We could alleviate that by increasing the default
> > value of max_locks_per_transaction, perhaps.
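For reference, a sketch of raising that limit; it sizes the shared lock table as max_locks_per_transaction * (max_connections + max_prepared_transactions) and only takes effect after a restart:

  ALTER SYSTEM SET max_locks_per_transaction = 128;   -- default is 64
  -- then restart the server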
On 2021-Apr-13, Andres Freund wrote:
> > Sounds like I should file this as a requested improvement?
>
> The ability to lock a toast table? Yea, it might be worth doing that. I
> seem to recall this being discussed not too long ago...
Yep, commit 59ab4ac32460 reverted by eeda7f633809. There were
On 2020-Sep-14, David Rowley wrote:
> On Tue, 8 Sep 2020 at 06:05, Raj wrote:
> >
> > > This would not exactly look like a bug, because the message says "to
> > > be locked", so at least it's not allowing two workers to lock the same
> > > tuple. But it seems that the skip-locked mode should not
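For context, the queue pattern under discussion looks roughly like this (table and columns are made up); each worker claims a batch and silently skips rows some other worker has already locked:

  SELECT id
    FROM job_queue
   WHERE status = 'pending'
   ORDER BY id
   FOR UPDATE SKIP LOCKED
   LIMIT 10;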
On 2020-Jun-20, Tom Lane wrote:
> I wrote:
> > ... oh, now I see: apparently, your filter condition is such that *no*
> > rows of the objectcustomfieldvalues table get past the filter:
> >
> > -> Index Scan using objectcustomfieldvalues3 on
> > objectcustomfieldvalues objectcustomf
On 2020-Feb-19, Lars Aksel Opsahl wrote:
> With the values above I did see the same performance problems, and we
> ended up with a lot of SubtransControlLock waits.
>
> So I started to change the code based on your feedbacks.
>
> - What seems to work very well in combination with a catch exception
> and retry
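For context, the catch-exception-and-retry approach means a PL/pgSQL BEGIN ... EXCEPTION block, and each entry into such a block opens an implicit subtransaction, which is exactly what drives SubtransControlLock contention at high concurrency; a minimal sketch with made-up names:

  DO $$
  BEGIN
      BEGIN                                      -- entering this block starts a subtransaction
          INSERT INTO results (id) VALUES (1);   -- hypothetical work
      EXCEPTION WHEN unique_violation THEN
          NULL;                                  -- swallow the error and carry on
      END;
  END
  $$;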
On 2020-Feb-16, Lars Aksel Opsahl wrote:
> On a server with 32 cores and 250 GB memory, with CentOS 7 and kernel
> 4.4.214-1.el7.elrepo.x86_64, I try to run 30 parallel threads using
> dblink. (https://github.com/larsop/postgres_execute_parallel) . I have
> tried to disconnect and reconnect in the
On 2020-Jan-13, Shira Bezalel wrote:
> Hi All,
>
> I'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a
> significant performance gain in one specific query. This is really great,
> but I'm just looking to understand why.
pg12 reads half the number of buffers. I bet it's because of
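To see that directly, comparing buffer counts for the same query on both versions is usually enough; a sketch (substitute the real query and table):

  -- the "Buffers: shared hit=... read=..." lines show how many pages each
  -- plan node touched
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM some_table;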
On 2019-Nov-14, Craig James wrote:
> I'm completely baffled by this problem: I'm doing a delete that joins three
> modest-sized tables, and it gets completely stuck: 100% CPU use forever.
Do you have any FKs there? If any delete is cascading, and you don't
have an index on the other side, it'd d
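The usual fix, sketched with made-up names, is an index on the referencing column, so the cascaded delete doesn't have to seqscan the child table once per deleted parent row:

  -- without this, every parent deletion scans child_table in full
  CREATE INDEX CONCURRENTLY child_table_parent_id_idx
      ON child_table (parent_id);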
On 2019-Jun-26, Justin Pryzby wrote:
> > Also, Should pg_buffercache perhaps be run at the beginning and end of the
> > week, to see if there is a significant difference?
>
> Yes; buffercache can be pretty volatile, so I'd save it numerous times each at
> beginning and end of week.
Be careful wi
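A sketch of the kind of snapshot being discussed (requires the pg_buffercache extension); saving the output into a timestamped table several times at the start and end of the week makes the before/after comparison meaningful:

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- which relations in the current database occupy the most shared buffers
  SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase = (SELECT oid FROM pg_database
                          WHERE datname = current_database())
   GROUP BY c.relname
   ORDER BY buffers DESC
   LIMIT 20;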
On 2019-Jun-26, Hugh Ranalli wrote:
> From my research in preparing for the upgrade, I understood transparent
> huge pages were a good thing, and should be enabled. Is this not correct?
It is not.
> Wouldn't the plan be the same at both
> the start of the week (when the problematic table is esse
On 2019-Apr-15, Tom Lane wrote:
> It's barely conceivable that in your particular query, there's something
> acting to break that which doesn't manifest typically; but I think it's
> much more likely that you simply haven't found the culprit allocation.
> It's quite feasible that many many ExecHas
On 2019-Apr-15, Gunther wrote:
> #0 AllocSetAlloc (context=0x1168230, size=385) at aset.c:715
> #1 0x0084e6cd in palloc (size=385) at mcxt.c:938
> #2 0x0061019c in ExecHashJoinGetSavedTuple
> (file=file@entry=0x8bbc528, hashvalue=hashvalue@entry=0x7fff2e4ca76c,
> tupleSlot=
On 2019-Feb-14, Mariel Cherkassky wrote:
> I meant the analyze; if analyze runs very often on the original table,
> aren't there disadvantages to it?
It'll waste time and resources pointlessly. Don't do it -- it won't do
any good.
--
Álvaro Herrera                https://www.2ndQuadrant.
On 2019-Feb-13, Mariel Cherkassky wrote:
> To be honest, it isn't my db, but I just have access to it ...
Well, I suggest you forget the password then :-)
> Either way, so I need to change the vacuum_Analyze_scale/threshold for the
> original table ? But the value will be too high/low for the ori
On 2019-Feb-13, Mariel Cherkassky wrote:
> Hey,
> I have a very big toasted table in my db (9.2.5).
Six years of bugfixes missing there ... you need to think about an
update.
> Autovacuum doesn't gather
> statistics on it because the analyze_scale/threshold are default and as a
> result autoanalyz
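For a table that large, the default analyze scale factor (10% of the table) means autoanalyze practically never fires; per-table settings are the usual answer. A sketch with a made-up table name:

  -- analyze after ~50000 changed rows instead of 10% of the table
  ALTER TABLE big_table
      SET (autovacuum_analyze_scale_factor = 0,
           autovacuum_analyze_threshold = 50000);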
On 2019-Jan-30, Mariel Cherkassky wrote:
> It seems that the version of the db is 9.6.10:
>
> psql -U db -d db -c "select version()";
> Password for user db:
> version
> ---
> PostgreSQL 9.6.1
On 2019-Jan-29, Shreeyansh Dba wrote:
> The virtualxid lock is special. It's an exclusive lock on the transaction's
> own virtual transaction ID that every transaction always holds. No other
> transaction can ever acquire it while the transaction is running.
> The purpose of this is to allow one tr
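That lock is visible in pg_locks; for illustration, a way to see each backend holding its own virtualxid lock and anyone waiting on one (waiters are typically things like CREATE INDEX CONCURRENTLY waiting for older transactions to finish):

  SELECT locktype, virtualxid, virtualtransaction, pid, mode, granted
    FROM pg_locks
   WHERE locktype = 'virtualxid'
   ORDER BY granted, pid;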
On 2019-Jan-17, Mariel Cherkassky wrote:
> But you said that the threshold that is chosen for the toasted table is
> identical to the original table's threshold, right?
You can configure them identical, or different. Up to you.
> Is it normal behavior that the original table has 1000 records
On 2019-Jan-17, Mariel Cherkassky wrote:
> I did it for the original table. But I see in the logs that the autovacuum
> on the toasted table isn't synced with the autovacuum of the original
> table. Therefore I thought it was worth setting it also for the toasted
> table. Can you explain why in th
On 2019-Jan-17, Mariel Cherkassky wrote:
> I tried to set the same threshold for the toasted table but got an error
> that it is a catalog table and therefore permission is denied.
> 2019-01-17 12:04:15 EST db116109 ERROR: permission denied:
> "pg_toast_13388392" is a system catalog
> 2019-01-17
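The supported way around that is to set the TOAST table's parameters through its parent table with the toast. prefix, rather than touching pg_toast.* directly; a sketch with a made-up table name:

  -- applies to the table's TOAST relation (TOAST tables are vacuumed, not analyzed)
  ALTER TABLE big_table
      SET (toast.autovacuum_vacuum_scale_factor = 0.01,
           toast.autovacuum_vacuum_threshold = 10000);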
On 2018-Dec-17, Tom Lane wrote:
> Queries like yours are kinda sorta counterexamples to that, but pretty
> much all the ones I've seen seem like crude hacks (and this one is not
> an exception). Writing a bunch of code to support them feels like
> solving the wrong problem. Admittedly, it's not
On 2018-Dec-06, Amit Langote wrote:
> The partitionwise join related
> changes in PG 11 moved the add_child_rel_equivalences call in
> set_append_rel_size such that child EC members would be added even before
> checking if the child rel is dummy, but for a reason named in the comment
> above the c
On 2018-Dec-06, Amit Langote wrote:
Hi
> [ Parallel SeqScan on precio_126 to precio_998 ]
>
> > -> Parallel Seq Scan on precio_999 p_874
> > (cost=0.00..27.50 rows=1 width=16)
> >    Filter: ((fecha >= '1990-05-06
> > 00:00:00'::timestamp without
So, the slowness in this test seems to come from
add_child_rel_equivalences() and bms_overlap() therein, according to
perf (mine and Justin's) ... apparently we end up with a lot of
equivalence class members. I added a debugging block to spit out the
number of ECs as well as the number of members
On 2018-Nov-26, Jakub Glapa wrote:
> Justin thanks for the information!
> I'm running Ubuntu 16.04.
> I'll try to prepare for the next crash.
> Couldn't find anything this time.
As I recall, the apport stuff in Ubuntu is terrible ... I've seen it
take 40 minutes to write the crash dump to disk,
On 2018-Sep-25, Justin Pryzby wrote:
> I asked a few weeks ago [0] but didn't get a response on -docs, so I'm
> resending here for wider review/discussion/.
I support the idea of adding a link to "Performance Optimization".
That's not a protected page, so you should be able to do it.
> [0]
> https://
On 2018-Aug-20, Fred Habash wrote:
> How do we go about calculating appropriate values for these two parameters ...
I don't know a lot about your system, so don't have anything to go on.
Also, Aurora is mostly unknown to me. What did Amazon say?
> > 'NUM_SUBTRANS_BUFFERS'?
> TOTAL_MAX_CACHED_SU
On 2018-Aug-17, Fred Habash wrote:
> Aurora Postgres 9.6.3
Oh, okay, I don't know this one. Did you contact Amazon support?
> So, no chance to recompile (AFAIK).
> Is there a design anti-pattern at the schema or data access level that we
> should look for and correct?
Maybe ...
> And as for t
On 2018-Aug-16, Fred Habash wrote:
> One of our database APIs is run concurrently by nearly 40 sessions. We see
> all of them waiting back and forth on this wait state.
What version are you running?
> Why is it called Subtrans Control Lock?
It controls access to the pg_subtrans structure, which
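A quick way to see that pile-up from SQL (wait_event/wait_event_type are in pg_stat_activity from 9.6 onwards):

  -- how many backends are currently waiting on each wait event
  SELECT wait_event_type, wait_event, count(*)
    FROM pg_stat_activity
   WHERE wait_event IS NOT NULL
   GROUP BY 1, 2
   ORDER BY count(*) DESC;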
彭昱傑 wrote:
> My Postgres version is 9.4.9, and I am facing a space issue.
Latest in 9.4 is 9.4.17, so you're missing about two years of bug fixes.
> Every time I restart the Postgres server, it generates a new history file:
That's strange -- it shouldn't happen ... sounds like you're causing a
crash each
pavan95 wrote:
> Hi Álvaro Herrera,
>
> Please find the corresponding output:
OK, these settings look pretty normal, so they don't explain your
problem.
What is checkpoint_segments set to? And checkpoint_timeout?
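(For reference, those can be read back with something like the following; checkpoint_segments exists only up to 9.4, from 9.5 onwards max_wal_size replaces it:)

  SELECT name, setting, unit, source
    FROM pg_settings
   WHERE name IN ('checkpoint_segments', 'checkpoint_timeout', 'max_wal_size');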
--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Develop
Please show the output of these queries in the relevant databases:
select name, setting, source, sourcefile, sourceline from pg_settings where
name like '%vacuum%';
select oid::regclass, reloptions from pg_class where reloptions is not null;
--
Álvaro Herrera                https://www.2ndQuadr
Rodrigo Rosenfeld Rosas wrote:
> Em 05-12-2017 15:25, Tom Lane escreveu:
> > > Normally this is because you lack indexes on the referencing columns, so
> > > the query that scans the table to find the referencing rows is a
> > > seqscan.
> > Actually though ... the weird thing about this is that I
Rodrigo Rosenfeld Rosas wrote:
> explain analyze delete from field_values where transaction_id=226;
> QUERY PLAN
> ---
> Delete on field_valu