>
>
> Yes, there are very good reasons for the way removal of dead rows works now,
> but is there any chance of adding an option at table-creation time to disable
> this behavior, for instance for unlogged tables?
>
>
>
Are you saying your job is I/O bound (not memory or CPU)? And that you can
only imp
On Tue, Aug 29, 2023 at 3:57 PM Rondat Flyag wrote:
> I took the dump just to store it on another storage (external HDD). I
> didn't do anything with it.
>
> 29.08.2023, 21:42, "Jeff Janes" :
>
>
>
> On Tue, Aug 29, 2023 at 1:47 PM Rondat Flyag
> wrote:
>
> I have a legacy system that uses `Posg
300M rows isn't "huge", but it is starting to get to be real data.
Some notes/very general rules of thumb since you asked a very general
question:
1. Consider raising the statistics target on the table from the default of
100 to something larger - especially if you have a wide variety of
da
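The per-column knob mentioned above can be set without touching the global default; a minimal sketch (table and column names are hypothetical):

```sql
-- Raise the statistics target for one skewed column, then re-analyze.
ALTER TABLE big_table ALTER COLUMN status SET STATISTICS 1000;
ANALYZE big_table;  -- gathers a larger sample for that column only
```

Larger targets slow ANALYZE and planning slightly, so it is worth doing this per column rather than raising default_statistics_target globally.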
I've been thinking about the max_wal_senders parameter lately and wondering
if there is any harm in setting it too high. I'm wondering if I should try
to shave a few senders off, perhaps to match my logical replicas + 1,
instead of just leaving it at the default of 10. Or vice-versa, can
clients
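If you do decide to trim the sender slots, it is a one-line change, though note that max_wal_senders only takes effect after a restart (the value below is illustrative):

```sql
-- e.g. 3 logical replicas + 1 spare slot for pg_basebackup
ALTER SYSTEM SET max_wal_senders = 4;
-- a server restart is required for this parameter
```

Idle sender slots are cheap, so the main argument for lowering it is limiting how many replication connections can pile up, not saving resources.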
On Mon, Jan 30, 2023 at 4:32 PM Mok wrote:
> Hi,
>
> Unfortunately there is no pg_stat_activity data available as we are
> unaware of the issue until it has already happened.
>
> The version we are on is 12.11.
>
> I don't think it is due to locks as there are none in the logs. Vacuums
> are logg
>
>
> On Sun, 22 Jan 2023 at 13:34, aditya desai wrote:
>
>> Hi,
>> Is there any way to improve the performance of a LIKE clause on views?
>>
>> select * From request_vw where upper(status) like '%CAPTURED%' - 28
>> seconds.
>>
>> select * from request_vw where status='CAPTURED'
>>
>> Application team
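The unanchored pattern quoted above (`upper(status) like '%CAPTURED%'`) cannot use a plain btree index, but a trigram index often helps; a sketch, assuming the view reads from a base table `request` with a text column `status` (names hypothetical):

```sql
-- Trigram expression index on the base table, not the view.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX request_status_trgm
    ON request USING gin (upper(status) gin_trgm_ops);

-- The existing query form can then use the index:
SELECT * FROM request_vw WHERE upper(status) LIKE '%CAPTURED%';
```

Views have no storage of their own, so any index has to live on the underlying table; queries through the view pick it up automatically.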
I inherited a database with several single-digit billion row tables. Those
tables have a varchar(36) column populated with uuids (all connected to
each other via FKs) each currently supported by a btree index.
After the recent conversations about hash indexes I thought I'd do some
comparisons to
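For anyone wanting to repeat the comparison, the two index types can be built side by side on the same column; a sketch with hypothetical names:

```sql
-- Same column, two access methods; hash supports only equality lookups
CREATE INDEX orders_ref_btree ON orders USING btree (ref_uuid);
CREATE INDEX orders_ref_hash  ON orders USING hash  (ref_uuid);
```

Separately, storing uuids in a native `uuid` column (16 bytes) instead of `varchar(36)` tends to shrink both the table and every index on it, independent of the access method chosen.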
I was able to reproduce a similar issue using `= ANY(VALUES)`
> instead of `= ANY(ARRAY)`:
>
> 1. fast query using =ANY(ARRAY): https://explain.depesz.com/s/dwP8
> 2. slow query using =ANY(ARRAY(expr)): https://explain.depesz.com/s/3hGb
> 3. slow query using =ANY(VALUES): https://explain.depes
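The three forms being compared are syntactically close but planned very differently; a minimal sketch of each (table names hypothetical):

```sql
-- 1. Literal array: the planner sees the values directly.
SELECT * FROM t WHERE id = ANY (ARRAY[1, 2, 3]);

-- 2. Array built by a subquery: values are unknown at plan time.
SELECT * FROM t WHERE id = ANY (ARRAY(SELECT ref_id FROM other));

-- 3. VALUES list: treated as a subquery / join against a virtual table.
SELECT * FROM t WHERE id = ANY (VALUES (1), (2), (3));
```

Only the first form lets the planner estimate selectivity from the actual values, which is typically why it gets the better plan.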
I like to use pgbadger to collect data on what is happening in RDS
instances. You have to turn up a bunch of logging in RDS:
1. Turn on connection logging, duration logging, lock_waits, and anything
else that you are interested in studying.
2. Then grab all of your postgresql logs from AWS. I w
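As a sketch, the RDS parameter-group settings involved look roughly like the fragment below (the threshold and prefix are illustrative; pgbadger documents the `log_line_prefix` formats it can parse):

```
log_connections = on
log_disconnections = on
log_lock_waits = on
log_min_duration_statement = 250        # ms; pick a threshold you can afford
log_line_prefix = '%t:%r:%u@%d:[%p]:'   # a prefix format pgbadger understands
```

On RDS these go in a custom DB parameter group rather than postgresql.conf, and some only apply after the instance picks up the new group.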
On Fri, Aug 12, 2022 at 3:02 PM Rick Otten wrote:
>
>
> On Fri, Aug 12, 2022 at 2:50 PM Nico Heller wrote:
>
>> Good day,
>>
>> consider the following query:
>>
>> WITH aggregation(
>> SELECT
>> a.*,
>>
On Mon, Aug 1, 2022 at 10:16 AM Rick Otten wrote:
>
>> The other problem I ran into, which I'm still building a test case for
>> and I fear might be a bug if I can easily reproduce it,
>> is if I did the original select in a CTE, and then did a sort outside of
>&
>
>
> The other problem I ran into, which I'm still building a test case for and
> I fear might be a bug if I can easily reproduce it,
> is if I did the original select in a CTE, and then did a sort outside of
> the CTE, even though the CTE found 0 rows, the database
> still spent a _ton_ of time s
On Wed, Jul 27, 2022 at 8:55 AM Rick Otten wrote:
> I'm spinning up a new Postgresql 14 database where I'll have to store a
> couple years worth of time series data at the rate of single-digit millions
> of rows per day. Since this has to run in AWS Aurora, I can't
I'm spinning up a new Postgresql 14 database where I'll have to store a
couple years worth of time series data at the rate of single-digit millions
of rows per day. Since this has to run in AWS Aurora, I can't use
TimescaleDB.
I've been soliciting advice for best practices for building this.
One
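Without TimescaleDB, plain declarative partitioning by time is the usual starting point for this kind of ingest rate; a sketch with hypothetical names:

```sql
-- Range-partitioned time-series table, one partition per day.
CREATE TABLE metrics (
    ts     timestamptz NOT NULL,
    device bigint      NOT NULL,
    value  double precision
) PARTITION BY RANGE (ts);

CREATE TABLE metrics_2022_07_27 PARTITION OF metrics
    FOR VALUES FROM ('2022-07-27') TO ('2022-07-28');
```

Retention then becomes cheap: detaching and dropping an old partition avoids the bulk DELETE and vacuum churn that a monolithic table would require.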
On Sat, Jul 16, 2022 at 12:07 PM Rory Campbell-Lange <
r...@campbell-lange.net> wrote:
> I'd be grateful for some comments on the advisability of using a large
> number of concurrent logical replication publications/subscriptions.
> Below I've set out the current environment and a suggested design
On Mon, Nov 15, 2021 at 8:57 AM Mladen Gogala
wrote:
>
> On 11/15/21 00:04, Pavel Stehule wrote:
>
>
> Hi
>
>
>> There is a CentOS8-stream version which solves the problem but I cannot
>> use that in the office. I will probably have to wait for another month
>> before OL8 has everything that I ne
On Thu, Aug 5, 2021 at 12:57 AM Nikhil Shetty
wrote:
> Hi,
>
> Thank you for the suggestion.
>
> We tried dropping indexes and the load was faster than what we saw
> earlier. We wanted to know if anybody has made any other changes that help
> speed up the initial data load without dropping in
On Thu, May 6, 2021 at 10:38 AM Semen Yefimenko
wrote:
> Hi there,
>
> I've recently been involved in migrating our old system to SQL Server and
> then PostgreSQL. Everything has been working fine so far but now after
> executing our tests on Postgres, we saw a very slow running query on a
> larg
On Tue, May 4, 2021 at 6:05 AM Alex wrote:
> Shouldn't this process be automatic based on some heuristics?
>
> Saving 10ms planning but costing 14s execution is catastrophic.
>
> For example, using some statistics to limit planner time to some percent
> of previous executions.
> This way, if q
On Tue, Mar 23, 2021 at 2:52 AM Paul McGarry wrote:
>
>
> On Tue, 23 Mar 2021 at 16:13, Justin Pryzby wrote:
>
>> On Tue, Mar 23, 2021 at 03:00:38PM +1100, Paul McGarry wrote:
>> > I have a query where Postgresql (11.9 at the moment) is making an odd
>> plan
>> > choice, choosing to use index sc
On Mon, Mar 1, 2021 at 8:44 AM Bob Jolliffe wrote:
> Was just about to reply similarly. Mind you it perhaps does raise the
> question : are the default postgresql settings perhaps too
> conservative or too static. For example, in the absence of other
> explicit configuration, might it make more
On Fri, Mar 27, 2020 at 10:14 AM Ekaterina Amez
wrote:
>
> is there a better way to do this? I'm testing on version 9.2 BUT
> production server is 8.4 (legacy application, supposed to be in at least
> 9.2 but recently discovered it was 8.4, planning upgrade but not now).
> Config parameters are
On Wed, Nov 13, 2019 at 5:47 AM Morris de Oryx
wrote:
> Disclaimer: Out over my skis again.
>
> From what you say here, and over on SO, it sounds like you've got two
> problems:
>
> * Matching on *huge* numbers of records because of common tags.
>
> * A dynamic collection of tags as they're custo
On Tue, Oct 8, 2019 at 7:37 PM Arya F wrote:
> As my table has gotten bigger, it takes longer to get a single row back
> when querying a row by its btree index.
>
> Right now the database is running on a traditional HDD. SSDs have a much
> faster seek time than traditional HDDs.
>
> Would switchi
On Wed, Sep 11, 2019 at 12:38 PM Dinesh Somani wrote:
> I think Merlin has outlined pretty much all the options and very neatly.
> (As an asides Merlin could you possibly elaborate on the "C Hack" how that
> might be accomplished.)
>
> To OP, I am curious if the performance changes were the query
On Tue, Sep 10, 2019 at 12:53 AM yash mehta wrote:
> Hi Michael/Justin/Flo,
>
> Thank you all for your assistance. As Michael said, looks like there are
> no more tricks left.
>
> On Mon, Sep 9, 2019 at 9:09 PM Michael Lewis wrote:
>
>> If you can't modify the query, then there is nothing more t
On Thu, Aug 1, 2019 at 2:15 PM Andres Freund wrote:
> Hi,
>
> On 2019-08-01 23:36:33 +0530, Purav Chovatia wrote:
> > > If you've set synchronous_commit = off, and you still get only 1200
> > > transactions/sec, something else is off. Are you sure you set that?
> > I am not very surprised with th
On Mon, Jul 29, 2019 at 2:16 AM Jean Baro wrote:
>
> We have a new Inventory system running on its own database (PG 10 AWS
> RDS.m5.2xlarge 1TB SSD EBS - Multizone). The DB effective size is less than
> 10GB at the moment. We provided 1TB to get more IOPS from EBS.
>
> As we don't have a lot of
On Mon, Jul 22, 2019 at 2:42 AM Fabio Pardi wrote:
> Hello,
>
>
> I recently spent a bit of time benchmarking effective_io_concurrency on
> Postgres.
>
> I would like to share my findings with you:
>
>
> https://portavita.github.io/2019-07-19-PostgreSQL_effective_io_concurrency_benchmarked/
>
> C
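For readers wanting to try different values themselves, effective_io_concurrency is a session-settable parameter, so it can be benchmarked without a restart (the value below is illustrative, not a recommendation):

```sql
-- Try a higher value for this session only, e.g. for SSD-backed storage.
SET effective_io_concurrency = 200;
-- or persist a chosen value cluster-wide:
ALTER SYSTEM SET effective_io_concurrency = 200;
SELECT pg_reload_conf();
```

It mainly affects bitmap heap scans, so benchmarks should include queries that actually produce those plan nodes.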
On Wed, Jun 26, 2019 at 5:16 AM Hervé Schweitzer (HER) <
herve.schweit...@dbi-services.com> wrote:
> You know that Postgres doesn't have a shared_pool like Oracle, and the
> session information (execution plan, etc.) is only available for the
> current session. Therefore I also highly recommend to
On Sun, Jun 23, 2019 at 10:21 AM Rick Otten
wrote:
> I'm not sure where else to look, so I'm asking here for tips.
>
> I have a table in a remote (Heroku-managed) postgresql database (PG 10.7).
>
> On the other end, (PG 11.3) I have a foreign table configured with a
>
I'm not sure where else to look, so I'm asking here for tips.
I have a table in a remote (Heroku-managed) postgresql database (PG 10.7).
On the other end, (PG 11.3) I have a foreign table configured with a
materialized view in front of it.
Up until Thursday evening, it was taking about 12 - 15 s
On Mon, Apr 8, 2019 at 8:04 AM Julien Rouhaud wrote:
> On Mon, Apr 8, 2019 at 1:49 PM Daulat Ram
> wrote:
> >
> > Please confirm! Can we migrate an Oracle 12c database (12.1.0.1.0) running
> on Solaris to PostgreSQL 11.2 on Linux (Ubuntu). Also, please suggest the
> tools and pre-requisites.
> A
On Wed, Dec 19, 2018 at 6:45 PM Vladimir Ryabtsev
wrote:
> > The fundamental issue is that "ANY" has two meanings in PG, one of them
> following the SQL standard and one not:
>
> Oh yes, I was aware of the two forms but it did not come to mind; I
> was thinking I used the same form in both cas
On Sun, Dec 9, 2018 at 10:19 AM Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> Hi,
> I'm trying to understand why my database consumes so much space. I checked
> the space it consumes on disk:
>
>
Have you tried running pg_repack? (It is an extension.)
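pg_repack rewrites a bloated table (or index) in place while holding only brief locks, unlike VACUUM FULL. A sketch of the usual workflow (database and table names hypothetical):

```sql
-- Install the extension in the target database first.
CREATE EXTENSION pg_repack;
-- Then run the companion CLI from the shell, e.g.:
--   pg_repack --table=my_big_table mydb
```

Note it needs roughly as much free disk as the table being repacked, since it builds a compacted copy before swapping it in.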
On Fri, Sep 14, 2018 at 12:34 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Thu, Sep 13, 2018 at 12:49 PM, Fd Habash wrote:
>
>> An API function may invoke 10 queries. Ideally, I would like to know what
>> queries are invoked by it and how long each took.
>>
>>
>>
>> I’m using pg
Can you use a materialized view to do the bytea_agg() and then refresh
concurrently whenever you need updated data?
The refresh concurrently might take a few hours or days to run to keep the
matview up to date, but your queries would be pretty fast.
A possible problem is that you are running out
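As a sketch of the suggestion above (object names hypothetical, and `bytea_agg()` taken from the thread rather than a built-in): REFRESH ... CONCURRENTLY requires a unique index on the materialized view.

```sql
-- Precompute the expensive aggregate once.
CREATE MATERIALIZED VIEW agg_mv AS
    SELECT doc_id, bytea_agg(chunk) AS blob   -- bytea_agg() as referenced above
    FROM raw_chunks
    GROUP BY doc_id;

-- A unique index is mandatory for CONCURRENTLY.
CREATE UNIQUE INDEX agg_mv_doc_id ON agg_mv (doc_id);

-- Readers keep querying the old contents while this runs.
REFRESH MATERIALIZED VIEW CONCURRENTLY agg_mv;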
On Thu, Feb 8, 2018 at 11:40 AM, Vitaliy Garnashevich <
vgarnashev...@gmail.com> wrote:
> Anyway, there are still some strange things happening when
> effective_io_concurrency is non-zero.
>
> ...
>
> Vitaliy
>
>
I was researching whether I could optimize a concatenated lvm2 volume when
I have d
Some of my data processes use large quantities of temp space - 5 or 6T
anyway.
We are running in Google Cloud. In order to get the best performance out
of all of my queries that might need temp space, I've configured temp space
on a concatenated local (volatile) SSD volume. In GCE, local SSD's a
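Pointing temp files at the fast volatile volume is done with a dedicated tablespace; a sketch (the path is illustrative):

```sql
-- Tablespace on the local SSD volume; only temp files will land here.
CREATE TABLESPACE scratch LOCATION '/mnt/local-ssd/pg_scratch';
ALTER SYSTEM SET temp_tablespaces = 'scratch';
SELECT pg_reload_conf();
```

Since local SSDs in GCE are volatile, this is only safe for temp_tablespaces; losing the volume loses nothing durable, but the directory must be recreated before Postgres starts after a host event.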
>
>
>>
>> Setting enable_seqscan=off takes one of the shorter queries I was working
>> with from about 3 minutes to 300ms. This is a comparable performance
>> improvement to where I put a materialized view (with indexes) on top of the
>> materialized views instead of using a simple view on top of
On Tue, Feb 6, 2018 at 3:02 PM, Rick Otten wrote:
> Ooo. I wasn't aware of that option. (Learn something new every day!)
>
> Setting enable_seqscan=off takes one of the shorter queries I was working
> with from about 3 minutes to 300ms. This is a comparable performance
>
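For anyone reproducing this, the toggle is session-local and is best used as a diagnostic rather than a fix; a minimal sketch (object names hypothetical):

```sql
-- Compare plans with and without sequential scans allowed.
SET enable_seqscan = off;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM my_view WHERE id = 42;
RESET enable_seqscan;
```

If the forced plan is dramatically faster, the planner's cost estimates (or the statistics feeding them) are the real thing to investigate.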
On Tue, Feb 6, 2018 at 1:18 PM, Justin Pryzby wrote:
> On Sun, Feb 04, 2018 at 11:04:56AM -0500, Rick Otten wrote:
> > On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane wrote:
> >
> > > Rick Otten writes:
> > > > I'm wrestling with a very similar problem too - excep
On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane wrote:
> Rick Otten writes:
> > I'm wrestling with a very similar problem too - except instead of
> official
> > partitions I have a views on top of a bunch (50+) of unioned materialized
> > views, each "partition
On Sun, Feb 4, 2018 at 8:19 AM, legrand legrand wrote:
> What is the value of the GUC constraint_exclusion?
>
>
>
In my use case, which is a big union all behind a view, setting this to
off, on, or partition makes no difference. It still seq scans all of
the sub-tables in pg 10.1 whereas it use
On Sun, Feb 4, 2018 at 5:14 AM, Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
>
> Hi,
> I configured range partitions on a date column of my main table(log_full).
> Each partition represents a day in the month. Every day partition has a
> list parition of 4 tables on a text column.
>
> l
We moved our stuff out of AWS a little over a year ago because the
performance was crazy inconsistent and unpredictable. I think they do a
lot of oversubscribing so you get strange sawtooth performance patterns
depending on who else is sharing your infrastructure and what they are
doing at the tim
On Jan 29, 2018 at 3:52 PM, Thomas Munro wrote:
> On Tue, Jan 30, 2018 at 5:37 AM, Tom Lane wrote:
> > Rick Otten writes:
> >> I'm wondering if there is anything I can tune in my PG 10.1 database to
> >> avoid these errors:
> >
> >> $ psql -f failing_
I'm wondering if there is anything I can tune in my PG 10.1 database to
avoid these errors:
$ psql -f failing_query.sql
psql:failing_query.sql:46: ERROR: dsa_allocate could not find 7 free pages
CONTEXT: parallel worker
I tried throttling back the number of parallel workers to just 2, but that
did
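Throttling can be done per session while diagnosing, without touching the cluster config; a sketch (values illustrative):

```sql
-- Limit workers for this session while testing the failing query.
SET max_parallel_workers_per_gather = 2;
-- Or rule parallelism out entirely for the one query:
SET max_parallel_workers_per_gather = 0;
```

If the error disappears at 0, that points squarely at the parallel-query (DSA) machinery rather than the query itself.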
Since upgrading to PG 10 a few weeks ago I've been experimenting with hash
indexes. One thing I've noticed is that they seem to take a _lot_ longer
to create than btree indexes, particularly on large tables.
I've got a moderately sized table of about 38M rows and the create index
using hash for a
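A side-by-side build on the same column makes the timing difference easy to measure; a sketch with hypothetical names (the memory value is illustrative):

```sql
-- More maintenance memory speeds up both builds.
SET maintenance_work_mem = '2GB';

\timing on
CREATE INDEX t_val_hash  ON t USING hash  (val);
CREATE INDEX t_val_btree ON t USING btree (val);
```

Btree builds can sort their input and write the index bottom-up, while hash builds insert into buckets, which is one plausible reason for the gap observed here.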
After reading this article about keys in relational databases, highlighted
on hacker news this morning:
https://begriffs.com/posts/2018-01-01-sql-keys-in-depth.html
I keep pondering the performance chart, regarding uuid insert, shown
towards the bottom of the article. I believe he was doing that
On Wed, Dec 27, 2017 at 10:13 AM, Jean Baro wrote:
> Hello,
>
> We are still seeing queries (by UserID + UserCountry) taking over 2
> seconds, even when there is no batch insert going on at the same time.
>
> Each query returns from 100 to 200 messages, which would be a 400kb
> payload, which