Stephen Frost writes:
> * Job (j...@colliniconsulting.it) wrote:
>> We thought to implement one partition per day.
>> Do you think that would be fine?
> Really depends on what you're doing. If you're running very short
> queries that pull out just a record or a few records, then you're going
> to
Greetings,
* Job (j...@colliniconsulting.it) wrote:
> >If this is really what you're mostly doing, having constraint exclusion and
> >an index on 'profile' would probably be enough, if you insist on continuing
> >to have the table partitioned by day (which I continue to argue is a bad
> >idea-
> We thought to implement one partition per day.
That would be 365 partitions in a year.
In our experience, INSERTs suffer the most in a partitioned table because
triggers are the only way to route each row to the proper child (inherited)
table.
Question: What is your insert pattern? Do you inse
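A minimal sketch of that kind of trigger-based routing for 9.6-era inheritance partitioning (the parent, child, and "timestamp" column names are assumed from elsewhere in this thread, not taken from the poster's actual schema):

-- Sketch only: route INSERTs on the parent to the matching per-day child,
-- as pre-10 inheritance partitioning requires.
CREATE OR REPLACE FUNCTION webtraffic_archive_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- Build the child table name from the row's date, e.g.
    -- webtraffic_archive_day_2017_02_20, and insert the row there.
    EXECUTE format('INSERT INTO webtraffic_archive_day_%s SELECT ($1).*',
                   to_char(NEW."timestamp", 'YYYY_MM_DD'))
    USING NEW;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER webtraffic_archive_route
    BEFORE INSERT ON webtraffic_archive
    FOR EACH ROW EXECUTE PROCEDURE webtraffic_archive_insert_trigger();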
Hi Stephen,
Thank you for your excellent opinion!
>If this is really what you're mostly doing, having constraint exclusion and an
>index on 'profile' would probably be enough, if you insist on continuing to
>have the table partitioned by day (which I continue to argue is a bad idea-
>based on
On 2/20/2017 5:22 AM, Stephen Frost wrote:
You probably shouldn't be partitioning by day for such a small dataset,
unless you've only got a few days worth of data that make up those 800m
records.
Agreed. We keep about 6 months of retention, partitioned by week, so there are 26 or
so partitions, which is reasonable.
> Unfortunately, that'll require locking each table and scanning it to make
> sure that the CHECK constraint isn't violated.
Actually, CHECK constraints can be added with the NOT VALID clause.
New tuples will be checked immediately, while the validation of existing tuples
can be done later using ALTER TABLE ... VALIDATE CONSTRAINT.
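For illustration, a sketch of that two-step approach (the table and constraint names below are invented, following the daily naming used elsewhere in this thread):

-- Step 1: add the constraint without scanning existing rows; only new or
-- updated rows are checked from this point on.
ALTER TABLE webtraffic_archive_day_2017_02_20
    ADD CONSTRAINT webtraffic_2017_02_20_check
    CHECK ("timestamp" >= '2017-02-20' AND "timestamp" < '2017-02-21')
    NOT VALID;

-- Step 2: validate existing rows later; the scan runs under a weaker lock
-- (SHARE UPDATE EXCLUSIVE) than a plain ADD CONSTRAINT would hold.
ALTER TABLE webtraffic_archive_day_2017_02_20
    VALIDATE CONSTRAINT webtraffic_2017_02_20_check;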
Greetings,
* Job (j...@colliniconsulting.it) wrote:
> here is primarily a partitioned table (for the 20/2/2017 logs):
> flashstart=# \d webtraffic_archive_day_2017_02_20;
> Table "public.webtraffic_archive_day_2017_02_20"
> Column | Type
7;::text)
Heap Blocks: lossy=225509
-> Bitmap Index Scan on webtraffic_archive_day_2017_02_16_timestamp_date_wbidx (cost=0.00..120.00 rows=1 width=0) (actual time=29.277..29.277 rows=2255360 loops=1)
Index Cond: (("timestamp")::d
Greetings,
* Job (j...@colliniconsulting.it) wrote:
> we have a test machine with PostgreSQL 9.6.1 and about 800,000,000 records in a
> table.
> The table is partitioned by day, with indexes on the partitioned table.
You probably shouldn't be partitioning by day for such a small dataset,
unless you've only got a few days worth of data that make up those 800m records.
Please share an EXPLAIN ANALYZE of your query and the \d+ output for your table.
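For instance, something like the following (the query and table name here are placeholders, not the poster's):

EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM webtraffic_archive
    WHERE "timestamp" >= '2017-02-20' AND "timestamp" < '2017-02-21';

\d+ webtraffic_archive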
2017-02-20 13:33 GMT+01:00 Job :
> Hi guys,
>
> we have a test machine with PostgreSQL 9.6.1 and about 800,000,000 records
> in a table.
> The table is partitioned by day, with indexes on the partitioned table.
>
> Also a simple query (
Hi guys,
we have a test machine with PostgreSQL 9.6.1 and about 800,000,000 records in a
table.
The table is partitioned by day, with indexes on the partitioned table.
Also, a simple query (for example, when I want to search for logs that occurred on a
specific day) is immediate in the partitioned table (table_2017_02_1
Evan Martin writes:
> Thanks, Tom. You mean this bit, right?
> -> Seq Scan on _test_pos (cost=0.00..10728.00 rows=1 width=4)
> Filter: ((('010120E6101C401C40'::geography && _st_expand(pos, 30::double precision)) AND ...
> I tried to find some i
Thanks, Tom. You mean this bit, right?
-> Seq Scan on _test_pos (cost=0.00..10728.00 rows=1 width=4)
Filter: ((('010120E6101C401C40'::geography && _st_expand(pos, 30::double precision)) AND ...
I tried to find some info on selectivity estimation f
Evan Martin writes:
> I've run into a weird query performance problem. I have a large, complex
> query which joins the results of several set-returning functions with
> some tables and filters them by calling another function, which involves
> PostGIS calls (ST_DWithin). This used to run in abo
I've run into a weird query performance problem. I have a large, complex
query which joins the results of several set-returning functions with
some tables and filters them by calling another function, which involves
PostGIS calls (ST_DWithin). This used to run in about 10 seconds until I
change
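For readers unfamiliar with the call involved, a minimal sketch of an ST_DWithin filter of the kind described (the table and column names echo the plan fragment quoted elsewhere in this thread; the reference point is invented, and the 30-metre radius is only illustrative):

-- Keep rows whose geography position lies within 30 metres of a reference
-- point (the point literal here is invented for the example).
SELECT t.*
FROM   _test_pos AS t
WHERE  ST_DWithin(t.pos,
                  'SRID=4326;POINT(7 7)'::geography,
                  30.0);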
On 11 July 2011 17:19, Jonathan Barber wrote:
> I'm trying to debug a jboss/hibernate application that uses PostgreSQL
> as a backend, for which PostgreSQL is reporting a lot of queries as
> taking around 4398046 ms (~73 minutes) plus or minus 10 ms to
> complete. I have two questions about this.
I'm trying to debug a jboss/hibernate application that uses PostgreSQL
as a backend, for which PostgreSQL is reporting a lot of queries as
taking around 4398046 ms (~73 minutes) plus or minus 10 ms to
complete. I have two questions about this.
First, when I look at the logs, the long queries appea
"DANG Trieu" <[EMAIL PROTECTED]> writes:
> On the "abstract" table (event):
> gsmlog3_30=# explain analyze select * from event where timestamp > 1226952050683 order by timestamp desc limit 10;
Turning on constraint exclusion might help here, if you have appropriate
check constraints for the rang
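A sketch of what that suggestion looks like for an inheritance setup like the one quoted (the child-table name and the millisecond-epoch boundaries are illustrative, not the poster's):

-- Each child carries a non-overlapping CHECK constraint on the partition
-- key (here a bigint of epoch milliseconds, as in the quoted query).
CREATE TABLE event_2008_11 (
    CHECK ("timestamp" >= 1225497600000 AND "timestamp" < 1228089600000)
) INHERITS (event);

-- With constraint exclusion enabled, the planner can skip children whose
-- CHECK range cannot satisfy the WHERE clause.
SET constraint_exclusion = on;   -- newer releases default to 'partition'
EXPLAIN SELECT * FROM event
        WHERE "timestamp" > 1226952050683
        ORDER BY "timestamp" DESC LIMIT 10;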
Thank you, Tomas, for the reply.
I'm including the requested information below. To summarize my problem:
It is the same query, with ORDER BY ... DESC, run on two tables (concrete and abstract).
A backward PK index scan is used when run on the concrete table, but not
when run on the abstract table.
a) Was the table
> Date: Tue, 18 Nov 2008 16:06:16 +0100
> Subject: Re: [GENERAL] Slow queries when ORDER BY ... DESC with table
> inheritance (no index scan backwards)
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: pgsql-general@postgresql.org
>
> A lot of important information is m
A lot of important information is missing in your post, for example:
a) Was the table analyzed recently? Is the table vacuumed regularly?
b) How large are the tables? (Number of tuples and pages. SELECT
reltuples, relpages FROM pg_class WHERE relname LIKE 'event%')
c) What values are used for the
Hi all,
I'm a newbie to Postgres so please bear with me. I have a schema that
uses inherited tables. I need the queries on my 'event' table to always
be in descending order of the primary key, i.e. scan the index backwards
(for obvious performance reasons). Somehow the ORDER BY doesn't seem to
be
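To make the symptom concrete, here is the shape of the problem; the names are placeholders (the exact ones are not in the post), and on releases of that era there was no Merge Append (it arrived in PostgreSQL 9.1), so an ordered path across inheritance children was not available:

-- Against the inheritance parent (the "abstract" table): the planner
-- appends all children and sorts, instead of walking each child's index
-- backwards.
EXPLAIN SELECT * FROM event ORDER BY "timestamp" DESC LIMIT 10;

-- Against a single concrete child (placeholder name): a backward index
-- scan on that child's index can satisfy the ORDER BY directly.
EXPLAIN SELECT * FROM event_2008_11 ORDER BY "timestamp" DESC LIMIT 10;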
It's OK, I'm an idiot.
I have read so many times that you need to make the data types in the query the same as
the column types, or indexes won't be used. And then I went and got it wrong myself!
I had defined the parameter to the function as timestamp (equivalent to SQL Server
datetime), but the column data type is
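The column's actual type is cut off above, so here is a purely hypothetical illustration of the same principle: a numeric column compared against a double precision parameter gets cast to double precision on the column side, which keeps a plain index on the column from being used; declaring the parameter with the column's own type avoids the cast. The function, table, and column names are all invented:

-- Hypothetical: rate is a numeric column with an index, but the parameter
-- is declared double precision, so the column is cast to double precision
-- in the comparison and its index cannot be used.
CREATE FUNCTION jobs_at_rate(double precision) RETURNS bigint AS $$
    SELECT count(*) FROM timesheet_detail WHERE rate = $1;
$$ LANGUAGE sql STABLE;

-- Fix: declare the parameter with the column's own type, so the column
-- side stays un-cast and the index can be used.
DROP FUNCTION jobs_at_rate(double precision);
CREATE FUNCTION jobs_at_rate(numeric) RETURNS bigint AS $$
    SELECT count(*) FROM timesheet_detail WHERE rate = $1;
$$ LANGUAGE sql STABLE;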
Thanks for the reply.
I will move this onto the performance group, except that for some odd reason I can't
see a performance group on Usenet. I will subscribe to the mailing list.
I will gladly upload the schema (script) or anything else that anyone wants, but it
may not be appropriate for a n
I have a very similar problem. I put the following SQL into a
function:
SELECT VS.*, VL.TEL1, SC.CONTRACT_ID, SC.CONTRACT_REF, SC.MAX_HOURS,
       SC.MIN_HOURS,
       (SELECT COUNT(*)
        FROM TIMESHEET_DETAIL
        JOIN MAIN_ORDER ON (MAIN_ORDER.ORDER_ID = TIMESHEET_DETAIL.ORDER_ID
                            AND MAIN_ORDER.CLIENT_ID = $3)
        WHERE
> It seems that by creating a view (with the query I mentioned before), my
> queries were sped up by roughly 10 seconds... odd odd odd.
>
Not really so odd when you think of everything the backend has to do to
process a query. Parsing, planning, and optimizing (not necessarily in
that order) ca
On Fri, 18 Sep 1998, Thomas Good wrote:
> On Fri, 18 Sep 1998, David Hartwig wrote:
>
> > I would like to see it! I was not aware that table aliasing could
> > have any impact on performance.
>
> [SNIP]
> I was a bit amazed myself. Federico Passaro, on the SQL list,
> helped me out some ti
On Fri, 18 Sep 1998, David Hartwig wrote:
> I would like to see it! I was not aware that table aliasing could
> have any impact on performance.
Hello Oliver and David,
I was a bit amazed myself. Federico Passaro, on the SQL list,
helped me out some time ago when a query was failing.
His co