"Wetzel, Juergen (Juergen)" writes:
> Tom Lane writes:
>> You might get some traction by creating indexes on lower(searchfield1) etc.
> I will try that. Does that mean the column statistics will only be collected
> when there's an index on the table/column?
No; ANALYZE collects stats on plain columns regardless of whether they are
indexed. But statistics on an expression such as lower(searchfield1) are only
gathered once there is an expression index on it.
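For the archives, a minimal sketch of that suggestion (the table name
"documents" is an assumption; the column name comes from the thread):
create index on documents (lower(searchfield1));
analyze documents;  -- pg_stats now gets an entry for the lower() expression,
                    -- keyed by the index name
With the index in place, ANALYZE gathers statistics for the indexed expression
itself, which is what the planner needs to estimate the lower(...) LIKE
conditions.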
Andreas Kretschmer writes:
> please consider my plan B) and increase the stats. See my other mail.
I tried that too, combined with the partial index, but the result is still the same.
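For anyone following along, "increase the stats" (plan B) means raising the
per-column statistics target and re-analyzing; a sketch with the same assumed
names as above:
alter table documents alter column searchfield1 set statistics 1000;
analyze documents;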
Bill Moran writes:
> LIKE queries are probably challenging to plan, especially when they're not
> left-anchored: how can the planner be reasonably expected to estimate how
> many rows will be matched by a given LIKE expression?
Yeah, especially without any statistics. The core problem here appears to be
that, with no expression index, the planner has no statistics for
lower(searchfield1), so its selectivity estimates for those LIKE conditions
are little more than guesses.
On 31.05.2017 at 13:27, Wetzel, Juergen (Juergen) wrote:
>> Only 130 rows out of the 3 have ARCHIVED = 0
> in this case i would suggest a partial index:
> create index on (archived) where archived = 0;
Thanks, Andreas.
Sorry for the confusion about the table names.
The hint with the partial index sounds as if it could solve the problem. I will
test it.
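For the archives, the two hints can also be combined into one partial
expression index covering only the non-archived rows (table name assumed
again; the text_pattern_ops opclass only helps left-anchored LIKE patterns):
create index on documents (lower(searchfield1) text_pattern_ops)
    where archived = 0;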
On 30.05.2017 at 10:42, Wetzel, Juergen (Juergen) wrote:
I have a question concerning the query planner. I observe that the chosen
query plan differs with the length and content of a LIKE search expression.
We have a view combining data from two tables, both containing the same number
of rows (round about 3). The PostgreSQL version used is 9.3.15 on Windows.
DDL of
hi guys,
I ran into a query performance issue in PostgreSQL 9.6.2 with multiple tables
joined.
There were 2 slow queries, and the reason was the same for both: the optimizer
generated a bad plan using a nested loop.
Attached are the query and its explain output. All tables are small and the
indexes were only c
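A quick way to confirm the nested loop really is the culprit (a per-session
diagnostic, not a setting to leave on in production):
set enable_nestloop = off;
-- re-run the attached query here with: explain (analyze, buffers) ...
reset enable_nestloop;
If the query is fast with nested loops disabled, the row-count estimates
feeding the join choice are the thing to chase.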
> Do you have a different hardware configuration for the master and the standby?
Unfortunately, I do. Changing this is beyond my control at the moment.
Also, I made a mistake in my first email. The standby server has 32GB of
RAM. Here are the specs:
*PRIMARY SERVER*
CPU: Intel Xeon E5-1650 v2 @ 3.50GHz
RAM
On Fri, Jul 15, 2016 at 4:17 PM Kaixi Luo wrote:
> Hello,
>
> I have a primary PostgreSQL server with 64GB of RAM that is replicated
> using streaming replication to a hot standby server with 16GB of RAM.
>
Do you have a different hardware configuration for the master and the standby?
I am not sure if tha
Hello,
I have a primary PostgreSQL server with 64GB of RAM that is replicated
using streaming replication to a hot standby server with 16GB of RAM. My
problem is as follows: I've detected a query that takes a long time to run
on my primary server but runs very fast on the standby server. I did an
It works fine now: on my test server execution time went down from 6.4
seconds to 1.4 seconds, and on the production server it went down from 3.2
sec to 600 ms.
To optimize the query I changed the order of some joins (the joins that
were used to limit rows are at the beginning of the query).
I tried s
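Worth noting for the archives: the planner normally reorders explicit JOINs on
its own, so the written join order only becomes binding once the number of
joined items exceeds join_collapse_limit (default 8), or if you lower that
setting:
set join_collapse_limit = 1;  -- planner keeps joins exactly in the order written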
On May 09, Sterpu Victor modulated:
> I tested it now; EXPLAIN ANALYZE is not showing the LEFT JOIN at all if
> I don't select from the joined tables.
> Now it is clear why the query is so much more efficient when I select
> less data.
>
> Thank you
>
With so many joins, you may want to experiment with
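What Victor is seeing is the planner's join-removal optimization (present
since 9.0): a LEFT JOIN is dropped entirely when nothing from the inner table
is referenced and a unique constraint guarantees the join cannot duplicate
rows. A minimal sketch with hypothetical tables:
create table a (id int primary key);
create table b (a_id int unique);
explain select a.id from a left join b on b.a_id = a.id;
-- plan is just "Seq Scan on a": b.a_id is unique and no column of b is used,
-- so the left join cannot change the result and is removed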
ctor"
Cc: "Rob Imig" ; "PostgreSQL General"
; "David G. Johnston"
Sent: 9/5/2016 10:04:54 AM
Subject: Re: [GENERAL] Slow query when the select list is big
On 9 May 2016 at 18:46, David G. Johnston
wrote:
On Sunday, May 8, 2016, Sterpu Victor wrote:
Yes but it
.
-- Original Message --
From: "Sterpu Victor"
To: "Sterpu Victor" ; "David Rowley"
Cc: "Rob Imig" ; "PostgreSQL General"
; "David G. Johnston"
Sent: 9/5/2016 11:01:56 AM
Subject: Re[2]: [GENERAL] Slow query when the select li
2016-05-09 11:01 GMT+03:00 Sterpu Victor :
> I went to 2.4 seconds by joining first the tables that produce many rows.
As you're changing your query quite often, it'd be handy if you could post
both:
- the new query version
- its `EXPLAIN (analyze, buffers)` output
If you provide either one or an
(J1033359.validto IS NULL AND
DATE(J1031101.validfrom) <= DATE('2016-05-01')))
ORDER BY J1031101.validfrom DESC
LIMIT 20 OFFSET 0
On Sunday, May 8, 2016, Sterpu Victor wrote:
> Yes but it is very big.
> I don't understand why the select list is influencing the CPU usage.
> I was expecting that only the join and where clauses would influence CPU.
>
>
PostgreSQL is smart enough to optimize away stuff that it knows doesn't
impact the query's result.
On 5/8/2016 11:09 PM, Sterpu Victor wrote:
> Yes but it is very big.
> I don't understand why the select list is influencing the CPU usage.
> I was expecting that only the join and where clauses would influence CPU.
What was the query that generated that really complicated execution
plan? It sure l
Can you share the full query and the output of EXPLAIN? Not much data here
yet.
On Mon, May 9, 2016 at 6:58 AM Sterpu Victor wrote:
> I have a big query that takes about 7 seconds to run (time spent sending
> the data to the client is not counted).
> Postgres uses 100% of 1 CPU when solving this query. I t
I have a big query that takes about 7 seconds to run (time spent sending the
data to the client is not counted).
Postgres uses 100% of 1 CPU when solving this query. I tried to run the
query on an HDD and on an SSD with no difference; the HDD shows about 10%
usage while the query runs.
The query has a big "
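One way to see where that CPU goes: EXPLAIN VERBOSE prints each plan node's
output list, so a huge select list shows up as per-row expression evaluation
at the top of the plan. A toy illustration with a hypothetical table:
create table t1 (id int primary key, payload text);
explain (verbose) select id, upper(payload), length(payload) from t1;
-- the "Output:" line on the scan node lists every expression computed per row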
Hi all...
I'm working on a slow query. It's faster now (it was 20 sec before) but
still not good.
Can you have a look and see if you can find something?
Cheers
Query:
WITH jobs AS (
    SELECT
        job.id,
        job.clientid,
        CONCAT(customer.company, ' ', customer.name_first, ' ',
        c
UPDATED:
Index created:
create index concurrently inode_segments_st_ino_target_pidx
    on gorfs.inode_segments (st_ino desc, st_ino_target desc)
    where nfs_migration_date is null;
NEW EXPLAIN ANALYZE: http://explain.depesz.com/s/Swu
I am also able to create a temporary table to store migrations, which
I also am able to create a temporary table to store migrations, which
Thanks Vitaly for all your help. I'll have a very deep look on the links you
have provided. In the meantime, I'll also post here what I need.. IF you could
help one more time, would be very very nice.
Thank you again.
This can either be nfs_file_path or nfs_migration_date (both new columns).
Ad
>> Still getting a slow one..
>> Any thoughts?
-> Index Scan using "pk_inode_segments" on "inode_segments" "mv"
   (cost=0.00..19.94 rows=1 width=16) (never executed)
   Index Cond: ((("st_ino")::bigint =
   ("fi"
id" =
> "t"."media_subtype_id";
> <>
It seems alias for "t" is not "gorfs"."inode_segments" (it is "p"),
but "gorfs"."inodes" (in the second "LEFT JOIN" clause).
So, the correct DDL is:
CREATE
ta_length", "t"."st_atime" AS "last_accessed",
"t"."st_mtime" AS "last_modified", "t"."st_ctime" AS "last_changed",
"t"."checksum_md5", ("mst"."media_type"
On 1/10/16, Saulo Merlo wrote:
> CREATE INDEX CONCURRENTLY index_name ON gorfs.inode_segments(st_ctime);
> ERROR: column "st_ctime" does not exist
> Look at the error I got
>
> Lucas
> http://www.postgresql.org/docs/9.2/static/sql-createindex.html
> I've rewritten the query as well. Thank you for that!
>
> Thank you
> Lucas
On 1/10/16, Vitaly Burovoy wrote:
> On 1/10/16, Saulo Merlo wrote:
>> I've got a slow query.. I'd like to make it faster.. Maybe add an index?
>> Query:
>> SELECT
>> <>
>> FROM gorfs.nodes AS f
>> <>
>> WHERE f.file_data IS NOT NULL
>> AND ((transaction_timestamp() AT TIME ZONE 'UTC') > (f.l
I've got a slow query.. I'd like to make it faster.. Maybe add an index?
Query:
SELECT j.clientid AS client_id,
       ni.segment_index AS note_id,
       f.inode_id AS file_id,
       f.node_full_path AS filename,
       f.last_changed AS date_created,
       f.file_data AS main_binary,
       medi
Update:
My query SELECT * FROM v_actor JOIN f_intervenant_ref ON (actor_id =
ir_actor_id) WHERE ir_dos_id = '5226' took 7 secs.
If I substitute the _RETURN rule for the view and add the above join, it takes
31 ms.
Mark Watson
Tomas Vondra writes:
> On 16.3.2015 19:50, Marc Watson wrote:
>> I hope someone can help me with a problem I'm having when joining a
>> view with a table. The view is somewhat involved, but I can provide the
>> details if necessary
> First, get rid of the ORDER BY clauses in the subselects - it's
Hello all,
I am using PostgreSQL 9.4.1, compiled by Visual C++ build 1800, 64-bit, as
downloaded from EnterpriseDB, and is running on my dev system under Win 7
64-bit.
I hope someone can help me with a problem I'm having when joining a view with a
table. The view is somewhat involved, but I can
This went outside the purview of the mailing list.
I wanted to get some input regarding the odd behaviour of the query
planner.
Mostly out of curiosity.
This (http://explain.depesz.com/s/vj4) query plan has actual time = 17217
vs.
this one (http://explain.depesz.com/s/ojX) which has actual time =
Hi,
I have a typical setup consisting of two tables (demography and ses) with a
typical filter-join-group-by-order-by query.
Schemas:
demography (pid int, countyid int)
ses (pid int, exposed_time int)
query:
select countyid, count(pid)
from demography, ses
where demography.pid = ses.pid
and expose
Hi,
I am using postgresql 8.1 (CentOS5). I have the following table:
system # \d history
          Table "public.history"
 Column | Type | Modifiers
--------+------+-----------
Hi, I've tried to post on stackoverflow, but nobody is apparently able to help
me.
I'm not going to repeat everything here, there's quite some code in there that
is nicely formatted, but if this is a problem I can repost it in here.
http://stackoverflow.com/questions/11865504/postgresql-slow-que
Hi Alban and others -
On Fri, Oct 14, 2011 at 1:34 PM, Alban Hertroys wrote:
> Anyway, I think you get the sequential scans because the UNION requires
> sorting all the data from both tables to guarantee that the results are
> unique (hence that long Sort Key at the 7th line of the explain output).
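For the archives: if duplicates between the two tables are impossible or
acceptable, UNION ALL skips that sort/unique step entirely. A sketch using
column names from the thread (the second table name is an assumption):
select qdatetime, id, name from quincynoreset
union all
select qdatetime, id, name from quincyreset;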
On 14 Oct 2011, at 11:14, Alexander Farber wrote:
> I've added 3 new indices on both tables:
>
>
> quincy=> \d quincynoreset
>          Table "public.quincynoreset"
>  Column | Type | Modifiers
> --------+------+-----------
>  apps
Thank you -
On Fri, Oct 14, 2011 at 11:30 AM, Pavel Stehule wrote:
> you should use a DECLARE statement
> http://www.postgresql.org/docs/9.1/interactive/sql-declare.html
> and a FETCH statement
> http://www.postgresql.org/docs/9.1/interactive/sql-fetch.html
I've managed to create a cursor
and c
Hello,
you should use a DECLARE statement
http://www.postgresql.org/docs/9.1/interactive/sql-declare.html
and a FETCH statement
http://www.postgresql.org/docs/9.1/interactive/sql-fetch.html
Regards
Pavel Stehule
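A sketch of what that looks like in plain SQL, using the columns from
Alexander's query (outside PL/pgSQL a cursor has to live inside a
transaction):
begin;
declare ref cursor for
    select to_char(qdatetime, 'YYYY-MM-DD') as qdatetime,
           id, name, category, appsversion, osversion, details, devinfo
    from quincyview
    where qdatetime <= now()
    order by qdatetime desc;
fetch 20 from ref;
close ref;
commit;
OPEN, by contrast, is PL/pgSQL-only syntax, which is why the plain-SQL attempt
below fails.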
I've also tried opening a cursor:
quincy=> open ref for select to_char(qdatetime, 'YYYY-MM-DD') as
QDATETIME,ID,NAME,CATEGORY,APPSVERSION,OSVERSION,DETAILS,DEVINFO from
quincyview where qdatetime <= now() order by QDATETIME desc ;
ERROR: syntax error at or near "open"
LINE 1: open ref for select to
On Oct 13, 2011, at 9:41, Alexander Farber wrote:
>
> Does anybody please have an idea
> how to speed up my select statements?
>
Create one or more indexes.
David J.
Hello,
I use PostgreSQL 8.4.7 on CentOS 6.0 / 64 bit machine
with Quad-Core AMD Opteron(tm) Processor 2352 and
16 GB RAM and use it for 1 PHP script - which selects
and displays data in jQuery DataTables (i.e. an
HTML-table which can be viewed page by page).
I select records from 1 view which uni
On Jul 16, 2011, at 4:14 PM, - - wrote:
> I would like to count rows in q whose mid does not exist in t.
I would write such a query like this:
SELECT COUNT(*)
FROM q
LEFT OUTER JOIN t
ON (t.mid = q.mid)
WHERE t.mid IS NULL;
And I would make sure there was an index on t.mid. (
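An equivalent formulation, for comparison (both are planned as an anti-join on
8.4 and later):
SELECT COUNT(*)
FROM q
WHERE NOT EXISTS (SELECT 1 FROM t WHERE t.mid = q.mid);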
2011/7/16 - -
>
> The weird thing is that before I updated my server the query was about 5
> times faster.
>
Updated it from what to what, and how?
--
Mike Nolan
no...@tssi.com
- - writes:
> The weird thing is that before I updated my server the query was about 5
> times faster.
> I've googled, and I think the problem lies with the query planner's
> under-estimation of the number of rows in the nested table. I will be trying
> the 'set enable_seqscan = false' so
The following query seems to take ages despite EXPLAIN stating that an
index is used. Also, the condition (WHERE t.mid = q.mid) should be a
one-to-one mapping, should it not? In this case the mapping is to 3641527 rows.
Table q has no indexes and is not referenced by other tables. Table t has a
devan
John R Pierce wrote:
I know the database has a lot of write volume overall, and it's only
one of several databases running in different zones on the server. I
know nothing about the SAN; I suspect it's an EMC Symmetrix of some
sort, probably a generation or two behind the latest. The operations
p
Greg Smith wrote:
My guess is that there's something wrong with your config such that
writes followed by fsync are taking longer than they should. When I
see "sync=0.640 s" going to a SAN where that sync operation should be
near instant, I'd be looking for issues in the ZFS intent log setup, how
John R Pierce wrote:
we're having a similar problem with a very update-intensive database
that is part of a 24/7 manufacturing operation (no breathing time
unless there's an unusual line-down situation)
Your problem is actually a bit different.
2010-01-23 01:08:13 MYT LOG: checkpoint complet
Greg Smith wrote:
2010-01-22 12:21:48 JST LOG: checkpoint complete: wrote 83874 buffers
(16.0%); 0 transaction log file(s) added, 0 removed, 32 recycled;
write=138.040 s, sync=0.000 s, total=138.063 s
2010-01-22 12:23:32 JST LOG: checkpoint complete: wrote 82856 buffers
(15.8%); 0 transaction l
Alex - wrote:
checkpoint_segments = 32        # in logfile segments, min 1, 16MB each
checkpoint_timeout = 30min      # range 30s-1h
These parameters are not so interesting on their own. The important
thing to check is how often checkpoints are happening, and how much work
each of them involves.
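One way to check that without digging through logs (the view exists since
8.3): if checkpoints_req grows much faster than checkpoints_timed, checkpoints
are being forced by WAL consumption rather than by the timeout:
select checkpoints_timed, checkpoints_req from pg_stat_bgwriter;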
Oh yeah, what's your swappiness setting (assuming you're running some
flavor of Linux)?
sysctl -a | grep swapp
should tell you. I set it to something small, like 5 or so, on db
servers. The default of 60 is fine for an interactive desktop but is
usually too high for a server.
Hi,
I am experiencing slow queries when I run some functions. I noticed the
following entries in my server log.
From this, can anyone tell me if I need to change some config parameters?
System has 18GB memory.
shared_buffers = 4GB        # min 128kB
temp_buffers = 32MB         #
But if Access manages okay
with the depivoted table, it might be worth a try. Based on 500 fields, 250M
records, 2% filled, it looks like it might depivot your table overnight, or
better. You'd finish with about 2.5 billion rows.
Best wishes,
Nick
Sorry for the lack of detail. There is an index on both state and age; not a
clustered index on both, as the queries are
Columns are as follows:
account | integer |
city | character varying(20) |
zip | character(5) |
dincome
An example of a slow query is...
select count(*) from bigdatatable where age between 22 and 40 and state =
'NY';
explain analyze returned the following...
Aggregate (cost=5179639.55..5179639.56 rows=1 width=0) (actual
time=389529.895..389529.897 rows=1 loops=1)
-> Bitmap Heap Scan on bigda
On Wed, Oct 29, 2008 at 2:18 PM, Kevin Galligan <[EMAIL PROTECTED]> wrote:
> I'm approaching the end of my rope here. I have a large database.
> 250 million rows (ish). Each row has potentially about 500 pieces of
> data, although most of the columns are sparsely populated.
A couple of notes here
I'm approaching the end of my rope here. I have a large database.
250 million rows (ish). Each row has potentially about 500 pieces of
data, although most of the columns are sparsely populated.
What I'm trying to do is, essentially, search for sub-sets of that
data based on arbitrary queries of
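For the example query quoted earlier (count by state plus an age range), the
textbook starting point would be a composite index with the equality column
first (a sketch against the posted column names, not tested on this schema):
create index idx_state_age on bigdatatable (state, age);
-- equality column first, range column second, so one index range scan
-- covers exactly the matching rows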
I just ANALYZEd the database and times are back to normal. Sorry for the noise.
Regards, Clodoaldo Pinto Neto
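For anyone hitting the same symptom: it's worth checking whether (auto)analyze
has been keeping up before reaching for anything more exotic:
select relname, last_analyze, last_autoanalyze
from pg_stat_user_tables;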
Postgresql 8.2.6, Fedora 8, 2 GB memory.
A query that used to perform in a few seconds is now taking 64 seconds
with 100% cpu:
fahstats=> explain analyze
fahstats-> select
fahstats-> donor::smallInt as new_members,
fahstats-> active_members,
fahstats-> d.data::date as day,
fahstats-> isod