On Wed, Jun 10, 2015 at 2:08 PM, Merlin Moncure wrote:
> On Wed, Jun 10, 2015 at 3:55 PM, Patrick Krecker wrote:
>> OK. Well, fortunately for us, we have a lot of possible solutions to this
>> problem, and it sounds like actually getting statistics for attributes
>> ? 'reference' is not realistic. I just wanted to make sure it wasn't
>> some configuration error on our part.
Can anyone explain where exactly the estimate for
Hi everyone --
I had an issue the other day where a relatively simple query went from
taking about 1 minute to execute to taking 19 hours. It seems that the
planner chooses to use a materialize sometimes [1] and not other times
[2]. I think the issue is that the row count estimate for the result
o
On Wed, Dec 10, 2014 at 2:44 AM, Maila Fatticcioni
wrote:
> Hello.
> I need to tune a postgres installation I've just made to get better
> performance. I use two identical servers with a hot replication
> configuration. The two servers have the
It appears to me that postgres thinks the index
scan is much more expensive than it really is. However, given the
accurate statistics, I can't see how.
BTW I tried playing with random_page_cost. If I lower it to 2.0 then
it chooses the fast plan. At 3.0 it chooses the slow plan.
Thanks!
Patrick
effective_cache_size = 22303MB
I have been struggling to make these types of queries fast because they are
very common (basically fetching all of the metadata for a document, and we
have a lot of metadata and a lot of documents). Any help is appreciated!
Thanks,
Patrick
Linux or ZFS without sync.
Best regards,
Patrick
--- On Tue, 1/8/13, k...@rice.edu wrote:
From: k...@rice.edu
Subject: Re: [PERFORM] Sub optimal performance with default setting of
Postgresql with FreeBSD 9.1 on ZFS
To: "Patrick Dung"
Cc: pgsql-performance@postgresql.org
Date: Tuesday,
The web site is responsive and the benchmark result is more or less the same as
FreeBSD with the 'sync' turned off.
3)
For FreeBSD, same settings with PostgreSQL on UFS:
The performance is between ZFS (default, sync enabled) and ZFS (sync disabled).
Thanks,
Patrick
--- On Mon, 1/7/13,
narrow it down. Running
the query from my webserver yielded much better times, but from a quick look it
seems my 8.4 server is still a bit slower. I will share more details as I dig
into it more tomorrow or Monday.
-Patrick
- Original Message -
From: "Merlin Moncure"
To:
(cost=0.00..1141.68 rows=29668 width=53) (actual time=0.117..20.890 rows=29668 loops=1)
  ->  Hash  (cost=2.60..2.60 rows=60 width=21) (actual time=0.112..0.112 rows=60 loops=1)
        ->  Seq Scan on test  (cost=0.00..2.60 rows=60 width=21) (actual time=0.035..0.069 rows=
rows retrieved.
Anyone have any ideas on where I should start looking to figure this out? I
didn't perform any special steps when moving to v8.4, I just did a pg_dump from
the 8.3 server and restored it on the new 8.4 servers. Maybe that is where I
made a mistake.
Thanks!
Patrick
, lack of readahead,
TLB misses, etc
cpu_tuple_cost = 1.0
cpu_index_tuple_cost = 0.5
cpu_operator_cost = 0.25
effective_cache_size = 1000MB
shared_buffers = 1000MB
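A hedged sketch of trying those settings in a single session before editing
postgresql.conf (the query and table are hypothetical; shared_buffers cannot
be changed this way and needs a server restart):

BEGIN;
SET LOCAL cpu_tuple_cost = 1.0;
SET LOCAL cpu_index_tuple_cost = 0.5;
SET LOCAL cpu_operator_cost = 0.25;
SET LOCAL effective_cache_size = '1000MB';
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
ROLLBACK;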
-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Wednesday, March 24, 2010 5:47 PM
To: Eger, Patrick
least
=)
Best regards, Patrick
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: Monday, March 22, 2010 12:22 PM
To: Christian Brink
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgre
I'm running 8.4.2 and have noticed a similar heavy preference for
sequential scans and hash joins over index scans and nested loops. Our
database can basically fit in cache 100%, so this may not be
applicable to your situation, but the following params seemed to help
us:
seq_page_cost = 1.0
ran
-performance/2008-03/msg00371.php
In my case, it was by a factor of 2.
Of course, I can't turn off nested loop in my database,
it will impact performance on small tables too much...
So there is no easy fix for that, it seems,
besides playing with the per-column statistics-gathering target, maybe?
Pa
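For reference, a minimal sketch of the per-column statistics-target knob
mentioned above (table and column names are hypothetical):

ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;   -- re-gather statistics at the higher target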
order in which it is joined, even when it is scanned with an index?
I'm pretty sure it is because of the reduced table sizes,
since the server configuration is the same.
Thoughts?
Thanks,
Patrick
restart Pg. Once restarted we were able to do a VACUUM FULL and this
took care of the issue.
hth
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
Matteo Sgalaberni
yed product.
Thanks again for your input,
--patrick
On 4/18/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> "patrick keshishian" <[EMAIL PROTECTED]> writes:
> > I've been struggling with some performance issues with certain
> > SQL queries. I was prepping a l
..45973.09 rows=4037 width=0)
   Index Cond: (offer_id = 7141)
   Filter: (oid > 1::oid)
(4 rows)
Time: 27.301 ms
db=# select count(*) from pk_c2 b0 where b0.offer_id=7141 and oid > 1;
count
---
1
(1 row)
Time: 1.900 ms
What gives?
This seems just too hokey for my taste.
--p
On 4/13/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> On Thu, Apr 13, 2006 at 06:26:00PM -0700, patrick keshishian wrote:
> > $ dropdb dbname
> > $ createdb dbname
> > $ pg_restore -vsOd dbname dbname.DUMP
>
> That step is pointless, because the next pg_restore
On 4/13/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> "patrick keshishian" <[EMAIL PROTECTED]> writes:
> > With these settings and running:
> > pg_restore -vaOd dbname dbname.DUMP
>
> If you had mentioned you were using random nondefault switches, we'd
That's just over 14 minutes!
Ideas?
Is this because the -c option drops all foreign keys and
so the restore goes faster? Should this be the preferred,
recommended and documented method to run pg_restore?
Any drawbacks to this method?
Thanks,
--patrick
On 4/12/06, Tom Lane <[EMAIL PRO
(within reason) :)
Best regards,
--patrick
[EMAIL PROTECTED]:/tmp$ date
Mon Apr 10 15:13:19 PDT 2006
[EMAIL PROTECTED]:/tmp$ pg_restore -ad dbname customer_db.DUMP ; date
^C
[EMAIL PROTECTED]:/tmp$ date
Wed Apr 12 10:40:19 PDT 2006
[EMAIL PROTECTED]:/tmp$ uname -a
Linux devbox 2.4.31 #6 Sun Jun 5 19:04
We have size and color in the product table itself. It is really an
attribute of the product. If you update the availability of the product
often, I would split out the quantity into a separate table so that you can
truncate and update as needed.
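A rough sketch of that split, with hypothetical names:

CREATE TABLE product (
    product_id serial PRIMARY KEY,
    name  text NOT NULL,
    size  text,
    color text
);
CREATE TABLE product_quantity (
    product_id integer REFERENCES product (product_id),
    quantity   integer NOT NULL
);
-- Bulk-refresh availability without touching the product rows:
TRUNCATE product_quantity;
-- ...then COPY or INSERT the fresh quantities.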
Patrick Hatcher
Development Manager Analytics
and monthly table partitioning.
So, my question in short: is there any plan to at least support global unique check constraints (or at least a global unique index), and is there a thread or documentation somewhere about the planned future changes to table partitioning?
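For context, a hedged sketch of inheritance-style monthly partitioning as it
stands (names hypothetical); the unique index below is per-partition only,
which is exactly the gap being asked about:

CREATE TABLE events (id bigint, created date);
CREATE TABLE events_2006_01 (
    CHECK (created >= '2006-01-01' AND created < '2006-02-01')
) INHERITS (events);
CREATE UNIQUE INDEX events_2006_01_id ON events_2006_01 (id);
-- There is no global unique index spanning all partitions.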
Thanks
Patrick Carriere
09 rows=64 width=8) (actual time=93.710..362.802 rows=63 loops=1)
Index Cond: ((date_dim_id >=
'2005-10-30'::date) AND (date_dim_id <= '2005-12-31'::date))
Total runt
sults. The ending resultset is around 169K rows which,
if I'm reading the analyze output, is more than double. Any suggestions?
TIA
-patrick
SELECT gmmid, gmmname, divid, divname, feddept, fedvend, itemnumber AS mstyle,
       amc_week_id,
       sum(tran_itm_total) AS net_dollars
  FROM public.tbldetaillevel
Thanks. No foreign keys, and I've been bitten by mismatched datatypes
before, so I checked that before sending out the message :)
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
Tom
float8
)
WITHOUT OIDS;
Patrick Hatcher
Development Manager Analytics/MIO
Macys.com
Index Cond: (("outer".appl_xref)::text = (h.appl_xref)::text)
Filter: ((appl_id)::text = 'MCOM'::text)
->  Index Scan using mdcupcidx on mdc_upc u  (cost=0.00..4.01 rows=1 width=12)
At the time this was the only process running on the box, so I set
sort_mem = 228000. It's a 12G box.
Tom Lane wrote:
Patrick Hatcher <[EMAIL PROTECTED]> writes:
Hash Join (cost=1246688.42..4127248.31 rows=12702676 width=200)
Hash Cond: ("outer".cus_num = "inne
t8,
    CONSTRAINT ddwcus_pk PRIMARY KEY (cus_nbr)
) WITH OIDS;

CREATE INDEX cdm_ddwcust_id_idx
    ON cdm.cdm_ddw_customer
    USING btree (cus_nbr);

CREATE TABLE cdm.bcp_ddw_ck_cus (
    cus_num    int8,
    indiv_fkey int8 NOT NULL
) WITHOUT OIDS;
Tom Lane wrote:
Patrick Hatcher <[EMAIL PROTECTED]> wr
cus_nbr)
->  Seq Scan on bcp_ddw_ck_cus b  (cost=0.00..195690.76 rows=12702676 width=16)
->  Hash  (cost=874854.34..874854.34 rows=12880834 width=192)
      ->  Seq Scan on cdm_ddw_customer  (cost=0.00..874854.34 rows=12880834 width=192)
John A Meinel wrote:
Patrick Hatcher
Would it just be quicker to run a JOIN statement to a temp file and then reinsert?
TIA
Patrick
I'd add the column "city_id", since two different
streets in two different cities may have the same name.
Kind regards
Patrick
#cpu_tuple_cost = 0.01 # (same)
#cpu_index_tuple_cost = 0.001 # (same)
#cpu_operator_cost = 0.0025 # (same)
Patrick Hatcher
care about security but only about speed and capacity (maybe
this switch was not set properly at the time...).
Thank you for these interesting links: I've already sent
them to my system engineer!
Kind regards
Patrick
- Original Message -
From:
Gustavo
F
for that. As Simon and Gustavo suggested,
I will check my SCSI disks first.
Thanks a lot for your advice!
Kind regards,
Patrick
data is always in RAM,
rather than maybe in RAM. I am not a Linux expert: is it possible
(and how) to create a RAM disk?
Thanks a lot for your help!
Patrick
compared to the types you mentioned (reiserfs
vs ext3)? I don't use RAID since data security is not a
concern.
Thanks a lot for your help!
Patrick
Hi Patrick,
How is your disk array configured? Do you have a Perc 4?
Tip: use reiserfs instead of ext3, RAID 0+1, and the deadline
I/O scheduler in Linux kernel 2.6.
supports
only 'copy aggregate from' and 'select'.
- is it possible to define a sort of RAM filesystem (as it exists
in DOS/Windows) into which I could create and populate my databases?
...since the databases do not receive updates in this
application.
Sorry for my
Dear,
We have been using PostgreSQL for 4 years now; one can say it is a blessing
to maintain. Our previous database was number one (;-), but it was much
harder to maintain, so labor is a pro for PostgreSQL ...
Kind Regards
Patrick Meylemans
IT Manager
WTCM-CRIF
Celestijnenlaan 300C
3001 Heverlee
At 11
I do mass inserts daily into PG. I drop all the indexes except my primary
key and then use the COPY FROM command. This usually takes less than 30
seconds. I spend more time waiting for indexes to recreate.
Patrick Hatcher
Macys.Com
[EMAIL PROTECTED] wrote:
To: [EMAIL PROTECTED]
From: Christoph
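A hedged sketch of that drop-indexes-then-COPY pattern (table, file, and
index names are hypothetical):

DROP INDEX IF EXISTS sales_upc_idx;          -- keep the primary key
COPY sales FROM '/data/sales_today.csv' WITH (FORMAT csv);
CREATE INDEX sales_upc_idx ON sales (upc);   -- rebuild after the load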
they become
obvious. This is a common failure pattern with caches.
Patrick B. Kelly
--
http://patrickbkelly.org
index scans (as it is even more likely
> that you will not get all the data.)
I may try this as well as trying a suggestion by Pierre-Frédéric
Caillaud to use EXISTS, though my initial attempt to use it didn't
seem to be any faster than my original query.
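A minimal sketch of the EXISTS rewrite being referred to, borrowing the
pkk_purchase/offer_id names that appear elsewhere in this thread:

SELECT EXISTS (
    SELECT 1 FROM pkk_purchase WHERE offer_id = 795
);   -- can stop at the first matching row instead of counting them all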
Hi John,
Thanks for your reply and analysis.
--- John Meinel <[EMAIL PROTECTED]> wrote:
> patrick ~ wrote:
> [...]
> > pkk=# explain analyze execute pkk_01(241 );
>
Sorry for the late reply. Was feeling a bit under the weather
this weekend and didn't get a chance to look at this.
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> patrick ~ <[EMAIL PROTECTED]> writes:
> > PREPARE pkk_00 ( integer ) )
>
> This is what you want to do
5:
pkk=# select offer_id, count(*) from pkk_purchase where offer_id in ( 795, 2312
) group by offer_id ;
 offer_id | count
----------+-------
      795 |     4
     2312 |  1015
(2 rows)
Time: 21.118 ms
--patrick
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> patrick ~ <[EMAI
My questions are:
1. Is this really the only solution left for me?
2. Am I in any way screwing up the db by doing this?
Best regards,
--patrick
Just wanted to know if there were any insights after looking at the
requested 'explain analyze select ...' output?
Thanks,
--patrick
will
attempt to play around with it.
I was asked (in a prior post) whether running the statement a second
time after the VACUUM improves performance. It does not. After
the VACUUM the statement remains slow.
Thanks for your help,
--patrick
--- Tom Lane <[EMAIL PROTECTED]> wrote:
>
isn't being stressed by any other processes.
Thanks for reading,
--patrick
at database. Should I
add ALL these individual pages together, pad the total, and use this
as my new max_fsm_pages? Should I do the same thing with max_fsm_relations?
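A hedged illustration of the arithmetic being asked about, with invented
numbers: if the per-database VACUUM VERBOSE page counts summed to roughly
450000, one might pad and set:

max_fsm_pages = 500000        # padded sum of pages reported by VACUUM VERBOSE
max_fsm_relations = 1000      # total tables+indexes, padded the same way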
TIA
Patrick
RESTRICT
> > "people_attributes_income_fkey" FOREIGN KEY (income) REFERENCES
> > attribute_values(value_id) ON DELETE RESTRICT
> > "people_attributes_occupation_fkey" FOREIGN KEY (occupation)
> > REFERENCES attribute_values(value_id) ON DELETE RESTRICT
(smoking) REFERENCES attribute_values(value_id) ON DELETE RESTRICT
"people_attributes_want_children_fkey" FOREIGN KEY (want_children)
REFERENCES attribute_values(value_id) ON DELETE RESTRICT
Is it all the foreign keys that are stalling the drop? I have done VACUUM
ANALYZE on the
"code that writes
code" on a few occasions during the attribute "mapping" process. For
example, keeping an associative array of all the attributes without fetching
that data from the database each time. My advice: if you're not a masochist,
use a template engine (or simply p
Thanks for the help.
I found the culprit: the user had created a function within the function
(pm.pm_price_post_inc(prod.keyp_products)).
Once this was fixed, the time dropped dramatically.
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
Patrick
to complete. Is there something I should
be looking for in my conf settings?
TIA
Patrick
SQL:
-- Bring back only selected records to run through the update process.
-- Without the function the SQL takes < 10 secs to return 90,000 records.
SELECT count(pm.pm_delta_function_amazon(upc.keyp_upc,
I upgraded to 7.4.3 this morning and
did a VACUUM FULL ANALYZE on the problem table, and now the indexes show
the correct number of records.
Patrick Hatcher
Macys.Com
Josh Berkus <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
09/21/04 10:49 AM
To
"Patrick Hatcher"
Nope. It's been running like a champ for a while now.
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
Josh B
"Robert Treat" <[EMAIL PROTECTED]>
To: "Patrick Hatcher" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, September 20, 2004 11:12 PM
Subject: Re: [PERFORM] vacuum full & max_fsm_pages question
> On Tuesday 21 September 2004 00:01, Patrick Hatcher wr
are currently reusable.
CPU 1.37s/0.35u sec elapsed 4.79 sec.
INFO: analyzing "cdm.cdm_email_data"
INFO: "cdm_email_data": 65869 pages, 3000 rows sampled, 392333 estimated total rows
# After vacuum full(s)
mdc_oz=# select count(*) from cdm.cdm_email_data;
  count
---------
 5433358
(1 row)
TIA
Patrick
I have currently implemented a schema for my "Dating Site" that is storing
user search preferences and user attributes in an int[] array using the
contrib/intarray package (suggested by Greg Stark). But there are a few
problems.
a) query_int can't be cast to int4.
b) query_int ca
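For orientation, a hedged sketch of that kind of intarray setup
(hypothetical names; contrib/intarray must be installed):

CREATE TABLE people_attributes (
    person_id integer PRIMARY KEY,
    attrs     integer[]        -- attribute value ids
);
CREATE INDEX people_attributes_attrs_idx
    ON people_attributes USING gist (attrs gist__int_ops);
-- Find people having both attribute values 3 and 7:
SELECT person_id FROM people_attributes WHERE attrs @> ARRAY[3, 7];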
is the only one that used the index, but the only really acceptable
method here is Method 1.
My questions are...
- Is there any hope in getting this to use an efficient index?
- Any mathematicians know if there is a way to reorder my bitwise comparison to
have the operator use = and
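One standard workaround, hedged (not necessarily what the thread settled
on): index the bitwise expression itself, so the predicate the planner sees
is a plain equality (names hypothetical):

CREATE INDEX people_flag_idx ON people (((flags & 4) = 4));
-- A query using the identical expression can then use the index:
SELECT * FROM people WHERE (flags & 4) = 4;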
Patrick Hatcher <[EMAIL PROTECTED]>
w versions) is 4474020460 bytes.
544679 pages are or will become empty, including 0 at the end of the table.
692980 pages containing 4433398408 free bytes are potential move
destinations.
CPU 29.55s/4.13u sec elapsed 107.82 sec.
TIA
Patrick Hatcher
---
Thanks!
Patrick Hatcher
Andrew McMillan <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
06/21/04 03:11 AM
To: Patrick Hatcher <[EMAIL PROTECTED]>
cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Slow vacuum performance
On Fri, 2004-06-18 at 19:51 -0700, Patrick H
21 but wanted to make sure
shared_buffers = 2000        # min 16, at least max_connections*2, 8KB each
sort_mem = 12288             # min 64, size in KB
# - Free Space Map -
max_fsm_pages = 10           # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000    # min 100
in seconds
#checkpoint_warning = 30# 0 is off, in seconds
#commit_delay = 0 # range 0-10, in microseconds
#commit_siblings = 5# range 1-1000
TIA
Patrick Hatcher
Do you have an index on ts.bytes? Josh had suggested this, and after I put
it on my summed fields, I saw a speed increase. I can't remember which
article it was that Josh had written about index usage, but maybe he'll chime
in and supply the URL for his article.
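A minimal sketch of that suggestion (ts and bytes come from the question
above; the query itself is an invented example):

CREATE INDEX ts_bytes_idx ON ts (bytes);
EXPLAIN SELECT sum(bytes) FROM ts WHERE bytes > 1000000;   -- confirm the index is used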
hth
Patri
here's the URL:
http://techdocs.postgresql.org/techdocs/pgsqladventuresep2.php
Patrick Hatcher
Macys.Com
Legacy Integration Developer
415-422-1610 office
HatcherPT - AIM
Pa
, but the
larger the recordset, the slower the data is returned to the client. I played
around with the cache size on the driver and found that a value between 100
and 200 provided good results.
HTH
Patrick Ha
Thank you
Patrick Hatcher
"scott.ma
again
Patrick Hatcher
Josh Berkus
ce, 100.0% idle
CPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle
Mem:  6711564K av, 6517776K used, 193788K free, 0K shrd, 25168K buff
Swap: 2044056K av, 0K used, 2044056K free, 6257620K cached
Patrick Hatcher