I am not sure I agree with that evaluation.
I only have 2 dell database servers and they have been 100% reliable.
Maybe he is referring to support, which does tend to depend on who you get.
When I asked about performance on my new server they were very helpful, but I
did have a bad time with my NAS device.
I bought both the 1U 2-proc and the larger 4-proc, and both have been very good.
Joel Fradkin
Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel. 941-753-7111 ext 305
[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
Any chance it's a vacuum thing?
Or configuration (out of the box it needs adjusting)?
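When vacuuming is the suspect, a quick check along these lines can rule it out before touching planner settings; this is only a sketch, and tblassociate is borrowed from elsewhere in this thread as an example table name:

```sql
-- Rough sketch: reclaim dead tuples and refresh planner statistics.
VACUUM ANALYZE tblassociate;

-- VACUUM VERBOSE reports how many dead row versions were found, which
-- hints at whether routine vacuuming has been keeping up.
VACUUM VERBOSE tblassociate;
```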
Joel Fradkin
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Merlin Moncure
Sent: Thursday, September 01, 2005 2:11 PM
To: Matthew Sackman
Cc: pgsql-perfor
issues.
Joel Fradkin
© 2004 WAZAGUA, Inc. All rights reserved.
running better
than when I was on MSSQL, and MySQL was just slower on the tests I did.
I loaded both MySQL and Postgres on my 4-processor Dell running Red Hat
AS3 and on an OptiPlex running Windows XP.
Joel Fradkin
time-consuming.
Any ideas for how to approach getting the same data set in a
faster manner are greatly appreciated.
Joel Fradkin
ecompleted,
dateauditkeyed, datekeyingcomplete, section, createdby, division, auditscoredesc,
locationnum, text_response
Joel Fradkin
My only comment is: what is the layout of your data (just one table with
indexes?).
I found that with my data, with dozens of joins, my view speed was not good
enough for me to use, so I made a flat file with no joins and it flies.
Joel Fradkin
and audit applications) I created
denormalized files and maintain them through code. All reporting comes off
those and it is lightning fast.
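A minimal sketch of that denormalized-reporting-table approach might look like this; the column list is invented for illustration (viwassoclist and the column names are borrowed from elsewhere in the thread, not from Joel's actual schema):

```sql
-- Hypothetical sketch: build a flat reporting table from a many-join view
-- once, then let all reports hit the flat copy instead of the joins.
CREATE TABLE rpt_assoc_flat AS
SELECT clientnum, locationnum, division, auditscoredesc
FROM viwassoclist;   -- the expensive many-join view

-- Index the common filter column so reports stay fast.
CREATE INDEX ix_rpt_assoc_flat_clientnum ON rpt_assoc_flat (clientnum);

-- Refresh through application code (or a scheduled job) after the base
-- tables change, as Joel describes maintaining his files through code.
TRUNCATE rpt_assoc_flat;
INSERT INTO rpt_assoc_flat
SELECT clientnum, locationnum, division, auditscoredesc
FROM viwassoclist;
```

The trade-off is exactly the one Joel mentions: reads become a single indexed scan, but every write path now has to keep the flat copy in sync.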
I just want to say again thanks to everyone who has helped me in the past
few months.
Joel Fradkin
conclusion on where is best to get one (I really
want two, one for development too).
Joel Fradkin
shared_buffers = 24576
work_mem = 32768
max_fsm_pages = 10
max_fsm_relations = 1500
fsync = true
wal_sync_method = open_sync
wal_buffers = 2048
checkpoint_segments = 100
effective_cache_size = 524288
default_statistics_target = 250
Any help is appreciated.
Joel Fradkin
Just realize, you probably *don't* want to set that in postgresql.conf.
You just want to issue "SET enable_seqscan TO off" before issuing one
of the queries that is mis-planned.
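As a sketch, that per-session override (rather than a global postgresql.conf change) could look like this; the query itself is just an example shape borrowed from elsewhere in the thread:

```sql
-- Disable sequential scans for this session only, run the mis-planned
-- query, then restore the default so other queries are unaffected.
SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT * FROM tblassociate WHERE clientnum = 'SAKS';
RESET enable_seqscan;
```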
I believe all the tested queries (90-some-odd views) saw an improvement.
I will, however, take the time to verify this.
BTW, your performance troubleshooting will continue to be hampered if you
can't share actual queries and data structure. I strongly suggest that you
make a confidentiality contract with a support provider so that you can
give them detailed (rather than general) problem reports.
I am glad to h
Sorry, I am using Red Hat AS4 and Postgres 8.0.2.
Joel
You didn't tell us what OS you are using; Windows?
If you want good performance you must install Unix on that machine,
---
up functionality and have one
for reporting and one for inserts and updates; so I am not sure which machine
would be best for which spot (reminder: the more robust is a 4-proc with 8 gigs
and the 2-proc has 4 gigs, both Dells).
Thank you for any ideas in this arena.
Joel Fradkin
Tried changing the settings and saw no change in a test using ASP.
The test does several selects on views and tables.
It actually seemed to take a bit longer.
Joel Fradkin
tool last time I did stress testing).
Joel Fradkin
"Index Scan using ix_tblviwauditcube_clientnum on tblviwauditcube
(cost=0.00..35895.75 rows=303982 width=708) (actual time=0.145..1320.432
rows=316490 loops=1)"
" Index Cond: ((clientnum)::text = 'MSI'::text)"
"Total runtime: 1501.028 ms"
I would very, very strongly encourage you to run multi-user tests before
deciding on MySQL. MySQL is nowhere near as capable when it comes to
concurrent operations as PostgreSQL is. From what others have said, it
doesn't take many concurrent operations for it to just fall over. I
can't speak from experience.
Query tool, but it certainly seems encouraging.
I am going to visit Josh's tests he wanted me to run on the LINUX server.
Joel Fradkin
One further question is: is this really a meaningful test? I mean, in
production are you going to query 30 rows regularly?
It is a query snippet, if you will, as the view I posted for audit and case,
where tables are joined, is more likely to be run.
Josh and I worked over this until we got ex
tableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseServerSidePrepare=0"
Joel Fradkin
-----Original Message-----
From: Mohan, Ross [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 21, 2005 9:42 AM
To: [EMAIL PROTECTED]
Subject: RE: [PERFORM] Joel's Performance Iss
I suspect he's using pgadmin.
Yup, I was, but I did try running it on the Linux box in psql; it was
writing to the screen, though, and took forever because of that.
The real issue is that returning to my app using ODBC is very slow (I have not
tested the ODBC for MySQL; MSSQL is OK (the two-proc Dell is runnin
Why is MySQL returning 360,000 rows, while Postgres is only returning
330,000? This may not be important at all, though.
I also assume you are selecting from a plain table, not a view.
Yes, plain table. The difference in rows is that one of the datasets had Sears
data in it. The speed difference found is
like 89 on the second run.
The first run was 147 secs all told.
These are all on my 2 meg desktop running XP.
I can post the config. I noticed the postgres was using 70% of the cpu while
MSSQL was 100%.
Joel Fradkin
>I would have spent more $ with Command, but he does need my database to help
ly on the performance on the 4 proc box.
Joel Fradkin
-----Original Message-----
From: Josh Berkus [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 20, 2005 1:54 PM
To: Joel Fradkin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon
There have been some discussions on this list and others in general about
Dell's version of RAID cards, and server support, mainly linux support.
I was pretty impressed with the Dell guy. He spent the day with me remotely
and went through my 6650 system with PowerVault. He changed my drives from ext3
pgAdmin III uses libpq, not the ODBC driver.
Sorry, I am not too aware of all the semantics.
I guess the question is whether it is normal to take 2 mins to get 160K of
records, or is there something else I can do (I plan on limiting the query
screens using limit and offset; I realize this will only be e
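That screen-paging idea can be sketched roughly as follows; the page size and ORDER BY column are made up for illustration, while viwassoclist and clientnum come from the thread:

```sql
-- Fetch one screenful at a time instead of all 160K+ rows at once.
-- An ORDER BY is needed for LIMIT/OFFSET paging to be deterministic.
SELECT *
FROM viwassoclist
WHERE clientnum = 'SAKS'
ORDER BY locationnum
LIMIT 50 OFFSET 100;   -- third page, at 50 rows per page
```

One caveat: OFFSET still computes and discards the skipped rows on the server, so very deep pages get progressively slower.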
Sorry if this posts twice; I posted and did not see it hit the list.
What are the statistics
for tbljobtitle.id and tbljobtitle.clientnum
I added default_statistics_target = 250 to the config and reloaded the
database. If that is what you mean?
--- how many distinct values of each,
tbljobtitle.id 6764 for all clients 1018 for SAKS
tbljobtitle.clientnum 237 di
Another odd thing: when I tried turning off merge joins on the XP desktop,
it took 32 secs to run compared to the 6 secs it was taking.
On the Linux (4-proc) box it is now running in 3 secs with the merge joins
turned off.
Unfortunately it takes over 2 minutes to actually return the 160,000+ rows.
Joel Fradkin
Turning off merge joins seems to have done it, but what do I need to do so I am
not telling the system explicitly not to use them? I must be missing some
setting.
On linux box.
explain analyze select * from viwassoclist where clientnum ='SAKS'
"Hash Join (cost=98
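One way to avoid hard-coding enable_mergejoin = off in postgresql.conf is to scope the override to the session that needs it while investigating; a sketch, using the same query as above:

```sql
-- Test the alternative plan per session instead of globally; the
-- longer-term fix is better statistics so the planner chooses correctly
-- on its own.
SET enable_mergejoin = off;
EXPLAIN ANALYZE SELECT * FROM viwassoclist WHERE clientnum = 'SAKS';
RESET enable_mergejoin;   -- back to the default for everything else
```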
Might make a good deal more data, but I think from the app's point of view it
is a good idea anyway; I am just not sure how to handle editing.
Joel Fradkin
"Merge Join (cost=49697.60..50744.71 rows=14987 width=113) (actual
time=11301.160..12171.072 rows=160593 loops=1)"
" Merge Cond
analyze.
Joel Fradkin
Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> Basically it tells Postgres how many values it should keep for
> statistics per column. The config default_statistics_target
> is the default (= used when creating a table) and ALTER... is
> a way to change it
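Dawid's ALTER suggestion can be sketched like this; tbljobtitle and its clientnum column appear earlier in the thread, and 250 matches the target Joel was already using globally:

```sql
-- Raise the per-column statistics target for the skewed column only,
-- then re-gather statistics so the planner sees the extra detail.
ALTER TABLE tbljobtitle ALTER COLUMN clientnum SET STATISTICS 250;
ANALYZE tbljobtitle;
```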
at Linux so not sure I have got it yet; still testing.
Still kind of puzzled why it chose two different options, but one is running
the Windows version of Postgres, so maybe that has something to do with it.
The databases and configs (as far as page cost) are the same.
Joel Fradkin
Wazagua, Inc.
2520
Josh from commandprompt.com had me alter the config to have
default_statistics_target = 250.
Is this somehow related to what you're asking me to do?
I did do an analyze, but have only run the view a few times.
Joel Fradkin
-----Original Message-----
From: Dawid Kuroczko [mailto:[EMAIL PROTECTED
I have done a vacuum and a vacuum analyze.
I can try again for kicks, but it is not in production, so no new records are
added, and VACUUM ANALYZE is run after any mods to the indexes.
I am still pursuing Dell on why the monster box is so much slower than the
desktop as well.
Joel Fradkin
Are you sure the query was identical in each case?
I just ran it a second time, same results, ensuring that the query is the same.
Not sure why it is doing a column10 thing. Any ideas what to look for?
Both databases are a restore from the same backup file.
One is running Red Hat, the other XP. I beli
obtitle jt
(cost=0.00..239.76 rows=6604 width=37) (actual time=0.016..11.323 rows=5690
loops=1)"
"Filter: (1 = presentationid)"
" -> Sort (cost=24825.80..25265.84 rows=176015 width=53)
(actual time=8729.320..8945.292 rows=177041 loops=1)"
Here is the config for the AS4 server.
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line.
" -> Seq Scan on tblassociate a (cost=0.00..30849.25
rows=176933 width=53) (actual time=543.931..1674.518 rows=177041 loops=1)"
" Filter: ((clientnum)::text = 'SAKS'::text)"
which, if I understand this (not saying I do), is ta
Filter: (1 = presentationid)"
" -> Sort (cost=60296.29..60796.09 rows=199922 width=53)
(actual time=13406.000..13859.000 rows=176431 loops=1)"
"Sort Key: a.jobtitleid, (a.clientnum)::text"
"-> Seq Scan on tblass
shared_buffers = 8000        # min 16, at least max_connections*2, 8KB each
work_mem = 8192              #1024   # min 64, size in KB
max_fsm_pages = 3            # min max_fsm_relations*16, 6 bytes each
effective_cache_size = 4     #1000   # typically 8KB each
random_page_cost = 1.2       #4
timestamp,
close_time timestamp,
insurance_loc_id varchar(50),
lpregionid int4,
sic int4,
CONSTRAINT pk_tbllocation PRIMARY KEY (clientnum, locationid),
CONSTRAINT ix_tbllocation_1 UNIQUE (clientnum, locationnum,
name),
CONSTRAINT ix_tbllocation_unique_number UNIQUE (clientnum,
divisionid, regionid, districtid, locationnum)
)
Joel Fradkin
I am no expert, but I have been asking them a bunch, and I think you're missing
a key concept.
The data is best spread over several drives.
I could be completely off, but if I understood (I just finished doing the
same kind of thing, minus several databases), you want your WAL on fast drives
in RAID 1 and your da