 …                  | unset
 wal_buffers        | 1024
 wal_debug          | 0
 wal_sync_method    | open_sync
 zero_damaged_pages | off
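Those rows look like output from a 7.4 SHOW ALL; for what it's worth, the same WAL settings can be re-checked from psql with plain SHOW commands (a sketch using only the names listed above):

-- Re-check the WAL-related settings from the listing above:
SHOW wal_buffers;
SHOW wal_sync_method;
SHOW zero_damaged_pages;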
Sean Shanny wrote:
To all,
Essentials: Running 7.4.1 on OSX on a loaded G5 with dual procs, 8GB
memory, direct attached via fibre channel to a fully optioned 3.5TB
XRaid (14 spindles, 2 sets of 7 in RAID 5) box running RAID 50.
Background: We are loading what are essentially XML-based access logs
from about 20
Simon Riggs wrote:
Sean Shanny
Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre
channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two
7-disk hardware-based RAID5 sets software-striped to form a RAID50 set.
The DB, WALs, etc. are all on that file set
 …          | text    | not null default 'Not Available'::text
 userid_key | integer |
Indexes:
    "f_pageviews_pkey" primary key, btree (id)
    "idx_pageviews_date" btree (date_key)
    "idx_pageviews_session" btree (session_key)
scott.marlowe wrote:
On Sun,
… The joys of
building a data warehouse and trying to make it as fast as possible.
Thanks.
--sean
Tom Lane wrote:
Sean Shanny <[EMAIL PROTECTED]> writes:
New results with the above changes: (Rather a huge improvement!!!)
Thanks Scott. I will next attempt to make the cpu_* changes
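The cpu_* planner knobs referred to there can be tried per session before editing postgresql.conf; a sketch, shown with 7.4's default values rather than any figures from the thread:

SET cpu_tuple_cost = 0.01;        -- 7.4 default
SET cpu_index_tuple_cost = 0.001; -- 7.4 default
SET cpu_operator_cost = 0.0025;   -- 7.4 default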
scott.marlowe wrote:
On Fri, 20 Feb 2004, Sean Shanny wrote:
max_connections = 100
# - Memory -
shared_buffers = 16000 # min 16, at least max_connections*2, 8KB each
sort_mem = 256000 # min 64, size in KB
You might wanna drop sort_mem somewhat and just set it
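The usual shape of that advice, sketched in SQL (the 131072 figure is a placeholder, not a value from the thread): keep the global sort_mem modest, then raise it per session only for the big warehouse queries.

SET sort_mem = 131072;  -- in KB, this session only (7.4 syntax)
SELECT count(distinct persistent_cookie_key) FROM f_pageviews;
RESET sort_mem;         -- back to the postgresql.conf value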
To all,
This is a 2-question email. The first asks about general tuning of the
Apple hardware/postgres combination. The second asks whether it is
possible to speed up a particular query.
PART 1
Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre
channel to a fully loaded 3.5TB XRaid
Sean Shanny <[EMAIL PROTECTED]> writes:
Here is the pg_stats data. The explain analyze queries are still running.
select * from pg_stats where tablename = 'f_pageviews' and attname = 'content_key';
schemaname | tablename | attname | null_frac
Here is one of the explain analyzes. This one is from the faster
query. Ignore the total runtime, as we are currently running other
queries on this machine, so it is slightly loaded.
Thanks.
--sean
explain analyze select count (distinct (persistent_cookie_key) ) from
f_pageviews where date_key
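For what it's worth, a common 7.4-era rewrite of that query shape (a sketch; the date range below is a placeholder, not from the thread): count(distinct ...) always sorts its input, whereas a GROUP BY subselect lets the planner deduplicate with a HashAggregate first.

explain analyze
select count(*)
from (select persistent_cookie_key
        from f_pageviews
       where date_key between 305 and 334  -- placeholder range
       group by persistent_cookie_key) as uniq;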
Here is the pg_stats data. The explain analyze queries are still running.
select * from pg_stats where tablename = 'f_pageviews' and attname = 'date_key';
schemaname | tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | …
I am running explain analyze now and will post results as they finish.
Thanks.
--sean
Tom Lane wrote:
Please show EXPLAIN ANALYZE output for your queries, not just EXPLAIN.
Also it would be useful to see the pg_stats rows for the date_key and
content_key columns.
regards, tom lane
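One way to pull both of the requested pg_stats rows in a single query (standard catalog usage, not a command from the thread):

select * from pg_stats
 where tablename = 'f_pageviews'
   and attname in ('date_key', 'content_key');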
To all,
The facts:
PostgreSQL 7.4.0 running on BSD 5.1 on Dell 2650 with 4GB RAM, 5 SCSI
drives in hardware RAID 0 configuration. Database size with indexes is
currently 122GB. Schema for the table in question is at the end of this
email. The DB has been vacuumed full and analyzed. Between
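The maintenance step mentioned there, as it would be issued on 7.4 (a sketch; the table name is taken from the schema shown elsewhere in the thread):

VACUUM FULL ANALYZE f_pageviews;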
Gaetano,
I don't believe we have ever run the system without it turned on.
Another switch to fiddle with. :-)
--sean
Gaetano Mendola wrote:
Sean Shanny wrote:
We are currently running on a Dell 2650 with 2 Xeon 2.8 processors in
hyper-threading mode, 4GB of RAM, and 5 SCSI drives in a
I should also add that we have already done a ton of tuning based on the
archives of this list so we are not starting from scratch here.
Thanks.
--sean
Sean Shanny wrote:
To all,
We are building a data warehouse composed of essentially click stream
data. The DB is growing fairly quickly, as is to be expected, currently
at 90GB for one month's data. The idea is to keep 6 months of detailed
data online and then start aggregating older data to summary tables. We
have 2
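A hypothetical sketch of that rollup pattern (every name here except f_pageviews and date_key is invented for illustration):

-- Hypothetical daily summary table and its population query (sketch):
CREATE TABLE f_pageviews_daily (
    date_key  integer PRIMARY KEY,
    pageviews bigint NOT NULL
);

INSERT INTO f_pageviews_daily (date_key, pageviews)
SELECT date_key, count(*)
  FROM f_pageviews
 GROUP BY date_key;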
John,
Are you treating each insertion as a separate transaction? If so, the
performance will suffer. I am doing the same thing in building a data
warehouse using PG. I have to load millions of records each night. I
do two different things:
1) If I need to keep the insertions inside the java
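The two usual shapes of that batching advice, sketched in SQL (table, column, and file names are all hypothetical):

-- (1) Many rows per transaction instead of one transaction per row:
BEGIN;
INSERT INTO staging_log (line) VALUES ('record 1');
INSERT INTO staging_log (line) VALUES ('record 2');
-- ... and so on for the rest of the batch ...
COMMIT;

-- (2) Or bulk-load with COPY, which is typically faster still
-- (server-side COPY FROM a file requires superuser in 7.4):
COPY staging_log (line) FROM '/tmp/access_log.txt';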