After a long battle with technology, [EMAIL PROTECTED] (Josh Berkus), an earthling,
wrote:
>> As long as pg_autovacuum remains a contrib module, I don't think
>> any changes to the system catalogs will be made. If pg_autovacuum
>> is deemed ready to move out of contrib, then we can talk about the
Nid wrote:
I've been digging around in the code and found where we are executing the
VACUUM command. VACUUM ANALYZE is executed every 15 minutes. We haven't
tried VACUUM FULL ANALYZE. I think I read that using FULL is a good idea
once a day or something. Just doing a VACUUM ANALYZE doesn't seem to be
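As a rough illustration of why the vacuum interval matters, dead tuples accumulate between runs in proportion to the update rate. A back-of-the-envelope sketch (the update rate and row size are made-up numbers for illustration, not figures from this thread):

```python
# Estimate dead-tuple buildup between plain VACUUM ANALYZE runs.
# All workload figures are hypothetical.

def dead_tuples_between_vacuums(updates_per_second, vacuum_interval_minutes):
    """Each UPDATE/DELETE leaves one dead tuple behind until the next vacuum."""
    return updates_per_second * vacuum_interval_minutes * 60

def bloat_mb(dead_tuples, avg_row_bytes=100):
    """Space those dead tuples occupy, assuming ~100-byte rows."""
    return dead_tuples * avg_row_bytes / (1024 * 1024)

if __name__ == "__main__":
    dead = dead_tuples_between_vacuums(50, 15)  # 50 updates/s, 15-minute cycle
    print(dead)                  # 45000 dead tuples per cycle
    print(round(bloat_mb(dead), 1))
```

The relevant distinction: plain VACUUM marks that space reusable inside the table file, while VACUUM FULL compacts the table and returns the space to the operating system, which is why a daily FULL on top of frequent plain vacuums was common advice in the 7.x era.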
<[EMAIL PROTECTED]> writes:
> But it was not this bad in 7.3 as far as i understand.
No, I believe this behavior is present in any recent release of
PostgreSQL.
-Neil
Torsten Schulz <[EMAIL PROTECTED]> writes:
> Our Server:
> Dual-CPU with 1.2 GHz
> 1.5 GB RAM
What kind of I/O subsystem is in this machine? This is an x86 machine,
right?
> Does anyone have an idea what's the best configuration for that server?
It is difficult to say until you provide some information.
William Yu <[EMAIL PROTECTED]> writes:
> > You're right, though, mirroring a solid state drive is pretty pointless; if
> > power fails, both mirrors are dead.
>
> Actually no. Solid state memory is non-volatile. They retain data even without
> power.
Note that flash RAM only has a finite number of write cycles before it wears out.
The archives of this list provides many ideas for improving performance,
but the problem we are having is gradually degrading performance ending
in postgres shutting down. So it's not a matter of optimizing a complex
query to take 5 seconds instead of 60 seconds. From what I can tell we
are
My apologies for the "From" name of MK Spam.
That references an email account I made for signing up for things on the
net. :]
Nid
----- Original Message -----
From: MK Spam
To: [EMAIL PROTECTED]
Sent: Monday, November 24, 2003 4:03 PM
Subject: [PERFORM] Where to
I've scanned some of the archives and have learned
a lot about different performance tuning practices. I will be looking into
using many of these ideas but I'm not sure they address the issue I am currently
experiencing.
First, I'm a total newb with postgresql. Like
many before me, I hav
On Mon, 24 Nov 2003, Torsten Schulz wrote:
> sort_mem = 32768 # min 32
32 MB per sort can be a lot in total if you have many clients sorting
things. I assume you have checked that the computer is not pushed into
swapping when you hit the peak with lots of users. A swapping comp
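The worst case being alluded to here can be sketched numerically. Note that sort_mem is allocated per sort operation, not per connection, so the single-sort-per-client multiplier below is a simplification (and can undercount for queries with multiple sorts):

```python
# Worst-case sort memory if every client runs one sort at sort_mem = 32768 KB.
def worst_case_sort_memory_mb(clients, sort_mem_kb=32768, sorts_per_client=1):
    return clients * sorts_per_client * sort_mem_kb / 1024

if __name__ == "__main__":
    need = worst_case_sort_memory_mb(350)  # the >350-user peak from the thread
    print(need)  # 11200.0 MB -- far beyond this server's 1.5 GB of RAM
```

With 350 concurrent users the theoretical peak is over 11 GB, which on a 1.5 GB machine guarantees swapping; this is the usual argument for keeping per-sort memory small on high-connection-count servers.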
Yes, I know: very difficult question, but I don't know what to do now.
Our Server:
Dual-CPU with 1.2 GHz
1.5 GB RAM
Our Problem: We are a community. Between 19:00 and 21:00 we have >350
users in the community. But then, the database is very slow. And we have
~20-30% idle time per CPU.
Does anyone have an idea what's the best configuration for that server?
On Thursday 20 November 2003 21:04, stephen farrell wrote:
> I'm having a problem with a query like: INSERT INTO FACT (x,x,x,x,x,x)
> SELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a
> and x=b -- postgres7.4 is running out of memory.
When this has happened to me it's
On Fri, Nov 21, 2003 at 04:24:25PM -0500, Matthew T. O'Connor wrote:
> I don't want to add tables to existing databases, as I consider that
> clutter and I never like using tools that clutter my production
> databases. [...]
>
> Actually, this might be a necessary addition as pg_autovacuum cur
On 30/5/03 6:17 pm, "scott.marlowe" <[EMAIL PROTECTED]> wrote:
> On Fri, 30 May 2003, Adam Witney wrote:
>
>> Hi scott,
>>
>> Thanks for the info
>>
>>> You might wanna do something like go to all 146 gig drives, put a mirror
>>> set on the first 20 or so gigs for the OS, and then use the rema
I'm having a problem with a query like: INSERT INTO FACT (x,x,x,x,x,x)
SELECT a.key,b.key,c.key,d.key,e.key,f.key from x,a,b,c,d,e,f where x=a
and x=b -- postgres7.4 is running out of memory. I'm not sure
why this would happen -- does it buffer the subselect before doing the
insert?
Things
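One common workaround for an INSERT ... SELECT that exhausts memory (a generic technique, not something suggested in the thread itself) is to split the statement into key-range batches so no single statement materializes the whole join. A sketch of computing the ranges; the table and column names are hypothetical stand-ins:

```python
# Split a key space into half-open ranges for batched INSERT ... SELECT.
def key_ranges(min_key, max_key, batch_size):
    """Yield (lo, hi) pairs covering [min_key, max_key] in batch_size steps."""
    lo = min_key
    while lo <= max_key:
        hi = min(lo + batch_size, max_key + 1)
        yield lo, hi
        lo = hi

if __name__ == "__main__":
    for lo, hi in key_ranges(1, 10, 4):
        # Hypothetical statement shape; FACT and a.key echo the thread's example.
        print(f"INSERT INTO fact ... WHERE a.key >= {lo} AND a.key < {hi}")
```

Each batch commits independently, keeping peak memory bounded at the cost of losing single-statement atomicity.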
Hi, again.
I've turned on only log_connections and log_statement. See the following
log fragment (I've included only the lines related to opening a new
connection):
Nov 21 11:06:44 postgres[3359]: [3-1] LOG: connection received: host= port=
Nov 21 11:06:44 postgres[3359]: [4-1] LOG: connection au
Tom Lane wrote:
Chester Kustarz <[EMAIL PROTECTED]> writes:
Vacuum is to reclaim dead tuples. This means it depends on update and
delete. Analyze depends on data values/distribution. This means it depends on
insert, update, and delete. Thus the dependencies are slightly different
between the 2
Josh Berkus wrote:
Matthew,
For small tables, you don't need to vacuum too often. In the testing I
did, a small table of ~100 rows didn't really show significant performance
degradation until it had close to 1000 updates.
This is accounted for by using the "threshold" value. That way sma
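The "threshold" idea can be sketched as a base value plus a scaling factor times table size, so small tables are vacuumed only after proportionally more churn. The constants below are illustrative, not necessarily pg_autovacuum's actual defaults:

```python
# Sketch of an autovacuum-style trigger threshold (illustrative constants).
def vacuum_threshold(reltuples, base=1000, scale=2.0):
    """Vacuum a table once its dead tuples exceed base + scale * reltuples."""
    return base + scale * reltuples

if __name__ == "__main__":
    print(vacuum_threshold(100))        # a 100-row table waits for ~1200 updates
    print(vacuum_threshold(1_000_000))  # a big table waits much longer in absolute terms
```

The base term keeps tiny tables from being vacuumed constantly, while the scale term keeps large tables' dead-tuple fraction bounded.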
Robert Treat wrote:
Just thinking out loud here, so disregard if you think it's chaff but...
if we had a system table pg_avd_defaults
[snip]
As long as pg_autovacuum remains a contrib module, I don't think any
changes to the system catalogs will be made. If pg_autovacuum is
deemed ready to m
William,
> The SanDisks do seem a bit pokey at 16MBps. On the otherhand, you could
> get 4 of these suckers, put them in a mega-RAID-0 stripe for 64MBps. You
> shouldn't need to do mirroring with a solid state drive.
I wouldn't count on RAID0 improving the speed of SanDisks much. How are you
c
Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
> I am sure there is no transaction open with the table banner_stats2.
> Still VACUUM FULL does not seem to be effective in removing the
> dead rows.
That is not the issue --- the limiting factor is what is your oldest
open transaction, period. Wheth
William,
> When my current job batch is done, I'll save a copy of the dir and give
> the WAL on ramdrive a test. And perhaps even buy a Sandisk at the local
> store and run that through the hooper.
We'll be interested in the results. The Sandisk won't be much of a
performance test; last I chec
This is an intriguing thought which leads me to think about a similar
solution for even a production server and that's a solid state drive for
just the WAL. What's the max disk space the WAL would ever take up?
There's quite a few 512MB/1GB/2GB solid state drives available now in
the ~$200-$500
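On the question of maximum WAL size: in PostgreSQL of that era the server normally cycles through roughly 2 * checkpoint_segments + 1 segment files of 16 MB each, so the steady-state ceiling is easy to estimate. A sketch (versions differ in the exact file count, and peak load can briefly exceed it):

```python
# Approximate steady-state WAL disk usage for a 7.x/8.x-era server.
def max_wal_mb(checkpoint_segments=3, segment_mb=16):
    """WAL normally occupies about (2 * checkpoint_segments + 1) segment files."""
    return (2 * checkpoint_segments + 1) * segment_mb

if __name__ == "__main__":
    print(max_wal_mb())    # the default setting fits easily on a 512 MB device
    print(max_wal_mb(32))  # even a high setting stays within a 2 GB drive
```

So even with checkpoint_segments raised aggressively for write-heavy batch loads, the 512 MB to 2 GB solid state drives mentioned above would comfortably hold the WAL.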
I am sure there is no transaction open with the table banner_stats2.
Still VACUUM FULL does not seem to be effective in removing the
dead rows.
Can anyone please help?
Regds
mallah
tradein_clients=# VACUUM FULL verbose banner_stats2 ;
INFO: vacuuming "public.banner_stats2"
INFO: "banner_stats
William Yu wrote:
My situation is this. We have a semi-production server where we
pre-process data and then upload the finished data to our production
servers. We need the fastest possible write performance. Having the DB
go corrupt due to power loss/OS crash is acceptable because we can
always
----- Original Message -----
From: "Josh Berkus" <[EMAIL PROTECTED]>
To: "Nick Barr" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Saturday, November 22, 2003 5:19 PM
Subject: Re: pg_autoconfig.pl
> Nick,
>
> > Josh have you managed to put together the rest of the calculations yet?
>
> Given th