"Stephen Denne" <[EMAIL PROTECTED]> writes:
> I altered the update statement slightly, and reran the query.
> The altered query has been running over 3 hours now,
> without using lots of memory (38M private bytes).
> 2046 temp files were created (2.54GB worth),
> which have recently changed from s
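Those temp files point at sorts or hashes spilling to disk rather than fitting in work_mem. A minimal sketch of raising work_mem for just the session doing the update, so the spill threshold is higher (the 256MB figure is purely illustrative, not a value suggested in the thread):

    -- Raise work_mem for this session only; it caps how much memory each
    -- sort or hash step may use before it spills to temp files on disk.
    SET work_mem = '256MB';   -- illustrative; size it to the RAM you can spare per backend

    -- ...then run the UPDATE in the same session.

A single query can use several multiples of work_mem at once, so a large value is safer kept session-local than written into postgresql.conf.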
> "Stephen Denne" <[EMAIL PROTECTED]> writes:
> > A simple update query, over roughly 17 million rows,
> > populating a newly added column in a table, resulted in an
> > out of memory error when the process memory usage reached
> > 2GB. Could this be due to a poor choice of some configuration
>
"Stephen Denne" <[EMAIL PROTECTED]> writes:
> A simple update query, over roughly 17 million rows, populating a newly added
> column in a table, resulted in an out of memory error when the process memory
> usage reached 2GB. Could this be due to a poor choice of some configuration
> parameter, or is there a limit on how many rows I can update in
"Dmitry Potapov" <[EMAIL PROTECTED]> writes:
> Sorry, it was just EXPLAIN. I can't run EXPLAIN ANALYZE on that
> (production) server, so I uploaded 3days old backup to a spare box and
> here's what I've got:
> -> Merge Join (cost=0.00..4955790.28 rows=1 width=59)
> (actual time=0.048..4
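EXPLAIN alone shows only the planner's estimates, while EXPLAIN ANALYZE actually executes the statement and reports real row counts and timings, which is why it is usually kept off busy production servers. When it has to run there against a statement that modifies data, a common pattern is to wrap it in a transaction and roll it back; a sketch (the statement below is a placeholder, not the query from this thread):

    BEGIN;
    -- EXPLAIN ANALYZE runs the statement for real, so the ROLLBACK below
    -- discards any changes a DML statement would otherwise have made.
    EXPLAIN ANALYZE
        UPDATE some_table SET some_column = 0 WHERE id < 1000;  -- placeholder
    ROLLBACK;

The statement still takes its normal time and holds its normal locks until the rollback, so this avoids only the data change, not the load.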
On Thu, 24 Jan 2008 13:00:24 -0500,
Rick Schumeyer <[EMAIL PROTECTED]> wrote:
> On a linux box (Linux db1 2.6.18.8-md #1 SMP Wed May 23 17:21:37 EDT
> 2007 i686 GNU/Linux)
> I edited postgresql.conf and changed:
>
> shared_buffers = 5000
> work_mem = 16384
> max_stack_depth = 4096
>
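Those three parameters use different implicit units when no suffix is given, which makes the file easy to misread; the same settings with the units spelled out (interpretation assumes no unit suffixes, as in the quoted file):

    shared_buffers = 5000     # 5000 x 8kB buffers, i.e. roughly 40MB of shared memory
    work_mem = 16384          # kB, i.e. 16MB per sort/hash step, per backend
    max_stack_depth = 4096    # kB, i.e. 4MB; must stay below the kernel stack limit (ulimit -s)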
On Fri, 25 Jan 2008 12:46:20 +1300,
"Stephen Denne" <[EMAIL PROTECTED]> wrote:
> A simple update query, over roughly 17 million rows, populating a
> newly added column in a table, resulted in an out of memory error
> when the process memory usage reached 2GB. Could this be due to a
> poor choice of some configuration parameter, or is there a limit on how many rows I can update in
A simple update query, over roughly 17 million rows, populating a newly added
column in a table, resulted in an out of memory error when the process memory
usage reached 2GB. Could this be due to a poor choice of some configuration
parameter, or is there a limit on how many rows I can update in
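One common way to keep a backfill of that size bounded is to update the table in key ranges and commit between slices, so no single transaction touches all 17 million rows; a rough sketch, assuming an integer primary key named id and a new column named new_col (neither name comes from this thread):

    -- Backfill one bounded slice per transaction; repeat, advancing the range.
    BEGIN;
    UPDATE big_table
       SET new_col = 0                 -- 0 stands in for the real expression
     WHERE id >= 0 AND id < 1000000
       AND new_col IS NULL;            -- lets an interrupted run be restarted safely
    COMMIT;
    -- next slice: WHERE id >= 1000000 AND id < 2000000, and so on.

Running a plain VACUUM on the table between slices keeps the dead row versions left behind by the updates from bloating it.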
On Thu, 24 Jan 2008, Tory M Blue wrote:
Postgres: 8.2.1fc6
8.2.1 has a nasty bug related to statistics collection that causes
performance issues exactly in the kind of heavy update situation you're
in. That's actually why I asked for the exact 8.2 version. You should
plan an upgrade as soon
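Within the same 8.2.x series a minor-release upgrade only means installing the new binaries and restarting; no dump/reload is needed, so confirming the exact version currently running is the cheap first step:

    -- Reports the exact server version string, e.g. "PostgreSQL 8.2.1 on ..."
    SELECT version();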
On Jan 24, 2008 1:01 PM, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> I'll give you a little hint about postgresql.conf...tuning shared
> buffers rarely has a huge impact on performance...the o/s will
oops. I meant to say the o/s will cache the files just fine...the
setting that _does_ affect query perfor
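A setting often contrasted with shared_buffers in exactly this way is effective_cache_size, which allocates nothing itself and merely tells the planner how much data the OS cache (plus shared_buffers) is likely to hold; a sketch with an illustrative value, and not necessarily the setting the poster had in mind:

    # Planner hint only: estimated memory available for caching data files,
    # counting the OS cache plus shared_buffers. Allocates no memory itself.
    effective_cache_size = 6GB    # illustrative; one half to three quarters of RAM is common advice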
On Thu, 24 Jan 2008, Rick Schumeyer wrote:
On a linux box (Linux db1 2.6.18.8-md #1 SMP Wed May 23 17:21:37 EDT 2007
i686 GNU/Linux)
I edited postgresql.conf and changed:
shared_buffers = 5000
work_mem = 16384
max_stack_depth = 4096
and then restarted postgres. The puzzling part is
On Jan 22, 2008 11:11 PM, Joshua Fielek <[EMAIL PROTECTED]> wrote:
> The database itself is not that large -- a db_dump of the sql file as
> text is only about 110MB. I haven't checked the exact size of the actual
database, but the entire data directory is smaller than the available
> memory at
On Jan 23, 2008 4:31 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> Generally if you have a system doing many updates and inserts that slows
> for that long, it's because it hit a checkpoint. I'm not sure what your
> memory-related issues are but it's possible that might be from a backlog
> of sessi
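On an 8.2-era server the checkpoint behaviour described there is governed by a small set of parameters; the ones usually checked first, shown at their 8.2 defaults (defaults only, not suggested values):

    checkpoint_segments = 3     # each WAL segment is 16MB, so a low value forces
                                # frequent checkpoints under heavy write load
    checkpoint_timeout = 5min   # maximum time between checkpoints
    checkpoint_warning = 30s    # log a warning when segment-driven checkpoints
                                # arrive closer together than this

Raising checkpoint_segments spreads checkpoints out, at the cost of more disk space under pg_xlog and a longer crash recovery.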
[EMAIL PROTECTED] (Florian Weimer) writes:
>> So, that web site seems to list products starting at about 32GB in a
>> separate rack-mounted box with redundant everything. I'd be more
>> interested in just putting the WAL on an SSD device, so 500MB or 1GB
>> would be quite sufficient. Can anyone poi
On a linux box (Linux db1 2.6.18.8-md #1 SMP Wed May 23 17:21:37 EDT
2007 i686 GNU/Linux)
I edited postgresql.conf and changed:
shared_buffers = 5000
work_mem = 16384
max_stack_depth = 4096
and then restarted postgres. The puzzling part is that postgres
actually started. When I hav
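A quick way to see which values the running server actually picked up, and which configuration file it read, is to ask it directly:

    -- Values currently in effect in the running server
    SHOW shared_buffers;
    SHOW work_mem;
    SHOW max_stack_depth;

    -- Which postgresql.conf the postmaster read at startup
    SHOW config_file;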
[EMAIL PROTECTED] ("Scott Marlowe") writes:
> On Jan 23, 2008 1:57 PM, Guy Rouillier <[EMAIL PROTECTED]> wrote:
>> Scott Marlowe wrote:
>> > I assume you're talking about solid state drives? They have their
>> > uses, but for most use cases, having plenty of RAM in your server will
>> > be a bette
On Jan 23, 2008 12:29 PM, Thomas Lozza <[EMAIL PROTECTED]> wrote:
> Auto vacuum is enabled. Here are the settings:
>
> autovacuum = true
> autovacuum_naptime = 900
> autovacuum_vacuum_threshold = 2000
> autovacuum_analyze_threshold = 1000
> autovacuum_vacuum_scale_factor = 0.25
> autovacuum_analyze
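With those values, autovacuum vacuums a table only once its dead rows exceed 2000 plus a quarter of the table's row count. A sketch for estimating that trigger point per table from the catalog, using the settings quoted above:

    -- Approximate dead-row count at which autovacuum would vacuum each table,
    -- given autovacuum_vacuum_threshold = 2000 and
    -- autovacuum_vacuum_scale_factor = 0.25.
    SELECT relname,
           reltuples::bigint                  AS estimated_rows,
           (2000 + 0.25 * reltuples)::bigint  AS dead_rows_to_trigger_vacuum
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY reltuples DESC
     LIMIT 10;

For a large, frequently updated table a 25% scale factor can mean very long gaps between vacuums, which is why per-table overrides (via the pg_autovacuum catalog in 8.x) are often used.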
> So, that web site seems to list products starting at about 32GB in a
> separate rack-mounted box with redundant everything. I'd be more
> interested in just putting the WAL on an SSD device, so 500MB or 1GB
> would be quite sufficient. Can anyone point me towards such a device?
A dedicated RAID
On Wed, 23 Jan 2008, Brian Hurt wrote:
I think the original poster was talking about drives like these:
http://www.texmemsys.com/
Basically, they're not using Flash, they're just big ol' hunks of
battery-backed RAM. Not unlike a 10GB battery backed buffer for your raid,
except there is no rai
On Jan 23, 2008 2:57 PM, Guy Rouillier <[EMAIL PROTECTED]> wrote:
> Unless somebody has changed some physics recently, I'm not understanding
> the recent discussions of SSD in the general press. Flash has a limited
> number of writes before it becomes unreliable. On good quality consumer
> grade,
Joshua Fielek wrote:
We have an application that has been having some issues with performance
within postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a
nice little performance increase just off the improved query
optimization, but we are still having other performance issues.
Wh
On Wed, Jan 23, 2008 at 07:54:24PM -0500, Greg Smith wrote:
> (a standard mechanical drive under heavy write load also wears out faster
> than one doing less work).
Wasn't this one of the myths that was dispelled in the Google disk paper a
while ago?
/* Steinar */
--
Homepage: http://www.sesse.n