Mark Kirkwood wrote:
Brandon Shalton wrote:
Hello all,
My hard disk is filling up in the /base directory to where it has
consumed all 200 GB of that drive.
All the posts that I see keep saying move to a bigger drive, but at
some point a bigger drive would just get consumed.
How can I keep...
Brandon Shalton wrote:
Hello all,
My hard disk is filling up in the /base directory to where it has
consumed all 200 GB of that drive.
All the posts that I see keep saying move to a bigger drive, but at
some point a bigger drive would just get consumed.
How can I keep the disk from filling...
> From: Scott Feldstein
> Subject: [PERFORM] update, truncate and vacuum
>
> Hi,
> I have a couple of questions about how update, truncate, and
> vacuum would work together.
>
> 1) If I update a table foo (id int, value numeric(20, 6))
> with update foo set value = 100 where id = 1
>
> Would a vacuum...
On Jul 25, 2007, at 11:53 AM, Y Sidhu wrote:
I am wondering if reindexing heavily used tables can have an impact
on vacuum times. If it does, will the impact be noticeable the next
time I vacuum? Please note that I am doing vacuum, not vacuum full.
I am on FreeBSD 6.1-RELEASE, PostgreSQL...
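An aside on the question above (not from the thread): one way to check this
empirically is to time a plain vacuum before and after a reindex and compare
the output. A minimal sketch; "foo" is a hypothetical stand-in for one of the
heavily used tables, and timings come from psql's \timing.

-- Hypothetical table name; run in psql with \timing on.
VACUUM VERBOSE foo;   -- baseline: note elapsed time and removed row versions
REINDEX TABLE foo;    -- rebuild all indexes on the table
VACUUM VERBOSE foo;   -- vacuum again and compare the elapsed times

If the index rebuild matters, the difference should show up directly in the
second vacuum's elapsed time.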
1) Yes.
All rows are treated the same; there are no in-place updates.
2) No.
Truncate recreates the object as a new one, releasing the space held by the old
one.
- Luke
Message is short because I'm on my Treo.
-Original Message-
From: Scott Feldstein [mailto:[EMAIL PROTECTED]
Sent: Thursday,
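An aside illustrating the two points in the answer above (the table definition
follows the quoted question; the values are made up, so treat this as a sketch
rather than the thread's own example): an update writes a new row version and
leaves the old one as a dead tuple for vacuum to reclaim, while truncate swaps
in fresh storage and releases the old space immediately.

-- Illustrative only.
CREATE TABLE foo (id int, value numeric(20, 6));
INSERT INTO foo VALUES (1, 50);

UPDATE foo SET value = 100 WHERE id = 1;  -- old row version becomes a dead tuple
VACUUM VERBOSE foo;                       -- reports the dead row versions it removed

TRUNCATE foo;                             -- new relation file; old space released at once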
Hi,
I have a couple of questions about how update, truncate, and vacuum would
work together.
1) If I update a table foo (id int, value numeric(20, 6))
with
update foo set value = 100 where id = 1
would a vacuum be necessary after this type of operation, since the
updated value is a numeric? (as...
Will try 16 and 32 CLOG buffers tomorrow.
But here is the lock data again, with a longer profiling window (about
2 minutes) for the connection, with about 2000 users:
bash-3.00# time ./4_lwlock_waits.d 13583
^C
Lock Id             Mode      Count
ProcArrayLock       Shared    ...
On Thu, 2007-07-26 at 15:44 -0400, Jignesh K. Shah wrote:
> BEAUTIFUL!!!
>
> Using the patch I can now go up to 1300 users without dropping. But now
> it still repeats at 1300-1350 users.
OK, can you try again with 16 and 32 buffers please? We need to know
how many are enough and whether this n...
Tom Lane wrote:
That path would be taking CLogControlLock ... so you're off by at least
one. Compare the script to src/include/storage/lwlock.h.
Indeed, the indexing was off by one due to the removal of
BufMappingLock from src/include/storage/lwlock.h between 8.1 and 8.2,
which was not updat...
BEAUTIFUL!!!
Using the patch I can now go up to 1300 users without dropping. But now
it still repeats at 1300-1350 users.
I corrected the lock descriptions based on what I got from lwlock.h and
retried the whole scalability test with profiling again. This time it
looks like the ProcArrayLock...
[EMAIL PROTECTED] (Jeff Davis) writes:
> On Thu, 2007-07-26 at 01:44 -0700, angga erwina wrote:
>> Hi all,
>> What are the benefits of replication using Slony in
>> PostgreSQL?
>> My office is spread over several different places; there are
>> hundreds of branch offices in different
>> locations, so...
On Thu, 2007-07-26 at 01:44 -0700, angga erwina wrote:
> Hi all,
> What are the benefits of replication using Slony in
> PostgreSQL?
> My office is spread over several different places; there are
> hundreds of branch offices in different
> locations, so can anyone help me replicate our database
> by...
On Thu, 2007-07-26 at 11:27 -0400, Jignesh K. Shah wrote:
> However, at 900 users, where the big drop in throughput occurs,
> it gives a different top "consumer" of time:
> postgres`LWLockAcquire+0x1c8
> postgres`SimpleLruReadPage+0x1ac
> postgres`Transact...
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> For 600-850 users: that potentially mislabeled CheckPointStartLock or
> LockID==12 comes from various sources; the top source (while the system
> is still doing great) is:
> postgres`LWLockAcquire+0x1c8
> postgr...
I will look for runs with longer samples.
Now, the script could have mislabeled lock names. Anyway, digging into
the one that seems to increase over time, I did stack profiles on how
it increases, and here are some numbers.
For 600-850 users: that potentially mislabeled CheckPointStart...
On Thu, 2007-07-26 at 10:29 -0400, Jignesh K. Shah wrote:
> The count is only for a 10-second snapshot. Plus, remember there are
> about 1000 users running, so the connection being profiled only gets
> 0.01 of the period on CPU. And the count is for that CONNECTION only.
Is that for one process...
Brandon Shalton wrote:
Hello all,
My hard disk is filling up in the /base directory to where it has
consumed all 200 GB of that drive.
All the posts that I see keep saying move to a bigger drive, but at some
point a bigger drive would just get consumed.
How can I keep the disk from filling...
"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> The count is only for a 10-second snapshot. Plus, remember there are
> about 1000 users running, so the connection being profiled only gets
> 0.01 of the period on CPU. And the count is for that CONNECTION only.
OK, that explains the low absolute...
On Thu, 2007-07-26 at 09:18 -0700, Brandon Shalton wrote:
> Hello all,
>
> My hard disk is filling up in the /base directory to where it has consumed
> all 200 GB of that drive.
>
> All the posts that I see keep saying move to a bigger drive, but at some
> point a bigger drive would just get consumed...
The count is only for a 10-second snapshot. Plus, remember there are
about 1000 users running, so the connection being profiled only gets
0.01 of the period on CPU. And the count is for that CONNECTION only.
Anyway, using the lock wait script shows the real picture, as you
requested. Here t...
In response to "Brandon Shalton" <[EMAIL PROTECTED]>:
> Hello all,
>
> My hard disk is filling up in the /base directory to where it has consumed
> all 200 GB of that drive.
>
> All the posts that I see keep saying move to a bigger drive, but at some
> point a bigger drive would just get consumed...
Hello all,
My hard disk is filling up in the /base directory to where it has consumed
all 200 GB of that drive.
All the posts that I see keep saying move to a bigger drive, but at some
point a bigger drive would just get consumed.
How can I keep the disk from filling up other than getting, like...
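An aside on the question above (not from the thread): before reaching for a
bigger disk, it helps to see which relations are actually consuming the space
under base/ and whether plain VACUUM (or autovacuum) is keeping dead-row bloat
in check. A minimal sketch, assuming a release that has pg_size_pretty() and
pg_total_relation_size() (8.1 or later):

-- Ten largest tables by total on-disk size (heap + indexes + TOAST).
SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;

-- Reclaim reusable space from dead rows without rewriting the tables.
VACUUM VERBOSE ANALYZE;

If the big tables keep growing while their live row counts do not, the space
is usually dead rows that more frequent vacuuming would recycle.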
Hi all,
What are the benefits of replication using Slony in
PostgreSQL?
My office is spread over several different places; there are
hundreds of branch offices in different
locations, so can anyone help me replicate our database
using Slony? And why Slony?
thanks,
Algebra.corp
Bayu