Jeff Frost <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Huh. That does sound like it's a version-to-version difference.
>> There's nothing in the CVS log that seems related though. Are you
>> willing to post your test case?
>
> It's a customer DB, so I'll contact them and see if we can boil [...]
Jeff Frost <[EMAIL PROTECTED]> writes:
> Any idea why I don't see it on 8.3.4?

I think it's more likely some small difference in your test conditions
than any real version-to-version difference. In particular I think the
"still see" test might be influenced by the ages of transactions running
concurrently [...]
Tom Lane wrote:
> Okay. What that means is that the indexes were created on data that had
> already been inserted and updated to some extent, resulting in
> HOT-update chains that turned out to be illegal for the new indexes.
> The way we deal with this is to mark the indexes as not usable by any
> [...]
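The failure mode Tom describes can be reproduced with a sketch along these lines (table and index names are invented for illustration, not from the thread): the rows are inserted and HOT-updated inside the same transaction that later builds the index, so the index is created over HOT-update chains and flagged with indcheckxmin:

```sql
BEGIN;
CREATE TABLE demo (id int, val text);
INSERT INTO demo SELECT g, 'x' FROM generate_series(1, 1000) g;
-- No indexes exist yet, so these updates can be HOT updates:
UPDATE demo SET val = 'y' WHERE id % 2 = 0;
-- Index built over the existing HOT-update chains:
CREATE INDEX demo_id_idx ON demo (id);
COMMIT;

-- indcheckxmin will typically come out true here, and the planner may
-- ignore the index until sufficiently old transactions have completed:
SELECT indcheckxmin FROM pg_index
WHERE indexrelid = 'demo_id_idx'::regclass;
```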
Jeff Frost <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> This may be a HOT side-effect ... is pg_index.indcheckxmin set for
>> the index?
>
> Yep, sure enough, the 'act' table's indexes have it set and jefftest and
> jefftest2's indexes do not.

Okay. What that means is that the indexes were [...]
Jeff Frost <[EMAIL PROTECTED]> writes:
> I've run across a strange problem with PG 8.3.3 not using indexes on a
> particular table after building the table during a transaction.

This may be a HOT side-effect ... is pg_index.indcheckxmin set for
the index?

			regards, tom lane
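The flag Tom asks about lives in the pg_index system catalog; it can be inspected with a query along these lines (a sketch — the table name 'act' is taken from the thread, adjust as needed):

```sql
-- List each index on the table and whether its use is deferred until the
-- index-creating transaction drops out of other snapshots (indcheckxmin):
SELECT c.relname AS index_name, i.indcheckxmin
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE i.indrelid = 'act'::regclass;
```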
I've run across a strange problem with PG 8.3.3 not using indexes on a
particular table after building the table during a transaction.

You can see a transcript of the issue here:
http://gist.github.com/21154

Interestingly, if I create another temp table 'CREATE TEMP TABLE AS
SELECT * FROM act' a[...]
On Thu, Oct 30, 2008 at 3:41 PM, Oliver Johnson
<[EMAIL PROTECTED]> wrote:
> Another thing to note, we have VACUUM ANALYZE running on an hourly
> interval and the switch from CPU to IO wait appears to always coincide
> with a vacuum.
Why are you not using autovacuum with appropriate wait parameters [...]
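The suggestion above amounts to letting autovacuum run continuously but throttled, instead of a large hourly VACUUM ANALYZE that saturates the disks. A possible 8.3-era postgresql.conf fragment (values are illustrative, not from the thread):

```
autovacuum = on
autovacuum_naptime = 1min              # check for vacuum work every minute
autovacuum_vacuum_cost_delay = 20ms    # sleep between bursts of vacuum I/O
autovacuum_vacuum_cost_limit = 200     # work allowed before each sleep
```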
On Thursday 30 October 2008, "Oliver Johnson" <[EMAIL PROTECTED]>
wrote:
> Another thing to note, we have VACUUM ANALYZE running on an hourly
> interval and the switch from CPU to IO wait appears to always coincide
> with a vacuum.
>
> What might cause this shift?
The extra disk access caused by [...]
Hi everybody,
I am running a bake/load test and I am seeing sudden, daily shifts
from CPU utilization to IO wait. The load harness has been running
for 3 weeks and should be putting a uniform load on the application.
The application processes data on a daily basis and a sawtooth CPU
pattern on th[...]
Thanks guys,

Lots of info here that I didn't know about! Since I have one of the
latest OpenSolaris builds, I guess the write throttle feature is
already in there. Sadly, the blog doesn't say what build has it
included.

For writes, I do everything synchronized because we really need a
c[...]
On Thu, Oct 30, 2008 at 9:55 AM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> On Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote:
>>> Remember that PostgreSQL doesn't cache anything on its own so
>>> if you do want to hit disk it has to be in file cache.

If you do very large aggregates, you may need even 1GB of work_mem.
However, a setting that high would require very careful tuning and reduction
of space used by shared_buffers and the ZFS ARC. It's dangerous since each
connection with a large aggregate or sort may consume a lot of memory [...]
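Scott's caution about per-connection memory suggests raising work_mem only for the session or query that needs it, rather than globally; for example (table and column names are hypothetical):

```sql
SET work_mem = '1GB';            -- affects only this session
SELECT customer_id, sum(amount)  -- the large aggregate in question
FROM orders
GROUP BY customer_id;
RESET work_mem;                  -- back to the server default
```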
Joshua D. Drake wrote:
> However that isn't the exact same thing as a "cache" at least as I was
> trying to describe it. shared buffers are used to keep track of pages
> (as well as some other stuff) and their current status. That is not the
> same as caching a relation.
Um, having a page in shared buffers [...]
Hi Scott,

Thanks for the clear answers!

Scott Carey wrote:
> You must either increase the memory that ZFS uses, or increase
> PostgreSQL shared_buffers and work_mem to get the aggregate of the two
> to use more RAM.
>
> I believe that you have not told ZFS to reserve 8GB, but rather told
> it to limit itself [...]
On Thu, 2008-10-30 at 09:46 -0700, Scott Carey wrote:
> Remember that PostgreSQL doesn't cache anything on its own so
> if you do want to hit disk it has to be in file cache.
>
> By my understanding, this is absolutely false. Postgres caches pages
> from tables [...]
You must either increase the memory that ZFS uses, or increase PostgreSQL
shared_buffers and work_mem to get the aggregate of the two to use more RAM.

I believe that you have not told ZFS to reserve 8GB, but rather told it to
limit itself to 8GB.

Some comments below:

On Thu, Oct 30, 2008 at 8:15 AM, [...]
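On OpenSolaris, the distinction Scott draws (a ceiling on the ARC, not a reservation) is set via the standard ZFS tunable in /etc/system; a sketch using the 8 GB figure from the thread:

```
* /etc/system -- cap (not reserve) the ZFS ARC at 8 GB; takes effect on reboot
set zfs:zfs_arc_max = 8589934592
```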
Joshua D. Drake wrote:
> PostgreSQL is only going to use what it needs. It relies on the OS for
> much of the caching etc...

So that would actually mean that I could raise the setting of the ARC
cache to far more than 8 GB? As I said, our database is 250 GB, so I
would expect that postgres needs [...]
Christiaan Willemsen wrote:
> Hi there,
>
> The problem is getting it to use that much. At the moment, it only uses
> almost 9 GB, so by far not enough. The problem is getting it to use
> more... I hope you can help me with a working config.

PostgreSQL is only going to use what it needs. It relies on the OS for
much of the caching etc...
Hi,

you could set effective_cache_size to a high value (free memory on your
server that is used for caching).

Christiaan Willemsen wrote:
> Hi there,
>
> I configured OpenSolaris on our OpenSolaris Machine. Specs:
>
> 2x Quad 2.6 Ghz Xeon
> 64 GB of memory
> 16x 15k5 SAS
>
> The filesystem is configured using [...]
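For a 64 GB machine like the one described, the effective_cache_size advice might translate into a postgresql.conf fragment like this (values are illustrative guesses, not from the thread; effective_cache_size is only a planner hint and allocates nothing):

```
shared_buffers = 8GB           # memory PostgreSQL manages itself
effective_cache_size = 48GB    # rough estimate of OS/ARC cache available
```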
Hi there,

I configured OpenSolaris on our OpenSolaris Machine. Specs:

2x Quad 2.6 Ghz Xeon
64 GB of memory
16x 15k5 SAS

The filesystem is configured using ZFS, and I think I have found a
configuration that performs fairly well.

I installed the standard PostgreSQL that came with the OpenSolaris [...]