Tom Lane wrote:
> Martijn van Oosterhout writes:
> > That's why people suggest partitions. Then you only vacuum the
> > partitions that are new and the old ones never need to be touched...
>
> This will all work a lot better once we track XID wraparound risk on a
> per-table rather than per-database basis.
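For reference, the per-database bookkeeping being contrasted here is visible in the catalogs; a minimal check of wraparound age per database, using only the long-standing pg_database.datfrozenxid column:

    SELECT datname,                        -- database name
           age(datfrozenxid) AS xid_age    -- how far the database-wide freeze cutoff lags the current XID
    FROM pg_database
    ORDER BY xid_age DESC;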
Jim C. Nasby wrote:
> On Fri, Jun 09, 2006 at 06:20:21PM -0700, Trent Shipley wrote:
> > Ideally, the transaction management system would be proportional to the
> > marginal change in size of the database rather than the gross size of the
> > database. That is, VACUUM being O(N) should be replaced
Martijn van Oosterhout writes:
> That's why people suggest partitions. Then you only vacuum the
> partitions that are new and the old ones never need to be touched...
This will all work a lot better once we track XID wraparound risk on a
per-table rather than per-database basis. I hope that will
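A minimal sketch of the partition-per-period scheme being suggested, in the 8.1-era style of table inheritance plus CHECK constraints (table and column names here are hypothetical):

    -- parent stays empty; each child holds one month of frames
    CREATE TABLE frames (
        frame_time  timestamptz NOT NULL,
        camera_id   integer     NOT NULL,
        payload     bytea
    );

    CREATE TABLE frames_2006_06 (
        CHECK (frame_time >= '2006-06-01' AND frame_time < '2006-07-01')
    ) INHERITS (frames);

    -- with constraint_exclusion = on, queries against frames skip irrelevant
    -- children, and routine vacuuming only has to touch the partition that is
    -- still receiving writes:
    VACUUM ANALYZE frames_2006_06;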
On Fri, Jun 09, 2006 at 06:20:21PM -0700, Trent Shipley wrote:
> VACUUM needs to be run for two reasons.
> 1) To recover the transaction counter.
> 2) To recover records marked for deletion.
>
> VACUUM needs to be run over the entire database. If the data in the database
> is N, then VACUUM is O(N).
On Fri, Jun 09, 2006 at 06:20:21PM -0700, Trent Shipley wrote:
> Ideally, the transaction management system would be proportional to the
> marginal change in size of the database rather than the gross size of the
> database. That is, VACUUM being O(N) should be replaced (or there should be
> an
On Tuesday 2006-06-06 20:11, Mark Woodward wrote:
> > Mark Woodward wrote:
> >> OK, here's my problem, I have a nature study where we have about 10 video
> >> cameras taking 15 frames per second.
> >> For each frame we make a few transactions on a PostgreSQL database.
> >
> > Maybe if you grouped
On Thu, 2006-06-08 at 12:09, Tom Lane wrote:
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > If the aim is to *only* avoid transaction wraparound, then maybe we
> > could introduce VACUUM FREEZE ONLY; which never removes any old tuples,
> > but instead just marks them by setting xmin=xmax for them
On Thu, Jun 08, 2006 at 09:13:33AM -0700, Joshua D. Drake wrote:
> Tom Lane wrote:
> >"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> >>I was recently involved in a project where we had to decrease the
> >>checkpoint_timeout. The problem was that the database was performing
> >>so many transactions
Tom Lane wrote:
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
I was recently involved in a project where we had to decrease the
checkpoint_timeout. The problem was that the database was performing
so many transactions that if we waited for 5 minutes, a checkpoint would
take entirely too long.
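The knobs being discussed, as they appear in postgresql.conf on an 8.1-era server; the values shown are only illustrative, not recommendations:

    checkpoint_timeout  = 300    # seconds between automatic checkpoints (the 5 minutes above)
    checkpoint_segments = 32     # WAL segments allowed between automatic checkpoints
    checkpoint_warning  = 30     # log a warning if checkpoints come closer together than this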
Hannu Krosing <[EMAIL PROTECTED]> writes:
> If the aim is to *only* avoid transaction wraparound, then maybe we
> could introduce VACUUM FREEZE ONLY; which never removes any old tuples,
> but instead just marks them by setting xmin=xmax for them, in addition
> to its freezing of live-and-visible-to
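For reference, FREEZE by itself is existing VACUUM syntax; the proposal above is a variant that would do the freezing without reclaiming dead tuples. The form that already exists looks like this (table name hypothetical):

    VACUUM FREEZE frames_2006_06;  -- freezes old XIDs, but still scans for and removes dead tuples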
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> I was recently involved in a project where we had to decrease the
> checkpoint_timeout. The problem was that the database was performing
> so many transactions that if we waited for 5 minutes, a checkpoint would
> take entirely too long.
Seems li
Mark Woodward wrote:
On Wed, Jun 07, 2006 at 07:07:55PM -0400, Mark Woodward wrote:
I guess what I am saying is that PostgreSQL isn't "smooth"; between
checkpoints and vacuum, it is nearly impossible to make a product that
performs consistently under high load.
Have you tuned the bgwriter and all the vacuum_cost settings?
> On Wed, Jun 07, 2006 at 07:07:55PM -0400, Mark Woodward wrote:
>> I guess what I am saying is that PostgreSQL isn't "smooth"; between
>> checkpoints and vacuum, it is nearly impossible to make a product that
>> performs consistently under high load.
>
> Have you tuned the bgwriter and all the vacuum_cost settings?
On Wed, 2006-06-07 at 17:45, Jim C. Nasby wrote:
> Plus, if the only issue here is in fact the long-running transaction for
> vacuum, there's other ways to address that which would be a lot less
> intrusive than doing something like going to 64 bit XIDs. IIRC, in 8.2
> vacuum wi
Tom Lane wrote:
> Zdenek Kotala <[EMAIL PROTECTED]> writes:
>> Koichi Suzuki wrote:
>>> I've once proposed a patch for 64bit transaction ID, but this causes
>>> some overhead to each tuple (XMIN and XMAX).
>
>> Did you check performance on 32-bit or 64-bit systems and 64-bit binary
> >> version of PGSQL?
On Wed, Jun 07, 2006 at 07:07:55PM -0400, Mark Woodward wrote:
> I guess what I am saying is that PostgreSQL isn't "smooth"; between
> checkpoints and vacuum, it is nearly impossible to make a product that
> performs consistently under high load.
Have you tuned the bgwriter and all the vacuum_cost settings?
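The settings being asked about, again as illustrative postgresql.conf entries for an 8.1-era server:

    # background writer: trickle dirty buffers to disk between checkpoints
    bgwriter_delay        = 200    # milliseconds between rounds
    bgwriter_lru_percent  = 1.0    # share of recently-used buffers scanned per round
    bgwriter_lru_maxpages = 5      # cap on buffers written per round

    # cost-based vacuum delay: throttle VACUUM so it does not swamp foreground I/O
    vacuum_cost_delay = 10         # milliseconds to sleep once the cost limit is reached
    vacuum_cost_limit = 200        # accumulated cost that triggers a sleep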
> On Wed, Jun 07, 2006 at 11:47:45AM -0400, Tom Lane wrote:
>> Zdenek Kotala <[EMAIL PROTECTED]> writes:
>> > Koichi Suzuki wrote:
>> >> I've once proposed a patch for 64bit transaction ID, but this causes
>> >> some overhead to each tuple (XMIN and XMAX).
>>
>> > Did you check performance on 32-bit
On Thu, Jun 08, 2006 at 01:30:42AM +0300, Hannu Krosing wrote:
> On Tue, 2006-06-06 at 13:53, Christopher Browne wrote:
> > >> We have triggers that fire if something interesting is found on insert.
> > >> We want this thing to run for a long time.
> > >> From the numbers, you
On Tue, 2006-06-06 at 13:53, Christopher Browne wrote:
> >> We have triggers that fire if something interesting is found on insert.
> >> We want this thing to run for a long time.
> >> From the numbers, you can see the PostgreSQL database is VERY loaded.
> >> Running VACUUM may no
On Wed, Jun 07, 2006 at 11:47:45AM -0400, Tom Lane wrote:
> Zdenek Kotala <[EMAIL PROTECTED]> writes:
> > Koichi Suzuki wrote:
> >> I've once proposed a patch for 64bit transaction ID, but this causes
> >> some overhead to each tuple (XMIN and XMAX).
>
> > Did you check performance on 32-bit or 64-bit
Zdenek Kotala <[EMAIL PROTECTED]> writes:
> Koichi Suzuki wrote:
>> I've once proposed a patch for 64bit transaction ID, but this causes
>> some overhead to each tuple (XMIN and XMAX).
> Did you check performance on 32-bit or 64-bit systems and 64-bit binary
> version of PGSQL? I think that today
On Wed, Jun 07, 2006 at 01:48:50PM +0200, Zdenek Kotala wrote:
> Koichi Suzuki wrote:
> >I've once proposed a patch for 64bit transaction ID, but this causes
> >some overhead to each tuple (XMIN and XMAX). Pgbench with 64bit
> >transaction ID has to pay about a couple of percent of performance.
Koichi Suzuki wrote:
> I've once proposed a patch for 64bit transaction ID, but this causes
> some overhead to each tuple (XMIN and XMAX). Pgbench with 64bit
> transaction ID has to pay about a couple of percent of performance. If
> 64bit transaction ID is a reasonable fix, I've already posted this patch.
Koichi Suzuki wrote:
I've once proposed a patch for 64bit transaction ID, but this causes
some overhead to each tuple (XMIN and XMAX). Pgbench with 64bit
transaction ID has to pay about a couple of percent of performance.
If 64bit transaction ID is a reasonable fix, I've already posted this patch.
I've once proposed a patch for 64bit transaction ID, but this causes
some overhead to each tuple (XMIN and XMAX). Pgbench with 64bit
transaction ID has to pay about a couple of percent of performance. If
64bit transaction ID is a reasonable fix, I've already posted this
patch. Anyone can
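Beyond the pgbench numbers, the storage side of that overhead is easy to estimate: widening XMIN and XMAX from 4 to 8 bytes each adds at least 8 bytes to every tuple header, before any alignment padding. A rough per-table check (table name hypothetical; reltuples is only the planner's estimate):

    SELECT relname,
           reltuples,                                        -- estimated row count
           round(reltuples * 8 / (1024 * 1024)) AS extra_mb  -- ~8 extra header bytes per tuple
    FROM pg_class
    WHERE relname = 'frames';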
> Mark Woodward wrote:
>> OK, here's my problem, I have a nature study where we have about 10 video
>> cameras taking 15 frames per second.
>> For each frame we make a few transactions on a PostgreSQL database.
>
> Maybe if you grouped multiple operations into bigger transactions, the I/O
> savings
Rod Taylor wrote:
> With one potential snafu -- it blocks new SELECTs against the parent
> table while truncate runs on the child (happens with constraint
> exclusion as well).
>
> If your transactions are short, then it won't be an issue. If you have
> mixed-length transactions (many short which
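The rotation in question, kept to a minimal transaction so the lock that blocks SELECTs on the parent is held as briefly as possible (table name hypothetical):

    BEGIN;
    TRUNCATE TABLE frames_2005_06;  -- blocks new SELECTs on the parent for the duration
    COMMIT;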
On Tue, 2006-06-06 at 13:53 -0400, Christopher Browne wrote:
> Clinging to sanity, [EMAIL PROTECTED] (Hannu Krosing) mumbled into her beard:
> > On Tue, 2006-06-06 at 08:42, Mark Woodward wrote:
> >> OK, here's my problem, I have a nature study where we have about 10 video
> >> cameras taking 15 frames per second.
Clinging to sanity, [EMAIL PROTECTED] (Hannu Krosing) mumbled into her beard:
> On Tue, 2006-06-06 at 08:42, Mark Woodward wrote:
>> OK, here's my problem, I have a nature study where we have about 10 video
>> cameras taking 15 frames per second.
>> For each frame we make a few transactions on a PostgreSQL database.
Mark Woodward wrote:
> OK, here's my problem, I have a nature study where we have about 10 video
> cameras taking 15 frames per second.
> For each frame we make a few transactions on a PostgreSQL database.
Maybe if you grouped multiple operations into bigger transactions, the I/O
savings could be en
On Tue, 2006-06-06 at 08:42, Mark Woodward wrote:
> OK, here's my problem, I have a nature study where we have about 10 video
> cameras taking 15 frames per second.
> For each frame we make a few transactions on a PostgreSQL database.
> We want to keep about a year's worth of data
On Tue, 6 Jun 2006, Mark Woodward wrote:
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
I would suggest doing all the inserts of one frame in one transaction.
Ma
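A minimal sketch of that batching, one transaction per captured frame (the frames and frame_events tables are hypothetical):

    BEGIN;
    INSERT INTO frames (camera_id, frame_time, payload)
         VALUES (7, now(), NULL);
    INSERT INTO frame_events (camera_id, frame_time, event)
         VALUES (7, now(), 'motion detected');
    -- ...any other per-frame rows...
    COMMIT;  -- one WAL flush for the whole frame instead of one per statement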
OK, here's my problem, I have a nature study where we have about 10 video
cameras taking 15 frames per second.
For each frame we make a few transactions on a PostgreSQL database.
We want to keep about a year's worth of data at any specific time.
We have triggers that fire if something interesting is found on insert.