Greg Smith wrote:
> The biggest downside of [MVCC] is that if you have an old client
> lingering around, things that happened in the database after it started
> can't be cleaned up.
Just to clarify for readers: Idle clients aren't generally an issue.
It's only clients that are idle with an open t
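Those idle-in-transaction sessions can be spotted from the server side. A minimal diagnostic sketch for 8.3, where such backends report a literal '<IDLE> in transaction' query string:

```sql
-- List backends that are idle inside an open transaction (PostgreSQL 8.3).
-- Long-running entries here are what hold back vacuum.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;
```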
Dave Crooke wrote:
Since the Jackrabbit tables are in the same namespace / user / schema
as ours, am I right in thinking that this is effectively blocking the
entire auto-vacuum system from doing anything at all?
Yes, but the problem is actually broader than that: it wouldn't matter
if it wa
Merlin Moncure wrote:
> Postgres can handle multiple thousands of inserts/sec, but your hardware
> most likely can't handle multiple thousands of transactions/sec if fsync
> is on.
commit_delay or async commit should help a lot there.
http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
http://www.postgre
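The settings behind that link can be sketched as a postgresql.conf fragment (values are illustrative, not tuned recommendations):

```
# Asynchronous commit: COMMIT returns before the WAL reaches disk.
# A crash can lose the last few hundred ms of committed work,
# but cannot corrupt the database.
synchronous_commit = off
wal_writer_delay = 200ms    # how often the WAL writer flushes (default)

# Alternative that keeps full durability: group nearby commits into
# one fsync. Only helps under genuinely concurrent commit load.
#commit_delay = 1000        # microseconds to wait for companion commits
#commit_siblings = 5        # other active transactions required to wait
```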
Thanks folks for the quick replies.
1. There is one transaction, connected from the JVM, that is showing
"IDLE in transaction"; this appears to be a leftover from Hibernate
looking at the schema metadata. It's Apache Jackrabbit, not our own
code:
hyper9test_1_6=# select c.relname, l.* from pg
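The query is cut off above; a common full form for this kind of lock inspection (a hypothetical completion, not necessarily the original) is:

```sql
-- Show which relations are locked, by which backend, and whether granted.
SELECT c.relname, l.pid, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
ORDER BY c.relname;
```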
On Wed, Nov 4, 2009 at 2:18 PM, Dave Crooke wrote:
> 2. If there is a hanging transaction, what's the best way to trace it from
> the PG end? Client is classic Java (Spring / Hibernate / Apache DBCP) if
> that matters.
Last place I worked we had the same issue and it was in our jdbc
settings or
Dave Crooke wrote:
> I'm not defending the decision to store blobs in a database (it was
> taken a while ago, before the need for frequent updates of the XML)
> and it isn't something that can be readily changed at short notice,
> so please no advice about "don't do that" :-)
I wouldn't sweat
Hi folks
I had a couple of semi-newbie questions about this, which I couldn't find
obvious answers to in the archives ... we are using Postgres 8.3, and the
behaviour is the same across Windows and Linux.
I am working with an app which, among other things, stores XML files (average
about 50KB in s
Brian Karlak writes:
> On Nov 4, 2009, at 8:47 AM, Jeff Janes wrote:
>> Why would the index scan take 1 ms two of the times it is done but 5ms
>> the third time? Isn't it the same index scan each time? Or does the
>> change in queue.status change the plan?
> The final update is a different quer
On Nov 4, 2009, at 8:47 AM, Jeff Janes wrote:
Worker daemons do a bounded, ordered, limited SELECT to grab a row,
which they lock by setting a value in the queue.status column.
So you do a select, and then an update?
I do a select for update in a stored proc:
FOR queue_item IN
SELECT *
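The claim step quoted above can be sketched as a PL/pgSQL function; the table and column names (queue, id, status) are assumptions, not the poster's actual schema:

```sql
-- Sketch: claim the oldest pending row, assuming queue(id, status).
-- FOR UPDATE makes competing workers block rather than double-claim.
CREATE OR REPLACE FUNCTION claim_one() RETURNS integer AS $$
DECLARE
    queue_item queue%ROWTYPE;
BEGIN
    FOR queue_item IN
        SELECT * FROM queue
        WHERE status = 'pending'
        ORDER BY id
        LIMIT 1
        FOR UPDATE
    LOOP
        UPDATE queue SET status = 'claimed' WHERE id = queue_item.id;
        RETURN queue_item.id;
    END LOOP;
    RETURN NULL;  -- queue was empty, or we lost the race
END;
$$ LANGUAGE plpgsql;
```

Under READ COMMITTED, a worker that loses the race gets no row back (the FOR UPDATE recheck filters out the now-claimed row) and should simply retry.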
On Nov 3, 2009, at 9:31 PM, Tom Lane wrote:
Brian Karlak writes:
My question is this: is there some way that I can keep a cursor /
pointer / reference / whatever to the row I fetched originally, so
that I don't have to search for it again when I'm ready to write
results?
If you don't expect
> I have an application wherein a process needs to read data from a stream and
> store the records for further analysis and reporting. The data in the stream
> is in the form of variable length records with clearly defined fields – so
> it can be stored in a database or in a file. The only caveat i
On Tue, Nov 3, 2009 at 12:30 PM, Brian Karlak wrote:
> Hello All --
>
> I have a simple queuing application written on top of postgres which I'm
> trying to squeeze some more performance out of.
>
> The setup is relatively simple: there is a central queue table in postgres.
> Worker daemons do a
On Tue, Nov 3, 2009 at 7:12 PM, Jay Manni wrote:
> Hi:
>
> I have an application wherein a process needs to read data from a stream and
> store the records for further analysis and reporting.
Where is the stream coming from? What happens if the process reading
the stream fails but the one ge
Jay Manni wrote:
The data in the stream is in the form of variable length records with
clearly defined fields – so it can be stored in a database or in a
file. The only caveat is that the rate of records coming in the stream
could be several thousand records a second.
There are a few limits to be
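At several thousand records a second, the usual first step is to amortize the per-commit fsync by batching many records per transaction, or by using COPY. A sketch with hypothetical table and column names:

```sql
-- Batch many rows per transaction instead of one commit per record.
BEGIN;
INSERT INTO stream_records (recorded_at, payload) VALUES
    ('2009-11-03 19:12:00', 'rec 1'),
    ('2009-11-03 19:12:01', 'rec 2');  -- ...thousands of rows per batch
COMMIT;

-- Fastest bulk path: COPY streams rows with minimal per-row overhead.
COPY stream_records (recorded_at, payload) FROM STDIN;
```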
On Tue, Nov 3, 2009 at 10:12 PM, Jay Manni wrote:
> Hi:
>
> I have an application wherein a process needs to read data from a stream and
> store the records for further analysis and reporting. The data in the stream
> is in the form of variable length records with clearly defined fields – so
> it c
Jeremy Harris wrote:
On 11/03/2009 07:16 PM, Subbiah Stalin-XCGF84 wrote:
All,
I'm trying to understand the free memory usage and why it falls below
17G sometimes and what could be causing it. Any pointers would be
appreciated.
r...@prod1 # prtconf
System Configuration: Sun Microsystems sun4