"Vadim Mikheev" <[EMAIL PROTECTED]> writes:
> Anyway, the deadlocks in my tests are strongly correlated with new log
> file creation - something is probably still wrong...
Well, if you can reproduce it easily, seems like you could get in there
and verify or disprove my theory about where the deadlock is.
> At this point I must humbly say "yes, you told me so", because if I
No, I didn't - I must humbly say that I didn't foresee this deadlock,
so "I didn't tell you so" -:)
Anyway, the deadlocks in my tests are strongly correlated with new log
file creation - something is probably still wrong...
Vadim
At 21:24 17/03/01 -0500, Tom Lane wrote:
>Philip Warner <[EMAIL PROTECTED]> writes:
>> Looking at Tatsuo's original message, it looks like the lowest level call was:
>> #0 0x804abd4 in _enableTriggersIfNecessary (AH=0x8057d30, te=0x0,
>> ropt=0x8057c90) at pg_backup_archiver.c:474
>
>> which probably has nothing to do with BLOBs.
"Mikheev, Vadim" <[EMAIL PROTECTED]> writes:
> And you know - I've run the same tests on the ~ Mar 9 snapshot
> without any problems.
Oh, I see it:
Process A is doing GetSnapShotData. It holds SInvalLock and calls
ReadNewTransactionId, which wants XidGenLockId.
Process B is doing GetNewTransactionId.
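For illustration only, here is a minimal C sketch of that kind of lock-ordering
deadlock, using plain pthread mutexes rather than the backend's own lock
primitives. The names sinval_lock and xidgen_lock just stand in for SInvalLock
and XidGenLockId, and the assumption that the second process takes the two
locks in the opposite order is inferred from the description above, not taken
from the source.

/* Two threads take the same pair of locks in opposite order: each ends up
 * holding one lock and waiting forever for the other.  Compile with
 * -lpthread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t sinval_lock = PTHREAD_MUTEX_INITIALIZER;  /* stand-in for SInvalLock   */
static pthread_mutex_t xidgen_lock = PTHREAD_MUTEX_INITIALIZER;  /* stand-in for XidGenLockId */

static void *process_a(void *arg)          /* like GetSnapshotData */
{
    (void) arg;
    pthread_mutex_lock(&sinval_lock);      /* holds SInvalLock ...        */
    sleep(1);                              /* widen the race window       */
    pthread_mutex_lock(&xidgen_lock);      /* ... then wants XidGenLockId */
    pthread_mutex_unlock(&xidgen_lock);
    pthread_mutex_unlock(&sinval_lock);
    return NULL;
}

static void *process_b(void *arg)          /* like GetNewTransactionId (assumed ordering) */
{
    (void) arg;
    pthread_mutex_lock(&xidgen_lock);      /* holds XidGenLockId ...              */
    sleep(1);
    pthread_mutex_lock(&sinval_lock);      /* ... then wants SInvalLock: deadlock */
    pthread_mutex_unlock(&sinval_lock);
    pthread_mutex_unlock(&xidgen_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);                 /* never returns: both threads are stuck */
    pthread_join(b, NULL);
    puts("not reached");
    return 0;
}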
At 20:57 17/03/01 -0500, Tom Lane wrote:
>Philip Warner <[EMAIL PROTECTED]> writes:
>> At 12:31 17/03/01 -0500, Tom Lane wrote:
>>> This would be a lot simpler and cleaner if _PrintData() simply didn't
>>> append a zero byte to the buffer contents. Philip, is it actually
>>> necessary for it to do that?
Looking at Tatsuo's original message, it looks like the lowest level call was:
#0 0x804abd4 in _enableTriggersIfNecessary (AH=0x8057d30, te=0x0,
ropt=0x8057c90) at pg_backup_archiver.c:474
which probably has nothing to do with BLOBs. I think it's a different
problem entirely, caused by a mi
At 21:08 17/03/01 -0500, Tom Lane wrote:
>Philip Warner <[EMAIL PROTECTED]> writes:
>>> Considering that the data we are working with is binary, and may contain
>>> nulls, any code that insisted on null-termination would probably be ipso
>>> facto broken.
>
>> But we're not; this is the same code that sends the COPY output back to PG.
Philip Warner <[EMAIL PROTECTED]> writes:
> Looking at Tatsuo's original message, it looks like the lowest level call was:
> #0 0x804abd4 in _enableTriggersIfNecessary (AH=0x8057d30, te=0x0,
> ropt=0x8057c90) at pg_backup_archiver.c:474
> which probably has nothing to do with BLOBs.
Oh ...
Philip Warner <[EMAIL PROTECTED]> writes:
>> Oh, isn't this the code that pushes large-object bodies around? I
>> should think the problem would've been noticed much sooner if not...
> It does both, which is why I was also surprised.
Hmm ... digging through the code, it does look like one of th
Tom Lane wrote:
> Samuel Sieb <[EMAIL PROTECTED]> writes:
> > Just as another suggestion, what about sending the data to a different
> > computer, so instead of tying up the database server with processing the
> > statistics, you have another computer that has some free time to do the
> > processing.
Philip Warner <[EMAIL PROTECTED]> writes:
>> Considering that the data we are working with is binary, and may contain
>> nulls, any code that insisted on null-termination would probably be ipso
>> facto broken.
> But we're not; this is the same code that sends the COPY output back to PG.
Oh, isn't this the code that pushes large-object bodies around? I
should think the problem would've been noticed much sooner if not...
Philip Warner <[EMAIL PROTECTED]> writes:
> At 12:31 17/03/01 -0500, Tom Lane wrote:
>> This would be a lot simpler and cleaner if _PrintData() simply didn't
>> append a zero byte to the buffer contents. Philip, is it actually
>> necessary for it to do that?
> Strictly, I think the answer is that it is not necessary.
At 12:31 17/03/01 -0500, Tom Lane wrote:
>
>This would be a lot simpler and cleaner if _PrintData() simply didn't
>append a zero byte to the buffer contents. Philip, is it actually
>necessary for it to do that?
>
Strictly, I think the answer is that it is not necessary. The output of the
uncompr
Thanks, at least the problem I reported seems to be gone after I
applied your patch.
--
Tatsuo Ishii
> After looking more closely I see that pg_restore has two different
> buffer overrun conditions in this one routine. Attached is take two
> of my patch.
>
> This would be a lot simpler and cleaner if _PrintData() simply didn't
> append a zero byte to the buffer contents.
I checked README.intarray to see what the most recent date was, and it
was Jan 10, so I knew that this version was newer. I then did a diff -c
against the current CVS and I didn't see anything unusual in the
changes.
Attached is the CVS diff command line showing me all the changes made:
cvs
Bruce Momjian writes:
> Installed in CVS. Thanks.
You overwrote the changes that other people had made in the meantime.
>
> > Mark,
> >
> > we prepared new version of contrib-intarray -
> > index support for 1-D integer arrays using GiST.
> >
> > Changes:
> >
> >
> > - Improved regression test
> >
Installed in CVS. Thanks.
> Mark,
>
> we prepared new version of contrib-intarray -
> index support for 1-D integer arrays using GiST.
>
> Changes:
>
>
> - Improved regression test
> - Current implementation provides index support for one-dimensional
> array of int4's - gist__int_ops, s
Seems we have an older version in CVS. I will update it now. I assume
/contrib is available for changes up until release, as usual.
> Mark,
>
> we prepared new version of contrib-intarray -
> index support for 1-D integer arrays using GiST.
>
> Changes:
>
>
> - Improved regression test
> -
> > But per-table stats aren't something that people will look at often,
> > right? They can sit in the collector's memory for quite a while. I can see
> > people wanting to look at per-backend stuff frequently, and that is why
> > I thought shared memory would be good, and a global area for aggregate
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Even better, have an SQL table updated with the per-table stats
> periodically.
>>
>> That will be horribly expensive, if it's a real table.
> But per-table stats aren't something that people will look at often,
> right? They can sit in the collector's memory for quite a while.
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > The only open issue is per-table stuff, and I would like to see some
> > circular buffer implemented to handle that, with a collection process
> > that has access to shared memory.
>
> That will get us into locking/contention issues. OTOH, frequent
Bruce Momjian <[EMAIL PROTECTED]> writes:
> The only open issue is per-table stuff, and I would like to see some
> circular buffer implemented to handle that, with a collection process
> that has access to shared memory.
That will get us into locking/contention issues. OTOH, frequent trips
to th
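As a rough sketch of the circular-buffer idea being discussed (not anything
from the tree - the struct layout, sizes, and the pthread mutex standing in
for a shared-memory spinlock are all assumptions made up for the example):
backends push fixed-size per-table messages at the head, a single collector
drains from the tail, and a full ring simply drops messages so writers never
block.

#include <pthread.h>
#include <stdbool.h>

#define STAT_RING_SIZE 1024                /* arbitrary size for the sketch */

typedef struct
{
    unsigned int  table_oid;               /* which table the counters belong to */
    unsigned long tuples_read;
    unsigned long tuples_written;
} TableStatMsg;

typedef struct
{
    pthread_mutex_t lock;                  /* the contention point being debated */
    int             head;                  /* next slot to write                 */
    int             tail;                  /* next slot to read                  */
    TableStatMsg    slots[STAT_RING_SIZE];
} StatRing;

void stat_ring_init(StatRing *ring)
{
    pthread_mutex_init(&ring->lock, NULL);
    ring->head = ring->tail = 0;
}

/* Called by a backend: returns false (message dropped) if the ring is full,
 * so a slow collector can never block the writers. */
bool stat_ring_put(StatRing *ring, const TableStatMsg *msg)
{
    bool ok = true;

    pthread_mutex_lock(&ring->lock);
    int next = (ring->head + 1) % STAT_RING_SIZE;
    if (next == ring->tail)
        ok = false;                        /* full: drop rather than wait */
    else
    {
        ring->slots[ring->head] = *msg;
        ring->head = next;
    }
    pthread_mutex_unlock(&ring->lock);
    return ok;
}

/* Called by the collector: returns false if there is nothing to read. */
bool stat_ring_get(StatRing *ring, TableStatMsg *out)
{
    bool ok = true;

    pthread_mutex_lock(&ring->lock);
    if (ring->tail == ring->head)
        ok = false;
    else
    {
        *out = ring->slots[ring->tail];
        ring->tail = (ring->tail + 1) % STAT_RING_SIZE;
    }
    pthread_mutex_unlock(&ring->lock);
    return ok;
}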
After looking more closely I see that pg_restore has two different
buffer overrun conditions in this one routine. Attached is take two
of my patch.
This would be a lot simpler and cleaner if _PrintData() simply didn't
append a zero byte to the buffer contents. Philip, is it actually
necessary for it to do that?
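For illustration (this is not the actual _PrintData()/pg_restore code - the
function names and shape are made up): the overrun pattern under discussion is
a buffer allocated to hold exactly the data, with a terminating zero byte then
written one element past the end.

#include <stdlib.h>
#include <string.h>

/* BUG: `buf` has room for exactly `len` bytes, so buf[len] is one past
 * the end of the allocation. */
char *copy_chunk_broken(const char *data, size_t len)
{
    char *buf = malloc(len);
    if (buf == NULL)
        return NULL;
    memcpy(buf, data, len);
    buf[len] = '\0';                /* overrun: writes past the allocation */
    return buf;
}

/* The data may be binary and contain NULs anyway, so callers should track
 * `len` themselves; no terminator is appended at all. */
char *copy_chunk_fixed(const char *data, size_t len)
{
    char *buf = malloc(len);
    if (buf == NULL)
        return NULL;
    memcpy(buf, data, len);
    return buf;
}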
> ... and a lot more load on the CPU. Same-machine "network" connections
> are much cheaper (on most kernels, anyway) than real network
> connections.
>
> I think all of this discussion is vast overkill. No one has yet
> demonstrated that it's not sufficient to have *one* collector process
> an
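To make the same-machine idea concrete, a hedged sketch (the socket path,
message layout, and function name are invented for the example; this is not
how any existing collector is actually wired up): a backend fires a small
stats datagram at a local collector over a Unix-domain socket, non-blocking,
so a busy collector just loses packets instead of stalling the sender.

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

#define STATS_SOCK_PATH "/tmp/pgstat.example.sock"   /* made-up path for the sketch */

/* Send one stats message; returns 0 on success, -1 if it could not be sent
 * (including the harmless "collector is too busy" case). */
int send_stat_packet(const void *msg, size_t len)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    if (fd < 0)
        return -1;
    fcntl(fd, F_SETFL, O_NONBLOCK);          /* never block the backend */

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, STATS_SOCK_PATH, sizeof(addr.sun_path) - 1);

    /* If the collector's receive queue is full this fails with EAGAIN and
     * we simply lose one stats message - cheap and harmless. */
    ssize_t n = sendto(fd, msg, len, 0, (struct sockaddr *) &addr, sizeof(addr));
    close(fd);
    return (n == (ssize_t) len) ? 0 : -1;
}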
"Mikheev, Vadim" <[EMAIL PROTECTED]> writes:
> xlog.c revision 1.55 from Feb 26 already had log file
> zero-filling, so ...
>>
>> Oh, you're right, I didn't study the CVS log carefully enough. Hmm,
>> maybe the control file lock isn't the problem. The abort() in
>> s_lock_stuck should have left
Samuel Sieb <[EMAIL PROTECTED]> writes:
> Just as another suggestion, what about sending the data to a different
> computer, so instead of tying up the database server with processing the
> statistics, you have another computer that has some free time to do the
> processing.
> Some drawbacks are
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> pg_restore crashes if the dump data includes large objects...
This is probably the same problem that Martin Renters reported
yesterday. I have a patch that seems to fix it on my machine,
but I haven't heard back from Martin whether it solves his case
completely.
On Sat, Mar 17, 2001 at 09:33:03AM -0500, Jan Wieck wrote:
>
> The general problem remains. We only have one central
> collector with a limited receive capacity. The more load is
> on the machine, the smaller its capacity gets. The more
> complex the DB schemas get
Philip Warner wrote:
> At 13:49 16/03/01 -0500, Jan Wieck wrote:
> >
> >Similar problem as with shared memory - size. If a long-running
> >backend of a multi-thousand-table database needs to send access
> >stats per table - and had accessed them all up to now - it'll be
> >a lot of wasted bandwidth.
At 17:36 17/03/01 +0900, Tatsuo Ishii wrote:
>I know that the new pg_dump can dump out large objects. But what about
>pg_dumpall? Do we have to dump out a whole database cluster by using
>pg_dumpall and then run pg_dump separately to dump large objects?
That won't even work, since pg_dump won't dump BLOBs
At 13:49 16/03/01 -0500, Jan Wieck wrote:
>
>Similar problem as with shared memory - size. If a long-running
>backend of a multi-thousand-table database needs to send access
>stats per table - and had accessed them all up to now - it'll be
>a lot of wasted bandwidth.
Not if
pg_restore crashes if the dump data includes large objects...
--
Tatsuo Ishii
[t-ishii@srapc1474 7.1]$ createdb test
CREATE DATABASE
[t-ishii@srapc1474 7.1]$ psql -c "select lo_import('/boot/vmlinuz')" test
lo_import
---
20736
(1 row)
[t-ishii@srapc1474 7.1]$ pg_dump -F c -b test > te
I know that the new pg_dump can dump out large objects. But what about
pg_dumpall? Do we have to dump out a whole database cluster by using
pg_dumpall and then run pg_dump separately to dump large objects? That
seems a pain...
--
Tatsuo Ishii