Grant <[EMAIL PROTECTED]> writes:
> Is it just me, or is an address on the hackers list whose mail is handled
> by wmail.metro.taejon.kr nonexistent?
I've had to institute a sendmail access block against that site :-(
It bounces a useless complaint for every damn posting I make. What's
worse is
Is it just me, or is an address on the hackers list whose mail is handled
by wmail.metro.taejon.kr nonexistent?
On Tue, 31 Jul 2001, Mail Delivery Subsystem wrote:
> The original message was received at Tue, 31 Jul 2001 14:25:00 +1000 (EST)
> from IDENT:[EMAIL PROTECTED]
>
>-- The following
> > > Can you see a scenario where a programmer would forget to delete the
> > > data from pg_largeobject and the database becomes very large, filled
> > > with orphaned large objects?
> >
> > Sure. My point wasn't that the functionality isn't needed, it's that
> > I'm not sure vacuumlo does it
Tom Lane wrote:
>
> Grant <[EMAIL PROTECTED]> writes:
> > Can you see a scenario where a programmer would forget to delete the
> > data from pg_largeobject and the database becomes very large, filled
> > with orphaned large objects?
>
> Sure. My point wasn't that the functionality isn't needed,
Grant <[EMAIL PROTECTED]> writes:
> Can you see a scenario where a programmer would forget to delete the
> data from pg_largeobject and the database becomes very large, filled
> with orphaned large objects?
Sure. My point wasn't that the functionality isn't needed, it's that
I'm not sure vacuuml
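For concreteness, a minimal libpq sketch of the kind of leak being described;
the "images" table, the id, and the connection string are invented for the
example:

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>     /* INV_READ / INV_WRITE */

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=test");
    PGresult   *res;
    Oid         loid;

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    res = PQexec(conn, "BEGIN");
    PQclear(res);
    loid = lo_creat(conn, INV_READ | INV_WRITE);    /* new large object */
    printf("created large object %u\n", loid);
    /* ... lo_open()/lo_write() the data, store loid in images.data ... */
    res = PQexec(conn, "COMMIT");
    PQclear(res);

    /* Later the referencing row is deleted, but nobody calls lo_unlink(): */
    res = PQexec(conn, "DELETE FROM images WHERE id = 42");
    PQclear(res);
    /* loid's pages are still sitting in pg_largeobject -- exactly the
       garbage vacuumlo is supposed to find and remove. */

    PQfinish(conn);
    return 0;
}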
> > Is it possible to get [vacuumlo] included in the main vacuumdb program to
> > support vacuuming orphaned large objects?
>
> Hmm. I'm not convinced that vacuumlo is ready for prime time...
> in particular, how safe is it in the presence of concurrent
> transactions that might be adding or re
Grant <[EMAIL PROTECTED]> writes:
> Is it possible to get [vacuumlo] included in the main vacuumdb program to
> support vacuuming orphaned large objects?
Hmm. I'm not convinced that vacuumlo is ready for prime time...
in particular, how safe is it in the presence of concurrent
transactions that
Bruce Momjian <[EMAIL PROTECTED]> writes:
>> I'm somewhat surprised that HPUX does not --- it tends to follow its
>> SysV heritage when there's a conflict between that and BSD practice.
>> Guess they went BSD on this one.
> I thought HPUX was mostly SysV tools on BSD kernel.
No, it was all SysV
I sent the email below to the creator of contrib/vacuumlo/ with no reply
just yet.
Is it possible to get his code included in the main vacuumdb program to
support vacuuming orphaned large objects?
Or... Any suggestions, what do people think?
Thanks.
-- Forwarded message --
Dat
> Bill Studenmund <[EMAIL PROTECTED]> writes:
> > Looking at source on the web, I found:
>
> > kernel/signal.c:1042
>
> > * Note the silly behaviour of SIGCHLD: SIG_IGN means that the
> > * signal isn't actually ignored, but does automatic child
> > * reaping, while SIG_DFL is explicitly said by
Bruce Momjian <[EMAIL PROTECTED]> writes:
> The auto-reaping is standard SysV behavior, while BSD is really ignore.
You'll recall the ECHILD exception was installed by Tatsuo after seeing
problems on Solaris. Evidently Solaris uses the auto-reap behavior too.
I'm somewhat surprised that HPUX d
On Mon, 30 Jul 2001, bpalmer wrote:
> > The developer's corner will soon be going away. I'm in the process of
> > putting together a developer's site. Different URL, different look,
> > beta announcements will be there, regression database will be there,
> > development docs, etc. If you want
> The developer's corner will soon be going away. I'm in the process of
> putting together a developer's site. Different URL, different look,
> beta announcements will be there, regression database will be there,
> development docs, etc. If you want a sneak preview:
>
> http://developer.
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > The auto-reaping is standard SysV behavior, while BSD is really ignore.
>
> You'll recall the ECHILD exception was installed by Tatsuo after seeing
> problems on Solaris. Evidently Solaris uses the auto-reap behavior too.
SVr4/Solaris took the Sy
* Tom Lane <[EMAIL PROTECTED]> [010730 18:34]:
> I wrote:
> > If autoconf releases were happening on a regular basis, we could get
> > away with just tracking the released version of autoconf for these
> > files. However, they aren't and we can't.
>
> Just moments after writing that, I was start
I wrote:
> If autoconf releases were happening on a regular basis, we could get
> away with just tracking the released version of autoconf for these
> files. However, they aren't and we can't.
Just moments after writing that, I was startled to read on another
mailing list that the long-mythical
On Mon, 30 Jul 2001, Tom Lane wrote:
> Bill Studenmund <[EMAIL PROTECTED]> writes:
> > Looking at source on the web, I found:
>
> > kernel/signal.c:1042
>
> > * Note the silly behaviour of SIGCHLD: SIG_IGN means that the
> > * signal isn't actually ignored, but does automatic child
> > * reaping,
The developer's corner will soon be going away. I'm in the process of
putting together a developer's site. Different URL, different look,
beta announcements will be there, regression database will be there,
development docs, etc. If you want a sneak preview:
http://developer.postgres
Bill Studenmund <[EMAIL PROTECTED]> writes:
> I see three choices:
> 1) Change back to SIG_DFL for normal behavior. I think this will be fine
> as we run w/o problem on systems that lack this behavior. If
> turning off automatic child reaping would cause a problem, we'd
> have s
Bill Studenmund <[EMAIL PROTECTED]> writes:
> Looking at source on the web, I found:
> kernel/signal.c:1042
> * Note the silly behaviour of SIGCHLD: SIG_IGN means that the
> * signal isn't actually ignored, but does automatic child
> * reaping, while SIG_DFL is explicitly said by POSIX to force
On Mon, 30 Jul 2001, Tom Lane wrote:
> Bill Studenmund <[EMAIL PROTECTED]> writes:
> > All ECHILD is doing is saying there was no child. Since we aren't really
> > waiting for the child, I don't see how that's a problem.
>
> You're missing the point: on some platforms the system() call is
> retur
> A more general solution is for indexscan to collect up a bunch of TIDs
> from the index, sort them in-memory by TID order, and then probe into
> the heap with those TIDs. This is better than the above because you get
> nice ordering of the heap accesses across multiple key values, not just
> am
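To make the shape of that concrete, a rough sketch with made-up types (this
is not the actual executor code): buffer the TIDs, qsort them by block and
offset, then walk the heap in physical order.

#include <stdlib.h>
#include <stdint.h>

typedef struct
{
    uint32_t    block;          /* heap block number */
    uint16_t    offset;         /* line pointer within that block */
} Tid;

static int
tid_cmp(const void *a, const void *b)
{
    const Tid  *ta = a;
    const Tid  *tb = b;

    if (ta->block != tb->block)
        return (ta->block < tb->block) ? -1 : 1;
    if (ta->offset != tb->offset)
        return (ta->offset < tb->offset) ? -1 : 1;
    return 0;
}

/* Called with a batch of TIDs pulled from the index; visiting the heap
   in physical order turns scattered fetches into mostly sequential ones. */
static void
sorted_heap_probe(Tid *tids, size_t ntids)
{
    qsort(tids, ntids, sizeof(Tid), tid_cmp);
    for (size_t i = 0; i < ntids; i++)
    {
        /* fetch_heap_tuple(&tids[i]);  -- hypothetical heap access */
    }
}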
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I have thought of a few new TODO performance items:
> 1) Someone at O'Reilly suggested that we order our duplicate index
> entries by tid so if we are hitting the heap for lots of duplicates, the
> hits will be on sequential pages. Seems like a nice id
> So why do we cache sequentially-read pages? Or at least not have an
> option to control it?
>
> Oracle (to the best of my knowledge) does NOT cache pages read by a
> sequential index scan for at least two reasons/assumptions (two being
> all that I can recall):
>
> 1. Caching pages for sequent
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > New TODO entries are:
> > * Add queue of backends waiting for spinlock
>
> I already see:
>
> * Create spinlock sleepers queue so everyone doesn't wake up at once
That is an old copy of the TODO. I reworded it. You will only see this
now:
> 3) I am reading the Solaris Internals book and there is mention of a
> "free behind" capability with large sequential scans. When a large
> sequential scan happens that would wipe out all the old cache entries,
> the kernel detects this and places its previous pages first
> on the free list.
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Should we be spinning while waiting for a spinlock on multi-CPU machines?
> Is that the answer?
A multi-CPU machine is actually the only case where a true spinlock
*does* make sense. On a single CPU you might as well yield the CPU
immediately, because you hav
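Roughly, the trade-off looks like this; a sketch using GCC atomic builtins
rather than the real s_lock code, with n_cpus assumed to be known at startup:

#include <sched.h>

extern int  n_cpus;                 /* assumed: detected at startup */

static volatile int lock = 0;

void
acquire_lock(void)
{
    int         spins = 0;

    while (__sync_lock_test_and_set(&lock, 1))
    {
        /* On one CPU the lock holder cannot run while we spin, so yield
           at once; on SMP, spin a bit first in case the holder is about
           to release the lock on another processor. */
        if (n_cpus == 1 || ++spins > 100)
        {
            sched_yield();
            spins = 0;
        }
    }
}

void
release_lock(void)
{
    __sync_lock_release(&lock);
}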
Bruce Momjian <[EMAIL PROTECTED]> writes:
> New TODO entries are:
> * Add queue of backends waiting for spinlock
I already see:
* Create spinlock sleepers queue so everyone doesn't wake up at once
BTW, I agree with Vadim's opinion that we should add a new type of lock
(intermediate betwe
Bill Studenmund <[EMAIL PROTECTED]> writes:
> All ECHILD is doing is saying there was no child. Since we aren't really
> waiting for the child, I don't see how that's a problem.
You're missing the point: on some platforms the system() call is
returning a failure indication because of ECHILD. It'
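A variant of Bill's option 1, narrowed to the call itself: make sure SIGCHLD
is not SIG_IGN while system() runs, so its internal wait() can reap the child
instead of coming back with ECHILD. The helper name here is made up.

#include <signal.h>
#include <stdlib.h>

int
system_with_dfl_sigchld(const char *cmd)
{
    void        (*old)(int);
    int         status;

    old = signal(SIGCHLD, SIG_DFL);     /* child will not be auto-reaped */
    status = system(cmd);               /* internal wait() can now succeed */
    signal(SIGCHLD, old);               /* restore previous disposition */
    return status;
}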
> > > We could use POSIX spinlocks/semaphores now but we
> > > don't because of performance, right?
> >
> > No. As long as no one has proved with a test that mutexes are bad for
> > performance...
> > Funny, such a test would require ~1 day of work.
>
> Good question. I know the number of function call
> > > > * Order duplicate index entries by tid
> > >
> > > In other words - add tid to index key: very old idea.
> >
> > I was thinking during index creation, it would be nice to
> > order them by tid, but not do lots of work to keep it that way.
>
> I hear this "not do lots of work" so
> > > * Order duplicate index entries by tid
> >
> > In other words - add tid to index key: very old idea.
>
> I was thinking during index creation, it would be nice to
> order them by tid, but not do lots of work to keep it that way.
I hear this "not do lots of work" so often from you -:)
Da
On Sun, 22 Jul 2001, Tatsuo Ishii wrote:
> > [EMAIL PROTECTED] writes:
> > > I have written a postgres C function that
> > > uses a popen Linux system call. Originally when I first tried it I kept
> > > getting an ECHILD. I read a little bit more on the pclose function
> > > and the wait system c
> New TODO entries are:
>
> * Order duplicate index entries by tid
In other words - add tid to index key: very old idea.
> * Add queue of backends waiting for spinlock
We shouldn't mix two different approaches for different
kinds of short-time internal locks - in one case we need
> > New TODO entries are:
> >
> > * Order duplicate index entries by tid
>
> In other words - add tid to index key: very old idea.
I was thinking during index creation, it would be nice to order them by
tid, but not do lots of work to keep it that way.
> > * Add queue of backends waiti
I have thought of a few new TODO performance items:
1) Someone at O'Reilly suggested that we order our duplicate index
entries by tid so if we are hitting the heap for lots of duplicates, the
hits will be on sequential pages. Seems like a nice idea.
2) After Tatsuo's report of running 1000 ba
Added to /contrib, with small Makefile changes. Requires expat library.
Does not compile by default.
> I've packaged up what I've done so far and you can find it at
> http://www.cabbage.uklinux.net/pgxml.tar.gz
>
> The TODO file included indicates what still remains to be done (a lot!).
>
> I
Patch applied. Thanks. Still needs updated autoconf.
> Skip the patch for configure.in in that last one; use this in its
> place (I missed one sysv5uw).
>
>
>
> --
> Larry Rosenman http://www.lerctr.org/~ler
> Phone: +1 972-414-9812 E-Mail: [EMAIL PRO
I've used select count(), then a select with LIMIT/OFFSET for the pages. A
cursor might be a better idea, though I don't think you can get the total
number of rows without count()'ing them.
Good luck!
-Mitch
- Original Message -
From: "Christopher Kings-Lynne" <[EMAIL PROTECTED]>
To: "Hacker
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Bruce Momjian) wrote:
> The updated TODO item is:
>
> * Add XML interface: psql, pg_dump, COPY, separate server (?)
>
> I am unsure where we want it. We could do COPY and hence a flag in
> pg_dump, or psql like we do HTML from psql. A
> > > I have managed to get several XML files into PostgreSQL by writing a parser,
> > > and it is a huge hassle; the public parsers are too picky. I am thinking that a
> > > fuzzy parser, combined with some intelligence and an XML DTD reader, could make
> > > a very cool utility, one which I have
> "Leslie" <[EMAIL PROTECTED]> writes:
> > PostgreSQL 7.1 is now running on AIX5L (S85, 6GB memory, 6 CPUs), which was
> > running on Linux before (Pentium3, 2 CPUs, as far as I
> > remember... sorry).
> > The performance (on AIX5L) is just half as good as the one (on Linux).
>
> Hmm ... is
Hi all,
This is the situation: You are doing a big query, but you want the results
on the web page to be paginated, i.e. the user can click page 1, 2, etc.
So, you need to know how many rows total would be returned, but you also only
need a small fraction of them.
What is an efficient way of doin
On Mon, Jul 30, 2001 at 03:43:26PM +1000, Gavin Sherry wrote:
> On Mon, 30 Jul 2001, mlw wrote:
>
> I have had the same problem. The best XML parser I could find was the
> gnome-xml library at xmlsoft.org (libxml). I am currently using this in C
What happens if you use a DOM-type XML parser for
Bruce Momjian wrote:
>
> > I have been fighting, for a while now, with idiot data vendors that think XML
> > is a cure-all. The problem is that XML is a hierarchical format whereas SQL is
> > a relational format.
> >
> > It would be good to get pg_dump to write an XML file and DTD, but getting
>