information about what happens with the table.
Good luck,
d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049
On 16. nov 2004, at 13:21, Nils Rennebarth wrote:
> I have a table that is essentially a log where new entries
Any time data changes in Postgres, the old row versions remain on disk.
This is true whether the transaction commits or rolls back. Read the
docs on vacuuming, the process that cleans this up.
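For a log-style table that mostly grows, a periodic vacuum keeps those
dead versions from piling up. A minimal sketch, assuming a hypothetical
table named log:

    -- reclaim dead row versions; runs without locking out readers
    VACUUM log;
    -- the same, plus refresh the planner's statistics
    VACUUM ANALYZE log;
    -- compact the file itself; takes an exclusive lock, use sparingly
    VACUUM FULL log;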
Regards,
d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
--
David Helgason,
Business Development et al.,
Over the Edge I/S
or download the list archive mbox files into your mail-program and
use that (which is what I do).
d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049
10.3 (it's based on the
select function, but at least it exists).
Reading the description of your problem, it sounds different.
d.
--
David Helgason,
Business Development et al.,
Over the Edge I/S (http://otee.dk)
Direct line +45 2620 0663
Main line +45 3264 5049
On 27. sep 2004, at 22:08, Dean Gibson (DB Administrator) wrote:
Greg Stark wrote on 2004-09-27 08:17:
Stephan Szabo <[EMAIL PROTECTED]> writes:
>> On Sun, 26 Sep 2004 20:16:52 +0200, David Helgason <[EMAIL PROTECTED]> wrote:
>>> On a similar note, I've
On 15. feb 2004, at 18:18, Tom Lane wrote:
> A workaround you could think about is invoking the LO functions via
> ordinary SELECT commands, ignoring libpq's LO API altogether. This
> would have been rather painful in pre-7.4 releases since you'd have
> to be willing to deal with quoting and dequoting "by
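In the spirit of that suggestion, a minimal sketch of driving the large
object calls through plain SQL (the OID 16385 is just a placeholder;
393216 is INV_READ|INV_WRITE, and the descriptor returned by lo_open is
only valid inside the surrounding transaction):

    BEGIN;
    SELECT lo_creat(-1);             -- returns the OID of a new large object
    SELECT lo_open(16385, 393216);   -- returns a descriptor, 0 here
    SELECT lowrite(0, 'some data');  -- write through descriptor 0
    SELECT lo_lseek(0, 0, 0);        -- rewind (whence 0 = SEEK_SET)
    SELECT loread(0, 1024);          -- read back up to 1024 bytes
    SELECT lo_close(0);
    COMMIT;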
I was just wondering whether this was either:
- supported, or
- doable, either
  - using the normal poll-read-done? loop, or
  - with a settable timeout
It would be awfully nice.
David Helgason,
Over the Edge Entertainments
If you are actually going to store multi-megabyte data buffers in
there, there are memory-allocation considerations (which can get pretty
extreme if you try to transfer huge buffers at once). Consider using
the Large Object interface instead.
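A sketch of the large-object route (table, column, and file names are
invented here; lo_import() runs on the server and historically requires
superuser rights):

    CREATE TABLE blobs (
        name    text PRIMARY KEY,
        content oid     -- a reference to a large object, not the bytes
    );

    -- server-side import; returns the OID of the new large object
    INSERT INTO blobs (name, content)
        VALUES ('demo', lo_import('/tmp/bigfile.bin'));

    -- large objects aren't removed automatically; unlink explicitly
    SELECT lo_unlink(content) FROM blobs WHERE name = 'demo';
    DELETE FROM blobs WHERE name = 'demo';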
Hope this helps.
David Helgason
Over the Edge
Mike Mascari <[EMAIL PROTECTED]> writes:
> But just as a quick notice to those upgrading from 7.3 to 7.4 with
> fully normalized databases requiring > 11 joins, the GEQO setting can
> be a killer...
Uh ... dare I ask whether you think it's too high? Or too low?
Just a data point: With a fresh 7.4 and
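For anyone bitten by the same thing, the GEQO knobs can be inspected
and adjusted per session. A sketch (the threshold value below is only
an example):

    SHOW geqo_threshold;      -- number of relations above which GEQO kicks in
    SET geqo_threshold = 12;  -- let the exhaustive planner handle more joins
    SET geqo TO off;          -- or switch genetic optimization off entirely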
On 16. jan 2004, at 12:18, David Garamond wrote:
David Helgason wrote:
I'm switching right away. The notation doesn't really do anything for
me, but that's fine. I've been using bit(128), but always suspected it
was suboptimal (for no particular reason).
I thi
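Purely as a comparison sketch (names invented; Postgres of this era has
no native uuid type, so 128-bit identifiers tend to land in bit(128),
bytea, or char(32)):

    -- a fixed-width bit string, as discussed above
    CREATE TABLE guids_bit (id bit(128) PRIMARY KEY);

    -- the same 128 bits stored as 16 raw bytes
    CREATE TABLE guids_bytea (id bytea PRIMARY KEY);

    INSERT INTO guids_bytea
        VALUES (decode('00112233445566778899aabbccddeeff', 'hex'));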
cket before being allowed to send any data to the perl-server. Not an
un-clever system, I think, and one I'd be happy to share.
I wonder what other people are doing and if anyone has other arguments.
David Helgason,
Over the Edge Entertainments
:
maint=# select 1 = any ('{{1,2,3},{4,5,6}}'::int[])[2][1:3];
But that is not working for obvious reasons. This makes arrays pretty
broken for me.
Am I missing anything obvious?
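For what it's worth, a form that does parse is to slice the array
before handing it to ANY. Note that once any subscript is written as a
slice, current releases treat the others as slices too, so [2][1:3]
behaves like [1:2][1:3]; hence [2:2][1:3] below:

    -- 1 is not in the second row, so this yields false
    SELECT 1 = ANY ((('{{1,2,3},{4,5,6}}'::int[])[2:2][1:3]));
    -- 4 is, so this yields true
    SELECT 4 = ANY ((('{{1,2,3},{4,5,6}}'::int[])[2:2][1:3]));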
Regards,
David Helgason
Over the Edge Entertainments
ce, and keep up the good work!
d.
On 7. jan 2004, at 06:22, David Helgason wrote:
Thank you very much,
I figured I needed to open my own connection using SPI_connect(). I
had assumed there was something like a
the-connection-this-function-is-being-run-through.
Now I'm having problems with
which
2004, at 05:40, Tom Lane wrote:
David Helgason <[EMAIL PROTECTED]> writes:
> I'm having trouble finding out how to find the current PGconn
> connection inside a C function.
What makes you think that "*the* current PGconn" is a valid concept?
libpq has always supported mul
ested). It'll allow for really fast incremental updates of columns,
which I'll use to make storing of huge blobs less of a pain (although
it depends on the client also speaking rsync-ese, but that'll be
included with the package).
Regards,
d.
--
David Helgason
Over the Edge E