[Please copy the mailing list on replies so others can participate
in and learn from the discussion.]
On Tue, Oct 18, 2005 at 07:09:08PM +, Rodrigo Madera wrote:
> > What language and API are you using?
>
> I'm using libpqxx. A nice STL-style library for C++ (I am 101% C++).
I've only dabbled
On Tue, Oct 18, 2005 at 06:07:12PM +, Rodrigo Madera wrote:
> 1) Is there any way for me to send the binary field directly without needing
> escape codes?
In 7.4 and later the client/server protocol supports binary data
transfer. If you're programming with libpq you can use PQexecParams()
to
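To make the above concrete, here is a minimal sketch of a binary INSERT with PQexecParams(). The connection is assumed to already exist, and the table name "images" and column "payload" are hypothetical, not from the thread; passing NULL for paramTypes lets the server infer the parameter type (bytea) from the insert target.

```c
/* Sketch: inserting a bytea value in binary format with PQexecParams(),
 * available since the 7.4 protocol.  Table/column names are hypothetical. */
#include <stdio.h>
#include <libpq-fe.h>

int store_blob(PGconn *conn, const char *data, size_t len)
{
    const char *paramValues[1];
    int paramLengths[1];
    int paramFormats[1];

    paramValues[0]  = data;
    paramLengths[0] = (int) len;   /* length is required for binary params */
    paramFormats[0] = 1;           /* 1 = binary, 0 = text */

    PGresult *res = PQexecParams(conn,
        "INSERT INTO images (payload) VALUES ($1)",
        1,              /* one parameter */
        NULL,           /* let the server infer the param type */
        paramValues,
        paramLengths,
        paramFormats,
        0);             /* ask for text-format results */

    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);
    return 0;
}
```

No escaping is needed at all on this path: the bytes go over the wire as-is.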
Hello there,
This is my first post to the list. I have a deep low-level background
in computer programming, but I am a total newbie to SQL databases. I am
using Postgres because of its commercial-friendly license.
My problem is with storing large values. I have a database that stores
large amounts of data
On Tue, Oct 18, 2005 at 05:21:37PM +0200, Csaba Nagy wrote:
> INFO: vacuuming "public.some_table"
> INFO: "some_table": removed 29598 row versions in 452 pages
> DETAIL: CPU 0.01s/0.04u sec elapsed 18.77 sec.
> INFO: "some_table": found 29598 removable, 39684 nonremovable row
> versions in 851
First of all, thanks to everyone for the input.
I probably can't afford even the reindex till Christmas, when we have
about 2 weeks of company holiday... but I guess I'll have to do
something until Christmas.
The system should at least look like it's working all the time. I can have
downtime, but only for short periods.
In the light of what you've explained below about "nonremovable" row
versions reported by vacuum, I wonder if I should worry about the
following type of report:
INFO: vacuuming "public.some_table"
INFO: "some_table": removed 29598 row versions in 452 pages
DETAIL: CPU 0.01s/0.04u sec elapsed 18.77 sec.
reindex should be faster, since you're not dumping/reloading the table
contents on top of rebuilding the index; you're just rebuilding the
index.
Robert Treat
emdeon Practice Services
Alachua, Florida
On Wed, 2005-10-12 at 13:32, Steve Poe wrote:
>
> Would it not be faster to do a dump/reload
Martin Nickel wrote:
When I turn off seqscan it does use the index - and it runs 20 to 30%
longer. Based on that, the planner is correctly choosing a sequential
scan - but that's just hard for me to comprehend. I'm joining on an int4
key, 2048 per index page - I guess that's a lot of reads - the