Hi. Is it possible to vacuum a table (vacuum full analyze)
from a script? Currently I run the Postgres client and then
run vacuum, but I'd like to automate the vacuum by calling
it from a (Perl) script.
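Something along these lines is what I have in mind, using DBI
(the database name, credentials, and table name below are just
placeholders). Since VACUUM can't run inside a transaction block,
AutoCommit stays on:

#!/usr/bin/perl
# Sketch only: connection details and table name are placeholders.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'myuser', 'mypassword',
                       { AutoCommit => 1, RaiseError => 1 });

# VACUUM cannot run inside a transaction block, so leave AutoCommit on.
$dbh->do('VACUUM FULL ANALYZE mytable');

$dbh->disconnect;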
Thanks,
Janet
Hi. What is the best way to check a pg_dump file
without doing a restore?
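For example, if the dump was made with pg_dump's custom format (-Fc),
would something like this be a reasonable sanity check? pg_restore --list
only reads the archive's table of contents, so it shows the file is
readable without touching any database (the path below is a placeholder,
and I realize this doesn't prove the data would restore cleanly):

#!/usr/bin/perl
# Sketch only: assumes a custom-format dump (pg_dump -Fc);
# the dump file path is a placeholder.
use strict;
use warnings;

my $dumpfile = '/backups/mydb.dump';
my $toc = `pg_restore --list $dumpfile 2>&1`;
if ($? != 0) {
    die "pg_restore could not read $dumpfile:\n$toc";
}
print "Archive table of contents is readable:\n$toc";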
Thanks,
Janet
Hi. I have been running Postgres 8.2.7 on a Linux system for
over a year now with no problems.
Today one of the database users reported the following error:
psql: FATAL: could not read block 0 of relation 1664/0/1262: read
only 0 of 8192 bytes
I tried stopping and restarting the Postgres server
Hi. I'm trying to write a plperl function that returns a list of ids
that I want to use in a subquery.
The function call would look like:
select * from mlist( 168.4, 55.2, 0.1);
and would return a list of integers. I've written this function,
and it returns the right list of integers, but [...]
- much better than reading
the image headers. The images are available on spinning disk, and
the image locations are in the db.
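For reference, a minimal PL/Perl set-returning function of this general
shape could look like the following; the table and column names are
invented for illustration, and the query is simplified to a box search:

CREATE OR REPLACE FUNCTION mlist(float8, float8, float8)
RETURNS SETOF integer AS $$
    # Placeholder query: table and column names are invented for illustration.
    my ($ra, $decl, $radius) = @_;
    my $rv = spi_exec_query(
        "SELECT id FROM images"
      . " WHERE ra   BETWEEN $ra   - $radius AND $ra   + $radius"
      . "   AND decl BETWEEN $decl - $radius AND $decl + $radius");
    my @ids = map { $_->{id} } @{ $rv->{rows} };
    return \@ids;   # a SETOF PL/Perl function may return an array reference
$$ LANGUAGE plperl;

-- called as above:  SELECT * FROM mlist(168.4, 55.2, 0.1);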
Thanks,
Janet
On 02/08/2009 05:59 p.m., Andy Colson wrote:
On 1 Aug 2009, at 23:24, Janet Jacobsen wrote:
My questions are:
(2) Should I REINDEX these two tables daily
Thanks for your reply. Responses below, and one follow-up
question about when/how often to use analyze.
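One thing I'm considering, sketched below with placeholder connection
details and table names, is having the pipeline script run ANALYZE on the
two tables as its last step, so the planner statistics reflect each day's
new rows before people start querying:

#!/usr/bin/perl
# Sketch only: connection details and table names are placeholders.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'pipeline', 'secret',
                       { AutoCommit => 1, RaiseError => 1 });

# Refresh planner statistics right after the daily bulk load finishes.
$dbh->do('ANALYZE table_a');
$dbh->do('ANALYZE table_b');

$dbh->disconnect;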
Janet
On 02/08/2009 05:09 a.m., Alban Hertroys wrote:
On 1 Aug 2009, at 23:24, Janet Jacobsen wrote:
My questions are:
(2) Should I REINDEX these two tables daily after the pipeline
Hi. We are running a data processing/analysis pipeline that
writes about 100K records to two tables on a daily basis.
The pipeline runs from about 6:00 a.m. to 10:00 a.m.
Our user base is small - about five people. Each accesses
the database in a different way (generally using some script
- eit
wrote:
> * Janet Jacobsen (jsjacob...@lbl.gov) wrote:
>
>> If they are going to spend 95% of their time querying the
>> records that meet the 'good' criteria, what are the good
>> strategies for ensuring good performance for those queries?
>> (1) Should I
Hi. We have a table with 30M records that is growing by
about 100K records per day.
The experimentalists, whose data are in the table, have
decided that they will focus on the records for which the
value of one field, rbscore, is greater than a cut-off.
However, they want to continue to store a
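A partial index is one strategy that seems to fit this access pattern.
In the sketch below the table name and cut-off value are invented
(rbscore is the real column); queries whose WHERE clause implies the
index predicate can use it:

-- Sketch only: table name and cut-off value are placeholders.
CREATE INDEX candidate_good_rbscore_idx
    ON candidate (rbscore)
    WHERE rbscore > 0.15;

-- A query that repeats the predicate can use the partial index:
SELECT * FROM candidate WHERE rbscore > 0.15;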
Hi. Thanks for the quick and definitive answers to my questions.
The information you provided will save me from wasting time and
energy trying to see how far I could get otherwise. Thanks very much.
Janet
Tom Lane wrote:
> Janet Jacobsen writes:
>
>> Is it possible to create
Hi. We are looking into the possibility of running a Postgres
server on an underutilized machine. This machine has very
little local disk space, so we would have to create the data
directory on a shared file system.
The underutilized machine was set up so that it can *only
read* from the shared