Hi,
from 7.3 I created a backup by:
pg_dumpall > backup
From 7.4, trying: pg_restore backup
results in pg_restore's [Archiver] complaining that "backup"
is not a valid archive.
Where to look?
TIA Erwin
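(For reference: pg_dumpall writes a plain SQL script, not a pg_restore
archive, so pg_restore will always reject it. The usual restore - file
name taken from the post - is:

  psql -f backup template1

pg_restore only reads archives made with pg_dump -Fc or -Ft.)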
Hi,
We are trying to introduce a thread that monitors the creation of the
trigger_file. As and when the file is created, the process that monitors
the postgres server needs to be notified through the inotify API.
This is to reduce the 3-4 second delay that exists with the current
implementation in p
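(Not from the post, just a sketch of the idea: with the inotify-tools
package a shell can watch a directory for file creation without polling.
The real patch would use the C inotify API directly; the path and file
name below are assumptions:

  inotifywait -m -e create --format '%f' /var/lib/pgsql/data |
  while read f; do
      [ "$f" = "trigger_file" ] && echo "trigger file created"
  done
)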
Hello,
I was wondering if there would be a problem using Postgres 8.4 for my Rails
development database and Postgres 8.1 for my production database.
8.1 is what is available from my shared web hosting provider, but 8.4 is
the supported (repo) version on my local Ubuntu machine.
I tried installing
Hello
I would like to export my PostgreSQL database and import it on another
PC. I can't seem to find this option in 'pgAdmin III'; can someone
help me with how to do this?
greetZ
wes
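(For reference, the usual command-line route, with placeholder names:

  pg_dump mydb > mydb.sql            # on the source PC
  createdb mydb                      # on the target PC
  psql -d mydb -f mydb.sql           # load the dump

pgAdmin III's Backup/Restore menu items wrap the same tools.)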
Hi Scott
I don't know; I do exactly the same thing, with the same users, and on
the other PC (where the db was not originally created; tried on 3
different PCs) it is not working.
I can restore the database, and the tables and data are there, but I
cannot use the tables.
I can connect to the database i
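(If the errors are "permission denied", the dump may not have carried
the same users/ACLs to the new machine. Assuming a role named wes and a
table named mytable, something like:

  GRANT ALL ON mytable TO wes;

and the same users need to exist there first - pg_dumpall keeps them, a
single-database pg_dump does not.)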
Apologies if this is a repost - I tried sending it yesterday and haven't
seen it in the forum yet.
I am currently writing a Perl script to convert the string a user
supplies to a search engine into SQL. The user supplies a string in the
same format as Google uses - e.g. "cat -dog" finds records
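(A hypothetical sketch of the SQL such a translation might emit for
"cat -dog", assuming a table docs with a text column body:

  SELECT * FROM docs
  WHERE body ILIKE '%cat%'
    AND body NOT ILIKE '%dog%';
)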
Hi,
I'm having trouble with libpq.so.2.
Specifically:
Can't load '/usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi/auto/Pg/Pg.so' for module Pg: libpq.so.2: cannot open shared object file: No such file or directory at /usr/lib/perl5/5.8.0/i386-linux-thread-multi/DynaLoader.pm line 229.
I
Ok - discovered the solution in pgsql-php, repeated below for reference:
From: "Peter De Muer (Work)" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: Re: 7.3.1 update gives PHP libpq.so.2 problem
Date: Tue, 4 Feb 2003 14:06:04 +0100
try making a soft link
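(Assuming the newer library installed by the 7.3 update is
/usr/lib/libpq.so.3, the soft link would look like:

  ln -s /usr/lib/libpq.so.3 /usr/lib/libpq.so.2
  ldconfig
)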
As mentioned previously I have a large text database with upwards of
40GB of data and 8 million tuples.
The time has come to buy some real hardware for it.
Having read around the subject online, I see the general idea is to get
as much memory and the fastest I/O possible.
The budget for the serv
Ron, thank you for your comments; sorry for the slow response - I
actually replied to you on Saturday but I think the list was having
trouble again?!
Your questions are answered below...
> On Fri, 2003-07-25 at 07:42, [EMAIL PROTECTED] wrote:
> > As mentioned previously I have a large text databa
I am looking at ways to speed up queries; the most common way for
queries to be constrained is by date range. I have indexed the date
column. Queries are still slower than I would like.
Would there be any performance increase for these types of queries if
the tables were split by month as de
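(One pre-partitioning trick, with the table and column names taken from
the later posts in this thread; the date range is an assumed example: a
partial index per month keeps each index small without splitting the
table.

  CREATE INDEX meta_in_date_2003_07 ON meta (in_date)
      WHERE in_date >= '2003-07-01' AND in_date < '2003-08-01';
)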
Below is the EXPLAIN ANALYZE output of a typical current query.
I have just begun looking at tsearch2 to index the header and body
fields.
I have also been using 'atop' to see I/O stats on the disk, and I am now
pretty sure that's where the current bottleneck is. As soon as a query
is launched the
I am trying to setup tsearch2 on postgresql 7.3.4 on a Redhat9 system,
installed from rpms.
Some files required for the tsearch installation seemed to be missing,
so I downloaded the src bundle too.
Tsearch2 then compiled ok but now the command:
psql mydb < tsearch2.sql
fails with a messa
Bad form to reply to my own posting, I know, but -
I notice that the integer dictionary can accept MAXLEN for the longest
number that is considered a valid integer. Can I set MAXLEN for the en
dictionary to be the longest word I want indexed?
I think I'd need to create a new dictionary...?
>
> On Thu, 7 Aug 2003 [EMAIL PROTECTED] wrote:
>
> > Part1.
> >
> > I have created a dictionary called 'webwords' which checks all words
> > and curtails them to 300 chars (for now)
> >
> > after running
> > make
> > make install
> >
> > I then copied the lib_webwords.so into my $libdir
> >
> > I
I am trying to use the fti module to search my text.
Searching through the raw text using ILIKE takes 3 seconds;
searching using fti takes 212 seconds.
Then I tried turning off enable_seqscan to see what happens, but the
planner still does a seq scan.
Why does the planner not use the index?
Are there any o
Hi, I am having problems manipulating bit strings.
CREATE TABLE lookup(
fname TEXT PRIMARY KEY,
digest BIT VARYING
);
I am trying to construct another bit string based on the length of the
first:
SELECT b'1'::bit( bit_length( digest ) ) FROM lookup;
This doesn't work as I had hoped, where am I
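(The cast fails because the n in bit(n) must be a constant, not an
expression. One workaround is to take a substring of a constant all-ones
string; the 16-bit constant here is an assumed example - make it at
least as long as the longest digest:

  SELECT substring(B'1111111111111111' from 1 for bit_length(digest))
  FROM lookup;
)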
I have been trying to find out more about the postmaster crashing, but
things seem to be getting stranger! I am experiencing problems running
the postmaster in gdb too (see the end of this message).
I will put all the information in this posting for completeness;
apologies for the duplicated sections.
I am r
> From: Tom Lane <[EMAIL PROTECTED]>
>
> [EMAIL PROTECTED] writes:
> > How do i get the core files to examine? There never seem to be any
> > produced, even outside the debuggers.
>
> Most likely you have launched the postmaster under "ulimit -c 0", which
> prevents core dumps. This seems to
> [EMAIL PROTECTED] writes:
> > I have set "ulimit -c unlimited" as you suggested,
> > I then copied postmaster to /home/postgres
> > and ran it as postgres from there...
> > but still no core files. Where should they appear?
>
> In $PGDATA/base/yourdbnumber/core (under some OSes the file name mi
First - apologies for the stuff about "i don't understand why there's
only one core file"; I now have a post-it note saying "ulimit gets
reset at reboot" (I assume that's what happened).
So please find below a potentially more useful core file gdb output:
Core was generated by `postgres: mat
Tom Lane writes:
> That has nothing whatever to do with how much memory the kernel will let
> any one process have. Check what ulimit settings the postmaster is
> running under (particularly -d, -m, -v).
The ulimit settings you requested look OK (others included for info):
ulimit -d, -m, -v : un
Does PostgreSQL have an existing solution for distributing the database
amongst several servers, so that each one holds a subset of the data
and queries are passed to each one and then collated by a master server?
I have heard erServer mentioned but I got the impression this might
just be for bac
After applying the patches supplied so far and also trying the latest
stable tar.gz for tsearch2 (downloaded 24th of September),
I am still experiencing the same issue as previously described:
I try to do a
SELECT to_tsvector( 'default', 'some text' )
The backend crashes.
SELECT to_tsvector(
I am running a SELECT to get all tuples within a given date range. This
query is much slower than I expected - am I missing something?
I have a table 'meta' with a column 'in_date' of type timestamp(0), and
I am trying to select all
records within a given date range. I have an index on 'in_date' and
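(For what it's worth, the index-friendly form is a direct range on the
column, with no function or cast wrapped around in_date; the dates are
placeholders:

  SELECT * FROM meta
  WHERE in_date >= '2003-01-01'
    AND in_date <  '2003-02-01';
)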
hi,
I am becoming more and more convinced that in order to achieve the
required performance and scalability I need to split my data amongst
many backend machines.
Ideally I would start with about 10 machines and have 1/10th of the data
on each. As the data set grows I would then buy additional
hello,
I have read the documentation a couple of times and I still cannot
figure out the following aspects.
If a function does insert/update/delete, does it need to be stable or
volatile?
If an immutable function executes 'nextval', should it also be volatile?
thanks,
Razvan Radu
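(For reference: a function that does insert/update/delete cannot be
IMMUTABLE or STABLE, since those levels promise not to modify the
database, and nextval() is itself volatile, so both cases call for
VOLATILE. A minimal sketch with made-up names:

  CREATE FUNCTION log_hit(p_id integer) RETURNS void AS $$
      INSERT INTO hits(id, seq) VALUES (p_id, nextval('hits_seq'));
  $$ LANGUAGE sql VOLATILE;
)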