Hi all.
PostgreSQL 9.1.2 on i686-pc-linux-gnu, compiled by gcc-4.4.real (Debian
4.4.5-8) 4.4.5, 32-bit
 id | integer | not null default nextval('a_id_seq'::regclass)
 a  | integer | not null
 b  | integer | not null
Indexes:
    "a_pkey" PRIMARY KEY, btree (id)
    "a_a_key" UNIQUE CONSTRAINT
Dear All,
Recently I have released the next version of epqa, which is a very useful tool that gives input for optimizing and fine-tuning psql queries.
epqa is a tool similar to pqa, but designed and implemented to parse log files that run to gigabytes. The report it produces is similar as well.
Dear All,
I am going to migrate a database from one version to another. Is there any article or other document explaining the possibilities and related issues?
Further Explanation:
I have a database in postgres X.Y which has around 90 tables and a lot of data in it.
In the next version of
ok
On Tue, Mar 25, 2008 at 5:33 PM, Alvaro Herrera <[EMAIL PROTECTED]>
wrote:
> Please stop reposting your questions to multiple groups. Since all your
> questions are about performance, please stick to the pgsql-performance
> list. Posting to pgsql-sql is not really appropriate, and in
> pgsql
I have a table with 32 lakh (3.2 million) records in it. The table size is nearly 700 MB, and my machine has 1 GB + 256 MB of RAM. I created a tablespace in RAM, and then created this table in that tablespace.
So now everything is in RAM; if I do a count(*) on this table it returns 327600 in 3 seconds. Why i
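For context, a minimal sketch of the setup described above (CREATE TABLESPACE requires 8.0+; the mount point and table definition are assumptions):

-- assumes a RAM-backed filesystem (e.g. tmpfs) mounted at /mnt/ramdisk,
-- in a directory owned by the postgres user
CREATE TABLESPACE ram_space LOCATION '/mnt/ramdisk';
CREATE TABLE calls (
    id   serial PRIMARY KEY,
    data text
) TABLESPACE ram_space;
SELECT count(*) FROM calls;

Even with the table fully in RAM, count(*) still has to visit every visible row (PostgreSQL keeps no cached row count), so the time grows with table size rather than dropping to near zero.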
hi all,
I want this mail to be continued as a summary of performance tuning tools... or other postgres-related tools..
I'll start by saying there is a tool, SchemaSpy (I got to know about it from another group); it will draw an ER diagram and gives interesting information about our postgres
Is there any tool to draw an ER diagram from an SQL schema file...
Is there any article describing migrating a database from postgresql 7.4 to 8.1?
hi,
How do I find trigger names in my database?
Using psql 7.4.
The following query shows system triggers; I want to list only the triggers created by me:
select relname, tgname, tgtype, proname, prosrc, tgisconstraint,
       tgconstrname, tgconstrrelid, tgdeferrable, tginitdeferred, tgnargs, tgattr
from pg_trigger t, pg_class c, pg_proc p
where t.tgrelid = c.oid and t.tgfoid = p.oid
  and not t.tgisconstraint;  -- filters out FK constraint (system) triggers
now it is for 500 records.
postgres 7.4
Debian
 call_id  | integer | not null default nextval('call_log_seq'::text)
 agent_id | integer |
call_id already has an index.
count(*
I have a table with more than 1000 records and no index on it; while executing that query it occupies the processor.
I created an index and then executed that query. Now it is not getting executed at all; looking at top, the processor is busy in WA (I/O wait), so it is waiting fo
On 4/30/07, Oleg Bartunov <[EMAIL PROTECTED]> wrote:
On Mon, 30 Apr 2007, psql psql wrote:
> On 4/30/07, Oleg Bartunov <[EMAIL PROTECTED]> wrote:
>>
>> On Mon, 30 Apr 2007, psql psql wrote:
>>
>> > Anyone know why to_tsvector('sausages') mig
On 4/30/07, Oleg Bartunov <[EMAIL PROTECTED]> wrote:
On Mon, 30 Apr 2007, psql psql wrote:
> Anyone know why to_tsvector('sausages') might return "sausages" while
> to_tsvector('default','sausages') correctly returns "sausag"?
>
Anyone know why to_tsvector('sausages') might return "sausages" while
to_tsvector('default','sausages') correctly returns "sausag"?
This is causing me a fairly major headache. I am guessing that the
tsearch2() function used in my trigger is not specifying "default" when
creating the tsvector sinc
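The one-argument form of to_tsvector uses the session's current tsearch2 configuration, which may not be 'default'. A hedged sketch using the tsearch2 contrib functions (assuming set_curcfg/show_curcfg are installed):

SELECT show_curcfg();                       -- which config the 1-arg form uses
SELECT set_curcfg('default');               -- pin it for this session
SELECT to_tsvector('sausages');             -- should now stem to 'sausag'
SELECT to_tsvector('default', 'sausages');  -- explicit form, same result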
On Nov 27, 2006, at 1:21 PM, Brandon Aiken wrote:
The other argument is that it's redundant data with no real meaning to
the domain, meaning using surrogate keys technically violates low-order
normal forms.
It has real meaning in the sense that
I have a php script to upgrade a database, and it works just fine when not in a
transaction, but it fails when I turn on a transaction.
I'm using:
fedora core 4
postgresql 8.0.3
php-pgsql-4.3.11
The error is:
[db_error: message="DB Error: unknown er
hi,
I am becoming more and more convinced that in order to achieve the required performance and scalability I need to split my data amongst many backend machines.
Ideally I would start with about 10 machines and have 1/10th of the data on each. As the data set grows I would then buy additional
I am running a SELECT to get all tuples within a given date range. This query is much slower than I expected; am I missing something?
I have a table 'meta' with a column 'in_date' of type timestamp(0); I am trying to select all records within a given date range. I have an index on 'in_date' and
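For comparison, a sketch of the shape of query that can use a btree on in_date; the half-open range keeps the boundaries unambiguous (table and column names from the post, dates made up):

CREATE INDEX meta_in_date_idx ON meta (in_date);
SELECT *
FROM meta
WHERE in_date >= '2003-01-01'   -- inclusive lower bound
  AND in_date <  '2003-02-01';  -- exclusive upper bound

Running it under EXPLAIN will confirm whether an index scan is actually chosen.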
After applying the patches supplied so far, and also trying the latest stable tar.gz for tsearch2 (downloaded 24th of September),
I am still experiencing the same issue as previously described:
I try to do a
SELECT to_tsvector( 'default', 'some text' )
The backend crashes.
SELECT to_tsvector(
Does PostgreSQL have an existing solution for distributing the database amongst several servers, so that each one holds a subset of the data and queries are passed to each one and then collated by a master server?
I have heard erServer mentioned but I got the impression this might just be for bac
Tom Lane writes:
> That has nothing whatever to do with how much memory the kernel will let
> any one process have. Check what ulimit settings the postmaster is
> running under (particularly -d, -m, -v).
My ulimit settings you requested look ok (others included for info):
ulimit -d, -m, -v : un
First, apologies for the stuff about "I don't understand why there's only one core file"; I now have a post-it note saying "ulimit gets reset at reboot" (I assume that's what happened).
So please find below a potentially more useful core file gdb output:
Core was generated by `postgres: mat
> [EMAIL PROTECTED] writes:
> > I have set "ulimit -c unlimited" as you sugested,
> > i then copied postmaster to /home/postgres
> > and ran it as postgres from there...
> > but still no core files. Where should they appear?
>
> In $PGDATA/base/yourdbnumber/core (under some OSes the file name mi
> From: Tom Lane <[EMAIL PROTECTED]>
>
> [EMAIL PROTECTED] writes:
> > How do i get the core files to examine? There never seem to be any
> > produced, even outside the debuggers.
>
> Most likely you have launched the postmaster under "ulimit -c 0", which
> prevents core dumps. This seems to
After more poking I discovered that the to_tsvector function call does not cause a seg fault in the backend if you pass it only numbers, characters, and whitespace; instead it works as desired.
ddd postmaster
<- run postmaster with -D /data ->
psql test
<- seg fault, similar LO
Hi, I am having problems manipulating bit strings.
CREATE TABLE lookup(
fname TEXT PRIMARY KEY,
digest BIT VARYING
);
I am trying to construct another bit string based on the length of the first:
SELECT b'1'::bit( bit_length( digest ) ) FROM lookup;
This doesn't work as I had hoped; where am I
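The length in a ::bit(n) cast must be a constant, so it cannot pick up bit_length(digest) per row. One possible workaround is a small PL/pgSQL helper (hypothetical name all_ones; EXECUTE ... INTO needs 8.1+) that builds the bit literal dynamically:

CREATE OR REPLACE FUNCTION all_ones(len integer) RETURNS bit varying AS $$
DECLARE
    result bit varying;
BEGIN
    -- builds e.g. SELECT B'111' for len = 3 and runs it dynamically
    EXECUTE 'SELECT B''' || repeat('1', len) || '''' INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

SELECT all_ones(bit_length(digest)) FROM lookup;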
I am trying to use the fti module to search my text.
Searching through the raw text using ILIKE takes 3 seconds;
searching using fti takes 212 seconds.
Then I tried to turn off seq_scan to see what happens; the
planner still does a seq_scan.
Why does the planner not use the index?
Are there any o
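One classic reason the planner ignores an index here is an anchored pattern match (LIKE or ~) under a non-C locale, where a plain btree cannot be used; an equality comparison still can. A hedged sketch for diagnosis (the fti table and column names are assumptions based on contrib/fti conventions):

CREATE INDEX pages_fti_string_idx ON pages_fti (string);
SET enable_seqscan = off;  -- for diagnosis only, not for production
EXPLAIN
SELECT p.*
FROM pages p, pages_fti f
WHERE f.string = 'sausag'  -- equality is index-friendly in any locale
  AND f.id = p.id;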
> > I then copied the lib_webwords.so into my $libdir
> >
> > I have run
> >
> > psql mybd < dict_webwords.sql
> >
> Once you did 'psql mybd < dict_webwords.sql' you should be able to use it :)
> Test it :
>select lexize('w
Bad form to reply to my own posting, I know, but -
I notice that the integer dictionary can accept MAXLEN for the longest number that is considered a valid integer. Can I set MAXLEN for the en dictionary to be the longest word I want indexed?
I think I'd need to create a new dictionary...?
I am trying to set up tsearch2 on postgresql 7.3.4 on a Redhat 9 system, installed from rpms.
There seemed to be some files required for the installation of tsearch missing, so I downloaded the src bundle too.
Tsearch2 then compiled OK, but now the command:
psql mydb < tsearch2.sql
fails wit
Below is the EXPLAIN ANALYZE output of a typical current query.
I have just begun looking at tsearch2 to index the header and body fields.
I have also been using 'atop' to see I/O stats on the disk; I am now pretty sure that's where the current bottleneck is. As soon as a query is launched the
I am looking at ways to speed up queries; the most common way for queries to be constrained is by date range. I have indexed the date column. Queries are still slower than I would like.
Would there be any performance increase for these types of queries if the tables were split by month as de
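For what it's worth, a hedged sketch of month-based splitting via inheritance plus CHECK constraints (constraint exclusion needs 8.1+; the meta/in_date names are borrowed from the earlier post):

CREATE TABLE meta_2003_07 (
    CHECK (in_date >= '2003-07-01' AND in_date < '2003-08-01')
) INHERITS (meta);
CREATE INDEX meta_2003_07_in_date_idx ON meta_2003_07 (in_date);
SET constraint_exclusion = on;  -- lets the planner skip non-matching months
SELECT * FROM meta
WHERE in_date >= '2003-07-01' AND in_date < '2003-08-01';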
Ron, thank you for your comments; sorry for the slow response. I actually replied to you on Saturday but I think the list was having trouble again?!
Your questions are answered below...
> On Fri, 2003-07-25 at 07:42, [EMAIL PROTECTED] wrote:
> > As mentioned previously I have a large text databa
As mentioned previously, I have a large text database with upwards of 40GB of data and 8 million tuples.
The time has come to buy some real hardware for it.
Having read around the subject online, I see the general idea is to get as much memory and the fastest I/O possible.
The budget for the serv
Ok - discovered the solution in pgsql-php, repeated below for reference:
From: "Peter De Muer (Work)" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: Re: 7.3.1 update gives PHP libpq.so.2 problem
Date: Tue, 4 Feb 2003 14:06:04 +0100
try making a soft
Hi,
I'm having trouble with libpq.so.2.
Specifically:
Can't load '/usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi/auto/
Pg/Pg.so' for module Pg: libpq.so.2: cannot open shared object file: No
such file or directory at /usr/lib/perl5/5.8.0/i386-linux-thread-multi/
DynaLoader.pm line 229.
I
Apologies if this is a repost - I tried sending it yesterday and haven't seen it in the forum yet.
I am currently writing a perl script to convert the string a user supplies to a search engine into SQL. The user supplies a string in the same format as google uses - e.g. "cat -dog" finds records
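As an illustration of the target SQL, "cat -dog" might translate to something like the following (a sketch; the table and column names are assumptions):

SELECT *
FROM documents
WHERE body ILIKE '%cat%'       -- required term
  AND body NOT ILIKE '%dog%';  -- excluded term ("-dog")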