Hi,
I am looking for an efficient and effective solution to eliminate
duplicates in a continuously updated "cumulative" transaction table
(no deletions are envisioned as all non-redundant records are
important). Below is my situation.
I do have a “central” transaction table (A) that is expected to
I still have some problems with my xml index
CREATE INDEX xml_index
ON time_series
USING btree
((
(xpath('/AttributeList/Attributes/Attribute/Name/text()',
external_attributes))[1]::text));
When I run the following query the index is not used :
select id from time_series where
array_uppe
I have to compare two schemas in two different PostgreSQL databases to
identify the mismatches between the production and development environment
databases.
Is there any good tool?
Any suggestion please.
On Sunday 07 March 2010 02.49:29 Bruce Momjian wrote:
> Manlio Perillo wrote:
> > I think the following behaviour is not intuitive:
> >
> > manlio=> DROP TABLE IF EXISTS foo.bar;
> > ERROR: schema "foo" does not exist
> >
> > The statement should not fail if the schema does not exist
>
> Hmm.
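A workaround sketch for the behaviour above, using the foo.bar names from the example (the DO block needs 9.0 or later; on older versions the same check can live in a plpgsql function):

```sql
-- Only attempt the drop when the schema actually exists,
-- so a missing schema no longer raises an error.
DO $$
BEGIN
  IF EXISTS (SELECT 1 FROM pg_namespace WHERE nspname = 'foo') THEN
    EXECUTE 'DROP TABLE IF EXISTS foo.bar';
  END IF;
END
$$;
```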
On Sun, 2010-03-07 at 16:40 +0600, AI Rumman wrote:
> I have to compare two schemas in two different PostgreSQL databases to
> identify the mismatches between the production and development
> environment
> databases.
>
> Is there any good tool?
http://bucardo.org/wiki/Check_postgres
check_postgres
On 7 Mar 2010, at 11:02, Chris Roffler wrote:
> I still have some problems with my xml index
>
> CREATE INDEX xml_index
> ON time_series
> USING btree
> ((
> (xpath('/AttributeList/Attributes/Attribute/Name/text()',
> external_attributes))[1]::text));
>
> When I run the following query
I want to provide a "suggest word as you type" feature in an application
(like google suggest).
All the documents are - of course - stored within a PostgreSQL database,
within TEXT columns.
To provide those suggestions, I need a list of all words, together with a
number indicating the number of oc
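One way to build that word list server-side, sketched with a hypothetical documents(body text) table (regexp_split_to_table is available from 8.3 on):

```sql
-- Count occurrences of each word across all TEXT documents.
SELECT word, count(*) AS occurrences
FROM (
  SELECT regexp_split_to_table(lower(body), E'\\W+') AS word
  FROM documents
) AS words
WHERE word <> ''
GROUP BY word
ORDER BY occurrences DESC;
```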
Alban
thanks for your reply. Yes, I am looking for node exists ...
I'll give it a roll.
>There are a couple of cases where Postgres won't use your index, but in this
>case it's quite clearly because you're asking for (quite) a different
>expression than the one you indexed.
>
>You seem to wa
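The usual fix is to repeat the indexed expression verbatim in the WHERE clause so the planner can match it to the index; a sketch against the index above (the 'Color' literal is made up):

```sql
-- The WHERE clause uses exactly the expression the btree index was built on.
SELECT id
FROM time_series
WHERE (xpath('/AttributeList/Attributes/Attribute/Name/text()',
             external_attributes))[1]::text = 'Color';
```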
Version: 8.3.9
I was surprised when I came across the following while changing some
code to send parameters out of line (to avoid interpolation).
postgres=# prepare test1 as select timestamp '2009-01-01';
PREPARE
postgres=# prepare test2 as select timestamp $1;
ERROR: syntax error at or near "$1"
"Massa, Harald Armin" writes:
> I want to provide a "suggest word as you type" feature in an application
> (like google suggest).
> All the documents are - of course - stored within a PostgreSQL
> database, within TEXT columns.
See pg_trgm.
http://www.postgresql.org/docs/8.4/static/pgtrgm.h
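A minimal pg_trgm sketch, assuming the words have been extracted into a hypothetical words(word text) table (on 8.4, pg_trgm is installed by running the contrib SQL script; CREATE EXTENSION only arrived in 9.1):

```sql
-- Trigram index to make similarity searches fast.
CREATE INDEX words_trgm_idx ON words USING gist (word gist_trgm_ops);

-- Suggestion lookup as the user types:
SELECT word, similarity(word, 'postgr') AS sml
FROM words
WHERE word % 'postgr'          -- % is pg_trgm's similarity operator
ORDER BY sml DESC
LIMIT 10;
```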
Martijn van Oosterhout writes:
> postgres=# prepare test1 as select timestamp '2009-01-01';
> PREPARE
> postgres=# prepare test2 as select timestamp $1;
> ERROR: syntax error at or near "$1"
> The workaround is simple, use a cast instead, but is there a particular
> reason why you can't use a pa
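The cast workaround mentioned above, spelled out — the `timestamp 'literal'` form only accepts a string constant, so a parameter has to be cast instead:

```sql
PREPARE test2 AS SELECT $1::timestamp;
EXECUTE test2 ('2009-01-01');
```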
Hi everyone,
I have a big table (more than 10 million rows) and created a three-column
index (the index was not created together with the table). Within one day of
the index being created, query performance is OK. But
with more than several thousand rows of data inserted into
2010/3/7 zxo102 ouyang
> Hi everyone,
> I have a big table (more than 10 million rows) and created a three-column
> index (the index was not created together with the table). Within one day of
> the index being created, query performance is OK. But
> with more than several th
Tom Lane wrote on 07.03.2010 16:34:
> We wouldn't even support it at all because it's so syntactically messy and
> inextensible
I like it :)
It's the only cross-DBMS way to write down a date or timestamp literal.
Thomas
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
Hi
does anyone have a DBT2 installation guide?
I have the installation manual for DBT1 and DBT2 but I am using DBT2.
I couldn't find it.
Help me ..
thanks,
--
Megha
Hi,
I use FreeBSD 7.2-RELEASE #0: Fri May 1 07:18:07 UTC 2009
I installed the above port and my /usr/local/etc/rc.d/postgresql
script doesn't work...it doesn't initialize the db, it doesn't make it
start
If I do it "manually" with the pgsql user created at installation:
[pg...@zennavo /]$ i
On 7 Mar 2010, at 24:16, Aleksandar Sosic wrote:
> Hi,
>
> I use FreeBSD 7.2-RELEASE #0: Fri May 1 07:18:07 UTC 2009
>
> I installed the above port and my /usr/local/etc/rc.d/postgresql
> script doesn't work...it doesn't initialize the db, it doesn't make it
> start
Did you add it to /etc
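For reference, the usual FreeBSD steps, sketched from the paths in the post (the rc.conf knob is the standard one for this port):

```shell
# rc.d scripts are no-ops until the service is enabled in /etc/rc.conf:
echo 'postgresql_enable="YES"' >> /etc/rc.conf

# Initialize the cluster once, then start the server:
/usr/local/etc/rc.d/postgresql initdb
/usr/local/etc/rc.d/postgresql start
```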
Hi All,
I got the following error and I'm not sure how to fix it.
"psql:/tmp/030610dumpfile.txt:4369: ERROR: invalid byte sequence for encoding
"UTF8": 0x93
HINT: This error can also happen if the byte sequence does not match the
encoding expected by the server, which is controlled by "client_encoding"."
Hi!
After going several months without such incidents, we now got bit by the same
problem again. We have since upgraded the hardware we ran the database cluster
on and currently use version 8.3.7. The general outline of the problem hasn't
changed much though - we still don't use the database 'p
On Sun, Mar 7, 2010 at 6:06 PM, Markus Wollny wrote:
> Hi!
>
> After going several months without such incidents, we now got bit by the same
> problem again. We have since upgraded the hardware we ran the database
> cluster on and currently use version 8.3.7. The general outline of the
> proble
On Sun, Mar 7, 2010 at 1:45 AM, Allan Kamau wrote:
> Hi,
> I am looking for an efficient and effective solution to eliminate
> duplicates in a continuously updated "cumulative" transaction table
> (no deletions are envisioned as all non-redundant records are
> important). Below is my situation.
I
Hi,
I have PostgreSQL 8.3.7. I have the following issue:
$ psql databasename
psql: FATAL: role "litvinse" does not exist
I found the workaround:
$ kdestroy
$ psql databasename
Welcome to psql 8.3.7, the PostgreSQL interactive terminal.
Type: \copyright for distribution terms
\h fo
On Sat, Mar 6, 2010 at 6:16 PM, Aleksandar Sosic wrote:
> What's the problem with the startup script? am I doing
> something wrong?
>
The startup script starts the service at boot, or when manually run.
If you run it with the "init" command it will do that. Normally it
just starts/stops the serv
Dmitry Litvintsev writes:
> Apparently the default user is extracted from the Kerberos ticket. Is it
> possible to disable this behavior and take the current unix user as the
> default user?
We no longer do that as of 8.4 and later. You could possibly back-patch
the change if you are not in a position to update
Hello ,
I want to read oracle database table data from postgre. (like in oracle we
can use database links, is there something like this available in
postgre).
I tried below -
select dblink_connect('host=.., dbname=..user=.. pwd=..'); but it is
giving error database refused connection. Do i ne
venkatra...@tcs.com wrote:
Hello ,
I want to read oracle database table data from postgre. (like in
oracle we can use database links, is there something like this
available in postgre).
I tried below -
select dblink_connect('host=.., dbname=..user=.. pwd=..'); but it is
giving error databa
On Mon, 2010-03-08 at 10:40 +0530, venkatra...@tcs.com wrote:
>
> I want to read oracle database table data from postgre. (like in
> oracle we can use database links, is there something like this
> available in postgre).
(It is not postgre, it is PostgreSQL or Postgres.)
You can try dbi-link:
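Two separate problems are worth noting here: libpq's connection keyword is password, not pwd, and dblink can only connect to other PostgreSQL servers (hence dbi-link for Oracle). A sketch of a valid dblink call, with placeholder connection values:

```sql
-- Valid libpq keywords: host, dbname, user, password (not "pwd").
SELECT dblink_connect('myconn',
       'host=127.0.0.1 dbname=mydb user=myuser password=secret');

-- Query through the named connection; the column list must be declared.
SELECT * FROM dblink('myconn', 'SELECT id, name FROM some_table')
         AS t(id integer, name text);
```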
On Mon, Mar 8, 2010 at 5:49 AM, Scott Marlowe wrote:
> On Sun, Mar 7, 2010 at 1:45 AM, Allan Kamau wrote:
>> Hi,
>> I am looking for an efficient and effective solution to eliminate
>> duplicates in a continuously updated "cumulative" transaction table
>> (no deletions are envisioned as all non-r
I've got an 8.4.2 database where it appears that vacuum keeps redoing the same
table and indexes, never thinking it's finished:
auditor=# VACUUM analyze VERBOSE repair_queue ;
INFO: vacuuming "public.repair_queue"
INFO: scanned index "repair_queue_pkey" to remove 2795932 row versions
DETAIL: C
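One common reason VACUUM keeps finding the same dead rows is a long-lived open transaction holding back the cleanup horizon; a check sketched for 8.4 (the column is procpid there, renamed pid in later releases):

```sql
-- Look for old open transactions that prevent dead rows from being removed.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;
```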
On Sun, Mar 7, 2010 at 11:31 PM, Allan Kamau wrote:
> On Mon, Mar 8, 2010 at 5:49 AM, Scott Marlowe wrote:
>> On Sun, Mar 7, 2010 at 1:45 AM, Allan Kamau wrote:
>>> Hi,
>>> I am looking for an efficient and effective solution to eliminate
>>> duplicates in a continuously updated "cumulative" tra
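For the record, a common in-place de-duplication pattern for such a table, sketched with hypothetical names (txn_key as the business key; ctid picks one physical survivor per duplicate group):

```sql
-- Remove all but one physical copy of each duplicated key:
DELETE FROM transactions a
USING transactions b
WHERE a.txn_key = b.txn_key
  AND a.ctid > b.ctid;

-- Prevent future duplicates at insert time:
CREATE UNIQUE INDEX transactions_txn_key_uniq ON transactions (txn_key);
```

With the unique index in place, the loader can simply skip rows that fail the constraint instead of de-duplicating after the fact.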