[GENERAL] RLS & SEpgsql

2012-06-03 Thread Matthias
Hello,

I have a question about row-level security and SEpgsql. As far as I
can tell, the current state is that the leaky-view problem is solved
and some kind of security-checking provider (implemented for SELinux)
exists.
Row-level security seems to be an additional step. Is it planned for
9.3dev? If yes, will this work on Windows by default, or do I need to
write a custom security-checking provider? Or will there be some kind
of simple default provider which you can configure in some way?
Right now I am emulating a simple form of RLS via an intarray field
attached to each row, which stores the permissions necessary to read
the row.
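
A sketch of that kind of emulation, under the assumption of a table
layout like the following (all names are illustrative; the && overlap
operator works on plain int[], and a GIN index makes it indexable):

```sql
-- Hypothetical table: each row carries the group IDs allowed to read it.
CREATE TABLE document (
    id       serial PRIMARY KEY,
    body     text,
    read_acl int[] NOT NULL   -- group IDs permitted to read this row
);

-- A GIN index makes the overlap test below indexable.
CREATE INDEX document_read_acl_idx ON document USING gin (read_acl);

-- Every query filters by the current user's group memberships:
-- a row is visible if its ACL overlaps the user's groups.
SELECT id, body
FROM document
WHERE read_acl && ARRAY[3, 17, 42];   -- groups of the current user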

Thanks,
-Matthias

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] Inaccurate Explain Cost

2012-09-27 Thread Matthias

On 27.09.2012 at 02:04, Jeff Janes wrote:


On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski wrote:

On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:

The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
When I disable nested loops, I get a cost of 2,535,992.34 which runs in
only 133,447.790 ms.  We have run queries on our database with a cost
of 200K before and they ran in less than a few seconds, which makes
me wonder if the first query plan is inaccurate.  The other issue is
understanding why a query plan with a much higher cost takes less
time to run.


Are you under the impression that cost should be somehow related to
actual time?


I am certainly under that impression.  If the estimated cost has
nothing to do with run time, then what is it that the cost-based
optimizer is trying to optimize?


See http://www.postgresql.org/docs/9.2/static/runtime-config-query.html  
section "18.7.2. Planner Cost Constants".
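
For reference, the cost constants described in that section are
relative units, not milliseconds (seq_page_cost = 1.0 is the
baseline). A sketch of inspecting and adjusting them in a session:

```sql
-- Show the current planner cost constants (defaults in comments).
SHOW seq_page_cost;          -- 1.0: baseline, one sequential page fetch
SHOW random_page_cost;       -- 4.0: a random page fetch
SHOW cpu_tuple_cost;         -- 0.01: processing one row
SHOW cpu_index_tuple_cost;   -- 0.005: processing one index entry
SHOW cpu_operator_cost;      -- 0.0025: evaluating one operator/function

-- On hardware where random I/O is cheap (SSDs, or a fully cached
-- database), lowering random_page_cost often brings estimated costs
-- closer to observed run times:
SET random_page_cost = 1.5;
```

With well-tuned constants, cost should correlate roughly with run
time, but it remains an abstract unit the planner uses for comparing
plans, not a time prediction.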


-Matthias




[GENERAL] Gist cost estimates

2012-01-31 Thread Matthias
Hello,

I've noticed the new range data types in 9.2dev. I'm really looking
forward to using them, so I built Postgres 9.2dev on Windows to try them out.

While testing I noticed one thing. I have a simple test table with 1
million rows. There's a column called valid_range (of type int4range)
which is GiST indexed. Now when I do a query like

 select * from mytable where valid_range && int4range(100,200)

it will use the created GiST index. But the cost estimation fails
completely: for whatever reason it always assumes 5104 rows
will be returned, while in reality more than 300k rows are returned.
If I change the query to look like

 select * from mytable where valid_range && int4range(null,null)

it will still estimate 5104 rows to be returned (in reality it's 1M
rows -- the whole table). This leads to grossly inefficient query
plans.
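
A sketch of how such a mis-estimate shows up, assuming a table like
the one described (table and column names are illustrative):

```sql
-- Hypothetical setup matching the description above.
CREATE TABLE mytable (id serial PRIMARY KEY, valid_range int4range);
CREATE INDEX mytable_valid_range_idx ON mytable USING gist (valid_range);

-- Compare the planner's row estimate against the actual row count:
EXPLAIN ANALYZE
SELECT * FROM mytable WHERE valid_range && int4range(100, 200);
-- Compare "rows=<estimate>" in the plan line against
-- "actual ... rows=<real count>" in the same line of output.
```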

Curiously, I have the same problem with Postgres' cube data type
(tested on 9.1, which also estimates exactly 5104 rows). And
PostGIS indexes have a similar (though maybe unrelated) problem.

Do you have any explanation for these grossly wrong cost estimates?
Are the selectivity estimators simply unimplemented? What can I do to
debug this further?

Thank you,
-Matthias

P.S.: I've already increased the statistics target (followed by
VACUUM ANALYZE), to no avail.



[GENERAL] Temporal foreign keys

2012-02-02 Thread Matthias
Hey,

how can I implement temporal foreign keys in PostgreSQL? Is writing
triggers currently the only way to enforce temporal referential
integrity?
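
At the time of writing, triggers are indeed the usual workaround. A
minimal sketch under illustrative names (this checks only the child
side; a complete solution also needs triggers on parent UPDATE/DELETE,
and it ignores concurrency issues):

```sql
CREATE EXTENSION btree_gist;  -- needed for "id WITH =" below

-- Parent rows with validity periods; the exclusion constraint acts
-- as a temporal "primary key" (no overlapping periods per id).
CREATE TABLE parent (
    id    int,
    valid tsrange,
    EXCLUDE USING gist (id WITH =, valid WITH &&)
);

CREATE TABLE child (
    parent_id int,
    valid     tsrange
);

-- Trigger: a child row's validity must be contained in a single
-- matching parent row's validity (coverage split across several
-- parent rows is not handled by this simple check).
CREATE FUNCTION check_temporal_fk() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM parent p
        WHERE p.id = NEW.parent_id
          AND p.valid @> NEW.valid
    ) THEN
        RAISE EXCEPTION 'temporal foreign key violation';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER child_temporal_fk
    BEFORE INSERT OR UPDATE ON child
    FOR EACH ROW EXECUTE PROCEDURE check_temporal_fk();
```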

-Matthias



Re: [GENERAL] Encryption - searching and sorting

2012-05-16 Thread Matthias
2012/5/14 Bruno Wolff III:
> On Thu, May 03, 2012 at 15:42:00 +0200, David Welton wrote:
>>
>> Thoughts?

Something I found interesting while researching exactly the same problem:

http://web.mit.edu/ralucap/www/CryptDB-sosp11.pdf

I haven't used any of it because the most interesting index operators
for me are not supported, nor do I know how well it performs in
reality, but the section on encryption and fast searching with the
different algorithms is a really interesting read.

-Matthias



[GENERAL] How to handle larger databases?

2004-11-16 Thread matthias
I am currently creating a database with less than 20 simple tables (only
SQL 92 types, simple constraints, no PostgreSQL specific stuff, no stored
procedures...)

Unfortunately, some of those tables will contain up to 4 Million entries,
making the size of the entire database 700-1000MB.

In order to maintain good query times (hopefully <1-3 seconds) I would
like to ask for some tips on how to manage and organize such a database.
Like what should I do and what should I avoid? Where and how should I use
indexes and all that stuff?

I know there are much larger PostgreSQL databases around. How do they
manage good query times?

Thanks a lot

Matt


P.S. The test system is a normal Win 2000 PC, the target machines will be
IA-32 based Linux machines.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] How to deal with almost recurring data?

2004-11-16 Thread matthias
I am creating a database which is supposed to contain many data entries
(events) that differ only in the date on which they occur.

So let's say event 1 occurs every Monday, Tuesday and Sunday between
January 1st and May 30th, 2005.

How do I store and manage such data in a meaningful way?

The simple idea would be to store the event itself in one table and have
another table containing all the dates (all Mondays, Tuesdays and Sundays
between 2005-01-01 and 2005-05-30) plus a foreign key to event_ID =>
(date, event_id).

The problem is that we are dealing with several tens of thousands of
events, resulting in several million single dates if I store them in the
described manner.

That is why I would like to know if there is a better way to store and
manage such information?
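
For reference, a sketch of the straightforward two-table layout
described above (all names are illustrative):

```sql
CREATE TABLE event (
    event_id serial PRIMARY KEY,
    name     text NOT NULL
);

-- One row per concrete occurrence date.
CREATE TABLE event_date (
    date     date NOT NULL,
    event_id int  NOT NULL REFERENCES event(event_id),
    PRIMARY KEY (date, event_id)
);

-- An index on event_id alone helps "all dates for one event" lookups.
CREATE INDEX event_date_event_id_idx ON event_date (event_id);
```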

Thanks

Matt



Re: [GENERAL] How to deal with almost recurring data?

2004-11-16 Thread matthias
Thanks for the answer.
If I go that way, though, I am afraid I will have very long query times.
And I have no idea how to handle large databases of that kind without
losing performance.

If I store the pattern - is there a way to still perform the query in the
database? Or do I have to do that in my software? (Matching the resultset
to the pattern)
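
If the pattern were stored as, say, a date range plus a set of
weekdays, the dates could still be expanded inside the database with
generate_series, so no client-side date arithmetic is needed. A sketch
(schema is illustrative; note that referencing the table's columns in
the function call relies on implicit LATERAL, available in modern
PostgreSQL from 9.3 on):

```sql
-- Pattern: one row per event, with validity range and weekday set
-- (0 = Sunday ... 6 = Saturday, matching EXTRACT(dow ...)).
CREATE TABLE event_pattern (
    event_id   int PRIMARY KEY,
    start_date date  NOT NULL,
    end_date   date  NOT NULL,
    weekdays   int[] NOT NULL
);

-- Expand the pattern to concrete dates on the fly:
SELECT p.event_id, d::date AS occurs_on
FROM event_pattern AS p,
     generate_series(p.start_date, p.end_date, interval '1 day') AS d
WHERE EXTRACT(dow FROM d)::int = ANY (p.weekdays);
```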

Thanks

Matt


 --- Original message ---
Date: 16.11.2004 16:42
From: Richard Huxton <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] How to deal with almost recurring data?

> [EMAIL PROTECTED] wrote:
> > I am creating a database which is supposed to contain many data
> > entries (events) that differ only in the date they occur.
> >
> > So let's say event 1 occurs every Monday, Tuesday and Sunday between
> > January 1st and May 30th 2005.
> >
> > How do I store and manage such data in a meaningful way?
> >
> > The simple idea would be to store the event itself in one table and
> > have another table containing all the dates (all Mondays, Tuesdays and
> > Sundays between 2005-01-01 and 2005-05-30) plus a foreign key to
> > event_ID => (date, event_id).
> >
> > The problem is that we are dealing with several tenthousand events,
> > resulting in several million single dates if I stored it in the
> > described manner.
>
> This seems like a reasonable approach. Don't forget that PostgreSQL is
> designed to store large amounts of data.
>
> Of course, if there is some pattern to the dates then you might be
> better off storing the pattern rather than the individual dates.
>
> --
>Richard Huxton
>Archonet Ltd




Re: [GENERAL] How to deal with almost recurring data?

2004-11-16 Thread matthias
Thanks
The standard case is: I enter a date and two parameters of the event, and
I get the result set.
Even though date arithmetic is quite simple in Java, and I could
therefore extend my software in that fashion, I would much rather perform
all queries entirely in the database.
So in case I go about it the way I described, I will have 4 million
entries in the table that stores the dates and the references to the
events.
Now, how do I handle that database if I want to maintain very good query
times (<1-3 seconds)?
I have no idea how to handle a 1 GB database in terms of query
performance: what tricks to use, what to do, and what to avoid.
Thanks

Matt


 --- Original message ---
Date: 16.11.2004 16:32
From: Csaba Nagy <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] How to deal with almost recurring data?

> I would say it all depends on what you want to do with the data.
> If you want to look up all the possible occurrences for an event, it
> might be useful to have the simple solution you described. If you will
> only look up the next n occurrences starting from a given date, you
> might be better off storing the rule to derive the dates, and then
> calculate them in your software, but this will make your software more
> complicated for sure (dealing with date arithmetics can be incredibly
> complex, I've been there). I would go with the simple solution as long
> as there is enough space in the DB to store all the occurrences...
>
> HTH,
> Csaba.
>
> On Tue, 2004-11-16 at 15:53, [EMAIL PROTECTED] wrote:
> > I am creating a database which is supposed to contain many data
> > entries (events) that differ only in the date they occur.
> >
> > So let's say event 1 occurs every Monday, Tuesday and Sunday between
> > January 1st and May 30th 2005.
> >
> > How do I store and manage such data in a meaningful way?
> >
> > The simple idea would be to store the event itself in one table and
> > have another table containing all the dates (all Mondays, Tuesdays and
> > Sundays between 2005-01-01 and 2005-05-30) plus a foreign key to
> > event_ID => (date, event_id).
> >
> > The problem is that we are dealing with several tenthousand events,
> > resulting in several million single dates if I stored it in the
> > described manner.
> >
> > That is why I would like to know if there is a better way to store
> > and manage such information?
> >
> > Thanks
> >
> > Matt
> >




[GENERAL] Problems importing Unicode

2004-11-16 Thread matthias
I have batch files with entries such as

INSERT INTO country VALUES (248,'ALA','AX','Åland Islands');
INSERT INTO country VALUES (384,'CIV','CI','Côte d\'Ivoire');

I tried to execute them using psql with "\i filename.sql"

Unfortunately, I keep getting an error message:
"ERROR:  invalid byte sequence for encoding "UNICODE": 0xc56c"

How can that be possible?
My database is set to encoding "UNICODE" and so are the batch files.

Why does that not work?
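
A likely diagnosis: read as Latin-1, the reported byte sequence
0xC5 0x6C is exactly "Ål" (as in "Åland"), so the file is most
probably Latin-1 encoded rather than UTF-8. Assuming that is the case,
telling the server the client encoding lets it convert on the fly:

```sql
-- In psql, before running the batch file:
SET client_encoding TO 'LATIN1';
\i filename.sql

-- Or equivalently, use psql's meta-command:
-- \encoding LATIN1
```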

Thanks

Matt




[GENERAL] How to suppress echo while executing batch files?

2004-11-16 Thread matthias
I have to execute fairly large batch files (200-300MB) with several million
INSERT commands.

When I do that from psql using the \i command, I get the echo

INSERT 0 1

for every single INSERT statement of my batch file.

Since I assume that this slows down the whole process, I would like to
know if there is a way to suppress that echo and only get error messages
and such.
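
A couple of psql options commonly used for this (a sketch with an
illustrative database name; note that for loads of this size, COPY is
usually much faster than millions of INSERT statements anyway):

```shell
# Quiet mode: -q suppresses informational output, including the
# per-statement "INSERT 0 1" command tags; errors still reach stderr.
psql -q -d mydb -f filename.sql

# Alternative from inside psql: redirect query output away.
#   \o /dev/null
#   \i filename.sql
#   \o
```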

Thanks

Matt



Re: [GENERAL] Problems importing Unicode

2004-11-16 Thread matthias
Well, they were generated by MySQL and I can open them with e.g. Notepad
on Windows. But I don't know whether they are actually encoded in
Unicode. Since I can open the file with Notepad and read the statements,
I assume it is not Unicode. They look just like in the email below.

The problem is apparently those characters Å or ô, and I really would
like to know how to import those files into PostgreSQL 8.0.0.

Is there a switch I can use to do a codepage / encoding translation?

Why are MS Access and even MySQL able to read those files without
trouble, while PostgreSQL reports an error?

Thanks

Matt



--- Original message ---
Date: 17.11.2004 02:25
From: Tatsuo Ishii <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Problems importing Unicode

> > I have batch files with entries such as
> >
> > INSERT INTO country VALUES (248,'ALA','AX','Åland Islands');
> > INSERT INTO country VALUES (384,'CIV','CI','Côte d\'Ivoire');
> >
> > I tried to execute them using "pgsql \i filename.sql"
> >
> > Unfortunately, I keep getting an error message:
> > "ERROR:  invalid byte sequence for encoding "UNICODE": 0xc56c"
> >
> > How can that be possible?
> > My database is set to encoding "UNICODE" and so are the batchfiles.
> >
> > Why does that not work?
>
> I bet your batch file is not encoded in UNICODE (UTF-8).
> --
> Tatsuo Ishii
>




Re: [GENERAL] postgresql 9.4 streaming replication

2015-10-19 Thread Matthias Müller
WAL sender process starts when streaming client is active. The connection
is initiated by the client. That's why you cannot see it. It also uses the
TCP-port that is used by any other pg client. There is a special entry in
the pg_hba.conf for the replication clients. Refer to this please:
http://www.postgresql.org/docs/9.4/static/auth-pg-hba-conf.html.
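
A sketch of such an entry (user name and network are illustrative; the
database column must be the literal keyword `replication`, and the
user needs the REPLICATION privilege):

```
# TYPE  DATABASE      USER         ADDRESS         METHOD
host    replication   replicator   192.168.0.0/24  md5
```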

cheers
Matthias


[GENERAL] pg_dump crashing

2016-03-15 Thread Matthias Schmitt
b2 22675465   mmppostgre 60017
0x0052e6b3 22708234   mmppostgre 60017
0x0052e6b4 22741003   mmppostgre 60017
0x0052e6b5 22773772   mmppostgre 60017
0x0052e6b6 22806541   mmppostgre 60017
0x 21856270   www-data   6001 
0x 21889039   www-data   6001 

As all server processes have been cancelled, it seems normal to me that
the semids have changed. Beyond that I cannot see anything which might be
useful.

3rd possible solution: Strange values for shared memory:
My Debian 8.3 default settings for shared memory have been:

kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096

These values looked strange to me, so I changed them to some more 
realistic values:
kernel.shmall = 4194304
kernel.shmmax = 17179869184
kernel.shmmni = 4096

4th possible solution: I have read here: 
https://lists.freedesktop.org/archives/systemd-devel/2014-April/018373.html
that logging the postgres user in and out might delete semaphore memory, 
but I cannot reproduce this.

Interestingly enough, I have another PostgreSQL version running perfectly 
under Debian 8.3 on a different machine. This is PostgreSQL 9.4.1. It uses 
the same backup mechanism. I hate to consider a downgrade as a possible 
solution. Maybe a useful piece of info: my machine has 64 GB RAM.

Any ideas where to continue my search?

Best regards

Matthias Schmitt

magic moving pixel s.a.
23, Avenue Grande-Duchesse Charlotte
L-3441 Dudelange
Luxembourg
Phone: +352 54 75 75
http://www.mmp.lu








Re: [GENERAL] pg_dump crashing

2016-03-20 Thread Matthias Schmitt
Hello,

sorry for the late response.

> On 15 Mar 2016, at 18:59, rob stone  wrote:
> 
> I'm running Debian 4.3.5-1 (2016-02-06) x86_64 GNU/Linux.
> 
> Backups are done via a shell script using double hyphen syntax, as in
> e.g.:-
> 
> pg_dump --port=5433 --dbname=mydatabase_1_0_0 --username=mmppostgres
>  --file=/my_backup_path/mydatabase_1_0_0.dump
> 
> We do it twice. First with --data-only and the second time with
>  --format=custom
> 
> Hasn't failed yet. Don't know if this helps at all but worth a try.

Thank you for your answer. But sorry, no, this does not change anything: 
the same server crash occurs when pg_dump is executed from a cron job, 
while it runs perfectly when executed manually from the shell.

Best regards

Matthias Schmitt

magic moving pixel s.a.
23, Avenue Grande-Duchesse Charlotte
L-3441 Dudelange
Luxembourg
Phone: +352 54 75 75
http://www.mmp.lu








Re: [GENERAL] pg_dump crashing

2016-03-20 Thread Matthias Schmitt
Hello,

> On 16 Mar 2016, at 14:55, Adrian Klaver  wrote:
> 
> On 03/15/2016 08:10 AM, Matthias Schmitt wrote:
>> Hello,
>> 
>> since two weeks I am trying to get PostgreSQL 9.5.1 to run on Debian 8.3. 
>> Everything is fine except the daily backup. When calling pg_dump as part of 
>> a cron job pg_dump crashes:
>> 
>> 2016-03-15 01:00:02 CETFATAL:  semctl(23232524, 3, SETVAL, 0) failed: 
>> Invalid argument
>> 2016-03-15 01:00:02 CETLOG:  server process (PID 22279) exited with exit 
>> code 1
>> 2016-03-15 01:00:02 CETLOG:  terminating any other active server processes
>> 2016-03-15 01:00:02 CETWARNING:  terminating connection because of crash of 
>> another server process
>> 2016-03-15 01:00:02 CETDETAIL:  The postmaster has commanded this server 
>> process to roll back the current transaction and exit, because another 
>> server process exited abnormally and possibly corrupted shared memory.
>> 2016-03-15 01:00:02 CETHINT:  In a moment you should be able to reconnect to 
>> the database and repeat your command.
>> 2016-03-15 01:00:02 CETLOG:  all server processes terminated; reinitializing
>> 2016-03-15 01:00:02 CETLOG:  could not remove shared memory segment 
>> "/PostgreSQL.1804289383": No such file or directory
>> 2016-03-15 01:00:02 CETLOG:  semctl(22839296, 0, IPC_RMID, ...) failed: 
>> Invalid argument
>> 2016-03-15 01:00:02 CETLOG:  semctl(22872065, 0, IPC_RMID, ...) failed: 
>> Invalid argument
>> 2016-03-15 01:00:02 CETLOG:  semctl(22904834, 0, IPC_RMID, ...) failed: 
>> Invalid argument
>> …
>> 
>> I am calling pg_dump in my cronjob like this:
>> su - mmppostgres -c "/Users/…/bin/pg_dump -p 5433 mydatabase_1_0_0 > 
>> /my_backup_path/mydatabase_1_0_0.dump"
>> 
>> After the crash the database runs in recovery mode. A restart of the 
>> database brings everything back to normal.
>> This crash is always reproducible and occurs every night during backup. When 
>> calling the same command via the command line everything run fine. In the 
>> system log I can see:
>> 
> 
> Is the command you run via the command line exactly the same, including the 
> su -?

Yes.

> What user are you running the cronjob as?

root

> How do you supply the password for the mmppostgres user?

I configured in pg_hba.conf:
 
local  all  mmppostgres  trust

All local connections from this user are trusted.

Best regards

Matthias Schmitt

magic moving pixel s.a.
23, Avenue Grande-Duchesse Charlotte
L-3441 Dudelange
Luxembourg
Phone: +352 54 75 75
http://www.mmp.lu








Re: [GENERAL] pg_dump crashing ... solved

2016-03-20 Thread Matthias Schmitt
Hello,

thank you for your answers.

> On 20 Mar 2016, at 16:56, Adrian Klaver  wrote:
> 
> So what happens if you either?:
> 
> 1) In the root crontab, change the command to:
> 
> /Users/…/bin/pg_dump -p 5433 mydatabase_1_0_0 -U mmpostgres > 
> /my_backup_path/mydatabase_1_0_0.dump

This will make no difference.

> On 21 Mar 2016, at 03:42, Thomas Munro  wrote:
> 
> Is this related?
> 
> http://www.postgresql.org/message-id/cak7teys9-o4bterbs3xuk2bffnnd55u2sm9j5r2fi7v6bhj...@mail.gmail.com

Yes, this is related. I tried to set RemoveIPC=no, but it made no difference.

Finally I found a solution with the hint given here:
http://www.postgresql.org/message-id/56a52018.1030...@gmx.net

Systemd defaults to removing all IPC (including SysV shared memory and 
semaphores) when a user "fully" logs out. This seems to happen when the 
cron job runs. It apparently makes a difference whether the job runs as a 
"normal" user or as a "system" user.

So I modified my existing postgres user to be a system user

usermod -u  mmppostgres

and now my backup is running. I cannot remember finding such a hint in the 
PostgreSQL documentation. It would be worth mentioning.
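
For reference, the systemd knob discussed in the linked threads lives
in logind.conf (a sketch; note that in this particular case setting it
reportedly did not help, and systemd-logind must be restarted for the
change to take effect):

```
# /etc/systemd/logind.conf
[Login]
RemoveIPC=no
```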

Best regards

Matthias Schmitt

magic moving pixel s.a.
23, Avenue Grande-Duchesse Charlotte
L-3441 Dudelange
Luxembourg
Phone: +352 54 75 75
http://www.mmp.lu








Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-08 Thread Bannert Matthias
Thanks for your reply. I do think it is a Postgres rather than an R issue; 
here's why:

a) R simply puts an SQL string together. What Charles had posted was an 
excerpt of that string. 
Basically we have 1.7 MB of that string. Everything else is equal; just the 
hstore contains 40K key/value pairs. 

b) The error message clearly mentions max_stack_depth, which is a Postgres 
parameter. 

c) If I take just that SQL string (only the first part of it, i.e. the create 
temp table and insert into  part, without all the 
update and join gibberish), put it in a .sql file and simply run it through a 
psql client like this: 
\i myquery.sql

I get exactly the same error message (without any R involved at any stage):

psql:query.sql:3: ERROR:  stack depth limit exceeded
HINT:  Increase the configuration parameter "max_stack_depth" (currently 
7168kB), after ensuring the platform's stack depth limit is adequate.

d) I have run into quite a few R stack errors and they look different... 
(C_STACK_SIZE)

Conclusion:
We are running a simple insert. Nothing special except for the fact that the 
hstore has 40K key/value pairs. Could it be that the indexing of that hstore 
gets kinda large, and thus a query string of only 1.7 MB exceeds the stack?







From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Friday, April 08, 2016 4:20 PM
To: Charles Clavadetscher
Cc: pgsql-general@postgresql.org; Bannert  Matthias
Subject: Re: [GENERAL] max_stack_depth problem though query is substantially 
smaller

"Charles Clavadetscher" writes:
> When R processes the daily time series we get a stack size exceeded
> error, followed by the hint to increase max_stack_depth.

Postgres doesn't generally allocate large values on the stack, and I doubt
that R does either.  Almost certainly, what is causing this is not data
size per se but unreasonable call nesting depth in your R code.  You may
have a function that's actually in infinite recursion, or maybe it's
recursing to a depth governed by the number of data elements.  If so,
consider revising it into iteration with an explicitly-represented state
stack.

regards, tom lane




Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-10 Thread Bannert Matthias
I guess you are right. I have narrowed the query down 
to a simple CREATE TABLE followed by an INSERT: one text field, one hstore 
field and an integer field.
No temporary table, no BEGIN, etc. One record, yet the hstore has 40K 
key/value pairs. No R involved, 
and I still end up with the same error. 

Thanks for the pointer to backend stack traces. I'll try to set that up and 
report what I find. 





From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Friday, April 08, 2016 9:39 PM
To: Bannert  Matthias
Cc: Charles Clavadetscher; pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_stack_depth problem though query is substantially 
smaller

"Bannert  Matthias"  writes:
> Thanks for your reply. I do think it is rather a postgres than an R issue, 
> here's why:
> a) R simply puts an SQL string together. What Charles had posted was an 
> excerpt of that string.
> Basically we have 1.7 MB of that string. Everything else is equal just the 
> hstore contains 40K key value pairs.

Well, as a test I ran a query that included an hstore literal with 4
million key/value pairs (a bit shy of 70MB of query text).  I didn't see
any misbehavior on a machine with 2MB max_stack_depth.  So there's
something else going on in your situation.

I concur with the suggestion to try to get a stack backtrace from the
point of the error.  Setting a breakpoint at errfinish() is usually
an effective strategy when you know that the query will provoke a SQL
error report.

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

regards, tom lane




Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-10 Thread Bannert Matthias
/postmaster.c:3840
#20142 ServerLoop () at 
/build/postgresql-9.3-G1RSAD/postgresql-9.3-9.3.11/build/../src/backend/postmaster/postmaster.c:1691
#20143 0x7fe7fb7c6361 in PostmasterMain (argc=5, argv=) at 
/build/postgresql-9.3-G1RSAD/postgresql-9.3-9.3.11/build/../src/backend/postmaster/postmaster.c:1315
#20144 0x7fe7fb5ff0a3 in main (argc=5, argv=0x7fe7fdd25190) at 
/build/postgresql-9.3-G1RSAD/postgresql-9.3-9.3.11/build/../src/backend/main/main.c:227
Detaching from program: /usr/lib/postgresql/9.3/bin/postgres, process 1521


Is this of any help at all? I really wonder how you can get almost 70 MB 
done while I can't even get 2-5 MB done. Maybe you can share a brief part 
of your 70 MB query?

regards, matt bannert




From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Friday, April 08, 2016 9:39 PM
To: Bannert  Matthias
Cc: Charles Clavadetscher; pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_stack_depth problem though query is substantially 
smaller

"Bannert  Matthias"  writes:
> Thanks for your reply. I do think it is rather a postgres than an R issue, 
> here's why:
> a) R simply puts an SQL string together. What Charles had posted was an 
> excerpt of that string.
> Basically we have 1.7 MB of that string. Everything else is equal just the 
> hstore contains 40K key value pairs.

Well, as a test I ran a query that included an hstore literal with 4
million key/value pairs (a bit shy of 70MB of query text).  I didn't see
any misbehavior on a machine with 2MB max_stack_depth.  So there's
something else going on in your situation.

I concur with the suggestion to try to get a stack backtrace from the
point of the error.  Setting a breakpoint at errfinish() is usually
an effective strategy when you know that the query will provoke a SQL
error report.

https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend

regards, tom lane




Re: [GENERAL] max_stack_depth problem though query is substantially smaller

2016-04-10 Thread Bannert Matthias
Eureka! Thanks so much for your help, patience and the right hunch. Actually 
I am glad now that I ran into that stack issue (and you), because the entire 
thing is also much faster now. 
I changed my app to emit strings like you suggested and it works, also with 
a smaller max_stack_depth. 

Fwiw, I was not stubbornly insisting on nesting operators. Actually I 
switched from "=>" to the hstore function because
a note in the manual said it was deprecated 
(http://www.postgresql.org/docs/9.0/static/hstore.html). Somehow I must have 
understood that note the wrong way. 
What's your take on that operator being deprecated? 
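
For context, my reading of that note (treat this as an assumption, not
an authoritative statement): what was deprecated in 9.0 was the
standalone text => text operator that builds a single hstore pair; the
=> separator inside an hstore literal string is unaffected, as is the
hstore(text, text) constructor function. A sketch of the distinction:

```sql
-- Deprecated (later removed): the "=>" pair-building operator,
-- especially when chained with || thousands of times:
--   SELECT ('a' => '1') || ('b' => '2');

-- Fine: one hstore literal, parsed in a single step:
SELECT '"a"=>"1", "b"=>"2"'::hstore;

-- Also fine: the two-argument constructor function:
SELECT hstore('a', '1') || hstore('b', '2');
```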

regards, matt bannert




From: Tom Lane [t...@sss.pgh.pa.us]
Sent: Saturday, April 09, 2016 5:25 PM
To: Bannert  Matthias
Cc: Charles Clavadetscher; pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_stack_depth problem though query is substantially 
smaller

"Bannert  Matthias"  writes:
> [ very deep stack of parser transformExprRecurse calls ]

> #20137 0x7fe7fb80ab8c in pg_analyze_and_rewrite 
> (parsetree=parsetree@entry=0x7fe7fffdb2a0, 
> query_string=query_string@entry=0x7fe7fdf606b0 "INSERT INTO 
> ts_updates(ts_key, ts_data, ts_frequency) VALUES 
> ('some_id.sector_all.news_all_d',hstore('1900-01-01','-0.395131869823009')||hstore('1900-01-02','-0.395131869823009')||hstore('1"...,
>  paramTypes=paramTypes@entry=0x0, numParams=numParams@entry=0) at 
> /build/postgresql-9.3-G1RSAD/postgresql-9.3-9.3.11/build/../src/backend/tcop/postgres.c:640

The SQL fragment we can see here suggests that your "40K entry hstore" is
getting built up by stringing together 40K hstore concatenation operators.
Don't do that.  Even without the parser stack depth issue, it's uselessly
inefficient.  I presume you're generating this statement mechanically,
not by hand, so you could equally well have the app emit

'1900-01-01 => -0.395131869823009, 1900-01-02 => -0.395131869823009, 
...'::hstore

which would look like a single hstore literal to the parser, and be
processed much more quickly.

If you insist on emitting SQL statements that have operators nested
to such depths, then yes you'll need to increase max_stack_depth to
whatever it takes to allow it.

regards, tom lane




Re: [GENERAL] Seeking advice on database replication.

2010-11-12 Thread Matthias Müller
I don't know Slony that much. I used WAL shipping, but since 9.0 I prefer 
streaming replication with hot standby. This link is a good starting point, 
besides the documentation: 

<http://it.toolbox.com/blogs/database-soup/5-minutes-to-binary-replication-41873>

A solution for your problem might be a second database on the replicated 
DB server that works in normal mode (not hot standby) and is filled from 
the replicated database, which works in read-only mode. PG can easily handle 
multiple clusters containing 80 MB on one "normal" server.

Matthias


On Thursday, 11 November 2010 at 23:05:06, Demitri Muna wrote:
> Hello,
> 
> I am interested in sharing/replicating data between different databases,
> and I'd like to ask if what I'd like to do is possible in postgresql. I
> have read a fair amount of documentation and was looking forward to
> PostgreSQL 9, but I don't think it will do for me what I want.
> 
> I have an astronomical database at one site, let's call it A. At my own
> institution (across the country), I have another database, B. I want to
> replicate all of the tables of A into a read-only copy in B, in as close
> to real-time as possible. The time isn't a critical factor here - if it's
> delayed by even an hour, I'm ok with that. Tables in B will need to JOIN
> against tables from A. The total size of A is ~80MB and grows slowly.
> 
> After reading the documentation for PG9's replication, it seems I cannot do
> this since it only supports replicating a cluster. It appears that I'd
> want to put the tables in B into one schema, the tables from A into
> another schema in the same database (let's call it B.a), and replicate the
> tables from A into B.a. Is this at all possible? This promises to be a
> very powerful tool for us, but I don't know how best to accomplish this.
> 
> Further, I'd like A to be replicated to several institutions. Again, this
> is not a real-time operation, but something that doesn't require user
> intervention is ideal.
> 
> I tried to run Slony-I last year, but found it to be very frustrating and
> never got it to work. (In retrospect, I don't even know if it supports
> schema-level replication).
> 
> Any advice would be greatly appreciated!
> 
> Cheers,
> Demitri
> 
> Center for Cosmology and Particle Physics
> New York University




Re: [GENERAL] Understanding PG9.0 streaming replication feature

2010-12-21 Thread Matthias Müller
Hi Ben,

load balancing is not possible with the tools that come with the postgres 
installation. There is no automatic switch-over to a slave if the master 
fails. A trigger file needs to be created to promote a slave to master. This 
is not done automatically by postgres, but should be done by cluster 
software (e.g. Pacemaker). 

If you can live without load balancing, read Satoshi's page. 

But the slaves can handle read requests (SELECT). Maybe this helps a little 
bit. 
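For reference, a minimal 9.0 standby configuration looks roughly like this (hostnames, users and file paths are placeholders, not from the original thread):

```
# recovery.conf on the standby
standby_mode      = 'on'
primary_conninfo  = 'host=master.example.com port=5432 user=replication'
trigger_file      = '/tmp/promote_to_master'   # touch this file to promote

# postgresql.conf on the standby -- allow read-only SELECTs while replaying WAL
hot_standby = on
```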

Regards 
Matthias

On Wednesday, 22 December 2010 at 00:31:14, Satoshi Nagayasu wrote:
> Hi Ben,
> 
> On 2010/12/22 7:46, Ben Carbery wrote:
> > FYI, not looking for a detailed how to here.. I have read the manual
> > twice and just can't figure which sections are relevant. The manual
> > seems to be trying to cover all uses simultaneously which is always
> > going to get confusing :) For example do I need I need WAL archiving or
> > not?
> 
> My blog entry would be a good entry point for you.  :)
> 
> 5 steps to implement a PostgreSQL replication system
> http://pgsnaga.blogspot.com/2010/05/5-steps-to-implement-postgresql.html
> 
> It was written to be a guide for building a simple master-slave config.
> 
> Please take a look, including the comments.
> 
> Thanks,
> 
> > On Tue, Dec 21, 2010 at 2:40 PM, Ben Carbery <ben.carb...@gmail.com> wrote:
> > Hi,
> > 
> > I am having some trouble trying to figure out how to configure this
> > particular scenario..
> > 
> > I have a pair of pg servers that I want to put in a Master/Standby
> > configuration. Currently a script dumps the master db every hour,
> > copies it to the standby, restores, and restarts the server. The aim
> > is to replace the dumps/restores with streaming replication and
> > ensure the standby is always up to date.
> > 
> > In this case writes are infrequent, but reads are constant, and I
> > only need high availability for reads. I would ideally like both
> > master and standby to be available simultaneously to allow
> > load-balancing. My confusion seems to be around the fact I don't
> > need failover - my applications will detect a master down and
> > immediately start using the standby, so there is no need to allow
> > writes on the standby, they will just wait for the master to be
> > available again - I am not sure what the minimum config needed for
> > this scenario is..
> > 
> > cheers,
> > 
> > Ben




Re: [GENERAL] Como usar interval con un variable

2011-01-04 Thread Matthias Müller
Hello,

there is a Spanish-language portal about Postgres with mailing lists and IRC:
http://www.postgresql-es.org/comunidad

Here we speak English. Sorry. 

Greetings from Germany.
Matthias

On Tuesday, 4 January 2011 at 21:26:33, liliana lazarte wrote:
> Hi folks:
> I am writing a function that takes as parameters startdate, enddate,
> starttime, endtime, and a frequency. The function must generate the dates
> that fall within the range of dates passed as parameters, and for each
> date it must generate a range of hours and minutes according to the
> frequency. Now to the part that matters: to generate the times I use the
> frequency to increment the time. Here is a piece of the code:
> 
> for example:
> frec=5; -- declared as integer
> v_horaini=8 --variable declared as timestamp
> v_horafin= 12 -- variable declared as timestamp
> 
> v_hora1:=v_horaini;
> WHILE v_hora1<=v_horafin LOOP
>  v_hora_actual:=v_hora1;
>   v_hora1:=v_hora1 + interval || '  '' ' || frec || ' minute '' ';
> --***This is where I get the error
>  END LOOP;
> 
> Why do I get an error? And how could I make v_hora increment according to
> the frequency? In the example that would be: 8, 8:05, 8:10, etc.
> Regards and thanks
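For the archives: the error comes from concatenating the keyword `interval` as if it were a string. In PL/pgSQL an integer variable can simply be multiplied with a fixed interval (a sketch using the variable names from the post):

```sql
v_hora1 := v_hora1 + frec * interval '1 minute';
-- or build the interval from text and cast it:
v_hora1 := v_hora1 + (frec || ' minutes')::interval;
```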




[GENERAL] unexpected data offset flag 0

2014-04-25 Thread matthias . trauffer
Hello,

I'm trying to restore a database dumpfile I made of a small 9.3.4 database 
using pg_dump -Fc -v -f "db.dump" db, but when I try to restore it on the 
same or on another 9.3.4 system with  pg_restore -C -d postgres db.dump I 
get the following error message:
pg_restore: [archiver] unexpected data offset flag 0
Even listing the contents doesn't work: pg_restore -l db.dump
pg_restore: [archiver] unexpected data offset flag 0
I couldn't find anything in the documentation about this error message. Is 
it possible that the dump is corrupted? What can I do to fix this?
 
Matthias



[GENERAL] Slow delete with with cascading foreign keys

2008-10-20 Thread Matthias Karlsson
Hi,

I have a rather complex set of relations, connected with cascading
foreign keys on delete. I'm experiencing very slow performance when
deleting *the* lead node, which everything eventually depends on. The
number of records ultimately to be deleted aren't that many (perhaps
2000-3000) but there are maybe 30 relations involved. I understand
that this is most likely due to missing indices, but I have been
unable to figure out how to get PostgreSQL to tell me where the
performance bottleneck is.

Say my simple query looks like "delete from foo where id = 1". A plain
"explain" on this, for example, won't yield any useful information.

Is there any way to get PostgreSQL to do an analysis of a delete
statement the way I need?

// Matthias



Re: [GENERAL] Slow delete with with cascading foreign keys

2008-10-20 Thread Matthias Karlsson

Tom Lane skrev:

"Matthias Karlsson" <[EMAIL PROTECTED]> writes:

I have a rather complex set of relations, connected with cascading
foreign keys on delete. I'm experiencing very slow performance when
deleting *the* lead node, which everything eventually depends on. The
number of records ultimately to be deleted aren't that many (perhaps
2000-3000) but there are maybe 30 relations involved. I understand
that this is most likely due to missing indices, but I have been
unable to figure out how to get PostgreSQL to tell me where the
performance bottleneck is.


If it's a reasonably modern PG version, EXPLAIN ANALYZE will break out
the time spent in each on-delete trigger, which should be enough to
answer the question.

regards, tom lane


Thanks, that gave me something to work with. I targeted the triggers that had
the most "uses", but it did not seem to help that much. I managed to reduce
execution time by maybe 10-15%, but I'll try to apply indices more aggressively
to see if it helps.
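For the archives, trigger times can be inspected without actually deleting anything, along these lines (table name taken from the original question; the trigger line shown is only an approximation of the real output):

```sql
BEGIN;
EXPLAIN ANALYZE DELETE FROM foo WHERE id = 1;
-- After the plan, EXPLAIN ANALYZE prints one line per FK trigger,
-- roughly:  Trigger for constraint some_fk: time=1234.5 calls=2000
ROLLBACK;  -- nothing is actually deleted
```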


// Matthias



Re: [GENERAL] How best to implement a multi-table constraint?

2008-10-21 Thread Matthias Karlsson
Why do you need to store the total area at all (property_area)? This
value can easily be calculated with a GROUP BY query.
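Using the sample tables from the quoted message, for example:

```sql
SELECT property_name, sum(landcover_area) AS property_area
FROM property_landcover
GROUP BY property_name;
```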

On Mon, Oct 20, 2008 at 10:56 PM, Karl Nack <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I'm a bit of a newb designing a database to hold landcover information for
> properties in a city. Here's some simple sample data:
>
> property:
> property_name*, property_area
> -
> sample house, 2500
>
>
> property_landcover:
> property_name*, landcover_name*, landcover_area
> ---
> sample house, building, 1000
> sample house, grass, 1000
> sample house, concrete, 500
>
>
> Now, I need to check that the sum of landcover_area for a property matches
> the property_area.
>
> It seems like I have three obvious options:
>
> 1. A constraint trigger that sums up landcover area and compares it to the
> property area.
>
> Downside: The trigger will run for every row that's updated in these two
> tables, although it only needs to run once for each property.
>
>
> 2. A statement-level trigger that does the same thing as #1.
>
> Downside: Since I don't have access to the updated rows, I'll have to
> check the entire property table against the entire property_landcover
> table. It seems like this could get expensive if either of these tables
> gets very large.
>
>
> 3. Use a 3rd table to hold the total landcover area for each property. Use
> row-level triggers to keep this 3rd table updated. Use a statement-level
> trigger (or table constraint) to ensure the total landcover area matches
> the property area.
>
> Downside: Although I avoid redundant checks, my understanding is that
> UPDATE is a fairly expensive operation, so it might not actually perform
> any better.
>
>
> Although my tables are small right now, they may potentially have to hold
> an entire city's worth of properties, so I'm interested in finding a
> solution that scales.
>
> Can anyone offer some feedback or suggestions on which of these options to
> use? Or perhaps even another solution that hasn't occurred to me?
>
> Thanks!
>
> -Karl
>



Re: [GENERAL] Visualizer

2008-11-05 Thread Matthias Karlsson
DbVisualizer is a tool I have used in the past for exactly this.

// Matthias

On Wed, Nov 5, 2008 at 1:18 PM, Mohammad Ali Darvish Darab
<[EMAIL PROTECTED]> wrote:
> Hello everybody,
>
> I have got an already existing Postgres DB which is pretty large
> (including more than 650 tables with a considerable number of constraints
> etc). And now I am supposed to understand generally how it is constructed
> and works. I thought it would be good to have a graphical
> visualization of the whole DB (with tables and relations etc). Does anyone
> have any idea about any tool that does this? I would also appreciate
> any comments or suggestions that might help me in this regard.
>
> Thank you in advance,
> Ali.
>



[GENERAL] Upgraded from Leopard to Snow Leopard - PostgreSQL won't start anymore

2010-01-04 Thread Matthias Dietrich
Hi,

some weeks or months ago I upgraded my Mac from Leopard to Snow Leopard.  
Because I needed to work remotely for a long time I didn't use my local dev 
system, and now I've found it broken.  First of all I got the following error 
message in my log:

 FATAL:  could not create shared memory segment: Invalid argument
 DETAIL:  Failed system call was shmget(key=5432001, size=4374528, 03600).
 HINT:  [...]

After setting the value of shared_buffers from 2400kB to something lower 
(e.g. 2000kB), the message goes away but the following appears:

 FATAL:  incorrect checksum in control file

I guess this is because I upgraded the OS without dumping the database 
beforehand.  And now I'm stuck.  I found many pages that suggest making a 
pg_dumpall with the old database and the new pg_dumpall version.  But I 
(really) don't have the old system anymore and I guess it's impossible for me 
to get it from somewhere.  What can I do?  How can I dump the data from the 
32-bit database?  Has someone built a postgres binary for this issue?

Thanks in advance,
 Matthias


-- 
rainboxx Software Engineering
Matthias Dietrich, Freelancer

rainboxx  |  Mobil: +49 (0) 151 / 50 60 78 64
Königsallee 43|  
71638 Ludwigsburg |  WWW  :  http://www.rainboxx.de

XING: https://www.xing.com/profile/Matthias_Dietrich18
GULP: http://www.gulp.de/profil/rainboxx.html








[GENERAL] "Compressed data is corrupt"

2012-03-01 Thread Matthias Leisi
I'm seeing behaviour in Postgres which I do not understand (and thus
cannot fix...). When inserting into a log-like table, I get the error
message "compressed data is corrupt" for certain (rare) combinations of
values. When I re-create the same table structure from scratch and
insert the same data into that table, the error does not appear.

Situation which triggers the error:

dnswl=# \d+ requesthistory
              Table "public.requesthistory"
     Column      |          Type          |     Modifiers      | Storage  | Description
-----------------+------------------------+--------------------+----------+-------------
 requestip       | inet                   | not null           | main     |
 requesthostname | character varying(255) | not null           | extended |
 requestdate     | date                   | not null           | plain    |
 requestcount    | integer                | not null default 0 | plain    |
Indexes:
    "requesthistory_pkey" PRIMARY KEY, btree (requestip, requestdate)
Has OIDs: no
dnswl=# insert into requesthistory values ('209.166.168.6',
'myhostname', '2012-02-29', 23);
ERROR:  compressed data is corrupt

Situation which does not lead to the error:

dnswl=# \d+ testip
                  Table "public.testip"
  Column  |          Type          |     Modifiers      | Storage  | Description
----------+------------------------+--------------------+----------+-------------
 ip       | inet                   | not null           | main     |
 hostname | character varying(255) | not null           | extended |
 mydate   | date                   | not null           | plain    |
 count    | integer                | not null default 0 | plain    |
Indexes:
    "testip_pkey" PRIMARY KEY, btree (ip, mydate)
Has OIDs: no
dnswl=# insert into testip values ('209.166.168.6', 'myhostname',
'2012-02-29', 23);
INSERT 0 1

Changing the hostname, date or count fields does not change the
situation. Changing the IP address slightly (e.g. from "..6" to "..5")
makes the error disappear.

Any clue what may be going on? Any more things I should try and test?

Running Postgresql 8.4.7 on an openSuSE machine (64bit).

-- Matthias



Re: [GENERAL] How to do an UPDATE for all the fields that do NOT break a constraint?

2009-01-26 Thread Matthias Karlsson
On Mon, Jan 26, 2009 at 2:09 PM, Phoenix Kiula  wrote:
> I wonder if this is an SQL limitation or something I'm missing in the
> PG manual, but I need to run an update on my database (to replace the
> value of a column to match a new design structure).
>
> Due to the new business logic, the  replaced value of a field may end
> up being already present in the database in another record. This leads
> to unique key violations when I run the update.
>
> My question: I don't mind if the update transaction skips the records
> where the key would be violated (this preservation is in fact what we
> want) but these are only about 2% of the overall updatable records.
>
> Is there any way to make the transaction go through with the remaining
> 98% of the update SQL which will in fact NOT violate the unique
> constraint?
>

You could always extend your update statement to include an additional
check to see if there are already rows present with the same value in
the field you are talking about.

// Matthias



Re: [GENERAL] How to do an UPDATE for all the fields that do NOT break a constraint?

2009-01-26 Thread Matthias Karlsson
On Mon, Jan 26, 2009 at 2:53 PM, Phoenix Kiula  wrote:
> On Mon, Jan 26, 2009 at 9:45 PM, Matthias Karlsson  wrote:
>> On Mon, Jan 26, 2009 at 2:09 PM, Phoenix Kiula  
>> wrote:
>>> I wonder if this is an SQL limitation or something I'm missing in the
>>> PG manual, but I need to run an update on my database (to replace the
>>> value of a column to match a new design structure).
>>>
>>> Due to the new business logic, the  replaced value of a field may end
>>> up being already present in the database in another record. This leads
>>> to unique key violations when I run the update.
>>>
>>> My question: I don't mind if the update transaction skips the records
>>> where the key would be violated (this preservation is in fact what we
>>> want) but these are only about 2% of the overall updatable records.
>>>
>>> Is there any way to make the transaction go through with the remaining
>>> 98% of the update SQL which will in fact NOT violate the unique
>>> constraint?
>>>
>>
>> You could always extend your update statement to include an additional
>> check to see if there are already rows present with the same value in
>> the field you are talking about.
>>
>> // Matthias
>>
>
>
>
> Thanks Matthias, but this seems a little recursive to me and I don't
> know how to do the SQL.
>
> Here is my SQL thus far. The table is "testimonials". Basically the
> column "user_alias" needs to be replaced to delete any mention of a
> user's "api_key". Both of these are fields in the same table, hence
> the replace logic below.
>
> As you will also see, based on our business logic, I have already kept
> all the related IDs in a separate small table called
> "testimonials_temp". This should speed up the process quite a bit
> because instead of going through 5 million IDs, we just loop through
> around 400,000.
>
>
>
> update testimonials
> set user_alias = replace(user_alias, '-'||api_key, '')
> where
>  id in (select id from testimonials_temp)
> ;
>
>
>
> The problem is that after being replaced like that the "user_alias"
> column has a problem, because some user_aliases already exist. How
> should I add a check condition recursively? I tried this:
>
>
> update testimonials
> set user_alias = replace(user_alias, '-'||api_key, '')
> where
>  id in (select id from testimonials_temp)
>  and replace(user_alias, '-'||api_key, '') not in (select user_alias
> from links where user_alias = ?))
> ;
>
>
>
> Hope I have explained this clearly. Would appreciate any ideas!
>

My idea was very similar to the SQL at the end of your post. Wouldn't
something like this work?

update testimonials u
set user_alias = replace(u.user_alias, '-'||u.api_key, '')
where
u.id in (select id from testimonials_temp) and
not exists (select 1 from testimonials t where t.user_alias =
replace(u.user_alias, '-'||u.api_key, ''));

Not sure if this exact SQL is correct, but in your NOT IN expression
you just need to make sure to refer to the user_alias of the current
row being updated.

// Matthias



Re: [GENERAL] Which SQL is the best for servers?

2009-02-19 Thread Matthias Hoys

>
> Should you choose an open-source database, make sure your code AND your DDL 
> use ANSI standards as much as possible, so that when you do need to move to 
> something else it won't be as painful (auto-incrementing columns vs. 
> sequences, etc.).

I really wouldn't go for database independence ... Choose an RDBMS and use 
all of its features! 





[GENERAL] Foreign Keys Question

2005-04-13 Thread Matthias Loitsch

First of all, hi. I'm new to this mailing list.


I searched this on the net, but I didn't get any usable answers...


So here's my problem:

I have 2 schemas. One is called SITE, one SITE_forum.

What I wanted is to separate the forum from the whole Site db, so I can
put them on different servers if I encounter performance problems or
anything.
So I started the SITE_forum schema, made all my tables, and realized
that I don't want to create the table users in the schema SITE_forum,
because I want to use the exact same users as in the schema SITE.

One possibility would be to create 2 identical tables in both schemas,
but that really is not what I'd like to do.

So I thought I could make a foreign key to a different schema (db) and
use the same table.

And well, that's where I started to search whether this is possible... and,
in fact, my main question is: is this a good idea?
I have no idea if this will be fast enough, or if I will have lots of
problems afterward.

Could anyone help me with this ?
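For the archives: a foreign key can reference a table in another schema of the same database, but not in another database or on another server, so the constraint would not survive moving the forum schema to its own server later. A sketch, with invented column names:

```sql
-- Works: both schemas live in the same database.
-- Adjust to the real primary key of the users table.
CREATE TABLE "SITE_forum".post (
    id       serial PRIMARY KEY,
    user_id  integer NOT NULL REFERENCES "SITE".users (id)
);
```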


Thanks in advance,
Matthias Loitsch



-- 
THEK
Matthias Loitsch
www.studiothek.com/





Re: [GENERAL] Access to databas from the Internet

2006-09-19 Thread Matthias . Pitzl
Hello Lukasz!

You need port forwarding on your router/firewall. You have to forward
incoming connections on port 5432 (or the port postmaster is listening on)
from IP xxx.xxx.xxx.xxx to IP yyy.yyy.yyy.yyy with the corresponding port. If
your router is a Linux machine, take a look at the iptables rules and also
the HowTos at http://www.netfilter.org/ on how to set up port forwarding. Be
aware that this will also enable attackers from the internet to use exploits
against your server! So make sure that this machine is properly secured.
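Besides the forwarding on the router, the server itself must accept non-local connections. A minimal sketch of the relevant settings (the CIDR range and auth method are only examples; restrict them to your real clients):

```
# postgresql.conf on yyy.yyy.yyy.yyy
listen_addresses = '*'        # listen on all interfaces, not just localhost
port = 5432

# pg_hba.conf -- allow remote clients with password authentication
host    all    all    0.0.0.0/0    md5
```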

Best regards,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Lukasz
> Sent: Tuesday, September 19, 2006 3:11 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Access to databas from the Internet
> 
> 
> Hello,
> 
> I would like to install a PostgreSQL. I know how to manage 
> the database
> itself, creae databases, user, groups and so on. But I don't know how
> to allow other users, who are outside LAN to connect to database
> through Internet.
> 
> For example, my external IP is xxx.xxx.xxx.xxx, and my IP in the local
> network is yyy.yyy.yyy.yyy. I want to install PostgreSQL on the
> computer with yyy.yyy... What and where I need to configure to have
> access to my database from Internet?
> 
> Regards,
> Lukasz
> 
> 

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Strange database corruption with PostgreSQL 7.4.x on Debian Sarge

2006-09-20 Thread Matthias . Pitzl
Hello!

We're running the latest release of PostgreSQL, 7.4.13, on a Debian Sarge
machine. Postgres has been compiled by ourselves.
We have a pretty big database running on this machine, about 6.4 GB.
One table contains about 55 million rows.
Into this table we insert about 50 rows each day. Our problem is that
without any obvious reason the database gets corrupted. The message we get
is:
invalid page header in block 437702 of relation ""
We have already tried out some other versions of 7.4. On another machine
running Debian Woody with PostgreSQL 7.4.10 we don't have any problems.
Kernels are 2.4.33 on the Sarge machine, 2.4.28 on the Woody machine. Both
are SMP kernels.

Best regards,
Matthias



Re: [GENERAL] Strange database corruption with PostgreSQL 7.4.x o

2006-09-20 Thread Matthias . Pitzl
Hello Scott!

Thank you for the quick answer. I'll try to check our hardware, which is a
Compaq DL380 G4 with a battery-buffered write cache on our RAID controller.
As the system is otherwise running stable, I think it's not the CPU or memory. 
At the moment I tend more towards a bad disk or SCSI controller, but even with
that I don't get any message in my logs...
Any ideas how I could check the hardware?

Best regards,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Scott Marlowe
> Sent: Wednesday, September 20, 2006 2:56 PM
> To: [EMAIL PROTECTED]
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Strange database corruption with 
> PostgreSQL 7.4.x on
> 
> 
> On Wed, 2006-09-20 at 14:34 +0200, [EMAIL PROTECTED] wrote:
> > Hello!
> > 
> > We're running the latest release of PostgreSQL 7.4.13 on a 
> Debian Sarge
> > machine. Postgres has been compiled by oureselves.
> > We have a pretty big database running on this machine, it 
> has about 6.4 GB
> > approximately. One table contains about 55 million rows.
> > Into this table we insert about 50 rows each day. Our 
> problem is that
> > without any obvious reason the database gets corrupt. The 
> messages we get
> > are:
> > invalid page header in block 437702 of relation ""
> > We already have tried out some other versions of 7.4. On 
> another machine
> > running Debian Woody with PotgreSQL 7.4.10 we don't have 
> any problems.
> > Kernels are 2.4.33 on the Sarge machine, 2.4.28 on the 
> Woody machine. Both
> > are SMP kernels.
> > Does anyone of you perhaps have some hints what's going wrong here?
> 
> Most likely causes in these cases tends to be, bad memory, bad hard
> drive, bad cpu, bad RAID / IDE / SCSI controller, loss of power when
> writing to IDE drives / RAID controllers with cache with no battery
> backup.
> 
> I.e. check your hardware.
> 



Re: [GENERAL] Strange database corruption with PostgreSQL 7.4.x o

2006-09-20 Thread Matthias . Pitzl
Hello Scott!

Thank you. I know Memtest86; I think we will use it for testing our
hardware too.
Meanwhile I got some other nice information from someone also running a DL380
server, which had a defective backplane causing similar issues.
He also gave me the hint that there's a test suite CD by Compaq to run some
hardware diagnostic checks on our machine. I will try this out as soon as
possible.
I will let you know when I know more :)

-- Matthias

> -Original Message-
> From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, September 20, 2006 4:12 PM
> To: [EMAIL PROTECTED]
> Cc: pgsql-general@postgresql.org
> Subject: RE: [GENERAL] Strange database corruption with 
> PostgreSQL 7.4.x o n
> 
> Keep in mind, a single bad memory location is all it takes to 
> cause data
> corruption, so it could well be memory.  CPU is less likely if the
> machine is otherwise running stable.
> 
> The standard tool on x86 hardware is memtest86 www.memtest86.com
> 
> So, you'd have to schedule a maintenance window to run the 
> test in since
> you have to basically down the machine and run just 
> memtest86.  I think
> a few live linux distros have it built in (FC has a memtest label in
> some versions I think)
> 
> My first suspicion is always memory.  We ordered a batch of 
> memory from
> a very off brand supplier, and over 75% tested bad.  And it took >24
> hours to find some of the bad memory.
> 
> good luck with your testing, let us know how it goes.
> 



Re: [GENERAL] Strange database corruption with PostgreSQL 7.4.x o

2006-09-20 Thread Matthias . Pitzl
Hello Tom!

Not yet, but I will try this one too. Anything special I should look for
when dumping out the bad pages?

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
> Sent: Wednesday, September 20, 2006 4:32 PM
> To: [EMAIL PROTECTED]
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Strange database corruption with 
> PostgreSQL 7.4.x on Debian Sarge
> 
> 
> [EMAIL PROTECTED] writes:
> > invalid page header in block 437702 of relation ""
> 
> I concur with Scott that this sounds suspiciously like a hardware
> problem ... but have you tried dumping out the bad pages with
> pg_filedump or even just od?  The pattern of damage would help to
> confirm or disprove the theory.
> 
> You can find pg_filedump source code at
> http://sources.redhat.com/rhdb/
> 
>   regards, tom lane
> 



Re: [GENERAL] Strange database corruption with PostgreSQL 7.4.x o

2006-09-20 Thread Matthias . Pitzl
Hello all!

OK, I found out some more information. According to
http://h2.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&;
taskId=110&prodSeriesId=397634&prodTypeId=15351&prodSeriesId=397634&objectID
=PSD_EX050119_CW01 one of our four disks in the server has a firmware issue.
The problem is incomplete writes to disk under high I/O load...
We will check this one first. If that doesn't help, we will try the hardware
diagnostics and some other tests...
Meanwhile, thank you all for your suggestions :)

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> [EMAIL PROTECTED]
> Sent: Wednesday, September 20, 2006 3:14 PM
> To: [EMAIL PROTECTED]
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Strange database corruption with 
> PostgreSQL 7.4.x o
> 
> 
> Hello Scott!
> 
> Thank you for the quick answer. I'll try to check our 
> hardware which is a
> Compaq DL380 G4 with a battery-buffered write cache on our 
> raid controller.
> As the system is running stable overall, I think it's not the 
> cpu or memory. 
> At the moment I tend more towards a bad disk or SCSI controller, but 
> even with that I
> don't get any message in my logs...
> Any ideas how i could check the hardware?
> 
> Best regards,
> Matthias
> 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Good books about PL/PGSQL programming?

2006-09-21 Thread Matthias . Pitzl
Hello there!

Are there any good, recommendable books about PL/pgSQL programming?
Covering the new features of PostgreSQL 8.x is not a must, but it would be
nice to have.

Best regards,
Matthias



Re: [GENERAL] Backup / export DB

2006-09-22 Thread Matthias . Pitzl
Simply dump the database with pg_dump, copy the dump to the other machine
and restore the dump there.
See the man pages of pg_dump for how to use the tool. If your database
contains foreign keys or similar, make sure to include OIDs in the dump.

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of Alain Roger
> Sent: Friday, September 22, 2006 7:59 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Backup / export DB
>
> Hi,
>
> I have a database (tables, stored procedures, accounts, ...) on one
> computer. For some reason I need to move this database to another
> computer, but not into the same folder name or onto the same HDD.
> How can I export (and afterwards import) all the relative structure and
> data? Or maybe a backup of the DB is enough?
>
> Thanks a lot,
> Alain


Re: [GENERAL] Backup roles / users

2006-09-22 Thread Matthias . Pitzl
Hi Alain!

To get all the users and so on to the other machine, you have to use the
pg_dumpall tool with the -g switch. This will dump out all the users, groups
and roles, I think.

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of Alain Roger
> Sent: Friday, September 22, 2006 9:20 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Backup roles / users
>
> Hi,
>
> I've checked pg_dump and I did not find anything regarding backing up
> roles. When I migrate my DB to another computer, should I recreate all
> roles manually?
>
> thx.
> Alain


Re: [GENERAL] Strange database corruption with PostgreSQL 7.4.x o

2006-09-22 Thread Matthias . Pitzl
Hello everyone!

Small update on this issue:
Our server has four 146GB disks as pairwise RAID 1, and one of them is
affected by the bug mentioned on the HP support page.
As a quick fix I moved our database to the other RAID device, which is built
of unaffected disks. So far I haven't got any new database corruption, so I
think the one disk with the firmware bug is the cause of our problems. Since
only the database does a lot of I/O on the disks, this will help us for
the next days till we can upgrade or replace the bugged disk.
Thank you all for your hints and suggestions!

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> [EMAIL PROTECTED]
> Sent: Wednesday, September 20, 2006 4:51 PM
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Strange database corruption with 
> PostgreSQL 7.4.x o
> 
> 
> Hello all!
> 
> Ok, i found out some more informations. According to
>
http://h2.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&;
taskId=110&prodSeriesId=397634&prodTypeId=15351&prodSeriesId=397634&objectID
=PSD_EX050119_CW01
> one of our four disks in the server has a firmware issue.
> The problem are incomplete writes onto disk while on high I/O load...
> We will check this one first. If it won't help, we will try the hardware
> diagnostics and some other tests...
> Meanwhile thank you all for your suggestions :)
> 
> -- Matthias




Re: [GENERAL] PG_DUMP without asking password

2006-09-22 Thread Matthias . Pitzl
Hm, that depends on how the security settings of the database are set. For
local users you could set ident as the authentication method in pg_hba.conf;
then they won't need a password anymore.
See the pg_hba.conf documentation for more information.
Another way could be to execute the dump script under a privileged user. On
my machines here, the user postgres can dump all databases without entering
a password.
If you back up your database via a cron job started by root, you can simply
do a "su postgres -c ..." to run the backup job under user postgres.

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of Alain Roger
> Sent: Friday, September 22, 2006 1:32 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] PG_DUMP without asking password
>
> Hi,
>
> Is there a way to back up the database via a command script, without
> PostgreSQL requesting the user password?
>
> Thanks a lot,
> Alain


Re: [GENERAL] copy db1 to db2

2006-09-25 Thread Matthias . Pitzl
What kind of errors? Just saying "I got some errors" isn't very helpful for
us when trying to help you properly ;)
So, what exact error messages do you get when trying to restore the database?

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> Bobby Gontarski
> Sent: Sunday, September 24, 2006 5:21 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] copy db1 to db2
> 
> 
> Basically I need to copy db1 to db2 which I create manually. 
> How do I do that, I tried pg_dump pg_restore but I get some 
> errors with foreign key restraint...
> 
> ---(end of 
> broadcast)---
> TIP 4: Have you searched our list archives?
> 
   http://archives.postgresql.org



[GENERAL] Definition of return types for own functions?

2006-09-28 Thread Matthias . Pitzl
Hello all!

Is it possible to define a complex return type, like a record, in a function
without having some table around with the needed structure of the return
values?
For example: if I want a function that returns a date and an integer, I
create the function:

CREATE FUNCTION bla(text) RETURNS SETOF table_name AS '...

And I need the table table_name with the structure:
CREATE TABLE table_name( datum DATE, zahl INTEGER);

Can I somehow define this structure inside the function declaration without
having some empty tables or views around?

Thank you all!

Best regards,
Matthias
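
As an aside, one way to avoid keeping an empty table or view around
(assuming PostgreSQL 8.0 or later) is to declare a composite type and return
a set of it. The type name, function name, and body below are only
illustrations:

```sql
-- A composite type takes the place of the dummy table.
CREATE TYPE datum_zahl AS (datum DATE, zahl INTEGER);

-- The function can then return a set of that type.
CREATE FUNCTION bla(text) RETURNS SETOF datum_zahl AS '
    SELECT CURRENT_DATE, 1;
' LANGUAGE sql;
```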



Re: [GENERAL] Fwd: Help with function

2006-10-04 Thread Matthias . Pitzl
Just curious, but since which version have these IN/OUT parameters been
supported?

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of A. Kretschmer
> Sent: Wednesday, October 04, 2006 4:01 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Fwd: Help with function
> 
> 
> am  Wed, dem 04.10.2006, um  9:31:28 -0400 mailte Chris 
> Hoover folgendes:
> > Here is my attempt at the function (note, this is a simple 
> example that could
> > obviously be done via a view, but I am trying to learn more 
> about writing
> > plpgsql functions):
> > 
> > create or replace function dba.active_transactions_by_db() 
> returns setof
> > integer pg_stat_activity.datname%TYPE as
> > $BODY$
> > declare
> > dbNamevarchar;
> > activeTransactionsinteger;
> > countRecrecord;
> > begin
> > for countRec in select count(1) as cnt, datname from 
> pg_stat_activity group
> > by datname loop
> > return next countRec;
> > end loop;
> >
> > return countRec;
> > end;
> > $BODY$
> > language plpgsql;
> 
> I wrote for you this:
> 
> create or replace function active_transactions_by_db(out _cnt 
> int, out _datname text) returns setof record as
> $BODY$
> declare
> dbNamevarchar;
> activeTransactionsinteger;
> countRecrecord;
> begin
> for countRec in select count(1) as cnt, datname from 
> pg_stat_activity group by datname loop
> _cnt := countRec.cnt;
> _datname := countRec.datname;
> return next;
> end loop;
> 
> return;
> end;
> $BODY$
> language plpgsql;
> 
> 
> 
> It works.
> If you want lern more about IN and OUT - Parameters, see:
> http://people.planetpostgresql.org/xzilla/index.php?/archives/
149-out-parameter-sql-plpgsql-examples.html#extended


HTH, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



Re: [GENERAL] Help required

2006-10-05 Thread Matthias . Pitzl
Hello Vijay!

Not 100% sure, but Ubuntu should have the database set up similarly to
Debian, on which it's based. This means you can start the database via:

/etc/init.d/postgresql start

This should use the distribution-specific startup scripts.

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of Ravindran Vijaykumar R-AVR025
> Sent: Tuesday, October 03, 2006 12:00 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Help required
>
> Hi Friends,
>
> I am using postgres 7.4.2 on Ubuntu Linux.
>
> When I run the psql -l command, it reports the following error:
>
> [EMAIL PROTECTED]:~$ psql -l
> psql: could not connect to server: Connection refused
>     Is the server running locally and accepting
>     connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
>
> I have the backup database in the home folder. Please advise me how I
> should bring the database up.
>
> rgds
> vijay

Re: [GENERAL] Intentionally produce Errors

2006-10-09 Thread Matthias . Pitzl
In PL/pgSQL you could use the RAISE command:
http://www.postgresql.org/docs/8.1/interactive/plpgsql-errors-and-messages.html
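
For example, a minimal sketch (the function name and message are made up for
illustration):

```sql
CREATE OR REPLACE FUNCTION fail_on_purpose() RETURNS void AS '
BEGIN
    -- RAISE EXCEPTION aborts the current statement with an ERROR
    RAISE EXCEPTION ''intentional test error'';
END;
' LANGUAGE plpgsql;

-- Calling it produces: ERROR: intentional test error
SELECT fail_on_purpose();
```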

Best regards,
Matthias

> -Original Message-
> 
> Hi,
> 
> this is sort of a silly question, but: what's the proper way to 
> intentionally generate an error? I'm writing tests for 
> pyPgSQL and want 
> to check it's error handling. Currently, I'm using:
> 
> SELECT "THIS PRODUCES AN SQL ERROR";
> 
> Is there any better way to generate errors? Probably even generating 
> specific errors given?
> 
> Regards
> 
> Markus
> 
> ---(end of 
> broadcast)---
> TIP 6: explain analyze is your friend
> 

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] adjusting primary key

2006-10-10 Thread Matthias . Pitzl
You have to create t2 as follows:
CREATE TABLE t2 (id int, grp int references t1(id) ON UPDATE CASCADE ON
DELETE CASCADE, info text);

Through the cascade options PostgreSQL will propagate your changes in t1 to
the rows in t2 that reference them, updating or deleting them accordingly.

For changing the existing table take a look at the ALTER TABLE commands.
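
For the existing table, the change could be sketched like this. Note that
the constraint name is an assumption (check the actual name with \d t2
first):

```sql
-- Replace the plain foreign key with a cascading one.
ALTER TABLE t2 DROP CONSTRAINT t2_grp_fkey;
ALTER TABLE t2 ADD CONSTRAINT t2_grp_fkey
    FOREIGN KEY (grp) REFERENCES t1(id)
    ON UPDATE CASCADE ON DELETE CASCADE;
```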

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Rafal Pietrak
> Sent: Tuesday, October 10, 2006 3:59 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] adjusting primary key
> 
> 
> Hi All,
> 
> I have two tables:
> CREATE TABLE t1 (id int not null unique, info text);
> CREATE TABLE t2 (id int, grp int references t1(id), info text);
> 
> Now, at certain point (both tables populated with tousends of records,
> and continuesly referenced by users), I need to adjust the value of an
> ID field of table T1. 
> 
> How can I do that? On the life system?
> 
> Obvious solution like:
>   UPDATE t1 SET id=239840 where id=9489;
> or in fact:
>   UPDATE t1 SET id=id+1 where id<1000;
> wouldn't work, regretably.
> 
> Naturally I need to have column t2(grp) adjusted accordingly 
> - within a
> single transaction.
> 
> Asking this, because currently I've learned, that I can adjust the
> structure of my database (add/remove columns at will, reneme those,
> etc.), but I'm really stuck with 'looking so simple' task.
> 
> Today I dump the database and perl-edit whatever's necesary 
> and restore
> the database. But that's not a solution for life system.
> 
> Is there a way to get this done? life/on-line?
> -- 
> -R
> 
> ---(end of 
> broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] 
> so that your
>message can get through to the mailing list cleanly
> 



Re: [GENERAL] Maximum size of database

2006-10-18 Thread Matthias . Pitzl
What disk subsystem do you have? Single disks? RAID? RAID with a
battery-buffered write cache?
The last one can improve your performance massively.

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of roopa perumalraja
> Sent: Wednesday, October 18, 2006 8:10 AM
> To: pgsql-general@postgresql.org
> Cc: Michael Fuhr
> Subject: Re: [GENERAL] Maximum size of database
>
> Thanks for your reply.
>
> I have answered your questions below.
>
> 1 & 2) System: Microsoft Windows XP Professional Version 2002
>        Computer: Intel Pentium CPU 3.40GHz, 960MB of RAM
>
> 3) shared_buffers = 2
>    autovacuum = on
>
> 4) Yes, I am vacuuming & analyzing the database once every day.
>
> 5) No concurrent activities, meaning I run one command at a time.
>
> 6) Nothing else running on the box other than Postgres.
>
> I hope these answers will help to solve my problem. Thanks again.
>
> Roopa


[GENERAL] Modifying SQL parser with extensions?

2006-10-29 Thread Matthias Lüdtke

Hi everyone,

I am searching for the easiest way to let PostgreSQL parse a special 
dialect of SQL. The dialect I want to parse augments common SQL in a way 
that enables expressions within a WHERE clause to be annotated and is 
thus not compatible with the standard SQL syntax anymore.


I am totally new to PostgreSQL extension development and skimming 
through the Server Programming chapter I didn't notice an obvious way to 
get down to the parser internals.


Is there any way of using existing extension techniques or do I have to 
compile my own special version of PostgreSQL in order to modify the parser?


Any suggestions are appreciated.

Regards,
Matthias



Re: [GENERAL] Modifying SQL parser with extensions?

2006-10-29 Thread Matthias Lüdtke

Alvaro Herrera wrote:
>> I am searching for the easiest way to let PostgreSQL parse a special
>> dialect of SQL. The dialect I want to parse augments common SQL in a way
>> that enables expressions within a WHERE clause to be annotated and is
>> thus not compatible with the standard SQL syntax anymore.
>
> No, there's no mechanism for that.  You'd have to preprocess the query
> before passing it to PostgreSQL.


Actually I didn't describe my whole story:

In fact, parsing this SQL dialect would just be the first step, as the 
annotations within the query induce an ordering of the result set.


So I need this extra information in the query to accomplish the 
subsequent task of sorting the result set in a certain way before the 
result is returned to the client. I'll have to use some hand crafted 
internal data structures to do this sorting.


It seems to me that at least part of the code that needs to be written 
might be implemented with the existing extension mechanism, though the 
parser does not belong to that category.


Regards,
Matthias



Re: [GENERAL] Modifying SQL parser with extensions?

2006-10-29 Thread Matthias Lüdtke


Richard Troy wrote:
> Our package lets you pass individual statements or entire files full
> of SQL with embedded comments as it's not that uncommon among SQL
> dialects - they're just stripped out before getting to the engine, as
> Alvaro suggested.


Unfortunately I need the information in the statement to sort the result 
set in a certain way later on. Sorry, I should have mentioned that from 
the beginning.


This whole thing I want to implement was already written as a proxy JDBC 
driver - from parser to result sorter - and works quite well. I am now 
investigating whether it's possible to implement it directly in an RDBMS, 
i.e. PostgreSQL.


Regards,
Matthias

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Modifying SQL parser with extensions?

2006-10-29 Thread Matthias Luedtke


Alvaro Herrera wrote:
>> In fact, parsing this SQL dialect would just be the first step, as the
>> annotations within the query induce an ordering of the result set.
>
> Huh, what is this supposed to be able to do that you can't do with the
> already existing ORDER BY clause?


Basically, conditional statements are annotated with integers that 
represent weights, like


(...)WHERE (foo = 'a')[42] OR (bar = 'b')[20]

In the result set those entries that fulfill both conditions yield score 
62, i.e. 42+20, and are ranked top, whereas entries that fulfill only 
one of the conditions yield scores 42 and 20 respectively and are 
therefore ranked lower.


Honestly, I'm only the poor student who has to implement what the 
smarter ones have thought out. ;)


And, yes, it is possible to accomplish the desired sorting with the 
ORDER BY clause, but as conditions become more complex, the home-grown 
dialect is considered more readable and intuitive for the end user.
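
For comparison, the same weighting expressed in plain SQL might look like
this (the table name t and the columns foo and bar are hypothetical, taken
from the annotated example above):

```sql
-- Each satisfied condition contributes its weight to the score;
-- rows matching both conditions score 62 and sort first.
SELECT *,
       CASE WHEN foo = 'a' THEN 42 ELSE 0 END
     + CASE WHEN bar = 'b' THEN 20 ELSE 0 END AS score
FROM t
WHERE foo = 'a' OR bar = 'b'
ORDER BY score DESC;
```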


Regards,
Matthias



Re: [GENERAL] Modifying SQL parser with extensions?

2006-10-29 Thread Matthias Luedtke

Tom,

Tom Lane wrote:
>> In fact, parsing this SQL dialect would just be the first step, as the
>> annotations within the query induce an ordering of the result set.
>> So I need this extra information in the query to accomplish the
>> subsequent task of sorting the result set in a certain way before the
>> result is returned to the client. I'll have to use some hand crafted
>> internal data structures to do this sorting.
>
> Seems like you could save a large amount of work if you can express what
> you want to do as ORDER BY a user-defined operator.

thanks for your reply.

I'll take a look at them. Unfortunately, I'm only the poor chap that has 
to implement a proof of concept for the solution that others worked out.



> If you insist on bolting it on as new SQL syntax, changing the parser
> will be only the tip of the iceberg --- you'll likely need planner and
> executor changes as well.  You could get a rough idea of what's involved
> in adding a new kind of query clause by looking at the last patch that
> did so:


So, part of the fun will be digging in PostgreSQL and searching for the 
easiest way to implement our solution.


> http://archives.postgresql.org/pgsql-committers/2006-08/msg00251.php

Seems like I should prepare for a long journey. So, while we're at it:

Where can I find the Definitive Design Guide with valuable in-depth 
information for pgsql hackers? Is there any information apart from the 
official manual


http://www.postgresql.org/docs/8.0/interactive/server-programming.html

and the source itself that you can recommend? I'm especially interested 
in how the typical implementation scheme for RDBMS found in textbooks 
maps to pgsql. You see, I'm looking for the K&R for pgsql.


Regards,
Matthias



Re: [GENERAL] [PL/pgSQL] How should I use FOUND special variable.

2006-11-09 Thread Matthias . Pitzl
Hi!

Just use this:

FETCH crs_cnt INTO row_cnt;
EXIT WHEN NOT FOUND;

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> On Behalf Of Jeremiasz Miedzinski
> Sent: Thursday, November 09, 2006 1:15 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] [PL/pgSQL] How should I use FOUND special variable.
> Documentation is little unclear for me
>
> Hello.
>
> I'm porting some procedures from PL/SQL and I encountered the following
> problem. In PL/SQL I'm using this statement related to a cursor:
>
> OPEN crs_cnt(start_millis, end_millis);
> LOOP
>     FETCH crs_cnt into row_cnt;
>     EXIT WHEN crs_cnt%NOTFOUND;
>     insert into spm_audit_stats values(SEQ_SPM_ID_AUTO_INC.nextval,
>         start_millis, base_stat_period, row_cnt.adt_count,
>         row_cnt.adt_avg, row_cnt.adt_max, row_cnt.adt_min,
>         row_cnt.adt_stdev, row_cnt.adt_service_name,
>         row_cnt.adt_root_user);
>     global_counter := global_counter + 1;
> END LOOP;
> CLOSE crs_cnt;
>
> Now, I need to do the same action in PL/pgSQL. It's rather simple, but I
> don't know how to use the FOUND variable described in the documentation:
>
> "FETCH retrieves the next row from the cursor into a target, which may be
> a row variable, a record variable, or a comma-separated list of simple
> variables, just like SELECT INTO. As with SELECT INTO, the special
> variable FOUND may be checked to see whether a row was obtained or not."
>
> When I try to use it the Oracle way, my DB reports an error. I also
> tried to use it like this:
>
> IF NOT crs_cnt%FOUND THEN ...
>
> But that also didn't work for me.
>
> Thanks for any help.
> Kind Regards.
> --
> -- audi vide sile --
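
Put together, the ported loop might look roughly like this in PL/pgSQL (a
sketch reusing the cursor, table, and variable names from the post above;
the sequence is read with nextval('...') instead of Oracle's .nextval
syntax):

```sql
OPEN crs_cnt(start_millis, end_millis);
LOOP
    FETCH crs_cnt INTO row_cnt;
    EXIT WHEN NOT FOUND;  -- PL/pgSQL equivalent of crs_cnt%NOTFOUND
    INSERT INTO spm_audit_stats
    VALUES (nextval('seq_spm_id_auto_inc'), start_millis, base_stat_period,
            row_cnt.adt_count, row_cnt.adt_avg, row_cnt.adt_max,
            row_cnt.adt_min, row_cnt.adt_stdev,
            row_cnt.adt_service_name, row_cnt.adt_root_user);
    global_counter := global_counter + 1;
END LOOP;
CLOSE crs_cnt;
```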



Re: [GENERAL] autovacuum blues

2006-11-09 Thread Matthias . Pitzl
Hi Anton!

I'm not sure how this is with 8.1 but on 7.4.14 we have to enable row level
statistics collection for autovacuum:
stats_row_level = true

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Anton Melser
> Sent: Thursday, November 09, 2006 1:12 PM
> To: Richard Huxton
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] autovacuum blues

> postgres  1300  0.0  1.1  20180  3048 ?S12:03   0:00
> /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
> postgres  1302  0.0  0.2   9968   544 ?S12:03   0:00
> postgres: logger process
> postgres  1304  0.0  0.4  20316  1188 ?S12:03   0:00
> postgres: writer process
> postgres  1305  0.0  0.6  10968  1544 ?S12:03   0:00
> postgres: stats buffer process
> postgres  1306  0.0  0.3  10200   796 ?S12:03   0:00
> postgres: stats collector process
> ...
> Any chance you could give me some pointers on activating logging? My
> thoughts were to log
> log_planner_stats = on
> log_min_messages = info
> 



Re: [GENERAL] Why overlaps is not working

2006-11-09 Thread Matthias . Pitzl
Hm, why not this one:

select ('2006-10-31'::date, '-12-31'::date) OVERLAPS
('2006-10-16'::DATE, '2006-10-31':: DATE);

 overlaps
--
 f
(1 row)
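
The reason is that OVERLAPS treats each interval as half-open (start
inclusive, end exclusive), so two intervals that merely share an endpoint do
not overlap. Shifting a boundary by one day makes this visible (the end
dates here are chosen just for illustration):

```sql
-- Shared endpoint only: no overlap.
SELECT ('2006-10-31'::date, '2006-12-31'::date) OVERLAPS
       ('2006-10-16'::date, '2006-10-31'::date);   -- f

-- Genuine one-day overlap.
SELECT ('2006-10-30'::date, '2006-12-31'::date) OVERLAPS
       ('2006-10-16'::date, '2006-10-31'::date);   -- t
```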

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Andrus
> Sent: Thursday, November 09, 2006 2:47 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Why overlaps is not working
> 
> 
> set datestyle to iso,iso;
> 
> select 1 where ('2006-10-31'::date, '-12-31'::date) OVERLAPS
>('2006-10-16'::DATE, '2006-10-31':: DATE)
> 
> 
> does not return any rows.
> 
> Why ?
> How to make overlaps to return correct result?
> 
> Andrus.
> 
> 
> 
> ---(end of 
> broadcast)---
> TIP 3: Have you checked our extensive FAQ?
> 
   http://www.postgresql.org/docs/faq



Re: [GENERAL] SQL - update table problem...

2006-11-13 Thread Matthias . Pitzl
Hi!

According to the synopsis of UPDATE, you can only update one table at a
time. Just use several UPDATE commands inside one transaction.
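
Using the tables from the question, the rewrite might look like this (a
sketch; the columns and join conditions are taken from the original
attempt):

```sql
BEGIN;
UPDATE table1 SET name = 'test' WHERE id = 6;
UPDATE table2 SET sum = table1.x + table2.y
    FROM table1
    WHERE table1.id = table2.t1 AND table1.id = 6;
UPDATE table3 SET cell = '123456789'
    FROM table2
    WHERE table2.id = table3.t2 AND table2.t1 = 6;
COMMIT;
```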

-- Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Marko Rihtar
> Sent: Monday, November 13, 2006 2:21 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] SQL - update table problem...
> 
> 
> Hi,
> 
> i'm trying to make update on multiple tables but don't know how.
> is something like this posible with postgresql?
> 
> update table1
> join table2 on (table1.id=table2.t1)
> join table3 on (table2.id=table3.t2)
> set table1.name='test', table2.sum=table1.x+table2.y, 
> table3.cell='123456789'
> where table1.id=6
> 
> i know that this syntax is not supported with postgres but i tried to 
> rewrite the code using this synopsis:
> 
> UPDATE [ ONLY ] table SET column = { expression | DEFAULT } [, ...]
>   [ FROM fromlist ]
>   [ WHERE condition ]
> 
> i failed again.
> updating multiple tables, can it be done?
> 
> thanks for help
> 
> _
> FREE pop-up blocking with the new MSN Toolbar - get it now! 
> http://toolbar.msn.click-url.com/go/onm00200415ave/direct/01/
> 
> 
> ---(end of 
> broadcast)---
> TIP 4: Have you searched our list archives?
> 
   http://archives.postgresql.org/



Re: [GENERAL] can't start postgresql

2006-11-14 Thread Matthias . Pitzl
He's just trying to do what you told him.
But the use of initdb won't correct the problem Igor has. There's something
wrong with his database, so he can't start the postmaster process:
> PANIC:  failed to re-find parent key in "23724"

Unfortunately I don't have any idea how to solve this issue :(

-- Matthias


> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Enrico
> Sent: Tuesday, November 14, 2006 4:22 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] can't start postgresql
> 
> 
> 
> > Thanks, but this is not working -
> > 
> > [EMAIL PROTECTED] ~]$ /usr/pgsql/bin/initdb -D /usr/pgsql/data
> > The files belonging to this database system will be owned 
> by user "postgres".
> > This user must also own the server process.
> > 
> > The database cluster will be initialized with locale C.
> > 
> > initdb: directory "/usr/pgsql/data" exists but is not empty
> > If you want to create a new database system, either remove or empty
> > the directory "/usr/pgsql/data" or run initdb
> > with an argument other than "/usr/pgsql/data".
> 
> of course if want to use /usr/pgsql/bin/initdb -D /usr/pgsql/data
> directory /usr/pgsql/data must be empty and user postgres 
> must be the owner, 
> but I don't understand, what is your problem?
> 
> Enrico
> 



[GENERAL] Question about query optimization

2006-11-15 Thread Matthias . Pitzl
Hello!

I have two tables: component, with unchanging component data, and a
component_history table containing the history of some other values that can
change over time.
The component_history table holds a foreign key to the component_id column
in the component table, and has a primary key over the columns component_id
and history_timestamp.

Now, we often need to get the situation at a given time out of these tables,
and at the moment we use the following query:


SELECT * FROM component JOIN component_history AS c_h USING(component_id) 
WHERE history_timestamp = (
SELECT history_timestamp FROM component_history 
WHERE c_h.component_id = component_history.component_id AND
history_timestamp <= '2006-10-01'
ORDER BY history_timestamp DESC LIMIT 1
)


The query gets executed like this:

 Hash Join  (cost=32540.55..32665.07 rows=32 width=78) (actual
time=118.958..136.416 rows=4160 loops=1)
   Hash Cond: ("outer".component_id = "inner".component_id)
   ->  Seq Scan on component  (cost=0.00..71.31 rows=4231 width=19) (actual
time=0.004..3.685 rows=4231 loops=1)
   ->  Hash  (cost=32540.47..32540.47 rows=32 width=63) (actual
time=118.165..118.165 rows=0 loops=1)
 ->  Seq Scan on component_history c_h  (cost=0.00..32540.47 rows=32
width=63) (actual time=0.092..111.985 rows=4160 loops=1)
   Filter: (history_timestamp = (subplan))
   SubPlan
 ->  Limit  (cost=6.27..6.28 rows=1 width=8) (actual
time=0.016..0.017 rows=1 loops=5165)
   ->  Sort  (cost=6.27..6.28 rows=2 width=8) (actual
time=0.014..0.014 rows=1 loops=5165)
 Sort Key: history_timestamp
 ->  Index Scan using component_history_pkey on
component_history  (cost=0.00..6.26 rows=2 width=8) (actual
time=0.007..0.009 rows=1 loops=5165)
   Index Cond: (($0 = component_id) AND
(history_timestamp <= '01.10.2006 00:00:00'::timestamp without time zone))
 Total runtime: 139.044 ms


Is there any other, more performant way to get the last history entry for a
given date than this query?
Queries of this kind are used often in our application, and a more
performant solution would speed things up a lot.

Thanks for your suggestions!

Greetings,
Matthias
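
As an aside, a common PostgreSQL idiom for this kind of "latest row per key"
lookup is DISTINCT ON; it may be worth benchmarking against the correlated
subquery (a sketch using the tables above):

```sql
-- Keep only the newest history row per component as of the given date.
SELECT DISTINCT ON (component_id) *
FROM component
JOIN component_history AS c_h USING (component_id)
WHERE history_timestamp <= '2006-10-01'
ORDER BY component_id, history_timestamp DESC;
```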



Re: [GENERAL] Question about query optimization

2006-11-15 Thread Matthias . Pitzl

Hello Gurjeet!

Tried your suggestion, but this is just a marginal improvement.
Our query needs 126 ms, your query 110 ms.

Greetings,
Matthias

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gurjeet Singh
> Sent: Wednesday, November 15, 2006 4:18 PM
> To: [EMAIL PROTECTED]
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Question about query optimization
>
> On 11/15/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > Is there any other, and more performant way, to get the last history
> > entry for a given date than this query?
>
> Create an (independent) index on the history_timestamp column and use a
> min/max in the subquery. More specifically, your query should look like this:
>
> SELECT *
> FROM   component
> JOIN   component_history AS c_h USING (component_id)
> WHERE  history_timestamp = (SELECT max(history_timestamp)
>                             FROM   component_history
>                             WHERE  c_h.component_id =
>                                    component_history.component_id)
>
> Here's a session snippet for an example of how drastically that can reduce
> the cost and the run-time:
>
> postgres=# drop table t;
> DROP TABLE
> postgres=# create table t ( a int, b int );
> CREATE TABLE
> postgres=# insert into t select s, 9-s from generate_series(0,9) as s;
> INSERT 0 10
> postgres=# analyze t;
> ANALYZE
> postgres=# explain select count(*) from t o where a = (select max(a) from t i where i.b = o.b );
>                                  QUERY PLAN
> ---------------------------------------------------------------------------
>  Aggregate  (cost=179103292.25..179103292.26 rows=1 width=0)
>    ->  Seq Scan on t o  (cost=0.00..179103291.00 rows=500 width=0)
>          Filter: (a = (subplan))
>          SubPlan
>            ->  Aggregate  (cost=1791.01..1791.02 rows=1 width=4)
>                  ->  Seq Scan on t i  (cost=0.00..1791.00 rows=1 width=4)
>                        Filter: (b = $0)
> (7 rows)
>
> Time: 0.000 ms
> postgres=# create index ind_t_a on t(a);
> CREATE INDEX
> Time: 719.000 ms
> postgres=# create index ind_t_b on t(b);
> CREATE INDEX
> Time: 750.000 ms
> postgres=# explain select count(*) from t o where a = (select max(a) from t i where i.b = o.b );
>                                  QUERY PLAN
> ---------------------------------------------------------------------------
>  Aggregate  (cost=806146.25..806146.26 rows=1 width=0)
>    ->  Seq Scan on t o  (cost=0.00..806145.00 rows=500 width=0)
>          Filter: (a = (subplan))
>          SubPlan
>            ->  Aggregate  (cost=8.03..8.04 rows=1 width=4)
>                  ->  Index Scan using ind_t_b on t i  (cost=0.00..8.03 rows=1 width=4)
>                        Index Cond: (b = $0)
> (7 rows)
>
> Time: 15.000 ms
>
> /* and now the execution times */
>
> postgres=# drop index ind_t_a, ind_t_b;
> DROP INDEX
> Time: 0.000 ms
> postgres=# select count(*) from t o where a = (select max(a) from t i where i.b = o.b );
> Cancel request sent
> (had to cancel after 1 minute)
> ERROR:  canceling statement due to user request
> postgres=# create index ind_t_a on t(a);
> CREATE INDEX
> Time: 687.000 ms
> postgres=# create index ind_t_b on t(b);
> CREATE INDEX
> Time: 765.000 ms
> postgres=# select count(*) from t o where a = (select max(a) from t i where i.b = o.b );
>  count
> -------
>     10
> (1 row)
>
> Time: 2704.000 ms
> postgres=#
>
> --
> [EMAIL PROTECTED]
> [EMAIL PROTECTED] { gmail | hotmail | yahoo }.com


Re: [GENERAL] build for none standard socket

2006-11-22 Thread Matthias . Pitzl
Hi!

You can at least configure the socket directory, the socket group and the
socket permissions in your postgresql.conf.
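
For reference, a sketch of the relevant postgresql.conf settings (parameter names
from the contemporary 8.x documentation; the values shown here are placeholders
to adjust for your system):

```ini
# Hypothetical values -- adjust for your installation:
unix_socket_directory = '/var/run/postgresql'   # where .s.PGSQL.5432 is created
unix_socket_group = ''                          # group owner of the socket ('' = server's default group)
unix_socket_permissions = 0777                  # access permissions of the socket file
```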

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> garry saddington
> Sent: Wednesday, November 22, 2006 8:58 AM
> To: Postgres General
> Subject: [GENERAL] build for none standard socket
> 
> 
> I have searched and Googled but can't find how to do a ./configure to
> use a different socket than /tmp/.s.PGSQL.5432. It says in the manual
> that it can be done but then does not say how.
> Anyone done this before?
> Regards
> Garry 
> 
> 
> ---(end of 
> broadcast)---
> TIP 6: explain analyze is your friend
> 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] PGSQL Newbie

2006-11-22 Thread Matthias . Pitzl
Hi Robert!

Sure, why not? Both databases run on different directories, ports, sockets
and so on.

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Wolfe, Robert
> Sent: Wednesday, November 22, 2006 4:53 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] PGSQL Newbie
> 
> 
> 
> Good morning all!  I'm a newbie to PGSQL here so thought I 
> would ask my
> first question since I joined this list this morning...
> 
> Is it possible to run Postgresql and MySQL together on the 
> same machine?
> 
> -- 
> Robert Wolfe, Linux and Network Admin
> net261.com | http://www.net261.com:85
> [EMAIL PROTECTED]
> 
> 
> ---(end of 
> broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
> 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] returning parameters from function

2006-12-12 Thread Matthias . Pitzl
You have to call the function in the following form:

SELECT * FROM get_test_data(1) AS (field1 type, field2 type, ...)

In other words, you have to tell the database how the data returned by the
function is to be interpreted.
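
A minimal, self-contained sketch of the pattern on a reasonably recent server
(8.0 or later for the dollar quoting); the function body here is hypothetical,
standing in for the poster's get_test_data:

```sql
-- Hypothetical function returning an anonymous record:
CREATE FUNCTION get_test_data(int) RETURNS record AS $$
    SELECT $1 AS id, 'example'::text AS name;
$$ LANGUAGE sql;

-- Fails: "a column definition list is required for functions returning record"
-- SELECT * FROM get_test_data(1);

-- Works: the AS clause tells the planner the shape of the record
SELECT * FROM get_test_data(1) AS t(id int, name text);
```

Declaring the function with OUT parameters or RETURNS TABLE (in later releases)
avoids having to repeat the column list at every call site.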

Greetings,
Matthias

> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of 
> Rikard Pavelic
> Sent: Tuesday, December 12, 2006 3:06 PM
> To: Shoaib Mir; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] returning parameters from function
> 
> This doesn't work. ;(
> I get ERROR: a column definition list is required for functions 
> returning "record"
> SQL state: 42601
> when running this function
> 
> Regards,
> Rikard
> 
> ---(end of 
> broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] 
> so that your
>message can get through to the mailing list cleanly
> 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Postgresql 8.1: plperl code works with LATIN1, fail

2007-01-29 Thread Matthias . Pitzl
> In an 8.1.6 UTF-8 database this example returns false; in 8.2.1 it
> returns true.  See the following commit message and the related bug
> report regarding PL/Perl and UTF-8:
> 
> http://archives.postgresql.org/pgsql-committers/2006-10/msg00277.php
> http://archives.postgresql.org/pgsql-bugs/2006-10/msg00077.php
> 
> If you can't upgrade to 8.2 then you might be able to work around
> the problem by creating the function as plperlu and adding 'use utf8;'.

> -- 
> Michael Fuhr

Hello Michael!

As far as I know, 'use utf8;' normally just tells Perl that the source code
is written in UTF-8 and nothing more.
For converting data from and to UTF-8, the Encode module is usually used.
Or is this different for plperlu?

Greetings,
Matthias

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Storing database in cluster (Memory)

2007-02-06 Thread Matthias . Pitzl
 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of roopa perumalraja
Sent: Tuesday, February 06, 2007 12:33 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Storing database in cluster (Memory)


Hi all,
 
As I am inserting 100 million rows daily into partitioned tables (partitioned
day-wise), it is getting slower. Even retrieval of data, a select statement on
those tables, takes about 30 minutes. I have tried increasing the parameters
in postgres.conf but that still doesn't help much, as the number of rows is
huge. Will the idea of storing the database in a cluster (memory) increase the
performance of insert/update/select on the table in a situation like this?
Thanks a lot in advance.
 
Regards
Roopa



 

Hello Roopa,
 
Are you doing any vacuum runs on these tables? Most of the time, degrading
performance on heavily updated tables is caused by not performing any vacuum
runs.
 
Greetings,
Matthias


Re: [GENERAL] Very strange error

2007-02-06 Thread Matthias . Pitzl

> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ümit Öztosun
> Sent: Tuesday, February 06, 2007 2:50 PM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Very strange error
> 
> 
> Hi,
> 
> Today suddenly our PostgreSQL 8.1 server started producing strange errors.
> The error occurs during simple updates:
>
> "Table has type character varying, but query expects character varying."
>
> We are still trying to figure out the problem. I've googled for this error
> but found nothing. Any insight?
>
> Platform: Ubuntu Dapper, running PostgreSQL 8.1 (vanilla packages from
> Ubuntu), UTF-8 and non-US locale.
>
> Regards,
> --
> Ümit Öztosun

 
Have you installed any updates for PostgreSQL? The latest security update
fixed something related to type checks.
I've seen the same error message on the BUGS mailing list, concerning a
broken CHECK constraint on a table row.
Perhaps this is the cause of the error messages.

Greetings,
Matthias 

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Very strange error

2007-02-06 Thread Matthias . Pitzl
 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ümit Öztosun
Sent: Tuesday, February 06, 2007 3:59 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Very strange error





Have you installed any updates for PostgreSQL? The latest security update 
fixed something with type checks or so. 
I've seen the same error message also on the BUGS mailing list concerning a
broken CHECK constraint on a table row.
Perhaps this is the cause of the error messages.


Well, I've just dumped the old data, installed v8.2.2 from sources, and restored
the data. Unfortunately the error remains the same and we have no ideas left.
The error is again:

"Table has type character varying, but query expects character varying."

The error is about a varchar column with no other special attributes. It
was working flawlessly for a long time.


Any help is appreciated.

Regards,
Ümit Öztosun
 

Hello there!
 
I suggest posting this on the BUGS mailing list. As said before, there has
been another mail with exactly the same error message, and in the latest
version something concerning data type checks was fixed.
 
Greetings,
Matthias


[GENERAL] Combining several rows

2004-05-24 Thread Matthias Nagl
Hello List!

I would like to combine the contents of several rows of a subquery. After
several hours of searching the documentation and the internet I didn't find a
solution and hope anyone can help. My problem:

Let's say I've got a table in the following form:

SELECT * FROM test;

id  |  name
---
 1  |  abc
 2  |  def
 3  |  ghi

For a table like this I am looking for a query that returns a result that 
looks this way:

name
-
abc, def, ghi

It should work for any number of rows. I would like to insert the returned
string (with a comma-separated list of all name fields in the test table) into
the main query (SELECT (whatever the solution is) AS name, other, fields FROM
tables...).
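
(An editor's sketch of how this can be written on later PostgreSQL releases
than the poster likely had in 2004; string_agg needs 8.4+, the array form 7.4+:)

```sql
-- PostgreSQL 8.4 and later: aggregate the column directly
SELECT string_agg(name, ', ') AS name FROM test;

-- Older releases: build an array from a subquery and join it
SELECT array_to_string(ARRAY(SELECT name FROM test ORDER BY id), ', ') AS name;
```

Either expression can be dropped into the main query's select list as the
poster describes.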

Thanks in advance

Matthias Nagl

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] postgres replication only some datas

2004-08-24 Thread Matthias Blohm
Hello,
a question about a tool, or an idea of how something like this could work.
The situation:
We have a database which is full of very sensitive information, and we need that db for our online website.
But now we are moving the website to a server outside our office and need to replicate only some of the data to the online db.
With the tool Slony I found out that some tables can be replicated, but some tables contain information which we do not
want to replicate.
So we need a tool or an idea of how to do that.
I thought about a dump and deleting the sensitive data, but the database is about half a gig and we need the changed entries
on the online db directly, within seconds.

Anybody who could help?
Greetings
Matthias
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] More Red Hat information

2001-06-26 Thread Matthias Urlichs

At 8:57 -0700 2001-06-25, David Wheeler wrote:
>On Mon, 25 Jun 2001, Bruce Momjian wrote:
>
>>  Here is a link with more information than the press release:
>>
>>  http://www.redhat.com/products/software/database/
>
>$2225  Are they *kidding*???

If they really deliver, i.e. you get reasonable phone+email support 
from people who can actually _do_something_ instead of giving you the 
runaround (phoned M$ tech support lately with a _real_ bug report? 
*), it's cheap as dirt as databases go.

*: Oops, sorry, of course M$ doesn't have bugs. They might have 
"issues". Or "features".
-- 
Matthias Urlichs

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] More Red Hat information

2001-06-26 Thread Matthias Urlichs

Hi,

David Wheeler:
> M$ is a good example, as their database is quite capable, and costs only a
> few hundred bucks (last time I looked). If RedHat DB is three times the
> price, IME many PHBs will go with SQL Server, instead, just because it's
> cheaper, and they know the Microsoft name (and FUD).
> 
RHDB has a price tag of zero if all you need is the database.

This is not what I was talking about, though.

-- 
Matthias Urlichs | noris network AG | http://smurf.noris.de/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[GENERAL] group by problem

2000-06-15 Thread Matthias Teege

Hello,

i have an SQL query which works perfectly under PostgreSQL
6.0, but under 6.5.1 it gives: ERROR: Illegal use of
aggregates or non-group column in target list.

select T1.id, T1.name, T1.zusatz, T1.kontakt, T1.strasse,
T1.land, T1.plz, T1.ort, T1.telefax 
from debitoren T1, auftrag T2
where T2.kunde=T1.id 
group by T1.id;

Where is the problem?
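
A likely fix (an editor's guess from the error text, untested against 6.5):
standard SQL requires every selected column to appear in GROUP BY or inside
an aggregate, so listing them all should make the query valid again:

```sql
SELECT T1.id, T1.name, T1.zusatz, T1.kontakt, T1.strasse,
       T1.land, T1.plz, T1.ort, T1.telefax
FROM debitoren T1, auftrag T2
WHERE T2.kunde = T1.id
GROUP BY T1.id, T1.name, T1.zusatz, T1.kontakt, T1.strasse,
         T1.land, T1.plz, T1.ort, T1.telefax;
```

Since no aggregate is computed here, SELECT DISTINCT over the same columns
would express the same intent.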

Thanks for any hints

See you,
Matthias




[GENERAL] Why isn't that null

2000-06-27 Thread Matthias Teege


Hello,

I have a small problem with ISNULL in a trigger
function. My definition looks like:

IF NEW.vkp ISNULL THEN
   select t1.las into i_kd 
   from auftrag t1, aufpos t2 where 
   t2.auftrag = t1.id;
   select get_vkp(i_kd, NEW.artikel) into f_vkp;
   NEW.vkp:=f_vkp;
END IF;

The following SQL query doesn't trigger this IF statement:

insert into aufpos(auftrag, artikel, best_menge, vkp,
lieferwoche, cuser, ctime, uuser, utime) values
('175','8501900','12','','3500','matthias',
now (), 'matthias',now ())

If I change the function statement as follows: 

IF NEW.vkp = 0 THEN
   ... 
END IF;

it works, but '0' is a legal value and '' (empty) is not. The
function should only be called if NEW.vkp is empty.

Many thanks
Matthias





Re: [GENERAL] Why isn't that null

2000-06-27 Thread Matthias Teege

Tom Lane <[EMAIL PROTECTED]> writes:

[...]

> > If I change the function statement as follows: 
> 
> > IF NEW.vkp = 0 THEN
> >... 
> > END IF;
> 
> > it works
> 
> Shouldn't that give a type error of some sort?  You didn't say
> what type vkp is, but if it's a string type then comparing it
> against a numeric shouldn't work IMHO.

No, it isn't. It is a float8 type. I changed my program so
that the query includes NULL instead of ''.

See you,
Matthias


-- 
Matthias Teege -- [EMAIL PROTECTED] -- http://emugs.de
make world not war
PGP key on request



[GENERAL] Was an insert successfully?

2000-07-22 Thread Matthias Teege


Hello,

how can I check whether an insert was successful in a
plpgsql function?

I use 6.5.1 under FreeBSD.
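
(An editor's sketch: in later releases than the poster's 6.5.1 -- PostgreSQL
7.2 and up -- plpgsql's GET DIAGNOSTICS reads the row count of the last
statement; the table and column names below are hypothetical:)

```sql
CREATE FUNCTION insert_checked() RETURNS boolean AS '
DECLARE
    n integer;
BEGIN
    INSERT INTO test (id, name) VALUES (4, ''jkl'');
    GET DIAGNOSTICS n = ROW_COUNT;
    RETURN n > 0;   -- true if the insert stored at least one row
END;
' LANGUAGE 'plpgsql';
```

A failed insert (e.g. a constraint violation) instead raises an error, which
can be caught by the caller or, in 8.0+, by a BEGIN ... EXCEPTION block.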

Many thanks
Matthias



[GENERAL] Get the OID after an insert

2000-07-24 Thread Matthias Teege

Hello,

Is there any possibility to get the OID returned by an
insert clause in a plpgsql function? Something like
"select into intAuftragID id from auftrag ..." but
for insert.
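
(An editor's sketch for PostgreSQL 7.2 and later: GET DIAGNOSTICS exposes
RESULT_OID, the OID of the last row inserted, which is only meaningful for
tables that have OIDs; the column value below is hypothetical:)

```sql
CREATE FUNCTION insert_auftrag() RETURNS oid AS '
DECLARE
    intAuftragID oid;
BEGIN
    INSERT INTO auftrag (kunde) VALUES (1);
    GET DIAGNOSTICS intAuftragID = RESULT_OID;  -- OID of the inserted row
    RETURN intAuftragID;
END;
' LANGUAGE 'plpgsql';
```

In much later releases (8.2+), INSERT ... RETURNING makes this unnecessary.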

See you,
Matthias



[GENERAL] Unions in views

2000-08-03 Thread Matthias Teege


Hello,

I use PostgreSQL 6.5.1 and PHP4 under FreeBSD and now I
want to use unions in views. But PostgreSQL says "ERROR:
Views on unions not implemented.".

Are unions in views implemented in the current version
(7.x)? Must I reinstall PHP if I update PostgreSQL?

Many thanks
Matthias




[GENERAL] Last x records

2001-02-27 Thread Matthias Teege


Hello,

is there any way to get the last x records of a query
result?

I can use "... order by x DESC LIMIT n", but I need the records in ASC
order, not DESC: the last records, in the correct order.
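
(One common approach, assuming a release that allows subselects in FROM,
i.e. 7.1+; table and column names are placeholders: take the last n rows
with DESC/LIMIT in a subquery, then re-sort that small result ascending:)

```sql
SELECT *
FROM (SELECT * FROM mytable ORDER BY x DESC LIMIT 10) AS last_rows
ORDER BY x ASC;
```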

Many thanks
Matthias

-- 
Matthias Teege -- [EMAIL PROTECTED] -- http://emugs.de
make world not war
PGP key on request




[GENERAL] Connect PostgreSQL 6.0 Server with php4b

1999-10-09 Thread Matthias Teege

Hello,

i have a PostgreSQL 6.0 server which I would like to query with php4b. I have
problems making the connection because php gives me the following
error message:

Warning: Unable to connect to PostgresSQL server: Failed to
authenticate client as Postgres user 'nobody' using authentication
scheme 131072. in /usr/local/share/apache/htdocs/matthias/hellodb.php
on line 2
An error occured. 

Where is the problem?

Many thanks
Matthias





Re: [GENERAL] Connect PostgreSQL 6.0 Server with php4b

1999-10-13 Thread Matthias Teege

Hello,

thank you for your reply, but the problem remains. The following line is
in my pg_hba.conf:

host    moon    192.168.153.0    255.255.255.0    trust

The whole network with the given number has access to the database moon.
The webserver has the IP 192.168.153.9, but no connection is
possible. I've tried an explicit line in pg_hba.conf for the webserver,
but without any success.

Any other hints? :-)

Thanks 
Matthias


On Sun, Oct 10, 1999 at 04:51:24PM -0300, Charles Tassell wrote:
> Your problem is probably in the /usr/local/pgsql/data/pg_hba.conf file.
> That file lists what machines are allowed to connect to your Postgres
> server and what sort of authentication they have to provide.  If the web
> server and the Postgres server are on the same machine, you should have
> these two lines  in that file:
> local    all                                     trust
> host     all    127.0.0.1    255.255.255.255    trust
> 
> If they are on seperate machines, you will want to set up something like:
> host all web.server.ip   255.255.255.255   crypt
> 
> and set up accounts/passwords for your PHP scripts, then use this sort of
> thing to connect to the DB:
> $dbCon = pg_PConnect("host=postgres.server.address user=username
> password=password dbname=database.to.connect.to");
> 
> 
> At 06:45 AM 10/9/99, Matthias Teege wrote:
> >Hello,
> >
> >i have an PostgreSQL 6.0 Server wich I would query with php4b. I have
> >problems to make the connection because off php gives me the following
> >error message:
> >
> >Warning: Unable to connect to PostgresSQL server: Failed to
> >authenticate client as Postgres user 'nobody' using authentication
> >scheme 131072. in /usr/local/share/apache/htdocs/matthias/hellodb.php
> >on line 2
> >An error occured. 
> >
> >Were is the Problem?
> >
> >Many thanks
> >Matthias
> >
> >
> >
> 
> 

-- 
Matthias Teege -- [EMAIL PROTECTED] -- http://emugs.de
make world not war
PGP key on request






[GENERAL] Connect from a new psql to an old 6.0 postmaster

1999-12-15 Thread Matthias Teege

Hello,

I have two database servers. One running postgresql 6.0
and the new one running 6.5.1.

If I run "psql -h oldone -d oldone -u" from the newer
server and input the username and password, psql says:

Connection to database 'oldone' failed.
Failed to authenticate client as Postgres user 'olduser'
using authentication scheme 131072.

What does this mean and how can I fix it?

Many thanks
Matthias






[GENERAL] upgrade postgreSQL

2000-01-26 Thread Matthias Zehnder

I got the file <<.patch>> and tried to upgrade postgres
under FreeBSD with the patch command, from version 6.5.2 to
version 6.5.3, but it doesn't work. Could somebody
explain to me how to upgrade postgres on my server?

Thanks for your help



__

Matthias Zehnder - Informatique
E-mail: [EMAIL PROTECTED]
__

M & C net
M&C Management & Communications SA
A VIA NET.WORKS COMPANY

Rue de Romont 35, CH-1700 Fribourg
Tél.: ++41 (0)26 347 20 40, fax: ++41 (0)26 347 20 49
E-mail: [EMAIL PROTECTED], http://www.mcnet.ch