[GENERAL] understanding explain data

2006-05-10 Thread Sim Zacks
I am looking at the explain data for my query and it is mostly 
understandable, thanks to an excellent article by Jim Nasby: 
http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10120


It is very time-consuming and confusing to walk through the explain 
output. Would it be possible (or does it even make sense) to feed the 
explain output into an application that tells you what you need to do to 
optimize the query?


Something such as: with this explain data, adding an index on table tbl 
column A would drastically improve the efficiency. Or at least an 
application that would say, the least efficient part of your query is on 
this part of the code so that you could more easily figure out what to 
do about it.
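
For example, here is the kind of before/after check I have in mind (just a 
sketch; the table tbl and column a are made up for illustration):

EXPLAIN ANALYZE SELECT * FROM tbl WHERE a = 42;   -- currently a Seq Scan on tbl

CREATE INDEX tbl_a_idx ON tbl (a);
ANALYZE tbl;

EXPLAIN ANALYZE SELECT * FROM tbl WHERE a = 42;   -- should now use tbl_a_idx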


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Exporting postgres query to CSV

2006-05-10 Thread Nis Jorgensen
[EMAIL PROTECTED] wrote:
> A word of advice: if there is any chance that a column (e.g. text) contains
> an embedded newline, you will be much better off outputting the data in
> simple xml, instead of CSV.  This works very well with Excel for import.  I
> just did a simple program for this recently.

If the csv generator correctly quotes values containing quotes, commas
and newlines, Excel is going to be happy with the csv file as well.
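
For what it's worth, PostgreSQL's COPY does that quoting itself in CSV mode
(available since 8.0). A minimal sketch, with a made-up table name and path;
psql's \copy is the client-side equivalent:

COPY mytable TO '/tmp/mytable.csv' WITH CSV HEADER;
-- embedded commas, double quotes and newlines are quoted/escaped automatically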

/Nis


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] default client_encoding with psql on windows

2006-05-10 Thread John DeSoi

[Don't forget to CC the list so everyone can see your reply]

The pgInstaller uses WiX, which is open source. You can download it 
from SourceForge.


John



On May 10, 2006, at 7:23 AM, Thomas Sondag wrote:


2006/5/10, John DeSoi <[EMAIL PROTECTED]>:


On May 8, 2006, at 12:14 PM, Thomas Sondag wrote:

> I tried various things with the --set option of psql without
> success, like --set client_encoding=win1252 or --set
> CLIENT_ENCODING=win1252 or --set encoding=win1252.


The variables are case sensitive, so it looks like you left out the
right one. The docs say the psql variable is ENCODING (all caps).


--set ENCODING=win1252 doesn't work


If that does not work maybe you could call \encoding in the psqlrc  
file?


\encoding win1252 in psqlrc.conf does the trick, thanks for the clue.
But it may have some side effects with psql.exe in a shell script (like
importing data in UTF8).
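
(For the record, \encoding and PGCLIENTENCODING both end up setting the
client_encoding parameter, so the same thing can also be done per session
at the SQL level -- a sketch:)

SET client_encoding TO 'WIN1252';
SHOW client_encoding;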

Today I found an environment variable in the pg_dump man page:
PGCLIENTENCODING

This shortcut solves most of my problems:
C:\WINDOWS\system32\cmd.exe /k chcp 1252 && set PGPORT=5432 && set
PGCLIENTENCODING=win1252 &&"C:\Program
Files\PostgreSQL\8.1\bin\psql.exe"  -h localhost -p 5432 postgres
"postgres"

I think this shortcut could solve a lot of issues with PostgreSQL on
Windows.


I would like to modify the PostgreSQL msi myself to replace the
default psql link with that one, but I don't have the tool to do it.

I will post a bug report for Pginstaller.

Thanks

  Thomas




John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Arguments Pro/Contra Software Raid

2006-05-10 Thread Douglas McNaught
Greg Stark <[EMAIL PROTECTED]> writes:

> Douglas McNaught <[EMAIL PROTECTED]> writes:

>> Correct me if I'm wrong, but I've never heard of a 15kRPM SATA drive.
>
> Well, dollar for dollar you would get the best performance from slower drives
> anyways since it would give you more spindles. 15kRPM drives are *expensive*.

Depends on your power, heat and rack space budget too...  If you need
max performance out of a given rack space (rather than max density),
SCSI is still the way to go.  I'll definitely agree that SATA is
becoming much more of a player in the server storage market, though.

-Doug

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] understanding explain data

2006-05-10 Thread Alban Hertroys

Sim Zacks wrote:
Something such as: with this explain data, adding an index on table tbl 
column A would drastically improve the efficiency. Or at least an 
application that would say, the least efficient part of your query is on 
this part of the code so that you could more easily figure out what to 
do about it.


The latter part is the most useful IMO, optimizing usually needs a 
(human) brain to put things into the right perspective. Adding an index 
can speed up your queries only so much, a more optimal data presentation 
 (like moving calculations to insert/update instead of select) can do a 
lot more sometimes.


It looks like something like that shouldn't be too hard to write... 
Maybe it even does exist already. Personally I'd prefer a command line 
tool ;)
It would help if you can pipe the output of explain analyze to an 
external tool from within psql.


Regards,
--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] [PERFORM] Arguments Pro/Contra Software Raid

2006-05-10 Thread Florian Weimer
* Hannes Dorbath:

> + Hardware Raids might be a bit easier to manage, if you never spend a
> few hours to learn Software Raid Tools.

I disagree.  RAID management is complicated, and once there is a disk
failure, all kinds of oddities can occur which can make it quite a
challenge to get back a non-degraded array.

With some RAID controllers, monitoring is difficult because they do not
use the system's logging mechanism for reporting.  In some cases, it
is not possible to monitor the health status of individual disks.

> + Using SATA drives is always a bit of risk, as some drives are lying
> about whether they are caching or not.

You can usually switch off caching.

> + Using hardware controllers, the array becomes locked to a particular
> vendor. You can't switch controller vendors as the array meta
> information is stored proprietary. In case the Raid is broken to a
> level the controller can't recover automatically this might complicate
> manual recovery by specialists.

It's even more difficult these days.  3ware controllers enable drive
passwords, so you can't access the drive from other controllers at all
(even if you could interpret the on-disk data).

> + Even battery backed controllers can't guarantee that data written to
> the drives is consistent after a power outage, neither that the drive
> does not corrupt something during the involuntary shutdown / power
> irregularities. (This is theoretical as any server will be UPS backed)

UPS failures are not unheard of. 8-/ Apart from that, you can address
a large class of shutdown failures if you replay a log stored in the
BBU on the next reboot (partial sector writes come to mind).

It is very difficult to check if the controller does this correctly,
though.

A few other things to note: You can't achieve significant port density
with non-RAID controllers, at least with SATA.  You need to buy a RAID
controller anyway.  You can't quite achieve what a BBU does (even if
you've got a small, fast persistent storage device) because there's
no host software support for such a configuration.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] understanding explain data

2006-05-10 Thread chris smith

On 5/10/06, Alban Hertroys <[EMAIL PROTECTED]> wrote:

Sim Zacks wrote:
> Something such as: with this explain data, adding an index on table tbl
> column A would drastically improve the efficiency. Or at least an
> application that would say, the least efficient part of your query is on
> this part of the code so that you could more easily figure out what to
> do about it.

The latter part is the most useful IMO, optimizing usually needs a
(human) brain to put things into the right perspective. Adding an index
can speed up your queries only so much, a more optimal data presentation
  (like moving calculations to insert/update instead of select) can do a
lot more sometimes.

It looks like something like that shouldn't be too hard to write...
Maybe it even does exist already. Personally I'd prefer a command line
tool ;)
It would help if you can pipe the output of explain analyze to an
external tool from within psql.


I've thought about writing a similar tool.. I'm about 30% of the way :)

It's written in python and can grab the queries out of the db logs..
but the harder part is working out the explain output.. also taking
into consideration that an index might be available but not the best
option for the query.

I guess the easiest way to check is to have the script turn seq scans
off when it runs explain and go from there.
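
(i.e. something along these lines, just a sketch:)

SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT ...;   -- does the planner pick an index now?
SET enable_seqscan = on;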

If anyone's interested in helping it go further contact me off list
(can put it on pgfoundry.org and go from there if need be).

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] Getting relation/attribute names from inside UDT input/output function

2006-05-10 Thread Humberto Luiz Razente
Hello,

I wrote the UDT input/output functions for my complex type
and everything is working well.
But now, in the input function, I need to get the relation/attribute
names where a new object is being inserted.

PG_FUNCTION_INFO_V1(complex_in);
Datum complex_in(PG_FUNCTION_ARGS) {
   char *arg1 = PG_GETARG_CSTRING(0);
   char *relname = ???
   char *attribname = ???
   ...
   PG_RETURN_POINTER(result);
}

Does anybody have an idea of how I get that?

Thanks in advance

Humberto Razente
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] understanding explain data

2006-05-10 Thread Sim Zacks
I agree with you that an index isn't always the answer; that was more of 
an example. I was thinking more along the lines of an intelligent part 
of the database that has access to the statistics and would be able to 
spit out recommendations for the query.


Such as: I type in a monster query and say optimize, and it would be able 
to spit out 4 smaller views that use each other and generate the same 
result in 10% of the time. Or to say this query is optimized, but you 
need an index on these columns.


I disagree with you that a human brain would be better than a machine 
for optimizing purposes. If the system is programmed to optimize 
correctly, then it will know when to stick data into a temp table and update 
columns instead of doing a select because x number of joins are too much 
for the select. Humans may not know the optimal number of joins before 
the query becomes inefficient.


Alban Hertroys wrote:

Sim Zacks wrote:
Something such as: with this explain data, adding an index on table 
tbl column A would drastically improve the efficiency. Or at least an 
application that would say, the least efficient part of your query is 
on this part of the code so that you could more easily figure out what 
to do about it.


The latter part is the most useful IMO, optimizing usually needs a 
(human) brain to put things into the right perspective. Adding an index 
can speed up your queries only so much, a more optimal data presentation 
 (like moving calculations to insert/update instead of select) can do a 
lot more sometimes.


It looks like something like that shouldn't be too hard to write... 
Maybe it even does exist already. Personally I'd prefer a command line 
tool ;)
It would help if you can pipe the output of explain analyze to an 
external tool from within psql.


Regards,


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid

2006-05-10 Thread Markus Schaber
Hi, Scott & all,

Scott Lamb wrote:

> I don't know the answer to this question, but have you seen this tool?
> 
> http://brad.livejournal.com/2116715.html

We had a simpler tool in-house, which wrote a file byte-for-byte and
called fsync() after every byte.

If the number of fsyncs/min is higher than the rotations-per-minute
value of your disks, they must be lying.

It does not find as many liars as the script above, but it is less
intrusive (it can be run on every low-I/O machine without crashing it), and
it found some liars in-house (some notebook disks, one external
USB/FireWire to IDE case, and an older Linux cryptoloop implementation,
IIRC).

If you're interested, I can dig for the C source...

HTH,
Markus




-- 
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid

2006-05-10 Thread Bruce Momjian
Markus Schaber wrote:
> Hi, Scott & all,
> 
> Scott Lamb wrote:
> 
> > I don't know the answer to this question, but have you seen this tool?
> > 
> > http://brad.livejournal.com/2116715.html
> 
> We had a simpler tool in-house, which wrote a file byte-for-byte and
> called fsync() after every byte.
> 
> If the number of fsyncs/min is higher than the rotations-per-minute
> value of your disks, they must be lying.
> 
> It does not find as many liars as the script above, but it is less

Why does it find fewer liars?

---

> intrusive (it can be run on every low-I/O machine without crashing it), and
> it found some liars in-house (some notebook disks, one external
> USB/FireWire to IDE case, and an older Linux cryptoloop implementation,
> IIRC).
> 
> If you're interested, I can dig for the C source...
> 
> HTH,
> Markus
> 
> 
> 
> 
> -- 
> Markus Schaber | Logical Tracking&Tracing International AG
> Dipl. Inf. | Software Development GIS
> 
> Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org
> 
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
> 

-- 
  Bruce Momjian   http://candle.pha.pa.us
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] understanding explain data

2006-05-10 Thread Alban Hertroys

Sim Zacks wrote:
I disagree with you that a human brain would be better than a machine 
for optimizing purposes. If the system is programmed to optimize 
correctly, then it will know when to stick data into a temp table and update 
columns instead of doing a select because x number of joins are too much 
for the select. Humans may not know the optimal number of joins before 
the query becomes inefficient.


You're thinking about single queries here ;) A piece of software can 
very well optimize a single query, but it will be limited to that.


I was referring to changing parts of your database so that the query can 
be written in a simpler, more optimal way.


As an example, say that you have a tree structure of factories, each 
with a multitude of departments, and you want to query the number of 
employees (sum of all departments) in a specific factory.


You could write a query to read in all those departments and then count 
all the employees in them.


You could also keep track of the employee-count for each department 
when you insert or delete employee records, and summarize these values 
for each factory record (using a trigger, most likely). If you do that, 
you only need to read the employee-count of the factory record.


Which do you think is faster? And how do you think a piece of software 
can get to this solution only from reading the explain output of your query?
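
To make the comparison concrete, here is a rough sketch of the trigger 
approach (all table, column and function names are made up for illustration; 
UPDATEs that move an employee between departments would need handling too):

CREATE OR REPLACE FUNCTION update_employee_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE factory SET employee_count = employee_count + 1
         WHERE id = (SELECT factory_id FROM department
                      WHERE id = NEW.department_id);
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE factory SET employee_count = employee_count - 1
         WHERE id = (SELECT factory_id FROM department
                      WHERE id = OLD.department_id);
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employee_count_trg
    AFTER INSERT OR DELETE ON employee
    FOR EACH ROW EXECUTE PROCEDURE update_employee_count();

-- The per-factory question then becomes a single-row read:
SELECT employee_count FROM factory WHERE id = 42;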


--
Alban Hertroys
[EMAIL PROTECTED]

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Getting relation/attribute names from inside UDT input/output function

2006-05-10 Thread Michael Fuhr
On Wed, May 10, 2006 at 09:58:04AM -0300, Humberto Luiz Razente wrote:
> I wrote the UDT input/output functions for my complex type
> and everything is working well.
> But now, in the input function, I need to get the relation/attribute
> names where a new object is being inserted.

Why would the type's input function need to know that?  What are
you trying to do?

-- 
Michael Fuhr

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid

2006-05-10 Thread Markus Schaber
Hi, Bruce,

Bruce Momjian wrote:


>>It does not find as many liars as the script above, but it is less
> 
> Why does it find fewer liars?

It won't find liars that have a small "lie queue": their internal
buffers fill up and they have to block, so after a small burst at the
start (which usually hides in other latencies) they don't deliver more
throughput than the spindle turns allow.

It won't find liars that first acknowledge to the host and then
immediately write the block before accepting further commands. This
improves latency (which is what some benchmarks measure), but not the
sync/write rate.

Both of them can be captured by the other script, but not by my tool.

HTH,
Markus


-- 
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid

2006-05-10 Thread Scott Marlowe
On Tue, 2006-05-09 at 20:02, Bruce Momjian wrote:
> Scott Marlowe wrote:
> > Actually, in the case of the Escalades at least, the answer is yes. 
> > Last year (maybe a bit more) someone was testing an IDE escalade
> > controller with drives that were known to lie, and it passed the power
> > plug pull test repeatedly.  Apparently, the escalades tell the drives to
> > turn off their cache.  While most all IDEs and a fair number of SATA
> > drives lie about cache fsyncing, they all seem to turn off the cache
> > when you ask.
> > 
> > And, since a hardware RAID controller with bbu cache has its own cache,
> > it's not like it really needs the one on the drives anyway.
> 
> You do if the controller thinks the data is already on the drives and
> removes it from its cache.

Bruce, re-read what I wrote.  The escalades tell the drives to TURN OFF
THEIR OWN CACHE.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid

2006-05-10 Thread Douglas McNaught
Scott Marlowe <[EMAIL PROTECTED]> writes:

> On Tue, 2006-05-09 at 20:02, Bruce Momjian wrote:

>> You do if the controller thinks the data is already on the drives and
>> removes it from its cache.
>
> Bruce, re-read what I wrote.  The escalades tell the drives to TURN OFF
> THEIR OWN CACHE.

Some ATA drives would lie about that too IIRC.  Hopefully they've
stopped doing it in the SATA era.

-Doug

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid

2006-05-10 Thread Markus Schaber
Hi, Bruce,

Markus Schaber wrote:

>>>It does not find as many liars as the script above, but it is less
>>Why does it find fewer liars?
> 
> It won't find liars that have a small "lie queue": their internal
> buffers fill up and they have to block, so after a small burst at the
> start (which usually hides in other latencies) they don't deliver more
> throughput than the spindle turns allow.

I just reread my mail, and must admit that I would not have understood what I
wrote above myself, so I'll explain a little more:

My test program writes byte-for-byte. Let's say our FS/OS has a 4k page
and block size; that means 4096 writes that all hit the same disk block.

Intelligent liars will see that the 2nd and all further writes
obsolete the former writes that still reside in the internal cache, and
drop those former writes from the cache, effectively going up to 4k
writes per spindle turn.

Dumb liars will keep the obsolete writes in the write cache/queue, and
so won't be caught by my program. (Note that I have no proof that such
disks actually exist, but I have enough experience with hardware that I
wouldn't be surprised.)


HTH,
Markus

-- 
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf. | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] [GENERAL] Arguments Pro/Contra Software Raid

2006-05-10 Thread Scott Marlowe
On Wed, 2006-05-10 at 09:51, Douglas McNaught wrote:
> Scott Marlowe <[EMAIL PROTECTED]> writes:
> 
> > On Tue, 2006-05-09 at 20:02, Bruce Momjian wrote:
> 
> >> You do if the controller thinks the data is already on the drives and
> >> removes it from its cache.
> >
> > Bruce, re-read what I wrote.  The escalades tell the drives to TURN OFF
> > THEIR OWN CACHE.
> 
> Some ATA drives would lie about that too IIRC.  Hopefully they've
> stopped doing it in the SATA era.

Ugh.  Now that would make for a particularly awful bit of firmware
implementation.  I'd think that if I found a SATA drive doing that I'd
be likely to strike the manufacturer off of the list for possible future
purchases...

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] how do i unsubscribe

2006-05-10 Thread pgaio
sorry for sending this message out to the list... but there is no explanation 
anywhere about how to unsubscribe from this list.

the link to subscribe/unsubscribe takes me to pgfoundry.

can anybody give me a hint on how to unsubscribe?
thanks

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Getting relation/attribute names from inside UDT input/output function

2006-05-10 Thread Tom Lane
"Humberto Luiz Razente" <[EMAIL PROTECTED]> writes:
> I wrote the UDT input/output functions for my complex type
> and everything is working well.
> But now, in the input function, I need to get the relation/attribute
> names where a new object is being inserted.

Since, in general, there *is* no such place where the value would be
inserted, this is impossible.  Consider

SELECT 'foo'::yourtype

Better rethink what you're trying to accomplish.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] how do i unsubscribe

2006-05-10 Thread Scott Marlowe
On Wed, 2006-05-10 at 10:12, [EMAIL PROTECTED] wrote:
> sorry for sending this message out in the list... but there is no explanation 
> anywhere about how to unsuscribe this list.
> 
> the link to subscribe/unsubscribe takes me to pgfoundry.
> 
> can anybody give me a hint on how to unsubscribe?
> thanks

Actually, it's in the headers of the email you receive from the list. 
View all headers, and you'll see one like:

List-Unsubscribe: 


and there ya go!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Interface

2006-05-10 Thread Bob Pawley



The folks on the Delphi forum haven't been responsive 
to this question. Perhaps the ones that visit the forum don't know the 
answer.
 
I have created a Delphi interface to a Postgresql 
table containing a serial column.
 
When I try to insert a new value through the interface I 
get an error message that the serial field requires a value.
 
When I remove the serial column the interface inserts a 
new value with no problem. When I insert directly via sql the serial column 
updates automatically.
 
What am I missing to make this work through 
Delphi?
 
Bob


Re: [GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!

2006-05-10 Thread Bruno Wolff III
On Tue, May 09, 2006 at 22:17:21 +0300,
  Joe Kramer <[EMAIL PROTECTED]> wrote:
> 
> Right on! SHA2 should fallback the same as AES!

Note that it's SHA256, not SHA2.

SHA-1 is really a fix of the original SHA (sometimes referred to as SHA-0).
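
For reference, the pgcrypto call under discussion looks like this (a sketch,
assuming the pgcrypto functions are installed; whether 'sha256' is accepted
depends on how pgcrypto was built, which is exactly the fallback issue above):

SELECT encode(digest('some data', 'sha256'), 'hex');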

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Interface

2006-05-10 Thread Joshua D. Drake

Bob Pawley wrote:
The folks on the Delphi forum haven't been responsive to this question. 
Perhaps the ones that visit the forum don't know the answer.
 
I have created a Delphi interface to a Postgresql table containing a 
serial column.
 
When I try to insert a new value through the interface I get an error 
message that the serial field requires a value.
 
When I remove the serial column the interface inserts a new value with 
no problem. When I insert directly via sql the serial column updates 
automatically.
 
What am I missing to make this work through Delphi?


Try using DEFAULT as the serial value.. e.g;

INSERT INTO foo VALUES (DEFAULT, 'Hello');

Where DEFAULT is the serial column.
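
Or simply leave the serial column out of the column list; if the client needs
the generated key afterwards, currval() can fetch it in the same session.
A sketch (the column name "data" and the sequence name assume the default
serial naming, so adjust to your schema):

INSERT INTO foo (data) VALUES ('Hello');   -- serial column filled from its sequence
SELECT currval('foo_id_seq');              -- the value that was just generated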

Sincerely,

Joshua D. Drake


 
Bob



--

   === The PostgreSQL Company: Command Prompt, Inc. ===
 Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
 Providing the most comprehensive  PostgreSQL solutions since 1997
http://www.commandprompt.com/



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Friendly catalog views

2006-05-10 Thread Bruno Almeida do Lago

Sometime ago I saw a project with the purpose of creating Oracle
views over the PostgreSQL catalog.

 

I can't remember/find it anymore. Do you know if it's
still available?

 

Is there any other project aiming to create friendly catalog
views?
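
As an illustration of the idea, a minimal sketch of an Oracle-style view built
on top of the system catalogs (the view and column names here are made up, not
taken from any existing project):

CREATE VIEW user_tables AS
SELECT n.nspname            AS schema_name,
       c.relname            AS table_name,
       c.reltuples::bigint  AS num_rows
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');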

Best Regards,

Bruno


Re: [GENERAL] Encoding Conversion

2006-05-10 Thread Rick Gigger

jef peeraer wrote:

beer schreef:

Hello All

So I have an old database that is SQL_ASCII encoded.  For a variety 
of reasons I need to convert the database to UNICODE.  I did some 
googling on this but have yet to find anything that looked like a 
viable option, so I thought I'd post to the group and see what sort 
of advice might arise. :)
Well, I recently struggled with the same problem. After a lot of trial 
and error and reading, it seems that an SQL_ASCII encoded database can't 
use its client encoding capabilities (set client_encoding to utf8).
I think the easiest solution is to do a dump, recreate the database 
with a proper encoding, and restore the dump.


jef peeraer


TIA

-b


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly






In my experience SQL_ASCII will let you put anything in there.  You need 
to figure out the actual encoding of the data.  Is it LATIN1?  Is it 
UTF-8?  UTF-16?  I found that my old SQL_ASCII dbs, before they were 
converted to unicode, contained 99.9% LATIN1 chars but also had a few 
random weird characters thrown in from people copying and pasting from 
Office.  For instance, MS Word uses non-ASCII-standard characters 
to implement its "smart quotes" or whatever they call it, where the 
quotes curl in towards each other.


I had to identify what the bad chars were.  I think that viewing the 
dump in vi showed me the hex codes for the non-ASCII chars.  Then I 
changed the encoding specified at the top of the dump to LATIN1.  Then I 
used sed to remove the bad chars as I piped the dump into a postgres unicode db.


Rick


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] PG_CONFIG MISSING

2006-05-10 Thread ftoliveira
Hi all,
I've just installed Fedora 5 with PostgreSQL 8.1.3 on my PC, and I'm trying to 
install PostGIS, but when I run ./configure it says that pg_config is missing.
Can someone help me?

PS: Sorry about the bad English.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] pgcrypto sha256/384/512 don't work on Redhat. Please help!

2006-05-10 Thread Marko Kreen

On 5/10/06, Bruno Wolff III <[EMAIL PROTECTED]> wrote:

On Tue, May 09, 2006 at 22:17:21 +0300,
  Joe Kramer <[EMAIL PROTECTED]> wrote:
>
> Right on! SHA2 should fallback the same as AES!

Note that it's SHA256, not SHA2.


It's SHA224/256/384/512, which together are more easily referred to as SHA2.


SHA-1 is really a fix of the original SHA (sometimes referred to as SHA-0).


--
marko

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Interface

2006-05-10 Thread Frank L. Parks

Bob,

Can you at least show what your insert statement looks like please?  
Also, are you using ODBC or VitaVoom's direct driver for DBExpress?


Frank

Bob Pawley wrote:

The folks on the Delphi forum haven't been responsive to this 
question. Perhaps the ones that visit the forum don't know the answer.
 
I have created a Delphi interface to a Postgresql table containing a 
serial column.
 
When I try to insert a new value through the interface I get an error 
message that the serial field requires a value.
 
When I remove the serial column the interface inserts a new value with 
no problem. When I insert directly via sql the serial column updates 
automatically.
 
What am I missing to make this work through Delphi?
 
Bob




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] understanding explain data

2006-05-10 Thread Richard Huxton

Alban Hertroys wrote:

Sim Zacks wrote:
Something such as: with this explain data, adding an index on table 
tbl column A would drastically improve the efficiency. Or at least an 
application that would say, the least efficient part of your query is 
on this part of the code so that you could more easily figure out what 
to do about it.


The latter part is the most useful IMO, optimizing usually needs a 
(human) brain to put things into the right perspective. Adding an index 
can speed up your queries only so much, a more optimal data presentation 
 (like moving calculations to insert/update instead of select) can do a 
lot more sometimes.


It looks like something like that shouldn't be too hard to write... 
Maybe it even does exist already. Personally I'd prefer a command line 
tool ;)
It would help if you can pipe the output of explain analyze to an 
external tool from within psql.


I've got a short perl script that I throw explain output into. It's not 
brilliant - can give false positives, but it will usually give you two 
or three lines to look at.


For those that are interested, I've attached it.

--
  Richard Huxton
  Archonet Ltd


parse_explain.pl
Description: Perl program

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Interface

2006-05-10 Thread Bob Pawley

Hi Frank

Here's the code entered as part of TUpdateSQL component.
insert into p_id.p_id
 (process_name)
values
 (:process_name)

I am connected via ODBC.

Thanks

Bob

- Original Message - 
From: "Frank L. Parks" <[EMAIL PROTECTED]>

To: "Postgresql" 
Sent: Wednesday, May 10, 2006 12:30 PM
Subject: Re: [GENERAL] Interface



Bob,

Can you at least show what your insert statement looks like please?  
Also, are you using ODBC or VitaVoom's direct driver for DBExpress?


Frank

Bob Pawley wrote:

The folks on the Delphi forum haven't been responsive to this 
question. Perhaps the ones that visit the forum don't know the answer.
 
I have created a Delphi interface to a Postgresql table containing a 
serial column.
 
When I try to insert a new value through the interface I get an error 
message that the serial field requires a value.
 
When I remove the serial column the interface inserts a new value with 
no problem. When I insert directly via sql the serial column updates 
automatically.
 
What am I missing to make this work through Delphi?
 
Bob




---(end of broadcast)---
TIP 6: explain analyze is your friend


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Interface

2006-05-10 Thread Joshua D. Drake

Bob Pawley wrote:

Hi Frank

Here's the code entered as part of TUpdateSQL component.
insert into p_id.p_id
 (process_name)
values
 (:process_name)



Did my DEFAULT not work?

Joshua D. Drake



I am connected via ODBC.

Thanks

Bob

- Original Message - From: "Frank L. Parks" 
<[EMAIL PROTECTED]>

To: "Postgresql" 
Sent: Wednesday, May 10, 2006 12:30 PM
Subject: Re: [GENERAL] Interface



Bob,

Can you at least show what your insert statement looks like please?  
Also, are you using ODBC or VitaVoom's direct driver for DBExpress?


Frank

Bob Pawley wrote:

The folks on the Delphi forum haven't been responsive to this 
question. Perhaps the ones that visit the forum don't know the answer.
 
I have created a Delphi interface to a Postgresql table containing a 
serial column.
 
When I try to insert a new value through the interface I get an error 
message that the serial field requires a value.
 
When I remove the serial column the interface inserts a new value 
with no problem. When I insert directly via sql the serial column 
updates automatically.
 
What am I missing to make this work through Delphi?
 
Bob




---(end of broadcast)---
TIP 6: explain analyze is your friend


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




--

   === The PostgreSQL Company: Command Prompt, Inc. ===
 Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
 Providing the most comprehensive  PostgreSQL solutions since 1997
http://www.commandprompt.com/



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Interface

2006-05-10 Thread Bob Pawley

Joshua

My first try was to include it with the SQL code on the Update component.
It hasn't worked yet. I want to try variations.

I am also trying a Query component which gave me the same error. I also have 
another problem with this component giving me a cursor handle error which I 
am attempting to solve. (Same SQL code)


Lots of challenges for a beginner to work through.

Bob
- Original Message - 
From: "Joshua D. Drake" <[EMAIL PROTECTED]>

To: "Bob Pawley" <[EMAIL PROTECTED]>
Cc: "Frank L. Parks" <[EMAIL PROTECTED]>; "Postgresql" 


Sent: Wednesday, May 10, 2006 3:41 PM
Subject: Re: [GENERAL] Interface



Bob Pawley wrote:

Hi Frank

Here's the code entered as part of TUpdateSQL component.
insert into p_id.p_id
 (process_name)
values
 (:process_name)



Did my DEFAULT not work?

Joshua D. Drake



I am connected via ODBC.

Thanks

Bob

- Original Message - From: "Frank L. Parks" 
<[EMAIL PROTECTED]>

To: "Postgresql" 
Sent: Wednesday, May 10, 2006 12:30 PM
Subject: Re: [GENERAL] Interface



Bob,

Can you at least show what your insert statement looks like please? 
Also, are you using ODBC or VitaVoom's direct driver for DBExpress?


Frank

Bob Pawley wrote:

The folks on the Delphi forum haven't been responsive to this question. 
Perhaps the ones that visit the forum don't know the answer.
 I have created a Delphi interface to a Postgresql table containing a 
serial column.
 When I try to insert a new value through the interface I get an error 
message that the serial field requires a value.
 When I remove the serial column the interface inserts a new value with 
no problem. When I insert directly via sql the serial column updates 
automatically.

 What am I missing to make this work through Delphi?
 Bob




---(end of broadcast)---
TIP 6: explain analyze is your friend


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




--

   === The PostgreSQL Company: Command Prompt, Inc. ===
 Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
 Providing the most comprehensive  PostgreSQL solutions since 1997
http://www.commandprompt.com/



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PG_CONFIG MISSING

2006-05-10 Thread Tom Lane
"ftoliveira" <[EMAIL PROTECTED]> writes:
> ive just instaled Fedora 5, with postgresql 8.1.3 on my PC, and im trying to 
> install POSTGIS, but when i do the
> ./config says that the package pg_config is missing.

Did you remember to install the postgresql-devel RPM?  You need that for
building any PG-dependent software.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] [CHALLENGE] return column by ordinal number

2006-05-10 Thread Agent M
I came across a guy who wanted to get rows from a table by specifying 
the table name and column ordinal number and nothing more. [Yes, this 
is useless and violates the relational model and SQL principles.]


My initial thoughts centered on using an array to snag each row and 
pull out the column number I want, but I couldn't figure out how to 
concatenate all the columns together without specifying them 
individually.


Then, I whipped up some plpgsql:

CREATE OR REPLACE FUNCTION columnx(tablename text,columnindex integer)
RETURNS SETOF RECORD
AS $$
DECLARE
r RECORD;
colname TEXT;
BEGIN
SELECT INTO colname isc.column_name FROM information_schema.columns AS 
isc WHERE tablename LIKE table_schema || '.' || table_name AND 
columnindex=isc.ordinal_position;

RAISE NOTICE '%',colname;
FOR r IN EXECUTE 'SELECT ' || colname || ' FROM ' || tablename || ';' 
LOOP

RETURN NEXT r;
END LOOP;
END;
$$ LANGUAGE 'plpgsql';

But running this gets me:
agentm=# select * from columnx('public.test',2);
ERROR:  a column definition list is required for functions returning 
"record"

agentm=# select * from columnx('public.test',2) as ret(a anyelement);
ERROR:  column "a" has pseudo-type anyelement
agentm=# select * from columnx('public.test',2) as ret(a text);
NOTICE:  b
ERROR:  wrong record type supplied in RETURN NEXT
CONTEXT:  PL/pgSQL function "columnx" line 8 at return next
agentm=# select * from columnx('public.test',2) as ret(a integer);
NOTICE:  b
 a
---
 2
(1 row)

In the function, I don't know until I get to the information schema 
what types I will be returning and I can't declare a variable then. 
Making it explicit (as I do in the last command) is cheating because I 
would want it to return whatever type that column is without manually 
figuring that out.


Can this be done without resorting to an external SQL generation 
program? Does anyone have a good hack to share?
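
One hack that at least sidesteps the type problem is to give up on returning 
the native type and cast everything to text (a sketch):

CREATE OR REPLACE FUNCTION columnx_text(tablename text, columnindex integer)
RETURNS SETOF text
AS $$
DECLARE
colname TEXT;
colval TEXT;
BEGIN
SELECT INTO colname isc.column_name FROM information_schema.columns AS 
isc WHERE tablename LIKE table_schema || '.' || table_name AND 
columnindex=isc.ordinal_position;

-- cast to text so the return type can be declared up front
FOR colval IN EXECUTE 'SELECT (' || colname || ')::text FROM ' || tablename 
LOOP
RETURN NEXT colval;
END LOOP;
RETURN;
END;
$$ LANGUAGE 'plpgsql';

-- select * from columnx_text('public.test',2) then returns the values as text.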


-M

¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬
AgentM
[EMAIL PROTECTED]
¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬ ¬

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Getting relation/attribute names from inside UDT input/output function

2006-05-10 Thread Michael Fuhr
[Please copy the mailing list on replies so others can contribute
to and learn from the discussion.]

On Wed, May 10, 2006 at 02:50:26PM -0300, Humberto Luiz Razente wrote:
> I'm trying to index data characteristics extracted from multimedia
> files (like a color distribution histogram from an image), in
> order to make k-nearest neighbor queries (similarity-based queries).
> The problem is that I can have a variable number of characteristics (its
> called adimensional features), so I cannot index with R-tree
> like GIST. I already have the tree implemented in C, so I would
> like to build a tree for each complex type in each
> attribute/relation. Am I going in the wrong direction?

I'm not familiar with the problem domain but maybe somebody else
on the mailing list is.  Could you explain a bit more, perhaps with
an example showing how you're currently using the type and what
you'd like to be able to do with it (e.g., a hypothetical table,
data set, query, and query result)?

-- 
Michael Fuhr

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Recovery problem with PostgreSQL

2006-05-10 Thread Shoaib Mir
While running the recovery process I am getting the following "cannot stat" errors:
cp: cannot stat `/data1_node5/arch/0001.history': No such file or
directory
cp: cannot stat
`/data1_node5/arch/00010007.02C4C530.backup': No such
file or directory
cp: cannot stat `/data1_node5/arch/00010007': No such
file or directory
2006-05-10 17:07:35 PDT LOG:  checkpoint record is at 0/1EC4C558
2006-05-10 17:07:35 PDT LOG:  redo record is at 0/1EC4C530; undo record
is at 0/0; shutdown FALSE
2006-05-10 17:07:35 PDT LOG:  next transaction ID: 1227111; next OID:
35682
2006-05-10 17:07:35 PDT LOG:  next MultiXactId: 168; next
MultiXactOffset: 338
2006-05-10 17:07:35 PDT LOG:  automatic recovery in progress
2006-05-10 17:07:35 PDT LOG:  redo starts at 0/1EC4C530
2006-05-10 17:07:35 PDT LOG:  record with zero length at 0/1EF5C91C
2006-05-10 17:07:35 PDT LOG:  redo done at 0/1EF5C8F4
cp: cannot stat `/data1_node5/arch/00010007': No such
file or directory
2006-05-10 17:07:35 PDT LOG:  archive recovery complete
2006-05-10 17:07:35 PDT LOG:  database system is ready
2006-05-10 17:07:35 PDT LOG:  transaction ID wrap limit is 2147484188

Any idea what can be causing these?

Regards,
Shoaib


[GENERAL] compiling postgres on solaris and DBD::Pg

2006-05-10 Thread Luke Vanderfluit

Hi.

I've been able to compile PostgreSQL on Solaris 10.
Now I want to install the Perl DBD::Pg module, but I get a complaint, namely:

/~~~
[EMAIL PROTECTED] # make
rm -f blib/arch/auto/DBD/Pg/Pg.so
LD_RUN_PATH="/usr/local/pgsql/lib" /opt/SUNWspro/bin/cc  -G 
-xarch=generic64 -L/opt/SUNWspro/prod/lib/amd64 -L/lib/amd64 
-L/usr/local/lib Pg.o dbdimp.o quote.o types.o  -o 
blib/arch/auto/DBD/Pg/Pg.so \

  -L/usr/local/pgsql/lib -lpq  \

ld: fatal: file /usr/local/pgsql/lib/libpq.so: wrong ELF class: ELFCLASS32
ld: fatal: File processing errors. No output written to 
blib/arch/auto/DBD/Pg/Pg.so

*** Error code 1
make: Fatal error: Command failed for target `blib/arch/auto/DBD/Pg/Pg.so'
\___

Any help understanding this is appreciated.
Kind regards.

--
Luke Vanderfluit.
Analyst/Programmer.
Internode Systems Pty. Ltd.


---(end of broadcast)---
TIP 6: explain analyze is your friend