Re: [GENERAL] Anyone testing changes to libpq/bcc32.mak?

2009-04-07 Thread Magnus Hagander
I don't know that anybody does. We usually get a report a couple of
minor versions in and fix it then, which supports that guess. It's also
not tested by the buildfarm. So I think you can call it semi-maintained
at best.


So if you want to become the maintainer and test/send patches at an  
earlier stage, please go right ahead!


/Magnus


On 7 apr 2009, at 03.52, ljb  wrote:

There were some recent changes to libpq/bcc32.mak that broke my  
attempt to
build libpq with the Borland compiler. (OK, not so recent. I've been  
busy.)
Five new modules were added from ports/: dirent dirmod pgsleep open  
win32error
I had to remove two of them, dirent.c and open.c, to get the compilation
to work.


CVS comment on the changes (Rev 1.30, 1.28.2.1) is:
 "Synchronize Borland libpq makefile to match MSVC.  Backpatch to  
8.3.X."


So: Are changes to libpq/bcc32.mak being tested, or just applied  
assuming they
are correct? Because if they are tested, I'm getting different  
results and we
need to compare notes. If the changes are not tested, I will submit  
a patch

that works for me.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Querying Large Objects

2009-04-07 Thread Albe Laurenz
David Kerr wrote:
> I'm having a heck of a time trying to track this down.
> Is it possible to retrieve a large object from psql/pgbench?
> 
> I don't want just the OID, i want the actual streamed data.
> 
> I'm doing a timing comparison between bytea and lo's. So it'd
> be ideal if I can pull it with pgbench.

You can do it with SQL:

SELECT loread(lo_open($1, 262144), 1073741819)

Where $1 is to be replaced with the OID of the large object.
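For example, against a hypothetical table images(id int, data oid) where
data holds the large object's OID, a psql or pgbench custom script could
time something like this (262144 is the INV_READ open mode, and the large
second argument just means "read up to ~1 GB"):

-- read one large object in a single call
SELECT loread(lo_open(data, 262144), 1073741819)
FROM images
WHERE id = 42;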

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] writing c functions for postgres

2009-04-07 Thread Albe Laurenz
eehab hamzeh wrote:
> I am trying to build some functions using the C language. These functions
> are mentioned in the PostgreSQL documentation.
> 
> The only functions that work are the ones with int32 variables.
> The other functions produce errors and do not work.
> Can anybody give me directions?
> 
> #include "postgres.h" 
> #include  
> #include "fmgr.h" 
> 
> #ifdef PG_MODULE_MAGIC 
> PG_MODULE_MAGIC; 
> #endif 
> 
[...]
> 
> PG_FUNCTION_INFO_V1(makepoint); 
> 
> Datum 
> makepoint(PG_FUNCTION_ARGS) 
> { 
> /* Here, the pass-by-reference nature of Point is not hidden. */ 
> Point *pointx = PG_GETARG_POINT_P(0); 
> Point *pointy = PG_GETARG_POINT_P(1); 
> Point *new_point = (Point *) palloc(sizeof(Point)); 
> 
[...]
> 
> PG_FUNCTION_INFO_V1(copytext); 
> 
> Datum 
> copytext(PG_FUNCTION_ARGS) 
> { 
[...]
> VARATT_SIZEP(new_t) = VARSIZE(t); 
[...]
> } 
> 
> PG_FUNCTION_INFO_V1(concat_text); 
> 
> Datum 
> concat_text(PG_FUNCTION_ARGS) 
> { 
[...]
> VARATT_SIZEP(new_text) = new_text_size; 
[...]
> } 
> 
> 
> the error 
> 
> in function 'makepoint': 
> error: 'Point' undeclared (first use in this function) 
> error: (each undeclared identifier is reported only one 
> error: for each function it appears in.) 
> error: 'pointx' undeclared (first use in ´this function) 
> error: 'pointy' undeclared (first use in his fnction) 
> error 'new_point' undeclared (first use in his function) 
> error: syntax error befre ')' oken 
> in function 'copy text': 
> error: 'invalid lvalue in assinment 
> In function 'concat_text' 
> error: invalid lvalue in assignement 
> warning no new line at end of file 

These error messages are certainly not what you saw on the screen.
It is best to copy and paste error messages!

If you want to have Point, you'll have to
#include "utils/geo_decls.h"

The "invalid lvalue" errors are generated because gcc does not
know VARATT_SIZEP and assumes it is a function returning int.
I recommend that you always use gcc with -Wall to see warnings!

I can find no VARATT_SIZEP in the PostgreSQL 8.3 headers.
Where did you get that from?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Convert Oracle function to PostgreSQL

2009-04-07 Thread Albe Laurenz
SHARMILA JOTHIRAJAH wrote:
> I use this Oracle function(from  AskTom  - 
> http://asktom.oracle.com/pls/asktom/f?p=100:11:0P11_QUESTION_ID:210612357425)
>  
> 
> 
> 
> SQL> create or replace type myTableType as table 
>  of varchar2 (255);
>   2  /
>  
> Type created.
>  
> ops$tk...@dev8i> create or replace 
>  function in_list( p_string in varchar2 ) return myTableType
>   2  as
>   3  l_stringlong default p_string || ',';
>   4  l_data  myTableType := myTableType();
>   5  n   number;
>   6  begin
>   7loop
>   8exit when l_string is null;
>   9n := instr( l_string, ',' );
> 10 l_data.extend;
> 11 l_data(l_data.count) := 
>  ltrim( rtrim( substr( l_string, 1, n-1 ) ) );
> 12 l_string := substr( l_string, n+1 );
> 13end loop;
> 14
> 15return l_data;
> 16  end;
> 17  /
>  
> Function created.
>  
> ops$tk...@dev8i> select *
>   2from THE 
> ( select cast( in_list('abc, xyz, 012') as
>   mytableType ) from dual ) a
>   3  /
>  
> COLUMN_VALUE
> 
> abc
> xyz
> 012
>
> How can I convert this function into PostgreSQL ? Any thoughts?

Sorry, but we can't do it quite as complicated as that in PostgreSQL.

You'll have to live with something like

SELECT * FROM regexp_split_to_table('abc, xyz, 012', ', ?');
 regexp_split_to_table 
------------------------
 abc
 xyz
 012
(3 rows)
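If you want something closer to the Oracle call signature (a set-returning
function usable in FROM), a minimal sketch is just a wrapper around the same
idea; the function name mirrors the Oracle one and is otherwise made up:

CREATE OR REPLACE FUNCTION in_list(p_string text) RETURNS SETOF text AS $$
    -- split on commas and trim surrounding whitespace from each element
    SELECT trim(x) FROM regexp_split_to_table($1, ',') AS x;
$$ LANGUAGE sql IMMUTABLE;

SELECT * FROM in_list('abc, xyz, 012');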

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] exit code 128

2009-04-07 Thread Niederland
Today postgres on one of our servers shut down with exit code 128.
PostgreSQL 8.3.7 on Windows 2003 server.

It started up normally, but this is the second occurrence this week.
Below is a section of the log.  A restart of the service returned
postgres to normal.  Suggestions?

Thanks,
Roger

2009-04-06 10:47:15 HST LOG:  automatic vacuum of table
"crm.quartz.qrtz_triggers": index scans: 1
pages: 0 removed, 5 remain
tuples: 58 removed, 14 remain
system usage: CPU 0.00s/0.01u sec elapsed 0.06 sec
2009-04-06 10:47:15 HST LOG:  automatic vacuum of table
"crm.pg_toast.pg_toast_63861": index scans: 0
pages: 0 removed, 0 remain
tuples: 0 removed, 0 remain
system usage: CPU 0.00s/0.00u sec elapsed 0.00 sec
2009-04-06 10:47:15 HST LOG:  automatic analyze of table
"crm.quartz.qrtz_triggers" system usage: CPU 0.00s/0.01u sec elapsed
0.04 sec
2009-04-06 10:48:11 HST LOG:  server process (PID 4912) exited with
exit code 128
2009-04-06 10:48:11 HST LOG:  terminating any other active server
processes
2009-04-06 10:48:11 HST WARNING:  terminating connection because of
crash of another server process
2009-04-06 10:48:11 HST DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2009-04-06 10:48:11 HST HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2009-04-06 10:48:11 HST WARNING:  terminating connection because of
crash of another server process
2009-04-06 10:48:11 HST DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2009-04-06 10:48:11 HST HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2009-04-06 10:48:11 HST WARNING:  terminating connection because of
crash of another server process

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] writing c functions for postgres

2009-04-07 Thread Glyn Astill




--- On Tue, 7/4/09, Albe Laurenz  wrote:

> I can find no VARATT_SIZEP in the PostgreSQL 8.3 headers.
> Where did you get that from?
> 
> Yours,
> Laurenz Albe
> 

I think it's deprecated and he should be using SET_VARSIZE instead ...




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 dictionary for statute cites

2009-04-07 Thread Oleg Bartunov

Kevin,

contrib/test_parser - an example of parser code.

On Mon, 6 Apr 2009, Kevin Grittner wrote:


Tom Lane  wrote:

"Kevin Grittner"  writes:

People are likely to search for statute cites, which tend to have a
hierarchical form.



I think what you need is a custom parser


I've just returned to this and after review have become convinced that
this is absolutely necessary; once the default parser has done its
work, figuring out the bounds of a statute cite would be next to
impossible.  Examples of the kind of fun you can have labeling
statutes, ordinances, and rules should you ever get elected to public
office:

10-3-350.10(1)(k)
10.1(40)(d)1
10.40.040(c)(2)
100.525(2)(a)3
105-10.G(3)(a)
11.04C.3.R.(1)
8.961.41(cm)
9.125.07(4A)(3)
947.013(1m)(a)

In any of these, a search string which exactly matches something up to
(but not including) a dash, dot, or left paren should find that thing.


Unfortunately we don't seem to have any really easy way to plug in a
custom parser, other than copy-paste-modify the existing one which
would be a PITA from a maintenance standpoint.


I'm afraid I'm going to have to bite the bullet and do this anyway.
Any guidance on how to go about it may save me some time.  Also, if
there is any way to do this which may be useful to others or integrate
into PostgreSQL to reduce the long-term PITA aspect, I'm all ears.

-Kevin




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] bug in 8.4 pg_dumpall ?

2009-04-07 Thread Grzegorz Jaśkiewicz
dumped db with 2 week version of pg_dumpall (yeah, I know, install
tools first, etc, but that's 8.4).
now trying to restore it from that backup with new psql, gives me that:

ERROR:  syntax error at or near "COLLATE"
LINE 1: ...ATE = template0 OWNER = gjaskie ENCODING = 'UTF8' COLLATE = ...

Ideas? (except for just removing anything after ENCODING)


-- 
GJ

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] I can't solve this, PostgreSQL won't install no matter what!

2009-04-07 Thread Neithan Maxcom
Hi everyone, thanks in advance. I'm using Win XP and trying to install 
PostgreSQL 8.3. I've read the docs and followed every step carefully; 
everything seems fine on the progress bar until it gets stuck and 
prompts: "Failed to run initdb: 1! Please see the logfile in 
"blabla"\tmp\initdb.log"


So here it is:

The files belonging to this database system will be owned by user "SYSTEM".
This user must also own the server process.

The database cluster will be initialized with locale Spanish_Spain.1252.
The default text search configuration will be set to "spanish".

creating directory C:/Archivos de programa/PostgreSQL/8.3/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 10
selecting default shared_buffers/max_fsm_pages ... 400kB/2
creating configuration files ... ok
creating template1 database in C:/Archivos de 
programa/PostgreSQL/8.3/data/base/1 ... child process exited with exit 
code 1
initdb: removing data directory "C:/Archivos de 
programa/PostgreSQL/8.3/data"




After this, I tried installing Postgres Plus Advanced Server 8.3, 
using the operating system locale and encoding (by default)... and bang, it 
won't install either! Thanks to this attempt, I think I've gathered more 
information about what the real problem is... but I can't go any further. 
This is the error I get in the Postgres Plus installation:


(Apr 7, 2009 1:49:07 PM), Install, 
com.enterprisedb.isaction.InitDBFailAction, err, ProductException: 
(error code = 601; message="Initdb Failed"; additional data = [Data 
folder [C:\PostgresPlus\8.3AS\data] does not exist.])

STACK_TRACE: 12
ProductException: (error code = 601; message="Initdb Failed"; additional 
data = [Data folder [C:\PostgresPlus\8.3AS\data] does not exist.])

[and a lot more of junk here, useless in my opinion]

Install logs are showing:
 initout.txt:
Calling initdb as user enterprisedb...
Initializing Database Cluster

C:\PostgresPlus\8.3AS\dbserver\scripts-win>"C:\PostgresPlus\8.3AS\dbserver\bin\initdb"  
--pwfile=password.txt -U enterprisedb -D "C:\PostgresPlus\8.3AS\data" -A 
md5 --locale="Spanish_Spain.1252" --encoding= 
1>"C:\PostgresPlus\8.3AS\dbserver\..\install_logs\initdb_stdout.txt" 
2>"C:\PostgresPlus\8.3AS\dbserver\..\install_logs\initdb_stderr.txt"

Done!


 initdb_stderr.txt: (this is the one I found interesting)
2009-04-07 15:49:05 WARNING:  could not query value for 'std' to 
identify Windows timezone: 2
2009-04-07 15:49:05 WARNING:  could not find a match for Windows 
timezone "Hora de verano romance"

el proceso hijo terminó con código de salida 1
initdb: eliminando el directorio de datos «C:/PostgresPlus/8.3AS/data»



- initdb_stdout.txt:
Los archivos de este cluster serán de propiedad del usuario «Administrador».
Este usuario también debe ser quien ejecute el proceso servidor.
El cluster será inicializado con configuración local Spanish_Spain.1252.

creando el directorio C:/PostgresPlus/8.3AS/data ... hecho
creating subdirectories ... hecho
seleccionando el valor para max_connections ... 10
selecting default shared_buffers/max_fsm_pages ... 400kB/2
creando archivos de configuración ... hecho
creando base de datos template1 en C:/PostgresPlus/8.3AS/data/base/1 ...



and initerr.txt is void


I've changed my time zone and tried encoding values other than UTF8... I 
don't know what more I can do! Please guys, help me out.


Thank you! Looking forward to seeing your replies.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 dictionary for statute cites

2009-04-07 Thread Kevin Grittner
Oleg Bartunov  wrote: 
 
> contrib/test_parser - an example parser code.
 
Thanks!  Sorry I missed that.
 
-Kevin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] bug in 8.4 pg_dumpall ?

2009-04-07 Thread Tom Lane
=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?=  writes:
> dumped db with 2 week version of pg_dumpall (yeah, I know, install
> tools first, etc, but that's 8.4).
> now trying to restore it from that backup with new psql, gives me that:

> ERROR:  syntax error at or near "COLLATE"
> LINE 1: ...ATE = template0 OWNER = gjaskie ENCODING = 'UTF8' COLLATE = ...

We just yesterday changed the spelling of those options.  If you dump
with today's pg_dump it should work.
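For reference, the option names ended up being spelled LC_COLLATE and
LC_CTYPE, so if you would rather patch the existing dump by hand, the
offending line can be edited along these lines (database name and locale
values here are only examples):

CREATE DATABASE mydb TEMPLATE = template0 OWNER = gjaskie ENCODING = 'UTF8'
    LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';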

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] string_to_array with empty input

2009-04-07 Thread Steve Crawford

Did I miss the exciting conclusion or did this drift silently off radar?

I seem to recall three options:

1. Leave as is. Arguments: least effort, no backward compatibility 
issues; since array_to_string evaluates both an array with a single empty 
string and an array with no elements to an empty string, string_to_array 
on empty strings is ambiguous, so we'll call it null. But: it means that 
null input and non-null empty-string input both result in null output, 
and requires everyone to explicitly handle empty strings (with the side 
effect that they really know what the result will be) instead of 
"helping" the majority of users. Requires: documentation change to 
accurately describe the function's behavior.
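A quick illustration of the ambiguity in option 1, as a sketch of the
behavior described there:

-- both an empty array and an array holding one empty string round-trip
-- to the same empty string ...
SELECT array_to_string('{}'::text[], ','),
       array_to_string(ARRAY[''], ',');

-- ... so the reverse direction cannot tell them apart; with the current
-- behavior this returns NULL
SELECT string_to_array('', ',');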


2. Change function to return an array. Arguments: Distinguishes null 
from non-null input, easier coding for most cases, perhaps a less 
surprising result. But: not backward compatible, requires somewhat 
arbitrary decision on correct return value. Requires: code 
change/testing, documentation updates.


In scenario 2, there were two options:
2a. Return zero-element array.
2b. Return array with single empty-string element.

My impression was that among the "change" options, 2b had the most 
support (it is the most useful for the use-cases I've encountered so it 
gets my vote). If the consensus is to change the function, it may be too 
late for 8.4. But the documentation could be updated to reflect current 
and planned behavior.


Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] string_to_array with empty input

2009-04-07 Thread justin

Steve Crawford wrote:

Did I miss the exciting conclusion or did this drift silently off radar?


It was pretty well split between the options.  Tabled for another time.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT or UPDATE TRIGGER

2009-04-07 Thread James B. Byrne
This is what I have come up with.  Comments are welcomed.


  CREATE OR REPLACE FUNCTION hll_pg_fn_ident_insert()
  RETURNS TRIGGER AS $pg_fn$
-- ROW AFTER TRIGGER
-- trigger passes identifier_type, _value and _description
-- received as ARGV[0], ARGV[1] and ARGV[2]
  BEGIN
INSERT INTO identifiers(
entity_id,
identifier_type,
identifier_value,
identifier_description)
  VALUES(
NEW.id,
TG_ARGV[0],
TG_ARGV[1],
TG_ARGV[2]);

  -- Assume the INSERT fails because of a unique key violation,
  --   (entity_id + identifier_type + identifier_value)
  --
  -- This does not matter since we only need ensure that this
  -- alias exists, so handle the exception and return:

  EXCEPTION
WHEN unique_violation THEN
  -- do nothing
  NULL;
  END;
  $pg_fn$ LANGUAGE plpgsql;

  COMMENT ON FUNCTION hll_pg_fn_ident_insert IS
'Used by entities trigger. Inserts a corresponding
identifiers row.';

  CREATE TRIGGER hll_pg_tr_entity_identifier_akna
AFTER INSERT OR UPDATE ON entities
FOR EACH ROW EXECUTE PROCEDURE hll_pg_fn_ident_insert(
  "AKNA", entities.entity_common_name, "Common Name
auto-insert");

  COMMENT ON TRIGGER hll_pg_tr_entity_identifier_akna IS
'Inserts an alias identifier for common name if one does not
exist'




-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] [GENERAL] string_to_array with empty input

2009-04-07 Thread David E. Wheeler

On Apr 7, 2009, at 8:07 AM, Steve Crawford wrote:


In scenario 2, there were two options:
2a. Return zero-element array.
2b. Return array with single empty-string element.

My impression was that among the "change" options, 2b had the most  
support (it is the most useful for the use-cases I've encountered so  
it gets my vote). If the consensus is to change the function, it may  
be too late for 8.4. But the documentation could be updated to  
reflect current and planned behavior.


+1

David

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Number Conversion Function

2009-04-07 Thread justin






Tom Lane wrote:

Tino Wildenhain  writes:

I would not recommend doing this within the database. That's typically
a job for your presentation layer.

... but having said that, I think the "money" datatype has a function
for this.  Whether that's of any use to you I dunno; money is pretty
restrictive about what it can handle.

			regards, tom lane


I disagree that the database is the wrong place; there are cases where it
makes sense.

I looked for what Tom mentions for the money type and can't find
anything, so I wrote a function. Its primary purpose is spelling out
check amounts, but with a little modification it will work for anyone,
and it has no practical limitation: it will work up to the quintillions.

CREATE OR REPLACE FUNCTION spellNumericValue( pValue numeric)
  RETURNS text AS
$BODY$
DECLARE
  _dollar bigint = trunc(pValue)::bigint;
  _cents int = ((pValue - trunc(pValue))*100)::int;
  _spelledAmount text = '' ;
  _brokenOut int[] ;
  _pos integer = 0;
  _word text ;
BEGIN
    
    --lets breakout the number into hundreds into a array
    WHILE _dollar > 0 loop
        _brokenOut = array_append(_brokenOut, (_dollar%1000)::int);
        _dollar = trunc(_dollar/1000);
        _pos = _pos + 1;
    End Loop;

    --this works on numbers between 1 to 999 transforming into english
words. then goes to the 
    --next set of numbers in the array working backwards as the array
was loaded backwards
    --Meaning the highest value is in the last element of the array
_brokenOut
    --This also assumes words thousands millions, billions... occurs
every 10^3 .
    while _pos > 0 loop
        if _brokenOut[_pos] >99 then
            SELECT CASE 
                    WHEN _brokenOut[_pos] > 899 THEN 'Nine Hundred '
                    WHEN _brokenOut[_pos] > 799 THEN 'Eight Hundred '
                    WHEN _brokenOut[_pos] > 699 THEN 'Seven Hundred '
                    WHEN _brokenOut[_pos] > 599 THEN 'Six Hundred ' 
                    WHEN _brokenOut[_pos] > 499 THEN 'Five Hundred '
                    WHEN _brokenOut[_pos] > 399 THEN 'Four Hundred '
                    WHEN _brokenOut[_pos] > 299 THEN 'Three Hundred '
                    WHEN _brokenOut[_pos] > 199 THEN 'Two Hundred '
                    WHEN _brokenOut[_pos] > 99 THEN 'One Hundred '
                    else ''
                end
            into _word;
            _spelledAmount = _spelledAmount || _word ;
        end if;

        Select Case
                WHEN _brokenOut[_pos]%100 = 10 THEN 'Ten '
                WHEN _brokenOut[_pos]%100 = 11 THEN 'Eleven '
                WHEN _brokenOut[_pos]%100 = 12 THEN 'Twelve '
                WHEN _brokenOut[_pos]%100 = 13 THEN 'Thirteen '
                WHEN _brokenOut[_pos]%100 = 14 THEN 'Fourteen '
                WHEN _brokenOut[_pos]%100 = 15 THEN 'Fifteen '
                WHEN _brokenOut[_pos]%100 = 16 THEN 'Sixteen '
                WHEN _brokenOut[_pos]%100 = 17 THEN 'Seventeen '
                WHEN _brokenOut[_pos]%100 = 18 THEN 'Eighteen '
                WHEN _brokenOut[_pos]%100 = 19 THEN 'Nineteen '
                WHEN _brokenOut[_pos]/10%10=2 THEN 'Twenty ' 
                WHEN _brokenOut[_pos]/10%10=3 THEN 'Thirty ' 
                WHEN _brokenOut[_pos]/10%10=4 THEN 'Forty ' 
                WHEN _brokenOut[_pos]/10%10=5 THEN 'Fifty ' 
                WHEN _brokenOut[_pos]/10%10=6 THEN 'Sixty ' 
                WHEN _brokenOut[_pos]/10%10=7 THEN 'Seventy ' 
                WHEN _brokenOut[_pos]/10%10=8 THEN 'Eighty ' 
                WHEN _brokenOut[_pos]/10%10=9 THEN 'Ninety ' 
                ELSE ''
            End
        into _word;
        _spelledAmount = _spelledAmount || _word;

        if _brokenOut[_pos]%100 < 10 or _brokenOut[_pos]%100 > 20
then
            SELECT CASE 
                    WHEN _brokenOut[_pos]%10 = 1 THEN 'One '
                    WHEN _brokenOut[_pos]%10 = 2 THEN 'Two '
                    WHEN _brokenOut[_pos]%10 = 3 THEN 'Three '
                    WHEN _brokenOut[_pos]%10 = 4 THEN 'Four '
                    WHEN _brokenOut[_pos]%10 = 5 THEN 'Five '
                    WHEN _brokenOut[_pos]%10 = 6 THEN 'Six '
                    WHEN _brokenOut[_pos]%10 = 7 THEN 'Seven '
                    WHEN _brokenOut[_pos]%10 = 8 THEN 'Eight '
                    WHEN _brokenOut[_pos]%10 = 9 THEN 'Nine '
                    ELSE ''
                end
            into _word;
    
            _spelledAmount = _spelledAmount || _word;
        end if ;
        
        If _pos = 2 then
            _spelledAmount = _spelledAmount || 'Thousand ';
        elsif _pos = 3  then
            _spelledAmount = _spelledAmount || 'Million ';
        elsif _pos = 4  then
            _spelledAmount = _spelledAmount || 'Billion ';
        elsif _pos = 5 then
            _spelledAmount = _spelledAmount || 'Trillion ';
        elsif _pos = 6 then
            _spelledAmount = _spelledAmount || 'Quadrillion ';
        elsif _pos = 7 then
            _spelledAmount = _spe

Re: [GENERAL] tsearch2 dictionary for statute cites

2009-04-07 Thread Tom Lane
"Kevin Grittner"  writes:
> Can I use a different set of dictionaries
> for creating the tsquery than I did for the tsvector?
 
Sure, as long as the tokens (normalized words) that they produce match
up for words that you want to have match.  Once the tokens come out,
they're just strings as far as the rest of the text search machinery
is concerned.
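For instance, the built-in configurations can already be mixed freely; the
match depends only on whether the produced lexemes line up (a small
illustration, not specific to a custom statute parser):

SELECT to_tsvector('english', 'foxes') @@ to_tsquery('english', 'fox');
-- true: both sides normalize to the lexeme 'fox'

SELECT to_tsvector('english', 'foxes') @@ to_tsquery('simple', 'foxes');
-- false: 'simple' does no stemming, so the query token 'foxes' never
-- matches the stored lexeme 'fox'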

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 dictionary for statute cites

2009-04-07 Thread Oleg Bartunov

On Tue, 7 Apr 2009, Kevin Grittner wrote:


Oleg Bartunov  wrote:

of course, you can build tsquery youself, but once your parser can
recognize your very own token 'xxx', it'd be much better to have
mapping xxx -> dict_xxx, where dict_xxx knows all semantics.


I probably just need to have that "Aha!" moment, slap my forehead, and
move on; but I'm not quite understanding something.  The answer to
this question could be it: Can I use a different set of dictionaries
for creating the tsquery than I did for the tsvector?


Sure! For example, you may want to index all words, so your indexing
dictionaries have no stop word lists, but still forbid people from searching
for common words.
Or, if you want to search 'to be or not to be' you have to use 
dictionaries without stop words.





If so, I can have the dictionaries which generate the tsvector include
the appropriate leading tokens ('341', '341.15', '341.15(3)') and the
dictionaries for the tsquery can only generate the token based on
exactly what the user typed.  That would give me exactly what I want,
but somehow I have gotten the impression that the tsvector and tsquery
need to be generated using the same dictionary set.

I hope that's a mistaken impression?


yes.



-Kevin



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 dictionary for statute cites

2009-04-07 Thread Kevin Grittner
Tom Lane  wrote: 
> "Kevin Grittner"  writes:
>> Can I use a different set of dictionaries
>> for creating the tsquery than I did for the tsvector?
>  
> Sure, as long as the tokens (normalized words) that they produce
> match up for words that you want to have match.  Once the tokens
> come out, they're just strings as far as the rest of the text search
> machinery is concerned.
 
Fantastic!  Don't know how I got confused about that, but the way now
looks clear.
 
Thanks!
 
-Kevin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT or UPDATE TRIGGER

2009-04-07 Thread James B. Byrne

On Tue, April 7, 2009 15:09, Tom Lane wrote:

>
> ALTER DATABASE foo SET log_min_messages = whatever;
>
> Note this will only affect subsequently-started sessions.  Also,
> if memory serves, you have to be superuser to set this particular
> variable.

Thanks.  Am I correct to infer from the output this generates that
log_min_messages = debug is primarily for developers of PG itself? 
I am poking in the dark here.  What I want to do is to determine if
the trigger is firing and whether the function works as intended. 
At the moment I am not seeing anything show up in the secondary
table so I have done something wrong.  Is there some way of getting
PG to tell me what it is doing?

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] tables filled to 1 Gig each

2009-04-07 Thread Cha Yang

Postgresql 8.2
backend to a Java application with JDBC


For whatever reason all of the postgresql tables filled up to 1 gig each.
Accessing Postgresql through either the application or PGadmin fails.

any ideas of how to get into Postgresql or a way to clean up these tables?


thanks





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT or UPDATE TRIGGER

2009-04-07 Thread Chris Spotts
Not in regards to logging detail, but that function in general...
I'm pretty new to postgres, so I could be totally wrong in this, but I think
this thread
http://archives.postgresql.org/pgsql-performance/2008-03/msg00204.php
may pertain if you see some performance degradation with that trigger. 
Like I said, somebody correct me if I'm way off base.
Chris

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of James B. Byrne
Sent: Tuesday, April 07, 2009 1:52 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] INSERT or UPDATE TRIGGER

I am testing the trigger function that I wrote.  Is there a way to
increase the logging detail level for just a single database
instance?  The manual indicates not, but just in case I am
misreading things I am asking here?


-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT or UPDATE TRIGGER

2009-04-07 Thread Tom Lane
"James B. Byrne"  writes:
> I am poking in the dark here.  What I want to do is to determine if
> the trigger is firing and whether the function works as intended. 
> At the moment I am not seeing anything show up in the secondary
> table so I have done something wrong.  Is there some way of getting
> PG to tell me what it is doing?

You might find it more useful to add some elog(LOG) statements to the
trigger body.
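Since the trigger in question is PL/pgSQL, the counterpart of elog() there
is RAISE; a minimal sketch (function name and message text are made up):

CREATE OR REPLACE FUNCTION noisy_trigger_example() RETURNS trigger AS $body$
BEGIN
    -- RAISE LOG writes to the server log every time the trigger fires;
    -- RAISE NOTICE would also echo it back to psql
    RAISE LOG 'trigger % fired on % for id %', TG_NAME, TG_TABLE_NAME, NEW.id;
    RETURN NEW;
END;
$body$ LANGUAGE plpgsql;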

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 dictionary for statute cites

2009-04-07 Thread Oleg Bartunov

On Tue, 7 Apr 2009, Kevin Grittner wrote:


If the document text contains '341.15(3)' I want to find it with a
search string of '341', '341.15', '341.15(3)' but not '341.15(3)(b)',
'341.1', or '15'.  How do I handle that?  Do I have to build my
tsquery values myself as text and cast to tsquery, or is there
something more graceful that I'm missing?


of course, you can build tsquery youself, but once your parser can
recognize your very own token 'xxx', it'd be much better to have
mapping xxx -> dict_xxx, where dict_xxx knows all semantics.
For example, we have our dict_regex 
http://vo.astronet.ru/arxiv/dict_regex.html
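The wiring for that looks roughly like this; parser, token type, and
dictionary names below are all hypothetical:

-- a configuration built on the custom parser, routing the custom token
-- type to a dictionary that understands statute-cite semantics
CREATE TEXT SEARCH CONFIGURATION statute_cfg (PARSER = statute_parser);
ALTER TEXT SEARCH CONFIGURATION statute_cfg
    ADD MAPPING FOR statute WITH statute_dict;
-- the ordinary token types (asciiword, word, ...) still need their usual mappings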


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 dictionary for statute cites

2009-04-07 Thread Kevin Grittner
Oleg Bartunov  wrote: 
> of course, you can build tsquery youself, but once your parser can
> recognize your very own token 'xxx', it'd be much better to have
> mapping xxx -> dict_xxx, where dict_xxx knows all semantics.
 
I probably just need to have that "Aha!" moment, slap my forehead, and
move on; but I'm not quite understanding something.  The answer to
this question could be it: Can I use a different set of dictionaries
for creating the tsquery than I did for the tsvector?
 
If so, I can have the dictionaries which generate the tsvector include
the appropriate leading tokens ('341', '341.15', '341.15(3)') and the
dictionaries for the tsquery can only generate the token based on
exactly what the user typed.  That would give me exactly what I want,
but somehow I have gotten the impression that the tsvector and tsquery
need to be generated using the same dictionary set.
 
I hope that's a mistaken impression?
 
-Kevin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT or UPDATE TRIGGER

2009-04-07 Thread Tom Lane
"James B. Byrne"  writes:
> I am testing the trigger function that I wrote.  Is there a way to
> increase the logging detail level for just a single database
> instance?

ALTER DATABASE foo SET log_min_messages = whatever;

Note this will only affect subsequently-started sessions.  Also,
if memory serves, you have to be superuser to set this particular
variable.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT or UPDATE TRIGGER

2009-04-07 Thread James B. Byrne
I am testing the trigger function that I wrote.  Is there a way to
increase the logging detail level for just a single database
instance?  The manual indicates not, but just in case I am
misreading things I am asking here?


-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] INSERT or UPDATE TRIGGER

2009-04-07 Thread James B. Byrne

On Tue, April 7, 2009 16:07, Tom Lane wrote:

>
> You might find it more useful to add some elog(LOG) statements to
> the trigger body.
>

Thank you again.  I will go through section 44.2 tonight.


-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tables filled to 1 Gig each

2009-04-07 Thread Scott Marlowe
On Tue, Apr 7, 2009 at 1:36 PM, Cha Yang  wrote:
> Postgresql 8.2
> backend to a Java application with JDBC
>
>
> For whatever reason all of the postgresql tables filled up to 1 gig each.
> Accessing Postgresql through either the application or PGadmin fails.

When they get to 1gig they start a new segment, so they look like:

12345.1
12345.2
12345.3

No problem

> any ideas of how to get into Postgresql or a way to clean up these tables?

You don't need to clean it up.  What are the actual symptoms of a
failure, if any, you are having?
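To see how much space each table is really using (all of its 1 GB segments,
plus indexes and TOAST, added together), something along these lines works
on 8.2:

SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;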

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Anyone testing changes to libpq/bcc32.mak?

2009-04-07 Thread Bruce Momjian
Magnus Hagander wrote:
> I don't know that anybody does. We usually get a report a couple of
> minor versions in and fix it then, which supports that guess. It's also
> not tested by the buildfarm. So I think you can call it semi-maintained
> at best.
> 
> So if you want to become the maintainer and test/send patches at an  
> earlier stage, please go right ahead!

Yep, send us the changes and we will blindly apply them.  ;-)

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Kenneth Tilton
I am porting a datamining web app to postgres from a non-sql datastore 
and plan to use temporary tables quite a bit, to manage collections the 
user will be massaging interactively. They might search and find 
anywhere from 50 to 50k items, then filter that, unfilter, sort, etc.


Currently I manage those collections in the server application, meaning 
everything gets pulled from the datastore into RAM. I see postgres 
temporary tables and postgres features in general can greatly simplify 
my code because so much of what I do can be expressed in postgres-ese. Yayyy.


Some on the team think I am nuts, but one reason given was the absence 
of indices and I see (a) temporary tables *can* be indexed and (b) 
postgres does not even use an index for small sets, and many collections 
will be relatively small (as a design goal in fact--we hope to make 
search smarter and return fewer hits).


I thought it would not hurt to check with the gurus before spending a 
week on the wrong code, so... dumb idea?


kenny


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Scott Marlowe
On Tue, Apr 7, 2009 at 3:11 PM, Kenneth Tilton  wrote:
> I am porting a datamining web app to postgres from a non-sql datastore and
> plan to use temporary tables quite a bit, to manage collections the user
> will be massaging interactively. They might search and find anywhere from 50
> to 50k items, then filter that, unfilter, sort, etc.
>
> Currently I manage those collections in the server application, meaning
> everything gets pulled from the datastore into RAM. I see postgres temporary
> tables and postgres features in general can greatly simplify my code because
> so much of what I do can be expressed in postgres-ese. Yayyy.
>
> Some on the team think I am nuts,

People are often resistant to new ideas, even good ones.

> but one reason given was the absence of
> indices and I see (a) temporary tables *can* be indexed

Correct

> and (b) postgres
> does not even use an index for small sets, and many collections will be
> relatively small (as a design goal in fact--we hope to make search smarter
> and return fewer hits).

Correct again.

> I thought it would not hurt to check with the gurus before spending a week
> on the wrong code, so... dumb idea?

Good idea to at least come up with some tests to prove (or disprove)
your point.  We can wait for the gurus to check in later...

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 dictionary for statute cites

2009-04-07 Thread Kevin Grittner
Oleg Bartunov  wrote: 
> contrib/test_parser - an example parser code.
 
Using that as a template, I seem to be on track to use the regexp.c
code to pick out statute cites from the text in my start function, and
recognize when I'm positioned on one in my getlexeme (GETTOKEN)
function, delegating everything before, between, and after statute
cites to the default parser.  (I really didn't want to copy/paste and
modify the whole default parser.)
 
That leaves one question I'm still pretty fuzzy on -- how do I go
about having a statute cite in a tsquery match the entire statute cite
from a tsvector, or delimited leading portions of it, without having
it match shorter portions?
 
For example:
 
If the document text contains '341.15(3)' I want to find it with a
search string of '341', '341.15', '341.15(3)' but not '341.15(3)(b)',
'341.1', or '15'.  How do I handle that?  Do I have to build my
tsquery values myself as text and cast to tsquery, or is there
something more graceful that I'm missing?
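For concreteness, here is the matching behavior I am after, written with
hand-built values on the assumption that the indexing side stores the
leading portions of each cite as separate lexemes:

SELECT $$'341' '341.15' '341.15(3)'$$::tsvector @@ $$'341.15'$$::tsquery;
-- should be true: the query is one of the stored leading portions

SELECT $$'341' '341.15' '341.15(3)'$$::tsvector @@ $$'341.1'$$::tsquery;
-- should be false: '341.1' is not one of the stored portions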
 
-Kevin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] UTF8, LATIN1 conversion problems

2009-04-07 Thread Paulo Angelo
Hi All,

   I've been having some problems with UTF8 and Pgsql for some days. In the
8.3 version these problems became worse.

In the new version,

=> select convert(f0601_desc, 'UTF8', 'LATIN1') from f0601;
gives: ERROR:  ... convert(character varying, unknown, unknown) does not exist

and

=> select convert(f0601_desc::bytea, 'UTF8', 'LATIN1'),f0601_desc from
f0601;
gives:

 convert |  f0601_desc
---------+--------------
 \343    | ã
(1 registro)

And I don't know what to do with a "\343".

I really wanna do something like:

=> select f0601_desc from f0601 where  to_ascii(convert(f0601_desc::bytea,
'UTF8', 'LATIN1')::text,'LATIN1') like 'a';
gives:  (0 registro)


What should I do? I'm thinking of creating a DB with LATIN1 encoding, but
UTF8 is better (or should be :) ).

Thanx for help.

[]'s

Paulo Angelo


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Tom Lane
Kenneth Tilton  writes:
> I am porting a datamining web app to postgres from a non-sql datastore 
> and plan to use temporary tables quite a bit, to manage collections the 
> user will be massaging interactively. They might search and find 
> anywhere from 50 to 50k items, then filter that, unfilter, sort, etc.

The main issue you should think about is whether the required lifespan
of the temp tables matches up with your application's use of database
connections.  If you are going through a connection pooler, for example,
it can be pretty awkward to hold onto the connection that has got the
temp table instances you need.  Web apps in general tend to have a hard
time maintaining such state across successive page references, so I'm
afraid this could be a show-stopper for you.

> Some on the team think I am nuts, but one reason given was the absence 
> of indices and I see (a) temporary tables *can* be indexed 

Yeah, whoever claimed that is simply uninformed, or at least is
well-informed about some other database.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Greg Smith
Temp tables can be great for simplifying your code into more logical 
sections.  When making a case for using them, make sure to point out that 
using them more aggressively can cut down on the amount of indexing you 
need on the big tables, which has positive implications in terms of 
getting simpler and robust query plans and cutting down on insertion 
overhead.


You should be sure to turn on log_temp_files (which is handy in general, 
that's not specific to temp tables).  One specific thing to look for to 
support your case is that sorts that used to execute in RAM and spill to 
disk when they exceed work_mem might instead execute with less memory 
usage; you'll be doing the final sort/filter steps using the temp tables 
instead.  If that is already happening, the overhead of using the temp 
table can end up looking pretty good.


One thing I like doing when in the early development stages is to create a 
seperate disk partition for the temporary tables, turn that into a 
tablespace, and then use temp_tablespaces to point the temp tables toward 
it.  The idea is to separate out I/O to the temp tables so that you can 
measure it to see how significant it is.
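As a sketch (the mount point here is hypothetical), the setup is just:

CREATE TABLESPACE temp_space LOCATION '/mnt/pgtemp';

-- route temp tables and sort spill files there, and log every temp file
-- (both can also be set in postgresql.conf; log_temp_files is superuser-only)
SET temp_tablespaces = 'temp_space';
SET log_temp_files = 0;    -- 0 means log all temporary files, with their sizes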


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Kenneth Tilton



Tom Lane wrote:

Kenneth Tilton  writes:
I am porting a datamining web app to postgres from a non-sql datastore 
and plan to use temporary tables quite a bit, to manage collections the 
user will be massaging interactively. They might search and find 
anywhere from 50 to 50k items, then filter that, unfilter, sort, etc.


The main issue you should think about is whether the required lifespan
of the temp tables matches up with your application's use of database
connections.  If you are going through a connection pooler, for example,
it can be pretty awkward to hold onto the connection that has got the
temp table instances you need.  Web apps in general tend to have a hard
time maintaining such state across successive page references, so I'm
afraid this could be a show-stopper for you.


Ah, I should have disclosed I am a relative web application nooby as well. 
But our design depends anyway on the same server process handling a web 
session from start to finish, and I thought this was doable with 
sufficient effort. I mean, I asked and people said it could be arranged. 
Not so? Yes, I am doomed. But if we can do that, I already have a ton of 
logic for keeping sessions separate and for hanging onto a connection 
for the life of a session (most requests use a PG connection pooler; the 
ones that need to see the temp tables use a dedicated connection, or two 
I think I might need). Keep sessions separate by working the session key 
into the temp table name... well, that's the plan anyway.




Some on the team think I am nuts, but one reason given was the absence 
of indices and I see (a) temporary tables *can* be indexed 


Yeah, whoever claimed that is simply uninformed, or at least is
well-informed about some other database.


I think I misrepresented their position. They did not say it, but I 
think they were referring to some hairy freetext indexing they did on 
the permanent tables. And that's OK, we do not need that on the temp tables.


thx for the input, I will study up on the viability of getting a session 
managed by the same process throughout.


ken

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Kenneth Tilton



Greg Smith wrote:
Temp tables can be great for simplifying your code into more logical 
sections.  When making a case for using them, make sure to point out 
that using them more aggressively can cut down on the amount of indexing 
you need on the big tables, which has positive implications in terms of 
getting simpler and robust query plans and cutting down on insertion 
overhead.


You should be sure to turn on log_temp_files (which is handy in general, 
that's not specific to temp tables).  One specific thing to look for to 
support your case is that sorts that used to execute in RAM and spill to 
disk when they exceed work_mem might instead execute with less memory 
usage; you'll be doing the final sort/filter steps using the temp tables 
instead.  If that is already happening, the overhead of using the temp 
table can end up looking pretty good.


One thing I like doing when in the early development stages is to create 
a seperate disk partition for the temporary tables, turn that into a 
tablespace, and then use temp_tablespaces to point the temp tables 
toward it.  The idea is to separate out I/O to the temp tables so that 
you can measure it to see how significant it is.


Thx, I will keep that in mind as a good way of really seeing what is 
going on. I did notice the tablespace feature but wasn't sure how to 
leverage it. Mgmt has been lusting after those new solid-state memory 
disks (SSDs?), this could be a good excuse for a PO. We are a skunkworks 
project getting as much praise so far for the speed of the web app as 
anything else so we don't want to give up this plus.


ken


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Scott Marlowe
On Tue, Apr 7, 2009 at 5:05 PM, Kenneth Tilton  wrote:
>
>
> Greg Smith wrote:
>>
>> Temp tables can be great for simplifying your code into more logical
>> sections.  When making a case for using them, make sure to point out that
>> using them more aggressively can cut down on the amount of indexing you need
>> on the big tables, which has positive implications in terms of getting
>> simpler and robust query plans and cutting down on insertion overhead.
>>
>> You should be sure to turn on log_temp_files (which is handy in general,
>> that's not specific to temp tables).  One specific thing to look for to
>> support your case is that sorts that used to execute in RAM and spill to
>> disk when they exceed work_mem might instead execute with less memory usage;
>> you'll be doing the final sort/filter steps using the temp tables instead.
>>  If that is already happening, the overhead of using the temp table can end
>> up looking pretty good.
>>
>> One thing I like doing when in the early development stages is to create a
>> seperate disk partition for the temporary tables, turn that into a
>> tablespace, and then use temp_tablespaces to point the temp tables toward
>> it.  The idea is to separate out I/O to the temp tables so that you can
>> measure it to see how significant it is.
>
> Thx, I will keep that in mind as a good way of really seeing what is going
> on. I did notice the tablespace feature but wasn't sure how to leverage it.
> Mgmt has been lusting after those new solid-state memory disks (SSDs?), this
> could be a good excuse for a PO. We are a skunkworks project getting as much
> praise so far for the speed of the web app as anything else so we don't want
> to give up this plus.

Make sure to get the newer generation like Intel's that are fast under
concurrent access.  Most of the older SSDs are horrifically slow when
handling multiple random accesses.

You can use a different method if you need a table available to the
same session.  Create a schema based on the session id, and put your
temp tables there, only don't call them temp tables.  You'll either
need to make sure you always clean up your temp schema your session
created or come up with a daemon that comes along every hour or so and
kills off old schemas that aren't in use anymore.
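A rough sketch of that pattern, with a made-up session id:

-- per-session workspace keyed by the web session id
CREATE SCHEMA session_a1b2c3;
SET search_path TO session_a1b2c3, public;

-- "temp" tables are just ordinary tables living in the session schema
CREATE TABLE session_a1b2c3.search_results (item_id int, rank int);

-- cleanup when the session ends, or from a reaper job for abandoned sessions
DROP SCHEMA session_a1b2c3 CASCADE;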

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread John Cheng

Hi Kenneth,

One concern I have with SSD drives is that the performance degrades over time. 
If you were not familiar with this issue already, take a look at the following 
article.

http://www.anandtech.com/storage/showdoc.aspx?i=3531

It is not a huge problem and I have faith in Intel to come up with a good 
solution fairly quickly, but it is worth noting. Given the cost of SSD, it does 
make me think that perhaps a more cost effective solution is to have plenty of 
RAM on the box. 


John L. Cheng



- Original Message 
> From: Scott Marlowe 
> To: Kenneth Tilton 
> Cc: pgsql-general@postgresql.org
> Sent: Tuesday, April 7, 2009 4:47:17 PM
> Subject: Re: [GENERAL] nooby Q: temp tables good for web apps?
> 
> On Tue, Apr 7, 2009 at 5:05 PM, Kenneth Tilton wrote:
> >
> >
> > Greg Smith wrote:
> >>
> >> Temp tables can be great for simplifying your code into more logical
> >> sections.  When making a case for using them, make sure to point out that
> >> using them more aggressively can cut down on the amount of indexing you 
> >> need
> >> on the big tables, which has positive implications in terms of getting
> >> simpler and robust query plans and cutting down on insertion overhead.
> >>
> >> You should be sure to turn on log_temp_files (which is handy in general,
> >> that's not specific to temp tables).  One specific thing to look for to
> >> support your case is that sorts that used to execute in RAM and spill to
> >> disk when they exceed work_mem might instead execute with less memory 
> >> usage;
> >> you'll be doing the final sort/filter steps using the temp tables instead.
> >>  If that is already happening, the overhead of using the temp table can end
> >> up looking pretty good.
> >>
> >> One thing I like doing when in the early development stages is to create a
> >> seperate disk partition for the temporary tables, turn that into a
> >> tablespace, and then use temp_tablespaces to point the temp tables toward
> >> it.  The idea is to separate out I/O to the temp tables so that you can
> >> measure it to see how significant it is.
> >
> > Thx, I will keep that in mind as a good way of really seeing what is going
> > on. I did notice the tablespace feature but wasn't sure how to leverage it.
> > Mgmt has been lusting after those new solid-state memory disks (SSDs?), this
> > could be a good excuse for a PO. We are a skunkworks project getting as much
> > praise so far for the speed of the web app as anything else so we don't want
> > to give up this plus.
> 
> Make sure the newer generation like Intel's that are fast under
> concurrent access.  Most of the older SSDs are horrificall slow when
> handling multiple random accesses.
> 
> You can use a different method if you need a table available to the
> same session.  Create a schema based on the session id, and put your
> temp tables there, only don't call them temp tables.  You'll either
> need to make sure you always clean up your temp schema your session
> created or come up with a daemon that comes along every hour or so and
> kills off old schemas that aren't in use anymore.
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Kenneth Tilton



Scott Marlowe wrote:

You can use a different method if you need a table available to the
same session.  Create a schema based on the session id, and put your
temp tables there, only don't call them temp tables.  You'll either
need to make sure you always clean up your temp schema your session
created or come up with a daemon that comes along every hour or so and
kills off old schemas that aren't in use anymore.


I am LMAO because Lisp (my server-side lang) does this to noobs, too: 
three (at least) ways to do everything. Well, if all things are equal 
dropping one schema and not kludging up mangled table names has a lot of 
appeal. Thx.


ken


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Scott Marlowe
On Tue, Apr 7, 2009 at 7:12 PM, Kenneth Tilton  wrote:
>
>
> Scott Marlowe wrote:
>>
>> You can use a different method if you need a table available to the
>> same session.  Create a schema based on the session id, and put your
>> temp tables there, only don't call them temp tables.  You'll either
>> need to make sure you always clean up your temp schema your session
>> created or come up with a daemon that comes along every hour or so and
>> kills off old schemas that aren't in use anymore.
>
> I am LMAO because Lisp (my server-side lang) does this to noobs, too: three
> (at least) ways to do everything. Well, if all things are equal dropping one
> schema and not kludging up mangled table names has a lot of appeal. Thx.

Schemas, search_path and views together can let you do some pretty
cool things in terms of integrating external postgresql based apps
with each other.
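For example (all names hypothetical), a view plus a per-role search_path can
present one app's tables to another app in whatever shape it expects:

-- app_b reads "customers", which is really a view over app_a's table
CREATE SCHEMA app_b_compat;
CREATE VIEW app_b_compat.customers AS
    SELECT id, full_name AS name FROM app_a.clients;

ALTER ROLE app_b_role SET search_path = app_b_compat, public;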

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-07 Thread Kenneth Tilton



Scott Marlowe wrote:

On Tue, Apr 7, 2009 at 7:12 PM, Kenneth Tilton  wrote:


Scott Marlowe wrote:

You can use a different method if you need a table available to the
same session.  Create a schema based on the session id, and put your
temp tables there, only don't call them temp tables.  You'll either
need to make sure you always clean up your temp schema your session
created or come up with a daemon that comes along every hour or so and
kills off old schemas that aren't in use anymore.

I am LMAO because Lisp (my server-side lang) does this to noobs, too: three
(at least) ways to do everything. Well, if all things are equal dropping one
schema and not kludging up mangled table names has a lot of appeal. Thx.


Schemas, search_path and views together can let you do some pretty
cool things in terms of integrating external postgresql based apps
with each other.


Or between XHRs? It just occurred to me that if I go with a schema 
instead of temp tables then I do not need to worry about hanging on to a 
connection/pgsession, or even worry about routing a web session to the 
same process if all state is stored in pg under the session id.


ken *coming up to speed slowly, going to look up search_path*

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general