[PERFORM] determining maxsize for character varying

2007-06-16 Thread okparanoid
Hello, I would like to know whether not specifying a maximum size for a
character varying field decreases performance (perhaps through storage size,
or something else?).

If not, is it good practice to leave the maximum size unspecified?
If it does matter, is there a global setting that tells PostgreSQL to
automatically truncate values longer than the column's maximum length on
INSERT or UPDATE? (A sketch of the actual behaviour follows below.)
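
For reference, PostgreSQL has no such global setting, but the behaviour can be
seen directly: inserting an over-length value into a varchar(n) column raises
an error, while an explicit cast to varchar(n) truncates silently. A minimal
sketch (the table and column names are made up for illustration):

    -- hypothetical table with a bounded column
    CREATE TABLE demo (name character varying(8));

    -- raises: value too long for type character varying(8)
    INSERT INTO demo (name) VALUES ('far too long a string');

    -- an explicit cast truncates silently to 8 characters
    INSERT INTO demo (name) VALUES ('far too long a string'::varchar(8));

A BEFORE INSERT OR UPDATE trigger applying such a cast would be the usual
per-table workaround.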

Sorry for my bad English...

Many thanks


Re: [PERFORM] determining maxsize for character varying

2007-06-16 Thread okparanoid
Thanks

If I understand correctly, that means that whether I choose character
varying(3), character varying(8), character varying(32), or character varying
with no maximum length, the fields take the same space on disk (within the
8 kB page), except for values too long to fit in the 8 kB page, which are
stored elsewhere?

Is that correct?

So for small strings, is it better to choose character(n) when possible? (See
the check below.)
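
One way to check this directly, assuming a server new enough to have
pg_column_size() (available since 8.1): the declared maximum does not change
how a given value is stored, while character(n) is blank-padded to n
characters:

    SELECT pg_column_size('abc'::varchar(3))  AS vc3,
           pg_column_size('abc'::varchar(32)) AS vc32,
           pg_column_size('abc'::varchar)     AS vc,
           pg_column_size('abc'::char(32))    AS c32;

The three varchar variants come out identical, while the char(32) value is
larger because of the padding, so character(n) generally saves nothing for
small strings.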


Best regards,

Loic

Quoting Andreas Kretschmer <[EMAIL PROTECTED]>:

> [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> > Hello, I would like to know whether not specifying a maximum size for a
> > character varying field decreases performance (perhaps through storage
> > size, or something else?)
>
> No problem, thanks to the TOAST technology:
> http://www.postgresql.org/docs/current/static/storage-toast.html
>
>
> Andreas
> --
> Really, I'm not out to destroy Microsoft. That will just be a completely
> unintentional side effect.  (Linus Torvalds)
> "If I was god, I would recompile penguin with --enable-fly."(unknow)
> Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°





[PERFORM] update 600000 rows

2007-12-14 Thread okparanoid

Hello

I have a Python script that updates 600,000 rows in one table of my PostgreSQL
database from a CSV file, and the transaction takes 5 hours... (A sketch of a
bulk approach follows below.)
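
A common pattern for this kind of bulk update, sketched here with made-up
table and column names, is to COPY the CSV into a temporary table and then
issue a single UPDATE ... FROM, instead of 600,000 individual UPDATEs:

    BEGIN;
    CREATE TEMP TABLE staging (id integer, newval text);
    -- server-side COPY needs a file readable by the server process;
    -- \copy in psql reads the file on the client side instead
    COPY staging FROM '/path/to/data.csv' WITH CSV;
    UPDATE target
       SET val = staging.newval
      FROM staging
     WHERE target.id = staging.id;
    COMMIT;

Both COPY ... WITH CSV and UPDATE ... FROM are available in 8.1, and this is
typically far faster than row-by-row updates driven from the client.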

I'm on Debian Etch with a PostgreSQL 8.1 server, on a 64-bit machine with four
dual-core Opterons.

I have deactivated all indexes except the primary key, which is not updated
since it is also the column the update is keyed on.

When I run this script, no other user is using the server.

First, when I run htop I see that memory usage never exceeds 150 MB.
Given that, I don't understand why setting the shmall and shmmax kernel
parameters to 16 GB (the server has 32 GB) speeds up the transaction a lot
compared to setting shmall and shmmax to (only) 2 GB?!
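
For context, shmmax and shmall only raise the kernel's ceiling on shared
memory; how much PostgreSQL actually allocates is set in postgresql.conf,
chiefly by shared_buffers. A sketch with purely illustrative values:

    # /etc/sysctl.conf (illustrative, not recommendations)
    kernel.shmmax = 17179869184   # max size of one segment, in bytes (16 GB)
    kernel.shmall = 4194304       # total shared memory, in 4 kB pages on Linux

    # postgresql.conf (in 8.1 shared_buffers is a count of 8 kB buffers)
    shared_buffers = 262144       # 262144 * 8 kB = 2 GB, for example

So if raising shmmax/shmall alone made the script faster, the likely
explanation is that it let a larger shared_buffers setting take effect; the
kernel parameters by themselves give PostgreSQL no extra memory.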

The script runs everything in a single transaction and pauses now and then to
give PostgreSQL time to write data to disk.

Would performance be better if the data were only written at the end of the
transaction? In production I want data written to disk regularly to prevent
data loss, but is there any point in flushing intermediate data to disk in the
middle of a transaction???
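
On that point: pausing inside an open transaction does not force anything
useful to disk; durability comes from the WAL being flushed at COMMIT. If the
concern is losing too much work at once, committing in batches is the usual
approach (a sketch; table, column, and batch boundaries are made up):

    BEGIN;
    UPDATE target SET val = 'x' WHERE id BETWEEN 1 AND 10000;      -- batch 1
    COMMIT;  -- WAL flushed; batch 1 is now durable

    BEGIN;
    UPDATE target SET val = 'x' WHERE id BETWEEN 10001 AND 20000;  -- batch 2
    COMMIT;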

I'm a complete newbie at PostgreSQL and database configuration, and any help
is welcome.

Thanks


