Hi,

A small question ... your opinions are welcome ...

I tested one thing today ...
I have a script whose job is to insert a lot of files into one of my databases.
It is a Perl script that reads each file, checks whether similar data is
already in the db, deletes it if so, and then inserts the contents of the file.
I do this file by file: each time I open a transaction in which I delete the
old data first (one DELETE command) and insert the new data with one INSERT
per line, then I commit and move on to the next file (roughly as in the
sketch below).
My files have between 200 and 5000 lines each; in my test I have about
650,000 lines in total.
I have a dedicated 2 GB partition on the disk for the WAL files.
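
To give an idea, the per-file loop looks roughly like this ("mytable",
"source_file" and "line" are invented names for the example, not my real
schema):

#!/usr/bin/perl
# Rough sketch of my loop, simplified for the example
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'me', '',
                       { AutoCommit => 0, RaiseError => 1 });

for my $file (@ARGV) {
    open my $fh, '<', $file or die "cannot open $file: $!";

    # one transaction per file: delete the old data, then one INSERT per line
    $dbh->do('DELETE FROM mytable WHERE source_file = ?', undef, $file);

    my $sth = $dbh->prepare(
        'INSERT INTO mytable (source_file, line) VALUES (?, ?)');
    while (my $line = <$fh>) {
        chomp $line;
        $sth->execute($file, $line);
    }

    $dbh->commit;    # commit, then go on to the next file
    close $fh;
}

$dbh->disconnect;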

So I started my script and watched the pg_xlog partition grow quickly; it was
full after 20 minutes ...

So ... I think it is my checkpoint_timeout that is too large ... I looked at my
config file and I see:
#checkpoint_timeout = 300

So it's the default ... it is not set ... and the default value seems to be
300 seconds ... (I tested this afterwards).
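
If I understand correctly, reducing it would just mean setting something like
this in postgresql.conf (value in seconds; the 60 is only an example):

checkpoint_timeout = 60    # instead of the default 300 (5 minutes)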

My question is simple ... if I reduce checkpoint_timeout, what is the impact on
normal use of my database? Has anyone done any tests on this?

Would it be possible, in a future version, to change this value somewhere at
runtime, just for big inserts like this ... or does this idea make no sense?

Thanks for your reply,

Regards,
-- 
Hervé Piedvache

Elma Ingénierie Informatique
6 rue du Faubourg Saint-Honoré
F-75008 - Paris - France
Pho. 33-144949901
Fax. 33-144949902

