[GENERAL] Performance problem

2006-10-13 Thread roopa perumalraja
I am new to Postgres and I have 4 doubts.

1) I have a performance problem: I am trying to insert around 60 million rows into a partitioned table. First I loaded the .csv file containing the data into a temp table with the COPY command, which was quick; it took only 15 to 20 minutes. Now I am inserting the data from the temp table into the original table with insert into org_table (select * from temp_table); and it has been running for more than an hour and is still inserting. Is there an easier way to do this? (See the sketches after this message.)

2) I want to improve the performance of the database, which I find very slow now that one table has more than 60 million rows. I increased the shared_buffers parameter in the postgresql.conf file to 2, but that doesn't help much.

3) I have partitioned a parent table into 100 child tables, so when I insert data into the parent table it automatically goes into the child tables. I followed http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html. When I did this, I noticed that when I viewed the data of the parent table, it showed the rows of the child tables and was not empty, but the child tables do have the rows in them. I don't understand this.

4) I want to use materialized views, but I don't understand them from http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html. Can anyone explain with a simple example?

Thanks in advance.

Regards
Roopa
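
For reference, here are minimal sketches for items 1, 3 and 4; all file, table, and column names below are made up for the example, not taken from the post.

The load pattern from item 1, and the ONLY keyword that explains the parent/child behaviour asked about in item 3:

  COPY temp_table FROM 'C:/data/ticks.csv' WITH CSV;
  INSERT INTO org_table SELECT * FROM temp_table;

  -- A SELECT on a parent table also returns the rows stored in its
  -- child tables; use ONLY to see the parent's own rows:
  SELECT count(*) FROM org_table;       -- parent plus all children
  SELECT count(*) FROM ONLY org_table;  -- parent alone (typically 0)

For item 4, a "snapshot" materialized view in the style of that page is just an ordinary table plus a refresh routine:

  CREATE TABLE daily_summary AS
      SELECT trade_date, count(*) AS n_rows
      FROM ticks GROUP BY trade_date;

  -- refresh by rebuilding the snapshot:
  TRUNCATE daily_summary;
  INSERT INTO daily_summary
      SELECT trade_date, count(*) FROM ticks GROUP BY trade_date;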


[GENERAL] Maximum size of database

2006-10-17 Thread roopa perumalraja
Hi

I would like to know what the maximum size of a database can be in Postgres 8.1.4. Currently my database size is 37GB and it's pretty slow. I wonder if it's because of the huge amount of data in it.

Thanks in advance.
Roopa

[GENERAL] Postgres Database Slow

2006-10-17 Thread roopa perumalraja
Hi

Currently my database size is 38GB and it is pretty slow in whatever I do with it: taking a backup, vacuuming, reindexing, running any query. Why is that? Is it possible to improve the performance?

Thanks in advance
Roopa

Re: [GENERAL] Maximum size of database

2006-10-17 Thread roopa perumalraja
Thanks for your reply.

I have answered your questions below.

1 & 2) System: Microsoft Windows XP Professional, Version 2002
Computer: Intel Pentium CPU 3.40GHz, 960MB of RAM

3) shared_buffers = 2
autovacuum = on

4) Yes, I am vacuuming and analyzing the database once every day.

5) No concurrent activity; I run one command at a time.

6) Nothing else is running on the box other than Postgres.

I hope these answers help solve my problem. Thanks again.

Roopa

Michael Fuhr <[EMAIL PROTECTED]> wrote:
> On Tue, Oct 17, 2006 at 07:26:25PM -0700, roopa perumalraja wrote:
> > I would like to know what the maximum size of a database can be in
> > Postgres 8.1.4.
>
> http://www.postgresql.org/docs/faqs.FAQ.html#item4.4
>
> > Currently my database size is 37GB and it's pretty slow. I wonder if
> > it's because of the huge amount of data in it.
>
> 37GB isn't all that huge; as the FAQ mentions, much larger databases
> exist. Without more information we'll have to ask some of the
> standard questions:
>
> What's your hardware configuration?
> What operating system and version are you using?
> What are your non-default postgresql.conf settings?
> Are you vacuuming and analyzing the database regularly?
> How much concurrent activity do you have?
> Does anything other than PostgreSQL run on the box?
>
> If you have a specific query that's slow then please post the EXPLAIN
> ANALYZE output. Also, you might get more help on the pgsql-performance
> list.
>
> --
> Michael Fuhr


Re: [GENERAL] Maximum size of database

2006-10-18 Thread roopa perumalraja
Hi

Thanks for your reply.

explain select * from tk_20060403;
                          QUERY PLAN
------------------------------------------------------------------------
 Seq Scan on tk_20060403  (cost=0.00..95561.30 rows=3609530 width=407)
(1 row)

Will this help?

louis gonzales <[EMAIL PROTECTED]> wrote:
> Also, run EXPLAIN on any command and show the results of this, in
> particular any commands that are taking even longer.
>
> roopa perumalraja wrote:
> > Thanks for your reply.
> >
> > I have answered your questions below.
> >
> > 1 & 2) System: Microsoft Windows XP Professional, Version 2002
> > Computer: Intel Pentium CPU 3.40GHz, 960MB of RAM
> >
> > 3) shared_buffers = 2
> > autovacuum = on
> >
> > 4) Yes, I am vacuuming and analyzing the database once every day.
> >
> > 5) No concurrent activity; I run one command at a time.
> >
> > 6) Nothing else is running on the box other than Postgres.
> >
> > I hope these answers help solve my problem. Thanks again.
> >
> > Roopa
> >
> > Michael Fuhr <[EMAIL PROTECTED]> wrote:
> > > On Tue, Oct 17, 2006 at 07:26:25PM -0700, roopa perumalraja wrote:
> > > > I would like to know what the maximum size of a database can be
> > > > in Postgres 8.1.4.
> > >
> > > http://www.postgresql.org/docs/faqs.FAQ.html#item4.4
> > >
> > > > Currently my database size is 37GB and it's pretty slow. I wonder
> > > > if it's because of the huge amount of data in it.
> > >
> > > 37GB isn't all that huge; as the FAQ mentions, much larger databases
> > > exist. Without more information we'll have to ask some of the
> > > standard questions:
> > >
> > > What's your hardware configuration?
> > > What operating system and version are you using?
> > > What are your non-default postgresql.conf settings?
> > > Are you vacuuming and analyzing the database regularly?
> > > How much concurrent activity do you have?
> > > Does anything other than PostgreSQL run on the box?
> > >
> > > If you have a specific query that's slow then please post the EXPLAIN
> > > ANALYZE output. Also, you might get more help on the pgsql-performance
> > > list.
> > >
> > > --
> > > Michael Fuhr
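
Note that plain EXPLAIN shows only the planner's estimates; EXPLAIN ANALYZE, which Michael asked for, actually runs the query and reports real row counts and timings. Using the same table as above:

  EXPLAIN ANALYZE SELECT * FROM tk_20060403;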

[GENERAL] schema diagram with child tables

2006-11-05 Thread roopa perumalraja
Hi all,

I am drawing the schema diagram for my Postgres database. I want to add the child tables to the master table in my diagram, but I am not sure how to do it. I couldn't find any sample diagram for this on the web. If somebody can help me out with this, that would be great.

Thanks a lot for your help
Roopa



[GENERAL] Backup the part of postgres database

2007-01-14 Thread roopa perumalraja
Hi all,
   
  Is it possible to back up only part of my database, for example tables
which have data from April to July 2006? If so, what would be the command
for that?
   
  Thanks a lot in advance
  Roopa

 

Re: [GENERAL] Backup the part of postgres database

2007-01-14 Thread roopa perumalraja

Hi Adam,
   
  Thanks a lot for your immediate reply. Can you please tell me how to use
the pg_dump command to back up specific tables? Thanks a lot in advance.
   
  Regards
  Roopa
  
Adam Rich <[EMAIL PROTECTED]> wrote:
  Roopa,
  You can use the pg_dump command to back up specific tables, but you won't
be able to restrict the backup to specific rows. (You can always create a
separate table just for backups, and dump just that table.)
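
As an illustration of both suggestions (the table, column, and database names here are made up for the example): first copy the rows of interest into a separate table,

  CREATE TABLE trades_apr_jul AS
      SELECT * FROM trades
      WHERE trade_date BETWEEN '2006-04-01' AND '2006-07-31';

and then dump just that table from the command line:

  pg_dump -t trades_apr_jul mydb > trades_apr_jul.sql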
   
   

  -----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of roopa perumalraja
Sent: Sunday, January 14, 2007 8:39 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Backup the part of postgres database 


  Hi all,
   
  Is it possible to back up only part of my database, for example tables
which have data from April to July 2006? If so, what would be the command
for that?
   
  Thanks a lot in advance
  Roopa


 

[GENERAL] Problem with Online-Backup

2007-02-01 Thread roopa perumalraja
Hi all,
   
  While inserts and updates are happening in the database, is it possible
to make the base backup without losing any of the updates?

  What do select pg_start_backup('label'); and pg_stop_backup(); actually
do?

  I am worried that the file system backup tool will not capture the data
files in /pgsql/data/base while they are still being populated. Or does the
WAL segment file cover that?

  Will continuous backup or hot backup (if any) solve the problem?
   
  Thanks a lot in advance

 

Re: [GENERAL] Problem with Online-Backup

2007-02-01 Thread roopa perumalraja
Thanks a lot for your reply. To make it clearer: will there be no loss of
data or data corruption when taking a base backup while inserts and updates
are happening in the database?

  --
  Roopa
  
Richard Huxton  wrote:
  roopa perumalraja wrote:
> Hi all,
> 
> While inserts and updates are happening in the database, is it
> possible to make the base backup without losing any of the updates?

Yes, that's the whole point of PITR. The filesystem backup + WAL files 
give you a working database when restored.

> What do select pg_start_backup('label'); and pg_stop_backup(); actually
> do?

They tell the server that you are taking a backup, and also start a new
WAL segment for you.

http://www.postgresql.org/docs/8.2/static/continuous-archiving.html
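
With WAL archiving already configured (archive_command set), the base
backup sequence is roughly this; the label and paths here are only
examples:

  SELECT pg_start_backup('nightly');
  -- copy the data directory with any file system tool, e.g. from a shell:
  --   tar -cf /backup/base.tar /pgsql/data
  SELECT pg_stop_backup();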

> I am worried that the file system backup tool will not capture the data
> files in /pgsql/data/base while they are still being populated. Or does
> the WAL segment file cover that?

Not sure I understand you here.

-- 
Richard Huxton
Archonet Ltd


 

[GENERAL] Storing database in cluster (Memory)

2007-02-06 Thread roopa perumalraja
Hi all,
   
  As I am inserting 100 million rows daily into partitioned tables (one
per day), it is getting slower. Even the retrieval of data with a select
statement on those tables takes about 30 minutes. I have tried increasing
the parameters in postgresql.conf, but that doesn't help me much as the
number of rows is huge. Would storing the database in memory increase the
performance of insert/update/select on the table in a situation like this?
Thanks a lot in advance.
   
  Regards
  Roopa

 

Re: [GENERAL] Storing database in cluster (Memory)

2007-02-07 Thread roopa perumalraja
>> Hi all,
>>
>> As I am inserting 100 million rows daily into partitioned tables (one
>> per day), it is getting slower.

> What is - the inserts? By how much? What tables? What indexes? How are
> you inserting these rows?

  I take back my figure of 100 million rows. The insert of 20 million rows
every day takes only 10 minutes, as I use a copy statement to copy into a
temporary table from flat files, then do some manipulation of the data and
insert it into the partitioned tables. That problem I have solved. I have
partitioned the tables date-wise. (Is partitioning the tables monthly
recommended?)
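
For reference, a monthly partition in the 8.1 inheritance scheme would
look something like this (table and column names are made up for the
example):

  CREATE TABLE ticks_200604 (
      CHECK (trade_time >= DATE '2006-04-01'
         AND trade_time <  DATE '2006-05-01')
  ) INHERITS (ticks);

With constraint_exclusion = on in postgresql.conf, a select whose WHERE
clause matches the CHECK constraint can skip the other partitions.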
   
  The problem is with another insert: selecting data from one of the
partitioned tables, doing some calculations, then inserting the result
into another table. That is around 28 rows every day. This takes a really
long time (almost a day) if somebody is doing something else with the
database; if nobody is using the database it takes almost two hours. Even
a select statement against other tables in the database affects this
insert. While inserting I use

  BEGIN WORK;
  SELECT ... FROM foo1;
  LOCK TABLE foo2 IN EXCLUSIVE MODE;
  INSERT INTO foo2 (SELECT ...);
  COMMIT WORK;
   
  All the tables are indexed; I am using 4 indexes including the pkey
index. Would dropping the indexes before the insert and recreating them
afterwards help? (A sketch of that pattern follows below.)
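
That pattern would look roughly like this (a sketch only; the index
definition and the insert's select list are assumptions, since the real
ones weren't posted):

  -- drop the non-essential indexes before the bulk insert
  DROP INDEX foo2_sym_idx;

  -- the daily insert-from-select
  INSERT INTO foo2
      SELECT sym, date_trunc('day', trade_time), avg(price)
      FROM foo1
      GROUP BY sym, date_trunc('day', trade_time);

  -- rebuild the index afterwards
  CREATE INDEX foo2_sym_idx ON foo2 (sym);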
  
>> Even the retrieval of data with a select statement on those tables
>> takes about 30 minutes.

>All selects take 30 minutes, regardless what work they do? Or do you have 
>specific selects that are causing problems?

  Any time more than one statement is running on the database, it takes
time. Say, for example, I am inserting 20 million rows into one table; if
at the same time I try updating another table, that takes a lot of time.
  
>> I have tried increasing the parameters in postgresql.conf, but that
>> doesn't help me much as the number of rows is huge.

>What parameters, to what values? What hardware are you running on? What load 
>do you place on it?

  The parameters in postgresql.conf which I increased are:

  fsync = on
  wal_buffers = 128
  checkpoint_segments = 256 (2 GB)
  checkpoint_timeout = 3600 (1 hour)
  work_mem = 128MB
  maintenance_work_mem = 512MB

  I wanted to increase shared_buffers to 60,000, but I am not able to
restart the database if I change it even to 1; it says 'could not start
postmaster'. (max_connections is 100.)
  
>> Would storing the database in memory increase the performance of
>> insert/update/select on the table in a situation like this? Thanks a
>> lot in advance.

>A better question would be "Why isn't my data being cached?". The 
>operating-system should cache regularly used files.

  So I think the problem is that when two things are happening in the
database at once, it takes a long time. That's the reason I was wondering
whether storing the database in memory would solve the problem.

  And I even run vacuum every night.

> Still - more details please, Roopa, and we'll see if anyone can help you.

  I hope this helps.

  Thanks
-- 
Roopa


  