Re: [PERFORM] Optimizer showing wrong rows in plan

2010-03-29 Thread raghavendra t
Hi Matthew,

Thank you for the information.

Once again, I would like to thank everyone in this thread for their support.

Regards
Raghavendra

On Mon, Mar 29, 2010 at 4:47 PM, Matthew Wakeling wrote:

> On Mon, 29 Mar 2010, Tadipathri Raghu wrote:
>
>> As per the documentation, one page is 8 KB. When I create a table with a
>> single int column, each value is 4 bytes. If I insert 2000 rows, they
>> should all fit in one page since it is 8 KB, but the table is growing far
>> larger than expected. Example shown below, taking the previous example
>> table test with one column.
>>
>
> There is more to a row than just the single int column. Each row also
> carries a header with transaction ids, hint bits, an optional oid, and a
> null bitmap, plus a line pointer in the page, before your actual data
> columns. That takes a bit more space.
>
> Matthew
>
> --
> If you let your happiness depend upon how somebody else feels about you,
> now you have to control how somebody else feels about you. -- Abraham Hicks
>
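A minimal sketch of how to see that overhead directly, using a scratch table
t (a hypothetical name, not from the thread) with a single int column:

postgres=# create table t (id integer);
postgres=# insert into t select generate_series(1, 2000);
postgres=# select pg_relation_size('t') as total_bytes,
postgres-#        pg_relation_size('t') / 8192 as pages,
postgres-#        2000 * 4 as raw_int_bytes;

This typically reports several pages rather than one, because each row costs
roughly 24-28 bytes of header and alignment plus a 4-byte line pointer on top
of the 4-byte integer.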


[PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Hi All,

I have a 40 GB table with a few indexes on it. When I run REINDEX on the
table, it takes a long time. I tried increasing maintenance_work_mem, but I
still have not found a satisfying result.

Questions
===
1. Which parameters will have an effect when issuing the REINDEX command?
2. What is the best possible way to increase the speed of REINDEX?
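For reference, a minimal sketch of the session-level setting most directly
involved (idx_test is a hypothetical index name):

postgres=# set maintenance_work_mem = '1GB';  -- applies to this session only
postgres=# reindex index idx_test;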

Thanks in Advance

Regards
Raghavendra


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Hi Kevin,

Thank you for the update,

>>What does the table look like?  What indexes are there?
The table has a combination of bytea columns. The indexes are B-tree, some of them partial.

>>Why are you doing that?
Our table sees a lot of updates and deletes every day, so we prefer REINDEX
to keep the indexes compact and also to recover from a corrupted index.

>> How long?
More than 4 hours.

>>What run time are you expecting?
Less than what it is taking at present.

>>It's hard to answer that without more information, like PostgreSQL
>>version and configuration, for starters.  See:
                                   version
--------------------------------------------------------------------------
 PostgreSQL 8.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2
 20080704 (Red Hat 4.1.2-44), 32-bit
(1 row)

>>http://wiki.postgresql.org/wiki/SlowQueryQuestions
I expected this to be treated as a performance question.

Regards
Raghavendra

On Thu, Apr 1, 2010 at 2:32 AM, Kevin Grittner
wrote:

> raghavendra t  wrote:
>
> > I have a 40 GB table with a few indexes on it.
>
> What does the table look like?  What indexes are there?
>
> > When I run REINDEX on the table,
>
> Why are you doing that?
>
> > it takes a long time.
>
> How long?
>
> > I tried increasing maintenance_work_mem, but I still have not
> > found a satisfying result.
>
> What run time are you expecting?
>
> > Questions
> > ===
> > 1. Which parameters will have an effect when issuing the REINDEX
> >    command?
> > 2. What is the best possible way to increase the speed of REINDEX?
>
> It's hard to answer that without more information, like PostgreSQL
> version and configuration, for starters.  See:
>
> http://wiki.postgresql.org/wiki/SlowQueryQuestions
>
> My best guess is that you can make them instantaneous by not running
> them.  A good VACUUM policy should make such runs unnecessary in
> most cases -- at least on recent PostgreSQL versions.
>
> -Kevin
>
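For context, a minimal sketch of leaning on autovacuum instead, by making it
more aggressive for one heavily updated table (test is a hypothetical table
name; the value is illustrative):

postgres=# alter table test set (autovacuum_vacuum_scale_factor = 0.05);

This makes autovacuum reclaim dead-row space sooner, which keeps the table's
indexes from bloating as quickly and reduces the need for manual REINDEX runs.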


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
>
> If this is a one-time fix for a corrupted index, did you look at
> CREATE INDEX CONCURRENTLY?  You could avoid any down time while you
> fix things up.
>
Using CREATE INDEX CONCURRENTLY will avoid the exclusive lock on the table,
but my question is how to get performance back on the existing indexes. Do
you mean to say: drop the existing indexes and re-create them with
CONCURRENTLY? Does this give the performance back?
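For reference, a minimal sketch of the rebuild-without-blocking pattern
(idx_test_old, idx_test_new, table test and column col are all hypothetical
names):

postgres=# -- CREATE INDEX CONCURRENTLY cannot run inside a transaction block
postgres=# create index concurrently idx_test_new on test (col);
postgres=# drop index idx_test_old;   -- once the new index is built and valid
postgres=# alter index idx_test_new rename to idx_test_old;

Writes to the table keep working while the new index is built; only the final
drop and rename take brief exclusive locks.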

Regards
Raghavendra


On Thu, Apr 1, 2010 at 3:10 AM, Kevin Grittner
wrote:

> raghavendra t  wrote:
>
> > recover from a corrupted index.
>
> If this is a one-time fix for a corrupted index, did you look at
> CREATE INDEX CONCURRENTLY?  You could avoid any down time while you
> fix things up.
>
> http://www.postgresql.org/docs/8.4/interactive/sql-createindex.html
>
> -Kevin
>


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
Thank you for the suggestion.

On Thu, Apr 1, 2010 at 3:21 AM, Kevin Grittner
wrote:

> raghavendra t  wrote:
>
> > my question is how to get performance back on the existing indexes.
> > Do you mean to say: drop the existing indexes and re-create them
> > with CONCURRENTLY? Does this give the performance back?
>
> You would normally want to create first and then drop the old ones,
> unless the old ones are hopelessly corrupted.  Since you still
> haven't given me any information to suggest you need to reindex
> except for the mention of corruption, or any information to help
> identify where the performance bottleneck is, I can't see any other
> improvements to suggest at this point.
>
> -Kevin
>


Re: [PERFORM] How to fast the REINDEX

2010-03-31 Thread raghavendra t
>
> I'm sorry I couldn't come up with more, but what you've provided so
> far is roughly equivalent to me telling you that it takes over four
> hours to travel to see my Uncle Jim, and then asking you how I can
> find out how he's doing in less time than that.  There's just not
> much to go on.  :-(
>
> If you proceed with the course suggested in the URL I referenced,
> people on the list have a chance to be more helpful to you.
>
Instead of looking into the priority of the question or where it has to be
posted, I would appreciate keeping the discussion to the point I raised.
Truly, this question may belong somewhere else, as you mentioned with the
URL, but I expected at least an answer to Q1. I hope I can get the
information from another thread in another category.

Thank you

Regards
Raghavendra


On Thu, Apr 1, 2010 at 3:40 AM, Kevin Grittner
wrote:

> raghavendra t  wrote:
> > Thank you for the suggestion.
>
> I'm sorry I couldn't come up with more, but what you've provided so
> far is roughly equivalent to me telling you that it takes over four
> hours to travel to see my Uncle Jim, and then asking you how I can
> find out how he's doing in less time than that.  There's just not
> much to go on.  :-(
>
> If you proceed with the course suggested in the URL I referenced,
> people on the list have a chance to be more helpful to you.
>
> -Kevin
>


Re: [PERFORM] How to fast the REINDEX

2010-04-01 Thread raghavendra t
Hi All,

System Config
-
CPU - Intel® Xeon® CPU
CPU Speed - 3.16 GHz
Server Model - Sun Fire X4150
RAM-Size - 16GB

Steve:
> So am I to understand I don't need to do daily reindexing as a maintenance
> measure with 8.3.7 on FreeBSD?


My question is similar to Steve's: why should we not do reindexing as a
maintenance task? I was doing REINDEX only to keep the indexes tightly packed
and not end up with 90% holes, because my table has a lot of updates and
deletes. We also have a weekly VACUUM as maintenance, but the REINDEX still
takes a lot of time.

Present parameter settings
---------------------------
maintenance_work_mem - 1GB
checkpoint_segments and wal_buffers are at their default values.
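For reference, a minimal sketch of how these might look in postgresql.conf on
8.4 (illustrative values only, not a recommendation tuned for this machine):

maintenance_work_mem = 1GB     # memory available to REINDEX / CREATE INDEX sorts
checkpoint_segments = 32       # spreads out checkpoints during the bulk write
wal_buffers = 16MB             # larger WAL buffering for write-heavy work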


Kevin, Pierre, Greg, Steve, Hannu, Jorge - thank you for your wonderful
support and for giving me the correct picture of REINDEX in this thread. I
apologize if I have not shared the proper information for resolving my issue.
Will the information provided above help in tuning this better?

Regards
Raghavendra




On Thu, Apr 1, 2010 at 5:55 PM, Pierre C  wrote:

>
>> So am I to understand I don't need to do daily reindexing as a maintenance
>> measure with 8.3.7 on FreeBSD?
>>
>
> Sometimes it's better to have indexes with some space in them so every
> insert doesn't hit a full index page and trigger a page split to make some
> space.
> Of course if the index is 90% holes you got a problem ;)
>
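A minimal sketch of how that free space is controlled, assuming a
hypothetical index idx_test on a table test:

postgres=# create index idx_test on test (id) with (fillfactor = 70);
postgres=# -- or, for an existing index:
postgres=# alter index idx_test set (fillfactor = 70);
postgres=# reindex index idx_test;  -- rebuild so the new fillfactor takes effect

A fillfactor of 70 leaves roughly 30% of each leaf page free for future
inserts and updates, which reduces page splits at the cost of a larger index.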


[PERFORM] ERROR: cache lookup failed for relation X

2010-04-03 Thread raghavendra t
Hi All,

I am facing the error "cache lookup failed for relation X" on PostgreSQL
8.4.2. It is reproducible, and below is an example.

This can be reproduced with two sessions. I am opening two sessions here,
Session A and Session B.
Session A
=========
Step 1 - Create the table

postgres=# create table cache(id integer);

Step 2 - Create the trigger and its function on the table

postgres=# CREATE FUNCTION cachefunc() RETURNS trigger AS $$
BEGIN
RETURN NEW;
END; $$ LANGUAGE plpgsql;

postgres=# CREATE TRIGGER cachetrig BEFORE INSERT ON cache FOR EACH ROW
EXECUTE PROCEDURE cachefunc();

Step 3 - Insert a row into the table

postgres=# insert into cache values (1);

Step 4 - Drop the table inside a BEGIN block, and issue the same drop in
another session, B

postgres=# begin;
postgres=# drop table cache;

Step 5 - Open the second session, B, and issue the same command

postgres=# drop table cache;  -- in Session B, this waits until Session A commits

Step 6 - Issue the commit in Session A

postgres=# commit;

Step 7 - Now we can see the error in Session B

ERROR: cache lookup failed for relation X

Could you please tell me why this is generated and what the cause is?

Thanks in advance

Regards
Raghavendra


Re: [PERFORM] [GENERAL] ERROR: cache lookup failed for relation X

2010-04-03 Thread raghavendra t
Hi Justin,

Thank you for your reply.


> In the future please  don't cross post to multiple lists.


Apologies for that.

Regards
Raghavendra

On Sat, Apr 3, 2010 at 10:34 PM, jus...@magwerks.com
wrote:


> Because you dropped/deleted the table cache in Session A.
>
> The simplest way to look at it is that Session B was locked out when the
> DROP TABLE command was issued from Session A.  By the time Session B finally
> got its chance to drop/delete the table, it was already gone.
>
> What kind of error were you expecting PostgreSQL to return when it can't
> find the table?
>
> In the future please  don't cross post to multiple lists.
>
> -- Message from raghavendra t at 04-03-2010 10:08:11 PM --
>
>
> Step 6 - Issue the commit in Session A
>
> postgres=# commit;
>
> Step 7 - Now we can see the error in Session B
>
> ERROR: cache lookup failed for relation X
> Could you please tell me why this is generated and what the cause is?
>
>
> Thanks in advance
>
> Regards
> Raghavendra
>
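A quick way to confirm Justin's explanation (16384 stands in for the OID
reported in the actual error message; substitute your own):

postgres=# select oid, relname from pg_class where oid = 16384;
 oid | relname
-----+---------
(0 rows)

The OID that Session B resolved before it started waiting no longer exists
once Session A's commit removes the table, so the second DROP reports a cache
lookup failure rather than a plain "table does not exist" error.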