-Original Message-
From: Matthew Wakeling [mailto:matt...@flymine.org]
On Thu, 14 May 2009, Ow Mun Heng wrote:
>> Shouldn't BITMAP indexes come into play?
>>
>> Is having one index on 3 columns better than 3 separate indexes on 3
>> different columns?
-Original Message-
From: pgsql-performance-ow...@postgresql.org
>> A much better index to answer your query is (city_id, house_id, floor_id) -
>> then it can just look it up straight away. Instead of the index returning
>> 20 rows to check, it will return just the matching rows.
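For anyone following along, a minimal sketch of the two approaches being compared (the table and index names below are hypothetical, used only for illustration):

-- Hypothetical table used only to illustrate the comparison
CREATE TABLE rooms (city_id int, house_id int, floor_id int, label text);

-- Three single-column indexes: the planner must AND their results
-- together (a BitmapAnd) or pick just one and filter the rest
CREATE INDEX rooms_city_idx  ON rooms (city_id);
CREATE INDEX rooms_house_idx ON rooms (house_id);
CREATE INDEX rooms_floor_idx ON rooms (floor_id);

-- One multicolumn index: a single index descent finds the matching rows directly
CREATE INDEX rooms_all_idx ON rooms (city_id, house_id, floor_id);

EXPLAIN ANALYZE
SELECT * FROM rooms WHERE city_id = 1 AND house_id = 2 AND floor_id = 3;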
On Fri, 2008-07-11 at 09:55 -0500, Kevin Grittner wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:
> >>> Ow Mun Heng wrote:
> >>> If it were this simple a change, I'm not certain why (I believe) PG
> >>> is checking each and every row to see whether it still fits.
On Thu, 2008-07-10 at 09:57 -0400, Alvaro Herrera wrote:
> Ow Mun Heng wrote:
>
> > This is what I see on the table
> >
> > NEW atttypmod = -1
> > OLD atttypmod = 8
>
> 8 means varchar(4) which is what you said you had (4+4)
> -1 means unlimited size.
>
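For reference, the value being discussed can be read straight from the catalog (table and column names below are placeholders):

-- varchar(n) stores atttypmod = n + 4; -1 means no length limit
SELECT attname, atttypmod
FROM pg_attribute
WHERE attrelid = 'my_table'::regclass
  AND attname = 'my_column';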
On Thu, 2008-07-10 at 10:36 +0200, Mario Weilguni wrote:
> Ow Mun Heng wrote:
> >
> > I want to change a column type from varchar(4) to varchar()
> >
> >
> Example:
> {OLDLEN} = 4
> {NEWLEN} = 60
>
> update pg_attribute
>    set atttypmod = {NEWLEN} + 4
>  where attrelid = (select oid from pg_class where relname = '{TABLENAME}')
>    and attname = '{COLUMNNAME}';
Are there any quick hacks to do this? There are around 20-30 million
rows of data.
I want to change a column type from varchar(4) to unconstrained varchar.
The table size is ~10-15GB (and another 10-15GB for indexes).
What would be the preferred way of doing it? Should I be dropping the
indexes first to make the change faster?
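For comparison, the standard (non-hack) route is a plain ALTER TABLE; on releases of that era it rewrites the table and its indexes, which is exactly the cost being asked about. Names below are placeholders:

-- Standard approach; expect a full table rewrite on older releases
ALTER TABLE my_table ALTER COLUMN my_column TYPE varchar;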
On Mon, 2008-04-07 at 11:50 +0100, Matthew wrote:
> On Mon, 7 Apr 2008, Ow Mun Heng wrote:
> > just wondering if there's a special tweak i can do to force more usage
> > of indexes to do BITMAP ands?
>
> There's no need to post this again. You have already had answers to it.
just wondering if there's a special tweak i can do to force more usage
of indexes to do BITMAP ands?
I have a table like
A int
B int
C int
D int
E int
F int
G int
where A/B/C/D/E are indexed.
There are ~20 million rows in the table.
Queries are something like this:
select * from table
where A = X
  and B = Y
  and C = Z
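There is no switch that forces a BitmapAnd outright; a common way to check whether the planner will even consider one is below (for diagnosis only, table and column names are placeholders):

-- Discourage sequential scans so bitmap plans show up in EXPLAIN
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM my_table WHERE a = 1 AND b = 2 AND c = 3;
RESET enable_seqscan;
-- If the bitmap plan really is cheaper, lowering random_page_cost may
-- let the planner choose it without any forcing.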
On Thu, 2008-02-14 at 17:29 -0800, Josh Berkus wrote:
> David,
>
> > Once per quarter, we need to load a lot of data, which causes many
> > updates across the database. We have an online transaction
> > processing-style application, which we really want to stay up during the
> > update job.
> Ho
On Fri, 2007-11-16 at 11:06 -0500, Jonah H. Harris wrote:
> On Nov 16, 2007 10:56 AM, Dave Dutcher <[EMAIL PROTECTED]> wrote:
> > I don't know about that. There are times when it is the right plan:
>
> Agreed. IMHO, there's nothing wrong with nested-loop join as long as
> it's being used properly.
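A quick, session-local way to see whether a particular nested loop is the planner being right or the planner being wrong (hypothetical tables; not something to leave on in production):

-- Compare the chosen plan with and without nested loops available
EXPLAIN ANALYZE
SELECT o.id, c.name
FROM orders o JOIN customers c ON c.id = o.customer_id;

SET enable_nestloop = off;
EXPLAIN ANALYZE
SELECT o.id, c.name
FROM orders o JOIN customers c ON c.id = o.customer_id;

RESET enable_nestloop;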
On Fri, 2007-11-09 at 16:41 +0100, Sebastian Hennebrueder wrote:
> If the queries are complex, this is understandable. I had a performance
> review of a Hibernate project (Java Object-Relational Mapping) using
> MySQL. ORM easily produces "complex" queries with joins and subqueries.
> MySQL uses nested loops
On Thu, 2007-11-01 at 11:16 -0700, Steve Crawford wrote:
> Magnus Hagander wrote:
> > Ow Mun Heng wrote:
> >>> You're likely better off (performance-wise) putting it on the same disk
> >>> as the database itself if that one has better RAID, for example.
>
On Thu, 2007-11-01 at 07:54 +0100, Magnus Hagander wrote:
> Ow Mun Heng wrote:
> > On Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:
> >
> >> 2) separate the transaction log from the database
> >>
> >> It's mostly written, and it's the most valuable data you have.
On Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:
> 2) separate the transaction log from the database
>
> It's mostly written, and it's the most valuable data you have. And in
> case you use PITR, this is the only thing that really needs to be
> backed up.
My main DB datastore
On Wed, 2007-10-10 at 19:49 -0500, Scott Marlowe wrote:
> On 10/10/07, Radhika S <[EMAIL PROTECTED]> wrote:
> > Thank you Scott.
> >
> > We plan on upgrading to Postgres 8.2 very soon.
> > Would it be safe to say I can make my shared_buffers setting 200MB (I have
> > 2GB memory)?
> > The default i
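For reference, a sketch of checking that setting (8.2 and later accept memory units in postgresql.conf, e.g. shared_buffers = 200MB; the 200MB figure here assumes roughly 10% of the 2GB box):

-- See what the server is actually running with
SHOW shared_buffers;
-- shared_buffers itself is changed in postgresql.conf and needs a restart.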
On Thu, 2007-10-04 at 08:30 -0400, Alvaro Herrera wrote:
> Henrik wrote:
>
> > Ahh I had exactly 8 joins.
> > Following your suggestion I raised the join_collapse_limit from 8 to 10 and
> > the planner's decision sure changed, but now I have some crazy nested loops.
> > Maybe I have some statistics issue.
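A session-level sketch of both knobs touched on here (table and column names are hypothetical):

-- Let the planner consider reordering larger join trees in this session
SET join_collapse_limit = 10;
SET from_collapse_limit = 10;

-- If the crazy nested loops come from bad row estimates, richer statistics
-- on the join/filter columns may fix the estimates instead
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;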
On Wed, 2007-09-26 at 00:02 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > Where can I read more about this new "feature"?
>
> http://developer.postgresql.org/pgdocs/postgres/sql-createfunction.html
>
> http://developer.postgresql.org/
On Tue, 2007-09-25 at 11:31 +0200, Steinar H. Gunderson wrote:
> On Tue, Sep 25, 2007 at 12:53:55AM -0400, Carlo Stonebanks wrote:
> > My problem is that I think that the SRFs are causing my problems. An SRF
> > gets an automatic row estimate of 1000 rows. Add a condition to it, and the
> > planner just guesses.
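The "new feature" in the links above is the ROWS/COST clause on CREATE FUNCTION (8.3 onward), which overrides that default 1000-row estimate; a minimal sketch with a hypothetical function:

-- Tell the planner to expect ~100 rows from this SRF instead of 1000
CREATE FUNCTION recent_ids(p_limit int)
RETURNS SETOF integer AS $$
    SELECT generate_series(1, p_limit);
$$ LANGUAGE sql STABLE
ROWS 100 COST 10;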
...when I'm pulling from, the nested loops are
many, and I presume it's because of the 8x SMP and the multi-GB RAM that the
query still runs fast.
>
> Carlo
>
> -Original Message-
> From: Ow Mun Heng [mailto:[EMAIL PROTECTED]
> Sent: September 24, 2007 8:51 PM
>
...individual queries were fast. So, in the end, I made an
SRF and used the SRFs to join each other. This worked better.
>
>
> "Ow Mun Heng" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> > On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:
On Mon, 2007-09-17 at 07:27 -0500, Decibel! wrote:
> Take a look at the stuff at http://decibel.org/~decibel/pervasive/, it'd
> hopefully provide a useful starting point.
A bit off track, but I was reading the articles and noticed this at the
bottom. Is this a typo or ...
Making PostreSQL pervasive
On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:
> (SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE)
>
> I am noticing that my queries are spending a lot of time in nested loops.
> The table/index row estimates are not bad, but the nested loops can be off
> b
On Wed, 2007-09-12 at 15:14 +0200, Nis Jørgensen wrote:
> El-Lotso skrev:
>
> > I'm on the verge of giving up... the schema seems simple and yet there are
> > so many issues with it. Perhaps it's the layout of the data, I don't
> > know. But based on the ordering/normalisation of the data and the on
On Fri, 2007-09-14 at 08:33 +0530, Harsh Azad wrote:
> ah, thanks. Didn't realize tablespaces can be specified while
> creating an index. BTW, are soft links OK to use for pg_clog /
> pg_xlog? I moved the existing directories to /mnt/logs/pglogs and
> made soft links for both directories in $PGDATA.
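For reference, the tablespace route mentioned here looks like this (directory and names are hypothetical; the directory must already exist and be owned by the postgres user):

-- Put indexes on a separate disk via a tablespace
CREATE TABLESPACE fast_idx LOCATION '/mnt/fastdisk/pg_indexes';
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE fast_idx;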
On Fri, 2007-09-14 at 08:20 +0530, Harsh Azad wrote:
> Hi,
>
> Where are the database index files located in the $PGDATA directory? I
> was thinking of soft-linking them to another physical hard disk array.
You have to search through pg_class for the "number" (the relfilenode).
Alternatively, you can try using tablespaces.
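A sketch of that pg_class lookup (the index name is hypothetical):

-- Map a relation to its on-disk file; the file lives under
-- $PGDATA/base/<database oid>/<relfilenode>
SELECT relname, relfilenode
FROM pg_class
WHERE relname = 'orders_customer_idx';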