Phil Endecott <[EMAIL PROTECTED]> writes:
> Greg Stark wrote:
>
> > You're omitting the time spent finding the actual table for the correct
> > user in your current scheme. That's exactly the same as the log(u) factor
> > above.
>
> I hope not - can anyone confirm?
>
> I have the impression tha
Greg Stark wrote:
The sort of question I do need to answer is this: starting from individual
X, find all the ancestors and descendants for n generations. This involves n
iterations of a loop, joining the relatives found so far with the next
generation. If there are p people in the tree this has s
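A minimal sketch of that per-generation join loop, assuming a hypothetical relationship(parent_id, child_id) table inside the user's schema (neither the table nor the column names come from the thread). On 8.4 and later the loop can be written as a recursive CTE; on the 7.4 server discussed here it would have to be n explicit self-joins or an application-side loop:

    -- Hedged sketch: find ancestors of individual X (person 42) up to 5 generations.
    WITH RECURSIVE ancestors(person_id, depth) AS (
        SELECT parent_id, 1
        FROM relationship
        WHERE child_id = 42              -- individual X
      UNION ALL
        SELECT r.parent_id, a.depth + 1
        FROM relationship r
        JOIN ancestors a ON r.child_id = a.person_id
        WHERE a.depth < 5                -- n generations
    )
    SELECT person_id, depth FROM ancestors;

The descendant side is the mirror image, joining on parent_id instead of child_id.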
Phil Endecott <[EMAIL PROTECTED]> writes:
> Those aren't questions that I need to answer often.
But the fact that they're utterly infeasible in your current design is a bad
sign. Just because you don't need them now doesn't mean you won't need
*something* that spans users later. Sometimes you h
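For contrast, a sketch of the shared-table layout this argument points toward. The names (people, user_id, person_id) are illustrative assumptions, not anything from the thread: a composite primary key gives the same logarithmic per-user lookup, while queries that span users stay expressible.

    -- Hedged sketch of the single-shared-table alternative (names are illustrative).
    CREATE TABLE people (
        user_id   integer NOT NULL,
        person_id integer NOT NULL,
        name      text,
        PRIMARY KEY (user_id, person_id)  -- the btree probe on user_id is the log(u) factor
    );

    -- Per-user access is still a single indexed probe:
    SELECT name FROM people WHERE user_id = 123 AND person_id = 42;

    -- Queries that span users remain straightforward:
    SELECT user_id, count(*) AS persons FROM people GROUP BY user_id;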
Greg Stark wrote:
Phil Endecott wrote:
Just to give a bit of background, in case it is useful: this is my family tree
website, treefic.com. I have a schema for each user, each with about a dozen
tables. In most cases the tables are small, i.e. tens of entries, but the
users I care about are the ones with tens of thousands
Phil Endecott <[EMAIL PROTECTED]> writes:
> Hello again,
>
> Just to give a bit of background, in case it is useful: this is my family tree
> website, treefic.com. I have a schema for each user, each with about a dozen
> tables. In most cases the tables are small, i.e. tens of entries, but the
On Fri, Jul 29, 2005 at 09:08:28AM -0400, Jeff Trout wrote:
>
> On Jul 28, 2005, at 2:40 PM, Jan Wieck wrote:
>
> >Then again, the stats file is only written. There is nothing that
> >actually forces the blocks out. On a busy system, one individual
> >stats file will be created, written to, renamed, live for 500ms and be
> >thrown away by the next stat files rename operation.
>
On Jul 28, 2005, at 2:40 PM, Jan Wieck wrote:
Then again, the stats file is only written. There is nothing that
actually forces the blocks out. On a busy system, one individual
stats file will be created, written to, renamed, live for 500ms and be
thrown away by the next stat files rename operation.
If one is running with stats_reset_on_server_start true (the default)
d
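For reference, the 7.4-era statistics settings touched on above can be checked from psql with SHOW; a small sketch (these GUC names were later renamed or removed in newer releases):

    -- Inspect how much the stats collector is being asked to track (7.4-era names).
    SHOW stats_start_collector;
    SHOW stats_reset_on_server_start;
    SHOW stats_command_string;
    SHOW stats_block_level;
    SHOW stats_row_level;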
On Thu, Jul 28, 2005 at 03:12:33PM -0400, Greg Stark wrote:
> I think occasionally people get bitten by not having their pg_* tables being
> vacuumed or analyzed regularly. If you have lots of tables and the stats are
> never updated for pg_class or related tables you can find the planner taking a
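A concrete illustration of that point, as a sketch rather than anything prescribed in the thread: analyzing the catalogs the planner consults so their statistics stay current (on 7.4, where autovacuum was still a contrib daemon, this had to be scheduled by hand):

    -- Keep planner statistics for the system catalogs fresh; with tens of
    -- thousands of tables, pg_class and pg_attribute grow large enough to matter.
    VACUUM ANALYZE pg_catalog.pg_class;
    VACUUM ANALYZE pg_catalog.pg_attribute;
    VACUUM ANALYZE pg_catalog.pg_index;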
On Thu, Jul 28, 2005 at 05:48:21PM -0500, Guy Rouillier wrote:
> Jan Wieck wrote:
>
> > Then again, the stats file is only written. There is nothing that
> > actually forces the blocks out. On a busy system, one individual stats
> > file will be created, written to, renamed, live for 500ms and be
> > thrown away by the next stat files rename operation.
>> This is Linux 2.4.26 and an ext3 filesystem.
> With the dir_index feature or without?
With, I believe. It is enabled in the superblock (tune2fs -O dir_index)
but this was not done when the filesystem was created, so only new
directories are indexed, I think. I don't think there's a way to index the existing ones.
Jan Wieck wrote:
> Then again, the stats file is only written. There is nothing that
> actually forces the blocks out. On a busy system, one individual stats
> file will be created, written to, renamed, live for 500ms and be
> thrown away by the next stat files rename operation. I would assume
> t
On Thu, Jul 28, 2005 at 09:43:44PM +0200, Peter Wiersig wrote:
> On Thu, Jul 28, 2005 at 08:31:21PM +0100, Phil Endecott wrote:
> >
> > This is Linux 2.4.26 and an ext3 filesystem.
>
> With the dir_index feature or without?
Also, with data=ordered, data=writeback or data=journal?
(First one is default.)
Scott Marlowe wrote:
Yeah, I found these three facets of the OP's system a bit disconcerting:
QUOTE ---
This is for a web application which uses a new connection for each CGI
request.
The server doesn't have a particularly high disk bandwidth and this
mysterious activity had been the bottleneck
On Thu, Jul 28, 2005 at 08:31:21PM +0100, Phil Endecott wrote:
>
> This is Linux 2.4.26 and an ext3 filesystem.
With the dir_index feature or without?
Peter
Hello again,
Just to give a bit of background, in case it is useful: this is my
family tree website, treefic.com. I have a schema for each user, each
with about a dozen tables. In most cases the tables are small, i.e.
tens of entries, but the users I care about are the ones with tens of
thousands
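To make that layout concrete, a minimal sketch of the schema-per-user arrangement, with schema and table names that are purely illustrative:

    -- One schema per user, each with its own small tables.
    CREATE SCHEMA user_12345;
    CREATE TABLE user_12345.people (
        person_id serial PRIMARY KEY,
        name      text
    );
    -- ...roughly a dozen such tables per schema...

    -- Each session then works inside its own schema:
    SET search_path TO user_12345;
    SELECT count(*) FROM people;

With thousands of users this multiplies into tens of thousands of relations, which is what the rest of the thread is reacting to.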
Jan Wieck <[EMAIL PROTECTED]> writes:
> >> PostgreSQL itself doesn't work too well with tens of thousands of tables.
> > Really? AFAIK it should be pretty OK, assuming you are on a filesystem
> > that doesn't choke with tens of thousands of entries in a directory.
> > I think we should put down
Jan Wieck <[EMAIL PROTECTED]> writes:
> Then again, the stats file is only written. There is nothing that actually
> forces the blocks out. On a busy system, one individual stats file will be
> created, written to, renamed, live for 500ms and be thrown away by the next
> stat files rename operation.
On Thu, 2005-07-28 at 13:40, Jan Wieck wrote:
> On 7/28/2005 2:28 PM, Tom Lane wrote:
>
> > Jan Wieck <[EMAIL PROTECTED]> writes:
> >> On 7/28/2005 2:03 PM, Tom Lane wrote:
> >>> Well, there's the problem --- the stats subsystem is designed in a way
> >>> that makes it rewrite its entire stats collection on every update.
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 7/28/2005 2:28 PM, Tom Lane wrote:
>> Jan Wieck <[EMAIL PROTECTED]> writes:
>>> PostgreSQL itself doesn't work too well with tens of thousands of
>>> tables.
>>
>> Really? AFAIK it should be pretty OK, assuming you are on a filesystem
> >> that doesn't choke with tens of thousands of entries in a directory.
On 7/28/2005 2:28 PM, Tom Lane wrote:
Jan Wieck <[EMAIL PROTECTED]> writes:
On 7/28/2005 2:03 PM, Tom Lane wrote:
Well, there's the problem --- the stats subsystem is designed in a way
that makes it rewrite its entire stats collection on every update.
That's clearly not going to scale well to a large number of tables.
Jan Wieck <[EMAIL PROTECTED]> writes:
> On 7/28/2005 2:03 PM, Tom Lane wrote:
>> Well, there's the problem --- the stats subsystem is designed in a way
>> that makes it rewrite its entire stats collection on every update.
>> That's clearly not going to scale well to a large number of tables.
>> Off
On 7/28/2005 2:03 PM, Tom Lane wrote:
Phil Endecott <[EMAIL PROTECTED]> writes:
For some time I had been trying to work out why every connection to my
database resulted in several megabytes of data being written to the
disk, however trivial the query. I think I've found the culprit:
global/pgstat.stat. This is with 7.4.7.
Phil Endecott <[EMAIL PROTECTED]> writes:
> For some time I had been trying to work out why every connection to my
> database resulted in several megabytes of data being written to the
> disk, however trivial the query. I think I've found the culprit:
> global/pgstat.stat. This is with 7.4.7.
Dear Postgresql experts,
For some time I had been trying to work out why every connection to my
database resulted in several megabytes of data being written to the
disk, however trivial the query. I think I've found the culprit:
global/pgstat.stat. This is with 7.4.7.
This is for a web application which uses a new connection for each CGI request.
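One way to see why the stats file gets so big, sketched here as an assumption about the layout rather than a diagnostic taken from the thread: count the relations the collector has to track.

    -- How many relations does the stats collector have to describe in
    -- global/pgstat.stat?
    SELECT count(*) AS relations
    FROM pg_class
    WHERE relkind IN ('r', 'i', 't');

    -- Which schemas contribute the most (one schema per user in this design)?
    SELECT n.nspname AS schema, count(*) AS relations
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    GROUP BY n.nspname
    ORDER BY count(*) DESC
    LIMIT 10;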