Adam Gundy <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Oh, it coerces the type all right, just not in the direction you'd like.
> Is there a reason it doesn't coerce to a type that's useful to the
> planner (i.e. varchar in my case), ...

In this case I think the choice is probably semantically correct ...
On Mon, 14 Apr 2008, Tom Lane wrote:
> Ideally, very hot pages would stay in shared buffers and drop out of the
> kernel cache, allowing you to use a database approximating all-of-RAM
> before you hit the performance wall.

With "pgbench -S", the main hot pages that get elevated usage counts and ...
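A rough way to see which relations those hot, high-usage-count buffers belong to is the pg_buffercache contrib module; the sketch below assumes it is installed in the database being tested (the usagecount column only exists in 8.3 and later).

-- Relations holding the most shared buffers, with their average usage count.
SELECT c.relname,
       count(*) AS buffers,
       avg(b.usagecount) AS avg_usagecount
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;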
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> The transition domain where performance drops dramatically as the database
>> starts to not fit in shared buffers but does still fit in filesystem cache.
>
> It looks to me like the knee comes where the DB no lon
Gregory Stark <[EMAIL PROTECTED]> writes:
> The transition domain where performance drops dramatically as the database
> starts to not fit in shared buffers but does still fit in filesystem cache.

It looks to me like the knee comes where the DB no longer fits in
filesystem cache. What's interesting ...
Adam Gundy wrote:
> I doubt we could get stats stable enough for this. The number of groups
> will hopefully be much larger at some point.

The pg_stats view should be recording the n most-common values, so if
you have 1 million groups you track details of the 1000 most-common.
That gives you a ...
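For what it's worth, what the planner has recorded can be inspected directly, and the number of most-common values tracked is tunable per column. A sketch, using the column names from this thread:

-- What the planner currently knows about the distribution of group_id.
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'group_access' AND attname = 'group_id';

-- Track more most-common values for this column, then re-analyze so the
-- new statistics target takes effect.
ALTER TABLE group_access ALTER COLUMN group_id SET STATISTICS 1000;
ANALYZE group_access;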
Adam Gundy <[EMAIL PROTECTED]> writes:
> Hmm. Unfortunately it did turn out to be (part of) the issue. I've
> discovered that mixing char and varchar in a stored procedure does not
> coerce the types, and ends up doing seq scans all the time.

Oh, it coerces the type all right, just not in the direction you'd like.
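A minimal sketch of the kind of mismatch being discussed, with invented function names: the groupid column is assumed to be varchar (and indexed), but the parameter is declared char, so the comparison gets resolved on the char side and the varchar index goes unused.

-- Parameter declared char(32): as described in the thread, the comparison is
-- coerced toward char and the query inside the function ends up as a seq scan.
CREATE FUNCTION count_group_rows(char(32)) RETURNS bigint AS $$
    SELECT count(*) FROM groups WHERE groupid = $1;
$$ LANGUAGE sql STABLE;

-- Declaring the parameter with the column's own type avoids the coercion:
CREATE FUNCTION count_group_rows_v(varchar) RETURNS bigint AS $$
    SELECT count(*) FROM groups WHERE groupid = $1;
$$ LANGUAGE sql STABLE;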
On Mon, 14 Apr 2008, Gregory Stark wrote:
> I'm curious about the total database size for each of the scaling factors,
> as well as the total of the index sizes.

That's all in a table at
http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm

--
* Greg Smith [EMAIL PROTECTED]
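Figures like those can be pulled from a running pgbench database with the built-in size functions; a sketch (these functions have been available since the 8.1/8.2 era):

-- Total on-disk size of the current database.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Combined size of all indexes (system catalogs included, so slightly high).
SELECT pg_size_pretty(sum(pg_relation_size(indexrelid))::bigint) AS index_total
FROM pg_index;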
On Mon, 14 Apr 2008, Gaetano Mendola wrote:
> I'm using postgres 8.2.3 on Red Hat compiled with GCC 3.4.6.

8.2.3 has a performance bug that impacts how accurate pgbench results are;
you really should be using a later version.

> http://img84.imageshack.us/my.php?image=totalid7.png
> As you can see ...
Gregory Stark wrote:
> "Gaetano Mendola" <[EMAIL PROTECTED]> writes:
>> The following graph reports the results:
>> http://img84.imageshack.us/my.php?image=totalid7.png
> That's a *fascinating* graph.

It is, isn't it? Thanks Gaetano.

> It seems there are basically three domains.
> The small domain where the database fits in shared buffers ...
"Gaetano Mendola" <[EMAIL PROTECTED]> writes:
> The following graph reports the results:
>
> http://img84.imageshack.us/my.php?image=totalid7.png
That's a *fascinating* graph.
It seems there are basically three domains.
The small domain where the database fits in shared buffers -- though actua
> Will this help with performance ?

Depends if the bloat is in part of your working set. If debloating can
make the working set fit in RAM, or lower your I/O, you'll get a boost.

Now, is the bloat in the tables (which tables?) or in the indexes
(which indexes?), or in the toast tables perhaps, or in the system ...
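One way to get a first answer, using nothing beyond the built-in size functions, is to total the on-disk space by relation kind (r = table, i = index, t = TOAST data); a sketch:

-- How much space sits in tables vs. indexes vs. TOAST data.
SELECT relkind,
       pg_size_pretty(sum(pg_relation_size(oid))::bigint) AS total
FROM pg_class
WHERE relkind IN ('r', 'i', 't')
GROUP BY relkind;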
Hi

>> The live one is 113G,
>> the restored one is 78G.
> Good news for you is that you know that you can do something ;)

:)

>> Will this help with performance ?
> Now, is the bloat in the tables (which tables?) or in the indexes
> (which indexes?), or in the toast tables perhaps, or in the system ...
If you suspect your tables or indexes are bloated, restore your
dump to a test box.

Use fsync=off during restore; you don't care about integrity on the
test box. This will avoid slowing down your production database.

Then look at the size of the restored database.

If it ...
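Comparing the two is then a one-liner on each box; a sketch using the built-in size functions:

-- Size of every database in the cluster, largest first.
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;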
Hi

We currently have a 16-CPU, 32GB box running postgres 8.2.

When I do a pg_dump with the following parameters "/usr/bin/pg_dump -E
UTF8 -F c -b" I get a file of 14GB in size.

But the database is 110GB in size on the disk. Why the big difference
in size? Does this have anything to do with ...
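Part of the gap is expected: the dump contains only the row data, while the on-disk database also carries indexes, TOAST storage, free space and any dead rows. A sketch of how to see which tables (indexes and TOAST included) account for the 110GB:

-- Per-table footprint including indexes and TOAST, largest first.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;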
Hi all,

I started to do some performance tests (using pgbench) in order to
estimate the DRBD impact on our servers. My plan was to run some
benchmarks without DRBD first, so that I could compare them with the
same benchmarks run with DRBD.

I haven't run the DRBD benchmarks yet, and I'm already facing
something I ...
Adam Gundy wrote:
> On Thu, Apr 10, 2008 at 8:52 AM, Adam Gundy <[EMAIL PROTECTED]> wrote:
>> Richard Huxton wrote:
>>> How many distinct values do you have in groups.groupid and
>>> group_access.group_id?
>
> For the small database (since it shows the same problem):
> group_access: 280/268
> groups: 2006/139
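Counts like those are presumably gathered along these lines (a sketch with the column names from the question; the excerpt doesn't say exactly what the two numbers in each pair are):

-- Row count and number of distinct values in each of the two columns.
SELECT count(*) AS num_rows, count(DISTINCT group_id) AS num_distinct
FROM group_access;

SELECT count(*) AS num_rows, count(DISTINCT groupid) AS num_distinct
FROM groups;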