Hi All,
I have compiled PostgreSQL 8.4 from source, and to install pgbench I
went into the contrib folder and ran the commands below:
make
make install
When I type pgbench, the system cannot find it as a command. As a
result I cannot use pgbench-tools, because the system does not int…
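For what it's worth, "make install" from contrib puts pgbench into the
bin directory of the build prefix (/usr/local/pgsql by default for a
source build), which is usually not on the shell's PATH. A minimal
sketch, assuming that default prefix:

    cd contrib/pgbench
    make
    make install
    # pgbench is installed next to the other PostgreSQL binaries;
    # add that directory to PATH so the shell can find it:
    export PATH=/usr/local/pgsql/bin:$PATH
    pgbench --help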
Alexandre de Arruda Paes writes:
> 2010/3/19 Tom Lane :
>> The cost estimates look a bit unusual to me; are you using nondefault
>> cost parameters, and if so what are they?
> The only non-default cost parameter is random_page_cost, which is set
> to 2.5, and default_statistics_target…
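A quick way to answer Tom's question from the server itself is to list
everything that differs from the built-in defaults; a sketch:

    -- show every setting whose value does not come from the defaults
    SELECT name, setting, source
    FROM pg_settings
    WHERE source <> 'default';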
Hi Yeb,
I have not looked at the gist code, but would it be possible to
make virtual pages whose size is 1/power-of-2 of the blocksize?
Then the leaf nodes could be 1/8 or even 1/16 the size of the full
page.
Regards,
Ken
On Fri, Mar 19, 2010 at 09:49:30PM +0100, Yeb Havinga wrote:
> Ye…
Hi Tom,
2010/3/19 Tom Lane :
> Alexandre de Arruda Paes writes:
>> My question: if the cost is exactly the same, why PG choose the index
>> ict13t2 on ct13t and apply a filter instead use the primary key ?
>
> Why shouldn't it, if the estimated costs are the same? You didn't
> actually demonstrate they're the same though.
Yeb Havinga wrote:
Since the gistpagesize is derived from the database blocksize, it
might be wise to set the blocksize low for this case; I'm going to
play with this a bit more.
Ok, one last mail before it turns into spam: with a 1KB database
blocksize, the query now runs in 30 seconds (originally…
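For anyone reproducing this: the block size is fixed at compile time,
so trying a 1KB blocksize means rebuilding the server. A sketch for an
8.4 source tree (a cluster initialized with one block size cannot be
reused with another, so the data directory path below is a placeholder
for a fresh one):

    ./configure --with-blocksize=1    # block size in kB; default is 8
    make && make install
    initdb -D /path/to/new/data       # existing clusters won't start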
Yeb Havinga wrote:
Matthew Wakeling wrote:
A second quite distinct issue is the general performance of GiST
indexes, which is also mentioned in the old thread linked from Open
Items. For that, we have a test case at
http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php
for btree…
Alexandre de Arruda Paes writes:
> My question: if the cost is exactly the same, why PG choose the index
> ict13t2 on ct13t and apply a filter instead use the primary key ?
Why shouldn't it, if the estimated costs are the same? You didn't
actually demonstrate they're the same though.
The cost estimates look a bit unusual to me; are you using nondefault
cost parameters, and if so what are they?
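One way to actually demonstrate the costs, rather than eyeball them,
is to hide the index the planner picked inside a transaction and
compare the EXPLAIN output of the fallback plan; a sketch using the
index name from this thread (the bare SELECT stands in for the real
query):

    BEGIN;
    DROP INDEX ict13t2;            -- rolled back below, so nothing is lost
    EXPLAIN SELECT * FROM ct13t;   -- now shows the cost of the other plan
    ROLLBACK;

Note the DROP takes an exclusive lock on the table until the ROLLBACK,
so this is best done on a test system.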
K.I.S.S. here: the best way to do one of these in most DBs is
typically an outer join and a test for null:

select f1.* from friends f1
  left outer join friends f2
    on (f1.user_id = f2.ref_id and f1.ref_id = f2.user_id)
where f2.id is null;
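For comparison, the same anti-join can be written with NOT EXISTS,
which 8.4's planner also turns into an anti-join, so the two forms
should yield equivalent plans; a sketch against the same table:

    select f1.*
    from friends f1
    where not exists (
        select 1
        from friends f2
        where f1.user_id = f2.ref_id
          and f1.ref_id = f2.user_id
    );

Some find the NOT EXISTS form easier to read, since it states the
intent ("no partner row") directly.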
On Fri, Mar 19, 2010 at 7:26 AM, Corin wrote:
> Hi…
Corin,
* Corin (wakath...@gmail.com) wrote:
> I fill this table with around 2,800,000 random rows (values between
> 1 and 500,000 for user_id, ref_id).
Using random data really isn't a good test.
> The intention of the query is to find rows with no "partner" row. The
> offset and limit are j…
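A fill like the one described is easy to reproduce with
generate_series; a sketch using the table from this thread, which also
illustrates the point above: uniformly random pairs have none of the
correlations real friend data would have:

    INSERT INTO friends (id, user_id, ref_id)
    SELECT i,
           trunc(random() * 500000)::int + 1,   -- user_id in 1..500000
           trunc(random() * 500000)::int + 1    -- ref_id in 1..500000
    FROM generate_series(1, 2800000) AS i;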
On Thu, Mar 18, 2010 at 10:31 AM, Corin wrote:
> I'm running quite a large social community website (250k users, 16gb
> database). We are currently preparing a complete relaunch and thinking about
> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server
> is a dual dual-core Opteron 2216…
I also wonder why the reported runtime of 5.847 ms is so different
from the runtime reported by my scripts (both php and ruby are almost
the same). What's the best tool to time queries in postgresql? Can
this be done from pgadmin?
I've seen differences like that. Benchmarking isn't easy…
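Much of a gap like that is usually planning time, network round-trips,
and driver overhead rather than execution. Two numbers worth comparing
side by side, as a sketch (any query can stand in for the count):

    \timing on
    -- psql's \timing line shows the time the client observes
    SELECT count(*) FROM friends;
    -- EXPLAIN ANALYZE's "Total runtime" shows server-side execution only
    EXPLAIN ANALYZE SELECT count(*) FROM friends;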
On Wednesday 17 March 2010 22:18:47 Hannu Krosing wrote:
> On Wed, 2010-03-17 at 16:49 -0400, Greg Smith wrote:
> > Alvaro Herrera wrote:
> > > Andres Freund escribió:
> > >> I find it much easier to believe such issues exist on tables in
> > >> contrast to indexes. The likelihood to get seq…
Yeb Havinga wrote:
Matthew Wakeling wrote:
A second quite distinct issue is the general performance of GiST
indexes, which is also mentioned in the old thread linked from Open
Items. For that, we have a test case at
http://archives.postgresql.org/pgsql-performance/2009-04/msg00276.php…
Corin wrote:
> It's already faster, which is great, but I wonder why the query
> plan is that complex.
Because, out of all the ways the planner knows to get the requested
result set, that's the plan which was estimated to cost the least.
If it isn't actually the fastest, that might suggest that…
On Fri, Mar 19, 2010 at 3:04 AM, Dimitri Fontaine
wrote:
> Corin writes:
>> I'm running quite a large social community website (250k users, 16gb
>> database). We are currently preparing a complete relaunch and thinking about
>> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database…
Hi,
PostgreSQL 8.4.2 / default_statistics_target = 300
I have a strange problem with a bad choice of indexes.
client=# \d ct13t
      Table "public.ct13t"
  Column   |  Type   | Modifiers
-----------+---------+-----------
 ct12emp04 | integer | not null
 ct03emp01 | integer |
…
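Since default_statistics_target is already raised to 300 here, one
narrower experiment is to raise the statistics target only on the
columns involved and re-analyze; a sketch using a column from the \d
output above:

    ALTER TABLE ct13t ALTER COLUMN ct12emp04 SET STATISTICS 1000;
    ANALYZE ct13t;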
Hi all!
While evaluating the pgsql query planner I found some weird behavior.
I think its plan is way too complex and could be much faster.
CREATE TABLE friends (
    id integer NOT NULL,
    user_id integer NOT NULL,
    ref_id integer NOT NULL
);
ALTER TABLE ONLY friends ADD…
Greg Smith writes:
> However, that doesn't actually solve any of the problems I was talking about
> though, which is why I'm not even talking about that part. We need the glue
> to pull out software releases, run whatever testing tool is appropriate, and
> then save the run artifacts in some stan…
Corin writes:
> I'm running quite a large social community website (250k users, 16gb
> database). We are currently preparing a complete relaunch and thinking about
> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server
> is a dual dual-core Opteron 2216 with 12gb ram running…