Hi,
Can anyone tell me why the same query runs slower against smaller data
than against a bigger table? Thanks very much.
I am using PostgreSQL9.1.8.
*t_apps_1 and t_estimate_1 are each about 300M, while *_list_1 is
about 10M more or less. According to the result, it needs to read a lot of
On 02/28/2013 16:11, Ao Jianwang wrote:
Hi,
Can anyone tell me why the same query runs slower against smaller
data than against a bigger table? Thanks very much.
I am using PostgreSQL9.1.8.
*t_apps_1 and t_estimate_1 are each about 300M, while *_list_1 is
about 10M more or less. Acco
I get it now... project_id=115 has a frequency of 0.09241 in pg_stats.
So if ((project_id = 115) AND (project_id = 115)) is considered as two
independent conditions, the row estimate ends up being 0.09241 *
0.09241 * 1.20163e+07 (reltuples from pg_class) = 102614.
http://www.postgresql.org/docs/9.
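The arithmetic above can be checked directly. This is a minimal sketch of the planner's independent-clause selectivity math, using the frequency and reltuples values quoted from pg_stats and pg_class in the message (not PostgreSQL source code):

```python
# Values quoted in the thread:
freq = 0.09241          # frequency of project_id = 115 from pg_stats
reltuples = 1.20163e7   # estimated table row count from pg_class

# Treating the duplicated clause (project_id = 115) AND (project_id = 115)
# as two independent conditions multiplies the selectivity twice:
estimate = freq * freq * reltuples
print(round(estimate))  # ~102614, matching the plan's row estimate
```

With the clause written only once, the estimate would be freq * reltuples, roughly 1.1 million rows, which shows how much the duplicated condition skews the plan.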
On Wed, Feb 20, 2013 at 3:44 PM, Josh Berkus wrote:
> On 02/19/2013 07:15 PM, Scott Marlowe wrote:
>> On Tue, Feb 19, 2013 at 4:24 PM, Josh Berkus wrote:
>>> ... then you'll see checkpoint "stalls" and spread checkpoint will
>>> actually make them worse by making the stalls longer.
>>
>> Wait, if
Hello,
The query plan for the query below shows a surprisingly large actual row
count, for an unknown reason. As a result, the Merge Join processes enough
data to slow the query down.
The table which I query has the following description:
I took some time to figure out a reasonable tuning for my fresh 9.2.3
installation when I noticed the following:
[costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U
postgres -i -s 1
...
10 tuples done.
...
vacuum...done.
[costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1
On 23/02/13 08:05, Nikolas Everett wrote:
I can't really help, but I can clarify why Postgres is choosing a
_bitmap_ index scan rather than a regular index scan. With a regular
index scan, it pumps the index for the locations of the rows it points
to and loads those rows as it f
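The difference described above can be illustrated with a toy model. This is a hypothetical sketch (not PostgreSQL internals) of why collecting row locations into a bitmap first can reduce heap page fetches when many rows match:

```python
# TIDs as (block, offset) pairs, in *index* order, which is effectively
# random with respect to physical heap layout.
tids_in_index_order = [(8, 1), (2, 3), (8, 2), (1, 7), (2, 1), (8, 5)]

# Plain index scan: fetch the heap page for each TID as it is found,
# potentially revisiting the same page many times, out of order.
pages_plain = [block for block, _ in tids_in_index_order]

# Bitmap scan: first collect all matching TIDs, then visit each heap
# page exactly once, in physical order (sequential-friendly I/O).
pages_bitmap = sorted({block for block, _ in tids_in_index_order})

print(len(pages_plain), "page fetches vs", len(pages_bitmap))
```

Here six TIDs on three distinct pages mean six fetches for the plain scan but only three ordered fetches for the bitmap scan; the trade-off is that the bitmap scan loses index ordering and must recheck visibility per page.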
I'm trying to create a view that uses a window function, but Postgres is
apparently unable to optimize it. Here's a reproduction of my situation
with 9.2.2:
---
drop table if exists values cascade;
create table values (
  fkey1 integer not null,
  fkey2 integer not null,
  fkey3 integer n
I have a problem with the performance of a PL/pgSQL stored procedure
that outputs XML.
/Server version:/ PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC
gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)
/CPU/: Intel(R) Core(TM) i3 CPU 540 @ 3.07GHz
/RAM installed:/ 4GB
/Hard Disk:/ Seagate 5
I have a planner problem that looks like a bug, but I'm not familiar enough
with how the planner works to say for sure.
This is my schema:
create table comments (
id serial primary key,
conversation_id integer,
created_at timestamp
);
create index comments_conversat
>Rebuilding the index might help, as it would put all the leaf pages holding
>values for session_id=27 adjacent to each other, so they would read from disk
>faster. But with a name like "session_id", I don't know how long such
>clustering would last though.
>If I'm right about the index disk
> From: Josh Berkus
>To: Scott Marlowe
>Cc: pgsql-performance@postgresql.org
>Sent: Thursday, 21 February 2013, 3:14
>Subject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu
>12.04
>
>
>> Sounds to me like your IO system is stalling on fsyncs or something
>> like that.
On Wed, Feb 27, 2013 at 3:15 AM, Costin Oproiu wrote:
> I took some time to figure out a reasonable tuning for my fresh 9.2.3
> installation when I've noticed the following:
>
> [costin@fsr costin]$ /home/pgsql/bin/pgbench -h 192.1.1.2 -p 5432 -U
> postgres -i -s 1
> ...
> 10 tuples done.
> ..
On Fri, Feb 22, 2013 at 3:21 AM, Davide Berra wrote:
> I have a problem with the performance of a PL/pgSQL stored procedure
> that outputs XML.
>
> Server version: PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC gcc
> (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)
> CPU: Intel(R) Core(TM) i3 CPU 5
Vahe Evoyan writes:
> Merge Join  (cost=0.00..2513.96 rows=1 width=72) (actual
> time=127.361..473.687 rows=66460 loops=1)
>   Merge Cond: ((v1.dft_id = v2.dft_id) AND ((v1.key)::text =
> (v2.key)::text))
>   ->  Index Scan using qor_value_self_join on qor_value v1
> (cost=0.00..1255.60 rows=27
Technically, yes. That would really help, but the issue is scheduling.
Although the logs are closed off for writes, they aren't closed off for
reads, ref PG documentation: "When a table is being clustered, an ACCESS
EXCLUSIVE lock is acquired on it. This prevents any other database
operatio
Thanks very much, Julien.
I found two strange behaviors:
1) Even if I restart the machine and restart PostgreSQL, I still see
shared_hit when I execute the query. Does PG automatically load the
previously cached data back into the cache on startup?
2) After I rerun the query, the time for
I am currently looking at a performance issue where a subquery is
executed even though I cannot see its result ever being asked for or
used. Not even Google has helped me find any information about this one.
I have created a simple test case to demonstrate this.
CREATE TABLE test1 (t1 INT);
IN