On Fri, Mar 4, 2011 at 10:34 AM, Glyn Astill wrote:
> I'm wondering (and this may be a can of worms) what people's opinions are on
> these schedulers? I'm going to have to do some real world testing myself
> with postgresql too, but initially was thinking of switching from our current
> CFQ bac...
Dan Harris wrote:
> Just another anecdote, I found that the deadline scheduler
> performed the best for me. I don't have the benchmarks anymore
> but deadline vs cfq was dramatically faster for my tests. I
> posted this to the list years ago and others announced similar
> experiences. Noop was...
On Fri, Mar 4, 2011 at 11:39 AM, Dan Harris wrote:
> Just another anecdote, I found that the deadline scheduler performed the
> best for me. I don't have the benchmarks anymore but deadline vs cfq was
> dramatically faster for my tests. I posted this to the list years ago and
> others announced similar experiences. Noop was...
On 3/4/11 11:03 AM, Wayne Conrad wrote:
On 03/04/11 10:34, Glyn Astill wrote:
> I'm wondering (and this may be a can of worms) what people's opinions
> are on these schedulers?
When testing our new DB box just last month, we saw a big improvement
in bonnie++ random I/O rates when using the noop scheduler instead of
cfq (or any other).
On Fri, Mar 4, 2011 at 12:00 PM, Mark Thornton wrote:
> On 04/03/2011 16:07, Robert Haas wrote:
>>
>> That seems quite surprising. There are only 14 rows in the table but
>> PG thinks 2140? Do you have autovacuum turned on? Has this table
>> been analyzed recently?
>>
> I think autovacuum is enabled...
On 03/04/11 10:34, Glyn Astill wrote:
> I'm wondering (and this may be a can of worms) what people's opinions
> are on these schedulers?
When testing our new DB box just last month, we saw a big improvement in
bonnie++ random I/O rates when using the noop scheduler instead of cfq
(or any other).
Hi Guys,
I'm in the process of setting up some new hardware and am just doing some basic
disk performance testing with bonnie++ to start with.
I'm seeing a massive difference on the random seeks test, with CFQ not
performing very well as far as I can see. The thing is I didn't see this sort
of...
On 04/03/2011 16:07, Robert Haas wrote:
> That seems quite surprising. There are only 14 rows in the table but
> PG thinks 2140? Do you have autovacuum turned on? Has this table
> been analyzed recently?
I think autovacuum is enabled, but as a temporary table LinkIds has only
existed for a very short...
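Since autovacuum never analyzes temporary tables (it cannot see them), a freshly built temp table has no statistics until you ANALYZE it yourself. A minimal sketch of that fix follows; the table and column names are assumptions, not the poster's actual schema:

  -- populate the temp table, then hand the planner real statistics
  CREATE TEMP TABLE linkids (link_id integer PRIMARY KEY);
  INSERT INTO linkids
      SELECT id FROM candidate_links WHERE wanted;   -- hypothetical source, ~14 rows
  ANALYZE linkids;

  -- the planner's row estimate should now match the real row count
  SELECT reltuples FROM pg_class WHERE relname = 'linkids';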
On 04/03/2011 16:07, Robert Haas wrote:
On Fri, Mar 4, 2011 at 6:40 AM, Mark Thornton wrote:
> I can achieve this manually by rewriting the query as a union between
> queries against each of the child tables. Is there a better way? (I'm using
> PostgreSQL 8.4 with PostGIS 1.4).
Can you post the EXPLAIN...
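For anyone following along, the usual way to capture that information is EXPLAIN ANALYZE, which runs the query and reports estimated versus actual row counts for every plan node. The names below are placeholders, not the original schema:

  -- EXPLAIN shows the plan; ANALYZE additionally executes the query
  EXPLAIN ANALYZE
  SELECT p.*
  FROM partitioned_parent p
  JOIN linkids l ON l.link_id = p.id;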
On Fri, Mar 4, 2011 at 6:40 AM, Mark Thornton wrote:
> The query plan appends sequential scans on the tables in the partition (9
> tables, ~4 million rows) and then hash joins that with a 14 row table. The
> join condition is the primary key of each table in the partition (and would
> be the primary key of the parent if that was supported).
This is not a performance bug -- my query takes a reasonably long
amount of time, but I would like to see if I can get this calculation
any faster in my setup.
I have a table:
volume_id serial primary key
switchport_id integer not null
in_octets bigint not null
out_octets bigint not null
insert_ti...
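For anyone who wants to reproduce the setup, a sketch of the table as described; the table name and the truncated last column (assumed here to be an insert timestamp) are guesses:

  CREATE TABLE volume (
      volume_id     serial  PRIMARY KEY,
      switchport_id integer NOT NULL,
      in_octets     bigint  NOT NULL,
      out_octets    bigint  NOT NULL,
      insert_time   timestamptz NOT NULL DEFAULT now()  -- assumed name and type
  );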
On Wed, Mar 2, 2011 at 11:31 PM, Adarsh Sharma wrote:
> Don't know why it uses Seq Scan on loc_context_terror as I have indexes on
> the desired columns as well.
I don't see how an index scan would help. The query appears to need
all the rows from that table.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
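To illustrate the point with a hypothetical table (nothing here is the poster's schema): an index only pays off when the query can skip most of the table, so a query that has to return every row will correctly choose a sequential scan even though an index exists.

  CREATE TABLE seqscan_demo (id serial PRIMARY KEY, label text);
  CREATE INDEX seqscan_demo_label_idx ON seqscan_demo (label);

  EXPLAIN SELECT * FROM seqscan_demo;                    -- Seq Scan: no predicate to filter on
  EXPLAIN SELECT * FROM seqscan_demo WHERE label = 'x';  -- a selective predicate is where the index helps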
On Fri, Mar 4, 2011 at 8:20 AM, Robert Haas wrote:
> On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke wrote:
>> Robert Haas wrote:
>>> Old row versions have to be kept around until they're no longer of
>>> interest to any still-running transaction.
>>
>> Thanks for the explanation.
>>
>> Regarding the snippet above, why would the intermediate history of
>> multiply-modified uncommitted rows be of interest to anything, or is...
On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke wrote:
> Robert Haas wrote:
>> Old row versions have to be kept around until they're no longer of
>> interest to any still-running transaction.
>
> Thanks for the explanation.
>
> Regarding the snippet above, why would the intermediate history of
> multiply-modified uncommitted rows be of interest to anything, or is...
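To make the MVCC point concrete, a small sketch (the table name is invented) showing that every UPDATE creates a new row version and leaves the old one behind until VACUUM can reclaim it, i.e. until no running transaction could still need it:

  CREATE TABLE mvcc_demo (id integer PRIMARY KEY, val integer);
  INSERT INTO mvcc_demo VALUES (1, 0);

  -- each UPDATE writes a new row version; the previous one becomes dead
  -- but cannot be removed while an older snapshot might still see it
  UPDATE mvcc_demo SET val = val + 1 WHERE id = 1;
  UPDATE mvcc_demo SET val = val + 1 WHERE id = 1;

  -- ctid and xmin identify the live row version and the transaction that wrote it
  SELECT ctid, xmin, xmax, * FROM mvcc_demo;

  VACUUM mvcc_demo;  -- reclaims dead versions once nothing can see them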
On Fri, Mar 4, 2011 at 5:26 AM, Vidhya Bondre wrote:
> select ctid,xmin,xmax,* from pg_index gives 2074 records.
Can you put them in a text file and post them here as an attachment?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
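For completeness, one way to produce such a dump from psql; the output filename is just an example. \copy runs on the client side, so it needs no file access on the server:

  \copy (SELECT ctid, xmin, xmax, * FROM pg_index) TO 'pg_index_rows.txt'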
The query plan appends sequential scans on the tables in the partition
(9 tables, ~4 million rows) and then hash joins that with a 14 row
table. The join condition is the primary key of each table in the
partition (and would be the primary key of the parent if that was
supported).
It would be m...
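Here is a sketch of the manual rewrite being discussed: join the 14-row table against each child table individually, so each child's primary-key index can be probed, and append the results. Every table and column name below is an invented placeholder for the real partition:

  -- probe each child by primary key instead of scanning the whole partition set
  SELECT c.*
  FROM child_table_1 c JOIN linkids l ON l.link_id = c.id
  UNION ALL
  SELECT c.*
  FROM child_table_2 c JOIN linkids l ON l.link_id = c.id
  UNION ALL
  SELECT c.*
  FROM child_table_3 c JOIN linkids l ON l.link_id = c.id;
  -- ...and so on for each of the nine child tables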
Robert,
select ctid,xmin,xmax,* from pg_index gives 2074 records.
Regards
Vidhya
On Wed, Mar 2, 2011 at 9:14 PM, Robert Haas wrote:
> On Mon, Feb 28, 2011 at 12:08 AM, Bhakti Ghatkar wrote:
> > Tom,
> > The query which you gave returns me 0 rows.
> > select ctid,xmin,xmax,* from pg_index
Robert Haas wrote:
> Old row versions have to be kept around until they're no longer of
> interest to any still-running transaction.
Thanks for the explanation.
Regarding the snippet above, why would the intermediate history of
multiply-modified uncommitted rows be of interest to anything, or is...