[EMAIL PROTECTED] writes:
> ERROR: xlog flush request 7/7D02338C is not satisfied --- flushed only to
> 3/2471E324
> writing block 297776 of relation 1663/2110743/2110807
You need to fix or zero out that data block ...
regards, tom lane
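One way to act on that advice, assuming the server is on 7.4 or later and you can accept losing whatever rows were on that block, is the zero_damaged_pages setting. It only kicks in when a page header fails PostgreSQL's validity check, so whether it catches this particular garbage LSN is not guaranteed; take a file-level backup of the relation before trying it. A rough sketch:

-- Superuser only; pages with invalid headers are replaced by zeroes on read,
-- which destroys the rows stored on them. Back up the files first.
SET zero_damaged_pages = on;
SELECT count(*) FROM table1;   -- any full scan that reads the damaged block
SET zero_damaged_pages = off;

If the page header still looks valid, the block has to be zeroed at the filesystem level instead, with the server stopped.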
Hi tom,
Basically, when I run any query against the database, say,
select count(*) from table1;
It gives me the following error trace:
WARNING: could not write block 297776 of 1663/2110743/2110807
DETAIL: Multiple failures --- write error may be permanent.
ERROR: xlog flush request 7/7D02338C is not satisfied --- flushed only to 3/2471E324
Tom Lane wrote:
What "same result"? You only ran it up to 2K rows, not 2M. In any
case, EXPLAIN without ANALYZE is pretty poor ammunition for complaining
that the planner made the wrong choice. I ran the same
Hello, sorry to jump in mid-stream, but this reminded me of something.
I have h
Pailloncy Jean-Gerard <[EMAIL PROTECTED]> writes:
> I redo the test, with a freshly installed data directory. Same result.
What "same result"? You only ran it up to 2K rows, not 2M. In any
case, EXPLAIN without ANALYZE is pretty poor ammunition for complaining
that the planner made the wrong choice.
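To make that concrete: plain EXPLAIN only prints the planner's estimates, while EXPLAIN ANALYZE actually runs the query and prints actual row counts and timings beside those estimates, which is what exposes a bad estimate. A minimal sketch (the table name is just a placeholder):

EXPLAIN SELECT count(*) FROM some_table;           -- estimates only
EXPLAIN ANALYZE SELECT count(*) FROM some_table;   -- runs the query; shows actual rows and time per plan node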
I redo the test, with a freshly installed data directory. Same result.
Note: this is the full log. I only removed the typos I made, like "sl" for "ls".
Jean-Gérard Pailloncy
Last login: Thu Nov 24 12:52:32 2005 from 192.168.0.1
OpenBSD 3.8 (WDT) #2: Tue Nov 8 00:52:38 CET 2005
Welcome to
Tom Lane wrote:
Greg Stark <[EMAIL PROTECTED]> writes:
Last I heard the reason count(*) was so expensive was because its state
variable was a bigint. That means it doesn't fit in a Datum and has to be
alloced and stored as a pointer. And because of the Aggregate API that means
it has to be allo
A quick note to say that I'm very grateful for Tom Lane's input also.
Tom, I did put you on the list of recipients for my last posting to
pgsql-performance, but got:
cut here
This is an automatically generated Delivery Status Notification.
Delivery to the
OK.
The consensus seems to be that I need more indexes and I also need to
look into the NOT IN statement as a possible bottleneck. I've
introduced the indexes which has led to a DRAMATIC change in response
time. Now I have to experiment with INNER JOIN -> OUTER JOIN
variations, SET ENABLE_SEQSCAN
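For the NOT IN experiment, the usual rewrite is an outer join with an IS NULL test (or NOT EXISTS). A sketch based on the query quoted later in the thread, assuming node_id is never NULL and that the inner and outer filters are meant to be identical:

SELECT n.name
  FROM node n
  LEFT JOIN job_log j ON j.node_id = n.node_id
 WHERE n.name LIKE '56x%'
   AND n.type = 'H'
   AND n.usage = 'TEST'
   AND j.node_id IS NULL;

-- And for the seqscan experiment, per session and for testing only:
SET enable_seqscan = off;
-- rerun EXPLAIN ANALYZE on the query here, then:
SET enable_seqscan = on;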
The same 12.9GB distributed across 4 machines using Bizgres MPP fits into
I/O cache. The interesting result is that the query "select count(1)" is
limited in speed to 280 MB/s per CPU when run on the lineitem table. So
when I run it spread over 4 machines, one CPU per machine I get this:
Tom Lane <[EMAIL PROTECTED]> writes:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > Last I heard the reason count(*) was so expensive was because its state
> > variable was a bigint. That means it doesn't fit in a Datum and has to be
> > alloced and stored as a pointer. And because of the Aggregate
Greg Stark <[EMAIL PROTECTED]> writes:
> Last I heard the reason count(*) was so expensive was because its state
> variable was a bigint. That means it doesn't fit in a Datum and has to be
> alloced and stored as a pointer. And because of the Aggregate API that means
> it has to be allocated and fr
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Yeah - it's pretty clear that the count aggregate is fairly expensive wrt cpu -
> However, I am not sure if all agg nodes suffer this way (guess we could try a
> trivial aggregate that does nothing for all tuples bar the last and just
> reports the fi
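For anyone who wants to try that, a minimal sketch of such a do-nothing aggregate (the names are made up; the transition function just returns its state unchanged, so whatever cost remains is per-tuple executor and Agg-node overhead rather than state maintenance):

CREATE FUNCTION noop_trans(integer, integer) RETURNS integer
    AS 'SELECT $1' LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE noop (
    BASETYPE = integer,
    SFUNC    = noop_trans,
    STYPE    = integer,
    INITCOND = '0'
);

-- Compare with count(*) on the same table, e.g. with \timing on:
SELECT noop(1) FROM lineitem;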
[EMAIL PROTECTED] writes:
> Can anyone suggest how do i fix this
> xlog flush request 7/7D02338C is not satisfied --- flushed only to
> 3/2471E324
This looks like corrupt data to me --- specifically, garbage in the LSN
field of a page header. Is that all you get? PG 7.4 and up should tell
you
"Bealach-na Bo" <[EMAIL PROTECTED]> writes:
> I'm having great difficulties getting the performance I had hoped for
> from Postgresql 8.0. The typical query below takes ~20 minutes !!
You need to show us the table definition (including indexes) and the
EXPLAIN ANALYZE results for the query.
It se
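(In psql terms, that usually means something like the following; substitute the real query in the EXPLAIN ANALYZE line:)

\d node          -- column definitions plus any indexes on the table
\d job_log
\timing on
EXPLAIN ANALYZE
SELECT n.name FROM node n WHERE n.name LIKE '56x%';   -- placeholder; run the full query here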
Hi,
I'm also sending the EXPLAIN outputs.
Please provide EXPLAIN ANALYZE output instead of EXPLAIN; it gives much more
information.
Indexes on your tables are obviously missing. You should try to add:
CREATE INDEX idx_node_filter ON node(name, type, usage);
CREATE INDEX idx_job_log_filter
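One caveat to add, as an assumption on my part since the locale was not mentioned: if the database was initialized with a locale other than C, a plain b-tree index on name will not be used for the LIKE '56x%' prefix filter; it needs a pattern opclass. The index name below is made up:

-- Only needed with a non-C locale; use varchar_pattern_ops if name is varchar
CREATE INDEX idx_node_name_pattern ON node (name text_pattern_ops);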
Hi,
Thanks for your comments. I haven't explicitly created any indexes; the
default ones are:
[EMAIL PROTECTED]> \di
         List of relations
 Schema |      Name       | Type  | Owner | Table
--------+-----------------+-------+-------+-------
 user   | job_log_id_pkey | index | us
Hi Anjan,
I can back Scott up on this: you should turn off HT if you see high values for CS.
I have a few customers running a web-based 3-tier application with
PostgreSQL, and we had to turn off HT to get better overall performance.
The issue is the behavior under high load. I notice that HT on does
co
> Typical query
>
> SELECT n.name
> FROM node n
> WHERE n.name LIKE '56x%'
>   AND n.type = 'H'
>   AND n.usage = 'TEST'
>   AND n.node_id NOT IN
>       (SELECT n.node_id
>        FROM job_log j
>        INNER JOIN node n ON j.node_id = n.node_id
>        WHERE n.name LIKE '56x%'
>          AND n.type = 'H'
>          AND n.usage=
Hi Folks,
I'm new to Postgresql.
I'm having great difficulties getting the performance I had hoped for
from Postgresql 8.0. The typical query below takes ~20 minutes !!
I hope an expert out there will tell me what I'm doing wrong - I hope
*I* am doing something wrong.
Hardware
Single
Hi,
I get the following error when doing anything with the database after starting it.
Can anyone suggest how I can fix this?
xlog flush request 7/7D02338C is not satisfied --- flushed only to 3/2471E324
Vipul Gupta
Pailloncy Jean-Gerard <[EMAIL PROTECTED]> writes:
> Why the stupid indexscan plan on the whole table?
Pray tell, what are you using for the planner cost parameters?
The only way I can come close to duplicating your numbers is
by setting random_page_cost to somewhere around 0.01 ...
I did not
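For reference, those parameters can be inspected and changed per session while experimenting; the shipped default for random_page_cost in this era is 4:

SHOW random_page_cost;
SET random_page_cost = 4;      -- undo an accidentally tiny value for the current session
SHOW effective_cache_size;     -- also worth checking; it biases index vs. sequential scans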
THIS MAY SEEM SILLY, but vacuum is misspelled below, and presumably
there was never any ANALYZE done.
postgres=# vaccum full verbose analyze;
I did run "vacUUm full verbose analyze;" correctly.
I just copied and pasted the wrong line.
Cordialement,
Jean-Gérard Pailloncy
Luke Lonergan wrote:
Mark,
Time: 197870.105 ms
So 198 seconds is the uncached read time with count (Just for clarity,
did you clear the Pg and filesystem caches or unmount / remount the
filesystem?)
Nope - the longer time is due to the "second write" known issue with
Postgres - it writes
Luke Lonergan wrote:
That says it's something else in the path. As you probably know there is a
page lock taken, a copy of the tuple from the page, lock removed, count
incremented for every iteration of the agg node on a count(*). Is the same
true of a count(1)?
Sorry Luke - message 3 - I s
Mark,
>> Time: 197870.105 ms
>
> So 198 seconds is the uncached read time with count (Just for clarity,
> did you clear the Pg and filesystem caches or unmount / remount the
> filesystem?)
Nope - the longer time is due to the "second write" known issue with
Postgres - it writes the data to the t
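If I am reading that right (the first scan after a bulk load rewrites pages to set tuple status bits), one way to keep that cost out of the comparison is to pay it once before timing; a sketch:

VACUUM lineitem;                 -- or just run the count once and discard that timing
SELECT count(*) FROM lineitem;   -- warm-up pass
SELECT count(*) FROM lineitem;   -- time this one; it should reflect read and aggregate cost only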
Luke Lonergan wrote:
Mark,
It would be nice to put some tracers into the executor and see where the
time is going. I'm also curious about the impact of the new 8.1 virtual
tuples in reducing the executor overhead. In this case my bet's on the agg
node itself, what do you think?
Yeah - it's
Luke Lonergan wrote:
12.9GB of DBT-3 data from the lineitem table
llonergan=# select relpages from pg_class where relname='lineitem';
relpages
Mark,
See the results below and analysis - the pure HeapScan gets 94.1% of the max
available read bandwidth (cool!). Nothing wrong with heapscan in the
presence of large readahead, which is good news.
That says it's something else in the path. As you probably know there is a
page lock taken, a
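The count(1) question is easy to check side by side; with timing on, the only difference is that count(1) also evaluates its constant argument for every row, while count(*) evaluates no argument at all:

\timing on
SELECT count(*) FROM lineitem;
SELECT count(1) FROM lineitem;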