> "JRP" == John R Pierce writes:
>> effective_cache_size = 128MB
JRP> thats rather small unless your systme is very memory
JRP> constrained. assuming postgres is the primary disk IO consumer
JRP> on this ysstem, take a look at the cached' value on TOP or
JRP> whatever af
Milan Zamazal wrote:
PS> and the value of effective_cache_size?
effective_cache_size = 128MB
that's rather small unless your system is very memory constrained.
assuming postgres is the primary disk IO consumer on this system, take a
look at the 'cached' value in top or whatever after it
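A sketch of what that could look like, assuming top reports roughly 6GB
as 'cached' on a machine dedicated to postgres (the figure is
illustrative, not from this thread):

    # postgresql.conf: tells the planner how much of the OS page cache
    # it can realistically expect; it only changes cost estimates and
    # allocates no memory itself
    effective_cache_size = 6GB

Or, to test the effect on plans per session before editing the config file:

    SET effective_cache_size = '6GB';
    EXPLAIN DECLARE c CURSOR FOR SELECT * FROM foo2 ORDER BY value;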
> "PS" == Pavel Stehule writes:
PS> and the value of effective_cache_size?
effective_cache_size = 128MB
PS> what is the CREATE INDEX statement for the index?
create index foo2_value_idx on foo2(value);
2010/1/5 Milan Zamazal :
>> "PS" == Pavel Stehule writes:
>
> PS> Do you have the original values of random_page_cost and
> PS> seq_page_cost in postgresql.conf?
>
> Yes. To be sure I uncommented the values in postgresql.conf
>
> seq_page_cost = 1.0 # measured on an arbitrary scale
> "PS" == Pavel Stehule writes:
PS> Do you have the original values of random_page_cost and
PS> seq_page_cost in postgresql.conf?
Yes. To be sure I uncommented the values in postgresql.conf
seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 4.0
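For reference, a sketch of exercising those two settings per session
(2.0 is an illustrative value for storage where random reads are
relatively cheap, not a recommendation from this thread):

    -- defaults: seq_page_cost = 1.0, random_page_cost = 4.0; lowering
    -- random_page_cost makes an index scan look cheaper to the planner
    -- than a seq scan plus sort
    SET random_page_cost = 2.0;
    EXPLAIN DECLARE c CURSOR FOR SELECT * FROM foo2 ORDER BY value;
    RESET random_page_cost;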
2010/1/5 Milan Zamazal :
>> "PS" == Pavel Stehule writes:
>
> PS> please EXPLAIN ANALYZE Pavel
>
> I see, I'm sorry. Here are the results:
Do you have the original values of random_page_cost and seq_page_cost in
postgresql.conf?
It is strange.
Pavel
>
> set enable_seqscan = on;
> explain analyze declare c cursor for select * from foo2 order by value;
> "GJ" == Grzegorz Jaśkiewicz writes:
GJ> Do you seriously need to walk the user through a couple of million
GJ> rows of data?
Typically not, but it is a general data browser, so the data can be of
any size. Some tables may be large, and I'd like to understand what
happens.
> "AL" == Albe Laurenz writes:
AL> Did you try to reduce the cursor_tuple_fraction parameter?
No, good idea, thanks. It helps.
The question is whether it's a good idea to reduce cursor_tuple_fraction
universally, without knowing the table size beforehand (and I'm not going
to use SELECT COUNT(*) first).
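A sketch of the middle ground, scoping the change to one transaction
instead of reducing it universally (0.01 is an illustrative value; the
default is 0.1):

    BEGIN;
    -- SET LOCAL lasts only until COMMIT/ROLLBACK, so queries on short
    -- tables elsewhere keep the default planner behaviour
    SET LOCAL cursor_tuple_fraction = 0.01;
    DECLARE c CURSOR FOR SELECT * FROM foo2 ORDER BY value;
    FETCH 100 FROM c;  -- fast-start plan favours the index scan here
    COMMIT;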
> "PS" == Pavel Stehule writes:
PS> please EXPLAIN ANALYZE Pavel
I see, I'm sorry. Here are the results:
set enable_seqscan = on;
explain analyze declare c cursor for select * from foo2 order by value;
QUERY PLAN
please EXPLAIN ANALYZE
Pavel
2010/1/5 Milan Zamazal :
>> "PS" == Pavel Stehule writes:
>
> PS> please, send explain result
>
> For a table of ~10 million rows:
>
> explain declare c cursor for select * from foo2 order by value;
> QUERY PLAN
> -
2010/1/5 Milan Zamazal :
> Cursors are very convenient for me, because they allow easy browsing
> data in the user interface (fetching limited sets of rows while seeking
> forward and backward) and they prevent contingent seeking and other
> troubles when concurrent updates happen.
>
Sounds to me
Milan Zamazal wrote:
> My problem is that retrieving sorted data from large tables is
> sometimes very slow in PostgreSQL (8.4.1, FWIW).
>
> I typically retrieve the data using cursors, to display them in UI:
>
> BEGIN;
> DECLARE ... SELECT ... ORDER BY ...;
> FETCH ...;
> ...
>
> On
> "PS" == Pavel Stehule writes:
PS> please, send explain result
For a table of ~10 million rows:
explain declare c cursor for select * from foo2 order by value;
QUERY PLAN
-
> "FR" == Filip Rembiałkowski writes:
FR> 2010/1/5 Milan Zamazal
>> - Is it a good idea to set enable_seqscan or enable_sort to "off"
>> globally in my case? Or to set them to "off" just before working
>> with large tables? My databases contain short and long tables,
>>
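An alternative to flipping them globally, sketched under the assumption
that the application knows when it is about to browse a large table:
scope the switches to that transaction only:

    BEGIN;
    SET LOCAL enable_seqscan = off;  -- reverts automatically at COMMIT
    SET LOCAL enable_sort = off;     -- both are planner discouragements,
                                     -- not hard prohibitions
    DECLARE c CURSOR FOR SELECT * FROM foo2 ORDER BY value;
    FETCH 100 FROM c;
    COMMIT;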
Hello
please, send explain result
postgres=# explain analyze declare x cursor for select * from foo;
QUERY PLAN
-
Seq Scan on foo (cost=0.00..34.00 rows=240
2010/1/5 Milan Zamazal
> My problem is that retrieving sorted data from large tables is sometimes
> very slow in PostgreSQL (8.4.1, FWIW).
>
>
> I typically retrieve the data using cursors, to display them in UI:
>
> BEGIN;
> DECLARE ... SELECT ... ORDER BY ...;
> FETCH ...;
> ...
>
> On a n
My problem is that retrieving sorted data from large tables is sometimes
very slow in PostgreSQL (8.4.1, FWIW).
I typically retrieve the data using cursors, to display them in UI:
BEGIN;
DECLARE ... SELECT ... ORDER BY ...;
FETCH ...;
...
On a newly created table of about 10 million rows
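Filled in concretely, that pattern might look like this against the
thread's foo2 table; the SCROLL keyword is my addition (not in the
original post) so that backward fetches are guaranteed to work:

    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT * FROM foo2 ORDER BY value;
    FETCH FORWARD 50 FROM c;  -- first page
    FETCH FORWARD 50 FROM c;  -- next page
    MOVE BACKWARD 100 IN c;   -- seek back before row 1
    FETCH FORWARD 50 FROM c;  -- first page again
    COMMIT;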