On Wed, Dec 16, 2009 at 05:18:12PM -0800, yuliada wrote:
> Sam Mason wrote:
> > How about combining all 1000 selects into one?
>
> I can't combine these selects into one; I need to run them one after
> another.
Hmm, difficult. What other information is in the row that you need
back? Can you tur
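For what it's worth, one sketch of what "combining into one" could look
like, assuming the table and column names from the original post and
placeholder literals standing in for the 1000 parameters:

  -- One round trip instead of 1000 separate selects; each value can
  -- still use idx_stringv, because the join condition matches the
  -- indexed expression lower(value).
  SELECT v.*
  FROM bn_stringvalue v
  JOIN (VALUES ('foo'), ('bar'), ('baz')) AS params(p)
    ON lower(v.value) = lower(params.p);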
If I search for something which is not in the db, like 'dfsgsdfgsdfgdsfg',
it always works fast. I suspect that the speed depends on the number of
rows returned, but I don't know exactly...
Sam Mason wrote:
>
> Wouldn't this be "lower(value) = lower(?)" ?
>
Yes, I use it as "lower(value) = lower(?)"; I typed an inaccurate example in my earlier message.
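For the archives, the parameterized form that matches the expression
index would be something like this sketch (PREPARE/EXECUTE standing in
for the driver-level '?' placeholder):

  -- lower(value) = lower($1) matches the indexed expression,
  -- so the planner can use idx_stringv for the lookup.
  PREPARE lookup(text) AS
    SELECT * FROM bn_stringvalue WHERE lower(value) = lower($1);
  EXECUTE lookup('SomeString');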
Sam Mason wrote:
>
> So each query is taking approx 300ms? How much data does each one
> return?
>
No more than 1000 rows.
On Wed, Dec 16, 2009 at 04:56:16AM -0800, yuliada wrote:
> I have a table with a column of character varying(100). There are about
> 150,000,000 rows in the table. The index was created as
>
> CREATE INDEX idx_stringv
> ON bn_stringvalue
> USING btree
> (lower(value::text));
>
> I'm trying to execute
Show us the EXPLAIN output for one of the selects.
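Something along these lines, reusing the query shape from upthread with
a placeholder literal:

  EXPLAIN ANALYZE
  SELECT * FROM bn_stringvalue
  WHERE lower(value) = lower('somestring');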
--
GJ
On Fri, Apr 03, 2009 at 01:20:33AM -0700, rafalak wrote:
> QUERY PLAN without changes
> Aggregate (cost=98018.96..98018.97 rows=1 width=4) (actual time=64049.326..64049.328 rows=1 loops=1)
>   -> Bitmap Heap Scan on tbl_photos_keywords (cost=533.23..97940.02 rows=31577 width=4) (actual time=157.787..63905.939 rows=119154 loops=1)
> shared_buffers = 810MB
> temp_buffers = 128MB
> work_mem = 512MB
> maintenance_work_mem = 256MB
> max_stack_depth = 7MB
> effective_cache_size = 800MB
On Thu, Apr 2, 2009 at 2:48 PM, rafalak wrote:
> Hello, I have a big table:
> 80 million records, ~6GB of data, 2 columns (int, int).
>
> If the query
>   select count(col1) from tab where col2=1234;
> returns few records (1-10), the time is good: 30-40ms.
> But when it returns >1000 records, the time is >12s.
>
> How to increase performance?
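Given the plan posted elsewhere in the thread (a bitmap heap scan that
visits ~119,000 scattered heap rows), one classic thing to try is
clustering the table on the col2 index so that matching rows share heap
pages. A sketch, with a hypothetical index name:

  -- CLUSTER rewrites the table in index order; it takes an exclusive
  -- lock while it runs and must be repeated as the data churns.
  CLUSTER tab USING tab_col2_idx;
  ANALYZE tab;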
Mat <[EMAIL PROTECTED]> writes:
> On Fri, 2003-10-03 at 17:50, Tom Lane wrote:
>> Well, it seems to be running at about 5 msec/row, which would be quite
>> respectable if each fetch required another disk seek. I'm wondering why
>> you are (apparently) not managing to get more than one row per page
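One way to check that from the catalogs is the planner's correlation
statistic; a sketch, with 'tab' standing in for the actual table name:

  -- correlation near +/-1 means heap order tracks the column order;
  -- near 0 means each fetched row tends to sit on a different page.
  SELECT attname, correlation
  FROM pg_stats
  WHERE tablename = 'tab';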
Mat wrote:
Lines from postgresql.conf that don't start with a '#':
tcpip_socket = true
shared_buffers = 126976 # 992 MB
sort_mem = 36864 # 36 MB
vacuum_mem = 73696 # 72 MB
I would suggest scaling shared_buffers down to 128 or 64 MB and setting
effective_cache_size correctly.
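In postgresql.conf terms the suggestion would look something like this
(illustrative numbers, using the same 8 kB-page units as above):

  shared_buffers = 16384 # 128 MB
  effective_cache_size = 98304 # ~768 MB, roughly the RAM the OS
                               # has free for file caching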
What query plans are you getting for these various combinations?
regards, tom lane
> The FAQ states in entry 4.23 that SELECT ... IN statements are slow and
> recommends using EXISTS instead. It also states that this will be
> resolved in some future version.
> I didn't find any entries about that in the TODO list,
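For reference, the rewrite that FAQ entry described, with illustrative
table names (the IN-subquery planner rework in 7.4 largely made this
unnecessary):

  -- Historically slow form:
  SELECT * FROM orders
  WHERE customer_id IN (SELECT id FROM customers WHERE active);

  -- FAQ-recommended rewrite:
  SELECT * FROM orders o
  WHERE EXISTS (SELECT 1 FROM customers c
                WHERE c.id = o.customer_id AND c.active);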