Trying it on another server gives a different result.
-> Index Scan using response_log_by_activity on public.response_log rl2
(cost=0.00..50.29 rows=17 width=8) (actual time=0.955..0.967 rows=0
loops=30895)
Output: rl2.activity_id, rl2.feed_id
It's Postgres 9.1.24 on RHEL 6.5
On Tue, Sep 5, 2017 at 8:24 PM, Soni M wrote:
> Consider these two index scans produced by a query
>
> -> Index Scan using response_log_by_activity on public.response_log rl2
> (cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0
> loops=34098)
>
Consider these two index scans produced by a query
-> Index Scan using response_log_by_activity on public.response_log rl2
(cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0
loops=34098)
Output: rl2.activity_id, rl2.feed_id
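To put the two plans side by side, the total time spent in a node is its per-loop actual time multiplied by its loop count. As a sketch (plain arithmetic on the numbers from the EXPLAIN output above):

```sql
-- Total node time = actual time per loop * loops (values from the plans above).
SELECT 9.056 * 34098 / 1000.0 AS original_server_seconds,  -- ~ 308.8 s
       0.967 * 30895 / 1000.0 AS other_server_seconds;     -- ~ 29.9 s
```

So even though the per-row estimates are similar, the first server spends roughly ten times as long in this index scan overall.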
On Mon, Dec 7, 2015 at 10:36 PM, Tory M Blue wrote:
> What am I not understanding / missing?
Yes. There is a hard limit on the number of tuples that can be sorted
in memory prior to PostgreSQL 9.4. It's also the case that very large
work_mem or maintenance_work_mem settings are unlikely to help unl
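The reason, roughly, is that before 9.4 the in-memory sort array itself could not exceed the 1 GB allocation limit, regardless of the memory settings. A back-of-the-envelope sketch (the ~24 bytes per sort-tuple figure is an assumption, not from the thread):

```sql
-- Pre-9.4, the in-memory sort array was capped at ~1 GB (MaxAllocSize).
-- Assuming roughly 24 bytes of bookkeeping per tuple, that caps the
-- in-memory sort at about 44.7 million tuples, however large
-- maintenance_work_mem is set.
SELECT (1073741824 / 24) AS approx_max_in_memory_tuples;
```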
Just trying to figure something out.
9.3.4, CentOS6.5
256GB Ram
maintenance_work_mem = 125GB
effective_cache_size = 65GB
I have 2 index builds running, started at the same time; they are not small,
and one will take 7 hours to complete.
I see almost zero disk access, very minor, which is not what I want to see whe
Thank you for your consideration, Jeff. Actually I'm running an experiment
proposed by other researchers to evaluate a recommendation model.
My database is composed only of old tweets. In this experiment the
recommendation model is evaluated on a daily basis, and that's the reason
the query collect
On Mon, Nov 4, 2013 at 12:44 PM, Caio Casimiro wrote:
> Thank you very much for your answers guys!
>
>
> On Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes wrote:
>
>> On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro
>> wrote:
>>
>>>
>>> SELECT tt.tweet_id, tt.topic, tt.topic_value
>>> FROM tw
On Sun, Nov 3, 2013 at 4:05 PM, Caio Casimiro wrote:
> System Information:
> OS: Slackware 14.0
> Postgresql Version: 9.3 Beta2
This probably doesn't have anything to do with your problem, but it's
long past time to migrate from the beta to the production 9.3.
merlin
On Mon, Nov 4, 2013 at 2:10 PM, Caio Casimiro wrote:
>
> You said that I would need B-Tree indexes on the fields for which I want the
> planner to use an index-only scan, and I think I have them already on the
> tweet table:
>
> "tweet_ios_index" btree (id, user_id, creation_time)
>
> Shouldn't the tweet_
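One related point: index-only scans in 9.3 also depend on the visibility map being reasonably current, or the executor must fall back to heap fetches. A hedged sketch (table name taken from the thread):

```sql
-- Index-only scans consult the visibility map; a recent VACUUM keeps it
-- current so the planner is more likely to choose them.
VACUUM ANALYZE tweet;
```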
From: Caio Casimiro [mailto:casimiro.lis...@gmail.com]
Sent: Monday, November 04, 2013 4:33 PM
To: Igor Neyman
Cc: Jeff Janes; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field
These are the parameters I have set in postgresql.conf
Hello, thank you for your answer. I will give it a try and then I post here
the results.
In the original email I posted the output of \d+ tweet, which contains the
indexes and constraints.
Best regards,
Caio
On Mon, Nov 4, 2013 at 8:59 PM, desmodemone wrote:
> Hello,
> I think you c
Hello,
I think you could try an index on the tweet table columns
"user_id, creation_time" [in this order, because the first column is for
the equality predicate and the second for the range-scan predicate; the
index tweet_user_id_creation_time_index is not ok because it has the
re
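Spelling that suggestion out as a sketch (the index name here is an assumption), with the equality column first and the range column second:

```sql
-- Equality column (user_id) first, range column (creation_time) second,
-- so the index can seek on user_id and then scan the time range.
CREATE INDEX tweet_user_id_creation_time_idx
    ON tweet (user_id, creation_time);
```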
Hi Elliot, thank you for your answer.
I tried this query but it still suffers from the slow index scan on
tweet_creation_time_index:
"Sort (cost=4899904.57..4899913.19 rows=3447 width=20) (actual
time=37560.938..37562.503 rows=1640 loops=1)"
" Sort Key: tt.tweet_id"
" Sort Method: quicksort  Memory: 97kB"
Janes; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field
>
> Hi Neyman, thank you for your answer.
> Unfortunately this query runs almost at the same time:
>
> Sort (cost=4877693.98..4877702.60 rows=3449 width=20) (actual
>
From: Caio Casimiro [mailto:casimiro.lis...@gmail.com]
Sent: Monday, November 04, 2013 4:10 PM
To: Igor Neyman
Cc: Jeff Janes; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field
Hi Neyman, thank you for your answer.
Unfortunately this
On 2013-11-04 16:10, Caio Casimiro wrote:
Hi Neyman, thank you for your answer.
Unfortunately this query runs almost at the same time:
Sort (cost=4877693.98..4877702.60 rows=3449 width=20) (actual
time=25820.291..25821.845 rows=1640 loops=1)
Sort Key: tt.tweet_id
Sort Method: quicksort
On Mon, Nov 4, 2013 at 6:52 PM, Igor Neyman wrote:
>
>
> From: pgsql-performance-ow...@postgresql.org [mailto:
> pgsql-performance-ow...@postgresql.org] On Behalf Of Caio Casimiro
> Sent: Monday, November 04, 2013 3:44 PM
> To: Jeff Janes
> Cc: pgsql-performance@postgresql.org
> Subj
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Caio Casimiro
Sent: Monday, November 04, 2013 3:44 PM
To: Jeff Janes
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Slow index scan on B-Tree index over timestamp field
Thank
I should also say that table tweet has more than 400 million rows and
table tweet_topic has an estimated more than 800 million rows.
Thanks again,
Caio
On Mon, Nov 4, 2013 at 6:44 PM, Caio Casimiro wrote:
> Thank you very much for your answers guys!
>
>
> On Mon, Nov 4, 2013 at 5:15 PM, Jeff Jan
Thank you very much for your answers guys!
On Mon, Nov 4, 2013 at 5:15 PM, Jeff Janes wrote:
> On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro
> wrote:
>
>> Hello all,
>>
>> I have one query running at ~ 7 seconds and I would like to know if it's
>> possible to make it run faster, once this que
On Sun, Nov 3, 2013 at 2:05 PM, Caio Casimiro wrote:
> Hello all,
>
> I have one query running at ~ 7 seconds and I would like to know if it's
> possible to make it run faster, since this query runs many times in my
> experiment.
>
Do you mean you want it to be fast because it runs many times,
On 2013-11-04 13:56, Kevin Grittner wrote:
Caio Casimiro wrote:
I have one query running at ~ 7 seconds and I would like to know
if it's possible to make it run faster, since this query runs many
times in my experiment.
Buffers: shared hit=2390 read=32778
Total runtime: 24066.145 ms
effec
Caio Casimiro wrote:
> I have one query running at ~ 7 seconds and I would like to know
> if it's possible to make it run faster, since this query runs many
> times in my experiment.
> Buffers: shared hit=2390 read=32778
> Total runtime: 24066.145 ms
> effective_cache_size = 2GB
> it seems
Hello all,
I have one query running at ~ 7 seconds and I would like to know if it's
possible to make it run faster, since this query runs many times in my
experiment.
Basically the query returns the topics of tweets published by users that
user N follows and that are published between D1 and
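As a sketch of the query shape described above (the SELECT list appears elsewhere in the thread; the join structure, the follow table, and the :n/:d1/:d2 placeholders are assumptions):

```sql
-- Hypothetical sketch: topics of tweets posted by users that user :n
-- follows, restricted to the window [:d1, :d2].
SELECT tt.tweet_id, tt.topic, tt.topic_value
FROM   tweet_topic tt
JOIN   tweet t       ON t.id = tt.tweet_id
JOIN   followship f  ON f.followed_id = t.user_id   -- table name assumed
WHERE  f.follower_id = :n
AND    t.creation_time BETWEEN :d1 AND :d2;
```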
Anj Adu wrote:
> The combination index works great. Would adding the combination
> index guarantee that the optimizer will choose that index for
> these kinds of queries involving the columns in the combination? I
> verified a couple of times and it picked the right index. Just
> wanted to make s
The combination index works great. Would adding the combination index
guarantee that the optimizer will choose that index for these kinds of
queries involving the columns in the combination? I verified a couple
of times and it picked the right index. Just wanted to make sure it
does that consistentl
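There is no hard guarantee that the planner will always pick the combination index (it re-plans from statistics), but EXPLAIN makes it cheap to re-verify. A sketch (the table name and literal values are assumptions; the columns come from the thread):

```sql
-- Re-check the chosen plan whenever the data distribution or the
-- statistics change; the planner may switch indexes.
EXPLAIN ANALYZE
SELECT *
FROM   my_partition               -- table name assumed
WHERE  guardid_id = 42
AND    from_num   = '5551234'
AND    targetprt  = 80;
```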
Appears to have helped with the combination index. I'll need to
eliminate caching effects before making sure it's the right choice.
Thanks for the suggestion.
On Tue, Jun 22, 2010 at 7:01 PM, Tom Lane wrote:
> Alvaro Herrera writes:
>> Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400
Alvaro Herrera writes:
> Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:
>> This query seems unreasonably slow on a well-indexed table (13 million
>> rows). Separate indexes are present on guardid_id , from_num and
>> targetprt columns.
> Maybe you need to vacuum or reindex?
R
I did post the explain analyze; can you please clarify?
On Tue, Jun 22, 2010 at 6:10 PM, Joshua D. Drake wrote:
> On Tue, 2010-06-22 at 18:00 -0700, Anj Adu wrote:
>> i have several partitions like this (similar size ...similar data
>> distribution)..these partitions are only "inserted"..never upd
On Tue, 2010-06-22 at 18:00 -0700, Anj Adu wrote:
> i have several partitions like this (similar size ...similar data
> distribution)..these partitions are only "inserted"..never updated.
> Why would I need to vacuum..
>
An explain analyze is what is in order for further diagnosis.
JD
> I can
I have several partitions like this (similar size, similar data
distribution); these partitions are only "inserted", never updated.
Why would I need to vacuum?
I can reindex; just curious what can cause the index to go out of whack.
On Tue, Jun 22, 2010 at 4:44 PM, Alvaro Herrera
wrote:
> Exc
Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010:
> This query seems unreasonably slow on a well-indexed table (13 million
> rows). Separate indexes are present on guardid_id , from_num and
> targetprt columns.
Maybe you need to vacuum or reindex?
--
Álvaro Herrera
This query seems unreasonably slow on a well-indexed table (13 million
rows). Separate indexes are present on guardid_id , from_num and
targetprt columns.
The table was analyzed with a default stats target of 600.
Postgres 8.1.9 on 2 cpu quad core 5430 with 32G RAM (work_mem=502400)
6 x 450G 15K
On Thu, 25 Sep 2008, Tom Lane wrote:
> Matthew Wakeling <[EMAIL PROTECTED]> writes:
>> Hi all. I'm having an interesting time performance-wise with a set of indexes.
>> Any clues as to what is going on or tips to fix it would be appreciated.
> Are the indexed columns all the same datatype? (And which t
Matthew Wakeling <[EMAIL PROTECTED]> writes:
> Hi all. I'm having an interesting time performance-wise with a set of
> indexes.
> Any clues as to what is going on or tips to fix it would be appreciated.
Are the indexed columns all the same datatype? (And which type is it?)
It might be helpful
Hi all. I'm having an interesting time performance-wise with a set of indexes.
Any clues as to what is going on or tips to fix it would be appreciated.
My application runs lots of queries along the lines of:
SELECT * from table where field IN (.., .., ..);
There is always an index on the fie
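Tom's datatype question can be answered directly from the system catalogs. A sketch (the index name is a placeholder):

```sql
-- List each indexed column with its datatype for a given index.
SELECT a.attname,
       format_type(a.atttypid, a.atttypmod) AS datatype
FROM   pg_index i
JOIN   pg_attribute a ON a.attrelid = i.indrelid
                     AND a.attnum   = ANY (i.indkey)
WHERE  i.indexrelid = 'my_index_name'::regclass;  -- placeholder name
```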