Killian Driscoll wrote:
> It worked - thank you very much for your time.
Great!
> Regarding the file format used: I had used the pg_dump with .sql, but you
> suggested .out. Is there a particular reason to use .out instead of .sql when
> backing up?
No, doesn't matter.
Andreas
On 12/29/2015 6:03 PM, Jim Nasby wrote:
If I'm reading EXPLAIN ANALYZE correctly, to_tsquery_partial is being
simplified out of the query entirely:
Filter: (search_vec @@
to_tsquery((array_to_string('{213,E,13,ST,N}'::text[], ' & '::text) ||
':*'::text)))
Part of this could well be that yo
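The EXPLAIN output above implies a definition along these lines. This is a hypothetical reconstruction, not the poster's actual function; only the inlined expression is visible in the plan:

```sql
-- Sketch of to_tsquery_partial: split the input on spaces, AND the
-- terms together, and append ':*' so the final term is matched as a
-- prefix. A SQL-language function like this can be inlined
-- ("simplified out") by the planner, producing the Filter shown above.
CREATE OR REPLACE FUNCTION to_tsquery_partial(text)
RETURNS tsquery LANGUAGE sql STABLE AS $$
    SELECT to_tsquery(
        array_to_string(string_to_array($1, ' '), ' & ') || ':*'
    );
$$;
```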
On 12/29/2015 6:35 PM, Tom Lane wrote:
Andy Colson writes:
I cannot get this sql to use the index:
explain analyze
select *
from search
where search_vec @@ to_tsquery_partial('213 E 13 ST N')
--
Seq Scan on search (c
Andy Colson writes:
> Here are my results, if there are any others you'd like to see please
> let me know. Thanks Tom.
For comparison, could we see the results for the non-partial case, ie
explain analyze
select *
from search
where search_vec @@ to_tsquery('213 & E & 13 & ST & N');
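For reference, the two query forms under discussion differ only in the trailing ':*' prefix marker. The GIN index sketched here borrows the index name that appears in the plans later in the thread; its actual definition is not shown in the archive:

```sql
-- Assumed index definition; name taken from the EXPLAIN plans.
CREATE INDEX search_key ON search USING gin (search_vec);

-- Non-partial: every term must match exactly.
SELECT * FROM search
WHERE search_vec @@ to_tsquery('213 & E & 13 & ST & N');

-- Partial: ':*' turns the last term into a prefix match.
SELECT * FROM search
WHERE search_vec @@ to_tsquery('213 & E & 13 & ST & N:*');
```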
Andy Colson writes:
> -> Bitmap Index Scan on search_key (cost=0.00..6.00 rows=1 width=0)
> (actual time=0.025..0.025 rows=0 loops=1)
> Index Cond: (search_vec @@ to_tsquery('213 & E & 13 & ST &
> N'::text))
Hmm ... so the partial case actually is significantly more expensive th
Andy Colson writes:
> No, that's not right, the table was empty. I rebuilt the table as it
> was before, here are all three queries again:
Ah, thanks for the more solid data.
> -> Bitmap Index Scan on search_key (cost=0.00..63623.00 rows=1 width=0)
> (actual time=4.996..4.996 rows=1 loo
I wrote:
> This says there's only about a 25% runtime penalty for the partial match,
> at least on your example, compared to the planner's estimate of 2700x
> penalty :-(. Definitely need to fix that.
I tried to reproduce this behavior with simple generated data, and could
not: the estimates seem
We have a performance problem accessing one of our tables, I think because
the statistics are out of date. The table is fairly large, on the order of
100M rows or so.
The general structure of the table is as follows:
[column list not preserved in the archive preview]
Cory Tucker writes:
> This table is almost always queried using a combination of (account_id,
> record_id) and is generally pretty fast. However, under certain loads, the
> query becomes slower and slower as time goes on. The workload that causes
> this to happen is when data for a new account_i
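A minimal sketch of the lookup pattern being described; the table and index names are assumptions, and only the two column names come from the thread:

```sql
-- Composite b-tree index matching the (account_id, record_id) lookup.
CREATE INDEX record_account_record_idx
    ON record (account_id, record_id);

SELECT *
FROM record
WHERE account_id = 123
  AND record_id  = 456;
```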
On 12/30/2015 1:07 PM, Tom Lane wrote:
I wrote:
This says there's only about a 25% runtime penalty for the partial match,
at least on your example, compared to the planner's estimate of 2700x
penalty :-(. Definitely need to fix that.
I tried to reproduce this behavior with simple generated da
Wow, that's bad.
Here's another link:
http://camavision.com/dn/stats.txt
-Andy
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
On 12/30/2015 11:09 AM, Cory Tucker wrote:
> We have a performance problem accessing one of our tables, I think
> because the statistics are out of date. The table is fairly large, on
> the order of 100M rows or so.
> The fix I have employed to restore the speed of the query after I notice
> it
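One common way to keep statistics fresh on a table of this size, sketched here as a possibility rather than the fix the poster actually used: lower the per-table analyze threshold so autovacuum re-analyzes well before the default 10% of rows have changed.

```sql
-- Table name is an assumption. 0.005 of ~100M rows means an automatic
-- ANALYZE roughly every 500k changed rows, instead of the default ~10M.
ALTER TABLE record SET (autovacuum_analyze_scale_factor = 0.005);

-- Or re-analyze manually whenever the plan degrades:
ANALYZE record;
```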
Andy Colson writes:
> On 12/30/2015 1:07 PM, Tom Lane wrote:
>> it seems like you've got some weird data statistics that are causing a
>> misestimate. Could we see the pg_stats row for that tsvector column?
>> Or maybe even the actual data?
> The table exists in a schema named jasperia, I've bee
Andy Colson writes:
> On 12/30/2015 1:55 PM, Tom Lane wrote:
>> Are you using any nondefault planner settings? Anything else
>> unusual about your installation?
> There are others, but I'll bet its:
> random_page_cost = 1
Nope...
Maybe something weird about the build you're using? What does
p
Andy Colson writes:
> On 12/30/2015 2:18 PM, Tom Lane wrote:
>> Maybe something weird about the build you're using? What does
>> pg_config print?
> [ output ]
No smoking gun there either.
It might be worthwhile to update to 9.3.10, just in case there is
something wonky about this particular bu
Andy Colson writes:
> Ok, I can reproduce this now. The full vacuum analyze isn't needed.
> If I drop and recreate the table it goes back to preferring table scan.
> I can "analyze search" and it still table scans.
> But once I "vacuum analyze search", then it starts index scanning.
Hah. You
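The repro sequence described above, spelled out (a sketch of the steps, not a verbatim transcript from the thread):

```sql
-- After dropping and recreating the table, statistics alone don't help:
ANALYZE search;          -- planner still prefers a sequential scan

-- But a vacuum plus analyze flips it:
VACUUM ANALYZE search;   -- planner now chooses the index scan
```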