>The presence or absence of the length limiter on a varchar will not impact
>the query plan. And I'm pretty sure you cannot even store a too-long
>varchar in an index. It will error on the attempt (as opposed to
>truncating).
The max size is about a third of a block (roughly 2700 bytes with the
default 8 kB pages). After that you get an error:
ERROR
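A quick way to see that behaviour (a sketch; the table and index names are made up, and the exact byte limit depends on page size and PostgreSQL version):

CREATE TABLE size_demo (t varchar);
CREATE INDEX size_demo_t_idx ON size_demo (t);

-- ~3200 bytes of incompressible data exceeds the B-tree limit of
-- roughly 2700 bytes per entry, so the INSERT errors instead of
-- truncating. (Highly compressible values may still fit, because
-- index entries are compressed before the limit is checked.)
INSERT INTO size_demo
SELECT string_agg(md5(random()::text), '') FROM generate_series(1, 100);
-- ERROR:  index row requires ... bytes, maximum size is ...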
Bill Moran wrote
> On Fri, 25 Jul 2014 17:20:57 +0100
> Rebecca Clarke <r.clarke83@> wrote:
>
> Note that this is speculation on my part, but the
> point is that if those columns are usually as narrow as your
> examples you might want to try changing them to VARCHAR(50) or
> something a
On Fri, 25 Jul 2014 17:20:57 +0100
Rebecca Clarke wrote:
> Hi Bill,
>
> Thanks for the reply. Here's the EXPLAIN output of a couple of the queries:
A few suggestions in addition to David's comment about doing
EXPLAIN ANALYZE:
1) When experimenting, one thing to try might be making a single
G
Rebecca Clarke-2 wrote
> Thanks for the reply. Here's the EXPLAIN output of a couple of the
> queries:
Typically you want to provide EXPLAIN ANALYZE output so that comparisons
between planner estimates and reality can be made.
David J.
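For anyone following along, the difference in practice looks like this (a sketch; mytable and source_id come from the thread, the literal 42 is made up):

-- EXPLAIN shows only the planner's cost and row estimates:
EXPLAIN SELECT the_geom, oid FROM mytable WHERE source_id = 42;

-- EXPLAIN ANALYZE actually executes the query and reports real row
-- counts and timings next to the estimates, which is what makes the
-- estimate-vs-reality comparison possible:
EXPLAIN ANALYZE SELECT the_geom, oid FROM mytable WHERE source_id = 42;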
Hi Bill,
Thanks for the reply. Here's the EXPLAIN output of a couple of the queries:
SELECT the_geom,oid from mytable
where the_geom && ST_GeomFromText('POLYGON((529342.334095833
180696.22173,529342.334095833 181533.44595,530964.336820833
181533.44595,530964.336820833 180696.22173,529342
On Wed, 23 Jul 2014 10:45:56 +0100
Rebecca Clarke wrote:
> Hi all,
>
> Looking for some advice regarding a slow query I have and indexing.
>
> I'm using postgresql 9.1 and this is my table that has around 680 rows:
>
> CREATE TABLE mytable
> (
> class character varying,
> floor charact
Hi all,
Looking for some advice regarding a slow query I have and indexing.
I'm using postgresql 9.1 and this is my table that has around 680 rows:
CREATE TABLE mytable
(
class character varying,
floor character varying,
source_id integer,
the_geom geometry
)
WITH (
OIDS=TRUE
);
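Since the slow queries filter the_geom with the && operator, the usual first step is a GiST index (a sketch, assuming PostGIS is installed; the index name is made up):

-- && is the bounding-box-overlap operator; a plain B-tree cannot
-- support it, but a GiST index can:
CREATE INDEX mytable_the_geom_idx ON mytable USING gist (the_geom);
ANALYZE mytable;  -- refresh planner statistics afterwards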
On 31 August 2011, 16:49, Andy Colson wrote:
> On 8/31/2011 9:35 AM, Tore Halvorsen wrote:
>> Hi,
>>
>> I'm trying to optimize a query where I have two tables that both have a
>> timestamp column. I want the result where either of the timestamps is
>> after a specified time. In a reduced form, like
On 8/31/2011 9:53 AM, Tore Halvorsen wrote:
On Wed, Aug 31, 2011 at 4:49 PM, Andy Colson <a...@squeakycode.net> wrote:
On PG 9, after I ANALYZED the tables, it used indexes:
QUERY PLAN
-----------------------------------------------------------
On Wed, Aug 31, 2011 at 4:49 PM, Andy Colson wrote:
> On PG 9, after I ANALYZED the tables, it used indexes:
>
>
> QUERY PLAN
> -----------------------------------------------------------
> Merge Join (cost=1.59..82778.35 rows=13171 w
On 8/31/2011 9:35 AM, Tore Halvorsen wrote:
Hi,
I'm trying to optimize a query where I have two tables that both have a
timestamp column. I want the result where either of the timestamps is
after a specified time. In a reduced form, like this:
CREATE TABLE a
(
id serial NOT NULL PRIMARY KEY
>
> -- This can be optimized by using CTEs
> with am as (
> select * from a where time >= '2011-08-15'
> )
> , bm as (
> select * from b where time >= '2011-08-15'
> )
> select * from am join bm using(id)
>
Disregard this, it doesn't do the same thing at all.
Now I'm more confused as to how I can o
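For the record, the rewrite that usually works for an "either timestamp qualifies" condition is a UNION, so each branch can use the index on its own time column (a sketch against the reduced tables a and b from the original post):

-- UNION (not UNION ALL) removes the duplicates that arise when both
-- timestamps qualify; with id as primary key the result matches the
-- single query with OR:
SELECT * FROM a JOIN b USING (id) WHERE a.time >= '2011-08-15'
UNION
SELECT * FROM a JOIN b USING (id) WHERE b.time >= '2011-08-15';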
Hi,
I'm trying to optimize a query where I have two tables that both have a
timestamp column. I want the result where either of the timestamps is after
a specified time. In a reduced form, like this:
CREATE TABLE a
(
id serial NOT NULL PRIMARY KEY,
time timestamp without time zone NOT NULL D
On Thu, Sep 17, 2009 at 9:55 AM, Ian Harding wrote:
> I have never had this particular problem in PostgreSQL, it seems to
> "just know" when queries can be "flattened" and indexes used. I know
> that takes tons of work. Thank you for that.
>
> Here's the Oracle question.
>
> http://stackoverflow
I have never had this particular problem in PostgreSQL, it seems to
"just know" when queries can be "flattened" and indexes used. I know
that takes tons of work. Thank you for that.
Here's the Oracle question.
http://stackoverflow.com/questions/1439500/oracle-index-usage-in-view-with-aggregates
Hello,
QUESTION1: Can somebody clarify in which of the proposed scenarios the
following index is used? Any further comment will be greatly appreciated.
QUESTION2: Do any other scenarios where an index is NOT used (and someone
might possibly expect it to be used) come to mind? Thank you v
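QUESTION1 cannot be answered here because the index definition is not shown in this excerpt, but for QUESTION2 a few classic not-used cases, sketched against a hypothetical B-tree index:

CREATE TABLE t2 (col text);
CREATE INDEX t2_col_idx ON t2 (col);

SELECT * FROM t2 WHERE col = 'abc';         -- can use the index
SELECT * FROM t2 WHERE lower(col) = 'abc';  -- cannot; needs an index ON t2 (lower(col))
SELECT * FROM t2 WHERE col LIKE '%abc';     -- cannot; leading wildcard defeats a B-tree
-- Also: the planner skips the index when the table is tiny or when the
-- predicate matches most of the table, because a seq scan is cheaper.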
On Sat, Nov 03, 2007 at 11:42:39AM +0400, rihad wrote:
> Does this mean that a condition like "WHERE ... [AND] lhs.a=rhs.b [AND]
> ..." where rhs.b is already unique-indexed, also requires (non-unique)
> index on lhs.a for maximal join speed? Otherwise why would they want to
> say that?
No, as
http://www.postgresql.org/docs/8.2/interactive/indexes-intro.html
states that "Indexes can moreover be used in join searches. Thus, an
index defined on a column that is part of a join condition can
significantly speed up queries with joins."
Does this mean that a condition like "WHERE ... [AND
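For context (a sketch; the reply above is cut off right after "No"): an index on a join column is never required for the join to work, but it gives the planner more options. With lhs and rhs as named in the question:

-- rhs.b already has a unique index, so a nested loop can probe rhs
-- efficiently. An index on lhs.a additionally enables index-driven
-- plans from the other direction (e.g. merge join on presorted input):
CREATE INDEX lhs_a_idx ON lhs (a);
-- Whether it actually helps depends on table sizes and statistics;
-- compare EXPLAIN output with and without it.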
On 10/17/07, Joao Miguel Ferreira <[EMAIL PROTECTED]> wrote:
> If your intention is to eliminate the unused index rows you should run
> 'vacuum' and/or 'vacuum full' and/or 'reindex'.
>
> This also has the consequence of freeing filesystem space and returning
> it back to the OS.
>
> Check it out
On Tue, 2007-10-16 at 15:51 -0700, Ben wrote:
> You could take a look at pg_statio_user_indexes and/or
> pg_stat_user_indexes, if you have stats enabled
>
> On Tue, 16 Oct 2007, Bryan Murphy wrote:
If your intention is to eliminate the unused index rows you should run
'vacuum' and/or 'vacu
You could take a look at pg_statio_user_indexes and/or
pg_stat_user_indexes, if you have stats enabled
On Tue, 16 Oct 2007, Bryan Murphy wrote:
Is there a way I can track index usage over a long period of time?
Specifically, I'd like to identify indexes that aren't being regularly
used and
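A concrete starting point for that (a sketch using the standard statistics view mentioned above):

-- Indexes never scanned since statistics were last reset; watch this
-- over a long period before dropping anything:
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;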
Is there a way I can track index usage over a long period of time?
Specifically, I'd like to identify indexes that aren't being regularly
used and drop them.
Bryan
> Not that it changes anything, but I assume you meant to create
> the first index on c1.
Yes, I did -- that was just a typo in the example. Thanks for pointing
it out.
Adam Pritchard
"Adam Pritchard" <[EMAIL PROTECTED]> writes:
> CREATE TABLE t ( c1 INT, c2 INT );
> CREATE INDEX c1_idx ON t(c2);
> CREATE INDEX c2_idx ON t(c2);
> EXPLAIN SELECT * FROM t ORDER BY c1, c2;
> "Sort (cost=69.83..72.33 rows=1000 width=8)"
> " Sort Key: c1, c2"
> " -> Seq Scan on t (cost=0.00..20.
On Fri, Jul 08, 2005 at 10:06:23AM -0700, Adam Pritchard wrote:
> Why aren't two single-column indexes used in a two-column ORDER BY
> clause? And is there some way to work around this?
>
> For example:
> CREATE TABLE t ( c1 INT, c2 INT );
> CREATE INDEX c1_idx ON t(c2);
Why aren't two single-column indexes used in a two-column ORDER BY
clause? And is there some way to work around this?
For example:
CREATE TABLE t ( c1 INT, c2 INT );
CREATE INDEX c1_idx ON t(c2);
CREATE INDEX c2_idx ON t(c2);
EXPLAIN SELECT * FROM t ORDER BY c1, c2;
"Sort (cost=69.83..72.33 rows
Tom Lane wrote:
This script is lacking a VACUUM or ANALYZE command, so the planner
doesn't know how large the table is. Note the ridiculously small
cost estimates in EXPLAIN ...
I see, I never knew about having to VACUUM/ANALYZE first. Thanks.
--
dave
David Garamond <[EMAIL PROTECTED]> writes:
> 1. script to create the test table (a 1mil-record table; each record
> contains 1-40 random bytes):
This script is lacking a VACUUM or ANALYZE command, so the planner
doesn't know how large the table is. Note the ridiculously small
cost estimates in E
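The pattern Tom is pointing at, as a sketch (testtable is a stand-in for the 1mil-record test table):

-- After bulk-loading test data, refresh the planner's statistics so
-- its cost estimates reflect the real table size:
VACUUM ANALYZE testtable;
-- (ANALYZE testtable; alone also works if you only need statistics.)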
Tom Lane wrote:
David Garamond <[EMAIL PROTECTED]> writes:
The table contains +- 1 mil records; all of the actual versions of the
queries below return < 10 rows, so an index should be used. Using an
index scan, the query ran < 100ms. Using seq scan, 2-3 secs. So there is
no good reason why a seq
David Garamond <[EMAIL PROTECTED]> writes:
> The table contains +- 1 mil records; all of the actual versions of the
> queries below return < 10 rows, so an index should be used. Using an
> index scan, the query ran < 100ms. Using seq scan, 2-3 secs. So there is
> no good reason why a seq scan shou
The table contains +- 1 mil records; all of the actual versions of the
queries below return < 10 rows, so an index should be used. Using an
index scan, the query ran < 100ms. Using seq scan, 2-3 secs. So there is
no good reason why a seq scan should be used, especially in a case of
b='foo' or b='
On Fri, 24 Oct 2003, Staff, Alexander wrote:
> Hi,
> I created a simple table (name char(200), zip char(10), city char(200), street char(200), id
> int) and filled some data, appr. 250 000 records, in it.
> I tested accessing some rows (select id from address where id = 4;, select *
> from address w
Look through the docs. By altering the values of random_page_cost,
effective_cache_size and cpu_tuple_cost you can make the estimates
approximate real life better.
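For example (a sketch; the values are illustrative, not recommendations):

-- Experiment per-session first; persist in postgresql.conf when happy:
SET random_page_cost = 2.0;        -- lower it when data is mostly cached
SET effective_cache_size = '1GB';  -- older releases take a page count instead
SET cpu_tuple_cost = 0.01;         -- the default; rarely needs changing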
On Tue, Aug 26, 2003 at 12:32:23PM +0100, Pedro Alves wrote:
>
> One more thing I just noticed. Right after making a vacuum an
Pedro Alves <[EMAIL PROTECTED]> writes:
> The vacuum analyze is run on a daily basis, so that cannot be the point.
The next thing I'd try to improve the planner's guess is to increase the
statistics target for the ra_datacolh column (see ALTER TABLE SET
STATISTICS). It looks like the default of 10
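In SQL, that suggestion looks like this (a sketch; the table name is not shown in the excerpt, so "ra" here is a guess based on the column prefix):

-- Raise the per-column statistics target above the old default of 10,
-- then re-collect statistics so the planner sees the finer histogram:
ALTER TABLE ra ALTER COLUMN ra_datacolh SET STATISTICS 100;
ANALYZE ra;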
On Mon, 25 Aug 2003, Pedro Alves wrote:
> The querys below are exactly the same but refer to different months.
> One case uses indexes, the other doesn't.
>
> Is there anything I can do? Increasing index mem size?
Run "vacuum analyze". The planner seems to think that one of the queries
(Sorry for reposting...)
> You said you entered some rows, but how many rows got returned by your
> query?
You're right, it does make all the difference. With only a few rows, an
index is somehow used in the first case only (for my example), but with
larger tables an index scan is used in both cases.
What does your data look like? If you have a lot of duplicate id's, a
sequential scan may be better than an index scan.
If you are not sure if this is the case, try:
SELECT id, count(*) AS count FROM test GROUP BY id ORDER BY count DESC
LIMIT 50;
This should show you the top 50 most duplicate