I'm hitting an unexpected problem with PostgreSQL 8.3 - I have some
tables which use varchar(32) for their unique IDs, which I'm attempting
to join using some simple SQL:
select *
from group_access, groups
where group_access.groupid = groups.groupid and
group_access.uid = '7275359408f44591d0
We store traffic data in partitioned tables. The problem is that not
all of this data is correct; the corrupt values need to be corrected.
Hi
The reason for using the temporary table is that I need this data
buffered somewhere so that I can use it for later computation, and the
fact that for each imputation I need historical data from the 10
previous weeks makes it necessary to create something that can hold
the data. However, once ...
The partitions are used to separate the data according to months. I
have run a query to find bad data in each such partition. The
imputation algorithm that I use requires data from the 10 previous weeks
in order to impute the data. This historical data I store in a
temporary table, then I query this data ...
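A minimal sketch of that buffering step, assuming an illustrative source table traffic_data with a timestamptz column reading_ts (these names are not from the original posts):

    -- pull the 10 previous whole weeks of history into a session-local buffer
    CREATE TEMP TABLE history_buffer AS
        SELECT *
        FROM traffic_data
        WHERE reading_ts >= date_trunc('week', now()) - interval '10 weeks'
          AND reading_ts <  date_trunc('week', now());

    -- the imputation queries then read from history_buffer instead of
    -- scanning the partitioned table repeatedly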
Bill Moran wrote:
> This is a FAQ, it comes up on an almost weekly basis.
I don't think so - note the "where" clause:
- select count(*) from gene_prediction_view where gene_ref = 523
Cheers,
Jeremy
On Wed, Apr 9, 2008 at 11:41 AM, PFC <[EMAIL PROTECTED]> wrote:
> In order to use the index, you could rewrite it as something like:
> a.calldate >= '2008-04-09' AND a.calldate < ('2008-04-09'::DATE + '1 DAY'::INTERVAL)
> This is a RANGE query (just like BETWEEN), which is indexable.
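To illustrate the difference (a sketch only - the table name cdr and the index on calldate are assumed here, not taken from the thread):

    -- casting the column hides it from a plain index on calldate
    SELECT * FROM cdr a WHERE (a.calldate)::date = '2008-04-09';

    -- the range form leaves the column bare, so an ordinary btree index applies
    CREATE INDEX cdr_calldate_idx ON cdr (calldate);

    SELECT *
    FROM cdr a
    WHERE a.calldate >= '2008-04-09'
      AND a.calldate <  ('2008-04-09'::date + '1 day'::interval);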
This is a FAQ, it comes up on an almost weekly basis. Please do a
little Googling on count(*) and PostgreSQL and you'll get all the
explanations and suggestions on how to fix the problem you could
ever want.
In response to Arjen van der Meijden <[EMAIL PROTECTED]>:
> First of all, there is the ...
First of all, there is the 'explain analyze' output, which is pretty
helpful in PostgreSQL.
My guess is that PostgreSQL decides to do a table scan for some reason. It
might not have enough statistics for this particular table or column to
make a sound decision. What you can try is to increase the statistics
target for that column and run ANALYZE again.
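A minimal sketch of that advice against the query from the original question (this assumes gene_prediction_view is an ordinary table despite its name; if it is a view, the statistics and index changes belong on the underlying table):

    EXPLAIN ANALYZE
    SELECT count(*) FROM gene_prediction_view WHERE gene_ref = 523;

    -- if the planner's row estimate is far off, collect more detailed statistics
    ALTER TABLE gene_prediction_view ALTER COLUMN gene_ref SET STATISTICS 1000;
    ANALYZE gene_prediction_view;

    -- an index on the filter column lets the planner avoid a full table scan
    CREATE INDEX gene_prediction_view_gene_ref_idx
        ON gene_prediction_view (gene_ref);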
Hi, I've started my first project with Postgres (after several years of
using MySQL), and I'm having an odd performance problem that I was
hoping someone might be able to explain the cause of.
My query
- select count(*) from gene_prediction_view where gene_ref = 523
- takes 26 seconds ...
On Apr 8, 2008, at 2:43 PM, Alvaro Herrera wrote:
samantha mahindrakar wrote:
Well, instead of creating a temp table every time, I just created a
permanent table; I insert the data into it every time and truncate it.
I created indexes on this permanent table too. This did improve the
performance ...
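A rough sketch of that pattern, with illustrative names (impute_buffer and traffic_data are assumptions, not from the original posts):

    -- one-time setup: a permanent staging table plus an index for the lookups
    CREATE TABLE impute_buffer (
        link_id    integer,
        reading_ts timestamptz,
        volume     integer
    );
    CREATE INDEX impute_buffer_link_ts_idx ON impute_buffer (link_id, reading_ts);

    -- per run: load the history, run the computation, then empty the table
    INSERT INTO impute_buffer
        SELECT link_id, reading_ts, volume
        FROM traffic_data
        WHERE reading_ts >= now() - interval '10 weeks';

    -- ... imputation queries against impute_buffer ...

    TRUNCATE impute_buffer;

TRUNCATE is much cheaper than DELETE for emptying the whole table, and the indexes stay defined for the next load.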
On Mar 31, 2008, at 8:23 PM, Ravi Chemudugunta wrote:
In general I would recommend that you benchmark them using
as-close-to-real load as possible against as-real-as-possible data.
I am running a benchmark with around 900,000 odd records (real load on
the live machine :o ) ... should hopefully show ...
Well, this table has a primary key index on its first column, acctid,
which is an integer; the calldate column instead is a TIMESTAMPTZ, and in
fact I'm doing (calldate)::date in the ON clause because the time part
of that column is always different, and in the nesting I have ...
Luigi N. Puleio wrote:
If for some reason you cannot do that, please at least include the data
type of the primary key and all fields involved in the query, as well as
a list of all the indexes on both tables.
If you won't show people on the list your table definitions, or at least
the information ...
Luigi N. Puleio wrote:
>> With all that casting, is it possible that appropriate indexes aren't
>> being used because your WHERE / ON clauses aren't an exact type match
>> for the index?
>
> You mean to put an index on date with timestamptz datatype column?...
Er ... I'm not quite sure what you mean. Do you mean an index ...?
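One way such a cast can be made index-friendly is an expression index; the sketch below assumes a table cdr with a timestamptz column calldate (names not taken from the thread). Note that an index directly on (calldate)::date is rejected, because the timestamptz-to-date cast depends on the session TimeZone; making the time zone explicit gives an immutable expression that can be indexed:

    CREATE INDEX cdr_calldate_date_idx
        ON cdr (((calldate AT TIME ZONE 'UTC')::date));

    -- queries must use the same expression for the index to match
    SELECT count(*)
    FROM cdr
    WHERE (calldate AT TIME ZONE 'UTC')::date = DATE '2008-04-09';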
Luigi N. Puleio wrote:
> SELECT
>     (a.column1)::date, MIN(b.column2) - a.column2
> FROM
>     table a
>     inner join table b
>         on ((a.column1)::date = (b.column1)::date and
>             b.column3 = 'b' and
>             (b.column1)::time without time zone >= (a.column1)::time without time zone)
> WHERE ...
On Apr 7, 8:27 pm, [EMAIL PROTECTED] ("samantha mahindrakar")
wrote:
> Hi
> I have written a program that imputes (or rather corrects) data within
> my database.
> I am using a temporary table wherein I put data from another partitioned
> table. I then query this table to get the desired data. But the ...
On Wed, 9 Apr 2008, Pavan Deolasee wrote:
> > I have a table with 17 columns and it has almost
> > 530,000 records, and doing just a
> > SELECT * FROM table
> Well, PK won't help you here because you are selecting all rows
> from the table and that seq scan is the right thing for that.
Yes. Like he said.
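A quick way to confirm that the sequential scan really is the right plan for an unqualified SELECT * (mytable is a placeholder name): even with indexes present, disabling seq scans only pushes the planner toward a slower path, because every row has to be fetched anyway.

    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM mytable;  -- still a seq scan, or a slower index path
    RESET enable_seqscan;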
Pavan Deolasee wrote:
> Anyways, I don't think there is much
> tweaking you can do for such a query except making sure that
> your table is not bloated with dead tuples.
To the OP:
More explicitly: Make sure you use autovacuum or run VACUUM manually on
the table periodically.
Would I be correct ...
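A minimal sketch of that vacuum check (mytable is a placeholder; the real table name was not shown in the thread):

    -- how many dead row versions have accumulated, and when vacuum last ran
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'mytable';

    -- reclaim dead tuples and refresh planner statistics
    VACUUM VERBOSE ANALYZE mytable;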
Hello everyone!!
I have a table with 17 columns and it has almost
530,000 records, and doing just a
SELECT * FROM table
with EXPLAIN ANALYZE I get:
Seq Scan on table (cost=0.00..19452.95 rows=529395
width=170) (actual time=0.155..2194.294 rows=529395
loops=1)
Total runtime: 3679.039 ms
and ...