Mark Kirkwood wrote:
Matthew Sackman wrote:
I need to get to the stage where I can run queries such as:
>
select street, locality_1, locality_2, city from address where (city =
'Nottingham' or locality_2 = 'Nottingham'
or locality_1 = 'Nottingham')
and upper(substring(street from
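If the searches are equality tests on the locality/city columns plus a case-insensitive prefix match on street, plain btree indexes for the equality columns and an expression index matching the upper(substring(...)) test can help, provided the query uses exactly the same expression. A minimal sketch, assuming a 10-character prefix; the prefix length and index names are made up here:
-- plain btree indexes for the equality tests
CREATE INDEX address_city_idx       ON address (city);
CREATE INDEX address_locality_1_idx ON address (locality_1);
CREATE INDEX address_locality_2_idx ON address (locality_2);
-- expression index, usable when the query compares the identical expression
CREATE INDEX address_street_prefix_idx
    ON address (upper(substring(street from 1 for 10)));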
Hi all
I have got a lot of information from your group.
Now I would like to leave this group.
I kindly request all of you:
please unsubscribe me.
Thanks a lot,
Ramesh
On 9/3/05, Matteo Beccati <[EMAIL PROTECTED]> wrote:
Simon Riggs wrote:
> Returning to Matteo's original query, what we are saying is that t
Simon Riggs wrote:
Returning to Matteo's original query, what we are saying is that the new
optimization for MIN/MAX queries doesn't work with inherited tables.
It could do, by running optimize_minmax_aggregates() for each query that
gets planned to see if a better plan exists for each child t
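Until the planner does this itself, the usual manual workaround is to take the aggregate per table and combine the results, so each arm can use its own index (on 8.1 the per-table max() is itself turned into an index scan). A minimal sketch, assuming a parent bar with child tables bar_a and bar_b (hypothetical names), each indexed on foo:
-- max of the per-table maxima; each subquery can use its own index on foo
SELECT max(m)
FROM (
    SELECT max(foo) AS m FROM ONLY bar
    UNION ALL
    SELECT max(foo) AS m FROM bar_a
    UNION ALL
    SELECT max(foo) AS m FROM bar_b
) AS per_table;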
Akshay Mathur wrote:
Ron,
Can you give me some pointers on making the tables RAM resident? If one
does so, does the program accessing the data need to change? Does pgsql
take care of writing the data to disk?
PostgreSQL tries to intelligently cache information and then will also
use the OS disk
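There is no switch to pin a table in RAM, and the client program does not need to change: PostgreSQL keeps writing to disk as usual and serves repeated reads from its buffer cache and the OS cache. The knobs that matter are shared_buffers and effective_cache_size in postgresql.conf; a rough sketch follows, where the numbers are placeholders rather than recommendations:
-- in postgresql.conf (shared_buffers needs a restart):
--   shared_buffers       = 50000    -- 8 kB pages kept in PostgreSQL's own cache
--   effective_cache_size = 200000   -- planner's estimate of OS cache, in pages
-- check the values currently in effect from psql:
SHOW shared_buffers;
SHOW effective_cache_size;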
On Fri, 2005-09-02 at 12:20 +0200, Matteo Beccati wrote:
> I'm using inherited tables to partition some data which can grow very
> large. Recently I discovered that a simple query that on a regular table
> would use an index was instead using seq scans (70s vs a guessed 2s).
> The well known quer
Matthew,
> Well, this is a development box. But the live box wouldn't be much more
> than RAID 1 on SCSI 10ks so that should only be a halving of seek time,
> not the 1000 times reduction I'm after!
If you're looking for 1000 times reduction, I think you're going to need
*considerably* beefier h
"DELETE FROM statistics_sasme WHERE statistic_id = 9832;"
As Michael said, why use a NUMERIC when a bigint is faster and better for
your use case, since you only need an integer and not a fixed-precision
decimal? Also, if you use postgres < 8, the index will not be used if you search
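If that is the pre-8.0 type-mismatch problem, the usual workaround is to make the literal match the column's type so the index on the numeric column can be considered. A minimal sketch against the statistics_sasme table shown elsewhere on this page; whether it applies to the exact query here is an assumption:
-- force a numeric-vs-numeric comparison so the index can be used
DELETE FROM statistics_sasme WHERE statistic_id = 9832::numeric;
-- or quote the literal and let it resolve to the column's type
DELETE FROM statistics_sasme WHERE statistic_id = '9832';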
Hey there folks. I'm at a loss as to how to increase the speed of this
query. It's something I need to run each day, but can't at the rate this
runs. Tables are updated once per day and are vacuum analyzed after each load.
select ddw_tran_key, r.price_type_id, t.price_type_id
from
cdm.cdm_ddw_tran_
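A useful first step for a query like this is to capture the plan with actual timings so readers can see which node is slow. A minimal sketch of the idea; the relation names below are placeholders, since the query is truncated in this archive:
-- EXPLAIN ANALYZE runs the statement and reports actual rows and time per plan node
EXPLAIN ANALYZE
SELECT ddw_tran_key, r.price_type_id, t.price_type_id
FROM cdm.some_fact_table r                         -- placeholder names standing in
JOIN cdm.some_other_table t USING (ddw_tran_key);  -- for the truncated query above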
The correct strategy IMHO would
be applying the order by and limit for each child table (which results
in an index scan, if possible), appending, then finally sorting a bunch
of rows, and limiting again.
This would be a win in some cases, and in many others a loss (ie, wasted
sort steps). The h
Hi,
The correct strategy IMHO would
be applying the order by and limit for each child table (which results
in an index scan, if possible), appending, then finally sorting a bunch
of rows, and limiting again.
This would be a win in some cases, and in many others a loss (ie, wasted
sort steps)
Matteo Beccati <[EMAIL PROTECTED]> writes:
> The correct strategy IMHO would
> be applying the order by and limit for each child table (which results
> in an index scan, if possible), appending, then finally sorting a bunch
> of rows, and limiting again.
This would be a win in some cases, and i
On Fri, Sep 02, 2005 at 01:43:05PM +0200, Arnau wrote:
>
statistic_id | numeric(10,0) | not null default
> nextval('STATISTICS_OPERATOR_ID_SEQ'::text)
Any reason this column is numeric instead of integer or bigint?
> That contains about 7.000.000 entries and I have to r
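If the column only ever holds sequence values, converting it to bigint removes the numeric overhead. From 8.0 on this can be done in place; a hedged sketch, assuming nothing beyond the table and column names shown above, and noting that the statement rewrites all ~7,000,000 rows:
-- convert the key column from numeric(10,0) to bigint (PostgreSQL 8.0+);
-- the existing nextval() default can stay, since sequences already return integers
ALTER TABLE statistics_sasme
    ALTER COLUMN statistic_id TYPE bigint;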
Simon Riggs wrote:
The query plan generated when running the query on a table which has
inheritance forces the planner to choose a seq_scan for each table.
Wouldn't be a good thing to also promote ORDER BYs and LIMITs to each
subscan (like WHERE does)?
The tuple_fraction implied by LIMIT is a
On Fri, 2005-09-02 at 12:20 +0200, Matteo Beccati wrote:
> I'm using inherited tables to partition some data which can grow very
> large. Recently I discovered that a simple query that on a regular table
> would use an index was instead using seq scans (70s vs a guessed 2s).
> The well known que
Hi all,
I have the following table:
espsm_asme=# \d statistics_sasme
Table "public.statistics_sasme"
    Column    |     Type      |   Modifiers
--------------+---------------+--------------
On 2 Sep 2005, at 10:42, Richard Huxton wrote:
Ricardo Humphreys wrote:
Hi all.
In a cluster, is there any way to use the main memory of the
other nodes instead of the swap? If I have a query with many sub-
queries and a lot of data, I can easily fill all the memory in a
node. The point
Hi,
I'm using inherited tables to partition some data which can grow very
large. Recently I discovered that a simple query that on a regular table
would use an index was instead using seq scans (70s vs a guessed 2s).
The well known query is:
SELECT foo FROM bar ORDER BY foo DESC LIMIT 1
(The
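A workaround that does not depend on planner changes is to push the ORDER BY and LIMIT into each table by hand and then pick the winner from the handful of survivors, which is essentially the strategy discussed further down this page. A minimal sketch, assuming child tables bar_a and bar_b (hypothetical names) with indexes on foo:
-- each parenthesised arm can use its own index; the outer sort sees only a few rows
SELECT foo
FROM (
    (SELECT foo FROM ONLY bar ORDER BY foo DESC LIMIT 1)
    UNION ALL
    (SELECT foo FROM bar_a ORDER BY foo DESC LIMIT 1)
    UNION ALL
    (SELECT foo FROM bar_b ORDER BY foo DESC LIMIT 1)
) AS candidates
ORDER BY foo DESC
LIMIT 1;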
Hemant Pandey wrote:
Operation A: 6 Fetch + 1 Update
Operation B: 9 Fetch
Operation C: 5 Fetch + 3 Update (table has 140 records)
I have run these operations while autovacuum is running and observed the
time taken in these operations. I found that Operation C
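For a table of only 140 rows that takes several updates per operation, dead-row bloat is the usual suspect when timings drift while relying on autovacuum. A cheap test is an explicit vacuum of the hot table; the table name here is made up:
-- reclaim dead rows and refresh planner statistics on the frequently updated table;
-- on a 140-row table this completes in a fraction of a second
VACUUM ANALYZE operation_c_table;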
Ricardo Humphreys wrote:
Hi all.
In a cluster, is there any way to use the main memory of the other nodes
instead of the swap? If I have a query with many sub-queries and a lot of
data, I can easily fill all the memory in a node. The point is: is there any
way to continue using the main memor
Ron,
Can you give me some pointers on making the tables RAM resident? If one
does so, does the program accessing the data need to change? Does pgsql
take care of writing the data to disk?
Regards,
akshay
---
Akshay Mathur
SMTS, Product Verification
AirTight Network