On 21 sep 2009, at 23.41, Bruce Momjian wrote:
Alan McKay wrote:
And if so, where does that extra load go? Disk? CPU? RAM?
As of 8.4.X the load isn't measurable.
Thanks Bruce. What about 8.3 since that is our current production
DB?
Same. All statistics settings that are enabled
astro77 wrote:
> Thanks Kevin. I thought about using tsearch2 but I need to be able to select
> exact values on other numerical queries and cannot use "contains" queries.
You might be able to make use of a custom parser for tsearch2 that creates
something like a single "word" for xml fragments like
As a follow-up, when I try to create the index like this...
CREATE INDEX CONCURRENTLY idx_object_nodeid2
ON object
USING btree (
    xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object,
          ARRAY[
              ARRAY['a', 'http://schemas.datacontract.org/20
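The statement above fails because xpath() returns xml[], which (like xml itself) has no btree operator class. One hedged workaround, assuming each document contains a single ObjectId, is to index a text projection of the first element of that array; the namespace URLs below are placeholders for the truncated originals:

```sql
-- Sketch only: cast the first node returned by xpath() to text so btree
-- can index it. Namespace URLs are hypothetical stand-ins.
CREATE INDEX CONCURRENTLY idx_object_nodeid2
ON object (
    (((xpath('/a:root/a:Identification/b:ObjectId/text()', serialized_object,
             ARRAY[ARRAY['a', 'http://example.com/ns-a'],
                   ARRAY['b', 'http://example.com/ns-b']]))[1])::text)
);
```

Queries must repeat the exact same expression for the planner to consider the index.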
Thanks Kevin. I thought about using tsearch2 but I need to be able to select
exact values on other numerical queries and cannot use "contains" queries.
It's got to be fast so I cannot have lots of records returned and have to do
secondary processing on the xml for the records which contain the exa
CREATE INDEX CONCURRENTLY idx_serializedxml
ON "object" (serialized_object ASC NULLS LAST);
yields the error:
ERROR: data type xml has no default operator class for access method "btree"
The same error occurs when I try to use the other access methods as well.
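Since xml has no default operator class for any access method, a hedged alternative (similar in spirit to the bytea/text hashing trick linked elsewhere in this thread) is to index a hash of the document's text form and compare hashes in queries:

```sql
-- Sketch only: index an md5 of the text form of the xml column, then
-- query with the same expression so the planner can match the index.
CREATE INDEX CONCURRENTLY idx_serializedxml_md5
ON "object" (md5(serialized_object::text));

-- SELECT * FROM "object"
--  WHERE md5(serialized_object::text) = md5('<root>...</root>');
```

This only supports exact-match lookups, which fits the "exact values, no contains queries" requirement stated above.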
On Thu, Sep 3, 2009 at 4:06 PM
On Monday 21 September 2009 17:00:36 Merlin Moncure wrote:
> On Mon, Sep 21, 2009 at 10:50 AM, Vincent de Phily
>
> wrote:
> > On Friday 11 September 2009 23:55:09 Merlin Moncure wrote:
> >> On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily
> >>
> >> wrote:
> >> >
On Friday 11 September 2009 23:55:09 Merlin Moncure wrote:
> On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily
> wrote:
> > Table "public.message"
> > Column | Type | Modifiers
> > ---+--
On Friday 11 September 2009 23:30:37 Robert Haas wrote:
> On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily
> wrote:
> > On Monday 07 September 2009 03:25:23 Tom Lane wrote:
> >>
> >> 99% of the time, the reason a delete takes way longer than it seems like
> >> it should is trigger firing time. In
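A common instance of trigger firing time is an ON DELETE foreign-key check scanning an unindexed referencing column. A quick hedged check (the table name is taken from this thread; FK constraints appear here as constraint triggers):

```sql
-- Sketch only: list triggers on the table being deleted from. If a
-- referencing column lacks an index, each ON DELETE check does a
-- sequential scan of the referencing table.
SELECT tgname, tgrelid::regclass AS table_name
FROM pg_trigger
WHERE tgrelid = 'public.message'::regclass;
```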
I'm looking at running session servers in ram. All the data is
throw-away data, so my plan is to have a copy of the empty db on the
hard drive ready to go, and have a script that just copies it into ram
and starts the db there. We're currently IO write bound with
fsync=off using a 15k5 seagate SA
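The plan described above can be sketched as a small script. Paths and the cluster template are hypothetical; in production the RAM directory would be a tmpfs mount such as /dev/shm, and the template would be a cluster built once with initdb:

```shell
# Sketch only: clone a pre-built empty cluster from disk into a
# RAM-backed directory and start a throw-away instance there.
TEMPLATE=/tmp/pg_empty_template   # hypothetical: built once with initdb
RAMDIR=/tmp/pg_ram_demo           # in production: a tmpfs mount

mkdir -p "$TEMPLATE"              # stand-in for the on-disk template
rm -rf "$RAMDIR"
cp -a "$TEMPLATE" "$RAMDIR"       # copy the empty cluster into RAM
echo "cluster staged at $RAMDIR"
# pg_ctl -D "$RAMDIR" -o "-p 5433" start   # (not executed in this sketch)
```

Since the data is throw-away, losing the RAM copy on reboot costs nothing; the script just re-stages the template.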
Alan McKay wrote:
> >> And if so, where does that extra load go? Disk? CPU? RAM?
> >
> > As of 8.4.X the load isn't measurable.
>
> Thanks Bruce. What about 8.3 since that is our current production DB?
Same. All statistics settings that are enabled by default have
near-zero overhead. Is
>> And if so, where does that extra load go? Disk? CPU? RAM?
>
> As of 8.4.X the load isn't measurable.
Thanks Bruce. What about 8.3 since that is our current production DB?
--
“Don't eat anything you've ever seen advertised on TV”
- Michael Pollan, author of "In Defense of Food"
On 9/19/09 5:08 PM, Michael Korbakov wrote:
> -> Hash Join (cost=8.50..25.11 rows=1
> width=28) (actual time=0.092..1.864 rows=560 loops=1)
> Hash Cond:
> (((partners_shares.year)::double precision = (shares.year)::double
> precision) AND ((
Alan McKay wrote:
> Is there a rule of thumb for the extra load that will be put on a
> system when statement stats are turned on?
>
> And if so, where does that extra load go? Disk? CPU? RAM?
As of 8.4.X the load isn't measurable.
--
Bruce Momjian http://momjian.us
Enterprise
On Mon, Sep 21, 2009 at 10:47 AM, Alan McKay wrote:
> We are looking to optimize the query I was talking about last week
> which is killing our system.
>
> We have explain and analyze which tell us about the cost of a query
> time-wise, but what does one use to determine (and trace / predict?)
> m
Kevin Grittner wrote:
> Michael Glaesemann wrote:
> > On Sep 14, 2009, at 16:55 , Josh Berkus wrote:
>
> >> Please read the following two documents before posting your
> >> performance query here:
> >>
> >> http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
> >> http://wiki.postgresql
On Mon, Sep 21, 2009 at 10:50 AM, Vincent de Phily
wrote:
> On Friday 11 September 2009 23:55:09 Merlin Moncure wrote:
>> On Mon, Sep 7, 2009 at 5:05 AM, Vincent de Phily
>> wrote:
>> > Table "public.message"
>> > Column | Type |
Hey folks,
We are looking to optimize the query I was talking about last week
which is killing our system.
We have explain and analyze which tell us about the cost of a query
time-wise, but what does one use to determine (and trace / predict?)
memory consumption?
thanks,
-Alan
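EXPLAIN does not report memory directly, but as a rough model each sort or hash node in a plan can claim up to work_mem. A hedged way to observe actual sort memory (table and column names are placeholders; the exact output format varies by version):

```sql
-- Sketch only: EXPLAIN ANALYZE on a sorting query reports the memory
-- actually used, e.g. "Sort Method: quicksort  Memory: 1024kB".
SET work_mem = '16MB';
EXPLAIN ANALYZE
SELECT * FROM message ORDER BY id;
```

Multiplying work_mem by the number of sort/hash nodes and by expected concurrent queries gives a crude upper bound on per-query memory.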
* solAris:
> Also, average time to search for a query in a table is taking about 15
> seconds. I have done indexing but the time is not reducing.
> Is there any way to reduce the time to less than 1 sec ???
How are your queries structured? Do you just compare values? Do you
perform range qu
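The question about query structure matters because a btree index is only usable when the predicate matches what btree supports. A hedged illustration (all names hypothetical):

```sql
-- Sketch only: a plain btree index helps equality and range predicates,
-- but not a leading-wildcard pattern match.
CREATE INDEX idx_t_val ON t (val);

EXPLAIN SELECT * FROM t WHERE val = 42;               -- index scan possible
EXPLAIN SELECT * FROM t WHERE val BETWEEN 10 AND 20;  -- index scan possible
EXPLAIN SELECT * FROM t WHERE val::text LIKE '%4%';   -- full scan: the
-- expression and operator do not match the index
```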
Not only is that slow, it is also limited, as you can see. Use something like
http://gjsql.wordpress.com/2009/04/19/how-to-speed-up-index-on-bytea-text-etc/
instead.