Cosimo Streppone wrote:
Mark Kirkwood wrote:
Cosimo Streppone wrote:
# Config
/etc/sysctl.conf:
kernel.shmall = 786432000
kernel.shmmax = 786432000
I think you have a problem here.
kernel.shmmax should *not* be set to an amount of RAM, but
Sorry, I thought
/16)) /* max shm system wide (pages) */
#define SHMSEG SHMMNI
Hope that helps
Best wishes
Mark
ably high. I can't see much reason for
setting it bigger than (physical RAM in bytes)/4096 myself. So in your
case this is 2*(1024*1024*1024)/4096 = 524288
Cheers
Mark
used postgresql with xfs on mandrake 9.0/9.1 a while ago -
reliability was great, performance seemed better than ext3. I didn't
compare with reiserfs - the only time I have ever lost data from a Linux
box has been when I used reiserfs, hence I am not a fan :-(
best wishes
r the most bang for the buck
are appreciated!
( I have already found: http://www.powerpostgresql.com/PerfList/ and it has
been a very helpful source of suggestions. )
Mark
md64 postgresql port. Is anyone else
using postgresql on an AMD64 machine without similar issues?
TIA
Mark
I'm running the Redhat Version of Postgresql which came pre-installed
with Redhat ES. It's version number is 7.3.10-1. I'm not sure what
options it was compiled with. Is there a way for me to tell? Should I
just compile my own postgresql for this platform?
Thanks
Mark
On Mon, 2
would expect less than 5 minutes on
reasonable hardware.
You may have dead tuple bloat - can you post the output of 'ANALYZE
VERBOSE mytable' ?
Cheers
Mark
Apologies - I should have said output of 'VACUUM VERBOSE mytable'.
(been using 8.1, which displays dead tuple info in ANALYZE...).
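A minimal sketch of what was being asked for (the table name is taken
from Yves' output below):

-- In 8.0 the dead-tuple detail comes from VACUUM VERBOSE rather than
-- ANALYZE VERBOSE; look for the dead row version counts in its INFO output.
VACUUM VERBOSE tblPrintjobs;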
Mark
Yves Vindevogel wrote:
rvponp=# analyze verbose tblPrintjobs ;
INFO: analyzing "public.tblprintjobs"
INFO: "tblprintjobs":
Neil Conway wrote:
> Mark Stosberg wrote:
>> I've used PQA to analyze my queries and happy overall with how they are
>> running. About 55% of the query time is going to variations of the pet
>> searching query, which seems like where it should be going. The query is
as a
'supported' package for RH7.2 anyway.
-- Mark Lewis
On Thu, 2005-06-16 at 07:46 -0700, Todd Landfried wrote:
> Yes, it is 7.2. Why? because an older version of our software runs on
> RH7.3 and that was the latest supported release of Postgresql for
> RH7.3 (that we can find)
On Fri, 2005-07-08 at 16:43 +0200, Enrico Weigelt wrote:
> * PFC <[EMAIL PROTECTED]> wrote:
>
>
> > For Python it's the reverse : the MySQL driver is slow and dumb,
> > and the postgres driver (psycopg 2) is super fast, handles all
> > quoting,
> > and knows about type conversions, i
sr.name,sr.seq_region_id,
sr.length, 1 FROM seq_region sr WHERE sr.name = $1 AND sr.coord_system_id = $2]
LOG: duration: 0.164 ms
I've attached a patch in case anyone finds it useful.
-Mark
*** postgresql-8.0.3/src/backend/tcop/postgres.c 2005-07-13
09:42:04.997669193 +0100
--- p
36GB drives.
But here are some numbers for my 12-disk lvm2 striped volume.
$ time dd if=/dev/zero of=bigfile3 bs=8k count=400
400+0 records in
400+0 records out
real  1m17.059s
user  0m1.479s
sys   0m41.293s
Mark
On Thu, 21 Jul 2005 16:14:47 -0700
"Luke Lonergan&quo
the size of the table by a large percentage, you will
want to ANALYZE periodically, as an optimal plan for a small table may
be a disaster for a large table, and PostgreSQL won't switch plans
unless you run ANALYZE.
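A minimal sketch of that advice ("mytable" is a placeholder name):

-- After a bulk load grows the table substantially, refresh the
-- planner's statistics so it can switch to a large-table plan:
ANALYZE mytable;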
-- Mark
On Tue, 2005-07-26 at 14:51 -0500, John A Meinel wrote:
> Matth
> performance is also very poor, though generally faster than using HW RAID.
Are there any recommendations for Qlogic controllers on Linux, scsi or
fiber channel? I might be able to get my hands on some. I have pci-x slots
for AMD, Itanium, or POWER5 if the architecture makes a difference.
Mark
ly want
all the rows?
Cheers
Mark
to be mindful of . . . Truncate is NOT accounted for with this,
and unfortunately the rule system doesn't allow truncate operations so you
can't work around it that way.
'njoy,
Mark
On 8/10/05 11:52 PM, "Gavin Sherry" <[EMAIL PROTECTED]> wrote:
> Hi Dan,
>
performance with
this particular hardware and workload?
Ah well. Thought myself in circles and have no real conclusions to show
for it. Posting anyway, maybe this will give somebody some ideas to
work with.
-- Mark Lewis
On Fri, 2005-08-12 at 08:47 +, Steve Poe wrote:
> Paul,
>
> Bef
ch will really test any kind of soundex matching
if you are using that.
http://www.ncbi.nlm.nih.gov/Omim/omimfaq.html#download
Unfortunately it only comes as a flat text file, but is
very easy to parse.
And if you start reading it, you'll probably learn quite
a lot of things you really did
eative alternatives would be greatly
appreciated. :)
'njoy,
Mark
--
Writing software requires an intelligent person,
creating functional art requires an artist.
-- Unknown
ght of building up a batch of queries and creating a transaction from
them.
I've been looking into memcached as a persistent object store as well and
hadn't seen the reactor pattern yet. Still trying to get my puny brain
around that one.
Again, thanks for the help.
'njoy,
Mark
nously, but it would require that your libraries support this.
I understand the concepts well enough, maybe I'll put together a patch. It
would be much lower overhead than managing all those threads. Looks like
it's gonna be a fun weekend.
Thanks again for all the great feedback.
'nj
vation system for MySQL app of
the year. Needless to say they weren't too happy when they heard we might
be switching DBs.
'njoy,
Mark
On 8/19/05 1:12 PM, "J. Andrew Rogers" <[EMAIL PROTECTED]> wrote:
> On 8/19/05 1:24 AM, "Mark Cotner" <[EMAIL PROT
e) difference. I feel
like there might be a way of using an index on both 'completion_time'
and 'start_time', but can't put a temporal lobe on the details.
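For what it's worth, a sketch of the composite index being mused about
(the table name "jobs" is an assumption; the excerpt doesn't show it):

-- One two-column index can serve predicates on both columns:
CREATE INDEX jobs_start_completion_idx ON jobs (start_time, completion_time);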
Mark
s=8 width=4)
-> Nested Loop (cost=0.00..243598.00 rows=8 width=4)
-> Index Scan Backward using c_pkey on c
(cost=0.00..1518.00 rows=8 width=4)
-> Index Scan using b_on_c on b (cost=0.00..3.01
rows=1 width=4)
Index Cond: (b.c_id = "outer".id)
Tom Lane wrote:
Mark Kirkwood <[EMAIL PROTECTED]> writes:
What is interesting is why this plan is being rejected...
Which PG version are you using exactly? That mistake looks like an
artifact of the 8.0 "fuzzy plan cost" patch, which we fixed recently:
http://archives.post
0
rows=8 width=4)
-> Index Scan using b_on_c on b (cost=0.00..1490.00
rows=8 width=4)
(6 rows)
The non default server params of relevance are:
shared_buffers = 12000
effective_cache_size = 10
work_mem/sort_mem = 20480
I did wonder if the highish sort_mem might
50.00 rows=8 width=4)
-> Index Scan using b_on_c on b (cost=0.00..3.01
rows=1 width=4)
Index Cond: (b.c_id = "outer".id)
(6 rows)
This is 8.1devel from today.
regards
Mark
It would be good to see EXPLAIN ANALYZE output for the three queries
below (the real vs. estimated row counts being of interest).
The number of pages in your address table might be interesting to know too.
regards
Mark
Matthew Sackman wrote (with a fair bit of snippage):
explain select
treet from 1 for 1)) = 'A'
group by street, locality_1, locality_2, city
order by street
limit 20 offset 0
and have the results very quickly.
This sort of query will be handled nicely in 8.1 - it has bitmap and/or
processing to make use of multiple indexes. N
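A sketch of what that looks like for the Nottingham query quoted below,
assuming single-column indexes exist on city, locality_1 and locality_2:

EXPLAIN SELECT street, locality_1, locality_2, city
FROM address
WHERE city = 'Nottingham'
   OR locality_1 = 'Nottingham'
   OR locality_2 = 'Nottingham';
-- Expected 8.1 plan shape: a Bitmap Heap Scan on address fed by a
-- BitmapOr over three Bitmap Index Scans.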
Mark Kirkwood wrote:
Matthew Sackman wrote:
I need to get to the stage where I can run queries such as:
>
select street, locality_1, locality_2, city from address where (city =
'Nottingham' or locality_2 = 'Nottingham'
or locality_1 = 'Nottingham
reign key constraints. Otherwise, you'll find out
that there's a trigger for a constraint called $3 that's taking up all
of your time, but you won't know what table that constraint is on.
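A sketch of the naming advice (the tables and columns here are hypothetical):

-- A named FK constraint, so its trigger shows up in EXPLAIN ANALYZE
-- under "orders_customer_fk" rather than an anonymous "$3":
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id);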
-- Mark
On Tue, 2005-09-06 at 11:32 -0400, Tom Lane wrote:
> "Brian Choate&qu
ndex store the entire row from the main table regardless
of the column being indexed?
I am running postgresql 8.0.3 on a Pentium 4 with ide hard drives and the
default configuration file settings.
Thanks in advance,
mark
mark=# create sequence test_id_seq;
CREATE SEQUENCE
mark=# create tabl
er of disk passes.
Not that I have anything else useful to add to this discussion, just a
tidbit I remembered from my CS classes back in college :)
-- Mark
On Fri, 2005-09-23 at 13:17 -0400, Tom Lane wrote:
> Ron Peacetree <[EMAIL PROTECTED]> writes:
> > 2= No optimal external sort
K C Lau wrote:
Thank you all for your suggestions. I've tried, with some variations too,
but still no success. The times given are the best of a few repeated
tries on an 8.1 beta 2 db freshly migrated from 8.0.3 on Windows.
A small denormalization, where you mark the row with the l
s (or something
similar, like b+tree tables) would be highly desirable.
From what I have understood from previous discussions, there are
difficulties involved with producing a design that does not cause new
problems...
regards
Mark
Which version of PG are you using? One of the new features for 8.0 was
an improved caching algorithm that was smart enough to avoid letting a
single big query sweep everything else out of cache.
-- Mark Lewis
On Tue, 2005-10-04 at 10:45 -0400, Lane Van Ingen wrote:
> Yes, Stefan, the kind
same browser
settings on the server and client (or the same browser for that matter).
Note that some browsers really suck for large (wide or long) table display!
cheers
Mark
es that (in my case) most of the 2 seconds is being used by Firefox,
which is not very good at formatting the wide output for display.
The figure of about 2-5 seconds seems about right, so your 20-30 seconds
certainly seems high!
cheers
Mark
d(!) etc.
regards
Mark
I choose Polesoft Lockspam to fight spam, and you?
http://www.polesoft.com/refer.html
Nope, not me either.
ave to ask :-) ).
cheers
Mark
Christian Paul B. Cosinas wrote:
Hi Mark
I have so many functions, more than 100 functions in the database :) And I
am dealing with about 3 million records in one database.
And about 100 databases :)
LOL - sorry, misunderstood your previous message to mean you had
identified *one* query
Do you have an index on the date column? Can you post an EXPLAIN
ANALYZE for the slow query?
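Roughly what was being asked for, as a sketch (table and column names
here are hypothetical):

CREATE INDEX mytable_created_idx ON mytable (created);
EXPLAIN ANALYZE
SELECT * FROM mytable WHERE created >= '2005-10-01';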
-- Mark Lewis
On Wed, 2005-10-26 at 13:41 -0700, aurora wrote:
> I am running Postgre 7.4 on FreeBSD. The main table have 2 million
> record (we would like to do at least 10 mil or more). It is ma
ely of one executor doing "SELECT ... FROM my_single_table".
If you alter this to involve more complex joins (e.g. a 4-way star) and
(maybe add a small number of concurrent executors too) - is it still the
case?
Cheers
Mark
Luke Lonergan wrote:
Mark,
On 11/18/05 3:46 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
If you alter this to involve more complex joins (e.g. a 4-way star) and
(maybe add a small number of concurrent executors too) - is it still the
case?
I may not have listened
Luke Lonergan wrote:
Mark,
On 11/18/05 3:46 PM, "Mark Kirkwood" <[EMAIL PROTECTED]> wrote:
If you alter this to involve more complex joins (e.g. a 4-way star) and
(maybe add a small number of concurrent executors too) - is it still the
case?
4-way star, same result, that
Mark Kirkwood wrote:
- I am happy that seqscan is cpu bound after ~110M/s (It's cpu bound on
my old P3 system even earlier than that)
Ahem - after reading Alan's postings I am not so sure, ISTM that there
is some more investigation required
filesystem
created with 32K blocksize (both cases). It might be interesting to see
the effect of using 16K (the default) with the 8K Pg block size, I would
expect this to widen the gap.
Cheers
Mark
Mark Kirkwood wrote:
The test is SELECT 1 FROM table
That should read "The test is SELECT count(1) FROM table"
- by (say) writing a little bit
of code to heap scan the desired relation (sample attached).
Cheers
Mark
/*
* fastcount.c
*
* Do a count that uses considerably less CPU time than an aggregate.
*/
#include "postgres.h"
#include "funcapi.h"
#include "access/he
Luke Lonergan wrote:
Mark,
This is an excellent idea – unfortunately I’m in Maui right now
(Mahalo!) and I’m not getting to testing with this. My first try was
with 8.0.3 and it’s an 8.1 function I presume.
Not to be lazy – but any hint as to how to do the same thing for 8.0?
Yeah, it
Analysis:
           Bandwidth   Percent of max
dd Read    407MB/s     100%
Count(1)   263MB/s     64.6%
HeapScan   383MB/s     94.1%
Cheers
Mark
Luke Lonergan wrote:
Mark,
It would be nice to put some tracers into the executor and see where the
time is going. I'm also curious about the impact of the new 8.1 virtual
tuples in reducing the executor overhead. In this case my bet's on the agg
node itself, what do you thin
3 - I seem to be suffering from a very small
working memory buffer myself right now, I think it's after a day of
working with DB2 ... :-)
Anyway, as I read src/backend/parser/gram.y:6542 - count(*) is
transformed into count(1), so these two are identical.
Cheers (last time tonight, promi
Luke Lonergan wrote:
Mark,
Time: 197870.105 ms
So 198 seconds is the uncached read time with count (Just for clarity,
did you clear the Pg and filesystem caches or unmount / remount the
filesystem?)
Nope - the longer time is due to the "second write" known issue with
Postgres -
25 s
Note that the 'fastcount()' function takes 21 s in both cases - so all
the improvement seems to be from the count overhead reduction.
Cheers
Mark
t of interest - applying Niels' palloc-avoiding changes to
NodeAgg.c and int8.c in 8.0 changes those results to:
8.0 + palloc avoiding patch : 27 s
(I am guessing the remaining 2 s could be shaved off if I backported
8.1's virtual tuples - however that looked like a lot of wo
out a 2U U320 diskpack from whoever supplies you the Supermicro boxes?
I have just noticed Luke's posting - I would second the advice to avoid
SAN - in my experience it's an expensive way to buy storage.
best wishes
Mark
Kevin Brown wrote:
I'll just start by warning that I'm new-ish to postgresql.
I'm running 8.1 installed from source on a Debian Sarge server. I have a
simple query that I believe I've placed the indexes correctly for, and I
still end up with a seq scan. It makes sense, kinda, but it should b
Kevin Brown wrote:
I only had two explicit indexes. One was on to_ship.ordered_product_id and
the other was on ordered_products.paid. ordered_products.id is a primary
key. This is on your query with an index added on suspended_sub:
dli=# explain analyze SELECT ordered_products.product_i
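The added index would be along these lines (a sketch; the excerpt doesn't
show which table suspended_sub is on, so ordered_products is an assumption):

CREATE INDEX ordered_products_suspended_sub_idx
  ON ordered_products (suspended_sub);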
_ship.
Cheers
Mark
fact that all of those
columns are used together in the query suggests that you might do better
with a three-column index on those.
With all three columns indexed individually, you're apparently not
getting the bitmap plan that Mark is hoping for. I imagine this has to
do with the lack
s and some not), I have come to believe that hinting
(or similar) actually *hinders* the development of a great optimizer.
Best wishes
Mark
run
today.
cheers
Mark
David Lang wrote:
On Fri, 16 Dec 2005, Mark Kirkwood wrote:
Right on. Some of these "coerced" plans may perform much better. If
so, we can look at tweaking your runtime config: e.g.
effective_cache_size
random_page_cost
default_statistics_target
to see if said plans can
Jim C. Nasby wrote:
On Wed, Dec 14, 2005 at 08:28:56PM +1300, Mark Kirkwood wrote:
Another interesting thing to try is rebuilding the database ufs
filesystem(s) with 32K blocks and 4K frags (as opposed to 8K/1K or
16K/2K - can't recall the default on 4.x). I found this to give a factor
egy called the "star transformation" where
some of the dimension joins get rewritten as subqueries, then the above
method is used again! This tends to be useful when the cartesian
products would be stupidly large (e.g. "sparse" facts, or few
restriction clauses)
regards
If I understand it correctly, the idea is to successively build up a
list (hash / bitmap) of fact RIDS that will satisfy the query, and when
complete actually perform the join and construct tuples. The goal being
similar in intent to the star join method (i.e. access the fact table
as l
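A sketch of the query shape being described (dim0, dyr and d0key come
from the plan excerpt below; the fact table and dim1 are assumptions):

EXPLAIN SELECT count(*)
FROM fact f
WHERE f.d0key IN (SELECT d0key FROM dim0 WHERE dyr BETWEEN 2010 AND 2015)
  AND f.d1key IN (SELECT d1key FROM dim1 WHERE dattr = 'x');
-- The idea: each dimension restriction is resolved to a bitmap over
-- the fact table's key indexes, the bitmaps are ANDed, and only then
-- are fact tuples fetched and joined.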
Index Cond: ((dyr >= 2010) AND (dyr <= 2015))
-> Bitmap Heap Scan on dim0 cd0 (cost=2.00..2.81 rows=1
width=4) (actual time=0.031..0.031 rows=1 loops=201600)
Recheck Cond: ("outer".d0key = cd0.d0key)
Filter: ((dyr >= 2010) AND (dyr <= 2015))
Simon Riggs wrote:
On Sun, 2005-12-18 at 17:07 +1300, Mark Kirkwood wrote:
Tom Lane wrote:
2. transform joins into subselects, then return subselect rows via an
index bitmap. Joins are performed via a bitmap addition process.
Looks like 8.1 pretty much does this right now:
Good
Simon Riggs wrote:
On Sun, 2005-12-18 at 15:02 +1300, Mark Kirkwood wrote:
Yeah - the quoted method of "make a cartesian product of the dimensions
and then join to the fact all at once" is not actually used (as written)
in many implementations
But it is used in some, whic
My understanding: Teradata and DB2 use this.
FWIW - I think DB2 uses the successive fact RID buildup (i.e method 2),
unfortunately I haven't got a working copy of DB2 in front of me to test.
Cheers
Mark
Mark Kirkwood wrote:
Kevin Brown wrote:
I'll just start by warning that I'm new-ish to postgresql.
I'm running 8.1 installed from source on a Debian Sarge server. I
have a simple query that I believe I've placed the indexes correctly
for, and I still end up with a seq s
Vivek Khera wrote:
and only the
opteron boxes needed to come from sun. add a zero return policy and you
wonder how they expect to keep in business
sorry, i had to vent.
Just out of interest - why did the opterons need to come from Sun?
iggered by the truncate CRON jobs too.
4) Some of our config settings:
shared_buffers = 8192
work_mem = 8192
Total RAM on server is 1 Gig
Basically any advice as to what to look at to avoid this situation would be
greatly appreciated. Is this simply a matter of
On Thursday 05 January 2006 15:12, Qingqing Zhou wrote:
> "Mark Liberman" <[EMAIL PROTECTED]> wrote
>
> > First run, after a night of inactivity:
> >
> > -> Bitmap Index Scan on
> > 1min_events_file_id_begin_idx (cost=0.00..
than a handwaving manner - e.g. pgbench on an older SMP system
showed what looked like about a 10% hit. However the noise level for
pgbench is typically >10%, so a better benchmark on better hardware is
needed.
Cheers
Mark
inly SELECT type queries, the dangers
connected with priority inversion are considerably reduced.
Cheers
Mark
of any task holding a lock that a high priority task
waits on. I guess that would just make it so that instead of HIGH tasks
being effectively reduced to LOW, LOW tasks could be promoted to
HIGH.
-- Mark Lewis
posting this here instead of the GENERAL list...Richard is right, this is more
of a performance question than a general question.
thanks,
Mark Jensen
- Forwarded Message
From: Mark Jensen <[EMAIL PROTECTED]>
To: Richard Huxton
Cc: pgsql-g
than
better disk...)
Cheers
Mark
David Boreham wrote:
These numbers look a bit strange. I am wondering if there is a hardware
problem on one of the drives
or on the controller. Check in syslog for messages about disk timeouts
etc. 100% util but 6 writes/s
is just wrong (unless the drive is a 1980's vintage floppy).
Agreed
n the machine above? Any other settings
that I should be paying particular attention to?
Thanks
Mark
Sent: 04 December 2006 23:29
To: Mark Lonsdale
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Configuration settings for 32GB RAM server
On 4-Dec-06, at 12:10 PM, Mark Lonsdale wrote:
Hi
We are migrating our Postgres 7.3.4 application to postgres 8.1.5 and also
So a RAID controller with a battery-backed write cache can enable its
own write cache, but can't safely enable the write-caches on the disk
drives it manages.
-- Mark Lewis
lls - but
'SELECT FOUND_ROWS()' involves that set of steps too, so I'm not
entirely convinced that doing a 2nd 'SELECT count(*)...' is really any
different in impact.
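For clarity, the two-query pattern being compared (a sketch; "items" is
a placeholder) is the Postgres-side equivalent of MySQL's
SQL_CALC_FOUND_ROWS / FOUND_ROWS():

SELECT * FROM items ORDER BY id LIMIT 20 OFFSET 0;  -- the page
SELECT count(*) FROM items;                         -- the total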
Cheers
Mark
t of variables to consider if you want to settle this
debate once and for all :-)!
Best wishes
Mark
But he's using 8.1.4-- in that version, an explain analyze would list
the time taken to go through triggers, so the fact that we don't see any
of those lines means that it can't be constraint checking, so wouldn't
it have to be the index update overhead?
-- Mark
On Wed, 2006-
I'm wondering if it's possible to shed some light on the
remaining dark shadows of PG performance troubleshooting.
-- Mark Lewis
On Thu, 2006-12-14 at 11:19 -0500, Tom Lane wrote:
> Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> It seems the ti
efault.
Anyone know how this might be improved (short of replacing hardware)?
Updating the BIOS might be worth investigating, and then bugging your
Linux distro mailing list/support etc for more help. (What sort of
motherboard is it?)
Cheers
Mark
?
Any help much appreciated,
Mark
ps. My shared_buffers is set at 5000 (kernel max), and work_mem=8192
Mark
system.
Another factor to consider is that end-user tools (and end users) may
find a date/time dimension helpful.
Best wishes
Mark
strong chance of working again.
Is anyone else using "statement_timeout" as part of an overall
performance plan?
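A minimal sketch of the setting (the value here is illustrative):

-- Abort any statement in this session that runs longer than 30 seconds;
-- the unit is milliseconds and 0 disables the timeout.
SET statement_timeout = 30000;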
Mark
ther people who have deployed high-performance proximity
searches with PostgreSQL would be appreciated!
Mark
Bruno Wolff III wrote:
> On Sat, Feb 03, 2007 at 14:00:26 -0500,
> Mark Stosberg <[EMAIL PROTECTED]> wrote:
>> I'm using geo_distance() from contrib/earthdistance and would like to find a
>> way to speed up the geo distance calculation if possible. This is for a
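The usual speedup is the cube-based variant in the same contrib modules;
a sketch, assuming a pets table with lat/lon columns (the column names
and the 16 km radius are illustrative):

-- Requires contrib/cube and contrib/earthdistance:
CREATE INDEX pets_earth_idx ON pets USING gist (ll_to_earth(lat, lon));

SELECT pet_id
FROM pets
WHERE earth_box(ll_to_earth(45.0, -93.0), 16000) @ ll_to_earth(lat, lon);
-- (@ is the 8.x-era cube containment operator; later releases spell it @>)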
Merlin Moncure wrote:
> On 2/5/07, Mark Stosberg <[EMAIL PROTECTED]> wrote:
>> Bruno Wolff III wrote:
>> > On Sat, Feb 03, 2007 at 14:00:26 -0500,
>> > Mark Stosberg <[EMAIL PROTECTED]> wrote:
>> >> I'm using geo_distance() from contrib/ear
coming up with anything that benchmarked faster.
I'm reading the explain analyze output correctly myself, nearly all of
the time spent is related to the 'pets' table, but I can't see what to do
about it.
Help appreciated!
Mark
Nested Loop (cost=11.82..29.90 rows=1 width=0
Mark Stosberg wrote:
>
> I'm reading the explain analyze output correctly myself, nearly all of
> the time spent is related to the 'pets' table, but I can't see what to do
> about it.
Something about typing that message jarred my brain to think to try:
VACUUM FULL
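Spelled out as a sketch (the ANALYZE is added here as the customary
follow-up; it was not part of the original message):

VACUUM FULL pets;   -- rewrites the table, reclaiming dead-tuple bloat
ANALYZE pets;       -- refresh planner statistics afterwards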