Todd A. Cook wrote:
Tom Lane wrote:
"Todd A. Cook" writes:
First, the numbers:

PG Version  Load time  pg_database_size  autovac
----------  ---------  ----------------  -------
8.2.13      179 min    92,807,992,820    on
8.3.7       …
Tom Lane wrote:
"Todd A. Cook" writes:
First, the numbers:

PG Version  Load time  pg_database_size  autovac
----------  ---------  ----------------  -------
8.2.13      179 min    92,807,992,820    on
8.3.7       180 min    84,048,744,044    …
Vick Khera wrote:
On Wed, Jun 17, 2009 at 10:50 AM, Todd A. Cook wrote:
The loads were all done on the same machine, with the DB going on a pair
of SATA drives in a RAID-0 stripe. The machine has 2 non-HT Xeons and
8GB RAM. maintenance_work_mem was set to 512MB in all three cases.
What if …
Hi,
First, the numbers:
PG Version  Load time  pg_database_size  autovac
----------  ---------  ----------------  -------
8.2.13      179 min    92,807,992,820    on
8.3.7       180 min    84,048,744,044    on (defaults)
8.4b2       206 min    84,0…
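For what it's worth, the pg_database_size column above is what the built-in
function of that name reports; something like this (a sketch, the thread
doesn't show the exact command used) would produce those figures:

  -- raw bytes, as in the table above, plus a human-readable form
  select pg_database_size(current_database()) as bytes,
         pg_size_pretty(pg_database_size(current_database())) as pretty;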
Tom Lane wrote:
It turned out to be a very easy change, so it's done: QUERY isn't a
reserved word anymore.
Thanks for your help. :)
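(As a quick way to verify the change: from 8.4 on, pg_get_keywords() reports
each keyword's reservedness, so one can check that QUERY ended up unreserved.)

  select word, catcode, catdesc
  from pg_get_keywords()
  where word = 'query';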
-- todd
Hi,
I saw the item in the release notes about the new "return query" syntax in
pl/pgsql, but I didn't see any note about "query" being reserved now. Perhaps
an explicit mention should be added?

I loaded a dump from 8.2.4 into 8.3b2 without error. However, every function
that uses "query" as …
Tom Lane wrote:
Hmm. One of the things that's on my TODO list is to make the planner
smarter about drilling down into sub-selects to extract statistics.
I think that's what's called for here, but your example has eliminated
all the interesting details. Can you show us the actual query, its
EXPLAIN …
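The shape in question is roughly this (a sketch with hypothetical names,
assuming a table big_tab(val integer, flag boolean)): when the aggregate sits
on top of a sub-select, the planner of that era didn't drill down to the
underlying column statistics, so the group-count estimate was a guess:

  explain
  select val, count(*)
  from (select val from big_tab where flag) ss
  group by val;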
Tom Lane wrote:
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
oom_test=> explain select val,count(*) from oom_tab group by val;
                     QUERY PLAN
---------------------------------------------------
 HashAggregate  (cost=1163446.…
Tom Lane wrote:
Misestimated hash aggregation, perhaps? What is the query and what does
EXPLAIN show for it? What have you got work_mem set to?
oom_test=> \d oom_tab
   Table "public.oom_tab"
 Column |  Type   | Modifiers
--------+---------+-----------
 val    | integer |

oom_test=> explain select val,count(*) from oom_tab group by val;
…
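For anyone who lands here with the same symptom: hash aggregation is chosen
and sized from the planner's estimate of the number of groups, so a badly low
estimate can carry the hash table far past work_mem. A quick way to put the
estimate next to reality (the table name is from this thread; the statistics
target is illustrative):

  show work_mem;
  -- note: explain analyze actually runs the query
  explain analyze select val, count(*) from oom_tab group by val;

  -- if the estimated distinct count is far below the actual one,
  -- better statistics may help the planner avoid the hash plan
  alter table oom_tab alter column val set statistics 1000;
  analyze oom_tab;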
Hi,
I am consistently running into out-of-memory issues in 8.1.4 running on
RHEL3 and 8.0.5 on RHEL4. The logs show entries like this:
AggContext: -2130714624 total in 271 blocks; 9688 free (269 chunks); -2130724312 used
TupleHashTable: 893902872 total in 119 blocks; 1088688 free (449 chunks); …
…n or larger, that's a lot of data to store.
On Oct 2, 2005, at 12:14 PM, Todd A. Cook wrote:
Ben wrote:
Just the number of bits, not which ones. Basically, the hamming
distance.
I see. Could you pre-compute the bit counts for the vectors in the
table?
You could count the …
Hi,
Try breaking the vector into 4 bigint columns and building a multi-column
index, with index columns going from the most evenly distributed to the
least. Depending on the distribution of your data, you may only need 2
or 3 columns in the index. If you can cluster the table in that order,
it …
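A sketch of the layout being suggested (hypothetical names; assumes 256-bit
vectors, so four 64-bit words):

  -- the vector split across four bigint columns
  create table vectors (
      id bigint primary key,
      w0 bigint not null,  -- bits   0..63: put the most evenly
      w1 bigint not null,  -- bits  64..127  distributed word first
      w2 bigint not null,  -- bits 128..191
      w3 bigint not null   -- bits 192..255
  );

  create index vectors_words_idx on vectors (w0, w1, w2, w3);

  -- physically order the table the same way (8.x-era syntax:
  -- "cluster vectors_words_idx on vectors")
  cluster vectors using vectors_words_idx;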
… Can you give me an example with some reasonably sized vectors?
On Oct 2, 2005, at 10:59 AM, Todd A. Cook wrote:
Hi,
Try breaking the vector into 4 bigint columns and building a multi-column
index, with index columns going from the most evenly distributed to the
least. Depending on the …
Ben wrote:
Just the number of bits, not which ones. Basically, the hamming distance.
I see. Could you pre-compute the bit counts for the vectors in the table?
You could count the bits in the search vector as Martijn suggested, and then
do a lookup based on the count.
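A sketch of that pre-computation (hypothetical names, reusing the vectors
table from the earlier sketch; assume the bit count is filled in at load
time, since 8.x has no built-in popcount):

  -- keep a precomputed set-bit count next to each vector
  alter table vectors add column nbits integer;
  create index vectors_nbits_idx on vectors (nbits);

  -- "do a lookup based on the count": if the search vector
  -- has, say, 42 bits set
  select id from vectors where nbits = 42;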
-- todd