Thank you all for the very helpful advice. Upping work_mem made it possible
for me to generate the table within this century without bringing the server
to a near standstill. I have not yet experimented with GROUP BY, but I'll
do this next.
Cheers,
Kynn
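For readers landing here from the archives, the usual pattern is to raise
work_mem only for the session doing the heavy lifting rather than server-wide.
A minimal sketch, with a placeholder value and hypothetical table/column names
(t, zipk) borrowed from the queries later in the thread:

-- Raise work_mem for this session only; 256MB is a placeholder, not a
-- value recommended anywhere in this thread.
SET work_mem = '256MB';

-- Hypothetical shape of the expensive step: with more work_mem, the
-- sort/hash behind the GROUP BY can run in memory instead of spilling to disk.
CREATE TABLE word_counts AS
SELECT zipk, count(*) AS n
  FROM t
 GROUP BY zipk;

-- Return to the server default for the rest of the session.
RESET work_mem;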
's uncanny. Even printing it
out to a file takes forever, let alone creating an index for it.
Any words of wisdom on how to speed this up would be appreciated.
TIA!
Kynn
Andreas, Heikki:
Thanks!
Kynn
how long it took to execute?
Thanks!
Kynn
the other?
TIA!
Kynn
On Fri, Mar 14, 2008 at 2:59 PM, Oleg Bartunov <[EMAIL PROTECTED]> wrote:
> have you seen contrib/hstore ? You can have one table with common
> attributes
> and hide others in hstore
>
That's interesting. I'll check it out. Thanks!
Kynn
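A minimal sketch of the contrib/hstore idea, with invented table and attribute
names; CREATE EXTENSION is the modern (9.1+) way to load it, whereas at the
time of this thread one ran the contrib SQL script instead:

CREATE EXTENSION IF NOT EXISTS hstore;

-- Common attributes as ordinary columns; rare/optional ones in hstore.
CREATE TABLE item (
    id    serial PRIMARY KEY,
    name  text NOT NULL,
    attrs hstore            -- exceptional attributes live here
);

INSERT INTO item (name, attrs)
VALUES ('example', hstore('color', 'red') || hstore('weight', '12'));

-- Rows without the attribute simply return NULL.
SELECT name, attrs -> 'color' AS color FROM item;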
On Fri, Mar 14, 2008 at 3:46 PM, Heikki Linnakangas <[EMAIL PROTECTED]>
wrote:
>
> From a performance point of view, I would go with a single table with
> NULL fields on PostgreSQL.
Wow. I'm so glad I asked! Thank you very much!
Kynn
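To make Heikki's suggestion concrete, here is a hedged sketch (names invented)
of the two layouts being weighed; in PostgreSQL a NULL costs only a bit in the
row's null bitmap, which is why the single-table version tends to be cheap:

-- Option A: one table, exceptional attribute as a nullable column.
CREATE TABLE term (
    id        serial PRIMARY KEY,
    word      text NOT NULL,
    rare_note text                 -- NULL for the vast majority of rows
);

-- Option B: push the exceptional attribute into a side table.
CREATE TABLE term_note (
    term_id   int PRIMARY KEY REFERENCES term(id),
    rare_note text NOT NULL
);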
How would you go about deciding whether the number of exceptional cases is
small enough to warrant a second table? Of course, one could profile the
various possible scenarios systematically, but as a first approximation,
what's your rule of thumb?
TIA!
Kynn
> accurately reflects how the real query will go down.
>
That's a very helpful reminder. Thanks.
Kynn
On Sat, Mar 8, 2008 at 1:01 PM, Heikki Linnakangas <[EMAIL PROTECTED]>
wrote:
> Kynn Jones wrote:
> > Hi!
> >
> > As part of a data warehousing project, I need to pre-process data
> > downloaded from an external source, in the form of several large flat files
y something that can be done with plain SQL (even if to do
this I must produce additional helper tables, indices, etc.), or must I
write a stored procedure to implement it?
TIA!
Kynn
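One common plain-SQL pattern for this kind of pre-processing, sketched here
with a made-up file path and column names, is to COPY the flat file into a
staging table and transform it with ordinary INSERT ... SELECT:

-- Staging table matching the flat file's layout (columns are hypothetical).
CREATE TABLE staging_raw (
    word text,
    zipk int
);

-- Server-side bulk load; the path is a placeholder.
COPY staging_raw FROM '/path/to/flatfile.tsv';

-- Hypothetical target table, then a plain-SQL transformation step.
CREATE TABLE t (
    word text NOT NULL,
    zipk int  NOT NULL
);

INSERT INTO t (word, zipk)
SELECT DISTINCT word, zipk
  FROM staging_raw
 WHERE word IS NOT NULL;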
On Mon, Feb 25, 2008 at 11:56 AM, Matthew <[EMAIL PROTECTED]> wrote:
> On Mon, 25 Feb 2008, Kynn Jones wrote:
> > This is just GREAT!!! It fits the problem to a tee.
>
> It makes the queries quick then?
It is good that you ask. Clearly you know the story: a brilliant-
On Mon, Feb 25, 2008 at 8:45 AM, Matthew <[EMAIL PROTECTED]> wrote:
> On Fri, 22 Feb 2008, Kynn Jones wrote:
> > Hi. I'm trying to optimize...
> >
> > (Q1) SELECT a1.word, a2.word
> > FROM T a1 JOIN T a2 USING ( zipk )
> > WHERE a1.t
than that, because in the actual application
there is not just one T but several dozen, similar to what would happen to
the schema in the multilingual dictionary analogy if we wanted to
pre-segregate the words according to some categories, say a T for animals, a
T for fruits, a T for verbs, a T for professions, etc.)
(I need to do a bit more work before I can post the EXPLAIN results.)
kynn
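For the EXPLAIN results mentioned above, the usual incantation looks like the
sketch below; since the WHERE clause in the quoted query is cut off, the
condition here is only a placeholder:

EXPLAIN ANALYZE
SELECT a1.word, a2.word
  FROM t a1 JOIN t a2 USING ( zipk )
 WHERE a1.word = 'example';   -- placeholder; the original condition is truncated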
On Fri, Feb 22, 2008 at 8:48 PM, Dean Gibson (DB Administrator) <
[EMAIL PROTECTED]> wrote:
> On 2008-02-22 12:49, Kynn Jones wrote:
> > Of course, I expect that using views V and V... would
> > result in a loss in performance relative to a version that used bona
What design would make querying V1, V2, V3 ... as fast as
possible? Is it possible to approach the performance of the design that
uses bona fide tables T1, T2, T3, ... instead of views V1, V2, V3 ...?
Thank you very much for reading this long post, and many thanks in advance
for your comments!
K
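To make the question concrete, here is a hedged sketch (all names invented) of
the view-based layout being asked about: one underlying table with a category
column, per-category views standing in for T1, T2, T3, ..., and partial
indexes so a query against a view can still use an index scoped to that
category's rows:

-- One underlying table holding every category.
CREATE TABLE words (
    word     text NOT NULL,
    zipk     int  NOT NULL,
    category text NOT NULL
);

-- Per-category views playing the role of T1, T2, ...
CREATE VIEW v_animals AS SELECT word, zipk FROM words WHERE category = 'animal';
CREATE VIEW v_fruits  AS SELECT word, zipk FROM words WHERE category = 'fruit';

-- Partial indexes scoped to each category.
CREATE INDEX words_animal_zipk ON words (zipk) WHERE category = 'animal';
CREATE INDEX words_fruit_zipk  ON words (zipk) WHERE category = 'fruit';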
Consider these two very similar schemas:
Schema 1:
CREATE TABLE foo (
    id serial PRIMARY KEY,
    frobnitz character varying(100) NOT NULL UNIQUE
);
CREATE TABLE bar (
    id serial PRIMARY KEY,
    foo_id int REFERENCES foo(id)
);
Schema 2:
CREATE TABLE foo (
    frobnitz character varying(100) PRIMARY KEY
On 5/24/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> <[EMAIL PROTECTED]> writes:
> > Limit  (cost=19676.75..21327.99 rows=6000 width=84)
> >   ->  Hash Join  (cost=19676.75..1062244.81 rows=3788315 width=84)
> >         Hash Cond: (upper(("outer".id)::text) = upper(("inner".id)::text))
> >         ->  Seq Sc
[ I had a problem with my mailer when I first sent this. My apologies
for any repeats. ]
I want to optimize this simple join:
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )
huge_table has about 2.5 million records, can be assumed as fixed, and
has the following index:
CREATE INDEX huge_table_index ON huge_table( UPPER( id ) );
...while tiny_table
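As a hedged illustration of the setup (tiny_table's definition is cut off
above, so the index on it below is an assumption), matching expression indexes
on both sides, fresh statistics, and an EXPLAIN ANALYZE are the usual first
steps for seeing what plan the case-insensitive join actually gets:

-- Assumption: tiny_table.id is a text-like column, mirroring huge_table.
CREATE INDEX tiny_table_index ON tiny_table( UPPER( id ) );

-- Refresh statistics so the planner knows the expression's distribution.
ANALYZE huge_table;
ANALYZE tiny_table;

-- Inspect the actual plan and row counts for the join.
EXPLAIN ANALYZE
SELECT *
  FROM huge_table h
  JOIN tiny_table t ON UPPER( h.id ) = UPPER( t.id );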
On 5/24/06, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> <[EMAIL PROTECTED]> writes:
> > Limit  (cost=19676.75..21327.99 rows=6000 width=84)
> >   ->  Hash Join  (cost=19676.75..1062244.81 rows=3788315 width=84)
> >         Hash Cond: (upper(("outer".id)::text) = upper(("inner".id)::text))
> >