the number of index-based lookups were going to be more expensive than reading
the entire table.
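A quick way to see that trade-off for yourself is to compare the planner's cost estimates with and without sequential scans allowed. The table and column names below are only illustrative, not from the query under discussion:

-- Estimated cost when the planner is free to choose a sequential scan
EXPLAIN SELECT * FROM orders WHERE status = 'shipped';

-- Discourage sequential scans to see the cost of the index-based alternative
SET enable_seqscan = off;
EXPLAIN SELECT * FROM orders WHERE status = 'shipped';
RESET enable_seqscan;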
Regards,
Stephen Denne.
pages. More likely you'd be able to determine that through a few hundred pages.
If the table was clustered by an index on that field, you'd have to read 4000
pages.
Is this question completely unrelated to PostgreSQL implementation reality, or
something worth considering?
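For what it's worth, here is a rough sketch (with made-up table, column, and index names) of how the physical ordering PostgreSQL actually tracks can be inspected and changed; pg_stats.correlation is near 1.0 when a column's values are laid out on disk in roughly sorted order:

SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'readings' AND attname = 'fips';

-- Rewrite the table in the order of an existing index (names are made up)
CLUSTER readings USING readings_fips_idx;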
Regards,
Stephen Denne.
> What do you mean when you say "Don't top post"???
http://en.wikipedia.org/wiki/Posting_style
> grateful.
SELECT small.fips, small.geom, small.name, SUM(huge.value)
FROM small JOIN huge ON huge.fips = small.fips
WHERE huge.pollutant = 'co'
GROUP BY small.fips, small.geom, small.name
HAVING SUM(huge.value) > 500;
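Depending on how large huge is, an index covering the filter and join columns may help; the index below is only a suggestion with an invented name, not something from your original message:

CREATE INDEX huge_pollutant_fips_idx ON huge (pollutant, fips);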
Regards,
Stephen Denne
> What you're looking for is the equivalent to Oracle's external
> tables which invoke sqlldr every time you access them in the
> background. No such animal in the pg universe that I know of.
There was a similar discussion of this on -hackers in April. Closest to this
idea was
http://archive
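(For anyone reading this later: the foreign data wrapper support added in PostgreSQL 9.1 gets reasonably close. A rough sketch using the file_fdw contrib module, with an invented file path and column list, which re-reads the file on each access much like an Oracle external table:)

CREATE EXTENSION file_fdw;
CREATE SERVER csv_server FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE external_load (
    id      integer,
    payload text
) SERVER csv_server
  OPTIONS (filename '/path/to/load.csv', format 'csv', header 'true');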
chosen plan is different, and that portion is only performed once. The
Materialize part was expected to be looped through 12 times, but it went
through 3174 times.
In the third plan, it isn't under a Materialize, and is expected to loop 6
times. It loops 3174 times.
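When loop counts are off by that much, the usual first suspect is coarse or stale statistics on the join column. A hedged sketch of what I would try first, with placeholder names, though it may not apply to these particular plans:

ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 1000;
ANALYZE some_table;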
Hopefully others can pro
that password in the service,
- started the service,
- that worked,
- stopped the service,
- closed the services window,
- went back to the installer error message,
- clicked retry,
- which succeeded in starting the service, and completing the upgrade.
Regards,
Stephen Denne.
.nz/pgQuilt.png
Application: http://www.datacute.co.nz/pgQuilt.apk
Cheers,
Stephen Denne.
t editing the JDBC connection string, especially on a
> puny phone keyboard. I think it'd be easier to enter hostname, database,
> user, and password in separate fields.
>
> Interesting little idea, though, and seems reasonably well put-together.
>
> Josh
>
> On J
that we can verify whether our partially
completed database move process is going to result in a database that starts up
ok?
Regards, Stephen Denne.
Thanks for sharing your experience and thoughts, Venkat.
Venkat Balaji said:
> We are performing backups to our production server exactly the same way. We
> have been through some problems while restoring and bringing up the database.
> If you are planning to take initial complete rsync with sub