After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] belched out:
> We will be recommending to our client the use of the PostgreSQL database
> as compared to MS SQL Server. We are targeting Red Hat Linux ES v2.1,
> PostgreSQL v7.3.4, and PostgreSQL ODBC 07.03.0100.
>
> We would like to know the
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>> Define "no longer works well".
> Well it seems to completely bar the use of a straight merge join between two
> index scans:
Hmmm ... [squints] ... it's not supposed to do that ... [digs] ... yeah,
there's something
Dear all
We will be recommending to our client the use of the PostgreSQL database as
compared to MS SQL Server. We are targeting Red Hat Linux ES v2.1, PostgreSQL
v7.3.4, and PostgreSQL ODBC 07.03.0100.
We would like to know the minimum specs required for our target below. The
minimum specs refer
4) Currently, pg_dump does *not* back up statistics settings.
Is this a TODO?
Oops - sorry I thought you meant 'pg_dump does not back up statistics'.
Probably still should be a TODO :)
Chris
---(end of broadcast)---
TIP 2: you can get off al
4) Currently, pg_dump does *not* back up statistics settings.
Is this a TODO?
Chris
Tom Lane <[EMAIL PROTECTED]> writes:
> Define "no longer works well". varchar doesn't have its own comparison
> operators anymore, but AFAIK that makes no difference.
Well it seems to completely bar the use of a straight merge join between two
index scans:
test=# set enable_seqscan = off;
SET
Greg Stark <[EMAIL PROTECTED]> writes:
> Huh. The following shows something strange. It seems joining on two varchars
> no longer works well. Instead the optimizer has to convert both columns to
> text.
Define "no longer works well". varchar doesn't have its own comparison
operators anymore, but AFAIK that makes no difference.
Josh Berkus <[EMAIL PROTECTED]> writes:
> Oh, good. Was this a 7.4 improvement?
No, it was in 7.3
-Neil
Neil,
> > 1) to keep it working, you will probably need to run ANALYZE more
> >often than you have been;
>
> I'm not sure why this would be the case -- can you elaborate?
For the more granular stats to be useful, they have to be accurate; otherwise
you'll go back to a nestloop as soon as th
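In practice that means re-running ANALYZE on the relevant tables regularly so the finer-grained histograms stay current. A minimal sketch (the table name is illustrative, borrowed from elsewhere in the thread):

```sql
-- Refresh planner statistics so the more granular histogram stays accurate.
ANALYZE quotes;

-- Or fold it into routine maintenance:
VACUUM ANALYZE quotes;
```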
Greg Stark <[EMAIL PROTECTED]> writes:
> Huh. The following shows something strange.
Worse, with enable_hashjoin off it's even more obvious something's broken:
test=# set enable_hashjoin = off;
SET
test=# explain select * from a,b where a.x=b.x;
QUERY PLAN
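The truncated session above can be reconstructed as a self-contained repro. This is a sketch: only the SELECT appears in the thread, so the table definitions and the plan text in the comment are assumptions about 7.4-era behavior, not quotes from the thread:

```sql
-- Assumed setup; the thread shows only the SELECT, not the DDL.
CREATE TABLE a (x varchar(10));
CREATE TABLE b (x varchar(10));
CREATE INDEX a_x_idx ON a (x);
CREATE INDEX b_x_idx ON b (x);

SET enable_seqscan = off;
SET enable_hashjoin = off;
EXPLAIN SELECT * FROM a, b WHERE a.x = b.x;
-- The complaint: the merge condition comes out as a cast to text,
-- e.g. Merge Cond: (("outer".x)::text = ("inner".x)::text),
-- rather than comparing the varchar columns directly.
```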
Jared Carr <[EMAIL PROTECTED]> writes:
> Greg Stark wrote:
>
> > Well it looks like you have something strange going on. What data type is
> > car_id in each table?
> >
> car_id is a varchar(10) in both tables.
Huh. The following shows something strange. It seems joining on two varchars
no longer works well. Instead the optimizer has to convert both columns to text.
Josh Berkus <[EMAIL PROTECTED]> writes:
> 1) to keep it working, you will probably need to run ANALYZE more
>often than you have been;
I'm not sure why this would be the case -- can you elaborate?
> 4) Currently, pg_dump does *not* back up statistics settings.
Yes, it does.
-Neil
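For reference, a per-column statistics target shows up in a dump as an ALTER TABLE statement. A sketch of what a 7.x-era pg_dump emits for a non-default target (table and column names are illustrative, not from the thread):

```sql
-- Sketch of the statement pg_dump emits when a column's statistics
-- target has been changed from the default (names are illustrative):
ALTER TABLE ONLY quotes ALTER COLUMN car_id SET STATISTICS 100;
```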
I took advantage of last weekend to upgrade from 7.2.4 to 7.4.0 on a
new faster box.
Now I'm trying to implement pg_autovacuum. It seems to work ok, but
after about an hour or so, it does nothing. The process still is
running, but nothing is sent to the log file.
I'm running the daemon as distr
Greg Stark wrote:
> Jared Carr <[EMAIL PROTECTED]> writes:
>> Greg Stark wrote:
>>> Merge Cond: ("outer"."?column7?" = "inner"."?column5?")
> Well it looks like you have something strange going on. What data type is
> car_id in each table?
car_id is a varchar(10) in both tables.
Jared Carr <[EMAIL PROTECTED]> writes:
> Greg Stark wrote:
>
> >
> >> Merge Cond: ("outer"."?column7?" = "inner"."?column5?")
> >>
> >
> >Well it looks like you have something strange going on. What data type is
> > car_id in each table?
> car_id is a varchar(10) in both tables.
Well for
Jeremiah,
> Thanks to all, I had already run ANALYZE. But the STATISTICS setting
> seems to have worked. I'm just not sure what it did. Would anyone care
> to explain?
The STATISTICS setting improves the granularity of statistics kept by the
query planner on that column; increasing the granul
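Concretely, the setting is applied per column and only takes effect at the next ANALYZE. A minimal sketch, using the table and column discussed elsewhere in the thread as stand-ins:

```sql
-- Raise the statistics target for the join column (the default is 10 in
-- this era), then re-analyze so the planner sees the finer histogram.
ALTER TABLE quotes ALTER COLUMN car_id SET STATISTICS 100;
ANALYZE quotes;
```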
Greg Stark wrote:
>> Merge Cond: ("outer"."?column7?" = "inner"."?column5?")
> Well it looks like you have something strange going on. What data type is
> car_id in each table?
car_id is a varchar(10) in both tables.
Thanks to all, I had already run ANALYZE. But the STATISTICS setting
seems to have worked. I'm just not sure what it did. Would anyone care
to explain?
On Mon, 2003-12-01 at 13:47, Josh Berkus wrote:
> Jeremiah,
>
> > I've attached the Analyze below. I have no idea why the db thinks there
>
Jared Carr <[EMAIL PROTECTED]> writes:
> Furthermore, I noticed that in the following query plan it is doing the
> sequential scan on quotes first, and then doing the sequential scan on zips.
> IMHO this should be the other way around, since the result set for zips is
> considerably smaller, especially giv
Robert Treat wrote:
> On Mon, 2003-12-01 at 16:44, Jared Carr wrote:
>> I am currently working on optimizing some fairly time-consuming queries
>> on a decently large dataset.
>> The following is the query in question.
>> SELECT z.lat, z.lon, z.city, z.state, q.date_time, c.make, c.model, c.year
>> FROM q
On Mon, 2003-12-01 at 16:44, Jared Carr wrote:
> I am currently working on optimizing some fairly time consuming queries
> on a decently large
> dataset.
>
> The following is the query in question.
>
> SELECT z.lat, z.lon, z.city, z.state, q.date_time, c.make, c.model, c.year
> FROM quotes A
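The usual first diagnostic step for a slow join like this is EXPLAIN ANALYZE, which shows the planner's row-count estimates next to the actual counts. The full query is truncated above, so the join clauses below are placeholders, not from the thread:

```sql
-- Compare estimated vs. actual row counts for each plan node.
-- Join conditions are hypothetical; the thread's query is truncated.
EXPLAIN ANALYZE
SELECT z.lat, z.lon, z.city, z.state, q.date_time, c.make, c.model, c.year
FROM quotes q
JOIN zips z ON z.zip = q.zip         -- hypothetical join condition
JOIN cars c ON c.car_id = q.car_id;  -- hypothetical join condition
```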
Erik Norvelle <[EMAIL PROTECTED]> writes:
> Here's the query I am running:
> update indethom
> set query_counter = nextval('s2.query_counter_seq'), -- Just for keeping track
> of how fast the query is running
> sectref = (select clavis from s2.sectiones where
> s2.sectio
Erik Norvelle <[EMAIL PROTECTED]> writes:
> update indethom
> set query_counter = nextval('s2.query_counter_seq'), -- Just
> for keeping track of how fast the query is running
> sectref = (select clavis from s2.sectiones where
> s2.sectiones.nomeope
On Tue, 2 Dec 2003, Erik Norvelle wrote:
> ** My question has to do with whether or not I am getting maximal speed
> out of PostgreSQL, or whether I need to perform further optimizations.
> I am currently getting about 200,000 updates per hour, and updating the
> entire 10 million rows thus requi
Folks:
I'm running a query which is designed to generate a foreign key for a table of approx. 10 million records (I've mentioned this in an earlier posting). The table is called "indethom", and each row contains a single word from the works of St. Thomas Aquinas, along with grammatical data about
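One common speedup for this kind of per-row correlated subselect is to rewrite it as a joined UPDATE ... FROM, so the lookup table is joined once rather than probed once per row. A sketch under assumptions: the thread's WHERE clause is truncated, so the join predicate here is hypothetical:

```sql
-- Joined rewrite of the correlated-subselect UPDATE. The join column is
-- hypothetical; the actual predicate is truncated in the thread.
UPDATE indethom
SET query_counter = nextval('s2.query_counter_seq'),
    sectref = s.clavis
FROM s2.sectiones s
WHERE s.section_key = indethom.section_key;  -- hypothetical join column
```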