On Thu, Nov 01, 2007 at 02:07:55PM +0100, Palle Girgensohn wrote:
> I have a table "login" with approx 600,000 tuples, a person table with
> approx 10 tuples.
> When running
> select max("when") from login where userid='userid'
> it takes a second or two, but when adding "group by userid" the planner
> decides on using another plan, and it gets *much* faster. See example below.
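For readers skimming the digest, the two query shapes being compared are reconstructed below (a sketch assembled from the quoted report; the GROUP BY variant is the one that got the fast plan):

    -- Slow on the reporter's 8.2 box: planned as a plain aggregate.
    SELECT max("when") FROM login WHERE userid = 'userid';

    -- Much faster: the redundant GROUP BY pushes the planner onto a
    -- different plan for the same result.
    SELECT max("when") FROM login WHERE userid = 'userid' GROUP BY userid;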
In this particular example, this was done to "force" the query on the Linux
box to use the same plan as on the Windows box to prove that - once the
correct plan was chosen - the Linux box could at least MATCH the Windows
box.
That being said, I should mention this: we take certain "core" queries
On 11/1/07, Mark Floyd <[EMAIL PROTECTED]> wrote:
> Hello,
> Dell PowerEdge Energy 2950
> (2) Quad Core Intel Xeon L5320, 2x4MB Cache, 1.86GHz, 1066MHz FSB
> 4GB 667MHz Dual Ranked DIMMs, Energy Smart
>
> PERC 5/i, x8 Backplane, Integrated Controller Card
>
> Hard Drive Configuration: Integrated S
Hello,
I am new to setting up PostgreSQL machines for our operational
environments and would appreciate it if someone could take a look at this
setup; throw tomatoes if it looks too bad. We're expecting an
initial load of about 5 million text meta-data records to our
database, and are expecting
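For a dedicated 8.2 box with 4GB of RAM like the one quoted above, a rough starting point might look like the sketch below (generic era-appropriate values, not figures from this thread; they need validating against the real workload):

    # postgresql.conf -- hedged starting values, dedicated 4GB server
    shared_buffers = 1GB            # ~25% of RAM is a common starting point
    effective_cache_size = 3GB      # estimate of RAM left to the OS cache
    work_mem = 16MB                 # per sort/hash, so raise with care
    maintenance_work_mem = 256MB    # speeds VACUUM and index builds
    checkpoint_segments = 16        # spreads checkpoint I/O during bulk loads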
On Thu, 2007-11-01 at 11:16 -0700, Steve Crawford wrote:
> Magnus Hagander wrote:
> > Ow Mun Heng wrote:
> >>> You're likely better off (performance-wise) putting it on the same disk
> >>> as the database itself if that one has better RAID, for example.
> >> I'm thinking along the lines of: since nothing much writes to the OS
> >> Disk, I should (keyword) be safe.
"Carlo Stonebanks" <[EMAIL PROTECTED]> writes:
> Still, the Linux server did not create the same, fast plan as the Windows
> server. In order to get the same plan we had to:
> set enable_hashjoin to 'off';
> set enable_mergejoin to 'off';
This is just about never the appropriate way to solve a problem.
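If the enable_* switches are used at all, confining them to a single transaction keeps the experiment from leaking into other sessions. A minimal sketch, reusing the login query from elsewhere in this digest as a stand-in for the query under test:

    BEGIN;
    SET LOCAL enable_hashjoin = off;   -- diagnostic only, scoped to this transaction
    SET LOCAL enable_mergejoin = off;
    EXPLAIN ANALYZE
    SELECT max("when") FROM login WHERE userid = 'userid';
    ROLLBACK;                          -- both settings revert here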
On 11/1/07, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
> I am comparing the same query on two different PG 8.2 servers, one Linux
> (8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPUs.
>
> The Windows postgresql.conf is pretty well tuned, but it looks like
> someone had wiped out the Linux config so the default one was re-installed.
I am comparing the same query on two different PG 8.2 servers, one Linux
(8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPUs.
The Windows postgresql.conf is pretty well tuned, but it looks like
someone had wiped out the Linux config so the default one was re-installed.
Magnus Hagander wrote:
> Ow Mun Heng wrote:
>>> You're likely better off (performance-wise) putting it on the same disk
>>> as the database itself if that one has better RAID, for example.
>> I'm thinking along the lines of: since nothing much writes to the OS
>> Disk, I should (keyword) be safe.
--On torsdag, november 01, 2007 09.43.39 -0400 Tom Lane <[EMAIL PROTECTED]>
wrote:
Palle Girgensohn <[EMAIL PROTECTED]> writes:
When running
select max("when") from login where userid='userid'
it takes a second or two, but when adding "group by userid" the planner
decides on using another plan, and it gets *much* faster.
Ketema wrote:
> I am trying to build a very robust DB server that will support 1000+
> concurrent users (we have already seen a max of 237, with no pooling being
> used). I have read so many articles now that I am just saturated. I
> have a general idea but would like feedback from others.
Describe a bit
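A quick way to watch how close the server actually gets to its connection ceiling (a sketch; pg_stat_activity and max_connections are standard in 8.2):

    -- Backends connected right now, versus the configured ceiling:
    SELECT count(*) AS current_connections FROM pg_stat_activity;
    SHOW max_connections;

If the count routinely approaches the ceiling, a pooler in front of the database (pgbouncer, pgpool) usually scales better than simply raising max_connections.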
"Palle Girgensohn" <[EMAIL PROTECTED]> writes:
> Are there any other things I can modify?
You might consider an index on . Keep in mind that every new
index imposes an incremental cost on every update and insert and increases the
time for vacuum.
> max_prepared_transactions = 100 # can
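The column list is elided in the quote above. Assuming Tom means the query discussed earlier in this digest, the usual candidate would be a composite index like the sketch below (an assumption, not a confirmed recommendation from the thread):

    -- Lets max("when") for a single userid be answered near the end of
    -- one index range rather than by scanning all of that user's rows.
    CREATE INDEX login_userid_when_idx ON login (userid, "when");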
Palle Girgensohn <[EMAIL PROTECTED]> writes:
> --On torsdag, november 01, 2007 11.06.57 -0400 Tom Lane <[EMAIL PROTECTED]>
> wrote:
>> Um, you did re-ANALYZE the table after changing the setting?
> alter table login alter userid SET statistics 1000;
> vacuum analyze login;
Hm, that's the approved procedure
On 11/1/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Palle Girgensohn <[EMAIL PROTECTED]> writes:
> > Unfortunately, altering statistics doesn't help. I see no difference when
> > altering the value from 10 (default) to 100, 1000 or 10. :-(
>
> Um, you did re-ANALYZE the table after changing the setting?
--On torsdag, november 01, 2007 11.06.57 -0400 Tom Lane <[EMAIL PROTECTED]>
wrote:
Palle Girgensohn <[EMAIL PROTECTED]> writes:
Unfortunately, altering statistics doesn't help. I see no difference
when altering the value from 10 (default) to 100, 1000 or 10. :-(
Um, you did re-ANALYZE the table after changing the setting?
Palle Girgensohn <[EMAIL PROTECTED]> writes:
> Unfortunately, altering statistics doesn't help. I see no difference when
> altering the value from 10 (default) to 100, 1000 or 10. :-(
Um, you did re-ANALYZE the table after changing the setting?
regards, tom lane
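Spelled out, the procedure being checked is the following (a sketch; 1000 is the per-column cap in the 8.2 era, and the EXPLAIN is just to confirm the plan changed):

    -- Raise the per-column statistics target, then re-sample:
    ALTER TABLE login ALTER COLUMN userid SET STATISTICS 1000;
    ANALYZE login;

    -- Re-check what the planner now chooses:
    EXPLAIN ANALYZE
    SELECT max("when") FROM login WHERE userid = 'userid';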
--On torsdag, november 01, 2007 09.43.39 -0400 Tom Lane <[EMAIL PROTECTED]>
wrote:
Palle Girgensohn <[EMAIL PROTECTED]> writes:
When running
select max("when") from login where userid='userid'
it takes a second or two, but when adding "group by userid" the planner
decides on using another plan, and it gets *much* faster.
Thanks for your answer.
So, basically, what you are saying is that there is nothing particularly
wrong with the query, nor with its optimization? So if I need
performance for this query, I should just resort to other techniques
(giving more memory to postgres, caching outside postgres, etc.)?
Palle Girgensohn <[EMAIL PROTECTED]> writes:
> When running
> select max("when") from login where userid='userid'
> it takes a second or two, but when adding "group by userid" the planner
> decides on using another plan, and it gets *much* faster. See example below.
It's only faster for cases wh
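A common rewrite from that era gets the fast plan for a single userid without the GROUP BY trick. A sketch, assuming an index whose leading columns are (userid, "when"):

    -- Same answer as max("when") for one user, but typically planned as a
    -- backward index scan plus LIMIT instead of a full aggregate. The IS
    -- NOT NULL guard matches max()'s NULL-ignoring behaviour, since DESC
    -- ordering in PostgreSQL puts NULLs first.
    SELECT "when"
    FROM login
    WHERE userid = 'userid' AND "when" IS NOT NULL
    ORDER BY "when" DESC
    LIMIT 1;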
Sami Dalouche <[EMAIL PROTECTED]> writes:
> Compare that to the following query, that is exactly the same except
> that the City table is inner'joined instead of outer joined
> ...
> the explain analyze is available at :
> http://www.photosdesami.com/temp/exp6.txt
AFAICS it's just absolutely blind
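For readers without the linked plans, the shape of the comparison is roughly the following (a sketch with hypothetical table and column names; only the City table is named in the quote):

    -- Outer join: every row of the left table must be preserved, which
    -- constrains the join orders and methods the planner may consider.
    SELECT a.id, c.name
    FROM ad a
    LEFT JOIN city c ON c.id = a.city_id;

    -- Inner join: non-matching rows may be discarded, so the planner has
    -- more freedom and can settle on a very different (here, faster) plan.
    SELECT a.id, c.name
    FROM ad a
    JOIN city c ON c.id = a.city_id;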
Hi,
I have a table "login" with approx 600,000 tuples, a person table with
approx 10 tuples.
When running
select max("when") from login where userid='userid'
it takes a second or two, but when adding "group by userid" the planner
decides on using another plan, and it gets *much* faster.
> > You're likely better off (performance-wise) putting it on the same disk
> > as the database itself if that one has better RAID, for example.
> I'm thinking along the lines of: since nothing much writes to the OS
> Disk, I should (keyword) be safe.
You are almost certainly wrong about this; thin
Magnus Hagander wrote:
> Ron St-Pierre wrote:
>
>> Joe Uhl wrote:
>>
>>> I realize there are people who discourage looking at Dell, but I've been
>>> very happy with a larger ball of equipment we ordered recently from
>>> them. Our database servers consist of a PowerEdge 2950 connected to