"Scott Marlowe" writes:
> Isn't it amazing how many small businesses won't buy from other small
> businesses?
It's entertaining that Dell is considered one of the "safe choices"
in this thread. They were a pretty small business not so long ago
(and remain a lot smaller than IBM or HP) ...
On Sat, 2008-12-13 at 19:16 -0700, Scott Marlowe wrote:
> Isn't it amazing how many small businesses won't buy from other small
> businesses? They'd much rather give their money to a company they
> don't like because "they'll be around a while" (the big company).
>
True enough!
Joshua D. Drake
On Sat, Dec 13, 2008 at 1:03 PM, Joshua D. Drake wrote:
> On Sat, 2008-12-13 at 12:57 -0700, Scott Marlowe wrote:
>> On Sat, Dec 13, 2008 at 12:47 PM, Joshua D. Drake
>> wrote:
>> > On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
>> >> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
>
On Saturday, 13 December 2008, Scott Marlowe wrote:
> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
> wrote:
> > On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
> >> On Sat, 13 Dec 2008, Robert Haas wrote:
> >
> >> > This may be a little off-topic, but I'd be interested in hear
On Sat, 2008-12-13 at 12:57 -0700, Scott Marlowe wrote:
> On Sat, Dec 13, 2008 at 12:47 PM, Joshua D. Drake
> wrote:
> > On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
> >> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
> >> wrote:
> >> > On Sat, 2008-12-13 at 07:44 -0800, da...@lan
On Sat, Dec 13, 2008 at 12:47 PM, Joshua D. Drake
wrote:
> On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
>> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
>> wrote:
>> > On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
>> >> On Sat, 13 Dec 2008, Robert Haas wrote:
>
>> > htt
On Sat, 2008-12-13 at 12:45 -0700, Scott Marlowe wrote:
> On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
> wrote:
> > On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
> >> On Sat, 13 Dec 2008, Robert Haas wrote:
> > http://h71016.www7.hp.com/ctoBases.asp?oi=E9CED&BEID=19701&SBLID=&Prod
On Sat, Dec 13, 2008 at 11:37 AM, Joshua D. Drake
wrote:
> On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
>> On Sat, 13 Dec 2008, Robert Haas wrote:
>
>> > This may be a little off-topic, but I'd be interested in hearing more
>> > details about how you (or others) would do this... manuf
On Sat, 2008-12-13 at 07:44 -0800, da...@lang.hm wrote:
> On Sat, 13 Dec 2008, Robert Haas wrote:
> > This may be a little off-topic, but I'd be interested in hearing more
> > details about how you (or others) would do this... manufacturer,
> > model, configuration? How many hard drives do you n
On Sat, Dec 13, 2008 at 6:22 AM, Robert Haas wrote:
> On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake
> wrote:
>>> Those intel SSDs sound compelling. I've been waiting for SSDs to get
>>> competitive price and performance wise for a while, and when the
>>> intels came out and I read the first b
On Sat, 13 Dec 2008, Robert Haas wrote:
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake wrote:
Those intel SSDs sound compelling. I've been waiting for SSDs to get
competitive price and performance wise for a while, and when the
intels came out and I read the first benchmarks I immediately be
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake wrote:
>> Those intel SSDs sound compelling. I've been waiting for SSDs to get
>> competitive price and performance wise for a while, and when the
>> intels came out and I read the first benchmarks I immediately began
>> scheming. Sadly, that was r
On Tue, 9 Dec 2008, Scott Carey wrote:
My system is now CPU bound, the I/O can do sequential reads of more than
1.2GB/sec but Postgres can't do a seqscan 30% as fast because it eats up
CPU like crazy just reading and identifying tuples... In addition to the
fadvise patch, postgres needs to mer
Scott Marlowe wrote:
involves tiny bits of data scattered throughout the database. Our
current database is about 20-25 Gig, which means it's quickly reaching
the point where it will not fit in our 32G of ram, and it's likely to
grow too big for 64Gig before a year or two is out.
...
I wonde
Tom,
Hmm ... I wonder whether this means that the current work on
parallelizing I/O (the posix_fadvise patch in particular) is a dead
end. Because what that is basically going to do is expend more CPU
to improve I/O efficiency. If you believe this thesis then that's
not the road we want to go
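[Editor's note: for readers unfamiliar with the patch under discussion, here is a minimal sketch of what posix_fadvise-based prefetching does conceptually. The function name and block layout are illustrative, not from the actual patch; only the POSIX_FADV_WILLNEED advice call is the real mechanism.]

```python
import os

def prefetch_blocks(path, block_size, block_numbers):
    """Hint the kernel to fetch the given blocks ahead of time.

    Issuing POSIX_FADV_WILLNEED for blocks we expect to read soon lets
    the kernel overlap several reads (e.g. across spindles) while the
    backend keeps processing -- trading a little CPU for I/O efficiency,
    which is exactly the trade-off Tom questions above.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        for blkno in block_numbers:
            os.posix_fadvise(fd, blkno * block_size,
                             block_size, os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)
```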
Greg Stark wrote:
On Sun, Dec 7, 2008 at 7:38 PM, Josh Berkus wrote:
Also, the following patches currently still have bugs, but when the bugs are
fixed I'll be looking for performance testers, so please either watch the
wiki or watch this space:
...
-- posix_fadvise (Gregory Stark)
Eh? Quite
I would expect higher shared_buffers to raise the curve before the first
breakpoint but after the first breakpoint make the drop steeper and deeper.
The equilibrium where the curve becomes flatter should be lower.
On SpecJAppserver specifically, I remember seeing a drop when the
database size
On Tue, 9 Dec 2008, Scott Carey wrote:
For what it is worth, you can roughly double to triple the iops of an Intel
X-25M on pure random reads if you queue up
multiple concurrent reads rather than serialize them. But it is not due to
spindles, it is due to the latency of the
SATA interface and
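[Editor's note: the queue-depth effect described above can be demonstrated with a small sketch. The function and its `depth` parameter are illustrative assumptions; the point is that several reads in flight let the drive and the SATA/NCQ layer overlap command latency instead of paying it serially.]

```python
import os
from concurrent.futures import ThreadPoolExecutor

def read_blocks_concurrently(path, block_size, offsets, depth=8):
    """Issue many random reads concurrently instead of one at a time.

    With 'depth' reads outstanding, interface latency overlaps across
    requests, which is why queued random reads on an SSD can deliver
    a multiple of the iops that strictly serial pread() calls achieve.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        def one_read(off):
            return os.pread(fd, block_size, off)
        with ThreadPoolExecutor(max_workers=depth) as pool:
            return list(pool.map(one_read, offsets))
    finally:
        os.close(fd)
```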
"Scott Carey" <[EMAIL PROTECTED]> wrote:
>
> From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of Jean-David Beyer
> [EMAIL PROTECTED]
> Sent: Tuesday, December 09, 2008 5:08 AM
> To: pgsql-performance@postgresql.org
&
Just to clarify, I'm not talking about random I/O bound loads today, on hard
drives, targeted by the fadvise stuff - these aren't CPU bound, and they will
be helped by it.
For sequential scans, this situation is different, since the OS has sufficient
read-ahead prefetching algorithms of its ow
> Well, when select count(1) reads pages slower than my disk, it's 16x+ slower
> than my RAM. Until one can demonstrate that the system can even read pages
> in RAM faster than what disks will do next year, it doesn't matter much that
> RAM is faster. It does matter that RAM is faster for sorts,
justin wrote:
Tom Lane wrote:
Hmm ... I wonder whether this means that the current work on
parallelizing I/O (the posix_fadvise patch in particular) is a dead
end. Because what that is basically going to do is expend more CPU
to improve I/O efficiency. If you believe this thesis then that's
no
Tom Lane wrote:
Scott Carey <[EMAIL PROTECTED]> writes:
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU eff
Matthew Wakeling <[EMAIL PROTECTED]> writes:
> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we're looking at in the next few year
Scott Carey <[EMAIL PROTECTED]> writes:
> And as far as I can tell, even after the 8.4 fadvise patch, all I/O is in
> block_size chunks. (hopefully I am wrong)
>...
> In addition to the fadvise patch, postgres needs to merge adjacent I/O's
> into larger ones to reduce the overhead. It only really
Prefetch CPU cost should be rather low in the grand scheme of things, and does
help performance even for very fast I/O. I would not expect a very large CPU
use increase from that sort of patch in the grand scheme of things - there is a
lot that is more expensive to do on a per block basis.
The
Tom Lane wrote:
Scott Carey <[EMAIL PROTECTED]> writes:
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU
On Tue, 9 Dec 2008, Robert Haas wrote:
I don't believe the thesis. The gap between disk speeds and memory
speeds may narrow over time, but I doubt it's likely to disappear
altogether any time soon, and certainly not for all users.
I think the "not for all users" is the critical part. In 2 yea
On Tue, 2008-12-09 at 17:38 -0500, Tom Lane wrote:
> Scott Carey <[EMAIL PROTECTED]> writes:
> > Which brings this back around to the point I care the most about:
> > I/O per second will diminish as the most common database performance
> > limiting factor in Postgres 8.4's lifetime, and become alm
> Hmm ... I wonder whether this means that the current work on
> parallelizing I/O (the posix_fadvise patch in particular) is a dead
> end. Because what that is basically going to do is expend more CPU
> to improve I/O efficiency. If you believe this thesis then that's
> not the road we want to g
Scott Carey <[EMAIL PROTECTED]> writes:
> Which brings this back around to the point I care the most about:
> I/O per second will diminish as the most common database performance limiting
> factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
> Becoming more CPU efficient will
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU efficient will become very important, and for some, already
On Tue, 2008-12-09 at 15:07 -0500, Merlin Moncure wrote:
> On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> > Hard drives work, they're cheap and fast. I can get 25 spindles, 15k in a
> > 3U with controller and battery backed cache for <$10k.
>
> While I agree with your g
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Hard drives work, they're cheap and fast. I can get 25 spindles, 15k in a
> 3U with controller and battery backed cache for <$10k.
While I agree with your general sentiments about early adoption, etc
(the intel ssd products
On Tue, 2008-12-09 at 11:08 -0700, Scott Marlowe wrote:
> On Tue, Dec 9, 2008 at 11:01 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> > Let me re-phrase this.
> >
> > For today, at 200GB or less of required space, and 500GB or less next year.
> >
> > "Where we're going, we don't NEED spindles."
>
>
On Tue, Dec 9, 2008 at 11:01 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> Let me re-phrase this.
>
> For today, at 200GB or less of required space, and 500GB or less next year.
>
> "Where we're going, we don't NEED spindles."
Those intel SSDs sound compelling. I've been waiting for SSDs to get
co
Let me re-phrase this.
For today, at 200GB or less of required space, and 500GB or less next year.
"Where we're going, we don't NEED spindles."
Seriously, go down to the store and get 6 X25-M's, they're as cheap as $550
each and will be sub $500 soon. These are more than sufficient for all bu
On Tue, Dec 9, 2008 at 10:35 AM, Matthew Wakeling <[EMAIL PROTECTED]> wrote:
> On Tue, 9 Dec 2008, Scott Marlowe wrote:
>>
>> I wonder how many hard drives it would take to be CPU bound on random
>> access patterns? About 40 to 60? And probably 15k / SAS drives to
>> boot. Cause that's what we'r
On Tue, 9 Dec 2008, Scott Marlowe wrote:
I wonder how many hard drives it would take to be CPU bound on random
access patterns? About 40 to 60? And probably 15k / SAS drives to
boot. Cause that's what we're looking at in the next few years where
I work.
There's a problem with that thinking.
> Lucky you, having needs that are fulfilled by sequential reads. :)
> I wonder how many hard drives it would take to be CPU bound on random
> access patterns? About 40 to 60? And probably 15k / SAS drives to
> boot. Cause that's what we're looking at in the next few years where
> I work.
Abo
On Tue, 2008-12-09 at 10:21 -0700, Scott Marlowe wrote:
> On Tue, Dec 9, 2008 at 9:37 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> Lucky you, having needs that are fulfilled by sequential reads. :)
>
> I wonder how many hard drives it would take to be CPU bound on random
> access patterns? Abou
On Tue, Dec 9, 2008 at 9:37 AM, Scott Carey <[EMAIL PROTECTED]> wrote:
> As for tipping points and pg_bench -- It doesn't seem to reflect the kind of
> workload we use postgres for at all, though my workload does a lot of big
> hashes and seqscans, and I'm curious how much improved those may be
>
> From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of Jean-David Beyer
> [EMAIL PROTECTED]
> Sent: Tuesday, December 09, 2008 5:08 AM
> To: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Need help with 8.4 Performance
On Sun, Dec 7, 2008 at 7:38 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
>
> Also, the following patches currently still have bugs, but when the bugs are
> fixed I'll be looking for performance testers, so please either watch the
> wiki or watch this space:
>...
> -- posix_fadvise (Gregory Stark)
Eh
On Tuesday 09 December 2008 13:08:14 Jean-David Beyer wrote:
>
> and even if they can, I do not know if postgres uses that ability. I doubt
> it, since I believe (at least in Linux) a process can do that only if run
> as root, which I imagine few (if any) users do.
Disclaimer: I'm not a system pr
Greg Smith wrote:
| On Mon, 8 Dec 2008, Merlin Moncure wrote:
|
|> I wonder if shared_buffers has any effect on how far you can go before
|> you hit the 'tipping point'.
|
| If your operating system has any reasonable caching itself, not so much at
|
Greg Smith <[EMAIL PROTECTED]> writes:
> On Mon, 8 Dec 2008, Merlin Moncure wrote:
>
>> I wonder if shared_buffers has any effect on how far you can go before
>> you hit the 'tipping point'.
>
> If your operating system has any reasonable caching itself, not so much at
> first. As long as the in
On Mon, 8 Dec 2008, Merlin Moncure wrote:
I wonder if shared_buffers has any effect on how far you can go before
you hit the 'tipping point'.
If your operating system has any reasonable caching itself, not so much at
first. As long as the index on the account table fits in shared_buffers,
e
On Mon, Dec 8, 2008 at 5:52 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 8 Dec 2008, Scott Marlowe wrote:
>
>> Well, I have 32 Gig of ram and wanted to test it against a database
>> that was at least twice as big as memory. I'm not sure why you'd
>> consider the results uninteresting though
On Mon, 8 Dec 2008, Scott Marlowe wrote:
Well, I have 32 Gig of ram and wanted to test it against a database
that was at least twice as big as memory. I'm not sure why you'd
consider the results uninteresting though, I'd think knowing how the
db will perform with a very large transactional stor
On Mon, Dec 8, 2008 at 1:15 PM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Sun, 7 Dec 2008, Scott Marlowe wrote:
>
>> When I last used pgbench I wanted to test it with an extremely large
>> dataset, but it maxes out at -s 4xxx or so, and that's only in the
>> 40Gigabyte range. Is the limit raised
On Sun, 7 Dec 2008, Scott Marlowe wrote:
When I last used pgbench I wanted to test it with an extremely large
dataset, but it maxes out at -s 4xxx or so, and that's only in the
40Gigabyte range. Is the limit raised for the pgbench included in
contrib in 8.4? I'm guessing it's an arbitrary limi
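[Editor's note: the "-s 4xxx, 40 Gigabyte range" figure is easy to sanity-check. pgbench creates 100,000 rows in pgbench_accounts per scale unit; the ~100 bytes per row used below (tuple header plus three ints and the char(84) filler) is an estimate, not a figure from the thread.]

```python
def pgbench_accounts_size_gb(scale, bytes_per_row=100):
    """Rough on-disk size of pgbench_accounts at a given scale factor.

    pgbench populates 100,000 accounts rows per scale unit, so scale
    4000 means 400 million rows -- at ~100 bytes each, about 40 GB,
    matching the ceiling Scott Marlowe describes above.
    """
    rows = scale * 100_000
    return rows * bytes_per_row / 1e9
```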
I'll be glad to test the patches using pgbench on my POWER4 box
running AIX 5.3 and an IA64 that runs HP-UX 11.31.
Derek
On Dec 7, 2008, at 2:38 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
Database performance geeks,
We have a number of patches pending for 8.4 designed to improve
databas
On Sun, Dec 7, 2008 at 12:38 PM, Josh Berkus <[EMAIL PROTECTED]> wrote:
I've got a pair of 8 core opteron 16 drive machines I would like to
test it on. If nothing else I'll just take queries from the log to
run against an 8.4 install. It'll have to be late at night though...
> If you are going
Josh,
Since a number of these performance patches use our hash function, would
it make sense to apply the last patch to upgrade the hash function mix()
to the two-function mix()/final()? The additional changes increase
the performance of the hash function by another 50% or so. My two cents.
Database performance geeks,
We have a number of patches pending for 8.4 designed to improve database
performance in a variety of circumstances. We need as many users as possible
to build test versions of PostgreSQL with these patches, and test how well
they perform, and report back in some det