testing
Thanks Yves for the clarification!
It used to be very important to pre-warm EBS before running benchmarks
in order to get consistent results.
Then at re:Invent 2015, the AWS engineers said that it is not needed
anymore, which IMO means a lot less work for us when benchmarking in
AWS, because pre-wa
On 2016-05-26 09:03, Artem Tomyuk wrote:
> Why no? Or you missed something?
I think Rayson is correct, but the double negative makes it hard to read:
"So no EBS pre-warming does not apply to EBS volumes created from snapshots."
Which I interpret as:
So, "no EBS pre-warming", does not apply to EB
Why no? Or you missed something?
It should be done on every EBS volume restored from a snapshot.
Is that from your personal experience, and if so, when did you do the test??
Yes we are using this practice, because as a part of our production load we
are using auto scale groups to create new instances, wh
Thanks Artem.
So no EBS pre-warming does not apply to EBS volumes created from snapshots.
Rayson
==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/Gri
Please look at the official doc.
"New EBS volumes receive their maximum performance the moment that they are
available and do not require initialization (formerly known as
pre-warming). However, storage blocks on volumes that were restored from
snapshots must be initialized (pulled down from Amazon S3 and written to the
volume) before you can access the block."
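For reference, the initialization the doc describes is just reading every block
of the restored volume once. A minimal sketch, assuming the volume is attached
as /dev/xvdf (device name and fio settings are illustrative, not from this thread):

    # Touch every block once so later reads come from EBS, not S3.
    sudo dd if=/dev/xvdf of=/dev/null bs=1M

    # fio does the same thing faster by issuing parallel reads:
    sudo fio --filename=/dev/xvdf --rw=read --bs=128k --iodepth=32 \
             --ioengine=libaio --direct=1 --name=volume-initialize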
On Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk wrote:
>
> 2016-05-26 16:50 GMT+03:00 Rayson Ho :
>
>> Amazon engineers said that EBS pre-warming is not needed anymore.
>
>
> but still, if you skip this step you won't get much performance on EBS
> created from a snapshot.
>
IIRC, that's not wha
2016-05-26 16:50 GMT+03:00 Rayson Ho :
> Amazon engineers said that EBS pre-warming is not needed anymore.
but still, if you skip this step you won't get much performance on EBS
created from a snapshot.
On Thu, May 26, 2016 at 9:00 AM, Artem Tomyuk wrote:
>
> But still, the strong recommendation is to pre-warm your EBS in any case,
especially if they are created from a snapshot.
That used to be true. However, at AWS re:Invent 2015, Amazon engineers said
that EBS pre-warming is not needed anymore.
Rayson
===
Yes, the smaller the instance you choose, the slower EBS will be.
EBS lives separately from EC2; they communicate over the network. So a small
instance = low network bandwidth = poorer disk performance.
But still, the strong recommendation is to pre-warm your EBS in any case,
especially if they are created from a snapshot.
On 2016-05-25 19:08, Rayson Ho wrote:
> Actually, when "EBS-Optimized" is on, then the instance gets dedicated
> bandwidth to EBS.
Hadn't realised that, thanks.
Is the EBS bandwidth then somewhat limited depending on the type of instance
too?
--
http://yves.zioup.com
gpg: 4096R/32B0F416
--
Hi.
AWS EBS is a really painful story.
How were the volumes for the RAID created? From snapshots?
If you want to get the best performance from EBS, it needs to be pre-warmed.
Here is the tutorial on how to achieve that:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html
Also you should r
Actually, when "EBS-Optimized" is on, then the instance gets dedicated
bandwidth to EBS.
Rayson
==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridE
Indeed, old-style disk EBS vs new-style SSD EBS.
Be aware that EBS traffic is considered part of the total "network"
traffic, and each type of instance has a different limit on maximum network
throughput. Those differences are very significant; do tests on the same volume
between two different types of instance.
There are many factors that can affect EBS performance. For example, the
type of EBS volume, the instance type, whether EBS-optimized is turned on
or not, etc.
Without the details, there is no apples-to-apples comparison...
Rayson
==
Open Grid
We are starting some testing in AWS, with EC2, EBS backed setups.
What I found interesting today was that a single 1TB EBS volume gave me
something like 108MB/s throughput, however a RAID10 (4 x 250GB EBS
volumes) gave me something like 31MB/s (test after test after test).
I'm wondering what you folk
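For anyone trying to reproduce this comparison, a rough sketch of how the RAID10
could be assembled and both layouts measured (device names, run length, and fio
settings are assumptions):

    # Build a 4-volume RAID10 out of the attached EBS devices.
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
               /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

    # Sequential-read throughput, run against /dev/md0 and then a single volume.
    sudo fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M \
             --direct=1 --ioengine=libaio --iodepth=16 \
             --runtime=60 --time_based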
On Tue, Apr 15, 2014 at 12:57 PM, Dave Cramer wrote:
> I have a client wanting to test PostgreSQL on ZFS running Linux. Other
> than pg_bench are there any other benchmarks that are easy to test?
Check Gregory Smith's article about testing disks [1].
[1] http://www.westnet.com/~gsmith/content/po
I have a client wanting to test PostgreSQL on ZFS running Linux.
Other than pg_bench are there any other benchmarks that are easy to test?
One of the possible concerns is fragmentation over time. Any ideas on how
to fragment the database before running pg_bench ?
Also there is some concern about
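One crude way to churn the on-disk layout before the real measurement, sketched
below (scale factor, client count, and run length are arbitrary assumptions; on a
copy-on-write filesystem like ZFS it is the long update run that scatters blocks
over time):

    pgbench -i -s 300 bench            # build the test database
    pgbench -c 16 -j 4 -T 3600 bench   # long read/write run: every UPDATE rewrites rows
    vacuumdb bench                     # leave reusable holes for later inserts to land in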
Greg Smith wrote:
> > * How to test for power failure?
>
> I've had good results using one of the early programs used to
> investigate this class of problems:
> http://brad.livejournal.com/2116715.html?page=2
FYI, this tool is mentioned in the Postgres documentation:
http://www.postgr
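As a reminder of how that tool is usually run (arguments from memory, so treat
them as approximate): start the listener on a second machine that will keep
power, write from the machine under test, pull its plug mid-run, then verify
after reboot.

    # On a second machine that stays up:
    ./diskchecker.pl -l

    # On the machine being tested (hostname, file name, and size are examples):
    ./diskchecker.pl -s otherhost create test_file 500
    # ... cut power on the test machine during the run, reboot, then:
    ./diskchecker.pl -s otherhost verify test_file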
On 10-08-04 03:49 PM, Scott Carey wrote:
On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote:
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote:
After a week testing I think I can answer the question above: does it work
like it's supposed to under PostgreSQL?
YES
The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro.
j...@commandprompt.com ("Joshua D. Drake") writes:
> On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:
>> Greg Smith wrote:
>> > Note that not all of the Sandforce drives include a capacitor; I hope
>> > you got one that does! I wasn't aware any of the SF drives with a
>> > capacitor on them
g...@2ndquadrant.com (Greg Smith) writes:
> Yeb Havinga wrote:
>> * What filesystem to use on the SSD? To minimize writes and maximize
>> chance for seeing errors I'd choose ext2 here.
>
> I don't consider there to be any reason to deploy any part of a
> PostgreSQL database on ext2. The potential
On Aug 3, 2010, at 9:27 AM, Merlin Moncure wrote:
>
> 2) I've heard that some SSD have utilities that you can use to query
> the write cycles in order to estimate lifespan. Does this one, and is
> it possible to publish the output (an approximation of the amount of
> work along with this would b
On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote:
> On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote:
>> After a week testing I think I can answer the question above: does it work
>> like it's supposed to under PostgreSQL?
>>
>> YES
>>
>> The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro
On Jul 26, 2010, at 12:45 PM, Greg Smith wrote:
> Yeb Havinga wrote:
>> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
>> read/write test. (scale 300) No real winners or losers, though ext2
>> isn't really faster and the manual need for fix (y) during boot makes
>> it impractical in its standard configuration.
On Tue, 2010-08-03 at 10:40 +0200, Yeb Havinga wrote:
> Please note that the 10% was on a slower CPU. On a more recent CPU the
> difference was 47%, based on tests that ran for an hour.
I am not surprised at all that reading and writing almost twice as much
data from/to disk takes 47% longer. If less
On Tue, Aug 3, 2010 at 11:37 AM, Yeb Havinga wrote:
> Yeb Havinga wrote:
>>
>> Hannu Krosing wrote:
>>>
>>> Did it fit in shared_buffers, or system cache ?
>>>
>>
>> Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.
>>>
>>> I first noticed this several years ago, when doing a C
Yeb Havinga wrote:
Hannu Krosing wrote:
Did it fit in shared_buffers, or system cache ?
Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.
I first noticed this several years ago, when doing a COPY to a large
table with indexes took noticeably longer (2-3 times longer) when
Yeb Havinga wrote:
Small IO size: 4 KB
Maximum Small IOPS=86883 @ Small=8 and Large=0
Small IO size: 8 KB
Maximum Small IOPS=48798 @ Small=11 and Large=0
Conclusion: you can write 4KB blocks almost twice as fast as 8KB ones.
This is a useful observation about the effectiveness of the write
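A quick back-of-the-envelope check (not in the original mail) shows the near-2x
gap is in IOPS, not in raw bandwidth:

    echo "86883 * 4 / 1024" | bc -l   # ~339 MB/s at 4 KB writes
    echo "48798 * 8 / 1024" | bc -l   # ~381 MB/s at 8 KB writes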
Hannu Krosing wrote:
Did it fit in shared_buffers, or system cache ?
Database was ~5GB, server has 16GB, shared buffers was set to 1920MB.
I first noticed this several years ago, when doing a COPY to a large
table with indexes took noticeably longer (2-3 times longer) when the
indexes were in
Scott Marlowe wrote:
On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith wrote:
Josh Berkus wrote:
That doesn't make much sense unless there's some special advantage to a
4K blocksize with the hardware itself.
Given that pgbench is always doing tiny updates to blocks, I wouldn't be
surp
On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith wrote:
> Josh Berkus wrote:
>>
>> That doesn't make much sense unless there's some special advantage to a
>> 4K blocksize with the hardware itself.
>
> Given that pgbench is always doing tiny updates to blocks, I wouldn't be
> surprised if switching to sm
Josh Berkus wrote:
That doesn't make much sense unless there's some special advantage to a
4K blocksize with the hardware itself.
Given that pgbench is always doing tiny updates to blocks, I wouldn't be
surprised if switching to smaller blocks helps it in a lot of situations
if one went looki
> Definitely - that 10% number was on the old-first hardware (the core 2
> E6600). After reading my post and the 185MBps with 18500 reads/s number
> I was a bit suspicious whether I did the tests on the new hardware with
> 4K, because 185MBps / 18500 reads/s is ~10KB / read, so I thought that's
> a
Merlin Moncure wrote:
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote:
Postgres settings:
8.4.4
--with-blocksize=4
I saw about 10% increase in performance compared to 8KB blocksizes.
That's very interesting -- we need more testing in that department...
Definitely - that 10% num
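For anyone who wants to repeat the 4KB-block experiment, the build step looks
roughly like this (a sketch; paths are assumptions, and a cluster built this way
needs its own initdb because the block size is fixed at compile time):

    ./configure --with-blocksize=4 --prefix=/usr/local/pgsql-4k
    make && make install
    /usr/local/pgsql-4k/bin/initdb -D /srv/pgdata-4k   # 8 KB clusters are not compatible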
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote:
> After a week testing I think I can answer the question above: does it work
> like it's supposed to under PostgreSQL?
>
> YES
>
> The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro,
> http://www.newegg.com/Product/Product.aspx?Item=N82
6700tps?! Wow..
Ok, I'm impressed. May wait a bit for prices to come down somewhat, but that
sounds like two of those are going in one of my production machines
(Raid 1, of course)
Yeb Havinga wrote:
> Greg Smith wrote:
>> Greg Smith wrote:
>>> Note that not all of the Sandforce drives include a
Greg Smith wrote:
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even shipping yet, all of the ones I'd seen
were the chipset that doesn't include one still. Have
On Wed, Jul 28, 2010 at 03:45:23PM +0200, Yeb Havinga wrote:
Due to the LBA remapping of the SSD, I'm not sure if putting files
that are sequentially written in a different partition (together with
e.g. tables) would make a difference: in the end the SSD will have a
set of new blocks in its buffe
On Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga wrote:
> Yeb Havinga wrote:
>
>> Due to the LBA remapping of the SSD, I'm not sure if putting files that
>> are sequentially written in a different partition (together with e.g.
>> tables) would make a difference: in the end the SSD will have a set of new
Yeb Havinga wrote:
Michael Stone wrote:
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog
block
device to remove the file system overhead and guaranteeing your data is
written sequentially every time?
If you
Michael Stone wrote:
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog
block
device to remove the file system overhead and guaranteeing your data is
written sequentially every time?
If you dedicate a partitio
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog block
device to remove the file system overhead and guaranteeing your data is
written sequentially every time?
If you dedicate a partition to xlog, you already
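On the versions discussed here, giving xlog its own partition is done with a
symlink; a minimal sketch, with paths as assumptions (the directory is pg_xlog
on pre-10 releases):

    pg_ctl -D /srv/pgdata stop
    mv /srv/pgdata/pg_xlog /mnt/wal/pg_xlog
    ln -s /mnt/wal/pg_xlog /srv/pgdata/pg_xlog
    pg_ctl -D /srv/pgdata start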
On Mon, Jul 26, 2010 at 01:47:14PM -0600, Scott Marlowe wrote:
Note that SSDs aren't usually real fast at large sequential writes
though, so it might be worth putting pg_xlog on a spinning pair in a
mirror and seeing how much, if any, the SSD drive speeds up when not
having to do pg_xlog.
xlog
On Mon, 2010-07-26 at 14:34 -0400, Greg Smith wrote:
> Matthew Wakeling wrote:
> > Yeb also made the point - there are far too many points on that graph
> > to really tell what the average latency is. It'd be instructive to
> > have a few figures, like "only x% of requests took longer than y".
>
Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog
block device to remove the file system overhead and guaranteeing your
data is written sequentially every time?
It's possible to set the PostgreSQL wal_sync_method parameter in the
database to open_datasync
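For completeness, that setting lives in postgresql.conf and takes effect on
reload; a sketch (path is an assumption, and whether open_datasync is safe or
fast depends on the platform and the drive's cache behaviour):

    echo "wal_sync_method = open_datasync" >> /srv/pgdata/postgresql.conf
    pg_ctl -D /srv/pgdata reload
    psql -c "SHOW wal_sync_method;"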
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
> On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith wrote:
> > Yeb Havinga wrote:
> >> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
> >> read/write test. (scale 300) No real winners or losers, though ext2 isn't
>
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith wrote:
> Yeb Havinga wrote:
>
>> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
>> read/write test. (scale 300) No real winners or losers, though ext2 isn't
>> really faster and the manual need for fix (y) during boot makes it
>> impractical in its standard configuration.
On Mon, Jul 26, 2010 at 12:40 PM, Greg Smith wrote:
> Greg Spiegelberg wrote:
>>
>> Speaking of the layers in-between, has this test been done with the ext3
>> journal on a different device? Maybe the purpose is wrong for the SSD. Use
>> the SSD for the ext3 journal and the spindled drives for f
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though ext2
isn't really faster and the manual need for fix (y) during boot makes
it impractical in its standard configuration.
That's what happens
Yeb Havinga wrote:
To get similar *average* performance results you'd need to put about
4 drives and a BBU into a server. The
Please forget this question, I now see it in the mail I'm replying to.
Sorry for the spam!
-- Yeb
Greg Smith wrote:
Yeb Havinga wrote:
Please remember that those particular graphs are from a read/write pgbench
run on a bigger-than-RAM database that ran for some time (so with
checkpoints), on a *single* $435 50GB drive without a BBU RAID controller.
To get similar *average* performance results you
Greg Smith wrote:
> Yeb's data is showing that a single SSD is competitive with a
> small array on average, but with better worst-case behavior than
> I'm used to seeing.
So, how long before someone benchmarks a small array of SSDs? :-)
-Kevin
Greg Spiegelberg wrote:
Speaking of the layers in-between, has this test been done with the
ext3 journal on a different device? Maybe the purpose is wrong for
the SSD. Use the SSD for the ext3 journal and the spindled drives for
filesystem?
The main disk bottleneck on PostgreSQL databases
Matthew Wakeling wrote:
Yeb also made the point - there are far too many points on that graph
to really tell what the average latency is. It'd be instructive to
have a few figures, like "only x% of requests took longer than y".
Average latency is the inverse of TPS. So if the result is, say,
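A worked example of that relationship (numbers are illustrative, not from the
thread): with C concurrent clients, average per-transaction latency is roughly
C / TPS.

    echo "scale=2; 16 * 1000 / 6700" | bc   # 16 clients at 6700 TPS ~= 2.38 ms average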
On Mon, Jul 26, 2010 at 10:26 AM, Yeb Havinga wrote:
> Matthew Wakeling wrote:
>
>> Apologies, I was interpreting the graph as the latency of the device, not
>> all the layers in-between as well. There isn't any indication in the email
>> with the graph as to what the test conditions or software
Matthew Wakeling wrote:
Apologies, I was interpreting the graph as the latency of the device,
not all the layers in-between as well. There isn't any indication in
the email with the graph as to what the test conditions or software are.
That info was in the email preceding the graph mail, but I s
On Mon, 26 Jul 2010, Greg Smith wrote:
Matthew Wakeling wrote:
Does your latency graph really have milliseconds as the y axis? If so, this
device is really slow - some requests have a latency of more than a second!
Have you tried that yourself? If you generate one of those with standard
hard
Yeb Havinga wrote:
Please remember that those particular graphs are from a read/write pgbench
run on a bigger-than-RAM database that ran for some time (so with
checkpoints), on a *single* $435 50GB drive without a BBU RAID controller.
To get similar *average* performance results you'd need to put abou
Matthew Wakeling wrote:
Does your latency graph really have milliseconds as the y axis? If so,
this device is really slow - some requests have a latency of more than
a second!
Have you tried that yourself? If you generate one of those with
standard hard drives and a BBWC under Linux, I expec
Matthew Wakeling wrote:
On Sun, 25 Jul 2010, Yeb Havinga wrote:
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Does your latency graph really have milliseconds as the y axis?
Yes
If so, this device is really slow - some requests have a latency of
m
On Sun, 25 Jul 2010, Yeb Havinga wrote:
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Does your latency graph really have milliseconds as the y axis? If so,
this device is really slow - some requests have a latency of more than a
second!
Matthew
Yeb Havinga wrote:
Greg Smith wrote:
Put it on ext3, toggle on noatime, and move on to testing. The
overhead of the metadata writes is the least of the problems when
doing write-heavy stuff on Linux.
I ran a pgbench run and power failure test during pgbench with a 3
year old computer
On th
Yeb Havinga wrote:
8GB DDR2 something..
(lots of details removed)
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Thanks http://www.westnet.com/~gsmith/content/postgresql/pgbench.htm for
the gnuplot and psql scripts!
Greg Smith wrote:
Put it on ext3, toggle on noatime, and move on to testing. The
overhead of the metadata writes is the least of the problems when
doing write-heavy stuff on Linux.
I ran a pgbench run and power failure test during pgbench with a 3 year
old computer
8GB DDR ?
Intel Core 2 duo
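Greg's noatime suggestion is just a mount option; a sketch of what it might look
like (device and mount point are assumptions):

    # /etc/fstab entry for the data partition:
    # /dev/sdb1  /var/lib/pgsql  ext3  noatime  0  2

    # or apply it to an already-mounted filesystem:
    sudo mount -o remount,noatime /var/lib/pgsql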
Yeb Havinga wrote:
Writes/s start low but quickly converge to a number in the range of
1200 to 1800. The writes diskchecker does are 16kB writes. Making this
4kB writes does not increase writes/s. 32kB seems a little less, 64kB
is about two thirds of the initial writes/s and 128kB is half.
Let's t
Yeb Havinga wrote:
Yeb Havinga wrote:
diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes;
39/s)
Total errors: 0
:-)
OTOH, I now notice the 39 writes/s .. If that means ~ 39 tps... bummer.
When playing with it a bit more, I couldn't get the test_file to be
created in the right
Joshua D. Drake wrote:
That is quite the toy. I can get 4 SATA-II with RAID Controller, with
battery backed cache, for the same price or less :P
True, but if you look at tests like
http://www.anandtech.com/show/2899/12 it suggests there's probably at
least a 6:1 performance speedup for wor
On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote:
> Greg Smith wrote:
> > Note that not all of the Sandforce drives include a capacitor; I hope
> > you got one that does! I wasn't aware any of the SF drives with a
> > capacitor on them were even shipping yet, all of the ones I'd seen
> > wer
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even shipping yet, all of the ones I'd seen
were the chipset that doesn't include one still. Haven't checked in a
f
Yeb Havinga wrote:
diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s)
Total errors: 0
:-)
OTOH, I now notice the 39 writes/s .. If that means ~ 39 tps... bummer.
Greg Smith wrote:
Note that not all of the Sandforce drives include a capacitor; I hope
you got one that does! I wasn't aware any of the SF drives with a
capacitor on them were even shipping yet, all of the ones I'd seen
were the chipset that doesn't include one still. Haven't checked in a
f
On Sat, Jul 24, 2010 at 3:20 AM, Yeb Havinga wrote:
> Hello list,
>
> Probably like many others I've wondered why no SSD manufacturer puts a
> small BBU on a SSD drive. Triggered by Greg Smith's mail
> http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here,
> and also anandtec
Yeb Havinga wrote:
Probably like many others I've wondered why no SSD manufacturer puts
a small BBU on a SSD drive. Triggered by Greg Smith's mail
http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php
here, and also anandtech's review at
http://www.anandtech.com/show/2899/1 (s
On Jul 24, 2010, at 12:20 AM, Yeb Havinga wrote:
> The problem in this scenario is that even when the SSD would show no data
> loss and the rotating disk would for a few times, a dozen tests without
> failure isn't actually proof that the drive can write its complete buffer to
> disk after po
On Sat, 24 Jul 2010, David Boreham wrote:
Do you guys have any more ideas to properly 'feel this disk at its teeth' ?
While an 'end-to-end' test using PG is fine, I think it would be easier to
determine if the drive is behaving correctly by using a simple test program
that emulates the stora
Do you guys have any more ideas to properly 'feel this disk at its
teeth' ?
While an 'end-to-end' test using PG is fine, I think it would be easier
to determine if the drive is behaving correctly by using a simple test
program that emulates the storage semantics the WAL expects. Have it
wri
Hello list,
Probably like many others I've wondered why no SSD manufacturer puts a
small BBU on a SSD drive. Triggered by Greg Smith's mail
http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php
here, and also anandtech's review at
http://www.anandtech.com/show/2899/1 (see pag
On Mar 17, 2010, at 7:41 AM, Brad Nicholson wrote:
> As an aside, some folks in our Systems Engineering department here did
> do some testing of FusionIO, and they found that the helper daemons were
> inefficient and placed a fair amount of load on the server. That might
> be something to watch o
On Wed, 17 Mar 2010, Brad Nicholson wrote:
On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote:
On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:
On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which
wear levels across 100GB of actual installed capacity.
On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote:
> On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:
>
> > On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
> >> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device,
> >> which wear levels across 100GB of actual installed capacity.
Greg,
Did you ever contact them and get your hands on one?
We eventually did see long SSD rebuild times on server crash as well. But data
came back uncorrupted per my blog post. This is a good case for Slony Slaves.
Anyone in a high TX low downtime environment would have already engineered
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which
wear levels across 100GB of actual installed capacity.
http://community.fusionio.com/forums/p/34/258.aspx#258
Max drive performance would be about 41TB/day, which coincidentally works out
very close to the 3 year warr
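Checking that claim with rounded numbers (leap days ignored) shows why the two
figures land so close together:

    echo "24 * 365 * 5" | bc    # 43800 TB written over 24 years at 5 TB/day
    echo "3 * 365 * 41" | bc    # 44895 TB written over 3 years at the ~41 TB/day maximum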
On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:
> I've been hearing bad things from some folks about the quality of the
> FusionIO drives from a durability standpoint.
Can you be more specific about that? Durability over what time frame? How many
devices in the sample set? How did FusionIO de
On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote:
> On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
>> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device,
>> which wear levels across 100GB of actual installed capacity.
>> http://community.fusionio.com/forums/p/34/2
On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote:
> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device,
> which wear levels across 100GB of actual installed capacity.
> http://community.fusionio.com/forums/p/34/258.aspx#258
>
20% of overall capacity free for levelling
On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote:
> On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote:
>
> > I've been hearing bad things from some folks about the quality of the
> > FusionIO drives from a durability standpoint.
>
> Can you be more specific about that? Durability over what t
On Wed, 2010-03-17 at 14:30 +0200, Devrim GÜNDÜZ wrote:
> On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote:
> > We've enjoyed our FusionIO drives very much. They can do 100k iops
> > without breaking a sweat.
>
> Yeah, performance is excellent. I bet we could get more, but CPU was the
> bottleneck
On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote:
> We've enjoyed our FusionIO drives very much. They can do 100k iops
> without breaking a sweat.
Yeah, performance is excellent. I bet we could get more, but CPU was the
bottleneck in our test, since it was just a demo server :(
--
Devrim GÜNDÜZ
Po
On Mar 8, 2010, at 12:50 PM, Greg Smith wrote:
> Ben Chobot wrote:
>> We've enjoyed our FusionIO drives very much. They can do 100k iops without
>> breaking a sweat. Just make sure you shut them down cleanly - it can take up to
>> 30 minutes per card to recover from a crash/plug pull test.
>
> Ye
Ben Chobot wrote:
We've enjoyed our FusionIO drives very much. They can do 100k iops without breaking a sweat. Just make sure you shut them down cleanly - it can take up to 30 minutes per card to recover from a crash/plug pull test.
Yeah...I got into an argument with Kenny Gorman over my concerns
We've enjoyed our FusionIO drives very much. They can do 100k iops without
breaking a sweat. Just make sure you shut them down cleanly - it can take up to 30
minutes per card to recover from a crash/plug pull test.
I also have serious questions about their longevity and failure mode when the
flash
2010/3/8 Devrim GÜNDÜZ :
> Hi,
>
> I have a FusionIO drive to test for a few days. I already ran iozone and
> bonnie++ against it. Does anyone have more suggestions for it?
>
> It is a single drive (unfortunately).
vdbench
--
Łukasz Jagiełło
System Administrator
G-Forces Web Management Polska sp
Devrim GÜNDÜZ wrote:
Hi,
I have a FusionIO drive
Cool!!
to test for a few days. I already ran iozone and
bonnie++ against it. Does anyone have more suggestions for it?
Oracle has a tool to test drives specifically for database-type loads,
called Orion - it's free software and comes with a
Hi,
I have a FusionIO drive to test for a few days. I already ran iozone and
bonnie++ against it. Does anyone have more suggestions for it?
It is a single drive (unfortunately).
Regards,
--
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http
Testing list access
--
Ron Johnson, Jr.    Home: [EMAIL PROTECTED]
Jefferson, LA USA
"I'm not a vegetarian be