Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Rayson Ho
Thanks Yves for the clarification! It used to be very important to pre-warm EBS before running benchmarks in order to get consistent results. Then at re:Invent 2015, the AWS engineers said that it is not needed anymore, which IMO is a lot less work for us to do benchmarking in AWS, because pre-wa

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Yves Dorfsman
On 2016-05-26 09:03, Artem Tomyuk wrote: > Why no? Or you missed something? I think Rayson is correct, but the double negative makes it hard to read: "So no EBS pre-warming does not apply to EBS volumes created from snapshots." Which I interpret as: So, "no EBS pre-warming", does not apply to EB

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Artem Tomyuk
Why no? Or you missed something? It should be done on every EBS restored from snapshot. Is that from your personal experience, and if so, when did you do the test? Yes, we are using this practice, because as a part of our production load we are using auto scale groups to create new instances, wh

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Rayson Ho
Thanks Artem. So no EBS pre-warming does not apply to EBS volumes created from snapshots. Rayson == Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ http://gridscheduler.sourceforge.net/GridEngine/Gri

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Artem Tomyuk
Please look at the official doc. "New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming). However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazo
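
For reference, the initialization step that doc quote describes boils down to reading every block once so it is pulled down from S3. A minimal sketch, assuming the restored volume is attached as /dev/xvdf (the device name is an assumption, not from the thread); either command works, roughly as in the AWS tutorial linked later in this thread:

    # read every block once ("initialization", formerly "pre-warming")
    sudo dd if=/dev/xvdf of=/dev/null bs=1M
    # or, with progress reporting:
    sudo fio --filename=/dev/xvdf --rw=read --bs=128k --iodepth=32 \
             --ioengine=libaio --direct=1 --name=volume-initialize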

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Rayson Ho
On Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk wrote: > > 2016-05-26 16:50 GMT+03:00 Rayson Ho : > >> Amazon engineers said that EBS pre-warming is not needed anymore. > > > but still, if you skip this step you won't get much performance on EBS > created from snapshot. > IIRC, that's not wha

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Artem Tomyuk
2016-05-26 16:50 GMT+03:00 Rayson Ho : > Amazon engineers said that EBS pre-warming is not needed anymore. but still, if you skip this step you won't get much performance on EBS created from snapshot.

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Rayson Ho
On Thu, May 26, 2016 at 9:00 AM, Artem Tomyuk wrote: > > But still a strong recommendation to pre-warm your EBS in any case, especially if they are created from snapshot. That used to be true. However, at AWS re:Invent 2015, Amazon engineers said that EBS pre-warming is not needed anymore. Rayson ===

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Artem Tomyuk
Yes, the smaller the instance you choose, the slower EBS will be. EBS lives separately from EC2; they communicate via network. So small instance = low network bandwidth = poorer disk performance. But still a strong recommendation to pre-warm your EBS in any case, especially if they are created from sn

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Yves Dorfsman
On 2016-05-25 19:08, Rayson Ho wrote: > Actually, when "EBS-Optimized" is on, then the instance gets dedicated > bandwidth to EBS. Hadn't realised that, thanks. Is the EBS bandwidth then somewhat limited depending on the type of instance too? -- http://yves.zioup.com gpg: 4096R/32B0F416 --

Re: [PERFORM] Testing in AWS, EBS

2016-05-26 Thread Artem Tomyuk
Hi. AWS EBS is a really painful story. How were the volumes for RAID created? From snapshots? If you want to get the best performance from EBS, it needs to be pre-warmed. Here is the tutorial on how to achieve that: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html Also you should r

Re: [PERFORM] Testing in AWS, EBS

2016-05-25 Thread Rayson Ho
Actually, when "EBS-Optimized" is on, then the instance gets dedicated bandwidth to EBS. Rayson == Open Grid Scheduler - The Official Open Source Grid Engine http://gridscheduler.sourceforge.net/ http://gridscheduler.sourceforge.net/GridEngine/GridE

Re: [PERFORM] Testing in AWS, EBS

2016-05-25 Thread Yves Dorfsman
Indeed, old-style disk EBS vs new-style SSD EBS. Be aware that EBS traffic is considered part of the total "network" traffic, and each type of instance has different limits on maximum network throughput. Those differences are very significant; do tests on the same volume between two different ty

Re: [PERFORM] Testing in AWS, EBS

2016-05-25 Thread Rayson Ho
There are many factors that can affect EBS performance. For example, the type of EBS volume, the instance type, whether EBS-optimized is turned on or not, etc. Without the details, there is no apples-to-apples comparison... Rayson == Open Grid

[PERFORM] Testing in AWS, EBS

2016-05-25 Thread Tory M Blue
We are starting some testing in AWS, with EC2, EBS backed setups. What I found interesting today, was a single EBS 1TB volume, gave me something like 108MB/s throughput, however a RAID10 (4 250GB EBS volumes), gave me something like 31MB/s (test after test after test). I'm wondering what you folk
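
A sketch of roughly the kind of setup and measurement being compared here, assuming four volumes attached as /dev/xvdf through /dev/xvdi (device names are assumptions, not from the thread):

    # software RAID10 across four EBS volumes
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
    # crude sequential read-throughput check (~10 GB)
    sudo dd if=/dev/md0 of=/dev/null bs=1M count=10240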

Re: [PERFORM] Testing strategies

2014-04-15 Thread Matheus de Oliveira
On Tue, Apr 15, 2014 at 12:57 PM, Dave Cramer wrote: > I have a client wanting to test PostgreSQL on ZFS running Linux. Other > than pg_bench, are there any other benchmarks that are easy to test? Check Gregory Smith's article about testing disks [1]. [1] http://www.westnet.com/~gsmith/content/po

[PERFORM] Testing strategies

2014-04-15 Thread Dave Cramer
I have a client wanting to test PostgreSQL on ZFS running Linux. Other than pg_bench, are there any other benchmarks that are easy to test? One of the possible concerns is fragmentation over time. Any ideas on how to fragment the database before running pg_bench ? Also there is some concern about
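
A minimal pgbench sketch, plus one hypothetical way to churn the data so ZFS copy-on-write scatters blocks before measuring (the update/vacuum loop is my assumption, not something suggested in the thread):

    pgbench -i -s 300 bench                 # ~4.5 GB of test data
    # hypothetical churn pass: update/vacuum cycles to induce fragmentation
    for i in 0 1 2 3 4; do
        psql bench -c "UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid % 5 = $i;"
        psql bench -c "VACUUM pgbench_accounts;"
    done
    pgbench -c 8 -j 4 -T 600 bench          # 10-minute read/write run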

Re: [PERFORM] Testing Sandforce SSD

2010-08-11 Thread Bruce Momjian
Greg Smith wrote: > > * How to test for power failure? > > I've had good results using one of the early programs used to > investigate this class of problems: > http://brad.livejournal.com/2116715.html?page=2 FYI, this tool is mentioned in the Postgres documentation: http://www.postgr

Re: [PERFORM] Testing Sandforce SSD

2010-08-05 Thread Brad Nicholson
On 10-08-04 03:49 PM, Scott Carey wrote: On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote: On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote: After a week testing I think I can answer the question above: does it work like it's supposed to under PostgreSQL? YES The drive I have tested is

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Chris Browne
j...@commandprompt.com ("Joshua D. Drake") writes: > On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote: >> Greg Smith wrote: >> > Note that not all of the Sandforce drives include a capacitor; I hope >> > you got one that does! I wasn't aware any of the SF drives with a >> > capacitor on them

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Chris Browne
g...@2ndquadrant.com (Greg Smith) writes: > Yeb Havinga wrote: >> * What filesystem to use on the SSD? To minimize writes and maximize >> chance for seeing errors I'd choose ext2 here. > > I don't consider there to be any reason to deploy any part of a > PostgreSQL database on ext2. The potential

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Scott Carey
On Aug 3, 2010, at 9:27 AM, Merlin Moncure wrote: > > 2) I've heard that some SSD have utilities that you can use to query > the write cycles in order to estimate lifespan. Does this one, and is > it possible to publish the output (an approximation of the amount of > work along with this would b

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Scott Carey
On Aug 2, 2010, at 7:26 AM, Merlin Moncure wrote: > On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote: >> After a week testing I think I can answer the question above: does it work >> like it's supposed to under PostgreSQL? >> >> YES >> >> The drive I have tested is the $435,- 50GB OCZ Verte

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Scott Carey
On Jul 26, 2010, at 12:45 PM, Greg Smith wrote: > Yeb Havinga wrote: >> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory >> read/write test. (scale 300) No real winners or losers, though ext2 >> isn't really faster and the manual need for fix (y) during boot makes >> it i

Re: [PERFORM] Testing Sandforce SSD

2010-08-04 Thread Hannu Krosing
On Tue, 2010-08-03 at 10:40 +0200, Yeb Havinga wrote: > Please note that the 10% was on a slower CPU. On a more recent CPU the > difference was 47%, based on tests that ran for an hour. I am not surprised at all that reading and writing almost twice as much data from/to disk takes 47% longer. If less

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Merlin Moncure
On Tue, Aug 3, 2010 at 11:37 AM, Yeb Havinga wrote: > Yeb Havinga wrote: >> >> Hannu Krosing wrote: >>> >>> Did it fit in shared_buffers, or system cache ? >>> >> >> Database was ~5GB, server has 16GB, shared buffers was set to 1920MB. >>> >>> I first noticed this several years ago, when doing a C

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Yeb Havinga
Yeb Havinga wrote: Hannu Krosing wrote: Did it fit in shared_buffers, or system cache ? Database was ~5GB, server has 16GB, shared buffers was set to 1920MB. I first noticed this several years ago, when doing a COPY to a large table with indexes took noticeably longer (2-3 times longer) when

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Greg Smith
Yeb Havinga wrote: Small IO size: 4 KB Maximum Small IOPS=86883 @ Small=8 and Large=0 Small IO size: 8 KB Maximum Small IOPS=48798 @ Small=11 and Large=0 Conclusion: you can write 4KB blocks almost twice as fast as 8KB ones. This is a useful observation about the effectiveness of the write
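
A quick cross-check of those two figures (shell arithmetic on the numbers quoted above): IOPS nearly doubles at 4 KB, but the aggregate bandwidth stays in the same range, which suggests the drive is bandwidth-bound rather than IOPS-bound here:

    echo $((86883 * 4 / 1024))   # ~339 MB/s total at 4 KB writes
    echo $((48798 * 8 / 1024))   # ~381 MB/s total at 8 KB writes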

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Yeb Havinga
Hannu Krosing wrote: Did it fit in shared_buffers, or system cache ? Database was ~5GB, server has 16GB, shared buffers was set to 1920MB. I first noticed this several years ago, when doing a COPY to a large table with indexes took noticeably longer (2-3 times longer) when the indexes were in

Re: [PERFORM] Testing Sandforce SSD

2010-08-03 Thread Yeb Havinga
Scott Marlowe wrote: On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith wrote: Josh Berkus wrote: That doesn't make much sense unless there's some special advantage to a 4K blocksize with the hardware itself. Given that pgbench is always doing tiny updates to blocks, I wouldn't be surp

Re: [PERFORM] Testing Sandforce SSD

2010-08-02 Thread Scott Marlowe
On Mon, Aug 2, 2010 at 6:07 PM, Greg Smith wrote: > Josh Berkus wrote: >> >> That doesn't make much sense unless there's some special advantage to a >> 4K blocksize with the hardware itself. > > Given that pgbench is always doing tiny updates to blocks, I wouldn't be > surprised if switching to sm

Re: [PERFORM] Testing Sandforce SSD

2010-08-02 Thread Greg Smith
Josh Berkus wrote: That doesn't make much sense unless there's some special advantage to a 4K blocksize with the hardware itself. Given that pgbench is always doing tiny updates to blocks, I wouldn't be surprised if switching to smaller blocks helps it in a lot of situations if one went looki

Re: [PERFORM] Testing Sandforce SSD

2010-08-02 Thread Josh Berkus
> Definitely - that 10% number was on the old-first hardware (the core 2 > E6600). After reading my post and the 185MBps with 18500 reads/s number > I was a bit suspicious whether I did the tests on the new hardware with > 4K, because 185MBps / 18500 reads/s is ~10KB / read, so I thought that's > a

Re: [PERFORM] Testing Sandforce SSD

2010-08-02 Thread Yeb Havinga
Merlin Moncure wrote: On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote: Postgres settings: 8.4.4 --with-blocksize=4 I saw about 10% increase in performance compared to 8KB blocksizes. That's very interesting -- we need more testing in that department... Definitely - that 10% num
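
The build switch being discussed, as a sketch (paths are placeholders; the block size is baked into the cluster at initdb time, so a new data directory is required):

    ./configure --with-blocksize=4 --prefix=/usr/local/pgsql-4k
    make && make install
    /usr/local/pgsql-4k/bin/initdb -D /data/pg4k   # new cluster required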

Re: [PERFORM] Testing Sandforce SSD

2010-08-02 Thread Merlin Moncure
On Fri, Jul 30, 2010 at 11:01 AM, Yeb Havinga wrote: > After a week testing I think I can answer the question above: does it work > like it's supposed to under PostgreSQL? > > YES > > The drive I have tested is the $435,- 50GB OCZ Vertex 2 Pro, > http://www.newegg.com/Product/Product.aspx?Item=N82

Re: [PERFORM] Testing Sandforce SSD

2010-07-30 Thread Karl Denninger
6700tps?! Wow.. Ok, I'm impressed. May wait a bit for prices to come down somewhat, but that sounds like two of those are going in one of my production machines (Raid 1, of course) Yeb Havinga wrote: > Greg Smith wrote: >> Greg Smith wrote: >>> Note that not all of the Sandforce drives include a

Re: [PERFORM] Testing Sandforce SSD

2010-07-30 Thread Yeb Havinga
Greg Smith wrote: Greg Smith wrote: Note that not all of the Sandforce drives include a capacitor; I hope you got one that does! I wasn't aware any of the SF drives with a capacitor on them were even shipping yet, all of the ones I'd seen were the chipset that doesn't include one still. Have

Re: [PERFORM] Testing Sandforce SSD

2010-07-29 Thread Michael Stone
On Wed, Jul 28, 2010 at 03:45:23PM +0200, Yeb Havinga wrote: Due to the LBA remapping of the SSD, I'm not sure if putting files that are sequentially written in a different partition (together with e.g. tables) would make a difference: in the end the SSD will have a set of new blocks in its buffe

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Greg Spiegelberg
On Wed, Jul 28, 2010 at 9:18 AM, Yeb Havinga wrote: > Yeb Havinga wrote: > >> Due to the LBA remapping of the SSD, I'm not sure if putting files that >> are sequentially written in a different partition (together with e.g. >> tables) would make a difference: in the end the SSD will have a set of new

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Yeb Havinga
Yeb Havinga wrote: Michael Stone wrote: On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guaranteeing your data is written sequentially every time? If you

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Yeb Havinga
Michael Stone wrote: On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guaranteeing your data is written sequentially every time? If you dedicate a partitio

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Michael Stone
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guaranteeing your data is written sequentially every time? If you dedicate a partition to xlog, you already

Re: [PERFORM] Testing Sandforce SSD

2010-07-28 Thread Michael Stone
On Mon, Jul 26, 2010 at 01:47:14PM -0600, Scott Marlowe wrote: Note that SSDs aren't usually real fast at large sequential writes though, so it might be worth putting pg_xlog on a spinning pair in a mirror and seeing how much, if any, the SSD drive speeds up when not having to do pg_xlog. xlog

Re: [PERFORM] Testing Sandforce SSD

2010-07-27 Thread Hannu Krosing
On Mon, 2010-07-26 at 14:34 -0400, Greg Smith wrote: > Matthew Wakeling wrote: > > Yeb also made the point - there are far too many points on that graph > > to really tell what the average latency is. It'd be instructive to > > have a few figures, like "only x% of requests took longer than y". >

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Greg Spiegelberg wrote: I know I'm talking development now but is there a case for a pg_xlog block device to remove the file system overhead and guaranteeing your data is written sequentially every time? It's possible to set the PostgreSQL wal_sync_method parameter in the database to open_dat
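
The parameter Greg is referring to is presumably open_datasync, which opens the WAL with O_DSYNC instead of issuing explicit fsync calls. A sketch of setting it, plus the usual way to compare sync methods on the actual WAL device with pg_test_fsync (shipped as contrib test_fsync in releases contemporary with this thread):

    # postgresql.conf sketch:
    #   wal_sync_method = open_datasync
    # compare fsync/fdatasync/open_datasync rates on the WAL device first:
    pg_test_fsync -f /pgdata/pg_xlog/test.out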

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Andres Freund
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote: > On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith wrote: > > Yeb Havinga wrote: > >> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory > >> read/write test. (scale 300) No real winners or losers, though ext2 isn't >

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Spiegelberg
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith wrote: > Yeb Havinga wrote: > >> I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory >> read/write test. (scale 300) No real winners or losers, though ext2 isn't >> really faster and the manual need for fix (y) during boot makes it >>

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Scott Marlowe
On Mon, Jul 26, 2010 at 12:40 PM, Greg Smith wrote: > Greg Spiegelberg wrote: >> >> Speaking of the layers in-between, has this test been done with the ext3 >> journal on a different device?  Maybe the purpose is wrong for the SSD.  Use >> the SSD for the ext3 journal and the spindled drives for f

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Yeb Havinga wrote: I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory read/write test. (scale 300) No real winners or losers, though ext2 isn't really faster and the manual need for fix (y) during boot makes it impractical in its standard configuration. That's what happens

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Yeb Havinga wrote: To get similar *average* performance results you'd need to put about 4 drives and a BBU into a server. The Please forget this question, I now see it in the mail I'm replying to. Sorry for the spam! -- Yeb

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Greg Smith wrote: Yeb Havinga wrote: Please remember that particular graphs are from a read/write pgbench run on a bigger than RAM database that ran for some time (so with checkpoints), on a *single* $435 50GB drive without BBU raid controller. To get similar *average* performance results you

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Kevin Grittner
Greg Smith wrote: > Yeb's data is showing that a single SSD is competitive with a > small array on average, but with better worst-case behavior than > I'm used to seeing. So, how long before someone benchmarks a small array of SSDs? :-) -Kevin

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Greg Spiegelberg wrote: Speaking of the layers in-between, has this test been done with the ext3 journal on a different device? Maybe the purpose is wrong for the SSD. Use the SSD for the ext3 journal and the spindled drives for filesystem? The main disk bottleneck on PostgreSQL databases

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Matthew Wakeling wrote: Yeb also made the point - there are far too many points on that graph to really tell what the average latency is. It'd be instructive to have a few figures, like "only x% of requests took longer than y". Average latency is the inverse of TPS. So if the result is, say,

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Spiegelberg
On Mon, Jul 26, 2010 at 10:26 AM, Yeb Havinga wrote: > Matthew Wakeling wrote: > >> Apologies, I was interpreting the graph as the latency of the device, not >> all the layers in-between as well. There isn't any indication in the email >> with the graph as to what the test conditions or software

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Matthew Wakeling wrote: Apologies, I was interpreting the graph as the latency of the device, not all the layers in-between as well. There isn't any indication in the email with the graph as to what the test conditions or software are. That info was in the email preceding the graph mail, but I s

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Matthew Wakeling
On Mon, 26 Jul 2010, Greg Smith wrote: Matthew Wakeling wrote: Does your latency graph really have milliseconds as the y axis? If so, this device is really slow - some requests have a latency of more than a second! Have you tried that yourself? If you generate one of those with standard hard

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Yeb Havinga wrote: Please remember that particular graphs are from a read/write pgbench run on a bigger than RAM database that ran for some time (so with checkpoints), on a *single* $435 50GB drive without BBU raid controller. To get similar *average* performance results you'd need to put abou

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Greg Smith
Matthew Wakeling wrote: Does your latency graph really have milliseconds as the y axis? If so, this device is really slow - some requests have a latency of more than a second! Have you tried that yourself? If you generate one of those with standard hard drives and a BBWC under Linux, I expec

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Matthew Wakeling wrote: On Sun, 25 Jul 2010, Yeb Havinga wrote: Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at http://tinypic.com/r/x5e846/3 Does your latency graph really have milliseconds as the y axis? Yes If so, this device is really slow - some requests have a latency of m

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Matthew Wakeling
On Sun, 25 Jul 2010, Yeb Havinga wrote: Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at http://tinypic.com/r/x5e846/3 Does your latency graph really have milliseconds as the y axis? If so, this device is really slow - some requests have a latency of more than a second! Matthew

Re: [PERFORM] Testing Sandforce SSD

2010-07-26 Thread Yeb Havinga
Yeb Havinga wrote: Greg Smith wrote: Put it on ext3, toggle on noatime, and move on to testing. The overhead of the metadata writes is the least of the problems when doing write-heavy stuff on Linux. I ran a pgbench run and power failure test during pgbench with a 3 year old computer On th

Re: [PERFORM] Testing Sandforce SSD

2010-07-25 Thread Yeb Havinga
Yeb Havinga wrote: 8GB DDR2 something.. (lots of details removed) Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at http://tinypic.com/r/x5e846/3 Thanks to http://www.westnet.com/~gsmith/content/postgresql/pgbench.htm for the gnuplot and psql scripts!
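
For anyone wanting to reproduce graphs like those: pgbench's -l flag writes one log line per transaction, with the latency in microseconds in the third column, which feeds straight into gnuplot. A sketch (the log file name includes the pgbench PID, shown here as a placeholder):

    pgbench -c 4 -T 900 -l bench        # writes pgbench_log.<pid>
    gnuplot -e "set terminal png; set output 'latency.png'; \
                plot 'pgbench_log.12345' using 3 with dots title 'latency (us)'"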

Re: [PERFORM] Testing Sandforce SSD

2010-07-25 Thread Yeb Havinga
Greg Smith wrote: Put it on ext3, toggle on noatime, and move on to testing. The overhead of the metadata writes is the least of the problems when doing write-heavy stuff on Linux. I ran a pgbench run and power failure test during pgbench with a 3 year old computer: 8GB DDR2, Intel Core 2 duo
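
"Toggle on noatime" just means mounting the data filesystem without access-time updates. A sketch, with the device and mountpoint as placeholders:

    # /etc/fstab entry for the PostgreSQL data volume:
    /dev/sdb1  /pgdata  ext3  noatime  0  2
    # or apply without a reboot:
    sudo mount -o remount,noatime /pgdata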

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Greg Smith
Yeb Havinga wrote: Writes/s start low but quickly converge to a number in the range of 1200 to 1800. The writes diskchecker does are 16kB writes. Making this 4kB writes does not increase writes/s. 32kB seems a little less, 64kB is about two third of initial writes/s and 128kB is half. Let's t

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Yeb Havinga
Yeb Havinga wrote: Yeb Havinga wrote: diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s) Total errors: 0 :-) OTOH, I now notice the 39 writes/s .. If that means ~ 39 tps... bummer. When playing with it a bit more, I couldn't get the test_file to be created in the right

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Greg Smith
Joshua D. Drake wrote: That is quite the toy. I can get 4 SATA-II with RAID Controller, with battery backed cache, for the same price or less :P True, but if you look at tests like http://www.anandtech.com/show/2899/12 it suggests there's probably at least a 6:1 performance speedup for wor

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Joshua D. Drake
On Sat, 2010-07-24 at 16:21 -0400, Greg Smith wrote: > Greg Smith wrote: > > Note that not all of the Sandforce drives include a capacitor; I hope > > you got one that does! I wasn't aware any of the SF drives with a > > capacitor on them were even shipping yet, all of the ones I'd seen > > wer

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Greg Smith
Greg Smith wrote: Note that not all of the Sandforce drives include a capacitor; I hope you got one that does! I wasn't aware any of the SF drives with a capacitor on them were even shipping yet, all of the ones I'd seen were the chipset that doesn't include one still. Haven't checked in a f

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Yeb Havinga
Yeb Havinga wrote: diskchecker: running 37 sec, 4.47% coverage of 500 MB (1468 writes; 39/s) Total errors: 0 :-) OTOH, I now notice the 39 writes/s .. If that means ~ 39 tps... bummer.

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Yeb Havinga
Greg Smith wrote: Note that not all of the Sandforce drives include a capacitor; I hope you got one that does! I wasn't aware any of the SF drives with a capacitor on them were even shipping yet, all of the ones I'd seen were the chipset that doesn't include one still. Haven't checked in a f

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Merlin Moncure
On Sat, Jul 24, 2010 at 3:20 AM, Yeb Havinga wrote: > Hello list, > > Probably like many other's I've wondered why no SSD manufacturer puts a > small BBU on a SSD drive. Triggered by Greg Smith's mail > http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here, > and also anandtec

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Greg Smith
Yeb Havinga wrote: Probably like many other's I've wondered why no SSD manufacturer puts a small BBU on a SSD drive. Triggered by Greg Smith's mail http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here, and also anandtech's review at http://www.anandtech.com/show/2899/1 (s

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread Ben Chobot
On Jul 24, 2010, at 12:20 AM, Yeb Havinga wrote: > The problem in this scenario is that even when the SSD would show no data > loss and the rotating disk would for a few times, a dozen tests without > failure isn't actually proof that the drive can write its complete buffer to > disk after po

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread david
On Sat, 24 Jul 2010, David Boreham wrote: Do you guys have any more ideas to properly 'feel this disk at its teeth' ? While an 'end-to-end' test using PG is fine, I think it would be easier to determine if the drive is behaving correctly by using a simple test program that emulates the stora

Re: [PERFORM] Testing Sandforce SSD

2010-07-24 Thread David Boreham
Do you guys have any more ideas to properly 'feel this disk at its teeth' ? While an 'end-to-end' test using PG is fine, I think it would be easier to determine if the drive is behaving correctly by using a simple test program that emulates the storage semantics the WAL expects. Have it wri

[PERFORM] Testing Sandforce SSD

2010-07-24 Thread Yeb Havinga
Hello list, Probably like many other's I've wondered why no SSD manufacturer puts a small BBU on a SSD drive. Triggered by Greg Smith's mail http://archives.postgresql.org/pgsql-performance/2010-02/msg00291.php here, and also anandtech's review at http://www.anandtech.com/show/2899/1 (see pag

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Ben Chobot
On Mar 17, 2010, at 7:41 AM, Brad Nicholson wrote: > As an aside, some folks in our Systems Engineering department here did > do some testing of FusionIO, and they found that the helper daemons were > inefficient and placed a fair amount of load on the server. That might > be something to watch o

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread david
On Wed, 17 Mar 2010, Brad Nicholson wrote: On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote: On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote: On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote: FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which wear le

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Brad Nicholson
On Wed, 2010-03-17 at 14:11 -0400, Justin Pitts wrote: > On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote: > > > On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote: > >> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, > >> which wear levels across 100GB of actual ins

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Kenny Gorman
Greg, Did you ever contact them and get your hands on one? We eventually did see long SSD rebuild times on server crash as well. But data came back uncorrupted per my blog post. This is a good case for Slony Slaves. Anyone in a high TX low downtime environment would have already engineered

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Justin Pitts
FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, which wear levels across 100GB of actual installed capacity. http://community.fusionio.com/forums/p/34/258.aspx#258 Max drive performance would be about 41TB/day, which coincidentally works out very close to the 3 year warr
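
Those two endurance figures do line up; a quick shell-arithmetic cross-check of the numbers quoted above:

    echo $((24 * 365 * 5))    # 43800 TB written over 24 years @ 5 TB/day
    echo $((3 * 365 * 41))    # 44895 TB written over 3 years @ 41 TB/day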

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Justin Pitts
On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote: > I've been hearing bad things from some folks about the quality of the > FusionIO drives from a durability standpoint. Can you be more specific about that? Durability over what time frame? How many devices in the sample set? How did FusionIO de

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Justin Pitts
On Mar 17, 2010, at 10:41 AM, Brad Nicholson wrote: > On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote: >> FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, >> which wear levels across 100GB of actual installed capacity. >> http://community.fusionio.com/forums/p/34/2

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Brad Nicholson
On Wed, 2010-03-17 at 09:52 -0400, Justin Pitts wrote: > FusionIO is publicly claiming 24 years @ 5TB/day on the 80GB SLC device, > which wear levels across 100GB of actual installed capacity. > http://community.fusionio.com/forums/p/34/258.aspx#258 > 20% of overall capacity free for levelling

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Brad Nicholson
On Wed, 2010-03-17 at 09:11 -0400, Justin Pitts wrote: > On Mar 17, 2010, at 9:03 AM, Brad Nicholson wrote: > > > I've been hearing bad things from some folks about the quality of the > > FusionIO drives from a durability standpoint. > > Can you be more specific about that? Durability over what t

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Brad Nicholson
On Wed, 2010-03-17 at 14:30 +0200, Devrim GÜNDÜZ wrote: > On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote: > > We've enjoyed our FusionIO drives very much. They can do 100k iops > > without breaking a sweat. > > Yeah, performance is excellent. I bet we could get more, but CPU was > the bottleneck

Re: [PERFORM] Testing FusionIO

2010-03-17 Thread Devrim GÜNDÜZ
On Mon, 2010-03-08 at 09:38 -0800, Ben Chobot wrote: > We've enjoyed our FusionIO drives very much. They can do 100k iops > without breaking a sweat. Yeah, performance is excellent. I bet we could get more, but CPU was the bottleneck in our test, since it was just a demo server :( -- Devrim GÜNDÜZ Po

Re: [PERFORM] Testing FusionIO

2010-03-08 Thread Ben Chobot
On Mar 8, 2010, at 12:50 PM, Greg Smith wrote: > Ben Chobot wrote: >> We've enjoyed our FusionIO drives very much. They can do 100k iops without >> breaking a sweat. Just make sure you shut them down cleanly - it can take up to >> 30 minutes per card to recover from a crash/plug pull test. > > Ye

Re: [PERFORM] Testing FusionIO

2010-03-08 Thread Greg Smith
Ben Chobot wrote: We've enjoyed our FusionIO drives very much. They can do 100k iops without breaking a sweat. Just make sure you shut them down cleanly - it can take up to 30 minutes per card to recover from a crash/plug pull test. Yeah...I got into an argument with Kenny Gorman over my concerns

Re: [PERFORM] Testing FusionIO

2010-03-08 Thread Ben Chobot
We've enjoyed our FusionIO drives very much. They can do 100k iops without breaking a sweat. Just make sure you shut them down cleanly - it can take up to 30 minutes per card to recover from a crash/plug pull test. I also have serious questions about their longevity and failure mode when the flash

Re: [PERFORM] Testing FusionIO

2010-03-08 Thread Łukasz Jagiełło
2010/3/8 Devrim GÜNDÜZ : > Hi, > > I have a FusionIO drive to test for a few days. I already ran iozone and > bonnie++ against it. Does anyone have more suggestions for it? > > It is a single drive (unfortunately). vdbench -- Łukasz Jagiełło System Administrator G-Forces Web Management Polska sp

Re: [PERFORM] Testing FusionIO

2010-03-08 Thread Yeb Havinga
Devrim GÜNDÜZ wrote: Hi, I have a FusionIO drive Cool!! to test for a few days. I already ran iozone and bonnie++ against it. Does anyone have more suggestions for it? Oracle has a tool to test drives specifically for database kinds of loads, called orion - it's free software and comes with a

[PERFORM] Testing FusionIO

2010-03-08 Thread Devrim GÜNDÜZ
Hi, I have a FusionIO drive to test for a few days. I already ran iozone and bonnie++ against it. Does anyone have more suggestions for it? It is a single drive (unfortunately). Regards, -- Devrim GÜNDÜZ PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer PostgreSQL RPM Repository: http
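
Beyond iozone and bonnie++, fio is the other usual suspect for this kind of device. A sketch of a database-ish random read/write test, assuming the card shows up as /dev/fioa (the device name is an assumption):

    sudo fio --name=randrw --filename=/dev/fioa --rw=randrw --rwmixread=70 \
             --bs=8k --iodepth=32 --ioengine=libaio --direct=1 \
             --runtime=300 --time_based --group_reporting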
