Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-11 Thread Robert Milkowski
> Now, if anyone is still reading, I have another question. The new Solaris 11 > device naming convention hides the physical tree from me. I got just a list of > long disk names all starting with "c0" (see below) but I need to know which > disk is connected to which controller so that I can creat

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-09 Thread Roman Matiyenko
Downloaded, unzipped and flying! It shows GUID which is part of the /dev/rdsk/c0t* name! Thanks!!! And thanks again! This msg goes to the group. root@carbon:~/bin/LSI-SAS2IRCU/SAS2IRCU_P13/sas2ircu_solaris_x86_rel# ./sas2ircu 0 DISPLAY | grep GUID GUID
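For anyone else trying to map the Solaris 11 WWN-style names back to controllers, a rough sketch along these lines should work (assuming the controllers are numbered 0-2 by sas2ircu, as on this box; adjust the list to whatever "sas2ircu LIST" reports):

    #!/bin/ksh
    # Print the disk GUIDs reported by each LSI controller.
    # Each GUID appears inside the matching /dev/rdsk/c0t<GUID>d0 name.
    ./sas2ircu LIST
    for ctrl in 0 1 2; do
        echo "=== controller $ctrl ==="
        ./sas2ircu $ctrl DISPLAY | grep GUID
    done

Grepping each GUID against the output of "ls /dev/rdsk" then shows which c0t...d0 entry sits behind which controller.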

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-09 Thread Roman Matiyenko
I followed this guide but instead of 2108it.bin I downloaded the latest firmware file for 9211-8i from LSI web site. I now have three 9211's! :) http://lime-technology.com/forum/index.php?topic=12767.msg124393#msg124393 On 4 May 2012 18:33, Bob Friesenhahn wrote: > On Fri, 4 May 2012, Rocky

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-09 Thread Roman Matiyenko
Thanks for the tips, everybody! Progress report: OpenIndiana failed to recognise LSI 9240-8i's. I installed 4.7 drivers from LSI website ("for Solaris 11 and up") but it started throwing "component failed" messages. So I gave up on 9240's and re-flashed them into 9211-8i's ("IT mode"). Solaris 11

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-04 Thread Hugues LEPESANT
Hi, We had several bad experiences with LSI cards (LSI 3081E, LSI SAS84016E), even with the official Solaris drivers provided by LSI. Finally we used the LSI SAS9201-16i card: http://www.lsi.com/channel/france/products/storagecomponents/Pages/LSISAS9201-16i.aspx This one works as expected on Nex

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-04 Thread Bob Friesenhahn
On Fri, 4 May 2012, Rocky Shek wrote: If I were you, I will not use 9240-8I. I will use 9211-8I as pure HBA with IT FW for ZFS. Is there IT FW for the 9240-8i? They seem to use the same SAS chipset. My next system will have 9211-8i with IT FW. Playing it safe. Good enough for Nexenta is

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-04 Thread Rocky Shek
: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] ZFS performance on LSI 9240-8i? On May 4, 2012, at 5:25 AM, Roman Matiyenko wrote: Hi all, I have a bad bad problem with our brand new server! The lengthy details are below but to cut the story short, on the same hardware (3 x LSI

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-04 Thread Hung-Sheng Tsao Ph.D.
Hi, S11 comes with its own driver for some LSI SAS HBAs, but on the HCL I only see the LSI SAS 9200-8e and the LSI MegaRAID SAS 9260-8i.

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-04 Thread Richard Elling
On May 4, 2012, at 5:25 AM, Roman Matiyenko wrote: > Hi all, > > I have a bad bad problem with our brand new server! > > The lengthy details are below but to cut the story short, on the same > hardware (3 x LSI 9240-8i, 20 x 3TB 6gb HDDs) I am getting ZFS > sequential writes of 1.4GB/s on Solari

Re: [zfs-discuss] ZFS performance question over NFS

2011-08-19 Thread Thomas Nau
Hi Bob > I don't know what the request pattern from filebench looks like but it seems > like your ZEUS RAM devices are not keeping up or > else many requests are bypassing the ZEUS RAM devices. > > Note that very large synchronous writes will bypass your ZEUS RAM device and > go directly to a l
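A side note, not something Thomas or Bob mentioned, so treat it as an assumption: the logbias and sync dataset properties decide whether synchronous writes use the separate log device at all, so they are worth checking before blaming the ZEUS RAM devices. A quick look (the pool/filesystem name is made up):

    # logbias=latency (the default) sends small sync writes to the slog;
    # logbias=throughput deliberately bypasses it and writes to the main pool.
    zfs get logbias,sync tank/nfs-share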

Re: [zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Bob Friesenhahn
On Thu, 18 Aug 2011, Thomas Nau wrote: Tim the client is identical as the server but no SAS drives attached. Also right now only one 1gbit Intel NIC Is available I don't know what the request pattern from filebench looks like but it seems like your ZEUS RAM devices are not keeping up or else

Re: [zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Thomas Nau
Tim, the client is identical to the server but with no SAS drives attached. Also, right now only one 1Gbit Intel NIC is available. Thomas On 18.08.2011 at 17:49, Tim Cook wrote: > What are the specs on the client? > > On Aug 18, 2011 10:28 AM, "Thomas Nau" wrote: > > Dear all. > > We finally got al

Re: [zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Tim Cook
What are the specs on the client? On Aug 18, 2011 10:28 AM, "Thomas Nau" wrote: > Dear all. > We finally got all the parts for our new fileserver following several > recommendations we got over this list. We use > > Dell R715, 96GB RAM, dual 8-core Opterons > 1 10GE Intel dual-port NIC > 2 LSI 920

Re: [zfs-discuss] ZFS performance falls off a cliff

2011-05-13 Thread Don
~# uname -a SunOS nas01a 5.11 oi_147 i86pc i386 i86pc Solaris ~# zfs get version pool0 NAME PROPERTY VALUE SOURCE pool0 version 5 - ~# zpool get version pool0 NAME PROPERTY VALUE SOURCE pool0 version 28 default -- This message posted from opensolaris.org __

Re: [zfs-discuss] ZFS performance falls off a cliff

2011-05-13 Thread Aleksandr Levchuk
sirket, could you please share your OS, zfs, and zpool versions? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS Performance Question

2011-05-10 Thread Luke Lonergan
Robert, > I belive it's not solved yet but you may want to try with > latest nevada and see if there's a difference. It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express post build 47 I think. - Luke ___ zfs-discuss mailing list zfs

Re: [zfs-discuss] ZFS Performance

2011-02-28 Thread Brandon High
On Sun, Feb 27, 2011 at 7:35 PM, Brandon High wrote: > It moves from "best fit" to "any fit" at a certain point, which is at > ~ 95% (I think). Best fit looks for a large contiguous space to avoid > fragmentation while any fit looks for any free space. I got the terminology wrong, it's first-fit

Re: [zfs-discuss] ZFS Performance

2011-02-28 Thread Torrey McMahon
On 2/25/2011 4:15 PM, Torrey McMahon wrote: On 2/25/2011 3:49 PM, Tomas Ögren wrote: On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes: > Hi All, > > In reading the ZFS Best practices, I'm curious if this statement is > still true about 80% utilization. It happens at

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Eric D. Mudama
On Mon, Feb 28 at 0:30, Toby Thain wrote: I would expect COW puts more pressure on near-full behaviour compared to write-in-place filesystems. If that's not true, somebody correct me. Off the top of my head, I think it'd depend on the workload. Write-in-place will always be faster with large

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Toby Thain
On 27/02/11 9:59 AM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of David Blasingame Oracle >> >> Keep pool space under 80% utilization to maintain pool performance. > > For what it's worth, the same is true for a

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Brandon High
On Sun, Feb 27, 2011 at 6:59 AM, Edward Ned Harvey wrote: > But there is one specific thing, isn't there?  Where ZFS will choose to use > a different algorithm for something, when pool usage exceeds some threshold. > Right?  What is that? It moves from "best fit" to "any fit" at a certain point,

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Roy Sigurd Karlsbakk
> In reading the ZFS Best practices, I'm curious if this statement is > still true about 80% utilization. It is, and in my experience it doesn't help much to add another VDEV to a full pool: the existing VDEVs will still be full, and performance will still be slow. For this reason, new sy

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of David Blasingame Oracle > > Keep pool space under 80% utilization to maintain pool performance. For what it's worth, the same is true for any other filesystem too. What really matters is the

Re: [zfs-discuss] ZFS Performance

2011-02-25 Thread Torrey McMahon
On 2/25/2011 3:49 PM, Tomas Ögren wrote: On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes: > Hi All, > > In reading the ZFS Best practices, I'm curious if this statement is > still true about 80% utilization. It happens at about 90% for me.. all of a sudden, the mail

Re: [zfs-discuss] ZFS Performance

2011-02-25 Thread Tomas Ögren
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes: > Hi All, > > In reading the ZFS Best practices, I'm curious if this statement is > still true about 80% utilization. It happens at about 90% for me.. all of a sudden, the mail server got butt slow.. killed an old snapshot

Re: [zfs-discuss] ZFS Performance

2011-02-25 Thread Cindy Swearingen
Hi Dave, Still true. Thanks, Cindy On 02/25/11 13:34, David Blasingame Oracle wrote: > Hi All, > > In reading the ZFS Best practices, I'm curious if this statement is > still true about 80% utilization. > > from : > http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide >

Re: [zfs-discuss] ZFS performance Tuning

2010-08-04 Thread Richard Elling
On Aug 4, 2010, at 3:22 AM, TAYYAB REHMAN wrote: > Hi, > I am working with ZFS nowadays and I am facing some performance issues > reported by the application team; they say writes are very slow on ZFS compared to UFS. > Kindly send me some good references or book links. I will be very thankful to > you.

Re: [zfs-discuss] zfs performance issue

2010-05-10 Thread Eric D. Mudama
On Mon, May 10 at 9:08, Erik Trimble wrote: Abhishek Gupta wrote: Hi, I just installed OpenSolaris on my Dell Optiplex 755 and created raidz2 with a few slices on a single disk. I was expecting a good read/write performance but I got the speed of 12-15MBps. How can I enhance the read/write

Re: [zfs-discuss] zfs performance issue

2010-05-10 Thread Erik Trimble
Abhishek Gupta wrote: Hi, I just installed OpenSolaris on my Dell Optiplex 755 and created raidz2 with a few slices on a single disk. I was expecting a good read/write performance but I got the speed of 12-15MBps. How can I enhance the read/write performance of my raid? Thanks, Abhi. You ab
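For reference, the pattern being warned against here is building raidz2 out of slices of one disk: you get the parity overhead with none of the independent spindles, so every logical write turns into several writes to the same platter. A minimal sketch of the usual layout instead, with made-up device names standing in for four separate disks:

    # raidz2 across four whole disks rather than four slices of one disk
    zpool create tank raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    zpool status tank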

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-04-19 Thread Richard Skelton
> On 18/03/10 08:36 PM, Kashif Mumtaz wrote: > > Hi, > > I did another test on both machine. And write > performance on ZFS extraordinary slow. > > Which build are you running? > > On snv_134, 2x dual-core cpus @ 3GHz and 8Gb ram (my > desktop), I > see these results: > > > $ time dd if=/dev/ze

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-22 Thread Kashif Mumtaz
hi, Thanks for all the replies. I have found the real culprit: the hard disk was faulty. I changed the hard disk, and now ZFS performance is much better. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org htt

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-18 Thread Erik Trimble
Erik Trimble wrote: James C. McPherson wrote: On 18/03/10 10:05 PM, Kashif Mumtaz wrote: Hi, Thanks for your reply BOTH are Sun Sparc T1000 machines. Hard disk 1 TB sata on both ZFS system Memory32 GB , Processor 1GH 6 core os Solaris 10 10/09 s10s_u8wos_08a SPARC PatchCluster level 1

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-18 Thread Erik Trimble
James C. McPherson wrote: On 18/03/10 10:05 PM, Kashif Mumtaz wrote: Hi, Thanks for your reply BOTH are Sun Sparc T1000 machines. Hard disk 1 TB sata on both ZFS system Memory32 GB , Processor 1GH 6 core os Solaris 10 10/09 s10s_u8wos_08a SPARC PatchCluster level 142900-02(Dec 09 ) U

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-18 Thread James C. McPherson
On 18/03/10 10:05 PM, Kashif Mumtaz wrote: Hi, Thanks for your reply BOTH are Sun Sparc T1000 machines. Hard disk 1 TB sata on both ZFS system Memory32 GB , Processor 1GH 6 core os Solaris 10 10/09 s10s_u8wos_08a SPARC PatchCluster level 142900-02(Dec 09 ) UFS machine Hard disk 1 TB s

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-18 Thread Svein Skogen
On 18.03.2010 21:31, Daniel Carosone wrote: > You have a gremlin to hunt... Wouldn't Sun help here? ;) (sorry couldn't help myself, I've spent a week hunting gremlins until I hit the brick wall of the MPT problem) //Svein

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-18 Thread Daniel Carosone
On Thu, Mar 18, 2010 at 03:36:22AM -0700, Kashif Mumtaz wrote: > I did another test on both machine. And write performance on ZFS > extraordinary slow. > - > In ZFS data was being write around 1037 kw/s while disk remain busy

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-18 Thread Kashif Mumtaz
Hi, Thanks for your reply. BOTH are Sun SPARC T1000 machines. Hard disk 1 TB SATA on both. ZFS system: Memory 32 GB, Processor 1GHz 6 core, OS Solaris 10 10/09 s10s_u8wos_08a SPARC, PatchCluster level 142900-02 (Dec 09). UFS machine: Hard disk 1 TB SATA, Memory 16 GB, Processor 1GHz 6 c

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-18 Thread James C. McPherson
On 18/03/10 08:36 PM, Kashif Mumtaz wrote: Hi, I did another test on both machine. And write performance on ZFS extraordinary slow. Which build are you running? On snv_134, 2x dual-core cpus @ 3GHz and 8Gb ram (my desktop), I see these results: $ time dd if=/dev/zero of=test.dbf bs=8k count

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-18 Thread Kashif Mumtaz
Hi, I did another test on both machines, and write performance on ZFS is extraordinarily slow. I ran the following tests on both machines. For write: time dd if=/dev/zero of=test.dbf bs=8k count=1048576 For read: time dd if=/testpool/test.dbf of=/dev/null bs=8k ZFS machine has 32GB memory UFS machine
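One caveat on the dd numbers (my observation, not part of the original test): with 32GB of RAM an 8GB write can be absorbed almost entirely by the ARC, so it helps to write more data than memory and to include the flush in the timing. A hedged variant of the same test:

    # ~64GB written, so it cannot fit in 32GB of ARC; the final sync is timed too
    time sh -c 'dd if=/dev/zero of=/testpool/test.dbf bs=8k count=8388608; sync'
    time dd if=/testpool/test.dbf of=/dev/null bs=8k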

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-17 Thread Daniel Carosone
On Wed, Mar 17, 2010 at 10:15:53AM -0500, Bob Friesenhahn wrote: > Clearly there are many more reads per second occuring on the zfs > filesystem than the ufs filesystem. yes > Assuming that the application-level requests are really the same From the OP, the workload is a "find /". So, ZFS mak

Re: [zfs-discuss] ZFS Performance on SATA Deive

2010-03-17 Thread Bob Friesenhahn
On Wed, 17 Mar 2010, Kashif Mumtaz wrote: but on UFS file system averge busy is 50% , any idea why ZFS makes disk more busy ? Clearly there are many more reads per second occuring on the zfs filesystem than the ufs filesystem. Assuming that the application-level requests are really the sam

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-20 Thread Edward Ned Harvey
> Doesn't this mean that if you enable write back, and you have > a single, non-mirrored raid-controller, and your raid controller > dies on you so that you loose the contents of the nvram, you have > a potentially corrupt file system? It is understood, that any single point of failure could resul

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-20 Thread Edward Ned Harvey
> ZFS has intelligent prefetching. AFAIK, Solaris disk drivers do not > prefetch. Can you point me to any reference? I didn't find anything stating yay or nay, for either of these. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.o

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Neil Perrin
If I understand correctly, ZFS nowadays will only flush data to non-volatile storage (such as a RAID controller's NVRAM), and not all the way out to disks. (To solve performance problems with some storage systems, and I believe that it also is the right thing to do under normal circumstances.) D

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote: > The PERC cache measurably and significantly accelerates small disk writes. > However, for read operations, it is insignificant compared to system ram, > both in terms of size and speed. There is no significant performance > improvement by

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Richard Elling
On Feb 19, 2010, at 8:35 AM, Edward Ned Harvey wrote: > One more thing I’d like to add here: > > The PERC cache measurably and significantly accelerates small disk writes. > However, for read operations, it is insignificant compared to system ram, > both in terms of size and speed. There is no

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Günther
hello i have made some benchmarks with my napp-it zfs-server: http://www.napp-it.org/bench.pdf -> 2gb vs 4 gb vs 8 gb ram -> mirror vs raidz vs raidz2 vs raidz3 -> dedup and compress enabled

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Edward Ned Harvey
One more thing I'd like to add here: The PERC cache measurably and significantly accelerates small disk writes. However, for read operations, it is insignificant compared to system ram, both in terms of size and speed. There is no significant performance improvement by enabling adaptive readahead

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Daniel Carosone
On Thu, Feb 18, 2010 at 10:39:48PM -0600, Bob Friesenhahn wrote: > This sounds like an initial 'silver' rather than a 'resilver'. Yes, in particular it will be entirely sequential. ZFS resilver is in txg order and involves seeking. > What I am interested in is the answer to these sort of questio

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Bob Friesenhahn
On Thu, 18 Feb 2010, Edward Ned Harvey wrote: Actually, that's easy. Although the "zpool create" happens instantly, all the hardware raid configurations required an initial resilver. And they were exactly what you expect. Write 1 Gbit/s until you reach the size of the drive. I watched the pro

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
> A most excellent set of tests. We could use some units in the PDF > file though. Oh, by the way, you originally requested the 12G file to be used in benchmark, and later changed to 4G. But by that time, two of the tests had already completed on the 12G, and I didn't throw away those results, b

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
> A most excellent set of tests. We could use some units in the PDF > file though. Oh, hehehe. ;-) The units are written in the raw txt files. On your tests, the units were ops/sec, and in mine, they were Kbytes/sec. If you like, you can always grab the xlsx and modify it to your tastes, and

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Bob Friesenhahn
On Thu, 18 Feb 2010, Edward Ned Harvey wrote: Ok, I’ve done all the tests I plan to complete.  For highest performance, it seems: · The measure I think is the most relevant for typical operation is the fastest random read /write / mix.  (Thanks Bob, for suggesting I do this test.) Th

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
Ok, I've done all the tests I plan to complete. For highest performance, it seems: · The measure I think is the most relevant for typical operation is the fastest random read / write / mix. (Thanks Bob, for suggesting I do this test.) The winner is clearly striped mirrors in ZFS .

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-15 Thread Carson Gaspar
Richard Elling wrote: ... As you can see, so much has changed, hopefully for the better, that running performance benchmarks on old software just isn't very interesting. NB. Oracle's Sun OpenStorage systems do not use Solaris 10 and if they did, they would not be competitive in the market. The n

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Richard Elling
On Feb 14, 2010, at 6:45 PM, Thomas Burgess wrote: > > Whatever. Regardless of what you say, it does show: > > · Which is faster, raidz, or a stripe of mirrors? > > · How much does raidz2 hurt performance compared to raidz? > > · Which is faster, raidz, or hardware raid

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
On Sun, 14 Feb 2010, Thomas Burgess wrote: Solaris 10 has a really old version of ZFS.  i know there are some pretty big differences in zfs versions from my own non scientific benchmarks.  It would make sense that people wouldn't be as interested in benchmarks of solaris 10 ZFS seeing as ther

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
On Sun, 14 Feb 2010, Edward Ned Harvey wrote: iozone -m -t 8 -T -O -r 128k -o -s 12G Actually, it seems that this is more than sufficient: iozone -m -t 8 -T -r 128k -o -s 4G Good news, cuz I kicked off the first test earlier today, and it seems like it will run till Wednesday. ;-) The

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Bob Friesenhahn
On Sun, 14 Feb 2010, Edward Ned Harvey wrote: > Never mind. I have no interest in performance tests for Solaris 10. > The code is so old, that it does not represent current ZFS at all. Whatever.  Regardless of what you say, it does show: Since Richard abandoned Sun (in favor of gmail), he ha

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Thomas Burgess
> Whatever. Regardless of what you say, it does show: > > · Which is faster, raidz, or a stripe of mirrors? > > · How much does raidz2 hurt performance compared to raidz? > > · Which is faster, raidz, or hardware raid 5? > > · Is a mirror twice as fast as a single d

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Edward Ned Harvey
> > iozone -m -t 8 -T -O -r 128k -o -s 12G > > Actually, it seems that this is more than sufficient: > >iozone -m -t 8 -T -r 128k -o -s 4G Good news, cuz I kicked off the first test earlier today, and it seems like it will run till Wednesday. ;-) The first run, on a single disk, took 6.5

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Edward Ned Harvey
> Never mind. I have no interest in performance tests for Solaris 10. > The code is so old, that it does not represent current ZFS at all. Whatever. Regardless of what you say, it does show: · Which is faster, raidz, or a stripe of mirrors? · How much does raidz2 hurt perfor

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Richard Elling
On Feb 13, 2010, at 10:54 AM, Edward Ned Harvey wrote: > > Please add some raidz3 tests :-) We have little data on how raidz3 > > performs. > > Does this require a specific version of OS? I'm on Solaris 10 10/09, and > "man zpool" doesn't seem to say anything about raidz3 ... I haven't tried

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Bob Friesenhahn
On Sat, 13 Feb 2010, Edward Ned Harvey wrote: > kind as to collect samples of "iosnoop -Da" I would be eternally > grateful :-) I'm guessing iosnoop is an opensolaris thing?  Is there an equivalent for solaris? Iosnoop is part of the DTrace Toolkit by Brendan Gregg, which does work on Sol
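If it helps anyone reproducing Richard's request on Solaris 10, the DTrace Toolkit version of iosnoop can be run roughly as below; the flag meanings are from memory, so check the script's own usage text:

    # -D prints the elapsed time per I/O, -a prints the extra fields
    ./iosnoop -Da > /tmp/iosnoop.out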

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Edward Ned Harvey
> IMHO, sequential tests are a waste of time. With default configs, it > will be > difficult to separate the "raw" performance from prefetched > performance. > You might try disabling prefetch as an option. Let me clarify: Iozone does a nonsequential series of sequential tests, specifi

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Bob Friesenhahn
On Sat, 13 Feb 2010, Bob Friesenhahn wrote: Make sure to also test with a command like iozone -m -t 8 -T -O -r 128k -o -s 12G Actually, it seems that this is more than sufficient: iozone -m -t 8 -T -r 128k -o -s 4G since it creates a 4GB test file for each thread, with 8 threads. Bob --
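For readers not fluent in iozone flags, a rough gloss of that command (from memory; double-check against iozone -h before relying on it):

    # -m        use multiple internal buffers
    # -t 8      throughput mode with 8 parallel threads
    # -T        use POSIX threads for the throughput test
    # -r 128k   record size, matching the default ZFS recordsize
    # -o        open files O_SYNC, so every write is synchronous
    # -s 4G     file size per thread (8 threads, so roughly 32GB of data)
    iozone -m -t 8 -T -r 128k -o -s 4G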

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Bob Friesenhahn
On Sat, 13 Feb 2010, Edward Ned Harvey wrote: Will test, including the time to flush(), various record sizes inside file sizes up to 16G, sequential write and sequential read.  Not doing any mixed read/write requests.  Not doing any random read/write. iozone -Reab somefile.wks -g 17G -i 1 -i

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Richard Elling
Some thoughts below... On Feb 13, 2010, at 6:06 AM, Edward Ned Harvey wrote: > I have a new server, with 7 disks in it. I am performing benchmarks on it > before putting it into production, to substantiate claims I make, like > “striping mirrors is faster than raidz” and so on. Would anybody

Re: [zfs-discuss] ZFS performance issues over 2.5 years.

2009-12-16 Thread William D. Hathaway
Hi Yariv - It is hard to say without more data, but perhaps you might be a victim of "Stop looking and start ganging": http://bugs.opensolaris.org/view_bug.do?bug_id=6596237 It looks like this was fixed in S10u8, which was released last month. If you open a support ticket (or search for this

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-07 Thread John-Paul Drawneek
Final rant on this. I managed to get the box re-installed and the performance issue has vanished, so there is a performance bug in ZFS somewhere. I'm not sure whether to file a bug report, as I can't provide any more information now. -- This message posted from opensolaris.org

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-03 Thread Collier Minerich
Please unsubscribe me COLLIER -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of John-Paul Drawneek Sent: Thursday, September 03, 2009 2:13 AM To: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] zfs performance

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-03 Thread John-Paul Drawneek
So I have poked and prodded the disks and they both seem fine, and yet my rpool is still slow. Any ideas on what to do now? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-02 Thread John-Paul Drawneek
No joy. c1t0d0 89 MB/sec c1t1d0 89 MB/sec c2t0d0 123 MB/sec c2t1d0 123 MB/sec First two are the rpool -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread Bob Friesenhahn
On Tue, 1 Sep 2009, Jpd wrote: Thanks. Any idea on how to work out which one. I can't find smart in ips, so what other ways are there? You could try using a script like this one to find pokey disks: #!/bin/ksh # Date: Mon, 14 Apr 2008 15:49:41 -0700 # From: Jeff Bonwick # To: Henrik Hjort
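The script itself is cut off above, but the general idea (a sketch of mine, not the original) is simply to time a fixed-size raw read from each disk and see which one lags:

    #!/bin/ksh
    # Read 100MB straight off each disk's raw device and report the elapsed time.
    # Substitute your own device names from 'format' or 'zpool status'.
    for disk in c1t0d0 c1t1d0 c2t0d0 c2t1d0; do
        printf "%s: " $disk
        ptime dd if=/dev/rdsk/${disk}s0 of=/dev/null bs=1024k count=100 2>&1 | grep real
    done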

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread Bob Friesenhahn
On Tue, 1 Sep 2009, John-Paul Drawneek wrote: i did not migrate my disks. I now have 2 pools - rpool is at 60% as is still dog slow. Also scrubbing the rpool causes the box to lock up. This sounds like a hardware problem and not something related to fragmentation. Probably you have a slow/

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread John-Paul Drawneek
I did not migrate my disks. I now have 2 pools - rpool is at 60% and is still dog slow. Also, scrubbing the rpool causes the box to lock up. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-08-31 Thread Scott Meilicke
As I understand it, when you expand a pool, the data do not automatically migrate to the other disks. You will have to rewrite the data somehow, usually a backup/restore. -Scott -- This message posted from opensolaris.org ___ zfs-discuss mailing list
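As a concrete, if hedged, illustration of "rewrite the data somehow": a local zfs send/receive into a new dataset rewrites every block and lets the allocator spread it across all the vdevs now in the pool. The dataset names here are made up:

    zfs snapshot tank/data@rebalance
    zfs send tank/data@rebalance | zfs receive tank/data-new
    # once the copy is verified:
    zfs destroy -r tank/data
    zfs rename tank/data-new tank/data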

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-10-01 Thread William D. Hathaway
You might want to also try toggling the Nagle tcp setting to see if that helps with your workload: ndd -get /dev/tcp tcp_naglim_def (save that value, default is 4095) ndd -set /dev/tcp tcp_naglim_def 1 If no (or a negative) difference, set it back to the original value ndd -set /dev/tcp tcp_nagl

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread Richard Elling
gm_sjo wrote: > 2008/9/30 Jean Dion <[EMAIL PROTECTED]>: > >> If you want performance you do not put all your I/O across the same physical >> wire. Once again you cannot go faster than the physical wire can support >> (CAT5E, CAT6, fibre). No matter if it is layer 2 or not. Using VLAN on >> si

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread gm_sjo
2008/9/30 Jean Dion <[EMAIL PROTECTED]>: > If you want performance you do not put all your I/O across the same physical > wire. Once again you cannot go faster than the physical wire can support > (CAT5E, CAT6, fibre). No matter if it is layer 2 or not. Using VLAN on > single port you "share" the

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread Gary Mills
On Mon, Sep 29, 2008 at 06:01:18PM -0700, Jean Dion wrote: > > Legato client and server contains tuning parameters to avoid such small file > problems. Check your Legato buffer parameters. These buffer will use your > server memory as disk cache. Our backup person tells me that there are no

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread Gary Mills
On Tue, Sep 30, 2008 at 10:32:50AM -0700, William D. Hathaway wrote: > Gary - >Besides the network questions... Yes, I suppose I should see if traffic on the Iscsi network is hitting a limit of some sort. >What does your zpool status look like? Pretty simple: $ zpool status pool:

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread Jean Dion
A normal iSCSI setup splits network traffic at the physical layer, not the logical layer. That means separate physical ports, and often a separate physical PCI bridge chip if you can. That will be fine for small traffic, but we are talking about backup performance issues. The IP network and the number of small files are very often the b

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread gm_sjo
2008/9/30 Jean Dion <[EMAIL PROTECTED]>: > Simple. You cannot go faster than the slowest link. That is indeed correct, but what is the slowest link when using a Layer 2 VLAN? You made a broad statement that iSCSI 'requires' a dedicated, standalone network. I do not believe this is the case. > Any

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread William D. Hathaway
Gary - Besides the network questions... What does your zpool status look like? Are you using compression on the file systems? (Was single-threaded and fixed in s10u4 or equiv patches) -- This message posted from opensolaris.org ___ zfs-disc

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread Jean Dion
For Solaris internal debugging tools look here: http://opensolaris.org/os/community/advocacy/events/techdays/seattle/OS_SEA_POD_JMAURO.pdf ZFS specifics are available here: http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Jean

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread Gary Mills
On Mon, Sep 29, 2008 at 06:01:18PM -0700, Jean Dion wrote: > Do you have dedicated iSCSI ports from your server to your NetApp? Yes, it's a dedicated redundant gigabit network. > iSCSI requires dedicated network and not a shared network or even VLAN. > Backup cause large I/O that fill your ne

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread Jean Dion
Simple. You cannot go faster than the slowest link. VLANs share the bandwidth and do not provide dedicated bandwidth for each of them. That means if you have multiple VLANs coming out of the same wire on your server you do not have "n" times the bandwidth but only a fraction of i

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread gm_sjo
2008/9/30 Jean Dion <[EMAIL PROTECTED]>: > iSCSI requires dedicated network and not a shared network or even VLAN. > Backup cause large I/O that fill your network quickly. Like ans SAN today. Could you clarify why it is not suitable to use VLANs for iSCSI? __

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-29 Thread Jean Dion
Do you have dedicated iSCSI ports from your server to your NetApp? iSCSI requires a dedicated network, not a shared network or even a VLAN. Backups cause large I/O that fills your network quickly, like any SAN today. Backups are extremely demanding on hardware (CPU, memory, I/O ports, disks, etc.).

Re: [zfs-discuss] ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored

2008-06-23 Thread Richard Elling
Ralf Bertling wrote: > Hi list, > as this matter pops up every now and then in posts on this list I just > want to clarify that the real performance of RaidZ (in its current > implementation) is NOT anything that follows from raidz-style data > efficient redundancy or the copy-on-write design us

Re: [zfs-discuss] ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored

2008-06-22 Thread Bob Friesenhahn
On Sun, 22 Jun 2008, Will Murnane wrote: > >> Perhaps the solution is to install more RAM in the system so that the >> stripe is fully cached and ZFS does not need to go back to disk prior >> to writing an update. > I don't think the problem is that the stripe is falling out of cache, > but that it

Re: [zfs-discuss] ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored

2008-06-22 Thread Bob Friesenhahn
On Sun, 22 Jun 2008, Brian Hechinger wrote: > On Sun, Jun 22, 2008 at 10:37:34AM -0500, Bob Friesenhahn wrote: >> >> Perhaps the solution is to install more RAM in the system so that the >> stripe is fully cached and ZFS does not need to go back to disk prior >> to writing an update. The need to

Re: [zfs-discuss] ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored

2008-06-22 Thread Will Murnane
On Sun, Jun 22, 2008 at 15:37, Bob Friesenhahn <[EMAIL PROTECTED]> wrote: > Keep in mind that ZFS checksums all data, the checksum is stored in a > different block than the data, and that if ZFS were to checksum on the > stripe segment level, a lot more checksums would need to be stored. > All thes

Re: [zfs-discuss] ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored

2008-06-22 Thread Brian Hechinger
On Sun, Jun 22, 2008 at 10:37:34AM -0500, Bob Friesenhahn wrote: > > Perhaps the solution is to install more RAM in the system so that the > stripe is fully cached and ZFS does not need to go back to disk prior > to writing an update. The need to read prior to write is clearly what > kills ZFS

Re: [zfs-discuss] ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored

2008-06-22 Thread Bob Friesenhahn
On Sun, 22 Jun 2008, Ralf Bertling wrote: > > Now lets see if this really has to be this way (this implies no, doesn't it > ;-) > When reading small blocks of data (as opposed to streams discussed earlier) > the requested data resides on a single disk and thus reading it does not > require to se

Re: [zfs-discuss] ZFS performance lower than expected

2008-05-09 Thread Bart Van Assche
> > The disks in the SAN servers were indeed striped together with Linux LVM > > and exported as a single volume to ZFS. > > That is really going to hurt. In general, you're much better off > giving ZFS access to all the individual LUNs. The intermediate > LVM layer kills the concurrency that's

Re: [zfs-discuss] zfs performance so bad on my system

2008-04-29 Thread Chris Linton-Ford
> > For example I am trying to copy 1.4G file from my /var/mail to /d/d1 > > directory > > which is zfs file system on mypool2 pool. It takes 25 minutes to copy it, > > while > > copying it to tmp directory only takes few seconds. Whats wrong with this? > > Why > > its so long to copy that wil

Re: [zfs-discuss] zfs performance so bad on my system

2008-04-29 Thread Bob Friesenhahn
On Tue, 29 Apr 2008, Krzys wrote: > I am not sure, I had very ok system when I did originally build it and when I > did originally started to use zfs, but now its so horribly slow. I do believe > that amount of snaps that I have are causing it. This seems like a bold assumption without supportive

Re: [zfs-discuss] ZFS performance lower than expected

2008-03-26 Thread Jeff Bonwick
> The disks in the SAN servers were indeed striped together with Linux LVM > and exported as a single volume to ZFS. That is really going to hurt. In general, you're much better off giving ZFS access to all the individual LUNs. The intermediate LVM layer kills the concurrency that's native to ZF
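To make that concrete (a sketch with made-up device names, not anything from the original setup): present each LUN to the host individually and let ZFS handle striping and redundancy itself, instead of hiding the disks behind one LVM volume:

    # one aggregated LVM volume: ZFS sees a single device, so no concurrency
    #   zpool create tank c2t0d0
    # individual LUNs: ZFS can schedule I/O per device and self-heal from redundancy
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0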
