Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Neil Perrin
On 10/04/12 15:59, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Neil Perrin The ZIL code chains blocks together and these are allocated round robin among slogs or if they don'

Re: [zfs-discuss] Changing rpool device paths/drivers

2012-10-04 Thread Jerry Kemp
Thanks for the link. This was the YouTube link that I had. http://www.youtube.com/watch?v=1zw8V8g5eT0 Jerry On 10/4/12 08:07 PM, Jens Elkner wrote: > On Thu, Oct 04, 2012 at 07:57:34PM -0500, Jerry Kemp wrote: >> I remember a similar video that was up on YouTube as done by some of the >> S

Re: [zfs-discuss] Changing rpool device paths/drivers

2012-10-04 Thread Jens Elkner
On Thu, Oct 04, 2012 at 07:57:34PM -0500, Jerry Kemp wrote: > I remember a similar video that was up on YouTube as done by some of the > Sun guys employed in Germany. They built a big array from USB drives, > then exported the pool. Once the system was down, they re-arranged all > the drives in r

Re: [zfs-discuss] Changing rpool device paths/drivers

2012-10-04 Thread Jerry Kemp
It's been a while, but it seems like in the past, you would power the system down, boot from removable media, import your pool, then destroy or archive the /etc/zfs/zpool.cache, and possibly your /etc/path_to_inst file, power down again and re-arrange your hardware, then come up one final time with a

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Richard Elling
On Oct 4, 2012, at 1:33 PM, "Schweiss, Chip" wrote: > Again thanks for the input and clarifications. > > I would like to clarify the numbers I was talking about with ZIL performance > specs I was seeing talked about on other forums. Right now I'm getting > streaming performance of sync writ

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-05 1:44, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov There are also loops ;) # svcs -d filesystem/usr STATE STIMEFMRI online Aug_27

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Neil Perrin > > The ZIL code chains blocks together and these are allocated round robin > among slogs or > if they don't exist then the main pool devices. So, if somebody is doing sync writes

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Schweiss, Chip > > If I get to build this system, it will house a decent-size VMware > NFS storage w/ 200+ VMs, which will be dual-connected via 10GbE. This is all > medical imaging resear

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Jim Klimov > > There are also loops ;) > > # svcs -d filesystem/usr > STATE STIMEFMRI > online Aug_27 svc:/system/scheduler:default > ... > > # svcs -d scheduler > STAT

Re: [zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Schweiss, Chip
Sounds similar to the problem discussed here: http://blogs.everycity.co.uk/alasdair/2011/05/adjusting-drive-timeouts-with-mdb-on-solaris-or-openindiana/ Check 'iostat -xn' and see if one or more disks is stuck at 100%. -Chip On Thu, Oct 4, 2012 at 3:42 PM, Cindy Swearingen < cindy.swearin...@or
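The 'iostat -xn' check suggested above can be scripted; a minimal sketch, assuming illumos/Solaris iostat output where %b (percent busy) is the next-to-last column. Canned sample output stands in for a live run so the snippet is self-contained; on a real system you would pipe `iostat -xn 5 2` into the same awk filter.

```shell
# Flag any device whose %b (percent busy, next-to-last column of
# 'iostat -xn') is pinned at 100 -- a common symptom of a dying disk
# retrying I/O until the driver timeout expires.
sample_iostat() {
cat <<'EOF'
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.5    3.1   12.0  130.2  0.0  0.1    0.0    2.1   0   1 c0t0d0
    0.1    0.2    1.0    4.0  9.0 35.0  900.1 4000.9  99 100 c0t1d0
EOF
}

# Skip the header (NR > 1); print the device name ($NF) when %b == 100.
saturated=$(sample_iostat | awk 'NR > 1 && $(NF-1) == 100 { print $NF }')
echo "saturated: $saturated"
```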

Re: [zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Cindy Swearingen
Hi Charles, Yes, a faulty or failing disk can kill performance. I would see if FMA has generated any faults: # fmadm faulty Or, if any of the devices are collecting errors: # fmdump -eV | more Thanks, Cindy On 10/04/12 11:22, Knipe, Charles wrote: Hey guys, I’ve run into another ZFS perf

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Schweiss, Chip
Again thanks for the input and clarifications. I would like to clarify the numbers I was talking about with ZIL performance specs I was seeing talked about on other forums. Right now I'm getting streaming performance of sync writes at about 1 Gbit/s. My target is closer to 10 Gbit/s. If I get

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Richard Elling
Thanks Neil, we always appreciate your comments on ZIL implementation. One additional comment below... On Oct 4, 2012, at 8:31 AM, Neil Perrin wrote: > On 10/04/12 05:30, Schweiss, Chip wrote: >> >> Thanks for all the input. It seems information on the performance of the >> ZIL is sparse and

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
On 10/4/2012 1:56 PM, Jim Klimov wrote: What if the backup host is down (i.e. the ex-master after the failover)? Will your failed-over pool accept no writes until both storage machines are working? What if internetworking between these two heads has a glitch, and as a result both of them become

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-04 21:19, Dan Swartzendruber writes: Sorry to be dense here, but I'm not getting how this is a cluster setup, or what your point wrt authoritative vs replication meant. In the scenario I was looking at, one host is providing access to clients - on the backup host, no services are provide

[zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Knipe, Charles
Hey guys, I've run into another ZFS performance disaster that I was hoping someone might be able to give me some pointers on resolving. Without any significant change in workload, write performance has dropped off dramatically. Based on previous experience, we tried deleting some files to free

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
On 10/4/2012 12:19 PM, Richard Elling wrote: On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber wrote: On 10/4/2012 11:48 AM, Richard Elling wrote: On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote: This whole thread has been fascinat

Re: [zfs-discuss] removing upgrade notice from 'zpool status -x'

2012-10-04 Thread Freddie Cash
On Thu, Oct 4, 2012 at 9:45 AM, Jim Klimov wrote: > 2012-10-04 20:36, Freddie Cash wrote: >> >> On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling >> wrote: >>> >>> On Oct 4, 2012, at 8:58 AM, Jan Owoc wrote: >>> The return code for zpool is ambiguous. Do not rely upon it to determine >>> if the poo

Re: [zfs-discuss] removing upgrade notice from 'zpool status -x'

2012-10-04 Thread Jim Klimov
2012-10-04 20:36, Freddie Cash wrote: On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling wrote: On Oct 4, 2012, at 8:58 AM, Jan Owoc wrote: The return code for zpool is ambiguous. Do not rely upon it to determine if the pool is healthy. You should check the health property instead. Huh. Learn s

Re: [zfs-discuss] removing upgrade notice from 'zpool status -x'

2012-10-04 Thread Freddie Cash
On Thu, Oct 4, 2012 at 9:14 AM, Richard Elling wrote: > On Oct 4, 2012, at 8:58 AM, Jan Owoc wrote: > The return code for zpool is ambiguous. Do not rely upon it to determine > if the pool is healthy. You should check the health property instead. Huh. Learn something new everyday. You just sim
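The advice in this thread (gate on the pool's health property rather than zpool's ambiguous exit code) can be sketched as follows. The pool name 'tank' is a placeholder, and the zpool call is simulated with a fixed value so the snippet runs anywhere; on a live system you would substitute the commented command.

```shell
# Sketch: check the pool 'health' property instead of relying on the
# zpool exit status.  On a live system:
#   health=$(zpool list -H -o health tank)
# A simulated value stands in here so the snippet is self-contained:
health="ONLINE"

case "$health" in
  ONLINE)          state="healthy" ;;
  DEGRADED)        state="degraded (still serving I/O)" ;;
  FAULTED|UNAVAIL) state="unusable" ;;
  *)               state="unexpected: $health" ;;
esac
echo "pool state: $state"
```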

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-04 19:48, Richard Elling wrote: 2. CARP. This exists as part of the OHAC project. -- richard Wikipedia says CARP is the open-source equivalent of VRRP. And we have that in OI, don't we? Would it suffice? # pkg info -r vrrp Name: system/network/routing/vrrp Summary:

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Richard Elling
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber wrote: > On 10/4/2012 11:48 AM, Richard Elling wrote: >> >> On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote: >> >>> >>> This whole thread has been fascinating. I really wish we (OI) had the two >>> following things that freebsd supports: >

Re: [zfs-discuss] removing upgrade notice from 'zpool status -x'

2012-10-04 Thread Richard Elling
On Oct 4, 2012, at 8:58 AM, Jan Owoc wrote: > Hi, > > I have a machine whose zpools are at version 28, and I would like to > keep them at that version for portability between OSes. I understand > that 'zpool status' asks me to upgrade, but so does 'zpool status -x' > (the man page says it should

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
On 10/4/2012 11:48 AM, Richard Elling wrote: On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote: This whole thread has been fascinating. I really wish we (OI) had the two following things that FreeBSD supports: 1. HAST - provides a block-level driver that mir

[zfs-discuss] removing upgrade notice from 'zpool status -x'

2012-10-04 Thread Jan Owoc
Hi, I have a machine whose zpools are at version 28, and I would like to keep them at that version for portability between OSes. I understand that 'zpool status' asks me to upgrade, but so does 'zpool status -x' (the man page says it should only report errors or unavailability). This is a problem

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Richard Elling
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote: > > This whole thread has been fascinating. I really wish we (OI) had the two > following things that FreeBSD supports: > > 1. HAST - provides a block-level driver that mirrors a local disk to a > network "disk" presenting the result as a

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
Forgot to mention: my interest in doing this was so I could have my ESXi host point at a CARP-backed IP address for the datastore, and I would have no single point of failure at the storage level.

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
This whole thread has been fascinating. I really wish we (OI) had the two following things that FreeBSD supports: 1. HAST - provides a block-level driver that mirrors a local disk to a network "disk", presenting the result as a block device using the GEOM API. 2. CARP. I have a prototype w

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Neil Perrin
On 10/04/12 05:30, Schweiss, Chip wrote: Thanks for all the input. It seems information on the performance of the ZIL is sparse and scattered. I've spent significant time researching this the past day. I'll summarize what I've found. Please correct me if I'm w

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-04 16:06, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: Jim Klimov [mailto:jimkli...@cos.ru] Well, on my system that I complained a lot about last year, I've had a physical pool, a zvol in it, shared and imported over iscsi on loopback (or sometimes initiated from

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Andrew Gabriel
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Schweiss, Chip How can I determine for sure that my ZIL is my bottleneck? If it is the bottleneck, is it possible to keep adding m

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: Jim Klimov [mailto:jimkli...@cos.ru] > > Well, on my system that I complained a lot about last year, > I've had a physical pool, a zvol in it, shared and imported > over iscsi on loopback (or sometimes initiated from another > box), and another pool inside that zvol ultimately. Ick. And

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Schweiss, Chip > > - The ZIL can have any number of SSDs attached, either mirrored or > individually. ZFS will stripe across these in a raid0 or raid10 fashion > depending on how you configure.

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: Andrew Gabriel [mailto:andrew.gabr...@cucumber.demon.co.uk] > > > Temporarily set sync=disabled > > Or, depending on your application, leave it that way permanently. I know, > for the work I do, most systems I support at most locations have > sync=disabled. It all depends on the workload
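The sync property discussed above is per-dataset, so it can be disabled selectively rather than pool-wide. A hedged sketch of the relevant commands; the dataset name 'tank/vmstore' is hypothetical, and these require a live pool, so they are shown as an illustrative fragment only. Bear in mind that sync=disabled means acknowledged writes can be lost (up to a few seconds' worth) on power failure or crash.

```shell
# Illustrative only -- 'tank/vmstore' is a hypothetical dataset name.
# Disable synchronous write semantics for one dataset (writes are
# acknowledged before reaching stable storage):
zfs set sync=disabled tank/vmstore

# Inspect the effective value and where it is inherited from:
zfs get sync tank/vmstore

# Revert to the default, POSIX-compliant behaviour:
zfs set sync=standard tank/vmstore
```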

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Schweiss, Chip
Thanks for all the input. It seems information on the performance of the ZIL is sparse and scattered. I've spent significant time researching this the past day. I'll summarize what I've found. Please correct me if I'm wrong. - The ZIL can have any number of SSDs attached, either mirrored or
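The slog layouts summarized in this thread map onto zpool syntax roughly as follows. Pool and device names are hypothetical, and the commands require a live pool, so this is an illustrative fragment rather than something to paste verbatim.

```shell
# Hypothetical pool/device names, for illustration only.

# Two independent log devices: ZIL blocks are allocated among them
# round-robin (the "raid0"-like case described above):
zpool add tank log c2t0d0 c2t1d0

# Or a mirrored log device, protecting in-flight log blocks against a
# single slog failure (the "raid10"-like case when using several pairs):
zpool add tank log mirror c2t0d0 c2t1d0

# Verify the resulting layout:
zpool status tank
```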

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-03 22:03, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: If you are going to be an initiator only, then it makes sense for svc:/network/iscsi/initiator to be required by svc:/system/filesystem/local If you are going to be a target only, then it makes sense for svc:/syst