Re: [zfs-discuss] vm server storage mirror

2012-10-20 Thread Timothy Coalson
On Sat, Oct 20, 2012 at 7:39 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) < opensolarisisdeadlongliveopensola...@nedharvey.com> wrote: > > From: Timothy Coalson [mailto:tsc...@mst.edu] > > Sent: Friday, October 19, 2012 9:43 PM > > > > A shot in the dark here, but perhaps one of th

Re: [zfs-discuss] vm server storage mirror

2012-10-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: Timothy Coalson [mailto:tsc...@mst.edu] > Sent: Friday, October 19, 2012 9:43 PM > > A shot in the dark here, but perhaps one of the disks involved is taking a > long > time to return from reads, but is returning eventually, so ZFS doesn't notice > the problem?  Watching 'iostat -x' for b
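A minimal sketch of the watch suggested there, assuming an illumos/Solaris host; the interval and column names are generic, not taken from the thread:

  # watch per-device latency at 1-second intervals
  iostat -xn 1
  # look for one disk whose asvc_t (active service time) and %b numbers sit far
  # above its mirror partner's; a drive that is slow but still answering will
  # stand out here without ever tripping ZFS's failure handling.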

Re: [zfs-discuss] vm server storage mirror

2012-10-19 Thread Timothy Coalson
> Several times, I destroyed the pool and recreated it completely from > backup. zfs send and zfs receive both work fine. But strangely - when I > launch a VM, the IO grinds to a halt, and I'm forced to powercycle > (usually) the host. > A shot in the dark here, but perhaps one of the disks invo

Re: [zfs-discuss] vm server storage mirror

2012-10-19 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Yikes, I'm back at it again, and so frustrated. For about 2-3 weeks now, I had the iscsi mirror configuration in production, as previously described. Two disks on system 1 mirror against two disks on system 2, everything done via iscsi, so you could zpool export on machine 1, and then zpoo
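For readers following along, a rough sketch of this kind of cross-server iSCSI mirror, assuming COMSTAR on both illumos hosts; the zvol, address, GUID and device names are invented placeholders, not the poster's actual configuration:

  # -- on each head: carve out a backing device and export it via COMSTAR --
  zfs create -V 500G rpool/vmdisk0                  # or use a whole physical disk
  svcadm enable -r svc:/network/iscsi/target:default
  stmfadm create-lu /dev/zvol/rdsk/rpool/vmdisk0    # prints the LU GUID
  stmfadm add-view 600144F0AAAA...                  # GUID from the previous step
  itadm create-target

  # -- on the head that will import the pool: reach the peer's target --
  svcadm enable svc:/network/iscsi/initiator
  iscsiadm modify discovery --sendtargets enable
  iscsiadm add discovery-address 192.168.1.2        # peer's address (made up)
  devfsadm -i iscsi

  # -- mirror a local disk against the remote LUN (device names made up) --
  zpool create vmpool mirror c0t1d0 c0t600144F0AAAAd0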

Re: [zfs-discuss] vm server storage mirror

2012-10-06 Thread Jim Klimov
2012-10-06 14:49, Jim Klimov wrote: $ cat /lib/svc/method/iscsi-mount-dcpool -- #!/bin/sh DELAY=600 case "$1" in start) if [ -f /etc/zfs/delay.dcpool ]; then D="`head -1 /etc/zfs/delay.dcpool`" [ "$D" -gt 0 ] 2>/dev
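The preview above is cut off by the archive. As a hedged sketch only, not Jim's actual script, a minimal /bin/sh method that waits a configurable delay and then imports an iSCSI-backed pool might look like this; the pool name and delay file follow the fragment shown:

  #!/bin/sh
  # sketch of an SMF method for importing an iSCSI-backed pool after a delay
  DELAY=600
  case "$1" in
  start)
      if [ -f /etc/zfs/delay.dcpool ]; then
          D="`head -1 /etc/zfs/delay.dcpool`"
          [ "$D" -gt 0 ] 2>/dev/null && DELAY="$D"
      fi
      # import in the background so the service does not block the boot
      ( sleep "$DELAY"; zpool import dcpool ) &
      ;;
  stop)
      zpool export dcpool
      ;;
  *)
      echo "Usage: $0 {start|stop}"
      exit 1
      ;;
  esac
  exit 0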

Re: [zfs-discuss] vm server storage mirror

2012-10-06 Thread Jim Klimov
Hello Ed and all, Just for the sake of completeness, I dug out my implementation of SMF services for iscsi-imported pools. As I said, it is kinda ugly due to hardcoded things which should rather be in SMF properties or at least in config files, but this was a single-solution POC. Here is the clie

Re: [zfs-discuss] vm server storage mirror

2012-10-06 Thread Jim Klimov
2012-10-05 22:53, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: http://nedharvey.com/blog/?p=105 Nice writeup, thanks. Perhaps you could also post/link it on OI wiki so the community can find it easier? A few comments: 1) For readability I'd use "...| awk '{print $1}'" inst
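A small illustration of that readability point, with a hypothetical pipeline (the blog's actual command is not shown in this preview):

  zpool list -H | awk '{print $1}'    # prints the first field; arguably clearer than...
  zpool list -H | cut -f1             # ...the equivalent cut form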

Re: [zfs-discuss] vm server storage mirror

2012-10-05 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > I must be missing something - I don't see anything above that indicates any > required vs optional dependencies. Ok, I see that now (thanks to the SMF FAQ). A dependenc
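As a quick, generic way to see those groupings on a live system (not taken from the thread):

  # each dependency line includes its grouping (require_all, optional_all, ...)
  svcs -l svc:/system/filesystem/local:default
  # or dump the raw dependency property groups, as done elsewhere in this thread
  svcprop svc:/system/filesystem/local:default | grep grouping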

Re: [zfs-discuss] vm server storage mirror

2012-10-05 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Jim Klimov > > Well, it seems just like a peculiar effect of required vs. optional > dependencies. The loop is in the default installation. Details: > > # svcprop filesystem/usr | grep schedul

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-05 1:44, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov There are also loops ;) # svcs -d filesystem/usr STATE STIME FMRI online Aug_27

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Jim Klimov > > There are also loops ;) > > # svcs -d filesystem/usr > STATE STIME FMRI > online Aug_27 svc:/system/scheduler:default > ... > > # svcs -d scheduler > STAT
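For anyone reproducing this, the pair of commands below (generic, not from the thread) walks the dependency graph in both directions; chasing the -d output recursively is how a cycle like the usr/scheduler one quoted above becomes visible:

  svcs -d svc:/system/filesystem/usr:default    # what the service depends on
  svcs -D svc:/system/scheduler:default         # what depends on it (the reverse direction)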

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
On 10/4/2012 1:56 PM, Jim Klimov wrote: What if the backup host is down (i.e. the ex-master after the failover)? Will your failed-over pool accept no writes until both storage machines are working? What if internetworking between these two heads has a glitch, and as a result both of them become

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-04 21:19, Dan Swartzendruber writes: Sorry to be dense here, but I'm not getting how this is a cluster setup, or what your point wrt authoritative vs replication meant. In the scenario I was looking at, one host is providing access to clients - on the backup host, no services are provide

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
On 10/4/2012 12:19 PM, Richard Elling wrote: On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber wrote: On 10/4/2012 11:48 AM, Richard Elling wrote: On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote: This whole thread has been fascinat

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-04 19:48, Richard Elling wrote: 2. CARP. This exists as part of the OHAC project. -- richard Wikipedia says CARP is the open-source equivalent of VRRP. And we have that in OI, don't we? Would it suffice? # pkg info -r vrrp Name: system/network/routing/vrrp Summary:
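If VRRP is what's available, a minimal setup would be roughly along these lines. This is an untested sketch based on the vrrpadm(1M) man page, with invented link and router names; the exact dladm/ipadm steps for the floating address vary by release:

  pkg install system/network/routing/vrrp
  # create a virtual router instance on link net0, virtual router ID 1
  vrrpadm create-router -V 1 -l net0 -A inet vrrp1
  # the floating address is then configured on the VRRP-aware VNIC that carries
  # it; see vrrpadm(1M) for the accompanying dladm/ipadm commands.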

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Richard Elling
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber wrote: > On 10/4/2012 11:48 AM, Richard Elling wrote: >> >> On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote: >> >>> >>> This whole thread has been fascinating. I really wish we (OI) had the two >>> following things that freebsd supports: >

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
On 10/4/2012 11:48 AM, Richard Elling wrote: On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote: This whole thread has been fascinating. I really wish we (OI) had the two following things that freebsd supports: 1. HAST - provides a block-level driver that mir

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Richard Elling
On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote: > > This whole thread has been fascinating. I really wish we (OI) had the two > following things that freebsd supports: > > 1. HAST - provides a block-level driver that mirrors a local disk to a > network "disk" presenting the result as a

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
Forgot to mention: my interest in doing this was so I could have my ESXi host point at a CARP-backed IP address for the datastore, and I would have no single point of failure at the storage level.

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Dan Swartzendruber
This whole thread has been fascinating. I really wish we (OI) had the two following things that freebsd supports: 1. HAST - provides a block-level driver that mirrors a local disk to a network "disk" presenting the result as a block device using the GEOM API. 2. CARP. I have a prototype w
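For comparison, the FreeBSD side of that wish list is small. A hedged sketch of the CARP part only; the interface, password, addresses and advskew are made up, and the syntax differs between FreeBSD releases (this is the 9.x-era carp(4) pseudo-interface style):

  kldload carp
  ifconfig carp0 create
  ifconfig carp0 vhid 1 advskew 100 pass mysecret 192.0.2.50/24
  # the peer uses the same vhid and pass with a different advskew; whichever
  # host has the lower advskew becomes MASTER and answers for 192.0.2.50.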

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-04 16:06, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: Jim Klimov [mailto:jimkli...@cos.ru] Well, on my system that I complained a lot about last year, I've had a physical pool, a zvol in it, shared and imported over iscsi on loopback (or sometimes initiated from

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: Jim Klimov [mailto:jimkli...@cos.ru] > > Well, on my system that I complained a lot about last year, > I've had a physical pool, a zvol in it, shared and imported > over iscsi on loopback (or sometimes initiated from another > box), and another pool inside that zvol ultimately. Ick. And

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Jim Klimov
2012-10-03 22:03, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: If you are going to be an initiator only, then it makes sense for svc:/network/iscsi/initiator to be required by svc:/system/filesystem/local. If you are going to be a target only, then it makes sense for svc:/syst
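A concrete but generic way to express the initiator-only case from that quote, adding a dependency so local filesystems wait for the iSCSI initiator (the property group name is invented; optional_all is used here so boot does not hang if the initiator service is absent, change it to require_all for a hard requirement):

  svccfg -s svc:/system/filesystem/local <<'EOF'
  addpg iscsi-initiator dependency
  setprop iscsi-initiator/grouping = astring: optional_all
  setprop iscsi-initiator/restart_on = astring: none
  setprop iscsi-initiator/type = astring: service
  setprop iscsi-initiator/entities = fmri: svc:/network/iscsi/initiator
  EOF
  svcadm refresh svc:/system/filesystem/local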

Re: [zfs-discuss] vm server storage mirror

2012-10-03 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > > it doesn't work right - It turns out, iscsi > devices (And I presume SAS devices) are not removable storage. That > means, if the device goes offline and comes back onlin

Re: [zfs-discuss] vm server storage mirror

2012-10-01 Thread Jim Klimov
2012-10-01 17:07, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: Well, now I know why it's stupid. Cuz it doesn't work right - It turns out, iscsi devices (And I presume SAS devices) are not removable storage. That means, if the device goes offline and comes back online again
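Since a non-removable device is not reopened automatically, a hedged example of nudging the pool by hand once the session is back; pool and device names are made-up placeholders:

  zpool online vmpool c0t600144F0AAAAd0   # reattach the returned device
  zpool clear vmpool                      # clear the accumulated errors
  zpool status vmpool                     # confirm the resilver kicks off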

Re: [zfs-discuss] vm server storage mirror

2012-10-01 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Jim Klimov > > > If they are close enough for "crossover cable" where the cable is UTP, > > then they are > > close enough for SAS. > > Pardon my ignorance, can a system easily serve its local

Re: [zfs-discuss] vm server storage mirror

2012-10-01 Thread Jim Klimov
2012-09-27 3:11, Richard Elling wrote: Option 2: At present, both systems are using local mirroring, 3 mirror pairs of 6 disks. I could break these mirrors, and export one side over to the other system... And vice versa. So neither server will be doing local mirroring; they will both be mirroring acr

Re: [zfs-discuss] vm server storage mirror

2012-09-27 Thread Tim Cook
On Thu, Sep 27, 2012 at 12:48 PM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) < opensolarisisdeadlongliveopensola...@nedharvey.com> wrote: > > From: Tim Cook [mailto:t...@cook.ms] > > Sent: Wednesday, September 26, 2012 3:45 PM > > > > I would suggest if you're doing a crossover betwe

Re: [zfs-discuss] vm server storage mirror

2012-09-27 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: Tim Cook [mailto:t...@cook.ms] > Sent: Wednesday, September 26, 2012 3:45 PM > > I would suggest if you're doing a crossover between systems, you use > infiniband rather than ethernet.  You can eBay a 40Gb IB card for under > $300.  Quite frankly the performance issues should become almos

Re: [zfs-discuss] vm server storage mirror

2012-09-26 Thread Richard Elling
On Sep 26, 2012, at 10:54 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: > Here's another one. > > Two identical servers are sitting side by side. They could be connected to > each other via anything (presently using crossover ethernet cable.) And > obviously they bot

Re: [zfs-discuss] vm server storage mirror

2012-09-26 Thread Tim Cook
On Wed, Sep 26, 2012 at 12:54 PM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) < opensolarisisdeadlongliveopensola...@nedharvey.com> wrote: > Here's another one. > > Two identical servers are sitting side by side. They could be connected > to each other via anything (pr

Re: [zfs-discuss] vm server storage mirror

2012-09-26 Thread matthew patton
"head units" crash or do weird things, but disks persist. There are a couple of HA head-unit solutions out there but most of them have their own separate storage and they effectively just send transaction groups to each other. The other way is to connect 2 nodes to an external SAS/FC chassis. cr

Re: [zfs-discuss] vm server storage mirror

2012-09-26 Thread Freddie Cash
If you're willing to try FreeBSD, there's HAST (aka high availability storage) for this very purpose. You use hast to create mirror pairs using 1 disk from each box, thus creating /dev/hast/* nodes. Then you use those to create the zpool on the 'primary' box. All writes to the pool on the primar
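A rough sketch of that flow on FreeBSD; the resource, host and device names and addresses are invented, and the details live in hast.conf(5) and hastctl(8):

  # /etc/hast.conf, identical on both boxes
  resource disk0 {
          on nodeA {
                  local /dev/da1
                  remote 192.0.2.2
          }
          on nodeB {
                  local /dev/da1
                  remote 192.0.2.1
          }
  }

  # on both boxes: initialize the resource metadata and start hastd
  hastctl create disk0
  service hastd onestart
  # on the primary only: take the active role, then build the pool on /dev/hast/*
  hastctl role primary disk0
  zpool create tank /dev/hast/disk0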