Re: [zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Jorgen Lundman
Rob Logan wrote:
> you meant to type zpool import -d /var/tmp grow
Bah - of course, I can not just expect zpool to know what random directory to search. You, Sir, are a genius. Works like a charm, and thank you. Lund -- Jorgen Lundman | Unix Administrator | +81 (0)3-5456-2687 ex

Re: [zfs-discuss] RAIDZ2: only half the read speed?

2009-05-22 Thread Rob Logan
> How does one look at the disk traffic?
iostat -xce 1
> OpenSolaris, raidz2 across 8 7200 RPM SATA disks:
> 17179869184 bytes (17 GB) copied, 127.308 s, 135 MB/s
> OpenSolaris, "flat" pool across the same 8 disks:
> 17179869184 bytes (17 GB) copied, 61.328 s, 280 MB/s
one raidz2 set of 8 disk
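The dd figures quoted above can be sanity-checked directly; dd reports decimal megabytes (1 MB = 1,000,000 bytes):

```shell
# Recompute dd's MB/s figures from bytes and elapsed seconds.
bytes=17179869184
awk -v b="$bytes" 'BEGIN { printf "raidz2: %.0f MB/s\n", b / 127.308 / 1e6 }'  # prints: raidz2: 135 MB/s
awk -v b="$bytes" 'BEGIN { printf "flat:   %.0f MB/s\n", b / 61.328 / 1e6 }'   # prints: flat:   280 MB/s
```

Both reported rates match the byte counts and times, so the 50% read-speed gap between the raidz2 and flat layouts is real, not a measurement artifact.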

Re: [zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Rob Logan
> zpool offline grow /var/tmp/disk01
> zpool replace grow /var/tmp/disk01 /var/tmp/bigger_disk01
one doesn't need to offline before the replace, so as long as you have one free disk interface one can cfgadm -c configure sata0/6 each disk as you go... or you can offline and cfgadm each disk in the
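The full grow-by-replacement cycle discussed in this thread might look like the sketch below, using the file-backed vdevs and the pool name grow from the thread (run as root on a ZFS system; at this build there is no autoexpand property, so the pool only picks up the larger size after an export and re-import):

```shell
# Replace each small file vdev with a larger one, one at a time,
# letting each resilver finish before starting the next.
zpool replace grow /var/tmp/disk01 /var/tmp/bigger_disk01
zpool status grow            # wait until resilver completes
# ...repeat for the remaining disks...

# Re-import so the pool notices the larger vdevs; -d tells
# zpool import which directory to search for file vdevs.
zpool export grow
zpool import -d /var/tmp grow
zpool list grow              # should now report the larger size
```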

[zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Jorgen Lundman
What is the current answer regarding replacing HDDs in a raidz, one at a time, with a larger HDD? The Best-Practises-Wiki seems to suggest it is possible (but perhaps just for mirror, not raidz?) I am currently running osol-b114. I did this test with data files to simulate this situation;

Re: [zfs-discuss] Errors on mirrored drive

2009-05-22 Thread Toby Thain
On 22-May-09, at 5:24 PM, Frank Middleton wrote: There have been a number of threads here on the reliability of ZFS in the face of flaky hardware. ZFS certainly runs well on decent (e.g., SPARC) hardware, but isn't it reasonable to expect it to run well on something less well engineered?

Re: [zfs-discuss] zfs reliability under xen

2009-05-22 Thread John Levon
On Sun, May 17, 2009 at 02:16:01PM +0300, Ahmed Kamal wrote:
> I am wondering whether the reliability of solaris/zfs is still guaranteed if
> I will be running zfs not directly over real hardware, but over Xen
> virtualization? The plan is to assign physical raw access to the disks to
> the xen g

Re: [zfs-discuss] zfs reliability under xen

2009-05-22 Thread Joseph Mocker
Blake wrote:
> On Fri, May 22, 2009 at 2:44 PM, Ahmed Kamal wrote:
>> However, if you need to decide, whether to use Xen, test your setup
>> before going into production and ask your boss, whether he can live
>> with innovat

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Nicolas Williams
On Fri, May 22, 2009 at 04:40:43PM -0600, Eric D. Mudama wrote:
> As another datapoint, the 111a opensolaris preview got me ~29MB/s
> through an SSH tunnel with no tuning on a 40GB dataset.
>
> Sender was a Core2Duo E4500 reading from SSDs and receiver was a Xeon
> E5520 writing to a few mirrored

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Eric D. Mudama
On Fri, May 22 at 11:05, Robert Milkowski wrote:
> btw: caching data for zfs send and zfs recv on the other side could
> make it even faster. you could use something like mbuffer with buffers
> of 1-2GB for example.
As another datapoint, the 111a opensolaris preview got me ~29MB/s through an SSH t

Re: [zfs-discuss] replicating a root pool

2009-05-22 Thread Ian Collins
Lori Alt wrote:
> On 05/21/09 22:40, Ian Collins wrote:
>> Mark J Musante wrote:
>>> On Thu, 21 May 2009, Ian Collins wrote:
>>>> I'm trying to use zfs send/receive to replicate the root pool of a
>>>> system and I can't think of a way to stop the received copy attempting
>>>> to mount the filesystem over the root

Re: [zfs-discuss] Errors on mirrored drive

2009-05-22 Thread Frank Middleton
There have been a number of threads here on the reliability of ZFS in the face of flaky hardware. ZFS certainly runs well on decent (e.g., SPARC) hardware, but isn't it reasonable to expect it to run well on something less well engineered? I am a real ZFS fan, and I'd hate to see folks trash it be

[zfs-discuss] eon or nexentacore or opensolaris

2009-05-22 Thread Joe S
I don't want to run SXCE anymore. I'm trying to decide between:
* EON ZFS NAS: http://eonstorage.blogspot.com/
* NexentaCore Platform (v2.0 RC3): http://www.nexenta.org/os/NexentaCore
* OpenSolaris 2009.06 (when it's released)
My needs are:
* Easy package management
* Easy upgrades
*

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-22 Thread Miles Nordin
> "mg" == Mike Gerdts writes: mg> A rather interesting putback just happened... yeah, it is good when you can manually offline the same set of devices as the set of those which are allowed to fail without invoking the pool's failmode. I guess the putback means one less such difference.

Re: [zfs-discuss] zfs reliability under xen

2009-05-22 Thread Blake
On Fri, May 22, 2009 at 2:44 PM, Ahmed Kamal <email.ahmedka...@googlemail.com> wrote:
>> However, if you need to decide, whether to use Xen, test your setup
>> before going into production and ask your boss, whether he can live with
>> innovative ... solutions ;-)
>
> Thanks a lot for the infor

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-22 Thread Mike Gerdts
On Tue, May 19, 2009 at 2:16 PM, Paul B. Henson wrote:
> I was checking with Sun support regarding this issue, and they say "The CR
> currently has a high priority and the fix is understood. However, there is
> no eta, workaround, nor IDR."
>
> If it's a high priority, and it's known how to fix

Re: [zfs-discuss] zfs reliability under xen

2009-05-22 Thread Ahmed Kamal
> However, if you need to decide, whether to use Xen, test your setup
> before going into production and ask your boss, whether he can live with
> innovative ... solutions ;-)
Thanks a lot for the informative reply. It has definitely been helpful. I am however interested in the reliability of r

Re: [zfs-discuss] RAIDZ2: only half the read speed?

2009-05-22 Thread David Abrahams
on Fri May 22 2009, Richard Elling wrote:
> David Abrahams wrote:
>> http://groups.google.com/group/zfs-fuse/msg/5fac5eaf2c7fccb8 shows some
>> (admittedly very crude) tests I did with OpenSolaris 0906, with some
>> very surprising performance results. In particular, read speed on an
>> 8-disk

Re: [zfs-discuss] replicating a root pool

2009-05-22 Thread Lori Alt
On 05/21/09 22:40, Ian Collins wrote:
> Mark J Musante wrote:
>> On Thu, 21 May 2009, Ian Collins wrote:
>>> I'm trying to use zfs send/receive to replicate the root pool of a
>>> system and I can't think of a way to stop the received copy attempting
>>> to mount the filesystem over the root of the destinatio
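One common way to keep a received root-pool copy from mounting over the destination's own root is to receive it unmounted with the -u flag and disable auto-mounting before it is ever mounted. A sketch, with invented pool and dataset names (rpool/ROOT/be, backup_pool):

```shell
# -u: receive the stream without mounting the result;
# -d: derive dataset names under the target pool.
zfs send -R rpool/ROOT/be@replica | zfs receive -u -d backup_pool

# Keep the received copy from auto-mounting over / later.
zfs set canmount=noauto backup_pool/ROOT/be
```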

Re: [zfs-discuss] LU snv_93 - snv_101a (ZFS - ZFS )

2009-05-22 Thread Nandini Mocherla
Does this have to be done from booting into failsafe mode of the new BE?
Nandini

Mark J Musante wrote:
> On Thu, 21 May 2009, Nandini Mocherla wrote:
>> Then I booted into failsafe mode of 101a and then tried to run the
>> following command as given in luactivate output.
> Yeah, that's a known bug in the

Re: [zfs-discuss] RAIDZ2: only half the read speed?

2009-05-22 Thread Richard Elling
David Abrahams wrote:
> http://groups.google.com/group/zfs-fuse/msg/5fac5eaf2c7fccb8 shows some
> (admittedly very crude) tests I did with OpenSolaris 0906, with some
> very surprising performance results. In particular, read speed on an
> 8-disk pool seemed to drop by 50% when I set up the pool to use

Re: [zfs-discuss] replicating a root pool

2009-05-22 Thread Blake
On Fri, May 22, 2009 at 12:40 AM, Ian Collins wrote:
> Mark J Musante wrote:
>> On Thu, 21 May 2009, Ian Collins wrote:
>>> I'm trying to use zfs send/receive to replicate the root pool of a system
>>> and I can't think of a way to stop the received copy attempting to mount
>>> the filesyste

[zfs-discuss] RAIDZ2: only half the read speed?

2009-05-22 Thread David Abrahams
http://groups.google.com/group/zfs-fuse/msg/5fac5eaf2c7fccb8 shows some (admittedly very crude) tests I did with OpenSolaris 0906, with some very surprising performance results. In particular, read speed on an 8-disk pool seemed to drop by 50% when I set up the pool to use RAIDZ2. Can anyone she

Re: [zfs-discuss] LU snv_93 - snv_101a (ZFS - ZFS )

2009-05-22 Thread Mark J Musante
On Thu, 21 May 2009, Nandini Mocherla wrote:
> Then I booted into failsafe mode of 101a and then tried to run the
> following command as given in luactivate output.
Yeah, that's a known bug in the luactivate output. CR 6722845
# mount -F zfs /dev/dsk/c1t2d0s0 /mnt
cannot open '/dev/dsk/c1t2d0s0
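For reference, mount -F zfs takes a ZFS dataset name rather than a block device, which is why the command suggested by the buggy luactivate output fails. A corrected form (the BE dataset name below is hypothetical):

```shell
# Wrong (from the buggy luactivate output): a /dev/dsk path
# mount -F zfs /dev/dsk/c1t2d0s0 /mnt

# Right: mount the boot environment's dataset by name
mount -F zfs rpool/ROOT/snv_101a /mnt
```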

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-22 Thread Darren J Moffat
Miles Nordin wrote:
>> "djm" == Darren J Moffat writes:
> djm> I do; because I've done it to my own personal data pool.
> djm> However it is not a procedure I'm willing to tell anyone how
> djm> to do - so please don't ask -
> k, fine, fair enough and noted.
> djm> a) it was highly dangerou

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Jorgen Lundman
Sorry, yes. It is straight:
# time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
real 19m48.199s
# /var/tmp/nc -l -p 3001 -vvv | time zfs recv -v zpool1/le...@speedtest
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Sending is osol-b114. Receiver is Solaris 10 10/08. Whe
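The 70.5MB/sec figure checks out if zfs recv is read as reporting binary units (GiB received, MiB/s rate):

```shell
# 82.3 GiB in 1195 s, expressed in MiB/s (1 GiB = 1024 MiB)
awk 'BEGIN { printf "%.1f MB/sec\n", 82.3 * 1024 / 1195 }'   # prints: 70.5 MB/sec
```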

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Ian Collins
Brent Jones wrote:
> On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman wrote:
>> To finally close my quest. I tested "zfs send" in osol-b114 version:
>> received 82.3GB stream in 1195 seconds (70.5MB/sec)
> Can you give any details about your data set, what you piped zfs send/receive through (SS

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Robert Milkowski
btw: caching data for zfs send and zfs recv on the other side could make it even faster. you could use something like mbuffer with buffers of 1-2GB for example.

On Fri, 22 May 2009, Jorgen Lundman wrote:
> To finally close my quest. I tested "zfs send" in osol-b114 version:
> received 82.3GB
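The mbuffer suggestion can be sketched like this, replacing the nc pipe from the earlier test; host, port, and dataset names are made up for illustration, and -m sets the 1 GiB buffer Robert suggests:

```shell
# Receiver: listen on a port, buffer up to 1 GiB, feed zfs recv
mbuffer -s 128k -m 1G -I 3001 | zfs receive -v zpool1/backup

# Sender: stream the snapshot through a 1 GiB buffer to the receiver
zfs send zpool1/data@snap | mbuffer -s 128k -m 1G -O receiver-host:3001
```

The buffer decouples zfs send's bursty reads from the network, so neither side stalls waiting for the other.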