Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-26 Thread Robert Milkowski
On 26/04/2010 11:14, Phillip Oldham wrote: You don't have to do exports as I suggested to use 'zpool import -R / pool' (notice -R). I tried this after your suggestion (including the -R switch) but it failed, saying the pool I was trying to import didn't exist. which means it couldn't discov

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-26 Thread Phillip Oldham
> You don't have to do exports as I suggested to use > 'zpool import -R / pool' > (notice -R). I tried this after your suggestion (including the -R switch) but it failed, saying the pool I was trying to import didn't exist. > If you do so, a pool won't be added to > zpool.cache and therefore > af

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-26 Thread Robert Milkowski
On 26/04/2010 09:27, Phillip Oldham wrote: Then perhaps you should do zpool import -R / pool *after* you attach EBS. That way Solaris won't automatically try to import the pool and your scripts will do it once disks are available. zpool import doesn't work as there was no previous export.
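A minimal sketch of the sequence Robert is describing, assuming a pool named "tank" (a placeholder) and a boot-time script that runs only after the EBS volume has been attached:

    # once the EBS device node is visible to Solaris:
    zpool import -R / tank        # -R sets an altroot, so the pool is not recorded in zpool.cache
    # if the pool was never cleanly exported on the terminated instance,
    # the import may need to be forced:
    zpool import -f -R / tank

Because the pool stays out of zpool.cache, the next boot will not try to open devices that are not attached yet; the script simply imports again once they are.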

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-26 Thread Phillip Oldham
> Then perhaps you should do zpool import -R / pool > *after* you attach EBS. > That way Solaris won't automatically try to import > the pool and your > scripts will do it once disks are available. zpool import doesn't work as there was no previous export. I'm trying to solve the case where the

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Phillip Oldham
I can replicate this case; Start new instance > attach EBS volumes > reboot instance > data finally available. Guessing that it's something to do with the way the volumes/devices are "seen" & then made available. I've tried running various operations (offline/online, scrub) to see whether it

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Robert Milkowski
On 23/04/2010 13:38, Phillip Oldham wrote: The instances are "ephemeral"; once terminated they cease to exist, as do all their settings. Rebooting an image keeps any EBS volumes attached, but this isn't the case I'm dealing with - it's when the instance terminates unexpectedly. For instance, if

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote: > The instances are "ephemeral"; once terminated they cease to exist, as do all > their settings. Rebooting an image keeps any EBS volumes attached, but this > isn't the case I'm dealing with - it's when the instance terminates > unexpectedly. For

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Phillip Oldham
One thing I've just noticed is that after a reboot of the new instance, which showed no data on the EBS volume, the files return. So: 1. Start new instance 2. Attach EBS vols 3. `ls /foo` shows no data 4. Reboot instance 5. Wait a few minutes 6. `ls /foo` shows data as expected Not sure if this

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Phillip Oldham
The instances are "ephemeral"; once terminated they cease to exist, as do all their settings. Rebooting an image keeps any EBS volumes attached, but this isn't the case I'm dealing with - it's when the instance terminates unexpectedly. For instance, if a reboot operation doesn't succeed or if the

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote: > I'm not actually issuing any when starting up the new instance. None are > needed; the instance is booted from an image which has the zpool > configuration stored within, so simply starts and sees that the devices > aren't available, which beco

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Phillip Oldham
I'm not actually issuing any when starting up the new instance. None are needed; the instance is booted from an image which has the zpool configuration stored within, so simply starts and sees that the devices aren't available, which become available after I've attached the EBS device. Before t

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote: > > I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure > already defined. Starting an instance from this image, without attaching the > EBS volume, shows the pool structure exists and that the pool state is > "UNAVAIL" (as

[zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Phillip Oldham
I'm trying to provide some "disaster-proofing" on Amazon EC2 by using a ZFS-based EBS volume for primary data storage with Amazon S3-backed snapshots. My aim is to ensure that, should the instance terminate, a new instance can spin-up, attach the EBS volume and auto-/re-configure the zpool. I'v
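The snapshot side of this scheme might look roughly like the sketch below; the pool name "data", the snapshot name, and the S3 upload step are all placeholders for whatever tooling is actually in use:

    zfs snapshot data@backup1                              # point-in-time copy of the EBS-backed pool
    zfs send data@backup1 | gzip > /var/tmp/data-backup1.zfs.gz
    # push /var/tmp/data-backup1.zfs.gz to an S3 bucket with whichever S3 client is in use
    # a later restore would be roughly: gzcat data-backup1.zfs.gz | zfs receive data/restore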

Re: [zfs-discuss] Re-import RAID-Z2 with faulted disk

2009-09-21 Thread Kyle J. Aleshire
I've been running vanilla 2009.06 since its release. I'll definitely give it a shot with the Live CD. Also I tried importing with only the five good disks physically attached and got the same message. - Kyle On Mon, Sep 21, 2009 at 3:50 AM, Chris Murray wrote: > That really sounds like a scenario tha

Re: [zfs-discuss] Re-import RAID-Z2 with faulted disk

2009-09-21 Thread Kyle J. Aleshire
On Mon, Sep 21, 2009 at 3:37 AM, wrote: > > > > >The disk has since been replaced, so now: > >k...@localhost:~$ pfexec zpool import > > pool: chronicle > >id: 11592382930413748377 > > state: DEGRADED > >status: One or more devices contains corrupted data. > >action: The pool can be imported

Re: [zfs-discuss] Re-import RAID-Z2 with faulted disk

2009-09-21 Thread Casper . Dik
> >The disk has since been replaced, so now: >k...@localhost:~$ pfexec zpool import > pool: chronicle >id: 11592382930413748377 > state: DEGRADED >status: One or more devices contains corrupted data. >action: The pool can be imported despite missing or damaged devices. The >fault tol

[zfs-discuss] Re-import RAID-Z2 with faulted disk

2009-09-21 Thread Kyle J. Aleshire
Hi all, I have a RAID-Z2 setup with 6x 500Gb SATA disks. I exported the array to use under a different system but during or after the export one of the disks failed: k...@localhost:~$ pfexec zpool import pool: chronicle id: 11592382930413748377 state: DEGRADED status: One or more devices ar
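Given that status output, the import the pool itself is advertising would look roughly like the sketch below (the pool name and id are from the preview; the replacement-disk name and the faulted-disk placeholder are assumptions):

    zpool import chronicle                           # or, by id: zpool import 11592382930413748377
    zpool status -v chronicle                        # confirm which device is missing or corrupt
    zpool replace chronicle <faulted-disk> c1t5d0    # resilver onto the replacement

Whether this actually succeeds is what the rest of the thread is about; Kyle reports the same message even with the five good disks attached.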

Re: [zfs-discuss] RE : rsync using 100% of a cpu

2008-12-02 Thread zfs user
Francois Dion wrote: > >>"Francois Dion" wrote: > >> Source is local to rsync, copying from a zfs file system, > >> destination is remote over a dsl connection. Takes forever to just > >> go through the unchanged files. Going the other way is not a > >> problem, it takes a fraction of the t

Re: [zfs-discuss] RE : rsync using 100% of a cpu

2008-12-02 Thread William D. Hathaway
How are the two sides different? If you run something like 'openssl md5sum' on both sides is it much faster on one side? Does one machine have a lot more memory/ARC and allow it to skip the physical reads? Is the dataset compressed on one side? -- This message posted from opensolaris.org

[zfs-discuss] RE : rsync using 100% of a cpu

2008-12-02 Thread Francois Dion
>>"Francois Dion" wrote: >> Source is local to rsync, copying from a zfs file system, >> destination is remote over a dsl connection. Takes forever to just >> go through the unchanged files. Going the other way is not a >> problem, it takes a fraction of the time. Anybody seen that? >> Sugg

Re: [zfs-discuss] Re-2: What is this, who is doing it, and how do I get you to stop?

2008-06-22 Thread Volker A. Brandt
[EMAIL PROTECTED] writes: > ..sorry, there was a misconfiguration in our email-system. I've fixed it in > this moment... > We apologize for any problems you had > > Andreas Gaida Wow, that was fast! And on a Sunday evening, too... So, everything is fixed, and we are all happy now :-) Regards -

[zfs-discuss] Re: Cheap ZFS homeserver.

2008-01-15 Thread Marcus Sundman
> So I was hoping that this board would work: [...]GA-M57SLI-S4 I've been looking at that very same board for the very same purpose. It has 2 gb nics, 6 sata ports, supports ECC memory and is passively cooled. And it's very cheap compared to most systems that people recommend for running OpenSolar

[zfs-discuss] Re: ZFS and Firewire/USB enclosures

2007-07-04 Thread Jeff Thompson
Besides the one you mention, bug 6560174 also shows the problems I've seen with ZFS on firewire. (This bug also shows the blank status page.) Is there any way to know if these will be addressed? Thanks, - Jeff >> I still haven't got any "warm and fuzzy" responses >> yet solidifying ZFS in combi

[zfs-discuss] Re: ZFS boot: another way

2007-07-02 Thread Jesse Hallio
If you are doing things manually, you could as well unmount the ufs partition and mount a zfs dataset to the same location just before the installer starts installing the packages? Saves some file copying and you could remove the ufs partition entirely? - Jesse This message posted from open

Re: [zfs-discuss] Re: cross fs/same pool mv

2007-07-02 Thread Carson Gaspar
roland wrote: is there a reliable method of re-compressing a whole zfs volume after turning on compression or changing the compression scheme? It would be slow, and the file system would need to be idle to avoid race conditions, but you _could_ do the following (PO

[zfs-discuss] Re: cross fs/same pool mv

2007-07-01 Thread roland
> > You can just re-copy all of the data after enabling compression (it's fairly > > easy to write a script, or just do something like: > > > > find . -xdev -type f | cpio -ocB | cpio -idmuv > > > > to re-write all of the data. > > and to destroy the content of all files > 5k. i tried the abo
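The corruption roland reports is consistent with the quoted pipeline reading and rewriting the same files at once (cpio -B uses 5120-byte blocks, which lines up with the "> 5k" threshold). A sketch that avoids that race by copying into a separate dataset created with the desired compression; dataset names and the compression value are assumptions:

    zfs create -o compression=on tank/data.new
    cd /tank/data && find . -xdev -depth -print | cpio -pdm /tank/data.new   # pass-through copy to a different target
    # verify the copy, then swap the datasets:
    zfs rename tank/data tank/data.old
    zfs rename tank/data.new tank/data

Only data written after compression is enabled gets compressed, so a copy of some kind is unavoidable; the point of the sketch is simply that source and destination should not be the same files.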

[zfs-discuss] Re: Re: LZO compression?

2007-07-01 Thread roland
this is on linux with zfs-fuse (since no other zfs implementation besides zfs-fuse has support for lzo at this time) btw - here is some additional comparison - now with some real-world data. copying over some mysql database dir (var/lib/mysql) of size 231M gives: lzo | 0m41.152s | 2.31x lzjb |

Re: [zfs-discuss] Re: ZFS Performance with Thousands of File Systems

2007-07-01 Thread Matthew Ahrens
Stephen Le wrote: I think you will find that managing quotas for services is better when implemented by the service, rather than the file system. Thanks for the suggestion, Richard, but we're very happy with our current mail software, and we'd rather use file system quotas to control inbox siz

[zfs-discuss] Re: ZFS Performance with Thousands of File Systems

2007-06-30 Thread Stephen Le
> I think you will find that managing quotas for services is better > when implemented by the service, rather than the file system. Thanks for the suggestion, Richard, but we're very happy with our current mail software, and we'd rather use file system quotas to control inbox sizes (our mail adm

[zfs-discuss] Re: ZFS Performance with Thousands of File Systems

2007-06-30 Thread Stephen Le
(accidentally replied off-list via email, posting message here) We've already considered pooling user quotas together, if that's what you're going to suggest. Pooling user quotas would present a few problems for us, as we have a fair number of users with unique quotas and the general user quota

Re: [zfs-discuss] Re: LZO compression?

2007-06-30 Thread Cyril Plisko
On 6/30/07, roland <[EMAIL PROTECTED]> wrote: some other funny benchmark numbers: i wondered how performance/compressratio of lzjb,lzo and gzip would compare if we have optimal compressible datastream. since zfs handles repeating zero`s quite efficiently (i.e. allocating no space) i tried wri

[zfs-discuss] Re: DMU corruption

2007-06-30 Thread Peter Bortas
On 6/30/07, Peter Bortas <[EMAIL PROTECTED]> wrote: I'm currently doing a complete scrub, but according to zpool status' latest estimate it will be 63h before I know how that went... The scrub has now completed with 0 errors and there are no longer any corruption errors reported. -- Peter

[zfs-discuss] Re: LZO compression?

2007-06-30 Thread roland
some other funny benchmark numbers: i wondered how performance/compressratio of lzjb,lzo and gzip would compare if we have optimal compressible datastream. since zfs handles repeating zero`s quite efficiently (i.e. allocating no space) i tried writing non-zero values. the result is quite inter
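One hedged way to produce the kind of stream roland describes - maximally compressible but not all zeros, so ZFS cannot simply avoid allocating blocks - is to translate a zero stream into a repeated non-zero byte (file and dataset paths are placeholders):

    dd if=/dev/zero bs=1024k count=1024 | tr '\000' 'a' > /tank/test/nonzero.dat
    zfs get compressratio tank/test     # compare across compression=lzjb / gzip / (lzo on zfs-fuse)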

Re: [zfs-discuss] Re: [zfs-code] Space allocation failure

2007-06-28 Thread Manoj Joseph
Manoj Joseph wrote: Manoj Joseph wrote: Hi, In brief, what I am trying to do is to use libzpool to access a zpool - like ztest does. [snip] No, AFAIK, the pool is not damaged. But yes, it looks like the device can't be written to by the userland zfs. Well, I might have figured out someth

[zfs-discuss] Re: [zfs-code] Space allocation failure

2007-06-28 Thread Manoj Joseph
Manoj Joseph wrote: Hi, In brief, what I am trying to do is to use libzpool to access a zpool - like ztest does. [snip] No, AFAIK, the pool is not damaged. But yes, it looks like the device can't be written to by the userland zfs. Well, I might have figured out something. Trussing the pr

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-28 Thread Erik Trimble
Richard Elling wrote: Erik Trimble wrote: If you had known about the drive sizes beforehand, then you could have done something like this: Partition the drives as follows: A: 1 20GB partition B: 1 20GB & 1 10GB partition C: 1 40GB partition D: 1 40GB partition & 2 10GB partitions then you d

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Richard Elling
Erik Trimble wrote: If you had known about the drive sizes beforehand, then you could have done something like this: Partition the drives as follows: A: 1 20GB partition B: 1 20GB & 1 10GB partition C: 1 40GB partition D: 1 40GB partition & 2 10GB partitions then you do: zpool create tank m
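The preview cuts off at the zpool create line, so the exact vdev layout Erik had in mind isn't shown; one plausible reading, pairing equal-sized slices into mirrors (slice names are placeholders), is:

    zpool create tank \
        mirror c0t0d0s0 c0t1d0s0 \    # the two 20GB slices (drives A and B)
        mirror c0t2d0s0 c0t3d0s0 \    # the two 40GB slices (drives C and D)
        mirror c0t1d0s1 c0t3d0s1      # a 10GB slice from B mirrored with one from D

Every slice then has a same-sized partner on a different physical disk, which is the point of partitioning the odd-sized drives in the first place.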

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Erik Trimble
On Wed, 2007-06-27 at 12:03 -0700, Jef Pearlman wrote: > > Jef Pearlman wrote: > > > Absent that, I was considering using zfs and just > > > having a single pool. My main question is this: what > > > is the failure mode of zfs if one of those drives > > > either fails completely or has errors? Do I

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Richard Elling
Jef Pearlman wrote: Perhaps I'm not asking my question clearly. I've already experimented a fair amount with zfs, including creating and destroying a number of pools with and without redundancy, replacing vdevs, etc. Maybe asking by example will clarify what I'm looking for or where I've missed

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Erik Trimble
On Wed, 2007-06-27 at 14:50 -0700, Darren Dunham wrote: > > Darren Dunham wrote: > > >> The problem I've come across with using mirror or raidz for this setup > > >> is that (as far as I know) you can't add disks to mirror/raidz groups, > > >> and if you just add the disk to the pool, you end up in

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Darren Dunham
> Darren Dunham wrote: > >> The problem I've come across with using mirror or raidz for this setup > >> is that (as far as I know) you can't add disks to mirror/raidz groups, > >> and if you just add the disk to the pool, you end up in the same > >> situation as above (with more space but no redund

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Neil Perrin
Darren Dunham wrote: The problem I've come across with using mirror or raidz for this setup is that (as far as I know) you can't add disks to mirror/raidz groups, and if you just add the disk to the pool, you end up in the same situation as above (with more space but no redundancy). You can't
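Neil's reply is cut off, but the relevant distinction in ZFS (pool and device names below are placeholders) is that mirrors can be widened while raidz vdevs cannot:

    zpool attach tank c0t0d0 c0t4d0        # turn a single disk into a two-way mirror, or add a side to an existing mirror
    zpool add tank mirror c0t5d0 c0t6d0    # grow capacity by adding a whole new, redundant top-level vdev
    # there is no equivalent for adding a disk to an existing raidz group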

[zfs-discuss] Re: [zfs-code] Space allocation failure

2007-06-27 Thread Manoj Joseph
Hi, In brief, what I am trying to do is to use libzpool to access a zpool - like ztest does. Matthew Ahrens wrote: Manoj Joseph wrote: Hi, Replying to myself again. :) I see this problem only if I attempt to use a zpool that already exists. If I create one (using files instead of devices,

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Darren Dunham
> Perhaps I'm not asking my question clearly. I've already experimented > a fair amount with zfs, including creating and destroying a number of > pools with and without redundancy, replacing vdevs, etc. Maybe asking > by example will clarify what I'm looking for or where I've missed the > boat. The

[zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Jef Pearlman
> Jef Pearlman wrote: > > Absent that, I was considering using zfs and just > > having a single pool. My main question is this: what > > is the failure mode of zfs if one of those drives > > either fails completely or has errors? Do I > > permanently lose access to the entire pool? Can I > > attemp

Re: [zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread Matthew Ahrens
William D. Hathaway wrote: It would be really handy if whoever was responsible for the message at: http://www.sun.com/msg/ZFS-8000-A5 could add data about which zpool versions are supported at specific OS/patch releases. Did you look at http://www.opensolaris.org/os/community/zfs/version/N W

Re: [zfs-discuss] Re: zfs and 2530 jbod

2007-06-27 Thread Frank Cusack
On June 26, 2007 2:13:54 PM -0700 Joel Miller <[EMAIL PROTECTED]> wrote: The 2500 series engineering team is talking with the ZFS folks to understand the various aspects of delivering a complete solution. (There is a lot more to it than "it seems to work"...). Great news, you made my day! Any

Re: [zfs-discuss] Re: Suggestions on 30 drive configuration?

2007-06-27 Thread Dan Saul
I have 8 SATA on the motherboard, 4 PCI cards with 4 SATA each, one PCIe 4x sata card with two, and one PCIe 1x with two. The operating system itself will be on a hard drive attached to one ATA 100 connector. Kind of like a "poor man's" data centre, except not that cheap... It still is estimated

Re: [zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread Mike Lee
I had a similar situation between x86 and SPARC regarding version numbers. When I created the pool on the LOWER rev machine, it was seen by the HIGHER rev machine. This was a USB HDD, not a stick. I can now move the drive between boxes. HTH, Mike Dick Davies wrote: Thanks to everyone for the sanity ch

Re: [zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread Dick Davies
Thanks to everyone for the sanity check - I think it's a platform issue, but not an endian one. The stick was originally DOS-formatted, and the zpool was built on the first fdisk partition. So Sparcs aren't seeing it, but the x86/x64 boxes are. -- Rasputin :: Jack of All Trades - Master of Nuns

Re: [zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread Mark J Musante
On Wed, 27 Jun 2007, Jürgen Keil wrote: > Yep, I just tried it, and it refuses to "zpool import" the newer pool, > telling me about the incompatible version. So I guess the pool format > isn't the correct explanation for the Dick Davies' (number9) problem. Have you tried creating the poo

Re: [zfs-discuss] Re: Re[2]: Re: Re: Re: Snapshots impact on performance

2007-06-27 Thread Victor Latushkin
Gino wrote: Same problem here (snv_60). Robert, did you find any solutions? A couple of weeks ago I put together an implementation of space maps which completely eliminates loops and recursion from the space map alloc operation, and allows different allocation strategies to be implemented quite easily (

[zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread William D. Hathaway
It would be really handy if whoever was responsible for the message at: http://www.sun.com/msg/ZFS-8000-A5 could add data about which zpool versions are supported at specific OS/patch releases. The current message doesn't help the user figure out how to accomplish their implied task, which is t

[zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread Jürgen Keil
> Shouldn't S10u3 just see the newer on-disk format and > report that fact, rather than complain it is corrupt? Yep, I just tried it, and it refuses to "zpool import" the newer pool, telling me about the incompatible version. So I guess the pool format isn't the correct explanation for the Dick D

[zfs-discuss] Re: Re[2]: Re: Re: Re: Snapshots impact on performance

2007-06-27 Thread Gino
Same problem here (snv_60). Robert, did you find any solutions? gino This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Re: Re: ZFS - SAN and Raid

2007-06-27 Thread Richard L. Hamilton
> Victor Engle wrote: > > Roshan, > > > > As far as I know, there is no problem at all with > using SAN storage > > with ZFS and it does look like you were having an > underlying problem > > with either powerpath or the array. > > Correct. A write failed. > > > The best practices guide on opens

[zfs-discuss] Re: NFS, nested ZFS filesystems and ownership

2007-06-26 Thread Marko Milisavljevic
Well, I didn't realize this at first because I was testing with newly empty directories and sorry about wasting the bandwidth here, but it appears NFS is not showing nested ZFS filesystems *at all*; all I was seeing was the mountpoints of the parent filesystem, and their changing ownership as server wa

[zfs-discuss] Re: ZFS usb keys

2007-06-26 Thread andrewk9
Shouldn't S10u3 just see the newer on-disk format and report that fact, rather than complain it is corrupt? Andrew. This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman

[zfs-discuss] Re: NFS, nested ZFS filesystems and ownership

2007-06-26 Thread Marko Milisavljevic
I figured out how to get it to work, but I still don't quite understand it. The way i got it to work is to zfs unmount tank/fs/fs1 and tank/fs/fs2, and then it looked like this: ls -la /tank/fs user:group . root:root fs1 root:root fs2 That is, those mountpoints changed to root:root from user:gro
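What Marko describes is consistent with the fact that each ZFS filesystem is a separate NFS share, and an NFS client that mounts only the parent sees just the (empty) mountpoint directories of the children. A sketch using the dataset names from the message, with Solaris mount syntax and placeholder client-side paths:

    # server: sharenfs is inherited by tank/fs/fs1 and tank/fs/fs2
    zfs set sharenfs=on tank/fs
    # client (without NFSv4 mirror mounts): each child filesystem needs its own mount
    mount -F nfs server:/tank/fs      /mnt/fs
    mount -F nfs server:/tank/fs/fs1  /mnt/fs/fs1
    mount -F nfs server:/tank/fs/fs2  /mnt/fs/fs2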

[zfs-discuss] Re: ZFS+NFS on storedge 6120 (sun t4)

2007-06-26 Thread Joel Miller
I am pretty sure the T3/6120/6320 firmware does not support the SYNCHRONIZE_CACHE commands.. Off the top of my head, I do not know if that triggers any change in behavior on the Solaris side... The firmware does support the use of the FUA bit...which would potentially lead to similar flushing

[zfs-discuss] Re: zfs and 2530 jbod

2007-06-26 Thread Joel Miller
Hi folks, So the expansion unit for the 2500 series is the 2501. The back-end drive channels are SAS. Currently it is not "supported" to connect a 2501 directly to a SAS HBA. SATA drives are in the pipe, but will not be released until the RAID firmware for the 2500 series officially supports th

[zfs-discuss] Re: ZFS usb keys

2007-06-26 Thread Jürgen Keil
> I used a zpool on a usb key today to get some core files off a non-networked > Thumper running S10U4 beta. > > Plugging the stick into my SXCE b61 x86 machine worked fine; I just had to > 'zpool import sticky' and it worked ok. > > But when we attach the drive to a blade 100 (running s10u3), it

[zfs-discuss] Re: Suggestions on 30 drive configuration?

2007-06-25 Thread Bryan Wagoner
What is the controller setup going to look like for the 30 drives? Is it going to be fibre channel, SAS, etc. and what will be the Controller-to-Disk ratio? ~Bryan This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@ope

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-25 Thread Thomas Garner
Thanks, Roch! Much appreciated knowing what the problem is and that a fix is in a forthcoming release. Thomas On 6/25/07, Roch - PAE <[EMAIL PROTECTED]> wrote: Sorry about that; looks like you've hit this: 6546683 marvell88sx driver misses wakeup for mv_empty_cv http://bugs.

[zfs-discuss] Re: ZIL on user specified devices?

2007-06-25 Thread Bryan Wagoner
Thanks for the info Eric and Eric. This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Re: ZIL on user specified devices?

2007-06-25 Thread Bryan Wagoner
Outstanding! Wow that was a ka-winky-dink in timing. This will clear up a lot of problems for my customers in HPC environments and in some of the SAN environments. Thanks a lot for the info. I'll keep my eyes open. This message posted from opensolaris.org ___

Re: [zfs-discuss] Re: ZFS Scalability/performance

2007-06-25 Thread Peter Schuller
> FreeBSD plays it safe too. It's just that UFS, and other file systems on > FreeBSD, understand write caches and flush at appropriate times. Do you have something to cite w.r.t. UFS here? Because as far as I know, that is not correct. FreeBSD shipped with write caching turned off by default for

Re: [zfs-discuss] Re: /dev/random problem after moving to zfs boot:

2007-06-25 Thread Darren J Moffat
I think the problem is a timing one. Something must be attempting to use the in kernel API to /dev/random sooner with ZFS boot that with UFS boot. We need some boot time DTrace output to find out who is attempting to call any of the APIs in misc/kcf - particularly the random provider ones.

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-25 Thread Roch - PAE
Sorry about that; looks like you've hit this: 6546683 marvell88sx driver misses wakeup for mv_empty_cv http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6546683 Fixed in snv_64. -r Thomas Garner writes: > > We have seen this behavior, but it appears to be entirely re

Re: [zfs-discuss] Re: ZFS - SAN and Raid

2007-06-24 Thread Torrey McMahon
Gary Mills wrote: On Wed, Jun 20, 2007 at 12:23:18PM -0400, Torrey McMahon wrote: James C. McPherson wrote: Roshan Perera wrote: But Roshan, if your pool is not replicated from ZFS' point of view, then all the multipathing and raid controller backup in the world will not make a

Re: [zfs-discuss] Re: ZFS - SAN and Raid

2007-06-24 Thread Torrey McMahon
Victor Engle wrote: On 6/20/07, Torrey McMahon <[EMAIL PROTECTED]> wrote: Also, how does replication at the ZFS level use more storage - I'm assuming raw block - then at the array level? ___ Just to add to the previous comments. In the case where you

[zfs-discuss] Re: zfs space efficiency

2007-06-24 Thread roland
update on this: i think i have been caught by a rsync trap. it seems, using rsync locally (i.e. rsync --inplace localsource localdestination) and "remotely" (i.e. rsync --inplace localsource localhost:/localdestination) is something different and rsync seems to handle the writing very differen
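The difference roland is seeing is consistent with rsync's defaults: with two local paths rsync implies --whole-file and rewrites changed files in full, while a host:path destination (even localhost) engages the delta-transfer algorithm, so --inplace only rewrites the changed blocks - which is what matters for block sharing with ZFS snapshots. A sketch:

    rsync -a --inplace /src/ /dst/                     # local-to-local: whole-file copies by default
    rsync -a --inplace --no-whole-file /src/ /dst/     # force the delta algorithm locally
    rsync -a --inplace /src/ localhost:/dst/           # remote syntax: delta algorithm is the default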

[zfs-discuss] Re: zfs space efficiency

2007-06-24 Thread roland
whoops - i see i have posted the same several times. this was due to an error message i got when posting, which made me think it didn't get through. could some moderator possibly delete those double posts? meanwhile, i did some tests and have very weird results. first off, i tried "--inplace" to updat

Re: [zfs-discuss] Re: ZFS Scalability/performance

2007-06-24 Thread Pawel Jakub Dawidek
On Sat, Jun 23, 2007 at 10:21:14PM -0700, Anton B. Rang wrote: > > Oliver Schinagl wrote: > > > zo basically, what you are saying is that on FBSD there's no performane > > > issue, whereas on solaris there (can be if write caches aren't enabled) > > > > Solaris plays it safe by default. You can,

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-24 Thread Thomas Garner
We have seen this behavior, but it appears to be entirely related to the hardware: the "Intel IPMI" stuff swallows up NFS traffic on port 623 directly in the network hardware, so it never gets through. http://blogs.sun.com/shepler/entry/port_623_or_the_mount Unfortunately, this nfs hangs acr

[zfs-discuss] Re: Re: zpool mirror faulted

2007-06-24 Thread Michael Hase
So I ended up recreating the zpool from scratch; there seems to be no chance of repairing anything. All data lost - luckily nothing really important. Never had such an experience with mirrored volumes on svm/ods since solaris 2.4. Just to clarify things: there was no mucking with the underlying disk devic

[zfs-discuss] Re: ZFS Scalability/performance

2007-06-23 Thread Anton B. Rang
> Nothing sucks more than your "redundant" disk array > losing more disks than it can support and you lose all your data > anyway. You'd be better off doing a giant non-parity stripe and dumping to > tape on a regular basis. ;) Anyone who isn't dumping to tape (or some other reliable and off-s

[zfs-discuss] Re: ZFS Scalability/performance

2007-06-23 Thread Anton B. Rang
> Oliver Schinagl wrote: > > zo basically, what you are saying is that on FBSD there's no performane > > issue, whereas on solaris there (can be if write caches aren't enabled) > > Solaris plays it safe by default. You can, of course, override that safety. FreeBSD plays it safe too. It's just t

[zfs-discuss] Re: zfs space efficiency

2007-06-23 Thread roland
>So, in your case, you get maximum >space efficiency, where only the new blocks are stored, and the old >blocks simply are referenced. so - i assume that whenever some block is read from file A and written unchanged to file B, zfs recognizes this and just creates a new reference to file A ? tha

RE: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-23 Thread Paul Fisher
> From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of > Thomas Garner > > So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd > to stop responding after 2 hours of running a bittorrent client over > nfs4 from a linux client, causing zfs snapshots to hang and requi

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-23 Thread Thomas Garner
So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd to stop responding after 2 hours of running a bittorrent client over nfs4 from a linux client, causing zfs snapshots to hang and requiring a hard reboot to get the world back in order? Thomas There is no NFS over ZFS issue (

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-22 Thread Brian Hechinger
On Thu, Jun 21, 2007 at 11:36:53AM +0200, Roch - PAE wrote: > > code) or Samba might be better by being careless with data. Well, it *is* trying to be a Microsoft replacement. Gotta get it right, you know? ;) -brian -- "Perl can be fast and elegant as much as J2EE can be fast and elegant. In

Re: [zfs-discuss] Re: Indiana Wish List

2007-06-22 Thread Lori Alt
andrewk9 wrote: Apologies: I've just realised all this talk of "I've booted off of ZFS" is totally bogus. What they've actually done is booted off Ext3FS, for example, then jumped into loading the "real" root from the zpool. That'll teach me to read things first. This is indeed a pretty ugl

[zfs-discuss] Re: Indiana Wish List

2007-06-22 Thread andrewk9
Apologies: I've just realised all this talk of "I've booted off of ZFS" is totally bogus. What they've actually done is booted off Ext3FS, for example, then jumped into loading the "real" root from the zpool. That'll teach me to read things first. This is indeed a pretty ugly hack. The only obs

Re: [zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread michael schuster
Joubert Nel wrote: What I meant is that when I do "zpool create" on a disk, the entire contents of the disk doesn't seem to be overwritten/destroyed. I.e. I suspect that if I didn't copy any data to this disk, a large portion of what was on it is potentially recoverable. If so, is there a tool

Re: [zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread Darren Dunham
> What I meant is that when I do "zpool create" on a disk, the entire > contents of the disk doesn't seem to be overwritten/destroyed. I.e. I > suspect that if I didn't copy any data to this disk, a large portion > of what was on it is potentially recoverable. Presumably a scavenger program could

Re: [zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread Eric Schrock
On Thu, Jun 21, 2007 at 07:34:13PM -0700, Joubert Nel wrote: > > OK, so if I didn't copy any data to this disk, presumably a large > portion of what was on the disk previously is theoretically > recoverable. There is really one file in particular that I'd like to > recover (it is a cpio backup). >

[zfs-discuss] Re: ZFS + ISCSI + LINUX QUESTIONS

2007-06-22 Thread Gary Gendel
Al, Has there been any resolution to this problem? I get it repeatedly on my 5-500GB Raidz configuration. I sometimes get port drop/reconnect errors when this occurs. Gary This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-d

[zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread Joubert Nel
> On Thu, Jun 21, 2007 at 11:03:39AM -0700, Joubert Nel > wrote: > > > > When I ran "zpool create", the pool got created > without a warning. > > zpool(1M) will disallow creation of the disk if it > contains data in > active use (mounted fs, zfs pool, dump device, swap, > etc). It will warn > if

[zfs-discuss] Re: Re: Undo/reverse zpool create

2007-06-22 Thread Joubert Nel
Richard, > Joubert Nel wrote: > >> If the device was actually in use on another > system, I > >> would expect that libdiskmgmt would have warned > you about > >> this when you ran "zpool create". > > AFAIK, libdiskmgmt is not multi-node aware. It does > know about local > uses of the disk. Remo

Re: [zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Richard Elling
Joubert Nel wrote: If the device was actually in use on another system, I would expect that libdiskmgmt would have warned you about this when you ran "zpool create". AFAIK, libdiskmgmt is not multi-node aware. It does know about local uses of the disk. Remote uses of the disk, especially thos

Re: [zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Eric Schrock
On Thu, Jun 21, 2007 at 11:03:39AM -0700, Joubert Nel wrote: > > When I ran "zpool create", the pool got created without a warning. zpool(1M) will disallow creation of the disk if it contains data in active use (mounted fs, zfs pool, dump device, swap, etc). It will warn if it contains a recogni

[zfs-discuss] Re: New german white paper on ZFS

2007-06-21 Thread mario heimel
good work! This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Re: Best practice for moving FS between pool on same machine?

2007-06-21 Thread Chris Quenelle
Sorry I can't volunteer to test your script. I want to do the steps by hand to make sure I understand them. If I have to do it all again, I'll get in touch. Thanks for the advice! --chris Constantin Gonzalez wrote: Hi, Chris Quenelle wrote: Thanks, Constantin! That sounds like the right an

[zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Joubert Nel
> Joubert Nel wrote: > > Hi, > > > > If I add an entire disk to a new pool by doing > "zpool create", is this > > reversible? > > > > I.e. if there was data on that disk (e.g. it was > the sole disk in a zpool > > in another system) can I get this back or is zpool > create destructive? > > Short

[zfs-discuss] Re: marvell88sx error in command 0x2f: status 0x51

2007-06-21 Thread Rob Logan
> [hourly] marvell88sx error in command 0x2f: status 0x51 ah, its some kinda SMART or FMA query that model WDC WD3200JD-00KLB0 firmware 08.05J08 serial number WD-WCAMR2427571 supported features: 48-bit LBA, DMA, SMART, SMART self-test SATA1 compatible capacity = 625142448 sectors drives d

[zfs-discuss] Re: [Fwd: What Veritas is saying vs ZFS]

2007-06-21 Thread Craig Morgan
Also introduces the Veritas sfop utility, which is the 'simplified' front-end to VxVM/VxFS. As "imitation is the sincerest form of flattery", this smacks of a desperate attempt to prove to their customers that Vx can be just as slick as ZFS. More details at

Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-21 Thread Roch - PAE
Joe S writes: > After researching this further, I found that there are some known > performance issues with NFS + ZFS. I tried transferring files via SMB, and > got write speeds on average of 25MB/s. > > So I will have my UNIX systems use SMB to write files to my Solaris server. > This seem

Re: [zfs-discuss] Re: Best practice for moving FS between pool on same machine?

2007-06-21 Thread Constantin Gonzalez
Hi, Chris Quenelle wrote: > Thanks, Constantin! That sounds like the right answer for me. > Can I use send and/or snapshot at the pool level? Or do I have > to use it on one filesystem at a time? I couldn't quite figure this > out from the man pages. the ZFS team is working on a zfs send -r (r
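Until a recursive send exists, the per-filesystem equivalent of moving data between pools looks roughly like the sketch below (pool and filesystem names are placeholders; it assumes a build recent enough to have 'zfs snapshot -r'):

    zfs snapshot -r oldpool@move                          # recursive snapshot of every filesystem in the pool
    zfs send oldpool/home@move     | zfs receive newpool/home
    zfs send oldpool/projects@move | zfs receive newpool/projects
    # ...one send/receive pair per filesystem, repeated for everything 'zfs list -r oldpool' shows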

[zfs-discuss] Re: Migrating ZFS pool with zones from one host to another

2007-06-20 Thread mario heimel
Before the zoneadm attach or boot you must create the configuration on the second host, manually or with the detached config from the first host: zonecfg -z heczone 'create -a /hecpool/zones/heczone' zoneadm -z heczone attach (to attach, the requirements must be fulfilled (pkgs and patches in sync)
