Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-29 Thread Brent Jones
On Mon, Sep 29, 2008 at 9:28 PM, Richard Elling <[EMAIL PROTECTED]> wrote: > Ahmed Kamal wrote: >> Hi everyone, >> >> We're a small Linux shop (20 users). I am currently using a Linux >> server to host our 2TBs of data. I am considering better options for >> our data storage needs. I mostly need in

Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-29 Thread Wilkinson, Alex
On Mon, Sep 29, 2008 at 09:28:53PM -0700, Richard Elling wrote: >EMC does not, and cannot, provide end-to-end data validation. So how >would you measure its data reliability? If you search the ZFS-discuss archives, >you will find instances where people using high-end storage also

Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-29 Thread Richard Elling
Ahmed Kamal wrote: > Hi everyone, > > We're a small Linux shop (20 users). I am currently using a Linux > server to host our 2TBs of data. I am considering better options for > our data storage needs. I mostly need instant snapshots and better > data protection. I have been considering EMC NS20

[zfs-discuss] S10U5: deadlock between 'zfs receive' and 'zfs list'

2008-09-29 Thread River Tarnell
hi, i have an X4500 running Solaris 10 Update 5 (with all current patches). it has a stripe-mirror ZFS pool over 44 disks with 2 hot spares. the system is entirely idle, except that every 60 seconds, a 'zfs recv' is run. a couple of days ago, while
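The workload described above (a `zfs recv` run every 60 seconds) is typically driven by a loop along these lines. This is a minimal sketch only; the pool, dataset, and snapshot names are hypothetical, not taken from the poster's actual setup:

```shell
# Hypothetical periodic incremental replication; 'tank/fs', 'backup/fs'
# and 'backuphost' are placeholders, not from the original report.
# Assumes $prev already holds the name of the previous snapshot.
while true; do
    now=$(date +%s)
    zfs snapshot "tank/fs@$now"                    # snapshot on the sender
    zfs send -i "tank/fs@$prev" "tank/fs@$now" | \
        ssh backuphost zfs recv -F backup/fs       # incremental receive
    prev=$now
    sleep 60
done
```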

[zfs-discuss] Quantifying ZFS reliability

2008-09-29 Thread Ahmed Kamal
Hi everyone, We're a small Linux shop (20 users). I am currently using a Linux server to host our 2TBs of data. I am considering better options for our data storage needs. I mostly need instant snapshots and better data protection. I have been considering EMC NS20 filers and ZFS-based solutions. F

Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-29 Thread Jean Dion
Do you have dedicated iSCSI ports from your server to your NetApp? iSCSI requires a dedicated network, not a shared network or even a VLAN. Backups cause large I/O that fills your network quickly, like any SAN today. Backups are extremely demanding on hardware (CPU, memory, I/O ports, disks, etc.).

Re: [zfs-discuss] working closed blob driver

2008-09-29 Thread James C. McPherson
Miles Nordin wrote: >> "jcm" == James C McPherson <[EMAIL PROTECTED]> writes: > >jcm> Can I assume that my "2008-07-26 post" was in fact two >jcm> messages that were sent to you and cc'd to zfs-discuss: >jcm> > http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/049605.htm

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-29 Thread Cindy . Swearingen
Ross, No need to apologize... Many of us work hard to make sure good ZFS information is available so a big thanks for bringing this wiki page to our attention. Playing with UFS on ZFS is one thing but even inexperienced admins need to know this kind of configuration will provide poor performance

Re: [zfs-discuss] zpool error: must be a block device or regular file

2008-09-29 Thread Ross
Oh, ok. So /dev/rdsk is never going to work then. Mind if I pick your brain a little more then while I try to understand this properly. The man pages for the nvram card state that /dev/rdsk will normally be the preferred way to access these devices, since /dev/dsk is cached by the kernel, whi

Re: [zfs-discuss] unable to ludelete BE with ufs

2008-09-29 Thread Mark J Musante
On Tue, 30 Sep 2008, Ian Collins wrote: Mark J Musante wrote: On Sat, 27 Sep 2008, Marcin Woźniak wrote: After successful upgrade from snv_95 to snv_98 ( ufs boot -> zfs boot). After luactive new BE with zfs. I am not able to ludelete old BE with ufs. problem is, I think that zfs boot is /rpo

Re: [zfs-discuss] unable to ludelete BE with ufs

2008-09-29 Thread Ian Collins
Mark J Musante wrote: > On Sat, 27 Sep 2008, Marcin Woźniak wrote: > >> After successful upgrade from snv_95 to snv_98 ( ufs boot -> zfs >> boot). After luactive new BE with zfs. I am not able to ludelete old >> BE with ufs. problem is, I think that zfs boot is /rpool/boot/grub. > > This is due to

Re: [zfs-discuss] zpool error: must be a block device or regular file

2008-09-29 Thread William D. Hathaway
/dev/rdsk/* devices are character based devices, not block based. In general, character based devices have to be accessed serially (and don't do buffering), versus block devices which buffer and allow random access to the data. If you use: ls -lL /dev/*dsk/c3d1p0 you should see that the /dev/ds
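The distinction is visible in the first character of an `ls -lL` listing: `b` for block devices, `c` for character (raw) devices. The Solaris `c3d1p0` paths from the thread can't be reproduced everywhere, so `/dev/null`, a character device on every Unix, serves as a portable illustration:

```shell
# On Solaris, compare the two device nodes for the same disk:
#   ls -lL /dev/dsk/c3d1p0    -> mode string starts with 'b' (block)
#   ls -lL /dev/rdsk/c3d1p0   -> mode string starts with 'c' (character/raw)
# Portable demonstration using /dev/null, a character device everywhere:
ls -lL /dev/null | cut -c1    # prints: c
```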

Re: [zfs-discuss] working closed blob driver

2008-09-29 Thread Miles Nordin
> "jcm" == James C McPherson <[EMAIL PROTECTED]> writes: jcm> Can I assume that my "2008-07-26 post" was in fact two jcm> messages that were sent to you and cc'd to zfs-discuss: jcm> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/049605.html jcm> and jcm> http://mai

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-29 Thread Ross Becker
I have to come back and face the shame; this was a total newbie mistake by myself. I followed the ZFS shortcuts for noobs guide off bigadmin; http://wikis.sun.com/display/BigAdmin/ZFS+Shortcuts+for+Noobs What that had me doing was creating a UFS filesystem on top of a ZFS volume, so I was usi
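For reference, the anti-pattern the BigAdmin page led to, and the idiomatic alternative, look roughly like this. Pool and filesystem names are hypothetical, and the mount step is a sketch rather than the poster's exact commands:

```shell
# Anti-pattern: UFS layered on a ZFS volume (double caching, poor performance)
zfs create -V 100g tank/vol0               # create a ZFS volume (zvol)
newfs /dev/zvol/rdsk/tank/vol0             # put a UFS filesystem on top of it
mount /dev/zvol/dsk/tank/vol0 /export/data

# Idiomatic: a native ZFS filesystem, no UFS layer at all
zfs create -o mountpoint=/export/data tank/data
```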

Re: [zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)

2008-09-29 Thread Darren J Moffat
Volker A. Brandt wrote: >>> So they only work on and off. I never bothered to find out what the >>> problem was (in fact, I hadn't even tried the ramdiskadm cmd in that >>> version of Solaris before this email thread showed up). >>> >> AIUI, the memory assigned to a ramdisk must be contiguous. >>

Re: [zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)

2008-09-29 Thread Volker A. Brandt
> > So they only work on and off. I never bothered to find out what the > > problem was (in fact, I hadn't even tried the ramdiskadm cmd in that > > version of Solaris before this email thread showed up). > > > > AIUI, the memory assigned to a ramdisk must be contiguous. > This makes some sense in
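The on-and-off behaviour described here can be probed with `ramdiskadm` directly. A minimal sketch, with arbitrary names and sizes; the contiguous-memory requirement mentioned above is why the first command can fail even when `vmstat` reports plenty of free memory:

```shell
# Allocate a ramdisk; this can fail even with ample free memory,
# because the backing allocation must be physically contiguous.
ramdiskadm -a rd1 512m                   # create /dev/ramdisk/rd1
ramdiskadm                               # list existing ramdisks
zpool create ramtank /dev/ramdisk/rd1    # throwaway pool for testing
zpool destroy ramtank
ramdiskadm -d rd1                        # free the ramdisk
```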

[zfs-discuss] zpool error: must be a block device or regular file

2008-09-29 Thread Ross
Hey folks, Can anybody help me out with this? I've finally gotten my hands on a Micro Memory nvram card, but I'm struggling to get it working with ZFS. The drivers appeared to install fine, and it works with ZFS if I use the /dev/dsk device, but whenever I try to use rdsk I get the error: #

Re: [zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)

2008-09-29 Thread Richard Elling
Volker A. Brandt wrote: >>> [most people don't seem to know Solaris has ramdisk devices] >>> >> That is because only a select few are able to unravel the enigma wrapped in >> a clue that is solaris :) >> > > Hmmm... very enigmatic, your remark. :-) > > > However, in this case I suspect

Re: [zfs-discuss] unable to ludelete BE with ufs

2008-09-29 Thread Mark J Musante
On Sat, 27 Sep 2008, Marcin Woźniak wrote: After successful upgrade from snv_95 to snv_98 ( ufs boot -> zfs boot). After luactive new BE with zfs. I am not able to ludelete old BE with ufs. problem is, I think that zfs boot is /rpool/boot/grub. This is due to a bug in the /usr/lib/lu/lulib sc

Re: [zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)

2008-09-29 Thread Volker A. Brandt
> Note this from vmstat(1M): > > Without options, vmstat displays a one-line summary of the > virtual memory activity since the system was booted. Oops, you're correct. I was only trying to demonstrate that there was ample free memory and ramdiskadm just didn't work. Usually I do tha

Re: [zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)

2008-09-29 Thread Mike Gerdts
On Mon, Sep 29, 2008 at 2:12 AM, Volker A. Brandt <[EMAIL PROTECTED]> wrote: > kthr memorypagedisk faults cpu > r b w swap free re mf pi po fr de sr lf lf lf s0 in sy cs us sy id > 0 0 0 33849968 2223440 2 14 1 0 0 0 0 0 21 0 21 813 1

Re: [zfs-discuss] zfs resilvering

2008-09-29 Thread Mikael Kjerrman
Richard, thanks a lot for that answer. It can be argued back and forth what is right, but it helps knowing the reason behind the problem. Again, thanks a lot... //Mike

Re: [zfs-discuss] zfs resilvering

2008-09-29 Thread Mikael Kjerrman
Hi, it was actually shared both as a dataset and an NFS share. we had zonedata/prodlogs set up as a dataset and then we had zonedata/tmp mounted as an NFS filesystem within the zone. //Mike

Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-29 Thread Darren J Moffat
Adam Leventhal wrote: > For a root device it doesn't matter that much. You're not going to be > writing to the device at a high data rate so write/erase cycles don't > factor much (SLC can sustain about a factor of 10 more). With MLC > you'll get 2-4x the capacity for the same price, but agai

Re: [zfs-discuss] RAIDZ one of the disk showing unavail

2008-09-29 Thread Ralf Ramge
Miles Nordin wrote: > Ralf, aren't you missing this obstinence-error: > > sc> the following errors must be manually repaired: > sc> /dev/dsk/c0t2d0s0 is part of active ZFS pool export_content. > > and he used the -f flag. No, I saw it. My understanding has been that the drive was unavai

Re: [zfs-discuss] unable to ludelete BE with ufs

2008-09-29 Thread Trevor Watson
I had exactly the same problem and have not been able to find a resolution yet. Marcin Woźniak wrote: After successful upgrade from snv_95 to snv_98 ( ufs boot -> zfs boot). After luactive new BE with zfs. I am not able to ludelete old BE with ufs. problem is, I think that zfs boot is /rpool/boo

[zfs-discuss] OT: ramdisks (Was: Re: create raidz with 1 disk offline)

2008-09-29 Thread Volker A. Brandt
> > [most people don't seem to know Solaris has ramdisk devices] > > That is because only a select few are able to unravel the enigma wrapped in a > clue that is solaris :) Hmmm... very enigmatic, your remark. :-) However, in this case I suspect it is because ramdisks don't really work well on