[zfs-discuss] Live resize/grow of iscsi shared ZVOL

2008-08-14 Thread Martin Svensson
I have created a zvol. My client computer (Windows) has the volume connected fine. But when I resize the zvol using: zfs set volsize=20G pool/volumes/v1 ... it disconnects the client. Is this by design?
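For reference, a minimal sketch of the workflow being discussed, assuming the zvol is exported with the shareiscsi property (the usual mechanism at the time); pool and volume names are illustrative:

  # Create a 10 GB zvol and export it over iSCSI
  zfs create -V 10G pool/volumes/v1
  zfs set shareiscsi=on pool/volumes/v1

  # Grow the volume in place; the initiator will typically need to
  # rescan the target (or, as reported above, reconnect) before the
  # new size becomes visible
  zfs set volsize=20G pool/volumes/v1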

Re: [zfs-discuss] Jumpstart + ZFS boot: profile?

2008-08-14 Thread Jens Elkner
On Thu, Aug 14, 2008 at 10:49:54PM -0400, Ellis, Mike wrote: > You can break out "just var", not the others. Yepp - and that's not sufficient :( Regards, jel. -- Otto-von-Guericke University http://www.cs.uni-magdeburg.de/ Department of Computer Science Geb. 29 R 027, Universitaetsplatz 2

Re: [zfs-discuss] Jumpstart + ZFS boot: profile?

2008-08-14 Thread Jens Elkner
On Thu, Aug 14, 2008 at 02:33:19PM -0700, Richard Elling wrote: > There is a section on jumpstart for root ZFS in the ZFS Administration > Guide. > http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf Ah ok - thanx for the link. Seems to be almost the same as on the web pages (though

Re: [zfs-discuss] integrated failure recovery thoughts

2008-08-14 Thread paul
I apologize for in effect suggesting that which was previously suggested in an earlier thread: http://mail.opensolaris.org/pipermail/zfs-discuss/2008-March/046234.html And discovering that the feature to attempt worst case single bit recovery had apparently already been present in some form in

Re: [zfs-discuss] cannot delete file when fs 100% full

2008-08-14 Thread Tomas Ögren
On 14 August, 2008 - Paul Raines sent me these 2,9K bytes: > This problem is becoming a real pain to us again and I was wondering > if there has been in the past few month any known fix or workaround. Sun is sending me an IDR this/next week regarding this bug.. /Tomas -- Tomas Ögren, [EMAIL PRO

Re: [zfs-discuss] Jumpstart + ZFS boot: profile?

2008-08-14 Thread Richard Elling
There is a section on jumpstart for root ZFS in the ZFS Administration Guide. http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf You should also find it documented in the appropriate release installation documents (though I haven't checked those lately) -- richard Jens Elkner wro

Re: [zfs-discuss] cannot delete file when fs 100% full

2008-08-14 Thread Paul Raines
This problem is becoming a real pain to us again and I was wondering if there has been in the past few month any known fix or workaround. I normally create zfs fs's like this: zfs create -o quota=131G -o reserv=131G -o recsize=8K zpool1/newvol and then just nfs export through /etc/dfs/dfstab.
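For reference, a sketch of a commonly cited workaround (illustrative paths; not the official fix discussed in this thread): because ZFS is copy-on-write, removing a file on a filesystem that has hit its quota can itself fail for lack of space, so free a little headroom first:

  # Truncate the file in place to release its blocks, then remove it
  : > /zpool1/newvol/bigfile
  rm /zpool1/newvol/bigfile

  # Or temporarily raise the quota, delete, and restore it afterwards
  zfs set quota=132G zpool1/newvol
  rm /zpool1/newvol/bigfile
  zfs set quota=131G zpool1/newvol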

[zfs-discuss] Jumpstart + ZFS boot: profile?

2008-08-14 Thread Jens Elkner
Hi, I wanna try to set up a machine via jumpstart with ZFS boot using snv_95. Usually (UFS) I use a profile like this for it:
  install_type   initial_install
  system_type    standalone
  usedisk        c1t0d0
  partitioning   explicit
  filesys        c1t0d0s0   256     /
  filesys        c1t0d0s1   16384   s
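For comparison, a sketch of the ZFS-root equivalent, using roughly the keywords documented in the ZFS Administration Guide mentioned in this thread (pool name, BE name and sizes are illustrative):

  install_type  initial_install
  system_type   standalone
  # pool <name> <poolsize> <swapsize> <dumpsize> <vdev...>
  pool          rpool auto auto auto c1t0d0s0
  # a separate dataset can be requested only for /var
  bootenv       installbe bename zfsBE dataset /var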

Re: [zfs-discuss] Kernel panic at zpool import

2008-08-14 Thread Bob Friesenhahn
On Thu, 14 Aug 2008, Miles Nordin wrote: >> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes: > >mb> Ask your hardware vendor. The hardware corrupted your data, >mb> not ZFS. > > You absolutely do NOT have adequate basis to make this statement. Unfortunately I was unable to read your en

Re: [zfs-discuss] Kernel panic at zpool import

2008-08-14 Thread Darren J Moffat
Miles Nordin wrote: >> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes: > > mb> Ask your hardware vendor. The hardware corrupted your data, > mb> not ZFS. > > You absolutely do NOT have adequate basis to make this statement. > > I would further argue that you are probably wrong, and t

Re: [zfs-discuss] Kernel panic at zpool import

2008-08-14 Thread Miles Nordin
> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes: mb> Ask your hardware vendor. The hardware corrupted your data, mb> not ZFS. You absolutely do NOT have adequate basis to make this statement. I would further argue that you are probably wrong, and that I think based on what we know t

Re: [zfs-discuss] GUI support for ZFS root?

2008-08-14 Thread Rich Teer
On Thu, 14 Aug 2008, Ross wrote: > Huh? Now I'm confused, I thought b95 was just the latest build of > OpenSolaris, I didn't realise that OpenSolaris 2008.05 was different, I > thought it was just an older, more stable build that was updated less > often. Welcome to the world of ret-conning. Wh

Re: [zfs-discuss] integrated failure recovery thoughts (single-bit

2008-08-14 Thread Richard Elling
paul wrote: > bob wrote: > >> On Wed, 13 Aug 2008, paul wrote: >> >> >>> Shy extremely noisy hardware and/or literal hard failure, most >>> errors will most likely always be expressed as 1 bit out of some >>> very large N number of bits. >>> >> This claim ignores the fact that mos

Re: [zfs-discuss] Mac Mini (OS X 10.5.4) with globalSAN

2008-08-14 Thread Bob Friesenhahn
On Thu, 14 Aug 2008, Richard L. Hamilton wrote: > > Ok, but that leaves the question what a better value would be. I gather > that HFS+ operates in terms of 512-byte sectors but larger allocation units; > however, unless those allocation units are a power of two between 512 and 128k > inclusive _a
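For reference, a sketch of how a different block size could be tried (names illustrative): volblocksize is fixed at creation time, so a new zvol is needed, e.g. 4K to match HFS+'s default allocation block size rather than the 512-byte value discussed above:

  zfs create -V 20G -o volblocksize=4k pool/volumes/macvol
  zfs get volblocksize pool/volumes/macvol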

Re: [zfs-discuss] GUI support for ZFS root?

2008-08-14 Thread Ross
Huh? Now I'm confused, I thought b95 was just the latest build of OpenSolaris, I didn't realise that OpenSolaris 2008.05 was different, I thought it was just an older, more stable build that was updated less often. Is there anything else I'm missing out on by using snv_94 instead of OpenSolari

Re: [zfs-discuss] integrated failure recovery thoughts (single-bit

2008-08-14 Thread paul
Yes, Thank you.

Re: [zfs-discuss] integrated failure recovery thoughts (single-bit

2008-08-14 Thread paul
bob wrote: > On Wed, 13 Aug 2008, paul wrote: > >> Shy extremely noisy hardware and/or literal hard failure, most >> errors will most likely always be expressed as 1 bit out of some >> very large N number of bits. > > This claim ignores the fact that most computers today are still based > on

Re: [zfs-discuss] Kernel panic at zpool import

2008-08-14 Thread Chris Cosby
To further clarify Will's point... Your current setup provides excellent hardware protection, but absolutely no data protection. ZFS provides excellent data protection when it has multiple copies of the data blocks (>1 hardware devices). Combine the two, provide >1 hardware devices to ZFS, and yo
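A sketch of what "provide >1 hardware devices to ZFS" might look like in practice (device names illustrative):

  # Export each disk as its own LUN and let ZFS build the redundancy
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

  # Or keep the array-level RAID5 but present two independent LUNs and
  # mirror them, so ZFS has a second copy to self-heal from
  zpool create tank mirror c2t0d0 c2t1d0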

Re: [zfs-discuss] Kernel panic at zpool import

2008-08-14 Thread Will Murnane
On Thu, Aug 14, 2008 at 07:42, Borys Saulyak <[EMAIL PROTECTED]> wrote: > I've got, lets say, 10 disks in the storage. They are currently in RAID5 > configuration and given to my box as one LUN. You suggest to create 10 LUNs > instead, and give them to ZFS, where they will be part of one raidz, r

Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-14 Thread Tim
I don't have any extra cards lying around and can't really take my server down, so my immediate question would be: Is there any sort of PCI bridge chip on the card? I know in my experience I've seen all sorts of headaches with less than stellar bridge chips. Specifically some of the IBM bridge chi

Re: [zfs-discuss] Kernel panic at zpool import

2008-08-14 Thread Borys Saulyak
> I would recommend you to make multiple LUNs visible > to ZFS, and create So, you are saying that ZFS will cope better with failures than any other storage system, right? I'm just trying to imagine... I've got, let's say, 10 disks in the storage. They are currently in RAID5 configuration and giv

Re: [zfs-discuss] marvell88sx patch

2008-08-14 Thread Enda O'Connor
Hi, build 93 contains all the fixes in 138053-02, it would appear. Just to avoid confusion, patch 138053-02 is only relevant to the Solaris 10 updates, and does not apply to the OpenSolaris variants. To get all the fixes for OpenSolaris, upgrade or install build 93. If on Solaris 10, then sugges
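For reference, a quick way to check which situation applies (commands as on Solaris 10 / Nevada of that era):

  # On Solaris 10: is the marvell88sx patch already installed?
  patchadd -p | grep 138053

  # On OpenSolaris/Nevada: check the build instead; the fixes are
  # integrated in build 93 and later rather than delivered as a patch
  cat /etc/release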

Re: [zfs-discuss] marvell88sx patch

2008-08-14 Thread Martin Gasthuber
Hi, in which OpenSolaris (Nevada) build is this fix included? Thanks, Martin On 13 Aug, 2008, at 18:52, Bob Friesenhahn wrote: I see that a driver patch has now been released for marvell88sx hardware. I expect that this is the patch that Thumper owners have been anxiously waiting

Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-14 Thread Ross
This is the problem when you try to write up a good summary of what you found. I've got pages and pages of notes of all the tests I did here, far more than I could include in that PDF. What makes me think it's a driver issue is that I've done much of what you suggested. I've replicated the exact same

Re: [zfs-discuss] Mac Mini (OS X 10.5.4) with globalSAN

2008-08-14 Thread Richard L. Hamilton
> On Wed, 13 Aug 2008, Richard L. Hamilton wrote: > > > > Reasonable enough guess, but no, no compression, > nothing like that; > > nor am I running anything particularly demanding > most of the time. > > > > I did have the volblocksize set down to 512 for > that volume, since I thought > > that fo

Re: [zfs-discuss] Kernel panic at zpool import

2008-08-14 Thread Marc Bevand
Borys Saulyak eumetsat.int> writes: > > > Your pools have no redundancy... > > Box is connected to two fabric switches via different HBAs, storage is > RAID5, MPxIP is ON, and all after that my pools have no redundancy?!?! As Darren said: no, there is no redundancy that ZFS can use. It is impor
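One partial mitigation sometimes suggested for single-LUN pools (a sketch only, not a substitute for giving ZFS real redundancy as discussed above): the copies property keeps extra copies of each newly written block, which lets ZFS repair isolated checksum errors even though losing the LUN still loses the pool:

  zfs set copies=2 pool/dataset   # affects newly written data only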