I have created a zvol, and my Windows client has the volume connected fine.
But when I resize the zvol using:
zfs set volsize=20G pool/volumes/v1
... it disconnects the client. Is this by design?
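For reference, a minimal sketch of the resize-and-recheck sequence, assuming
the zvol is exported with the shareiscsi/iscsitadm target (the post does not
say how the volume is shared, so that part is an assumption):

  # grow the zvol (same dataset name as above)
  zfs set volsize=20G pool/volumes/v1

  # confirm the new size
  zfs get volsize pool/volumes/v1

  # if the zvol is shared via the shareiscsi property, check that the
  # target is still exported before reconnecting the Windows initiator
  zfs get shareiscsi pool/volumes/v1
  iscsitadm list target -v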
On Thu, Aug 14, 2008 at 10:49:54PM -0400, Ellis, Mike wrote:
> You can break out "just var", not the others.
Yep - and that's not sufficient :(
Regards,
jel.
--
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science Geb. 29 R 027, Universitaetsplatz 2
On Thu, Aug 14, 2008 at 02:33:19PM -0700, Richard Elling wrote:
> There is a section on jumpstart for root ZFS in the ZFS Administration
> Guide.
>http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Ah, OK - thanks for the link. It seems to be almost the same as on the web
pages (though
I apologize for, in effect, suggesting something that was previously suggested
in an earlier thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-March/046234.html
... and for discovering that the feature to attempt worst-case single-bit
recovery had apparently already been present in some form in
On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:
> This problem is becoming a real pain to us again and I was wondering
> if there has been in the past few month any known fix or workaround.
Sun is sending me an IDR this/next week regarding this bug..
/Tomas
--
Tomas Ögren, [EMAIL PROTECTED]
There is a section on jumpstart for root ZFS in the ZFS Administration
Guide.
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
You should also find it documented in the appropriate release
installation documents (though I haven't checked those lately).
-- richard
Jens Elkner wrote:
This problem is becoming a real pain to us again and I was wondering
if there has been in the past few month any known fix or workaround.
I normally create zfs fs's like this:
zfs create -o quota=131G -o reserv=131G -o recsize=8K zpool1/newvol
and then just nfs export through /etc/dfs/dfstab.
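As a variation on that recipe (a sketch only, reusing the poster's names; the
sharenfs property is an alternative to editing /etc/dfs/dfstab, not what the
poster actually does):

  # quota, reservation and 8K records as above, but with ZFS managing
  # the NFS export itself
  zfs create -o quota=131G -o reservation=131G -o recordsize=8K \
      -o sharenfs=on zpool1/newvol

  # verify the export
  zfs get sharenfs zpool1/newvol
  share | grep newvol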
Hi,
I want to try to set up a machine via jumpstart with ZFS boot using snv_95.
Usually (UFS) I use a profile like this for it:
install_type    initial_install
system_type     standalone
usedisk         c1t0d0
partitioning    explicit
filesys         c1t0d0s0    256     /
filesys         c1t0d0s1    16384   s
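For a ZFS root, the profile uses pool and bootenv keywords instead of the
filesys lines; a minimal sketch based on the ZFS Administration Guide pointer
later in the thread (the pool name rpool, the BE name zfsBE and the auto sizes
are illustrative assumptions):

  install_type    initial_install
  system_type     standalone
  # create the root pool on the whole slice; pool size, swap size and
  # dump size are left to the installer
  pool            rpool auto auto auto c1t0d0s0
  # name the boot environment that gets installed into the pool
  bootenv         installbe bename zfsBE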
On Thu, 14 Aug 2008, Miles Nordin wrote:
>> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes:
>
> mb> Ask your hardware vendor. The hardware corrupted your data,
> mb> not ZFS.
>
> You absolutely do NOT have adequate basis to make this statement.
Unfortunately I was unable to read your en
Miles Nordin wrote:
>> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes:
>
> mb> Ask your hardware vendor. The hardware corrupted your data,
> mb> not ZFS.
>
> You absolutely do NOT have adequate basis to make this statement.
>
> I would further argue that you are probably wrong, and t
> "mb" == Marc Bevand <[EMAIL PROTECTED]> writes:
mb> Ask your hardware vendor. The hardware corrupted your data,
mb> not ZFS.
You absolutely do NOT have adequate basis to make this statement.
I would further argue that you are probably wrong, and that I think
based on what we know t
On Thu, 14 Aug 2008, Ross wrote:
> Huh? Now I'm confused, I thought b95 was just the latest build of
> OpenSolaris, I didn't realise that OpenSolaris 2008.05 was different, I
> thought it was just an older, more stable build that was updated less
> often.
Welcome to the world of ret-conning. Wh
paul wrote:
> bob wrote:
>
>> On Wed, 13 Aug 2008, paul wrote:
>>
>>
>>> Shy extremely noisy hardware and/or literal hard failure, most
>>> errors will most likely always be expressed as 1 bit out of some
>>> very large N number of bits.
>>>
>> This claim ignores the fact that mos
On Thu, 14 Aug 2008, Richard L. Hamilton wrote:
>
> Ok, but that leaves the question what a better value would be. I gather
> that HFS+ operates in terms of 512-byte sectors but larger allocation units;
> however, unless those allocation units are a power of two between 512 and 128k
> inclusive _a
Huh? Now I'm confused. I thought b95 was just the latest build of OpenSolaris;
I didn't realise that OpenSolaris 2008.05 was different. I thought it was just
an older, more stable build that was updated less often.
Is there anything else I'm missing out on by using snv_94 instead of
OpenSolaris 2008.05?
Yes, Thank you.
bob wrote:
> On Wed, 13 Aug 2008, paul wrote:
>
>> Shy extremely noisy hardware and/or literal hard failure, most
>> errors will most likely always be expressed as 1 bit out of some
>> very large N number of bits.
>
> This claim ignores the fact that most computers today are still based
> on
To further clarify Will's point...
Your current setup provides excellent hardware protection, but absolutely no
data protection.
ZFS provides excellent data protection when it has multiple copies of the
data blocks (more than one hardware device).
Combine the two, give ZFS more than one hardware device, and yo
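As a concrete illustration of "more than one device to ZFS", a sketch assuming
the array can present two LUNs (the pool and device names are made up for the
example):

  # two LUNs exported by the array, mirrored by ZFS so that a block
  # failing its checksum on one side can be repaired from the other
  zpool create tank mirror c2t0d0 c2t1d0

  # scrub periodically to exercise the self-healing path
  zpool scrub tank
  zpool status -v tank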
On Thu, Aug 14, 2008 at 07:42, Borys Saulyak <[EMAIL PROTECTED]> wrote:
> I've got, lets say, 10 disks in the storage. They are currently in RAID5
> configuration and given to my box as one LUN. You suggest to create 10 LUNs
> instead, and give them to ZFS, where they will be part of one raidz, r
I don't have any extra cards lying around and can't really take my server
down, so my immediate question would be:
Is there any sort of PCI bridge chip on the card? In my experience I've seen
all sorts of headaches with less-than-stellar bridge chips.
Specifically, some of the IBM bridge chi
> I would recommend you to make multiple LUNs visible
> to ZFS, and create
So, you are saying that ZFS will cope better with failures than any other
storage system, right? I'm just trying to imagine...
I've got, let's say, 10 disks in the storage. They are currently in RAID5
configuration and giv
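To make the "multiple LUNs" suggestion concrete, a sketch of one possible
layout, assuming the array can export each disk as its own LUN (the device
names and the choice of raidz2 are illustrative, not from the thread):

  # ten individually exported LUNs, with ZFS providing the parity and
  # therefore able to reconstruct blocks that fail their checksums
  zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
      c3t5d0 c3t6d0 c3t7d0 c3t8d0 c3t9d0
  zpool status tank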
Hi
Build 93 contains all the fixes in 138053-02, it would appear.
Just to avoid confusion, patch 138053-02 is only relevant to the Solaris
10 updates, and does not apply to the OpenSolaris variants.
To get all the fixes for OpenSolaris, upgrade to or install build 93.
If on Solaris 10, then sugges
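A quick way to tell which situation a box is in (standard commands, nothing
specific to this patch):

  # on OpenSolaris / Nevada: prints the build, e.g. snv_93 or later
  uname -v

  # on Solaris 10: check whether 138053-02 is already installed
  showrev -p | grep 138053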
Hi,
which OpenSolaris (Nevada) build includes this fix?
Thanks,
Martin
On 13 Aug, 2008, at 18:52, Bob Friesenhahn wrote:
I see that a driver patch has now been released for marvell88sx
hardware. I expect that this is the patch that Thumper owners have
been anxiously waiting
This is the problem when you try to write up a good summary of what you found.
I've got pages and pages of notes of all the tests I did here, far more than I
could include in that PDF.
What makes me think it's the driver is that I've done much of what you suggested.
I've replicated the exact same
> On Wed, 13 Aug 2008, Richard L. Hamilton wrote:
> >
> > Reasonable enough guess, but no, no compression, nothing like that;
> > nor am I running anything particularly demanding most of the time.
> >
> > I did have the volblocksize set down to 512 for that volume, since I
> > thought that fo
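For context on the volblocksize discussion: the property can only be set when
the zvol is created, so moving away from 512 means creating a new volume and
copying the data over. A sketch with illustrative names and sizes:

  # volblocksize is fixed at creation time; 8K here as an example
  zfs create -V 10G -o volblocksize=8K pool/vol8k
  zfs get volblocksize pool/vol8k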
Borys Saulyak eumetsat.int> writes:
>
> > Your pools have no redundancy...
>
> Box is connected to two fabric switches via different HBAs, storage is
> RAID5, MPxIO is ON, and all after that my pools have no redundancy?!?!
As Darren said: no, there is no redundancy that ZFS can use. It is impor
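One way to give ZFS redundancy it can actually use without reworking the
array, sketched with assumed pool and device names: present a second LUN and
attach it to the existing device.

  # attach a second LUN to the single-device pool, turning it into a
  # two-way mirror; ZFS can then repair checksum errors from the copy
  zpool attach tank c4t0d0 c4t1d0

  # watch the resilver complete and confirm the mirror
  zpool status tank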