Awesome - thank you to all who responded with both the autoexpand and
import/export suggestions! I will try them out!
--
not being recognized.
Thanks!
-Nick
--
One other question - I'm seeing the same sort of behavior when I try to do
something like "zfs set sharenfs=off storage/fs" - is there a reason that
turning off NFS sharing should halt I/O?
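A minimal sketch of the sequence in question, assuming the dataset storage/fs from the example above is shared and has clients doing I/O against it (dataset and file names here are placeholders):

  # confirm the dataset is currently shared over NFS
  zfs get sharenfs storage/fs

  # generate some client I/O in another shell, for example:
  #   dd if=/dev/zero of=/storage/fs/testfile bs=128k count=1000

  # disable NFS sharing - this is the step that appears to stall I/O
  zfs set sharenfs=off storage/fs

  # verify the property afterwards
  zfs get sharenfs storage/fs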
--
Thanks!
--
te so
understanding or patient. This is a pretty big roadblock, IMHO, to this being
a workable storage solution. I certainly do understand that I'm using the dev
releases, so it is under development and I should expect bugs - this one just
seems pretty significant, like I would need
and volumes upgraded to the latest available versions. I am using
deduplication on my ZFS volumes, enabled at the top level of the volume
hierarchy, so I'm not sure if that has an impact. Can anyone provide any hints
as to whether this is a bug or expected behavior, what's causing it, and how I
can solve it?
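In case it helps with diagnosis, a hedged sketch of how the dedup configuration can be inspected; the pool name tank is a placeholder, not the poster's actual pool:

  # show whether dedup is enabled and where in the hierarchy it is set
  zfs get -r dedup tank

  # overall deduplication ratio achieved for the pool
  zpool get dedupratio tank

  # dump dedup table (DDT) statistics - can take a while on large pools
  zdb -DD tank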
or will work in OpenSolaris, especially given that the Java-based webconsole
is being phased out?
Thanks!
-Nick
--
the snapshots probably aren't that useful to me. However,
if the snapshots are critical to your operations and your ability to service
user requests, then, yes, putting them onto a secondary storage location is a
good idea.
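For the secondary-storage case, a minimal sketch of what that replication might look like, assuming a primary pool tank and a separate backup pool named backup (all names here are placeholders):

  # snapshot the filesystem on the primary pool
  zfs snapshot tank/data@backup-1

  # copy it to the secondary pool
  zfs send tank/data@backup-1 | zfs receive backup/data

  # later snapshots can be sent incrementally; -F rolls the destination
  # back to the last received snapshot if it has drifted
  zfs snapshot tank/data@backup-2
  zfs send -i tank/data@backup-1 tank/data@backup-2 | zfs receive -F backup/data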
-Nick
> Wait...whoah, hold
> on. If snapshots reside within the confines of the
> pool, are you saying that dedup will also count
> what's contained inside the snapshots? I'm
> not sure why, but that thought is vaguely disturbing
> on some level.
>
> Then again (not sure how gurus feel on this
> point) b
-----Original Message-----
From: Bone, Nick
Sent: 16 December 2009 16:33
To: oab
Subject: RE: [zfs-discuss] Import a SAN cloned disk
Hi
I know that EMC don't recommend adding a SnapView snapshot to the original
host's Storage Group, although it is not prevented.
I tried this jus
> things we want our
> file system to do for us, the stronger CPU it'll
> take.
>
Understood and agreed...but if you have the extra CPU cycles already, then,
depending on the type of data and your deduplication ratios, it may be worth it
to use the extra CPU to avoid buying more disks.
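One way to sanity-check that trade-off, sketched with a placeholder pool name tank, is to compare the dedup ratio the pool is actually achieving against how busy the CPUs are while the workload runs:

  # the DEDUP column shows the achieved ratio on dedup-capable builds
  zpool list tank

  # watch CPU utilisation while the workload is active
  mpstat 5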
Yes, in fact the OpenPegasus server is already included with OpenSolaris under
the SUNWcimserver package. I don't know yet how extensive the implementation
is, though - I was able to install it and get it running, but not much beyond
that.
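For reference, a sketch of how one might check for and pull in that package on an OpenSolaris (IPS) system; the exact SMF service name it registers is not something I'd state authoritatively, hence the grep:

  # is the package already installed?
  pkg list SUNWcimserver

  # install it if not
  pfexec pkg install SUNWcimserver

  # look for the CIM/WBEM service it registers with SMF
  svcs -a | grep -i cim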
--
(if that's
important to you). I also don't believe that EON currently has a web-based
management interface - it's in the works - so that doesn't really help you
there.
-Nick
--
Is it possible to mirror a ZFS filesystem with another one on a different
disk? That is, two separate zpools, one on each disk, each with a number of
ZFS filesystems, and one particular filesystem in each pool mirroring its
counterpart in the other.
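A sketch of the layout being described, using placeholder pool, device, and filesystem names; the "mirroring" shown here is periodic snapshot replication rather than a true zpool mirror:

  # one pool per disk
  zpool create poolA c1t0d0
  zpool create poolB c1t1d0

  # several filesystems in each pool
  zfs create poolA/projects
  zfs create poolB/scratch

  # keep one particular filesystem in poolB as a copy of its poolA counterpart
  zfs snapshot poolA/projects@sync1
  zfs send poolA/projects@sync1 | zfs receive poolB/projects-copy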
Nick
e anything on either Solaris 10u6 or OpenSolaris 2008.11. Am I doing
something wrong here?
Also, what should the contents of this "verbose information" be, anyway?
Regards,
Nick
--
What does the 'verbose information' reported by "zfs send -v" actually contain?
Also on Solaris 10u6 I don't get any output at all - is this a bug?
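For anyone comparing notes, a sketch of the invocations being discussed (pool, filesystem, and snapshot names are placeholders); whatever -v prints goes to standard error, since standard output carries the stream itself:

  # full send, stream discarded, any verbose messages appear on stderr
  zfs send -v tank/home@snap1 > /dev/null

  # incremental send, where -v typically has more to report
  zfs send -v -i tank/home@snap1 tank/home@snap2 > /dev/null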
Regards,
Nick
--
zfs receive'.
Many Thanks for any help.
Nick Smith
--
> Richard Elling wrote:
> Nick wrote:
> Using the RAID cards capability for RAID6 sounds attractive?
> Assum
should you so wish. I had kinda given up on expecting read-ahead assistance
from the hardware... I'm hoping that the large-ish write cache will simply let
ZFS move on and continue its work as soon as possible.
Thoughts much appreciated.
Nick
I have been tasked with putting together a storage solution for use in a
virtualization setup, serving NFS, CIFS, and iSCSI over GigE. I've inherited a
few components to work with:
x86 dual-core server, 512MB LSI-ELP RAID card
12 x 300GB 15Krpm SAS disks & array
2GB
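As a point of reference for the discussion, one possible ZFS layout for a 12-disk box serving NFS, CIFS, and iSCSI; the device names, the mirror grouping, and running the RAID card as JBOD are all assumptions layered on top of the hardware list above:

  # six 2-way mirrors across the 12 SAS disks (assumes the RAID card
  # exposes them as individual JBOD devices)
  zpool create tank \
      mirror c2t0d0 c2t1d0  mirror c2t2d0 c2t3d0  mirror c2t4d0 c2t5d0 \
      mirror c2t6d0 c2t7d0  mirror c2t8d0 c2t9d0  mirror c2t10d0 c2t11d0

  # NFS and CIFS shares for the virtualization hosts
  zfs create -o sharenfs=on tank/nfs
  zfs create -o sharesmb=on tank/cifs

  # a zvol exported over iSCSI (shareiscsi was the pre-COMSTAR shorthand)
  zfs create -V 200g tank/vm-lun0
  zfs set shareiscsi=on tank/vm-lun0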
> I have no idea what to make of all
> this, except that ZFS has a problem with this
> hardware/drivers that UFS and other traditional file
> systems don't. Is it a bug in the driver that
> ZFS is inadvertently exposing? A specific feature
> that ZFS assumes the hardware to have, but it
> doesn
Don't know how much this will help, but my results:
Ultra 20 we just got at work:
# uname -a
SunOS unknown 5.10 Generic_118855-15 i86pc i386 i86pc
raw disk
dd if=/dev/dsk/c1d0s6 of=/dev/null bs=128k count=10000  0.00s user 2.16s system
14% cpu 15.131 total
1,280,000k in 15.131 seconds
84768k/sec