I checked this out at the Solaris internals link above, because I am also
interested in the best setup for ZFS.
Assuming 500GB drives ...
It turns out that the most cost-effective option (meaning the least "lost"
drive space due to redundancy) is to ...
1. Set up RAID-Z of up to 8 drives (All mus
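For example, an eight-drive RAID-Z gives up only one drive's worth of capacity
to parity. A minimal sketch of such a pool (device names are illustrative, not
from the original post):

  # create a single 8-disk raidz vdev; capacity lost to parity = 1/8
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
      c0t4d0 c0t5d0 c0t6d0 c0t7d0
  zpool status tank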
OpenSolaris builds are like "development snapshots"...they're not a release and
thus there are no patches.
SXCE is just a binary build from these snapshots... it's there as a
convenience only, and "patches" are applied as in every other development
project: by updating from the source repository.
Henk,
By upgrading do you mean rebooting and installing OpenSolaris from DVD or
network?
That is, no Patch Manager to install a few quick patches and updates followed
by a quick reboot, right?
Bill
OK,
I guess using this ...
set md:mirrored_root_flag=1
for Solaris Volume Manager (SVM) is not supported and could cause problems.
I guess it's back to my first idea ...
With 2 disks, set up three SDRs (State Database Replicas):
Drive 0 = 1 SDR, Drive 1 = 2 SDRs -> if Drive 0 fails, the two surviving
replicas still form a majority and the system comes back auto-magically.
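A minimal sketch of that layout, assuming slice 7 on each disk is set aside
for the replicas (device names and slices are illustrative):

  # first replica on drive 0 (-f is needed to create the initial database)
  metadb -a -f c0t0d0s7
  # two replicas on drive 1, so a majority survives the loss of drive 0
  metadb -a -c 2 c0t1d0s7
  # verify replica placement and status
  metadb -i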
Thanks Jürgen,
I was hit by 6423745 (see below) but the main problem is that /sbin/zpool is
linked to /usr/lib/libdiskmgt.so.1 and since /usr was on my datapool, it
failed systematically.
I don't think it makes sense that an executable in /sbin be linked to
a lib in /usr/lib. The only binar
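As a quick sketch, the dependency can be confirmed with ldd:

  # list the shared libraries /sbin/zpool pulls in
  ldd /sbin/zpool | grep libdiskmgt

Anything resolving under /usr/lib in that output is unavailable while /usr
still lives on an unimported pool.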
Nope. This feature hasn't made it to S10U4. We are anticipating it to be
available in S10U5.
--
Prabahar.
Scott wrote:
> Did the ZFS gzip compression feature (i.e. "zfs set compression=gzip") make
> it into Solaris 10 U4? I was looking forward to being able to use it in a
> production Solaris
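For reference, on a build that does have the feature, enabling it is a single
property set, and explicit gzip levels 1-9 are also accepted (dataset name is
illustrative):

  # default gzip level (6)
  zfs set compression=gzip tank/data
  # or trade CPU for a better ratio with an explicit level
  zfs set compression=gzip-9 tank/data
  # confirm the setting
  zfs get compression tank/data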
> Did the ZFS gzip compression feature (i.e. "zfs set compression=gzip")
> make it into Solaris 10 U4? I was looking forward to being able to use it
> in a production Solaris release without having to compile my OpenSolaris
> build, but it doesn't seem to be there.
No. This feature was introduced
Did the ZFS gzip compression feature (i.e. "zfs set compression=gzip") make it
into Solaris 10 U4? I was looking forward to being able to use it in a
production Solaris release without having to compile my OpenSolaris build, but
it doesn't seem to be there.
Hi,
Our ZFS NFS build server running snv_73 (pool created back before
ZFS integrated into ON) panicked, I guess from ZFS, the first time,
and now panics on every attempted boot as below. Is this
a known issue and, more importantly (2TB of data in the pool), are there
any suggestions on how to recover (othe
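One workaround often suggested for a panic-on-import loop (an assumption here
that the panic happens during the automatic import of the pool at boot, not a
confirmed fix for this report): boot failsafe, move the cache file aside so
the pool is not imported automatically, then retry the import by hand. The
pool name below is hypothetical:

  # from a failsafe boot with the root filesystem mounted on /a
  mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.bad
  # after rebooting, list importable pools, then import manually
  zpool import
  zpool import -f tank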
[EMAIL PROTECTED] wrote:
> Just checking status on the resilver/scrub + snap reset issue-- it is very
> painful for large pools such as exist on thumpers that make heavy use of
> snaps. Is this still on track for u5/pre-u5 or has it changed? Is there a
> different view of these bugs with more in
Robert Lor wrote:
> I'm trying to add filesystems from two different pools to a zone but can't
> seem to find any mention of how to do this in the docs.
>
> I tried this but the second set overwrites the first one.
>
> add dataset
> set name=pool1/fs1
> set name=pool2/fs2
> end
>
> Is this possible?
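The zonecfg grammar takes one resource per add ... end block, so (a sketch of
the usual fix, not taken from the original thread) each dataset gets its own
block; "myzone" is a placeholder zone name:

  zonecfg -z myzone
  add dataset
  set name=pool1/fs1
  end
  add dataset
  set name=pool2/fs2
  end
  verify
  commit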
While the density is only 3 drives per AT slot vs. 3.33 for the 5-drive
Addonics or Supermicro units, the build quality is slightly better than the
Addonics (as good as the Supermicro), and the convenience factor is superb, as
no tray mounting and unmounting is required:
http://www.startech.com/P