I ran into the same thing where I had to manually delete directories.
Once you export the pool you can plug in the drives anywhere else. Reimport the
pool and the file systems come right up — as long as the drives can be seen by
the system.
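For reference, the basic workflow is just a couple of commands (the pool name "tank" below is only an example):

  # on the original machine, before pulling the drives
  zpool export tank

  # on the new machine, once the drives are attached
  zpool import tank

If you don't remember the pool name, a bare "zpool import" scans the attached disks and lists any importable pools it finds.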
I use the RE4's at work on the storage server, but at home I use the consumer
1TB green drives.
My system [2009.06] uses an Intel Atom 330 based motherboard, 4 GB of non-ECC
RAM, and a Supermicro AOC-SAT2-MV8 controller with five 1TB Western Digital
[WD10EARS] drives in a raidz1.
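For anyone wondering, a pool like that gets created in one shot, something along these lines (the device names here are made up for illustration):

  zpool create tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

That's one raidz1 vdev across the five drives, so the pool survives any single disk failure.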
There are many rea
I have moved drives between controllers, rearranged drives into other slots, and
moved disk sets between different machines, and I've never had an issue with a
zpool not importing. Are you sure you didn't remove the drives while the system
was powered up?
Try this:
zpool import -D
If zpool lists
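If the pool does show up in that listing, it can usually be pulled back in by name or by its numeric ID, e.g. (assuming a pool called tank):

  zpool import -D tank

Adding -f forces the import if the pool still looks like it's in use by another system.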
I had a problem with a UFS file system on a hardware RAID controller. It was
spitting out errors like crazy, so I rsynced it to a ZFS volume on the same
machine. There were a lot of read errors during the transfer and the RAID
controller alarm was going off constantly. Rsync was copying the cor
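The copy itself was nothing fancy, roughly this (the paths here are just placeholders):

  zfs create tank/recovered
  rsync -avH /mnt/ufsvol/ /tank/recovered/

The trailing slash on the source matters; without it rsync creates an extra ufsvol directory under the destination.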
> are you using comstar or the old iSCSI target (iscsitadm) to provision
> targets?
I'm using zfs set shareiscsi=on to configure the logical units and COMSTAR for
the rest on the OpenSolaris side. The targets are initiated on Solaris 10 with
iscsiadm.
This thing was humming right along and all
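For context, the setup on each side is only a handful of commands (the volume name, size, and address below are examples, not the real ones):

  # OpenSolaris target side
  zfs create -V 100G tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol

  # Solaris 10 initiator side
  iscsiadm add discovery-address 192.168.1.10
  iscsiadm modify discovery --sendtargets enable
  iscsiadm list target -v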
I have a storage server with snv_134 installed. This has four zfs file systems
shared with iscsi that are mounted as zfs volumes on a Sun v480.
Everything has been working great for about a month, and all of a sudden the
v480 has timeout errors when trying to connect to the iscsi volumes on the
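When that happens, the first things I check are along these lines (generic commands, nothing specific to this box):

  # does the initiator still see the targets and their LUNs?
  iscsiadm list target -v

  # any faulted services on either end?
  svcs -xv

  # anything unhappy with the pool on the target?
  zpool status -x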
This didn't occur on a production server, but I thought I'd post this anyway
because it might be interesting.
I'm currently testing a ZFS NAS machine consisting of a Dell R710 with two Dell
5/E SAS HBAs. Right now I'm in the middle of torture testing the system,
simulating drive failures, expor
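Most of the failure simulation is just the obvious zpool commands (the disk names below are placeholders):

  # take a disk away and see how the pool reacts
  zpool offline tank c3t5d0
  zpool status -x

  # bring it back and watch it resilver
  zpool online tank c3t5d0
  zpool status tank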
I was able to get Netatalk built on OpenSolaris for my ZFS NAS at home.
Everything is running great so far, and I'm planning on using it on the 96TB
NAS I'm building for my office. It would be nice to have this supported out of
the box, but there are probably licensing issues involved.
This non-raid sas controller is $199 and is based on the LSI SAS 1068.
http://accessories.us.dell.com/sna/products/Networking_Communication/productdetail.aspx?c=us&l=en&s=bsd&cs=04&sku=310-8285&~lt=popup&~ck=TopSellers
What kind of chassis do these drives currently reside in? Does the backplane
Is there a formula to determine the optimal size of a dedicated cache device for
raidz systems to improve speed?
> Suffice to say, 2 top-level raidz2 vdevs of similar size with copies=2
> should offer very nearly the same protection as raidz2+1.
> -- richard
This looks like the way to go. Thanks for your input. It's much appreciated!
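For the record, that layout is just two raidz2 vdevs in one pool with copies=2 set on top, roughly like this (disk names are placeholders):

  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  zfs set copies=2 tank

Setting copies on the top-level file system only affects data written afterwards, so it's worth doing before loading anything.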
I'm putting together a 48-bay NAS for my company [24 drives to start]. My
manager has already ordered 24 2TB WD Caviar Green consumer drives -
should we send these back and order the 2TB WD RE4-GP enterprise drives
instead?
I'm tempted to try these out. First off, they're about $
> You'll have to add a bit of meat to "this"!
>
> What are you resiliency, space and performance
> requirements?
Resiliency is most important, followed by space and then speed. Its primary
function is to host digital assets for ad agencies and backups of other servers
and workstations in the o
I'm in the process of setting up a NAS for my company. It's going to be based
on OpenSolaris and ZFS, running on a Dell R710 with two SAS 5/E HBAs. Each HBA
will be connected to a 24-bay Supermicro JBOD chassis. Each chassis will have
12 drives to start out with, giving us room for expansion as
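Growing the pool later is just a matter of adding more vdevs as drives go into the empty bays, e.g. (hypothetical device names):

  zpool add tank raidz2 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0

Keeping the new vdevs the same width and RAID level as the existing ones keeps the pool balanced.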