I have a system with ZFS root that imports another zpool from a start
method. It uses a separate cache file for this zpool, like this:
# Use the shared cachefile both to locate the pool and to record it on import
if [ -f $CCACHE ]
then
        echo "Importing $CPOOL with cache $CCACHE"
        zpool import -o cachefile=$CCACHE -c $CCACHE $CPOOL
fi
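For context, a minimal sketch of a matching stop method, reusing the same $CPOOL and $CCACHE variables (this counterpart is assumed, not quoted from the original message):

# Export the pool cleanly so another host can later import it from the
# shared cachefile; harmless if the pool is not currently imported here.
if zpool list $CPOOL > /dev/null 2>&1
then
        echo "Exporting $CPOOL"
        zpool export $CPOOL
fi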
Hi all
It seems recent WD drives that aren't "Raid edition" can cause rather a lot of
problems on RAID systems. We have a few machines with LSI controllers
(6801/6081/9201) and we're seeing massive errors occurring. The usual pattern is
a drive failing or even a resilver/scrub starting and then,
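One way to see where those errors are being counted, sketched here as a generic diagnostic rather than anything prescribed in the thread:

# Per-vdev read/write/checksum errors as ZFS sees them
zpool status -v

# Solaris per-device soft/hard/transport error counters
iostat -En | grep Errors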
The drives are attached to a backplane?
Try using 4k sector sizes and see if that improves it - I've seen and
been part of a number of discussions which involved this - and you, I
think, actually.
- Rich
On Mon, Aug 29, 2011 at 5:07 PM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> It seems recent
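A quick way to check what sector size a pool was actually built with, sketched with a placeholder pool name "tank" (ashift=9 means 512-byte sectors, ashift=12 means 4 KiB):

# Print the cached vdev configuration and pull out the ashift values
zdb -C tank | grep ashift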
All drives have 512-byte sector sizes; both the WD FASS (Blacks) and WD EADS (Greens)
use plain old 512-byte sectors.
- Original Message -
> The drives are attached to a backplane?
>
> Try using 4k sector sizes and see if that improves it - I've seen and
> been part of a number of discussions whi
And, yes, they're connected to an LSI SAS expander from Super Micro. Works well
with Seagate and Hitachi, but not with WD
- Original Message -
> The drives are attached to a backplane?
>
> Try using 4k sector sizes and see if that improves it - I've seen and
> been part of a number o
Q?
Do you intend to import this zpool on a different host?
Sent from my iPad
Hung-Sheng Tsao (LaoTsao), Ph.D.
On Aug 29, 2011, at 14:13, Gary Mills wrote:
> I have a system with ZFS root that imports another zpool from a start
> method. It uses a separate cache file for this zpool, like this:
>
Hi Gary,
We use this method to implement NexentaStor HA-Cluster and, IIRC,
Solaris Cluster uses shared cachefiles, too. More below...
On Aug 29, 2011, at 11:13 AM, Gary Mills wrote:
> I have a system with ZFS root that imports another zpool from a start
> method. It uses a separate cache file f
On Mon, Aug 29, 2011 at 07:56:16PM -0400, LaoTsao wrote:
> Q?
> Do you intend to import this zpool on a different host?
Yes, it can be imported on another server. That part works when it
has been exported cleanly first. I was concerned about a possible
import failure when the original server lost
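For the unclean case, the usual fallback is a forced import on the surviving node; a sketch using the standard zpool import -f override and the same variables as above, not quoted from Gary's script:

# The pool still carries the dead host's hostid, so the import must be
# forced past the "pool may be in use by another system" check.
zpool import -f -o cachefile=$CCACHE -c $CCACHE $CPOOL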
On Mon, Aug 29, 2011 at 05:24:18PM -0700, Richard Elling wrote:
> We use this method to implement NexentaStor HA-Cluster and, IIRC,
> Solaris Cluster uses shared cachefiles, too. More below...
Mine's a cluster too, with quite a simple design.
> On Aug 29, 2011, at 11:13 AM, Gary Mills wrote:
> >
> From: Richard Elling [mailto:richard.ell...@gmail.com]
>
>> What do you expect to happen if you're in progress doing a zfs send, and
>> then simultaneously do a zfs destroy of the snapshot you're sending?
>
> It depends on the release. For modern implementations, a hold is placed on
> the snaps
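Those holds can also be placed and inspected by hand; a brief sketch with placeholder dataset and tag names:

# While a hold exists, 'zfs destroy' of the snapshot fails with EBUSY
# (dataset is busy), so an in-progress send is protected.
zfs hold sending tank/fs@snap
zfs holds tank/fs@snap
zfs release sending tank/fs@snap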
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Daniel Carosone
>
> On Sat, Aug 27, 2011 at 08:44:13AM -0700, Richard Elling wrote:
> > I'm getting a bit tired of people designing for fast resilvering.
>
> It is a design consideration, rega
Hi all. Sorry if I am asking a FAQ, but I haven't found a really
authoritative answer to this. Most references are old, incomplete, or
of the "I have heard of" kind.
I am running Solaris 10 Update 9, and my pool is v22.
I recently got two 40 GB SSDs I plan
Are you truly new to ZFS? Or do you work for NetApp or EMC or somebody else
that is curious?
- Mike
On Aug 29, 2011, at 9:15 PM, Jesus Cea wrote:
> Hi all. Sorry if I am asking a FAQ, but I haven't found a really
> authoritative answer to t