> I know about those SoHo boxes and the whatnot, they
> keep spinning up and down all the time and the worst
> thing is that you cannot disable this sleep/powersave
> feature on most of these devices.

That judgment is in the eye of the beholder. We have a couple of Thecus NAS 
boxes and some LVM RAIDs on Ubuntu in the company which work like a charm with 
WD Green drives spinning down on inactivity. A hot spare is typically inactive 
most of the time, and it does spin down until it is required. That's because there 
are people in the Linux world who focus on implementing and maintaining 
power save options. I think that's great.
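
For reference, setting the standby timeout on such a Linux box looks roughly 
like this - a minimal sketch assuming the WD Green shows up as /dev/sdb 
(adjust device and timeout to taste):

   # spin down after ~10 minutes of inactivity (-S 120 = 120 * 5 seconds)
   hdparm -S 120 /dev/sdb

   # query the current power state (active/idle vs. standby)
   hdparm -C /dev/sdb

Whether the setting survives a power cycle depends on the drive firmware, so 
it is usually re-applied from a boot script.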

> I believe I have seen a "sleep mode" support when I
> skimmed through the feature lists of the LSI
> controllers but I'm not sure right now.

Neither am I. LSI's feature list is centred around SAS support on their own 
drivers. What exactly will work after you have added SAS expanders (which I 
haven't, but might do in the future), attached SATA drives (which I have, but 
might change some of them to SAS), switched to the native kernel driver (mpt) 
instead of LSI's proprietary one, and changed the firmware from LSI's default 
IR to IT mode (which you will typically do for MPxIO) is hard to predict, given 
the possible permutations.
Bottom line: you will have to try.
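
A starting point for "trying" is to check which driver is actually bound to 
the HBA on a given box - a quick sketch, where the grep patterns are just 
assumptions about your hardware:

   # show which driver each device node is bound to
   prtconf -D | grep -i lsi

   # check whether the native mpt driver is loaded
   modinfo | grep -w mpt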

> My idea is rather that the "hot spares" (or perhaps
> we should say "cold spares" then) are off all the
> time until they are needed or when a user
> initiated/scheduled system integrity check is being
> conducted. They could go up for a "test spin" at each
> occasion a scrub is initiated which is not too
> frequently.

I don't know of anything that works the way you figure, including ZFS. That is, 
a hot spare is completely inactive during a scrub, but each 'zpool status' 
command will return the state of the spare - which works for a spun-down drive 
as well, by the way. Against that background, the idea of test-spinning a hot 
spare is quite far-fetched. Scrub isn't a generic "system integrity" test 
either; it has specific functionality with respect to a zpool. Putting things 
in the usual categories, your spare requirements are closer to having a cold 
spare, I would say. Maybe you can find a method starting from there.
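
To illustrate with commands (assuming a pool named 'tank' - adjust to yours):

   # reports the pool's health; the 'spares' section lists each hot spare
   # with a state such as AVAIL or INUSE, even if the spare is spun down
   zpool status tank

   # a (scheduled) scrub walks the pool's data vdevs; the idle spare is not touched
   zpool scrub tank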

> 
>  I figured
> that real enterprise applications rather use Solaris
> together with carefully selected hardware whereas
> OpenSolaris is more aimed at lower-budget/mainstream
> applications as a way of gaining a wider acceptance
> for OpenSolaris and ZFS 

That's a selective perspective, I would say. For instance, Fishworks is derived 
directly from OpenSolaris, while the feature set of Solaris is again quite 
different. The salient point for the enterprise solutions is that you pay for 
the services - among other things, that expensive engineers have figured out 
which components will provide the functions you have requested, and that there 
will be somebody you can bark at if things don't work as advertised. With the 
white-box, open approach, this is up to you.

> It has been discussed in many places that
> file systems do not change as frequently as the
> operating systems which is considered to be an issue
> when it comes to the implementation of newer and
> better technology.

FWIW, ZFS is evolving quite rapidly.

> > ... you should be able to move a disk to another
> location (channel, slot)
> > and it would still be a part of the same pool and
> VDEV.
> >
> > This works very well, given your controller
> properly supports it. ...
> 
> I hope you are absolutely sure about this. The main
> reason I asked this question comes from the thread
> "Intel SASUC8I worth every penny" in this forum
> section where the thread starter warned that one
> should use "zpool export" "zpool import" when
> migrating a tank from one (set of) controller(s) to
> another.

It's not obvious that my assertion about mpt functionality will apply when you 
want to move from a controller I figure is an AMD onboard (?) to an LSI SAS mpt. 
Instead, you will have to investigate which driver module your onboard 
controller uses, and what the properties of that driver are. Additionally, you 
will have to cross-check that the two drivers behave compatibly, i.e. that the 
receiving controller doesn't do anything based on implicit assumptions the 
sending controller didn't provide.
To give you an example of what will happen when you have a drive on a 
controller driven by ahci(7D): the difference will be, among other things, that 
before you pull the drive you will have to unconfigure it in cfgadm as an 
additional step. If you don't observe that, you can blow things up.
Moreover, according to the specs I know, ahci(7D) will not support power save.
Bottom line: you will have to find out.
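
For the ahci case, the additional step looks roughly like this - the attachment 
point name here is just an example, check 'cfgadm -al' on your own system first:

   # list attachment points and their occupant state (configured/unconfigured)
   cfgadm -al

   # unconfigure the disk at attachment point sata0/1 before pulling it
   cfgadm -c unconfigure sata0/1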

What the "warning" is concerned: migrating a whole pool is not the same thing 
as swapping slots within a pool. I.e., if you pull more than the allowed number 
(failover resilience) from your pool at the same time while the pool is hot, 
you will simply destroy the pool.
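
For completeness, the export/import sequence referred to in that thread is 
roughly this (again assuming a pool named 'tank'):

   # on the old configuration, with no I/O going to the pool
   zpool export tank

   # after recabling to the new controller(s), see which pools are visible
   zpool import

   # import by name (or by the numeric pool id if names are ambiguous)
   zpool import tank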

Regards,

Tonmaus