On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:

> On 2017-11-13 09:56, Gene Heskett wrote:
> > On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
> >> On 2017-11-11 01:49, Jon LaBadie wrote:
> >>> Just a thought.  My amanda server has seven hard drives
> >>> dedicated to saving amanda data.  Only 2 are typically
> >>> used (holding and one vtape drive) during an amdump run.
> >>> Even then, the usage is only for about 3 hours.
> >>>
> >>> So there is a lot of electricity and disk drive wear for
> >>> inactive drives.
> >>>
> >>> Can today's drives be unmounted and powered down, then
> >>> powered up and mounted again when needed?
> >>>
> >>> I'm not talking about system hibernation, the system
> >>> and its other drives still need to be active.
> >>>
> >>> Back when 300GB was a big drive I had 2 of them in
> >>> external USB housings.  They shut themselves down
> >>> on inactivity.  When later accessed, there would
> >>> be about 5-10 seconds delay while the drive spun
> >>> up and things proceeded normally.
> >>>
> >>> That would be a fine arrangement now if it could
> >>> be mimicked.
> >>
> >> Aside from what Stefan mentioned (using hdparm to set the standby
> >> timeout; check the hdparm man page, as the -S values are not
> >> exactly intuitive), you may consider looking into auto-mounting
> >> each of the drives, as that can help eliminate things that would
> >> keep the drives on-line (or make it more obvious that something is
> >> still using them).
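For reference, the hdparm -S encoding Austin alludes to is non-linear: values 1-240 are multiples of 5 seconds, 241-251 are units of 30 minutes, and 0 disables spindown. A sketch (the device name here is an example; substitute your actual drive):

```shell
# Spin the drive down after 10 minutes of inactivity: 120 * 5 s = 600 s.
# (Values 1-240 = n * 5 seconds; 241-251 = (n - 240) * 30 minutes; 0 = off.)
hdparm -S 120 /dev/sdb

# Or force the drive into standby immediately:
hdparm -y /dev/sdb

# Check the current power state without waking the drive:
hdparm -C /dev/sdb
```

Note the -S setting is held by the drive firmware, not the OS, so it survives until changed (though some drives forget it across power cycles).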
> >
> > I've investigated that, and I have amanda wrapped up in a script
> > that could do that, but ran into a showstopper I've long since
> > forgotten about.  All this was back when I was writing that
> > wrapper, years ago now.  One of the showstoppers, as I recall, was
> > the fact that only root can mount and unmount a drive, and my
> > script runs as amanda.
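For what it's worth, the root-only-mount problem can also be worked around the other way: a sudoers rule granting the amanda user just the specific mount/umount commands. A sketch, where the device, mount point, and file name are examples for illustration:

```shell
# /etc/sudoers.d/amanda-mount  (always edit with: visudo -f /etc/sudoers.d/amanda-mount)
# Allow the amanda user to mount/unmount one specific volume, no password.
# Restricting the full command line avoids handing out general mount rights.
amanda ALL=(root) NOPASSWD: /bin/mount /dev/sdb1 /amanda/vtapes, \
                            /bin/umount /amanda/vtapes
```

The wrapper, still running as amanda, would then call `sudo mount /dev/sdb1 /amanda/vtapes` before the amdump run and `sudo umount /amanda/vtapes` afterwards.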
>
> While such a wrapper might work if you use sudo inside it (you can
> configure sudo to let root run things as the amanda user without
> needing a password, then run the wrapper itself as root), what I was
> trying to refer to in a system-agnostic manner (since the exact
> mechanism differs between UNIX derivatives) was on-demand
> auto-mounting, as provided by autofs on Linux or the auto-mount
> daemon (amd) on BSD.  With on-demand auto-mounting you don't need a
> wrapper at all: the access attempt triggers the mount, and the mount
> times out after some period of inactivity and is unmounted again.
> It's mostly used for network resources (possibly with special
> auto-lookup mechanisms), since certain protocols (NFS in particular)
> tend to have issues if the server goes down while a share is mounted
> remotely, even if nothing is happening on that share, but it works
> just as well for auto-mounting local fixed or removable volumes that
> aren't needed all the time (I use it for a handful of things on my
> personal systems to minimize idle resource usage).
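As a sketch of that autofs approach on Linux (the mount point, map file name, and volume labels below are examples; distro packaging and config paths vary):

```shell
# /etc/auto.master -- point an indirect map at /amanda and unmount
# drives after 5 minutes of inactivity:
#
#   /amanda  /etc/auto.amanda  --timeout=300
#
# /etc/auto.amanda -- one entry per drive, keyed by mount name:
#
#   vtape1   -fstype=ext4  :/dev/disk/by-label/vtape1
#   holding  -fstype=ext4  :/dev/disk/by-label/holding
#
# Then restart the automounter; simply accessing /amanda/vtape1
# triggers the mount, and it is unmounted again after the timeout:
systemctl restart autofs
ls /amanda/vtape1
```

Combined with an hdparm standby timeout, the drive can then spin down once autofs has unmounted it.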

Sounds good, perhaps. I am currently up to my eyeballs in an unrelated 
problem, and I won't get to this again until that project is completed 
and I have brought the 2TB drive in and configured it for amanda's 
usage. That will tend to enforce my one-thing-at-a-time-but-do-it-right 
bent. :)  What I have is working, for a loose definition of working...

But if I allow the 2TB to be unmounted and self-powered down once 
daily, what shortening of its life would I be subjected to?  In other 
words, how many start/stop cycles can it survive?

Interesting: I had started a long self-test yesterday, and the reported 
power-on hours have wrapped in the report, apparently at 65536 hours 
(a 16-bit counter). Somebody apparently didn't expect a drive to last 
that long? ;-)  The drive?  Healthy as can be.
 
Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>
