Chris Hoogendyk wrote:
>
> After that, I convinced management to pay for mirrored drives.
>
How much was the overtime bill? ;)
On 3/23/11 12:51 PM, Alan Brown wrote:
> Mehma Sarja wrote:
>> Since drives ONLY fail on Friday afternoons local time, an effective
>> remedy is to check for SMART messages before the weekend. Foolish as
>> that is, I am surprised how many times it has held true for me.
For similar reasons we only perform work on critical infrastructure
Mehma Sarja wrote:
> Since drives ONLY fail on Friday afternoons local time, an effective
> remedy is to check for SMART messages before the weekend. Foolish as
> that is, I am surprised how many times it has held true for me.
For similar reasons we only perform work on critical infrastructure
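
A pre-weekend sweep like that is easy to script. A minimal sketch, assuming
smartmontools and Python 3 are installed; the device paths are placeholders,
and the string match assumes smartctl's usual health-report wording (ATA
drives print "PASSED", SCSI drives print "SMART Health Status: OK"):

  #!/usr/bin/env python3
  # Friday-afternoon SMART sweep: run "smartctl -H" on each drive and
  # print anything that does not report a passing overall health status.
  # Device paths are placeholders; adjust for the local machine.
  import subprocess

  DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

  for dev in DRIVES:
      out = subprocess.run(["smartctl", "-H", dev],
                           capture_output=True, text=True).stdout
      # ATA drives report "...self-assessment test result: PASSED";
      # SCSI drives report "SMART Health Status: OK" instead.
      if "PASSED" not in out and "OK" not in out:
          print("CHECK BEFORE THE WEEKEND: %s\n%s" % (dev, out))

Run it from a Friday-afternoon cron job so the report lands before everyone
leaves for the weekend.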
On 3/23/11 7:28 AM, Alan Brown wrote:
> Phil Stracchino wrote:
>
>> Well, a good start is to use something like SMART monitoring set up to
>> alert you when any drive enters what it considers a pre-fail state.
>> (Which can be simple age, increasing numbers of hard errors, increasing
>> variation in spindle speed, increasing slow starts, etc, etc...)
John Drescher wrote:
>> I haven't had as many die as you have (Do your users kick their computers
>> around the room?) but my experience matches yours when looking at changes in
>> the raw data. The problem is I haven't had enough die to put 100% certainty
>> on it so I tend to rely on smartd's output.
> I haven't had as many die as you have (Do your users kick their computers
> around the room?) but my experience matches yours when looking at changes in
> the raw data. The problem is I haven't had enough die to put 100% certainty
> on it so I tend to rely on smartd's output.
>
I have between 10
John Drescher wrote:
> I would say this is true for smart PASS / FAIL but if you look at the
> raw SMART data you can use this to predict failure before it totally
> fails.
I agree, but they don't do that.
> At least I have been able to predict this for the 10 to 20
> drives that have died here
>> Well, a good start is to use something like SMART monitoring set up to
>> alert you when any drive enters what it considers a pre-fail state.
>> (Which can be simple age, increasing numbers of hard errors, increasing
>> variation in spindle speed, increasing slow starts, etc, etc...)
>
> FWIW: N
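
For the raw-data approach, the trick is comparing today's raw values against
the last run rather than against the drive's own thresholds. A rough sketch,
assuming smartmontools and Python 3; the device, state-file path, and
attribute list are placeholders, and the parsing assumes the plain-integer
raw values these attributes normally have:

  #!/usr/bin/env python3
  # Flag growth in raw SMART attributes between runs. Rising raw values
  # for these attributes are the usual early-warning signs.
  import json, os, subprocess

  DEVICE = "/dev/sda"                     # placeholder
  STATE = "/var/tmp/smart_raw.json"       # placeholder
  WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
           "Offline_Uncorrectable")

  out = subprocess.run(["smartctl", "-A", DEVICE],
                       capture_output=True, text=True).stdout
  now = {}
  for line in out.splitlines():
      cols = line.split()
      # Attribute table rows have 10 columns; column 10 is RAW_VALUE.
      if len(cols) >= 10 and cols[1] in WATCH:
          now[cols[1]] = int(cols[9])

  prev = {}
  if os.path.exists(STATE):
      with open(STATE) as f:
          prev = json.load(f)

  for name, raw in sorted(now.items()):
      if raw > prev.get(name, 0):
          print("%s: %s grew from %d to %d"
                % (DEVICE, name, prev.get(name, 0), raw))

  with open(STATE, "w") as f:
      json.dump(now, f)

Any nonzero jump in Reallocated_Sector_Ct or Current_Pending_Sector is worth
a ticket even while the drive still reports PASSED.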
Phil Stracchino wrote:
> Well, a good start is to use something like SMART monitoring set up to
> alert you when any drive enters what it considers a pre-fail state.
> (Which can be simple age, increasing numbers of hard errors, increasing
> variation in spindle speed, increasing slow starts, etc, etc...)
Mehma Sarja wrote:
> There is one more thing to think about and that is cumulative aging.
> Starting with all new disks is a false sense of security because as they
> age, and if they are in any sort of RAID/performance configuration, they
> will age and wear evenly.
Expanding on that:
It is
On 03/18/11 21:00, Mehma Sarja wrote:
> I can only think of staggering drive age and maintenance. Here's hoping
> that someone on the list can come up with more creative solutions/practices.
Try to avoid buying a large number of drives from the same batch. This
is often easy to accomplish
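
For drives already racked, a crude way to spot same-batch clustering is to
group them by model, firmware revision, and serial-number prefix. A sketch,
assuming smartmontools and Python 3; the device paths are placeholders, and
a shared serial prefix is only a rough proxy for a shared manufacturing
batch:

  #!/usr/bin/env python3
  # Group drives by model, firmware, and the first few serial characters
  # to spot likely same-batch purchases.
  import subprocess
  from collections import defaultdict

  DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholders

  groups = defaultdict(list)
  for dev in DRIVES:
      out = subprocess.run(["smartctl", "-i", dev],
                           capture_output=True, text=True).stdout
      info = {}
      for line in out.splitlines():
          key, sep, val = line.partition(":")
          if sep:
              info[key.strip()] = val.strip()
      groups[(info.get("Device Model", "?"),
              info.get("Firmware Version", "?"),
              info.get("Serial Number", "????")[:4])].append(dev)

  for key, devs in groups.items():
      if len(devs) > 1:
          print("Possible same batch %s: %s" % (key, ", ".join(devs)))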
On 3/18/11 4:41 PM, Marcello Romani wrote:
> On 18/03/2011 19:01, Mehma Sarja wrote:
>> On 3/17/11 4:57 PM, Phil Stracchino wrote:
>>> On 03/17/11 18:46, Marcello Romani wrote:
On 16/03/2011 18:38, Phil Stracchino wrote:
> On 03/16/11 13:08, Mike Hobbs wrote:
>> Hello, I'm currently testing bacula v5.0.3 and so far so good.
On 03/18/11 19:41, Marcello Romani wrote:
> On 18/03/2011 19:01, Mehma Sarja wrote:
>> There is one more thing to think about and that is cumulative aging.
>> Starting with all new disks is a false sense of security because as they
>> age, and if they are in any sort of RAID/performance configuration, they
>> will age and wear evenly.
On 18/03/2011 19:01, Mehma Sarja wrote:
> On 3/17/11 4:57 PM, Phil Stracchino wrote:
>> On 03/17/11 18:46, Marcello Romani wrote:
>>> On 16/03/2011 18:38, Phil Stracchino wrote:
On 03/16/11 13:08, Mike Hobbs wrote:
> Hello, I'm currently testing bacula v5.0.3 and so far so good.
On 3/17/11 4:57 PM, Phil Stracchino wrote:
> On 03/17/11 18:46, Marcello Romani wrote:
>> On 16/03/2011 18:38, Phil Stracchino wrote:
>>> On 03/16/11 13:08, Mike Hobbs wrote:
Hello, I'm currently testing bacula v5.0.3 and so far so good. One
of my issues though, I have a 16 bay Promise Technologies VessJBOD. How
do I get bacula to use all the disks for writing volumes to?
Not really, RAID6+0 only requires 8 drives minimum: you can create two
RAID6's of 4 drives each and stripe them together. This has a benefit,
as layering a stripe over parity RAID increases random write IOPS
performance. But the main issue is array integrity, mainly with
large capacity drives
Phil Stracchino wrote:
> With RAID6, you can survive any one or two disk failures, in degraded
> mode. You'll have a larger working set than RAID10, but performance
> will be slower because of the overhead of parity calculations. A third
> failure will bring the array down and you will lose the
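
Rough numbers make the tradeoff concrete. A back-of-the-envelope comparison
in Python for a hypothetical 16-bay shelf of equal-size drives; the
write-penalty figures are the textbook I/O counts per small random write,
not measured throughput:

  #!/usr/bin/env python3
  # Usable capacity and fault tolerance for 16 drives of S TB each.
  N, S = 16, 2.0  # drive count and per-drive size in TB (placeholders)

  layouts = [
      # name, usable TB, survivable failures, write penalty (I/Os)
      ("RAID10, 8 mirrored pairs",        (N / 2) * S, "1 worst, 8 best",   2),
      ("RAID6, one 16-drive set",         (N - 2) * S, "any 2",             6),
      ("RAID6+0, two 8-drive RAID6 sets", (N - 4) * S, "2 per set, 4 best", 6),
  ]

  for name, usable, survives, penalty in layouts:
      print("%-34s %4.0f TB usable, survives %-18s write penalty %d"
            % (name, usable, survives, penalty))

RAID10 gives the smallest working set but the cheapest writes; RAID6 the
largest working set but six I/Os per random write; RAID6+0 sits in between
on capacity while spreading the parity work across two sets.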
On 18/03/2011 00:57, Phil Stracchino wrote:
> On 03/17/11 18:46, Marcello Romani wrote:
>> On 16/03/2011 18:38, Phil Stracchino wrote:
>>> On 03/16/11 13:08, Mike Hobbs wrote:
Hello, I'm currently testing bacula v5.0.3 and so far so good. One
of my issues though, I have a 16 bay Promise Technologies VessJBOD. How
do I get bacula to use all the disks for writing volumes to?
On 03/17/11 18:46, Marcello Romani wrote:
> On 16/03/2011 18:38, Phil Stracchino wrote:
>> On 03/16/11 13:08, Mike Hobbs wrote:
>>> Hello, I'm currently testing bacula v5.0.3 and so far so good. One
>>> of my issues though, I have a 16 bay Promise Technologies VessJBOD. How
>>> do I get bacula to use all the disks for writing volumes to?
On 16/03/2011 18:38, Phil Stracchino wrote:
> On 03/16/11 13:08, Mike Hobbs wrote:
>> Hello, I'm currently testing bacula v5.0.3 and so far so good. One
>> of my issues though, I have a 16 bay Promise Technologies VessJBOD. How
>> do I get bacula to use all the disks for writing volumes to?
On 16/03/2011 18:08, Mike Hobbs wrote:
> Hello, I'm currently testing bacula v5.0.3 and so far so good. One
> of my issues though, I have a 16 bay Promise Technologies VessJBOD. How
> do I get bacula to use all the disks for writing volumes to?
>
> I guess the way I envision it working would be, 50gb volumes would be
> used and when disk1
On Wed, Mar 16, 2011 at 1:29 PM, Mike Hobbs wrote:
> On 03/16/2011 01:12 PM, Robison, Dave wrote:
>> Just curious, why not put that jbod into a RAID array? I believe you'd
>> get far better performance with the additional spools and you'd get
>> redundancy as well.
>>
>> Personally I'd set that up as a RAIDZ using ZFS on FreeBSD.
On 03/16/11 13:08, Mike Hobbs wrote:
> Hello, I'm currently testing bacula v5.0.3 and so far so good. One
> of my issues though, I have a 16 bay Promise Technologies VessJBOD. How
> do I get bacula to use all the disks for writing volumes to?
>
> I guess the way I envision it working would be, 50gb volumes would be
> used and when disk1
On 03/16/2011 06:29 PM, Mike Hobbs wrote:
> On 03/16/2011 01:12 PM, Robison, Dave wrote:
>> Just curious, why not put that jbod into a RAID array? I believe you'd
>> get far better performance with the additional spools and you'd get
>> redundancy as well.
>>
>> Personally I'd set that up as a RAIDZ using ZFS on FreeBSD.
On 03/16/2011 01:12 PM, Robison, Dave wrote:
> Just curious, why not put that jbod into a RAID array? I believe you'd
> get far better performance with the additional spools and you'd get
> redundancy as well.
>
> Personally I'd set that up as a RAIDZ using ZFS on FreeBSD.
>
>
I believe the reas
Hello, I'm currently testing bacula v5.0.3 and so far so good. One
of my issues though, I have a 16 bay Promise Technologies VessJBOD. How
do I get bacula to use all the disks for writing volumes to?
I guess the way I envision it working would be, 50gb volumes would be
used and when disk1
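
If the sixteen disks are to stay separate rather than being combined into
one array or pool, one workable pattern is to keep the storage device
pointed at a fixed path and swap where that path leads. A hypothetical
helper sketch, not anything built into Bacula: it just picks the mount
point with the most free space, for use from something like a RunBeforeJob
script; the mount points and the 50 GB floor are placeholders:

  #!/usr/bin/env python3
  # Hypothetical helper: print the JBOD mount point with the most free
  # space, e.g. to re-point a symlink that volumes are written through.
  import os, shutil

  MOUNTS = ["/mnt/disk%02d" % i for i in range(1, 17)]  # 16-bay JBOD
  MIN_FREE = 50 * 2**30  # room for one more 50gb volume

  def best_disk():
      usable = [(shutil.disk_usage(m).free, m) for m in MOUNTS
                if os.path.ismount(m)]
      if not usable:
          raise SystemExit("no JBOD disks mounted")
      free, mount = max(usable)
      if free < MIN_FREE:
          raise SystemExit("every disk is too full for another volume")
      return mount

  if __name__ == "__main__":
      print(best_disk())

That said, most setups either pool the disks (RAID or ZFS, as suggested
above) or use a virtual-changer arrangement so the director sees the bays
as slots.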