jahammonds prost wrote:
From: Brad Campbell [EMAIL PROTECTED]
I've got 2 boxes. One has 14 drives and a 480W PSU and the other has 15 drives and a 600W PSU. It's
not rocket science.
Where did you find reasonably priced cases to hold so many drives? Each of my
home servers tops out at 8 data d
I will soon be adding another same-sized drive to an existing 3-drive
RAID 5 array.
The machine is running Fedora Core 6 with kernel 2.6.20-1.2952.fc6 and
mdadm 2.5.4, both of which are the latest available Fedora packages.
Is anyone aware of any obvious bugs in either of these that will
jeo
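For reference, the usual grow sequence for this kind of setup looks roughly like
the following (device names are placeholders, not the poster's actual layout, and
a backup before reshaping is strongly advised):
# mdadm /dev/md0 --add /dev/hdg1            # new disk goes in as a spare first
# mdadm --grow /dev/md0 --raid-devices=4    # reshape the 3-disk raid5 onto 4 disks
# cat /proc/mdstat                          # watch the reshape progress
Once the reshape completes, the filesystem still has to be grown separately
(resize2fs, xfs_growfs, or whatever matches the filesystem in use).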
On Friday June 22, [EMAIL PROTECTED] wrote:
>
> I'm considering simply wiping /dev/hde completely so there's no trace of
> the superblock and then re-adding it correctly, but perhaps there's a less
> drastic way to do it.
>
> Any insights would be appreciated :)
mdadm --zero-superblock /dev/hde
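One practical note on --zero-superblock: the device has to be out of the running
array before the superblock is wiped. A rough sketch of the ordering for the
/dev/hde case, assuming the stale superblock sits on the whole disk and the real
member should be /dev/hde1:
# mdadm /dev/md1 --fail /dev/hde --remove /dev/hde   # only needed if hde is still attached
# mdadm --zero-superblock /dev/hde                   # wipe the stale whole-disk superblock
# mdadm /dev/md1 --add /dev/hde1                     # add the partition that was intended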
On Fri, 22 Jun 2007, Bill Davidsen wrote:
By delaying parity computation until the first write to a stripe, only the
growth of a filesystem is slowed, and all data are protected without waiting
for the lengthy check. The rebuild speed can be set very low, because
on-demand rebuild will do most
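The rebuild throttle being referred to here is presumably the md speed-limit
pair; a sketch with arbitrary values, in KB/sec per device:
# echo 1000  > /proc/sys/dev/raid/speed_limit_min    # floor for resync while other I/O is running
# echo 20000 > /proc/sys/dev/raid/speed_limit_max    # ceiling, so resync never swamps normal I/O
The same pair is also reachable as the sysctls dev.raid.speed_limit_min and
dev.raid.speed_limit_max.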
Bill Davidsen wrote:
David Greaves wrote:
[EMAIL PROTECTED] wrote:
On Fri, 22 Jun 2007, David Greaves wrote:
If you end up 'fiddling' in md because someone specified
--assume-clean on a raid5 [in this case just to save a few minutes
*testing time* on a system with a heavily choked bus!] then th
David Greaves wrote:
[EMAIL PROTECTED] wrote:
On Fri, 22 Jun 2007, David Greaves wrote:
That's not a bad thing - until you look at the complexity it brings
- and then consider the impact and exceptions when you do, eg
hardware acceleration? md information fed up to the fs layer for
xfs? simp
From: Brad Campbell [EMAIL PROTECTED]
> I've got 2 boxes. One has 14 drives and a 480W PSU and the other has 15
> drives and a 600W PSU. It's
> not rocket science.
Where did you find reasonably priced cases to hold so many drives? Each of my
home servers tops out at 8 data drives each - plus a
I have found that a 16MB stripe_cache_size results in optimal performance after
testing many, many values :)
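For anyone wanting to reproduce this: stripe_cache_size is a per-array sysfs knob
counted in cache entries (one page per member disk each), not bytes, so a figure
like "16MB" depends on how you convert. A sketch, with md0 standing in for
whatever the array is actually called:
# cat /sys/block/md0/md/stripe_cache_size            # default is 256
# echo 4096 > /sys/block/md0/md/stripe_cache_size    # costs roughly 4096 * 4KiB * nr_disks of RAM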
On Fri, 22 Jun 2007, Raz wrote:
On 6/22/07, Jon Nelson <[EMAIL PROTECTED]> wrote:
On Thu, 21 Jun 2007, Raz wrote:
> What is your raid configuration ?
> Please note that the stripe_cache_siz
On 6/22/07, Jon Nelson <[EMAIL PROTECTED]> wrote:
On Thu, 21 Jun 2007, Raz wrote:
> What is your raid configuration ?
> Please note that the stripe_cache_size is acting as a bottleneck in some
> cases.
Well, it's 3x SATA drives in raid5. 320G drives each, and I'm using a
314G partition from ea
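One rough way to tell whether the stripe cache really is the bottleneck is to
watch the active-entry count while the write test runs; if it stays pinned at
stripe_cache_size, raising it is likely to help (again, md0 is a placeholder):
# watch -n1 cat /sys/block/md0/md/stripe_cache_active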
John Hendrikx wrote:
I'm not sure why this keeps going wrong, but I do know I made a mistake
when initially reconstructing the array. What I did was the following:
# mdadm /dev/md1 --add /dev/hde
Realizing that I didn't want to add the complete drive (/dev/hde) but only
one of its partitions (
Hi, I currently have a little problem where one of my drives is kicked from
the raid array on every reboot. dmesg claims the following:
md: md1 stopped.
md: bind
md: bind
md: could not open unknown-block(33,1).
md: md_import_device returned -6
md: bind
md: bind
md: bind
md: kicking non-fresh hde fr
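A hedged guess at diagnosing this, given the whole-disk --add mentioned above:
both /dev/hde and /dev/hde1 may be carrying superblocks, and the boot-time scan
is picking up the stale one, which then looks non-fresh. Something like the
following should show which is which:
# mdadm --examine /dev/hde      # stale superblock left over from adding the whole disk?
# mdadm --examine /dev/hde1     # the superblock the array should actually be using
# mdadm --detail /dev/md1       # compare event counters and member state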
[EMAIL PROTECTED] wrote:
On Fri, 22 Jun 2007, David Greaves wrote:
That's not a bad thing - until you look at the complexity it brings -
and then consider the impact and exceptions when you do, eg hardware
acceleration? md information fed up to the fs layer for xfs? simple
long term maintenan
On Fri, 22 Jun 2007, David Greaves wrote:
That's not a bad thing - until you look at the complexity it brings - and
then consider the impact and exceptions when you do, eg hardware
acceleration? md information fed up to the fs layer for xfs? simple long term
maintenance?
Often these problems
Neil Brown wrote:
On Thursday June 21, [EMAIL PROTECTED] wrote:
I didn't get a comment on my suggestion for a quick and dirty fix for
--assume-clean issues...
Bill Davidsen wrote:
How about a simple solution which would get an array on line and still
be safe? All it would take is a flag which
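For context, the knobs that already exist in this area: an array can be created
with --assume-clean to skip the initial resync, and parity can be reconciled
later through sysfs. A sketch with placeholder device names, not necessarily the
flag being proposed here:
# mdadm --create /dev/md0 --level=5 --raid-devices=4 --assume-clean /dev/sd[abcd]1
# echo repair > /sys/block/md0/md/sync_action    # recompute parity in the background later
# cat /sys/block/md0/md/mismatch_cnt             # sectors found inconsistent by the last check/repair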