Hi Peter,
indeed I was not trying to say that RAID1 is insurance against disks going
bad. It is only the first line of defense against sudden and unpredictable
failure (and has saved us a couple of times). On the contrary, we regularly
inspect /var/log/messages since (on RHELx) this has the mdadm-related
messages. […]
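In practice that check is easy to script. A minimal sketch, assuming a
software (mdadm) array at /dev/md0 and the RHEL log location; adjust the
names to the actual machine:

    # State of all software RAID arrays known to the kernel
    cat /proc/mdstat

    # Recent md/mdadm entries in the system log (RHEL path)
    grep -i -e 'md[0-9]' -e mdadm /var/log/messages | tail -n 20

    # Detailed health report for one array
    mdadm --detail /dev/md0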
On Wed, 2019-11-27 at 14:03, Kay Diederichs wrote:
> Hi Vaheh,
>
> RAID on Linux comes in different flavours and levels; the flavours are
> software RAID (mdadm) and hardware RAID (dedicated RAID controller or
> motherboard), and the levels are RAID0 RAID1 RAID5 RAID6 RAID10 and a few
> others. […]
Dear all,
On 27/11/2019 14:03, Kay Diederichs wrote:
As an example, by default in my lab we have the operating system on mdadm RAID1
which consists of two disks that mirror each other. If one of the disks fails,
typically we only notice this when inspecting the system log files.
This won't h[…]
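One way to avoid relying on log-reading is to let mdadm send mail on failure
events. A minimal sketch, assuming a software RAID and a placeholder address;
many distributions already run this monitor as a service:

    # /etc/mdadm.conf: where the monitor should send Fail/DegradedArray alerts
    MAILADDR sysadmin@example.org

    # Run the monitor in the background, scanning all arrays in the config
    mdadm --monitor --scan --daemonise

    # Send a test alert to confirm mail delivery actually works
    mdadm --monitor --scan --test --oneshot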
I agree about the complexity of the RAID situation. But it can be narrowed down
a bit.
Since the claim is that there were only two hard drives, the only
possibilities are:
RAID 0 - "striping", in which case his data will probably not be recoverable,
and he would be unable to boot.
RAID 1 - "mirroring", in which case the surviving disk holds a complete copy
of the data. […]
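Assuming software RAID, which of the two it is can be read straight off the
machine; a hardware controller would need the vendor's utility instead. The
array name /dev/md0 below is an assumption:

    # The personalities/array lines name the level ("raid0" or "raid1")
    cat /proc/mdstat

    # Or ask mdadm directly
    mdadm --detail /dev/md0 | grep 'Raid Level'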
Hi Vaheh,
RAID on Linux comes in different flavours and levels; the flavours are software
RAID (mdadm) and hardware RAID (dedicated RAID controller or motherboard), and
the levels are RAID0 RAID1 RAID5 RAID6 RAID10 and a few others. These details
influence what the user will notice when a disk fails. […]
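A quick, hedged way to tell the two flavours apart on a running system:

    # Software RAID: md arrays are listed here
    cat /proc/mdstat

    # Hardware RAID: a dedicated controller usually shows up on the PCI bus
    lspci | grep -i raid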
Hello ccp4-ers,
A bit off topic (actually a lot off topic) question regarding a RAID array
system. On a Linux box, one of two hard drives failed. I've found an identical
one and replaced it. Can someone point me in the direction where I can get
instructions on what to do next to be able to log in? Currently […]
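For the software RAID (mdadm) case the usual sequence looks roughly like the
sketch below. Every device name here (/dev/sda for the surviving disk,
/dev/sdb for the replacement, /dev/md0 for the array) is an assumption and
must be verified against the actual machine before running anything:

    # Copy the partition table from the surviving disk to the new one
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # Add the new partition back into the degraded array and let it resync
    mdadm --manage /dev/md0 --add /dev/sdb1

    # Watch the rebuild progress
    watch cat /proc/mdstat

    # If the dead disk carried the bootloader, reinstall it on the replacement
    grub-install /dev/sdb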
Harry M. Greenblatt wrote:
BS"D
Thank you to all the respondents.
Some comments:
1. Some believe that the write performance in RAID 5 is only as good as
performance to one disk. […]
I'm one of those "some" (based on experience, and a basic understanding
of how RAID5 operates).
Even 3ware gives performance data for […]
BS"D
Thank you to all the respondents.
Some comments:
1. Some believe that the write performance in RAID 5 is only as good
as performance to one disk. This is true only in RAID 3 (under
certain conditions), where parity is written as a separate operation
to one dedicated parity disk. With RAID 5, parity is distributed across all
the disks […]
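To make the cost concrete: a partial-stripe write on RAID 5 is a
read-modify-write cycle (read old data, read old parity, write new data,
write new parity), i.e. roughly four disk operations per logical write. A
toy back-of-the-envelope with made-up numbers:

    # Hypothetical disk sustaining 100 small random writes per second
    disk_iops=100
    # RAID 5 read-modify-write: 2 reads + 2 writes per logical write
    penalty=4
    echo "effective small-write IOPS: $((disk_iops / penalty))"   # prints 25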
Woops! Yes, of course you would want an ampersand in my little
pseudo-script to background the "dd" jobs. My mistake. "seq" is also
one of my favorite commands, but some systems are so stripped-down that
they don't have it!
-James
Tim Gruene wrote:
Interesting and simple way to test the write performance. […]
Interesting and simple way to test the write performance. Simultaneous
writes could then be tested by putting an ampersand ('&') at the end of
the 'dd' command, couldn't they? And if you get tired of typing all the
numbers, you could use the 'seq' command instead.
Cheers, Tim
/bin/tcsh
set ti[…]
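Putting those pieces together, a minimal sketch of the parallel write test
(written in plain sh rather than tcsh; file names, block size and job count
are arbitrary):

    #!/bin/sh
    # Launch four simultaneous 1 GB writes, then wait for all to finish;
    # each dd reports its own throughput on stderr when it completes
    for i in $(seq 1 4); do
        dd if=/dev/zero of=testfile.$i bs=1M count=1024 &
    done
    wait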
Ahh, there is nothing quite like a nice big cluster to bring any file
server to its knees.
My experience with cases like this is that the culprit is usually NFS,
or the disk file system being used on the RAID array. It is generally
NOT a bandwidth problem. Basically, I suspect the bottleneck […]
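One hedged way to separate those suspects is to time the same flushed write
locally on the server and again over NFS from a node; the paths below are
placeholders:

    # Local write on the file server itself (fdatasync forces data to disk)
    dd if=/dev/zero of=/data/local.tmp bs=1M count=1024 conv=fdatasync

    # Same write from a compute node through the NFS mount
    dd if=/dev/zero of=/mnt/nfs/remote.tmp bs=1M count=1024 conv=fdatasync

If the NFS number is dramatically worse while the local one is fine, the
network filesystem rather than raw bandwidth is the likely culprit.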
Harry M. Greenblatt wrote:
BS"D
To those hardware oriented:
We have a compute cluster with 23 nodes (dual socket, dual core Intel
servers). Users run simulation jobs on the nodes from the head node.
At the end of each simulation, a result file is compressed to 2GB, and
copied to the file server for the cluster […]
BS"D
To those hardware oriented:
We have a compute cluster with 23 nodes (dual socket, dual core
Intel servers). Users run simulation jobs on the nodes from the head
node. At the end of each simulation, a result file is compressed to
2GB, and copied to the file server for the cluster […]