On Tue, 15 May 2007, Tomasz Chmielewski wrote:
I have a RAID-10 setup of four 400 GB HDDs. As the data grows by several GBs
a day, I want to migrate it somehow to RAID-5 on separate disks in a separate
machine.
Which would be easy, if I didn't have to do it online, without stopping any
servi
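A common approach for this (not from the thread itself; host and path
names below are assumptions): do the bulk of the copy live with rsync,
then a short final pass with writers paused.

    # first pass while everything keeps running; repeat until the delta is small
    rsync -aHx --delete /srv/ newhost:/srv/
    # brief downtime: stop the writers, run one last pass, switch over
    rsync -aHx --delete /srv/ newhost:/srv/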
On Thu, 5 Apr 2007, Lennert Buytenhek wrote:
[*] probably an entirely defective batch of 14 Samsung Spinpoint
500G disks
Let's hope not... Keep checking those SMART values...
Failed disk #2 still reports a SMART status of "PASSED"..
Hmmm.. I'd be tempted to double check your hardware + kern
On Wed, 4 Apr 2007, Lennert Buytenhek wrote:
(please CC on replies, not subscribed to linux-raid@)
Hi!
While my RAID6 array was rebuilding after one disk had failed (which
I replaced), a second disk failed[*], and this caused the rebuild
process to start over from the beginning.
Why would the
On Fri, 23 Mar 2007, Mattias Wadenstein wrote:
On Fri, 23 Mar 2007, Gordon Henderson wrote:
Are there any plans in the near future to enable growing RAID-6 arrays by
adding more disks into them?
I have a 15x500GB - drive unit and I need to add another 15 drives into
it... Hindsight is
Are there any plans in the near future to enable growing RAID-6 arrays by
adding more disks into them?
I have a 15x500GB - drive unit and I need to add another 15 drives into
it... Hindsight is telling me that maybe I should have put LVM on top of
the RAID-6, however, the usable 6TB it yield
On Mon, 15 Jan 2007, dean gaudet wrote:
you can also run monthly "checks"...
echo check >/sys/block/mdX/md/sync_action
it'll read the entire array (parity included) and correct read errors as
they're discovered.
A-Ha ... I've not been keeping up with the list for a bit - what's the
minimum
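For reference, a minimal way to schedule such a check (md device name
assumed) is a root crontab entry along the lines of:

    # read-check md0 at 04:00 on the 1st of every month
    0 4 1 * *  echo check > /sys/block/md0/md/sync_action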
Yeechang Lee wrote:
[Also posted to
comp.sys.ibm.pc.hardware.storage,comp.arch.storage,alt.comp.hardware.pc-homebuilt,comp.os.linux.hardware.]
I'm shortly going to be setting up a Linux software RAID 5 array using
16 500GB SATA drives [...]
I'm of the opinion that more drives means more ch
On Fri, 15 Dec 2006, Andre Majorel wrote:
Pardon the probably silly question but...
Can you use RAID1 devices for your root and swap with a "straight"
kernel ? (i.e. without the need for initrd/initramfs.)
Yes - but you need the md (and ide/scsi/sata) drivers compiled into your
kernel, and m
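A sketch of the usual pre-initramfs recipe (device names assumed):

    # 1. build md/raid1 and the controller drivers into the kernel, not as
    #    modules: CONFIG_BLK_DEV_MD=y, CONFIG_MD_RAID1=y
    # 2. set each member partition to type 0xfd (Linux raid autodetect)
    #    so the kernel assembles the array itself at boot:
    fdisk /dev/sda        # 't' then 'fd' on the root and swap partitions
    # 3. point the bootloader at the md device, e.g. root=/dev/md0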
On Tue, 24 Oct 2006, David Greaves wrote:
> Gordon Henderson wrote:
> >1747 ?S< 724:25 [md9_raid5]
> >
> > It's kernel 2.6.18 and
>
> Wasn't the module merged to raid456 in 2.6.18?
Ah, was it? I might have missed that...
> Are your m
Here's an oddity - just built a server with 15 external disks over 2 SAS
channels and I've noticed that the kernel is saying it's RAID5 rather than
RAID6 ...
Hard to explain what I mean in words, but:
bertha:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md9 : a
On Tue, 17 Oct 2006, Andrew Moise wrote:
> On 10/17/06, Gordon Henderson <[EMAIL PROTECTED]> wrote:
> > Anyway, it's currently in a RAID-1 configuration (which I used for some
> > initial soaktests) and seems to be just fine:
> >
> > Filesystem            Siz
On Tue, 17 Oct 2006, Greg Dickie wrote:
> Never lost an XFS filesystem completely. Can't say the same about ext3.
Whereas I have exactly the reverse of the problem... Never lost an ext2/3,
but had a few XFSs trashed when I played with it a couple of years ago...
My 2 euros,
Gordon
For anyone who cares about my saga so-far ;-) ...
I got physical access to the unit this morning and setup the drives as 15
RAID-0 Logical drives and booted up Linux, and it then attached all the
drives in the usual way.
And I can see all 15 drives. So the down-side is that I can't use any sort
This might not be strictly on-topic here, but you may provide
enlightenment, as a lot of web searching hasn't helped me so far )-:
A client has bought some Dell hardware - Dell 1950 1U server, 2 on-board
SATA drives connected to a Fusion MPT SAS controller. This works just
fine. The on-board dri
On Sun, 8 Oct 2006, Ian Brown wrote:
> Then I created a RAID1 by running:
>
> mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdb2
>
> I got : mdadm: array /dev/md0 started
>
> cat /proc/mdstat shows:
>
> Personalities : [raid1]
> md0 : active raid1 sdb2[1] sdb1[0]
> 1
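Worth noting: both members above (sdb1 and sdb2) live on the same
physical disk, so this mirror survives no disk failure at all. A sketch
with two separate disks, device names assumed:

    mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1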
On Sat, 16 Sep 2006, Dexter Filmore wrote:
> Am Samstag, 16. September 2006 19:26 schrieb Bill Davidsen:
> > Dexter Filmore wrote:
> > >Is anyone here who runs a soft raid on Slackware?
> > >Out of the box there are no raid scripts, the ones I made myself seem a
> > > little rawish, barely more th
On Tue, 5 Sep 2006, Paul Waldo wrote:
> Gordon Henderson wrote:
> > On Tue, 5 Sep 2006, Steve Cousins wrote:
> [snip]
> > and my weekly badblocks script looks like:
> >
> > #!/bin/csh
> >
> > echo "`uname -n`: Badblocks test starting at [`dat
On Tue, 5 Sep 2006, Steve Cousins wrote:
> Would people be willing to list their setup? Including such things as
> mdadm.conf file, crontab -l, plus scripts that they use to check the
> smart data and the array, mdadm daemon parameters and anything else that
> is relevant to checking and maintaini
On Tue, 5 Sep 2006, Patrik Jonsson wrote:
> MTBF seems to have an exponential dependence on temperature, so it pays
> off to keep temp down. Exactly what temp you consider safe is
> individual, but my drives only occasionally go above 40C.
I had a pair (2 x Hitachi IDE 80GB) that ran in a sealed
On Tue, 5 Sep 2006, Paul Waldo wrote:
> Hi all,
>
> I have a RAID6 array and I wondering about care and feeding instructions :-)
>
> Here is what I currently do:
> - daily incremental and weekly full backups to a separate machine
> - run smartd tests (short once a day, long once a week)
>
On Thu, 24 Aug 2006, Adam Kropelin wrote:
> > Generally speaking the channels on onboard ATA are independent with any
> > vaguely modern card.
>
> Ahh, I did not know that. Does this apply to master/slave connections on
> the same PATA cable as well? I know zero about PATA, but I assumed from
> th
On Thu, 24 Aug 2006, Richard Scobie wrote:
> Gordon Henderson wrote:
>
> > While I haven't done this, I have a client who uses Firewire drives
> > (Lacie) as a backup solution and they seem to "just work", and look like
> > locally attached SCSI drives (Perf
On Wed, 16 Aug 2006, andy liebman wrote:
> Thanks Gordon,
>
> I may not have been clear what I was asking. I wanted to know if you can
> make DISK IMAGES -- for example, with a program like Norton Ghost or
> Acronis True Image (better) -- of EACH of the two OS drives from a
> mirrored pair. Then r
On Tue, 15 Aug 2006, andy liebman wrote:
> -- If I were to create disk images of EACH drive (i.e., /dev/sda and
> /dev/sdb), could I restore each of those images to NEW drives -- with
> all of their respective partitions -- and have a working RAIDED OS? I
> ask because my ultimate goal is to put
A client of mine desperately wants a Dell solution rather than a
self-build. They are looking at an external Dell box with 15 x 500GB SATA
drives in it and a Dell 1U host controller - but the connection between
them is SAS, and they want to use a (Dell) PERC5e card in the host, so
does anyone know
On Thu, 13 Jul 2006, Burn Alting wrote:
> Last year, there were discussions on this list about the possible
> use of a 'co-processor' (Intel's IOP333) to compute raid 5/6's
> parity data.
>
> We are about to see low cost, multi core cpu chips with very
> high speed memory bandwidth. In light of th
On Wed, 28 Jun 2006, Christian Pernegger wrote:
> I also subscribe to the "almost commodity hardware" philosophy,
> however I've not been able to find a case that comfortably takes even
> 8 drives. (The Stacker is an absolute nightmare ...) Even most
> rackable cases stop at 6 3.5" drive bays -- e
I've seen a few comments to the effect that some disks have problems when
used in a RAID setup and I'm a bit perplexed as to why this might be..
What's the difference between a drive in a RAID set (either s/w or h/w)
and a drive on its own, assuming the load, etc. is roughly the same in
each set
On Sun, 25 Jun 2006, Chris Allen wrote:
> Back to my 12 terabyte fileserver, I have decided to split the storage
> into four partitions
> each of 3TB. This way I can choose between XFS and EXT3 later on.
>
> So now, my options are between the following:
>
> 1. Single 12TB /dev/md0, partitioned int
On Fri, 23 Jun 2006, Chris Allen wrote:
> Strange that whatever the filesystem, you get equal numbers of people
> saying that
> they have never lost a single byte and people who have had horrible
> corruption and
> would never touch it again. We stopped using XFS about a year ago because we
> were ge
On Thu, 22 Jun 2006, Chris Allen wrote:
> Dear All,
>
> I have a Linux storage server containing 16x750GB drives - so 12TB raw
> space.
Just one thing - Do you want to use RAID-5 or RAID-6 ?
I just ask, as with that many drives (and that much data!) the
possibility of a 2nd drive failure is in
On Thu, 15 Jun 2006, Adam Talbot wrote:
> What I hope to be an easy fix. Running Gentoo Linux and trying to setup
> RAID 1 across the root partition hda3 hdc3. Have the fstab set up to
> look for /dev/md3 and I have built the OS on /dev/md3. Works fine until
> I reboot. System loads and states
On Tue, 13 Jun 2006, Adam Talbot wrote:
> I still have not figured out if "block" is per disk or per stripe?
> My current array is rebuilding and states "64k chunk" is this a per disk
> number or is that a functional stripe?
The block-size in the argument to mkfs is the size of the basic data blo
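To make the two units concrete (a sketch; device names and sizes
assumed): the md chunk is a per-member-disk unit fixed at array creation,
while the mkfs block size is a filesystem-level unit on top of the whole
array:

    # 64KiB goes to each data disk in turn; with 4 drives in RAID-5,
    # one full data stripe is (4-1) x 64KiB = 192KiB
    mdadm --create /dev/md3 --level=5 --raid-devices=4 --chunk=64 /dev/sd[abcd]3
    # the filesystem block size is independent of the above:
    mkfs -t ext3 -b 4096 /dev/md3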
On Tue, 13 Jun 2006, Justin Piszcz wrote:
> mkfs -t xfs -f -d su=128k,sw=14 /dev/md9
>
> Gordon, What speed do you get on your RAID, read and write?
>
> When I made my XFS/RAID-5, I accepted the defaults for the XFS filesystem
> but used a 512kb stripe. I get 80-90MB/s reads and ~39MB/s write
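The su/sw values in that mkfs line map straight onto the md geometry (my
reading, not gospel): su is the md chunk size and sw the number of
data-bearing disks, so a 15-drive RAID-5 with a 128KiB chunk has 14 data
disks:

    mkfs -t xfs -f -d su=128k,sw=14 /dev/md9
    # the same 15 drives as RAID-6 would lose two disks to parity:
    #   mkfs -t xfs -f -d su=128k,sw=13 /dev/md9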
On Mon, 12 Jun 2006, Adam Talbot wrote:
> RAID tuning?
> Just got my new array setup running RAID 6 on 6 disks. Now I am looking
> to tune it. I am still testing and playing with it, so I don't mind
> rebuilding the array a few times.
>
> Is chunk size per disk or is it total stripe?
As I understan
I'm just after confirmation (or not!) of something I've done for a long
time which I think is right - it certainly seems right, but one of those
things I've always wondered about ...
When creating an array I allocate drives from alternate controllers with
the thought that the OS/system/hardware
I know this has come up before, but a few quick googles hasn't answered my
questions - I'm after the max. array size that can be created under
bog-standard 32-bit intel Linux, and any issues re. partitioning.
I'm aiming to create a raid-6 over 12 x 500GB drives - am I going to
have any problems?
On Sat, 13 May 2006, Raúl Gómez Cabrera wrote:
> Hi Gordon, thanks for your quick response.
>
> Well my client does not want to spend more money on this particular
> server, I think maybe that is because they are planning to replace it...
Ask your client just how valuable their email data is...
On Sat, 13 May 2006, Raúl Gómez Cabrera wrote:
> Hi everyone,
>
> I have installed a system (a mail server) which had a RAID 1 (software)
> with two SCSI disks running on Linux. The sdb disk failed a few months
> ago and the system is still working as expected.
>
> Since the failure of the disk I
On Wed, 22 Mar 2006, Shai wrote:
> Hi,
>
> I have two raid5 MDs: /dev/md0 and /dev/md1;
>
> I had broken md0 the other day and had to rebuild it.
> I've formatted it as xfs and wanted to make md1 also xfs so I decided to
> move all the data from md1 to md0.
>
> while doing the cp of all that data,
On Sat, 18 Mar 2006, Ewan Grantham wrote:
> OK, managed to use assemble force to get the five remaining drives of
> the array up in degraded mode. But running e2fsck (I had an ext3 fs
> on the RAID) is revealing a number of bad dtimes and invalid blocks.
> Trying to run e2fsck with the -p option
On Sun, 5 Mar 2006, Bill Davidsen wrote:
> I agree, but it's easier to configure to keep going with a dead drive
> than fan in many enclosures. You seem to have more heat tolerance and
> monitoring than many installations. And you have done testing on the
> heat issues, another unusual thing.
I g
On Mon, 6 Mar 2006, Raz Ben-Jehuda(caro) wrote:
> Neil Hello .
> I have a performance question.
>
> I am using raid5 stripe size 1024K over 4 disks.
> I am benchmarking it with an asynchronous tester.
> This tester submits 100 IOs of size of 1024 K --> as the stripe size.
> It reads raw io from th
On Sun, 5 Mar 2006, Bill Davidsen wrote:
> >Still scratching my head, trying to work out if raid-10 can withstand
> >(any) 2 disks of failure though, although after reading md(4) a few times
> >now, I'm beginning to think it can't (unless you are lucky!) So maybe I'll
> >just stick with Raid-6 as I
On Sat, 18 Feb 2006, PFC wrote:
> >> Anybody tried a Raid1 or Raid5 on USB2.
> >> If so did it crawl or was it usable ?
>
> Why not external SATA ?
> After all, the little cute SATA cables are a lot more suited to this
> than
> the old, ugly flat PATA cables...
Until you break a moth
On Fri, 17 Feb 2006, Andy Smith wrote:
> On Fri, Feb 17, 2006 at 03:14:37PM +0000, Gordon Henderson wrote:
> > Still scratching my head, trying to work out if raid-10 can withstand
> > (any) 2 disks of failure though, although after reading md(4) a few times
> > now, I
On Fri, 17 Feb 2006, Francois Barre wrote:
> 2006/2/17, Gordon Henderson <[EMAIL PROTECTED]>:
> > On Fri, 17 Feb 2006, berk walker wrote:
> >
> > > RAID-6 *will* give you your required 2-drive redundancy.
> >
> Anyway, if you wish to resize your setup to
On Fri, 17 Feb 2006, berk walker wrote:
> RAID-6 *will* give you your required 2-drive redundancy.
Hm. I was under the impression (mistakenly?) that RAID10 (as opposed to
RAID1+0) would give me 2 disk redundancy in far mode, however maybe I need
to re-read the stuff on RAID10 again ...
Gordon
I'm building a little test server and I wanted ~500GB of storage with
2-drive redundancy, so the best price vs. num. drives vs. the need for 2
drive redundancy came to 4 x 250GB drives. (And I have a mobo with 5 SATA
ports, and taking into account case power requirements, etc. 4 drives has
worked
On Wed, 8 Feb 2006, discman (sent by Nabble.com) wrote:
> Hi.
>
> Anyone with some experience on mdadm?
Just about everyone here, I'd hope ;-)
> I have a running RAID0-array with mdadm and it's using monitor-mode with an
> e-mail address.
> Does anyone know how to remove that e-mail address without d
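The address normally comes either from a MAILADDR line in mdadm.conf or
from a --mail option given to the monitor; a hedged sketch (the config
file path and init mechanics vary by distro):

    grep MAILADDR /etc/mdadm.conf
    # comment out or delete that line, then restart the monitor, e.g.
    mdadm --monitor --scan --daemonise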
On Thu, 2 Feb 2006, Mattias Wadenstein wrote:
> Yes, but then you (probably) lose hotswap. A feature here was to use the
> 3ware hw raid for the raid1 pairs and use the hw-raid hotswap instead of
> having to deal with linux hotswap (unless both drives in a raid1-set
> dies).
I'm not familiar with
On Wed, 1 Feb 2006, David Liontooth wrote:
> We're wondering if it's possible to run the following --
>
> * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card
> * the OS will see these as four normal drives
> * use md to configure them into a RAID 6 array
>
> Would this work? Would it
On Wed, 1 Feb 2006, Enrique Garcia Briones wrote:
> > I'd be tempted to remove the A1000 and install on the 2 internal
> > drives, then once that's happy, plug the A1000 back in again. It
> > might be that the OBP (Open Boot Prom) code is favouring the
> > external device to boot off, but it's bee
be doing.
Gordon
>
> thanks
>
> -- Forwarded Message ---
> From: Gordon Henderson <[EMAIL PROTECTED]>
> To: Enrique Garcia Briones <[EMAIL PROTECTED]>
> Cc: linux-raid@vger.kernel.org
> Sent: Tue, 31 Jan 2006 17:05:12 +0000 (GMT)
> Subject: Re: Configuring co
On Tue, 31 Jan 2006, Enrique Garcia Briones wrote:
> Hi,
>
> Let me introduce myself.
>
> I'm a newbie in Linux and in RAID over Linux. I have configured a RAID-0 in a
> NetBSD 2.0 box. So, now let me explain what I'm trying to do,
>
> Antecedents/Equipment:
>
> I have a Sparc 420 with 4 processors
On Tue, 24 Jan 2006, Francois Barre wrote:
> > Some drives do support quiet vs. performance modes.
> >
> > hdparm will set this for you, however, from the hdparm manual page:
> >
> > -M     Get/set Automatic Acoustic Management (AAM) setting. Most modern
> >        harddisk drives
On Tue, 24 Jan 2006, Francois Barre wrote:
> Is it possible to make the drives turn slower? To make the heads move slower?
> That would be my dream. No more heat, a 10mA consumption, no more noise...
Some drives do support quiet vs. performance modes.
hdparm will set this for you, however,
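The concrete syntax (device name assumed; per the hdparm man page, 128 is
the quietest setting and 254 the fastest):

    hdparm -M 128 /dev/sda    # quiet seeks
    hdparm -M 254 /dev/sda    # fastest seeks
    hdparm -M /dev/sda        # query the current AAM setting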
On Mon, 23 Jan 2006, Gilberto Diaz wrote:
> The problem is that the following processes are using a lot of CPU
> time.
>
> md1_raid1
> md1_resync
> ..
> md6_raid1
> md6_resync
>
> Here is a sample of the uptime command
>
> 17:54:16 up 5:48, 2 users, load average:
On Sun, 22 Jan 2006, Mitchell Laks wrote:
> So I have to compile my own 2.6.15 kernel. So what version of mdadm do I use?
> How shall I install it?
I'm using a stock (www.kernel.org) 2.6.15 kernel on several Debian Sarge
servers and just using the debian packaged mdadm. Seems to work OK.
> Does
On Sat, 21 Jan 2006, Gerd Knops wrote:
> Hi,
>
> I have a RAID5 setup with 3 250GB SATA disks. Often the RAID is not
> accessed for days, so I wonder if I can extend the life of the disks
> by spinning them down, eg by setting the spindown timeout for the
> drives with hdparm -S nn.
>
> The hdparm
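For reference, -S has an odd encoding (per the hdparm man page): values
1-240 count in units of 5 seconds, 241-251 in units of 30 minutes. So,
device name assumed:

    hdparm -S 241 /dev/sda    # spin down after 30 minutes idle
    hdparm -S 242 /dev/sda    # spin down after 1 hour idle
    hdparm -S 0 /dev/sda      # disable the spindown timer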
On Wed, 18 Jan 2006, John Hendrikx wrote:
> I agree with the original poster though, I'd really love to see Linux
> Raid take special action on sector read failures. It happens about 5-6
> times a year here that a disk gets kicked out of the array for a simple
> read failure. A rebuild of the ar
On Tue, 17 Jan 2006, Neil Brown wrote:
> - status of RAID6
> I believe it is as stable/reliable as raid5.
FWIW: I've been using RAID-6 since early last year in production
environments and so-far so good. Kernels 2.6.11 to 2.6.15 (on a Dell 2850
server I'm building tonight with 6 drives). I di
On Mon, 9 Jan 2006, Molle Bestefich wrote:
> Andre Majorel wrote:
> > With LVM, you can create a snapshot of the block device at any
> > time, mount the snapshot read-only (looks like another block
> > device) and backup that. Ensuring consistency at application level
> > is still up to you but at
On Thu, 5 Jan 2006, Francois Barre wrote:
> 2006/1/5, berk walker <[EMAIL PROTECTED]>:
> [...]
> > >
> > >
> > Ext3 does have a fine record. Might I also suggest an added expense of
> > 18 1/2% and do RAID6 for better protection against data loss?
> > b-
> >
>
> Well, I guess so. I just hope I'll
On Thu, 5 Jan 2006, Francois Barre wrote:
> Well, anyway, thanks for the advice. Guess I'll have to stay on ext3
> if I don't want to have nightmares...
And you can always mount it as ext2 if you think the journal is corrupt.
Have you considered Raid-6 rather than R5?
The biggest worry I have i
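For the mount-as-ext2 fallback above, the incantation would be something
like this (device and mountpoint assumed; note the ext2 driver refuses
the mount if the journal still needs recovery):

    mount -t ext2 -o ro /dev/md0 /mnt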
On Wed, 21 Dec 2005, Sebastian Kuzminsky wrote:
> > But how does the performance for read and write compare?
>
> Good question! I'll post some performance numbers of the RAID-6
> configuration when I have it up and running.
Post your hardware config too if you don't mind. I have one server with
On Fri, 16 Dec 2005, Neil Brown wrote:
> > - Does RAID6 have disadvantages wrt write speed?
>
> Probably. I haven't done any measurements myself, but from a
> theoretical standpoint, you would expect raid6 to impose more CPU load
> (though that may not be noticeable) and as raid6 need to see the
On Thu, 4 Aug 2005, Stefan Majer wrote:
> Hi,
>
> I have a server running on only one SCSI disk. I have now got one extra
> SCSI disk and I want to transform the existing installation to use both
> disks in a RAID1.
> Therefore I want to mirror each partition with md.
> The Question now is how to do that
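The usual answer (a sketch, device names assumed) is to build a degraded
mirror on the new disk, copy everything across, then add the old disk in:

    # one real member plus the keyword 'missing' makes a degraded RAID1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    mkfs -t ext3 /dev/md0
    # copy the running system over, fix fstab + bootloader to use /dev/md0,
    # reboot onto the array, then pull the original disk in:
    mdadm /dev/md0 --add /dev/sda1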
On Tue, 2 Aug 2005, Boik Moon wrote:
> Hi,
>
> According to RAID theory, the READ performance with RAID0, 1 and 5
> should
> be faster than non-RAID. I tested it on Red Hat Linux (ES) on a
> Pentium PC, but they are almost the same. I am using
> RocketRAID404(HPT374)PCI card to connect 4 master I
On Sat, 30 Jul 2005, Jeff Breidenbach wrote:
>
> Hi all,
>
> I just ran a Linux software RAID-1 benchmark with some 500GB SATA
> drives in NCQ mode, along with a non-RAID control. Details are here
> for those interested.
>
> http://www.jab.org/raid-bench/
>
> Comments are appreciated. I'm cu
On Thu, 14 Apr 2005, Laurent CARON wrote:
> Hello,
>
> We are in the process of increasing the size of our RAID arrays as our
> storage needs increase.
>
> I've got 2 solutions for this:
>
> - Copy the data over a new array and replace the disks
Do this! You know it makes sense. If nothing else, it'
On Mon, 4 Apr 2005, H. Peter Anvin wrote:
> I can't speak for the EVMS people, but I got to stress-test my RAID6
> test system some this weekend; after having run in 1-disk degraded
> mode for several months (thus showing that the big bad "degraded
> write" bug has been thoroughly fixed) I changed
On Sat, 2 Apr 2005, peter pilsl wrote:
> The only explanation to me is that I had the wrong entry in my
> lilo.conf. I had root=/dev/hda6 there instead of root=/dev/md2
> So maybe root was always mounted as /dev/hda6 and never as /dev/md2,
> which was started, but never had any data written to it.
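If that's the explanation, the fix is a lilo.conf entry like the
following (image path and label assumed), followed by re-running lilo:

    image=/boot/vmlinuz
        label=linux
        root=/dev/md2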
On Sat, 2 Apr 2005, Matt Domsch wrote:
> On Sat, Apr 02, 2005 at 04:04:48PM +0100, Gordon Henderson wrote:
> > The cheaper cards that I've used seem to have mostly the SII chipset - and
> > that appears to be well supported by Linux. The 3112 is a dual-port card,
> >
On Sat, 2 Apr 2005, Max Waterman wrote:
> http://www.sonnettech.com/product/tempo-x_esata8.html
>
> do you think this will work with Linux?
>
> what about Linux on an Intel platform?
Hard to tell without knowing the actual chip-set on-board.
> I wonder how it performs - esp. compared to the supe
On Fri, 1 Apr 2005, Alvin Oga wrote:
> - ambient temp should be 65F or less
> and disk operating temp ( hddtemp ) should be 35 or less
Are we confusing F and C here?
hddtemp typically reports temperatures in C. 35F is bloody cold!
65F is barely room temperature. (18C)
Gordon
On Fri, 1 Apr 2005, Alvin Oga wrote:
>
> hi ya raiders ..
>
> we(they) have 14x 72GB scsi disks config'd as raid5,
> ( no hot spare .. )
>
> - if 1 disk dies, no problem ... ez to recover
>
> - my dumb question is,
> - if 2 disks dies at the same time, i
> assume the entire raid5 is ba
On Tue, 22 Mar 2005, Schuett Thomas EXT wrote:
> Hello,
>
> I am sorry for having to ask a question you might rate very stupid,
> but I really want to know it:
>
> If you think through system crash scenarios, what types of crashes
> are you thinking of? Do you only consider harddisk faults, o
As part of a (Dell) server purchase, a client was given a free Dell 750
PowerEdge (Celeron) box with 2 x 120GB SATA drives... Opening the lid (as
you do :) revealed that the motherboard has on-board SATA, but Dell had
also plugged in an Adaptec 6-port SATA RAID card, and connected the 2
drives to
On Wed, 9 Mar 2005, Brad Campbell wrote:
> Gordon Henderson wrote:
>
> > And do check your disks regularly, although I don't think current version
> > of smartmontools fully supports sata under the scsi subsystem yet...
> >
>
> Actually, if you are using a
Interestingly enough, having just typed up that last post, it's gotten me
thinking...
I've just taken delivery of a lot of old PC bits & disks. Mostly 18Gb SCSI
drives. So I've built up 2 boxes with 8 disks in each. Only old Xeon 500
processors, but all good stuff in its day.
Now I'm thinking th
On Wed, 9 Mar 2005 [EMAIL PROTECTED] wrote:
> Greetings All,
>
> I have been lurking for a while I recently put together a raid 5
> system (Asus K8NE SIL 3114/2.6.8.1 kernel) with 4 300GB SATA Seagate
> drives (a lot smaller than the bulk of what seems to be on this list!).
Size isn't importa
On Tue, 8 Mar 2005, Tobias Hofmann wrote:
> > I stuffed a bunch of cheap SATA disks and crappy controllers in an old
> > system. (And replaced the power supply with one that has enough power
> > on the 12V rail.)
> >
> > It's running 2.4, and since it's IDE disks, I just call 'hdparm
> > -S' in rc
When I was building a server recently, I ran into a file-system corruption
issue with ext3, although I didn't have the time to do anything about it
(or even verify that it wasn't me doing something stupid)
However, I only saw it when I used the stride=8 parameter to mke2fs, and
the -j (ext3) opt
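For the record, stride is derived from the md geometry: stride = chunk
size / filesystem block size, e.g. a 32KiB chunk over 4KiB blocks gives
stride=8. The option spelling depends on the mke2fs vintage (-R on older
versions, -E on newer ones); device name assumed:

    mke2fs -j -b 4096 -R stride=8 /dev/md0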
On Tue, 22 Feb 2005, Guy wrote:
> I have NOT been able to share a SCSI cable/card with disks and a tape
> drive. Tried for days. I would get disk errors, or timeouts. I
> corrected the problem by putting the tape drive on a dedicated SCSI
> bus/card. I don't recall the details of the failures,
On Tue, 22 Feb 2005, Matthias Julius wrote:
> Hi,
>
> I have a raid5 out of 4 drives where 2 drives failed and were
> removed. Since it happened during backup I would very much like to
> reactivate the array to rescue as much of the data as is still intact.
>
> When I try to assemble the array with mda
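The usual last resort here (a sketch, member names assumed): --force
tells mdadm to mark the most recently failed member clean so a degraded
start is possible, then mount read-only and copy off what you can:

    mdadm --assemble --force /dev/md0 /dev/sd[abcd]1
    mount -o ro /dev/md0 /mnt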
On Tue, 22 Feb 2005, Louis-David Mitterrand wrote:
> I am considering getting a Sony SAIT 3 with 500G/1TB tapes, which seems
> like a nice solution for backing up a whole server on a single tape.
I've been using DLT tapes for many years now, prior to that DAT tapes and
Exabytes before that... Capa
On Sat, 19 Feb 2005, berk walker wrote:
> [I usually do not spend bandwidth in quoting big stuff, but yours might
> be worth it]
> Properly chastised. One CAN do net raid, 4,000 [where's my pound key?]
> is still a lot to me, [don't forget my name IS berk :)]
That's for 2 servers, remember. Worr
On Sat, 19 Feb 2005, berk walker wrote:
> Do you want a glass or some cheese?
Not really... I just thought I'd pass on my experiences and thank those
who gave me support recently. By posting my configurations and thoughts
and issues I've encountered during the way, I'm essentially opening myself
This is a bit OT and long, but it might help someone in the future, you
never know!
I've been struggling recently to get a box together with some supposedly
nice hardware and it's turned out to be a bit of a nightmare.. The good
news is that it's now sorted and working well enough to go into
prod
On Thu, 17 Feb 2005, Phantazm wrote:
> I use master/slave. The problem is that I can't break the raid set because if I do I
> will lose over 1TB of data :/
>
> Goin to see if i can get more controller cards though.
Do it. Use 4 2-port cards for your 8 drives and only one drive per cable.
It is possible, an
On Sun, 13 Feb 2005, Tim Moore wrote:
> Gordon Henderson wrote:
> > What I wanted was an 8-way RAID-1 for the boot partition (all of /, in
> > reality) and I've done this many times in the past on other 2-5 way
> > systems without issue. So I do the stuff I'
On Sun, 13 Feb 2005, Tim Moore wrote:
> Gordon Henderson wrote:
> >
> > Anyone using Tyan Thunder K8W motherboards???
> >
> > I now know there is a K8S (server?) version of that mobo, but at the time
> > it was all ordered, I wasn't aware of it - my though
On Sun, 13 Feb 2005, Mark Hahn wrote:
> > Interesting - the private mail was from me, and I've got two dual
> > Opterons in service. The one with significantly more PCI activity has
> > significantly more problems then the one with less PCI activity.
>
> that's pretty odd, since the most intense I
On Fri, 4 Feb 2005, Andrew Walrond wrote:
> Hi Gordon,
>
> > Anyone using Tyan Thunder K8W motherboards???
>
> I'm using K8W's here with a combo of raid0/1 on on-board SATA, and it's been
> rock solid for months (2.6.10). Looks like your problems are all with the PCI
> cards, but I can't help there
On Thu, 3 Feb 2005, H. Peter Anvin wrote:
> Guy wrote:
> > Would you say that the 2.6 Kernel is suitable for storing mission-critical
> > data, then?
>
> Sure. I'd trust 2.6 over 2.4 at this point.
This is interesting to hear.
> > I ask because I have read about a lot of problems with data corr
On Sat, 29 Jan 2005, T. Ermlich wrote:
> That's right: each harddisk is partitioned absolutely identically, like:
> 0 - 19456 - /dev/sda1 - extended partition
> 1 - 6528 - /dev/sda5 - /dev/md0
> 6529 - 9138 - /dev/sda6 - /dev/md1
> 9139 - 16970 - /dev/sda7 - /dev/md2
> 16971 - 19456
On Sat, 29 Jan 2005, T. Ermlich wrote:
> Hello there,
>
> I just got here from http://cgi.cse.unsw.edu.au/~neilb/Contact ...
> Hopefully I'm more/less right here.
>
> Several months ago I set up a raid1 using mdadm.
> Two drives (/dev/sda & /dev/sdb, each one a 160GB Samsung SATA
> disk) are
On Tue, 25 Jan 2005, Steve Witt wrote:
> I'm installing a software raid system on a new server that I've just
> installed Debian 3.1 (sarge) on. It will be a raid5 on 5 IDE disks using
> mdadm. I'm trying to create the array with 'mdadm --create /dev/md0 ...'
> and am getting an error: 'mdadm: err
On Thu, 20 Jan 2005, Mark Bellon wrote:
> I've seen this too. The worst case can actually last for over 2 minutes.
>
> We've been running with a patch to the RAID 1 driver that handles this
> so critical applications do not hang for too long. Basically it uses
> timers in the RAID 1 driver to forc