--On Wednesday, December 15, 2004 08:53 +0100 Andrej <[EMAIL PROTECTED]>
wrote:
We have a new server which we want to use as a webserver. The following
hardware components are included:
- Intel XEON 3GHz
- 2GB RAM
- Intel SRCU42L RAID Controller SCSI
- 3 Fujitsu u320 SCSI discs in a RAID-5 array
--On Wednesday, December 15, 2004 13:38 +0100 Marcin Owsiany
<[EMAIL PROTECTED]> wrote:
On Wed, Dec 15, 2004 at 02:40:37AM -0700, Michael Loftis wrote:
> Additionally Linux uses 128K disk I/O
> blocks, if you've built your RAID array with any other size stripe you may
> suffer pathological performance loss.
Do you mean that that driver uses such blocks, or that l
On Wed, 2004-12-15 at 08:53 +0100, Andrej wrote:
> The problem is, that we've been running hdparm on the RAID device with
> these result:
>
> hdparm -t /dev/sda
> /dev/sda:
> Timing buffered disk reads: 138 MB in 3.04 seconds = 45.39 MB/sec
>
>
> Th
We have a new server which we want to use as a webserver. The following
hardware components are included:
- Intel XEON 3GHz
- 2GB RAM
- Intel SRCU42L RAID Controller SCSI
- 3 Fujitsu u320 SCSI discs in a RAID-5 array
We've installed Debian Sarge.
The problem is that we've been running hdparm on the RAID device.
Thanks for the help.
Have to get this box up pretty rapidly, so I decided to just do
software RAID. However, am going to order a couple of cards and test
them so the next time I'll have something I'm comfortable with. I
will definitely get one of the LSI's, and probably one of the
As far as Windows and Linux are concerned, the same holds for Intel's
Server RAID adapters. Debian included.
> They're fast cards, and fairly reasonably priced.
Hmm... "fast" ain't exactly true for the lower-end Intel RAID adapters :P
and whether you consider Intel hardw
The only RAID Cards I recommend anymore are the ICP Vortex cards. They've
got a good product line, tools and monitoring available for basically every
OS, they are FULLY online. You can get into the configuration area from
within your host OS, be it Windows, DOS, FreeBSD, Linux, Netware
Can anyone recommend a scsi raid controller for debian? I like the
serveraid from IBM, but, when I last built a box with this, the
monitoring software was only for Redhat and stuff.
Fairly small box. Probably 3 18.6G in a RAID-5 is all I need. And,
U160 would be fine, as would PCI-32. But, I
Henrique de Moraes Holschuh wrote:
>On Fri, 15 Oct 2004, Dave Watkins wrote:
>
>
>>The reason i2c won't work on these boards is because they use IPMI
>>rather than i2c and have a BMC on them which does much more in the way
>>of management than desktop type boards
>>
>>
>
>Well, if it is anyt
On Fri, 15 Oct 2004, Dave Watkins wrote:
> The reason i2c won't work on these boards is because they use IPMI
> rather than i2c and have a BMC on them which does much more in the way
> of management than desktop type boards
Well, if it is anything like SE750x boards, you need to first setup the
en
Achim Schmidt wrote:
>On Thu, 2004-10-14 at 22:01, Franz Georg Köhler wrote:
>
>
>>Isn't i2c supposed to be standardized?
>>
>>
>>
>
>today i had to speak to their support and the hint given was to take the
>redhat rpm and create your own deb using alien :/ Further i was told to use
>lm_sensors
Simon Buchanan said on Fri, Oct 15, 2004 at 08:20:15AM +1300:
> Hi There, I am looking to deploy some 1U rack servers based on the Intel
> Entry Server Platform SR1325TP1-E, but using a 3ware Escalade 9500S-4LP
> hardware raid with 3 x SATA 200GB drives (RAID 5) instead of the onboard
won't work because the board is
optimized for ISM...
> We are running a lot of boxes with 3ware 7000 series SATA-RAID without
> experiencing any difficulties (as we did with mylex).
>
> Where did you get the 3ware utilities for linux?
http://www.3ware.com/support/download.asp?code=8
On Thu, Oct 14, 2004 at 09:30:13 +0200, Achim Schmidt <[EMAIL PROTECTED]> wrote:
> Hi Simon,
>
> We recently installed a intel SR1325TP1-E with a 3ware 7000-SATA-RAID
> running sarge - no problems with RAID - 3ware-software (RAID-cli and
> management via webserver works fine)
Hi There, I am looking to deploy some 1U rack servers based on the Intel
Entry Server Platform SR1325TP1-E, but using a 3ware Escalade 9500S-4LP
hardware raid with 3 x SATA 200GB drives (RAID 5) instead of the onboard
stuff (as i have heard there are some problems with compatibility here
Hi Simon,
We recently installed an Intel SR1325TP1-E with a 3ware 7000-SATA-RAID
running sarge - no problems with RAID - 3ware software (RAID cli and
management via webserver works fine) - but unfortunately "Intel's Server
Management 5.*" does not support debian (better said, no other lin
Hi,
Every time I boot my system it says that my partitions have different UUIDs.
If anybody knows what I can do about it...
Thanks in advance,
Agustín
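One way to chase this down (a sketch only; the device names below are hypothetical) is to compare the UUIDs the filesystems actually carry against what /etc/fstab expects, then fix whichever side is wrong:

```shell
# Show the UUID each filesystem actually carries (hypothetical devices):
blkid /dev/sda1 /dev/sda2

# Show the UUIDs the boot configuration expects:
grep UUID= /etc/fstab

# If a filesystem was re-created and its UUID changed, either edit
# /etc/fstab to match blkid's output, or stamp the old UUID back
# onto an ext2/ext3 filesystem:
# tune2fs -U <uuid-from-fstab> /dev/sda1
```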
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
On Mon, 20 Sep 2004 21:37, Josh Bonnett <[EMAIL PROTECTED]> wrote:
> >Do you have benchmark results to support this assertion? Last time I
> > tested the performance of software RAID-1 on Linux I was unable to get
> > anywhere near 2x disk speed for writing.
>
> N
P3-650 machines that could sustain
a 1296MB/s IO load (that requires more than two 64bit 66MHz PCI buses).
Machines that can handle such an IO load have faster CPUs. So for any but the
very biggest machines there is no chance of CPU performance being a problem
for RAID-5.
For really big machines
On Tue, 14 Sep 2004 09:54, Donovan Baarda <[EMAIL PROTECTED]> wrote:
> Is there any up-to-date "State of the RAID Nation" statement? I'd hate
> to start digging into RAID code only to find that RAID Mk.2 was going to
> replace everything I'd been looking at
ements in this
> area!
Is there any up-to-date "State of the RAID Nation" statement? I'd hate
to start digging into RAID code only to find that RAID Mk.2 was going to
replace everything I'd been looking at.
--
Donovan Baarda <[EMAIL PROTECTED]>
http://minkirri.ap
On Mon, 13 Sep 2004 18:32, "Donovan Baarda" <[EMAIL PROTECTED]> wrote:
> > Ummm... Bit confused here, but RAID 1 is not faster, than a single disk.
> > RAID one is just for 'safety' purposes. Yes, you do have 2 disks, but
> > in an
> > ideal wor
On Mon, 13 Sep 2004 15:39, Adrian 'Dagurashibanipal' von Bidder
<[EMAIL PROTECTED]> wrote:
> While I can't really substantiate my assumption, Russell's right, in theory: in
> RAID1, you *do* have 2 disks, so reading 2 independent files *should* be
> possible without too much seeking.
>
> But OTOH you mi
On Mon, 13 Sep 2004 05:20, Adrian 'Dagurashibanipal' von Bidder
<[EMAIL PROTECTED]> wrote:
> > Machines that can handle such an IO load have faster CPUs. So for any
> > but the very biggest machines there is no chance of CPU performance being
> > a problem for R
G'day again,
From: "Andrew Miehs" <[EMAIL PROTECTED]>
[...]
> Ummm... Bit confused here, but RAID 1 is not faster, than a single disk.
> RAID one is just for 'safety' purposes. Yes, you do have 2 disks, but
> in an
> ideal world, they will both be s
On Mon, 13 Sep 2004 09:55, Donovan Baarda <[EMAIL PROTECTED]> wrote:
> > Do you have benchmark results to support this assertion? Last time I
> > tested the performance of software RAID-1 on Linux I was unable to get
> > anywhere near 2x disk speed for writing. I
On Mon, 13 Sep 2004 16:39, Andrew Miehs <[EMAIL PROTECTED]> wrote:
> Ummm... Bit confused here, but RAID 1 is not faster, than a single disk.
RAID-1 in the strict definition has two disks with the same data. In the
modern loose definition it means two or more disks with the same data
Hi all,
Ummm... Bit confused here, but RAID 1 is not faster than a single disk.
RAID one is just for 'safety' purposes. Yes, you do have 2 disks, but in an
ideal world, they will both be synced with one another, and both be doing
exactly the same thing at the same time.
If you want
On Monday 13 September 2004 01.55, Donovan Baarda wrote:
> On Mon, 2004-09-13 at 00:41, Russell Coker wrote:
[reading 2 files from RAID1]
While I can't really substantiate my assumption, Russell's right, in theory: in
RAID1, you *do* have 2 disks, so reading 2 independent files *should* be
possible w
On Mon, 2004-09-13 at 00:41, Russell Coker wrote:
> On Mon, 6 Sep 2004 23:35, Adrian 'Dagurashibanipal' von Bidder
> <[EMAIL PROTECTED]> wrote:
[...]
> Do you have benchmark results to support this assertion? Last time I tested
> the performance of software RAID-1
> no chance of CPU performance being
> a problem for RAID-5.
You certainly have more experience than I - I was thinking about machines
where the CPU is already heavily loaded by userspace tasks, where the
additional load from RAID5 might be a problem. Don't know for certain,
though.
[me being silly]
> I
Dear Lucas,
I tried with linear and my debian just boots! I'm sending you my lilo.conf
file (in case it seems useful for your how-to with LILO) after making some
changes that "lilo-doc" recommended for raid booting.
As I thought, it couldn't be the only problem... C
On Wed, 8 Sep 2004 15:14:03 -0600 (MDT), Lucas wrote in message
<[EMAIL PROTECTED]>:
>
> Arnt Karlsen said:
> > ..play with this:
> > #!/bin/sh
> > /bin/cp -f /usr/share/grub/i386-pc/* /boot/grub
> > /usr/sbin/grub --batch < /dev/null 2> /dev/null
> > # device (hd0) /dev/hda
> > # device (hd1) /d
Arnt Karlsen said:
> ..play with this:
> #!/bin/sh
> /bin/cp -f /usr/share/grub/i386-pc/* /boot/grub
> /usr/sbin/grub --batch < /dev/null 2> /dev/null
> # device (hd0) /dev/hda
> # device (hd1) /dev/hdc
> device (md0) /dev/md0
> root (md0,0)
> # setup (hd0) #installs onto /dev/hda
> # setup (hd2)
On Wed, 8 Sep 2004 18:17:56 +0200, maarten wrote in
message <[EMAIL PROTECTED]>:
> On Wednesday 08 September 2004 05:11, Arnt Karlsen wrote:
> > On Tue, 7 Sep 2004 15:33:13 -0300, Agustín wrote in message
> > <[EMAIL PROTECTED]>:
>
>
> > > Could you send me your lilo.conf configuratio
On Tue, 7 Sep 2004 15:33:13 -0300, Agustín wrote in message
<[EMAIL PROTECTED]>:
> > - Original Message -
> > From: "Arnt Karlsen" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Tuesday, September 07, 2004 3:10 PM
>
At 08:07 PM 9/7/04 +0800, Jason Lim wrote:
>> "Currently only supports Windows XP, 2000, & 2003
>I'm guessing since it is completely OS transparent it should work... not
>that I have used it.
>
>I have been wondering about the merits of using OS-transparent RAI
Dear Arnt,
Thank you very much for your reply!
Could you send me your lilo.conf configuration to compare it with mine?
Agustín
- Original Message -
From: "Arnt Karlsen" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, September 07, 2004 3:10 PM
Subjec
On Tue, 7 Sep 2004 14:25:49 -0300, Agustín wrote in message
<[EMAIL PROTECTED]>:
> Hi Everybody!
>
> I'm trying to run a bootable RAID 1. I'm using mdadm and everything
> goes fine. All the partitions are completely synchronized and the
> problem comes in the last
Hi Everybody!
I'm trying to run a bootable RAID 1. I'm using mdadm and everything goes fine. All the
partitions are completely synchronized and the problem comes in the last step:
I edit lilo to boot from the RAID and reboot. The system hangs up saying "LIL". I've
r
> On Tue, Sep 07, 2004 at 12:06:13AM -0400, Chris Wagner wrote:
> > If ur looking for a fast RAID product that's reasonably priced I'ld take a
> > look at NetCell's SyncRAID product (http://www.netcell.com/) which uses a 64
> > bit RAID-3 variant they call
On Tue, Sep 07, 2004 at 12:06:13AM -0400, Chris Wagner wrote:
> If ur looking for a fast RAID product that's reasonably priced I'ld take a
> look at NetCell's SyncRAID product (http://www.netcell.com/) which uses a 64
> bit RAID-3 variant they call RAID XL. It got
At 03:35 PM 9/6/04 +0200, Adrian 'Dagurashibanipal' von Bidder wrote:
>For writing, RAID5 tends to be noticeably slower than RAID1, especially for
>writes smaller than stripe size, because a write actually is a
>read-recompute-write cycle.
If ur looking for a fast
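That read-recompute-write cycle is just XOR arithmetic — a toy demo with single-byte "blocks", values invented purely for illustration:

```shell
# RAID5 small write: the new parity comes from the old data block and
# the old parity alone, so the other data disks are never touched:
#   P_new = P_old XOR D_old XOR D_new
D_old=0x5a; D_new=0x3c; P_old=0x77      # made-up block contents
P_new=$(( P_old ^ D_old ^ D_new ))
printf 'new parity: 0x%02x\n' "$P_new"  # prints: new parity: 0x11
# Total cost: read D_old, read P_old, write D_new, write P_new = 4 disk I/Os,
# which is why sub-stripe writes are noticeably slower than on RAID1.
```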
On Monday 06 September 2004 13.38, Dmitry Golubev wrote:
> > PS: i wouldn't recommend software raid 5 if you care about performance.
> > i am going to convert one of my raid-5 machines (4 x 80GB barracudas)
> > to raid-1 (2 x 200GB barracudas) very soon becau
Hi,
could anybody comment on the current unofficial quality of RAID6
vs. RAID5? The kernel help reads as if it's still pretty beta. Has
anybody bothered trying?
Thanks!
--
Best regards,
Kilian
> PS: i wouldn't recommend software raid 5 if you care about performance. i
> am going to convert one of my raid-5 machines (4 x 80GB barracudas) to
> raid-1 (2 x 200GB barracudas) very soon because i'm unhappy with the
> performance(*)...if i had a spare approx $600AUD, i'
I've actually done this exact thing before and it
> > worked flawlessly.
>
> Ooh you lovely people - thank you for the good news :)
i've done it too, and it works. but the catch is that it takes a lot longer
to do it this way than to just backup your data, create the new raid from
On Friday 03 September 2004 06:28, Dave Watkins wrote:
> >After that is done you can delete the old raid1 completly and add the now
> > free disk to the raid5...
> Ralph Paßgang wrote:
> I've actually done this exact thing before and it worked flawlessly.
Ooh you lovely people - thank you for the
>>>will I be able to add a third disk and convert the whole set to a 400GB RAID5 later on by
>>>logically removing the second disk from the RAID1 set?
>>Nope... migrating to a different raid configuration wipes your disks
>>So you'll have to ba
> > will I be able to add a third disk and convert the whole set to a 400GB RAID5 later on by
> > logically removing the second disk from the RAID1 set?
>
> Nope... migrating to a different raid configuration wipes your disks
> So you'll have to backup, migrate and restore.
Yes, but you can make something like this:
remove one drive from the raid-1.
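The scheme sketched above, expressed in mdadm terms — the device names are hypothetical, and note there is no redundancy from step 1 until the final resync completes, so back up first:

```shell
# 1. Drop one half of the RAID-1 mirror (md0):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# 2. Create a *degraded* 3-disk RAID-5 from the freed disk and the new
#    third disk; the keyword "missing" reserves the empty slot:
mdadm --create /dev/md1 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 missing

# 3. Make a filesystem and copy everything across:
mkfs.ext3 /dev/md1
mount /dev/md1 /mnt
# ... copy the data from the old array, e.g. with cp -a or rsync ...

# 4. Retire the old mirror and donate its last disk to the RAID-5;
#    mdadm then resyncs the array up to full redundancy:
mdadm --stop /dev/md0
mdadm /dev/md1 --add /dev/sda1
```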
On Thursday 02 September 2004 14:18, Mark Janssen wrote:
> Nope... migrating to a different raid configuration wipes your disks
> So you'll have to backup, migrate and restore.
That's a shame - I kinda knew that was the answer but hoped I'd missed
something :)
gdh
--
Nope... migrating to a different raid configuration wipes your disks
So you'll have to backup, migrate and restore.
--
Mark Janssen -- maniac(at)maniac.nl -- GnuPG Key Id: 357D2178
Unix / Linux, Open-Source and Internet Consultant @ SyConOS IT
Maniac.nl Unix-God.Net|Org MarkJanssen.o
Hello - just a quickie :)
If I construct a RAID1 with two 200GB disks, will I be able to add a third
disk and convert the whole set to a 400GB RAID5 later on by logically
removing the second disk from the RAID1 set?
Cheers,
Gavin.
Hi everybody,
I would like to know if there is a way to get a message from the RAID
controller when a disk of a raid array (raid 5) fails. I want to send
something like a mail alert in this case, so I need to know this some
other way than looking at the disk light ;-)
The server is a C
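With a hardware controller this depends on the vendor's management tools, but if the array were Linux software RAID, the stock answer is mdadm's monitor mode — a sketch; the mail address is a placeholder:

```shell
# In /etc/mdadm/mdadm.conf, tell the monitor where to send alerts:
#   MAILADDR root@localhost          <- placeholder address

# Run the monitor as a daemon; it mails on Fail/DegradedArray events.
# (Debian's mdadm package normally starts this from its init script.)
mdadm --monitor --scan --daemonise --delay=300

# Send one test message per array to confirm mail delivery works:
mdadm --monitor --scan --oneshot --test
```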
> the speeds are fairly good (surely not top of the bill
> though). the configuration is 3-disk raid5.
A while ago we started changing from GDT SCSI-RAID to 3ware SCSI/IDE.
We have models from 2 to 12 channel in use, both ATA and SATA, and are
quite happy with them. Never caused any troubles so far. Some of the
larger servers use the 12 chann
On Mon, 2004-08-16 at 10:19, Russell Coker wrote:
> That read speed is quite poor. I would expect to see better speeds than that
> from a single S-ATA disk! For several years people have been reporting
> better speeds than that from 3ware controllers (although almost no-one tested
> with as fe
On Mon, 16 Aug 2004 17:39, "R.M. Evers" <[EMAIL PROTECTED]> wrote:
> pci, the speeds are fairly good (surely not top of the bill though). the
> configuration is 3-disk raid5. fyi, here's the hdparm test:
>
> /dev/sda:
> Timing buffered disk reads: 64 MB in 1.43 seconds = 44.76 MB/sec
That read
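hdparm -t only measures raw sequential reads from the device; a rough cross-check through the filesystem can be done with plain dd — a sketch, and for honest read numbers the file should be larger than RAM so the page cache can't serve it:

```shell
# Sequential write test: GNU dd reports the rate itself; conv=fsync
# makes it wait until the data has actually hit the disk.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fsync

# Sequential read test (use a count larger than RAM in real use):
dd if=/tmp/ddtest of=/dev/null bs=1M

rm -f /tmp/ddtest
```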
On Sun, 2004-08-15 at 18:08, IOhannes m zmoelnig wrote:
> hello.
>
> we are planning to set up a serial-ATA based fileserver (>1 Terabyte).
> the host will be dedicated solely to serving files (via samba and nfs
> (and probably appletalk), so i was thinking about a soft
On Sun, Aug 15, 2004 at 06:08:31PM +0200, IOhannes m zmoelnig wrote:
>hello.
>
>we are planning to set up a serial-ATA based fileserver (>1 Terabyte).
>the host will be dedicated solely to serving files (via samba and nfs
>(and probably appletalk), so i was thinking about a soft
hello.
we are planning to set up a serial-ATA based fileserver (>1 Terabyte).
the host will be dedicated solely to serving files (via samba and nfs
(and probably appletalk), so i was thinking about a software-raid solution.
are there any cards you would recommend ?
i was thinking about highpo
Lucas Albers wrote:
I have directions on grub and lilo config for software raid systems.
Switching to software raid from non-raid and setting lilo.conf and
grub.conf correctly.
This might help:
http://rootraiddoc.alioth.debian.org
Hi Lucas,
Thanks for that, after reading through your doco I found
I have directions on grub and lilo config for software raid systems.
Switching to software raid from non-raid and setting lilo.conf and
grub.conf correctly.
This might help:
http://rootraiddoc.alioth.debian.org
--
--Luke CS Sysadmin, Montana State University-Bozeman
--
To UNSUBSCRIBE
On Tue, 3 Aug 2004 13:43:37 +1000, Clayton wrote in message
<[EMAIL PROTECTED]>:
> Hi,
> I am currently looking into a problem I have with LILO & Software
> RAID. When upgrading a kernel, with boot=/dev/md0 in lilo.conf,
> running lilo succeeds, but reboot fails wit
Hi,
I am currently looking into a problem I have with LILO & Software RAID.
When upgrading a kernel, with boot=/dev/md0 in lilo.conf, running lilo
succeeds, but reboot fails with LI 40 40 type errors. The workaround I
have used for this in the past is to change boot= to /dev/hda, then run
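An alternative to the boot=/dev/hda workaround: lilo's raid-extra-boot option writes boot records onto the raw member disks as well — a sketch, assuming a hypothetical two-disk RAID-1 root on /dev/md0:

```
# /etc/lilo.conf sketch -- hypothetical two-disk RAID-1 root
boot=/dev/md0
raid-extra-boot=/dev/hda,/dev/hdc   # also install boot sectors on both raw disks
root=/dev/md0
image=/vmlinuz
        label=Linux
        read-only
```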
Thanks that was very helpful.
Debian is now being installed.
On 30/07/04 16:19 +0200, Jeroen Coekaerts wrote:
> On Thu, 2004-07-29 at 15:11 -0400, Theodore Knab wrote:
> > Hello I am stuck.
> >
> > Knoppix finds this device. My debian woody image does not.
> >
> > :01:01.0 SCSI storage con
On Thu, 2004-07-29 at 15:11 -0400, Theodore Knab wrote:
> Hello I am stuck.
>
> Knoppix finds this device. My debian woody image does not.
>
> :01:01.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X
> Fusion-MPT Dual Ultra320 SCSI (rev 07)
> Subsystem: IBM: Unknown
## Theodore Knab ([EMAIL PROTECTED]):
>
> :01:01.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X
> Fusion-MPT Dual Ultra320 SCSI (rev 07)
> Subsystem: IBM: Unknown device 026d
> Flags: bus master, 66MHz, medium devsel, latency 72, IRQ 22
> I/O ports
Thanks that looks the most promising info I have found.
On 29/07/04 21:49 +0200, Rasmus Glud wrote:
> Hiya,
>
> did you see this thread on the debian list archive ?
>
> http://lists.debian.org/debian-boot/2003/02/msg00586.html
>
> * Theodore Knab ([EMAIL PROTECTED]) wrote:
> > Hello I am stuc
Hello I am stuck.
Knoppix finds this device. My debian woody image does not.
:01:01.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X
Fusion-MPT Dual Ultra320 SCSI (rev 07)
Subsystem: IBM: Unknown device 026d
Flags: bus master, 66MHz, medium devsel, latency 7
dies while the machine is running
> > > and then replace the defective hardware during a scheduled
> > > maintenance time.
> >
> > Except that in my experience a dead IDE drive takes the whole system
> > with it even with MD RAID, the system just locks up. (yes e
On Fri, 2 Jul 2004 05:09, Christoph Moench-Tegeder <[EMAIL PROTECTED]> wrote:
> Seriously, as I need more disk space and CPU than disk IO, I went for
> RAID 5. If level 0 or 1 fits your application better, software RAID
> might be an option. But why burn CPU on RAID when your cont
tenance time.
>
> Except that in my experience a dead IDE drive takes the whole system with
> it even with MD RAID, the system just locks up. (yes even on say three
> 'independent' channels).
That hasn't been my experience, maybe I haven't had a drive die in th ri
drive takes the whole system with
it even with MD RAID, the system just locks up. (yes even on say three
'independent' channels).
This isn't the case with decent hardware IDE RAID controllers (3ware comes
to mind, promise does NOT)
YMMV of course...I've kind of thought about d
can leave the machine running with a dead disk instead of having
> > to do an emergency hardware replacement job.
>
> I've not tried Linux's software RAID for about 5 years now. How much does
> hotswapping a dead IDE drive kill the machine? Does this at all depend
## Russell Coker ([EMAIL PROTECTED]):
> > Yes. Given the price of RAID controllers (ServerRAID, for example) and
> > the problems of software RAID, I strongly suggest getting a decent
> > controller and do whatever RAID level you need.
> Hardware RAID is more expensive.
n leave the machine running with a dead disk instead of having
> >to do an emergency hardware replacement job.
>
> I've not tried Linux's software RAID for about 5 years now. How much does
> hotswapping a dead IDE drive kill the machine? Does this at all depend on
not tried Linux's software RAID for about 5 years now. How much does
hotswapping a dead IDE drive kill the machine? Does this at all depend on
the IDE controller or can most modern ones cope with the abuse?
Hardware RAID I've never worried about -- other than how much effort it took
t be a significant load either.
> Generally the more disks in a RAID-5 the better the performance that you can
> get, so having a four-disk RAID-5 is likely to give better performance for no
> cost (run "iostat -x 10" to verify this).
Since we are at the testing stage at the mom
On Thu, 1 Jul 2004 20:37, Jogi Hofmüller <[EMAIL PROTECTED]> wrote:
> * Gustavo Polillo <[EMAIL PROTECTED]> [2004-06-30 17:22]:
> > Is it possible to make lvm with raid ?? Is there anyone here that make
> > it? thanks.
>
> We just recently started tests with ada
> Is it possible to make lvm with raid ?? Is there anyone here that make it?
> thanks.
I use LVM over software RAID 1 (mirroring). I use Debian stable
and I decided that a boot partition over LVM was not worth it,
especially because of trouble in case of disaster recovery.
So my RAID is as
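The layout described — LVM on top of a RAID-1 mirror, with /boot kept outside LVM — can be sketched with mdadm and the LVM tools; the partitions and the "vg0" name are invented for illustration:

```shell
# Small mirrored partition for /boot: plain ext3, no LVM on top,
# so the bootloader and rescue media can read it directly:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0

# Large mirrored partition becomes the LVM physical volume:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 10G -n root vg0
mkfs.ext3 /dev/vg0/root
```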
Hi!
* Gustavo Polillo <[EMAIL PROTECTED]> [2004-06-30 17:22]:
> Is it possible to make lvm with raid ?? Is there anyone here that make it?
> thanks.
We just recently started tests with Adaptec's ZCR cards (2010S) and
aic-7902 controllers. Our solution is to have one disk to hol
On Thu, 1 Jul 2004 17:43, Christoph Moench-Tegeder <[EMAIL PROTECTED]> wrote:
> ## Russell Coker ([EMAIL PROTECTED]):
> > > ## Gustavo Polillo ([EMAIL PROTECTED]):
> > > > Is it possible to make lvm with raid ?? Is there anyone here that
> > > > ma
## Russell Coker ([EMAIL PROTECTED]):
> > ## Gustavo Polillo ([EMAIL PROTECTED]):
> > > Is it possible to make lvm with raid ?? Is there anyone here that make
> > > it?
> > Works as expected. RAID appears as a simple SCSI drive.
> Only for hardware RA