slow write performance with software RAID on nvme storage

2019-03-29 Thread Rick Warner
Hi All, We've been testing a 24-drive NVMe software RAID and getting far lower write speeds than expected. The drives are connected through PLX chips such that 12 drives share one x16 connection and the other 12 drives use another x16 link. The system is a Supermicro 2029U-TN24R4T. The drive
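A quick way to reproduce this kind of measurement is a direct-I/O sequential write with fio against the md device. This is only a sketch: the array name /dev/md0, block size, queue depth and job count are assumptions rather than values from the report, and writing to the raw device destroys its contents.

  fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M --direct=1 \
      --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=60 --time_based \
      --group_reporting

Running the same job against a single member drive helps separate a PCIe/PLX topology bottleneck from overhead in the md layer.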

WARNING: Software Raid 0 on SSD's and discard corrupts data

2015-05-21 Thread Holger Kiehl
Hello, all users running a Software RAID 0 on SSDs with discard should disable discard if they use any recent kernel since mid-April 2015. The bug was introduced by commit 47d68979cc968535cb87f3e5f2e6a3533ea48fbd and the fix is not yet in Linus' tree. The fix can be found here:
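To check whether a system is exposed, look for the discard mount option and, if present, remount without it until a fixed kernel is in place. A rough sketch, assuming an ext4 filesystem mounted at a hypothetical /data (other filesystems spell the option differently):

  grep discard /proc/mounts
  mount -o remount,nodiscard /data
  fstrim -v /data

The first command lists mounts using online discard, the second drops it for the ext4 mount, and periodic fstrim is a batched alternative once a fixed kernel is installed.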

Anomaly with 2 x 840Pro SSDs in software raid 1

2013-09-20 Thread Andrei Banu
Hello, We have a troubling server fitted with 2 Samsung 840 Pro SSDs. Besides other problems also raised here a while ago (to which I have still found no solution), we have one more anomaly (or so I believe). Although both SSDs worked 100% of the time, their wear is very different. /dev/sda

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread Andrew Morton
x sata controller, > > > and a nvidia pci based video card. > > > > > > I have the os on a pata drive, and have made a software raid array > > > consisting of 4 sata drives attached to the pcix sata controller. > > > I created the array, and formatted w

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread jeffunit
of ram, an intel stl-2 motherboard. > It also has a promise 100 tx2 pata controller, > a supermicro marvell based 8 port pcix sata controller, > and a nvidia pci based video card. > > I have the os on a pata drive, and have made a software raid array > consisting of 4 sata driv

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread Herbert Xu
On Sun, Dec 16, 2007 at 07:56:56PM +0800, Herbert Xu wrote: > > What's spooky is that I just did a Google search and we've had reports > since 1998 all crashing on exactly the same line in tcp_recvmsg. However, there have been no reports at all since 2000 apart from this one, so the earlier ones are probably

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread Herbert Xu
Andrew Morton <[EMAIL PROTECTED]> wrote: > >> Dec 7 17:20:53 sata_fileserver kernel: Code: 6c 39 df 74 59 8d b6 00 >> 00 00 00 85 db 74 4f 8b 55 cc 8d 43 20 8b 0a 3b 48 18 0f 88 f4 05 00 >> 00 89 ce 2b 70 18 8b 83 90 00 00 00 <0f> b6 50 0d 89 d0 83 e0 02 3c >> 01 8b 43 50 83 d6 ff 39 c6 0f 82

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread Andrew Morton
so has a promise 100 tx2 pata controller, > a supermicro marvell based 8 port pcix sata controller, > and a nvidia pci based video card. > > I have the os on a pata drive, and have made a software raid array > consisting of 4 sata drives attached to the pcix sata controller.

oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-07 Thread jeffunit
nvidia pci based video card. I have the os on a pata drive, and have made a software raid array consisting of 4 sata drives attached to the pcix sata controller. I created the array and formatted it with reiserfs 3.6. I have run bonnie++ (a filesystem benchmark) on the array without incident. When I use

Re: [patch v5 1/1] md: Software Raid autodetect dev list not array

2007-08-29 Thread Randy Dunlap
Michael J. Evans wrote: From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Mic

[patch v5 1/1] md: Software Raid autodetect dev list not array

2007-08-29 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Randy Dunlap
Michael Evans wrote: On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: Michael Evans wrote: On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: Michael Evans wrote: On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: Michael Evans wrote: Oh, I see. I forgot about the changelogs. I'd se

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael Evans
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > Michael Evans wrote: > > On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > >> Michael Evans wrote: > >>> On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: > Michael Evans wrote: > > Oh, I see. I forgot about the changelogs. I'd

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Randy Dunlap
Michael Evans wrote: On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: Michael Evans wrote: On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: Michael Evans wrote: Oh, I see. I forgot about the changelogs. I'd send out version 5 now, but I'm not sure what kernel version to make the patc

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael Evans
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > Michael Evans wrote: > > On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: > >> Michael Evans wrote: > >>> Oh, I see. I forgot about the changelogs. I'd send out version 5 > >>> now, but I'm not sure what kernel version to make the patch ag

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Randy Dunlap
Michael Evans wrote: On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: Michael Evans wrote: Oh, I see. I forgot about the changelogs. I'd send out version 5 now, but I'm not sure what kernel version to make the patch against. 2.6.23-rc4 is on kernel.org and I don't see any git snapshots. A

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael Evans
On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: > Michael Evans wrote: > > Oh, I see. I forgot about the changelogs. I'd send out version 5 > > now, but I'm not sure what kernel version to make the patch against. > > 2.6.23-rc4 is on kernel.org and I don't see any git snapshots. > > Addition

Re: [patch v5 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael J. Evans
On Tuesday 28 August 2007, Jan Engelhardt wrote: > > On Aug 28 2007 06:08, Michael Evans wrote: > > > >Oh, I see. I forgot about the changelogs. I'd send out version 5 > >now, but I'm not sure what kernel version to make the patch against. > >2.6.23-rc4 is on kernel.org and I don't see any git s

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Bill Davidsen
Michael Evans wrote: Oh, I see. I forgot about the changelogs. I'd send out version 5 now, but I'm not sure what kernel version to make the patch against. 2.6.23-rc4 is on kernel.org and I don't see any git snapshots. Additionally I never could tell what git tree was the 'mainline' as it isn't

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Jan Engelhardt
On Aug 28 2007 06:08, Michael Evans wrote: > >Oh, I see. I forgot about the changelogs. I'd send out version 5 >now, but I'm not sure what kernel version to make the patch against. >2.6.23-rc4 is on kernel.org and I don't see any git snapshots. 2.6.23-rc4 is a snapshot in itself, a tagged one a

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael Evans
On 8/27/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > Michael J. Evans wrote: > > On Monday 27 August 2007, Randy Dunlap wrote: > >> On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote: > >> > >>> = > >>> --- linux/drivers/md/md.c.or

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Randy Dunlap
Michael J. Evans wrote: On Monday 27 August 2007, Randy Dunlap wrote: On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote: = --- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700 +++ linux/drivers/md/md.c 200

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Michael J. Evans
On Monday 27 August 2007, Randy Dunlap wrote: > On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote: > > > = > > --- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700 > > +++ linux/drivers/md/md.c 2007-08-21 04:3

Re: [patch v4 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Randy Dunlap
On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote: > Note: between 2.6.22 and 2.6.23-rc3-git5 > rdev = md_import_device(dev,0, 0); > became > rdev = md_import_device(dev,0, 90); > So the patch has been edited to patch around that line. (might be fuzzy) so y

[patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Michael Evans
On 8/26/07, Kyle Moffett <[EMAIL PROTECTED]> wrote: > On Aug 26, 2007, at 08:20:45, Michael Evans wrote: > > Also, I forgot to mention, the reason I added the counters was > > mostly for debugging. However they're also as useful in the same > > way that listing the partitions when a new disk is ad

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Kyle Moffett
On Aug 26, 2007, at 08:20:45, Michael Evans wrote: Also, I forgot to mention, the reason I added the counters was mostly for debugging. However they're also as useful in the same way that listing the partitions when a new disk is added can be (in fact this augments that and the existing mes

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Michael Evans
On 8/26/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote: > > > From: Michael J. Evans <[EMAIL PROTECTED]> > > > > Is there any way to tell the user what device (or partition?) is > being skipped? This printk should just print (confirm) that >

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Randy Dunlap
On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote: > From: Michael J. Evans <[EMAIL PROTECTED]> > > In current release kernels the md module (Software RAID) uses a static array > (dev_t[128]) to store partition/device info temporarily for autostart. > > This pa

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Michael Evans
On 8/26/07, Jan Engelhardt <[EMAIL PROTECTED]> wrote: > > On Aug 26 2007 04:51, Michael J. Evans wrote: > > { > >- if (dev_cnt >= 0 && dev_cnt < 127) > >- detected_devices[dev_cnt++] = dev; > >+ struct detected_devices_node *node_detected_dev; > >+ node_detected_dev = kz

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Jan Engelhardt
On Aug 26 2007 04:51, Michael J. Evans wrote: > { >- if (dev_cnt >= 0 && dev_cnt < 127) >- detected_devices[dev_cnt++] = dev; >+ struct detected_devices_node *node_detected_dev; >+ node_detected_dev = kzalloc(sizeof(*node_detected_dev), GFP_KERNEL);\ What's the \ good

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Michael Evans
Also, I forgot to mention, the reason I added the counters was mostly for debugging. However they're also as useful in the same way that listing the partitions when a new disk is added can be (in fact this augments that and the existing messages the autodetect routines provide). As for using auto

[patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch 1/1] md: Software Raid autodetect dev list not array

2007-08-23 Thread Michael Evans
wn <[EMAIL PROTECTED]> wrote: > On Wednesday August 22, [EMAIL PROTECTED] wrote: > > From: Michael J. Evans <[EMAIL PROTECTED]> > > > > In current release kernels the md module (Software RAID) uses a static array > > (dev_t[128]) to store partition/device info

Re: [patch 1/1] md: Software Raid autodetect dev list not array

2007-08-23 Thread Neil Brown
On Wednesday August 22, [EMAIL PROTECTED] wrote: > From: Michael J. Evans <[EMAIL PROTECTED]> > > In current release kernels the md module (Software RAID) uses a static array > (dev_t[128]) to store partition/device info temporarily for autostart. > > This patch replace

[PATCH] [442/2many] MAINTAINERS - SOFTWARE RAID (Multiple Disks) SUPPORT

2007-08-13 Thread joe
Add file pattern to MAINTAINER entry Signed-off-by: Joe Perches <[EMAIL PROTECTED]> diff --git a/MAINTAINERS b/MAINTAINERS index d17ae3d..29a2179 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4205,6 +4205,8 @@ P:Neil Brown M: [EMAIL PROTECTED] L: [EMAIL PROTECTED] S: Suppo

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Theodore Tso
On Mon, Jul 30, 2007 at 09:39:39PM +0200, Miklos Szeredi wrote: > > Extrapolating these %cpu numbers makes ZFS the fastest. > > > > Are you sure these numbers are correct? > > Note that %cpu numbers for fuse filesystems are inherently skewed, > because the CPU usage of the filesystem process itse

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
On Mon, 30 Jul 2007, Miklos Szeredi wrote: Extrapolating these %cpu numbers makes ZFS the fastest. Are you sure these numbers are correct? Note that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is not taken into account. So

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Dave Kleikamp
On Mon, 2007-07-30 at 10:29 -0400, Justin Piszcz wrote: > Overall JFS seems the fastest but reviewing the mailing list for JFS it > seems like there a lot of problems, especially when people who use JFS > 1 > year, their speed goes to 5 MiB/s over time and the defragfs tool has been > removed(?

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Miklos Szeredi
> Extrapolating these %cpu numbers makes ZFS the fastest. > > Are you sure these numbers are correct? Note that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is not taken into account. So the numbers are not all that good, but acc

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Al Boldi
Justin Piszcz wrote: > CONFIG: > > Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. > Kernel was 2.6.21 or 2.6.22, did these awhile ago. > Hardware was SATA with PCI-e only, nothing on the PCI bus. > > ZFS was userspace+fuse of course. Wow! Use

bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
CONFIG: Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. Kernel was 2.6.21 or 2.6.22, did these awhile ago. Hardware was SATA with PCI-e only, nothing on the PCI bus. ZFS was userspace+fuse of course. Reiser was V3. EXT4 was created using the recommended options on its
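For reference, a bonnie++ run along the lines of the ones summarized above looks roughly like this; the mount point is a placeholder, and the -s size should be at least twice the machine's RAM so the page cache doesn't dominate the result:

  bonnie++ -d /mnt/raid5 -s 16g -n 0 -u root

Here -d selects a test directory on the RAID 5 array, -s sets the file size, -n 0 skips the small-file creation phase, and -u sets the user to run as.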

Re: Software RAID 5 - Two reads are faster than one on a SW RAID5?

2007-07-20 Thread Justin Piszcz
On Fri, 20 Jul 2007, Lennart Sorensen wrote: On Fri, Jul 20, 2007 at 09:58:50AM -0400, Justin Piszcz wrote: I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS. I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for x86_64, when I ran md5sum -c MD5SUMS, I

Re: Software RAID 5 - Two reads are faster than one on a SW RAID5?

2007-07-20 Thread Lennart Sorensen
On Fri, Jul 20, 2007 at 09:58:50AM -0400, Justin Piszcz wrote: > I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS. > > I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for > x86_64, when I ran md5sum -c MD5SUMS, I see ~280-320MB/s. When I ran the > secon

Software RAID 5 - Two reads are faster than one on a SW RAID5?

2007-07-20 Thread Justin Piszcz
I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS. I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for x86_64, when I ran md5sum -c MD5SUMS, I see ~280-320MB/s. When I ran the second one I see upwards of what I should be seeing 500-520MB/s. NOTE:: The

Re: Help needed: Partitioned software raid > 2TB

2007-06-16 Thread Alexander E. Patrakov
Jan Engelhardt wrote: I am not sure (would have to check again), but I believe both opensuse and fedora (the latter of which uses LVM for all partitions by default) have that working, while still using GRUB. Keyword: partitions. I.e., they partition the hard drive (so that the first 31 sector

Re: Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Jan Engelhardt
On Jun 16 2007 11:38, Alexander E. Patrakov wrote: > Jan Engelhardt wrote: >> On Jun 15 2007 16:03, Christian Schmidt wrote: > >> > Thanks for the clarification. I didn't use LVM on the device on purpose, >> > as root on LVM requires initrd (which I strongly dislike as >> > yet-another-point-of-fa

Re: Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Alexander E. Patrakov
Jan Engelhardt wrote: On Jun 15 2007 16:03, Christian Schmidt wrote: Thanks for the clarification. I didn't use LVM on the device on purpose, as root on LVM requires initrd (which I strongly dislike as yet-another-point-of-failure). As LVM is on the large partition anyway I'll just add the sec

Re: Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Jan Engelhardt
On Jun 15 2007 16:03, Christian Schmidt wrote: >Hi Andi, > >Andi Kleen wrote: >> Christian Schmidt <[EMAIL PROTECTED]> writes: >>> Where is the inherent limit? The partitioning software, or partitioning >>> all by itself? >> >> DOS style partitioning don't support more than 2TB. You either need >

Re: Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Christian Schmidt
Hi Andi, Andi Kleen wrote: > Christian Schmidt <[EMAIL PROTECTED]> writes: >> Where is the inherent limit? The partitioning software, or partitioning >> all by itself? > > DOS style partitioning doesn't support more than 2TB. You either need > to use EFI partitions (e.g. using parted) or LVM. Since

Re: Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Andi Kleen
Christian Schmidt <[EMAIL PROTECTED]> writes: > > Where is the inherent limit? The partitioning software, or partitioning > all by itself? DOS style partitioning doesn't support more than 2TB. You either need to use EFI partitions (e.g. using parted) or LVM. Since parted's user interface is not goo
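Switching the device to a GPT label with parted, as suggested above, looks roughly like this. It is destructive (the existing partition table is wiped), the device name is taken from the thread, and the single full-size partition is only an example:

  parted -s /dev/md_d5 mklabel gpt
  parted -s /dev/md_d5 mkpart primary 0% 100%
  partprobe /dev/md_d5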

Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Christian Schmidt
Hi everyone, I added a drive to a linux software RAID-5 last night. Now that worked fine... until I changed the partition table. Disk /dev/md_d5: 2499.9 GB, 240978560 bytes 2 heads, 4 sectors/track, 610349360 cylinders Units = cylinders of 8 * 512 = 4096 bytes Device Boot Start

Re: FEATURE REQUEST: merge MD software raid and LVM in one unique layer.

2007-05-03 Thread Miguel Sousa Filipe
locally attached disks * DOS-style disk partitions (used extensively on Linux systems) * GPT disk partitions (mainly used on IA-64) * S/390 disk partitions (CDL/LDL) * BSD disk partitions * Macintosh disk partitions * Linux MD/Software-RAID devices * Linux LVM volume groups and

Re: FEATURE REQUEST: merge MD software raid and LVM in one unique layer.

2007-05-03 Thread David Greaves
tensively on Linux systems) * GPT disk partitions (mainly used on IA-64) * S/390 disk partitions (CDL/LDL) * BSD disk partitions * Macintosh disk partitions * Linux MD/Software-RAID devices * Linux LVM volume groups and logical volumes (versions 1 and 2) Anything else? Oh

Re: FEATURE REQUEST: merge MD software raid and LVM in one unique layer.

2007-05-02 Thread david
On Wed, 2 May 2007, Miguel Sousa Filipe wrote: On 5/2/07, Diego Calleja <[EMAIL PROTECTED]> wrote: On Wed, 2 May 2007 20:18:55 +0100, "Miguel Sousa Filipe" <[EMAIL PROTECTED]> wrote: > I find it highly irritating having two kernel interfaces and two > userland tools that provide the same

Re: FEATURE REQUEST: merge MD software raid and LVM in one unique layer.

2007-05-02 Thread Miguel Sousa Filipe
On 5/2/07, Diego Calleja <[EMAIL PROTECTED]> wrote: On Wed, 2 May 2007 20:18:55 +0100, "Miguel Sousa Filipe" <[EMAIL PROTECTED]> wrote: > I find it highly irritating having two kernel interfaces and two > userland tools that provide the same functionality, which one should I > use? I doubt us

Re: FEATURE REQUEST: merge MD software raid and LVM in one unique layer.

2007-05-02 Thread Diego Calleja
On Wed, 2 May 2007 20:18:55 +0100, "Miguel Sousa Filipe" <[EMAIL PROTECTED]> wrote: > I find it highly irritating having two kernel interfaces and two > userland tools that provide the same functionality, which one should I > use? I doubt users care about kernel's design; however the lack of un

FEATURE REQUEST: merge MD software raid and LVM in one unique layer.

2007-05-02 Thread Miguel Sousa Filipe
Hello kernel hackers, Some weeks ago, in a ZFS-related thread, some kernel hackers asked users what they liked in ZFS that Linux didn't have, so that they could (possibly) work on it. So, here is my feature request: - merge the MD software raid framework and LVM into one unique API/fram

Re: Kernel 2.6.20.4: Software RAID 5: ata13.00: (irq_stat 0x00020002, failed to transmit command FIS)

2007-04-09 Thread Tejun Heo
Justin Piszcz wrote: > > > On Thu, 5 Apr 2007, Justin Piszcz wrote: > >> Had a quick question, this is the first time I have seen this happen, >> and it was not even under heavy I/O, hardly anything was going >> on with the box at the time. > > .. snip .. > > # /usr/bin/time badblocks -

Re: Kernel 2.6.20.4: Software RAID 5: ata13.00: (irq_stat 0x00020002, failed to transmit command FIS)

2007-04-05 Thread Justin Piszcz
On Thu, 5 Apr 2007, Justin Piszcz wrote: Had a quick question, this is the first time I have seen this happen, and it was not even under heavy I/O, hardly anything was going on with the box at the time. .. snip .. # /usr/bin/time badblocks -b 512 -s -v -w /dev/sdl Checking for bad b

Kernel 2.6.20.4: Software RAID 5: ata13.00: (irq_stat 0x00020002, failed to transmit command FIS)

2007-04-05 Thread Justin Piszcz
Had a quick question: this is the first time I have seen this happen, and it was not even under heavy I/O; hardly anything was going on with the box at the time. Any idea what could have caused this? I am running a badblocks test right now, but so far the disk looks OK. [369143.91609
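Alongside the destructive badblocks -w pass shown above, a read-only surface scan plus a SMART query is a lower-risk first check; /dev/sdl is the drive from the report, and smartctl comes from the smartmontools package:

  badblocks -b 512 -s -v /dev/sdl
  smartctl -H -A /dev/sdl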

Re: Software RAID (non-preempt) server blocking question. (2.6.20.4)

2007-03-30 Thread Justin Piszcz
On Fri, 30 Mar 2007, Neil Brown wrote: On Thursday March 29, [EMAIL PROTECTED] wrote: Did you look at "cat /proc/mdstat" ?? What sort of speed was the check running at? Around 44MB/s. I do use the following optimization, perhaps a bad idea if I want other processes to 'stay alive'? echo
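The optimization being discussed is the md resync/check rate limit in /proc. A high minimum guarantees bandwidth for the check but can starve normal I/O; lowering it lets foreground traffic through at the cost of a slower check. A sketch with example values only:

  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  echo 1000 > /proc/sys/dev/raid/speed_limit_min
  echo 200000 > /proc/sys/dev/raid/speed_limit_max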

Re: Software RAID (non-preempt) server blocking question. (2.6.20.4)

2007-03-29 Thread Neil Brown
On Thursday March 29, [EMAIL PROTECTED] wrote: > > > > > Did you look at "cat /proc/mdstat" ?? What sort of speed was the check > > running at? > Around 44MB/s. > > I do use the following optimization, perhaps a bad idea if I want other > processes to 'stay alive'? > > echo "Setting minimum res

Re: Software RAID (non-preempt) server blocking question. (2.6.20.4)

2007-03-29 Thread Henrique de Moraes Holschuh
On Thu, 29 Mar 2007, Justin Piszcz wrote: > >Did you look at "cat /proc/mdstat" ?? What sort of speed was the check > >running at? > Around 44MB/s. > > I do use the following optimization, perhaps a bad idea if I want other > processes to 'stay alive'? > > echo "Setting minimum resync speed to 2

Re: Software RAID (non-preempt) server blocking question. (2.6.20.4)

2007-03-29 Thread Justin Piszcz
On Thu, 29 Mar 2007, Henrique de Moraes Holschuh wrote: On Thu, 29 Mar 2007, Justin Piszcz wrote: Did you look at "cat /proc/mdstat" ?? What sort of speed was the check running at? Around 44MB/s. I do use the following optimization, perhaps a bad idea if I want other processes to 'stay aliv

Re: Software RAID (non-preempt) server blocking question. (2.6.20.4)

2007-03-29 Thread Justin Piszcz
On Thu, 29 Mar 2007, Neil Brown wrote: On Tuesday March 27, [EMAIL PROTECTED] wrote: I ran a check on my SW RAID devices this morning. However, when I did so, I had a few lftp sessions open pulling files. After I executed the check, the lftp processes entered 'D' state and I could do 'nothi

Re: Software RAID (non-preempt) server blocking question. (2.6.20.4)

2007-03-28 Thread Neil Brown
On Tuesday March 27, [EMAIL PROTECTED] wrote: > I ran a check on my SW RAID devices this morning. However, when I did so, > I had a few lftp sessions open pulling files. After I executed the check, > the lftp processes entered 'D' state and I could do 'nothing' in the > process until the check

Software RAID (non-preempt) server blocking question. (2.6.20.4)

2007-03-27 Thread Justin Piszcz
I ran a check on my SW RAID devices this morning. However, when I did so, I had a few lftp sessions open pulling files. After I executed the check, the lftp processes entered 'D' state and I could do 'nothing' in the process until the check finished. Is this normal? Should a check block all

Re: Need a little help with Software Raid 1

2007-02-21 Thread Sander
Marc Perkel wrote (ao): > I have a partition that used to be part of a software > raid 1 array. It is now loaded as /dev/sda3 but I'd > like to mirror it to /dev/sdb3 without losing the data > on the drive. I'm a little nervous about how to set it > up as I don't

Need a little help with Software Raid 1

2007-02-21 Thread Marc Perkel
I have a partition that used to be part of a software raid 1 array. It is now loaded as /dev/sda3 but I'd like to mirror it to /dev/sdb3 without losing the data on the drive. I'm a little nervous about how to set it up as I don't want to wipe out the data. How do I do this? Usi
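One commonly suggested recipe (not necessarily the one given in the replies) is to build a degraded RAID 1 on the empty partition, copy the data across, and only then add the original partition. The device names are the ones from the question, the filesystem and paths are placeholders, and the final step overwrites /dev/sda3, so verify the copy and keep a backup first:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb3
  mkfs.ext3 /dev/md0
  mount /dev/md0 /mnt/new && rsync -aHAX /mnt/old/ /mnt/new/
  mdadm /dev/md0 --add /dev/sda3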

Re: Modular kernel (2.6.20) and software raid auto detection

2007-02-15 Thread Neil Brown
On Thursday February 15, [EMAIL PROTECTED] wrote: > > With my ide driver and the md stuff all built into the kernel, my software > raid drives and associated /dev/md? devices are detected and created by the > kernel. Yep. > > With the md stuff built in but the ide driver

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-13 Thread Justin Piszcz
On Sat, 13 Jan 2007, Al Boldi wrote: > Justin Piszcz wrote: > > On Sat, 13 Jan 2007, Al Boldi wrote: > > > Justin Piszcz wrote: > > > > Btw, max sectors did improve my performance a little bit but > > > > stripe_cache+read_ahead were the main optimizations that made > > > > everything go faster

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Al Boldi
Justin Piszcz wrote: > On Sat, 13 Jan 2007, Al Boldi wrote: > > Justin Piszcz wrote: > > > Btw, max sectors did improve my performance a little bit but > > > stripe_cache+read_ahead were the main optimizations that made > > > everything go faster by about ~1.5x. I have individual bonnie++ > > > b

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Justin Piszcz
On Sat, 13 Jan 2007, Al Boldi wrote: > Justin Piszcz wrote: > > Btw, max sectors did improve my performance a little bit but > > stripe_cache+read_ahead were the main optimizations that made everything > > go faster by about ~1.5x. I have individual bonnie++ benchmarks of > > [only] the max_se

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Al Boldi
Justin Piszcz wrote: > Btw, max sectors did improve my performance a little bit but > stripe_cache+read_ahead were the main optimizations that made everything > go faster by about ~1.5x. I have individual bonnie++ benchmarks of > [only] the max_sector_kb tests as well, it improved the times from

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Bill Davidsen
Justin Piszcz wrote: # echo 3 > /proc/sys/vm/drop_caches # dd if=/dev/md3 of=/dev/null bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s # for i in sde sdg sdi sdk; do echo 192 > /sys/block/"$i"/queue/max_sectors_kb; echo "S

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Justin Piszcz
Btw, max sectors did improve my performance a little bit but stripe_cache+read_ahead were the main optimizations that made everything go faster by about ~1.5x. I have individual bonnie++ benchmarks of [only] the max_sector_kb tests as well, it improved the times from 8min/bonnie run -> 7min 1

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Justin Piszcz
On Fri, 12 Jan 2007, Al Boldi wrote: > Justin Piszcz wrote: > > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU > > > > This should be 1:14 not 1:06(was with a similarly sized file but not the > > same) the 1:14 is the same file as used with the other benchmarks. and to > > get that I used 256mb read

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Al Boldi
Justin Piszcz wrote: > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU > > This should be 1:14 not 1:06(was with a similarly sized file but not the > same) the 1:14 is the same file as used with the other benchmarks. and to > get that I used 256mb read-ahead and 16384 stripe size ++ 128 > max_sectors_kb

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Justin Piszcz
RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU This should be 1:14 not 1:06(was with a similarly sized file but not the same) the 1:14 is the same file as used with the other benchmarks. and to get that I used 256mb read-ahead and 16384 stripe size ++ 128 max_sectors_kb (same size as my sw raid5 ch

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Justin Piszcz
On Fri, 12 Jan 2007, Michael Tokarev wrote: > Justin Piszcz wrote: > > Using 4 raptor 150s: > > > > Without the tweaks, I get 111MB/s write and 87MB/s read. > > With the tweaks, 195MB/s write and 211MB/s read. > > > > Using kernel 2.6.19.1. > > > > Without the tweaks and with the tweaks: > >

Re: Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-12 Thread Michael Tokarev
Justin Piszcz wrote: > Using 4 raptor 150s: > > Without the tweaks, I get 111MB/s write and 87MB/s read. > With the tweaks, 195MB/s write and 211MB/s read. > > Using kernel 2.6.19.1. > > Without the tweaks and with the tweaks: > > # Stripe tests: > echo 8192 > /sys/block/md3/md/stripe_cache_siz

Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read & 195MB/s write)

2007-01-11 Thread Justin Piszcz
Using 4 raptor 150s: Without the tweaks, I get 111MB/s write and 87MB/s read. With the tweaks, 195MB/s write and 211MB/s read. Using kernel 2.6.19.1. Without the tweaks and with the tweaks: # Stripe tests: echo 8192 > /sys/block/md3/md/stripe_cache_size # DD TESTS [WRITE] DEFAULT: (512K) $ dd
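The tweaks referred to throughout this thread are block-layer and md sysfs knobs along these lines; the values follow the ones quoted in the thread (256 MB of read-ahead is 524288 512-byte sectors) but need tuning per system, and sde is just one of the member disks:

  echo 16384 > /sys/block/md3/md/stripe_cache_size
  blockdev --setra 524288 /dev/md3
  echo 128 > /sys/block/sde/queue/max_sectors_kb

The last command is repeated for each member disk of the array.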

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-15 Thread Phil Dier
On Sun, 14 Aug 2005 21:20:35 -0600 (MDT) Zwane Mwaikambo <[EMAIL PROTECTED]> wrote: > On Sun, 14 Aug 2005, Robert Love wrote: > > > On Sun, 2005-08-14 at 20:40 -0600, Zwane Mwaikambo wrote: > > > > > I'm new here, if the inode isn't being watched, what's to stop d_delete > > > from removing the

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-14 Thread Zwane Mwaikambo
On Sun, 14 Aug 2005, Robert Love wrote: > On Sun, 2005-08-14 at 20:40 -0600, Zwane Mwaikambo wrote: > > > I'm new here, if the inode isn't being watched, what's to stop d_delete > > from removing the inode before fsnotify_unlink proceeds to use it? > > Nothing. But check out > > http://kernel

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-14 Thread Robert Love
On Sun, 2005-08-14 at 20:40 -0600, Zwane Mwaikambo wrote: > I'm new here, if the inode isn't being watched, what's to stop d_delete > from removing the inode before fsnotify_unlink proceeds to use it? Nothing. But check out http://kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=com

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-14 Thread Zwane Mwaikambo
On Sun, 14 Aug 2005, Phil Dier wrote: > I just got this: > > Unable to handle kernel paging request at virtual address eeafefc0 > printing eip: > c0188487 > *pde = 00681067 > *pte = 2eafe000 > Oops: [#1] > SMP DEBUG_PAGEALLOC > Modules linked in: > CPU:1 > EIP:0060:[]Not tainted

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-14 Thread Phil Dier
I just got this: Unable to handle kernel paging request at virtual address eeafefc0 printing eip: c0188487 *pde = 00681067 *pte = 2eafe000 Oops: [#1] SMP DEBUG_PAGEALLOC Modules linked in: CPU:1 EIP:0060:[]Not tainted VLI EFLAGS: 00010296 (2.6.13-rc6) EIP is at inotify_inode_qu

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-12 Thread Sonny Rao
On Fri, Aug 12, 2005 at 12:35:05PM -0500, Phil Dier wrote: > On Fri, 12 Aug 2005 12:07:21 +1000 > Neil Brown <[EMAIL PROTECTED]> wrote: > > You could possibly put something like > > > > struct bio_vec *from; > > int i; > > bio_for_each_segment(from, bio, i) > > BUG_ON(page_

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-12 Thread Phil Dier
On Fri, 12 Aug 2005 12:07:21 +1000 Neil Brown <[EMAIL PROTECTED]> wrote: > You could possibly put something like > > struct bio_vec *from; > int i; > bio_for_each_segment(from, bio, i) > BUG_ON(page_zone(from->bv_page)==NULL); > > in generic_make_requst in drivers/

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-11 Thread Phil Dier
On Fri, 12 Aug 2005 12:07:21 +1000 Neil Brown <[EMAIL PROTECTED]> wrote: > On Thursday August 11, [EMAIL PROTECTED] wrote: > > Hi, > > > > I posted an oops a few days ago from 2.6.12.3 [1]. Here are the results > > of my tests on 2.6.13-rc6. The kernel oopses, but the box isn't > > complete

Re: 2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-11 Thread Neil Brown
On Thursday August 11, [EMAIL PROTECTED] wrote: > Hi, > > I posted an oops a few days ago from 2.6.12.3 [1]. Here are the results > of my tests on 2.6.13-rc6. The kernel oopses, but the box isn't completely > hosed; I can still log in and move around. It appears that the only things > that

2.6.13-rc6 Oops with Software RAID, LVM, JFS, NFS

2005-08-11 Thread Phil Dier
Hi, I posted an oops a few days ago from 2.6.12.3 [1]. Here are the results of my tests on 2.6.13-rc6. The kernel oopses, but the box isn't completely hosed; I can still log in and move around. It appears that the only things that are locked are the apps that were doing i/o to the test part

Re: Dual 2.8ghz xeon, software raid, lvm, jfs

2005-08-10 Thread Phil Dier
On Tue, 9 Aug 2005 19:05:30 -0400 Sonny Rao <[EMAIL PROTECTED]> wrote: > > Generally on lkml, you want to post at least the output of an oops or > panic into your post. Okay, I'll keep this in mind for future posts. Thanks. > Now, try running 2.6.13-rc6 and see if it fixes your problem, IIRC > t

Re: Dual 2.8ghz xeon, software raid, lvm, jfs

2005-08-09 Thread Sonny Rao
On Tue, Aug 09, 2005 at 09:44:56AM -0500, Phil Dier wrote: > Hi, > > I have 2 identical dual 2.8ghz xeon machines with 4gb ram, using > software raid 10 with lvm layered on top, formatted with JFS (though > at this point any filesystem with online resizing support will do). I &

Dual 2.8ghz xeon, software raid, lvm, jfs

2005-08-09 Thread Phil Dier
Hi, I have 2 identical dual 2.8ghz xeon machines with 4gb ram, using software raid 10 with lvm layered on top, formatted with JFS (though at this point any filesystem with online resizing support will do). I have the boxes stable using 2.6.10, and they pass my stress test. I was trying to update

RE: IDE PIIX vs libata piix with software raid

2005-07-20 Thread David Lewis
>> My question is, what is the recommended driver to use for the PATA >>channel? > >If you're just using hard drives, there should be no problem using >libata for both PATA and SATA. > >However, in general, the IDE driver (CONFIG_IDE) is recommended for PATA. > > Jeff I took Jeff's suggesti

Re: IDE PIIX vs libata piix with software raid

2005-07-20 Thread Jeff Garzik
David Lewis wrote: Greetings, I am developing a system using the Intel SE7520BD2 motherboard. It has an ICH5 with two SATA ports and one PATA channel. I am able to drive the PATA channel with either the normal PIIX IDE driver or the libata driver which I am using for the SATA ports. Ultimately a
