Hi All,
We've been testing a 24-drive NVMe software RAID and getting far lower
write speeds than expected. The drives are connected through PLX chips
such that 12 drives are on one x16 connection and the other 12 drives use
another x16 link. The system is a Supermicro 2029U-TN24R4T. The drive
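For context, a sequential-write test against such an array might be run roughly like this (a sketch only; fio, the device name /dev/md0 and all parameters are assumptions, not taken from the report):
# WARNING: writing to the raw md device destroys any data on it.
# Sequential 1MiB writes with O_DIRECT, reported as an aggregate.
fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting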
Hello,
all users running a software RAID 0 on SSDs with discard should disable
discard if they use any recent kernel from mid-April 2015 onwards. The bug
was introduced by commit 47d68979cc968535cb87f3e5f2e6a3533ea48fbd and
the fix is not yet in Linus' tree. The fix can be found here:
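Until that fix lands, discard can be checked for and switched off on a mounted filesystem roughly as follows (a sketch; the mount point /mnt/raid is illustrative, and the permanent change is removing 'discard' from /etc/fstab):
# Is the filesystem mounted with the online discard option?
findmnt -no OPTIONS /mnt/raid | grep -w discard
# Remount without it (ext4 and XFS both accept nodiscard)
mount -o remount,nodiscard /mnt/raid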
Hello,
We have a troublesome server fitted with two Samsung 840 Pro SSDs. Besides
other problems also raised here a while ago (to which I have still
found no solution), we have one more anomaly (or so I believe).
Although both SSDs have been in service 100% of the time, their wear is very different.
/dev/sda
x sata controller,
> > > and a nvidia pci based video card.
> > >
> > > I have the os on a pata drive, and have made a software raid array
> > > consisting of 4 sata drives attached to the pcix sata controller.
> > > I created the array, and formatted w
of ram, an intel stl-2 motherboard.
> It also has a promise 100 tx2 pata controller,
> a supermicro marvell based 8 port pcix sata controller,
> and a nvidia pci based video card.
>
> I have the os on a pata drive, and have made a software raid array
> consisting of 4 sata driv
On Sun, Dec 16, 2007 at 07:56:56PM +0800, Herbert Xu wrote:
>
> What's spooky is that I just did a google and we've had reports
> since 1998 all crashing on exactly the same line in tcp_recvmsg.
However, there have been no reports at all since 2000 apart from this
one, so the earlier ones are probably
Andrew Morton <[EMAIL PROTECTED]> wrote:
>
>> Dec 7 17:20:53 sata_fileserver kernel: Code: 6c 39 df 74 59 8d b6 00
>> 00 00 00 85 db 74 4f 8b 55 cc 8d 43 20 8b 0a 3b 48 18 0f 88 f4 05 00
>> 00 89 ce 2b 70 18 8b 83 90 00 00 00 <0f> b6 50 0d 89 d0 83 e0 02 3c
>> 01 8b 43 50 83 d6 ff 39 c6 0f 82
so has a promise 100 tx2 pata controller,
> a supermicro marvell based 8 port pcix sata controller,
> and a nvidia pci based video card.
>
> I have the os on a pata drive, and have made a software raid array
> consisting of 4 sata drives attached to the pcix sata controller.
nvidia pci based video card.
I have the OS on a PATA drive, and have made a software RAID array
consisting of 4 SATA drives attached to the PCI-X SATA controller.
I created the array and formatted it with reiserfs 3.6.
I have run bonnie++ (a filesystem benchmark) on the array without incident.
When I use
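In rough outline, the setup described above might be reproduced like this (a sketch; the device names, RAID level and mount point are assumptions, not from the post):
# Build a 4-disk md array from the SATA drives on the PCI-X controller
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1
# Format it with reiserfs 3.6 and mount it
mkreiserfs /dev/md0
mount /dev/md0 /mnt/raid
# Run the bonnie++ filesystem benchmark against the array
bonnie++ -d /mnt/raid -u nobody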
Michael J. Evans wrote:
From: Michael J. Evans <[EMAIL PROTECTED]>
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans <[EMAIL
Michael Evans wrote:
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
Michael Evans wrote:
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
Michael Evans wrote:
On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd se
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> Michael Evans wrote:
> > On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> >> Michael Evans wrote:
> >>> On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> Michael Evans wrote:
> > Oh, I see. I forgot about the changelogs. I'd
Michael Evans wrote:
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
Michael Evans wrote:
On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd send out version 5
now, but I'm not sure what kernel version to make the patc
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> Michael Evans wrote:
> > On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> >> Michael Evans wrote:
> >>> Oh, I see. I forgot about the changelogs. I'd send out version 5
> >>> now, but I'm not sure what kernel version to make the patch ag
Michael Evans wrote:
On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd send out version 5
now, but I'm not sure what kernel version to make the patch against.
2.6.23-rc4 is on kernel.org and I don't see any git snapshots.
A
On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote:
> Michael Evans wrote:
> > Oh, I see. I forgot about the changelogs. I'd send out version 5
> > now, but I'm not sure what kernel version to make the patch against.
> > 2.6.23-rc4 is on kernel.org and I don't see any git snapshots.
> > Addition
On Tuesday 28 August 2007, Jan Engelhardt wrote:
>
> On Aug 28 2007 06:08, Michael Evans wrote:
> >
> >Oh, I see. I forgot about the changelogs. I'd send out version 5
> >now, but I'm not sure what kernel version to make the patch against.
> >2.6.23-rc4 is on kernel.org and I don't see any git s
Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd send out version 5
now, but I'm not sure what kernel version to make the patch against.
2.6.23-rc4 is on kernel.org and I don't see any git snapshots.
Additionally I never could tell what git tree was the 'mainline' as it
isn't
On Aug 28 2007 06:08, Michael Evans wrote:
>
>Oh, I see. I forgot about the changelogs. I'd send out version 5
>now, but I'm not sure what kernel version to make the patch against.
>2.6.23-rc4 is on kernel.org and I don't see any git snapshots.
2.6.23-rc4 is a snapshot in itself, a tagged one a
On 8/27/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> Michael J. Evans wrote:
> > On Monday 27 August 2007, Randy Dunlap wrote:
> >> On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
> >>
> >>> =
> >>> --- linux/drivers/md/md.c.or
Michael J. Evans wrote:
On Monday 27 August 2007, Randy Dunlap wrote:
On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
=
--- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700
+++ linux/drivers/md/md.c 200
On Monday 27 August 2007, Randy Dunlap wrote:
> On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
>
> > =
> > --- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700
> > +++ linux/drivers/md/md.c 2007-08-21 04:3
On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
> Note: between 2.6.22 and 2.6.23-rc3-git5
> rdev = md_import_device(dev,0, 0);
> became
> rdev = md_import_device(dev,0, 90);
> So the patch has been edited to patch around that line. (might be fuzzy)
so y
On 8/26/07, Kyle Moffett <[EMAIL PROTECTED]> wrote:
> On Aug 26, 2007, at 08:20:45, Michael Evans wrote:
> > Also, I forgot to mention, the reason I added the counters was
> > mostly for debugging. However they're also as useful in the same
> > way that listing the partitions when a new disk is ad
On Aug 26, 2007, at 08:20:45, Michael Evans wrote:
Also, I forgot to mention, the reason I added the counters was
mostly for debugging. However they're also as useful in the same
way that listing the partitions when a new disk is added can be (in
fact this augments that and the existing mes
On 8/26/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote:
>
> > From: Michael J. Evans <[EMAIL PROTECTED]>
> >
>
> Is there any way to tell the user what device (or partition?) is
> being skipped? This printk should just print (confirm) that
>
On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote:
> From: Michael J. Evans <[EMAIL PROTECTED]>
>
> In current release kernels the md module (Software RAID) uses a static array
> (dev_t[128]) to store partition/device info temporarily for autostart.
>
> This pa
On 8/26/07, Jan Engelhardt <[EMAIL PROTECTED]> wrote:
>
> On Aug 26 2007 04:51, Michael J. Evans wrote:
> > {
> >- if (dev_cnt >= 0 && dev_cnt < 127)
> >- detected_devices[dev_cnt++] = dev;
> >+ struct detected_devices_node *node_detected_dev;
> >+ node_detected_dev = kz
On Aug 26 2007 04:51, Michael J. Evans wrote:
> {
>- if (dev_cnt >= 0 && dev_cnt < 127)
>- detected_devices[dev_cnt++] = dev;
>+ struct detected_devices_node *node_detected_dev;
>+ node_detected_dev = kzalloc(sizeof(*node_detected_dev), GFP_KERNEL);\
What's the \ good
Also, I forgot to mention, the reason I added the counters was mostly
for debugging. However, they're also useful in the same way that
listing the partitions when a new disk is added can be (in fact this
augments that and the existing messages the autodetect routines
provide).
As for using auto
wn <[EMAIL PROTECTED]> wrote:
> On Wednesday August 22, [EMAIL PROTECTED] wrote:
> > From: Michael J. Evans <[EMAIL PROTECTED]>
> >
> > In current release kernels the md module (Software RAID) uses a static array
> > (dev_t[128]) to store partition/device info
On Wednesday August 22, [EMAIL PROTECTED] wrote:
> From: Michael J. Evans <[EMAIL PROTECTED]>
>
> In current release kernels the md module (Software RAID) uses a static array
> (dev_t[128]) to store partition/device info temporarily for autostart.
>
> This patch replace
Add file pattern to MAINTAINER entry
Signed-off-by: Joe Perches <[EMAIL PROTECTED]>
diff --git a/MAINTAINERS b/MAINTAINERS
index d17ae3d..29a2179 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4205,6 +4205,8 @@ P:Neil Brown
M: [EMAIL PROTECTED]
L: [EMAIL PROTECTED]
S: Suppo
On Mon, Jul 30, 2007 at 09:39:39PM +0200, Miklos Szeredi wrote:
> > Extrapolating these %cpu number makes ZFS the fastest.
> >
> > Are you sure these numbers are correct?
>
> Note that %cpu numbers for FUSE filesystems are inherently skewed,
> because the CPU usage of the filesystem process itse
On Mon, 30 Jul 2007, Miklos Szeredi wrote:
Extrapolating these %cpu number makes ZFS the fastest.
Are you sure these numbers are correct?
Note that %cpu numbers for FUSE filesystems are inherently skewed,
because the CPU usage of the filesystem process itself is not taken
into account.
So
On Mon, 2007-07-30 at 10:29 -0400, Justin Piszcz wrote:
> Overall JFS seems the fastest, but reviewing the mailing list for JFS it
> seems like there are a lot of problems, especially for people who use JFS > 1
> year: their speed drops to 5 MiB/s over time and the defragfs tool has been
> removed(?
> Extrapolating these %cpu number makes ZFS the fastest.
>
> Are you sure these numbers are correct?
Note that %cpu numbers for FUSE filesystems are inherently skewed,
because the CPU usage of the filesystem process itself is not taken
into account.
So the numbers are not all that good, but acc
Justin Piszcz wrote:
> CONFIG:
>
> Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
> Kernel was 2.6.21 or 2.6.22, did these awhile ago.
> Hardware was SATA with PCI-e only, nothing on the PCI bus.
>
> ZFS was userspace+fuse of course.
Wow! Use
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22; did these a while ago.
Hardware was SATA with PCI-e only, nothing on the PCI bus.
ZFS was userspace+fuse of course.
Reiser was V3.
EXT4 was created using the recommended options on its
On Fri, 20 Jul 2007, Lennart Sorensen wrote:
On Fri, Jul 20, 2007 at 09:58:50AM -0400, Justin Piszcz wrote:
I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS.
I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for
x86_64, when I ran md5sum -c MD5SUMS, I
On Fri, Jul 20, 2007 at 09:58:50AM -0400, Justin Piszcz wrote:
> I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS.
>
> I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for
> x86_64, when I ran md5sum -c MD5SUMS, I see ~280-320MB/s. When I ran the
> secon
I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS.
I just pulled down the Debian Etch 4.0 DVD ISOs, one for x86 and one for
x86_64. When I ran md5sum -c MD5SUMS on the first, I saw ~280-320MB/s. When I ran the
second one, I saw upwards of what I should be seeing, 500-520MB/s.
NOTE: The
Jan Engelhardt wrote:
I am not sure (would have to check again), but I believe both opensuse and
fedora (the latter of which uses LVM for all partitions by default) have
that working, while still using GRUB.
Keyword: partitions. I.e., they partition the hard drive (so that the first
31 sector
On Jun 16 2007 11:38, Alexander E. Patrakov wrote:
> Jan Engelhardt wrote:
>> On Jun 15 2007 16:03, Christian Schmidt wrote:
>
>> > Thanks for the clarification. I didn't use LVM on the device on purpose,
>> > as root on LVM requires initrd (which I strongly dislike as
>> > yet-another-point-of-fa
Jan Engelhardt wrote:
On Jun 15 2007 16:03, Christian Schmidt wrote:
Thanks for the clarification. I didn't use LVM on the device on purpose,
as root on LVM requires initrd (which I strongly dislike as
yet-another-point-of-failure). As LVM is on the large partition anyway
I'll just add the sec
On Jun 15 2007 16:03, Christian Schmidt wrote:
>Hi Andi,
>
>Andi Kleen wrote:
>> Christian Schmidt <[EMAIL PROTECTED]> writes:
>>> Where is the inherent limit? The partitioning software, or partitioning
>>> all by itself?
>>
> >> DOS-style partitioning doesn't support more than 2TB. You either need
>
Hi Andi,
Andi Kleen wrote:
> Christian Schmidt <[EMAIL PROTECTED]> writes:
>> Where is the inherent limit? The partitioning software, or partitioning
>> all by itself?
>
> DOS-style partitioning doesn't support more than 2TB. You either need
> to use EFI partitions (e.g. using parted) or LVM. Since
Christian Schmidt <[EMAIL PROTECTED]> writes:
>
> Where is the inherent limit? The partitioning software, or partitioning
> all by itself?
DOS-style partitioning doesn't support more than 2TB. You either need
to use EFI partitions (e.g. using parted) or LVM. Since parted's
user interface is not goo
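For reference, creating an EFI (GPT) label with parted looks roughly like this (a sketch; /dev/sdb is an illustrative device name):
# Replace the DOS label with GPT so partitions larger than 2TB are possible
parted /dev/sdb mklabel gpt
# One partition spanning the whole device
parted /dev/sdb mkpart primary 0% 100%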
Hi everyone,
I added a drive to a linux software RAID-5 last night. Now that worked
fine... until I changed the partition table.
Disk /dev/md_d5: 2499.9 GB, 240978560 bytes
2 heads, 4 sectors/track, 610349360 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Device Boot Start
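For reference, the add-and-grow step being described usually looks something like this (a sketch; the new partition name and final device count are assumptions):
# Add the new disk as a spare, then reshape the RAID-5 across it
mdadm /dev/md_d5 --add /dev/sdf1
mdadm --grow /dev/md_d5 --raid-devices=5
# Watch the reshape progress
cat /proc/mdstat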
locally attached disks
* DOS-style disk partitions (used extensively on Linux systems)
* GPT disk partitions (mainly used on IA-64)
* S/390 disk partitions (CDL/LDL)
* BSD disk partitions
* Macintosh disk partitions
* Linux MD/Software-RAID devices
* Linux LVM volume groups and logical volumes (versions 1 and 2)
Anything else?
Oh
On Wed, 2 May 2007, Miguel Sousa Filipe wrote:
On 5/2/07, Diego Calleja <[EMAIL PROTECTED]> wrote:
On Wed, 2 May 2007 20:18:55 +0100, "Miguel Sousa Filipe"
<[EMAIL PROTECTED]> wrote:
> I find it highly irritating having two kernel interfaces and two
> userland tools that provide the same
On 5/2/07, Diego Calleja <[EMAIL PROTECTED]> wrote:
On Wed, 2 May 2007 20:18:55 +0100, "Miguel Sousa Filipe" <[EMAIL PROTECTED]>
wrote:
> I find it highly irritating having two kernel interfaces and two
> userland tools that provide the same functionality; which one should I
> use?
I doubt us
On Wed, 2 May 2007 20:18:55 +0100, "Miguel Sousa Filipe" <[EMAIL PROTECTED]>
wrote:
> I find it highly irritating having two kernel interfaces and two
> userland tools that provide the same functionality; which one should I
> use?
I doubt users care about the kernel's design; however the lack of un
Hello kernel hackers,
Some weeks ago, in a ZFS-related thread, some kernel hackers asked
users what they liked in ZFS that Linux didn't have, so that they
could (possibly) work on it.
So, here is my feature request:
- merge the MD software RAID framework and LVM into one unified
API/fram
Justin Piszcz wrote:
>
>
> On Thu, 5 Apr 2007, Justin Piszcz wrote:
>
>> Had a quick question, this is the first time I have seen this happen,
>> and it was not even under heavy I/O; hardly anything was going
>> on with the box at the time.
>
> .. snip ..
>
> # /usr/bin/time badblocks -
On Thu, 5 Apr 2007, Justin Piszcz wrote:
Had a quick question, this is the first time I have seen this happen, and it
was not even under heavy I/O; hardly anything was going on with the
box at the time.
.. snip ..
# /usr/bin/time badblocks -b 512 -s -v -w /dev/sdl
Checking for bad b
Had a quick question, this is the first time I have seen this happen, and
it was not even under heavy I/O; hardly anything was going on with
the box at the time.
Any idea what could have caused this? I am running a badblocks test right
now, but so far the disk looks OK.
[369143.91609
On Fri, 30 Mar 2007, Neil Brown wrote:
On Thursday March 29, [EMAIL PROTECTED] wrote:
Did you look at "cat /proc/mdstat" ?? What sort of speed was the check
running at?
Around 44MB/s.
I do use the following optimization, perhaps a bad idea if I want other
processes to 'stay alive'?
echo
On Thursday March 29, [EMAIL PROTECTED] wrote:
>
> >
> > Did you look at "cat /proc/mdstat" ?? What sort of speed was the check
> > running at?
> Around 44MB/s.
>
> I do use the following optimization, perhaps a bad idea if I want other
> processes to 'stay alive'?
>
> echo "Setting minimum res
On Thu, 29 Mar 2007, Justin Piszcz wrote:
> >Did you look at "cat /proc/mdstat" ?? What sort of speed was the check
> >running at?
> Around 44MB/s.
>
> I do use the following optimization, perhaps a bad idea if I want other
> processes to 'stay alive'?
>
> echo "Setting minimum resync speed to 2
On Thu, 29 Mar 2007, Henrique de Moraes Holschuh wrote:
On Thu, 29 Mar 2007, Justin Piszcz wrote:
Did you look at "cat /proc/mdstat" ?? What sort of speed was the check
running at?
Around 44MB/s.
I do use the following optimization, perhaps a bad idea if I want other
processes to 'stay aliv
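The "minimum resync speed" setting quoted above is presumably the md speed-limit sysctl; for reference, the knobs look like this (the values are examples only):
# Floor and ceiling for md resync/check throughput, in KiB/s per device
echo 200000 > /proc/sys/dev/raid/speed_limit_min
echo 400000 > /proc/sys/dev/raid/speed_limit_max
# A very high minimum lets a check starve other I/O; lowering it keeps
# normal processes responsive while the check runs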
On Thu, 29 Mar 2007, Neil Brown wrote:
On Tuesday March 27, [EMAIL PROTECTED] wrote:
I ran a check on my SW RAID devices this morning. However, when I did so,
I had a few lftp sessions open pulling files. After I executed the check,
the lftp processes entered 'D' state and I could do 'nothi
On Tuesday March 27, [EMAIL PROTECTED] wrote:
> I ran a check on my SW RAID devices this morning. However, when I did so,
> I had a few lftp sessions open pulling files. After I executed the check,
> the lftp processes entered 'D' state and I could do 'nothing' in the
> process until the check
I ran a check on my SW RAID devices this morning. However, when I did so,
I had a few lftp sessions open pulling files. After I executed the check,
the lftp processes entered 'D' state and I could do 'nothing' in the
process until the check finished. Is this normal? Should a check block
all
Marc Perkel wrote (ao):
> I have a partition that used to be part of a software
> raid 1 array. It is now loaded as /dev/sda3 but I'd
> like to mirror it to /dev/sdb3 without losing the data
> on the drive. I'm a little nervous about how to set it
> up as I don
I have a partition that used to be part of a software
raid 1 array. It is now loaded as /dev/sda3 but I'd
like to mirror it to /dev/sdb3 without losing the data
on the drive. I'm a little nervous about how to set it
up as I don't want to wipe out the data.
How do I do this? Usi
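One common approach (a sketch only, not the reply from the thread; device names as in the question, but verify them and keep a backup) is to build the mirror degraded on the empty partition, copy the data over, and only then attach the original:
# Create a RAID1 with sdb3 and a 'missing' slot, leaving sda3 untouched
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb3
# Put a filesystem on the degraded array and copy the data across
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/new && mount /dev/sda3 /mnt/old
cp -a /mnt/old/. /mnt/new/
# Once the copy is verified, add sda3; it resyncs from sdb3
umount /mnt/old
mdadm /dev/md0 --add /dev/sda3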
On Thursday February 15, [EMAIL PROTECTED] wrote:
>
> With my ide driver and the md stuff all built into the kernel, my software
> raid drives and associated /dev/md? devices are detected and created by the
> kernel.
Yep.
>
> With the md stuff built in but the ide driver
On Sat, 13 Jan 2007, Al Boldi wrote:
> Justin Piszcz wrote:
> > On Sat, 13 Jan 2007, Al Boldi wrote:
> > > Justin Piszcz wrote:
> > > > Btw, max sectors did improve my performance a little bit but
> > > > stripe_cache+read_ahead were the main optimizations that made
> > > > everything go faster
Justin Piszcz wrote:
> On Sat, 13 Jan 2007, Al Boldi wrote:
> > Justin Piszcz wrote:
> > > Btw, max sectors did improve my performance a little bit but
> > > stripe_cache+read_ahead were the main optimizations that made
> > > everything go faster by about ~1.5x. I have individual bonnie++
> > > b
On Sat, 13 Jan 2007, Al Boldi wrote:
> Justin Piszcz wrote:
> > Btw, max sectors did improve my performance a little bit but
> > stripe_cache+read_ahead were the main optimizations that made everything
> > go faster by about ~1.5x. I have individual bonnie++ benchmarks of
> > [only] the max_se
Justin Piszcz wrote:
> Btw, max sectors did improve my performance a little bit but
> stripe_cache+read_ahead were the main optimizations that made everything
> go faster by about ~1.5x. I have individual bonnie++ benchmarks of
> [only] the max_sector_kb tests as well, it improved the times from
Justin Piszcz wrote:
# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/md3 of=/dev/null bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 399.352 seconds, 26.9 MB/s
# for i in sde sdg sdi sdk; do echo 192 >
/sys/block/"$i"/queue/max_sectors_kb; echo "S
Btw, max_sectors_kb did improve my performance a little bit, but
stripe_cache+read_ahead were the main optimizations that made everything
go faster, by about 1.5x. I have individual bonnie++ benchmarks of
[only] the max_sectors_kb tests as well; it improved the times from 8min/bonnie
run -> 7min 1
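For reference, the three tweaks being compared amount to roughly this (device names and values are the ones quoted elsewhere in the thread, still only examples):
# Larger stripe cache for the md array (in pages per device)
echo 8192 > /sys/block/md3/md/stripe_cache_size
# 256MB of read-ahead on the array device (setra takes 512-byte sectors)
blockdev --setra 524288 /dev/md3
# Cap per-request size on the member disks
for i in sde sdg sdi sdk; do
    echo 128 > /sys/block/$i/queue/max_sectors_kb
done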
On Fri, 12 Jan 2007, Al Boldi wrote:
> Justin Piszcz wrote:
> > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
> >
> > This should be 1:14 not 1:06(was with a similarly sized file but not the
> > same) the 1:14 is the same file as used with the other benchmarks. and to
> > get that I used 256mb read
Justin Piszcz wrote:
> RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
>
> This should be 1:14 not 1:06(was with a similarly sized file but not the
> same) the 1:14 is the same file as used with the other benchmarks. and to
> get that I used 256mb read-ahead and 16384 stripe size ++ 128
> max_sectors_kb
RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
This should be 1:14, not 1:06 (that was with a similarly sized file, but not the
same one); the 1:14 is for the same file as used with the other benchmarks. To
get that I used 256MB read-ahead and 16384 stripe size ++ 128
max_sectors_kb (same size as my sw raid5 ch
On Fri, 12 Jan 2007, Michael Tokarev wrote:
> Justin Piszcz wrote:
> > Using 4 raptor 150s:
> >
> > Without the tweaks, I get 111MB/s write and 87MB/s read.
> > With the tweaks, 195MB/s write and 211MB/s read.
> >
> > Using kernel 2.6.19.1.
> >
> > Without the tweaks and with the tweaks:
> >
Justin Piszcz wrote:
> Using 4 raptor 150s:
>
> Without the tweaks, I get 111MB/s write and 87MB/s read.
> With the tweaks, 195MB/s write and 211MB/s read.
>
> Using kernel 2.6.19.1.
>
> Without the tweaks and with the tweaks:
>
> # Stripe tests:
> echo 8192 > /sys/block/md3/md/stripe_cache_siz
Using 4 raptor 150s:
Without the tweaks, I get 111MB/s write and 87MB/s read.
With the tweaks, 195MB/s write and 211MB/s read.
Using kernel 2.6.19.1.
Without the tweaks and with the tweaks:
# Stripe tests:
echo 8192 > /sys/block/md3/md/stripe_cache_size
# DD TESTS [WRITE]
DEFAULT: (512K)
$ dd
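The measurement pattern behind those numbers is roughly the following (a sketch; file path and sizes are illustrative):
# Drop the page cache so reads really hit the disks
echo 3 > /proc/sys/vm/drop_caches
# Sequential read straight from the array
dd if=/dev/md3 of=/dev/null bs=1M count=10240
# Sequential write through the filesystem, flushed to disk at the end
dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=10240 conv=fdatasync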
On Sun, 14 Aug 2005 21:20:35 -0600 (MDT)
Zwane Mwaikambo <[EMAIL PROTECTED]> wrote:
> On Sun, 14 Aug 2005, Robert Love wrote:
>
> > On Sun, 2005-08-14 at 20:40 -0600, Zwane Mwaikambo wrote:
> >
> > > I'm new here, if the inode isn't being watched, what's to stop d_delete
> > > from removing the
On Sun, 14 Aug 2005, Robert Love wrote:
> On Sun, 2005-08-14 at 20:40 -0600, Zwane Mwaikambo wrote:
>
> > I'm new here, if the inode isn't being watched, what's to stop d_delete
> > from removing the inode before fsnotify_unlink proceeds to use it?
>
> Nothing. But check out
>
> http://kernel
On Sun, 2005-08-14 at 20:40 -0600, Zwane Mwaikambo wrote:
> I'm new here, if the inode isn't being watched, what's to stop d_delete
> from removing the inode before fsnotify_unlink proceeds to use it?
Nothing. But check out
http://kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=com
On Sun, 14 Aug 2005, Phil Dier wrote:
> I just got this:
>
> Unable to handle kernel paging request at virtual address eeafefc0
> printing eip:
> c0188487
> *pde = 00681067
> *pte = 2eafe000
> Oops: [#1]
> SMP DEBUG_PAGEALLOC
> Modules linked in:
> CPU:1
> EIP:0060:[]Not tainted
I just got this:
Unable to handle kernel paging request at virtual address eeafefc0
printing eip:
c0188487
*pde = 00681067
*pte = 2eafe000
Oops: [#1]
SMP DEBUG_PAGEALLOC
Modules linked in:
CPU:1
EIP:0060:[]Not tainted VLI
EFLAGS: 00010296 (2.6.13-rc6)
EIP is at inotify_inode_qu
On Fri, Aug 12, 2005 at 12:35:05PM -0500, Phil Dier wrote:
> On Fri, 12 Aug 2005 12:07:21 +1000
> Neil Brown <[EMAIL PROTECTED]> wrote:
> > You could possibly put something like
> >
> > struct bio_vec *from;
> > int i;
> > bio_for_each_segment(from, bio, i)
> > BUG_ON(page_
On Fri, 12 Aug 2005 12:07:21 +1000
Neil Brown <[EMAIL PROTECTED]> wrote:
> You could possibly put something like
>
> struct bio_vec *from;
> int i;
> bio_for_each_segment(from, bio, i)
> BUG_ON(page_zone(from->bv_page)==NULL);
>
> in generic_make_request in drivers/
On Fri, 12 Aug 2005 12:07:21 +1000
Neil Brown <[EMAIL PROTECTED]> wrote:
> On Thursday August 11, [EMAIL PROTECTED] wrote:
> > Hi,
> >
> > I posted an oops a few days ago from 2.6.12.3 [1]. Here are the results
> > of my tests on 2.6.13-rc6. The kernel oopses, but the box isn't
> > complete
On Thursday August 11, [EMAIL PROTECTED] wrote:
> Hi,
>
> I posted an oops a few days ago from 2.6.12.3 [1]. Here are the results
> of my tests on 2.6.13-rc6. The kernel oopses, but the box isn't completely
> hosed; I can still log in and move around. It appears that the only things
> that
Hi,
I posted an oops a few days ago from 2.6.12.3 [1]. Here are the results
of my tests on 2.6.13-rc6. The kernel oopses, but the box isn't completely
hosed; I can still log in and move around. It appears that the only things
that are
locked are the apps that were doing i/o to the test part
On Tue, 9 Aug 2005 19:05:30 -0400
Sonny Rao <[EMAIL PROTECTED]> wrote:
>
> Generally on lkml, you want to post at least the output of an oops or
> panic into your post.
Okay, I'll keep this in mind for future posts. Thanks.
> Now, try running 2.6.13-rc6 and see if it fixes your problem, IIRC
> t
On Tue, Aug 09, 2005 at 09:44:56AM -0500, Phil Dier wrote:
> Hi,
>
> I have 2 identical dual 2.8ghz xeon machines with 4gb ram, using
> software raid 10 with lvm layered on top, formatted with JFS (though
> at this point any filesystem with online resizing support will do). I
Hi,
I have 2 identical dual 2.8ghz xeon machines with 4gb ram, using
software raid 10 with lvm layered on top, formatted with JFS (though
at this point any filesystem with online resizing support will do). I
have the boxes stable using 2.6.10, and they pass my stress test. I was
trying to update
>> My question is, what is the recommended driver to use for the PATA
>>channel?
>
>If you're just using hard drives, there should be no problem using
>libata for both PATA and SATA.
>
>However, in general, the IDE driver (CONFIG_IDE) is recommended for PATA.
>
> Jeff
I took Jeff's suggesti
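To confirm which driver has actually bound a given controller or disk, something like this can be used (a sketch; the disk name is illustrative):
# Show storage controllers together with the kernel driver in use
lspci -k | grep -iA3 'ide\|sata'
# Or check which driver a particular disk ended up under
readlink /sys/block/sda/device/driver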
David Lewis wrote:
Greetings,
I am developing a system using the Intel SE7520BD2 motherboard. It has an
ICH5 with two SATA ports and one PATA channel. I am able to drive the PATA
channel with either the normal PIIX IDE driver or the libata driver which I
am using for the SATA ports. Ultimately a