> Linus Torvalds wrote:
> : Jan - can you give Jens a bit of an idea of what drivers and/or
> : schedulers you're using?
>
>       I have a Tyan S2882 dual Opteron, the network is on-board tg3, and
> there are 8 P-ATA HDDs hooked to a 3ware 7506-8 controller (no HW RAID
> there, but the drives are partitioned and the partitions grouped to form
> software RAID-0, 1, 5, and 10 volumes). The main fileserving traffic
> is on a RAID-5 volume, and /var is on a RAID-10 volume.
>
>       Filesystems are XFS for that RAID-5 volume and ext3 for the rest
> of the system. I have compiled in the following I/O schedulers (according
> to my /var/log/dmesg :-)
>
> io scheduler noop registered
> io scheduler anticipatory registered
> io scheduler deadline registered
> io scheduler cfq registered
>
> I have not changed the scheduler by hand, so I suppose the anticipatory
> scheduler is the default.
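
For reference: on kernels new enough to expose /sys/block/<dev>/queue/scheduler,
the active elevator can be read from userspace rather than guessed at; the
current one is shown in [brackets]. A minimal C sketch, with "sda" as a purely
illustrative device name:

/*
 * Print the elevator list for one block device.  The active scheduler
 * is the entry shown in [brackets] in the sysfs file.  Assumes the
 * kernel exposes /sys/block/<dev>/queue/scheduler.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "sda";	/* example device */
	char path[128], line[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/scheduler", dev);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("%s: %s", dev, line);
	fclose(f);
	return 0;
}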
>
>       No X, just a serial console. The server mostly does FTP serving
> (ProFTPd with sendfile() compiled in), sends mail via qmail (circa
> 100-200k mails a day), and does bits of other work (rsync, Apache, ...).
> Fedora Core 3 with all relevant updates.
>
>       My fstab (physical devices only):
> /dev/md0                /                       ext3    defaults        1 1
> /dev/md1                /home                   ext3    defaults        1 2
> /dev/md6                /var                    ext3    defaults        1 2
> /dev/md4                /fastraid               xfs     noatime         1 3
> /dev/md5                /export                 xfs     noatime         1 4
> /dev/sde4               swap                    swap    pri=10          0 0
> /dev/sdf4               swap                    swap    pri=10          0 0
> /dev/sdg4               swap                    swap    pri=10          0 0
> /dev/sdh4               swap                    swap    pri=10          0 0
>
>       My mdstat:
>
> Personalities : [raid0] [raid1] [raid5]
> md6 : active raid0 md3[0] md2[1]
>       19550720 blocks 64k chunks
>
> md1 : active raid1 sdd1[1] sdc1[0]
>       14659200 blocks [2/2] [UU]
>
> md2 : active raid1 sdf1[1] sde1[0]
>       9775424 blocks [2/2] [UU]
>
> md3 : active raid1 sdh1[1] sdg1[0]
>       9775424 blocks [2/2] [UU]
>
> md4 : active raid0 sdh2[7] sdg2[6] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
>       39133184 blocks 256k chunks
>
> md5 : active raid5 sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
>       1572512256 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUUUUUUU]
>
> md0 : active raid1 sdb1[1] sda1[0]
>       14659200 blocks [2/2] [UU]

My guess would be the clone change, if raid was not leaking before. I
cannot look up any patches at the moment, as I'm still at the hospital
taking care of my newborn baby and wife :)
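
One way to confirm it really is bio allocations that keep growing is to watch
the bio-related slab caches in /proc/slabinfo over time. A rough sketch; the
"bio"/"biovec-*" cache names are an assumption about this particular kernel,
so adjust the prefix if yours differ:

/*
 * Dump the /proc/slabinfo lines whose cache names start with "bio"
 * every 10 seconds, so growth in the object counts is visible.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	char line[512];

	for (;;) {
		FILE *f = fopen("/proc/slabinfo", "r");
		if (!f) {
			perror("/proc/slabinfo");
			return 1;
		}
		printf("--- %ld ---\n", (long)time(NULL));
		while (fgets(line, sizeof(line), f))
			if (!strncmp(line, "bio", 3))
				fputs(line, stdout);
		fclose(f);
		fflush(stdout);
		sleep(10);
	}
}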

But try reverting the patches to fs/bio.c that mention corruption due to
bio_clone and bio->bi_io_vec, and see if that cures it. If it does, I know
where to look. When did you notice that this started to leak?
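
To make the suspicion concrete: the class of bug in question is a clone path
that allocates its own copy of the I/O vector while the release path assumes
clones share the original's vector, so the copy is never freed. The snippet
below is not the fs/bio.c code, just a stripped-down userspace model of that
pattern; fake_bio, fake_clone() and fake_put() are invented stand-ins for
struct bio, bio_clone() and bio_put():

#include <stdlib.h>
#include <string.h>

struct vec { void *page; unsigned int len, offset; };

struct fake_bio {
	struct vec *io_vec;	/* stand-in for bio->bi_io_vec */
	unsigned short cnt;
	int cloned;		/* set on clones */
};

static struct fake_bio *fake_clone(const struct fake_bio *src)
{
	struct fake_bio *b = malloc(sizeof(*b));

	b->cnt = src->cnt;
	b->cloned = 1;
	/* the clone gets a private copy of the vector ... */
	b->io_vec = malloc(src->cnt * sizeof(*b->io_vec));
	memcpy(b->io_vec, src->io_vec, src->cnt * sizeof(*b->io_vec));
	return b;
}

static void fake_put(struct fake_bio *b)
{
	/*
	 * BUG: the put path believes clones share the original's vector,
	 * so it only frees it for non-clones.  The private copy made in
	 * fake_clone() therefore leaks on every put.
	 */
	if (!b->cloned)
		free(b->io_vec);
	free(b);
}

int main(void)
{
	struct vec v = { 0, 4096, 0 };
	struct fake_bio orig = { &v, 1, 0 };
	int i;

	/* each iteration leaks one vector copy (visible under valgrind) */
	for (i = 0; i < 1000; i++)
		fake_put(fake_clone(&orig));
	return 0;
}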

Jens
