Ian Ward Comfort wrote:
: On Oct 3, 2007, at 10:18 AM, Dean S. Messing wrote:
: > I've created a software RAID-0, defined a Volume Group on it with
: > (currently) a single logical volume, and copied my entire
: > installation onto it, modifying the copied fstab to reflect where
: > the new
Goswin von Brederlow wrote:
: "Dean S. Messing" <[EMAIL PROTECTED]> writes:
:
: > I'm having the devil of a time trying to boot off
: > an "LVM-on-RAID0" device on my Fedora 7 system.
: >
: > I've created a software RAID-0, defined a Volume Group on it with
: > (currently) a single logical volume
On Wed, 3 Oct 2007 13:36:39 -0700, David Rees wrote:
> > # xfs_db -c frag -f /dev/md0
> > actual 1828276, ideal 1708782, fragmentation factor 6.54%
> >
> > Good or bad?
>
> Not bad, but not that good, either. Try running xfs_fsr from a nightly
> cronjob. By default, it will defrag mounted xfs filesystems.
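For reference, a minimal sketch of such a cron.d entry, assuming the stock
/usr/sbin path; the 03:30 start time, the two-hour cap and the log file are
just placeholder choices, not anything from the thread:

# /etc/cron.d/xfs_fsr
# With no filesystem argument, xfs_fsr works through all mounted,
# writable XFS filesystems; -t caps the run at 7200 seconds.
30 3 * * * root /usr/sbin/xfs_fsr -t 7200 >/var/log/xfs_fsr.log 2>&1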
On Wed, 3 Oct 2007 16:35:21 -0400 (EDT), Justin Piszcz wrote:
> What does cat /sys/block/md0/md/mismatch_cnt say?
$ cat /sys/block/md0/md/mismatch_cnt
0
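As far as I know mismatch_cnt is only refreshed by a check/repair pass (or a
resync), so the 0 above is most meaningful right after one. Assuming the
array really is md0, kicking a check off looks like:

# echo check > /sys/block/md0/md/sync_action
# cat /proc/mdstat                        # progress of the check
# cat /sys/block/md0/md/mismatch_cnt      # re-read once it finishes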
> That fragmentation looks normal/fine.
Cool.
> Justin.
Andrew
Andrew Clayton wrote:
Yeah, I was wondering about that. It certainly hasn't improved things;
it's unclear whether it's made things any worse.
Many 3124 cards are PCI-X, so if you have one of these (and you seem to
be using a server board which may well have PCI-X), bus performance is
not going to be an issue.
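Rough numbers for illustration: plain 32-bit/33MHz PCI tops out around
133MB/s shared across the whole bus, while 64-bit/133MHz PCI-X is roughly
1GB/s, so three drives streaming ~60-70MB/s each would swamp the former but
barely dent the latter. One way to check what the card actually got (the bus
address below is only an example, take it from the first command):

# lspci | grep -i 3124
# lspci -vv -s 02:04.0    # the PCI-X capability/status lines show the 64bit/133MHz flags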
On 10/3/07, Andrew Clayton <[EMAIL PROTECTED]> wrote:
> On Wed, 3 Oct 2007 12:43:24 -0400 (EDT), Justin Piszcz wrote:
> > Have you checked fragmentation?
>
> You know, that never even occurred to me. I've gotten into the mindset
> that it's generally not a problem under Linux.
It's probably not t
What does cat /sys/block/md0/md/mismatch_cnt say?
That fragmentation looks normal/fine.
Justin.
On Wed, 3 Oct 2007, Andrew Clayton wrote:
On Wed, 3 Oct 2007 12:43:24 -0400 (EDT), Justin Piszcz wrote:
Have you checked fragmentation?
You know, that never even occurred to me. I've gotten into the mindset
that it's generally not a problem under Linux.
On Wed, 03 Oct 2007 19:53:08 +0200, Goswin von Brederlow wrote:
> Andrew Clayton <[EMAIL PROTECTED]> writes:
>
> > Hi,
> >
> > Hardware:
> >
> > Dual Opteron 2GHz cpus. 2GB RAM. 4 x 250GB SATA hard drives. 1
> > (root file system) is connected to the onboard Silicon Image 3114
> > controller. The other 3 (/home) are in a software RAID 5 connected to
> > a PCI Silicon Image 3124 card.
On Wed, 3 Oct 2007 12:43:24 -0400 (EDT), Justin Piszcz wrote:
> Have you checked fragmentation?
You know, that never even occurred to me. I've gotten into the mindset
that it's generally not a problem under Linux.
> xfs_db -c frag -f /dev/md3
>
> What does this report?
# xfs_db -c frag -f /dev/md0
actual 1828276, ideal 1708782, fragmentation factor 6.54%
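For anyone wondering where that percentage comes from: xfs_db counts the
extents actually in use versus the ideal minimum, and the factor is
(actual - ideal) / actual, so here (1828276 - 1708782) / 1828276 ≈ 6.54%,
i.e. about 6.5% of the extents in use are beyond the ideal minimum a
perfectly contiguous layout would need.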
On Wed, 3 Oct 2007, Andrew Clayton wrote:
On Wed, 3 Oct 2007 12:48:27 -0400 (EDT), Justin Piszcz wrote:
Also if it is software raid, when you make the XFS filesystem on it,
it sets up a proper (and tuned) sunit/swidth, so why would you want
to change that?
Oh I didn't, the sunit and swidth
"Dean S. Messing" <[EMAIL PROTECTED]> writes:
> I'm having the devil of a time trying to boot off
> an "LVM-on-RAID0" device on my Fedora 7 system.
>
> I've created a software RAID-0, defined a Volume Group on it with
> (currently) a single logical volume, and copied my entire
> installation onto
Andrew Clayton <[EMAIL PROTECTED]> writes:
> Hi,
>
> Hardware:
>
> Dual Opteron 2GHz cpus. 2GB RAM. 4 x 250GB SATA hard drives. 1 (root file
> system) is connected to the onboard Silicon Image 3114 controller. The other
> 3 (/home) are in a software RAID 5 connected to a PCI Silicon Image 3124 card.
I'm having the devil of a time trying to boot off
an "LVM-on-RAID0" device on my Fedora 7 system.
I've created a software RAID-0, defined a Volume Group on it with
(currently) a single logical volume, and copied my entire
installation onto it, modifying the copied fstab to reflect where
the new
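A hedged sketch of the pieces that usually have to line up for this on
Fedora 7; the device and volume names below are only placeholders, not Dean's
actual layout. /boot has to stay on a plain partition GRUB can read (it can't
read LVM-on-RAID0), /etc/mdadm.conf has to describe the array so the initrd
can assemble it at boot, and the initrd has to be rebuilt against the new
root:

# mdadm --examine --scan >> /etc/mdadm.conf
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

grub.conf then wants something like root=/dev/VolGroup00/LogVol00 on the
kernel line, matching whatever the logical volume is actually called.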
Also if it is software raid, when you make the XFS filesystem on it, it
sets up a proper (and tuned) sunit/swidth, so why would you want to change
that?
Justin.
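As an illustration of what that tuning amounts to (the 64k chunk and 3-disk
RAID 5 here are only assumptions for the arithmetic, not Andrew's confirmed
layout): sunit is set to the md chunk size and swidth to the chunk size times
the number of data disks, so 64k chunk x 2 data disks = 128k stripe width.
With 4k filesystem blocks that shows up as sunit=16 blks, swidth=32 blks in
the mkfs.xfs output, and can be checked later with:

# xfs_info /home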
On Wed, 3 Oct 2007, Justin Piszcz wrote:
Have you checked fragmentation?
xfs_db -c frag -f /dev/md3
What does this report?
Justin.
Have you checked fragmentation?
xfs_db -c frag -f /dev/md3
What does this report?
Justin.
On Wed, 3 Oct 2007, Andrew Clayton wrote:
Hi,
Hardware:
Dual Opteron 2GHz cpus. 2GB RAM. 4 x 250GB SATA hard drives. 1 (root file
system) is connected to the onboard Silicon Image 3114 controller. The other
3 (/home) are in a software RAID 5 connected to a PCI Silicon Image 3124 card.
Hi,
Hardware:
Dual Opteron 2GHz CPUs. 2GB RAM. 4 x 250GB SATA hard drives. One (the root
file system) is connected to the onboard Silicon Image 3114 controller. The
other three (/home) are in a software RAID 5 connected to a PCI Silicon Image
3124 card. I moved the 3 RAID disks off the onboard controller
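In case it helps anyone compare setups, the array geometry and chunk size are
visible with the usual commands; md0 is just what the array appears to be
called elsewhere in the thread:

# cat /proc/mdstat
# mdadm --detail /dev/md0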
Rustedt, Florian wrote:
> Hello list,
>
> some folks reported severe filesystem crashes with ext3 and reiserfs on
> md RAID levels 1 and 5.
I guess much stronger evidence and more details are needed.
Without any additional information I for one can only make
a (not-so-pleasant) guess about those "so
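The sort of detail that would make those reports actionable is the usual
stuff, roughly (the md0 name is only an example):

# uname -r
# mdadm --detail /dev/md0
# dmesg | grep -iE 'ext3|reiser|md0'

plus the exact fsck output and the kernel log from around the time of the
corruption.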