On Fri, Jun 29, 2007 at 12:16:44AM +0200, Rafael J. Wysocki wrote:
> There are two solutions possible, IMO. One would be to make these workqueues
> freezable, which is possible, but hacky and Oleg didn't like that very much.
> The second would be to freeze XFS from within the hibernation code path
On Wed, Jun 27, 2007 at 08:49:24PM +, Pavel Machek wrote:
> Hi!
>
> > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> > filesystem for a suspend/resume to work safely and have argued that the only
>
> Hmm, so XFS writes to disk even when its threads are frozen?
On Friday, 29 June 2007 00:00, Pavel Machek wrote:
> On Thu 2007-06-28 17:27:34, Rafael J. Wysocki wrote:
> > On Wednesday, 27 June 2007 22:49, Pavel Machek wrote:
> > > Hi!
> > >
> > > > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> > > > filesystem for a
On Thu, Jun 28, 2007 at 04:27:15AM -0400, Justin Piszcz wrote:
>
>
> On Thu, 28 Jun 2007, Peter Rabbitson wrote:
>
> >Justin Piszcz wrote:
> >>mdadm --create \
> >> --verbose /dev/md3 \
> >> --level=5 \
> >> --raid-devices=10 \
> >> --chunk=1024 \
> >> --force \
> >>
On Thu 2007-06-28 17:27:34, Rafael J. Wysocki wrote:
> On Wednesday, 27 June 2007 22:49, Pavel Machek wrote:
> > Hi!
> >
> > > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> > > filesystem for a suspend/resume to work safely and have argued that the only
Still reviewing but it appears 8 + 256k looks good.
p34-noatime-logbufs=2-lbsize=256k,15696M,78172.3,99,450320,86.6667,178683,29,79808,99,565741,42.,610.067,0,16:10:16/64,2362,19.6667,15751.7,46,3993.33,22,2545.67,24.,13976,41,3781.33,28.6667
p34-noatime-logbufs=8-lbsize=256k,15696M,
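(For context: the two result rows above compare logbufs=2 against logbufs=8, both with logbsize=256k. A minimal sketch of setting those options, assuming the filesystem lives on /dev/md3 and is mounted at /data -- both placeholders; note that logbsize above 32k needs a version 2 log:)

mount -o noatime,logbufs=8,logbsize=256k /dev/md3 /data

or, persistently, an /etc/fstab line along the lines of:

/dev/md3  /data  xfs  noatime,logbufs=8,logbsize=256k  0  0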
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2
       2       0        0        2      removed
       3       8       82        -      spare   /dev/sdf2
       4       8       66
On Wednesday, 27 June 2007 22:49, Pavel Machek wrote:
> Hi!
>
> > FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> > filesystem for a suspend/resume to work safely and have argued that the only
>
> Hmm, so XFS writes to disk even when its threads are frozen?
>
> > sa
Hi!
> FWIW, I'm on record stating that "sync" is not sufficient to quiesce an XFS
> filesystem for a suspend/resume to work safely and have argued that the only
Hmm, so XFS writes to disk even when its threads are frozen?
> safe thing to do is freeze the filesystem before suspend and thaw it after
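(A minimal sketch of the freeze-before-suspend idea being argued above, done by hand from userspace; /data stands in for each XFS mount point and the hibernate trigger will vary with the setup:)

xfs_freeze -f /data              # block new modifications, flush dirty data and log
echo disk > /sys/power/state     # hibernate
xfs_freeze -u /data              # thaw after resume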
I was wanting to try out the "--write-behind" option. I have a raid1
with bitmaps and write-mostly enabled, which are all the pre-requisites,
I think.
It would be nice if you could tweak this parameter on a live array, but
failing that, it is hard to see why it couldn't be done at assemble
time. m
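(For reference, a sketch of setting write-behind at array creation time -- the device names and the limit of 256 outstanding writes are only examples; it needs a write-intent bitmap and at least one write-mostly member, as noted:)

mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=256 \
      /dev/sda1 --write-mostly /dev/sdb1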
On Wed, 2007-06-27 at 08:48 -0700, Andrew Burgess wrote:
> >>> Odd
> >>> Maybe you have an initrd which is loading md as a module, then
> >>> running "raidautorun" or similar?
> ..
> >I suspect that the last comment is the clue, after pivotroot I bet it
> >runs another init, not from the boo
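(One way to see what that initrd is doing, assuming it is a gzipped cpio image -- the path varies by distro, and older mkinitrd images may instead be a compressed filesystem that has to be loop-mounted:)

zcat /boot/initrd-$(uname -r).img | cpio -it | grep -Ei 'mdadm|raid'

and then read its init/linuxrc script for raidautorun or "mdadm -A" calls.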
On Thu, Jun 28, 2007 at 09:12:56AM +0100, David Greaves wrote:
> (back on list for google's benefit ;) and because there are some good
> questions and I don't know all the answers... )
Thanks, I didn't realize I didn't 'reply-all' to stay on the list.
> Hopefully it will snowball as people who u
On Thu, 28 Jun 2007, Matti Aarnio wrote:
> I do have LVM in between the MD-RAID5 and XFS, so I did also align
> the LVM to that 3 * 256k.
How did you align the LVM?
--
Jon Nelson <[EMAIL PROTECTED]>
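(Not something 2007-era lvm2 could do directly, but for the record: current LVM can align the PV data area to the full stripe -- the 3 x 256k = 768k mentioned above -- at pvcreate time; back then the usual workaround was choosing a --metadatasize that pushed pe_start to the wanted boundary. /dev/md0 is a placeholder:)

pvcreate --dataalignment 768k /dev/md0
pvs -o +pe_start /dev/md0        # check where the data area actually starts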
On Thu, 28 Jun 2007, Matti Aarnio wrote:
On Thu, Jun 28, 2007 at 10:24:54AM +0200, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top:
mdadm --create \
--level=10 \
--chunk=1024
On Thu, Jun 28, 2007 at 10:24:54AM +0200, Peter Rabbitson wrote:
> Interesting, I came up with the same results (1M chunk being superior)
> with a completely different raid set with XFS on top:
>
> mdadm --create \
> --level=10 \
> --chunk=1024 \
> --raid-devices=4 \
> --l
On Thu, 28 Jun 2007, Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top:
...
Cou
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top:
...
Could it be attributed to XFS itself?
Peter
G
Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top:
...
Could it be attributed to XFS itself?
Peter
Good question, by the way how much cache do the d
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
mdadm --create \
--verbose /dev/md3 \
--level=5 \
--raid-devices=10 \
--chunk=1024 \
--force \
--run
/dev/sd[cdefghijkl]1
Justin.
Interesting, I came up with the same results (1M chun
Justin Piszcz wrote:
mdadm --create \
--verbose /dev/md3 \
--level=5 \
--raid-devices=10 \
--chunk=1024 \
--force \
--run
/dev/sd[cdefghijkl]1
Justin.
Interesting, I came up with the same results (1M chunk being superior)
with a completely different r
(back on list for google's benefit ;) and because there are some good questions
and I don't know all the answers... )
Oh, and Neil 'cos there may be a bug ...
Richard Michael wrote:
On Wed, Jun 27, 2007 at 08:49:22AM +0100, David Greaves wrote:
http://linux-raid.osdl.org/index.php/Partitionab
mdadm --create \
--verbose /dev/md3 \
--level=5 \
--raid-devices=10 \
--chunk=1024 \
--force \
--run
/dev/sd[cdefghijkl]1
Justin.
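(A sketch of matching XFS to that geometry: 10 devices at RAID5 leave 9 data disks, so the stripe unit is the 1024k chunk and the stripe width is 9 of those units. mkfs.xfs usually picks this up from md on its own, but it can be given explicitly:)

mkfs.xfs -d su=1024k,sw=9 /dev/md3
xfs_info /mount/point            # confirm sunit/swidth once mounted (path is a placeholder)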
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
The results speak for themselves:
http://home.comcast.
10 disks total.
Justin.
On Thu, 28 Jun 2007, David Chinner wrote:
On Wed, Jun 27, 2007 at 07:20:42PM -0400, Justin Piszcz wrote:
For drives with 16MB of cache (in this case, raptors).
That's four (4) drives, right?
If so, how do you get a block read rate of 578MB/s from
4 drives? That's 14
David Chinner wrote:
On Wed, Jun 27, 2007 at 07:20:42PM -0400, Justin Piszcz wrote:
For drives with 16MB of cache (in this case, raptors).
That's four (4) drives, right?
I'm pretty sure he's using 10 - email a few days back...
Justin Piszcz wrote:
Running test with 10 RAPTOR 150 hard drives
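(Quick arithmetic on the question above, taking the ~578 MB/s block-read figure at face value: spread over 4 drives that is roughly 145 MB/s per drive, more than a single Raptor of that era sustains, while over 10 drives it is roughly 58 MB/s per drive, which fits the 10-drive test mentioned here.)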