Hi,
On Tue, May 15, 2001 at 04:37:01PM +1200, Chris Wedgwood wrote:
> On Sun, May 13, 2001 at 08:39:23PM -0600, Richard Gooch wrote:
>
> Yeah, we need a decent unfragmenter. We can do that now with
> bmap().
>
> SCT wrote a defragger for ext2 but it only handles 1k blocks :(
Actually,
Hi,
On Fri, May 18, 2001 at 09:55:14AM +0200, Rogier Wolff wrote:
> The "boot quickly" was an example. "Load netscape quickly" on some
> systems is done by dd-ing the binary to /dev/null.
This is one of the reasons why some filesystems use extent maps
instead of inode indirection trees. The p
Hi,
On Sat, May 19, 2001 at 12:47:15PM -0700, Linus Torvalds wrote:
>
> On Sat, 19 May 2001, Pavel Machek wrote:
> >
> > > Don't get _too_ hung up about the power-management kind of "invisible
> > > suspend/resume" sequence where you resume the whole kernel state.
> >
> > Ugh. Now I'm confused
> I'm confused. I've always wondered why, before you suspend the state
> of a machine to disk, we don't just throw away unnecessary data
> like anything not actively referenced.
swsusp does exactly that.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
On Sat, 19 May 2001, Pavel Machek wrote:
>
> > Don't get _too_ hung up about the power-management kind of "invisible
> > suspend/resume" sequence where you resume the whole kernel state.
>
> Ugh. Now I'm confused. How do you do useful resume from disk when you
> don't restore complete state? D
Hi!
> > resume from disk is actually pretty hard to do in a way that is read linearly.
> >
> > While playing with swsusp patches (== suspend to disk) I found out that
> > it was slow. It needs to do an atomic snapshot, and the only reasonable
> > way to do that is to free half of RAM, cli() and copy.
>
>
On Tue, 15 May 2001, Pavel Machek wrote:
>
> resume from disk is actually pretty hard to do in a way that is read linearly.
>
> While playing with swsusp patches (== suspend to disk) I found out that
> it was slow. It needs to do an atomic snapshot, and the only reasonable
> way to do that is to free half
Linus Torvalds wrote:
> I'm really serious about doing "resume from disk". If you want a fast
> boot, I will bet you a dollar that you cannot do it faster than by loading
> a contiguous image of several megabytes contiguously into memory. There is
> NO overhead, you're pretty much guaranteed platter speeds
Hi!
> Besides, just how often do you reboot the box? If that's the hotspot for
> you - when the hell does the poor beast find time to do something useful?
Ten times a day?
But booting is a special case: you can read your mail while compiling a kernel,
but try to read your mail while your machine i
Hi!
> And because your suspend/resume idea isn't really going to help me
> much. That's because my boot scripts have the notion of
> "personalities" (change the boot configuration by asking the user
> early on in the boot process). If I suspend after I've got XDM
> running, it's too late.
Why no
Hi!
> I'm really serious about doing "resume from disk". If you want a fast
> boot, I will bet you a dollar that you cannot do it faster than by loading
> a contiguous image of several megabytes contiguously into memory. There is
> NO overhead, you're pretty much guaranteed platter speeds, and th
Anton Altaparmakov wrote:
>
> True, but I was under the impression that Linus' master plan was that the
> two would be in entirely separate name spaces using separate cached copies
> of the device blocks.
>
Nothing was said about the superblock at all.
-hpa
--
<[EMAIL PROTECTED]> at
At 02:30 16/05/2001, H. Peter Anvin wrote:
>Anton Altaparmakov wrote:
> > And how are you thinking of this working "without introducing new
> > interfaces" if the caches are indeed incoherent? Please correct me if I
> > understand wrong, but when two caches are incoherent, I thought it means
> > t
Anton Altaparmakov wrote:
>
> And how are you thinking of this working "without introducing new
> interfaces" if the caches are indeed incoherent? Please correct me if I
> understand wrong, but when two caches are incoherent, I thought it means
> that the above _would_ screw up unless protected b
At 23:35 15/05/2001, H. Peter Anvin wrote:
>"Albert D. Cahalan" wrote:
> > H. Peter Anvin writes:
> > > This would leave no way (without introducing new interfaces) to write,
> > > for example, the boot block on an ext2 filesystem. Note that the
> > > bootblock (defined as the first 1024 bytes) i
"Albert D. Cahalan" wrote:
>
> H. Peter Anvin writes:
>
> > This would leave no way (without introducing new interfaces) to write,
> > for example, the boot block on an ext2 filesystem. Note that the
> > bootblock (defined as the first 1024 bytes) is not actually used by
> > the filesystem, alt
H. Peter Anvin writes:
> This would leave no way (without introducing new interfaces) to write,
> for example, the boot block on an ext2 filesystem. Note that the
> bootblock (defined as the first 1024 bytes) is not actually used by
> the filesystem, although depending on the block size it may s
On Tue, May 15, 2001 at 02:02:29PM -0700, Linus Torvalds wrote:
> In article <[EMAIL PROTECTED]>,
> Alexander Viro <[EMAIL PROTECTED]> wrote:
> >On Tue, 15 May 2001, H. Peter Anvin wrote:
> >
> >> Alexander Viro wrote:
> >> > >
> >> > > None whatsoever. The one thing that matters is that noone s
Alexander Viro wrote:
>
> void *.
>
> Look, methods of your address_space certainly know what they hell they
> are dealing with. Just as autofs_root_readdir() knows what inode->u.generic_ip
> really points to.
>
> Anybody else has no business to care about the contents of ->host.
>
Why do we
In article <[EMAIL PROTECTED]>,
Alexander Viro <[EMAIL PROTECTED]> wrote:
>>
>> How would you know what datatype it is? A union? Making "struct
>> block_device *" a "struct inode *" in a nonmounted filesystem? In a
>> devfs? (Seriously. Being able to do these kinds of data-structural
>> equ
On Tue, 15 May 2001, Alexander Viro wrote:
> On 15 May 2001, Kai Henningsen wrote:
>
> > [EMAIL PROTECTED] (Alexander Viro) wrote on 15.05.01 in
><[EMAIL PROTECTED]>:
> >
> > > ... and Multics had all access to files through equivalent of mmap()
> > > in 60s. "Segments" in ls(1) got that na
In article <[EMAIL PROTECTED]>,
Alexander Viro <[EMAIL PROTECTED]> wrote:
>On Tue, 15 May 2001, H. Peter Anvin wrote:
>
>> Alexander Viro wrote:
>> > >
>> > > None whatsoever. The one thing that matters is that no one starts making
>> > > the assumption that mapping->host->i_mapping == mapping.
>
On 15 May 2001, Kai Henningsen wrote:
> [EMAIL PROTECTED] (Alexander Viro) wrote on 15.05.01 in
><[EMAIL PROTECTED]>:
>
> > ... and Multics had all access to files through equivalent of mmap()
> > in 60s. "Segments" in ls(1) got that name for a good reason.
>
> Where's something called "seg
[EMAIL PROTECTED] (Alexander Viro) wrote on 15.05.01 in
<[EMAIL PROTECTED]>:
> ... and Multics had all access to files through equivalent of mmap()
> in 60s. "Segments" in ls(1) got that name for a good reason.
Where's something called "segments" connected with ls(1)? I can't seem to
find th
On Tue, 15 May 2001, H. Peter Anvin wrote:
> Alexander Viro wrote:
> > >
> > > What else could it be, since it's a "struct inode *"? NULL?
> >
> > struct block_device *, for one thing. We'll have to do that as soon
> > as we do block devices in pagecache.
> >
>
> How would you know what dat
On Tue, 15 May 2001, H. Peter Anvin wrote:
> Alexander Viro wrote:
> > >
> > > None whatsoever. The one thing that matters is that no one starts making
> > > the assumption that mapping->host->i_mapping == mapping.
> >
> > One actually shouldn't assume that mapping->host is an inode.
> >
>
>
Alexander Viro wrote:
> >
> > What else could it be, since it's a "struct inode *"? NULL?
>
> struct block_device *, for one thing. We'll have to do that as soon
> as we do block devices in pagecache.
>
How would you know what datatype it is? A union? Making "struct
block_device *" a "struct
On Tue, 15 May 2001, H. Peter Anvin wrote:
> Alexander Viro wrote:
> >
> > On 15 May 2001, H. Peter Anvin wrote:
> >
> > > isofs wouldn't be too bad as long as struct mapping:struct inode is a
> > > many-to-one mapping.
> >
> > Erm... What's wrong with inode->u.isofs_i.my_very_own_address_sp
Alexander Viro wrote:
> >
> > None whatsoever. The one thing that matters is that no one starts making
> > the assumption that mapping->host->i_mapping == mapping.
>
> One actually shouldn't assume that mapping->host is an inode.
>
What else could it be, since it's a "struct inode *"? NULL?
Alexander Viro wrote:
>
> On 15 May 2001, H. Peter Anvin wrote:
>
> > isofs wouldn't be too bad as long as struct mapping:struct inode is a
> > many-to-one mapping.
>
> Erm... What's wrong with inode->u.isofs_i.my_very_own_address_space ?
>
None whatsoever. The one thing that matters is that
On 15 May 2001, H. Peter Anvin wrote:
> isofs wouldn't be too bad as long as struct mapping:struct inode is a
> many-to-one mapping.
Erm... What's wrong with inode->u.isofs_i.my_very_own_address_space ?
Followup to: <[EMAIL PROTECTED]>
By author: Anton Altaparmakov <[EMAIL PROTECTED]>
In newsgroup: linux.dev.kernel
>
> They shouldn't, but maybe some stupid utility or a typo will do it creating
> two incoherent copies of the same block on the device. -> Bad Things can
> happen.
>
> Can't w
Followup to: <[EMAIL PROTECTED]>
By author: Alexander Viro <[EMAIL PROTECTED]>
In newsgroup: linux.dev.kernel
>
> UNIX-like ones (and that includes QNX) are easy. HFS is hopeless - it won't
be fixed unless the authors do it. Tigran will probably fix BFS just as a
> learning experience ;-)
>And because your suspend/resume idea isn't really going to help me
>much. That's because my boot scripts have the notion of
>"personalities" (change the boot configuration by asking the user
>early on in the boot process). If I suspend after I've got XDM
>running, it's too late.
Preface:
On Tuesday, May 15, 2001 04:33:57 AM -0400 Alexander Viro
<[EMAIL PROTECTED]> wrote:
>
>
> On Tue, 15 May 2001, Linus Torvalds wrote:
>
>> Looks like there are 19 filesystems that use the buffer cache right now:
>>
>> grep -l bread fs/*/*.c | cut -d/ -f2 | sort -u | wc
>>
>> So quite
On Tuesday 15 May 2001 12:44, Alexander Viro wrote:
> On Tue, 15 May 2001, Daniel Phillips wrote:
> > That's because you left out his invalidate:
> >
> > * create an instance in pagecache
> > * start reading into buffer cache (doesn't invalidate, right?)
> > * start writing using pagec
On Tue, 15 May 2001, Daniel Phillips wrote:
> That's because you left out his invalidate:
>
> * create an instance in pagecache
> * start reading into buffer cache (doesn't invalidate, right?)
> * start writing using pagecache (invalidate buffer copy)
Bzzert. You have a race
On Tuesday 15 May 2001 08:57, Alexander Viro wrote:
> On Tue, 15 May 2001, Richard Gooch wrote:
> > > What happens if you create a buffer cache entry? Does that
> > > invalidate the page cache one? Or do you just allow invalidates
> > > one way, and not the other? And why=
> >
> > I just figured o
[EMAIL PROTECTED] said:
> JFFS - dunno.
Bah. JFFS doesn't use any of those horrible block device thingies.
--
dwmw2
At 08:13 15/05/01, Linus Torvalds wrote:
>On Tue, 15 May 2001, Richard Gooch wrote:
> > So what happens if I dd from the block device and also from a file on
> > the mounted FS, where that file overlaps the bnums I dd'ed? Do we get
> > two copies in the page cache? One for the block device access,
Alan Cox <[EMAIL PROTECTED]> writes:
> > Larry, go read up on TOPS-20. :-) SunOS did give unix mmap(), but it
> > did not come up the idea.
> Seems to be TOPS-10
> http://www.opost.com/dlm/tenex/fjcc72/
TENEX is not TOPS-10. TOPS-10 didn't get virtual memory until around
1974. By then, TE
On Tue, 15 May 2001, Linus Torvalds wrote:
> Looks like there are 19 filesystems that use the buffer cache right now:
>
> grep -l bread fs/*/*.c | cut -d/ -f2 | sort -u | wc
>
> So quite a bit of work involved.
UNIX-like ones (and that includes QNX) are easy. HFS is hopeless - it won't
On Tue, 15 May 2001, Chris Wedgwood wrote:
>
> On Tue, May 15, 2001 at 12:13:13AM -0700, Linus Torvalds wrote:
>
> We should not create crap code just because we _can_.
>
> How about removing code?
Absolutely. It's not all that often that we can do it, but when we can,
it's the best thing i
On Tue, 15 May 2001, Richard Gooch wrote:
> >
> > What happens if you create a buffer cache entry? Does that
> > invalidate the page cache one? Or do you just allow invalidates one
> > way, and not the other? And why=
>
> I just figured on one way invalidates, because that seems cheap and
> eas
On Tue, 15 May 2001, Richard Gooch wrote:
> > What happens if you create a buffer cache entry? Does that
> > invalidate the page cache one? Or do you just allow invalidates one
> > way, and not the other? And why=
>
> I just figured on one way invalidates, because that seems cheap and
> easy a
Linus Torvalds writes:
>
> On Tue, 15 May 2001, Richard Gooch wrote:
> >
> > However, what about simply invalidating an entry in the buffer cache
> > when you do a write from the page cache?
>
> And how do you do the invalidate the other way, pray tell?
>
> What happens if you create a buffer
On Tue, 15 May 2001, Richard Gooch wrote:
>
> However, what about simply invalidating an entry in the buffer cache
> when you do a write from the page cache?
And how do you do the invalidate the other way, pray tell?
What happens if you create a buffer cache entry? Does that invalidate the
pag
Linus Torvalds writes:
> You could choose to do "partial coherency", ie be coherent only one
> way, for example. That would make the coherency overhead much less,
> but would also make the caches basically act very unpredictably -
> you might have somebody write through the page cache yet on a rea
Linus Torvalds writes:
>
> On Mon, 14 May 2001, Richard Gooch wrote:
> >
> > Is there some fundamental reason why a buffer cache can't ever be
> > fast?
>
> Yes.
>
> Or rather, there is a fundamental reason why we must NEVER EVER look at
> the buffer cache: it is not coherent with the page cac
On Mon, 14 May 2001, David S. Miller wrote:
>
> Larry McVoy writes:
> > Hell, that's the OS that gave us mmap, remember that?
>
> Larry, go read up on TOPS-20. :-) SunOS did give unix mmap(), but it
> did not come up the idea.
s/TOPS-20/Multics/
On Mon, 14 May 2001, Linus Torvalds wrote:
> The current page cache is completely non-coherent (with _anything_: it's
> not coherent with other files using a page cache because they have a
> different index, and it's not coherent with the buffer cache because that
> one isn't even in the same n
Larry McVoy writes:
> Hell, that's the OS that gave us mmap, remember that?
Larry, go read up on TOPS-20. :-) SunOS did give unix mmap(), but it
did not come up the idea.
Later,
David S. Miller
[EMAIL PROTECTED]
On Mon, 14 May 2001, Larry McVoy wrote:
> Hell, that's the OS that gave us mmap, remember that?
"I got it from Agnes..."
Don't get me wrong, SunOS 4 was probably the nicest thing Sun had ever
released and I love it, but mmap(2) was _not_ the best of ideas. Files
as streams of bytes and file
On Mon, 14 May 2001, Linus Torvalds wrote:
>
> Or rather, there is a fundamental reason why we must NEVER EVER look at
> the buffer cache: it is not coherent with the page cache.
>
> And keeping it coherent would be _extremely_ expensive. How do we
> know? Because we used to do that. Remember
On Mon, May 14, 2001 at 09:00:44PM -0700, Linus Torvalds wrote:
> Or rather, there is a fundamental reason why we must NEVER EVER look at
> the buffer cache: it is not coherent with the page cache.
Not that Linus needs any backing up but Sun got rid of the buffer cache
and just had a page cache
On Mon, 14 May 2001, Richard Gooch wrote:
>
> Is there some fundamental reason why a buffer cache can't ever be
> fast?
Yes.
Or rather, there is a fundamental reason why we must NEVER EVER look at
the buffer cache: it is not coherent with the page cache.
And keeping it coherent would be _ext
On Tuesday 15 May 2001 01:19, Richard Gooch wrote:
> Linus Torvalds writes:
> > On Sun, 13 May 2001, Richard Gooch wrote:
> > > So, why can't the page cache check if a block is in the buffer
> > > cache?
> >
> > Because it would make the damn thing slower.
> >
> > The whole point of the page cache
Linus Torvalds writes:
>
>
> On Sun, 13 May 2001, Richard Gooch wrote:
> >
> > OK, provided the prefetch will queue up a large number of requests
> > before starting the I/O. If there was a way of controlling when the
> > I/O actually starts (say by having a START flag), that would be ideal,
> >
On Sun, 13 May 2001, Richard Gooch wrote:
>
> OK, provided the prefetch will queue up a large number of requests
> before starting the I/O. If there was a way of controlling when the
> I/O actually starts (say by having a START flag), that would be ideal,
> I think.
Ehh. The "start" flag is whe
Daniel writes:
> But we don't need anything so fancy to try out your idea, we just need
> a lvm-like device that can:
>
> - Maintain a block cache
> - Remap logical to physical blocks
> - Record the block accesses
> - Physically reorder the blocks according to the recorded order
> - Lo
On Monday 14 May 2001 07:15, Richard Gooch wrote:
> Linus Torvalds writes:
> > But sure, you can use bmap if you want. It would be interesting to
> > hear whether it makes much of a difference..
>
> I doubt bmap() would make any difference if there is a way of
> controlling when the I/O starts.
>
Richard Gooch <[EMAIL PROTECTED]>:
> >
> OK, provided the prefetch will queue up a large number of requests
> before starting the I/O. If there was a way of controlling when the
> I/O actually starts (say by having a START flag), that would be ideal,
> I think.
>
The START flag is equivalent to
Linus Torvalds writes:
>
> On Sun, 13 May 2001, Richard Gooch wrote:
> >
> > Think about it:-) You need to generate prefetch accesses in ascending
> > device bnum order.
>
> I seriously doubt it is worth it.
>
> The kernel will do the ordering for you anyway: that's what the
> elevator is, and
On Sun, 13 May 2001, Richard Gooch wrote:
>
> Think about it:-) You need to generate prefetch accesses in ascending
> device bnum order.
I seriously doubt it is worth it.
The kernel will do the ordering for you anyway: that's what the elevator
is, and that's why you have a "prefetch" system ca
Rik van Riel writes:
> On Sun, 13 May 2001, Richard Gooch wrote:
> > Larry McVoy writes:
>
> > > Ha. For once you're both wrong but not where you are thinking. One
> > > of the few places that I actually hacked Linux was for exactly this
> > > - it was in the 0.99 days I think. I saved the lis
On Sun, 13 May 2001, Richard Gooch wrote:
> Larry McVoy writes:
> > Ha. For once you're both wrong but not where you are thinking. One
> > of the few places that I actually hacked Linux was for exactly this
> > - it was in the 0.99 days I think. I saved the list of I/O's in a
> > file and fill
Larry McVoy writes:
> On Sun, May 13, 2001 at 06:32:02PM -0700, Linus Torvalds wrote:
> > > Hi, Linus. I've been thinking more about trying to warm the page
> > > cache with blocks needed by the bootup process. What is currently
> > > missing is (AFAIK) a mechanism to find out what inodes and bl
Linus Torvalds writes:
>
> On Sun, 13 May 2001, Richard Gooch wrote:
> >
> > Hi, Linus. I've been thinking more about trying to warm the page
> > cache with blocks needed by the bootup process. What is currently
> > missing is (AFAIK) a mechanism to find out what inodes and blocks have
> > been
On Sun, May 13, 2001 at 06:32:02PM -0700, Linus Torvalds wrote:
> > Hi, Linus. I've been thinking more about trying to warm the page
> > cache with blocks needed by the bootup process. What is currently
> > missing is (AFAIK) a mechanism to find out what inodes and blocks have
> > been accessed.
On Sun, 13 May 2001, Richard Gooch wrote:
>
> Hi, Linus. I've been thinking more about trying to warm the page
> cache with blocks needed by the bootup process. What is currently
> missing is (AFAIK) a mechanism to find out what inodes and blocks have
> been accessed. Sure, you can use bmap() t