lest it would ever
make the cache was 16M; it would prefer to kill processes rather than make
the cache smaller than that.
Contrived stressor program (pseudocode):
fork(); fork(); fork(); fork(); //16 total processes
for (i=0;i Just do what I described above.
Done :).
Thanks,
Shane Nay.
er thing..., oh yes, and
it's SOLDERED ON THE BOARD. Damn..., guess I just lost a grand or so.
Seriously folks, Linux isn't just for big webservers...
Thanks,
Shane Nay.
(Oh, BTW, I really appreciate the work that people have done on the VM, but
folks that are just talking..., well, thin
On Thursday 07 June 2001 13:00, Marcelo Tosatti wrote:
> On Thu, 7 Jun 2001, Shane Nay wrote:
> > (Oh, BTW, I really appreciate the work that people have done on the VM,
> > but folks that are just talking..., well, think clearly before you impact
> > other people that are
ut of a
directory sub-structure expecting a 1-1 relationship of the data in the
original directory sub-structure, and the interpretation of your cramfs
filesystem. But then you pull out the write bits, and that 1-1 relationship
is gone. (I can only see my particular case, but there are probably others
th
om , and within cramfs never accesses that block device
anymore..., it's sort of silly, and _not_ the right way to do it.)
Thanks,
Shane Nay.
(Patches referenced here can be found at:
ftp://ftp.agendacomputing.com/pub/agenda/testing/CES/patches/ , contributed
by various authors: Rob Les
can create a writable cramfs, but oh well) Lord knows I could
use a few extra bits in the cramfs inode (Using sticky bit to denote XIP
mode binaries right now..., such a hack)
Thanks,
Shane Nay.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/
nstead. Should
I just cheat and put in a fake block device? Or am I going about this in an
imbecilic fashion?
Any ideas, criticisms are welcome...
Thanks,
Shane Nay.
(BTW: XIP implementation is by another fellow..., I'm just trying to put
together the linear addressing and his pieces int
g/fs/cramfs/inode.c Fri Oct 27 04:22:36 2000
+++ linux/fs/cramfs/inode.c Fri Oct 27 04:30:18 2000
@@ -11,6 +11,20 @@
* The actual compression is based on zlib, see the other files.
*/
+/* Linear Addressing code
+ *
+ * Copyright (C) 2000 Shane Nay.
+ *
+ * Allows you to have a line
Daniel,
> Have you done a comparison of LZO against zlib (decompression
> speed/size vs. compression ratio)? It uses less RAM/CPU to decompress
> at the cost of wasting storage space, but it's hard to make a decision
> without real numbers.
I can't do a test on speed because I haven't had time
On Friday 08 December 2000 05:11, Daniel Quinlan wrote:
> Here's a patch for the cramfs filesystem. Lots of improvements and a
> new cramfsck program, see below for the full list of changes.
>
> It only modifies cramfs code (aside from adding cramfs to struct
> super_block) and aims to be complet