>There is a simple change in strategy that will fix up the updatedb case quite
>nicely, it goes something like this: a single access to a page (e.g., reading
>it) isn't enough to bring it to the front of the LRU queue, but accessing it
>twice or more is. This is being looked at.
Say, when a page
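A rough user-space sketch of the "promote only on the second access" idea described above; this is not the kernel implementation, and the page structure and bookkeeping here are invented purely for illustration:

/* Toy model: first touch only marks the page as referenced; the second
 * touch promotes it, so a one-pass scan like updatedb never promotes. */
#include <stdio.h>

#define NPAGES 2

struct page {
    int referenced;      /* set on first access */
    int active;          /* "front of the LRU" in this toy model */
};

static void touch(struct page *p)
{
    if (p->referenced)
        p->active = 1;   /* accessed twice or more: promote */
    else
        p->referenced = 1;
}

int main(void)
{
    struct page pages[NPAGES] = {{0}};

    touch(&pages[0]);                    /* updatedb-style single scan */
    touch(&pages[1]); touch(&pages[1]);  /* genuinely reused page */

    for (int i = 0; i < NPAGES; i++)
        printf("page %d: referenced=%d active=%d\n",
               i, pages[i].referenced, pages[i].active);
    return 0;
}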
>The conclusion of most of this discussion is in my FREENIX
>paper, which can be found at http://www.surriel.com/lectures/.
Aha... that paper answers a lot of the questions I had about how
things work. I seem to remember asking some of them, too, and didn't
get an answer... :P
--
>Now my question is how can it be
>thrashing with swap explicitly turned off?
Easy. All applications are themselves swap space - the binary is
merely memory-mapped onto the executable file. When the system gets
low on memory, the only thing it can do is purge some binary pages,
and then repe
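The point about binaries being paged straight from their executable files is easy to see from user space; a minimal sketch (the exact maps output naturally varies by system):

/* Dump /proc/self/maps: the program's own text shows up as a read-only,
 * executable, file-backed mapping of the binary on disk. */
#include <stdio.h>

int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);   /* look for r-xp ... /path/to/this/binary */
    fclose(f);
    return 0;
}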
> > > Only the truly stupid would assume accuracy from decimal places.
>>
>> Well then, tell all the teachers in this world that they're stupid, and tell
>> everyone who learnt from them as well.
>
>*All*?
>
>> I'm in high school (gd. 11, junior)
>> and my physics teacher is always screaming
> > I have seen school projects with interfaces done in java (to be 'portable')
>> and you could go to have a coffee while a menu pulled down.
>
>Yeah, but the slowness there comes from the phrase "school project" and not
>the phrase "done in java". I've seen menuing interfaces on a 1 mhz commo
>You can scream all you want that "it isn't free software" but the fact
>of the matter is that you all scream that and then go do your slides for
>your Linux talks in PowerPoint.
Or AppleWorks (Mac), in my case. Or, if I wanted to be flashy, I'd
make the slides up in CorelXARA (which originated
> > > > Btw: can the application somehow ask the tcp/ip stack what was
>> >actually acked?
>> >> (ie. how many bytes were acked).
>> >
>> >no, but it's not necessarily a useful number anyhow -- because it's
>> >possible that the remote end ACKd bytes but the ACK never arrives. so you
>> >c
> > Btw: can the application somehow ask the tcp/ip stack what was
>actually acked?
>> (ie. how many bytes were acked).
>
>no, but it's not necessarily a useful number anyhow -- because it's
>possible that the remote end ACKd bytes but the ACK never arrives. so you
>can get into a situation wher
>> clock drift of a few minutes per day.
That's about 0.1% - a couple of minutes out of the 1440 minutes in a day. It
may be relatively large compared to tolerances of
hardware clocks, but it's realistically tiny. It certainly compares
favourably with mkLinux on my PowerBook 5300, which usually drifts by
several hours per day regardless of actual l
>My box has
>
>320280K
>
>from /proc/meminfo
>
> 17140 buffer
>123696 cache
> 32303 free
>
>leaving unaccounted
>
>123627K
This is your processes' memory, the inode and dentry caches, and possibly
some extra kernel memory which may be allocated after boot time. It is
*very* much accounted for.
-
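A rough way to check the same accounting yourself; this sketch just re-reads /proc/meminfo and prints whatever is left once buffers, page cache and free pages are subtracted (field names as in the MemTotal/MemFree/Buffers/Cached lines; anything beyond that is an assumption):

/* The remainder printed here is process pages, slab caches and other
 * kernel allocations - i.e. the memory that is "unaccounted" only if you
 * forget to count it. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256], key[64];
    long total = 0, freemem = 0, buffers = 0, cached = 0, val;
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("fopen"); return 1; }

    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "%63[^:]: %ld", key, &val) != 2)
            continue;
        if (!strcmp(key, "MemTotal")) total = val;
        else if (!strcmp(key, "MemFree")) freemem = val;
        else if (!strcmp(key, "Buffers")) buffers = val;
        else if (!strcmp(key, "Cached")) cached = val;
    }
    fclose(f);

    printf("unaccounted (processes, slab, kernel): %ld kB\n",
           total - freemem - buffers - cached);
    return 0;
}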
>> On the subject of Mike Galbraith's kernel compilation test, how much
>> physical RAM does he have for his machine, what type of CPU is it, and what
>> (approximate) type of device does he use for swap? I'll see if I can
>> partially duplicate his results at this end. So far all my tests have
[ Re-entering discussion after too long a day and a long sleep... ]
>> There is the problem in terms of some people want pure interactive
>> performance, while others are looking for throughput over all else,
>> but those are both extremes of the spectrum. Though I suspect
>> raw throughput is t
> just being curious. Since 2.4.4, I am watching my systems memory
>behaviour a bit:-) Just recently I realized the following: in the
>evening I leave my 128MB system at about 20 MB, 2 MB Buffered and 100 MB
>Cached (plus some 40 MB unnecessary swap :-)). When I come back in the
>morning, Used is s
At 12:29 am +0100 8/6/2001, Shane Nay wrote:
>(VM report at Marcelo Tosatti's request. He has mentioned that rather than
>complaining about the VM that people mention what their experiences were. I
>have tried to do so in the way that he asked.)
>> By performance you mean interactivity or throu
>> > This is going to make all pages have age 0 on an idle system after some
>> > time (the old code from Rik which has been replaced by this code tried to
>> > avoid that)
>
>There's another reason why I think the patch may be ok even without any
>added logic: not only does it simplify the code a
>> >As suggested by Linus, I've cleaned the reapswap code to be contained
>> >inside an inline function. (yes, the if statement is really ugly)
>>
>> I can't seem to find the patch which adds this behaviour to the background
>> scanning.
>
>I've just sent Linus a patch to free swap cache pages at
>As suggested by Linus, I've cleaned the reapswap code to be contained
>inside an inline function. (yes, the if statement is really ugly)
I can't seem to find the patch which adds this behaviour to the background
scanning. Can someone point me to it?
At 11:27 pm +0100 6/6/2001, android wrote:
>> >I'd be happy to write a new routine in assembly
>>
>>I sincerely hope you're joking.
>>
>>It's the algorithm that needs fixing, not the implementation of that
>>algorithm. Writing in assembler? Hope you're proficient at writing in
>>x86, PPC, 68k, M
>I'd be happy to write a new routine in assembly
I sincerely hope you're joking.
It's the algorithm that needs fixing, not the implementation of that
algorithm. Writing in assembler? Hope you're proficient at writing in
x86, PPC, 68k, MIPS (several varieties), ARM, SPARC, and whatever other
ar
>> > Did you try to put twice as much swap as you have RAM ? (e.g. add a
>> > 512M swapfile to your box)
>> > This is what Linus recommended for 2.4 (swap = 2 * RAM), saying
>> > that anything less won't do any good: 2.4 overallocates swap even
>> > if it doesn't use it all. So in your case you ju
>I am waiting patiently for the bug to be fixed. However, it is a real
>embarrassment that we can't run this "stable" kernel in production yet
>because something as fundamental as this is so badly broken.
Rest assured that a fix is in the works. I'm already seeing a big
improvement in behaviour o
> On a side question: does Linux support swap-files in addition to
>swap-partitions? Even if that has a performance penalty, when the system
>is swapping performance is dead anyway.
Yes. Simply use mkswap and swapon/off on a regular file instead of a
partition device. I don't notice any signifi
>It seems bizarre that a 4GB machine with a working set _far_ lower than that
>should be dying from OOM and swapping itself to death, but that's life in 2.4
>land.
I posted a fix for the OOM problem long ago, and it didn't get integrated
(even after I sent Alan a separated-out version from the la
At 12:17 am +0100 3/6/2001, M.N. wrote:
>Basically, that's the question. I compiled my kernel with the SCSI AIC7xxx.o
>driver as a module, and then when it booted up, it panicked. I thought it was
>some sort of a kernel bug, but it didn't really seem that way when I
>recompiled the kernel with SCSI
>The page aging logic does seems fragile as heck. You never know how
>many folks are aging pages or at what rate. If aging happens too fast,
>it defeats the garbage identification logic and you rape your cache. If
>aging happens too slowly.. sigh.
Then it sounds like the current algorithm i
>> * Live Upgrade
>
>LOBOS will let one Linux kernel boot another, but that requires a boot
>step, so it is not a live upgrade. so, no, afaik
If you build nearly everything (except, obviously what you need to boot) as
modules, you can unload modules, build new versions, and reload them. So,
you
>Time to hunt around for a 386 or 486 which is limited to such
>a small amount of RAM ;)
I've got an old knackered 486DX/33 with 8Mb RAM (in 30-pin SIMMs, woohoo!),
a flat CMOS battery, a 2Gb Maxtor HD that needs a low-level format every
year, and no case. It isn't running anything right now...
>If you run into a case where you have a config which would work, but
>CML2 doesn't let you, why don't you fix the grammar instead of saying
>CML2 is wrong? Let's not confuse these two issues as well.
Strongly agree. Especially since I'm pushing for an explicit recognition
of the difference bet
>> order to hold down ruleset complexity and simplify the user
>> experience. The cost of deciding that the answer to that question is
>
>The user experience can be simplified by a NOVICE/EASY/SANE_DEFAULTS
>option, and perhaps a HACKER option for the really strange
>but _theoretically_ ok stuff.
>1. The Mac derivations were half-right. The MAC_SCC one is good but Macs
>can have either of two different SCSI controllers. I fixed that with help
>from Ray Knight, who maintains the 68K Mac port.
If I understand the "philosophy" correctly, it is still possible to specify
additional cards for
>> Aunt Tillie doesn't even know what a kernel is, nor does she want
>> to. I think it's fair to assume that people who configure and
>> compile their own kernel (as opposed to using the distribution
>> supplied ones) know what they are doing.
>
>I'd like to break these assumptions. Or at the ver
>That said, anyone who doesn't understand the former should probably
>get some more C experience before commenting on others' code...
I understood it, but it looked very much like a typo.
--
from: Jonathan "Chromatix" Morton
mail:
>- page_count(page) == (1 + !!page->buffers));
Two inversions in a row? I'd like to see that made more explicit,
otherwise it looks like a bug to me. Of course, if it IS a bug...
--
from: Jonathan "Chromatix"
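For reference, the double negation is just C's idiom for collapsing any non-zero value to 1, so the quoted check reads "one reference, plus one more if the page has buffers attached". A trivial stand-alone illustration:

/* !!x maps 0 -> 0 and any non-zero value -> 1, so it can be added
 * directly to a reference count. */
#include <stdio.h>

int main(void)
{
    void *buffers_present = (void *)0x1234;   /* pretend page->buffers is set */
    void *buffers_absent  = NULL;

    printf("1 + !!buffers_present = %d\n", 1 + !!buffers_present);  /* 2 */
    printf("1 + !!buffers_absent  = %d\n", 1 + !!buffers_absent);   /* 1 */
    return 0;
}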
At 3:41 pm +0100 5/5/2001, Alan Cox wrote:
>> My wild guess is that with the "faster" code, the K7 is avoiding loading
>> cache lines just to write them out again, and is just writing tons of data.
>> The PPC G4 - and perhaps even the G3 - performs a similar trick
>> automatically, without special
At 7:20 am +0100 5/5/2001, Mark Hahn wrote:
>On Fri, 4 May 2001, Seth Goldberg wrote:
>
>> Hi,
>>
>> Before I go any further with this investigation, I'd like to get an
>> idea
>> of how much of a performance improvement the K7 fast_page_copy will give
>> me.
>> Can someone suggest the best benc
>> I'm using an Abit KT7 board (KT133) and my new 1GHz T'bird (running 50-60°C
>> in a warm room) is giving me no trouble. This is with the board and RAM
>> pushed as fast as it will go without actually overclocking anything... and
>> yes, I do have Athlon/K7 optimisations turned on in my kernel
>> the only general issue is that kx133 systems seem to be difficult
>> to configure for stability. ugly things like tweaking Vio.
>> there's no implication that has anything to do with Linux, though.
>
>
>When I reported my problem a couple weeks back another fellow
>said he and several others o
>Where is a patch to allow the sensible OOM I had in prior kernels?
>(cause this crap is getting pitched)
I gave Alan a patch to fix the problem where the OOM activates too early
(eg. when there's still plenty of swap and buffer memory to eat). I don't
know whether this made it into the mainstre
>There seems to be one more reason, take a look at the function
>read_swap_cache_async() in swap_state.c, around line 240:
>
>/*
> * Add it to the swap cache and read its contents.
> */
>lock_page(new_page);
>add_to_swap_cache(new_page, entry);
>rw_s
>>I like this idea quite a bit. It would probably not
>>be terribly expensive to rent/buy the required equipment,
>>it would be easy to use and would not be terribly disruptive
>>to the preceedings.
>
>Just to keep this on topic... the real question is what would be
>the best way to interface thi
>I just ran netscape which for some reason or another went totally
>whacky and gobbled RAM. It has done this before and made the box
>totally unusable in 2.2.17-2.2.19 before the kernel killed 90% of
>my running apps before getting the right one. This time, it
>OOM'd and killed Netscape and I go
>ticks = jiffies; while (ticks == jiffies); ticks = jiffies; ?
jiffies is updated by an interrupt routine, I think.
--
from: Jonathan "Chromatix" Morton
mail: [EMAIL PROTECTED] (not for attachments)
big-mail: [EMAIL PROTECTED]
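The quoted idiom spins until the tick counter changes, so that a measurement starts exactly on a tick edge; since jiffies only advances when the timer interrupt fires, the loop simply burns CPU until the next interrupt. A user-space analogue of the same trick (using times(), since jiffies itself isn't visible from user space):

/* Spin until the clock-tick counter changes, so whatever we time next
 * starts on a tick boundary rather than partway through a tick. */
#include <stdio.h>
#include <sys/times.h>
#include <unistd.h>

int main(void)
{
    struct tms unused;
    clock_t ticks = times(&unused);

    while (times(&unused) == ticks)
        ;                              /* busy-wait for the next tick */
    ticks = times(&unused);            /* now aligned to a tick edge */

    printf("aligned at tick %ld (ticks per second: %ld)\n",
           (long)ticks, sysconf(_SC_CLK_TCK));
    return 0;
}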
> double x = 5483.99;
> float y = 5483.99;
>5483.99
>5483.990234
Well, duh. Floats are less accurate than doubles, so what? Read your C
textbook again.
--
from: Jonathan "Chromatix" Morton
mail: [EMAIL PROTECTED] (not fo
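The numbers quoted fall straight out of the representations: a float carries a 24-bit significand (roughly 7 decimal digits), so 5483.99 is stored as the nearest representable value, 5483.990234375, while the double version is good to about 16 digits. A small program that reproduces it:

/* Why the float prints 5483.990234: the nearest single-precision value
 * to 5483.99 is 5483.990234375. */
#include <stdio.h>

int main(void)
{
    double x = 5483.99;
    float  y = 5483.99;

    printf("%.6f\n", x);    /* 5483.990000 */
    printf("%.6f\n", y);    /* 5483.990234 */
    printf("%.10f\n", y);   /* 5483.9902343750 - the value actually stored */
    return 0;
}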
The attached patch applies to 2.4.3 and should address the most serious
concerns surrounding OOM and low-memory situations for most people. A
summary of the patch contents follows:
MAJOR: OOM killer now only activates when truly out of memory, ie. when
buffer and cache memory has already been ea
There's clearly been lots of discussion about OOM (and memory management in
general) over the last week, so it looks like it's time to summarise it and
work out the solution that's actually going to find its way into the
kernel.
Issue 1:
The OOM killer was activating too early. I have a
I'm going to be gentle here and try to point out where your suggestions are
flawed...
>a. don't kill any task with a uid < 100
Suppose your system daemon springs a leak? It will have to be killed
eventually, however system daemons can sensibly be given a little "grace".
Also, the UIDs used by a
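A toy illustration of "grace rather than immunity" for system UIDs; the grace factor, the UID threshold and the score formula here are all invented for illustration, not what any kernel actually does:

/* Toy OOM-selection score: memory use is the base "badness", and low-UID
 * (system) processes get divided down rather than skipped, so a leaking
 * daemon is still eventually eligible. */
#include <stdio.h>

struct proc { const char *name; unsigned long vm_kb; unsigned uid; };

static unsigned long badness(const struct proc *p)
{
    unsigned long score = p->vm_kb;
    if (p->uid < 100)
        score /= 8;        /* grace for system daemons, not immunity */
    return score;
}

int main(void)
{
    struct proc procs[] = {
        { "netscape",      90000, 500 },
        { "leaky-daemon", 400000,  25 },
    };
    for (int i = 0; i < 2; i++)
        printf("%-14s badness %lu\n", procs[i].name, badness(&procs[i]));
    return 0;
}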
>If we use my OOM killer API, this patch would be a module and
>could have module parameters to select that.
>
>Johnathan: I URGE you to apply my patch before adding OOM killer
> stuff. What's wrong with it, that you cannot use it? ;-)
>
>It is easy to add configurables to a module and play with
>> relative ages. The major flaw in my code is that a sufficiently
>> long-lived
>> process becomes virtually immortal, even if it happens to spring a serious
>> leak after this time - the flaw in yours is that system processes
>
>I think this could easily be fixed if you'd 'chop off' the runtime
>> >> Of course, I realised that. Actually, what the code does is take an
>> >> initial badness factor (the memory usage), then divide it using goodness
>> >> factors (some based on time, some purely arbitrary), both of which can be
>> >> considered dimensionless. Also, at the end, the absolute
>Please change the 100 to 500 - this would make it consistent with
>the useradd command, which starts adding new users at UID 500
Depends on which distribution you're using. In my experience, almost all
the really important stuff happens below 100. In any case, the
OOM-kill-selection algorith
>Out of Memory: Killed process 117 (sendmail).
>
>What we did to run it out of memory, I don't know. But I do know that
>it shouldn't be killing one process more than once... (the process
>should not exist after one try...)
This is a known bug in the Out-of-Memory handler, where it does not count
>> I have 2 ideas:
>> * glibc corrupted
>> * did you downgrade the cpu?
>
>These happen frequently to me (when compiling and installing a
>new glibc)
>But in this case you would have other messages (IIRC something
>like
>respawn too fast).
>Thus the problem is not this!
How about running memtest8
>> Understood - my Physics courses covered this as well, but not using the
>> word "normalise".
>
>Be that as it may, Martin's comments about normalizing are nonsense.
>Rik's killer (at least in 2.4.3-pre7) produces a badness value that's
>a product of badness factors of various units. It then us
>These are NOT the only 64 bit systems - Intel, PPC, IBM (in various guises).
>If you need raw compute power, the Alpha is pretty good (we have over a
>1000 in a Cray T3..).
Best of all, the PowerPC and the POWER are binary-compatible to a very
large degree - just the latter has an extra set of 6
>> I'm currently investigating the old non-overcommit patch, which (apart from
>> needing manual applying to recent kernels) appears to be rather broken in a
>> trivial way. It prevents allocation if total reserved memory is greater
>> than the total unallocated memory. Let me say that again, a
Ugh, something was going screwy. Trying from a different machine.
--
The attached patch is against 2.4.1 and incorporates the following:
- More optimistic OOM checking, and slightly improved OOM-kill algorithm,
as per my previous patch.
- Accounting of reserved memory, allowing for...
-
ACK! that last diff got linewrapped somewhere in transit. Try this one...
-
The attached patch is against 2.4.1 and incorporates the following:
- More optimistic OOM checking, and slightly improved OOM-kill algorithm,
as per my previous patch.
- Accounting of reserved memory, allowing fo
The attached patch is against 2.4.1 and incorporates the following:
- More optimistic OOM checking, and slightly improved OOM-kill algorithm,
as per my previous patch.
- Accounting of reserved memory, allowing for...
- Non-overcommittal of memory if sysctl_overcommit_memory < 0, enforced
even f
>> My patch already fixes OOM problems caused by overgrown caches/buffers, by
>> making sure OOM is not triggered until these buffers have been cannibalised
>> down to freepages.high. If balancing problems still exist, then they
>> should be retuned with my patch (or something very like it) in ha
>[ about non-overcommit ]
>> > Nobody feels its very important because nobody has implemented it.
>
>Enterprises use other systems because they have much better resource
>management than Linux -- adding non-overcommit wouldn't help them much.
>Desktop users, Linux newbies don't understan
>> I didn't quite understand Martin's comments about "not normalised" -
>> presumably this is some mathematical argument, but what does this actually
>> mean?
>
>Not mathematics. It's from physics. Very trivial physics, basic school
>indeed.
>If you try to calculate some weighting
>factors which i
>> >start your app, wait for malloc to fail, hit enter for the other app and
>> >watch you app to be OOM killed ;)
>>
>> That would only happen if memory_overcommit was turned on, in which case my
>> modification would have zero effect anyway (the overcommit test happens
>> before my code).
>
>Tha
>- the AGE_FACTOR calculation will overflow after the system has
> an uptime of just _3_ days
Tsk tsk tsk...
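A back-of-envelope check of the overflow claim, assuming HZ=100 and a 32-bit intermediate; the exact constant in the AGE_FACTOR expression isn't quoted here, so the multiplier of 100 is just a placeholder:

/* Three days of uptime at HZ=100 is about 26 million jiffies; multiply
 * by anything near 100 and a signed 32-bit value (max ~2.1e9) overflows. */
#include <stdio.h>
#include <limits.h>

int main(void)
{
    long long jiffies_3_days = 3LL * 24 * 3600 * 100;   /* 25,920,000 */

    printf("3 days of jiffies : %lld\n", jiffies_3_days);
    printf("times 100         : %lld\n", jiffies_3_days * 100);
    printf("32-bit signed max : %d\n", INT_MAX);
    return 0;
}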
>Now if you can make something which preserves the heuristics which
>serve us so well on desktop boxes and add something that makes it
>also work on your Oracle servers, then I'd be inte
>> While my post didn't give an exact formula, I was quite clear on the
>>fact that
>> the system is allowing the caches to overrun memory and cause oom problems.
>
>Yes. A testcase would be good. It's not happening to everybody nor is
>it happening under all loads. (if it were, it'd be long de
>> free = atomic_read(&buffermem_pages);
>> free += atomic_read(&page_cache_size);
>> free += nr_free_pages();
>> - free += nr_swap_pages;
>
>> + /* Since getting swap info is expensive, see if our allocation
>>can happen in physical RAM */
>
>Actually, getting
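The shape of the change being quoted: count the pools that are cheap to read first, and only pay for the swap query when RAM alone can't cover the request. A stand-alone sketch with made-up numbers and function names:

/* Buffers, page cache and free pages are cheap counters; querying swap is
 * treated as the expensive step and only done when actually needed. */
#include <stdio.h>

static long buffer_pages = 4000;
static long cache_pages  = 30000;
static long free_pages   = 8000;

static long swap_pages(void)
{
    return 65536;                      /* stand-in for the costly swap query */
}

static int enough_memory(long pages_requested)
{
    long free = buffer_pages + cache_pages + free_pages;
    if (free > pages_requested)
        return 1;                      /* satisfied from RAM, skip swap */
    return free + swap_pages() > pages_requested;
}

int main(void)
{
    printf("10000 pages:  %s\n", enough_memory(10000)  ? "ok" : "fail");
    printf("200000 pages: %s\n", enough_memory(200000) ? "ok" : "fail");
    return 0;
}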
>Right now my best approximation is to make the OOM test be as optimistic as
>it is safe to be, and the vm_enough_memory() test as pessimistic as
>sensible. Expect a test patch to appear on this list soon.
...and here it is!
This fixes a number of small but linked problems:
- malloc() never re
>While my post didn't give an exact formula, I was quite clear on the fact that
>the system is allowing the caches to overrun memory and cause oom problems.
>I'm more than happy to test patches, and I would even be willing to suggest
>some algorithms that might help, but I don't know where to stic
>I thought of some things which could break it, which I want to try and deal
>with before releasing a patch. Specifically, I want to make freepages.min
>sacrosanct, so that malloc() *never* tries to use it. This should be
>fairly easy to implement - simply subtract freepages.min from the freemem
At 6:58 am + 24/3/2001, Rik van Riel wrote:
>On Sat, 24 Mar 2001, Jonathan Morton wrote:
>
>> Hmm... "if ( freemem < (size_of_mallocing_process / 20) )
>>fail_to_allocate;"
>>
>> Seems like a reasonable soft limit - processes which have already g
>General thread comment:
>To those who are griping, and obviously rightfully so, Rik has twice
>stated on this list that he could use some help with VM auto-balancing.
>The responses (visible on this list at least) was rather underwhelming.
>I noted no public exchange of ideas.. nada in fact.
>
>G
>Hmm... "if ( freemem < (size_of_mallocing_process / 20) ) fail_to_allocate;"
>
>Seems like a reasonable soft limit - processes which have already got lots
>of RAM can probably stand not to have that little bit more and can be
>curbed more quickly. Processes with less probably don't deserve to d
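The rule quoted above, written out as a stand-alone check; the 1/20 divisor comes from the suggestion itself, while the numbers and function name are invented:

/* Soft limit: refuse the allocation when free memory drops below one
 * twentieth of the requesting process's size, so large processes hit the
 * brake sooner than small ones. */
#include <stdio.h>

static int may_allocate(long freemem_kb, long process_size_kb)
{
    return freemem_kb >= process_size_kb / 20;
}

int main(void)
{
    /* 4 MB free: a 200 MB process is refused, a 20 MB one is not. */
    printf("big process:   %s\n", may_allocate(4096, 204800) ? "allow" : "fail");
    printf("small process: %s\n", may_allocate(4096, 20480)  ? "allow" : "fail");
    return 0;
}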
>[to various people]
>
>No, ulimit does not work. (But it helps a little.)
>No, /proc/sys/vm/overcommit_memory does not work.
Entirely correct. ulimit certainly makes it much harder for a single
runaway process to take down important parts of the system - now why
doesn't $(MAJOR_DISTRO_VENDOR) s
>The main point is letting malloc fail when the memory cannot be
>guaranteed.
If I read various things correctly, malloc() is supposed to fail as you
would expect if /proc/sys/vm/overcommit_memory is 0. This is the case on
my RH 6.2 box, dunno about yours. I can write a simple test program whic
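A test program along those lines (the chunk size is arbitrary); with overcommit disabled the loop should end with a NULL return from malloc(), while with overcommit enabled the OOM killer is the more likely outcome, since the pages are only really committed when touched:

/* malloc until it fails, touching every page so the memory is actually
 * allocated rather than merely promised. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = 16UL << 20;   /* 16 MB per allocation */
    size_t total = 0;

    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL) {
            printf("malloc failed after %lu MB\n",
                   (unsigned long)(total >> 20));
            return 0;
        }
        memset(p, 0xaa, chunk);        /* force the pages to be allocated */
        total += chunk;
    }
}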
>It would make much sense to make the oom killer
>leave not just root processes alone but processes belonging to a UID
>lower
>than a certain value as well (500). This would be:
>
>1. Easily manageable by the admin. Just let oracle/www and analogous users
> have a UID lower than, let's say, 500.
Th
>> Rik, is there any way we could get a /proc entry for this, so that one
>> could do something like:
>
>I will respond; NO there is no way for security reasons this is not a
>good idea.
Just out of interest, what information does the OOM score expose that isn't
already available to Joe Random Un
>I'm annoyed when persons post virus alerts to unrelated lists but this
>is a serious threat. If you're offended, flame away.
Since this worm exploits a BIND vulnerability, it would be better placed on
the BIND mailing list than the kernel one. If it exploited a kernel bug,
then it would be more wel
>- automated heavy stress testing
This would be an interesting one to me, from a benchmarking POV. I'd like
to know what my hardware can really do, for one thing - it's all very well
saying this box can do X Whetstones and has a 100Mbit NIC, but it's a much
more solid thing to be able to say "my
> Duh, before making such a claim you should consider the fact that
> this is overclocking your PCI/AGP bus and I have yet to see any
> graphic cards/IDE controllers/other devices which are rated for
> 37MHz PCI bus speed.
The "blue and white" PowerMac G3 and certain early PowerMac G4s used a
66M
> And we have done experiments with controlling interrupts and running
> the RX at "lower" priority. The idea is take RX-interrupt and immediately
> postponing the RX process to tasklet. The tasklet opens for new RX-ints.
> when its done. This way dropping now occurs outside the box since and
> d
>At this point I am 100% lost. any help would be
>greatly appreciated. I am willing to do any testing
>of the system that anyone may need. Currently I have
>no working copy of linux on the system. My normal
>process to get running is to install slackware.
>download 2.4.2 and the latest ac patch
>> If crashes are routine on this machine, I'd recommend that you take
>> a serious look at your ram. (or if you're overclocking, don't)
>
>Crashes were routine, and I was not overclocking, so I took Mike's
>advice and bought a new 256MB DIMM. The computer hasn't crashed
>once since I installed it
>Indeed. The whole concept is fatally flawed; probably the biggest
>challenge facing a cracker attacking this system is choosing which of the
>many avenues to start with :-)
>
>1. The drivers. I really like displaying audio and video via my hard
>drive, so I use drivers which do that...
Or you co
>> It's pretty clear that the IDE drive(r) is *not* waiting for the physical
>> write to take place before returning control to the user program, whereas
>> the SCSI drive(r) is.
>
>This would not be unexpected.
>
>IDE drives generally always do write buffering. I don't even know if you
>_can_ tur
>Is there something generally wrong with how linux determines total cpu
>usage (via procmeter3 and top) when dealing with applications that are
>threaded? I routinely get 0% cpu usage when playing mpegs and mp3s and
>some avi's even (Divx when using no software enhancement) ... Somehow i
>doubt
>I am not going to bite on your flame bait, and you are free to waste your money.
I don't flamebait. I was trying to clear up some confusion...
>No, SCSI does with queuing.
>I am saying that the ata/ide driver rips the heart out of the
>io_request_lock for way too darn long. This means that upon execut
>VP_IDE: IDE controller on PCI bus 00 dev 39
>VP_IDE: chipset revision 16
>VP_IDE: not 100% native mode: will probe irqs later
>ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
>VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci00:07.1
>ide0: BM-DMA at 0xd00
>Jonathan Morton ([EMAIL PROTECTED]) wrote :
>
>> The OS needs to know the physical act of writing data has finished
>>before
>> it tells the m/board to cut the power - period. Pathological data sets
>> included - they are the worst case which every engineer mus
>On Tue, 6 Mar 2001, Mike Black wrote:
>
>> Write caching is the culprit for the performance diff:
Indeed, and my during-the-boring-lecture benchmark on my 18Gb IBM
TravelStar bears this out. I was confused earlier by the fact that one of
my Seagate drives blatantly ignores the no-write-caching
>> Pathological shutdown pattern: assuming scatter-gather is not allowed (for
>> IDE), and a 20ms full-stroke seek, write sectors at alternately opposite
>> ends of the disk, working inwards until the buffer is full. 512-byte
>> sectors, 2MB of them, is 4000 writes * 20ms = around 80 seconds (no
>> i assume you meant to time the xlog.c program? (or did i miss another
>> program on the thread?)
Yes.
>> i've an IBM-DJSA-210 (travelstar 10GB, 5411rpm) which appears to do
>> *something* with the write cache flag -- it gets 0.10s elapsed real time
>> in default config; and gets 2.91s if i d
>> It's pretty clear that the IDE drive(r) is *not* waiting for the physical
>> write to take place before returning control to the user program, whereas
>> the SCSI drive(r) is. Both devices appear to be performing the write
>
>Wrong, IDE does not unplug thus the request is almost, I hate to adm
>I don't know if there is any way to turn off a write buffer on an IDE disk.
hdparm has an option of this nature, but it makes no difference (as I
reported). It's worth noting that even turning off UDMA to the disk on my
machine doesn't help the situation - although it does slow things down a
lit
I've run the test on my own system and noted something interesting about
the results:
When the write() call extended the file (rather than just overwriting a
section of a file already long enough), the performance drop was seen, and
it was slower on SCSI than IDE - this is independent of whether
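A rough reconstruction of that comparison; the file name, sizes and the use of O_SYNC are assumptions about how the test was run, not details taken from the original program, and error checking is omitted for brevity:

/* Time a synchronous write that overwrites existing blocks versus one
 * that extends the file past its old end. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    static char buf[1 << 20];          /* 1 MB of zeroes */
    int fd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);
    double t0;

    write(fd, buf, sizeof buf);        /* pre-extend the file */
    lseek(fd, 0, SEEK_SET);

    t0 = now();
    write(fd, buf, sizeof buf);        /* overwrite existing blocks */
    printf("overwrite: %.3fs\n", now() - t0);

    t0 = now();
    write(fd, buf, sizeof buf);        /* offset is now at EOF: extends */
    printf("extend:    %.3fs\n", now() - t0);

    close(fd);
    unlink("testfile");
    return 0;
}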
1) ES1371 driver in 2.4.2 produces high-pitched buzzing instead of sound.
2) AudioPCI/97 card in friend's Duron-based machine (very similar to mine,
but different soundcard) works fine under Mandrake 7.1 stock kernel
(2.2.15-4mdk), but produces only loud, high-pitched buzzing noises when
used und
>Does anyone know whereabouts I could go to get an index of all
>configurations options (i.e. drivers, etc.) that are available in the
>latest Linux kernel? I am waiting on a kernel mode driver for my USB
>digital camera, but I don't want to go ahead and download the full 24Mb
>just to find out if
>milkplus:~# hdparm /dev/hda
>/dev/hda:
> multcount    =  0 (off)
> I/O support  =  0 (default 16-bit)
> unmaskirq    =  0 (off)
> using_dma    =  1 (on)
> keepsettings =  0 (off)
> nowerr       =  0 (off)
> readonly     =  0 (off)
> readahead    =  8 (on)
> geometry     = 2584/240/63, sectors = 3
I'm seeing a lot of messages in my gateway's system log of the form:
lithium kernel: NAT: 0 dropping untracket packet c233f340 1 10.38.10.67 ->
224.0.0.2
Virtually all these packets come from machines on the student LAN on the
"outside" of the gateway. Whether or not iptables is configured to d
>> Would it not be useful if the isa-pnp driver would fall back
>> to utilizing the PnP BIOS (if possible) in order to read and
>
>I would find this EXTREMELY useful... my Compaq laptop's
>hot-dock with power eject will only work if Linux uses
>PnP BIOS's insert/eject methods.
>
>I saw some code
At 2:32 am + 25/2/2001, Jeremy Jackson wrote:
>Jeff Garzik wrote:
>
>(about optimizing kernel network code for busmastering NIC's)
>
>> Disclaimer: This is 2.5, repeat, 2.5 material.
>
>Related question: are there any 100Mbit NICs with cpu's onboard?
>Something mainstream/affordable?(i.e. not
>Now that you provide source for r5 and dx_hack_hash, let me feed my
>collections to them.
>r5: catastrophic
>dx_hack_hash: not bad, but the linear hash is better.
So, not only does the linear hash normally provide a shorter worst-case
chain, its results are actually more consistent than the o
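One way to quantify "more consistent": hash a pile of generated names into a fixed number of buckets and look at the worst chain length. The hash below is a generic multiplicative one standing in for the functions under discussion; it is not r5 or dx_hack_hash:

/* Measure worst-case chain length for a simple string hash over a set of
 * synthetic file names. */
#include <stdio.h>

#define BUCKETS 1024
#define NAMES   100000

static unsigned hash(const char *s)
{
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % BUCKETS;
}

int main(void)
{
    static int chain[BUCKETS];
    char name[32];
    int worst = 0;

    for (int i = 0; i < NAMES; i++) {
        sprintf(name, "file%06d.txt", i);
        unsigned b = hash(name);
        if (++chain[b] > worst)
            worst = chain[b];
    }
    /* a perfectly even spread would be NAMES / BUCKETS per bucket */
    printf("worst chain: %d (ideal ~%d)\n", worst, NAMES / BUCKETS);
    return 0;
}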