On Wed, 23 Feb 2005, Nick Warne wrote:
>
> I upgraded memory - all 4 sticks - over Christmas, and after a few weeks
> uptime, tried 2.4.10 again.
>
> I have had no problems since - so perhaps I did have bad memory (it was old).
> But all tests never showed anything untoward.
>
> I was always suspicious why my 2.6.4 build ran OK, but newer builds always
> failed.  Could it be a subtle fault in memory whilst building kernels that
> does it?
Perhaps, though I don't remember hearing of any example of that.  I think
what typically happens, on a build the size of the kernel, is that one of
the compilations collapses with a SIGSEGV because some pointer within gcc
gets corrupted by the bad memory, so the build fails to complete, rather
than completing with a bad vmlinux output.

A more likely cause for what you saw, if the bad memory is low down or
high up (and what I mean by high may change: wli made an important change
to memory allocation ordering in 2.6.8 which would affect it), is that one
kernel's image or system initialization will happen to allocate the bad
memory to something scarcely used, whereas another may allocate it to
something vital.

But please don't place too much weight on my idle speculations; others
have a better appreciation of these issues.

Hugh
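To make the first point concrete, here is a minimal, hypothetical C sketch
(nothing to do with gcc's actual internals): a single flipped bit in a
pointer almost always lands on an unmapped address, so the process dies
with SIGSEGV instead of quietly producing plausible-but-wrong output.

    /* Hypothetical sketch: one flipped bit in a pointer usually points
     * outside any mapped page, so dereferencing it kills the process
     * with SIGSEGV rather than corrupting its output silently -- which
     * is why a build on bad RAM tends to abort rather than finish with
     * a corrupt vmlinux. */
    #include <stdio.h>
    #include <stdlib.h>

    struct node {
    	int value;
    	struct node *next;
    };

    int main(void)
    {
    	struct node *n = malloc(sizeof(*n));
    	if (!n)
    		return 1;
    	n->value = 42;
    	n->next = NULL;

    	/* Pretend a faulty DIMM flipped bit 30 of the pointer while
    	 * it sat in memory: the result is very likely unmapped. */
    	n = (struct node *)((unsigned long)n ^ (1UL << 30));

    	printf("%d\n", n->value);	/* almost certainly SIGSEGVs here */
    	return 0;
    }

Compiled and run, this is expected to die at the final printf on most
systems, mirroring the way a compile on bad memory usually stops with a
segfault instead of completing with bad output.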