On Fri, Sep 01, 2000 at 12:05:03PM +0100, Alan Cox wrote:
> People would appreciate lots of things but stability happens to come first.
> That's why it's primarily focussed on driver stuff, not on revamping the
> internals. Right now I'm not happy with the NFSv3 stuff I last looked at and
> it seems
On Thu, Oct 05, 2000 at 04:58:39PM +0200, David Weinehall wrote:
> Using the NFSv3 server in the v2.4.0test9 kernel (I haven't tested any
> earlier v2.3.xx or v2.4.0testx kernels) I'm having problems with
> (for instance) compiling glib.
>
> The setups I've tried are:
>
> wsize = rsize = 1kB
> Lin
On Wed, Mar 28, 2001 at 04:32:44PM +0200, Romano Giannetti wrote:
> But with the new VFS semantics, wouldn't it be possible for a MUA to do a
> thing like the following:
>
> spawn a process with a private namespace. Here a minimum subset of the
> "real" tree (maybe all / except /dev) is mounted r
On Thu, Jan 27, 2005 at 05:37:14PM +0100, Vojtech Pavlik wrote:
> On Thu, Jan 27, 2005 at 11:34:31AM -0500, Bill Rugolsky Jr. wrote:
> > I have a Digital HiNote collecting dust which had this keyboard problem
> > with the RH 6.x 2.2.x boot disk kernels, IIRC. I can test if you
On Sun, Feb 13, 2005 at 09:22:46AM +0100, Vojtech Pavlik wrote:
> And I suppose it was running just fine without the patch as well?
Correct.
> The question was whether the patch helps, or whether it is not needed.
If you look again at the patch I posted, it only borrowed a few lines
of the pat
On Thu, Jan 27, 2005 at 03:14:36PM +, Alan Cox wrote:
> Myths are not really involved here. The IBM PC hardware specifications
> are fairly well defined and the various bits of "we glued a 2Mhz part
> onto the bus" stuff is all well documented. Nowadays it's more complex
> because most kbc's aren
On Thu, Mar 03, 2005 at 02:15:06AM -0800, Andrew Morton wrote:
> If we were to get serious with maintenance of 2.6.x.y streams then that is
> a 100% productisation activity. It's a very useful activity, and there is
> demand for it. But it is a very different activity. And a lot of this
> discus
On Thu, Mar 03, 2005 at 02:33:58PM -0500, Dave Jones wrote:
> If you accelerate the merging process, you're weakening the review process.
> The only answer to get regressions fixed up as quickly as possible
> (because prevention is nigh on impossible at the current rate, so
> any faster is just abs
This patch against 2.6.11-rc1-bk6 adds /proc/<pid>/rlimit to export
per-process resource limit settings. It was written to help analyze
daemon core dump size settings, but may be more generally useful.
Tested on 2.6.10. Sample output:
[EMAIL PROTECTED] ~ # cat /proc/$$/rlimit
cpu unlim
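The same information is visible from within a process via getrlimit(2); the short userspace dump below only approximates the proposed file's output (field names and spacing here are illustrative):

#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource)
{
	struct rlimit rl;

	if (getrlimit(resource, &rl) < 0) {
		perror(name);
		return;
	}
	/* Print "name <soft> <hard>", using "unlimited" for RLIM_INFINITY. */
	if (rl.rlim_cur == RLIM_INFINITY)
		printf("%-8s unlimited", name);
	else
		printf("%-8s %llu", name, (unsigned long long)rl.rlim_cur);
	if (rl.rlim_max == RLIM_INFINITY)
		printf(" unlimited\n");
	else
		printf(" %llu\n", (unsigned long long)rl.rlim_max);
}

int main(void)
{
	show("cpu", RLIMIT_CPU);
	show("fsize", RLIMIT_FSIZE);
	show("core", RLIMIT_CORE);
	show("nofile", RLIMIT_NOFILE);
	return 0;
}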
On Tue, Jan 18, 2005 at 04:10:56PM -0800, Chris Wright wrote:
> +#define INIT_RLIMITS \
> +{\
> + { RLIM_INFINITY, RLIM_INFINITY }, \
> + { RLIM_INFINITY, RLIM_INFINITY }, \
> +
On Wed, Jan 19, 2005 at 11:38:03AM -0800, Chris Wright wrote:
> * Jan Knutar ([EMAIL PROTECTED]) wrote:
> > A "cool feature" would be if you could do
> > echo nofile 8192 8192 >/proc/`pidof thatserverprocess`/rlimit
> > :-)
>
> This is security sensitive, and is currently only expected to be change
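The ability to adjust another process's limits later arrived in mainline as the prlimit(2) syscall (kernel 2.6.36, glibc 2.13). A sketch of roughly what the echo above asks for, with the 8192/8192 values hard-coded purely for illustration:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
	struct rlimit new_lim = { .rlim_cur = 8192, .rlim_max = 8192 };
	struct rlimit old_lim;
	pid_t pid;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	pid = (pid_t)atoi(argv[1]);

	/* Raising another process's hard limit needs CAP_SYS_RESOURCE. */
	if (prlimit(pid, RLIMIT_NOFILE, &new_lim, &old_lim) < 0) {
		perror("prlimit");
		return 1;
	}
	printf("nofile: %llu/%llu -> 8192/8192\n",
	       (unsigned long long)old_lim.rlim_cur,
	       (unsigned long long)old_lim.rlim_max);
	return 0;
}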
On Thu, Jan 20, 2005 at 03:43:58PM +0100, Pavel Machek wrote:
> It would be nice if you could make it "value-per-file". That way,
> it could become writable in future. If "max nice level" ever becomes an rlimit,
> this would be very useful.
Agreed, though write support presents difficulties.
My prin
On Tue, Jan 25, 2005 at 02:03:02PM -0800, Chris Wright wrote:
> * Ingo Molnar ([EMAIL PROTECTED]) wrote:
> > did that thread go into technical details? There are some rlimit users
> > that might not be prepared to see the rlimit change under them. The
> > RT_CPU_RATIO one ought to be safe, but gene
On Fri, May 18, 2007 at 11:14:57PM +0200, Krzysztof Halasa wrote:
> I'm certainly missing something but what are the advantages of this
> code (over current gzip etc.), and what will be using it?
Richard's patchset added it to the crypto library and wired it into
the JFFS2 file system. We recentl
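Once a compressor is registered with the crypto layer, in-kernel users reach it through the compression transform interface of that era, crypto_alloc_comp() and friends. A rough module-context sketch, assuming the algorithm is registered under the name "lzo":

#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/module.h>

static int __init comp_demo_init(void)
{
	const char src[] = "hello hello hello hello hello hello";
	struct crypto_comp *tfm;
	u8 dst[128];
	unsigned int dlen = sizeof(dst);
	int ret;

	/* Look up the registered compressor by name. */
	tfm = crypto_alloc_comp("lzo", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	ret = crypto_comp_compress(tfm, (const u8 *)src, sizeof(src),
				   dst, &dlen);
	if (!ret)
		pr_info("comp_demo: %zu -> %u bytes\n", sizeof(src), dlen);

	crypto_free_comp(tfm);
	return ret;
}

static void __exit comp_demo_exit(void)
{
}

module_init(comp_demo_init);
module_exit(comp_demo_exit);
MODULE_LICENSE("GPL");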
On Thu, Apr 12, 2007 at 11:52:38AM -0400, Christopher S. Aker wrote:
> I've been trying to find a method for compressing process core dumps
> before they hit disk.
>
> I ask because we've got some fairly large UML processes (1GB for some),
> and we're trying to capture dumps to help Jeff debug a
On Thu, Apr 12, 2007 at 05:28:45PM +0100, Alan Cox wrote:
> > There are userspace solutions to this problem: allowing the
> > uncompressed core dump to spin out to disk and then coming in afterwards
> > and doing the compression, or maybe even a compressed filesystem where
> > the core dumps la
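One concrete userspace variant along those lines: since 2.6.19 the kernel can pipe a dump to a helper when /proc/sys/kernel/core_pattern begins with '|' (for example "|/usr/local/sbin/core-gz %p"; the path and name are made up), letting the helper compress the stream before it ever touches disk. A minimal sketch using zlib (link with -lz):

#include <stdio.h>
#include <unistd.h>
#include <zlib.h>

int main(int argc, char **argv)
{
	char path[256], buf[65536];
	ssize_t n;
	gzFile out;

	/* argv[1] is the %p (crashing pid) passed via core_pattern. */
	snprintf(path, sizeof(path), "/var/crash/core.%s.gz",
		 argc > 1 ? argv[1] : "unknown");
	out = gzopen(path, "wb");
	if (!out)
		return 1;

	/* The kernel streams the core image to us on stdin. */
	while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
		if (gzwrite(out, buf, (unsigned int)n) != (int)n)
			break;

	gzclose(out);
	return 0;
}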