Joe Gidi [...@entropicblur.com] wrote:
>
> Does this mean that amd64 can now handle >4G of RAM, or is that a separate
> issue?
Separate issue.
But if you have an iommu device and you set bigmem=1 then it might work for you.
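For anyone wondering whether that even applies to their box, a quick hedged check of what the kernel already sees (whether bigmem helps at all depends on the hardware and the release):
$ sysctl hw.physmem hw.usermem   # how much RAM is present vs. usable
$ dmesg | grep -i iommu          # any sign of an IOMMU in the boot messages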
On Wed, Jan 27, 2010 at 10:48:01PM -0500, Ted Unangst wrote:
> Obviously, as any competent sysadmin like nixlists knows, you should
> restrict all your processes to a max of 20 megs.
64KB is enough for anyone. Giving people more resources they may
misuse is just "stupid". And swap is doubly so sin
On Thu, Jan 28, 2010 at 1:24 AM, Robert wrote:
> nixlists wrote:
>>
>> The idea is to limit memory such that running out of RAM+swap is not
>> possible, or unlikely. You can set the limit on the allowed number of
>> processes as well.
>
> I do use ulimit / login.conf for some processes, but does a
nixlists wrote:
The idea is to limit memory such that running out of RAM+swap is not
possible, or unlikely. You can set the limit on the allowed number of
processes as well.
I do use ulimit / login.conf for some processes, but does anybody really
use it for *all possible* processes on each pro
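For the record, the login.conf side of this looks roughly like the sketch below. The class name "capped" and the numbers are invented; login.conf(5) documents the real knobs:
  capped:\
          :datasize-max=512M:\
          :datasize-cur=512M:\
          :maxproc-max=128:\
          :maxproc-cur=64:\
          :tc=default:
After editing /etc/login.conf, rebuild the db if you keep one and put the user in the class:
$ sudo cap_mkdb /etc/login.conf
$ sudo usermod -L capped someuser   # "someuser" is a placeholder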
On Wed, Jan 27, 2010 at 9:23 PM, bofh wrote:
>> The idea is to limit memory such that running out of RAM+swap is not
>> possible, or unlikely. You can set the limit on the allowed number of
>> processes as well.
>
>
> $ ulimit -m
> 971876
> $ dmesg | grep real\ mem
> real mem = 1039691776 (991MB)
Obviously, as any competent sysadmin like nixlists knows, you should
restrict all your processes to a max of 20 megs.
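One detail worth keeping in mind when reading the numbers above: ulimit -m is the resident-set-size limit, while what a big fsck actually trips over is the data segment size. Both are cheap to check (values in kilobytes):
$ ulimit -m      # max resident set size
$ ulimit -d      # data segment size (soft limit)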
On Jan 27, 2010, at 9:23 PM, bofh wrote:
On Wed, Jan 27, 2010 at 8:14 PM, nixlists wrote:
On Wed, Jan 27, 2010 at 7:53 PM, Denis Doroshenko wrote:
aren't you missing the point of original comment made by Otto?
On Wed, Jan 27, 2010 at 8:14 PM, nixlists wrote:
> On Wed, Jan 27, 2010 at 7:53 PM, Denis Doroshenko wrote:
>> aren't you missing the point of original comment made by Otto?
>>
>> consider a situation, when all the processes in the system "are
>> behaving", none of them violates their rlimits,
On Wed, Jan 27, 2010 at 7:53 PM, Denis Doroshenko wrote:
> On 1/28/10, nixlists wrote:
>> Why kill random processes that may not be misbehaving and/or cause a
>> kernel panic when you want to kill the process(es) that leak memory or
>> are hungry in the first place? It's possible to avoid kernel panics in
>> this case IMO, and not kill random processes.
On Wed, Jan 27, 2010 at 4:53 PM, Denis Doroshenko wrote:
> so the OS needs to do something. what should it do? should it just
> panic? or may be losing one process is better than losing them all?
> then, what are the criteria for choosing processes to be killed?..
>
> wondering if "random" means
On 1/28/10, nixlists wrote:
> Why kill random processes that may not be misbehaving and/or cause a
> kernel panic when you want to kill the process(es) that leak memory or
> are hungry in the first place? It's possible to avoid kernel panics in
> this case IMO, and not kill random processes.
On Wed, Jan 27, 2010 at 10:35 AM, Robert wrote:
> frantisek holop wrote:
>>
>> the kernel will kill random processes? are we talking about linux's OOM
>> here or openbsd? since when is this in openbsd? i seem to recall
>> some debate where openbsd devs found that idea ridiculous. i know i do,
Whoops... re-reading, I see that I missed your disklabel output... sorry.
On Wed, 27 Jan 2010 17:25 -0500, "Brad Tilley" wrote:
> On Wed, 27 Jan 2010 20:43 +, "Rob Sheldon" wrote:
>
> [snip]
>
> > softraid0 at root
> > root on sd1a swap on sd1b dump on sd1b
> >
> > ...that's odd, it's showing swap (and dump) on sd1b, but there's no such
> > thing:
On 2010-01-27, Rob Sheldon wrote:
> The longer version: this is a backup server running backuppc for a
> corporate client ("large enough number of workstations") that does research
> work ("some really big files"). I _thought_ I had read the big filesystem
> FAQ carefully, but somehow missed that
On Wed, 27 Jan 2010 20:43 +, "Rob Sheldon" wrote:
[snip]
> softraid0 at root
> root on sd1a swap on sd1b dump on sd1b
>
> ...that's odd, it's showing swap (and dump) on sd1b, but there's no such
> thing:
>
> $ sudo df /dev/sd1b
> df: /dev/sd1b: Device not configured
>
> ...maybe it really
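"Device not configured" is just ENXIO: the /dev node is there, but no such partition is configured behind it. The places to look for swap are swapctl and the disklabel; a short sketch, assuming the disk is sd1 as above:
$ swapctl -l          # which swap devices are actually in use
$ sudo disklabel sd1  # is there really a b partition with fstype swap?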
On Wed, 27 Jan 2010 22:06:19 +0100, Otto Moerbeek wrote:
>
> No, currently the amount of physical memory an amd64 can address is
> limited.
Well, F___. :-(
The rule here then is, if you've got a partition bigger than 1TB, you
*must* have swap?
- R.
--
[__ Robert Sheldon
[__ Founder, No Proble
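If the answer does turn out to be "give it swap", swap can be added at run time without repartitioning. A hedged sketch using a swap file; the path and size are made up, see swapctl(8):
$ sudo dd if=/dev/zero of=/var/swapfile bs=1m count=4096   # 4GB, not sparse
$ sudo chmod 600 /var/swapfile
$ sudo swapctl -a /var/swapfile
$ swapctl -l                                               # confirm it is in use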
On Wed, Jan 27, 2010 at 08:43:40PM +, Rob Sheldon wrote:
> On Wed, 27 Jan 2010 07:42:42 +0100, Otto Moerbeek wrote:
> > On Wed, Jan 27, 2010 at 12:38:47AM +, Rob Sheldon wrote:
> >
> >> There's no dmesg attached because I'm not on-site with the server at the
> >> moment, and because AFAICT this is a known problem.
On Wed, 27 Jan 2010 07:42:42 +0100, Otto Moerbeek wrote:
> On Wed, Jan 27, 2010 at 12:38:47AM +, Rob Sheldon wrote:
>
>> There's no dmesg attached because I'm not on-site with the server at the
>> moment, and because AFAICT this is a known problem.
>
> A pity, since it does matter what platform
hmm, on Wed, Jan 27, 2010 at 04:35:19PM +0100, Robert said that
> If the OS runs out of (any) memory then there is already a serious
there's plenty of discussion about the virtues/stupidity
of the OOM killer approach, including various "pardon" policies.
google for "out of fuel linux" for amusement.
On Wed, Jan 27, 2010 at 10:31:40AM -0500, Ted Unangst wrote:
> On Wed, Jan 27, 2010 at 10:00 AM, frantisek holop wrote:
> > hmm, on Wed, Jan 27, 2010 at 03:28:12PM +0100, Otto Moerbeek said that
> >> Depends on the arch. i386 is limited to 1G, amd64 is limited to 8G per
> >> process. What happen
frantisek holop wrote:
the kernel will kill random processes? are we talking about linux's OOM
here or openbsd? since when is this in openbsd? i seem to recall
some debate where openbsd devs found that idea ridiculous. i know i do,
and the machine should panic instead of starting shooting dow
On Wed, Jan 27, 2010 at 10:00 AM, frantisek holop wrote:
> hmm, on Wed, Jan 27, 2010 at 03:28:12PM +0100, Otto Moerbeek said that
>> Depends on the arch. i386 is limited to 1G, amd64 is limited to 8G per
>> process. What happens if more memory is allocated than the available
>> swap is that the k
On Wed, Jan 27, 2010 at 10:11:57AM -0500, Joe Gidi wrote:
> On Wed, January 27, 2010 9:28 am, Otto Moerbeek wrote:
> > Depends on the arch. i386 is limited to 1G, amd64 is limited to 8G per
> > process. What happens if more memory is allocated than the available
> > swap is that the kernel will k
On Wed, January 27, 2010 9:28 am, Otto Moerbeek wrote:
> Depends on the arch. i386 is limited to 1G, amd64 is limited to 8G per
> process. What happens if more memory is allocated than the available
> swap is that the kernel will kill random processes to free swap. That
> might be what is going on
hmm, on Wed, Jan 27, 2010 at 03:28:12PM +0100, Otto Moerbeek said that
> Depends on the arch. i386 is limited to 1G, amd64 is limited to 8G per
> process. What happens if more memory is allocated than the available
> swap is that the kernel will kill random processes to free swap. That
> might be
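To make the "per process" part concrete: the soft limit comes from login.conf or a ulimit in the shell, and even the hard limit is clamped by the kernel's per-arch maximum, so "unlimited" on amd64 still means 8G at most. Checking is cheap:
$ ulimit -H -d                    # hard data-size limit for this shell, in KB
$ grep datasize /etc/login.conf   # where the per-class defaults come from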
On Wed, Jan 27, 2010 at 02:06:20PM +, Rob Sheldon wrote:
> On Wed, 27 Jan 2010 07:42:42 +0100, Otto Moerbeek wrote:
> > On Wed, Jan 27, 2010 at 12:38:47AM +, Rob Sheldon wrote:
> >
> >> Hi,
> >
> > These days, amd64 is the only platform that increases the limit
> > (MAXDSIZE) to 8G. Th
On Wed, 27 Jan 2010 07:42:42 +0100, Otto Moerbeek wrote:
> On Wed, Jan 27, 2010 at 12:38:47AM +, Rob Sheldon wrote:
>
>> Hi,
>
> These days, amd64 is the only platform that increases the limit
> (MAXDSIZE) to 8G. Though you venture into untested territory, we
> (myself at least) just do not
On Tue, 26 Jan 2010 19:10:47 -0600 (CST), "L. V. Lammert" wrote:
> On Wed, 27 Jan 2010, Rob Sheldon wrote:
>
> Don't know if this is related to a problem I had on a machine recently,
..
> however I found that if I hung the 'bad' drive on ANOTHER machine, the
> fsck ran just fine!
To be honest, I
On Wed, Jan 27, 2010 at 12:38:47AM +, Rob Sheldon wrote:
> Hi,
>
> So, the short version is that I have a server with OpenBSD 4.6 that can't
> fsck its big partition; fsck fails with a segfault every time. If I "ulimit
> -d unlimited" before fsck'ing, it just takes a little longer to segfault
On Wed, Jan 27, 2010 at 12:38:47AM +, Rob Sheldon wrote:
> Hi,
>
> So, the short version is that I have a server with OpenBSD 4.6 that can't
> fsck its big partition; fsck fails with a segfault every time. If I "ulimit
> -d unlimited" before fsck'ing, it just takes a little longer to segfault.
On Wed, 27 Jan 2010, Rob Sheldon wrote:
> Hi,
>
> So, the short version is that I have a server with OpenBSD 4.6 that can't
> fsck its big partition; fsck fails with a segfault every time. If I "ulimit
> -d unlimited" before fsck'ing, it just takes a little longer to segfault.
> It produces no oth
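For reference, the workaround described above amounts to something like this; rsd1d is only a guess at the raw partition, and on amd64 even "unlimited" is still clamped to the 8G MAXDSIZE mentioned elsewhere in the thread:
$ sudo ksh                 # root shell, so the raised limit applies to fsck
# ulimit -d unlimited
# ulimit -d                # see what was actually granted, in kilobytes
# fsck_ffs -f /dev/rsd1d   # force a check of the raw partition (name is a guess)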