I was reading into the parameter a little more and it appears that the
default for vm.overcommit_ratio is 50%. I am considering bumping this
up to 95%, so the sums look like this:
total committable memory (CommitLimit) = swap + (physical memory * overcommit_ratio / 100)
21 + (16 * 0.95) = 36.2GB
This in theory should…
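A quick way to sanity-check this (a minimal sketch, assuming a Linux host with root access; the procfs paths and sysctl key are the standard ones):

    # Inspect the current overcommit policy and ratio
    cat /proc/sys/vm/overcommit_memory   # 2 = strict commit accounting
    cat /proc/sys/vm/overcommit_ratio    # default is 50

    # Raise the ratio and verify the resulting commit limit
    sysctl -w vm.overcommit_ratio=95
    grep -E 'CommitLimit|Committed_AS' /proc/meminfo

With 21GB of swap and 16GB of RAM, CommitLimit should come out near the 36.2GB figure above.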
Hi,
On 16/06/2014 14:15, Andres Freund wrote:
Hi,
On 2014-06-16 13:56:23 +0100, Bruce McAlister wrote:
[1] 3 x ESX VMs
[a] 8 vCPUs each
[b] 16GB memory each
# Don't hand out more memory than necessary
vm.overcommit_memory = 2
So you haven't tuned overcommit_ratio at all? Can you show /proc/meminfo?
Hi,
On 2014-06-16 13:56:23 +0100, Bruce McAlister wrote:
> [1] 3 x ESX VMs
> [a] 8 vCPUs each
> [b] 16GB memory each
> # Don't hand out more memory than necessary
> vm.overcommit_memory = 2
So you haven't tuned overcommit_ratio at all? Can you show
/proc/meminfo?
Hi All,
I need some assistance with a particular out of memory issue I am
currently experiencing, your thoughts would be greatly appreciated.
Configuration:
[1] 3 x ESX VMs
[a] 8 vCPUs each
[b] 16GB memory each
[2] CentOS 6.5 64-bit on each
On Feb 5, 2008, at 4:28 PM, Jeff Davis wrote:
On Mon, 2008-02-04 at 16:11 -0600, Erik Jones wrote:
Are you sure the postmaster is being launched
under ulimit unlimited?
ulimit -a gives:
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
On Mon, 2008-02-04 at 16:11 -0600, Erik Jones wrote:
> > Are you sure the postmaster is being launched
> > under ulimit unlimited?
>
> ulimit -a gives:
>
> core file size (blocks, -c) unlimited
> data seg size (kbytes, -d) unlimited
> file size (blocks, -f) unlimited
>
I wrote:
> ... I'm wondering a bit why
> CacheMemoryContext has so much free space in it, but even if it had none
> you'd still be at risk.
I tried to reproduce this by creating a whole lot of trivial tables and
then pg_dump'ing them:
create table t0 (f1 int primary key); insert into t0 values(0); …
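A sketch of one way to mass-create such tables for a reproduction (illustrative: the table count, database name, and psql invocation are assumptions, not what was actually run):

    # Generate and load a large number of trivial tables
    for i in $(seq 0 9999); do
      echo "create table t$i (f1 int primary key); insert into t$i values(0);"
    done | psql testdb

Running pg_dump testdb >/dev/null afterwards exercises the catalog queries against a very large relation set.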
Erik Jones <[EMAIL PROTECTED]> writes:
> On Feb 4, 2008, at 3:26 PM, Tom Lane wrote:
>> Are you sure the postmaster is being launched
>> under ulimit unlimited?
> ulimit -a gives:
One possible gotcha is that ulimit in an interactive shell isn't
necessarily the same environment that an init script…
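One way to see what the running postmaster actually inherited, rather than what an interactive shell reports (a sketch assuming Linux and a RHEL-style data directory; adjust the path for your install):

    # The first line of postmaster.pid is the server's PID
    cat /proc/$(head -1 /var/lib/pgsql/data/postmaster.pid)/limits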
On Feb 4, 2008, at 3:26 PM, Tom Lane wrote:
Erik Jones <[EMAIL PROTECTED]> writes:
Sure. I've attached an archive with the full memory context and
error for each. Note that I'm already 99% sure that this is due to
our exorbitantly large relation set which is why I think pg_dump's
catalog queries…
Erik Jones <[EMAIL PROTECTED]> writes:
> Sure. I've attached an archive with the full memory context and
> error for each. Note that I'm already 99% sure that this is due to
> our exorbitantly large relation set which is why I think pg_dump's
> catalog queries are running out of work_mem (c…
On Feb 4, 2008, at 1:27 PM, Tom Lane wrote:
We'd need to see more details to really give decent advice. Exactly
what queries and exactly what was the error message (in particular
I'm wondering how large the failed request was)? Which PG version?
Can you get the memory context dump out of the…
Erik Jones <[EMAIL PROTECTED]> writes:
> Hello, this past weekend I received a couple of Out of Memory errors
> while running pg_dump for two different selects against the
> catalogs, one with pg_get_viewdef() and the other with one of the
> pg_index join pg_class left join pg_depend queries
Hello, this past weekend I received a couple of Out of Memory errors
while running pg_dump for two different selects against the
catalogs, one with pg_get_viewdef() and the other with one of the
pg_index join pg_class left join pg_depend queries. Is it work_mem
I should be increasing with…
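If work_mem were the knob, it could be overridden just for the dump session through libpq's PGOPTIONS environment variable (a sketch; the database name is illustrative):

    # Run pg_dump with a session-level work_mem override
    PGOPTIONS='-c work_mem=32MB' pg_dump mydb > mydb.sql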
>
> Vacuum memory usage is tuned by the "maintenance_work_mem" parameter. I
> suggest you look at
> http://www.postgresql.org/docs/8.2/static/runtime-config-resource.html and
> http://www.postgresql.org/docs/8.2/static/kernel-resources.html#AEN19338.
Thanks Sander, I've read so many of these pages…
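For reference, the knob in question lives in postgresql.conf (the value here is illustrative, not a recommendation):

    # postgresql.conf
    maintenance_work_mem = 64MB   # memory for VACUUM, CREATE INDEX, and similar maintenance work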
> If this is only a PostgreSQL database server, don't limit the postgres user.
> Don't tweak these limits unless you know exactly what you are doing.
Unfortunately, it is not. It has other applications, including Apache
and so on. I tried not setting the ulimits at all, but it seems to be
required…
Hi,
> Now if I want a "maintenance_work_mem" of 64M for Postgresql, what
> should the "max user processes" setting be in my ulimit, or the "open
> files" setting etc? Is there a Postgresql help or doc page I can read
> to see how these values map? I'd like to be more educated in how I
> test to t…
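There is no direct one-to-one mapping between PostgreSQL's memory settings and the ulimit values, but the limits that can interfere are easy to inspect (a sketch using the standard bash ulimit flags):

    # Per-process memory caps that can bite large maintenance_work_mem allocations
    ulimit -v   # virtual address space (KB)
    ulimit -d   # data segment size (KB)
    # Process and file-descriptor caps that matter for a busy server
    ulimit -u   # max user processes
    ulimit -n   # open files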
On 8/14/07, Sander Steffann <[EMAIL PROTECTED]> wrote:
> Hi Lim,
>
> >> It might also be in /etc/security/limits.conf.
> >
> > Thanks. I see these two lines in that file:
> >
> > postgres soft nofile 8192
> > postgres hard nofile 8192
> >
> > How should I change these values?
"Lim Berger" <[EMAIL PROTECTED]> writes:
> I think I have located the problem. It is in "/etc/profile" where some
> ulimits are added. This is the offending text, I think:
> #* cPanel Added Limit Protections -- BEGIN
> #unlimit so we can run the whoami
ulimit -n 4096 -u 143…
Hi Lim,
It might also be in /etc/security/limits.conf.
Thanks. I see these two lines in that file:
postgres soft nofile 8192
postgres hard nofile 8192
How should I change these values? I am not sure how this reflects the
"ulimit" options.
Those are limits to the allowed…
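Those limits.conf items correspond to ulimit flags (a sketch of the mapping; the values are illustrative):

    # /etc/security/limits.conf      # equivalent shell check
    postgres  soft  nofile  8192     # ulimit -Sn
    postgres  hard  nofile  8192     # ulimit -Hn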
Hi Lim,
"Lim Berger" <[EMAIL PROTECTED]> writes:
Wow, you are right! The "su - postgres" showed up with wildly
different values! Most notably, the "max user processes" is only 20!!
Whereas in the regular user stuff it was above 14000. Would you know
how to change this in a CentOS Linux machine?
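On CentOS the usual fix is an nproc entry in /etc/security/limits.conf (a sketch; the values are illustrative, not taken from this thread):

    # /etc/security/limits.conf: raise the process cap for the postgres user
    postgres  soft  nproc  4096
    postgres  hard  nproc  4096

A fresh login shell (su - postgres) then picks up the new limits.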
On 8/14/07, Sander Steffann <[EMAIL PROTECTED]> wrote:
> Hi Lim,
>
> > "Lim Berger" <[EMAIL PROTECTED]> writes:
> >> Wow, you are right! The "su - postgres" showed up with wildly
> >> different values! Most notably, the "max user processes" is only 20!!
> >> Whereas in the regular user stuff it was above 14000.
"Lim Berger" <[EMAIL PROTECTED]> writes:
> Wow, you are right! The "su - postgres" showed up with wildly
> different values! Most notably, the "max user processes" is only 20!!
> Whereas in the regular user stuff it was above 14000. Would you know
> how to change this in a CentOS Linux machine? Whe…
On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
> On 8/14/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> > Lim Berger escribió:
> >
> > > Thanks. I did "su postgres" and ran the ulimit command again. All
> > > values are the same, except for "open files" which is double in the
> > > case of this user (instead of 4096, it is 8192). Not sure what I can gather from that?
On 8/14/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Lim Berger escribió:
>
> > Thanks. I did "su postgres" and ran the ulimit command again. All
> > values are the same, except for "open files" which is double in the
> > case of this user (instead of 4096, it is 8192). Not sure what I can
> > gather from that?
Lim Berger escribió:
> Thanks. I did "su postgres" and ran the ulimit command again. All
> values are the same, except for "open files" which is double in the
> case of this user (instead of 4096, it is 8192). Not sure what I can
> gather from that?
Try "su - postgres" instead (which will run the
On 8/13/07, Bill Moran <[EMAIL PROTECTED]> wrote:
> In response to "Lim Berger" <[EMAIL PROTECTED]>:
>
> > On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > > "Lim Berger" <[EMAIL PROTECTED]> writes:
> > > > ERROR: out of memory
> > > > DETAIL: Failed on request of size 67108860.
> > >
> > > Apparently…
In response to "Lim Berger" <[EMAIL PROTECTED]>:
> On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > "Lim Berger" <[EMAIL PROTECTED]> writes:
> > > ERROR: out of memory
> > > DETAIL: Failed on request of size 67108860.
> >
> > Apparently, this number:
> >
> > > maintenance_work_mem = 64MB
> >
On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
> > ERROR: out of memory
> > DETAIL: Failed on request of size 67108860.
>
> Apparently, this number:
>
> > maintenance_work_mem = 64MB
>
> is more than your system can actually support. Which is a bit odd
My responses below yours. Thanks so much for bearing with me.
On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
>
> > On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> >> "Lim Berger" <[EMAIL PROTECTED]> writes:
> >>
> >> > Hi
> >> >
> >> > I am getting the following error…
"Lim Berger" <[EMAIL PROTECTED]> writes:
> ERROR: out of memory
> DETAIL: Failed on request of size 67108860.
Apparently, this number:
> maintenance_work_mem = 64MB
is more than your system can actually support. Which is a bit odd for
any modern-day machine. I suspect the postmaster is being…
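For reference, the failed request size lines up exactly with the setting: 64MB = 64 * 1024 * 1024 = 67108864 bytes, and the 67108860-byte request is just 4 bytes shy of that, i.e. vacuum is asking for essentially its whole maintenance_work_mem in a single allocation.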
"Lim Berger" <[EMAIL PROTECTED]> writes:
> On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
>> "Lim Berger" <[EMAIL PROTECTED]> writes:
>>
>> > Hi
>> >
>> > I am getting the following error while running queries such as "vacuum
> >> > analyze TABLE", even on small tables with a piddly 35,000 rows!
On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
>
> > Hi
> >
> > I am getting the following error while running queries such as "vacuum
> > analyze TABLE", even on small tables with a piddly 35,000 rows!
> >
> > The error message:
> > --
> > ERROR: out of memory…
"Lim Berger" <[EMAIL PROTECTED]> writes:
> Hi
>
> I am getting the following error while running queries such as "vacuum
> analyze TABLE", even on small tables with a piddly 35,000 rows!
>
> The error message:
> --
> ERROR: out of memory
> DETAIL: Failed on request of size 67108860.
> --
>
> My postgresql.conf is below…
Hi
I am getting the following error while running queries such as "vacuum
analyze TABLE", even on small tables with a piddly 35,000 rows!
The error message:
--
ERROR: out of memory
DETAIL: Failed on request of size 67108860.
--
My postgresql.conf is below. I am on a Dual Core server with 4GB of…
> Given that they have improved their SysV IPC support steadily over the
> past few Darwin releases, I don't see why you'd expect them to not be
> willing to do this. Having a larger default limit costs them *zero* if
> the feature is not used, so what's the objection?
The objection would be atti…
Scott Ribe <[EMAIL PROTECTED]> writes:
>> I have asked Apple about using a saner default for shmmax, but a few
>> more complaints in their bug system wouldn't hurt.
> I suspect it won't help, since their official position is already "don't use
> shmget, use mmap instead"...
Given that they have improved their SysV IPC support steadily over the
past few Darwin releases…
> I have asked Apple about using a saner default for shmmax, but a few
> more complaints in their bug system wouldn't hurt.
I suspect it won't help, since their official position is already "don't use
shmget, use mmap instead"...
--
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
Maybe this is a server vs normal OS X issue. I am running postgres on a
normal iMac (10.3.5) with no problems, but this is just a development box
so I don't need the server version. All of the servers that I run are
Linux/FreeBSD. I don't have access to a Mac server; if I did I would
test this myself.
On…
Tom Lane wrote:
Jeffrey Melloy <[EMAIL PROTECTED]> writes:
I have a couple users trying to install Postgres on OS X. To the best
of my knowledge, both of them are using 7.4.5/10.3.5, and got identical
errors while trying to init the database:
They need to increase the system's shmmax limit (sysctl kern.sysv.shmmax)…
Jeffrey Melloy <[EMAIL PROTECTED]> writes:
> I have a couple users trying to install Postgres on OS X. To the best
> of my knowledge, both of them are using 7.4.5/10.3.5, and got identical
> errors while trying to init the database:
They need to increase the system's shmmax limit (sysctl kern.sysv.shmmax)…
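The kernel-resources page in the PostgreSQL docs suggests settings along these lines for OS X (a sketch; the exact values are illustrative and release-dependent):

    # /etc/sysctl.conf (OS X 10.3.9 and later; older releases set these in /etc/rc)
    kern.sysv.shmmax=167772160   # bytes (160 MB)
    kern.sysv.shmmin=1
    kern.sysv.shmmni=32
    kern.sysv.shmseg=8
    kern.sysv.shmall=65536       # in 4 kB pages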
On Thu, 2004-09-30 at 13:49, Jeffrey Melloy wrote:
> I have a couple users trying to install Postgres on OS X. To the best
> of my knowledge, both of them are using 7.4.5/10.3.5, and got identical
> errors while trying to init the database:
Have you tried the suggestions in the documentation?
I have a couple users trying to install Postgres on OS X. To the best
of my knowledge, both of them are using 7.4.5/10.3.5, and got identical
errors while trying to init the database:
Reducing the shared buffers didn't help.
Any thoughts would be appreciated.
Jeffrey Melloy
[EMAIL PROTECTED]
Hi,
I've got this very bad message with Apache 1.3.6 running mod_perl and
postgresql 6.3.2 on a site hosted by an ISP. They allocate a maximum of
10 MB of memory for each user.
For me this was a message coming from server resources: not enough memory.
At home with the same configuration I ne…
Steve Heaven <[EMAIL PROTECTED]> writes:
> If we try to run a perl script which accesses a Postgres dB using mod_perl
> with Apache we get:
> "child pid 25144 exit signal Segmentation fault (11)
> Out of memory!"
I am mighty ignorant about Apache, but I'll venture a bet that the "Out
of memory!"…
Jeffrey wrote:
>
> Out of memory!
Looks like you might possibly be using Apache?
http://perl.apache.org/guide/troubleshooting.html#Out_of_memory_