I am running Cygwin 64-bit and have installed the latest version of PHP, 7.3.7.
When I attempt to run composer, I get a fatal out of memory error.
In my php.ini file I have set memory_limit to -1, and
php -r "echo ini_get('memory_limit').PHP_EOL;"
-1
confirms it.
I saw a s
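(Not from the message, but a quick way to narrow this down: confirm which php.ini the CLI actually loads and force the limit for a single run. The invocation below assumes the composer on PATH is the phar or a plain PHP script.)
php --ini                                     # shows the loaded configuration file(s)
php -d memory_limit=-1 "$(command -v composer)" diagnose
COMPOSER_MEMORY_LIMIT=-1 composer install     # Composer also honours this variable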
On 9/23/2013 2:28 AM, Matt D. wrote:
On 9/22/2013 10:25 PM, Larry Hall (Cygwin) wrote:
On 9/22/2013 5:30 PM, Matt D. wrote:
Depending on the options I choose, I keep running into lz/lzma reporting:
"ERROR: Can't allocate required memory!".
This can occur before or during compression. The lz/lzma process tends to stop after gaining about 400MB on the process; then it exits with this error.
The link sounded promising but no matter how high I set it (tried up to
8192), I get the same error, even when bringing down the compression
ratio a bit.
I attempted to increase the cygwin heap on both /usr/lib/p7zip/7z.exe
and 7za.exe.
Matt D.
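(For reference, not from the thread: raising the per-executable Cygwin heap is normally done with peflags from the rebase package, assuming your peflags is recent enough to offer --cygwin-heap; the value is in MB and the paths are the ones named above.)
peflags --cygwin-heap=2048 /usr/lib/p7zip/7z.exe
peflags --cygwin-heap=2048 7za.exe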
On 9/22/2013 10:25 PM, Larry Hall (Cygwin) wrote:
On 9/22/2013 5:30 PM, Matt D. wrote:
Depending on the options I choose, I keep running into lz/lzma reporting:
"ERROR: Can't allocate required memory!".
This can occur before or during compression. The lz/lzma process tends to
stop after gaining about 400MB on the process; then it exits with this
Depending on the options I choose, I keep running into lz/lzma
reporting: "ERROR: Can't allocate required memory!".
This can occur before or during compression. The lz/lzma process tends to
stop after gaining about 400MB on the process; then it exits with this
error. The problem does not appear
Greetings, Buchbinder, Barry (NIH/NIAID) [E]!
> Just to complete this topic ...
> This gets rid of all the fork-execs in the inner loop except
> for sleep. Instead of comparing file contents, it uses
> the test builtin to compare time stamps.
I /never ever/ rely on timestamps, except for casual
Just to complete this topic ...
This gets rid of all the fork-execs in the inner loop except
for sleep. Instead of comparing file contents, it uses
the test builtin to compare time stamps.
#!/bin/dash
FILE_TO_CHECK=/mypath/style.less
COMPARE_FILE=/mypath/compare_file.tm
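(A rough sketch of the approach described above; the file names are the poster's, the processing command is a placeholder, and dash's builtin test is used for the -nt comparison.)
#!/bin/dash
FILE_TO_CHECK=/mypath/style.less
COMPARE_FILE=/mypath/compare_file.tmp
touch -r "$FILE_TO_CHECK" "$COMPARE_FILE"          # remember the current mtime
while :; do
    # inner loop: only "sleep" forks; "[" is a dash builtin
    while ! [ "$FILE_TO_CHECK" -nt "$COMPARE_FILE" ]; do
        sleep 1
    done
    touch -r "$FILE_TO_CHECK" "$COMPARE_FILE"      # reset the reference stamp
    process_file "$FILE_TO_CHECK"                  # placeholder for the real command
done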
1-second delay between
> checks of the hash. This works great for several hours, but then gives an
> "out
> of memory" error and actually brings Windows 7 to its knees.
Have you ascertained whether the leak is in bash, cygwin1.dll, or in
Windows itself? If it is BLODA (and
On 06/01/2012 03:20 AM, Adam Dinwoodie wrote:
> Buchbinder, Barry wrote:
>> You might try changing
>> [[ condition ]]
>> to
>> [ condition ]
>> Perhaps single brackets use memory differently than double brackets.
>
> They do: [[ condition ]] is interpreted by the shell; [ condition ] forks to call /usr/bin/[.
1-second delay between
> checks of the hash. This works great for several hours, but then gives an
> "out
> of memory" error and actually brings Windows 7 to its knees.
> [...]
> Here is the script:
>
> #!/bin/sh
>
> FILE_TO_CHECK=/mypat
AZ 9901 wrote:
> So some things to avoid while (bash)scripting under Cygwin to limit
> BLODA effect :
> - | : pipe stdout --> stdin
> - $(...) : subshell fork
> - `...` : same as before, subshell fork
> - [ condition ] : prefer [[ condition ]] construction
> - anything else ?
By my understanding o
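(Illustrative only, not from the thread: the kind of rewrite being discussed, replacing forked helpers with shell builtins.)
name=$(basename "$path")     # forks a subshell plus /usr/bin/basename
name=${path##*/}             # same result via parameter expansion, no fork
i=$(expr "$i" + 1)           # forks a subshell plus /usr/bin/expr
i=$((i + 1))                 # arithmetic expansion, no fork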
2012/6/1 AZ 9901:
> 2012/6/1 Adam Dinwoodie:
>> Buchbinder, Barry wrote:
>>> You might try changing
>>> [[ condition ]]
>>> to
>>> [ condition ]
>>> Perhaps single brackets use memory differently than double brackets.
>>
>> They do: [[ condition ]] is interpreted by the shell; [ condition ]
2012/6/1 Adam Dinwoodie:
> Buchbinder, Barry wrote:
>> You might try changing
>> [[ condition ]]
>> to
>> [ condition ]
>> Perhaps single brackets use memory differently than double brackets.
>
> They do: [[ condition ]] is interpreted by the shell; [ condition ] forks to
> call /usr/bin/[.
fails to match the previous hash,
> it
> will execute a command to process the file. I used a 1-second delay between
> checks of the hash. This works great for several hours, but then gives an
> "out
> of memory" error and actually brings Windows 7 to its knees.
>
>
Jordan yahoo.com> writes:
> This works great for several hours, but then gives an "out
> of memory" error and actually brings Windows 7 to its knees.
> I can provide the exact memory error if
> requested
I reproduced it again. The error messages are as follows:
./m
2012/5/31 Jordan :
> AZ 9901 :
>>
>> Make an infinite loop with no fork, and look at the memory usage.
>> Then, make an infinite loop with one fork and look at the memory
>>
>> I really hope a solution will be found one day
>>
>
> Argh! And I really like CygWin, so I was hoping to learn that this
AZ 9901 gmail.com> writes:
>
> Make an infinite loop with no fork, and look at the memory usage.
> Then, make an infinite loop with one fork and look at the memory
>
> I really hope a solution will be found one day
>
Argh! And I really like CygWin, so I was hoping to learn that this is
reso
assuming the problem is that forks are not getting fully
> reclaimed.
I think so too; when I intensively script under Cygwin, I have to
reboot after some time because the machine runs out of memory.
I made some tests; forks are clearly the root cause, yes, they are not
fully reclaimed.
Make an infinite loop with no fork, and look at the memory usage.
The following are just ideas - totally untested.
You might try changing
[[ condition ]]
to
[ condition ]
Perhaps single brackets use memory differently than double brackets.
If that doesn't work, try changing
#!/bin/sh
(which calls bash) to
#!/bin/dash
You will have to retain
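(A reminder of what the dash switch implies, not from the message: dash only understands POSIX constructs, so [[ ]], arrays and process substitution are out. A hypothetical fragment:)
#!/bin/dash
if [ "$new_md5" != "$old_md5" ]; then
    process_file        # hypothetical command standing in for the real work
fi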
1. Does "fewer forks" mean that some forks are still occurring, thus the same
memory crash will still happen, but not right away? Just delaying the
inevitable, for longer than my original script does?
I think so, assuming the problem is that forks are not getting fully reclaimed.
2. What is
Thrall, Bryan flightsafety.com> writes:
>
> AZ 9901 wrote on 2012-05-31:
> > 2012/5/31 Jordan :
> > Then, when (bash) scripting under Cygwin, you must take care to avoid
> > forking as much as possible.
> >
> > You could try to improve the "sleep 1" loop with the following one :
> >
> > while
AZ 9901 wrote on 2012-05-31:
> 2012/5/31 Jordan :
> Then, when (bash) scripting under Cygwin, you must take care to avoid
> forking as much as possible.
>
> You could try to improve the "sleep 1" loop with the following one :
>
> while md5sum $FILE_TO_CHECK | cut -d " " -f1 | grep -q "^$MD5PRINT
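(The truncated loop above presumably reads roughly like this; a reconstruction with assumed variable names, not the poster's exact text.)
MD5PRINT=$(md5sum "$FILE_TO_CHECK" | cut -d " " -f1)
while md5sum "$FILE_TO_CHECK" | cut -d " " -f1 | grep -q "^$MD5PRINT"; do
    sleep 1
done
# leaving the loop means the hash no longer matches, i.e. the file changed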
2012/5/31 Jordan :
> I am just wondering why the loops here are consuming increasing amounts of
> memory over time? I'm assigning new MD5 values into existing variables over
> and
> over, not allocating new variables for each MD5 assignment. (Right??) Is 1
> second perhaps too short a delay... do
This works great for several hours, but then gives an "out of memory" error
and actually brings Windows 7 to its knees.
The script uses a loop within a loop; the outer loop is infinite by design, and
the inner loop ends when it finds a non-matching hash and processes the file.
It
broke while running the inne
ad.)
-Richard
On 19.03.2012 11:51, Bruno Galindro da Costa wrote:
Hi!
I'm trying to copy some files from Windows to Linux using rsync but,
after a short time, an error is shown. Here is the log:
ERROR: out of memory in flist_expand [sender]
rsync error: error allocating core memory buf
-Richard
On 19.03.2012 11:51, Bruno Galindro da Costa wrote:
Hi!
I'm trying to copy some files from Windows to Linux using rsync but,
after a short time, an error is shown. Here is the log:
ERROR: out of memory in flist_expand [sender]
rsync error: error allocating core memory
Hi!
I'm trying to copy some files from Windows to Linux using rsync but,
after a short time, an error is shown. Here is the log:
ERROR: out of memory in flist_expand [sender]
rsync error: error allocating core memory buffers (code 22) at
/home/lapo/package/rsync-3.0.9-1/src/rsync-
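(A generic workaround, not from the thread: flist_expand failures mean rsync's in-memory file list grew too large, so transferring one top-level directory at a time keeps it bounded. Paths and host are examples.)
for d in /cygdrive/d/data/*/; do
    rsync -a "$d" user@linuxbox:/backup/"$(basename "$d")"/
done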
On 11/29/2010 1:57 PM, Corinna Vinschen wrote:
On Nov 29 05:58, tsteven4 wrote:
I can readily reproduce a tcsh error "Out of memory", however the
timing is variable. In my actual application the time to failure
varies from seconds to hours. I have a test case that seems to
On Nov 29 05:58, tsteven4 wrote:
> I can readily reproduce a tcsh error "Out of memory", however the
> timing is variable. In my actual application the time to failure
> varies from seconds to hours. I have a test case that seems to be
> able to reproduce the error.
>
>
I can readily reproduce a tcsh error "Out of memory", however the timing
is variable. In my actual application the time to failure varies from
seconds to hours. I have a test case that seems to be able to reproduce
the error.
The symptom of the error is a message like:
188
.
On Nov 6 12:00, Jim Reisert AD1C wrote:
> I just got an abort/out of memory error from tcsh. I was sourcing a
> script to run telnet. Should I be worried about this?
>
> tcsh current memory allocation:
> free: 0 10275 11 26830211
>
I just got an abort/out of memory error from tcsh. I was sourcing a
script to run telnet. Should I be worried about this?
tcsh current memory allocation:
free: 0 10275 11 26830211
11000000000000
0
On Jan 5 10:03, Buchbinder, Barry (NIH/NIAID) [E] wrote:
> Upgrading from 1.5 to 1.7 I started getting an error message from
> cygpath where I hadn't previously. This can be exemplified by the
> following.
>
> $ cmd /c dir /s /b o:\\ | cygpath -u -f - | wc
> cygpath:
Upgrading from 1.5 to 1.7 I started getting an error message from
cygpath where I hadn't previously. This can be exemplified by the
following.
$ cmd /c dir /s /b o:\\ | cygpath -u -f - | wc
cygpath: out of memory
20029 20029 332488
While I've fixed my problem*, I thought that
Pádraig Brady <[EMAIL PROTECTED]> wrote:
>>> I notice that argv_iter does a malloc() + memcpy() per entry.
>>> Since the sources are already NUL terminated strings
>>> perhaps it could just return a pointer to a getdelim
>>> realloc'd buffer which was referenced in the argv_iterator struct.
>>
>> T
Jim Meyering wrote:
> Pádraig Brady <[EMAIL PROTECTED]> wrote:
>> Jim Meyering wrote:
>>> Subject: [PATCH 1/2] argv-iter: new module
>>>
>>> * gl/lib/argv-iter.h: New file.
>>> * gl/lib/argv-iter.c: New file.
>>> * gl/modules/argv-iter: New file.
>> Very useful module!
>>
>> I see that --files0-fro
Pádraig Brady <[EMAIL PROTECTED]> wrote:
> Jim Meyering wrote:
>> Subject: [PATCH 1/2] argv-iter: new module
>>
>> * gl/lib/argv-iter.h: New file.
>> * gl/lib/argv-iter.c: New file.
>> * gl/modules/argv-iter: New file.
>
> Very useful module!
>
> I see that --files0-from was added to `du` in Mar 20
Jim Meyering wrote:
> Subject: [PATCH 1/2] argv-iter: new module
>
> * gl/lib/argv-iter.h: New file.
> * gl/lib/argv-iter.c: New file.
> * gl/modules/argv-iter: New file.
Very useful module!
I see that --files0-from was added to `du` in Mar 2004,
so it's a nice solution to this 4 year old issue.
Jim Meyering <[EMAIL PROTECTED]> wrote:
> Eric Blake <[EMAIL PROTECTED]> wrote:
>> According to Barry Kelly on 11/23/2008 6:24 AM:
>>> I have a problem with du running out of memory.
>>>
>>> I'm feeding it a list of null-separated file names
Eric Blake <[EMAIL PROTECTED]> wrote:
> [adding the upstream coreutils list]
> According to Barry Kelly on 11/23/2008 6:24 AM:
>> I have a problem with du running out of memory.
>>
>> I'm feeding it a list of null-separated file names via standard input,
>
Eric Blake wrote:
> [adding the upstream coreutils list]
>
> According to Barry Kelly on 11/23/2008 6:24 AM:
> > I have a problem with du running out of memory.
> >
> > I'm feeding it a list of null-separated file names via standard input,
> > to a command-
[adding the upstream coreutils list]
According to Barry Kelly on 11/23/2008 6:24 AM:
> I have a problem with du running out of memory.
>
> I'm feeding it a list of null-separated file names via standard input,
> to a command-lin
I have a problem with du running out of memory.
I'm feeding it a list of null-separated file names via standard input,
to a command-line that looks like:
du -b --files0-from=-
The problem is that when du is run in this way, it leaks memory like a
sieve. I feed it about 4.7 million path
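(For context, the pipeline being described looks roughly like this, with example paths; the xargs variant is a possible stopgap that feeds the names in bounded batches instead of all at once.)
find /some/tree -type f -print0 | du -b --files0-from=- > sizes.txt
find /some/tree -type f -print0 | xargs -0 -n 100000 du -b > sizes.txt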
At 04:02 AM 7/5/2005, you wrote:
>Michael Bax SPAM.bigfoot.com> writes:
>
>>
>> I discovered that my .history file was 53 MB in size! This is with
>> history 1024
>> savehist(1024 merge)
>> in my .login.
>>
>> > wc .history
>> 0 3316299 53060768 .history
>>
>
>I've got the same probl
Michael Bax SPAM.bigfoot.com> writes:
>
> I discovered that my .history file was 53 MB in size! This is with
> history 1024
> savehist(1024 merge)
> in my .login.
>
> > wc .history
> 0 3316299 53060768 .history
>
I've got the same problem with the same symptoms (the
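(For reference: the settings quoted above correspond to these tcsh variables, and an oversized .history can be trimmed by hand; the path is the default one.)
set history = 1024
set savehist = (1024 merge)
# one-off trim, works from any shell:
tail -n 1024 ~/.history > ~/.history.trimmed && mv ~/.history.trimmed ~/.history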
Ugh, top-posting... Reformatted.
On Mon, 7 Feb 2005, Alexis Cothenet wrote:
> On Mon, 7 Feb 2005, Igor Pechtchanski wrote:
>
> > On Mon, 7 Feb 2005, Alexis Cothenet wrote:
> >
> > > Hi all,
> > >
> > > the subject is the error message I gave compiling a simple program
> > > on Cygwin. I tried t
> -Original Message-
> From: cygwin-owner On Behalf Of Alexis Cothenet
> Sent: 07 February 2005 16:22
> I have already seen this page, but this mailing list seems to me
> the better one for this question; am I wrong?
This sentence doesn't even make sense; how could there possibly be a c
Hi,
I have already seen this page, but this mailing list seems to me
the better one for this question; am I wrong?
I have already checked on Google what could be the reason for this message
during a compilation.
It was indicated that it could be related to the gcc version, but
the Cygwin gcc version i
way to cause this problem! My cygwin
> installation was running fine for over a year on an XP machine without
> any problems. Today, out of nowhere, it starts giving me the "Out of
> memory" error whenever I try to run tcsh.
> Looking in that directory revealed that the .histo
http://www.cygwin.com/ml/cygwin/2004-02/msg00798.html
Re: tcsh hangs after updating to cygwin 1.5.7-1. Expires with "Out of
Memory"
* From: Corinna Vinschen
* To: cygwin at cygwin dot com
* Date: Tue, 17 Feb 2004 16:11:55 +0100
* Subject: Re: tcsh hangs afte
On Feb 17 12:51, Etienne Huot wrote:
> References: <[EMAIL PROTECTED]>
>
> I have experienced exactly the same problem! On two different machines
> running XP. But actually, I'm not sure this is due to the new DLL
I'm running XP *and* I'm using tcsh on a regular basis and I'm not
able to reprodu
.0 box with tcsh as my login
shell.
>> After upgrading to cygwin 1.5.7-1 from cygwin.com, tcsh has stopped
>> working.
>>
>> If I start tcsh either from command line or run menu or as a login
>> shell, it hangs for a long period of time after displaying motd.
>> It even
gin
> shell, it hangs for a long period of time after displaying motd.
> It eventually comes out of slumber with a message "Out of Memory" and
> exits.
>
> I ran it with verbose and echo flags set and it seems to go through my
> .cshrc file successfully but get stuck after
eventually comes out of slumber with a message "Out of Memory" and
exits.
I ran it with verbose and echo flags set and it seems to go through my
.cshrc file successfully but get stuck after that. It was working fine
with the same startup files before the upgrade.
All other shells; bash, ks
> When I start ./configure of that program:
> http://dev.null.pl/ekg/ekg-20040130.tar.gz it runs a while and then
> ends with many out of memory errors (Windows popups are showing).
> The machine has 140 MB of RAM; it is Win98 on VMware. The Cygwin version is
> 1.5.7, but on 1.5.6 and e
When I start ./configure of that program:
http://dev.null.pl/ekg/ekg-20040130.tar.gz it runs a while and then
ends with many out of memory errors (Windows popups are showing).
The machine has 140 MB of RAM; it is Win98 on VMware. The Cygwin version is
1.5.7, but on 1.5.6 and earlier the same happened
Hello,
Does anyone know of a quick way to increase the number of processes that can be
run in the background? I need to have a large number of copies of a small
utility listening at the same time and, with no applications running, I can
only
achieve 62 instances of the utility before I get the me
at 8.0 server. This satisfies all my present needs.
However, over a couple of days rsync.exe and sshd.exe processes are
adding up (over 100 at my last count) until the Win 2k machine runs out
of memory. Each process eats about 2,000-3,000 K.
The backup still works ok and I can't fin
100 at my last count) until the Win 2k machine runs out
of memory. Each process eats about 2,000-3,000 K.
The backup still works ok and I can't find any error messages on my
linux machine.
Could someone point the problem in front of this terminal to the right
solution.
Thanks a lot
Armin
ds about 1G memory, the following
> happened:
>
> "Out of memory during request for 1016 bytes, total sbrk() is 261015552
> bytes!".
>
> So my program was forced to die.
>
> You are most appreciated! All the best!
>
> Best regards,
>
> Huang Xiangang
> B
Dear All,
My System Environment as below:
Windows 2000 Professional
Cygwin version 2.78.2.9
Physical Memory: 256M
Virtual Memory: 1.5G
Perl v5.6.1 built for cygwin-multi
When I run my perl program which needs about 1G memory, the following
happened:
"Out of memory during request for 1016
memory
>until the memory limit of 256 MB (default) is reached.
>Perl, as expected, terminates with "Out of Memory!"
>
>However, _after_ the script terminates,
>subsequently executed cygwin programs have a problem:
>
> - a simple "ls" gives
>Segmentatio
expected, terminates with "Out of Memory!"
However, _after_ the script terminates,
subsequently executed cygwin programs have a problem:
- a simple "ls" gives
Segmentation fault (core dumped)
% ls -l
... (works correctly)
% ./test.pl <-- script to reach