> Tim Taylor
> on Tue, 17 Jan 2023 13:39:01 + writes:
> On 17/01/2023 13:06, Duncan Murdoch wrote:
>> I don't have a valgrind-capable version of R, but I'd be interested to
>> see whether this is a one-time loss, or repeated. That is, do you get a
>> much bigger
On 17/01/2023 13:06, Duncan Murdoch wrote:
I don't have a valgrind-capable version of R, but I'd be interested to
see whether this is a one-time loss, or repeated. That is, do you get a
much bigger loss from running the lossy code in a loop like this?
for (i in 1:100) { png(filename='p.png'); plot(1:10); dev.off() }
Duncan Murdoch
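To separate a one-time loss from a repeated one, the "definitely lost" totals can be compared between a single call and the loop, reusing the valgrind flags from Edward's report below; a minimal sketch:
R -d "valgrind --tool=memcheck --leak-check=full" --vanilla \
  -e "for (i in 1:100) { png(filename='p.png'); plot(1:10); dev.off() }"
A per-iteration leak should scale the lost-byte count roughly 100-fold relative to the single-call run.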
Note that cairo_pdf() also suffers from the same leak
.. as is to be expected once you notice that much of the cairo
device handling uses common code.
...
.. and then, when you are aware that on Linux, the default
interactive device is x11() and its default type is *also*
"cairo" { possibly not on
> Edward Ionides
> on Mon, 16 Jan 2023 09:04:49 -0500 writes:
> Hi all,
> Yesterday I discovered what seems to me like a memory leak in png() so I'm
> reporting it in case that is helpful. Here is a small reproducible
example:
> R -d "valgrind --tool=memcheck --trac
Hi all,
Yesterday I discovered what seems to me like a memory leak in png() so I'm
reporting it in case that is helpful. Here is a small reproducible example:
R -d "valgrind --tool=memcheck --track-origins=yes --leak-check=full"
--vanilla -e "png(filename='p.png'); plot(1:10); dev.off()"
## HAS L
Hi,
please consider the following minimal reproducible example:
Create a new R package which just contains the following two (exported) objects:
crash_dumps <- new.env()
f <- function() {
x <- runif(1e5)
dump <- lapply(1:2, function(i) unserialize(serialize(sys.frames(), NULL)))
assign("
> Gábor Csárdi
> on Wed, 22 Jan 2020 22:56:17 + writes:
> Hi All,
> I think there is a memory error in the libcurl connection code that
> typically happens when libcurl reads big chunks of data. This
> potentially affects all code that uses url() with the libcurl do
Hi All,
I think there is a memory error in the libcurl connection code that
typically happens when libcurl reads big chunks of data. This
potentially affects all code that uses url() with the libcurl download
method, which is the default in most builds. In practice it tends to
happen more with HTTP
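A hypothetical repro along the lines described above (the URL and read size are placeholders, not from the original report):
con <- url("https://example.com/big.bin", method = "libcurl", open = "rb")  # placeholder URL
x <- readBin(con, what = raw(), n = 10e6)  # large chunk reads are where the bad read was seen
close(con)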
It doesn't matter that you didn't use the value. An invalid read may fail
or not. It depends on whether that memory portion was reallocated or
not by the OS. When it was, you are trying to read in a memory region
where you don't have permission, so it crashes.
Iñaki
On Sun, 30 Jun 2019 at 04:27, Thern
I had a problem with the latest iteration of the survival package (one that I
hope to
post to Github next week) where it would die in strange ways, i.e., work on one
box and
not on another, a vignette would compile if I invoked Sweave myself but fail in
R CMD
build, etc. The type of thing
Thx William and Brian for your swift responses, very insightful. I'll have
to hunt for more memory.
Cheers
Joris
On Tue, Sep 18, 2018 at 6:16 PM William Dunlap wrote:
> The ratio of object size to rds file size depends on the object. Some
> variation is due to how header information is stored
The ratio of object size to rds file size depends on the object. Some
variation is due to how header information is stored in memory and in the
file but I suspect most is due to how compression works (e.g., a vector of
repeated values can be compressed into a smaller file than a bunch of
random by
Your RDS file is likely compressed, and could have compression of 10x
or more depending on the composition of the data that is in it and the
compression method used. 'gzip' compression is used by default.
--
Brian G. Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock
On Tue,
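A quick sketch of the compression effect described above (sizes are illustrative):
x <- rep(1L, 1e6)                          # highly compressible
y <- runif(1e6)                            # nearly incompressible doubles
saveRDS(x, "x.rds"); saveRDS(y, "y.rds")   # gzip-compressed by default
object.size(x); file.size("x.rds")         # file much smaller than the object
object.size(y); file.size("y.rds")         # file close to the object size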
Dear all,
I tried to read in a 3.8Gb RDS file on a computer with 16Gb available
memory. To my astonishment, the memory footprint of R rises quickly to over
13Gb and the attempt ends with an error that says "cannot allocate vector
of size 5.8Gb".
I would expect that 3 times the memory would be eno
On 07/17/2018 12:56 PM, Joshua Ulrich wrote:
This looks like a case of FAQ 7.42:
https://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-is-R-apparently-not-releasing-memory_003f
Yes. A true memory leak in R would mean that repeated execution of the
same code (e.g. creation and deletion of the list) w
This looks like a case of FAQ 7.42:
https://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-is-R-apparently-not-releasing-memory_003f
On Mon, Jul 16, 2018 at 2:32 PM, Daniel Raduta wrote:
> Hello,
>
> I am experiencing a very noticeable memory leak when using large lists of
> large data. The code below
Hello,
I am experiencing a very noticeable memory leak when using large lists of
large data. The code below creates a list of matrices, yet the memory does
not free up after removing that item and performing a garbage collection.
Is there something I can do to prevent this memory leak? Any help
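A minimal sketch of the FAQ 7.42 pattern: gc() frees the list for reuse by R, but the resident set reported by the OS need not shrink.
m <- lapply(1:100, function(i) matrix(rnorm(1e5), 100))  # roughly 80 MB
rm(m)
gc()   # R has reclaimed the memory internally...
# ...but top may still show a large resident set, because malloc
# rarely returns freed pages to the operating system.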
I'm not really disagreeing with this, but is not the point of pryr to let you
investigate internals from the R level?
Building code that relies on pryr returning things with specific properties is
very likely doubleplusunrecommended by pryr's author as well.
In that spirit, I suppose that you
If you were curious about the hidden details of the memory layout in R,
the best reference is the source code. In your example, you are not
getting to your string because there is one more pointer in the way: "x"
is a vector of strings, and each string is represented by a pointer.
At C level, ther
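The extra pointer can be seen without address arithmetic via the internal inspect facility (a debugging aid, not a documented API; output abbreviated, addresses illustrative):
x <- c("hello", "world")
.Internal(inspect(x))
# @... STRSXP (len=2)       <- the vector "x"
#   @... CHARSXP "hello"    <- each element is a pointer to a CHARSXP
#   @... CHARSXP "world"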
Hi,
To get the memory address of where the value of variable "x" (of datatype
"numeric") is stored one does the following in R (in 32 bit):
library(pryr)
x <- 1024
addr <- as.numeric(address(x)) + 24 # 24 is needed to jump the
variable info and point to the data itself (i
> Gábor Csárdi
> on Sun, 13 Nov 2016 20:49:57 + writes:
> Using dup() before fdopen() (and calling fclose() on the connection
> when it is closed) indeed fixes the memory leak.
>
Thank you, Gábor!
Yes I can confirm that this fixes the memory leak.
I'm testing ('make check-all')
Using dup() before fdopen() (and calling fclose() on the connection
when it is closed) indeed fixes the memory leak.
FYI,
Gabor
Index: src/main/connections.c
===================================================================
--- src/main/connections.c (revision 71653)
+++ src/main/connections.c
On Fri, Nov 11, 2016 at 12:46 PM, Gergely Daróczi
wrote:
[...]
>> I've changed the above to *print* the gc() result every 1000th
>> iteration, and after 100'000 iterations, there is still no
>> memory increase from the point of view of R itself.
Yes, R does not know about it, it does not manage t
On Fri, Nov 11, 2016 at 12:08 PM, Martin Maechler
wrote:
>> Gergely Daróczi
>> on Thu, 10 Nov 2016 16:48:12 +0100 writes:
>
> > Dear All,
> > I'm developing an R application running inside of a Java daemon on
> > multiple threads, and interacting with the parent daemon via
> Gergely Daróczi
> on Thu, 10 Nov 2016 16:48:12 +0100 writes:
> Dear All,
> I'm developing an R application running inside of a Java daemon on
> multiple threads, and interacting with the parent daemon via stdin and
> stdout.
> Everything works perfectly fine exc
Dear All,
I'm developing an R application running inside of a Java daemon on
multiple threads, and interacting with the parent daemon via stdin and
stdout.
Everything works perfectly fine except for having some memory leaks
somewhere. Simplified version of the R app:
while (TRUE) {
c
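The preview cuts the app off; a hypothetical skeleton of such a loop (process() and all details are guesses, not from the original post):
while (TRUE) {
  con <- file("stdin")          # reopened on every iteration
  line <- readLines(con, n = 1)
  close(con)                    # closes the R connection, but the leak
                                # discussed in this thread was below R's gc()
  cat(process(line), "\n")      # process() is a hypothetical handler
}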
Hello,
## Configuring R with
./configure --with-tcl-config=/usr/lib/tcl8.5/tclConfig.sh
--with-tk-config=/usr/lib/tk8.5/tkConfig.sh CFLAGS="-fsanitize=address
-fsanitize=undefined -fno-sanitize=float-divide-by-zero,vptr "
CXXFLAGS="-fsanitize=address -fsanitize=undefined
-fno-sanitize=float-d
Hi Josh,
I think we need some more details, including code, and information
about your operating system. My machine has only 12 Gb of ram, but I
can run this quite comfortably (no swap, other processes using memory
etc.):
library(parallel)
library(data.table)
d <- data.table(a = rnorm(5000),
Hello,
I have been having issues using parallel::mclapply in a memory-efficient
way and would like some guidance. I am using a 40 core machine with 96 GB
of RAM. I've tried to run mclapply with 20, 30, and 40 mc.cores and it has
practically brought the machine to a standstill each time to the poin
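A sketch of the usual mitigation, assuming d is a large numeric matrix (an assumption; the original code is not shown): forked workers share the parent's pages copy-on-write, so workers should only read the shared object and return small results rather than modified copies.
library(parallel)
chunks <- split(seq_len(nrow(d)), rep(1:20, length.out = nrow(d)))
res <- mclapply(chunks,
                function(idx) colSums(d[idx, , drop = FALSE]),  # small result per chunk
                mc.cores = 20, mc.preschedule = TRUE)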
Dear Michael,
thank you for the enlightenment. Just for the records, here is the
solution that Michael referred to:
http://developer.r-project.org/Refcnt.html
Best,
Denes
On 06/03/2014 03:57 PM, Michael Lawrence wrote:
This is because R keeps track of the names of an object, until there
This is because R keeps track of the names of an object, until there are 2
names. Thus, once it reaches 2, it can no longer decrement the named count.
In this example, 'a' reaches 2 names ('a' and 'b'), thus R does not know
that 'a' only has one name at the end.
Luke has added reference counting t
Hi,
Please consider the following code:
a <- seq.int(10) # create a
tracemem(a)
a[1:4] <- 4:1 # no internal copy
b <- a # no internal copy
b[1:4] <- 1:4 # copy, b is not a any more
a[1:4] <- 1:4 # copy, but why?
With results:
> a <- seq.int(10)
> tracemem(a)
[1] "<0x1792bc0>"
>
On Aug 28, 2013, at 2:24 PM, Hadley Wickham wrote:
>> Yup - parsing is the most expensive part. That's why for high-throughput
>> data you don't want to use ASCII representation. It's amazing that the disk
>> speeds are now so high that CPUs are the bottlenecks now, not vice versa.
>
> Do you h
> Yup - parsing is the most expensive part. That's why for high-throughput data
> you don't want to use ASCII representation. It's amazing that the disk speeds
> are now so high that CPUs are the bottlenecks now, not vice versa.
Do you have any recommendations for binary formats? For R, is there
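For reference, a small ASCII-versus-binary round trip (timings are machine-dependent; sketch only):
df <- data.frame(x = runif(1e6), y = runif(1e6))
system.time(write.csv(df, "df.csv", row.names = FALSE))
system.time(read.csv("df.csv"))      # ASCII: parsing is CPU-bound
system.time(saveRDS(df, "df.rds"))
system.time(readRDS("df.rds"))       # binary: mostly I/O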
On Aug 28, 2013, at 1:59 PM, Hadley Wickham wrote:
>>> Why do those lines need any allocations? I thought class<- and attr<-
>>> were primitives, and hence would modify in place.
>>>
>>
>> .. but only if there is no other reference to the data (i.e. NAMED < 2). If
>> there are two references,
>> Why do those lines need any allocations? I thought class<- and attr<-
>> were primitives, and hence would modify in place.
>>
>
> .. but only if there is no other reference to the data (i.e. NAMED < 2). If
> there are two references, they have to copy, because it would change the
> other copy.
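The rule is easy to watch with tracemem (a sketch; exact behaviour varies a little across R versions):
x <- runif(10)
tracemem(x)
attr(x, "a") <- 1   # one reference: modified in place, no copy reported
y <- x              # second reference
attr(x, "b") <- 2   # tracemem reports a copy: x may no longer share with y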
On Aug 28, 2013, at 12:17 PM, Hadley Wickham wrote:
> Hi all,
>
> I've been trying to learn more about memory profiling in R and I've
> been trying memory profiling out on read.table. I'm getting a bit of a
> strange result, and I hope that someone might be able to explain why.
>
> After running
Hi all,
I've been trying to learn more about memory profiling in R and I've
been trying memory profiling out on read.table. I'm getting a bit of a
strange result, and I hope that someone might be able to explain why.
After running
Rprof("read-table.prof", memory.profiling = TRUE, line.profiling
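For anyone reproducing this, a sketch of the incantation ("big.txt" is a placeholder file):
Rprof("read-table.prof", memory.profiling = TRUE, line.profiling = TRUE)
df <- read.table("big.txt", header = TRUE)
Rprof(NULL)
summaryRprof("read-table.prof", memory = "both", lines = "show")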
Hello,
This is explained in Writing R Extensions, Section 6.1 file R-exts.pdf
in your distribution of R, folder doc.
There are two types of functions to allocate memory in C functions
called from R code.
1. R_alloc() - the memory is automatically reclaimed at the end of the
function call.
2.
Hi,
I am a newbie to R and I am trying to create an R package which is pretty
main memory intensive.
I would like to know what happens to the variables allocated in the C code
while writing R extensions based on C.
Are they preserved until someone de-allocate them or are they taken out by
R's garb
Hi,
I have a problem with a large data set wrapped in a reference class. I do that
to access the data by reference from within various functions in a script.
# The class definition:
setRefClass("data",
fields = list(h5_df= "list",
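The preview truncates the class; a self-contained version of the pattern (fields and methods beyond h5_df are invented for illustration):
Data <- setRefClass("data",
  fields = list(h5_df = "list"),
  methods = list(
    get_rows = function(i) h5_df[[1]][i, ]   # hypothetical accessor
  ))
d <- Data$new(h5_df = list(as.data.frame(matrix(runif(1e6), 1000))))
# functions receiving 'd' all see the same object: no copy on pass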
Hello again,
Thanks, that explanation helps me understand the issue more. My platform is
Platform: x86_64-unknown-linux-gnu (64-bit) (Ubuntu 10.04 to be more precise).
- Dario.
Original message
>Date: Tue, 1 Feb 2011 21:29:38 -0500
>From: Simon Urbanek
>Subject: Re: [R
On Feb 1, 2011, at 9:00 PM, Dario Strbenac wrote:
> Hello,
>
> I'm trying to track down the cause of some extreme memory usage and I've been
> using Dirk Eddelbuettel's lsos() function he posted on stack overflow. There
> is a large difference between R's RAM usage :
>
> PID USER PR NI
Hello,
I'm trying to track down the cause of some extreme memory usage and I've been
using Dirk Eddelbuettel's lsos() function he posted on stack overflow. There is
a large difference between R's RAM usage :
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6637 darstr 20
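For context, the core of an lsos()-style listing (Dirk's version is more elaborate; this is only the idea):
sizes <- sapply(ls(envir = globalenv()),
                function(n) object.size(get(n, envir = globalenv())))
sort(sizes, decreasing = TRUE)  # bytes per object; the sum can sit far below
                                # the RES figure top reports for the process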
Dominick,
On 30/04/10 18:40, Dominick Samperi wrote:
> Just to be sure that I understand, are you suggesting that the R-safe way to
> do
> things is to not use STL, and to not use C++ memory management and
> exception handling? How can you leave a function in an irregular way without
> triggering
On Sat, May 1, 2010 at 5:02 AM, Romain Francois
wrote:
>
> Simon,
>
> Le 30/04/10 20:12, Simon Urbanek a écrit :
> Thank you for these nuggets of information. It might be beneficial to
> promote them to some notes in the "Interfacing C++ code" section of WRE.
The manual is also missing informatio
Simon,
Le 30/04/10 20:12, Simon Urbanek a écrit :
Dominick,
On Apr 30, 2010, at 1:40 PM, Dominick Samperi wrote:
Just to be sure that I understand, are you suggesting that the R-safe way to do
things is to not use STL, and to not use C++ memory management and exception
handling? How can y
> Indeed. As I said, using heap objects (explicit initialization) does solve
> that issue, but you have to be again wary of other libraries which may still
> use static initializers.
I just did a little test and found, in the case of gcc/g++, that a
C main program linked with a C++ library
Dominick,
On Apr 30, 2010, at 2:51 PM, Dominick Samperi wrote:
> Thanks for the clarification Simon,
>
> I think it is safe (R-safe?) to say that if there are no exceptions or errors
> on either side, then it is highly likely that everything is fine.
>
I think so - at least on the exceptions t
Thanks for the clarification Simon,
I think it is safe (R-safe?) to say that if there are no exceptions or errors
on either side, then it is highly likely that everything is fine.
When there are errors or exceptions, it is probably NOT safe to try to
recover. Better to terminate the R session, in
Dominick,
On Apr 30, 2010, at 1:40 PM, Dominick Samperi wrote:
> Just to be sure that I understand, are you suggesting that the R-safe way to
> do things is to not use STL, and to not use C++ memory management and
> exception handling? How can you leave a function in an irregular way without
>
Simon,
Just to be sure that I understand, are you suggesting that the R-safe way to do
things is to not use STL, and to not use C++ memory management and
exception handling? How can you leave a function in an irregular way without
triggering a seg fault or something like that, in which case there
Brian's answer was pretty exhaustive - just one more note that is indirectly
related to memory management: C++ exception handling does interfere with R's
error handling (and vice versa) so in general STL is very dangerous and best
avoided in R. In addition, remember that regular local object rul
On Fri, 30 Apr 2010, Dominick Samperi wrote:
The R docs say that there are two methods that the C programmer can
allocate memory, one where R automatically frees the memory on
return from .C/.Call, and the other where the user takes responsibility
for freeing the storage. Both methods involve us
The R docs say that there are two methods that the C programmer can
allocate memory, one where R automatically frees the memory on
return from .C/.Call, and the other where the user takes responsibility
for freeing the storage. Both methods involve using R-provided
functions.
What happens when the
Hi,
On 11/3/09 2:28 PM, William Dunlap wrote:
The following odd call to rep()
gives somewhat random results:
rep(1:4, 1:8, each=2)
I've committed a fix for this to R-devel.
I admit that I had to reread the rep man page as I first thought this
was not a valid call to rep since times (1:8) i
The following odd call to rep()
gives somewhat random results:
> rep(1:4, 1:8, each=2)
 [1]  1  1  1  2  2  2  2  2  2  2  3  3  3  3  3  3  3  3  3  3  3  4  4  4  4
[26]  4  4  4  4  4  4  4  4  4  4  4 NA NA NA NA NA NA NA NA NA NA NA NA NA NA
[51] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
Without the file, we can do nothing with this, so please put it
somewhere accessible. Also, we need exact reproduction instructions:
how did you tell R this was a UTF-8 file? If you copy-pasted it, what
did you copy it from?
The posting guide and FAQ did ask you not to report on obsolete
ve
Dear R-Bugs,
thank you for your wonderful software, which we use a lot. We are having a
bit of difficulty right now because it crashes sometimes with Unicode
characters.
On Thu, 13 Aug 2009 13:42:39 -0400
Simon Urbanek wrote:
> I'm not convinced that what you propose is a good idea. First, I
> don't quite understand why you would want to use an existing SEXP -
> if you had a valid SEXP for the current R instance, then there is no
> need for R_RegisterObject. If t
Yuri,
I'm not convinced that what you propose is a good idea. First, I don't
quite understand why you would want to use an existing SEXP - if you
had a valid SEXP for the current R instance, then there is no need for
R_RegisterObject. If the SEXP is from a different R instance then you
ca
Hi everyone. In response to my previous message (Memory management
issues), I've come up with the following patch against R 2.9.1.
To summarize the situation:
- We're hitting the memory barrier in our lab when running concurrent R
processes due to the large datasets we use.
- We don't want to c
On Jul 5, 2009, at 10:54 AM, Yuri D'Elia wrote:
Hi everybody,
I have been interfacing some C++ library code into an R package but
ran into optimization issues specific to memory management that
require
some insight into the GC.
One of the C++ libraries returns simple vectors of integers, d
On 05/07/2009 6:05 PM, Yuri D'Elia wrote:
In article <4a5102ff.8040...@stats.uwo.ca>,
Duncan Murdoch wrote:
What I'd would like to do is:
- "patch" the SEXP returned to R so that DATAPTR() points directly to
the required address.
The normal way to do what you want is to use an "external p
In article
<8ec76080907051259q4744d40bp46b2434b086d5...@mail.gmail.com>,
Whit Armstrong wrote:
> If you are in control of the C++ library (i.e. it is not from a
> vendor), then you can also override the new operator of your object so
> that it allocates an SEXP. If you implement PROTECT/UNPROT
In article <4a5102ff.8040...@stats.uwo.ca>,
Duncan Murdoch wrote:
> > What I'd would like to do is:
> >
> > - "patch" the SEXP returned to R so that DATAPTR() points directly to
> > the required address.
>
> The normal way to do what you want is to use an "external pointer". R
> assumes t
If you are in control of the C++ library (i.e. it is not from a
vendor), then you can also override the new operator of your object so
that it allocates an SEXP. If you implement PROTECT/UNPROTECT calls
correctly, then GC will not be a problem.
The approach that I've taken with my time series lib
On 05/07/2009 10:54 AM, Yuri D'Elia wrote:
Hi everybody,
I have been interfacing some C++ library code into an R package but
ran into optimization issues specific to memory management that require
some insight into the GC.
One of the C++ libraries returns simple vectors of integers, doubles and
Hi everybody,
I have been interfacing some C++ library code into an R package but
ran into optimization issues specific to memory management that require
some insight into the GC.
One of the C++ libraries returns simple vectors of integers, doubles and
complex which are allocated and managed from
Prof Brian Ripley wrote:
Very likely your C code is writing out of bounds and has corrupted R's
memory heap. Use the tools discussed in 'Writing R Extensions' (such as
Valgrind) to help you track this down.
Brian, thanks a lot; I installed valgrind, and the error was promptly
spotted (your q
Very likely your C code is writing out of bounds and has corrupted R's
memory heap. Use the tools discussed in 'Writing R Extensions' (such as
Valgrind) to help you track this down.
On Fri, 3 Oct 2008, Göran Broström wrote:
Hello,
I get a segfault when running glmmboot in my own package glm
Hello,
I get a segfault when running glmmboot in my own package glmmML. Has
happened many time before, but this time I get no hint of where in my C
functions the error might be. I give the output below. Can this be an R
bug? I suspect it has to do with repeated calls to 'vmmin' like this:
For the record: this is now fixed.
On Thu, 7 Aug 2008, [EMAIL PROTECTED] wrote:
Full_Name: Bill Dunlap
Version: R version 2.8.0 Under development (unstable) (2008-07-05 r46037)
OS: Linux
Submission from: (NULL) (76.28.245.14)
valgrind finds some memory leaks in R when I use sub() with
a range
Full_Name: Bill Dunlap
Version: R version 2.8.0 Under development (unstable) (2008-07-05 r46037)
OS: Linux
Submission from: (NULL) (76.28.245.14)
valgrind finds some memory leaks in R when I use sub() with
a range in the regular expression:
% R --debugger=valgrind --debugger-args=--leak-check=fu
As a belated follow-up (I was away at the time), note that in general we
don't tamper with code we have ported from other projects as it makes
future maintenance so much more difficult. At the very least, we need
conspicuous comments to ensure that such changes do not get lost (I've
just added
> "BD" == Bill Dunlap <[EMAIL PROTECTED]>
> on Thu, 10 Jul 2008 13:17:03 -0700 (PDT) writes:
BD> Several folks have previously written that valgrind
BD> notices a memory leak in R's readline code. It looks
BD> like it leaks a copy of every input line.
[]
Than
> "BD" == Bill Dunlap <[EMAIL PROTECTED]>
> on Wed, 9 Jul 2008 11:26:50 -0700 (PDT) writes:
BD> There is a 2-block memory leak in the sub() (or any other regex-related
BD> function, probably) when the pattern argument involves a range
BD> expression, e.g., '[0-9]'.
BD>
Several folks have previously written that valgrind notices
a memory leak in R's readline code. It looks like it leaks
a copy of every input line.
% ~/R-svn/r-devel/R/bin/R --debugger=valgrind --debugger-args=--leak-check=full
--vanilla
==10725== Memcheck, a memory error detector.
==10725== Copy
There is a 2-block memory leak in the sub() (or any other regex-related
function, probably) when the pattern argument involves a range
expression, e.g., '[0-9]'.
% R --debugger=valgrind --debugger-args=--leak-check=full --vanilla
==14519== Memcheck, a memory error detector.
==14519== Copyright (C)
Before submitting a bug report, it is probably wise to bring it up on
r-devel first. Remember, someone has to clean out all those false bug
reports manually.
There are so many potential reasons for your problem, which indicates
fragmented memory allocations. One obvious one is that one of the
pa
Full_Name: Peter Bosa
Version: 2.6.2
OS: RHES 4.2 - 4.5
Submission from: (NULL) (67.138.101.226)
I am experiencing a memory allocation error with R 2.6.2 which was not present
in previous versions of R (2.4 and 2.2). I have several Linux machines with
various Redhat OS versions loaded on them, ra
This issue might be related to a similar one in
http://article.gmane.org/gmane.comp.lang.r.devel/11452
** Memory is not reclaimed when reading DateTime values, but works OK
with integers.
There is no memory loss when opening/closing of the connection is done
outside of the loop. However, openi
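A sketch of the two arrangements being compared (file name and column layout are placeholders):
# connection opened and closed inside the loop: memory reportedly lost
for (i in 1:1000) {
  con <- file("rows.txt", open = "r")
  x <- read.table(con, colClasses = "POSIXct", nrows = 10)
  close(con)
}
# connection opened once outside the loop: no loss reported
con <- file("rows.txt", open = "r")
for (i in 1:1000) x <- read.table(con, colClasses = "POSIXct", nrows = 10)
close(con)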
Hello,
To trigger the memory leak, an application embedding R will have printed
a vector element of size R_BUFSIZE or more to stdout, so long as the
R_Outputfile pointer is NULL and R was compiled with vasprintf support.
The leak is in Rcons_vprintf from printutils.c. It looks as though
some
There is nothing to reproduce here: we do not have 639.txt.
But note that read.table returns a data frame, and ?array has
data: a vector (including a list) giving data to fill the array.
so I do wonder if this is what you intended: you seem to have tried to
create an array list with 639*63
Full_Name: bill langdon
Version: 2.4.1
OS: ubuntu
Submission from: (NULL) (155.245.58.159)
#WBL 22 Feb 2007 ubuntu
R.version
#platform i486-pc-linux-gnu
#arch i486
#os linux-gnu
#system i486, linux-gnu
#status
#major 2
#minor 4.1
#year
Hello,
I've compiled some Fortran code and dyn.loaded it into R as in the
"Writing R Extensions" manual. The code receives four large arrays of
doubles from R (size about 3000x3000), and runs through several loops
with BLAS calls. However, I get a memory corruption error when
running it --
>> spend more time on this. I really don't mind using the
>> previous version.
Hello Derek,
or upgrade to R 2.5.0dev; the execution of your code snippet is not
hampered by memory issues:
> sessionInfo()
R version 2.5.0 Under development (unstable) (2006-10-10 r39600)
i386-pc-mingw32
locale:
L
"Derek Stephen Elmerick" <[EMAIL PROTECTED]> writes:
> Peter,
>=20
> I ran the memory limit function you mention below and both versions provi=
de
> the same result:
>=20
> >
> > memory.limit(size=3D4095)
> NULL
> > memory.limit(NA)
> [1] 4293918720
> >
> I do have 4GB ram on my PC. As a more repr
"Derek Stephen Elmerick" <[EMAIL PROTECTED]> writes:
> Peter,
>
> I ran the memory limit function you mention below and both versions provide
> the same result:
>
> >
> > memory.limit(size=4095)
> NULL
> > memory.limit(NA)
> [1] 4293918720
> >
> I do have 4GB ram on my PC. As a more reproducible
Peter,
I ran the memory limit function you mention below and both versions provide
the same result:
memory.limit(size=4095)
NULL
memory.limit(NA)
[1] 4293918720
I do have 4GB ram on my PC. As a more reproducible form of the test, I
have attached output that uses a randomly generated data
"Derek Stephen Elmerick" <[EMAIL PROTECTED]> writes:
> Thanks for the replies. Point taken regarding submission protocol. I have
> included a text file attachment that shows the R output with version 2.3.=
0and
> 2.4.0. A label distinguishing the version is included in the comments.
>=20
> A quick
"Derek Stephen Elmerick" <[EMAIL PROTECTED]> writes:
> Thanks for the replies. Point taken regarding submission protocol. I have
> included a text file attachment that shows the R output with version 2.3.0 and
> 2.4.0. A label distinguishing the version is included in the comments.
>
> A quick bac
Thanks for the replies. Point taken regarding submission protocol. I have
included a text file attachment that shows the R output with version 2.3.0 and
2.4.0. A label distinguishing the version is included in the comments.
A quick background on the attached example. The dataset has 650,000 record
"Derek Stephen Elmerick" <[EMAIL PROTECTED]> writes:
> Thanks for the friendly reply. I think my description was fairly clear: I
> import a large dataset and run a model. Using the same dataset, the
> process worked previously and it doesn't work now. If the new version of R
> requires more memory
"Derek Stephen Elmerick" <[EMAIL PROTECTED]> writes:
> thanks for the friendly reply. i think my description was fairly clear: i
> import a large dataset and run a model. using the same dataset, the
> process worked previously and it doesn't work now. if the new version of R
> requires more memory
It would be helpful to produce a script that reproduces the error on
your system. And include details on the size of your data set and
what you are doing with it. It is unclear what function is actually
causing the error and such. Really, in order to do something about it
you need to show h
Thanks for the friendly reply. I think my description was fairly clear: I
import a large dataset and run a model. Using the same dataset, the
process worked previously and it doesn't work now. If the new version of R
requires more memory and this compromises some basic data analyses, I would
label