See below for the complete mail to which I am replying; it was not sent to R-help.
==
emptyexpandlist2 <- list(ne = 0, l = array(NA, dim = c(1, 1000L)), len = 1000L)
addexpandlist2 <- function(x, prev) {
  if (prev$len == prev$ne) {        # buffer full: double its capacity
    n2 <- prev$len * 2
    prev <- list(ne = prev$ne, l = array(prev$l, dim = c(1, n2)), len = n2)
  }
  prev$ne <- prev$ne + 1
  prev$l[1, prev$ne] <- x
  prev
}
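For readers skimming the thread, the doubling idea above can also be sketched self-contained (hypothetical names, plain base R; a sketch of the technique, not Johan's exact code, which is truncated in this preview):

```r
# Amortized O(1) append: double the backing list whenever it fills up.
new_growing <- function(cap = 4L) list(n = 0L, buf = vector("list", cap))
push <- function(g, x) {
  if (g$n == length(g$buf)) {
    length(g$buf) <- 2L * length(g$buf)  # 'length<-' pads a list with NULLs
  }
  g$n <- g$n + 1L
  g$buf[[g$n]] <- x
  g
}

g <- new_growing()
for (i in 1:100) g <- push(g, i)
g$n            # 100
length(g$buf)  # 128 (capacity grew 4 -> 8 -> 16 -> 32 -> 64 -> 128)
```

Each element is copied only O(log n) times in total, which is what makes the per-append cost amortized constant.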
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On Behalf Of Bert Gunter
Sent: Thursday, July 19, 2012 9:11 AM
To: Hadley Wickham
Cc: r-help@r-project.org
Subject: Re: [R] complexity of operations in R
Hadley et al.:
Indeed. And using a loop is a poor way to do it anyway.
v <- as.list(rep(FALSE, dotot))
is way faster.
-- Bert
> On Thu, Jul 19, 2012 at 3:00 PM, Jan van der Laan wrote:
>
> When the length of the end result is not known, doubling the length of the
> list is also much faster than increasing the size of the list with single
> items.
>
> [snip]
>
> What causes these differences? I can imagine that the time n
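Jan's point can be made concrete: growing an object one element at a time forces repeated reallocation (quadratic total work in the R of that era; newer versions over-allocate somewhat), while allocating up front keeps the work linear. A rough timing sketch, assuming nothing beyond base R (exact numbers vary by version and machine):

```r
grow_one_by_one <- function(n) {
  v <- list()
  for (i in seq_len(n)) v[[i]] <- FALSE  # may copy/reallocate each iteration
  v
}
grow_preallocated <- function(n) {
  v <- vector("list", n)                 # one allocation up front
  for (i in seq_len(n)) v[[i]] <- FALSE
  v
}
system.time(grow_one_by_one(1e4))
system.time(grow_preallocated(1e4))
```

Both produce the same result; only the allocation pattern differs.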
Jan:
Point taken.
However, if possible, as Bill Dunlap indicated, it still may make
sense to create an oversized list first and then populate what you
need of it with your loop.
Note that a lot of this can be finessed with lapply and friends
anyway, letting R worry about the details of creating a
   user  system elapsed
 11.125   0.052  11.181
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
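Bill's preallocation point in code form, a minimal sketch: create the full-size (or deliberately oversized) list once, fill only what you need, then trim the unused tail:

```r
n_guess <- 16L                  # oversized guess; any upper bound works
v <- vector("list", n_guess)    # preallocate once
n_used <- 0L
for (x in 1:10) {               # fill only the slots actually needed
  n_used <- n_used + 1L
  v[[n_used]] <- x
}
v <- v[seq_len(n_used)]         # trim the unused NULL slots
length(v)                       # 10
```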
On 07/19/2012 06:11 PM, Bert Gunter wrote:
Hadley et al.:
Indeed. And using a loop is a poor way to do it anyway.
v <- as.list(rep(FALSE,dotot))
is way faster.
-- Bert
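Bert's one-liner and its relatives all allocate the whole list in a single call; for the record, these are equivalent ways to build a length-dotot list of FALSE (a sketch, base R only):

```r
dotot <- 1000L
v1 <- as.list(rep(FALSE, dotot))                  # Bert's version
v2 <- rep(list(FALSE), dotot)                     # rep on a one-element list
v3 <- lapply(seq_len(dotot), function(i) FALSE)   # the lapply idiom
identical(v1, v2) && identical(v2, v3)            # TRUE
```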
I agree that not using a loop is much faster, but I assume that the
original question is about the situation where the size of the end
result is not known beforehand.
On 07/19/2012 05:50 PM, Hadley Wickham wrote:
On Thu, Jul 19, 2012 at 8:02 AM, Jan van der Laan wrote:
The following function is faster than your g and easier to read:
g2 <- function(dotot) {
v <- list()
for (i in seq_len(dotot)) {
v[[i]] <- FALSE
}
}
Except that you don't need
On Thu, Jul 19, 2012 at 11:11 AM, Bert Gunter wrote:
> Hadley et al.:
>
> Indeed. And using a loop is a poor way to do it anyway.
>
> v <- as.list(rep(FALSE,dotot))
>
> is way faster.
>
> -- Bert
>
It's not entirely clear to me what we are supposed to conclude about this.
I can confirm Bert's claim […]
On Thu, Jul 19, 2012 at 9:21 AM, William Dunlap wrote:
> Preallocation of lists does speed things up. The following shows
> time quadratic in size when there is no preallocation and linear
> growth when there is, for size in the c. 10^4 to 10^6 region:
Interesting, thanks! I wish there was a be
>    user  system elapsed
>  11.125   0.052  11.181
>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
>
> Hadley et al.:
>
> Indeed. And using a loop is a poor way to do it anyway.
>
> v <- as.list(rep(FALSE,dotot))
>
> is way faster.
>
also rep.int()
> system.time(for (i in 1:1000) x <- rep.int(FALSE, 10))
   user  system elapsed
   0.29    0.02    0.29
> system.time(for (i in 1:1000) x <- rep(FALSE, 10))
   user  system elapsed
   1.96    0.08    2.05
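rep.int is documented as a faster, simplified version of rep: it skips the generic's argument matching and dispatch, which dominates the cost for tiny vectors. The results themselves are the same:

```r
identical(rep.int(FALSE, 10), rep(FALSE, 10))  # TRUE
```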
On Thu, Jul 19, 2012 at 9:11 AM, Bert Gunter wrote:
> Hadley et al.:
Hadley et al.:
Indeed. And using a loop is a poor way to do it anyway.
v <- as.list(rep(FALSE,dotot))
is way faster.
-- Bert
On Thu, Jul 19, 2012 at 8:50 AM, Hadley Wickham wrote:
> On Thu, Jul 19, 2012 at 8:02 AM, Jan van der Laan wrote:
>> Johan,
>>
>> Your 'list' and 'array doubling' code
On Thu, Jul 19, 2012 at 8:02 AM, Jan van der Laan wrote:
> Johan,
>
> Your 'list' and 'array doubling' code can be written much more efficiently.
>
> The following function is faster than your g and easier to read:
>
> g2 <- function(dotot) {
>   v <- list()
>   for (i in seq_len(dotot)) {
>     v[[i]] <- FALSE
>   }
> }
Johan,
Your 'list' and 'array doubling' code can be written much more efficiently.
The following function is faster than your g and easier to read:
g2 <- function(dotot) {
  v <- list()
  for (i in seq_len(dotot)) {
    v[[i]] <- FALSE
  }
}
In the following line in your array-doubling function […]
On Wed, Jul 18, 2012 at 10:06 AM, Patrick Burns wrote:
> It looks to me like the following
> should do what you want:
>
> f2 <- function(dotot) array(FALSE, c(dotot, 1))
>
> What am I missing?
>
Ah, the output of this is purely a toy example. The point is that the
add*() functions are O(1) and th
On 18/07/2012 09:49, Rui Barradas wrote:
Hello,
On 18-07-2012 09:06, Patrick Burns wrote:
It looks to me like the following
should do what you want:
f2 <- function(dotot) array(FALSE, c(dotot, 1))
What am I missing?
That matrix is even faster?
Depends on your unstated version of R
Hello,
On 18-07-2012 09:06, Patrick Burns wrote:
It looks to me like the following
should do what you want:
f2 <- function(dotot) array(FALSE, c(dotot, 1))
What am I missing?
That matrix is even faster?
f2 <- function(dotot) array(FALSE, c(dotot, 1))
f3 <- function(dotot) matrix(FALSE, dotot, 1)
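Rui's f2/f3 comparison, completed as a sketch: array() and matrix() produce identical objects here, so any speed difference is purely in the construction path, not in the result:

```r
f2 <- function(dotot) array(FALSE, c(dotot, 1))
f3 <- function(dotot) matrix(FALSE, dotot, 1)
identical(f2(10), f3(10))  # TRUE: same 10 x 1 logical matrix either way
```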
It looks to me like the following
should do what you want:
f2 <- function(dotot) array(FALSE, c(dotot, 1))
What am I missing?
Pat
On 17/07/2012 21:58, Johan Henriksson wrote:
Thanks for the link! I should read it through. That said, I didn't find any
good general solution to the problem, so here I post some attempts for
general input. Maybe someone knows how to speed this up. Both my solutions
are theoretically O(n) for creating a list of n elements. The function to
improve […]
Hello!
I am optimizing my code in R and for this I need to know a bit more about
the internals. It would help tremendously if someone could link me to a
page with O()-complexities of all the operations.
In this particular case, I need something like a linked list with O(1)
insertLast/First ability
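For the O(1) insert-first part of Johan's wish list, one base-R option is a hand-rolled cons cell (nested two-element lists); this is a sketch with made-up names, not anything from the thread:

```r
# A cons-cell list: prepending never copies or grows an existing vector,
# so each insertFirst is O(1).
cons <- function(head, tail = NULL) list(head = head, tail = tail)

l <- NULL
for (x in 1:3) l <- cons(x, l)  # prepend each element
l$head        # 3 (most recently inserted)
l$tail$head   # 2
```

The trade-off is that random access becomes O(n), which is why the thread's doubling/preallocation approaches are usually preferable when indexed access matters.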