Is there a performance advantage to preallocating like this, as opposed to growing the vector within the loop? I suppose R would otherwise have to dynamically reallocate memory at some point?
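A quick, self-contained timing sketch (not from the original thread; n is an arbitrary size) illustrates the question: append() reallocates and copies the vector on each iteration, while a preallocated vector is filled in place:

n <- 1e5
system.time({                 # grown: append() copies the vector each time
  x <- numeric(0)
  for (i in 1:n) x <- append(x, i)
})
system.time({                 # preallocated: elements written in place
  y <- numeric(n)
  for (i in 1:n) y[i] <- i
})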
Alan

2010/5/30 Uwe Ligges <lig...@statistik.tu-dortmund.de>:
>
> On 26.05.2010 08:52, Alan Lue wrote:
>>
>> Come to think of it, we can't save the output of each invocation and
>> concatenate it later, since we need the output as input for the next
>> iteration.
>
> Yes, but you can do it a bit more cleverly than before by initializing
> to the full length, as in:
>
> r.seq <- numeric(nrow(d))
> r.seq[1] <- 2 * (1 / d$Dt[1] - 1)
> for (i in 2:nrow(d)) {
>   r.seq[i] <- uniroot(bdt.deviation, interval = c(0, 1),
>                       D.T = d$Dt[i], r.prior = r.seq[i-1])$root
> }
>
> Uwe Ligges
>
>> Alan
>>
>> On Tue, May 25, 2010 at 11:43 PM, Alan Lue <alan....@gmail.com> wrote:
>>>
>>> Since `for' loops are slow in R, and since `apply' functions are
>>> faster, I was wondering whether there was a way to use an apply
>>> function—or to otherwise avoid using a loop—when iterating over a
>>> statement that updates its input.
>>>
>>> For example, here's some such code:
>>>
>>> r.seq <- 2 * (1 / d$Dt[1] - 1)
>>> for (i in 2:nrow(d)) {
>>>   rf <- uniroot(bdt.deviation, interval = c(0, 1), D.T = d$Dt[i],
>>>                 r.prior = r.seq)
>>>   r.seq <- append(r.seq, rf$root)
>>> }
>>>
>>> The call to `uniroot()' both updates `r.seq' and reads it as input.
>>> We could save the output of each invocation of `uniroot()' and
>>> concatenate it later, but is there a better way to write this (i.e.,
>>> to execute more quickly) while updating `r.seq' in each iteration?
>>>
>>> Alan

--
Alan Lue
Master of Financial Engineering
UCLA Anderson School of Management
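For reference, base R's Reduce() can express this kind of "each step depends on the previous result" recursion without an explicit loop; with accumulate = TRUE it returns the whole sequence. A sketch reusing bdt.deviation and d from the thread (the helper name `step' is mine, not the poster's); note that Reduce() is itself an R-level loop, so this is a stylistic alternative rather than a speedup over the preallocated version:

step <- function(r.prev, D.T) {
  ## find the next rate from the previous one, as in the loop body
  uniroot(bdt.deviation, interval = c(0, 1),
          D.T = D.T, r.prior = r.prev)$root
}
## start from the first rate, then fold over the remaining Dt values;
## accumulate = TRUE keeps every intermediate result, so the output
## has the same length and contents as the loop's r.seq
r.seq <- Reduce(step, d$Dt[-1], init = 2 * (1 / d$Dt[1] - 1),
                accumulate = TRUE)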