Well, sort of...
aggregate() is basically a wrapper for lapply(), which ultimately must loop
over the function call at the R interpreter level, as opposed to vectorized
functions that loop at the C level and hence can be orders of magnitude
faster. As a result, there is often little difference in speed between the two approaches.
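For group sums in particular, rowsum() does the splitting and summing in compiled code, so on large data it can be much faster than aggregate(); a rough, self-contained sketch with made-up data:

set.seed(1)
toy <- data.frame(grp = sample(1e4, 1e6, replace = TRUE),
                  val = runif(1e6))

## aggregate() dispatches an R-level call to FUN for every group ...
system.time(agg <- aggregate(val ~ grp, data = toy, FUN = sum))

## ... while rowsum() does the per-group summing at the C level
system.time(rs <- rowsum(toy$val, toy$grp))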
You don't need loops at all.
grw <- aggregate(gw ~ ts + ISEG + iter, data = dat, FUN = sum)
GRW <- aggregate(gw ~ ts + ISEG, data = grw, FUN = function(x) max(x) - min(x))
DC  <- aggregate(div ~ ts + ISEG, data = subset(dat, IRCH == 1),
                 FUN = function(x) max(x) - min(x))
iter <- aggregate(iter ~ ts + ISEG, data = dat, FUN = max)  ## assumed: per-group maximum iteration
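A tiny reproducible example (the values are invented; only the column names come from the original post) shows what these calls return:

set.seed(42)
dat <- data.frame(ts   = rep(1:2, each = 6),
                  ISEG = rep(1:2, times = 6),
                  IRCH = rep(1:3, times = 4),
                  iter = sample(1:3, 12, replace = TRUE),
                  gw   = runif(12),
                  div  = runif(12))

grw <- aggregate(gw ~ ts + ISEG + iter, data = dat, FUN = sum)
GRW <- aggregate(gw ~ ts + ISEG, data = grw, FUN = function(x) max(x) - min(x))
GRW   ## one row per ts/ISEG pair: the spread of the per-iteration gw sums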
You are certainly in Circle 2 of 'The R Inferno',
which I suspect is where almost all of the
computation time is coming from.
Instead of doing:
divChng <- rbind(divChng, c(datTS$ts[1], SEG[j], DC, GRW, max(datTS$iter)))
inside the loop, it would be much better to create 'divChng' at its final length up front and then fill in each row as the loop runs.
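For example, something along these lines (the toy data and the per-segment calculations are only stand-ins for the real script):

## Toy stand-ins so the sketch runs on its own; names follow the thread
SEG <- 1:3
dat <- data.frame(ISEG = rep(SEG, each = 4), ts = 1,
                  div = runif(12), gw = runif(12),
                  iter = sample(1:5, 12, replace = TRUE))

## Pre-allocate the result at its final size, then fill it row by row
divChng <- matrix(NA_real_, nrow = length(SEG), ncol = 5,
                  dimnames = list(NULL, c("ts", "SEG", "DC", "GRW", "iter")))
for (j in seq_along(SEG)) {
    datTS <- dat[dat$ISEG == SEG[j], ]
    DC  <- max(datTS$div) - min(datTS$div)
    GRW <- max(datTS$gw)  - min(datTS$gw)
    divChng[j, ] <- c(datTS$ts[1], SEG[j], DC, GRW, max(datTS$iter))
}
divChng

Growing an object with rbind() copies the whole thing on every iteration, which is what makes the loop slow down as it goes; filling a pre-allocated matrix avoids that.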
The small example below works lightning-fast; however, when I run the same
script on my real problem, a 1 Gb text file, the for loops have been running
for over 24 hrs and I have no idea if the processing is 10% done or 90%
done. I have not been able to figure out a better way to code up the
material.