It would probably be faster if you can be as specific as possible. In the
help that has been provided so far, dir() is looking everywhere in C:/;
you could narrow that down a bit and save it from searching too deep in your
system.
dir("C:/Documents and Settings/username/Desktop", pattern = "foo.pdf",
    full.names = TRUE)
These guys won't help you with your homework. But have you ever heard of
Google? If so, R has plenty of online manuals and cheat sheets.
--
View this message in context:
http://r.789695.n4.nabble.com/Quadratic-equation-tp3758790p3759239.html
Sent from the R help mailing list archive at Nabble.com.
Untested, because I don't have access to your data, but this should work:
b13.NEW <- b13[, c("Gesamt", "Wasser", "Boden", "Luft", "Abwasser",
"Gefährliche Abfälle", "nicht gefährliche Abfälle")]
Geophagus wrote:
>
> Hi @ all,
> I have a question concerning the possibility of grouping the
?unique
--
View this message in context:
http://r.789695.n4.nabble.com/problems-with-merge-the-output-has-many-repeated-lines-tp2333596p2334249.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-project.org mailing list
https://
Dennis:
I had never used the rep() function and that works out great, thank you.
Brad
--
View this message in context:
http://n4.nabble.com/fill-in-values-between-rollapply-tp1816885p1835513.html
Sent from the R help mailing list archive at Nabble.com.
Here's one approach:
> library(zoo)
> x <- zoo( rpois(100, 40) )
> w <- rollapply(x, 5, mean, by = 5, align = c('left'))
> x2 <- rep(w, each = 5)
>
> Does that work?
>
> HTH,
> Dennis
>
>
> On Fri, Apr 9, 2010 at 12:32 AM, Brad Patrick Schneid <[hidden email]> wrote:
Hi,
Sorry ahead of time for not including data with this question.
Using rollapply to calculate mean values for 5 day blocks, I'd use this:
Roll5mean <- rollapply(data, 5, mean, by=5, align = c("left"))
My question is, can someone tell me how to fill in the days between each of
these means with the corresponding mean value?
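For reference, the same fill can be sketched in base R without zoo. This is untested against the real data, and it assumes the series length is a multiple of 5:

```r
# Base-R sketch: one mean per 5-day block, each repeated across its block.
# Assumes length(data) is a multiple of 5.
data <- rpois(100, 40)                          # stand-in for the real series
blockMeans <- colMeans(matrix(data, nrow = 5))  # one mean per 5-value block
filled <- rep(blockMeans, each = 5)             # same length as data
```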
Gabor:
That is not the ideal solution, but it definitely works to provide me with
the "easier alternative". Thanks for the reply!
--
View this message in context:
http://n4.nabble.com/time-series-problem-time-points-don-t-match-tp1748387p1748706.html
Sent from the R help mailing list archive at Nabble.com.
Hi,
I have a time series problem that I would like some help with if you have
the time. I have many data from many sites that look like this:
Site.1
datetime              level   temp
2009/10/01 00:01:52.0 2.8797 18.401
2009/10/01 00:16:52.0 2.8769 18.382
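A minimal sketch of one way to read a site like this into a zoo series (not necessarily the solution Gabor gave; the format string is an assumption based on the sample above). Once each site is a zoo object, merge() aligns the mismatched time points across sites, inserting NA where they don't match:

```r
library(zoo)

# Sketch only: parse the two timestamp columns into POSIXct while reading.
txt <- "2009/10/01 00:01:52.0 2.8797 18.401
2009/10/01 00:16:52.0 2.8769 18.382"
site1 <- read.zoo(textConnection(txt), index.column = 1:2,
                  FUN = function(d, t)
                    as.POSIXct(paste(d, t), format = "%Y/%m/%d %H:%M:%OS"))
colnames(site1) <- c("level", "temp")
# merge(site1, site2) would then align the sites on time, filling gaps
# with NA (site2 built the same way).
```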
I'm guessing the answer is no? I too am looking for a function for SIMPER;
it seems logical that this would be in the vegan package with the anosim()
function.
--
View this message in context:
http://n4.nabble.com/Similarity-Percentage-Analysis-tp868460p1573091.html
Sent from the R help mailing list archive at Nabble.com.
You rock!
Thank you; you saved me hours of hunting around.
--
View this message in context:
http://n4.nabble.com/should-be-easy-data-frame-manipulation-tp1457518p1457675.html
Sent from the R help mailing list archive at Nabble.com.
Look at the colwise() (column-wise) function in the plyr package.
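For example, a minimal sketch of colwise(); the data frame and column names here are made up:

```r
library(plyr)

# colwise() wraps a function so it applies to every matching column.
df <- data.frame(a = c(1, 10, 100), b = c(2, 20, 200), id = c("x", "y", "z"))
log10.numeric <- colwise(log10, is.numeric)  # only touches numeric columns
log10.numeric(df)                            # a and b logged, id untouched
```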
--
View this message in context:
http://n4.nabble.com/Applying-a-transformation-to-multiple-data-frame-columns-tp1457641p1457666.html
Sent from the R help mailing list archive at Nabble.com.
I have a data.frame with the following:
ID Species Count
1 A 3
1 B 2
1 E 12
2 A 13
2 C 5
2 F 4
3 B 5
3 D 3
I need it in this format:
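The target layout is cut off above, but if the goal is a wide ID-by-Species table of counts, one sketch under that assumption is base R's xtabs():

```r
# Hedged sketch: cross-tabulate Count by ID and Species; combinations
# absent from the long data become 0 in the wide table.
long <- data.frame(ID = c(1, 1, 1, 2, 2, 2, 3, 3),
                   Species = c("A", "B", "E", "A", "C", "F", "B", "D"),
                   Count = c(3, 2, 12, 13, 5, 4, 5, 3))
wide <- xtabs(Count ~ ID + Species, data = long)
```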
That's it, Hadley!
Thank you.
--
View this message in context:
http://n4.nabble.com/read-multiple-large-files-into-one-dataframe-tp891835p1290089.html
Sent from the R help mailing list archive at Nabble.com.
### The following is very helpful #
listOfFiles <- list.files(pattern = "\\.txt$")  # anchored so only .txt files match
d <- do.call(rbind, lapply(listOfFiles, read.table))
###
but what if each file contains information corresponding to a different
subject, and I need to be able to tell where each row came from?
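One common way (a sketch; the `source` column name is made up) is to tag each file's rows with the file name before rbind-ing:

```r
listOfFiles <- list.files(pattern = "\\.txt$")
readOne <- function(f) {
  d <- read.table(f)
  d$source <- f   # hypothetical id column: which file each row came from
  d
}
d <- do.call(rbind, lapply(listOfFiles, readOne))
```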
Hi,
I have many .txt files which look like this:
2009/02/07 12:30:10.0 5.0161 13.208
2009/02/07 12:45:10.0 5.0102 13.350
2009/02/07 13:00:10.0 5.0044 13.473
2009/02/07 16:30:10.0 4.9366 13.788
2009/02/07 16:45:10.0 4.9397
Hi,
I have many files which look like this:
"
2009/02/07 12:30:10.0 5.0161 13.208
2009/02/07 12:45:10.0 5.0102 13.350
2009/02/07 13:00:10.0 5.0044 13.473
2009/02/07 16:30:10.0 4.9366 13.788
2009/02/07 16:45:10.0 4.9397 13.798
end data
Sorry...
Also, how do I remove the last line, which says "end data"?
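One way (a sketch, untested against the real files): read the raw lines first and drop the terminator before parsing. This assumes R >= 2.12, where read.table() gained the text= argument:

```r
readOne <- function(f) {
  lns <- readLines(f)
  lns <- lns[lns != "end data"]   # drop the trailing "end data" line
  read.table(text = lns)
}
```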
--
View this message in context:
http://n4.nabble.com/R-Automatic-File-Reading-tp810331p1182386.html
Sent from the R help mailing list archive at Nabble.com.