Look at the documentation:
?as.Date
as.Date first represents the time in UTC, which is what you can see from:
as.POSIXlt(zzz1, tz="UTC")
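For example (a minimal sketch; zzz1 is from the quoted message below):
midnight CET is 23:00 of the previous day in UTC, and as.Date() applied
to a POSIXct defaults to tz = "UTC", so the date shifts back a day
unless you pass the time zone explicitly:

zzz1 <- as.POSIXct("1999-03-18", tz = "CET")
as.Date(zzz1)              # "1999-03-17" -- converted via UTC by default
as.Date(zzz1, tz = "CET")  # "1999-03-18" -- keeps the local calendar date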
HTH
2009/12/20 MAL :
> All!
>
> This piece of code:
>
> zzz1 <- as.POSIXct("1999-03-18", tz="CET")
> zzz2 <- as.POSIXlt("1999-03-18", tz="CET")
> zzz1 == zzz2
> as.Date(zzz1)
> as.Date(zzz2)
I've also made some comparisons, and taking execution time into
account, sqldf wins. summaryBy is better than aggregate in some
specific situations I have met in practice. I present one such
situation below. It assumes that there are at least two grouping
variables with a high number of levels.
n<-10;
grp1<-sample(1
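The code above is cut off; a minimal sketch of such a benchmark (the
row count, the number of levels, and the summed column x are
assumptions, since the original code is truncated) could look like:

library(doBy)
library(sqldf)
n <- 1000                                  # levels per grouping variable (assumed)
N <- 100000                                # number of rows (assumed)
grp1 <- sample(1:n, N, replace = TRUE)
grp2 <- sample(1:n, N, replace = TRUE)
d <- data.frame(grp1, grp2, x = rnorm(N))
# time the same grouped sum three ways
system.time(aggregate(d$x, by = list(d$grp1, d$grp2), FUN = sum))
system.time(summaryBy(x ~ grp1 + grp2, data = d, FUN = sum))
system.time(sqldf("SELECT grp1, grp2, SUM(x) AS x FROM d GROUP BY grp1, grp2"))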
I sent my last message only to Titus. Sorry :)
Below is my suggestion:
Instead of aggregate, try summaryBy from the doBy package. It's much
faster, and this package has made my life easier :)
try:
summaryBy(dur + x ~ index, data = d, FUN = c(sum, mean))
but index should be a column of the data.frame, as I remember.
I haven't
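A small self-contained example of that call (the column names dur, x
and index are from the post; the data values are made up):

library(doBy)
d <- data.frame(index = rep(c("a", "b"), each = 4),
                dur   = runif(8),
                x     = rnorm(8))
# one row per level of index, with dur.sum, dur.mean, x.sum, x.mean
summaryBy(dur + x ~ index, data = d, FUN = c(sum, mean))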
I totally can't follow that line.
> the result is a 50 by 2 table.
> what is that? Is it a sample?
>
> Thanks.
>
> casper
>
> Marek Janad wrote:
>>
>> data<-data.frame(x=rnorm(49), y=rnorm(49))
>> t(sapply(1:50,function(i){colMeans(data[sample(nrow(data),size=nrow(data),replace=TRUE),])}))
data <- data.frame(x = rnorm(49), y = rnorm(49))
# 50 bootstrap samples: resample the rows with replacement, take the
# column means, and bind the results into a 50 x 2 matrix (row = sample)
t(sapply(1:50, function(i) colMeans(data[sample(nrow(data), size = nrow(data), replace = TRUE), ])))
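Each row of that 50 x 2 result is the pair of column means from one
bootstrap resample. As a follow-up sketch (not from the thread), saving
the result lets you estimate the bootstrap standard error of each mean:

boot.means <- t(sapply(1:50, function(i)
  colMeans(data[sample(nrow(data), size = nrow(data), replace = TRUE), ])))
apply(boot.means, 2, sd)  # spread across resamples = bootstrap SE of mean(x), mean(y)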
2009/12/7 casperyc :
>
> Hi there,
>
> I think that's not what I was aiming for...
>
> I was asked to
>
> Generate 50 Bootstrap samples and
Try
> x <- read.table(textConnection(
+ "1.2 1
+ 1.2 1
+ 1.3 1
+ 1.5 1
+ 1.1 2
+ 1.2 2
+ 9.9 2
+ 0.1 3
+ 1.1 3
+ 1.9 3") )
> x.min <- tapply(x[,1], x[,2], min)
> x[x.min[x[,2]]==x[,1],]
V1 V2
[1,] 1.2 1
[2,] 1.2 1
[3,] 1.1 2
[4,] 0.1 3
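An equivalent way to write that selection is with ave(), which computes
each row's group minimum in place (a sketch of the same idea, not from
the thread):

# keep rows whose V1 equals the minimum V1 within their V2 group
x[x[, 1] == ave(x[, 1], x[, 2], FUN = min), ]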
2009/10/29 Ista Zahn
> I still don't understand