At 20:31 on 11/12/2024, Sorkin, John wrote:
I am trying to use the aggregate function to run a function, catsbydat2, that
produces the mean, minimum, maximum, and number of observations of the values
in a dataframe, inJan2Test, by levels of the dataframe variable MyDay. The
output should be in the form of a dataframe.
#my code:
# This fu
On Mon, 4 Sep 2023, Ivan Calandra wrote:
Thanks Rui for your help; that would be one possibility indeed.
But am I the only one who finds that behavior of aggregate() completely
unexpected and confusing? Especially considering that dplyr::summarise() and
doBy::summaryBy() deal with NAs differently, even though they all use
mean(na.rm = TRUE)?
Ivan:
Just one perhaps extraneous comment.
You said that you were surprised that aggregate() and group_by() did not
have the same behavior. That is a misconception on your part. As you know,
the tidyverse recapitulates the functionality of many base R functions; but
it makes no claims to do so in
Haha, got it now, there is an na.action argument (which defaults to
na.omit) to aggregate() which is applied before calling mean(na.rm =
TRUE). Thank you Rui for pointing this out.
So running it with na.pass instead of na.omit gives the same results as
dplyr::group_by()+summarise():
aggregate(
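A small sketch of the difference just described, with toy data invented for illustration (the column names are not from the thread):

```r
# Invented toy data: one NA in group "a".
df <- data.frame(g  = c("a", "a", "b", "b"),
                 y1 = c(1, NA, 3, 5),
                 y2 = c(10, 20, 30, 40))

# Default na.action = na.omit drops row 2 entirely *before* FUN runs,
# so the mean of y2 in group "a" is 10, not 15.
aggregate(cbind(y1, y2) ~ g, data = df, FUN = mean, na.rm = TRUE)

# na.pass hands the NAs through to FUN, where na.rm = TRUE deals with them;
# this matches dplyr::group_by() + summarise():
aggregate(cbind(y1, y2) ~ g, data = df, FUN = mean, na.rm = TRUE,
          na.action = na.pass)
```

The difference only shows up with several response variables: na.omit removes the whole row when any of them is NA.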
At 10:44 on 04/09/2023, Ivan Calandra wrote:
Dear useRs,
I have just stumbled across a behavior in aggregate() that I cannot
explain. Any help would be appreciated!
Sample data:
my_data <- structure(list(ID = c("FLINT-1", "FLINT-10", "FLINT-100",
"FLINT-101", "FLINT-102", "HORN-10", "HORN
has no missing data.
Iago
From: R-help on behalf of Ivan Calandra
Sent: Monday, 4 September 2023 11:44
To: R-help
Subject: [R] aggregate formula - differing results
Dear useRs,
I have just stumbled across a behavior in aggregate() that I cannot
explain. Any help would be appreciated!
Sample data:
my_data <- structure(list(ID = c("FLINT-1", "FLINT-10", "FLINT-100",
"FLINT-101", "FLINT-102", "HORN-10", "HORN-100", "HORN-102", "HORN-103",
"HORN-104"), Edge
From: Bill Dunlap
Sent: Saturday, 13 May 2023 22:38
To: Stefano Sofia
Cc: r-help@R-project.org
Subject: Re: [R] aggr
You don't have to bother with the subtracting from pi/2 bit ... just assume the
cartesian complex values are (y,x) instead of (x,y).
On May 13, 2023 1:38:51 PM PDT, Bill Dunlap wrote:
I think that using complex numbers to represent the wind velocity makes
this simpler. You would need to write some simple conversion functions
since wind directions are typically measured clockwise from north and the
argument of a complex number is measured counterclockwise from east. E.g.,
wind
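A sketch of the complex-number idea; the function names and sample values are my own, not from the thread. Wind direction is degrees clockwise from north, while Arg() is radians counterclockwise from east, so convert on the way in and on the way out:

```r
# Convert (direction, speed) to a complex velocity and back (assumed helpers).
wind_to_complex <- function(wd, ws) {
  complex(modulus = ws, argument = (90 - wd) * pi / 180)
}
complex_to_wd <- function(z) (90 - Arg(z) * 180 / pi) %% 360

# Vector-mean direction: 350 deg and 10 deg average to north (0), not 180.
mean_wd <- function(wd, ws) complex_to_wd(mean(wind_to_complex(wd, ws)))
mean_wd(c(350, 10), c(1, 1))
```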
Sorry Rui; if you run your code you will get:
Error in FUN(X[[i]], ...) : object 'ws' not found
Moreover, even if you did this:
aggregate(wd ~ day + month, data=df, FUN = my_fun, ws1 = df$ws)
the answer would be wrong as you need to include only the subsets of ws1
corresponding to the split defin
At 15:51 on 13/05/2023, Stefano Sofia wrote:
Dear list users,
I have to aggregate wind direction data (wd) using a function that requires
also a second input variable, wind speed (ws).
This is the function that I need to use:
my_fun <- function(wd1, ws1){
u_component <- -ws1*sin(2*pi*wd1/360)
v_component <- -ws1*cos(2*pi*wd1/360)
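aggregate() hands FUN a single vector, so a two-argument summary needs a parallel split instead. In this sketch the tail of my_fun (the atan2 back-conversion) is an assumed completion of the posted fragment, and the toy df is invented:

```r
# Assumed completion of the posted function: vector-average the components,
# then convert back to a compass direction (the +180 undoes the negation).
my_fun <- function(wd1, ws1) {
  u_component <- -ws1 * sin(2 * pi * wd1 / 360)
  v_component <- -ws1 * cos(2 * pi * wd1 / 360)
  (atan2(mean(u_component), mean(v_component)) * 360 / (2 * pi) + 180) %% 360
}

# Invented data: split *both* columns by the same grouping, then mapply().
df <- data.frame(day = c(13, 13, 14), month = 5,
                 wd = c(350, 10, 90), ws = c(2, 2, 1))
g <- interaction(df$day, df$month, drop = TRUE)
mapply(my_fun, split(df$wd, g), split(df$ws, g))
```

Splitting both columns with the same grouping factor is what keeps each ws subset aligned with its wd subset, which is exactly where the single-vector aggregate() call goes wrong.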
From: Eric Berger [ericjber...@gmail.com]
Sent: Tuesday, 22 September 2020 11.00
To: Jeff Newmiller
Cc: Stefano Sofia; r-help mailing list
Subject: Re: [R] aggregate semi-hourly data not 00-24 but 9-9
Thanks Jeff.
Stefano, per Jeff's comment, you can replace the line
df1$data_POSIXminus9 <- df1$data_POSIX - lubridate::hours(9)
by
df1$data_POSIXminus9 <- df1$data_POSIX - as.difftime(9,units="hours")
On Mon, Sep 21, 2020 at 8:06 PM Jeff Newmiller wrote:
The base R as.difftime function is perfectly usable to create this offset
without pulling in lubridate.
On September 21, 2020 8:06:51 AM PDT, Eric Berger wrote:
Hi Stefano,
If you mean from 9am on one day to 9am on the following day, you can
do a trick. Simply subtract 9hrs from each timestamp and then you want
midnight to midnight for these adjusted times, which you can get using
the method you followed.
I googled and found that lubridate::hours() can be
Dear R-list members,
I have semi-hourly snowfall data.
I should sum the semi-hourly increments (only the positive ones, but this is
not described in my example) day by day, not from 00 to 24 but from 9 to 9.
I am able to use the diff function, create a list of days and use the function
aggregate
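The shift-then-aggregate trick can be sketched as follows; the column names and sample values are invented:

```r
# Subtract 9 hours so each "day" runs 09:00-09:00, then sum per calendar day.
df1 <- data.frame(
  data_POSIX = as.POSIXct("2020-01-01 00:00", tz = "UTC") +
    as.difftime(seq(0, 47) * 30, units = "mins"),   # 48 half-hours
  snow_incr  = 0.1
)
df1$day9 <- as.Date(df1$data_POSIX - as.difftime(9, units = "hours"),
                    tz = "UTC")
aggregate(snow_incr ~ day9, data = df1, FUN = sum)
```

The first 18 half-hours (00:00 to 08:30) fall into the previous shifted day, the remaining 30 into the current one; for the positive-increments-only variant, `FUN = function(x) sum(pmax(x, 0))` would do.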
Thank you!
This is exactly what I was looking for!
Cheers!
On Wed, Feb 12, 2020 at 11:29 PM Jim Lemon wrote:
Hi Stefan,
How about this:
sddf<-read.table(text="age x
45 1
45 2
46 1
47 3
47 3",
header=TRUE)
library(prettyR)
sdtab<-xtab(age~x,sddf)
sdtab$counts
Jim
On Thu, Feb 13, 2020 at 7:40 AM stefan.d...@gmail.com
wrote:
Thank you, this is already very helpful.
But how do I get it in the form
age  var_x=1  var_x=2  var_x=3
 45        1        1        0
 46        1        0        0
So it would be a data frame with 4 variables.
Cheers!
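The wide layout asked for above can also be built in base R from table(), using the data from the post:

```r
# Cross-tabulate, then coerce the contingency table to a wide data frame.
sddf <- data.frame(age = c(45, 45, 46, 47, 47), x = c(1, 2, 1, 3, 3))
wide <- as.data.frame.matrix(table(sddf$age, sddf$x))
names(wide) <- paste0("var_x=", names(wide))
wide <- cbind(age = as.numeric(rownames(wide)), wide)
wide
```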
On Wed, Feb 12, 2020 at 10:25 PM William Dunlap wrote:
You didn't say how you wanted to use it as a data.frame, but here is one way
d <- data.frame(
check.names = FALSE,
age = c(45L, 45L, 46L, 47L, 47L),
x = c(1L, 2L, 1L, 3L, 3L))
with(d, as.data.frame(table(age,x)))
which gives:
  age x Freq
1  45 1    1
2  46 1    1
3  47 1    0
4  45 2    1
well, if I think about, its actually a simple frequency table grouped
by age. but it should be usable a matrix or data frame.
On Wed, Feb 12, 2020 at 9:48 PM wrote:
>
> So a pivot table?
>
> On 12 Feb 2020 20:39, stefan.d...@gmail.com wrote:
Dear All,
I have a seemingly standard problem to which I somehow I do not find
a simple solution. I have individual level data where x is a
categorical variable with 3 categories which I would like to aggregate
by age.
age x
45 1
45 2
46 1
47 3
47 3
and so on.
It should after transfo
You can also use 'dplyr'
library(tidyverse)
result <- pcr %>%
group_by(Gene, Type, Rep) %>%
summarise(mean = mean(Ct),
sd = sd(Ct),
oth = sd(Ct) / sqrt(sd(Ct))
)
Jim Holtman
Hi Cyrus,
Try this:
pcr<-data.frame(Ct=runif(66,10,20),Gene=rep(LETTERS[1:22],3),
Type=rep(c("Std","Unkn"),33),Rep=rep(1:3,each=22))
testagg<-aggregate(pcr$Ct,c(pcr["Gene"],pcr["Type"],pcr["Rep"]),
FUN=function(x){c(mean(x), sd(x), sd(x)/sqrt(sd(x)))})
nxcol<-dim(testagg$x)[2]
newxs<-paste("x",1
Dear users,
i am trying to summarize data using "aggregate" with the following command:
aggregate(pcr$Ct,c(pcr["Gene"],pcr["Type"],pcr["Rep"]),FUN=function(x){c(mean(x),
sd(x), sd(x)/sqrt(sd(x)))})
and the structure of the resulting data frame is
'data.frame':66 obs. of 4 variables:
$ Gen
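The 4th "variable" here is really a matrix column; do.call(data.frame, ...) flattens it into ordinary columns. The data construction below is copied from the reply above (it uses runif(), so values vary run to run), and the sd/sqrt(sd) term is simplified to sd for brevity:

```r
pcr <- data.frame(Ct = runif(66, 10, 20),
                  Gene = rep(LETTERS[1:22], 3),
                  Type = rep(c("Std", "Unkn"), 33),
                  Rep  = rep(1:3, each = 22))
agg <- aggregate(Ct ~ Gene + Type + Rep, data = pcr,
                 FUN = function(x) c(mean = mean(x), sd = sd(x)))
flat <- do.call(data.frame, agg)   # matrix column becomes Ct.mean, Ct.sd
str(flat)
```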
Hi,
if you are willing to use dplyr, you can do all in one line of code:
library(dplyr)
df<-data.frame(id=1:10,A=c(123,345,123,678,345,123,789,345,123,789))
df%>%group_by(unique_A=A)%>%summarise(list_id=paste(id,collapse=", "))->r
cheers
On 06.06.2018 at 10:13, Massimo Bressan wrote:
> #given
which() is unnecessary. Use logical subscripting:
... t$id[t$A ==x]
Further simplification can be gotten by using the with() function:
l <- with(t, sapply(unique(A), function(x) id[A ==x]))
Check this though -- there might be scoping issues.
Cheers,
Bert
On Thu, Jun 7, 2018, 6:49 AM Massimo
#ok, finally this is my final "best and more compact" solution of the problem
by merging different contributions (thanks to all indeed)
t<-data.frame(id=c(18,91,20,68,54,27,26,15,4,97),A=c(123,345,123,678,345,123,789,345,123,789))
l<-sapply(unique(t$A), function(x) t$id[which(t$A==x)])
r<-dat
vals<- lapply(idx, function(index) x$id[index])
data.frame(unique_A = uA, list_vals=unlist(lapply(vals, paste, collapse = ",
")))
best
From: "Ben Tupper"
To: "Massimo Bressan"
Cc: "r-help"
Sent: Thursday, 7 June 2018 14:47:55
Subject: Re: [
Hi,
Does this do what you want? I had to change the id values to something more
obvious. It uses tibbles which allow each variable to be a list.
library(tibble)
library(dplyr)
x <- tibble(id=LETTERS[1:10],
A=c(123,345,123,678,345,123,789,345,123,789))
uA <- unique(x$
Using which() to subset t$id should do the trick:
sapply(levels(t$A), function(x) t$id[which(t$A==x)])
Ivan
--
Dr. Ivan Calandra
TraCEr, laboratory for Traceology and Controlled Experiments
MONREPOS Archaeological Research Centre and
Museum for Human Behavioural Evolution
Schloss Monrepos
56567
sorry, but by further looking at the example I just realised that the posted
solution is not completely what I need because in fact I do not need to get
back the 'indices' but instead the corresponding values of column A
#please consider this new example
t<-data.frame(id=c(18,91,20,68,54,27
thanks for the help
I'm posting here the complete solution
t<-data.frame(id=1:10,A=c(123,345,123,678,345,123,789,345,123,789))
t$A <- factor(t$A)
l<-sapply(levels(t$A), function(x) which(t$A==x))
r<-data.frame(list_id=unlist(lapply(l, paste, collapse = ", ")))
r<-cbind(unique_A=row.names(r)
Hi Massimo,
Something along those lines could help you I guess:
t$A <- factor(t$A)
sapply(levels(t$A), function(x) which(t$A==x))
You can then play with the output using paste()
Ivan
#given the following reproducible and simplified example
t<-data.frame(id=1:10,A=c(123,345,123,678,345,123,789,345,123,789))
t
#I need to get the following result
r<-data.frame(unique_A=c(123, 345, 678,
789),list_id=c('1,3,6,9','2,5,8','4','7,10'))
r
# i.e. aggregate over the variable "A
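The requested result can also come straight from aggregate() with a collapsing paste(), using the data from the example above:

```r
t <- data.frame(id = 1:10,
                A  = c(123, 345, 123, 678, 345, 123, 789, 345, 123, 789))
# paste(..., collapse = ",") joins all ids within each unique value of A.
r <- aggregate(id ~ A, data = t, FUN = paste, collapse = ",")
names(r) <- c("unique_A", "list_id")
r
```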
Thank you again Pikal and Bert. Using lapply, as Bert suggested, was
the first thing that I thought of for dealing with this question and was
mentioned in my original posting. I just did not know how to implement
it to get the results/form i want. Below is what i did but could not
get it to give me th
> I could construct example data.frames myself but most probably they
> would be different from yours and also the result would not be necessarily
> the same as you expect.
>
> You should post those data frames as output from dput(data) and show us
> real desired result from
1 0 0
I believe that in your dfn there is a typo in the second row and first column
and that with your 3 data.frames the result should be 1.
Cheers
Petr
> -Original Message-----
> From: Ek Esawi [mailto:esaw...@gmail.com]
> Sent: Tuesday, February 27, 2018 2:54 PM
> To: PIKAL Petr ; r-help@r-pr
r-help-boun...@r-project.org] On Behalf Of Ek Esawi
> Sent: Wednesday, February 21, 2018 3:34 AM
> To: r-help@r-project.org
> Subject: [R] Aggregate over multiple and unequal column length data frames
All columns in a data.frame **must** have the same length. So you cannot do
this unless empty values are filled with missings (NA's).
-- Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along and
sticking things into it."
-- Opus (aka Berkeley Breathed in his "Bl
Hi All--
I have generated several 2 column data frames with variable length. The
data frames have the same column names and variable types. I was trying to
aggregate over the 2nd column for all the data frames, but could not figure
out how.
I thought I could make them all of equal length then co
Thank you for your response. Note that with R 3.4.3, I get the same
result with simplify=TRUE or simplify=FALSE.
My problem was that the behaviour was different if I define my columns as
character or as numeric, but a few minutes ago I discovered there also
is a stringsAsFactors option in the fun
Don't use aggregate's simplify=TRUE when FUN() produces return
values of various dimensions. In your case, the shape of table(subset)'s
return value depends on the number of levels in the factor 'subset'.
If you make B a factor before splitting it by C, each split will have the
same number of leve
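A sketch of the fix just described, using the question's data (the truncated B vector is completed with zeros as an assumption):

```r
df <- data.frame(A = c(1, 1, 1, 1, 0, 0, 0, 0),
                 B = c(1, 0, 1, 0, 0, 0, 0, 0))
# With B as a factor, table() returns the same set of levels in every group,
# so aggregate() can simplify the results cleanly.
df$B <- factor(df$B)
aggregate(B ~ A, data = df, FUN = table)
```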
The normal input to a factory that builds cars is car parts. Feeding whole
trucks into such a factory is likely to yield odd-looking results.
Both aggregate and table do similar kinds of things, but yield differently
constructed outputs. The output of the table function is not well-suited to be
Dear R users,
When I use aggregate with table as FUN, I get what I would call a
strange behaviour if it involves numerical vectors and one "level" of it
is not present for every "levels" of the "by" variable:
---
> df <-
data.frame(A=c(1,1,1,1,0,0,0,0),B=c(1,0,1,0,0,
Hi again,
Here is a version cleaned up a bit. Too tired to do it last night.
mndf<-data.frame(st=seq(1483360938,by=1700,length=10),
et=seq(1483362938,by=1700,length=10),
store=c(rep("gap",5),rep("starbucks",5)),
zip=c(94000,94000,94100,94100,94200,94000,94000,94100,94100,94200),
store_id=seq(5
Hi Mark,
I think you might want something like this:
mndf<-data.frame(st=seq(1483360938,by=1700,length=10),
et=seq(1483362938,by=1700,length=10),
store=c(rep("gap",5),rep("starbucks",5)),
zip=c(94000,94000,94100,94100,94200,94000,94000,94100,94100,94200),
store_id=seq(50,59))
# orders the time
I have a data frame that has a set of observed dwell times at a set of
locations. The metadata for the locations includes things that have varying
degrees of specificity. I'm interested in tracking the number of people
present at a given time in a given store, type of store, or zip code.
Here's an
5, -53.75, -53.75, -53.75,
>>> -53.75), GDP = c(1.683046, 0.3212307, 0.0486207, 0.1223268, 0.0171909,
>>> 0.0062104, 0.22379, 0.1406729, 0.0030038, 0.0057422)), .Names =
>>> c("longitude",
>>> "latitude", "GDP"), row.names = c(4L, 17L
> same result can be achieved by
>
> dat.ag <- aggregate(dat[, c("DCE", "DP")], by = list(dat$first.Name,
>   dat$Name, dat$Department), "I")
>
> Sorting according to the first row seems to be quite tricky. You could
> probably get closer by using some combination of split and order and
> arranging back chunks of data
>
> ooo1 <- order(split(dat$DCE, interaction(dat$first.Name, dat$Name,
>   dat$Department, drop = T))[[1]])
> data.frame(sapply(split(dat$DCE, interaction(dat$first.Name, dat$Name,
>   dat$Department, drop = T)), rbind))[ooo1, ]
>
>   Ancient.Nation.QLH Amish.Wives.TAS Auction.Videos.YME
> 2                 NA              NA                 NA
> 4               0.28              NA                 NA
> 1               0.54            0.59               0.57
> 3               0.54            0.59               0.57
>
> however I wonder why the order according to the first row is necessary if all
> NAs are on correct positions?
>
> Cheers
> Petr
> -Original Message-
> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David
> Winsemius
> Sent: Friday, November 18, 2016 9:30 AM
> To: Karim Mezhoud
> Cc: r-help@r-project.org
> Subject: R
> On Nov 17, 2016, at 11:27 PM, Karim Mezhoud wrote:
Dear all,
the dat has missing values NA,
   first.Name   Name Department  DCE   DP       date
5     Auction Videos        YME 0.57 0.56 2013-09-30
18      Amish  Wives        TAS 0.59 0.56 2013-09-30
34    Ancient Nation        QLH 0.54 0.58 2013-09-30
53    Auction Videos        YME   NA
38 3.2 S2 A
> S2B 22 3.2 S2 B
>
> David C
>
> -Original Message-
> From: Gang Chen [mailto:gangch...@gmail.com]
> Sent: Wednesday, August 24, 2016 2:51 PM
> To: David L Carlson
> Cc: r-help mailing list
> Subject: Re: [R] aggregate
Thanks again for patiently offering great help, David! I just learned
dput() and paste0() now. Hopefully this is my last question.
Suppose a new dataframe is as below (one more numeric column):
myData <- structure(list(X = c(1, 2, 3, 4, 5, 6, 7, 8), Y = c(8, 7, 6,
5
paste0() function:
>
>> sapply(split(myData, paste0(myData$S, myData$Z)), function(x) crossprod(x[,
>> 1], x[, 2]))
> S1A S1B S2A S2B
> 22 38 38 22
>
> David C
>
> -Original Message-
> From: Gang Chen [mailto:gangch...@gmail.com]
> Sent: Wedne
David C
-Original Message-
From: Gang Chen [mailto:gangch...@gmail.com]
Sent: Wednesday, August 24, 2016 11:56 AM
To: David L Carlson
Cc: Jim Lemon; r-help mailing list
Subject: Re: [R] aggregate
Thanks a lot, David! I want to further expand the operation a little
bit. With a ne
], x[, 2])))
> Z CP
> A A 10
> B B 10
>
> David C
>
>
> -Original Message-
> From: Gang Chen [mailto:gangch...@gmail.com]
> Sent: Wednesday, August 24, 2016 10:17 AM
> To: David L Carlson
> Cc: Jim Lemon; r-help mailing list
> Subject: Re: [R] aggregate
Thank you all for the suggestions! Yes, I'm looking for the cross
product between the two columns of X and Y.
A follow-up question: what is a nice way to merge the o
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Jim Lemon
Sent: Tuesday, August 23, 2016 6:02 PM
To: Gang Chen; r-help mailing list
Subject: Re: [R] aggregate
Hi Gang Chen,
If I have the right idea:
for(zval in levels(myData$Z))
crossprod(as.matrix(myData[myData$Z==zval,c("X","Y")]))
Jim
On Wed, Aug 24, 2016 at 8:03 AM, Gang Chen wrote:
> This is a simple question: With a dataframe like the following
>
> myData <- data.frame(X=c(1, 2, 3, 4), Y=c(4, 3
> On Aug 23, 2016, at 3:03 PM, Gang Chen wrote:
This is a simple question: With a dataframe like the following
myData <- data.frame(X=c(1, 2, 3, 4), Y=c(4, 3, 2, 1), Z=c('A', 'A', 'B', 'B'))
how can I get the cross product between X and Y for each level of
factor Z? My difficulty is that I don't know how to deal with the fact
that crossprod()
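One way to get a named result per level of Z, using the data from the question (crossprod() returns a 1x1 matrix here, so `[1, 1]` extracts the scalar):

```r
myData <- data.frame(X = c(1, 2, 3, 4), Y = c(4, 3, 2, 1),
                     Z = c("A", "A", "B", "B"))
# Split the whole data frame by Z, then take X'Y within each piece.
sapply(split(myData, myData$Z),
       function(d) crossprod(d$X, d$Y)[1, 1])
```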
> (not necessarily faster, because apply is still a for loop inside):
>
> f <- function( m, nx, ny ) {
>   # redefine the dimensions of my
>   a <- array( m
>             , dim = c( ny
lock.means, 2, 4)
tst_2x2
[,1] [,2] [,3] [,4]
[1,] 3.5 11.5 19.5 27.5
[2,] 5.5 13.5 21.5 29.5
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun..
>>> --
>>> Sent from my phone. Please excuse my brevity.
>>>
>>> On July 27, 2016 9:08:32 AM PDT, David L Carlson
>wrote:
>>>> This should be faster. It uses apply() across the blocks.
>>>>
>>>>> ilon <- seq(1,8,nx)
R-help [mailto:r-help-boun...@r-poject.org] On Behalf Of Anthoni, Peter
(IMK)
Sent: Wednesday, July 27, 2016 6:14 AM
To: r-help@r-project.org
Subject: [R] Aggregate matrix in a 2 by 2 manor
Hi all,
I need to aggregate some matrix data (1440x720) to a lower dimension (720x360)
for lots of years and variables
I can do double for loop, but that will be slow. Anybody know a quicker way?
here an example with a smaller matrix size:
tst=matrix(1:(8*4),ncol=8,nrow=4)
tst_2x2=matrix(NA,nc
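A loop-free sketch of the 2x2 aggregation: reshape so each 2x2 block occupies two array margins, then average over the remaining two (this reproduces the 2x4 result shown upthread):

```r
tst <- matrix(1:(8 * 4), ncol = 8, nrow = 4)
block_mean_2x2 <- function(m) {
  # Dim 1 and 3 index the 2 rows / 2 cols inside a block; 2 and 4 index blocks.
  a <- array(m, dim = c(2, nrow(m) / 2, 2, ncol(m) / 2))
  apply(a, c(2, 4), mean)
}
block_mean_2x2(tst)
```

Swapping mean for sum (or replacing apply with colMeans tricks) works the same way, and the reshape itself is free, so this scales to the 1440x720 case.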
= "data.frame")
I would like to aggregate the data 1 degree by 1 degree. I understand that
the first step is to convert to raster. I have tried:
rasterDF <- rasterFromXYZ(temp)
r <- aggregate(rasterDF,fact=2, fun=sum)
But this does not seem to work. Could anyone help me ou
Hi David,
Thank you so much for your help and others. Here is the code.
balok <- read.csv("G:/A_backup 11 mei 2015/DATA (D)/1 Universiti Malaysia
Pahang/ISM-3 2016 UM/Data/Hourly Rainfall/balok2.csv",header=TRUE)
head(balok, 10); tail(balok, 10)
str(balok)
## Introduce NAs for
balok$Rain.mm2 <-
> On Jul 13, 2016, at 3:21 AM, roslinazairimah zakaria
> wrote:
>
> Dear David,
>
> I got your point. How do I remove the data that contain "0.0?".
>
> I tried : balok <- cbind(balok3[,-5], balok3$Rain.mm[balok3$Rain.mm==0.0?] <-
> NA)
If you had done as I suggested, the items with factor
Behalf Of
> roslinazairimah zakaria
> Sent: Wednesday, July 13, 2016 12:22 PM
> To: David Winsemius
> Cc: r-help mailing list
> Subject: Re: [R] Aggregate rainfall data
use `gsub()` after the `as.character()` conversion to remove
everything but valid numeric components from the strings.
On Wed, Jul 13, 2016 at 6:21 AM, roslinazairimah zakaria
wrote:
Dear David,
I got your point. How do I remove the data that contain "0.0?".
I tried : balok <- cbind(balok3[,-5], balok3$Rain.mm[balok3$Rain.mm==0.0?]
<- NA)
However all the Rain.mm column all become NA.
day month year Time balok3$Rain.mm[balok3$Rain.mm == "0.0?"] <- NA
1 30 7 200
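The gsub()-after-as.character() suggestion above can be sketched like this (the sample strings are invented; the real data apparently contained malformed entries such as "0.0?"):

```r
# Strip everything that is not a digit or a dot, then convert to numeric.
rain_chr <- c("0", "1.5", "0.0?", "12.0")
rain_num <- as.numeric(gsub("[^0-9.]", "", rain_chr))
rain_num
```

Unlike assigning NA to the bad entries, this keeps the numeric part of each malformed value.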
> On Jul 12, 2016, at 3:45 PM, roslinazairimah zakaria
> wrote:
Dear R-users,
I have these data:
head(balok, 10); tail(balok, 10)
Date Time Rain.mm
1 30/7/2008 9:00:00 0
2 30/7/2008 10:00:00 0
3 30/7/2008 11:00:00 0
4 30/7/2008 12:00:00 0
5 30/7/2008 13:00:00 0
6 30/7/2008 14:00:00 0
7 30/7/2008 15:00:00
> On May 1, 2016, at 9:30 AM, Miluji Sb wrote:
So post some example code that demonstrates you paid atten
Dear Dennis,
Thank you for your reply. I can use the dplyr/data.table packages to
aggregate - its the matching FIPS codes to their states that I am having
trouble. Thanks again.
Sincerely,
Milu
On Sun, May 1, 2016 at 6:20 PM, Dennis Murphy wrote:
> Hi:
>
> Several such packages exist. Given t
Dear all,
I have the following data by US FIPS code. Is there a package to aggregate
the data by State and Census divisions?
temp <- dput(head(pop1,5))
structure(list(FIPS = c("01001", "01003", "01005", "01007", "01009"
), death_2050A1 = c(18.19158, 101.63088, 13.18896, 10.30068,
131.91798), deat
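This answer is not from the thread, but the key fact is that the first two digits of a county FIPS code are the state code, so base aggregate() can roll counties up to states without an extra package (data are the five rows and one variable visible in the dput fragment above):

```r
pop1 <- data.frame(FIPS = c("01001", "01003", "01005", "01007", "01009"),
                   death_2050A1 = c(18.19158, 101.63088, 13.18896,
                                    10.30068, 131.91798))
# State FIPS = first two characters of the county FIPS.
pop1$state <- substr(pop1$FIPS, 1, 2)
aggregate(death_2050A1 ~ state, data = pop1, FUN = sum)
```

Mapping the two-digit codes to state names or Census divisions would still need a lookup table.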