Also, a more compact solution would be:
library(plyr)
# Creating a different data frame, as the data2 columns were almost the same as
# data1
set.seed(24)
data3 <- as.data.frame(matrix(sample(1:40, 6*4, replace=TRUE), ncol=6))
colnames(data3) <- colnames(data2)
join(data3, data1)
#Joining by: a, b, c,
Hi,
I'm not sure about your expected result.
library(plyr)
data2New <- join_all(lapply(setdiff(names(data1), names(data2)),
                            function(x) {data2[,x] <- NA; data2}))
data1New <- join_all(lapply(setdiff(names(data2), names(data1)),
                            function(x) {data1[,x] <- NA; data1}))
data1New
# a b c d e z f g
#1 1 5
On Aug 8, 2013, at 6:09 PM, iza.ch1 wrote:
> Hi
>
> I receive a very strange error message after trying to do t-test. When I
> write the code t.test(x) I get an error message: error in t.test(x) :
> function "sqr" not found
>
> I don't understand this problem. Can someone help me how to do i
Hi
I receive a very strange error message after trying to do a t-test. When I write
the code t.test(x) I get an error message: error in t.test(x) : function "sqr"
not found
I don't understand this problem. Can someone help me how to do it right?
Thanks a lot :)
A lot of helpful solutions that pretty much all work. Thanks, everyone!
_
Kevin Parent, Ph.D
Korea Maritime University
From: Rolf Turner
To: Jim Lemon
Sent: Thursday, August 8, 2013 6:26 PM
Subject: Re: [R] Extracting only multiple occurrence
I am not an expert on shrinkage estimators of partial correlations
(such as the one in corpcor), but my sense is that it is difficult to
provide a good estimate of a p-value. You could try to email the
authors of the package and ask them, but this may be more of a statistics
question than an R question.
Assuming your data frame of users is named 'users',
and using your mapping vectors:
users$regions <- ''
users$regions[ users$State_Code %in% NorthEast ] <- 'NorthEast'
and repeat for the other regions
Or, if you put your mappings in a data frame then it is
as simple as
merge(yourdataframe
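A minimal sketch of that merge() idea, assuming a hypothetical lookup table built from the posted mapping vectors (only a few of the codes are shown, and the column names are assumptions):

# Hypothetical long-format lookup table: one row per state code
region.map <- data.frame(
  State_Code = c(07, 20, 14, 15, 01, 04),
  REGION     = c("NorthEast", "NorthEast", "MidWest", "MidWest", "South", "South")
)
# Left-join the regions onto the users data frame
users <- merge(users, region.map, by = "State_Code", all.x = TRUE)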
If I understand the request correctly, here is an easy-to-follow example:
(I'm using the first four letters as surrogates for the file names)
(and assuming we want quotes at both the beginning and the end)
> tmp <- letters[1:4]
> tmp
[1] "a" "b" "c" "d"
> foo <- paste( "'", paste(tmp,collapse="'
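The paste() call is cut off above; a hedged completion of the idea, with tmp again standing in for the file names:

tmp <- letters[1:4]
# wrap each element in single quotes and separate with commas
foo <- paste0("'", paste(tmp, collapse = "','"), "'")
foo
# [1] "'a','b','c','d'"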
This is a question for R-sig-mac.
However, try
edit(file=file.choose())
Also, before your edit() command, try
getwd()
Is the file in that directory??
-Don
--
Don MacQueen
Lawrence Livermore National Laboratory
7000 East Ave., L-627
Livermore, CA 94550
925-423-1062
On 8/8/13 7:51 AM
Not quite, David. ... (see inline)
On Thu, Aug 8, 2013 at 1:56 PM, David Winsemius wrote:
>
> On Aug 8, 2013, at 10:54 AM, Bert Gunter wrote:
>
>> Perhaps
>>
>> ?vcov
>>
>> is what you are looking for.
>>
>> -- Bert
>>
>> On Thu, Aug 8, 2013 at 10:37 AM, iza.ch1 wrote:
>>> Hi
>>>
>>> Can someone
On Aug 8, 2013, at 10:54 AM, Bert Gunter wrote:
> Perhaps
>
> ?vcov
>
> is what you are looking for.
>
> -- Bert
>
> On Thu, Aug 8, 2013 at 10:37 AM, iza.ch1 wrote:
>> Hi
>>
>> Can someone give me a hint on how to create a matrix with standard errors
>> from lm model? I have already manage
On 08/04/2013 02:13 AM, Simon Zehnder wrote:
So, I found a solution: First, in the "initialize" method of class C, coerce
the C object into a B object. Then call the next method in the list with the
B class object. Now, in the "initialize" method of class B, the object is a B
object and the respecti
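For readers unfamiliar with chaining S4 "initialize" methods, a minimal generic sketch of callNextMethod() dispatch (the slots are invented, and this is not necessarily Simon's exact coercion fix):

library(methods)
setClass("B", slots = c(b = "numeric"))
setClass("C", contains = "B", slots = c(c = "numeric"))
setMethod("initialize", "C", function(.Object, ...) {
  # hand the object on to the next initialize method in the chain
  .Object <- callNextMethod(.Object, ...)
  .Object
})
new("C", b = 1, c = 2)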
This is exactly what I'm looking for. Each data frame will have the
columns that are unique to the other filled with NA.
Thanks.
Steven H. Ranney
On Thu, Aug 8, 2013 at 12:17 PM, arun wrote:
> HI,
>
> Not sure about your expected result.
>
> library(plyr)
> data2New<-join_all(lapply(setdif
I have two data frames
data1 <- as.data.frame(matrix(data=c(1:4,5:8,9:12,13:24), nrow=4, ncol=6,
  byrow=F, dimnames=list(c(1:4), c("a","b","c","d","e","z"))))
data2 <- as.data.frame(matrix(data=c(1:4,5:8,9:12,37:48), nrow=4, ncol=6,
  byrow=F, dimnames=list(c(1:4), c("a","b","c","f","g","z"))))
that h
Perhaps
?vcov
is what you are looking for.
-- Bert
On Thu, Aug 8, 2013 at 10:37 AM, iza.ch1 wrote:
> Hi
>
> Can someone give me a hint on how to create a matrix with standard errors
> from lm model? I have already managed to get the matrix with coefficients:
>
> coef<-as.data.frame(sapply(seq
Tomas,
Also: please don't send html emails (as is specified in the posting
guide[1]). This is what your email looked like on our end of the
table:
https://stat.ethz.ch/pipermail/r-help/attachments/20130808/74a3c7c2/attachment.pl
[1] Posting Guide: http://www.r-project.org/posting-guide
Hi
Can someone give me a hint on how to create a matrix with standard errors from
lm model? I have already managed to get the matrix with coefficients:
coef <- as.data.frame(sapply(seq_len(ncol(es.w)), function(i) {
  x1 <- summary(lm(es.w[,i] ~ es.median[,i]))
  x1$coef[,1]
}))
but I can't get the one
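A hedged sketch of one way to get the matching matrix of standard errors, reusing the sapply pattern above (column 2 of the summary coefficient table holds the standard errors):

se <- as.data.frame(sapply(seq_len(ncol(es.w)), function(i) {
  x1 <- summary(lm(es.w[,i] ~ es.median[,i]))
  x1$coef[,2]   # column 2 = Std. Error
}))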
Hi all,
WriteXLS version 3.2.1 has been submitted to CRAN, with thanks to the CRAN
maintainers.
This is a bug fix release with the following fixes:
1. When row.names = TRUE, the initial comments row, which contains the comments
attributes for the data frame columns and is rbind()ed to the sour
It's not clear how you are planning to use this within R, but
you don't need a loop.
individual.proj.quote <-
  capture.output(write.table(matrix(individual.proj, 1),
                             quote=TRUE, sep=",", row.names=FALSE, col.names=FALSE))
This produces a single character string which consists of the
quoted file names.
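A small illustration with made-up file names standing in for individual.proj:

individual.proj <- c("fileA.tif", "fileB.tif", "fileC.tif")
individual.proj.quote <-
  capture.output(write.table(matrix(individual.proj, 1),
                             quote = TRUE, sep = ",",
                             row.names = FALSE, col.names = FALSE))
individual.proj.quote
# [1] "\"fileA.tif\",\"fileB.tif\",\"fileC.tif\""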
On Thu, Aug 08, 2013 at 11:38:33AM -0500, Charles Determan Jr wrote:
> Hi Jenny,
>
> Firstly, to my knowledge you cannot assign the output of cat to an object
> (i.e. it only prints it).
> Second, you can just add the 'collapse' option of the paste function.
>
> individual.proj.quote <- paste(ind
Tomas:
Do some reading on parallelization.
Parallelizing code requires the overhead of setting up, keeping track
of, and synchronizing the separate threads. Whether that overhead is worth the
cost depends on the problem, the size, the algorithms, the
machines/hardware, ...
Cheers,
Bert
On Thu, Aug 8, 201
On Thu, Aug 08, 2013 at 04:05:57PM +0100, Jenny Williams wrote:
> I am having difficulty storing the output of a for loop I have generated. All
> I want to do is find all the files that I have, create a string with all of
> the names in quotes and separated by commas. This is proving more difficu
Hi Jenny,
Firstly, to my knowledge you cannot assign the output of cat to an object
(i.e. it only prints it).
Second, you can just add the 'collapse' option of the paste function.
individual.proj.quote <- paste(individual.proj, collapse = ",")
if you really want the quotes
individual.proj.quote
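One way to add the quotes as well, as a sketch (the example file names are invented, and this is not necessarily the line that was cut off above):

individual.proj <- c("fileA.tif", "fileB.tif")
individual.proj.quote <- paste0('"', individual.proj, '"', collapse = ",")
individual.proj.quote
# [1] "\"fileA.tif\",\"fileB.tif\""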
I am having difficulty storing the output of a for loop I have generated. All I
want to do is find all the files that I have, create a string with all of the
names in quotes and separated by commas. This is proving more difficult than I
initially anticipated.
I am sure it is either very simple o
Hi,
I can't seem to get this to work:
http://www.endmemo.com/program/R/cbind.php
Do I save the data as data1.csv in Notepad and pull in the file? Do I type
data1.csv<-Subtype,Gender,Expression,A,m,-0.54,A,f,-0.8,B,f,-1.03,C,m,-0.41??
I can do a simple matrix. But, I want to have headers and dat
Hello,
I'm pretty confused. I want to speed up my algorithm by using mclapply:
parallel, but when I compare time efficiency, apply still wins.
I'm smoothing log2ratio data by rq.fit.fnb:quantreg which is called by my
function quantsm and I'm wrapping my data into matrix/list for apply/lapply
Dear useR,
I don't understand the results of the predict.bam function of the mgcv package
when constructing a varying-coefficient model with bam instead of gam:
library("mgcv")
dat <- gamSim(4)
b <- gam(y ~ fac+s(x2,by=fac)+s(x0), data=dat)
predict(b, dat[1,], type = "terms")
with gam everything is
Hi,
You can save it in a file. I copied and pasted:
Subtype,Gender,Expression
A,m,-0.54
A,f,-0.8
B,f,-1.03
C,m,-0.41
on the "gedit" and save it as "data1.csv". You might be able to do the same
with notepad.
x <- read.csv("data1.csv",header=T,sep=",")
x2 <- read.csv("data2N.csv",header=T,sep=",")
dat1<- read.table(text="
Name State_Code
Tom 20
Harry 56
Ben 05
Sally 04
",sep="",header=TRUE,stringsAsFactors=FALSE)
dat2<- do.call(cbind,list(NorthEast,MidWest,South,West,Other))
colnames(dat2)<- c("NorthEast","MidWest","South","West","Other")
dat2<- as.data.frame(dat2)
I cannot reproduce your problem. You will have to
give more details. (I assume you have already made
the suggested changes to your code - either label
the 3rd argument to your assign call 'envir=' or use
the syntax 'cpufile[[key]] <- value' instead of assign.)
To start debugging this, have your
Dark:
1. In future, please use dput() to post data to enable us to more
easily read them from your email.
2. As Berend demonstrates, using a more appropriate data structure is
what's required. Here is a slightly shorter, but perhaps trickier
alternative to his solution:
> df ## Your example da
Dear Vera,
I had a similar problem once and, as far as I can remember, the reason was
some negative inputs or outputs.
# Check for negative values
x1a.neg <- apply(x1a, 1, function(x) any(x<0))
y1.neg <- apply(y1, 1, function(x) any(x<0))
exclude <- x1a.neg | y1.neg
# Exclude negative rows
x1a.ne
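The snippet is cut off above; presumably it continues along these lines (an assumption):

x1a.new <- x1a[!exclude, ]
y1.new  <- y1[!exclude, ]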
I tried using various versions of the 'edit' command. Here is an account of
how this failed. I hope I have included all relevant information.
I haven't used R for a couple of years. Before restarting with R, I
downloaded the latest version I could find in its binary version, and
installed it witho
Dear expeRts,
I have run some simulations under R 2.15.1 on a Mac, and I have rerun a
sample of them under R 3.0.1 on Windows (and also for comparison under
R2.14.1 on Windows). For most cases, I get exactly the same results in
all three runs. However, for those cases that depend on principal
Revolution Analytics staff write about R every weekday at the Revolutions blog:
http://blog.revolutionanalytics.com
and every month I post a summary of articles from the previous month
of particular interest to readers of r-help.
In case you missed them, here are some articles related to R from t
On 08-08-2013, at 11:33, Dark wrote:
> Hi all,
>
> I have a dataframe of users which contain US-state codes.
> Now I want to add a column named REGION based on the state code. I have
> already done a mapping:
>
> NorthEast <- c(07, 20, 22, 30, 31, 33, 39, 41, 47)
> MidWest <- c(14, 15, 16, 17
Stathis:
1. This has nothing to do with R. Post on a statistics list, like
stats.stackexchange.com
2. Read a basic regression/linear models text. You need to educate yourself.
-- Bert
On Thu, Aug 8, 2013 at 3:43 AM, Stathis Kamperis wrote:
> Hi everyone,
>
> I have a response variable 'y' and
Dear Stathis,
I recommend that you try to get some advice from a local statistician or read
an introductory book on statistics. This kind of question is beyond the scope
of a mailing list.
Best regards,
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature
Hi everyone,
I have a response variable 'y' and several predictor variables 'x_i'.
I start with a linear model:
m1 <- lm(y ~ x1); summary(m1)
and I get a statistically significant estimate for 'x1'. Then, I
modify my model as:
m2 <- lm(y ~ x1 + x2); summary(m2)
At this moment, the estimate for
Dear List,
I am looking to reveal the combination of environmental factors that best
explains the observed variance in a univariate time series of a population.
I have approached this using two methods and got different results,
therefore I was hoping somebody may have done something simila
Hi all,
I have a dataframe of users which contain US-state codes.
Now I want to add a column named REGION based on the state code. I have
already done a mapping:
NorthEast <- c(07, 20, 22, 30, 31, 33, 39, 41, 47)
MidWest <- c(14, 15, 16, 17, 23, 24, 26, 28, 35, 36, 43, 52)
South <- c(01, 04, 08,
Dear useRs,
My colleague (cc'd) and I have recently released a new package on CRAN for
computing various migration indices like the Crude Migration Rate, the
Effectiveness and Connectivity Index, different Gini indices or the
Coefficient of Variation.
I hope that some of you dealing with migrati
Dear Tal:
Thank you for your help.
That's what I ran:
install.packages("corpcor")
require(corpcor)
correlations=cor(mydata)
pcorrrel = cor2pcor(correlations); pcorrrel
2013/8/7 Tal Galili
> A short self contained code would help us help you.
>
> You can try using "str" on the output of the
I found the solution.
coeftest does not work with na.exclude but with na.omit only, i.e. one
needs to omit missing values from the residual matrix.
Cheers!
On Thu, Aug 8, 2013 at 10:19 AM, ivan wrote:
> Dear R Community,
>
> I am trying to build a very simple function which uses lm and coefte
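A hedged sketch of the kind of wrapper discussed in this thread, using na.omit as noted in the fix above (the function name and the HC type are assumptions):

library(lmtest)    # coeftest()
library(sandwich)  # vcovHC() robust covariance estimators
reg <- function(formula, data) {
  # fit with na.omit so no NA rows reach the sandwich estimator
  fit <- lm(formula, data = data, na.action = na.omit)
  coeftest(fit, vcov. = vcovHC(fit, type = "HC1"))
}
# usage: reg(y ~ x1 + x2, data = mydata)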
Hi,
In "Skew-t fits to mortality data - can a Gaussian-related distribution replace
the Gompertz-Makeham as the basis for mortality studies?" (J Gerontol A Biol
Sci Med Sci 2013 August;68(8):903-913; doi:10.1093/Gerona/gls239), Clark et al.
compare the fit of several distributions to mortality
Without example data it is difficult to give suggestions on how you
might read this file.
Are you sure your file is fixed width? Sometimes columns are neatly
aligned using whitespace (tabs/spaces). In that case you could use
read.table with the default settings.
Another possibility migh
Dear Sir,
Thanks for your response. Here, I was using 'n' to denote the input size
(the number of points in the time series from which I am building a seasonal ARIMA
model). I can check the running time myself and I have done that as well
(it takes some 1-2 minutes for 50 iterations for my input size), bu
Dear R Users,
I attempt to estimate a generalized nonlinear least squares model using
gnls() from the nlme library.
I wish to restrict some of my parameters using the "L-BFGS-B" method for the
"optim" optimizer. However, in contrast to nls() in the same package, gnls()
does not accept any `lower'
On 08/08/13 20:27, Jim Lemon wrote:
On 08/08/2013 04:23 PM, Kevin Parent wrote:
Well that almost works, and I didn't know about duplicated() so
thanks for that. However, it only gives me the duplicated values. I
need the original ones too. So the result I want is:
[g,g,m,m,s,s,t,t,u,u,u,v,v,x,
On 08/08/2013 06:52 PM, Berend Hasselman wrote:
On 08-08-2013, at 10:27, Jim Lemon wrote:
On 08/08/2013 04:23 PM, Kevin Parent wrote:
Well that almost works, and I didn't know about duplicated() so thanks for
that. However, it only gives me the duplicated values. I need the original ones
t
Hello R helpers,
I'm struggling with how to apply the integrate function to a data frame. Here is
an example of what I'm trying to do:
# Create data frame
x <- 0:4
tx <- 10:14
T <- 12:16
data <- data.frame(x=x, tx=tx, T=T)
# Parameter
alpha <- 10
beta <- 11
# Integral
integrand <- function(y){
(y+
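The integrand above is cut off, so here is a hedged sketch with a made-up integrand, only to show integrate() applied row by row over the data frame:

integrand <- function(y, alpha, beta) (y + alpha)^(-beta)   # placeholder integrand
data$int <- mapply(function(lower, upper)
                     integrate(integrand, lower = lower, upper = upper,
                               alpha = alpha, beta = beta)$value,
                   lower = data$tx, upper = data$T)
data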
On 08-08-2013, at 10:27, Jim Lemon wrote:
> On 08/08/2013 04:23 PM, Kevin Parent wrote:
>> Well that almost works, and I didn't know about duplicated() so thanks for
>> that. However, it only gives me the duplicated values. I need the original
>> ones too. So the result I want is: [g,g,m,m,s,s
On 08/08/2013 09:07, Pancho Mulongeni wrote:
Hi! I just installed the latest R 3.0.1.
I then wanted to update my packages.
I believe the advice given is to take the library folder from the old R version
and copy it on top of (overwrite) the library folder of the new R version, in
my case the lib
On 08/08/2013 04:23 PM, Kevin Parent wrote:
Well that almost works, and I didn't know about duplicated() so thanks for
that. However, it only gives me the duplicated values. I need the original ones
too. So the result I want is: [g,g,m,m,s,s,t,t,u,u,u,v,v,x,x,y,y,y]. What
duplicated() gives me
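A small sketch of one way to keep all occurrences of the repeated values, using a toy vector in place of Kevin's data:

x <- c("g","g","m","m","q","s","s","t","t","u","u","u","v","v","w","x","x","y","y","y")
x[x %in% x[duplicated(x)]]           # keeps duplicates and their first occurrences
# or equivalently
x[duplicated(x) | duplicated(x, fromLast = TRUE)]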
Dear R Community,
I am trying to build a very simple function which uses lm and coeftest to
return a coefficient matrix with heteroskedasticity robust standard errors.
The function is the following:
reg = function(formula, data, na.action){
  res = lm(formula = formula, data = data, na.action = na.action)
h
Hi! I just installed the latest R 3.0.1.
I then wanted to update my packages.
I believe the advice given is to take the library folder from the old R version
and copy it on top of (overwrite) the library folder of the new R version, in
my case the library of R 2.15.2 to the library of R 3.0.1.
When I
Dear Jan
Many thanks for your help. In fact, all lines are shorter than my column
width...
my.column.widths: 238
range(nchar(lines)):235 237
So, it seems I have an inconsistent file structure...
I guess there is no way to handle this in an automated way?
Best Regards
Christian Kameni
Please solve this question and send me the commands.
This is a question about diallel analysis.
Thanks.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-gui
On 08/08/2013 05:08, Mohit Dhingra wrote:
Dear All,
I am using a seasonal ARIMA model for predicting cloud workloads. I want to
know the running-time complexity of building the model with the algorithm
implemented in R (I am not sure, is it Yule-Walker?). I want to know if it
It is not Yule-Walker (