You didn't say how you want these variables to be distributed, but in
case you want a multivariate normal, then have a look at function
mvrnorm() from package MASS, and especially at the 'empirical' argument,
e.g.,
library(MASS)
# assumed covariance matrix
V <- cbind(c(2, 1), c(1, 1.2))
V
x1
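The code in this message is cut off by the digest; a minimal sketch of how the call might continue (the sample size of 100 and the zero means are assumptions, not part of the original post):

```r
library(MASS)

# assumed covariance matrix, as in the message above
V <- cbind(c(2, 1), c(1, 1.2))

# draw 100 observations; empirical = TRUE forces the *sample*
# mean and covariance to match mu and Sigma exactly
x <- mvrnorm(n = 100, mu = c(0, 0), Sigma = V, empirical = TRUE)

round(cov(x), 3)  # reproduces V (up to rounding)
```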
Thanks to you all,
they are very useful and I am learning a lot.
Best,
Francesca
On 27 January 2013 19:20, arun wrote:
>
>
> Hi,
>
> You could use library(plyr) as well
> library(plyr)
> pnew<-colSums(aaply(laply(split(as.data.frame(p),((1:nrow(as.data.frame(p))-1)%/%
> 25)+1),as.matrix),c(2,3),
First you must learn to be more specific in your description of what you want.
That will allow others to understand what you actually want rather than
guessing.
Perhaps try creating a small (3 var by 10 values) example, and describe the
actual correlations you have created.
If your problem is
The C program takes 2 mzML files, from which the binary strings (holding
the X and Y data) are uncompressed/decoded; it then examines spectral
(xy data) similarity and combines both datasets into a new one, and finally,
after all similar spectra have been merged, it writes it all back into 1 new
What about this then:
list_of_datasets <- lapply(file_names, read.table, other_args_to_read.table)
Something that might then be useful is:
names(list_of_datasets) <- file_names
Does it do it now?
Ivan
--
Ivan CALANDRA
Université de Bourgogne
UMR CNRS/uB 6282 Biogéosciences
6 Boulevard Gabriel
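Ivan's two lines can be sketched end to end; the two temporary files here stand in for the poster's real data files (their names and contents are assumptions):

```r
# two small files stand in for the real data sets
file_names <- c(tempfile(fileext = ".txt"), tempfile(fileext = ".txt"))
for (f in file_names) write.table(data.frame(x = 1:3), f, row.names = FALSE)

# read every file into one list, then name the elements after the files
list_of_datasets <- lapply(file_names, read.table, header = TRUE)
names(list_of_datasets) <- file_names

length(list_of_datasets)  # 2
```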
Dear Alma,
either there is a whole lot of miscommunication here, or you (and your
supervisor) are in way over your head.
You say that you are working with Cohen's d values. And you mentioned CMA. So,
let me ask you some questions:
1) Has CMA computed those d values for you?
2) If yes, what info
Dear Dr Harrell,
About the mean probabilities, I was referring to the ones computed with the
command predict(...,type="mean").
I tried to set the binwidth in SAS to 0.0001 as you suggested.
After having negated the predictors, I found a C index of 0.968, which is
exactly the same as rcorr.cen
On 26.12.2012 23:28, xiaodao wrote:
> I have problems with very large numbers using knitr. In the following, my a
> and b are extremely small and ssrr and ssru are extremely large.
>
> \documentclass{article}
> \begin{document}
>
> <<>>=
> ## numbers >= 10^5 will be denoted in scientific notation
Hello List,
while dealing with a question from 'xiaodao' (
http://r.789695.n4.nabble.com/Problem-with-large-small-numbers-in-knitr-tp4653986.html)
I noticed that copying the default inline hook function obtained by
knit_hooks$get("inline")
into a knit_hooks$set(inline = <...>) call turns off exponent
Hi,
I would like to replicate a sort of Monte Carlo experiment:
I have to generate a random variable N(0,1) with 100 observations, but I have
to contaminate these values at certain points in order to obtain different
vectors with different samples:
tab<-function(N,n,E,L){
for(i in 1:100){
X1<-rnorm(
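The function is truncated here; a hedged sketch of one common way to contaminate a N(0,1) sample (the contamination fraction and the shifted mean are assumptions, chosen only for illustration):

```r
set.seed(1)

n   <- 100   # sample size
eps <- 0.10  # assumed contamination fraction
mu2 <- 5     # assumed mean of the contaminating distribution

x   <- rnorm(n)                           # clean N(0, 1) sample
idx <- sample(n, size = eps * n)          # positions to contaminate
x[idx] <- rnorm(length(idx), mean = mu2)  # replace with N(5, 1) draws
```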
Dear Wolfgang,
Thank you very much for answering!
1) No, I am doing a meta-analysis and take the
ds from existing studies. To be exact, I use D (means of the
D-measure, which is not exactly d, but very similar)
3) Yes, CMA makes a new column where the st. error and variance differ from 1,
for
exam
jholtman: I do not understand your question.
--
View this message in context:
http://r.789695.n4.nabble.com/how-to-extract-values-from-a-raster-according-to-Lat-and-long-of-the-values-tp4656767p4656847.html
Sent from the R help mailing list archive at Nabble.com.
__
extract() will extract values if you provide the x, y, but then how do you know
which lat and long correspond to which x and y?
Irucka Embry mail2world.com> writes:
>
> Hi all, I have a set of 54 files that I need to convert from ASCII grid
> format to .shp files to .bnd files for BayesX.
>
> I have the following R code to operate on those files:
>
> library(maptools)
> library(Grid2Polygons)
> library(BayesX)
> librar
On Sun, 27 Jan 2013, Kay Cichini wrote:
That said,
wilcox_test(x ~ factor(y), distribution = "exact")
or the same with oneway_test, i.e., would be ok?
Yep, exactly.
And you could also look at chisq_test(factor(x > 0) ~ factor(y),
distribution = approximate()) or something like that. Or
Yes! This does the trick. Thank you!
Tim
>>> peter dalgaard 1/26/2013 11:49 AM >>>
On Jan 26, 2013, at 16:32 , Tim Howard wrote:
> Duncan,
> Good point - I guess I am expecting too much. I'll work on a global replace
> before import or chopping with strsplit while importing.
>
> FYI every
guRus and useRs,
FI 1.0 is a new submission available on CRAN implementing common forest
inventory volume equations and factors (form and stacking).
The package is well documented and also includes a sample dataset to get started.
Development and bug tracking happen at http://github.com/dvdscripter/FI
Regards,
Davi
Dear R users,
Is the graspeR package currently operative? If so, would anyone be able to tell me
how I can access the website given by the author? (http://www.cscf.ch/grasp)
Thanks a lot to all for your understanding,
Regards,
Xavier Benito Granell
PhD student
IRTA- Ecosistemes Aquàtics
Ctra. de Po
Dear R users,
we have a problem when building R packages which depend on platform-specific
packages. The following example will illustrate our problem:
For parallel computing (in our own package) we want to use the multicore
package. Since multicore is not available for Windows we substitute it by
Is there any package that allows you to produce an "MA plot"-like graph
for Toray microarray data?
Unlike an Affymetrix CEL file, which contains 2 values (R and G),
Toray raw data only contain 1 value.
The MA plot is Affymetrix-specific and is usually available in the (limma)
package.
P. Dubois
[
On 28.01.2013 12:06, mary wrote:
> Hi,
>
> I would like to replicate a sort of Monte Carlo experiment:
>
> I have to generate a random variable N(0,1) with 100 observations, but I have
> to contaminate these values at certain points in order to obtain different
> vectors with different samples:
Hi,
What is the exact formula used in R's lm() for the adjusted R-squared? How can I
interpret it?
There seem to exist several formulas for calculating the adjusted R-squared:
Wherry's formula: 1-(1-R2)·(n-1)/(n-v)
McNemar's formula: 1-(1-R2)·(n-1)/(n-v-1)
Lord's formula: 1-(1-R2)·(n+v-1)/(n-v-1)
Stein
Hi Nicole,
One nice thing about R is that it is often easy to see the code for
many functions. For summary.lm just type the name at the command
prompt (no brackets) to see the function definition. There you will
find
ans$adj.r.squared <- 1 - (1 - ans$r.squared) * ((n - df.int)/rdf)
B
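The formula quoted from summary.lm can be checked directly against a fitted model; a small sketch (the simulated data are an assumption, used only to have something to fit):

```r
set.seed(42)
d <- data.frame(x1 = rnorm(30), x2 = rnorm(30))
d$y <- 1 + 2 * d$x1 - d$x2 + rnorm(30)

fit <- lm(y ~ x1 + x2, data = d)
s   <- summary(fit)

n <- nrow(d)  # observations
v <- 2        # predictors, excluding the intercept

# same as ans$adj.r.squared in summary.lm, since df.int = 1 and rdf = n - v - 1
adj <- 1 - (1 - s$r.squared) * (n - 1) / (n - v - 1)
all.equal(adj, s$adj.r.squared)  # TRUE
```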
I think this is only a legacy question, right? In recent R, you
can/should use "parallel" instead of either multicore or snowfall.
That said, no answer if you want backward compatibility with older versions of R.
MW
On Mon, Jan 28, 2013 at 10:43 AM, Florian Schwaiger
wrote:
> Dear R users,
>
> we h
Wrong list! You want Bioconductor.
-- Bert
On Mon, Jan 28, 2013 at 2:42 AM, Peverall Dubois
wrote:
> Is there any package that allow you to perform "MA plot" like graph
> for Toray microarray data?
>
>
> Unlike Affymetrix CEL file which contain 2 values (R and G),
> Torray raw data only contain
Hi,
temp3<- read.table(text="
ID CTIME WEIGHT
HM001 1223 24.0
HM001 1224 25.2
HM001 1225 23.1
HM001 1226 NA
HM001 1227 32.1
HM001 1228 32.4
HM001 1229 1323.2
HM001 1230 27.4
HM001 1231 22.4236 #changed here to test the previous solution
",sep="",header=TRUE,stringsAsFactors=FALSE)
tempnew<- na.o
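The reply is cut off at `tempnew<- na.o`; presumably the intended call is na.omit, which drops the NA row. A small self-contained sketch (the three-row data frame here is a shortened version of the one above):

```r
temp3 <- read.table(text = "
ID CTIME WEIGHT
HM001 1223 24.0
HM001 1226 NA
HM001 1227 32.1
", header = TRUE, stringsAsFactors = FALSE)

# drop every row containing an NA
tempnew <- na.omit(temp3)
nrow(tempnew)  # 2
```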
For lrm fits, predict(fit, type='mean') predicts the mean Y, not a
probability.
Frank
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
Dear Ivan,
It works perfectly fine now. I love this code more since I need not delete
the NULL elements myself (and it should be faster, right?). Thank you very much
for your help!
cheers,
Ray
On Mon, Jan 28, 2013 at 5:32 PM, Ivan Calandra wrote:
> What about this then:
> list_of_datasets <- lappl
This code is indeed much shorter. About the speed, I guess it should be
faster, but you should test it with system.time()
I'm glad that it helped.
Ivan
--
Ivan CALANDRA
Université de Bourgogne
UMR CNRS/uB 6282 Biogéosciences
6 Boulevard Gabriel
21000 Dijon, FRANCE
+33(0)3.80.39.63.06
ivan.cala
Instead of saving a string that can be parsed into a language object (and
later evaluated), I would just save something that could be evaluated
directly. Note that a literal like "MyFile" or 3.14 evaluates to itself, so you
can save a literal string or a language object and use eval() on either
Dear All,
I would like to use a randomForest algorithm on a dataset.
The set is not particularly large/difficult to handle, but it has some
missing values (both factors and numerical values).
According to what I found
https://stat.ethz.ch/pipermail/r-help/2005-September/078880.html
https://stat.et
I said
> you can attach an attribute called ".Environment" to your object
> that environment(object) will retrieve
You can also use
environment(object) <- envir
instead of
object <- structure(object, .Environment = envir)
to set the ".Environment" attribute to envir. This makes the code
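A tiny sketch of the equivalence described above (the object and environment names are arbitrary):

```r
e <- new.env()
x <- 1:3

# replacement form: attaches e as the ".Environment" attribute of x
environment(x) <- e

# same effect as: x <- structure(1:3, .Environment = e)
identical(attr(x, ".Environment"), e)  # TRUE
```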
THE adjusted R^2 is [1-(1-R2)·(n-1)/(n-v-1)], which you call McNemar's
formula. It was actually proposed first by Fisher in 1924. Theil's
formula is equal to Fisher's.
Wherry's formula, as you give it, is correct but was proposed to estimate
the cross-validated R2, which is different from
This is a little bit hard to explain because there are two levels of
default hooks (the system default and the document-format-specific
default). The best way to explain it is probably the source code:
https://github.com/yihui/knitr/blob/master/R/output.R#L183-L189
In short, the "default" hooks yo
Hi all,
I have been looking for a means of adding a contour around some points in a
scatterplot as a means of representing the center of density of the
data. I'm imagining something like a 95% confidence estimate drawn around
the data.
So far I have found some code for drawing polygons around the
Hi Nathan,
This only fits some of your criteria, but have you looked at ?stat_density2d?
Best,
Ista
On Mon, Jan 28, 2013 at 12:53 PM, Nathan Miller wrote:
> Hi all,
>
> I have been looking for means of add a contour around some points in a
> scatterplot as a means of representing the center of
Hi all,
Diverging from my research-based number crunching, I am interested to see
what, if any R packages are out there that can access daily market
values of investment funds (e.g. using
http://quote.morningstar.com/fund/f.aspx?t=PDMIX) then store the data
value (i.e. NAV =$11.56 ) with date
 Dear,
I would like to use the "pROC" package for my study, but I could not
load it in "R". Could you please help me to overcome this problem?
This is the message when I write "Library (pROC)" :
Loading required package: plyr
Type 'citation("pROC")' for a citation.
Attacheme
And what exactly is the problem? Your code produced no errors (or if
it did you have not shown them to us...)
Best,
Ista
On Mon, Jan 28, 2013 at 9:44 AM, Fethi BEZOUBIRI wrote:
>
>
> Dear,
>
> I would like to use "pROC" software for my study, but I could not
> uploaded it in "R". Could you ple
Many thanks - this was very helpful!
Regards, Kay
Am 28.01.2013 13:19 schrieb "Achim Zeileis" :
> On Sun, 27 Jan 2013, Kay Cichini wrote:
>
> That said,
>>
>> wilcox_test(x ~ factor(y), distribution = "exact")
>>>
>>
>> or the same with oneway_test, i.e would be ok?
>>
>
> Yep, exactly.
>
> And
I'd look into the quantmod package.
Cheers,
MW
On Mon, Jan 28, 2013 at 1:52 PM, Bruce Miller wrote:
> Hi all,
>
> Diverging from my research based number crunching I am interested to see
> what, if any R packages are out there that can access daily market values of
> investment funds (e.g. using
Thanks Ista,
I have played a bit with stat_density2d as well. It doesn't completely
capture what I am looking for and ends up being quite busy at the same
time. I'm looking for a way of helping those looking at the figure to see
the broad patterns of where in the x/y space the data from differen
Hi,
after all, brute force seems the way to go.
I will use a simplified example to illustrate what I want (dump of dat4 is
below):
suppose dat4:
ID rrt  Mnd Result
1  0.45 0   0.1
1  0.48 0   0.3
1  1.24 0   0.5
2  0.45 3   0.2
2  0.48 3   0.6
2  1.22 3   0.4
I want to ge
Nicole Janz gmail.com> writes:
>
> What is the exact formula used in R lm() for the Adjusted R-squared? How can I
interpret it?
From the code of summary.lm():
ans$r.squared <- mss/(mss + rss)
ans$adj.r.squared <- 1 - (1 - ans$r.squared) * ((n - df.int)/rdf)
Does th
Hi Nate,
You can make it less busy using the bins argument. This is not
documented, except in the examples to stat_contour, but try
ggplot(data=data, aes(x, y, colour=(factor(level)), fill=level))+
geom_point()+
stat_density2d(bins=2)
HTH,
Ista
On Mon, Jan 28, 2013 at 2:43 PM, N
Hi Ista,
Thanks. That does look pretty nice and I hadn't realized that was possible.
Do you know how to extract information regarding those curves? I'd like to
be able to report something about what portion of the data they encompass
or really any other feature about them in a figure legend. I'll
Hi Nate,
I infer from the stat_density2d documentation that the calculation is
carried out by the kde2d function in the MASS package. Refer to ?kde2d
for details.
Best,
Ista
On Mon, Jan 28, 2013 at 3:56 PM, Nathan Miller wrote:
> Hi Ista,
>
> Thanks. That does look pretty nice and I hadn't real
Hi Simon,
Thanks for replying.
On further investigation, I can't reproduce this error on my local
machine -- it only occurs when sending to a cluster (to run the multiple
imputations in parallel) that I've got access to. I sent it to a friend's
web server, and I got the same sort of error (but
I believe that the value of "radius" that you are using is incorrect.
If you have a data matrix X whose columns are jointly distributed N(mu, Sigma),
then a confidence ellipse for mu is determined by
n * (x - Xbar)' S^{-1} (x - Xbar) ~ T^2
where Xbar is the mean vector for X and S is the s
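A hedged sketch of turning that relation into an ellipse radius, using the standard identity T^2_{p,n-1}(alpha) = p(n-1)/(n-p) * F_{p,n-p}(alpha); the 95% level and the simulated data are assumptions:

```r
set.seed(7)
X <- MASS::mvrnorm(50, mu = c(0, 0), Sigma = diag(2))

n    <- nrow(X); p <- ncol(X)
xbar <- colMeans(X)
S    <- cov(X)

# Hotelling T^2 critical value: p (n - 1) / (n - p) * F_{p, n-p}(0.95)
T2 <- p * (n - 1) / (n - p) * qf(0.95, p, n - p)

# squared Mahalanobis radius of the 95% confidence ellipse for mu,
# from n (mu - xbar)' S^{-1} (mu - xbar) <= T^2
r2 <- T2 / n

# check whether the true mean c(0, 0) falls inside the ellipse
d2 <- mahalanobis(rbind(c(0, 0)), center = xbar, cov = S)
d2 <= r2
```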
I have a data that looks like this:
mRNA Value
---
mRNA1 30
mRNA2 199
...... ...
mRNA1000 13
Then I'll normalize the value based on the s
Should I take it that this message was received?
Thanks
- Forwarded Message -
From: carol white
To: "r-h...@stat.math.ethz.ch"
Sent: Sunday, January 27, 2013 8:31 PM
Subject: rpart
Hi,
When I look at the summary of an rpart object run on my data, I get 7 nodes but
when I plot t
On Jan 28, 2013, at 9:06 PM, carol white wrote:
Should I understand that this message was received?
It's always possible to check the Archives for this question.
--
David.
Thanks
- Forwarded Message -
From: carol white
To: "r-h...@stat.math.ethz.ch"
Sent: Sunday, January 27,
I am a relatively new user to R, and I am trying to learn more about
converting data in an XML document into "2-dimensional format" such as a
table or array. I might eventually wish to export this data into a
relational database such as SQL, and/or to work with this data within R.
My
Here is my problem,
100 decision trees were built (similar to a random forest) and I want to
replace some of them with new trees.
How can I define a tree array holding 100 trees, i.e. t[100], where every
t[n] is a "C5.0" object,
such that
when a new tree comes, I can do
n <- 10
t[n] <- C5.0(...)
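In R, a list (not an array) is the natural container for 100 model objects; a sketch of the pattern, with lm() standing in for C5.0() (an assumption, since C5.0 lives in the C50 package and may not be installed):

```r
# a list can hold 100 arbitrary model objects
trees <- vector("list", 100)

# fill with models; lm() stands in for C5.0() here
for (i in 1:100) {
  d <- data.frame(x = rnorm(20))
  d$y <- 2 * d$x + rnorm(20)
  trees[[i]] <- lm(y ~ x, data = d)
}

# replace tree n with a newly grown one, as in the question
n <- 10
trees[[n]] <- lm(y ~ x, data = d)

length(trees)  # 100
```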
Hi all,
I'm new on this list, so I greet you all.
My question is: does there exist in R a plot similar to a candlestick plot but
not based on xts (time series)? I have to plot a range of 4 values: for
every item I have a min value, a max value and 2 intermediate values. I
would like to plot these like a candlestick, i.e
Hi, I am having some problems with gigFit and would like confirmation on
other platforms; mine is Mint, basically Debian.
Although I got a good fit for the density function with the GIG equation
in another curve fitting program, I would really like to use R's tools
for confidence intervals and m
HI,
I don't have the Amelia package installed.
If you want to get the mean value, you could use either ?aggregate() or
?ddply() from library(plyr)
library(plyr)
imputNew<-do.call(rbind,imput1_2_3)
res1<-ddply(imputNew,.(ID,CTIME),function(x) mean(x$WEIGHT))
names(res1)[3]<-"WEIGHT"
head(res
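Since the code above is cut off, here is a base-R sketch of the same group-wise mean using aggregate(), with a tiny made-up data frame standing in for the combined imputations:

```r
imputNew <- data.frame(
  ID     = rep("HM001", 6),
  CTIME  = rep(1223:1225, 2),
  WEIGHT = c(24.0, 25.2, 23.1, 24.4, 25.0, 23.5)
)

# mean WEIGHT per (ID, CTIME); same result as the ddply() call above
res1 <- aggregate(WEIGHT ~ ID + CTIME, data = imputNew, FUN = mean)
res1
```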
Hi,
I have an experiment where I measured the dependent variable (called
Intention) twice. So, I have IntentionPre and IntentionPost.
I am working with an anova for an independent variable called
NumberOfModules.
The problem is that if I run the aov(intentionpre~NumberOfModules) I have that
Hi,
I think I understand your mistake.
imput1_2_3<-list(imp1=structure(list(ID = c("HM001", "HM001", "HM001", "HM001",
"HM001",
"HM001", "HM001", "HM001", "HM001", "HM001", "HM001", "HM001",
"HM001", "HM001", "HM001"), CTIME = 1223:1237, WEIGHT = c(24.9,
25.2, 25.5, 25.24132, 25.7, 27.1, 27.3, 27
Dears,
Unfortunately, the packages relaimpo and relimp do not seem to work with
the plm function (plm package) or the gls function (in the nlme package). I've been
studying how to adapt one of them for this purpose. In that sense, I
have two questions regarding this work:
1) has anyone heard of any wo
On 01/28/2013 09:42 PM, Peverall Dubois wrote:
Is there any package that allow you to perform "MA plot" like graph
for Toray microarray data?
Unlike Affymetrix CEL file which contain 2 values (R and G),
Torray raw data only contain 1 value.
MA-plot is Affymetrix specific which usually availabl
Hi,
I have a data set as follow:
X Z
x1 102
x2 102
x2 102
x2 77
x3 23
I need to pivot this data as follows, assigning values based on the frequency
of column Z:
X  Z.102 Z.77 Z.23
x1 1     0    0
x2 2     1    0
x3 0     0    1
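A base-R sketch of that pivot using table(), which counts each Z value within each X (column order follows the sorted Z levels rather than the layout above):

```r
df <- data.frame(
  X = c("x1", "x2", "x2", "x2", "x3"),
  Z = c(102, 102, 102, 77, 23)
)

# frequency of each Z value within each X: a contingency table
tab <- table(df$X, df$Z)
tab
#      23 77 102
#   x1  0  0   1
#   x2  0  1   2
#   x3  1  0   0
```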
Hi, this is probably a small query, but one I'm struggling with: I have a list
in which I had elements which were NA. I removed them by doing: list2 <-
lapply(list, na.omit).
However, this leaves the element there with 'character(0)' in place, as well as
attributes:
e.g.
[[978]]
character(0)
at
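One way to drop those zero-length leftovers is Filter(); a sketch (the example list is made up to mirror the question):

```r
lst <- list(a = c("x", NA), b = NA_character_, c = "y")

# na.omit leaves zero-length character(0) entries (with attributes)
lst2 <- lapply(lst, na.omit)

# keep only elements that still contain something
lst3 <- Filter(length, lst2)

# optionally drop the na.action attributes left behind by na.omit
lst3 <- lapply(lst3, as.vector)
lst3
```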
HI,
How do you want to combine the results?
It looks like the 5 datasets are list elements.
If I take the first three list elements,
imput1_2_3<-list(imp1=structure(list(ID = c("HM001", "HM001", "HM001", "HM001",
"HM001",
"HM001", "HM001", "HM001", "HM001", "HM001", "HM001", "HM001",
"HM001",
Dear Contributors,
I am back asking for help concerning the same type of dataset I was asking
about before, in a previous help request.
I needed to sum data over subsamples of three time series, each of them made
of 100 observations. The solutions proposed
were various, among which:
db<-p
dim( db ) <- c(