Hi Daniel,
thanks for your positive response about using compiled ODE functions.
Did you use the package deSolve?
Regarding the use of splines (as forcing functions?), we don't have an
out-of-the-box method yet, but you may consider approximating the
splines by linear segments or contribute you
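A minimal sketch (not from this thread) of the linear-segment idea: approxfun()
builds a piecewise-linear forcing signal that a plain-R model function can call;
the signal values and rate constant below are made up for illustration.
library(deSolve)
sig <- approxfun(x = c(0, 5, 10), y = c(0, 1, 0), rule = 2)  # piecewise-linear forcing
model <- function(t, y, parms) {
  input <- sig(t)                      # forcing value at time t
  list(-parms["k"] * y + input)
}
out <- ode(y = c(y = 1), times = seq(0, 10, by = 0.1), func = model, parms = c(k = 0.5))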
Perhaps use tryCatch...
---
Jeff Newmiller, Research Engineer (Solar/Batteries/Software/Embedded Controllers)
Well, I could not call them "entirely different". See attached (tree-tv
from TreeView, tree-r from R).
Yes, I had to rotate and mirror the tree in TreeView, but that's all.
And yes, I have to ignore the tree length values from the file.
Maybe it is better to post your inquiry to the Bioconductor list.
Hi Yusuke,
Does the following get what you are after?
### Make some test data.
set.seed(123)
edf <- data.frame(sex = c(rep("Male", 10), rep("Female", 10), rep("Unknown", 10)),
                  head_length = c(1.2 * c(170:179 + rnorm(10)),
                                  0.8 * c(150:159 + rnorm(10)),
                                  c(160:169 + rnorm(10))))
I had an identical problem building in R 2.12 with the latest (last week's)
Rtools.
Interestingly, I found that if I used a DOS path in rcmd build (e.g. rcmd build
0.9\pkg) I got a .tar, but if I replaced it with the unix-like path, as in
rcmd build 0.9/pkg
I got a .tar.gz.
Weird but workable.
S El
On 05/04/11 13:14, David Winsemius wrote:
On Apr 4, 2011, at 9:03 PM, David Winsemius wrote:
On Apr 4, 2011, at 8:42 PM, David Scott wrote:
On 05/04/11 05:58, David Winsemius wrote:
On Apr 4, 2011, at 1:27 PM, Marius Hofert wrote:
Dear David,
do you know how to get plotmath-like symbols
Awesome! Thanks, David and Dennis! And now I know how to search for
packages more effectively.
Tom
On Mon, Apr 4, 2011 at 9:38 PM, Dennis Murphy wrote:
> Start here:
>
> library(sos) # install first if necessary
> findFn('sample size survey')
>
> I got 238 hits, many of which could be relevant.
I am new to R. I want to draw a grid from a CSV file which contains latitude
minimum, latitude maximum, longitude minimum, and longitude maximum. The grid
should be divided into exactly 4 quadrants. The map is of NY state in the USA.
I want to know how I can do it.
Help would be appreciated.
Thanks
Jaimin
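A minimal sketch of one way to do this (assumptions: the 'maps' package is
acceptable, the CSV has columns lat_min, lat_max, lon_min, lon_max, and
"bounds.csv" is a hypothetical file name):
library(maps)
bounds <- read.csv("bounds.csv")
map("state", regions = "new york")
rect(bounds$lon_min, bounds$lat_min, bounds$lon_max, bounds$lat_max, border = "red")
# one vertical and one horizontal line split the bounding box into 4 quadrants
abline(v = (bounds$lon_min + bounds$lon_max) / 2, lty = 2)
abline(h = (bounds$lat_min + bounds$lat_max) / 2, lty = 2)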
Start here:
library(sos) # install first if necessary
findFn('sample size survey')
I got 238 hits, many of which could be relevant.
HTH,
Dennis
On Mon, Apr 4, 2011 at 6:05 PM, Thomas Levine wrote:
> Hi,
>
> Is there an R package for estimating sample size requirements for
> parameter esti
Thank you for your suggestions, stats experts. Much appreciated.
I still haven't got what I wanted, but someone suggested looking into contrasts,
and this looks worth trying:
http://finzi.psych.upenn.edu/R/library/gmodels/html/fit.contrast.html
Regards,
Yusuke
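A minimal sketch of fit.contrast() from gmodels, with made-up data; the contrast
below tests group A against the average of groups B and C:
library(gmodels)
d <- data.frame(g = factor(rep(c("A", "B", "C"), each = 10)), y = rnorm(30))
fit <- lm(y ~ g, data = d)
fit.contrast(fit, "g", c(1, -0.5, -0.5))  # estimate, SE, t and p for the contrast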
Thank you, Hadley. With your solution, it now feels very easy!
_
From: h.wick...@gmail.com [mailto:h.wick...@gmail.com] On Behalf Of Hadley
Wickham
Sent: Monday, April 04, 2011 6:11 PM
To: Umesh Rosyara
Cc: Dennis Murphy; r-help@r-project.org; rosyar...@gmail.com
Subject: Re: [R] merging
Hello R users,
I am dealing with some reasonably big data sets that I have split up
into lists based on various factors. In the code below, I have got my
code producing 100 values between point1x and point1y for the first
matrix in my list.
for (k in 1:length(point1x[[1]][, 1])) {
linex[[k]] = seq
On Apr 4, 2011, at 9:03 PM, David Winsemius wrote:
On Apr 4, 2011, at 8:42 PM, David Scott wrote:
On 05/04/11 05:58, David Winsemius wrote:
On Apr 4, 2011, at 1:27 PM, Marius Hofert wrote:
Dear David,
do you know how to get plotmath-like symbols in both rows?
I tried s.th. like:
lab<- e
Hi,
Is there an R package for estimating sample size requirements for
parameter estimation in sample surveys? In particular, I'm interested
in sample size estimation for stratified and systematic sampling. I
have a textbook with appropriate formulae, but it'd be nice if I
didn't have to type in al
On Apr 4, 2011, at 8:42 PM, David Scott wrote:
On 05/04/11 05:58, David Winsemius wrote:
On Apr 4, 2011, at 1:27 PM, Marius Hofert wrote:
Dear David,
do you know how to get plotmath-like symbols in both rows?
I tried s.th. like:
lab<- expression(paste(alpha==1, ", ", beta==2, sep=""))
xlab
On 05/04/11 05:58, David Winsemius wrote:
On Apr 4, 2011, at 1:27 PM, Marius Hofert wrote:
Dear David,
do you know how to get plotmath-like symbols in both rows?
I tried s.th. like:
lab<- expression(paste(alpha==1, ", ", beta==2, sep=""))
xlab<- substitute(expression( atop(lab==lab., bold(fo
Thanks!
You are awesome! I am not sure I follow everything, but I am trying!
AG
Thank you very much for all of your help.
On Mon, Apr 4, 2011 at 6:10 PM, Steven McKinney wrote:
>
>
>> -Original Message-
>> From: stephen sefick [mailto:ssef...@gmail.com]
>> Sent: April-04-11 2:49 PM
>> To: Steven McKinney
>> Subject: Re: [R] Linear Model with curve fitting parameter?
Dear list,
I've noticed that if a list or a vector is given a class (by class(x) <-
"something") then the "[" selection operator slows down - quite a bit. For example:
> lll <- as.list(letters)
> system.time({for(ii in 1:20) lll[-(1:4)]})
   user  system elapsed
   0.48    0.00    0.49
>
> class(
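A minimal sketch of the comparison (the iteration count and timings below are
illustrative, not the original poster's): unclass()-ing before subsetting avoids
the S3 dispatch overhead that the added class introduces.
lll <- as.list(letters)
class(lll) <- "myclass"
system.time(for (ii in 1:200000) lll[-(1:4)])           # goes through class dispatch
system.time(for (ii in 1:200000) unclass(lll)[-(1:4)])  # plain list subsetting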
> -Original Message-
> From: stephen sefick [mailto:ssef...@gmail.com]
> Sent: April-04-11 2:49 PM
> To: Steven McKinney
> Subject: Re: [R] Linear Model with curve fitting parameter?
>
> Steven:
>
> I am really sorry for my confusion. I hope this now makes sense.
>
> b0 == y intercept ==
Dear R users,
Let's say I have a list with components being 'm' matrices (as exemplified
in the "mylist" object below). Now, I'd like to subset this list based on an
index vector, which will partition each matrix 'm' into 2 sub-matrices. My
questions are:
1. Is there an elegant way to have the resul
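A minimal sketch with a hypothetical list and index vector: each matrix's rows
are split into two sub-matrices by a logical index.
mylist <- list(m1 = matrix(1:20, nrow = 5), m2 = matrix(21:40, nrow = 5))
idx <- c(TRUE, TRUE, FALSE, TRUE, FALSE)   # rows going into the first sub-matrix
lapply(mylist, function(m) list(part1 = m[idx, , drop = FALSE],
                                part2 = m[!idx, , drop = FALSE]))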
Hi!
I was wondering if PERT or CPM was implemented in R.
I looked in the search engines but didn't find anything.
Since there are so many packages, I thought I'd double check via the
discussion.
Thanks,
Laura.
Hi:
On Mon, Apr 4, 2011 at 1:33 PM, geral wrote:
> Thanks!
>
> I must confess I am just a beginner, but I followed your suggestion and did
> 'm <- lm(as.matrix(snp[, -1]) ~ lat, data = snp) ' and it worked
> perfectly.
> I would like to understand what is being done here. as.matrix I understand
On Apr 4, 2011, at 6:05 PM, Folkes, Michael wrote:
I'm using RODBC to read an excel file (not mine!). But I'm
struggling to find a way to preserve the column names that have a
numeric value. sqlFetch() drops the value and calls them f1, f2,
f3,... (ie field number). this is a different
> filelist = list.files(pattern = "K*cd.txt") # the file names are K1cd.txt
> .to K200cd.txt
It's very easy:
library(plyr)  # ldply() comes from plyr
names(filelist) <- basename(filelist)
data_list <- ldply(filelist, read.table, header = TRUE, comment.char = ";", fill = TRUE)
Hadley
--
Assistant Professor / Dobelman Family Junior Cha
I'm using RODBC to read an Excel file (not mine!). But I'm struggling to find
a way to preserve the column names that have a numeric value. sqlFetch() drops
the value and calls them f1, f2, f3, ... (i.e. the field number). This is a different
approach from read.csv, which will append "V" prior to th
On Mon, 4 Apr 2011, Andrew Yee wrote:
This has to do with using pipe() and grep and read.csv()
I have a .csv file that I grep using pipe() and read.csv() as follows:
read.csv(pipe('grep foo bar.csv'))
However, is there a way to have this command run when for example,
there is no "foo" text in
I am interested in comparing the fit of robust (i.e., S and MM) and
non-robust (i.e., OLS) estimators when applied to a particular data set.
The paper entitled "A comparison of robust versions of the AIC based on M, S
and MM-estimators" (available at:
http://ideas.repec.org/p/ner/leuven/urnhdl12345
This has to do with using pipe() and grep and read.csv()
I have a .csv file that I grep using pipe() and read.csv() as follows:
read.csv(pipe('grep foo bar.csv'))
However, is there a way to have this command run when for example,
there is no "foo" text in the bar.csv file? I get an error messag
Thanks, I had recently seen reference to caTools but had forgotten about it.
Much appreciated.
friedman.st...@gmail.com
517-648-6290
-Original message-
From: Gabor Grothendieck
To: Steve Friedman
Cc: r-help@r-project.org
Sent: Mon, Apr 4, 2011 13:17:36 GMT+00:00
Subject: Re: [R] movin
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Stavros Macrakis
> Sent: Monday, April 04, 2011 1:15 PM
> To: r-help
> Subject: [R] General binary search?
>
> Is there a generic binary search routine in a standard library whic
Thanks!
I must confess I am just a beginner, but I followed your suggestion and did
'm <- lm(as.matrix(snp[, -1]) ~ lat, data = snp) ' and it worked perfectly.
I would like to understand what is being done here. as.matrix I understand
makes my data frame be a matrix, but I don't understand the pa
Thank you Dennis for the solution. It is a step ahead. However, I need to
read all 200 files as data frames one by one. Can we automate this process?
I used the following step to read all files at once; however, the data_list
ended up as a list.
filelist = list.files(pattern = "K*cd.txt") # the file na
Is there a generic binary search routine in a standard library which
a) works for character vectors
b) runs in O(log(N)) time?
I'm aware of findInterval(x,vec), but it is restricted to numeric vectors.
I'm also aware of various hashing solutions (e.g. new.env(hash=TRUE) and
fastmatch), but
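A minimal sketch (a hypothetical helper, not a library routine) of an O(log N)
binary search over a sorted character vector:
bsearch <- function(x, vec) {            # vec must be sorted increasingly
  lo <- 1L; hi <- length(vec)
  while (lo <= hi) {
    mid <- (lo + hi) %/% 2L
    if (vec[mid] == x) return(mid)
    if (vec[mid] < x) lo <- mid + 1L else hi <- mid - 1L
  }
  NA_integer_                            # not found
}
bsearch("cherry", c("apple", "banana", "cherry", "date"))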
Thank you very much Gabor! It looks like that's gonna work
wonderfully. I didn't even know 'ave' existed.
For others out there: I only needed to add a comma: dat[,c("Site",
"Plot", "Sp")]
Small follow up Q: Is there any reason to use 'aggregate' vs. 'ave' in
general?
-mark
On 4/3/1
> -Original Message-
> From: stephen sefick [mailto:ssef...@gmail.com]
> Sent: April-03-11 5:35 PM
> To: Steven McKinney
> Cc: R help
> Subject: Re: [R] Linear Model with curve fitting parameter?
>
> Steven:
>
> You are exactly right sorry I was confused.
>
>
>
On Mon, Apr 4, 2011 at 3:40 PM, Mark Novak wrote:
> Thank you very much Gabor! It looks like that's gonna work wonderfully. I
> didn't even know 'ave' existed.
>
> For others out there: I only needed to add a comma: dat[,c("Site",
> "Plot", "Sp")]
Actually, if dd is a data frame dd[, ix] and
Hi:
Here's a small example:
df <- data.frame(y1 = rnorm(10), y2 = rnorm(10), y3 = rnorm(10), lat = rnorm(10))
m <- lm(cbind(y1, y2, y3) ~ lat, data = df)
summary(m)
The LHS of the model formula needs to be a matrix. In your case, something
like
m <- lm(as.matrix(snp[, -1]) ~ lat, data =
Hi:
Here's an alternative using ldply() from the plyr package. The idea is to
read the data frames into a list, name them accordingly and then call
ldply().
# Read in the test data frames (you want to use list.files() instead to
input the data per Uwe's guidelines)
df1 <- read.table(textConnectio
Try this:
merge(mydata, cbind(reference, group = rep(unique(mydata$group), each = nrow(reference))), all = TRUE)
On Mon, Apr 4, 2011 at 2:24 PM, Dimitri Liakhovitski
wrote:
> To clarify just in case, here is the result I am trying to get:
>
> mydate group values
> 12/29/2008 Group1 0.4
On Mon, Apr 4, 2011 at 1:09 PM, Dimitri Liakhovitski
wrote:
> Hello!
> I have my data frame "mydata" (below) and data frame "reference" -
> that contains all the dates I would like to be present in the final
> data frame.
> I am trying to merge them so that the result data frame contains
> all
Thank you for your response.
For the lavaan package, can I have more information about the example you
applied in section 7,
and the meanings of the variables (c1, c2, c3, c4, i, s, x1, x2)?
I think I need more information to learn how to apply a
growth model to my data (long
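For reference, a sketch of the growth-curve example from the lavaan tutorial
(using the Demo.growth data shipped with lavaan): i is the latent intercept,
s the latent slope, x1 and x2 are time-invariant covariates, and c1-c4 are
time-varying covariates measured at the four occasions t1-t4.
library(lavaan)
model <- '
  # latent intercept and slope with fixed loadings on the repeated measures
  i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4
  s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4
  # regressions on time-invariant covariates
  i ~ x1 + x2
  s ~ x1 + x2
  # time-varying covariates
  t1 ~ c1
  t2 ~ c2
  t3 ~ c3
  t4 ~ c4
'
fit <- growth(model, data = Demo.growth)
summary(fit)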
Dear All,
I have a large data frame with 10 rows and 82 columns. I want to apply the
same function to all of the columns with a single command. e.g. zl <- lm
(snp$a_109909 ~ snp$lat) will fit a linear model to the values in lat and
a_109909. What I want to do is fit linear models for the values i
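A minimal sketch of a column-by-column alternative (assuming the data frame is
called snp and contains a 'lat' column plus the response columns):
resp <- setdiff(names(snp), "lat")
fits <- lapply(resp, function(v) lm(reformulate("lat", response = v), data = snp))
names(fits) <- resp
lapply(fits, summary)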
Dear all,
many thanks, that helped a lot!
Cheers,
Marius
On 2011-04-04, at 19:58 , David Winsemius wrote:
>
> On Apr 4, 2011, at 1:27 PM, Marius Hofert wrote:
>
>> Dear David,
>>
>> do you know how to get plotmath-like symbols in both rows?
>> I tried s.th. like:
>>
>> lab <- expression(pa
Dear Community,
I am new to R and have a question concerning the causality() test in
the vars package. I need to test whether, say, the variable y Granger-causes
the variable x, given z as a control variable.
I estimated the VAR model as follows: model <- VAR(cbind(x, y, z), p = 2)
Then I did the fol
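A minimal sketch of the causality() call (assuming x, y, and z are already
defined time series); note that cause = "y" tests y against the remaining
variables jointly, not against x alone:
library(vars)
model <- VAR(cbind(x, y, z), p = 2)
causality(model, cause = "y")   # Granger (and instantaneous) causality tests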
Maybe:
xyplot(0 ~ 0, xlab = bquote(expression(atop(alpha == .(x) * "," ~ beta == .(y), bold(foo)))))
On Mon, Apr 4, 2011 at 2:58 PM, David Winsemius wrote:
>
> On Apr 4, 2011, at 1:27 PM, Marius Hofert wrote:
>
>> Dear David,
>>
>> do you know how to get plotmath-like symbols in both rows?
>> I tried
On 2011-04-04 10:27, Marius Hofert wrote:
Dear David,
do you know how to get plotmath-like symbols in both rows?
I tried s.th. like:
lab<- expression(paste(alpha==1, ", ", beta==2, sep=""))
xlab<- substitute(expression( atop(lab==lab., bold(foo)) ), list(lab.=lab))
xyplot(0 ~ 0, xlab = xlab)
On Apr 4, 2011, at 1:27 PM, Marius Hofert wrote:
Dear David,
do you know how to get plotmath-like symbols in both rows?
I tried s.th. like:
lab <- expression(paste(alpha==1, ", ", beta==2, sep=""))
xlab <- substitute(expression( atop(lab==lab., bold(foo)) ),
list(lab.=lab))
xyplot(0 ~ 0, x
Dear David,
do you know how to get plotmath-like symbols in both rows?
I tried s.th. like:
lab <- expression(paste(alpha==1, ", ", beta==2, sep=""))
xlab <- substitute(expression( atop(lab==lab., bold(foo)) ), list(lab.=lab))
xyplot(0 ~ 0, xlab = xlab)
Cheers,
Marius
On 2011-04-04, at 18:59 ,
To clarify just in case, here is the result I am trying to get:
mydate      group   values
12/29/2008  Group1   0.453466522
1/5/2009    Group1   NA
1/12/2009   Group1   0.416548943
1/19/2009   Group1   2.066275155
1/26/2009   Group1   2.037729638
2/2/2009    Group1  -0.598040483
2/9
On Apr 4, 2011, at 12:37 PM, Umesh Rosyara wrote:
Dear Uwe and R community members
Thank you Uwe for the help.
I have still a question remaining, I am trying to find answer from
long
time.
While exporting my data, I have some characters mixed into it. I
want to
define any characters as
Hello!
I have my data frame "mydata" (below) and data frame "reference" -
that contains all the dates I would like to be present in the final
data frame.
I am trying to merge them so that the result data frame contains
all 8 dates in both subgroups (i.e., Group1 should have 8 rows and
Group2 to
On Apr 4, 2011, at 12:45 PM, Marius Hofert wrote:
Dear David,
I intended to use another x-label. But your suggestion brings me to
the idea of just using a two-line xlab, so s.th. like
print(xyplot(0 ~ 0, xlab.top = "This title is now 'centered' for the
human's eye", xlab = "but subtitles a
Dear Uwe and R community members
Thank you Uwe for the help.
I still have a question remaining that I have been trying to answer for a long
time.
While exporting my data, I have some characters mixed into it. I want to
define any character as an na.string. Is it possible to do so?
Thanks;
Umesh
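One possible approach, as a sketch (hypothetical file and column name): read the
column as character and coerce it to numeric, so every non-numeric entry becomes
NA regardless of its value.
x <- read.table("mydata.txt", header = TRUE, stringsAsFactors = FALSE)
x$value <- suppressWarnings(as.numeric(x$value))  # any stray character turns into NA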
There are, however, the multcomp and multcompView packages that
might provide something of interest in this regard. "multcomp" has a
companion book, "Multiple Comparisons Using R" (Bretz, Hothorn,
Westfall, 2010, CRC Press), which I believe provides an excellent
overview of the state of
Hi,
I am new to this forum.
I hope someone can help me, or correct me if this is the wrong "subforum" to
write this in.
I have to choose the "best" ARIMA model from different possibilities for a
time series. I know the AIC, BIC, and similar criteria. But now I would like to check
the value called r-squared or a
Use exprs on the output from RMA (or another method you like)
library("affy")
myData <- ReadAffy()
myRMA <- rma(myData)
e <- exprs(myRMA)
Also, check out the Bioconductor mailing list where
Bioconductor-related topics are discussed.
On Fri, Apr 1, 2011 at 9:54 AM, Landes, Ezekiel
wrote:
> I hav
Dear David,
I intended to use another x-label. But your suggestion brings me to the idea of
just using a two-line xlab, so s.th. like
print(xyplot(0 ~ 0, xlab.top = "This title is now 'centered' for the human's
eye", xlab = "but subtitles are _now_ centered\nbla", scales = list(alternating
= c(
Hi Mauricio,
A Windows binary is now available on CRAN:
http://dirk.eddelbuettel.com/blog/2011/04/04/#rquantlib_0.3.7
Best,
--
Joshua Ulrich | FOSS Trading: www.fosstrading.com
On Tue, Mar 29, 2011 at 10:38 AM, Mauricio Romero
wrote:
> Dear R users,
>
> I have been trying to use RQuantLib i
On 04.04.2011 16:41, Umesh Rosyara wrote:
Dear R community members
I did find a good way to merge my 200 text data files in to a single data
file with one column added will show indicator for that file.
filelist = list.files(pattern = "K*cd.txt")
I doubt you meant "K*cd.txt" but "^K[[:
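The corrected pattern presumably looks something like the following (a guess at
the truncated suggestion; list.files() expects a regular expression, not a glob):
filelist <- list.files(pattern = "^K[[:digit:]]+cd\\.txt$")  # K1cd.txt ... K200cd.txt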
How about
as.matrix(p.adjust(as.dist(pmat)))
Benno
On 4.Apr.2011, at 17:02, January Weiner wrote:
> Dear all,
>
> I have an n x n matrix of p-values. The matrix is symmetrical, as it
> describes the "each against each" p values of correlation
> coefficients.
>
> How can I best correc
On 04.04.2011 12:35, Yan Jiao wrote:
Dear R users,
I need to add 0 in front of a series of numbers, e.g. 1->001, 19->019,
Is there a fast way of doing that?
formatC(c(1, 19), flag=0, width=3)
Uwe Ligges
Many thanks
yan
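An equivalent zero-padding one-liner with sprintf (not from the original reply):
sprintf("%03d", c(1, 19))   # "001" "019"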
There are also the multcomp and multcompView packages that might
provide something of interest in this regard. "multcomp" has a
companion book, "Multiple Comparisons Using R" (Bretz, Hothorn,
Westfall, 2010, CRC Press), which I believe provides an excellent
overview of the state of the
On Apr 4, 2011, at 17:02 , January Weiner wrote:
> Dear all,
>
> I have an n x n matrix of p-values. The matrix is symmetrical, as it
> describes the "each against each" p values of correlation
> coefficients.
>
> How can I best correct the p values of the matrix? Notably, the total
> number of
You can use simulation:
1. decide what you think your data will look like
2. decide how you plan to analyze your data
3. write a function that simulates a dataset (common arguments include sample
size(s) and effect sizes) then analyzes the data in your planned manner and
returns the p-value(s) o
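A minimal sketch of those steps for a two-sample t-test (the effect size, alpha,
number of simulations, and candidate sample sizes below are made up):
sim_power <- function(n, effect = 0.5, nsim = 1000, alpha = 0.05) {
  p <- replicate(nsim, {
    x <- rnorm(n)                  # control group
    y <- rnorm(n, mean = effect)   # treatment group
    t.test(x, y)$p.value
  })
  mean(p < alpha)                  # estimated power at this sample size
}
sapply(c(20, 50, 100), sim_power)  # try several candidate sample sizes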
1. This is not an R question, AFAICS.
2. Sounds like a research topic. I don't think there's a meaningful
simple answer. I suspect it strongly depends on the model and context.
-- Bert
On Mon, Apr 4, 2011 at 8:02 AM, January Weiner
wrote:
> Dear all,
>
> I have an n x n matrix of p-values. The
Hi all,
I would like to multithread this script, to detect structure from multilocus
genetic data:
>library(Geneland)
>
>geno = read.table("cot966gen_test.txt") # the file is shown after
>MCMC(geno.dip.codom = geno, varnpop=T, npopmax=20, spatial = F, nit=10,
>thinning=100, path.mcmc="./")
>P
Hi
Thanks; however, I would need something different still...
I would need to return a vector, so that choosing
cc[[3]][2] would return a vector/list, as in c(b, d, e)
Thanks
Dear all,
I have an n x n matrix of p-values. The matrix is symmetrical, as it
describes the "each against each" p values of correlation
coefficients.
How can I best correct the p values of the matrix? Notably, the total
number of the tests performed is n(n-1)/2, since I do not test the
correlati
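A minimal sketch of adjusting only the n(n-1)/2 unique values and writing them
back symmetrically (pmat is the hypothetical p-value matrix; "BH" is just one
choice of adjustment method):
adj <- p.adjust(pmat[lower.tri(pmat)], method = "BH")
pmat.adj <- pmat
pmat.adj[lower.tri(pmat.adj)] <- adj
pmat.adj[upper.tri(pmat.adj)] <- t(pmat.adj)[upper.tri(pmat.adj)]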
Dear R community members
I did find a good way to merge my 200 text data files into a single data
file, with one column added that will show an indicator for each file.
filelist = list.files(pattern = "K*cd.txt") # the file names are K1cd.txt
.to K200cd.txt
data_list <-lapply(f
Hi R Group
Thanks for some suggestions.
I have finally figured it out.
The following script, called from within an R session, does what I want (to
attach files called test1.pdf and Document-1.pdf into the file test2.pdf and
then save the new file with attachments as test3.pdf at the given path):
optio
True - gapped stacked bar plots make no sense at all. I'm working my way up
to a gapped bar plot with series next to each other (and error bars!), what
you'd get if you put a gap in the y-axis of
> twogrp2<-array(twogrp, dim=c(2,5))
> barplot(twogrp2, beside=TRUE)
I'm guessing I can do this if
Bert Jacobs-2 wrote:
>
> I would like to replace the last three characters of the values of a
> certain
> column in a dataframe.
>
>
Besides the mentioned standard method: I found the subset of string
operations in Hadley Wickham's stringr package helpful. They have a much more
consistent interf
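A minimal sketch of both routes (hypothetical data frame df, column code, and
replacement text "XYZ"):
df$code <- paste0(substr(df$code, 1, nchar(df$code) - 3), "XYZ")  # base R
library(stringr)
str_sub(df$code, -3) <- "XYZ"                                     # stringr replacement form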
On Mon, Apr 4, 2011 at 11:20 AM, Den Alpin wrote:
> I did some tests on Your and Gabor solutions, below my findings:
>
> - Your solution is fast as my solution in xts (below) but MUCH MORE
> READABLE, in particular I think your test should take into account xts
> creation from the data.frame (see
Juraj17 wrote:
>
> Do I have to write my own, or does it already exist? What name does it have,
> or how can I use it?
>
Try the R-function search. It returns the function you are looking for as the
first match.
Dieter
On Apr 4, 2011, at 6:35 AM, kitty wrote:
Dear list,
Hi,
I am trying to get the second derivative of a logistic formula; in the R
summary the model is given as:
###
$nls
Nonlinear regression model
model: data ~ logistic(time, A, mu, lambda, addpar)
data: parent.frame()
A mu lambda
0.
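A minimal sketch using symbolic differentiation with D(); the parameterisation
below is a common A/mu/lambda logistic and may differ from the logistic()
actually fitted, and the parameter values are made up:
f  <- expression(A / (1 + exp(4 * mu / A * (lambda - time) + 2)))
d1 <- D(f, "time")    # first derivative with respect to time
d2 <- D(d1, "time")   # second derivative
eval(d2, list(A = 1, mu = 0.5, lambda = 2, time = 3))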
I did some tests on your and Gabor's solutions; below are my findings:
- Your solution is as fast as my solution in xts (below) but MUCH MORE
READABLE; in particular, I think your test should take into account xts
creation from the data.frame (see below);
- Gabor's solution with read.zoo is as fast as xts but
Hi Paul,
I am using R v. 2.12.2, and the "dse" package with build 2.12.2.
I have attached some sample data to this email, and the R code I use
to create the model and then forecast with it.
Thanks,
Alison
On Mon, Apr 4, 2011 at 11:02 AM, Paul Gilbert
wrote:
> Could you please send me a reprod
Hello,
Using the dse package I have estimated a VAR model using estVARXls().
I can perform forecasts using forecast() with no problems, but when I
try to use simulate() with the same model, I get the following error:
Error in diag(Cov, p) :
'nrow' or 'ncol' cannot be specified when 'x' is a mat
On Apr 4, 2011, at 7:39 AM, Marius Hofert wrote:
Dear expeRts,
I recently asked for a real "centered" title (see, e.g., http://tolstoy.newcastle.edu.au/R/e13/help/11/01/0135.html)
.
A nice solution (from Deepayan Sarkar) is to use "xlab.top" instead
of "main":
library(lattice)
trellis.dev
Try this:
lapply(2:3, FUN = combn, x = string, paste, collapse = '')
On Mon, Apr 4, 2011 at 11:24 AM, michalseneca wrote:
> Hi I have very simple issue as I am still new to the group of R
>
> I have basically
>
> vector of names for which i want to create mutliple combinations and then
> place
biologie.uni-marburg.de> writes:
>
> Dear Ben,
>
> you answerd to Nancy Shackelford about Clarks 2Dt function.
> Since the thread ended just after your reply,
> I would like to ask, if you have an idea how to use this function in R
>
Dear Ronald,
I got started on your problem, but I didn
Hi, I have a very simple issue as I am still new to R.
I basically have a
vector of names for which I want to create multiple combinations and then
place them in different vectors. In some other language I could just add a
third dimension to separate lists (or matrices), but I do not know ho
On 2011-04-04 06:39, Andrew D. Steen wrote:
I am trying to make a barplot with a broken axis using gap.barplot (in the
indispensable plotrix package). This works well when the data is a vector:
twogrp<-c(rnorm(10)+4,rnorm(10)+20)
gap.barplot(twogrp,gap=c(8,16),xlab="Index",ytics=c(3,6,17,20),y
Hi,
On Mon, Apr 4, 2011 at 5:15 AM, Sadaf Zaidi wrote:
> Dear Sir/Madam,
> I am stuck with a nagging problem in using R for SVM regression. My data has 5
> dimensions and 400 observations. The independent variables are :
> Peb, Ksub, Sub, and Xtt.
> The dependent variable is: Rexp.
> I tried usin
Is something like this what you're looking for?
R> library(nor1mix)
R> nmix2 <- norMix(c(2, 3), sig2=c(25, 4), w=c(.2, .8))
R> dnorMix(1, nmix2) - dnorm(1, 2, 5)
[1] 0.03422146
Andy
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Beha
On Mon, Apr 4, 2011 at 8:49 AM, Den Alpin wrote:
> I retrieve for a few hundred times a group of time series (10-15 ts
> with 1 values each), on every group I do some calculation, graphs
> etc. I wonder if there is a faster method than what presented below to
> get an appropriate timeseries ob
Hi Dan,
On Mon, Apr 4, 2011 at 7:49 AM, Den Alpin wrote:
> I retrieve for a few hundred times a group of time series (10-15 ts
> with 1 values each), on every group I do some calculation, graphs
> etc. I wonder if there is a faster method than what presented below to
> get an appropriate time
try this:
> x <- read.table(textConnection(";this is example
+ ; r help
+ Var1 Var2 Var3 Var4 Var5
+ 0 0.05 0.0112
+ 1 0.04 0.0618A
+ 2 0.05 0.0814
+ 3 0.01 0.0615 B
+ 4
Hello,
I am trying to find out if R can do the following:
I have a mixture of normals say f = 0.2*Normal(2, 5) + 0.8*Normal(3,2)
How do I find the difference in the densities at any particular point of f
and at Normal(2,5)?
--
Thanks,
Jim.
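A minimal sketch in base R, assuming the second parameters are standard
deviations (consistent with the nor1mix answer above):
fmix <- function(x) 0.2 * dnorm(x, 2, 5) + 0.8 * dnorm(x, 3, 2)
fmix(1) - dnorm(1, 2, 5)   # difference in densities at x = 1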
?read.table
Then look at the 'fill' & 'flush' parameters; this may do the trick
On Mon, Apr 4, 2011 at 9:32 AM, Ram H. Sharma wrote:
> Hi R-experts
>
> I have many text files to read and combined them into one into R that are
> output from other programs. My textfile have unbalanced number of ro
I am trying to make a barplot with a broken axis using gap.barplot (in the
indispensable plotrix package). This works well when the data is a vector:
> twogrp <- c(rnorm(10)+4, rnorm(10)+20)
> gap.barplot(twogrp, gap=c(8,16), xlab="Index", ytics=c(3,6,17,20),
+   ylab="Group values", main="Barplot with gap")
Hi R-experts
I have many text files, output from other programs, that I need to read and
combine into one in R. My text files have an unbalanced number of rows, for
example:
;this is example
; r help
Var1 Var2 Var3 Var4 Var5
0 0.05 0.0112
1 0.04
A few hundred times, I retrieve a group of time series (10-15 ts
with 1 values each); on every group I do some calculations, graphs,
etc. I wonder if there is a faster method than what is presented below to
get an appropriate time series object.
Making a query with RODBC for every group I get a d
I've written about a bunch of Web R interfaces here:
*
http://www.r-statistics.com/2010/04/jeroen-oomss-ggplot2-web-interface-a-new-version-released-v0-2/
*
http://www.r-statistics.com/2010/04/r-node-a-web-front-end-to-r-with-protovis/
(And some other posts here:
http://www.r-statistics.com/categor
On Sun, Apr 3, 2011 at 8:29 PM, Cleber N. Borges wrote:
> hello all
> I am trying to learn how to use the RGtk2 package...
> so, my first problem is: I can't find the right way to populate my
> gtkListStore object!
> any help is welcome... because I have been trying for several days to put the code together...
> Tha
On Mon, Apr 4, 2011 at 8:30 AM, Steve Friedman wrote:
> Hello
>
>
> Lets say as an example I have a dataframe with the following attributes:
> rownum(1:405), colnum(1:287), year(2000:2009), daily(rownum x colnum x year)
> and foragePotential (0:1, by 0.01). The data is actually stored in a netcdf
Hi R helpers... I am having trouble because of the discrepancy
between the dendrogram plotted from hclust and what is written in the
hc2Newick file. I've got a matrix C:
hc <- hclust(dist(C))
plot(hc)
with the:
write(hc2Newick(hc),file='test.newick')
both things draw completely different "trees
> >> and time to upgrade R
>
> I'm still fighting to find out how to upgrade stuff on Ubuntu. After
> a
> repository update the newest available version was still 2.10.1.
> I'll figure it out, sooner or later :)
>
That's simple. Just add
$deb http:///bin/linux/ubuntu maverick/
to /etc/apt
I appreciate that this is OT, but I'd be grateful for pointers to examples of
where Sweave has been used for web-based applications. In particular, examples of
where reports/analyses are produced automatically through submission of data
to a web server. I am mostly interested in situations wher