Thanks for your comments!
> Yes. You are fitting by least-squares on two different scales:
> differences in y and differences in log(y) are not comparable.
>
> Both are correct solutions to different problems. Since we have no idea
> what 'x' and 'y' are, we cannot even guess which is more appropriate.
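For illustration, a minimal sketch of the point (simulated x and y, not the
poster's data): the two fits minimize different criteria, so their
coefficients need not agree.

set.seed(1)
x <- runif(50, 1, 10)
y <- 2 * x^1.5 * exp(rnorm(50, sd = 0.2))
fit.nls <- nls(y ~ a * x^b, start = list(a = 1, b = 1))  # least squares on y
fit.lm  <- lm(log(y) ~ log(x))                           # least squares on log(y)
coef(fit.nls)
c(a = exp(coef(fit.lm)[[1]]), b = coef(fit.lm)[[2]])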
On Wed, 5 Mar 2008, ONKELINX, Thierry wrote:
> Bob,
>
> You can copy the files from the packages to your new computer. Then run
> update.packages(checkBuilt = TRUE).
>
> That should do.
Well, that will download new copies of all the packages, something he was
trying to avoid. It is close to una
The only thing you are adding to earlier replies is incorrect:
fitting by least squares does not imply a normal distribution.
For a regression model, least-squares is in various senses optimal when
the errors are i.i.d. and normal, but it is a reasonable procedure for
many other situations.
"Gabor Grothendieck" <[EMAIL PROTECTED]> wrote in
news:[EMAIL PROTECTED]:
> On Thu, Mar 6, 2008 at 12:00 AM, David Winsemius
> <[EMAIL PROTECTED]> wrote:
>> Philipp Pagel <[EMAIL PROTECTED]> wrote in
>> news:[EMAIL PROTECTED]:
>>
>> > On Wed, Mar 05, 2008 at 12:32:19PM +0100, Erika Frigo wrote:
On Thu, Mar 6, 2008 at 12:00 AM, David Winsemius <[EMAIL PROTECTED]> wrote:
> Philipp Pagel <[EMAIL PROTECTED]> wrote in
> news:[EMAIL PROTECTED]:
>
> > On Wed, Mar 05, 2008 at 12:32:19PM +0100, Erika Frigo wrote:
> >> My file has not only more than a million values, but more than a
> >> million rows
Philipp Pagel <[EMAIL PROTECTED]> wrote in
news:[EMAIL PROTECTED]:
> On Wed, Mar 05, 2008 at 12:32:19PM +0100, Erika Frigo wrote:
>> My file has not only more than a million values, but more than a
> >> million rows and more or less 30 columns (it is a production dataset
> >> for cows); in fact, with read
Steve Markofsky <[EMAIL PROTECTED]> writes:
> I'm trying to run a Ripley's K analysis on a point pattern
> within a window of 1 square km.
> The maximum distance I want to use is the diagonal
> of the window, around 1400m, run in 50m increments.
You can't estimate K(r) for r equal to the diagonal.
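For illustration, a sketch with the spatstat package (simulated points in a
made-up 1 km x 1 km window); the default r range of Kest() stays well inside
the window, which is why distances near the diagonal are not attempted:

library(spatstat)
win <- owin(c(0, 1000), c(0, 1000))   # 1 km x 1 km window, coordinates in metres
pp  <- runifpoint(100, win = win)     # stand-in for the real point pattern
K   <- Kest(pp)                       # default maximum r is a fraction of the window size
plot(K)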
Thank you! That was very helpful indeed!
Karen

> Subject: RE: [R] extracting a percentage of data by random
> Date: Thu, 6 Mar 2008 11:20:16 +1000
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]; r-help@r-project.org
>
> You don't need any explicit loops at all. Here is a demo of one way to
> do it:
Hi Ramon,
On 5 March 2008 at 22:00, Ramon Diaz-Uriarte wrote:
| Yes, of course! You are right. What a silly mistake on my part! I was
| using a standalone program for development of functions, debugging,
| etc, of what is part of a package.
That's another good use for littler's r. With a package
You don't need any explicit loops at all. Here is a demo of one way to
do it:
> set.seed(23) # on Windows
> dat <- data.frame(age = factor(sample(1:4, 200, rep = T)), y = runif(200))
> head(dat) # ages are in random order
  age          y
1   3 0.64275524
2   1 0.56125314
3   2 0.82418228
Here is one way of doing it:
> x <- data.frame(group=sample(1:4,100,TRUE), age=runif(100,4,80))
> tapply(x$age, x$group, function(z) mean(z[sample(seq_along(z), length(z)/10)]))
       1        2        3        4
34.56628 58.70901 54.26239 58.89306
On Wed, Mar 5, 2008 at 7:49 PM, Chang
Hello Gurus:
If I have a dataframe with one of the variables called "age" for example, and I
want to extract a random 10% of the observations from each "age" group of the
entire data frame. Do I have to double loop to split the data and then loop
again to assign random numbers? Or is there a
There are a number of ways for importing from EXCEL. For example if
you were to create a CSV file for EXCEL, you can read it like:
> x <- read.table('/tempxx.txt.r', header=TRUE, as.is=TRUE)
> x
Control
1 543_BU
2 123_AT
3 432_CU
On Wed, Mar 5, 2008 at 6:42 PM, Keizer_71 <[EMAIL PROTECTED]
Thanks for the quick reply.
On Wednesday 05 March 2008, [EMAIL PROTECTED] wrote:
> Generically, let y be the response, f the factor and x the covariate.
> Then
>
> pModel <- lm(y ~ f + x, data) # parallel regressions
> sModel <- lm(y ~ f/x, data) # separate regressions (the '-1' is optional)
>
Hello,
I have this in excel
Control
543_BU
123_AT
432_CU
I want to be able to import to R so that it will read like this
c<-c("543_BU","123_AT","432_CU")
output:
[1] "543_BU" "123_AT" "432_CU"
This is just a short version. I have about 20 rows and i need a simpler
way instead of typing
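One possible route (the file name here is hypothetical): save the Excel
column as a CSV file and read it back as character rather than factor:

x <- read.csv("control.csv", as.is = TRUE)$Control
x
# [1] "543_BU" "123_AT" "432_CU"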
--- Ben Bolker <[EMAIL PROTECTED]> wrote:
> Georg Ehret gmail.com> writes:
>
> >
> > Dear R community, I encounter a problem reading
> data into a dataframe. See
> > attachment for the input.
>
> [snip]
>
> The attachment didn't get through to the list.
> Simplest thing
> to do (if you can) is to post it somewhere and give us the URL.
Hi Martin
I was not so much speaking about what we can do, but more about what
we can't. We can't decide that an object will 'never be empty'; we have to
allow empty objects, otherwise new("A") will not work and that will be
problematic.
So, at this point, I see:
- it is necessary to allow the
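A small sketch of the point (class names follow the thread; the slots are
made up): if new("A") had to fail for empty objects, the coercion discussed
here could not work either.

setClass("A", representation(x = "numeric"))
setClass("C", contains = "A", representation(y = "numeric"))
a  <- new("A")                        # must work, per the thread, so an empty object has to be allowed
cc <- new("C", x = c(1, 2, 3), y = 9)
as(cc, "A")                           # the coercion the thread is discussing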
Generically, let y be the response, f the factor and x the covariate.
Then
pModel <- lm(y ~ f + x, data) # parallel regressions
sModel <- lm(y ~ f/x, data) # separate regressions (the '-1' is optional)
anova(pModel, sModel) # tests whether there are differences in slopes.
In the example
I am.
What do you need to know now?
P.
On Thursday 06 March 2008 09:39, Roslina Zakaria wrote:
> Hi R users,
> Anybody using tweedie model to analyze rainfall data?
> Thanks in advance for your attention.
>
On 05-Mar-08 23:37:42, zack holden wrote:
> Dear list,
> I'm trying to query a string of numbers to identify where in the string
> the numbers stop increasing (where x[i] == x[i+1]). In example 1 below,
> I've adapted code from Jim Holt to do this. However, I run into
> situations where the conditi
Hi R users,
Anybody using tweedie model to analyze rainfall data?
Thanks in advance for your attention.
Hi,
How do I get the splits that were used for the top important variables?
For example, if my topmost important variable is a continuous
variable, e.g. previous 12-month sales, and its range is between $0 and
$1M, how do I find the split values for this variable
that were used in the r
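If the model is a randomForest (the question does not say which package was
used), one way to see the split variables and split points of an individual
tree is getTree(); a sketch on made-up data:

library(randomForest)
set.seed(1)
dat <- data.frame(sales12m = runif(200, 0, 1e6), other = rnorm(200))
dat$y <- factor(dat$sales12m + rnorm(200, sd = 2e5) > 5e5)
rf <- randomForest(y ~ sales12m + other, data = dat)
head(getTree(rf, k = 1, labelVar = TRUE))  # 'split var' and 'split point' columns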
Hi,
How would one go about determining if the slope terms from an analysis of
covariance model are different from each other?
Based on the example from MASS:
library(MASS)
# parallel slope model
l.para <- lm(Temp ~ Gas + Insul, data=whiteside)
# multiple slope model
l.mult <- lm(Temp ~ Insul/Gas, data=whiteside)
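Completing the comparison along the lines of the generic advice earlier in
the thread (a small p-value in the anova() comparison indicates that the
slopes differ between Insul levels):

library(MASS)
l.para <- lm(Temp ~ Gas + Insul, data = whiteside)  # common slope
l.mult <- lm(Temp ~ Insul/Gas,  data = whiteside)   # separate slope per Insul level
anova(l.para, l.mult)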
Dear list,
I'm trying to query a string of numbers to identify where in the string the
numbers stop increasing (where x[i] == x[i+1]). In example 1 below, I've
adapted code from Jim Holt to do this. However, I run into situations where the
condition is not met, as in example 2, where the number
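A loop-free sketch of the check (example vectors are made up); which() simply
returns a zero-length result when the condition is never met, so the second
case no longer causes trouble:

stop.at <- function(x) which(diff(x) == 0)  # positions i with x[i] == x[i+1]
stop.at(c(1, 2, 3, 3, 5))  # 3
stop.at(c(1, 2, 3, 4, 5))  # integer(0): never stops increasing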
Hi,
I am trying to understand how the functions em() and me() from the
mclust package work. I cannot make sense of what the algorithm
returns. Here is a basic, simple example:
#
# two bivariate normals, centered at (-5,0) and (5,0), with identity
# covariance
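For what it's worth, a sketch using the higher-level Mclust() interface (the
low-level em()/me() calling conventions differ between mclust versions), with
the same layout as the example: two bivariate normals at (-5,0) and (5,0)
with identity covariance.

library(mclust)
set.seed(1)
X <- rbind(cbind(rnorm(100, -5), rnorm(100, 0)),
           cbind(rnorm(100,  5), rnorm(100, 0)))
fit <- Mclust(X, G = 2)
summary(fit, parameters = TRUE)  # mixing proportions, means and covariances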
Hi,
If you didn't receive the attachment properly, here it is again.
Ravi.
---
Ravi Varadhan, Ph.D.
Assistant Professor, The Center on Aging and Health
Division of Geriatric Medicine and Gerontology
Johns Hopkins University
Georg Ehret gmail.com> writes:
>
> Dear R community, I encounter a problem reading data into a dataframe. See
> attachment for the input.
[snip]
The attachment didn't get through to the list. Simplest thing
to do (if you can) is to post it somewhere and give us the URL.
Ben Bolker
phthao05 gmail.com> writes:
[snip]
> Because I have only been learning R for a few days, I have many problems.
> 1) I don't know why the PCA rotation function does not run, although I have tried many times.
> Would you please help me and explain how to read the PCA map (both
> rotated and unrotated) in a c
Hi,
Here is another approach, in addition to the suggestions made by Spencer and
Gabor. It uses the spm() function in the SemiPar package. An advantage of this
approach is that the smoothing parameter is automatically estimated using
REML (here I use default knot locations, but these can be specified).
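A sketch of that route (simulated data; default knots and a REML-chosen
smoothing parameter, as described above):

library(SemiPar)
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
fit <- spm(y ~ f(x))  # f() marks the penalized-spline term
summary(fit)
plot(fit)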
You can use acf(), but it will be messy and the labeling of the plots
is confusing and perhaps misleading... check out issue number 4
at http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm
I would recommend setting up a grid of the ccfs and you could
automate this (i.e., write a loop) if need be
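A sketch of the "grid of ccfs" idea (the three-column series here is
simulated, and lag.max is arbitrary):

X <- ts(matrix(rnorm(300), ncol = 3, dimnames = list(NULL, c("a", "b", "c"))))
k <- ncol(X)
op <- par(mfrow = c(k, k), mar = c(3, 3, 2, 1))
for (i in 1:k)
  for (j in 1:k)
    ccf(X[, i], X[, j], lag.max = 20,
        main = paste(colnames(X)[i], "vs", colnames(X)[j]))
par(op)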
Ramon Diaz-Uriarte wrote on 03/05/2008 03:00 PM:
> Dear Prof. Ripley,
>
> Yes, of course! You are right. What a silly mistake on my part! I was
> using a standalone program for development of functions, debugging,
> etc, of what is part of a package.
Aha! The lesson I take away from this then is
On Wed, 2008-03-05 at 15:32 -0500, Shewcraft, Ryan wrote:
> Hi All,
>
> I can't quite figure out how to change the parameters of the x and y
> axes when I plot a polymars object. I want to add a scatterplot of the
> data points, but the polymars plot seems to automatically set the
> parameters to
Check out ?splinefun
On Wed, Mar 5, 2008 at 3:18 PM, Levi Waldron <[EMAIL PROTECTED]> wrote:
> What functions exist for differentiating a numeric vector (in my case
> spectral data)? That is, experimental data without an analytical
> function. ie,
>
> > x <- seq(1,10,0.1)
> > y=x^3+rnorm(length
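Following that pointer, a sketch on data simulated as in the post:
splinefun() returns a function whose deriv argument gives the derivative of
the interpolating spline.

x <- seq(1, 10, 0.1)
y <- x^3 + rnorm(length(x), sd = 0.01)
f <- splinefun(x, y)
derivy <- f(x, deriv = 1)                        # numerical first derivative
plot(x, derivy); lines(x, 3 * x^2, col = "red")  # compare with the true derivative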
Have you looked at the 'fda' package? It has many functions for
doing what you want. A strength is that it is a companion package for
two books on that and related issues, and includes script files under
"~R.installation.directory\library\fda\scripts" to reproduce some of the
analyses.
Hello LIST,
I'd like to use tune from e1071 to do a grid search for hyperparameter
values in gbm. However, I can not get this to work. I note that there is no
wrapper for gbm but that it is possible to use non-wrapped functions (like
lm) without problem. Here's a snippet of code to illustrate.
Dear Prof. Ripley,
Yes, of course! You are right. What a silly mistake on my part! I was
using a standalone program for development of functions, debugging,
etc, of what is part of a package.
Thanks,
R.
On Wed, Mar 5, 2008 at 8:45 PM, Prof Brian Ripley <[EMAIL PROTECTED]> wrote:
> On Wed, 5 M
Read R News 4/1 article on dates.
On Wed, Mar 5, 2008 at 2:56 PM, <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I am an advanced user of R. Recently I found out that apparently I do
> not fully understand vectors and lists fully
> Take this code snippet:
>
>
> T = c("02.03.2008 12:23", "03.03.2008 05:5
On Mar 5, 2008, at 2:56 PM, [EMAIL PROTECTED] wrote:
> Hello,
>
> I am an advanced user of R. Recently I found out that apparently I do
> not fully understand vectors and lists fully
> Take this code snippet:
>
>
> T = c("02.03.2008 12:23", "03.03.2008 05:54")
> Times = strptime(T, "%d.%m.%Y %H:%M
Hello -
[EMAIL PROTECTED] wrote:
> Hello,
>
> I am an advanced user of R. Recently I found out that apparently I do
> not fully understand vectors and lists fully
> Take this code snippet:
>
>
> T = c("02.03.2008 12:23", "03.03.2008 05:54")
> Times = strptime(T, "%d.%m.%Y %H:%M")
> Times
Hi All,
I can't quite figure out how to change the parameters of the x and y
axes when I plot a polymars object. I want to add a scatterplot of the
data points, but the polymars plot seems to automatically set the
parameters to fit the polymars line. I've tried using plot(poly,1,fig=
c(x1,x2,y1,
try this:
l <- vector("list", 3)
l[[1]] <- list(4, "hello")
l[[2]] <- list(7, "world")
l[[3]] <- list(9, " ")
lis <- lapply(l, "names<-", value = c("V1", "V2"))
do.call("rbind", lapply(lis, data.frame, stringsAsFactors = FALSE))
I hope it helps.
Best,
Dimitris
Dimitris Rizopoulos
Bio
Dear R community, I encounter a problem reading data into a dataframe. See
attachment for the input. I use:
data<-read.table("test",fill=T,row.names=1)
When you look at "data" you can see that some lines of my input were
broken... I can avoid this problem by sorting the lines by length... Do I
h
Perhaps
data.frame(do.call(rbind, l))
?
- Erik Iverson
[EMAIL PROTECTED] wrote:
> Hello,
>
>
> Given a list with all elements having identical layout, e.g.:
>
>
> l = NULL
> l[[1]] = list(4, "hello")
> l[[2]] = list(7, "world")
> l[[3]] = list(9, " ")
>
>
> is there an easy way to co
Indeed, but are not each of the cell means also evaluations of the effect of
one factor at the specific level of another factor? Is this an issue of
"Tomato, tomahto".
I guess my question is, if I want to know if each of those is different from
0, then should I use the 48df from the full model,
On Wed, Mar 5, 2008 at 8:28 AM, Georg Otto <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am trying to generate a figure of 9 plots that are contained in one
> device by using
>
> par(mfrow = c(3,3))
>
> I would like to have 1 common legend for all 9 plots somewhere outside
> of the plotting area (as
What functions exist for differentiating a numeric vector (in my case
spectral data)? That is, experimental data without an analytical
> function, i.e.,
> x <- seq(1,10,0.1)
> y=x^3+rnorm(length(x),sd=0.01) #although the real function would be
> nothing simple like x^3...
> derivy <-
I kn
On Wed, Mar 5, 2008 at 7:41 PM, Prof Brian Ripley <[EMAIL PROTECTED]> wrote:
> > Can R perform n-way ANOVA, i.e., with 3 or more factors?
>
> Yes. There are even examples on the help page!
Thanks, Prof. Ripley. I will have a look at it.
Paul
Hello,
Given a list with all elements having identical layout, e.g.:
l = NULL
l[[1]] = list(4, "hello")
l[[2]] = list(7, "world")
l[[3]] = list(9, " ")
is there an easy way to collapse this list into a data.frame with each
row being the elements of the list ?
I.e. in this case I want to
On 6/03/2008, at 2:53 AM, Wolfgang Waser wrote:
> Dear all,
>
> I did a non-linear least square model fit
>
> y ~ a * x^b
>
> (a) > nls(y ~ a * x^b, start=list(a=1,b=1))
>
> to obtain the coefficients a & b.
>
> I did the same with the linearized formula, including a linear model
>
> log(y) ~ log
Hello,
I am an advanced user of R. Recently I found out that apparently I do
not fully understand vectors and lists.
Take this code snippet:
T = c("02.03.2008 12:23", "03.03.2008 05:54")
Times = strptime(T, "%d.%m.%Y %H:%M")
Times # OK
class(Times) # OK
is.list(Times)
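The short answer to the surprise (a sketch, renaming the vector so it does
not mask TRUE): strptime() returns a "POSIXlt" object, which stores its
date-time components as a list underneath, so is.list() is TRUE.

txt <- c("02.03.2008 12:23", "03.03.2008 05:54")
Times <- strptime(txt, "%d.%m.%Y %H:%M")
class(Times)          # "POSIXlt" "POSIXt"
is.list(Times)        # TRUE
unclass(Times)$hour   # 12 5 -- the underlying list of components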
On Wed, 5 Mar 2008, Ramon Diaz-Uriarte wrote:
> Dear Jeff,
>
> Thanks for the suggestion. However, something is still not working.
> This is a simple example:
>
> *** start C
> #include
>
> struct Sequence {
> int len;
> unsigned int state_count[];
> };
>
>
[EMAIL PROTECTED] writes:
> Well well well...
You're partly misunderstanding...
> To summarize : let assume that A is a class (slot x) and C is a class
> containing A (A and slot y) - as(c,"A") calls new("A"). So new("A")
> HAS TO works, you can not decide to forbid empty object (unless you
> de
On Wed, 5 Mar 2008, Paul Smith wrote:
> Dear All,
>
> Can R perform n-way ANOVA, i.e., with 3 or more factors?
Yes. There are even examples on the help page!
>
> Thanks in advance,
>
> Paul
>
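For illustration, a made-up three-factor layout (see also the examples in
?aov):

set.seed(1)
d <- expand.grid(A = factor(1:2), B = factor(1:3), C = factor(1:2), rep = 1:5)
d$y <- rnorm(nrow(d))
summary(aov(y ~ A * B * C, data = d))  # main effects plus all interactions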
Hi,
Let me make the following points in response to your questions:
1. Your call to optim() with "L-BFGS-B" as the method is correct. Just
make sure that your function "f" is defined as negative log-likelihood,
since optim is by default a minimizer. The other option is to define
log-likelihood
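A sketch of such a call (the objective f here is a placeholder; the bounds
echo the question, first parameter in [0,1], second in [0, Inf)):

f <- function(p) (p[1] - 0.3)^2 + (p[2] - 2)^2   # stand-in for a negative log-likelihood
optim(par = c(0.5, 1), fn = f, method = "L-BFGS-B",
      lower = c(0, 0), upper = c(1, Inf))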
Try to download something in IE and look at the bottom of your browser
where the URL is displayed or look at the Javascript in:
http://data.un.org/_Scripts/SeriesActions.js
and it's apparent that the format is as follows:
http://data.un.org/Handlers/DownloadHandler.ashx?DataFilter=srID:1000&dataMar
Dear All,
Can R perform n-way ANOVA, i.e., with 3 or more factors?
Thanks in advance,
Paul
On 3/5/2008 1:32 PM, jebyrnes wrote:
> Ah. I see. So, if I want to test to see whether each simple effect is
> different from 0, I would do something like the following:
>
> cm2 <- rbind(
> "A:L" = c(1, 0, 0, 0, 0, 0),
> "A:M" = c(1, 1, 0, 0, 0, 0),
> "A:H" = c(1, 0, 1, 0, 0, 0),
> "B:L" =
On Wed, 2008-03-05 at 15:28 +0100, Georg Otto wrote:
> Hi,
>
> I am trying to generate a figure of 9 plots that are contained in one
> device by using
>
> par(mfrow = c(3,3))
>
> I would like to have 1 common legend for all 9 plots somewhere outside
> of the plotting area (as opposed to one leg
Thanks to All,
The comments were very helpful; however, the simulation is running very
slowly. I reduced the number of loops (conditions) so I have 36 loops, and the
data-generation occurs 1000 times within each loop. At the end of each 1000
reps, I saved the summary (e.g., mean) of the reps to
?mtitle should do it.
--- Georg Otto <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am trying to generate a figure of 9 plots that are
> contained in one
> device by using
>
> par(mfrow = c(3,3))
>
> I would like to have 1 common legend for all 9 plots
> somewhere outside
> of the plotting area (as op
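One way this is often done (a sketch with made-up panels): draw the array of
plots with an enlarged outer margin, then overlay an empty full-device plot
and put a single legend in that margin.

op <- par(mfrow = c(3, 3), oma = c(4, 0, 0, 0))
for (i in 1:9) plot(rnorm(20), col = (i %% 2) + 1, pch = 19, main = paste("panel", i))
par(fig = c(0, 1, 0, 1), oma = c(0, 0, 0, 0), mar = c(0, 0, 0, 0), new = TRUE)
plot.new()
legend("bottom", legend = c("group 1", "group 2"), col = 1:2, pch = 19,
       horiz = TRUE, bty = "n")
par(op)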
Hello everybody,
I have a question about box-constrained optimization. I've done some
research and I found that optim could do that. Are there other ways in R ?
Is the following correct if I have a function f of two parameters belonging
for example to [0,1] and [0,Infinity] ?
optim(par=param, fn=
Ah. I see. So, if I want to test to see whether each simple effect is
different from 0, I would do something like the following:
cm2 <- rbind(
"A:L" = c(1, 0, 0, 0, 0, 0),
"A:M" = c(1, 1, 0, 0, 0, 0),
"A:H" = c(1, 0, 1, 0, 0, 0),
"B:L" = c(1, 0, 0, 1, 0, 0),
"B:M" = c(1, 1, 0, 1, 1, 0)
m <- seq(-1,1,0.1)
x1 <- vector()
x2 <- vector()
# the loop statement was incorrect.
for(i in 1:length(m)){
x1[i] <- m[i]
x2[i] <- m[i]^2
}
dat <- data.frame(x1,x2)
# But why not something like this? There is no need for a loop.
x1 <- seq(-1,1,0.1)
mdat <- data.frame(x1, x2=x1^2)
Folks,
A nice new data resource has come up -- http://data.un.org/
I thought it would be wonderful to setup an R function like
tseries::get.hist.quote() which would be able to pull in some or all
of this data.
I walked around a bit of it and I'm not able to map the resources to
predictable URLs w
Try
bb[is.na(aa)] <- NA
It may be simple but it is not necessarily obvious :)
--- Carson Farmer <[EMAIL PROTECTED]> wrote:
> Dear List,
>
> I am looking for an efficient method for replacing
> values in a
> data.frame conditional on the values of a separate
> data.frame. Here is
> my scenario:
Dear List,
I am looking for an efficient method for replacing values in a
data.frame conditional on the values of a separate data.frame. Here is
my scenario:
I have a data.frame (A) with say 1000 columns, and 365 rows. Each cell
in the data.frame has either a valid value or NA. I have an additional
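A tiny illustration of that assignment (A and B are made-up 3 x 3 frames):
wherever A is NA, the corresponding cell of B is blanked too.

A <- data.frame(x = c(1, NA, 3), y = c(NA, 5, 6), z = c(7, 8, NA))
B <- data.frame(x = c(10, 20, 30), y = c(40, 50, 60), z = c(70, 80, 90))
B[is.na(A)] <- NA   # is.na(A) is a logical matrix indexing B cell-by-cell
B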
Dear Jeff,
Thanks for the suggestion. However, something is still not working.
This is a simple example:
*** start C
#include
struct Sequence {
int len;
unsigned int state_count[];
};
int main(void) {
struct Sequence *A;
int n = 4;
// First li
Thank you everybody.
Phil, your expand.grid works very nicely and I will use it for
non-vectorized functions.
Yet I am a bit confused about "vectorization". For me it is synonymous with
"no loop". :-(
I wrote a toy example (with a function which is not my log-likelihood).
FIRST PART
nir=1:10
log
Hi Eleni,
Check *"Computing Thousands of Test Statistics Simultaneously in R" *in
http://stat-computing.org/newsletter/v181.pdf
Other alternative could be the multtest package.
HTH
Jorge
On Wed, Mar 5, 2008 at 8:55 AM, Eleni Christodoulou <[EMAIL PROTECTED]>
wrote:
> On Wed, Mar 5, 2008 a
Try this:
On 05/03/2008, Martin Kaffanke <[EMAIL PROTECTED]> wrote:
> Hi there!
>
> In my case,
>
> cor(d[1:20])
>
> makes me a good correlation matrix.
>
> Now I'd like to have it one sided, means only the left bottom side to be
> printed (the others are the same) and I'd like to have * wher
On Wed, 5 Mar 2008, Boikanyo Makubate wrote:
> I will like to analyse a binary cross over design using the random
> effects model. The probability of success is assumed to be logistic.
> Suppose as an example, we have 4 subjects undergoing a crossover design,
> where the outcome is either success
Hi,
I'm trying to create a density plot which I used to do in geneplotter
using the following code. Unfortunately I can't find the combination of
R release and geneplotter that works.
Can anyone suggest a fix or an alternative to smoothScatter that will
plot depth of one dive vs depth of the next
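For what it's worth, a sketch with made-up depths; smoothScatter() comes from
geneplotter on Bioconductor, and in later R versions it is also in the
standard graphics package.

depth <- abs(cumsum(rnorm(5000)))   # stand-in for the recorded dive depths
n <- length(depth)
smoothScatter(depth[-n], depth[-1],
              xlab = "depth of dive i", ylab = "depth of dive i + 1")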
Thank you Yinghai, that's what I need :-)!
Yinghai Deng <[EMAIL PROTECTED]> wrote:

m <- seq(-1,1,0.1)
x1 <- c()
x2 <- c()
for(i in 1:length(m)){
x1[i] <- m[i]
x2[i] <- m[i]^2
}
dat <- data.frame(x1,x2)
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Be
On 3/5/08, Fredrik Karlsson <[EMAIL PROTECTED]> wrote:
> Hi,
>
> In my discipline, it is common to plot one acoustic property on a
> positive scale but from top to bottom on the ordinate and the same for
> another measurement on the abscissa.
> So, the origin of the plot is on the top right of
Well well well...
To summarize: let's assume that A is a class (slot x) and C is a class
containing A (A plus slot y) - as(c,"A") calls new("A"). So new("A") HAS
TO work; you cannot decide to forbid empty objects (unless you define
setAs("C","A")?)
- In addition, any test that you would like to
On 3/5/2008 10:09 AM, jebyrnes wrote:
> Huh. Very interesting. I haven't really worked with manipulating contrast
> matrices before, save to do a priori contrasts. Could you explain the matrix
> you laid out just a bit more so that I can generalize it to my case?
Each column corresponds to
Hi there!
In my case,
cor(d[1:20])
makes me a good correlation matrix.
Now I'd like to have it one-sided, meaning only the lower-left triangle is
printed (the other entries are the same), and I'd like to have * where the
p-value is below 0.05 and ** where it is below 0.01.
How can I do this?
And another
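One possible recipe (a sketch; d stands for the data frame of numeric
columns): format the correlations, append stars from pairwise cor.test()
p-values, and blank the upper triangle.

cor.stars <- function(d) {
  k <- ncol(d)
  r <- cor(d)
  out <- matrix("", k, k, dimnames = dimnames(r))
  for (i in 2:k) for (j in 1:(i - 1)) {
    p <- cor.test(d[[i]], d[[j]])$p.value
    stars <- if (p < 0.01) "**" else if (p < 0.05) "*" else ""
    out[i, j] <- paste(format(r[i, j], digits = 2), stars, sep = "")
  }
  as.data.frame(out)
}
cor.stars(as.data.frame(matrix(rnorm(200), ncol = 4)))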
On Tue, Mar 4, 2008 at 9:48 PM, John Sorkin <[EMAIL PROTECTED]> wrote:
> Prof. Bates was correct to point out the lack of specifics in my original
> posting. I am looking for a package that will allow me to choose among link
> functions and account for repeated measures in a repeated measures ANOVA
On Wed, 5 Mar 2008, Chandra Shah wrote:
> Hi
> I have a 3 x 2 contingency table:
> 10 20
> 30 40
> 50 60
> I want to update the frequencies to new marginal totals:
> new column totals: 100 130
> new row totals:     40  80 110
> I want to use the ipf (iterative proportional fitting) function which
> is apparently in the cat package.
> C
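Independently of the cat package, the raking itself takes only a few lines; a
minimal hand-rolled sketch with the targets read from the question (row
totals 40, 80, 110 and column totals 100, 130, which both sum to 230):

tab  <- matrix(c(10, 20,
                 30, 40,
                 50, 60), nrow = 3, byrow = TRUE)
rowT <- c(40, 80, 110)
colT <- c(100, 130)
for (it in 1:50) {
  tab <- tab * (rowT / rowSums(tab))              # scale rows to their targets
  tab <- sweep(tab, 2, colT / colSums(tab), "*")  # scale columns to their targets
}
round(tab, 2)   # fitted table with the requested margins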
rrp is working!
Sorry, it was my mistake... while fiddling around to find
out what the problem was, I forgot to re-include the
variables which are to be imputed. It seems like this
case is not caught but the algorithm finishes with the
mentioned error.
Anyway, I am still a little fuzzy about imputation a
Ramon Diaz-Uriarte wrote on 03/05/2008 04:25 AM:
> Dear All,
>
> In a package, I want to use some C code where I am using a structure
> (as the basic element of a linked list) with flexible array members.
> Basically, this is a structure where the last component is an
> incomplete array type (e.g
Why not simply?
m <- seq(-1, 1, by = 0.1)
dat <- data.frame(m, m^2)
- Erik Iverson
Neuer Arkadasch wrote:
> Hello all,
>
> I am trying to use
>
> m <- seq(-1,1,0.1)
> x1 <- vector()
> x2 <- vector()
> for(i in m){
> x1[i] <- i
> x2[i] <- i^2
> }
> dat <- data.frame(x1,
Thanks all for the prompt answers!!! All works perfectly!
up and running! Thanks!
"jim holtman" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> This should do it for you:
>
>> x <- c("2564gc", "2367,GH", "2134 JHG")
>> x.sep <- gsub("([[:digit:]]+)[ ,]*([[:alpha:]]+)", "\\1 \\2",
m <- seq(-1,1,0.1)
x1 <- c()
x2 <- c()
for(i in 1:length(m)){
x1[i] <- m[i]
x2[i] <- m[i]^2
}
dat <- data.frame(x1,x2)
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Neuer Arkadasch
Sent: March 5, 2008 10:00 AM
To: [EMAIL PROTECTED]
Subject: [R
Try this:
m <- seq(-1,1,0.1)
x1 <- vector(length=length(m))
x2 <- vector(length=length(m))
for(i in seq_along(m)){
x1[i] <- m[i]
x2[i] <- m[i]^2
}
dat <- data.frame(x1,x2)
Ravi.
---
Ravi Varadhan, Ph.D.
Assistant Pro
> Date: Wed, 05 Mar 2008 15:59:59 +0100 (CET)
> From: Neuer Arkadasch <[EMAIL PROTECTED]>
> Sender: [EMAIL PROTECTED]
> Precedence: list
>
> Hello all,
>
> I am trying to use
>
> m <- seq(-1,1,0.1)
> x1 <- vector()
> x2 <- vector()
> for(i in m){
> x1[i] <- i
> x2[i] <- i^2
Davood Tofighi wrote:
> Thanks for your reply. For each condition, I will have a matrix or data
> frame of 1000 rows and 4 columns. I also have a total of 64 conditions for
> now. So, in total, I will have 64 matrices or data frames of 1000 rows and 4
> columns. The format of data I would like to
On Wed, 5 Mar 2008, Wolfgang Waser wrote:
> Dear all,
>
> I did a non-linear least square model fit
>
> y ~ a * x^b
>
> (a) > nls(y ~ a * x^b, start=list(a=1,b=1))
>
> to obtain the coefficients a & b.
>
> I did the same with the linearized formula, including a linear model
>
> log(y) ~ log(a) + b * log(x)
Hi all,
I would like to know whether there is any function in R where I can
find the cross-correlation of two or more multivariate (time series) datasets. I
tried the function ccf(), but it seems to take only two univariate series.
Please let me know.
sincerely,
sandeep
--
Sandeep Joseph PhD
Hi,
I have a survey dataset of about 2 observations
where for 2 factor variables I have about 200 missing
values each. I want to impute these using 10 possibly
explanatory variables which are a mixture of integers
and factors.
Since I was quite intrigued by the concept of rrp I
wanted to use
Hello all,
I am trying to use
m <- seq(-1,1,0.1)
x1 <- vector()
x2 <- vector()
for(i in m){
x1[i] <- i
x2[i] <- i^2
}
dat <- data.frame(x1,x2)
But I get the wrong result:
> dat
  x1 x2
1  1  1
Could someone tell me how to do this?
Thank you!
Our March-April 2008 R/S+ course schedule is now available. Please check
out this link for additional information and direct enquiries to Sue
Turner [EMAIL PROTECTED] Phone: 206 686 1578
--Can't see your city? Please email us! --
Ask for Group Discount
You could try plotting it in pieces to use less RAM.
library(zoo)
library(chron)
z <- zoo(1:10, chron(1:10))
# same as plot(z)
plot(z[1:5], ylim = range(z), xlim = range(time(z)))
lines(z[5:10])
On Wed, Mar 5, 2008 at 10:00 AM, stephen sefick <[EMAIL PROTECTED]> wrote:
> the comma-separated file
> Which effect sizes (parametric or not) could I use in order to estimate
> the amount of non-linear correlation between 2 variables?
>
> Is it possible to correct for auto-correlation within the correlated
> times series?
>
I think the starting point is to develop a model, even conceptual,
This should do it for you:
> x <- c("2564gc", "2367,GH", "2134 JHG")
> x.sep <- gsub("([[:digit:]]+)[ ,]*([[:alpha:]]+)", "\\1 \\2", x)
> # now create separate values
> strsplit(x.sep, " ")
[[1]]
[1] "2564" "gc"
[[2]]
[1] "2367" "GH"
[[3]]
[1] "2134" "JHG"
>
On 3/5/08, sun <[EMAIL PROTECTED]>
I would like to analyse a binary crossover design using the random
effects model. The probability of success is assumed to be logistic.
Suppose as an example, we have 4 subjects undergoing a crossover design,
where the outcome is either success or failure. The first two subjects
receive treatme
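One possibility (a sketch, not necessarily what the eventual reply proposed;
variable names and data are made up, and more than four subjects are
simulated just so the fit converges): a logistic random-intercept model
fitted by penalized quasi-likelihood with MASS::glmmPQL.

library(MASS)   # glmmPQL (which uses nlme)
set.seed(1)
n <- 40
dat <- data.frame(subject = factor(rep(1:n, each = 2)),
                  period  = factor(rep(1:2, times = n)),
                  treat   = factor(rep(c("A", "B", "B", "A"), length.out = 2 * n)))
dat$success <- rbinom(2 * n, 1, plogis(-0.2 + 0.8 * (dat$treat == "B")))
fit <- glmmPQL(success ~ treat + period, random = ~ 1 | subject,
               family = binomial, data = dat)
summary(fit)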
Huh. Very interesting. I haven't really worked with manipulating contrast
matrices before, save to do a priori contrasts. Could you explain the matrix
you laid out just a bit more so that I can generalize it to my case?
Chuck Cleland wrote:
>
>
>One approach would be to use glht() in t
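A sketch of the glht() route (the model and the contrast rows here are made
up; the thread's own rows are the cm2 matrix quoted above): each row of the
matrix is one linear combination of the coefficients to be tested against
zero.

library(multcomp)
set.seed(1)
dat <- data.frame(f = gl(3, 10, labels = c("L", "M", "H")), y = rnorm(30))
fit <- lm(y ~ f, data = dat)
K <- rbind("L mean" = c(1, 0, 0),
           "M mean" = c(1, 1, 0),
           "H mean" = c(1, 0, 1))
summary(glht(fit, linfct = K))   # each row tested against zero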
Hi Sun,
vec <- c("2324gz","2567 HK","3741,BF")
vec1 <- gsub('[^[:digit:]]','',vec)
vec2 <- gsub('[^[:alpha:]]','',vec)
> vec1
[1] "2324" "2567" "3741"
> vec2
[1] "gz" "HK" "BF"
Cheers
Vincenzo
---
Vincenzo Luc
Try this:
> library(gsubfn)
> x <- c("2324gz", "2567 HK", "3741,BF")
> strapply(x, "[[:digit:]]+|[[:alpha:]]+")
[[1]]
[1] "2324" "gz"
[[2]]
[1] "2567" "HK"
[[3]]
[1] "3741" "BF"
On Wed, Mar 5, 2008 at 9:51 AM, sun <[EMAIL PROTECTED]> wrote:
> I have strings containing postcodes and letters, so
the comma-separated file is 37 MB, and I get the message below.
It is a zoo object, read in this way:
# chron
> library(chron)
> fmt.chron <- function(x) {
+chron(sub(" .*", "", x), gsub(".* (.*)", "\\1:00", x))
+ }
> z1 <- read.zoo("all.csv", sep = ",", header = TRUE, FUN = fmt.chron)
and then t