In such a case your question can be split into two
stages:
The first one is to read your file into an R data.frame
(let's say), making sure that all "bad" entries are
replaced by NA (missing value). This can usually be
done, and there are several ways of doing so.
Stage two would be to check how the missi
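A minimal sketch of stage one (the preview cuts off before stage two is finished), assuming a two-column CSV like the listing quoted below, that holidays are marked "N", and a hypothetical file name:
# read the rates, turning the "N" holiday entries into NA
rates <- read.csv("fedrates.csv", header = FALSE,
                  col.names = c("date", "rate"),
                  na.strings = "N", strip.white = TRUE)
rates$date <- as.Date(rates$date, format = "%d%b%Y")  # %b assumes an English locale
sum(is.na(rates$rate))  # how many "bad" entries became NA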
On Tue 23 Oct 07, 10:56 AM, Gad Abraham <[EMAIL PROTECTED]> said:
> caffeine wrote:
>> I'd like to fit an ARMA(1,1) model to some data (Federal Reserve Bank
>> interest rates) that looks like:
>> ...
>> 30JUN2006, 5.05
>> 03JUL2006, 5.25
>> 04JUL2006, N < here!
>> 05JUL2006, 5.
Hi,
I opened up R 2.6 and now I am receiving a message that says
Fatal Error: Unable to restore saved data in .RData
When I look at the console I see a message that says
Error in .Call("R_lazyLoadDBfetch",...)
Any thoughts?
David
--
==
On Mon, 2007-10-22 at 06:24 -0700, privalan wrote:
> Dear R-users,
>
> I would like to calculate elasticities and sensitivities of each parameters
> involved in the following transition matrix:
>
> A <- matrix(c(
> sigma*s0*f1, sigma*s0*f2,
> s, v
> ), nrow=2, byr
?save.image
?load
HTH.
tc
On 10/23/07, David Kaplan <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> My apologies for a very simple question. I just downloaded
> R 2.6.0. I want to bring in all of the objects from 2.5.0
> that I see when I type ls(). I have no idea how to do that.
>
> Thanks in advan
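A minimal sketch of what ?save.image and ?load point at, assuming the 2.5.0 workspace gets saved to a file first (the file name and path are hypothetical):
# in R 2.5.0: write the current workspace to a file
save.image("~/workspace-2.5.0.RData")
# in R 2.6.0: pull those objects into the new session
load("~/workspace-2.5.0.RData")
ls()  # the old objects should now be listed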
caffeine wrote:
> I'd like to fit an ARMA(1,1) model to some data (Federal Reserve Bank
> interest rates) that looks like:
>
>
> ...
> 30JUN2006, 5.05
> 03JUL2006, 5.25
> 04JUL2006, N < here!
> 05JUL2006, 5.25
> ...
>
>
> One problem is that holidays have that "N" for their
Hi all,
My apologies for a very simple question. I just downloaded
R 2.6.0. I want to bring in all of the objects from 2.5.0
that I see when I type ls(). I have no idea how to do that.
Thanks in advance.
David
--
===
Dav
It represents the subset of the data frame partitioned by 'x$quiz'.
On 10/22/07, Matthew Dubins <[EMAIL PROTECTED]> wrote:
> Yes!! That did it!
>
> Does .sub represent the different levels of the x$quiz index?
>
>
>
> jim holtman wrote:
> Is this what you were expecting?
> by(x, x$quiz, functi
I've been using (and loving) R for quite a while now, but I have to
admit that something simple is still stumping me.
The question is how I can control the box within which a plot is
drawn, in cases where I'm controlling the aspect ratio with the "asp"
argument.
The problem comes up in pdf(
Yes!! That did it!
Does .sub represent the different levels of the x$quiz index?
jim holtman wrote:
> Is this what you were expecting?
>
>
>> by(x, x$quiz, function(.sub) t.test(percent ~ group, data=.sub))
>>
> x$quiz: 1
>
> Welch Two Sample t-test
>
> data: percent by group
Is this what you were expecting?
> by(x, x$quiz, function(.sub) t.test(percent ~ group, data=.sub))
x$quiz: 1
Welch Two Sample t-test
data: percent by group
t = 6.3228, df = 6.231, p-value = 0.0006306
alternative hypothesis: true difference in means is not equal to 0
95 percent confiden
Hi,
Following please find *some* of my data.
percent quiz group
100 1 High
100 1 High
100 1 High
25 1 Low
50 1 Low
75 1 High
50 1 Low
75 1 High
100 1 High
100 1 High
50 1
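A small self-contained sketch of the by()/t.test() idiom used elsewhere in this thread, with made-up data shaped like the sample above:
set.seed(1)
x <- data.frame(percent = round(runif(24, 25, 100)),
                quiz    = rep(1:3, each = 8),
                group   = rep(c("High", "Low"), 12))
# one Welch t-test of percent by group for each quiz
by(x, x$quiz, function(.sub) t.test(percent ~ group, data = .sub))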
Hi Keith,
it seems like a good starting position. I recommend that you spend
some time studying Pinheiro and Bates's book to see where to go from
here.
Cheers
Andrew
On Mon, Oct 22, 2007 at 02:58:51PM -0800, Keith Cox wrote:
> I have three columns of data, Xc, Trt and fish. This was a repeat
Dear all,
I'm using
R 2.6.0
Windows XP
Access 2003
Lao Script for Windows 7.02
I'm trying to use Lao with R for
1) Chart Legend Labels
2) Writing out text for HTML pages
With Lao Script for Windows it is possible to type Unicode (16-bit).
The strings I want to use are stored in an Access dat
I think what you want to write is:
by(your.df, quiz, function(.sub){
  t.test(percent ~ group, data=.sub)
})
On 10/22/07, Matthew Dubins <[EMAIL PROTECTED]> wrote:
> I've tried to use by(), but the closest I got to doing what I wanted
> was using the following:
>
> by(percent, quiz, functio
Could you post some of your data and your initial test, and explain why
it didn't work? It is difficult to figure out what the problem is
with your call to by().
Julian
Matthew Dubins wrote:
> I've tried to use by(), but the closest I got to doing what I wanted
> was using the following:
R News 4/1 gives an overview of the date/time classes.
On 10/22/07, B. Bogart <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I'm using R to visualize and explore the data produced by a software
> system. The software generates logs for many types of events. The
> software runs for days on end,
I have three columns of data, Xc, Trt and fish. This was a repeated
measures design with 6 measurements taken from each of 5 fish. Xc is the
actual measurement, Trt is the treatment, and fish is the fish number. Data
can be seen below (hopefully it is in the column format). I would like to
look
I've tried to use by(), but the closest I got to doing what I wanted
was using the following:
by(percent, quiz, function(percent) {t.test(percent~group,
data=marks.long)})
But the results it gave me weren't t.tests of percent by group according
to quiz number.
Julian Burgos wrote:
> See
I would suggest that you use POSIXct as the format for the date/time.
This will give you resolution to almost a microsecond, so you should
be able to differentiate between multiple sequential events. As for
reading it in, almost anything is reasonable; e.g., 2007/10/22
18:38:25.123456. You just h
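A minimal sketch of that suggestion, reusing the example timestamp above:
options(digits.secs = 6)  # print up to 6 digits of fractional seconds
ts <- as.POSIXct("2007/10/22 18:38:25.123456",
                 format = "%Y/%m/%d %H:%M:%OS", tz = "UTC")
ts                        # fractional seconds are retained
ts + 0.000005 > ts        # sub-second events remain distinguishable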
See by()
Matthew Dubins wrote:
> Hi all,
>
> I wrote a simple function that gives me multiple t.test results
> according to a subset variable and am wondering whether or not I
> reinvented the wheel. Observe:
>
> t.test.sub <- function (formula, data, sub, ...)
> {
> for(i in 1:ma
Hi all,
I wrote a simple function that gives me multiple t.test results
according to a subset variable and am wondering whether or not I
reinvented the wheel. Observe:
t.test.sub <- function (formula, data, sub, ...)
{
for(i in 1:max(sub))
{
print(t.test
Matrices are not made of paper! :) If you index a matrix with negative
numbers, you'll get back that matrix minus that column or row.
A quick example:
> a <- matrix(c(1:9), ncol=3)  # Create a sample matrix
> a                            # Display it
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
use the '-' feature.
> mat <- matrix(rnorm(100), nrow = 10)
# snip the second row
> mat[-2,]
# snip the third column
> mat[,-3]
# snip rows 5 and 7
> mat[-c(5,7),]
cheers
tc
On 10/23/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hi everyone,
>
> suppose I have a 2D matrix, is there a command to
Hi everyone,
suppose I have a 2D matrix, is there a command to snip out a specific
row/column and then remerge the remaining columns/rows back into a
contiguous matrix? I will need to repeat this operation quite a
bit (reverse selection).
Thanks for any insights you can offer.
Yifei
Hello all,
I'm using R to visualize and explore the data produced by a software
system. The software generates logs for many types of events. The
software runs for days on end, and can possibly generate multiple events
per second.
What is the appropriate time format for year, month, day, hour,
Hi,
What is your data?
I've tested with:
df1 <- data.frame(x=rnorm(10), y=rnorm(10))
On 22/10/2007, Diogo Alagador <[EMAIL PROTECTED]> wrote:
>
> Sorry Henrique
>
> But I'm afraid now the result was
>
>
> x y
> 1 1 2
> 2 1 2
> 3 1 2
> 4 1 2
> 5 1 2
> 6
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
> Of Katherine Jones
> Sent: Monday, October 22, 2007 2:04 PM
> To: [EMAIL PROTECTED]
> Subject: [R] Bar plot with error bars
>
> Apologies if this has been asked before. I am having trouble
> understanding
Apologies if this has been asked before. I am having trouble
understanding the R mailing list, never mind R!
I am relatively new to R having migrated from Minitab and SPSS. I
have managed to do some more complicated statistics such as
hierarchical partitioning of variance on an 80,000 record
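The replies in this thread are cut off before any code, so here is a hedged sketch of one common base-graphics way to get the bar plot with error bars named in the subject line; the means and standard deviations are made up:
means <- c(3.1, 4.5, 2.8)
sds   <- c(0.4, 0.6, 0.3)
bp <- barplot(means, names.arg = c("A", "B", "C"),
              ylim = c(0, max(means + sds) * 1.1))
# draw the error bars as capped arrows centred on each bar
arrows(bp, means - sds, bp, means + sds,
       angle = 90, code = 3, length = 0.05)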
I would love more quality online documentation around the same level
as at UCLA (http://www.ats.ucla.edu/stat/; the Stata pages are
fantastic), but I think I would like the first draft responsibility
to fall to the well qualified instructor (Hi Matt!), and those with
at least a Masters i
On Mon, 22 Oct 2007, Dieter Best wrote:
> Hello there,
>
> I am not quite sure how to interpret the output of Rprof (in the following the
> output I was staring at). I was poking around the web a little bit for
> documentation but without much success. I guess if I want to figure out what
> takes
On 10/22/2007 2:36 PM, Dieter Best wrote:
> Hello there,
>
> I am not quite sure how to interpret the output of Rprof (in the following
> the output I was staring at). I was poking around the web a little bit for
> documentation but without much success. I guess if I want to figure out what
> t
Good day, all.
I just upgraded to R 2.6.0. I re-installed my most-used packages including
RODBC to be sure I'm up-to-date. I have been working with a large insurance
claims dataset stored in a MS-Access database on a Windows machine. I have been
regularly exporting tables from R into it using s
Sorry Henrique
But I'm afraid now the result was
x y
1 1 2
2 1 2
3 1 2
4 1 2
5 1 2
6 1 2
I think we are very close to the solution. But it isn't this one!!!
Where should I look?
Sorry for the inconvenience,
Diogo André Alagador
Portugal
from: Henrique
It seems that you are using R-2.6.0 under Windows; this error has
been discussed before.
There is a patched version on CRAN. Look at
http://cran.r-project.org/bin/windows/base/rpatched.html
Hope this helps,
Rainer
Denis Aydin schrieb:
> Hello,
>
> I just wanted to save a graphic in the
Does this do what you want?
> x <- scan(textConnection('1 2 3 4 5
+ 6 7 8 9 0
+ 9 8 7 6 5
+ 4 3 2 1 0'), what=rep(list(0),5))
Read 4 records
> x
[[1]]
[1] 1 6 9 4
[[2]]
[1] 2 7 8 3
[[3]]
[1] 3 8 7 2
[[4]]
[1] 4 9 6 1
[[5]]
[1] 5 0 5 0
> # create a matrix and then 'lapply' each column to make
Hi Diogo,
teste <- function(x){
  if(!is.list(x)) stop("A list is needed")
  df_out <- matrix(0, ncol=ncol(x[[1]]), nrow=nrow(x[[1]]))
  for(i in 1:ncol(x[[1]])){
    df_out[,i] <- apply(do.call("rbind", lapply(x, "[[", i)), 2,
                        median)
  }
  return(df_out)
}
x <-
I've been searching for R code which performs the Howard Harris variant of
k-means clustering but cannot seem to locate it. The Hartigan-Wong algorithm
seems quite similar but not the same.
Does anyone know of code that would accomplish this? The algorithm looks
simple enough to implement mys
Hi Chuck,
I am running the Windows version R-2.5.1. I will upgrade and will let
you know what the deal was. Your output definitely looks right.
Thanks,
Tomas
Charles C. Berry wrote:
>
> Tomas,
>
> Are you using R-2.6.0 ??
>
> Each method works for me, producing a list of 7000 vectors.
>
> Th
Tomas,
Are you using R-2.6.0 ??
Each method works for me, producing a list of 7000 vectors.
The file I used to test this is created by:
for (i in 1:7000) cat( seq(from=i,by=1,length=19),"\n",
sep='\t',file="tmp.tab",append=TRUE)
The first line is:
> scan("tmp.tab",nlines=1)
Read 19 i
Hello there,
I am not quite sure how to interpret the output of Rprof (in the following the
output I was staring at). I was poking around the web a little bit for
documentation but without much success. I guess if I want to figure out what
takes so long in my code the 2nd table $by.total and th
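For reference, a sketch of the workflow that produces the $by.self and $by.total tables under discussion (the output file name and the profiled code are placeholders):
Rprof("prof.out")                        # start the profiler
x <- replicate(50, sort(runif(1e5)))     # some code worth profiling
Rprof(NULL)                              # stop the profiler
head(summaryRprof("prof.out")$by.total)  # the $by.total table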
Sebastian P. Luque wrote:
> Hi,
>
> Is there a more efficient way to output NA strings as empty strings in
> format.data.frame than this:
>
> ---<---cut here---start-->---
> R> tt <- data.frame(a=c(NA, rnorm(8), NA), b=c(NA, letters[1:8], NA))
> R> tt <- format(t
Hi Geertje,
You should look into linear mixed-effects models. In these you can
incorporate spatial correlation explicitly. The basic function to use
is lme(), but you should do some reading about this type of models
before jumping into it. An excellent resource is the book "Mixed
Effects Mo
I have a meta-analysis dataset which I would like to analyze as a mixed
model, where the y-variable is a measure of effect size, the random effect
is the study from which the effect size was extracted, and the fixed
effect is a categorical explanatory variable. The complication is that we
often hav
Hi Henrique,
Much thanks (obrigado!!!) for your quick answer,
I have built a function from your suggestions, which resulted in the following script:
teste=function(...){
df_out = matrix(0, ncol=ncol(..1), nrow=nrow(..1));
for(i in 1:ncol(..1)){
df_out[,i] = apply(do.call("rbind", lapply(list(...), "[[",
Hi,
Is there a more efficient way to output NA strings as empty strings in
format.data.frame than this:
---<---cut here---start-->---
R> tt <- data.frame(a=c(NA, rnorm(8), NA), b=c(NA, letters[1:8], NA))
R> tt <- format(tt, digits=5, trim=TRUE)
R> tt
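If the empty strings are only needed in written output (an assumption about the goal here), write.table()'s na argument handles it directly:
tt <- data.frame(a = c(NA, rnorm(8), NA), b = c(NA, letters[1:8], NA))
# file = "" writes to the console; NA fields come out empty
write.table(tt, file = "", sep = "\t", quote = FALSE, na = "")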
Muri Soares wrote:
>Hi everyone,
>
>I have a matrix that contains 1000 replicates of a sample of a list of values.
>I want to group each row (row=replicate) into my defined bin ranges and then
>calculate the mean and stdev for each of the bin ranges (so I will have 1000
>rows but ncol=number of
I appreciate the input. Off-list, someone suggested that I set up a
class wiki, and have this be the first sieve. I could do some quality
control there first (perhaps sending the link to this list serve at
the end of the semester for others to check over), and then post the
final manuals on the R w
Hi Chuck,
thanks for your responses. I did not ignore your suggestions - I did
try them and they did not produce what I need.
The first one produced a table with the same format as read.table would
generate, not a list of lists.
The second one gave me an error after returning Read 19 items m
Try:
packageDescription("affy")$Version
On 10/22/07, Yupu Liang <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am looking for a way to find out the version information of
> installed R packages.
> ex:
>
> >library(affy)
> >SOME_COMMAND_FOR_VERSION(affy)
>
> I know I can do either getRversion() or R.Ver
> I would like to calculate elasticities and sensitivities of each parameters
> involved in the following transition matrix:
>
> A <- matrix(c(
> sigma*s0*f1, sigma*s0*f2,
> s, v
> ), nrow=2, byrow=TRUE,dimnames=list(stage,stage))
>
> The command "eigen.analysis" a
packageDescription(pkg="affy")$Version
On 22/10/2007, Yupu Liang <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I am looking for a way to find out the version information of
> installed R packages.
> ex:
>
> >library(affy)
> >SOME_COMMAND_FOR_VERSION(affy)
>
> I know I can do either getRversion() or R.Ve
try
sessionInfo()
On 10/22/07, Yupu Liang <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am looking for a way to find out the version information of
> installed R packages.
> ex:
>
> >library(affy)
> >SOME_COMMAND_FOR_VERSION(affy)
>
> I know I can do either getRversion() or R.Version() could give me t
Hi,
I am looking for a way to find out the version information of
installed R packages.
ex:
>library(affy)
>SOME_COMMAND_FOR_VERSION(affy)
I know I can do either getRversion() or R.Version() could give me the
version of R. I assume there must be a way for me to get version
information fo
Tomas,
Three different ways to create a list of 7000 vectors from a file of 7000
rows and 19 columns are given here:
http://article.gmane.org/gmane.comp.lang.r.general/97032
which I think is what you are asking for.
If you truly need a list of 7000 lists each of length 1 containing a
Perhaps:
[[1]]
x v
1 1 1.0565171
2 2 -0.8273003
3 3 1.0614944
4 4 2.6897433
5 5 0.7371014
6 6 -1.3192476
[[2]]
x v
1 1 1.7267265
2 2 -0.2470332
3 3 -0.1667343
4 4 -0.4970180
5 5 -1.0597913
6 6 0.3742491
[[3]]
x v
1 1 1.3846207
2 2 0.7995231
3 3 -0.681851
On 22.10.2007 at 17:19, Duncan Murdoch wrote:
> On 10/22/2007 9:54 AM, Birgit Lemcke wrote:
>> Hello R user and helper!
>> I would like to get a 3d plot with coloured points.
>> I did that:
>> colors<-c(rep("2",7), rep("3",12), rep("4", 24), rep("5", 13), rep
>> ("6", 8), rep("7", 51), rep("8"
Hi everyone,
I have a matrix that contains 1000 replicates of a sample of a list of values.
I want to group each row (row=replicate) into my defined bin ranges and then
calculate the mean and stdev for each of the bin ranges (so I will have 1000
rows but ncol=number of bin ranges).
I don't kno
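A sketch of one way to do the per-row binning, assuming the replicates sit in the rows of a numeric matrix m and using hypothetical bin edges:
set.seed(1)
m <- matrix(rnorm(1000 * 50), nrow = 1000)  # 1000 replicates of 50 values each
breaks <- c(-Inf, -1, 0, 1, Inf)            # hypothetical bin edges
bin_means <- t(apply(m, 1, function(r) tapply(r, cut(r, breaks), mean)))
bin_sds   <- t(apply(m, 1, function(r) tapply(r, cut(r, breaks), sd)))
dim(bin_means)  # 1000 rows, one column per bin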
On 10/22/2007 9:54 AM, Birgit Lemcke wrote:
> Hello R user and helper!
>
> I would like to get a 3d plot with coloured points.
>
> I did that:
>
> colors<-c(rep("2",7), rep("3",12), rep("4", 24), rep("5", 13), rep
> ("6", 8), rep("7", 51), rep("8", 1), rep("9", 15), rep("10", 53), rep
> ("11",
Hi everybody,
I'm using the gmodels package to convert human readable contrasts
into the format required by R and would be grateful if someone could
confirm for me whether I've got the contrasts right in the sample
code below.
I'm working on the assumption that the contrasts are index accor
Hi Jim,
I really appreciate your help.
From the input file I have - 19 columns, 7000 rows - the scan gives me
the desired format of a list consisting of 19 lists with 7000 values each.
However I need a list of 7000 lists with 19 values each. (e.g. each row
of my input file should be a separate
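A sketch of that reshaping, assuming the input is the tab-separated tmp.tab file used elsewhere in this thread:
m <- matrix(scan("tmp.tab"), ncol = 19, byrow = TRUE)
# one element per row, each holding that row's 19 values as a list
rows <- lapply(seq_len(nrow(m)), function(i) as.list(m[i, ]))
length(rows)       # 7000
length(rows[[1]])  # 19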
Hi all,
I am not a skillful R programmer, and as I am handling large data frames
(about 3 x 300) I am in need of an efficient function.
I have 4 data frames with the same dimensions. I need to generate another data frame
with the same dimensions as the others, where each position has
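A vectorized sketch of the element-wise summary asked about here (using median, as in the teste() function posted elsewhere in the thread), with four made-up all-numeric data frames as stand-ins:
set.seed(1)
dfs <- replicate(4, as.data.frame(matrix(rnorm(3 * 300), nrow = 3)),
                 simplify = FALSE)
arr <- array(unlist(dfs),
             dim = c(nrow(dfs[[1]]), ncol(dfs[[1]]), length(dfs)))
out <- as.data.frame(apply(arr, c(1, 2), median))  # element-wise medians
names(out) <- names(dfs[[1]])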
Max Manfrin wrote:
> On 22 Oct 2007, at 10:50, Prof Brian Ripley wrote:
>
>> On Mon, 22 Oct 2007, Max Manfrin wrote:
>>
>>> Can anybody explain to me what "Estimated effects may be
>>> unbalanced" means, and what it implies for the anova analysis?
>>
>> The help page does! I suspect you int
This comparison is just as valid as it is for a regular linear mixed model,
which is all that the GAMM is in this case --- the smoothing parameters are
just variance components in your example.
In general you have to be a bit careful with generalized likelihood ratio
tests involving variance
On Fri, 19 Oct 2007, John Sorkin wrote:
>
> As Marc points out, the documentation does say that usr can only be set
> by par. Perhaps the R development team would consider modifying the code
> of plot to check for the usr parameter, and if found print an error
> message.
Unfortunately it's not
I don't know of any package that does dual-frame surveys. The survey
package does not (at the moment).
-thomas
On Mon, 22 Oct 2007, eugen pircalabelu wrote:
> Good afternoon!
>
> My question is more of an "is there a package for doing this?" question:
> inference regarding the mean, medi
Hi R user,
I am using the gamm() function of the mgcv-package. Now I would like to
decide on the random effects to include in the model. Within a GAMM
framework, is it allowed to compare the following two models
inv_1<-gamm(y~te(sat,inv),data=daten_final, random=list(proband=~1))
inv_2
Hi All,
I am now using R version 2.5.1 under Linux (Red Hat), and have
some problems using some graphical devices (e.g., png, jpeg, bmp
and so on). Can you tell/teach me how to solve the problem(s)? See
below for details.
(1) I make sure that package:grDevices is on the list using
search()
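The preview stops before the error details, so only as a guess at a first diagnostic step: check which devices this particular build of R supports.
capabilities()[c("png", "jpeg", "X11")]
# FALSE entries mean the build lacks that support; in this R version the
# png/jpeg devices on Unix also need a working X11 connection.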
Hello R user and helper!
I would like to get a 3d plot with coloured points.
I did that:
colors<-c(rep("2",7), rep("3",12), rep("4", 24), rep("5", 13), rep
("6", 8), rep("7", 51), rep("8", 1), rep("9", 15), rep("10", 53), rep
("11",3), rep("12",3), rep("13", 8), rep("14", 90), rep("15", 8), re
2007/10/22, Petr PIKAL <[EMAIL PROTECTED]>:
>
> Hi
>
> [EMAIL PROTECTED] napsal dne 22.10.2007 14:01:14:
>
> > Hi,
> >
> > I need to calculate either Error or Normalized values based on the
> following
> > principle:
> >
> > Error = Observed value - reference value
> > Normalized value = Observed V
Geertje Van der Heijden wrote:
>Hi,
>
>I have collected data on trees from 5 forest plots located within the
>same landscape. Data within the plots are spatially autocorrelated
>(calculated using Moran's I). I would like to do a ANCOVA type of
>analysis combining these five plots, but the assumpti
Just want to chime in and say that I think it's a great idea. It is,
after all, a wiki, and even the bad entries will serve as something
like stubs that can be expanded upon by others as they play with them.
When using e.g. the Gentoo wiki, I have run across some well-organized
entries (a
--- Ricardo Pietrobon <[EMAIL PROTECTED]> wrote:
> Bill, very interesting comment. However, do you
> believe that by posting
> these tutorials on a wiki they could, even if
> initially faulty, be improved
> by the community over time?
>
> Ricardo
>
As a new user to R it strikes me that some o
Dear R-users:
I have some problems working with the lme function, and I would be glad if
anyone could help me.
This kind of analysis I used to do with PROC MIXED from SAS, but I would
like to move to R, for many reasons...
So, the problem is:
Imagine that I have 3 factors:
fact_A, fact_B and fact
Dear R-users,
I would like to calculate elasticities and sensitivities of each parameters
involved in the following transition matrix:
A <- matrix(c(
sigma*s0*f1, sigma*s0*f2,
s, v
), nrow=2, byrow=TRUE,dimnames=list(stage,stage))
The command "eigen.analysis" av
Hi
[EMAIL PROTECTED] napsal dne 22.10.2007 14:01:14:
> Hi,
>
> I need to calculate either Error or Normalized values based on the
following
> principle:
>
> Error = Observed value - reference value
> Normalized value = Observed Value - Part average
>
> appraiser <- rep(rep(1:3,c(3,3,3)),10)
>
Possibly. I find it difficult to argue the case either way in the
abstract, though. I think once you see some of the outcomes, it will
become clear which are good enough for posting and which are not.
Bill Venables.
_
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
R
On 22 October 2007 at 00:43, Edna Bell wrote:
| Hello R Gurus:
|
| I would like to take a character string and split at the $ sign.
|
| I thought that strsplit would do it, but here are the results:
|
| > vv
| [1] "whine$ts1"
| > vv
| [1] "whine$ts1"
| > strsplit(vv,"$")
| [[1]]
| [1] "whine$ts
Good afternoon!
My question is more of an "is there a package for doing this?" question: inference
regarding the mean, median, regression estimates, etc., using the information
from dual-frame surveys (this methodology is based on the work of BANKIER
1986, SKINNER 1991, LOHR and RAO 2000...).
There are probably much more fancy ways to do this, but since no one
else has posted a response, here's what I would do:
> plot(seq(10, 40, 10), axes = FALSE)
> axis(1, at = 1:4, lab = LETTERS[1:4])
> axis(side = 2)
> box()
On 10/20/07, Yong Wang <[EMAIL PROTECTED]> wrote:
> Dear R-list
>
> My q
Thomas,
may I also suggest, from the Documentation>Contributed section of CRAN,
"Econometrics in R" by Grant Farnsworth
http://cran.at.r-project.org/doc/contrib/Farnsworth-EconometricsInR.pdf
(see the chapter on Time series) and, in case you can read Italian,
"Analisi delle serie storiche con R
Hi,
I need to calculate either Error or Normalized values based on the following
principle:
Error = Observed value - reference value
Normalized value = Observed Value - Part average
appraiser <- rep(rep(1:3,c(3,3,3)),10)
trail <- rep(rep(1:3,3),10)
part <- rep(1:10,c(9,9,9,9,9, 9,9,9,9,9))
value
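The preview ends before value (and any reference values) are defined, so here is a sketch of just the normalization step with a made-up value vector:
appraiser <- rep(rep(1:3, c(3, 3, 3)), 10)
trail     <- rep(rep(1:3, 3), 10)
part      <- rep(1:10, each = 9)
set.seed(1)
value      <- rnorm(90)                 # hypothetical observed values
normalized <- value - ave(value, part)  # observed minus each part's average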
Hi,
I have collected data on trees from 5 forest plots located within the
same landscape. Data within the plots are spatially autocorrelated
(calculated using Moran's I). I would like to do a ANCOVA type of
analysis combining these five plots, but the assumption that there is no
autocorrelation in
Dear Terry
Thanks for your reply,
I guess then that including only the interaction term should make sense,
since a difference in group 1 between t=0 and t=1 is already taken into
account in the baseline (all coefficients set to zero). My second group as I
understand it should have an exp(coef_g
On 10/22/07, Denis Aydin <[EMAIL PROTECTED]> wrote:
> I just wanted to save a graphic in pdf format. But it failed:
>
> Error: Invalid font type
> In addition: Warning messages:
> 1: font family not found in PostScript font database
> 2: font family not found in PostScript font database
>
> I u
Hello,
I just wanted to save a graphic in pdf format. But it failed:
Error: Invalid font type
In addition: Warning messages:
1: font family not found in PostScript font database
2: font family not found in PostScript font database
I use R 2.6.0 with all packages updated recently.
Any idea?
On 22 Oct 2007, at 10:50, Prof Brian Ripley wrote:
On Mon, 22 Oct 2007, Max Manfrin wrote:
Can anybody explain to me what "Estimated effects may be
unbalanced" means, and what it implies for the anova analysis?
The help page does! I suspect you intended to use factors, and
have no
On Mon, 22 Oct 2007, Max Manfrin wrote:
Can anybody explain to me what "Estimated effects may be
unbalanced" means, and what it implies for the anova analysis?
The help page does! I suspect you intended to use factors, and have not
done so, and also that you did not intend to replicat
I am not sure what the general rule you are looking for is. Using
deparse(substitute()) gets you the print-value of the expression (in your
case a name) for the actual argument. But if you want to go back
generations of calls, then just when do you stop?
What you can do is write fun2 to use s
I think the usual thing would be to pass substitute(x) or
deparse(substitute(x)) from the original function (fun2 in your
example).
But if you really want to, you can do
fun <- function(x) eval.parent(call("substitute", substitute(x)))
On 10/22/07, Steve Powell <[EMAIL PROTECTED]> wrote:
> Dear
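A short usage sketch of the one-liner above, showing that it recovers the original variable name through one extra level of calls:
fun  <- function(x) eval.parent(call("substitute", substitute(x)))
fun2 <- function(x) fun(x)
var <- 1:3
fun2(var)           # the symbol `var`
deparse(fun2(var))  # "var", e.g. for use as a plot label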
Can anybody explain to me what "Estimated effects may be
unbalanced" means, and what it implies for the anova analysis?
Here is an example of what I get.
> D<-expand.grid(A=c(0,1,2,3),B=c(0,1),C=c(0,1),res=c(runif(30,0,1)))
> aov(res~A*B*C,data=D)
Call:
aov(formula = res ~ A * B * C
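Following the replies above, a sketch of what was probably intended, with A, B and C coded as factors and an explicit (made-up) replication column instead of recycling res inside expand.grid():
D <- expand.grid(A = factor(0:3), B = factor(0:1), C = factor(0:1),
                 rep = 1:3)
D$res <- runif(nrow(D))
summary(aov(res ~ A * B * C, data = D))  # balanced, with factors as intended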
Dear list members,
I am writing some functions to help with printing graphs.
If I want to return the name of a variable within a function, for instance
to print the label for a graph, I know that I can use substitute:
fun=function(x) substitute(x) #plus of course some other processing
var=1:3
fu