Re: [R] residual standard error in rlm (MASS package)

2008-12-08 Thread Prof Brian Ripley
The 'r' in rlm is for 'robust', so it does not compute a residual sum of 
squares (which is not robust), but rather a robust estimate of the scale.

That *is* what the help page ?summary.rlm says:

   sigma: The scale estimate.

  stddev: A scale estimate used for the standard errors.

As to exactly what those scale estimates are, see the references or the 
code, but e.g. stddev is based on downweighting larger residuals.
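
For illustration, a minimal sketch (not part of the original reply) of how the
two scales can differ, reusing the fit rm and the data from the question quoted
below; the MAD-type value is only meant to show the effect of downweighting
large residuals, not to reproduce rlm's internal computation:

r <- resid(rm)                          # residuals from the rlm fit below
n <- length(r); p <- 2                  # observations; intercept + slope
c(classical = sqrt(sum(r^2)/(n - p)),   # ordinary residual standard error
  mad_type  = mad(r, center = 0),       # a robust scale, insensitive to outliers
  reported  = summary(rm)$sigma)        # the scale estimate summary.rlm reports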



On Mon, 8 Dec 2008, Kari Ruohonen wrote:


Hi,
I would appreciate it if someone could explain how the residual standard
error is computed for rlm models (MASS package). Usually, one would
expect to get the residual standard error by


sqrt(sum((y-fitted(fm))^2)/(n-2))


where y is the response, fm a linear model with an intercept and slope
for x and n the number of observations. This does not seem to work for
rlm models and I am wondering what obvious thing I am missing here. Here is an
example:

x<-1:100
y <-
c(2.37156056743079, 1.66644749462933, 6.33155723966817,
12.7709430358167,
11.124950273, 19.7839679181322, 15.4923741347280, 18.702397068068,
18.7599963836891, 16.5916430986993, 16.0653054434192, 25.4517287910774,
19.9306544701024, 25.3581170063305, 35.6823980984208, 25.8293557856092,
34.7021243077337, 31.5336533511445, 36.3599764020412, 44.6000402205419,
41.9899219097128, 45.4564141342995, 43.6061038794823, 48.7566542867736,
47.5504015095432, 54.8120780105412, 55.2620894365424, 53.223516997263,
59.5477081631011, 61.2390445046623, 62.3106323086734, 68.1104058608567,
62.399184797047, 73.9413640517595, 70.6710955288097, 74.5456476513766,
64.968260562374, 73.2318014155102, 73.7335636549196, 76.9362454490887,
80.2579421621043, 80.945827481932, 87.7805234941603, 90.0909966936097,
86.0620664696943, 90.3640690887434, 98.0965832886435, 96.789139334781,
102.114606626867, 98.3302535449148, 103.107825932103, 109.942412367491,
106.868253017023, 109.808738425258, 110.136050155862, 108.846488332796,
118.442973085485, 117.276921857816, 118.640871017018, 119.263784892266,
123.100214564588, 123.860590728955, 128.712228721465, 131.297848895423,
123.283516322512, 134.012585073241, 132.665302554315, 138.673423711638,
143.687124396642, 139.159598404340, 142.012045172451, 146.480644634549,
145.429104228138, 144.503524323636, 152.348091257061, 149.237135977337,
159.803973361884, 153.195835890301, 158.921034703569, 163.479578254736,
159.591944778941, 163.185119145309, 165.890510577093, 164.573471319534,
173.549321320816, 169.520130741843, 170.439532597426, 174.477604263110,
178.059609946662, 177.828073866105, 185.005760822296, 184.280998437732,
196.085419590290, 187.125508176825, 190.524627542992, 196.849299652848,
197.830377226055, 197.973198490102, 198.59328678419, 199.450725602621
)
# y originally generated with y<-2*x+rnorm(100,0,2)

library(MASS)   # for rlm()
n <- length(y)  # number of observations
fm<-lm(y~x)
rm<-rlm(y~x)
fm.r<-sqrt(sum((y-fitted(fm))^2)/(n-2))
rm.r<-sqrt(sum((y-fitted(rm))^2)/(n-2))
print(matrix(c(fm.r,summary(fm)$sigma,rm.r,summary(rm)$sigma),
ncol=2,byrow=T))

Output of this is:
[,1] [,2]
[1,] 1.900033 1.900033
[2,] 1.905847 1.595128

I.e. for the lm model the residual standard error from the summary.lm
method matches exactly sqrt(sum((y-fitted(fm))^2)/(n-2)) but that for
the summary.rlm model is somewhat smaller than
sqrt(sum((y-fitted(rm))^2)/(n-2)). I am curious what causes this
difference?

My sessionInfo()
R version 2.7.1 (2008-06-23)
x86_64-pc-linux-gnu

locale:
LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADDRESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods
base

other attached packages:
[1] MASS_7.2-44

regards, Kari Ruohonen


--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Compile Packages on vista

2008-12-08 Thread Uwe Ligges

Remove all the blank spaces in your package's name and try again.

Uwe Ligges



Dominik Locher - bondsearch.ch wrote:

Hi

 


I'm trying to compile R packages on my Vista machine (I'm very new to Vista...). I
installed Rtools and RExcel in a specific folder, c:\Rtools, and gave full
access. Unfortunately, I am still unable to compile any packages (maybe
still because of the security properties of Vista). I checked several
mailing lists/forums/websites but I did not find any solution. Does anybody
have the same problem and has found a solution?  I've got the following error:

 


##

R CMD install testPackage

installing to 'D:\Rtmp'

 


-- Making package testPackage 

  adding build stamp to DESCRIPTION

  installing R files

  installing inst files

find: `D:/Rtmp/ testPackage /doc': Permission denied

find: `D:/Rtmp/ testPackage /unitTests': Permission denied

make[2]: *** [D:/Rtmp/ testPackage /inst] Error 1

make[1]: *** [all] Error 2

make: *** [pkg- testPackage] Error 2

*** Installation of testPackage failed ***

 


Removing 'D:/Rtmp/ testPackage

Can't read D:/Rtmp/ testPackage /doc: Invalid argument at
c:\Rtools\R\R-2.8.0/bin/install line 411

Can't remove directory D:/Rtmp/ testPackage /doc: Directory not empty at
c:\Rtools\R\R-2.8.0/bin/install line 411

Can't read D:/Rtmp/ testPackage /unitTests: Invalid argument at
c:\Rtools\R\R-2.8.0/bin/install line 411

Can't remove directory D:/Rtmp/ testPackage /unitTests: Directory not empty
at c:\Rtools\R\R-2.8.0/bin/install line 411

Can't remove directory D:/Rtmp/ testPackage: Directory not empty at
c:\Rtools\R\R-2.8.0/bin/install line 411

Restoring previous 'D:/Rtmp/ testPackage'

mv: cannot move `D:/Rtmp/00LOCK/ testPackage ' to `D:/Rtmp/ testPackage':
Directory not empty

 


Many thanks for your help.

Best regards

Dominik


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Logical inconsistency

2008-12-08 Thread Wacek Kusnierczyk
Patrick Connolly wrote:
> On Mon, 08-Dec-2008 at 02:05AM +0800, Berwin A Turlach wrote:
>
> |> G'day Wacek,
> |> 
> |> On Sat, 06 Dec 2008 10:49:24 +0100
> |> Wacek Kusnierczyk <[EMAIL PROTECTED]> wrote:
> |> 
> |> []
> |> > >> there is, in principle, no problem in having a high-level language
> |> > >> perform the computation in a logically consistent way.  
> |> > >
> |> > > Is this now supposed to be a "Radio Eriwan" joke?  As another saying
> |> > > goes: in theory there is no difference between theory and practice,
> |> > > in practice there is.
> |> > 
> |> > no joke, sorry to disappoint you. 
> |> 
> |> Apparently it is, you seem to be a comedian without knowing it. :)
>
> I think this guy's a riot!  Self-effacing humour is not easy to do,
> but he's really good at it.  That wonderful phrase a bit further back
> in this thread where he referred to his 'fiercely truculent' whining
> is one deserving preservation.  
>
> It's a shame the brilliance is lost somewhat by the use of a keyboard
> that has no <shift> key.  I've found a lot of these wonderful musings
> too much effort to read.  I'm not familiar with exotic keyboard
> layouts, but perhaps the Norwegian one uses <shift> for something mere
> non-Nordics wouldn't understand.  Pity that.
>
>   

sorry.  i write so much nonsense i have no time to press the shift.  if
the lack of uppercase makes you skip my posts, the better for you.

vQ

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Logical inconsistency

2008-12-08 Thread Wacek Kusnierczyk
Berwin A Turlach wrote:
>
> I am not surprised about CS guys never learning about these issues.  As
> long as you play around with data bases (their organisation &c),
> sorting algorithms, artificial intelligence (at least when I attended a
> lecture on this) you do not need to know about these issues.  And,
> unfortunately, it seems nowadays a lot of teaching is on a
> "need-to-know" and "just-in-time" basis.  
>
> It just became criminal when CS guys who were into compiler design
> started to construct compilers that analysed the code and rearranged
> the calculations based on an analysis that assumed infinite precision
> arithmetic.  Such compilers optimised away code that was designed to
> deal with finite precision arithmetic.  I believe this was one of the
> motivations of Goldberg's article.
>
>   

you'd probably enjoy hacker's delight by warren [1], where tricks are
presented that allow you to use computer arithmetic efficiently when you
know the details of the representations.

from the foreword by g.l. steele jr:

"Many books on algorithms and data structures teach complicated
techniques for sorting and
searching, for maintaining hash tables and binary trees, for dealing
with records and pointers. They
overlook what can be done with very tiny pieces of data—bits and arrays
of bits. It is amazing what
can be done with just binary addition and subtraction and maybe some
bitwise operations; the fact
that the carry chain allows a single bit to affect all the bits to its
left makes addition a peculiarly
powerful data manipulation operation in ways that are not widely
appreciated.

Yes, there ought to be a book about these techniques. Now it is in your
hands, and it's terrific. If you
write optimizing compilers or high-performance code, you must read this
book."


vQ

[1] http://www.hackersdelight.org/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Multivariate kernel density estimation

2008-12-08 Thread Jeroen Ooms

I would like to estimate a 95% highest density area for a multivariate
parameter space (in the context of ANOVA). Unfortunately I have only
experience with univariate kernel density estimation, which is remarkably
easier :)

Using Gibbs sampling, I have sampled from a posterior distribution of an ANOVA model
with k means (mu) and 1 common residual variance (s2). The means are
independent of each other, but conditional on the residual variance. So now I
have a data frame of, say, 10,000 iterations, and k+1 parameters.

I am especially interested in the posterior distribution of the mu
parameters, because I want to test the support for an inequality constrained
model (e.g. mu1 > mu2 > mu3). I wish to derive the multivariate 95% highest
density parameter space for the mu parameters. For example, if I had a
posterior distribution with 2 means, this should somehow result in the
circle or ellipse that contains the 95% highest density area. 

Is something like this possible in R? All tips are welcome.
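
One possible approach (an editorial sketch, not from the thread): estimate a 2-D
kernel density over the posterior draws with MASS::kde2d and keep the highest
density region that covers roughly 95% of the draws; mu1 and mu2 below are
stand-ins for two columns of the posterior sample:

library(MASS)                         # for kde2d()
set.seed(1)
mu1 <- rnorm(10000, 0, 1)             # replace with your posterior draws
mu2 <- rnorm(10000, 0.5, 1)
dens <- kde2d(mu1, mu2, n = 100)      # bivariate kernel density on a grid
ix <- findInterval(mu1, dens$x)       # grid cell of each draw
iy <- findInterval(mu2, dens$y)
h  <- dens$z[cbind(ix, iy)]           # density height at each draw
cut95 <- quantile(h, 0.05)            # height above which ~95% of draws lie
plot(mu1, mu2, pch = ".", col = "grey")
contour(dens, levels = cut95, drawlabels = FALSE, add = TRUE)  # ~95% HDR outline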
-- 
View this message in context: 
http://www.nabble.com/Multivariate-kernel-density-estimation-tp20894766p20894766.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Vars package - specification of VAR

2008-12-08 Thread Pfaff, Bernhard Dr.
Hello Bernd,

by definition, a VAR includes only **lagged endogenous** variables.
You might want to consider SVAR() contained in the same package, or fit a
VECM (see CRAN package 'urca').

Best,
Bernhard 
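
Regarding the lm() route mentioned at the end of the question below, a rough
sketch (editorial, with made-up series x and y): each equation with a
contemporaneous regressor can be fitted directly, e.g.

set.seed(1)
n <- 200
x <- arima.sim(list(ar = 0.5), n)          # illustrative data only
y <- 0.3 * x + arima.sim(list(ar = 0.4), n)
d <- data.frame(x = x[-1], y = y[-1],      # values at time t
                x1 = x[-n], y1 = y[-n])    # values at time t-1
fit <- lm(x ~ y + y1 + x1, data = d)       # x(t) on y(t), y(t-1), x(t-1)
coef(fit)

Keep in mind the usual caveat that including a contemporaneous regressor makes
this a structural rather than a reduced-form equation.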

>Hi useRs,
>
>Been estimating a VAR with two variables, using VAR() of the 
>package "vars".
>
>Perhaps I am missing something, but how can I include the 
>present time t variables, i.e. for the set of equations to be:
>
>x(t) = a1*y(t) + a2*y(t-1) + a3*x(t-1) + ...
>Y(t) = a1*x(t) + a2*x(t-1) + a3*y(t-1) + ...
>
>The types available in function VAR() allow for seasonal 
>dummies, time trends and constant term.
>
>But the terms
>
>a1*y(t)
>a1*x(t)
>
>always seem to be excluded by default, thus only lagged 
>variables enter the right side.
>
>How can I specify VAR() such that a1*y(t) and a1*x(t) are included? 
>Or would I have to estimate with lm() instead?
>
>Many thanks in advance,
>
>Bernd
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Permutation exact test to compare related series

2008-12-08 Thread LE PAPE Gilles

Hi all,
is there a way with R to perform an exact permutation test to replace the 
Wilcoxon test to compare paired series and/or to perform pairwise multiple 
comparisons for related series after a Friedman test?

Thanks
Gilles

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Taylor diagram

2008-12-08 Thread Jim Lemon

Csima Gabriella wrote:

Dear Everyone,

I wrote a Taylor diagram program in R, but it was not very general, so I was 
happy to find the Taylor.diagram in the plotrix package.
On the other hand I can find many problems with the pos.cor=TRUE case, in other 
words, when we use only the first quarter of the space (positive correlations). 
(When we choose pos.cor=FALSE, the program seems perfect to me.)

1.There is only one line around the (0,0) point:

I looked at  this part of the program:

if (ref.sd) {
xcurve <- cos(seq(0, pi/2, by = 0.01)) * sd.r
ycurve <- sin(seq(0, pi/2, by = 0.01)) * sd.r
lines(xcurve, ycurve)
}

I have tried this part of the program (using sd.r=0 and naturally using a 
plot(..)); it gave only one quarter circle and not a sequence of them.

2. No lines helping to read the correlation easily at all.


Could you help me correct the program to use as many sd, corr and rmse 
lines as I would like?
Thank you very much for your help in advance!

  

Hi Gabriella,
Olivier Eterradossi wrote the version you prefer, and he may be able to 
send you a fix. If not, I can probably have a look at it tomorrow night 
and work something out. I'll check to see if you have gotten a reply then.


Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] About adf.test

2008-12-08 Thread Pfaff, Bernhard Dr.
Hello Kamlesh,

have a look at: fUnitRoots, tseries, urca, uroot

Best,
Bernhard
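
A minimal sketch of the tseries route (editorial; fUnitRoots, urca and uroot
have their own equivalents):

install.packages("tseries")   # once, if not yet installed
library(tseries)
x <- rnorm(1000)
adf.test(x)                   # augmented Dickey-Fuller test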

>
>Dear sir,
>
>   I am a new user of the R statistical package. I want to perform
>adf.test (the augmented Dickey-Fuller test); which packages do I need 
>to install in
>order to perform it? I am getting the following message on my monitor:
>x<-rnorm(1000)
>> adf.test(x)
>Error: could not find function "adf.test"
>
>I am waiting for your response.
>
>Kamlesh Kumar.
>
>-- 
>  Kamlesh Kumar
>  Appt. No. - QQ420,
>  Vila Universitaria, Campus de la UAB,
>  08193 Bellatera, Cerdanyola del Valles,
>  Barcelona, Spain.
>
>   [[alternative HTML version deleted]]
>
>__
>R-help@r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide 
>http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>
*
Confidentiality Note: The information contained in this ...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get Greenhouse-Geisser epsilons from anova?

2008-12-08 Thread Skotara

Thank you for your help!
Sorry for bothering you again...
I still have trouble combining within and between subject factors.
Interactions of within factors and D having only 2 levels work well.

How can I get the main effect of D? I have tried anova(mlmfitD, mlmfit). 
With D having 3 levels I would expect the dfs to be 2 and 33. However, 
the output states 84,24??


As long as the between factor has only 2 levels the between/within 
interactions fit well with SPSS, but if D has 3 levels, the mismatch is 
immense.
If I calculate the within effects with myma having  not 12 subjects from 
one group but for example 24 from 2 groups, the output treats it as if 
all subjects came from the same group, for example for main effect A the 
dfs are 1 and 35. SPSS puts out 1 and 33, which is what I would have 
expected...




Peter Dalgaard wrote:

Nils Skotara wrote:

Thank you, this helped me a lot!
All within effects and interactions work well!

Sorry, but I still could not get how to include the between factor..
If I include D with 2 levels, then myma is 24 by 28. (another 12 by 
28 for the

second group of subjects.)
mlmfitD <- lm(myma~D) is no problem, but whatever I tried afterwards 
did not seem logical to me.
I am afraid I do not understand how to include the between factor. I 
cannot include ~D into M or X because it has length 24 whereas the other
factors have 28... 


Just do the same as before, but comparing mlmfitD to mlmfit:

anova(mlmfitD, mlmfit, X=~A+B, M=~A+B+C)
# or anova(mlmfitD, mlmfit, X=~1, M=~C), as long as things are balanced


gives the D:C interaction test (by testing whether the C contrasts 
depend on D). The four-factor interaction is


anova(mlmfitD, mlmfit, X=~(A+B+C)^2, M=~A*B*C)





Zitat von Peter Dalgaard <[EMAIL PROTECTED]>:


Skotara wrote:

Dear Mr. Dalgaard,

thank you very much for your reply, it helped me to progress a bit.

The following works fine:
dd <- expand.grid(C = 1:7, B= c("r", "l"), A= c("c", "f"))
myma <- as.matrix(myma) #myma is a 12 by 28 list
mlmfit <- lm(myma~1)
mlmfit0 <- update(mlmfit, ~0)
anova(mlmfit, mlmfit0, X= ~C+B, M = ~A+C+B, idata = dd,
test="Spherical"), which tests the main effect of A.
anova(mlmfit, mlmfit0, X= ~A+C,  M = ~A+C+B, idata = dd,
test="Spherical"), which tests the main effect of B.


However, I can not figure out how this works for the other effects.
If I try:
anova(mlmfit, mlmfit0, X= ~A+B,  M = ~A+C+B, idata = dd,  
test="Spherical")


I get:
Error in function (object, ..., test = c("Pillai", "Wilks",
"Hotelling-Lawley",  :
   residuals have rank 1 < 4

dd$C is not a factor with that construction. It works for me after

dd$C <- factor(dd$C)

(The other message is nasty, though. It's slightly different in 
R-patched:


 > anova(mlmfit, mlmfit0, X= ~A+B, M = ~A+C+B, idata = dd,
test="Spherical")
Error in solve.default(Psi, B) :
   system is computationally singular: reciprocal condition number =
2.17955e-34

but it shouldn't happen...
Looks like it is a failure of the internal Thin.row function. Ick!
)


I also don't know how I can calculate the various interactions..
My read is I should change the second argument mlmfit0, too, but I 
can't

figure out how...


The "within" interactions should be straightforward, e.g.

M=~A*B*C
X=~A*B*C-A:B:C

etc.

The within/between interactions are obtained from the similar tests of
the between factor(s)

e.g.

mlmfitD <- lm(myma~D)

and then

anova(mlmfitD, mlmfit,)








__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Re : Taylor diagram

2008-12-08 Thread Olivier ETERRADOSSI

Hi Gabriela,
as suggested by Jim I'll have a look as soon as possible  (but only 
tomorrow I'm afraid).
Maybe you could provide me with a dataset of yours which shows the 
problem (use direct posting to my own address 
[EMAIL PROTECTED], instead of posting the data to the R list)?

Thanks,
Olivier

--
Olivier ETERRADOSSI
Maître-Assistant
CMGD / Equipe "Propriétés Psycho-Sensorielles des Matériaux"
Ecole des Mines d'Alès
Hélioparc, 2 av. P. Angot, F-64053 PAU CEDEX 9
tel std: +33 (0)5.59.30.54.25
tel direct: +33 (0)5.59.30.90.35 
fax: +33 (0)5.59.30.63.68

http://www.ema.fr

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Logical inconsistency

2008-12-08 Thread Berwin A Turlach
G'day Wacek,

On Mon, 08 Dec 2008 13:13:33 +0100
Wacek Kusnierczyk <[EMAIL PROTECTED]> wrote:

> Berwin A Turlach wrote:
> >
> > I am not surprised about CS guys never learning about these
> > issues.  As long as you play around with data bases (their
> > organisation &c), sorting algorithms, artificial intelligence (at
> > least when I attended a lecture on this) you do not need to know
> > about these issues.  And, unfortunately, it seems nowadays a lot of
> > teaching is on a "need-to-know" and "just-in-time" basis.  
> >
> > It just became criminal when CS guys who were into compiler design
> > started to construct compilers that analysed the code and rearranged
> > the calculations based on an analysis that assumed infinite
> > precision arithmetic.  Such compilers optimised away code that was
> > designed to deal with finite precision arithmetic.  I believe this
> > was one of the motivations of Goldberg's article.
> >
> >   
> 
> you'd probably enjoy hacker's delight by warren [1], where tricks are
> presented that allow you to use computer arithmetic efficiently when
> you know the details of the representations.

Unlikely, these days I am less interested in hacks but more in portable
implementations.  If it were to show that some things I believe to be
portable are actually not, then it could be of interest after all.

Cheers,

Berwin

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Legend and Main Title positioning

2008-12-08 Thread Antje
Thank you very much for your help! That code does exactly what I was looking 
for (I don't have experience with lattice yet).


Ciao,
Antje



[EMAIL PROTECTED] wrote:

layout(matrix(c(1,2,3,4), nrow = 2, byrow = TRUE))
plot(rnorm(100))
plot(rnorm(200))
plot(rnorm(300))
plot(rnorm(400))

Now, I'd like to create a legend below each plot and generate a common 
title.

How can I do that?


If you are laying plots out in grids like this then lattice graphics are 
generally the way to go, but here's a solution based upon base graphics. 
The trick is to include extra plotting space in your layout for the 
legends.  The code is messy, since it requires you to manually specify 
which cell of the layout to plot into, but I'm sure given some thought you 
can automate this.


#4 space for plots, 4 for legends
layout(matrix(1:8, nrow = 4, byrow = TRUE), heights=rep(c(3,1),4))

#Check the layout looks suitable
layout.show(8)

#Avoid clipping problems, and create space for your title
par(xpd=TRUE, oma=c(0,0,2,0))

#First plot
plot(rnorm(100))

#Move down and plot the first legend
par(mfg=c(2,1))
legend(0,0, legend="foo", pch=1)

#Repeat for the other plots and legends
par(mfg=c(1,2))
plot(rnorm(200))
par(mfg=c(2,2))
legend(0,0, legend="bar", pch=1)

par(mfg=c(3,1))
plot(rnorm(300))
par(mfg=c(4,1))
legend(0,0, legend="baz", pch=1)

par(mfg=c(3,2))
plot(rnorm(400))
par(mfg=c(4,2))
legend(0,0, legend="quux", pch=1)

#Title for all the plots
title(main="4 plots", outer=TRUE)


Regards,
Richie.

Mathematical Sciences Unit
HSL



ATTENTION:

This message contains privileged and confidential information intended
for the addressee(s) only. If this message was sent to you in error,
you must not disseminate, copy or take any action in reliance on it and
we request that you notify the sender immediately by return email.

Opinions expressed in this message and any attachments are not
necessarily those held by the Health and Safety Laboratory or any person
connected with the organisation, save those by whom the opinions were
expressed.

Please note that any messages sent or received by the Health and Safety
Laboratory email system may be monitored and stored in an information
retrieval system.



Scanned by MailMarshal - Marshal's comprehensive email content security
solution. Download a free evaluation of MailMarshal at www.marshal.com




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Logical inconsistency

2008-12-08 Thread Berwin A Turlach
G'day Wacek,

On Sun, 07 Dec 2008 21:09:36 +0100
Wacek Kusnierczyk <[EMAIL PROTECTED]> wrote:

> g'evening.

Not the done thing.
 
> c'mon, a person from central europe can't possibly be unaware of this
> joke.  

I wouldn't call Norway central Europe, but then I also guess that you
are not really Norwegian.

> i know of a 60-page book collecting radio erewan jokes.  deadly
> serious.

That would make you more of an expert in these kind of jokes than me.

> > Classical Radio Eriwan stuff, and I thought this class of jokes have
> > died out.
> 
> apparently not, as long as there are people able to find them in
> whatever they read.

Not in whatever they read.  There are necessary ingredients as you, as
an expert on Radio Eriwan, should know.  There has to be the phrase "in
principle" and something self-contradictory.  Plenty of things are
written that could not possibly be interpreted as Radio Eriwan jokes.
 
> > First, I think it is a rather ambitious jump in logic that a user is
> > interested in stats because the user wants to see whether "8.3 -
> > 7.3 > 1" is true.  The only indication of the user being interested
> > in stats would be that the user used R, which is used primarily for
> > statistical analysis; 
> 
> that was my reasoning, good god!

And as I said, probably a huge jump in logic and faith.
 
> i'd agree that it's not a drawback to know how arithmetic is actually
> done on computers (but hey, there is no unique standard here).  but
> many people i know (which certainly makes a poor statistic) would
> prefer to be abstracted away from having to know that 8.8-7.8 > 1 is
> true, less so why it is true.

Well, these people should probably stick to pencil and paper.  Or use
an appropriate tool.  As I mentioned to you in another thread, a good
handyman does not blame his/her tools but selects the correct tool for
the job.

If I need dynamic memory allocation, I do not choose to use FORTRAN77
but some other language.  If I need infinite precision arithmetic, then
I do not choose R/Matlab/Scilab/Octave but some other tool. 
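
For readers following the thread, a small R illustration of the finite-precision
behaviour mentioned in the quoted text (editorial; results are typical of IEEE
double precision):

8.8 - 7.8 > 1                    # TRUE on common hardware
print(8.8 - 7.8, digits = 17)    # e.g. 1.0000000000000009
isTRUE(all.equal(8.8 - 7.8, 1))  # TRUE: comparison with a tolerance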

> not difficult at all to define an arithmetic where 1==0 is true,
> document it in a man page, and refer to it when complaints come. 
> surely, an exotic example.

You forget the bit that you first would have to convince some people
that this is a useful arithmetic and have them start using it.  And I
am not convinced about the "not difficult" part anyway.
 
> i know of cs guys who either have forgotten, or have never learned
> (!). 

I am not surprised about CS guys never learning about these issues.  As
long as you play around with data bases (their organisation &c),
sorting algorithms, artificial intelligence (at least when I attended a
lecture on this) you do not need to know about these issues.  And,
unfortunately, it seems nowadays a lot of teaching is on a
"need-to-know" and "just-in-time" basis.  

It just became criminal when CS guys who were into compiler design
started to construct compilers that analysed the code and rearranged
the calculations based on an analysis that assumed infinite precision
arithmetic.  Such compilers optimised away code that was designed to
deal with finite precision arithmetic.  I believe this was one of the
motivations of Goldberg's article.

Cheers,

Berwin

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Reading txt file in R to run Random Intercept Model

2008-12-08 Thread S Ellison


>>> "Anamika Chaudhuri" <[EMAIL PROTECTED]> 08/12/2008 03:46:34 >>>
>I am using a random intercept model with SITEID as random and NAUSEA
as
>outcome.

Hardly surprising ;-)


More seriously, your data set fragment had only one level for SITEID. I
assume there were actually more levels?

S



***
This email and any attachments are confidential. Any use...{{dropped:8}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Taylor diagram

2008-12-08 Thread Csima Gabriella
Dear Everyone,

I wrote a Taylor diagram program in R, but it was not very general, so I was 
happy to find the Taylor.diagram in the plotrix package.
On the other hand I can find many problems with the pos.cor=TRUE case, in other 
words, when we use only the first quarter of the space (positive correlations). 
(When we choose pos.cor=FALSE, the program seems perfect to me.)

1.There is only one line around the (0,0) point:

I looked at  this part of the program:

if (ref.sd) {
xcurve <- cos(seq(0, pi/2, by = 0.01)) * sd.r
ycurve <- sin(seq(0, pi/2, by = 0.01)) * sd.r
lines(xcurve, ycurve)
}

I have tried this part of the program (using sd.r=0 and naturally using a 
plot(..)); it gave only one quarter circle and not a sequence of them.

2. No lines helping to read the correlation easily at all.


Could you help me correct the program to use as many sd, corr and rmse 
lines as I would like?
Thank you very much for your help in advance!

Gabriella
[EMAIL PROTECTED] 
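
An editorial sketch (not part of the original message) of a workaround while the
function is looked at: extra arcs and correlation rays can be overlaid on the
pos.cor = TRUE plot with base graphics, reusing the xcurve/ycurve construction
shown above; the radius R is only a rough guess at the plot extent.

library(plotrix)
set.seed(1)
ref <- rnorm(50)
mod <- ref + rnorm(50, sd = 0.4)    # made-up reference and model series
taylor.diagram(ref, mod)
R <- 1.5 * sd(ref)                  # assumed plot radius
a <- seq(0, pi/2, by = 0.01)
for (s in pretty(c(0, R))[-1])      # arcs at several standard deviation values
    lines(cos(a) * s, sin(a) * s, lty = 3, col = "grey")
for (r in c(0.3, 0.6, 0.9))         # rays at selected correlations (angle = acos(r))
    segments(0, 0, R * r, R * sqrt(1 - r^2), lty = 3, col = "grey")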

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Permutation exact test to compare related series

2008-12-08 Thread Mike Lawrence
Here's a paired-samples permutation test.  Note that this does not obtain a
p-value based on all possible randomization orders, a test that by some
nomenclatures would be called a Randomization test and which I presume is
what you desire given the use of the word "exact" in your query.
Instead, the permutation test below merely reshuffles the data randomly
many times; for small data sets this may achieve all possible randomization
orders, but for larger data sets it will more likely achieve only a subset
of all possible randomization orders. In any case, employing a large number
of randomization orders (set the "iterations" variable to something like
1e5) should provide a good approximation to the exact p-value. You can
always run the test several times to ensure all results are approximately
equal.
pairperm <- function(data1,data2,iterations,tails){
obs_diff = data1-data2 #create a vector of the difference scores
sim_diff = rep(0,length(data1)) #create an empty vector to be filled later
flip = rep(0,length(data1)) #create an empty vector to be filled later
all_data = as.data.frame(cbind(obs_diff,sim_diff,flip)) #create a table with
  #your difference scores and empty vectors as columns
obs_mean_diff = mean(all_data$obs_diff) #measure the observed mean
  #difference score
count = 1 #initialize counting variable
for(i in 1:iterations){ #start shuffling loop
all_data$flip = sample(c(-1,1),length(data1), replace=T) #randomly assign
  #cases to be flipped or not
all_data$sim_diff=all_data$obs_diff*all_data$flip #this flips the sign of
  #those cases marked for flip
sim_mean_diff = mean(all_data$sim_diff) #calculate the simulated mean
  #difference score
if(tails == 1){ #if one-tailed
if(obs_mean_diff

On Mon, 8 Dec 2008, LE PAPE Gilles wrote:

> Hi all,
> is there a way with R to perform an exact permutation test to replace the
> wilcoxon test to compare paired series and/or to perform pairwise multiple
> comparisons for related series after a Friedman test ?
> Thanks
> Gilles
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
www.thatmike.com

Looking to arrange a meeting? Do so at:
http://www.timetomeet.info/with/mike/

~ Certainty is folly... I think. ~

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Questions on the results from glmmPQL(MASS)

2008-12-08 Thread Fowler, Mark
Ben Bolker's response to a glmmPQL question below raises a question. 

Does the issue of bias with binomial data reported by Breslow (2003)
remain valid with respect specifically to Ripley's treatment of PQL in
glmmPQL? Breslow makes no reference to this particular implementation.
He does discuss that of SAS GLIMMIX, but it does not work exactly as
glmmPQL. I've compared results of binomial models between these two
approaches, and they usually give compatible results. But they can
diverge markedly in enough cases that I wish I understood just how they
differ, so I wonder if relative vulnerability to bias could be involved.

BTW Ben refers Zhijie to a separate user group that focuses on mixed
models. I knew nothing of this group. Following through on the link I
found their archive, which included a fairly extensive thread on a
question I posed to the regular R group in October. My question was
forwarded, by Ben Bolker in fact (Wald F tests thread), for which I'm
grateful. But I'm embarrassed to say I only learned of the thread, even
though I initiated it, because of this email. I just assumed no
responses, other than R-News, and that was mostly questions to me about
glmmPQL, rather than attempts to answer my own question. I'm clearly not
the only one unaware of the mixed-models group, and a very sad choice
for asking questions about glmmPQL.


>   Mark Fowler
Population Ecology Division
>   Bedford Inst of Oceanography
>   Dept Fisheries & Oceans
>   Dartmouth NS Canada
B2Y 4A2
Tel. (902) 426-3529
Fax (902) 426-9710
Email [EMAIL PROTECTED]
Home Tel. (902) 461-0708
Home Email [EMAIL PROTECTED]


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Ben Bolker
Sent: December 6, 2008 4:10 PM
To: r-help@r-project.org
Subject: Re: [R] Questions on the results from glmmPQL(MASS)




zhijie zhang wrote:
> 
> Dear Rusers,
>I have used R,S-PLUS and SAS to analyze the sample data "bacteria" 
> in MASS package. Their results are listed below.
> I have three questions, anybody can give me possible answers?
> Q1:From the results, we see that R get 'NAs'for AIC,BIC and logLik, 
> while S-PLUS8.0 gave the exact values for them. Why?
> 

This is a philosophical difference between S-PLUS and R.
Since glmmPQL uses quasi-likelihood, technically there is no
log-likelihood (hence no AIC nor BIC, which are based on the
log-likelihood) for this model -- the argument is that one is limited to
looking at Wald tests (testing the Z- or t-statistics, i.e. parameter
estimates divided by estimated standard errors) for inference in this
case.


zhijie zhang wrote:
> 
> Q2: The model to analyse the data is logity=b0+u+b1*trt+b2*I(week>2), 
> but the results for Random effects in R/SPLUS confused me. SAS may be
clearer.
> Random effects:
>  Formula: ~1 | ID
>(Intercept)  Residual
> StdDev:1.410637 0.7800511
>   Which is the random effect 'sigma'? I think it is "1.410637", but 
> what does "0.7800511" mean? That is, i want ot know how to explain/use

> the above two data for Random effects.
> 

The (Intercept) random effect is the variance in intercept across
grouping factors .
The residual (0.78) is (I believe) the individual-level error estimated
for the underlying linear mixed model -- you can probably ignore this.



zhijie zhang wrote:
> 
> Q3:In SAS and other softwares, we can get *p*-values for the random 
> effect 'sigma', but i donot see the *p*-values in the results of 
> R/SPLUS. I have used attributes() to look for them, but no *p* values.

> Anybody knows how to get *p*-values for the random effect 'sigma',.
>   Any suggestions or help are greatly appreciated.
> #R Results:MASS' version 7.2-44; R version 2.7.2
> library(MASS)
> summary(glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,family = 
> binomial, data = bacteria))
> 
> Linear mixed-effects model fit by maximum likelihood
>  Data: bacteria
>   AIC BIC logLik
>NA  NA NA
> 
> Random effects:
>  Formula: ~1 | ID
> (Intercept)  Residual
> StdDev:1.410637 0.7800511
> 
> Variance function:
>  Structure: fixed weights
>  Formula: ~invwt
> Fixed effects: y ~ trt + I(week > 2)
>                     Value Std.Error  DF   t-value p-value
> (Intercept)      3.412014 0.5185033 169  6.580506  0.0000
> trtdrug         -1.247355 0.6440635  47 -1.936696  0.0588
> trtdrug+        -0.754327 0.6453978  47 -1.168779  0.2484
> I(week > 2)TRUE -1.607257 0.3583379 169 -4.485311  0.0000
>  Correlation:
> (Intr) trtdrg trtdr+
> trtdrug -0.598
> trtdrug+-0.571  0.460
> I(week > 2)TRUE -0.537  0.047 -0.001
> 
> #S-PLUS8.0: The results are the same as R except the following:
>      AIC      BIC    logLik
> 1113.622 1133.984 -550.8111
> 
> #SAS9.1.3
> proc nlmixed data=b;
>  parms b0=-1 b1=1

[R] Transforming a string to a variable's name? help me newbie...

2008-12-08 Thread tsunhin wong
Dear all,

I'm a newbie in R.
I have a 45x2x2x8 design.
A dataframe stores the metadata of trials. And each trial has its own
data file: I used "read.table" to import every trial into R as a
dataframe (variable).

Now I dynamically ask R to retrieve trials that fit certain selection
criteria, so I use "subset", e.g.
tmptrialinfo <- subset(trialinfo, (Subject==24 & Filename=="v2msa8"))

The name of the dataframe / variable of an individual trial can be
obtained using:
paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
Then I get a string:
"t24v2msa8.gz"
which is of the exact same name of the dataframe / variable of that
trial, which is:
t24v2msa8.gz

Can somebody tell me how can I change that string (obtained from
"paste()" above) to be a usable / manipulable variable name, so that I
can do something, such as:
(1)
tmptrial <- trialcompute(trialextract(
paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
,tmptrialinfo[1,32],secs,sdm),secs,binsize)
instead of hardcoding:
(2)
tmptrial <- 
trialcompute(trialextract(t24v2msa8.gz,tmptrialinfo[1,32],secs,sdm),secs,binsize)

Currently, 1) doesn't work...

Thanks in advance for your help!

Regards,

  John

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R - how to measure the distance?

2008-12-08 Thread Olivier ETERRADOSSI

Hi,
I'm not sure that I fully understand your problem, so forgive me if my
suggestion is irrelevant:
maybe you should have a look at the function geod.dist in the oce package.
Hope this helps. Olivier


porzycka wrote:
> 
> Hello,
> I have been working with R for 3 weeks. I have one problem. I could not
> find the answer to it on web pages. I do not know how to measure the
> shortest distance between a point (Lon/Lat) and a line. I used the spatstat package
> and distmap() but I want to calculate these distances only for particular
> points without generating a distance map.
> 
> Thank you for all hints.
> S.Porzycka
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 
View this message in context: 
http://www.nabble.com/R---how-to-measure-the-distance--tp20879418p20897078.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Permutation exact test to compare related series

2008-12-08 Thread David Winsemius
Perhaps you will find useful code in the examples for friedman_test 
within package coin. They offer an implementation of the 
Wilcoxon-Nemenyi-McDonald-Thompson test.


--
David Winsemius
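
To complement the coin suggestion, a self-contained sketch of an exact paired
(sign-flip) permutation test for the mean difference, enumerating all 2^n sign
assignments; editorial, and only practical for small n:

exact.paired.perm <- function(x, y) {
    d   <- x - y
    n   <- length(d)
    obs <- mean(d)
    signs <- as.matrix(expand.grid(rep(list(c(-1, 1)), n)))  # all 2^n sign flips
    sims  <- as.vector(signs %*% d) / n                      # mean under each flip
    mean(abs(sims) >= abs(obs))                              # two-sided exact p-value
}
set.seed(1)
x <- rnorm(10); y <- x + rnorm(10, mean = 0.5)
exact.paired.perm(x, y)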


On Dec 8, 2008, at 6:15 AM, LE PAPE Gilles wrote:


Hi all,
is there a way with R to perform an exact permutation test to  
replace the wilcoxon test to compare paired series and/or to perform  
pairwise multiple comparisons for related series after a Friedman  
test ?

Thanks
Gilles

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] example of gladeXML - RGtk2

2008-12-08 Thread Cleber Nogueira Borges

hello all,


where can I find an example or tutorial for the RGtk2 package?
I would like to know about the gladeXML functions in R.

thanks in advance

Cleber Borges

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Transforming a string to a variable's name? help me newbie...

2008-12-08 Thread David Winsemius


On Dec 8, 2008, at 10:11 AM, tsunhin wong wrote:


Dear all,

I'm a newbie in R.
I have a 45x2x2x8 design.
A dataframe stores the metadata of trials. And each trial has its own
data file: I used "read.table" to import every trial into R as a
dataframe (variable).

Now I dynamically ask R to retrieve trials that fit certain selection
criteria, so I use "subset", e.g.
tmptrialinfo <- subset(trialinfo, (Subject==24 & Filename=="v2msa8"))

The name of the dataframe / variable of an individual trial can be
obtained using:
paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
Then I get a string:
"t24v2msa8.gz"
which is of the exact same name of the dataframe / variable of that
trial, which is:
t24v2msa8.gz




Can somebody tell me how can I change that string (obtained from
"paste()" above) to be a usable / manipulable variable name, so that I
can do something, such as:
(1)
tmptrial <- trialcompute(trialextract(
paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
,tmptrialinfo[1,32],secs,sdm),secs,binsize)
instead of hardcoding:
(2)
tmptrial <-  
trialcompute 
(trialextract(t24v2msa8.gz,tmptrialinfo[1,32],secs,sdm),secs,binsize)




It may have something to do with the semantics of trialextract about  
which most useRs will know nothing (and I was unsuccessful in finding  
a function by that name with an Rsitesearch). It seems strange that a  
function would accept an unquoted reference to what is obviously a  
file name. Perhaps you should be offering that code for inspection  
rather than asking how to make a simple assignment. It is possible  
that what you are really asking is answered by FAQ 7.21:


http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-can-I-turn-a-string-into-a-variable_003f


Currently, 1) doesn't work...


Saying "doesn't work" is not specific enough. What exactly was the  
error message?



Thanks in advance for your help!

Regards,

 John

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] multinomial losgitic regression--vglm()

2008-12-08 Thread Xin Shi
Dear:

 

I am trying to run a multinomial logistic regression using vglm in the VGAM package. 
However, I wonder for how many levels of the response variable this command is 
suitable. The examples I found via a Google search work for 3 levels, say 1, 2, 3. 
However, my response variable has more than 3 levels. Will it work for my case, 
or is there an alternative approach?

 

Many thanks!

 

Xin
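
A quick editorial sketch with made-up data suggesting that multinomial() is not
limited to 3 response levels (please check the VGAM documentation for the
authoritative statement):

library(VGAM)
set.seed(1)
d <- data.frame(x = rnorm(200),
                y = factor(sample(letters[1:4], 200, replace = TRUE)))  # 4 levels
fit <- vglm(y ~ x, family = multinomial(), data = d)
coef(fit, matrix = TRUE)   # one column of coefficients per non-reference level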


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] ARMA models

2008-12-08 Thread AbouEl-Makarim Aboueissa
Dear ALL:
 
Could you please email me how to simulate mixed seasonal ARMA (p,q)x(P,Q)12 
models [say ARMA(0,1)x(1,0)12] from R.
 
With many thanks.
 
Abou
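
One way to do this (an editorial sketch with illustrative coefficients): write
out the full AR polynomial so that arima.sim() sees the seasonal AR term at lag
12, e.g. for ARMA(0,1)x(1,0)12 with seasonal AR 0.5 and MA 0.4:

set.seed(1)
x <- arima.sim(model = list(ar = c(rep(0, 11), 0.5),  # (1 - 0.5 B^12)
                            ma = 0.4),                # (1 + 0.4 B)
               n = 240)
x <- ts(x, frequency = 12)    # mark the monthly seasonality
plot(x); acf(x)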
 
 
 
 
==
AbouEl-Makarim Aboueissa, Ph.D.
Assistant Professor of Statistics
Department of Mathematics & Statistics
University of Southern Maine
96 Falmouth Street
P.O. Box 9300
Portland, ME 04104-9300
 

Tel: (207) 228-8389
Fax: (207) 780-5607
Email: [EMAIL PROTECTED] 
  [EMAIL PROTECTED]
 
  http://www.usm.maine.edu/~aaboueissa/ 

 
Office: 301C Payson Smith
 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Transforming a string to a variable's name? help me newbie...

2008-12-08 Thread Jim Holtman

?get


Sent from my iPhone

On Dec 8, 2008, at 7:11, "tsunhin wong" <[EMAIL PROTECTED]> wrote:


Dear all,

I'm a newbie in R.
I have a 45x2x2x8 design.
A dataframe stores the metadata of trials. And each trial has its own
data file: I used "read.table" to import every trial into R as a
dataframe (variable).

Now I dynamically ask R to retrieve trials that fit certain selection
criteria, so I use "subset", e.g.
tmptrialinfo <- subset(trialinfo, (Subject==24 & Filename=="v2msa8"))

The name of the dataframe / variable of an individual trial can be
obtained using:
paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
Then I get a string:
"t24v2msa8.gz"
which is of the exact same name of the dataframe / variable of that
trial, which is:
t24v2msa8.gz

Can somebody tell me how can I change that string (obtained from
"paste()" above) to be a usable / manipulable variable name, so that I
can do something, such as:
(1)
tmptrial <- trialcompute(trialextract(
paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
,tmptrialinfo[1,32],secs,sdm),secs,binsize)
instead of hardcoding:
(2)
tmptrial <-  
trialcompute( 
trialextract(t24v2msa8.gz,tmptrialinfo[1,32],secs,sdm),secs,binsize)


Currently, 1) doesn't work...

Thanks in advance for your help!

Regards,

 John

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Save image as metafile

2008-12-08 Thread mentor_

Hi,

how can I save an image as a metafile?
I know that within Windows you can right-click and then 'save image as metafile',
but I use Mac OS X... I know that Mac users have a right click as well,
but it does not work.
Is there a command in R for saving images as metafiles?

Regards,
mentor
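
An editorial note on what may be going on: the metafile device (win.metafile)
exists only in R for Windows, so on Mac OS X a different vector format such as
PDF is the usual substitute, e.g.

pdf("myplot.pdf", width = 6, height = 4)  # hypothetical file name
plot(rnorm(100))
dev.off()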
-- 
View this message in context: 
http://www.nabble.com/Save-image-as-metafile-tp20894737p20894737.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Query in Cuminc - stratification

2008-12-08 Thread kay frances

Hello everyone, 
 
I am a very new user of R and I have a query about the cuminc function in the 
package cmprsk. In particular I would like to verify that I am interpreting the 
output correctly when we have a stratification variable.
 
Hypothetical example:
 
group : fair hair, dark hair
fstatus: 1=Relapse, 2=TRM, 0=censored
strata: sex (M or F)
 
Our data would be split into:
 
Fair, male, relapse
Dark,male, relapse
Fair, female, relapse
Dark, female, relapse
 
Fair, male, TRM
Dark,male, TRM
Fair, female, TRM
Dark, female, TRM
 
Fair, male, censored
Dark,male, censored
Fair, female, censored
Dark, female, censored
 
Am I correct in thinking that the 2 "Tests" which will be printed by R tell us 
(i) if there are significant differences between those with fair hair and those 
with dark hair as regards cumulative incidence of relapse [taking into account 
sex differences] (ii) if there are significant differences between those with 
fair hair and those with dark hair as regards cumulative incidence of TRM 
[taking into account sex differences]? The 'est' and 'var' values are 
the same regardless of whether we include a stratification variable or not.
 
If we do not include a stratification variable the ‘tests’ results will be 
different to those when a stratification variable is included and they test (i) 
if there are significant differences between those with fair hair and those 
with dark hair as regards cumulative incidence of relapse (ii) if there are 
significant differences between those with fair hair and those with dark hair 
as regards cumulative incidence of TRM.
 
Just thought I’d double check.
Many thanks,
Kim
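
For concreteness, an editorial sketch (made-up data) of the call being discussed,
with hair colour as the group and sex as the stratification variable:

library(cmprsk)
set.seed(1)
n <- 200
ftime   <- rexp(n)                              # made-up failure/censoring times
fstatus <- sample(0:2, n, replace = TRUE)       # 0 = censored, 1 = relapse, 2 = TRM
hair    <- sample(c("fair", "dark"), n, replace = TRUE)
sex     <- sample(c("M", "F"), n, replace = TRUE)
fit <- cuminc(ftime, fstatus, group = hair, strata = sex)
fit$Tests    # one stratified group test per failure type (relapse, TRM)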


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] statistics on "runs" of numbers

2008-12-08 Thread tolga . i . uzuner
Dear R Users,

Is there a package or some functionality in R which returns statistics on 
"runs" of numbers, i.e. series of numbers with similar qualities in a time 
series? For example, the number of +ves/-ves, histograms of cumulations 
in runs, etc.?

Thanks in advance,
Tolga
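
A small editorial sketch of what base R already offers: rle() summarises runs,
from which counts and run-length distributions follow (tseries also has a
runs.test() if a formal test is wanted):

set.seed(1)
x <- rnorm(50)                               # stand-in for the time series
r <- rle(x > 0)                              # runs of positive / non-positive values
table(sign = r$values, length = r$lengths)   # run-length distribution by sign
tapply(r$lengths, r$values, summary)         # run-length summaries per sign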



Generally, this communication is for informational purposes only
and it is not intended as an offer or solicitation for the purchase
or sale of any financial instrument or as an official confirmation
of any transaction. In the event you are receiving the offering
materials attached below related to your interest in hedge funds or
private equity, this communication may be intended as an offer or
solicitation for the purchase or sale of such fund(s).  All market
prices, data and other information are not warranted as to
completeness or accuracy and are subject to change without notice.
Any comments or statements made herein do not necessarily reflect
those of JPMorgan Chase & Co., its subsidiaries and affiliates.

This transmission may contain information that is privileged,
confidential, legally privileged, and/or exempt from disclosure
under applicable law. If you are not the intended recipient, you
are hereby notified that any disclosure, copying, distribution, or
use of the information contained herein (including any reliance
thereon) is STRICTLY PROHIBITED. Although this transmission and any
attachments are believed to be free of any virus or other defect
that might affect any computer system into which it is received and
opened, it is the responsibility of the recipient to ensure that it
is virus free and no responsibility is accepted by JPMorgan Chase &
Co., its subsidiaries and affiliates, as applicable, for any loss
or damage arising in any way from its use. If you received this
transmission in error, please immediately contact the sender and
destroy the material in its entirety, whether in electronic or hard
copy format. Thank you.
Please refer to http://www.jpmorgan.com/pages/disclosures for
disclosures relating to UK legal entities.
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Transforming a string to a variable's name? help me newbie...

2008-12-08 Thread Jorge Ivan Velez
Dear Tsunhin,
Take a look at ?get

HTH,

Jorge


On Mon, Dec 8, 2008 at 10:11 AM, tsunhin wong <[EMAIL PROTECTED]> wrote:

> Dear all,
>
> I'm a newbie in R.
> I have a 45x2x2x8 design.
> A dataframe stores the metadata of trials. And each trial has its own
> data file: I used "read.table" to import every trial into R as a
> dataframe (variable).
>
> Now I dynamically ask R to retrieve trials that fit certain selection
> criteria, so I use "subset", e.g.
> tmptrialinfo <- subset(trialinfo, (Subject==24 & Filename=="v2msa8"))
>
> The name of the dataframe / variable of an individual trial can be
> obtained using:
> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
> Then I get a string:
> "t24v2msa8.gz"
> which is of the exact same name of the dataframe / variable of that
> trial, which is:
> t24v2msa8.gz
>
> Can somebody tell me how can I change that string (obtained from
> "paste()" above) to be a usable / manipulable variable name, so that I
> can do something, such as:
> (1)
> tmptrial <- trialcompute(trialextract(
> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
> ,tmptrialinfo[1,32],secs,sdm),secs,binsize)
> instead of hardcoding:
> (2)
> tmptrial <-
> trialcompute(trialextract(t24v2msa8.gz,tmptrialinfo[1,32],secs,sdm),secs,binsize)
>
> Currently, 1) doesn't work...
>
> Thanks in advance for your help!
>
> Regards,
>
>  John
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Transforming a string to a variable's name? help me newbie...

2008-12-08 Thread tsunhin wong
Thanks Jim and All!

It works:
tmptrial <- trialcompute(trialextract(
get(paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")) ,
tmptrialinfo[1,32],secs,sdm),secs,binsize)

Can I use "assign" instead? How should it be coded then?

Thanks!

- John
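
A sketch of the assign() counterpart (editorial; the object created here is
hypothetical):

nm <- "t24v2msa8.gz"                  # name built with paste(), as above
assign(nm, data.frame(a = 1:3))       # assign() creates/overwrites an object by name
head(get(nm))                         # get() retrieves it by the same string
## often tidier: keep all trials in a named list and index with [[ ]]
trials <- list()
trials[[nm]] <- data.frame(a = 1:3)
head(trials[[nm]])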

On Mon, Dec 8, 2008 at 10:40 AM, Jim Holtman <[EMAIL PROTECTED]> wrote:
> ?get
>
>
> Sent from my iPhone
>
> On Dec 8, 2008, at 7:11, "tsunhin wong" <[EMAIL PROTECTED]> wrote:
>
>> Dear all,
>>
>> I'm a newbie in R.
>> I have a 45x2x2x8 design.
>> A dataframe stores the metadata of trials. And each trial has its own
>> data file: I used "read.table" to import every trial into R as a
>> dataframe (variable).
>>
>> Now I dynamically ask R to retrieve trials that fit certain selection
>> criteria, so I use "subset", e.g.
>> tmptrialinfo <- subset(trialinfo, (Subject==24 & Filename=="v2msa8"))
>>
>> The name of the dataframe / variable of an individual trial can be
>> obtained using:
>> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
>> Then I get a string:
>> "t24v2msa8.gz"
>> which is of the exact same name of the dataframe / variable of that
>> trial, which is:
>> t24v2msa8.gz
>>
>> Can somebody tell me how can I change that string (obtained from
>> "paste()" above) to be a usable / manipulable variable name, so that I
>> can do something, such as:
>> (1)
>> tmptrial <- trialcompute(trialextract(
>> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
>> ,tmptrialinfo[1,32],secs,sdm),secs,binsize)
>> instead of hardcoding:
>> (2)
>> tmptrial <-
>> trialcompute(trialextract(t24v2msa8.gz,tmptrialinfo[1,32],secs,sdm),secs,binsize)
>>
>> Currently, 1) doesn't work...
>>
>> Thanks in advance for your help!
>>
>> Regards,
>>
>> John
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] legend at fixed distance form the bottom

2008-12-08 Thread Greg Snow
I suggested grconvertY because, if you want to place something exactly n
inches from the bottom (or exactly 10% of the total device height from the
bottom, etc.), grconvertY will give you the user (or other) coordinates that
match.
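
For example (the half-inch offset is made up, just to show the idea):

plot(1:10)
y.in  <- grconvertY(0.5, from = "inches", to = "user")  # 0.5 inch above the device bottom
x.mid <- grconvertX(0.5, from = "nfc", to = "user")     # horizontal middle of the figure region
legend(x.mid, y.in, legend = c("a", "b"), pch = c(3, 4),
       horiz = TRUE, xpd = NA, xjust = 0.5, yjust = 0)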

What to do when replotting a graph after a resize is usually tricky and requires
information that the program does not have and therefore has to guess at; that
is why it is safest not to depend on the computer doing the correct thing on a
resize and just tell the users to resize/set the size before calling your function.

If you give a bit more detail on what you are trying to accomplish and what you 
mean by fixed distance (number of inches, proportion of device height, margin 
row, etc.) we may be better able to help.

--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> project.org] On Behalf Of Christophe Genolini
> Sent: Saturday, December 06, 2008 7:08 AM
> To: Greg Snow
> Cc: r-help@r-project.org
> Subject: Re: [R] legend at fixed distance form the bottom
>
> Thanks for your answer.
>
> Unfortunatly, I can not create the graphice with the final size since I
> am writing a package in wich the user will have to chose between
> several
> graphics, and then he will have to export one. And they might be one
> graph, or 2x2, or 3x3...
>
> I check the grconvertY but I did not understand what you suggest. To
> me,
> the use of legend can not work since every length in the legend box
> (xlength, ylength, distance to axes) will change when resizing the
> graph. I was more expecting something like introduce the symbols used
> in
> the graph *in* the xlab. Is it possible ?
>
> Christophe
>
> > It is best to create the graphics device at the final size desired,
> then do the plotting and add the legend.  For getting a fixed distance,
> look at the function grconvertY for one possibility.
> >
> > --
> > Gregory (Greg) L. Snow Ph.D.
> > Statistical Data Center
> > Intermountain Healthcare
> > [EMAIL PROTECTED]
> > 801.408.8111
> >
> >
> >
> >> -Original Message-
> >> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> >> project.org] On Behalf Of Christophe Genolini
> >> Sent: Friday, December 05, 2008 6:40 AM
> >> To: r-help@r-project.org
> >> Subject: [R] legend at fixed distance form the bottom
> >>
> >> Hi the list
> >>
> >> I would like to add a legend under a graph but at a fixed distance
> from
> >> the graphe. Is it possible ?
> >> More precisely, here is my code :
> >>
> >> --- 8< 
> >> symboles <- c(3,4,5,6)
> >> dn <- rbind(matrix(rnorm(20),,5),matrix(rnorm(20,2),,5))
> >> listSymboles <- rep(symboles,each=2)
> >> matplot(t(dn),pch=listSymboles,type="b")
> >> legend("bottom", pch = unique(listSymboles), legend = c("ane",
> >> "cheval",
> >> "poney", "mule"), inset = c(0,-0.175), horiz = TRUE, xpd = NA)
> >> --- 8< 
> >>
> >> But when I change the size of the graph, the legend is misplaced.
> >>
> >> Instead, I try to put some text in xlab, but I do not know how to
> get
> >> the +, x , V and other symbol.
> >> Does anyone got a solution ?
> >>
> >> Thanks a lot.
> >>
> >> Christophe
> >>
> >> __
> >> R-help@r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide http://www.R-project.org/posting-
> >> guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >>
> >
> >
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] statistics on "runs" of numbers

2008-12-08 Thread Albyn Jones
rle(x) gives the run length encoding of x.

rle(x>0) or rle(sign(x)) will do this for positive and negative values of x.
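
For example (a small sketch on simulated data):

set.seed(1)
x <- rnorm(20)
r <- rle(x > 0)                       # runs of positive / non-positive values
sum(r$values)                         # number of positive runs
table(r$lengths[r$values])            # histogram of positive run lengths
tapply(x, rep(seq_along(r$lengths), r$lengths), sum)  # sum of x within each run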

albyn

On Mon, Dec 08, 2008 at 03:24:50PM +, [EMAIL PROTECTED] wrote:
> Dear R Users,
> 
> Is there a package or some functionality in R which returns statistics on 
> "runs" of numbers, i.e. series of numbers with similar qualities in a time 
> series ? For example, the number of +ves,-ves, histograms on cumulations 
> in runs, etc. ?
> 
> Thanks in advance,
> Tolga
> 
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] statistics on "runs" of numbers

2008-12-08 Thread tolga . i . uzuner
Many thanks,
Tolga




Albyn Jones <[EMAIL PROTECTED]> 
08/12/2008 17:16

To
[EMAIL PROTECTED]
cc
r-help@r-project.org
Subject
Re: [R] statistics on "runs" of numbers






rle(x) gives the run length encoding of x.

rle(x>0) or rle(sign(x)) will do this for positive and negative values of 
x.

albyn

On Mon, Dec 08, 2008 at 03:24:50PM +, [EMAIL PROTECTED] 
wrote:
> Dear R Users,
> 
> Is there a package or some functionality in R which returns statistics 
on 
> "runs" of numbers, i.e. series of numbers with similar qualities in a 
time 
> series ? For example, the number of +ves,-ves, histograms on cumulations 

> in runs, etc. ?
> 
> Thanks in advance,
> Tolga
> 
> 
> 
>[[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R 2.8.1 is scheduled for December 22

2008-12-08 Thread Peter Dalgaard

This is to announce that we plan to release R version 2.8.1 on Monday,
December 22, 2008.

Release procedures start Friday, December 12.

Those directly involved should review the generic schedule at 
http://developer.r-project.org/release-checklist.html


The source tarballs will be made available daily (barring build
troubles) and the tarballs can be picked up at

http://cran.r-project.org/src/base-prerelease/

a little later.

Binary builds are expected to appear starting Monday 15 at the latest.

For the Core Team
Peter Dalgaard


--
  O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
 c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

___
[EMAIL PROTECTED] mailing list
https://stat.ethz.ch/mailman/listinfo/r-announce

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



[R] partial correlation

2008-12-08 Thread Jürg Brendan Logue
Hi!

 

I have the following problem: 

 

I would like to do partial correlations on non-parametric data. I checked
"pcor" (which computes the partial correlation between two variables given a
set of other variables), but I do not know how to switch it to a Spearman rank
correlation method [pcor(c("BCDNA","ImProd","A365"), var(PCor))].

 

Here's a glimpse of my data (raw data):

 A436   A365   Chla      ImProd     BCDNA
 0.001  0.003  0.624889   11.73023  0.776919
 0.138  0.126  0.624889   27.29432  0.357468
 0.075  0.056  0.624889  105.3115   0.429785
 0.009  0.008  0.312444   55.2929   0.547752
 0.005  0.002  0.624889   26.9638   0.738775
 0.018  0.006  0.312444   31.14836  0.705814
 0.02   0.018  2.22E-16   11.90303  0.755003
 0.002  0.003  0.624889    7.829781 0.712091
 0.047  0.044  1.523167    1.423823 0.710939
 0.084  0.056  13.7085     1.533703 0.280171

 

I’m really grateful for any help since I only recently started employing R.

 

Best regards, 
JBL

 

 

 

 

 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] DLM - Covariates in the system equation

2008-12-08 Thread Giovanni Petris

Hi Jens,

I assume you are using package dlm. 

To add covariates to the system equation the workaround is to include
psi in the state vector. Suppose your observation and system equations are
as follow:

Y[t] = F[t]theta[t] + v[t], v[t] ~ N[0,V] #observation equation
theta[t]  = G[t]theta[t-1] + Z[t]psi + w[t], w[t] ~ N[0,W] #system equation

(Note that I wrote F[t]theta[t] instead of F'[t]theta[t], Z[t]psi
instead of psi*Z[t], and G[t]theta[t-1] instead of simply theta[t-1])

Defining

thetahat[t] = [ theta[t]'  psi' ]'

Fhat[t] = [ F[t]  0 ]

          [ G[t]  Z[t] ]
Ghat[t] = [             ]
          [   0    Id   ]

What = blockdiag(W, 0)


you have a new DLM that satisfies your requirement. I hope this is
clear enough for you to implement it in R. If not, please send a small
example telling us what you want to do. By the way, questions about
contributed packages should be addressed to the package maintainer
first. 
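
As a small illustration of the augmented matrices in base R (the dimensions and
values are made up, and Z is taken as constant just for brevity):

p <- 2; q <- 1                                   # state / covariate dimensions (made up)
GG <- diag(p)                                    # original G[t]
ZZ <- matrix(c(0.5, 1), p, q)                    # original Z[t]
FF <- matrix(c(1, 0), 1, p)                      # original F[t]
WW <- diag(0.1, p)                               # original W

Fhat <- cbind(FF, matrix(0, 1, q))                         # [ F[t]  0 ]
Ghat <- rbind(cbind(GG, ZZ),
              cbind(matrix(0, q, p), diag(q)))             # [ G[t] Z[t] ; 0 Id ]
What <- rbind(cbind(WW, matrix(0, p, q)),
              matrix(0, q, p + q))                         # blockdiag(W, 0)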

Best,
Giovanni Petris
(author of package dlm)

> Date: Mon, 08 Dec 2008 00:17:29 -0500
> From: [EMAIL PROTECTED]
> Sender: [EMAIL PROTECTED]
> Precedence: list
> 
> Is there a way to add covariates to the system equation in a time-varying 
> approach:
> 
> Y[t] = F'[t]theta[t] + v[t], v[t] ~ N[0,V] #observation equation
> theta[t]  = theta[t-1] + psi*Z[t] + w[t], w[t] ~ N[0,W] #system equation
> 
> While F[t] is a matrix of regressors to capture the short term effect on 
> the response series Y,
> Z[t] measures the long-term effect of either
> (1) two policies by a step dummy or
> (2) various policies with a continuous variable.
> 
> I appreciate any kind of help!
> Thanks in advance!
> Jens
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 

Giovanni Petris  <[EMAIL PROTECTED]>
Associate Professor
Department of Mathematical Sciences
University of Arkansas - Fayetteville, AR 72701
Ph: (479) 575-6324, 575-8630 (fax)
http://definetti.uark.edu/~gpetris/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] gee niggles

2008-12-08 Thread Daniel Farewell
I'm not sure if the gee package is still actively maintained, but I for one 
find it extremely useful. However, I've come across a few infelicities that I'm 
hoping could be resolved for future versions. Hope it's okay to list them all 
in one post! They are:

(1) AR(1) models don't fit when clustsize = 1 for any subject, even if some 
subjects have clustsize > 1.
(2) If the working correlation matrix has dimension less than 4x4, it doesn't 
get printed.
(3) Using a "fixed" working correlation matrix stored in integer mode crashes R.

To illustrate, generate some data:

df <- data.frame(i = rep(1:5, each = 5), j = rep(1:5, 5), y = rep(rnorm(5), 
each = 5) + rnorm(25))

An AR(1) model fits fine to the full data:

require(gee)
gee(y ~ 1, id = i, df, corstr = "AR-M", Mv = 1)

So also when some subjects have fewer observations than others:

gee(y ~ 1, id = i, df, subset = j <= i + 1, corstr = "AR-M", Mv = 1)

(1) However, when any subject (in this case, the first) has only 1 observation, 
gee bails out:

gee(y ~ 1, id = i, df, subset = j <= i & j <= 2, corstr = "AR-M", Mv = 1)

I see no particular reason an AR(1) working structure shouldn't be fit to these 
data. In fact, when (as here) there are at most two observations per subject, 
the AR(1) and exchangeable structures are equivalent, and the latter fits fine:

gee(y ~ 1, id = i, df, subset = j <= i & j <= 2, corstr = "exchangeable")

(2) This brings up the second niggle. When all cluster sizes are less than 
four, gee fits fine but the print method tries to extract elements of the 
working correlation matrix that don't exist (x$working.correlation[1:4, 1:4]):
 
gee(y ~ 1, id = i, df, subset = j <= 3)

(3) Finally, we might want to explicitly enter the identity matrix as the 
working correlation. If we do it this way

gee(y ~ 1, id = i, df, corstr = "fixed", R = outer(1:5, 1:5, function(x, y) 
as.numeric(x == y)))

then all is well, but like this

# gee(y ~ 1, id = i, df, corstr = "fixed", R = outer(1:5, 1:5, function(x, y) 
as.integer(x == y))) # not run

crashes R. I think it has to do with the correlation matrix being stored in 
integer mode:

str(outer(1:5, 1:5, function(x, y) as.integer(x == y)))

Presumably an explicit conversion to numeric mode would take care of this. 
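
For instance, something along these lines (just the coercion; I have not traced
it through the gee internals):

Rfix <- outer(1:5, 1:5, function(x, y) as.integer(x == y))
storage.mode(Rfix) <- "double"   # coerce to numeric before handing it to gee
# gee(y ~ 1, id = i, df, corstr = "fixed", R = Rfix)   # should now behave like the numeric version
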
Here's my sessionInfo(), in case it's of relevance:

R version 2.7.2 (2008-08-25) 
i386-apple-darwin8.11.1 

locale:
en_GB.UTF-8/en_GB.UTF-8/C/C/en_GB.UTF-8/en_GB.UTF-8

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base 

other attached packages:
[1] gee_4.13-13

Many thanks in advance!

Daniel Farewell
Cardiff University


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to fit nested random effects with LME

2008-12-08 Thread pilar schneider

Dear experts
 
I need to fit a model with nested random effects (cow within herd and season), 
and I’m not sure how to do it with R.
 
 
Using LME  I know how to fit:   cow within herd
 
fit4 <- lme(milk ~ days + season + season*days , data=xxx, random = ~1 + days| 
herd/cow)
 
How I can fit:  cow within herd and season?
 
 

In SAS I would write it:
 
Proc mixed data=xxx;
Class  herd cow season;
Model  milk= days  season  days*season;
Random intercept days / type=cs subject=cow(herd season);
 
  
Thanks for your help
 
Maria
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get Greenhouse-Geisser epsilons from anova?

2008-12-08 Thread Peter Dalgaard

Skotara wrote:

Thank you for your help!
Sorry, for bothering you again..
I still have trouble combining within and between subject factors.
Interactions of within factors and D having only 2 levels work well.

How can I get the main effect of D? I have tried anova(mlmfitD, mlmfit). 
With D having 3 levels I would expect the dfs to be 2 and 33. However, 
the output states 84,24??


That's not a main effect, it's a simultaneous test of all 28 components. 
 What you need is an analysis of the average of all 
within-measurements. In principle that is obtained by X=~0, M=~1 or 
T=matrix(1,1,28) but there's a bug that prevents it from working (fixed 
in R-patched a couple of days ago).




As long as the between factor has only 2 levels the between/within 
interactions fit well with SPSS, but if D has 3 levels, the mismatch is 
immense.
If I calculate the within effects with myma having  not 12 subjects from 
one group but for example 24 from 2 groups, the output treats it as if 
all subjects came from the same group, for example for main effect A the 
dfs are 1 and 35. SPSS puts out 1 and 33 which is what I would have 
expected.. ..


Hmm, there's a generic problem in that you can't get some of the 
traditional ANOVA table F tests by comparing two models, and in your 
case, SPSS is de facto using the residuals from a model with A:D 
interaction when testing for A. It might help if you try


anova(mlmfitD, X=~..., M=~...)

Look at the (Intercept) line.




Peter Dalgaard schrieb:

Nils Skotara wrote:

Thank you, this helped me a lot!
All within effects and interactions work well!

Sorry, but I still could not get how to include the between factor..
If I include D with 2 levels, then myma is 24 by 28. (another 12 by 
28 for the

second group of subjects.)
mlmfitD <- lm(myma~D) is no problem, but whatever I tried afterwards 
did not seem logical to me.
I am afraid I do not understand how to include the between factor. I 
cannot include ~D into M or X because it has length 24 whereas the other
factors have 28... 


Just do the same as before, but comparing mlmfitD to mlmfit:

anova(mlmfitD, mlmfit, X=~A+B, M=~A+B+C)
# or anova(mlmfitD, mlmfit, X=~1, M=~C), as long as things are balanced


gives the D:C interaction test (by testing whether the C contrasts 
depend on D). The four-factor interaction is


anova(mlmfitD, mlmfit, X=~(A+B+C)^2, M=~A*B*C)





Zitat von Peter Dalgaard <[EMAIL PROTECTED]>:


Skotara wrote:

Dear Mr. Daalgard.

thank you very much for your reply, it helped me to progress a bit.

The following works fine:
dd <- expand.grid(C = 1:7, B= c("r", "l"), A= c("c", "f"))
myma <- as.matrix(myma) #myma is a 12 by 28 list
mlmfit <- lm(myma~1)
mlmfit0 <- update(mlmfit, ~0)
anova(mlmfit, mlmfit0, X= ~C+B, M = ~A+C+B, idata = dd,
test="Spherical"), which tests the main effect of A.
anova(mlmfit, mlmfit0, X= ~A+C,  M = ~A+C+B, idata = dd,
test="Spherical"), which tests the main effect of B.


However, I can not figure out how this works for the other effects.
If I try:
anova(mlmfit, mlmfit0, X= ~A+B,  M = ~A+C+B, idata = dd,  
test="Spherical")


I get:
Fehler in function (object, ..., test = c("Pillai", "Wilks",
"Hotelling-Lawley",  :
   residuals have rank 1 < 4

dd$C is not a factor with that construction. It works for me after

dd$C <- factor(dd$C)

(The other message is nasty, though. It's slightly different in 
R-patched:


 > anova(mlmfit, mlmfit0, X= ~A+B, M = ~A+C+B, idata = dd,
test="Spherical")
Error in solve.default(Psi, B) :
   system is computationally singular: reciprocal condition number =
2.17955e-34

but it shouldn't happen...
Looks like it is a failure of the internal Thin.row function. Ick!
)


I also don't know how I can calculate the various interactions..
My read is I should change the second argument mlmfit0, too, but I 
can't

figure out how...


The "within" interactions should be straightforward, e.g.

M=~A*B*C
X=~A*B*C-A:B:C

etc.

The within/between interactions are obtained from the similar tests of
the between factor(s)

e.g.

mlmfitD <- lm(myma~D)

and then

anova(mlmfitD, mlmfit,)









--
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Dates as x axis labels

2008-12-08 Thread Sam Halliday

Hello all,

I wish to plot several hundred data groups in a boxplot(), with  
sensible labels on the x axis. For small datasets, this is possible by  
using the "names" parameter to the boxplot. However, for several  
hundred boxplots, boxplot() displays a tick on the x axis for every one.


For my case, I am using Date objects from the chron package as the x  
labels. When passing a vector of Dates to plot() as the x values,  
plot() is intelligent enough to choose sensible major tick marks (e.g.  
the month name). Whereas boxplot() is picking arbitrary dates to show  
as the major tick marks.


Does anybody know how to use Dates as labels in a boxplot() and have  
it use sensible values as the major tick marks?


Note that I've tried using axis.Date() but it doesn't seem to do  
anything to a boxplot().


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] legend at fixed distance form the bottom

2008-12-08 Thread Christophe Genolini
Sorry, I will be more precise. Here is a (simplified) example of the graph
I want:


 8< 

symboles <- c(3,4,5,6)
dn <- rbind(matrix(rnorm(20),5))
layout(matrix(c(1,1,1,2,2,3),3))

for(i in 1:3)
 matplot(dn,type="b",xlab="+: a  x: b  ???: c  ???: d",pch=symboles)
 8< 

But instead of ???, I want the "triangle" and the "diamond".
So I try to use legend in order to get the diamond and triangle:

--- 8< 

for(i in 1:3){
 matplot(dn,type="b",xlab="",pch=symboles)
 legend("top", pch = unique(listSymboles),
   legend = c("a","b","c","d"),
   inset = c(0,1.1), horiz = TRUE, xpd = NA)
}

--- 8< 

On the first plot, the legend is too low; on the second, the legend is in the
correct position; on the third one, the legend sits *on* the axis tick labels.
So my problem is either to plot a legend at a fixed distance from the
graph, or to put the plotting symbols ("triangle", "diamond", and the others,
since I might have more than 4 curves) in the xlab.


Any solution ?

Thanks

Christophe






I suggested the use of grconvertY if you want to place something exactly n 
inches from the bottom, then grconvertY will give the user (or other) 
coordinates that match, or if you want to place something exactly 10% of the 
total device height from the bottom, etc.

What to do in replotting a graph afte a resize is usually tricky and requires 
information that the program does not have and therefore has to guess at, that 
is why it is safest to not depend on the computer doing the correct thing in a 
resize and just tell the users to resize/set size before calling your function.

If you give a bit more detail on what you are trying to accomplish and what you 
mean by fixed distance (number of inches, proportion of device height, margin 
row, etc.) we may be better able to help.

--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111


  

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
project.org] On Behalf Of Christophe Genolini
Sent: Saturday, December 06, 2008 7:08 AM
To: Greg Snow
Cc: r-help@r-project.org
Subject: Re: [R] legend at fixed distance form the bottom

Thanks for your answer.

Unfortunatly, I can not create the graphice with the final size since I
am writing a package in wich the user will have to chose between
several
graphics, and then he will have to export one. And they might be one
graph, or 2x2, or 3x3...

I check the grconvertY but I did not understand what you suggest. To
me,
the use of legend can not work since every length in the legend box
(xlength, ylength, distance to axes) will change when resizing the
graph. I was more expecting something like introduce the symbols used
in
the graph *in* the xlab. Is it possible ?

Christophe



It is best to create the graphics device at the final size desired,
  

then do the plotting and add the legend.  For getting a fixed distance,
look at the function grconvertY for one possibility.


--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111



  

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
project.org] On Behalf Of Christophe Genolini
Sent: Friday, December 05, 2008 6:40 AM
To: r-help@r-project.org
Subject: [R] legend at fixed distance form the bottom

Hi the list

I would like to add a legend under a graph but at a fixed distance


from


the graphe. Is it possible ?
More precisely, here is my code :

--- 8< 
symboles <- c(3,4,5,6)
dn <- rbind(matrix(rnorm(20),,5),matrix(rnorm(20,2),,5))
listSymboles <- rep(symboles,each=2)
matplot(t(dn),pch=listSymboles,type="b")
legend("bottom", pch = unique(listSymboles), legend = c("ane",
"cheval",
"poney", "mule"), inset = c(0,-0.175), horiz = TRUE, xpd = NA)
--- 8< 

But when I change the size of the graph, the legend is misplaced.

Instead, I try to put some text in xlab, but I do not know how to


get


the +, x , V and other symbol.
Does anyone got a solution ?

Thanks a lot.

Christophe

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-
guide.html
and provide commented, minimal, self-contained, reproducible code.


  

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-
guide.html
and provide commented, minimal, self-contained, reproducible code.






__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Transforming a string to a variable's name? help me newbie...

2008-12-08 Thread Greg Snow
In the long run it will probably make your life much easier to read all the
dataframes into one large list (with the names of the elements being what you
currently name the dataframes); then you can just use regular list indexing
(using [[]] rather than $ in most cases) instead of having to worry about get
and assign and the risks/subtleties involved in using those.
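
For example (a sketch only; the file pattern and names are hypothetical, and
gzipped files may need a gzfile() connection):

files  <- list.files(pattern = "\\.txt$")            # or whatever your trial files are
trials <- lapply(files, read.table, header = TRUE)   # one data frame per trial
names(trials) <- sub("\\.txt$", "", files)           # e.g. "t24v2msa8"

# a constructed name then indexes the list directly, no get()/assign() needed
nm <- paste("t", 24, "v2msa8", sep = "")             # assuming such an element exists
str(trials[[nm]])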

Hope this helps,

--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> project.org] On Behalf Of tsunhin wong
> Sent: Monday, December 08, 2008 8:45 AM
> To: Jim Holtman
> Cc: r-help@r-project.org
> Subject: Re: [R] Transforming a string to a variable's name? help me
> newbie...
>
> Thanks Jim and All!
>
> It works:
> tmptrial <- trialcompute(trialextract(
> get(paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")) ,
> tmptrialinfo[1,32],secs,sdm),secs,binsize)
>
> Can I use "assign" instead? How should it be coded then?
>
> Thanks!
>
> - John
>
> On Mon, Dec 8, 2008 at 10:40 AM, Jim Holtman <[EMAIL PROTECTED]>
> wrote:
> > ?get
> >
> >
> > Sent from my iPhone
> >
> > On Dec 8, 2008, at 7:11, "tsunhin wong" <[EMAIL PROTECTED]> wrote:
> >
> >> Dear all,
> >>
> >> I'm a newbie in R.
> >> I have a 45x2x2x8 design.
> >> A dataframe stores the metadata of trials. And each trial has its
> own
> >> data file: I used "read.table" to import every trial into R as a
> >> dataframe (variable).
> >>
> >> Now I dynamically ask R to retrieve trials that fit certain
> selection
> >> criteria, so I use "subset", e.g.
> >> tmptrialinfo <- subset(trialinfo, (Subject==24 &
> Filename=="v2msa8"))
> >>
> >> The name of the dataframe / variable of an individual trial can be
> >> obtained using:
> >> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
> >> Then I get a string:
> >> "t24v2msa8.gz"
> >> which is of the exact same name of the dataframe / variable of that
> >> trial, which is:
> >> t24v2msa8.gz
> >>
> >> Can somebody tell me how can I change that string (obtained from
> >> "paste()" above) to be a usable / manipulable variable name, so that
> I
> >> can do something, such as:
> >> (1)
> >> tmptrial <- trialcompute(trialextract(
> >> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
> >> ,tmptrialinfo[1,32],secs,sdm),secs,binsize)
> >> instead of hardcoding:
> >> (2)
> >> tmptrial <-
> >>
> trialcompute(trialextract(t24v2msa8.gz,tmptrialinfo[1,32],secs,sdm),sec
> s,binsize)
> >>
> >> Currently, 1) doesn't work...
> >>
> >> Thanks in advance for your help!
> >>
> >> Regards,
> >>
> >> John
> >>
> >> __
> >> R-help@r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide
> >> http://www.R-project.org/posting-guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] legend at fixed distance form the bottom

2008-12-08 Thread Greg Snow
Try this:

symboles <- c(3,4,5,6)
dn <- rbind(matrix(rnorm(20),5))
layout(matrix(c(1,1,1,2,2,3),3))


for(i in 1:3){
  matplot(dn,type="b",xlab="",pch=symboles)
  legend(grconvertX(0.5,'nfc'), grconvertY(0,'nfc'),
xjust=0.5, yjust=0,
pch = unique(symboles),
legend = c("a","b","c","d"),
horiz = TRUE, xpd = NA)
}


Does that do what you want?  Or at least get you started in the correct 
direction?

--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> project.org] On Behalf Of Christophe Genolini
> Sent: Monday, December 08, 2008 11:28 AM
> To: Greg Snow
> Cc: r-help@r-project.org
> Subject: Re: [R] legend at fixed distance form the bottom
>
> Sorry, I will be more precise. Here is an example (simplified) of graph
> I want :
>
>  8< 
>
> symboles <- c(3,4,5,6)
> dn <- rbind(matrix(rnorm(20),5))
> layout(matrix(c(1,1,1,2,2,3),3))
>
> for(i in 1:3)
>   matplot(dn,type="b",xlab="+: a  x: b  ???: c  ???: d",pch=symboles)
>  8< 
>
> But instead of ???, I want the "triangle" and the "losange".
> So I try to use legend in order to get the losange and triangle :
>
> --- 8< 
>
> for(i in 1:3){
>   matplot(dn,type="b",xlab="",pch=symboles)
>   legend("top", pch = unique(listSymboles),
> legend = c("a","b","c","d"),
> inset = c(0,1.1), horiz = TRUE, xpd = NA)
> }
>
> --- 8< 
>
> On the first plot, the legend is down, on the second, the legend is on
> a
> correct position, on the third one, the legend in *on* the graduation.
> So my problem is to plot either a legend at a fixed distance from the
> graph or to put some symbol ("triangle", "losange" and all the other
> since I might have more than 4 curves) in the xlab.
>
> Any solution ?
>
> Thanks
>
> Christophe
>
>
>
>
>
> > I suggested the use of grconvertY if you want to place something
> exactly n inches from the bottom, then grconvertY will give the user
> (or other) coordinates that match, or if you want to place something
> exactly 10% of the total device height from the bottom, etc.
> >
> > What to do in replotting a graph afte a resize is usually tricky and
> requires information that the program does not have and therefore has
> to guess at, that is why it is safest to not depend on the computer
> doing the correct thing in a resize and just tell the users to
> resize/set size before calling your function.
> >
> > If you give a bit more detail on what you are trying to accomplish
> and what you mean by fixed distance (number of inches, proportion of
> device height, margin row, etc.) we may be better able to help.
> >
> > --
> > Gregory (Greg) L. Snow Ph.D.
> > Statistical Data Center
> > Intermountain Healthcare
> > [EMAIL PROTECTED]
> > 801.408.8111
> >
> >
> >
> >> -Original Message-
> >> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> >> project.org] On Behalf Of Christophe Genolini
> >> Sent: Saturday, December 06, 2008 7:08 AM
> >> To: Greg Snow
> >> Cc: r-help@r-project.org
> >> Subject: Re: [R] legend at fixed distance form the bottom
> >>
> >> Thanks for your answer.
> >>
> >> Unfortunatly, I can not create the graphice with the final size
> since I
> >> am writing a package in wich the user will have to chose between
> >> several
> >> graphics, and then he will have to export one. And they might be one
> >> graph, or 2x2, or 3x3...
> >>
> >> I check the grconvertY but I did not understand what you suggest. To
> >> me,
> >> the use of legend can not work since every length in the legend box
> >> (xlength, ylength, distance to axes) will change when resizing the
> >> graph. I was more expecting something like introduce the symbols
> used
> >> in
> >> the graph *in* the xlab. Is it possible ?
> >>
> >> Christophe
> >>
> >>
> >>> It is best to create the graphics device at the final size desired,
> >>>
> >> then do the plotting and add the legend.  For getting a fixed
> distance,
> >> look at the function grconvertY for one possibility.
> >>
> >>> --
> >>> Gregory (Greg) L. Snow Ph.D.
> >>> Statistical Data Center
> >>> Intermountain Healthcare
> >>> [EMAIL PROTECTED]
> >>> 801.408.8111
> >>>
> >>>
> >>>
> >>>
>  -Original Message-
>  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
>  project.org] On Behalf Of Christophe Genolini
>  Sent: Friday, December 05, 2008 6:40 AM
>  To: r-help@r-project.org
>  Subject: [R] legend at fixed distance form the bottom
> 
>  Hi the list
> 
>  I would like to add a legend under a graph but at a fixed distance
> 
> >> from
> >>
>  the graphe. Is it possible ?
>  More precisely, here is my code :
> 
>  --- 8< 
>  symboles <- c(3,4,5,6)
>  dn <- rbind(matrix(rnorm(20),,5),matrix(rnorm(20,2),,5))
>  listSymboles <- rep(symboles,each=2)
>  matplot(t(dn),pch=listSymboles,type="b")
>  legend("bottom", pch = uni

[R] Clustering with Mahalanobis Distance

2008-12-08 Thread Richardson, Patrick
Dear R ExpeRts,

I'm having memory difficulties using Mahalanobis distance to try to cluster
in R.  I was wondering if anyone has done it with a matrix of 6525x17 (or
something similar to that size).  I have a matrix of 6525 genes and 17 samples.
I have my R memory increased to the max and am still getting "cannot allocate
vector of size" errors.  My matrix "x" is actually a transpose of the original
matrix (as I want to cluster by samples and not genes), "y" is a vector of the
mean gene expression levels, and "z" is the covariance matrix of "x" (I think
this is where the problem lies, as the covariance matrix is enormous).

I can't really provide a reproducible example as I would have to attach my data 
files, which I don't think anyone would appreciate.

rm(list=ls())  #removes everything from memory#
gc()  #collects garbage#
memory.limit(size = 4095)   #increases memory limit#
x <- as.matrix(read.table("x.txt", header=TRUE, row.names=1))
y <- as.matrix(read.table("y.txt", header=TRUE, row.names=1))
z <- as.matrix(read.table("z.txt", header=TRUE, row.names=1))
mal <- mahalanobis(x, y, z)

The ultimate goal is to run "hclust" with the mahalanobis distance matrix.
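
For what it is worth, here is the kind of pairwise construction I am after,
sketched on a small simulated matrix (whitening with the Cholesky factor of the
covariance makes dist() return Mahalanobis distances; it assumes an invertible
covariance that fits in memory, which my real gene-by-gene covariance of course
is not):

set.seed(1)
xs    <- matrix(rnorm(17 * 10), nrow = 17)   # toy stand-in: 17 samples, 10 variables
S     <- cov(xs)
cholS <- chol(S)                             # S = t(cholS) %*% cholS
xw    <- xs %*% solve(cholS)                 # whitened rows
d     <- dist(xw)                            # Euclidean on whitened rows = Mahalanobis
hc    <- hclust(d)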


If anyone knows where I could find a more "memory friendly" function, or has any
advice as to what I might try to optimize my code, I would appreciate it.
sessionInfo() is below.

Many Thanks,

_
Patrick Richardson
Biostatistician - Program of Translational Medicine
Van Andel Research Institute - Webb Lab
333 Bostwick Avenue NE
Grand Rapids, MI  49503


R version 2.8.0 (2008-10-20)
i386-pc-mingw32

locale:
LC_COLLATE=English_United States.1252;LC_CTYPE=English_United 
States.1252;LC_MONETARY=English_United 
States.1252;LC_NUMERIC=C;LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices datasets  tcltk utils methods   base

other attached packages:
[1] svSocket_0.9-5 svIO_0.9-5 R2HTML_1.59svMisc_0.9-5   svIDE_0.9-5

loaded via a namespace (and not attached):
[1] tools_2.8.0


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] example of gladeXML - RGtk2

2008-12-08 Thread Michael Lawrence
A good example of using glade with RGtk2 is the rattle package.

See: http://rattle.togaware.com/

On Mon, Dec 8, 2008 at 8:15 AM, Cleber Nogueira Borges
<[EMAIL PROTECTED]>wrote:

> hello all,
>
>
> where I find a example or tutorial of RGtk2 package?
> I would like to know about the gladeXML functions in R.
>
> thanks in advance
>
> Cleber Borges
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lazy evaluation and scoping ?

2008-12-08 Thread Bert Gunter
I think actually it's both lazy evaluation and scoping. Here is how I
understand it.

Consider:

> flist <- vector("list",2) ## creates the empty list

> for(i in 1:2) flist[[i]] <- function()i

Now the RHS of the assignment is a function that returns the value of i.
That is:

> flist
[[1]]
function()i

[[2]]
function()i


So the question is: what will be the value of i when the function is
invoked? By R's lexical scoping rules it will be the value of i in the
enclosing environment of the function, which is the value of i in the
environment when the function is **defined** . This will be i = 2, the last
value of the for loop on exit. This is due to lazy evaluation -- the value
of i is not needed until the for() ends, as one can find by:

> sapply(flist,function(z)as.list(environment(z)))
$i
[1] 2

$i
[1] 2

Hence one gets the results you originally saw. Adding the force(i) statement
forces i to be evaluated separately at each iteration of the loop, thus
placing the current values of i at each iteration into each function's
enclosing environment.
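
For instance (a minimal check, re-using the force() version of makeF from
Gabor's message below):

makeF <- function(i) { force(i); function() i }
fList <- vector("list", 2)
for (i in 1:2) fList[[i]] <- makeF(i)
sapply(fList, function(f) f())   # now gives 1 2, as expected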

HTH.

-- Bert Gunter 

 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Antonio, Fabio Di Narzo
Sent: Saturday, December 06, 2008 7:28 PM
To: Gabor Grothendieck
Cc: r-help@r-project.org
Subject: Re: [R] unexpected scoping behavior with functions created in a
loop

2008/12/6 Gabor Grothendieck <[EMAIL PROTECTED]>:
> The missing item is lazy evaluation.  Try forcing the evaluation of i
> and then repeat:
>
> makeF <- function(i) { force(i); function() i }
Tnx! That works! Sometimes lazy evaluation + side effects is just too
much (complicated) for me:D

bests,
a.
>
>
> On Sat, Dec 6, 2008 at 9:22 PM, Antonio, Fabio Di Narzo
> <[EMAIL PROTECTED]> wrote:
>> Hi guys.
>> I recently stumbled on an unexpected behavior of R when using
>> functions created in a loop.
>> The problem is silly enough to me that I had hard time choosing a good
>> mail subject, not talking about searching in the archives...
>> After some experiments, I trimmed down the following minimal
>> reproducible example:
>> ###
>> makeF <- function(i) function() i
>>
>> fList <- list(makeF(1), makeF(2))
>> sapply(fList, do.call, list())
>> ##This works as expected (by me...):
>> #[1] 1 2
>>
>> ##Things go differently when creating functions in a for loop:
>> for(i in 1:2)
>>  fList[[i]] <- makeF(i)
>> sapply(fList, do.call, list())
>> #[1] 2 2
>>
>> ##Same result with "lapply":
>> fList <- lapply(as.list(1:2), makeF)
>> sapply(fList, do.call, list())
>> #[1] 2 2
>> ###
>>
>> I evidently overlook some important detail, but I still can't get it.
>> Somebody can explain me what's happening there?
>> Bests,
>> antonio.
>>
>>> R.version
>>   _
>> platform   i686-pc-linux-gnu
>> arch   i686
>> os linux-gnu
>> system i686, linux-gnu
>> status Patched
>> major  2
>> minor  8.0
>> year   2008
>> month  12
>> day04
>> svn rev47063
>> language   R
>> version.string R version 2.8.0 Patched (2008-12-04 r47063)
>> --
>> Antonio, Fabio Di Narzo
>> Ph.D. student at
>> Department of Statistical Sciences
>> University of Bologna, Italy
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>



-- 
Antonio, Fabio Di Narzo
Ph.D. student at
Department of Statistical Sciences
University of Bologna, Italy

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] partial correlation

2008-12-08 Thread David Freedman

not sure which library 'pcor' is in, but why don't you just use the ranks of
the variables and then perform the correlation on the ranks:

x<-sample(1:10,10,rep=T)
y<-x+ sample(1:10,10,rep=T)
cor(x,y)
cor(rank(x),rank(y))
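
And for the partial part, one simple route (a sketch only, not the pcor()
function itself) is to correlate the residuals of the rank-transformed
variables:

z <- sample(1:10, 10, rep = TRUE)       # a made-up third variable to partial out
rx <- resid(lm(rank(x) ~ rank(z)))      # rank residuals of x given z
ry <- resid(lm(rank(y) ~ rank(z)))      # rank residuals of y given z
cor(rx, ry)                             # rank-based partial correlation of x and y given z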

HTH
david freedman



Jürg Brendan Logue wrote:
> 
> Hej!
> 
>  
> 
> I have the following problem: 
> 
>  
> 
> I would like to do partial correlations on non-parametric data. I checked
> "pcor" (Computes the partial correlation between two variables given a set
> of other variables) but I do not know how to change to a Spearman Rank
> Correlation method [pcor(c("BCDNA","ImProd","A365"),var(PCor))]
> 
>  
> 
> Here's a glimpse of my data (raw data):
>
>  A436   A365   Chla      ImProd     BCDNA
>  0.001  0.003  0.624889   11.73023  0.776919
>  0.138  0.126  0.624889   27.29432  0.357468
>  0.075  0.056  0.624889  105.3115   0.429785
>  0.009  0.008  0.312444   55.2929   0.547752
>  0.005  0.002  0.624889   26.9638   0.738775
>  0.018  0.006  0.312444   31.14836  0.705814
>  0.02   0.018  2.22E-16   11.90303  0.755003
>  0.002  0.003  0.624889    7.829781 0.712091
>  0.047  0.044  1.523167    1.423823 0.710939
>  0.084  0.056  13.7085     1.533703 0.280171
> 
>  
> 
> I’m really grateful for any help since I only recently started employing
> R.
> 
>  
> 
> Best regards, 
> JBL
> 
>  
> 
>  
> 
>  
> 
>  
> 
>  
> 
> 
>   [[alternative HTML version deleted]]
> 
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 


-
David Freedman
Atlanta

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Save image as metafile

2008-12-08 Thread David Winsemius


On Dec 8, 2008, at 7:51 AM, mentor_ wrote:



Hi,

how can I save an image as a metafile?
I know within windows you can do a right click and then 'save image as
metafile'
but I use Mac OS X...


Which means this question would be better posed on the Mac OS list.


I know as well that mac users have a right click as
well, but
it does not work.


Are you using the R.app? If so, then you need to focus on the quartz  
device window (by clicking on it or choosing it from the "Window"
pulldown menu) and choose Save As... from the File menu. It will not
offer to save it as a windows metafile, but rather does so as a pdf.  
In Preview you can open the pdf and then save as other formats,  
although wmf is not one of the options. If you prefer a tiff file, you  
could use Grab. Even LemkeSoft's GraphicConverter does not offer a WMF  
option, so it's probably a proprietary format that M$ is not  
documenting well or trying to restrict in some manner. To find out  
what devices are available, try:


> capabilities()
jpeg  png tifftcltk  X11 aqua http/ftp   
sockets   libxml fifo   clediticonv  NLS
TRUE TRUEFALSE TRUE TRUE TRUE TRUE  
TRUE TRUE TRUE TRUE TRUE TRUE

 profmemcairo
   FALSE TRUE

So on my machine, a png device can get graphics output. That should  
provide all of the functionality of a wmf format and be much more  
cross-platform.


Is there a command in R for saving images as metafiles?


It would appear not, but why would it be necessary? What's wrong with  
the choice among jpeg, png, pdf or tiff?


--
David Winsemius




Regards,
mentor
--
View this message in context: 
http://www.nabble.com/Save-image-as-metafile-tp20894737p20894737.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Transforming a string to a variable's name? help me newbie...

2008-12-08 Thread tsunhin wong
I want to combine all the dataframes into one large list too...
But each dataframe is a structure of 35 columns by a varying number of rows
(from 2000 to >9000 rows).
I have ~1500 of these dataframes to process, and they add up to >
1.5Gb of data...
Combining the dataframes into a single one would require me to collapse each
dataframe into one line, but I really don't have a good
solution for the varying number of rows scenario... And also, I don't
want to stall my laptop every time I run the data set: maybe I can do
that when my prof gives me a ~4Gb RAM desktop to run the script ;)
that when my prof give me a ~ 4Gb ram desktop to run the script ;)

Thanks! :)

- John

On Mon, Dec 8, 2008 at 1:36 PM, Greg Snow <[EMAIL PROTECTED]> wrote:
> In the long run it will probably make your life much easier to read all the 
> dataframes into one large list (and have the names of the elements be what 
> your currently name the dataframes), then you can just use regular list 
> indexing (using [[]] rather than $ in most cases) instead of having to worry 
> about get and assign and the risks/subtleties involved in using those.
>
> Hope this helps,
>
> --
> Gregory (Greg) L. Snow Ph.D.
> Statistical Data Center
> Intermountain Healthcare
> [EMAIL PROTECTED]
> 801.408.8111
>
>
>> -Original Message-
>> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
>> project.org] On Behalf Of tsunhin wong
>> Sent: Monday, December 08, 2008 8:45 AM
>> To: Jim Holtman
>> Cc: r-help@r-project.org
>> Subject: Re: [R] Transforming a string to a variable's name? help me
>> newbie...
>>
>> Thanks Jim and All!
>>
>> It works:
>> tmptrial <- trialcompute(trialextract(
>> get(paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")) ,
>> tmptrialinfo[1,32],secs,sdm),secs,binsize)
>>
>> Can I use "assign" instead? How should it be coded then?
>>
>> Thanks!
>>
>> - John
>>
>> On Mon, Dec 8, 2008 at 10:40 AM, Jim Holtman <[EMAIL PROTECTED]>
>> wrote:
>> > ?get
>> >
>> >
>> > Sent from my iPhone
>> >
>> > On Dec 8, 2008, at 7:11, "tsunhin wong" <[EMAIL PROTECTED]> wrote:
>> >
>> >> Dear all,
>> >>
>> >> I'm a newbie in R.
>> >> I have a 45x2x2x8 design.
>> >> A dataframe stores the metadata of trials. And each trial has its
>> own
>> >> data file: I used "read.table" to import every trial into R as a
>> >> dataframe (variable).
>> >>
>> >> Now I dynamically ask R to retrieve trials that fit certain
>> selection
>> >> criteria, so I use "subset", e.g.
>> >> tmptrialinfo <- subset(trialinfo, (Subject==24 &
>> Filename=="v2msa8"))
>> >>
>> >> The name of the dataframe / variable of an individual trial can be
>> >> obtained using:
>> >> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
>> >> Then I get a string:
>> >> "t24v2msa8.gz"
>> >> which is of the exact same name of the dataframe / variable of that
>> >> trial, which is:
>> >> t24v2msa8.gz
>> >>
>> >> Can somebody tell me how can I change that string (obtained from
>> >> "paste()" above) to be a usable / manipulable variable name, so that
>> I
>> >> can do something, such as:
>> >> (1)
>> >> tmptrial <- trialcompute(trialextract(
>> >> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
>> >> ,tmptrialinfo[1,32],secs,sdm),secs,binsize)
>> >> instead of hardcoding:
>> >> (2)
>> >> tmptrial <-
>> >>
>> trialcompute(trialextract(t24v2msa8.gz,tmptrialinfo[1,32],secs,sdm),sec
>> s,binsize)
>> >>
>> >> Currently, 1) doesn't work...
>> >>
>> >> Thanks in advance for your help!
>> >>
>> >> Regards,
>> >>
>> >> John
>> >>
>> >> __
>> >> R-help@r-project.org mailing list
>> >> https://stat.ethz.ch/mailman/listinfo/r-help
>> >> PLEASE do read the posting guide
>> >> http://www.R-project.org/posting-guide.html
>> >> and provide commented, minimal, self-contained, reproducible code.
>> >
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-
>> guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Save image as metafile

2008-12-08 Thread Prof Brian Ripley

On Mon, 8 Dec 2008, David Winsemius wrote:



On Dec 8, 2008, at 7:51 AM, mentor_ wrote:



Hi,

how can I save an image as a metafile?
I know within windows you can do a right click and then 'save image as
metafile'
but I use Mac OS X...


Which means this question would be better posed on the Mac OS list.


I know that Mac users have a right click as well, but
it does not work.


Are you using R.app? If so, then you need to focus on the quartz device 
window (by clicking on it or choosing it from the "Window" pulldown menu) and 
choose Save As... from the File menu. It will not offer to save it as a 
Windows metafile, but rather does so as a pdf. In Preview you can open the 
pdf and then save it in other formats, although wmf is not one of the options. 
If you prefer a tiff file, you could use Grab. Even LemkeSoft's 
GraphicConverter does not offer a WMF option, so it's probably a proprietary 
format that M$ is not documenting well or trying to restrict in some manner.


It is basically a set of calls to the (proprietary) Windows GDI.  There 
seems to be no viable implementation on any other platform (except as a 
bitmap, but then there are lots of portable bitmap formats supported by R)



To find out what devices are available, try:


capabilities()
    jpeg      png     tiff    tcltk      X11     aqua http/ftp  sockets
    TRUE     TRUE    FALSE     TRUE     TRUE     TRUE     TRUE     TRUE
  libxml     fifo   cledit    iconv      NLS  profmem    cairo
    TRUE     TRUE     TRUE     TRUE     TRUE    FALSE     TRUE

So on my machine, a png device can get graphics output. That should provide 
all of the functionality of a wmf format and be much more cross-platform.


Not so: wmf is primarily a vector format; however, PDF is a good 
cross-platform substitute.
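
A minimal sketch of writing vector output straight from R with the pdf() device (the file name is only an example):

pdf("myplot.pdf", width = 6, height = 4)  # vector graphics, readable on any platform
plot(rnorm(100), type = "l")
dev.off()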



Is there a command in R for saving images as metafiles?


It would appear not, but why would it be necessary? What's wrong with the 
choice among jpeg, png, pdf or tiff?


--
David Winsemius




Regards,
mentor
--
View this message in context: 
http://www.nabble.com/Save-image-as-metafile-tp20894737p20894737.html

Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html

and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


--
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Transforming a string to a variable's name? help me newbie...

2008-12-08 Thread Greg Snow
I really don't understand your concern.  Something like:

> nms <- c('file1','file2','file3')
> my.data <- list()
> for (i in nms) my.data[[ i ]] <- read.table(i)

Will read in the files listed in the nms vector and put them into the list 
my.data (each data frame is a single element of the list).  This list will 
take up about the same amount of memory as if you read each file into a 
dataframe in the global environment.  And there is no transforming of data 
frames (into 1 row or otherwise).
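
A minimal sketch of the list-based workflow being suggested (the file names and helper objects here are made up):

## read each trial file into one named list
files <- c("trial1.txt", "trial2.txt")
trials <- list()
for (f in files) trials[[f]] <- read.table(f, header = TRUE)

## later, pick out a trial by a constructed name instead of get()/assign()
nm <- paste("trial", 1, ".txt", sep = "")
tmp <- trials[[nm]]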

--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111


> -Original Message-
> From: tsunhin wong [mailto:[EMAIL PROTECTED]
> Sent: Monday, December 08, 2008 1:34 PM
> To: Greg Snow
> Cc: Jim Holtman; r-help@r-project.org; [EMAIL PROTECTED]
> Subject: Re: [R] Transforming a string to a variable's name? help me
> newbie...
>
> I want to combine all dataframes into one large list too...
> But each dataframe is a 35-column structure with a varying number of rows
> (from 2000 to >9000 rows).
> I have ~1500 of these dataframes to process, and they add up to >
> 1.5Gb of data...
>
> Combining the dataframes into a single one would require me to transform each
> single dataframe into one line, but I really don't have a good
> solution for the varying-number-of-rows scenario... And also, I don't
> want to stall my laptop every time I run the data set: maybe I can do
> that when my prof gives me a ~4Gb RAM desktop to run the script ;)
>
> Thanks! :)
>
> - John
>
> On Mon, Dec 8, 2008 at 1:36 PM, Greg Snow <[EMAIL PROTECTED]> wrote:
> > In the long run it will probably make your life much easier to read
> all the dataframes into one large list (and have the names of the
> elements be what you currently name the dataframes), then you can just
> use regular list indexing (using [[]] rather than $ in most cases)
> instead of having to worry about get and assign and the
> risks/subtleties involved in using those.
> >
> > Hope this helps,
> >
> > --
> > Gregory (Greg) L. Snow Ph.D.
> > Statistical Data Center
> > Intermountain Healthcare
> > [EMAIL PROTECTED]
> > 801.408.8111
> >
> >
> >> -Original Message-
> >> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> >> project.org] On Behalf Of tsunhin wong
> >> Sent: Monday, December 08, 2008 8:45 AM
> >> To: Jim Holtman
> >> Cc: r-help@r-project.org
> >> Subject: Re: [R] Transforming a string to a variable's name? help me
> >> newbie...
> >>
> >> Thanks Jim and All!
> >>
> >> It works:
> >> tmptrial <- trialcompute(trialextract(
> >> get(paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")) ,
> >> tmptrialinfo[1,32],secs,sdm),secs,binsize)
> >>
> >> Can I use "assign" instead? How should it be coded then?
> >>
> >> Thanks!
> >>
> >> - John
> >>
> >> On Mon, Dec 8, 2008 at 10:40 AM, Jim Holtman <[EMAIL PROTECTED]>
> >> wrote:
> >> > ?get
> >> >
> >> >
> >> > Sent from my iPhone
> >> >
> >> > On Dec 8, 2008, at 7:11, "tsunhin wong" <[EMAIL PROTECTED]> wrote:
> >> >
> >> >> Dear all,
> >> >>
> >> >> I'm a newbie in R.
> >> >> I have a 45x2x2x8 design.
> >> >> A dataframe stores the metadata of trials. And each trial has its
> >> own
> >> >> data file: I used "read.table" to import every trial into R as a
> >> >> dataframe (variable).
> >> >>
> >> >> Now I dynamically ask R to retrieve trials that fit certain
> >> selection
> >> >> criteria, so I use "subset", e.g.
> >> >> tmptrialinfo <- subset(trialinfo, (Subject==24 &
> >> Filename=="v2msa8"))
> >> >>
> >> >> The name of the dataframe / variable of an individual trial can
> be
> >> >> obtained using:
> >> >> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
> >> >> Then I get a string:
> >> >> "t24v2msa8.gz"
> >> >> which is of the exact same name of the dataframe / variable of
> that
> >> >> trial, which is:
> >> >> t24v2msa8.gz
> >> >>
> >> >> Can somebody tell me how can I change that string (obtained from
> >> >> "paste()" above) to be a usable / manipulable variable name, so
> that
> >> I
> >> >> can do something, such as:
> >> >> (1)
> >> >> tmptrial <- trialcompute(trialextract(
> >> >> paste("t",tmptrialinfo[1,2],tmptrialinfo[1,16],".gz",sep="")
> >> >> ,tmptrialinfo[1,32],secs,sdm),secs,binsize)
> >> >> instead of hardcoding:
> >> >> (2)
> >> >> tmptrial <-
> >> >>
> >>
> trialcompute(trialextract(t24v2msa8.gz,tmptrialinfo[1,32],secs,sdm),sec
> >> s,binsize)
> >> >>
> >> >> Currently, 1) doesn't work...
> >> >>
> >> >> Thanks in advance for your help!
> >> >>
> >> >> Regards,
> >> >>
> >> >> John
> >> >>
> >> >> __
> >> >> R-help@r-project.org mailing list
> >> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> >> PLEASE do read the posting guide
> >> >> http://www.R-project.org/posting-guide.html
> >> >> and provide commented, minimal, self-contained, reproducible
> code.
> >> >
> >>
> >> __
> >> R-help@r-project.org mailing list
> >> https://stat.e

Re: [R] legend at fixed distance form the bottom

2008-12-08 Thread Christophe Genolini

Thank you very much,
That's very close to what I want. Well, I would have loved to let the 
user manually resize the graph, but I am not sure it is possible. So 
that's enough for me, thanks a lot.


Christophe

PS: with grconvertX(0.5,'nfc'), the legend is not centered.
With grconvertX(0.535,'nfc'), it is almost centered. Is it because the 
left margin is larger than the right margin?

Try this:

symboles <- c(3,4,5,6)
dn <- rbind(matrix(rnorm(20),5))
layout(matrix(c(1,1,1,2,2,3),3))


for(i in 1:3){
  matplot(dn,type="b",xlab="",pch=symboles)
  legend(grconvertX(0.5,'nfc'), grconvertY(0,'nfc'),
xjust=0.5, yjust=0,
pch = unique(symboles),
legend = c("a","b","c","d"),
horiz = TRUE, xpd = NA)
}


Does that do what you want?  Or at least get you started in the correct 
direction?

--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111


  

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
project.org] On Behalf Of Christophe Genolini
Sent: Monday, December 08, 2008 11:28 AM
To: Greg Snow
Cc: r-help@r-project.org
Subject: Re: [R] legend at fixed distance form the bottom

Sorry, I will be more precise. Here is an example (simplified) of graph
I want :

 8< 

symboles <- c(3,4,5,6)
dn <- rbind(matrix(rnorm(20),5))
layout(matrix(c(1,1,1,2,2,3),3))

for(i in 1:3)
  matplot(dn,type="b",xlab="+: a  x: b  ???: c  ???: d",pch=symboles)
 8< 

But instead of ???, I want the "triangle" and the "losange".
So I try to use legend in order to get the losange and triangle :

--- 8< 

for(i in 1:3){
  matplot(dn,type="b",xlab="",pch=symboles)
  legend("top", pch = unique(listSymboles),
legend = c("a","b","c","d"),
inset = c(0,1.1), horiz = TRUE, xpd = NA)
}

--- 8< 

On the first plot, the legend is too low; on the second, the legend is in
a correct position; on the third one, the legend is *on* the axis graduations.
So my problem is to plot either a legend at a fixed distance from the
graph or to put some symbol ("triangle", "losange" and all the others,
since I might have more than 4 curves) in the xlab.

Any solution ?

Thanks

Christophe







I suggested the use of grconvertY: if you want to place something
exactly n inches from the bottom, then grconvertY will give the user
(or other) coordinates that match, or if you want to place something
exactly 10% of the total device height from the bottom, etc.


What to do in replotting a graph after a resize is usually tricky and
requires information that the program does not have and therefore has
to guess at; that is why it is safest to not depend on the computer
doing the correct thing in a resize and just tell the users to
resize/set size before calling your function.


If you give a bit more detail on what you are trying to accomplish
and what you mean by fixed distance (number of inches, proportion of
device height, margin row, etc.) we may be better able to help.


--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111



  

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
project.org] On Behalf Of Christophe Genolini
Sent: Saturday, December 06, 2008 7:08 AM
To: Greg Snow
Cc: r-help@r-project.org
Subject: Re: [R] legend at fixed distance form the bottom

Thanks for your answer.

Unfortunately, I cannot create the graphic at the final size since I
am writing a package in which the user will have to choose between several
graphics, and then he will have to export one. And there might be one
graph, or 2x2, or 3x3...

I checked grconvertY but I did not understand what you suggest. To me,
the use of legend cannot work since every length in the legend box
(xlength, ylength, distance to axes) will change when resizing the
graph. I was more expecting something like introducing the symbols used
in the graph *in* the xlab. Is that possible?

Christophe




It is best to create the graphics device at the final size desired,
then do the plotting and add the legend.  For getting a fixed distance,
look at the function grconvertY for one possibility.



--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111




  

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
project.org] On Behalf Of Christophe Genolini
Sent: Friday, December 05, 2008 6:40 AM
To: r-help@r-project.org
Subject: [R] legend at fixed distance form the bottom

Hi the list

I would like to add a legend under a graph but at a fixed distance from
the graph. Is it possible?
More precisely, here is my code :

--- 8< 
symboles <- c(3,4,5,6)
dn <- rbind(matrix(rnorm(20),,5),matrix(rnorm(20,2),,5))
listSymboles <- rep(symboles,each=2)
matplot(t(dn),pch=listSymbole

[R] Scan a folder for a given type of files

2008-12-08 Thread pomchip


Dear R-users,

I have found on the list several posts addressing the issue of getting data for
a list of defined files. However, I found nothing about scanning a given folder
and retrieving the list of, let's say, .txt files (I probably used the wrong
keywords). Could someone tell me which posts or function help I should look at?

Thanks in advance

Sebastien

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Scan a folder for a given type of files

2008-12-08 Thread Gábor Csárdi
Sebastien,

see ?list.files, especially the 'pattern' argument. For your
particular case it is

list.files(pattern="\\.txt$")

You might want to use the 'ignore.case' argument as well.

Gabor
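
A slightly fuller sketch that also reads the files it finds (the folder name is only an example):

files <- list.files("C:/mydata", pattern = "\\.txt$",
                    full.names = TRUE, ignore.case = TRUE)
dat <- lapply(files, read.table, header = TRUE)
names(dat) <- basename(files)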

On Mon, Dec 8, 2008 at 10:53 PM,  <[EMAIL PROTECTED]> wrote:
>
>
> Dear R-users,
>
> I have found on the list several posts addressing the issue of getting data 
> for
> a list of defined files. However, I found nothing about scanning a given 
> folder
> and retrieving the list of, let's say, .txt files (I probably used the wrong
> keywords). Could someone tell me which posts or function help I should look 
> at?
>
> Thanks in advance
>
> Sebastien
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Gabor Csardi <[EMAIL PROTECTED]> UNIL DGM

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Scan a folder for a given type of files

2008-12-08 Thread Bert Gunter
?files

and the links therein. Seems like a most obvious keyword to me for asking
about files ...

Cheers,
Bert Gunter 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of [EMAIL PROTECTED]
Sent: Monday, December 08, 2008 1:54 PM
To: r-help@r-project.org
Subject: [R] Scan a folder for a given type of files



Dear R-users,

I have found on the list several posts addressing the issue of getting data
for
a list of defined files. However, I found nothing about scanning a given
folder
and retrieving the list of, let's say, .txt files (I probably used the wrong
keywords). Could someone tell me which posts or function help I should look
at?

Thanks in advance

Sebastien

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Multivariate kernel density estimation

2008-12-08 Thread Greg Snow
Possible, yes (see fortune('Yoda')), but doing it can be a bit difficult.

Here is one approximation using an independent normal kernel in 2 dimensions:

kde.contour <- function(x, y=NULL, conf=0.95, xdiv=100, ydiv=100, kernel.sd) {
    xy <- xy.coords(x, y)
    xr <- range(xy$x)
    yr <- range(xy$y)

    xr <- xr + c(-1,1)*0.1*diff(xr)
    yr <- yr + c(-1,1)*0.1*diff(yr)

    if (missing(kernel.sd)) {
        kernel.sd <- c( diff(xr)/6, diff(yr)/6 )
    } else if (length(kernel.sd)==1) {
        kernel.sd <- rep(kernel.sd, 2)
    }

    xs <- seq(xr[1], xr[2], length.out=xdiv)
    ys <- seq(yr[1], yr[2], length.out=ydiv)
    mydf <- expand.grid( xx=xs, yy=ys )

    tmpfun <- function(xx, yy) {
        sum( dnorm(xx, xy$x, kernel.sd[1]) * dnorm(yy, xy$y, kernel.sd[2]) )
    }

    z <- mapply(tmpfun, xx=mydf$xx, yy=mydf$yy)

    sz <- sort(z, decreasing=TRUE)
    cz <- cumsum(sz)
    cz <- cz/cz[length(cz)]

    cutoff <- sz[ which( cz > conf )[1] ]

    plot(xy, xlab='x', ylab='y', xlim=xr, ylim=yr)
    # contour( xs, ys, matrix(z, nrow=xdiv), add=TRUE, col='blue')
    contour( xs, ys, matrix(z, nrow=xdiv), add=TRUE, col='red',
             levels=cutoff, labels='')

    invisible(NULL)
}

# test
kde.contour( rnorm(100), rnorm(100) )

# correlated data
my.xy <- MASS::mvrnorm(100, c(3,10), matrix( c(1,.8,.8,1), 2) )

kde.contour( my.xy, kernel.sd=.5 )

# compare to theoretical
lines(ellipse::ellipse( 0.8, scale=c(1,1), centre=c(3,10)), col='green')


# bimodal

new.xy <- rbind( MASS::mvrnorm(65, c(3,10), matrix( c(1,.6,.6,1),2) ),
MASS::mvrnorm(35, c(6, 7), matrix( c(1,.6,.6,1), 2) ) )

kde.contour( new.xy, kernel.sd=.75 )


For more than 2 dimensions it becomes more difficult (both to visualize and to 
find the region, contour only works in 2 dimensions).  You can also see that 
the approximations are not great compared to the true theory, but possibly a 
better kernel would improve that.

Hope this helps,


--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
801.408.8111


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> project.org] On Behalf Of Jeroen Ooms
> Sent: Monday, December 08, 2008 5:54 AM
> To: r-help@r-project.org
> Subject: [R] Multivariate kernel density estimation
>
>
> I would like to estimate a 95% highest density area for a multivariate
> parameter space (In the context of anova). Unfortunately I have only
> experience with univariate kernel density estimation, which is
> remarkebly
> easier :)
>
> Using Gibbs, I have sampled from a posterior distribution of an ANOVA
> model
> with k means (mu) and 1 common residual variance (s2). The means are
> independent of each other, but conditional on the residual variance. So
> now I
> have a data frame of say 10.000 iterations, and k+1 parameters.
>
> I am especially interested in the posterior distribution of the mu
> parameters, because I want to test the support for an inequality
> constrained
> model (e.g. mu1 > mu2 > mu3). I wish to derive the multivariate 95%
> highest
> density parameter space for the mu parameters. For example, if I had a
> posterior distribution with 2 means, this should somehow result in the
> circle or ellipse that contains the 95% highest density area.
>
> Is something like this possible in R? All tips are welcome.
> --
> View this message in context: http://www.nabble.com/Multivariate-
> kernel-density-estimation-tp20894766p20894766.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] French IRC channel and mailing list ?

2008-12-08 Thread Julien Barnier
Dear all,

For some time now, R has been becoming more and more popular in more and
more countries. France is certainly one of them, but one of the obstacles
French users may face is the lack of
documentation and support in their native language.

To offer this support in French, an IRC channel (#Rfr on
irc.freenode.net) was created some months ago, alongside the official
English channel #R. We (myself (~juba) and Pierre-Yves Chibon (~pingou))
have to recognize that it has very low activity right now, but that is
largely due to our lack of promotion of it.

Another tool that could be useful to bring support in french (and
other languages) would be dedicated mailing-lists. I've searched the
archives to see if similar requests have already been made but
couldn't manage to find one. So I would like to ask here the question,
has there been any thought on the creation of dedicated R-help
mailing-lists for the major languages such as Spanish, French, Chinese
and others ?

We had some thought about it and we actually think that it would be
something useful for users to be able to receive some help (especially
when their english is not really fluent).

Thanks in advance for your answers,

Sincerely,

Pierre-Yves Chibon
Julien Barnier

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Save image as metafile

2008-12-08 Thread David Winsemius


On Dec 8, 2008, at 4:02 PM, Prof Brian Ripley wrote:


On Mon, 8 Dec 2008, David Winsemius wrote:



To find out what devices are available, try:


capabilities()
    jpeg      png     tiff    tcltk      X11     aqua http/ftp  sockets
    TRUE     TRUE    FALSE     TRUE     TRUE     TRUE     TRUE     TRUE
  libxml     fifo   cledit    iconv      NLS  profmem    cairo
    TRUE     TRUE     TRUE     TRUE     TRUE    FALSE     TRUE

So on my machine, a png device can get graphics output. That should  
provide all of the functionality of a wmf format and be much more  
cross-platform.


Not so: wmf is primarily a vector format; however, PDF is a good  
cross-platform substitute.


Thank you for the correction.  I had conflated "lossless format" with  
"vector format".  Perhaps the OP will be happier with pdf output. Will  
he be able to use cmd-S to bring up a Save as dialog if he is not  
running from R.app?


--
David Winsemius



Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R and Scheme

2008-12-08 Thread Stavros Macrakis
I've read in many places that R semantics are based on Scheme semantics.  As
a long-time Lisp user and implementor, I've tried to make this more precise,
and this is what I've found so far.  I've excluded trivial things that
aren't basic semantic issues: support for arbitrary-precision integers;
subscripting; general style; etc. I would appreciate corrections or
additions from more experienced users of R -- I'm sure that some of the
points below simply reflect my ignorance.

==Similarities to Scheme==

R has first-class function closures. (i.e. correctly supports upward and
downward funarg).

R has a single namespace for functions and variables (Lisp-1).

==Important dissimilarities to Scheme (as opposed to other Lisps)==

R is not properly tail-recursive.

R does not have continuations or call-with-current-continuation or other
mechanisms for implementing coroutines, general iterators, and the like.

R supports keyword arguments.

==Similarities to Lisp and other dynamic languages, including Scheme==

R is runtime-typed and garbage-collected.

R supports nested read-eval-print loops for debugging etc.

R expressions are represented as user-manipulable data structures.
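
A quick illustration of that point (an added sketch, not from the original post):

e <- quote(1 + 2 * x)    # an unevaluated expression
e[[1]]                   # the function being called: `+`
eval(e, list(x = 3))     # 7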

==Dissimilarities to all (modern) Lisps, including Scheme==

R has call-by-need, not call-by-object-value.
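
A two-line illustration of the call-by-need point (again an added sketch):

f <- function(x, y) x                   # 'y' is a promise that is never forced
f(1, stop("this error never occurs"))   # returns 1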

R does not have macros.

R objects are values, not pointers, so a<-1:10; b<-a; b[1]<-999; a[1] =>
999.  Similarly, functions cannot modify the contents of their arguments.

There is no equivalent to set-car!/rplaca (not even pairlists and
expressions).  For example, r<-pairlist(1,2); r[[1]]<-r does not create a
circular list. And in general there doesn't seem to be substructure sharing
at the semantic level (though there may be in the implementation).

R does not have multiple value return in the Lisp sense.

R assignment creates a new local variable on first assignment, dynamically.
So static analysis is not enough to determine variable reference (R is not
referentially transparent). Example: ff <- function(a){if (a) x<-1; x} ;
x<-99; ff(T) -> 1; ff(F) -> 99.

In R, most data types (including numeric vectors) do not have a standard
external representation which can be read back in without evaluation.

R coerces logicals to numbers and numbers to strings. Lisps are stricter
about automatic type conversion -- except that false a.k.a. NIL == () in
Lisps other than Scheme.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get Greenhouse-Geisser epsilons from anova?

2008-12-08 Thread John Fox
Dear Peter and Nils,

I hesitate to repeat this (though I'm going to do it anyway!), but it's
quite simple to get these tests from Anova() in the car package. Here's an
example from ?Anova of a repeated-measures ANOVA with two within and two
between-subject factors:

-- snip ---

Anova> ## a multivariate linear model for repeated-measures data
Anova> ## See ?OBrienKaiser for a description of the data set used in this
example.
Anova> 
Anova> phase <- factor(rep(c("pretest", "posttest", "followup"), c(5, 5,
5)),
Anova+ levels=c("pretest", "posttest", "followup"))

Anova> hour <- ordered(rep(1:5, 3))

Anova> idata <- data.frame(phase, hour)

Anova> idata
  phase hour
1   pretest1
2   pretest2
3   pretest3
4   pretest4
5   pretest5
6  posttest1
7  posttest2
8  posttest3
9  posttest4
10 posttest5
11 followup1
12 followup2
13 followup3
14 followup4
15 followup5

Anova> mod.ok <- lm(cbind(pre.1, pre.2, pre.3, pre.4, pre.5, 
Anova+  post.1, post.2, post.3, post.4, post.5, 
Anova+  fup.1, fup.2, fup.3, fup.4, fup.5) ~
treatment*gender, 
Anova+ data=OBrienKaiser)

Anova> (av.ok <- Anova(mod.ok, idata=idata, idesign=~phase*hour)) 

Type II Repeated Measures MANOVA Tests: Pillai test statistic
Df test stat approx F num Df den DfPr(>F)

treatment20.4809   4.6323  2 10 0.0376868 *

gender   10.2036   2.5558  1 10 0.1409735

treatment:gender 20.3635   2.8555  2 10 0.1044692

phase10.8505  25.6053  2  9 0.0001930
***
treatment:phase  20.6852   2.6056  4 20 0.0667354 .

gender:phase 10.0431   0.2029  2  9 0.8199968

treatment:gender:phase   20.3106   0.9193  4 20 0.4721498

hour 10.9347  25.0401  4  7 0.0003043
***
treatment:hour   20.3014   0.3549  8 16 0.9295212

gender:hour  10.2927   0.7243  4  7 0.6023742

treatment:gender:hour20.5702   0.7976  8 16 0.6131884

phase:hour   10.5496   0.4576  8  3 0.8324517

treatment:phase:hour 20.6637   0.2483 16  8 0.9914415

gender:phase:hour10.6950   0.8547  8  3 0.6202076

treatment:gender:phase:hour  20.7928   0.3283 16  8 0.9723693

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Anova> summary(av.ok, multivariate=FALSE)

Univariate Type II Repeated-Measures ANOVA Assuming Sphericity

 SS num Df Error SS den Df   FPr(>F)

treatment   211.286  2  228.056 10  4.6323  0.037687
*  
gender   58.286  1  228.056 10  2.5558  0.140974

treatment:gender130.241  2  228.056 10  2.8555  0.104469

phase   167.500  2   80.278 20 20.8651 1.274e-05
***
treatment:phase  78.668  4   80.278 20  4.8997  0.006426
** 
gender:phase  1.668  2   80.278 20  0.2078  0.814130

treatment:gender:phase   10.221  4   80.278 20  0.6366  0.642369

hour106.292  4   62.500 40 17.0067 3.191e-08
***
treatment:hour1.161  8   62.500 40  0.0929  0.999257

gender:hour   2.559  4   62.500 40  0.4094  0.800772

treatment:gender:hour 7.755  8   62.500 40  0.6204  0.755484

phase:hour   11.083  8   96.167 80  1.1525  0.338317

treatment:phase:hour  6.262 16   96.167 80  0.3256  0.992814

gender:phase:hour 6.636  8   96.167 80  0.6900  0.699124

treatment:gender:phase:hour  14.155 16   96.167 80  0.7359  0.749562

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 


Mauchly Tests for Sphericity

Test statistic p-value
phase  0.74927 0.27282
treatment:phase0.74927 0.27282
gender:phase   0.74927 0.27282
treatment:gender:phase 0.74927 0.27282
hour   0.06607 0.00760
treatment:hour 0.06607 0.00760
gender:hour0.06607 0.00760
treatment:gender:hour  0.06607 0.00760
phase:hour 0.00478 0.44939
treatment:phase:hour   0.00478 0.44939
gender:phase:hour  0.00478 0.44939
treatment:gender:phase:hour0.00478 0.44939


Greenhouse-Geisser and Huynh-Feldt Corrections
 for Departure from Sphericity

 GG eps Pr(>F[GG])
phase   0.79953  7.323e-05 ***
treatment:phase 0.799530.01223 *  
gender:

Re: [R] R and Scheme

2008-12-08 Thread Luke Tierney

On Mon, 8 Dec 2008, Stavros Macrakis wrote:


I've read in many places that R semantics are based on Scheme semantics.  As
a long-time Lisp user and implementor, I've tried to make this more precise,
and this is what I've found so far.  I've excluded trivial things that
aren't basic semantic issues: support for arbitrary-precision integers;
subscripting; general style; etc. I would appreciate corrections or
additions from more experienced users of R -- I'm sure that some of the
points below simply reflect my ignorance.

==Similarities to Scheme==

R has first-class function closures. (i.e. correctly supports upward and
downward funarg).

R has a single namespace for functions and variables (Lisp-1).

==Important dissimilarities to Scheme (as opposed to other Lisps)==

R is not properly tail-recursive.


True at present.  May be unavoidable since the language provides
access to the stack via things like sys.parent, but as it is rare to
look at anything other than the immediate calling environment and call
(outside of a debugging context) it may be possible to change that.



R does not have continuations or call-with-current-continuation or other
mechanisms for implementing coroutines, general iterators, and the like.

R supports keyword arguments.

==Similarities to Lisp and other dynamic languages, including Scheme==

R is runtime-typed and garbage-collected.

R supports nested read-eval-print loops for debugging etc.

R expressions are represented as user-manipulable data structures.

==Dissimilarities to all (modern) Lisps, including Scheme==

R has call-by-need, not call-by-object-value.

R does not have macros.


Those are related -- because of lazy evaluation macros are
not needed to achieve semantic goals (see for example tryCatch).  Being
able to define friendlier syntax would sometimes be nice though (see
tryCatch again).


R objects are values, not pointers, so a<-1:10; b<-a; b[1]<-999; a[1] =>
999.  Similarly, functions cannot modify the contents of their arguments.

There is no equivalent to set-car!/rplaca (not even pairlists and
expressions).  For example, r<-pairlist(1,2); r[[1]]<-r does not create a
circular list. And in general there doesn't seem to be substructure sharing
at the semantic level (though there may be in the implementation).

R does not have multiple value return in the Lisp sense.

R assignment creates a new local variable on first assignment, dynamically.
So static analysis is not enough to determine variable reference (R is not
referentially transparent). Example: ff <- function(a){if (a) x<-1; x} ;
x<-99; ff(T) -> 1; ff(F) -> 99.


Correct, and a fair nuisance for code analysis and compilation work.
I'm not sure how much would break if R adopted the conventions in
Python (or with Scheme's define as I recall) that referencing a not
yet initialized local variable is an error.

I'm not sure I would label this as meaning R is not referentially
transparent though -- that goes out the window with mutable bindings as
also available in Scheme.


In R, most data types (including numeric vectors) do not have a standard
external representation which can be read back in without evaluation.


The default print form is not readable in this sense but dput is
available for this purpose.


R coerces logicals to numbers and numbers to strings. Lisps are stricter
about automatic type conversion -- except that false a.k.a. NIL == () in
Lisps other than Scheme.


A more important difference may be that logicals can have three values
-- TRUE, FALSE and NA.

luke



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Luke Tierney
Chair, Statistics and Actuarial Science
Ralph E. Wareham Professor of Mathematical Sciences
University of Iowa                    Phone: 319-335-3386
Department of Statistics and          Fax:   319-335-3017
   Actuarial Science
241 Schaeffer Hall                    email: [EMAIL PROTECTED]
Iowa City, IA 52242                   WWW:   http://www.stat.uiowa.edu

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R and Scheme

2008-12-08 Thread Gabor Grothendieck
A few comments interspersed.

On Mon, Dec 8, 2008 at 5:59 PM, Stavros Macrakis <[EMAIL PROTECTED]> wrote:
> I've read in many places that R semantics are based on Scheme semantics.  As
> a long-time Lisp user and implementor, I've tried to make this more precise,
> and this is what I've found so far.  I've excluded trivial things that
> aren't basic semantic issues: support for arbitrary-precision integers;
> subscripting; general style; etc. I would appreciate corrections or
> additions from more experienced users of R -- I'm sure that some of the
> points below simply reflect my ignorance.
>
> ==Similarities to Scheme==
>
> R has first-class function closures. (i.e. correctly supports upward and
> downward funarg).
>
> R has a single namespace for functions and variables (Lisp-1).

Environments can be used to create separate name spaces.

R packages can use the NAMESPACE file to set up their own namespace.
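
For instance (a tiny added sketch):

e <- new.env()
assign("x", 1, envir = e)  # this 'x' does not touch any global 'x'
get("x", envir = e)        # 1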

>
> ==Important dissimilarities to Scheme (as opposed to other Lisps)==
>
> R is not properly tail-recursive.
>
> R does not have continuations or call-with-current-continuation or other
> mechanisms for implementing coroutines, general iterators, and the like.
>

True, although there is callCC, but it just lets you jump right out of a nested
sequence of calls.

> R supports keyword arguments.
>
> ==Similarities to Lisp and other dynamic languages, including Scheme==
>
> R is runtime-typed and garbage-collected.
>
> R supports nested read-eval-print loops for debugging etc.
>
> R expressions are represented as user-manipulable data structures.
>
> ==Dissimilarities to all (modern) Lisps, including Scheme==
>
> R has call-by-need, not call-by-object-value.

Call by need?

>
> R does not have macros.

You can create them. See:

Programmer's Niche: Macros in R
http://cran.r-project.org/doc/Rnews/Rnews_2001-3.pdf

>
> R objects are values, not pointers, so a<-1:10; b<-a; b[1]<-999; a[1] =>
> 999.  Similarly, functions cannot modify the contents of their arguments.
>

a[1] is not 999 after the above completes (if that is what is meant by
a[1] => 999):

> a<-1:10; b<-a; b[1]<-999
> a
 [1]  1  2  3  4  5  6  7  8  9 10

> There is no equivalent to set-car!/rplaca (not even pairlists and
> expressions).  For example, r<-pairlist(1,2); r[[1]]<-r does not create a
> circular list. And in general there doesn't seem to be substructure sharing
> at the semantic level (though there may be in the implementation).
>
> R does not have multiple value return in the Lisp sense.

You can do this:

http://finzi.psych.upenn.edu/R/Rhelp02a/archive/36820.html

> R assignment creates a new local variable on first assignment, dynamically.
> So static analysis is not enough to determine variable reference (R is not
> referentially transparent). Example: ff <- function(a){if (a) x<-1; x} ;
> x<-99; ff(T) -> 1; ff(F) -> 99.
>
> In R, most data types (including numeric vectors) do not have a standard
> external representation which can be read back in without evaluation.

???

>
> R coerces logicals to numbers and numbers to strings. Lisps are stricter
> about automatic type conversion -- except that false a.k.a. NIL == () in
> Lisps other than Scheme.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to display y-axis labels in Multcomp plot

2008-12-08 Thread Metconnection

Dear R-users, 
I'm currently using the multcomp package to produce plots of means with 95%
confidence intervals
i.e.

mult<-glht(lm(response~treatment, data=statdata),
linfct=mcp(treatment="Means"))
plot(confint(mult,calpha = sig))

Unfortunately the y-axis on the plot appears to be fixed and hence if the
labels on the y-axis (treatment levels) are too long, then they are not
displayed in full on the plot. Of course I could always make the labels
shorter but I was wondering if there was a way to make the position of the
y-axis on the plot more flexible, such as in the scatterplot produced using
xyplot function, that would allow me to view the labels in full.

Thanks in advance for any advice!
Simon

-- 
View this message in context: 
http://www.nabble.com/How-to-display-y-axis-labels-in-Multcomp-plot-tp20904977p20904977.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] custom panel help in lattice

2008-12-08 Thread Deepayan Sarkar
On Sun, Dec 7, 2008 at 9:37 AM, Jon Loehrke <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am having an issue with a custom panel for lattice.  The problem
> comes when I try passing a groups argument.
>
> Here is the custom panel, a wrapper for smooth spline.  I copied
> panel.loess and replaced the loess arguments with smooth.spline().
> [Note: I would like to use the cross-validation fitting properties of
> smooth.spline.]
>
> library(lattice)
>
> panel.smooth.spline<-function(x,y,w=NULL, df, spar = NULL, cv = FALSE,
>lwd=plot.line$lwd, lty=plot.line$lty,col, col.line=plot.line$col,
>type, horizontal=FALSE,... ){
>
>x <- as.numeric(x)
>y <- as.numeric(y)
>ok <- is.finite(x) & is.finite(y)
>if (sum(ok) < 1)
>return()
>if (!missing(col)) {
>if (missing(col.line))
>col.line <- col
>}
>plot.line <- trellis.par.get("plot.line")
>if (horizontal) {
>spline <- smooth.spline(y[ok], x[ok], ...)
>panel.lines(x = spline$y, y = spline$x, col = col.line,
>lty = lty, lwd = lwd, ...)
>}
>else {
>spline <- smooth.spline(x[ok], y[ok],...)
>panel.lines(x = spline$x, y = spline$y, col = col.line,
>lty = lty, lwd = lwd, ...)
>}
>}
>
>
> # Here is my test data frame
> set.seed(25)
> test<-data.frame(x=c(1:200), y=rnorm(200), groups=gl(4,200/4))
>
> # This call to xyplot works, but the smoother colors are not unique.
>
> xyplot(y~x|groups, data=test,
>panel=function(...){
>panel.xyplot(...)
>panel.smooth.spline(...)
>})
>
> # This call to xyplot doesn't work and results in an error "error
> using packet"
>
> xyplot(y~x|groups, data=test, groups=groups,
>panel=function(...){
>panel.xyplot(...)
>panel.smooth.spline(...)
>})
>
> I think this should be quite simple but I must be too simple minded.
> Thanks for any help.

You end up calling smooth.spline() with arguments it doesn't
recognize. One work-around is defining 'panel.smooth.spline' as
follows:

panel.smooth.spline <-
function(x, y,
 w=NULL, df, spar = NULL, cv = FALSE,
 lwd=plot.line$lwd, lty=plot.line$lty,col, col.line=plot.line$col,
 type, horizontal=FALSE,... )
{
   x <- as.numeric(x)
   y <- as.numeric(y)
   ok <- is.finite(x) & is.finite(y)
   if (sum(ok) < 1)
   return()
   if (!missing(col)) {
   if (missing(col.line))
   col.line <- col
   }
   plot.line <- trellis.par.get("plot.line")
   if (horizontal) {
   spline <-
   smooth.spline(y[ok], x[ok],
 w=w, df=df, spar = spar, cv = cv)
   panel.lines(x = spline$y, y = spline$x, col = col.line,
   lty = lty, lwd = lwd, ...)
   }
   else {
   spline <-
   smooth.spline(x[ok], y[ok],
 w=w, df=df, spar = spar, cv = cv)
   panel.lines(x = spline$x, y = spline$y, col = col.line,
   lty = lty, lwd = lwd, ...)
   }
   }

-Deepayan

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Identifying a subset of observations on a 3d-scatter plot using cloud()

2008-12-08 Thread Deepayan Sarkar
On Sat, Dec 6, 2008 at 9:27 PM, Giam Xingli <[EMAIL PROTECTED]> wrote:
> Hello everyone,
>
> This is my first post to the mailing list, so I hope I am posting my message 
> the correct way.
>
> I am trying to present my dataset in a 3d scatterplot using cloud() in the 
> {lattice} package. I hope to explicitly identify a subset of my observations. 
> The observations in this subset need to satisfy the following critera (values 
> on x,y, and z axes above a certain cutoff value). It will be great if I can 
> get advice on how to annotate the points representing the subset of 
> observations with a different colour, and if possible, label the points.
>

For colors, you could try

cloud(z ~ x * y, ..., groups = (x > c1 & y > c2 & z > c3))

For interactive labeling, see ?panel.identify.cloud. Direct labeling
is also possible, but you need to write a panel function
('panel.3dtext' in the latticeExtra package may help).

-Deepayan
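
A self-contained sketch of the groups= idea with made-up data (the cutoffs c1, c2, c3 are placeholders):

library(lattice)
set.seed(1)
d <- data.frame(x = rnorm(100), y = rnorm(100), z = rnorm(100))
c1 <- 0; c2 <- 0; c3 <- 0   # whatever cutoffs are of interest
cloud(z ~ x * y, data = d,
      groups = (x > c1 & y > c2 & z > c3),
      auto.key = TRUE)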

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to display y-axis labels in Multcomp plot

2008-12-08 Thread Kingsford Jones
See ?par and note the 'mar' parameter

Here's an example:


library(multcomp)
labs <- c('short', 'medium', 'long')
treatment <- gl(3, 10, labels = labs)
response <- rnorm(30, mean=as.numeric(treatment))
mult <- glht(lm(response ~ treatment),
linfct=mcp(treatment='Means'))
par(mar=c(4,8,4,2))
plot(confint(mult))

hth,

Kingsford Jones



On Mon, Dec 8, 2008 at 5:06 PM, Metconnection <[EMAIL PROTECTED]> wrote:
>
> Dear R-users,
> I'm currently using the multcomp package to produce plots of means with 95%
> confidence intervals
> i.e.
>
> mult<-glht(lm(response~treatment, data=statdata),
> linfct=mcp(treatment="Means"))
> plot(confint(mult,calpha = sig))
>
> Unfortunately the y-axis on the plot appears to be fixed and hence if the
> labels on the y-axis (treatment levels) are too long, then they are not
> displayed in full on the plot. Of course I could always make the labels
> shorter but I was wondering if there was a way to make the position of the
> y-axis on the plot more flexible, such as in the scatterplot produced using
> xyplot function, that would allow me to view the labels in full.
>
> Thanks in advance for any advice!
> Simon
>
> --
> View this message in context: 
> http://www.nabble.com/How-to-display-y-axis-labels-in-Multcomp-plot-tp20904977p20904977.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to use different title in a loop

2008-12-08 Thread Xinxin Yu

Hi all,

I want to draw n plots whose titles are "Main effect of X_i", i= 1,  
2, ..., n, respectively.


More specifically, I use:

par(mfrow = c(n/2, 2))
for ( i in 1:n)
{
plot(1:J, y[i, 1: J], main = 'Main effect of X_i')
}

This gives me a series of plots that all have the same title, with the
literal letter i in it, not the consecutive numbers I want.


Thanks in advance.

Xinxin

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] bayesm package not downloading via any mirror or repository

2008-12-08 Thread ekwaters

I am a pretty new R user, I am running the latest linux version on xandros,
updated with some extra debian packages, and I also run the latest windows
version, but prefer linux.

I am having trouble downloading "bayesm"; it won't download at all from any of the
sites on the web. I resorted to this one,
http://packages.debian.org/unstable/math/r-cran-bayesm, and got slightly
further, but I think it is still not recognising a mirror. This is my error
message:

> install.packages("bayesm")
Warning in install.packages("bayesm") : argument 'lib' is missing: using
/usr/local/lib/R/site-library
--- Please select a CRAN mirror for use in this session ---
Loading Tcl/Tk interface ... done
Warning in download.packages(unique(pkgs), destdir = tmpd, available =
available,  :
 no package 'bayesm' at the repositories

The package is certainly there.

It could be my version of linux, I know.

All I want to do is sample from a dirichlet prior, so I want rdirichlet and
ddirichlet commands basically. If anyone can help or suggest an alternate
package that would be great.

E
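
As an aside on that last question: rdirichlet()/ddirichlet() are also provided by the gtools and MCMCpack packages, so a rough sketch that sidesteps bayesm entirely (the prior parameters are made up):

library(gtools)                       # MCMCpack provides the same two functions
alpha <- c(1, 1, 1)                   # example prior parameters
draws <- rdirichlet(1000, alpha)      # 1000 samples from Dirichlet(alpha)
ddirichlet(c(0.2, 0.3, 0.5), alpha)   # density at a single point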
-- 
View this message in context: 
http://www.nabble.com/bayesm-package-not-downloading-via-any-mirror-or-repository-tp20906495p20906495.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Scan a folder for a given type of files

2008-12-08 Thread Sébastien
Thanks Gabor

Gábor Csárdi a écrit :
> Sebastien,
>
> see ?list.files, especially the 'pattern' argument. For your
> particular case it is
>
> list.files(pattern="\\.txt$")
>
> You might want to use the 'ignore.case' argument as well.
>
> Gabor
>
> On Mon, Dec 8, 2008 at 10:53 PM,  <[EMAIL PROTECTED]> wrote:
>   
>> Dear R-users,
>>
>> I have found on the list several posts addressing the issue of getting data 
>> for
>> a list of defined files. However, I found nothing about scanning a given 
>> folder
>> and retrieving the list of, let's say, .txt files (I probably used the wrong
>> keywords). Could someone tell me which posts or function help I should look 
>> at?
>>
>> Thanks in advance
>>
>> Sebastien
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>> 
>
>
>
>   

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to use different title in a loop

2008-12-08 Thread Aval Sarri
see ?paste

main = paste ("Main effect of ", variable)

> par(mfrow = c(n/2, 2))
> for ( i in 1:n)
> {
>plot(1:J, y[i, 1: J], main = 'Main effect of X_i')
> }
>
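
A minimal self-contained version of that fix (n, J and y are made up here):

n <- 4; J <- 10
y <- matrix(rnorm(n * J), nrow = n)
par(mfrow = c(n/2, 2))
for (i in 1:n) {
  plot(1:J, y[i, 1:J], main = paste("Main effect of X_", i, sep = ""))
}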

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Trouble with gridBase and inset plots

2008-12-08 Thread Paul Murrell
Hi


Lorenzo Isella wrote:
> Dear All,
> I ma having a trouble in generating a figure containing 3 insets with
> the gridBase package.
> I always get an error message of the kind:
> 
> Error in gridPLT() : Figure region too small and/or viewport too large
> 
> No matter which parameters I choose. The plots works nicely with two
> insets only, but when I try adding the third one, my troubles begin.
> I am probably doing something wrong in the generation of the 3rd inset
> and I paste below everything I do in this (a bit complicated) figure.
> Any suggestion is welcome.


The error means that you are creating a region that is too small.  Try
setting the width and height of the PDF to something big and your code
might work and you might be able to see why the region is too small at
the default size.  To get more help, you'll have to simplify your code
example and/or post some data so that we can run your code.

Paul
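
For example, something along these lines before the plotting calls (the size is only a guess; the pdf() default is 7 x 7 inches):

pdf("./post-processing-plots/exploratory_research_figure_2.pdf",
    width = 12, height = 12)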


> Cheers
> 
> Lorenzo
> 
> 
> pdf("./post-processing-plots/exploratory_research_figure_2.pdf")
> par( mar = c(4.5,5, 2, 1) + 0.1)
> plot(time[1:time_end],tot_num_150[1:time_end]/1e6,type="b",lwd=2,col="blue",lty=2,
>  xlab=expression(paste(tau,"[s]")),
>  ylab=expression(paste("N[",
> cm^{-3},"]")),cex.lab=1.6,ylim=range(c(7.4e7,1.43e8)),yaxt="n",cex.axis=1.4)
> #lines(time[1],ini_pop/1e6, "p",col="red",lwd=2,lty=1,pch=5 )
> lines(time[time_end],8.25e7, "p",col="red",lwd=2,lty=1,pch=5)
> 
> axis(side=2, at=c( 7.4e7, 9.6e7, 1.18e8, 1.4e8),
> labels=expression(7.4%*%10^7, 9.6%*%10^7,
> 1.18%*%10^8,1.4%*%10^8),cex.lab=1.6,cex.axis=1.4)
> #axis(side=1,cex.axis=1.4)
> ## lines(time[1:time_end],
> N_approx[1:time_end],col="red",type="b",lwd=2,lty=1,pch=4)
> ## lines(time[1:time_end],
> N_approx2[1:time_end],col="black",type="b",lwd=2,lty=1,pch=2)
> ## lines(time[1:time_end],
> N_approx_beta1[1:time_end,2],col="brown",type="b",lwd=2,lty=1,pch=5)
> legend("topright",cex=1.2, c(expression("Simulation"),
> expression("Outlet measurement")),
> lwd=c(2,2),lty=c(2,0),pch = c(1,5),col=c("blue", "red"),box.lwd=0,box.lty=0,
> ,xjust = 1, yjust = 1)
> # abline(v=time[12],lwd=2,pch=2,lty=2)
> lines(c(time[14],time[14]), c(0,1.2e8),lwd=2,lty=2,pch=2)
> # legend(-0.2,9.2e7,cex=1.2,c(expression("numerical result for a
> 5m-long pipe")),bty="n")
>  arrows(0.8, 9e7, time[14], tot_num_150[14]/1e6, length = 0.15,lwd=2)
> text(0.8,8.8e7,cex=1.2,"Final concentration for a")
> text(0.8,8.5e7,cex=1.2,"6.5m long transfer tube (LAT)")
> 
> text(0.8,8e7,cex=1.2,"Final concentration for a")
> text(0.68,7.7e7,cex=1.2,"9m long transfer tube")
> text(0.8,7.4e7,cex=1.2,"(VELA)")
> 
> text(1.3,1.24e8,cex=1.2,"Residence time for")
> text(1.3,1.21e8,cex=1.2,"a 6.5m long transfer tube")
> 
> 
> arrows(1.1, 7.7e7, time[21], 7.7e7, length = 0.,lwd=2)
> arrows( time[21], 7.7e7, time[21], tot_num_150[21]/1e6, length = 0.15,lwd=2)
> 
> 
> 
> par( mar = c(0.,0., 0., 0.) )
> 
> 
> #1st inset
> 
> vp <- baseViewports()
>pushViewport(vp$inner,vp$figure,vp$plot)
>pushViewport(viewport(x=-0.0,y=1.04,width=.4,height=.4,just=c(0,1)))
> 
>par(fig=gridPLT(),new=F)
> 
> #grid.rect(gp=gpar(lwd=0,col="red"))
> 
>plot(D_mean,data_150[1, ]/log_factor*log(10)/1e6,"l",
> pch=1,col="black", lwd=2,xlab="",ylab=""
> ,cex.axis=1.,cex.lab=1.,log="x",xaxt="n",yaxt="n",
> ylim=range(c(0, 2.4e8)))
> 
> ## axis(side=2, at=c( 0, 0.6e8, 1.2e8, 1.8e8, 2.4e8),
> ## labels=expression(0, 6%*%10^7, 1.2%*%10^8,
> 1.8%*%10^8,2.4%*%10^8),cex.lab=1.4,cex.axis=1.2)
> 
> 
> #2nd inset
> 
> #vp <- baseViewports()
>pushViewport(vp$inner,vp$figure,vp$plot)
>pushViewport(viewport(x=0.5,y=0.65,width=.4,height=.4,just=c(0,1)))
> 
>par(fig=gridPLT(),new=F)
> 
> #grid.rect(gp=gpar(lwd=0,col="red"))
> 
>plot(D_mean,data_150[21, ]/log_factor*log(10)/1e6,"l",lwd=2,
> pch=1,col="black",xlab="",ylab="", log="x"
> ,cex.axis=1.4,cex.lab=1.6,xaxt="n",yaxt="n",ylim=range(c(0, 2.4e8)))
> 
> 
> 
> #3rd inset
> 
> #vp <- baseViewports()
>pushViewport(vp$inner,vp$figure,vp$plot)
>pushViewport(viewport(x=0.25,y=0.7,width=.4,height=.4,just=c(0,1)))
> 
>par(fig=gridPLT(),new=T)
> 
> #grid.rect(gp=gpar(lwd=0,col="red"))
> 
>plot(D_mean,data_150[14, ]/log_factor*log(10)/1e6,"l",lwd=2,
> pch=1,col="black",xlab="",ylab="", log="x"
> ,cex.axis=1.4,cex.lab=1.6,xaxt="n",yaxt="n",ylim=range(c(0, 2.4e8)))
> 
> 
> popViewport(3)
> 
> 
> dev.off()
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
[EMAIL PROTECTED]
http://www.stat.auckland.ac.nz/~paul/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/

[R] expected variable name error pos 98349 WInBUGS in R

2008-12-08 Thread Anamika Chaudhuri
> I am using a random intercept model with SITEID as random and NAUSEA as
> outcome.
>
> I tried using a dataset without missing values and changed my model
> statement accordingly but still get the same error. Following is an excerpt.
> > anal.data <- read.table("nausea.txt", header=T, sep="\t")
> > list(names(anal.data))
> [[1]]
> [1] "SITEID" "NAUSEA"
> > #anal.data <- read.csv("simuldat.csv", header=T)
> > attach(anal.data)
> The following object(s) are masked from anal.data ( position 3 ) :
>  NAUSEA SITEID
>
> The following object(s) are masked from anal.data ( position 4 ) :
>  NAUSEA SITEID
> > data.bugs <-
> list("NAUSEA"=NAUSEA,"SITEID"=SITEID,"n.samples"=n.samples,"n.sites"=n.sites,"n.params"=n.params)
> > bugsData(data.bugs, fileName = "nauseadata.txt")
> > inits.bugs <- list("alpha"=rep(0,n.sites), "tau"=1)
> > bugsInits(list(inits.bugs), fileName = "nauseainit.txt")
> > modelCheck("nausea_random.txt")   # check model file
> model is syntactically correct
> > modelData("nauseadata.txt") # read data file
> expected variable name error pos 98349
> *MODEL*
> model
> {
> for (i in 1:n.samples)
> {NAUSEA[i] ~ dbin(p[i],1)
> logit(p[i]) <- alpha[SITEID[i]]}
> #for(k in 1:n.params)
> #{b[k]~ dnorm(0.0,tau)}
> for (j in 1:n.sites)
> {alpha[j]~dnorm(0.0,1.0E-10)}
>
> tau ~ dgamma(0.001,0.001)
> }
> *Dataset:*
> SITEID NAUSEA
> 1 0
> 1 1
> 1 1
> 1 0
> 1 1
> 1 1
> 1 0
> 1 1
> 1 1
> 1 1
> 1 0
> 1 1
> 1 0
> 1 1
> 1 1
> 1 1
> 1 0
> 1 0
> 1 0
> 1 1
>
> -Anamika
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] errors with compilation

2008-12-08 Thread Jason Tan

Hi,

i'm trying to compile R on a Cray XT3 using pgi/7.2.1 - CNL (compute  
node linux)

The R version is 2.8.0

these are the configure options:
-enable-R-static-lib=yes
--disable-R-shlib
CPICFLAGS=fpic
FPICFLAGS=fpic
CXXPICFLAGS=fpic
SHLIB_LDFLAGS=shared
--with-x=no
SHLIB_CXXLDFLAGS=shared
--disable-BLAS-shlib
CFLAGS="-g -O2 -Kieee"
FFLAGS="-g -O2 -Kieee"
CXXFLAGS="-g -O2 -Kieee"
FCFLAGS="-g -O2 -Kieee"
CC=cc
F77=ftn
CXX=CC
FC=ftn

R is now configured for x86_64-unknown-linux-gnu

 Source directory:  .
 Installation directory:/lus/nid00036/jasont/R

 C compiler:cc  -g -O2 -Kieee
 Fortran 77 compiler:   ftn  -g -O2 -Kieee

 C++ compiler:  CC  -g -O2 -Kieee
 Fortran 90/95 compiler:ftn -g -O2 -Kieee
 Obj-C compiler: gcc -g -O2

 Interfaces supported:
 External libraries:readline
 Additional capabilities:   PNG, JPEG, iconv, MBCS, NLS
 Options enabled:   static R library, R profiling, Java

 Recommended packages:  yes

The error is :

cc -I../../src/extra/zlib -I../../src/extra/bzip2 -I../../src/extra/ 
pcre  -I. -I../../src/include -I../../src/include -I/usr/local/include  
-DHAVE_CONFIG_H -g -O2 -Kieee -c pcre.c -o pcre.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h: 108)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h: 109)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h: 110)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h: 111)
PGC/x86-64 Linux 7.2-1: compilation completed with warnings
cc -I../../src/extra/zlib -I../../src/extra/bzip2 -I../../src/extra/ 
pcre  -I. -I../../src/include -I../../src/include -I/usr/local/include  
-DHAVE_CONFIG_H -g -O2 -Kieee -c platform.c -o platform.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
PGC-S-0037-Syntax error: Recovery attempted by deleting identifier  
FALSE (platform.c: 1657)

PGC-S-0094-Illegal type conversion required (platform.c: 1661)
PGC/x86-64 Linux 7.2-1: compilation completed with severe errors
make[3]: *** [platform.o] Error 2
make[3]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src/main'
make[2]: *** [R] Error 2
make[2]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src/main'
make[1]: *** [R] Error 1
make[1]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src'
make: *** [R] Error 1


Jason Tan
Senior Computer Support Officer
Western Australian Supercomputer Program (WASP)
The University of Western Australia
M024
35 Stirling Highway
CRAWLEY WA 6009
Ph: +618 64888742
Fax: +618 6488 8088
Email: [EMAIL PROTECTED]
Web: www.wasp.uwa.edu.au

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] error from running WinBUGS in R

2008-12-08 Thread Anamika Chaudhuri
Has anyone ever seen an error like this from running WinBUGS in R?
> modelCompile(numChains=2) # compile model with 2 chains
error for node p[3421] of type GraphLogit.Node node vector contains undefined
elements
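A hedged sketch of where one might look first, assuming the NAUSEA/SITEID data and n.sites from the earlier post are still in the workspace: an "undefined elements" message for p[3421] usually means something feeding logit(p[3421]) is missing or out of range.

i <- 3421
NAUSEA[i]                                  # NA here leaves the node undefined
SITEID[i]                                  # NA, or a value outside 1:n.sites, breaks alpha[SITEID[i]]
sum(is.na(SITEID)); sum(SITEID < 1 | SITEID > n.sites)   # the same checks for the whole vector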


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] forestplot and x axis scale

2008-12-08 Thread Pam Murnane
Hello R users,

I would like to create several forestplots with the same X axis, so,
if you were to look at the plots lined up, all the X axes would be
identical (and the different plots could be compared).  Here is one
version of code I've used:

mytk10<-c(0.1, 0.5, 1, 2, 5, 10)
pdf(file = "myfile.pdf",
 pointsize = 7, paper="letter", width=6, height=9)
forestplot(newcite,or,lcl,ucl,zero=0, graphwidth = unit(1.2,"inches"),
   clip=c(log(0.1),log(10)), xlog=TRUE, xticks=mytk10, xlab="Odds Ratio",
   col=meta.colors(box="darkblue",line="darkblue",zero="grey50"))
title(main = list("My title", col="darkblue", font=2))
dev.off()

--> I have changed the width of the pdf output and/or the graphwidth
specified in the forestplot function -- and depending on the length of
the text/table descriptions (in the matrix "newcite"), the X axis will
vary (when the text is long, the axis is shorter).  I tried fixing the
axis at a relatively small size (using graphwidth), but it would still
be smaller when I was using data with long "newcite" text.  Do I need
to fix the amount of text for display within "newcite"?
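A hedged sketch, not verified against forestplot itself: forcing the "newcite" labels to a fixed number of characters, so that the space left for the graph region is roughly the same in every plot (truncation does most of the work; 30 is an arbitrary width, and with a proportional font the result will still vary slightly):

newcite2 <- apply(newcite, c(1, 2), function(s)
                  if (is.na(s)) s else formatC(strtrim(s, 30), width = 30, flag = "-"))

newcite2 can then be passed to forestplot() in place of newcite.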

--> A second and less essential question:  I've used mytk10 for the
axis tick marks/labels - I'd prefer no decimal points for 1, 2, 5, and 10
- is there any way to adjust this?

Thanks in advance for any assistance!

Pam

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to add accuracy, sensitivity, specificity to logistic regression output?

2008-12-08 Thread pufftissue pufftissue
Hi,

Is there a way when doing logistic regression for the output to spit out
accuracy, sensitivity, and specificity?

I would like to know these basic measures for my model.

Thanks!
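Not as a single built-in summary, as far as I know, but the three measures are easy to compute from a fitted glm.  A minimal sketch, assuming a 0/1 outcome y, a predictor x, and the conventional 0.5 cutoff (y and x are placeholders for your own variables):

fit  <- glm(y ~ x, family = binomial)
pred <- factor(as.numeric(fitted(fit) > 0.5), levels = c(0, 1))   # predicted class at cutoff 0.5
obs  <- factor(y, levels = c(0, 1))
tab  <- table(observed = obs, predicted = pred)                   # 2 x 2 confusion matrix
accuracy    <- sum(diag(tab)) / sum(tab)
sensitivity <- tab["1", "1"] / sum(tab["1", ])   # true positives / all observed 1s
specificity <- tab["0", "0"] / sum(tab["0", ])   # true negatives / all observed 0s

Packages such as ROCR generalise this to a range of cutoffs if ROC-style summaries are needed.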


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] chron - when seconds data not included

2008-12-08 Thread Tubin

I have date and time data which looks like this:

  [,1] [,2]   
 [1,] "7/1/08" "9:19" 
 [2,] "7/1/08" "9:58" 
 [3,] "7/7/08" "15:47"
 [4,] "7/8/08" "10:03"
 [5,] "7/8/08" "10:32"
 [6,] "7/8/08" "15:22"
 [7,] "7/8/08" "15:27"
 [8,] "7/8/08" "15:40"
 [9,] "7/9/08" "10:25"
[10,] "7/9/08" "10:27"

I would like to use chron on it, so that I can calculate intervals in time.

I can't seem to get chron to accept the time format that doesn't include
seconds.  Do I have to go through and append :00 on every line in order to
use chron?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] chron - when seconds data not included

2008-12-08 Thread Gabor Grothendieck
On Mon, Dec 8, 2008 at 11:52 PM, Tubin <[EMAIL PROTECTED]> wrote:
>
> I have date and time data which looks like this:
>
>  [,1] [,2]
>  [1,] "7/1/08" "9:19"
>  [2,] "7/1/08" "9:58"
>  [3,] "7/7/08" "15:47"
>  [4,] "7/8/08" "10:03"
>  [5,] "7/8/08" "10:32"
>  [6,] "7/8/08" "15:22"
>  [7,] "7/8/08" "15:27"
>  [8,] "7/8/08" "15:40"
>  [9,] "7/9/08" "10:25"
> [10,] "7/9/08" "10:27"
>
> I would like to use chron on it, so that I can calculate intervals in time.
>
> I can't seem to get chron to accept the time format that doesn't include
> seconds.  Do I have to go through and append :00 on every line in order to
> use chron?



That's one way:

m <- matrix( c("7/1/08","9:19",
  "7/1/08","9:58",
  "7/7/08","15:47",
  "7/8/08","10:03",
  "7/8/08","10:32",
  "7/8/08","15:22",
  "7/8/08","15:27",
  "7/8/08","15:40",
  "7/9/08","10:25",
  "7/9/08","10:27"), nc = 2, byrow = TRUE)

chron(m[,1], paste(m[,2], 0, sep = ":"))

# another is to use as.chron

as.chron(apply(m, 1, paste, collapse = " "), "%m/%d/%y %H:%M")
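And, following up on the original question about intervals: once the values are chron objects, differences can be taken directly (a small usage sketch building on the first construction above):

tt <- chron(m[,1], paste(m[,2], 0, sep = ":"))
diff(tt)                          # successive gaps, reported in days
as.numeric(diff(tt)) * 24 * 60    # the same gaps, in minutes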

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Data Analysis Functions in R

2008-12-08 Thread Feanor22

Hi experts of R,

Are there any functions in R to test a univariate series for long-memory
effects, structural breaks and time reversibility?
I've found tests for ARCH effects (ArchTest) and for normality (shapiro.test,
ks.test against a simulated normal sample, and lillie.test), but not for the
ones mentioned above.
Where can I find a comprehensive list of functions available by type?

Thank you

Renato Costa
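A hedged pointer rather than a definitive answer -- these are functions one might try first, assuming the fracdiff and strucchange packages are installed and x is the series; for functions grouped by topic, the CRAN Task Views (e.g. Econometrics) are the closest thing to a comprehensive list:

library(fracdiff)                        # long memory: fractional-differencing parameter d
fd <- fracdiff(x)                        # an estimate of d well away from 0 suggests long memory
library(strucchange)                     # structural breaks
ocus <- efp(x ~ 1, type = "OLS-CUSUM")   # empirical fluctuation process for a change in mean
sctest(ocus)                             # significance test for a structural change
bp <- breakpoints(x ~ 1)                 # estimated break dates
# Time reversibility: no standard function that I am aware of.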

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] R and Scheme

2008-12-08 Thread Peter Dalgaard

Luke Tierney wrote:


R does not have macros.


Those are related -- because of lazy evaluation, macros are not needed
to achieve semantic goals (see for example tryCatch).  Being
able to define friendlier syntax would sometimes be nice though (see
tryCatch again).


Also for some practical purposes. For instance, it is pretty hard to 
loop over a set of variables and get the output labeled with the 
variable names


for (y in list(systbt, diastbt)) plot (age, y)

will (of course) plot both with a y axis labeled "y". A macro version of 
"for" could potentially create the expansion


plot(age, systbt)
plot(age, diastbt)

and evaluate the result. (Presumably, this is already possible with a 
bit of sufficiently contorted code, but there's no user-friendly syntax 
for it that I know of.)
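For the record, one version of the "sufficiently contorted code": building each call with bquote() so the variable name survives into the axis label (systbt/diastbt as in the hypothetical example above):

for (nm in c("systbt", "diastbt"))
    eval(bquote(plot(age, .(as.name(nm)))))
# each constructed call is literally plot(age, systbt) etc., so the default
# ylab picks up the variable name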



In R, most data types (including numeric vectors) do not have a standard
external representation which can be read back in without evaluation.


The default print form is not readable in this sense but dput is
available for this purpose.


I think the point is that dput() is not always equivalent to the object 
- parsing dput output gives a different object from the original. 
Numeric vectors of length > 1 are the most obvious case, "expression" 
objects another (those can get terribly confusing at times).


We currently have this sort of confusion

> e <- substitute(x+a,list(a=c(1,2,3)))
> e2 <- parse(text=deparse(e))[[1]]
> e2
x + c(1, 2, 3)
> e[[3]]
[1] 1 2 3
> e2[[3]]
c(1, 2, 3)

> dput(e)
x + c(1, 2, 3)
> dput(e2)
x + c(1, 2, 3)

That is, deparse/reparse of an object can lead to a _different_ object 
that gets dput()'ed indistinguishably.


I have occasionally been wondering whether it would be a good idea to 
have some sort of syntax primitive requesting parse-time evaluation. E.g.


quote(x + .const(c(1,2,3)))

would be equivalent to

substitute(x+a,list(a=c(1,2,3)))




--
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] package "wmtsa": wavCWTPeaks error

2008-12-08 Thread mauede
I keep getting the following error when I look for minima in the series:

> aa.peak <- wavCWTPeaks (aa.tree)
Error in `row.names<-.data.frame`(`*tmp*`, value = c("1", "0")) :
  invalid 'row.names' length

How can I work it around ?

Thank you.

Regards,
Maura


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] package "wmtsa": wavCWTPeaks error

2008-12-08 Thread stephen sefick
Are the names of the rows the same as the time series that you are
using?  I know that I am not being that helpful, but this seems like a
mismatch in the time series object.  Look at
length(rownames(your.data))
length(your.data[,1])

Again, it is always helpful to have reproducible code.

On Tue, Dec 9, 2008 at 1:39 AM,  <[EMAIL PROTECTED]> wrote:
> I keep getting the following error when I look for minima in the series:
>
>> aa.peak <- wavCWTPeaks (aa.tree)
> Error in `row.names<-.data.frame`(`*tmp*`, value = c("1", "0")) :
>  invalid 'row.names' length
>
> How can I work it around ?
>
> Thank you.
>
> Regards,
> Maura
>
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>



-- 
Stephen Sefick

Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods.  We are mammals, and have not exhausted the
annoying little problems of being mammals.

-K. Mullis

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] errors with compilation

2008-12-08 Thread Prof Brian Ripley

Please try the R-patched version of R (see the posting guide or the FAQ).

That seems to be about an error in the R 2.8.0 sources that was corrected 
in October, and only happens if you do not ask for X11 support (which most 
R users need, including it seems all the pre-release testers).


On Tue, 9 Dec 2008, Jason Tan wrote:


Hi,

i'm trying to compile R on a Cray XT3 using pgi/7.2.1 - CNL (compute node 
linux)

The R version is 2.8.0

this is the option
-enable-R-static-lib=yes
--disable-R-shlib

/* The latter is the default */


CPICFLAGS=fpic
FPICFLAGS=fpic
CXXPICFLAGS=fpic
SHLIB_LDFLAGS=shared
--with-x=no
SHLIB_CXXLDFLAGS=shared
--disable-BLAS-shlib


Do you really need to set all these?  And are you sure you know the 
correct values?  ('-shared' is standard on Linux.)



CFLAGS="-g -O2 -Kieee"
FFLAGS="-g -O2 -Kieee"
CXXFLAGS="-g -O2 -Kieee"
FCFLAGS="-g -O2 -Kieee"
CC=cc
F77=ftn
CXX=CC
FC=ftn

R is now configured for x86_64-unknown-linux-gnu

Source directory:  .
Installation directory:/lus/nid00036/jasont/R

C compiler:cc  -g -O2 -Kieee
Fortran 77 compiler:   ftn  -g -O2 -Kieee

C++ compiler:  CC  -g -O2 -Kieee
Fortran 90/95 compiler:ftn -g -O2 -Kieee
Obj-C compiler:  gcc -g -O2

Interfaces supported:
External libraries:readline
Additional capabilities:   PNG, JPEG, iconv, MBCS, NLS
Options enabled:   static R library, R profiling, Java

Recommended packages:  yes

The error is :

cc -I../../src/extra/zlib -I../../src/extra/bzip2 -I../../src/extra/pcre  -I. 
-I../../src/include -I../../src/include -I/usr/local/include -DHAVE_CONFIG_H 
-g -O2 -Kieee -c pcre.c -o pcre.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h: 108)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h: 109)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h: 110)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h: 111)
PGC/x86-64 Linux 7.2-1: compilation completed with warnings
cc -I../../src/extra/zlib -I../../src/extra/bzip2 -I../../src/extra/pcre  -I. 
-I../../src/include -I../../src/include -I/usr/local/include -DHAVE_CONFIG_H 
-g -O2 -Kieee -c platform.c -o platform.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
PGC-S-0037-Syntax error: Recovery attempted by deleting identifier FALSE 
(platform.c: 1657)

PGC-S-0094-Illegal type conversion required (platform.c: 1661)
PGC/x86-64 Linux 7.2-1: compilation completed with severe errors
make[3]: *** [platform.o] Error 2
make[3]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src/main'
make[2]: *** [R] Error 2
make[2]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src/main'
make[1]: *** [R] Error 1
make[1]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src'
make: *** [R] Error 1


Jason Tan
Senior Computer Support Officer
Western Australian Supercomputer Program (WASP)
The University of Western Australia
M024
35 Stirling Highway
CRAWLEY WA 6009
Ph: +618 64888742
Fax: +618 6488 8088
Email: [EMAIL PROTECTED]
Web: www.wasp.uwa.edu.au

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


--
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] errors with compilation

2008-12-08 Thread Jason Tan

Thanks. I noticed a syntax error: a missing semicolon.

Any idea about the error below?

cc -I. -I../../../src/include -I../../../src/include -I/usr/local/ 
include -DHAVE_CONFIG_H   -fPIC  -g -O2  -c Rsock.c -o Rsock.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
cc -I. -I../../../src/include -I../../../src/include -I/usr/local/ 
include -DHAVE_CONFIG_H   -fPIC  -g -O2  -c internet.c -o internet.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
cc -I. -I../../../src/include -I../../../src/include -I/usr/local/ 
include -DHAVE_CONFIG_H   -fPIC  -g -O2  -c nanoftp.c -o nanoftp.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
cc -I. -I../../../src/include -I../../../src/include -I/usr/local/ 
include -DHAVE_CONFIG_H   -fPIC  -g -O2  -c nanohttp.c -o nanohttp.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
cc -I. -I../../../src/include -I../../../src/include -I/usr/local/ 
include -DHAVE_CONFIG_H   -fPIC  -g -O2  -c sock.c -o sock.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
cc -I. -I../../../src/include -I../../../src/include -I/usr/local/ 
include -DHAVE_CONFIG_H   -fPIC  -g -O2  -c sockconn.c -o sockconn.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
cc -shared -fPIC  -L/usr/local/lib64 -o internet.so Rsock.o internet.o  
nanoftp.o nanohttp.o sock.o sockconn.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
/usr/bin/ld: /usr/lib64/libpthread.a(ptw-fcntl.o): relocation  
R_X86_64_32 against `a local symbol' can not be used when making a  
shared object; recompile with -fPIC

/usr/lib64/libpthread.a: could not read symbols: Bad value
make[4]: *** [internet.so] Error 2

jason

On 09/12/2008, at 4:50 PM, Prof Brian Ripley wrote:

Please try the R-patched version of R (see the posting guide or the  
FAQ).


That seems to be about an error in the R 2.8.0 sources that was  
corrected in October, and only happens if you do not ask for X11  
support (which most R users need, including it seems all the pre- 
release testers).


On Tue, 9 Dec 2008, Jason Tan wrote:


Hi,

i'm trying to compile R on a Cray XT3 using pgi/7.2.1 - CNL  
(compute node linux)

The R version is 2.8.0

this is the option
-enable-R-static-lib=yes
--disable-R-shlib

/* The latter is the default */


CPICFLAGS=fpic
FPICFLAGS=fpic
CXXPICFLAGS=fpic
SHLIB_LDFLAGS=shared
--with-x=no
SHLIB_CXXLDFLAGS=shared
--disable-BLAS-shlib


Do you really need to set all these?  And are you sure you know the  
correct values?  ('-shared' is standard on Linux.)



CFLAGS="-g -O2 -Kieee"
FFLAGS="-g -O2 -Kieee"
CXXFLAGS="-g -O2 -Kieee"
FCFLAGS="-g -O2 -Kieee"
CC=cc
F77=ftn
CXX=CC
FC=ftn

R is now configured for x86_64-unknown-linux-gnu

Source directory:  .
Installation directory:/lus/nid00036/jasont/R

C compiler:cc  -g -O2 -Kieee
Fortran 77 compiler:   ftn  -g -O2 -Kieee

C++ compiler:  CC  -g -O2 -Kieee
Fortran 90/95 compiler:ftn -g -O2 -Kieee
Obj-C compiler:  gcc -g -O2

Interfaces supported:
External libraries:readline
Additional capabilities:   PNG, JPEG, iconv, MBCS, NLS
Options enabled:   static R library, R profiling, Java

Recommended packages:  yes

The error is :

cc -I../../src/extra/zlib -I../../src/extra/bzip2 -I../../src/extra/ 
pcre  -I. -I../../src/include -I../../src/include -I/usr/local/ 
include -DHAVE_CONFIG_H -g -O2 -Kieee -c pcre.c -o pcre.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h:  
108)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h:  
109)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h:  
110)
PGC-W-0155-64-bit integral value truncated  (/usr/include/wctype.h:  
111)

PGC/x86-64 Linux 7.2-1: compilation completed with warnings
cc -I../../src/extra/zlib -I../../src/extra/bzip2 -I../../src/extra/ 
pcre  -I. -I../../src/include -I../../src/include -I/usr/local/ 
include -DHAVE_CONFIG_H -g -O2 -Kieee -c platform.c -o platform.o

/opt/cray/xt-asyncpe/1.2/bin/cc: INFO: linux target is being used
PGC-S-0037-Syntax error: Recovery attempted by deleting identifier  
FALSE (platform.c: 1657)

PGC-S-0094-Illegal type conversion required (platform.c: 1661)
PGC/x86-64 Linux 7.2-1: compilation completed with severe errors
make[3]: *** [platform.o] Error 2
make[3]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src/main'
make[2]: *** [R] Error 2
make[2]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src/main'
make[1]: *** [R] Error 1
make[1]: Leaving directory `/lus/nid00036/jasont/R-2.8.0/src'
make: *** [R] Error 1


Jason Tan
Senior Computer Support Officer
Western Australian Supercomputer Program (WASP)
The University of Western Australia
M024
35 Stirling Highway
CRAWLEY WA 6009
Ph: +618 64888742
Fax: +618 6488 8088
Email: [EMAIL PROTECTED]
Web: www.wasp.uwa.edu.au