Re: [R] Modifying a package name / Build from sources (rpart)

2011-02-14 Thread Simon Knos
On Thu, Feb 10, 2011 at 8:43 PM, Marc Schwartz  wrote:
>
> On Feb 10, 2011, at 12:13 PM, Ben Bolker wrote:
[...]

Dear Ben, Dear Marc


Thank you very much!


Best,

Simon

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] conditional value assignment

2011-02-14 Thread Frank Tamborello
Dear R-Help,

I am trying to compute a new variable, let's call it "target cannon orientation 
(tco)", based conditionally on the old variables "TargetColor," "CannonOriB," and 
"CannonOriR." For every case in the data set, if TargetColor is "B" then I want 
tco to equal that case's value of CannonOriB, else CannonOriR. I've tried 
writing for loops and functions that I can feed to sapply. I suspect there must 
be a simple solution, but I cannot seem to get either incantation to perform the 
assignment. What would be a good way to do this?

Example data:
"TargetColor.1.18." "CannonOriB.1.18."  "CannonOriR.1.18."
"B" 5   3
"R" 5   3


Example assignment of tco
"tco"
5
3

Thanks much!

Frank Tamborello, PhD
W. M. Keck Postdoctoral Fellow
School of Biomedical Informatics
University of Texas Health Science Center, Houston
https://xfiles.uth.tmc.edu/Users/ftamborello/public/index.html

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rjava does not install

2011-02-14 Thread Orvalho Augusto
Sorry for this delay. I have installed rjava on Fedora, CentOS and Debians.

The secret is to install Java and configure the Java variables. The easiest
way for R is:
R CMD javareconf

It will detect your Java environment and preconfigure R for you.

Then run: R CMD INSTALL rJava etc. as you did.

Caveman


On Tue, Feb 8, 2011 at 9:05 AM, Prof Brian Ripley wrote:

> This isn't the right place (rJava has its own support lists), but
>
> - that is not the Java package installed on any version of Fedora I have
> seen, and you might want to try the native openjdk version.
>
> - the JAVA_HOME used by the build is not what you show, as it is looking in
> /opt/jre1.6.0_22/include (and needs to look in the JDK).
>
> I have no idea what about your setup caused the discrepancy: please seek
> help from your local Linux support (or use rJava's support lists).
>
>
> On Mon, 7 Feb 2011, servet cizmeli wrote:
>
>  I am on a Fedora server on which I do not have root privileges. I am trying to
>> locally install rJava... Here are my steps:
>>
>> $uname -a
>> Linux 2.6.18-194.17.4.el5 #1 SMP Mon Oct 25 15:50:53 EDT 2010 x86_64
>> x86_64 x86_64 GNU/Linux
>>
>> $ java -version
>> java version "1.6.0_22"
>> Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
>> Java HotSpot(TM) 64-Bit Server VM (build 17.1-b03, mixed mode)
>>
>> $ echo $JAVA_HOME
>> /opt/jdk1.6.0_22/
>>
>> $ R CMD javareconf -e
>> Java interpreter : /opt/jdk1.6.0_22//jre/bin/java
>> Java version : 1.6.0_22
>> Java home path : /opt/jdk1.6.0_22/
>> Java compiler : /opt/jdk1.6.0_22//bin/javac
>> Java headers gen.: /opt/jdk1.6.0_22//bin/javah
>> Java archive tool: /opt/jdk1.6.0_22//bin/jar
>> Java library path:
>> $(JAVA_HOME)jre/lib/amd64/server:$(JAVA_HOME)jre/lib/amd64:$(JAVA_HOME)jre/../lib/amd64::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
>> JNI linker flags : -L$(JAVA_HOME)jre/lib/amd64/server
>> -L$(JAVA_HOME)jre/lib/amd64 -L$(JAVA_HOME)jre/../lib/amd64 -L
>> -L/usr/java/packages/lib/amd64 -L/usr/lib64 -L/lib64 -L/lib -L/usr/lib -ljvm
>> JNI cpp flags : -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux
>>
>> The following Java variables have been exported:
>> JAVA_HOME JAVA JAVAC JAVAH JAR JAVA_LIBS JAVA_CPPFLAGS
>> JAVA_LD_LIBRARY_PATH
>>
>> And the installation halts with the following error (please see below for
>> the details):
>> rJava.h:19:17: error: jni.h: No such file or directory
>>
>> I would very much appreciate your kind help
>> Servet
>>
>>
>>
>> install.packages("rJava",dep=T)
>> Installing package(s) into
>> ‘/home/acizmeli/R/x86_64-redhat-linux-gnu-library/2.12’
>> (as ‘lib’ is unspecified)
>> --- Please select a CRAN mirror for use in this session ---
>> Loading Tcl/Tk interface ... done
>> trying URL 'http://cran.skazkaforyou.com/src/contrib/rJava_0.8-8.tar.gz'
>> Content type 'application/x-gzip' length 522057 bytes (509 Kb)
>> opened URL
>> ==
>> downloaded 509 Kb
>>
>> * installing *source* package ‘rJava’ ...
>> checking for gcc... gcc -m64 -std=gnu99
>> checking for C compiler default output file name... a.out
>> checking whether the C compiler works... yes
>> checking whether we are cross compiling... no
>> checking for suffix of executables...
>> checking for suffix of object files... o
>> checking whether we are using the GNU C compiler... yes
>> checking whether gcc -m64 -std=gnu99 accepts -g... yes
>> checking for gcc -m64 -std=gnu99 option to accept ISO C89... none needed
>> checking how to run the C preprocessor... gcc -m64 -std=gnu99 -E
>> checking for grep that handles long lines and -e... /bin/grep
>> checking for egrep... /bin/grep -E
>> checking for ANSI C header files... yes
>> checking for sys/wait.h that is POSIX.1 compatible... yes
>> checking for sys/types.h... yes
>> checking for sys/stat.h... yes
>> checking for stdlib.h... yes
>> checking for string.h... yes
>> checking for memory.h... yes
>> checking for strings.h... yes
>> checking for inttypes.h... yes
>> checking for stdint.h... yes
>> checking for unistd.h... yes
>> checking for string.h... (cached) yes
>> checking sys/time.h usability... yes
>> checking sys/time.h presence... yes
>> checking for sys/time.h... yes
>> checking for unistd.h... (cached) yes
>> checking for an ANSI C-conforming const... yes
>> checking whether time.h and sys/time.h may both be included... yes
>> configure: checking whether gcc -m64 -std=gnu99 supports static inline...
>> yes
>> checking whether setjmp.h is POSIX.1 compatible... yes
>> checking whether sigsetjmp is declared... yes
>> checking whether siglongjmp is declared... yes
>> checking Java support in R... present:
>> interpreter : '/opt/jdk1.6.0_22//jre/bin/java'
>> archiver : '/opt/jdk1.6.0_22//bin/jar'
>> compiler : '/opt/jdk1.6.0_22//bin/javac'
>> header prep.: '/opt/jdk1.6.0_22//bin/javah'
>> cpp flags : '-I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux'
>> java libs : '-L$(JAVA_HOME)jre/lib/amd64/server
>> -L$(JAVA_HOME)jre/lib/amd64 -L$(JAVA_HOME)jre/..

Re: [R] conditional value assignment

2011-02-14 Thread Dennis Murphy
Hi:

Wouldn't ifelse() work here?

tco <- with(df, ifelse(TargetColor == 'B', CannonOriB, CannonOriR))

ifelse() is vectorized, so there should be no need for a loop.

Test:
> df <- data.frame(TargetColor = c('B', 'R'), CannonOriB = c(5, 5),
+  CannonOriR = c(3, 3), stringsAsFactors = FALSE)
> with(df, ifelse(TargetColor == 'B', CannonOriB, CannonOriR))
[1] 5 3

HTH,
Dennis

On Mon, Feb 14, 2011 at 1:02 AM, Frank Tamborello <
franklin.tambore...@uth.tmc.edu> wrote:

> Dear R-Help,
>
> I am trying to compute a new variable, let's call it "target cannon
> orientation (tco)" based conditionally on old variables, "TargetColor,"
> "CannonOriB," and "CannonOriR." For every case in the data set, if
> TargetColor is "B" then I want tco to equal the value for that case of
> CannonOriB, else CannonOriR. I've tried writing for loops and functions that
> I can feed to sapply. I suspect there must be a simple solution but I cannot
> seem to get either incantation to perform the assignment. What would be a
> good way to do this?
>
> Example data:
> "TargetColor.1.18." "CannonOriB.1.18."  "CannonOriR.1.18."
> "B" 5   3
> "R" 5   3
>
>
> Example assignment of tco
> "tco"
> 5
> 3
>
> Thanks much!
>
> Frank Tamborello, PhD
> W. M. Keck Postdoctoral Fellow
> School of Biomedical Informatics
> University of Texas Health Science Center, Houston
> https://xfiles.uth.tmc.edu/Users/ftamborello/public/index.html
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] When is *interactive* data visualization useful to use?

2011-02-14 Thread Rainer M Krug
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/11/2011 08:21 PM, Claudia Beleites wrote:
> Dear Tal, dear list,
> 
> I think the importance of interactive graphics has a lot to do with how
> visual your scientific discipline works. I'm a spectroscopist, and I think
> we are very visually oriented: if I think of a spectrum I mentally see a
> graph.
> 
> So for that kind of work, I need a lot of interaction (type: plot,
> change a bit, plot again), e.g.
> One example is the removal of spikes from Raman spectra (caused e.g. by
> cosmic rays hitting the detector). It is fairly easy to compute a list
> of suspicious signals. It is already much more complicated to find the
> actual beginning and end of the spike. And it is really difficult not to
> have false positives by some automatic procedure, because the spectra
> can look very different for different samples. It would just take me far
> longer to find a computational description of what is a spike than
> interactively accepting/rejecting the automatically marked suspicions.
> Even though it feels like slave work ;-)
> 
> Roughly the same applies for the choice of pre-processing like baseline
> correction. A number of different physical causes can produce different
> kinds of baselines, and usually you don't know which process contributes
> to what extent. In practice, experience suggests a method, I apply it
> and look whether the result looks as expected. I'm not aware of any
> performance measure that would indicate success here.
> 
> The next point where interaction is needed pops up as my data has e.g.
> spatial and spectral dimensions. So do the models usually: e.g. in a
> PCA, the loadings would usually capture the spectroscopic direction,
> whereas the scores belong to the spatial domain. So I have "connected"
> graphs: the spatial distribution (intensity map, score map, etc.), and
> the spectra (or loadings).
> As soon as I have such connections I wish for interactive visualization:
> I go back and forth between the plots: what is the spectrum that belongs
> to this region of the map? Where on the sample are high intensities of
> this band? What is the substance behind that: if it is x, the
> intensities at that other spectral band should correlate. And then I
> want to compare this to the scatterplot (pairs plot of the PCA score) or
> to a dendrogram of HCA...
> 
> Also, exploration is not just prerequisite for models, but it frequently
> is already the very proper scientific work (particularly in basic
> science). The more so, if you include exploring the models: Now, which
> of the bands are actually used by my predictive models? Which samples do
> get their predictions because of which spectral feature?
> And, the "statistical outliers" may very well be just the interesting
> part of the sample. And the outlier statistics cannot interpret the
> data in terms of interesting ./. crap.
> 
> For presentation* of results, I personally think that most of the time a
> careful selection of static graphs is much better than live interaction.
> *The thing where you talk to an audience far away from your work
> computer. As opposed to sitting down with your client/colleague and
> analysing the data together.
> 
>> It could be argued that the interactive part is good for exploring (For
>> example) a different behavior of different groups/clusters in the
>> data. But
>> when (in practice) I approached such situation, what I tended to do
>> was to
>> run the relevant statistical procedures (and post-hoc tests)
> As long as the relevant measure exists, sure.
> Yet as a non-statistician, my work is focused on the physical/chemical
> interpretation. Summary statistics are one set of tools for me, and
> interactive visualisation is another set of tools (overlapping though).
> 
> I may want to subtract the influence of the overall unchanging sample
> matrix (that would be the minimal intensity for each wavelength). But
> the minimum spectrum is too noisy. So I use a quantile. Which one?
> Depends on the data. I'll have a look at a series (say, the 2nd to 10th
> percentile) and decide trading off noise and whether any new signals
> appear. I honestly think there's nothing gained if I sit down and try to
> write a function scoring the similarity to the minimum spectrum and the
> noise level: the more so as it just shifts the need for a decision (How
> much noise outweighs what intensity of real signal being subtracted?).
> It is a decision I need to take. With number or with eye. And after all,
> my professional training was meant to enable me to take this decision,
> and I'm paid (also) for being able to take this decision efficiently
> (i.e. making a reasonably good choice within not too long a time).
> 
> After all, it may also have to do with a complaint a colleague from a
> computational data analysis group once had. He said the bad thing with
> us spectroscopists is that our problems are either so easy that there's
> no fun in solving them, or they are too 

Re: [R] When is *interactive* data visualization useful to use?

2011-02-14 Thread Rainer M Krug
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 02/14/2011 11:21 AM, Gesmann, Markus wrote:
> Hi Rainer,
> 
> You may want to look into the package googleVis, which provides an
> interface between the Google Visualisation API and R, see
> http://code.google.com/p/google-motion-charts-with-r/

True - I had forgotten about that one. It actually looks quite nice,
especially for time series.
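
For reference, a minimal sketch (untested here; it uses the demo data
bundled with googleVis, as in its vignette):

library(googleVis)
M <- gvisMotionChart(Fruits, idvar = "Fruit", timevar = "Year")
plot(M)  # opens the interactive motion chart in the browser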

Rainer

> 
> Regards,
> 
> Markus 
> 
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Rainer M Krug
> Sent: 14 February 2011 09:43
> To: Claudia Beleites
> Cc: R Help
> Subject: Re: [R] When is *interactive* data visualization useful to use?
> 
> On 02/11/2011 08:21 PM, Claudia Beleites wrote:
>> Dear Tal, dear list,
> 
>> I think the importance of interactive graphics has a lot to do with how 
>> visual your scientific discipline works. I'm a spectroscopist, and I 
>> think we are very visually oriented: if I think of a spectrum I 
>> mentally see a graph.
> 
>> So for that kind of work, I need a lot of interaction (type: plot, 
>> change a bit, plot again), e.g.
>> One example is the removal of spikes from Raman spectra (caused e.g. 
>> by cosmic rays hitting the detector). It is fairly easy to compute a 
>> list of suspicious signals. It is already much more complicated to 
>> find the actual beginning and end of the spike. And it is really 
>> difficult not to have false positives by some automatic procedure, 
>> because the spectra can look very different for different samples. It 
>> would just take me far longer to find a computational description of 
>> what is a spike than interactively accepting/rejecting the
> automatically marked suspicions.
>> Even though it feels like slave work ;-)
> 
>> Roughly the same applies for the choice of pre-processing like 
>> baseline correction. A number of different physical causes can produce
> 
>> different kinds of baselines, and usually you don't know which process
> 
>> contributes to what extent. In practice, experience suggests a method,
> 
>> I apply it and look whether the result looks as expected. I'm not 
>> aware of any performance measure that would indicate success here.
> 
>> The next point where interaction is needed pops up as my data has e.g.
>> spatial and spectral dimensions. So do the models usually: e.g. in a 
>> PCA, the loadings would usually capture the spectroscopic direction, 
>> whereas the scores belong to the spatial domain. So I have "connected"
>> graphs: the spatial distribution (intensity map, score map, etc.), and
> 
>> the spectra (or loadings).
>> As soon as I have such connections I wish for interactive
> visualization:
>> I go back and forth between the plots: what is the spectrum that 
>> belongs to this region of the map? Where on the sample are high 
>> intensities of this band? What is the substance behind that: if it is 
>> x, the intensities at that other spectral band should correlate. And 
>> then I want to compare this to the scatterplot (pairs plot of the PCA 
>> score) or to a dendrogram of HCA...
> 
>> Also, exploration is not just prerequisite for models, but it 
>> frequently is already the very proper scientific work (particularly in
> 
>> basic science). The more so, if you include exploring the models: Now,
> 
>> which of the bands are actually used by my predictive models? Which 
>> samples do get their predictions because of which spectral feature?
>> And, the "statistical outliers" may very well be just the interesting 
>> part of the sample. And the outlier statistics cannot interpret the 
>> data in terms of interesting ./. crap.
> 
>> For presentation* of results, I personally think that most of the time
> 
>> a careful selection of static graphs is much better than live
> interaction.
>> *The thing where you talk to an audience far away from your work 
>> computer. As opposed to sitting down with your client/colleague and 
>> analysing the data together.
> 
>>> It could be argued that the interactive part is good for exploring 
>>> (For
>>> example) a different behavior of different groups/clusters in the 
>>> data. But when (in practice) I approached such situation, what I 
>>> tended to do was to run the relevant statistical procedures (and 
>>> post-hoc tests)
>> As long as the relevant measure exists, sure.
>> Yet as a non-statistician, my work is focused on the physical/chemical
> 
>> interpretation. Summary statistics are one set of tools for me, and 
>> interactive visualisation is another set of tools (overlapping
> though).
> 
>> I may want to subtract the influence of the overall unchanging sample 
>> matrix (that would be the minimal intensity for each wavelength). But 
>> the minimum spectrum is too noisy. So I use a quantile. Which one?
>> Depends on the data. I'll have a look at a series (say, the 2nd to 
>> 10th
>> percentile) and decide trading off noise and whether any new signals 
>> appear. I honestly think there's nothing gained if I sit d

Re: [R] Adding labels into lattice's barchart

2011-02-14 Thread Deepayan Sarkar
On Wed, Feb 9, 2011 at 11:04 PM, Luca Meyer  wrote:
> *** APOLOGIZES FOR THOSE READING THE LIST THROUGH NABBLE THIS WAS ALREADY 
> POSTED THERE BUT NOT FORWARDED TO THE LIST FOR SOME UNKNOWN REASON ***
>
> I have a dataset that looks like:
>
> $ V1: factor with 4 levels
> $ V2: factor with 4 levels
> $ V3: factor with 2 levels
> $ V4: num (summing up to 100 within V3 levels)
> $ V5: num (nr of cases for each unique combination of V1*V2*V3 levels)
>
> Quite new to lattice - I've started reading Deepayan's book a few days ago - 
> I have written the following:
>
> barchart(V2 ~ V4 | V1,
>         data=d1,
>         groups=V3,
>         stack=TRUE,
>         auto.key= list(space="top"),
>         layout = c(1,4),
>         xlab=" "
>         )
>
> which works just fine as a stacked bar chart with bars adding up to 100%. Now 
> what I would like to see is the number of cases showing next to the 4 
> x-axis's labels - i.e. V2_L1, ... V2_L4.
>
> In other words now I see something like:
>
> *** V1_L1 ***
> V2_L4 AAAVVV
> V2_L3 AA
> V2_L2 AV
> V2_L1 AA
> *** V1_L2 ***
> V2_L4 AA
> V2_L3 AV
> etc...
>
> But what I am looking for is something like:
> *** V1_L1 ***
> V2_L4 (n=60) AAAVVV
> V2_L3 (n=10) AA
> V2_L2 (n=52) AV
> V2_L1 (n=15) AA
> *** V1_L2 ***
> V2_L4 (n=18) AA
> V2_L3 (n=74) AV
> etc...
>
> How can I do that? I have tried:
>
> V6 <- paste(V2," (n",V5,")")

What you really want is to compute the total sum of V5 per level of V2
(and add that to the labels of V2). There are many ways of doing so,
one is tapply().

In the absence of a reproducible example, here is an approximation:

tdf <- as.data.frame.table(apply(Titanic, c(1, 2, 4), sum))
names(tdf)[1:3] <- paste("V", 1:3, sep = "")

str(tdf)

barchart(V2 ~ Freq | V1, data=tdf, groups=V3, stack=TRUE)

with(tdf, tapply(Freq, V2, sum))

numByV2 <- with(tdf, tapply(Freq, V2, sum))

barchart(V2 ~ Freq | V1, data = tdf, groups = V3, stack=TRUE,
 ylim = sprintf("%s (n=%g)", names(numByV2), numByV2))

## or

levels(tdf$V2) <- sprintf("%s (n=%g)", levels(tdf$V2), numByV2)
barchart(V2 ~ Freq | V1, data=tdf, groups=V3, stack=TRUE)

-Deepayan

>
> but what i get when I run
>
> barchart(V6 ~ V4 | V1,
>         data=d1,
>         groups=V3,
>         stack=TRUE,
>         auto.key= list(space="top"),
>         layout = c(1,4),
>         xlab=" "
>         )
>
> is a bunch of empty bars due to the fact that the unique combinations have 
> risen.
>
> Any help would be appreciated.
>
> Thanks,
> Luca
>
> Mr. Luca Meyer
> www.lucameyer.com
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] (no subject)

2011-02-14 Thread Jannis
First of all I would advise you to use meaningful subjects when posting 
to the list.


My second point is only a vague guess, but if you use the exact code 
that you posted, there may be a bracket missing (the closing bracket 
of the jpeg() call). Might this produce your error?


Third, you should use [[]] instead of $ to access the elements of the 
data frame (see ?"$"). blith[[item]] should do the job; a corrected 
sketch of the loop is below.
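
A minimal sketch with those fixes applied (untested; it assumes 'data',
'alith', 'blith', etc. are data frames sharing a SiO2 column, and the
output file name pattern is made up):

for (item in names(data)) {
    # quote the format string and close the jpeg() call
    jpeg(filename = sprintf("plots/xy_%s.jpg", item))
    plot(data$SiO2, data[[item]], pch = "", ylab = item)
    # [[ works where $get(item) does not
    points(alith$SiO2, alith[[item]], pch = 21, col = "red")
    points(blith$SiO2, blith[[item]], pch = 21, col = "blue")
    dev.off()
}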



HTH
Jannis

On 02/14/2011 07:33 AM, Smith, Taylor Taran wrote:

Hey,

I am trying to graph a series of XY plots from a set of .csv files. Getting the 
csv files read in and the original plot set up is not a problem, but I cannot 
figure out a way to call get(iterator) within a separate dataset. A chunk of my 
code is below.


data<- read.csv(file= filename)
data2<- read.csv(file= filename2)

names<- names(data)
attach(data)

for (item in names) {
 jpeg(filename=sprintf(directory_%s.jpg, item)
 plot(SiO2, get(item), pch="", ylab=sprintf("%s", item))
 points(alith$SiO2, alith$get(item), pch=21, col="red")
 points(blith$SiO2, blith$get(item), pch=21, col="blue")
 points(clith$SiO2, clith$get(item), pch=21, col="green")
 points(dlith$SiO2, dlith$get(item), pch=21, col="orange")
 points(glith$SiO2, glith$get(item), pch=21, col="red4")
 points(hlith$SiO2, hlith$get(item), pch=21, col="purple")
 points(llith$SiO2, llith$get(item), pch=21)
 dev.off()
}

Ideally this would spit out hundreds of xy plots (and it does work if I plot them all in 
the same color by turning the pch="" off). However, when calling from a dataset 
with $, it will not let you use a function as the argument (the get(item) part gives an 
error). Is there any way around this?

Thanks

Taylor

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to overlay a map of US on image plot

2011-02-14 Thread Jannis
A first guess would be to have a look at ?map and change the 'add' 
argument to TRUE. This would overlay the map on the image plot that you 
produced before. It may be that you need to mess around with different 
projections etc., but I can not give you any advice on that without 
digging deep into the documentation.
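
A minimal sketch (untested; 'lon' and 'lat' stand for your coordinate
vectors sorted increasingly, and 'prec' for the matching precipitation
matrix):

library(fields)   # image.plot()
library(maps)     # map()
image.plot(lon, lat, prec)
map("usa", add = TRUE)   # draw the US outline over the existing plot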



HTH
Jannis

On 02/14/2011 03:14 AM, cassie jones wrote:

Dear all,

I have a data set which has latitude, longitude (not in increasing order)
and observed precipitation corresponding to those locations. I am using
image.plot command for that. But I want to overlay a map of US on that plot.
I know we can use map('usa') to draw it, but I dont know how can I overlay
it on the plot of the precipitation data. Can anyone please help me with it?


Thanks in advance.

Cassie.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot text sizing

2011-02-14 Thread Jannis

Trellis graphs can be a pain regarding their parameters ;-)

Try to run trellis.par.get() after you have produced the plots and try to 
figure out (by playing around with them) which parameter corresponds to 
your text size (I would guess one of the par.sub.text or par.main.text 
parameters). Use trellis.par.set() to set the corresponding value.
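
A hedged sketch (untested; in lattice the per-panel strip labels are
normally controlled by the "add.text" settings group, or per call via
the par.strip.text argument):

trellis.par.set(add.text = list(cex = 0.7))               # global setting
xyplot(closingDataXTS, par.strip.text = list(cex = 0.7))  # per call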


Good luck with trial and error!

HTH
Jannis

On 02/13/2011 08:59 PM, Nick Torenvliet wrote:

Hi all,

I'm using

xyplot(closingDataXTS)

To get a page with any number of seperate plots on it (as many plots as
columns in closingDataXTS).

Each plot is named according to colnames(closingDataXTS).

I would like to control the size of the text each plot name appears in.

I've seen a number of solutions for similar problems that point me in the
direction of trellis.par.get and trellis.par.set but have been unable to put
anything together that works.

Any ideas for me?

Nick

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Adding labels into lattice's barchart

2011-02-14 Thread Luca Meyer
Thanks Deepayan,

What you suggest is quite fine, but it provides the overall number of cases for 
the entire dataset split into V2 levels. 

What about if I need to show panel-specific values? For instance I want to 
show not the total number of Females but the total number of Females in 1st Class.

In other words, take your example and suppose I have:

barchart(V2 ~ Freq | V1, data = tdf, groups = V3, layout=c(1,4), stack=TRUE,
ylim = sprintf("%s (n=%g)", names(numByV2), numByV2))

and now what I would like to show is the result of

with(tdf, tapply(Freq, list(V2,V1), sum))

next to each stacked bar. 

In the previous example, I would need to show in the Crew panel Female (n=23), in 
the 3rd Class panel Female (n=196), etc...

Can I do that?

Thanks,
Luca



Il giorno 14/feb/2011, alle ore 11.43, Deepayan Sarkar ha scritto:

> On Wed, Feb 9, 2011 at 11:04 PM, Luca Meyer  wrote:
>> *** APOLOGIZES FOR THOSE READING THE LIST THROUGH NABBLE THIS WAS ALREADY 
>> POSTED THERE BUT NOT FORWARDED TO THE LIST FOR SOME UNKNOWN REASON ***
>> 
>> I have a dataset that looks like:
>> 
>> $ V1: factor with 4 levels
>> $ V2: factor with 4 levels
>> $ V3: factor with 2 levels
>> $ V4: num (summing up to 100 within V3 levels)
>> $ V5: num (nr of cases for each unique combination of V1*V2*V3 levels)
>> 
>> Quite new to lattice - I've started reading Deepayan's book a few days ago - 
>> I have written the following:
>> 
>> barchart(V2 ~ V4 | V1,
>> data=d1,
>> groups=V3,
>> stack=TRUE,
>> auto.key= list(space="top"),
>> layout = c(1,4),
>> xlab=" "
>> )
>> 
>> which works just fine as a stacked bar chart with bars adding up to 100%. 
>> Now what I would like to see is the number of cases showing next to the 4 
>> x-axis's labels - i.e. V2_L1, ... V2_L4.
>> 
>> In other words now I see something like:
>> 
>> *** V1_L1 ***
>> V2_L4 AAAVVV
>> V2_L3 AA
>> V2_L2 AV
>> V2_L1 AA
>> *** V1_L2 ***
>> V2_L4 AA
>> V2_L3 AV
>> etc...
>> 
>> But what I am looking for is something like:
>> *** V1_L1 ***
>> V2_L4 (n=60) AAAVVV
>> V2_L3 (n=10) AA
>> V2_L2 (n=52) AV
>> V2_L1 (n=15) AA
>> *** V1_L2 ***
>> V2_L4 (n=18) AA
>> V2_L3 (n=74) AV
>> etc...
>> 
>> How can I do that? I have tried:
>> 
>> V6 <- paste(V2," (n",V5,")")
> 
> What you really want is to compute the total sum of V5 per level of V2
> (and add that to the labels of V2). There are many ways of doing so,
> one is tapply().
> 
> In the absence of a reproducible example, here is an approximation:
> 
> tdf <- as.data.frame.table(apply(Titanic, c(1, 2, 4), sum))
> names(tdf)[1:3] <- paste("V", 1:3, sep = "")
> 
> str(tdf)
> 
> barchart(V2 ~ Freq | V1, data=tdf, groups=V3, stack=TRUE)
> 
> with(tdf, tapply(Freq, V2, sum))
> 
> numByV2 <- with(tdf, tapply(Freq, V2, sum))
> 
> barchart(V2 ~ Freq | V1, data = tdf, groups = V3, stack=TRUE,
> ylim = sprintf("%s (n=%g)", names(numByV2), numByV2))
> 
> ## or
> 
> levels(tdf$V2) <- sprintf("%s (n=%g)", levels(tdf$V2), numByV2)
> barchart(V2 ~ Freq | V1, data=tdf, groups=V3, stack=TRUE)
> 
> -Deepayan
> 
>> 
>> but what i get when I run
>> 
>> barchart(V6 ~ V4 | V1,
>> data=d1,
>> groups=V3,
>> stack=TRUE,
>> auto.key= list(space="top"),
>> layout = c(1,4),
>> xlab=" "
>> )
>> 
>> is a bunch of empty bars due to the fact that the unique combinations have 
>> risen.
>> 
>> Any help would be appreciated.
>> 
>> Thanks,
>> Luca
>> 
>> Mr. Luca Meyer
>> www.lucameyer.com
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>> 

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.




[R] problem in installing Rattle in R 2.12.1.0

2011-02-14 Thread taby gathoni

Hi all, 

Please help,

I am getting an error when I try installing rattle. 

Error in as.GType(type) : Cannot convert RGtkBuilder to GType

I have already installed 
gtk-2.12.9-win32-2  and gtk2-runtime-2.22.0-2010-10-21-ash

and restarted R as instructions require.

Please assist me


Taby

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] rjava does not install

2011-02-14 Thread Mike Marchywka



[ stupid hotmail still not mark original, top post. I can't figure out when it 
is kind
enough to do this and not, probably thinks I need to use html LOL ]

The point seems to be that you need jni.h, which apparently is only in the JDK, 
not the JRE. If you install "java" it isn't clear what this thing finds.
If you do something like "which -a java" and "which -a javac" you may see some 
issues.





Date: Mon, 14 Feb 2011 11:33:49 +0200
From: orvaq...@gmail.com
To: rip...@stats.ox.ac.uk
CC: r-help@r-project.org
Subject: Re: [R] rjava does not install


Sorry for this delay. I have installed rjava on Fedora, CentOS and Debians.

The secret is to install Java and configure the Java variables. The easiest
way for R is:
R CMD javareconf

It will detect your Java environment and preconfigure R for you.

Then run: R CMD INSTALL rJava etc. as you did.

Caveman


On Tue, Feb 8, 2011 at 9:05 AM, Prof Brian Ripley wrote:

> This isn't the right place (rJava has its own support lists), but
>
> - that is not the Java package installed on any version of Fedora I have
> seen, and you might want to try the native openjdk version.
>
> - the JAVA_HOME used by the build is not what you show, as it is looking in
> /opt/jre1.6.0_22/include (and needs to look in the JDK).
>
> I have no idea what about your setup caused the discrepancy: please seek
> help from your local Linux support (or use rJava's support lists).
>
>
> On Mon, 7 Feb 2011, servet cizmeli wrote:
>
>  I am on a Fedora server on which I do not have root privileges. I am trying to
>> locally install rJava... Here are my steps:
>>
  
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using filled.contour and contour functions together

2011-02-14 Thread Jannis
You should start reading the documentation of the functions you use. 
?filled.contour gives you the following:

...The output produced by filled.contour is actually a combination of 
two plots; one is the filled contour and one is the legend. Two separate 
coordinate systems are set up for these two plots, but they are only 
used internally - once the function has returned these coordinate 
systems are lost. If you want to annotate the main contour plot, for 
example to add points, you can specify graphics commands in the 
plot.axes argument. An example is given below. ...

So you could use the example from the documentation to set up your own 
coordinate system, or change the margins of the second plot so the two 
coordinate systems overlay (via par(mar = ...)). I would use contourplot() 
from package lattice instead. A sketch of the plot.axes approach follows.
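
An untested sketch reusing the poster's objects (xNE, yNE, zNE, myPal):

filled.contour(xNE, yNE, zNE,
               col = myPal(20), zlim = c(-20, 20),
               plot.axes = {
                   axis(1); axis(2)
                   # these draw inside the main panel's coordinate system
                   contour(xNE, yNE, zNE, lty = 3, add = TRUE)
                   contour(xNE, yNE, zNE, levels = 0, lwd = 1.5, add = TRUE)
               })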


HTH
Jannis

On 02/11/2011 10:56 AM, Xavier Bodin wrote:
> Dear R help contributors,
>
> I'd like to plot ground temperature with time on X-axis and depth on Y-axis
> on this datasets ( http://r.789695.n4.nabble.com/file/n3301033/NEdaily.csv
> NEdaily.csv ), and to do so I use the following commands:
>
>   library(RSEIS)
>
>   xNE <- seq(1, as.numeric(as.Date(max(NEdaily[[1]])) -
> as.Date(min(NEdaily[[1]]))), 1)
>   yNE<- rev(c(-0.3, -0.5, -0.7, -0.9, -1.1, -1.4, -1.7, -2, -2.5, -3, -4,
> -5, -7, -9, -10))
>   zNE<-
> mirror.matrix(as.matrix(NEdaily[1:(nrow(NEdaily)-1),2:length(NEdaily)]))
>
>   filled.contour(xNE,yNE,zNE
>   , col = myPal(20)
>   , zlim = c(-20,20)
>   , ylab = "Depth [m]",
>   , xlab = paste("Days since ", as.Date(min(NEdaily[[1]]), format
> ="%d.%m.%Y"))
>   )
>   contour(xNE,yNE,zNE, lty = 3, add = T)
>   contour(xNE,yNE,zNE, nlevels = 1, level = 0, add = T, lwd = 1.5)
>
> I get this graph ( http://r.789695.n4.nabble.com/file/n3301033/NEdaily.png
> NEdaily.png ) and don't understand why filled.contour and contour plots are
> not set on the same dimensions and why they don't exactly overlay. Does
> anyone have an idea and a solution ?!
>
> Thanks in advance,
>
> Xavier


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to improve the precison of this calculation?

2011-02-14 Thread Mike Marchywka



[ anyone know how to get hotmail to consistently mark original text? 
Now it's hanging my keyboard in firefox LOL ]

Anyway, I think I was the one advocating these approaches over things like
indefinite-length calculations, and I punted the R questions to others, but
I'm not really sure what you are asking here. It sounds like you are trying
to add a small number to a larger number and are concerned that nothing changes.

Normal floating point has some minimum number, I can't remember what it is
usually called, I'll use "d", such that 1+d==1. So if you keep adding
things that are too small they will all get dropped. There are probably
some numerical people here, but just offhand you could try ordering your
terms: say you have a series of binomial coefficients b(i); you
can start summing from the smallest one first, which should preserve important
sums. That is, if you want to add "d" to "1" say "n" times, do d*n+1
rather than repeatedly adding 1+d+d+d+... executed from left to right.
It may be easy to order your terms without evaluating them and to express
each b(i+1)=a(i+1)*b(i) or do other tricks, finding ways to avoid the issues
as much as possible. A quick illustration in R follows.
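
A quick R illustration of the effect (the threshold involved is the
machine epsilon):

.Machine$double.eps                # ~2.22e-16
1 + .Machine$double.eps/2 == 1     # TRUE: the small addend is silently lost
d <- 1e-17; n <- 1e5
Reduce(`+`, c(1, rep(d, n)))       # 1: each tiny term is dropped in turn
1 + d*n                            # 1 + 1e-12: scaling first preserves the sum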


ok, what I called "d" above is what they usually call the machine epsilon.



I was hoping to find something on the Intel site, since they have lots
of numerical optimization stuff, but if it is there it is buried in
everything else.

If you actually look at hard-core floating point code, you often
find comments about "if your compiler believes that addition and multiplication
commute or ..." whatever; it becomes obvious that people run into these
problems a lot. It is a nice way to turn an IIR audio filter into a tone
generator for a fire alarm LOL, you don't need huge numbers to see these
issues come up. AFAIK, most issues with implementing linear algebra are
related to these etc.



Date: Sat, 12 Feb 2011 15:31:22 +0800
From: zhaoxing...@yahoo.com.cn
To: r-h...@stat.math.ethz.ch
Subject: [R] how to improve the precison of this calculation?


Hello,

I want to order some calculation "result"s; there will be lots of 
"result"s that need to be calculated and ordered.
PS: the "result" is just an intermediate variable, and ordering them is 
the very aim.

# problem:
# For fixed NT and CT, and some pairs (c,n), order the pairs by the corresponding 
# result. c and n are both random variables.
# c and n are both random variable

CT<-6000#assignment to CT
NT<-29535210#assignment to NT

# formula for calculating the intermediate variable "result"; this is just a 
# picture of the calculation, which can't be implemented directly due to the precision problem

i<-0:(c-1)
vec<-choose(n,i)*choose(NT-n,CT-i)  #get the vector which need summation
result<-log(sum(vec))   #get the log of summation

# thanks to Petr, we have a solution
calsta <- function(c, n) {
i <- seq(from=0, length=c)
logx <- lchoose(NT-n, CT-i) + lchoose(n, i)
logmax <- max(logx)
logmax + log(sum(exp(logx - logmax)))
}

# now a new problem arises: in theory, the "result" of different (c,n) pairs 
# should almost always differ, so I can order them, but
> calsta(918,10)-calsta(718,10)
[1] 0
> calsta(718,9)-calsta(718,9)
[1] 0
> calsta(718,9)-calsta(918,10)
[1] 0
# As Eik pointed out in another post, "calsta constantly returns 57003.6 for 
c>38 (the summands in
# sum(exp(logx - logmax)) will become 0 for c>38)" (when n= 10,000)


I think there are two ways to solve this problem:

1. Reducing the calculation formula algebraically to get a more concise kernel for 
comparison. I think this is the perfect method, but I have failed many times; though 
this is not really the topic of the mailing list, I hope someone with strong 
algebraic skills could give me a surprise.

2. Through R skills, which is difficult for me.

PS: I don't need a perfect solution, just a better one, which could have higher 
discriminability than before.

Thank you in advance

Yours sincerely

ZhaoXing
Department of Health Statistics
West China School of Public Health
Sichuan University
No.17 Section 3, South Renmin Road
Chengdu, Sichuan 610041
P.R.China



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
  
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How can I slightly offset plots?

2011-02-14 Thread Carly Huitema
Dear R help contributers,

I have several x,y scatter plots and I would like to plot them
slightly offset, similar to:
http://www.mathworks.com/matlabcentral/fx_files/24368/2/content/style4.jpg

I've looked all over for this answer with no luck.
Just a function name or previous example would probably be enough for
me to figure it out.

Thank-you in advance,
Carly

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Adding labels into lattice's barchart

2011-02-14 Thread Deepayan Sarkar
On Mon, Feb 14, 2011 at 5:16 PM, Luca Meyer  wrote:
> Thanks Deepayan,
>
> What you suggest is quite fine, but it provides the overall number of cases for 
> the entire dataset split into V2 levels.
>
> What about if I need to show panel-specific values? For instance I want to 
> show not the total number of Females but the total number of Females in 1st 
> Class.
>
> In other words, take your example and suppose I have:
>
> barchart(V2 ~ Freq | V1, data = tdf, groups = V3, layout=c(1,4), stack=TRUE,
>        ylim = sprintf("%s (n=%g)", names(numByV2), numByV2))
>
> and now what I would like to show is the result of
>
> with(tdf, tapply(Freq, list(V2,V1), sum))
>
> next to each stacked bar.
>
> In the previous example, I would need to show in the Crew panel Female (n=23), 
> in the 3rd Class panel Female (n=196), etc...
>
> Can I do that?

Well, then you probably don't want to change the labels, but rather
put in the counts in each panel. Something like

barchart(V2 ~ Freq | V1, data=tdf, groups=V3, stack=TRUE,
         panel = function(x, y, ...) {
             n <- tapply(x, y, sum)
             panel.barchart(x, y, ...)
             panel.text(x = 0, y = sort(unique(as.numeric(y))),
                        srt = 90, adj = c(0.5, -0.2),
                        labels = sprintf("(n = %g)", n))
         })

If the numbers you want to add up are not the x or y variable, then you
need a little bit more work. You will need to pass in the count
variable separately and use 'subscripts' to get the proper subsets
(look for discussions of 'subscripts' in the documentation and/or
book). A rough sketch follows.
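
An untested sketch of that approach (the count column tdf$N is
hypothetical; 'counts' is simply passed through to the panel function):

barchart(V2 ~ Freq | V1, data = tdf, groups = V3, stack = TRUE,
         counts = tdf$N,
         panel = function(x, y, ..., counts, subscripts) {
             n <- tapply(counts[subscripts], y, sum)
             panel.barchart(x, y, ..., subscripts = subscripts)
             panel.text(x = 0, y = sort(unique(as.numeric(y))),
                        srt = 90, adj = c(0.5, -0.2),
                        labels = sprintf("(n = %g)", n))
         })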

-Deepayan

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How can I slightly offset plots?

2011-02-14 Thread Sarah Goslee
Carly,

Those aren't just slightly offset, that's a 3D plot. You might look into some of
the 3D plotting packages for R, maybe scatterplot3d?
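
For the look in your link, something like this might work (an untested
sketch with made-up data, using the series index as the depth axis):

library(scatterplot3d)
x <- rep(1:100, 3)
series <- rep(1:3, each = 100)   # which curve each point belongs to
y <- rnorm(300) + series
scatterplot3d(x, series, y, pch = 20, angle = 40)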

Sarah

On Mon, Feb 14, 2011 at 7:45 AM, Carly Huitema  wrote:
> Dear R help contributers,
>
> I have several x,y scatter plots and I would like to plot them
> slightly offset, similar to:
> http://www.mathworks.com/matlabcentral/fx_files/24368/2/content/style4.jpg
>
> I've looked all over for this answer with no luck.
> Just a function name or previous example would probably be enough for
> me to figure it out.
>
> Thank-you in advance,
> Carly
>


-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] problem in installing Rattle in R 2.12.1.0

2011-02-14 Thread Prof Brian Ripley

On Mon, 14 Feb 2011, taby gathoni wrote:



Hi all,

Please help,

I am getting an error when I try installing rattle.

Error in as.GType(type) : Cannot convert RGtkBuilder to GType


How?  From the sources?  Since you appear to be using Windows 
(unstated), from binary packages?  Which versions, from which 
repositories?



I have already installed
gtk-2.12.9-win32-2 and gtk2-runtime-2.22.0-2010-10-21-ash


Those two conflict: which is first in your path?

Assuming this is current R 2.12.1 (or you should have upgraded first 
according to the posting guide) and 32-bit R on Windows, uninstall 
gtk-2.12.9-win32 and packages RGtk2 and XML and start again.
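
In R, a minimal sketch of the package clean-up and reinstall (untested;
assumes a CRAN mirror is set):

remove.packages(c("RGtk2", "XML"))
install.packages(c("RGtk2", "XML", "rattle"), dependencies = TRUE)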



and restarted R as instructions require.

Please assist me


Please assist *us*, by following the posting guide!




Taby

[[alternative HTML version deleted]]




--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK            Fax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RCytoscape setPosition error

2011-02-14 Thread Martin Morgan
On 02/13/2011 11:28 AM, Fahim M wrote:
> Hi
> Can some one please point out where i am wrong.
> 
> I am trying to position set of nodes column-wise in cytoscape using
> RCytoscape
> AD
> BE
> CF

Hi Fahim -- please ask questions about Bioconductor packages on the
Bioconductor mailing list

http://bioconductor.org/help/mailing-list/

and include packageMaintainer('RCytoscape') in the post.

Martin

> 
> ---
> 
> g <- new ('graphNEL', edgemode='undirected')
> cw <- CytoscapeWindow ('smallExample', graph=RCytoscape::makeSimpleGraph())
> layout (cw, 'jgraph-spring')
> redraw(cw)
> 
> nodesFr = c('A', 'B', 'C')
> nodesTo =c('D', 'E', 'F')
> nodesAll = union(nodesFr, nodesTo)
> 
> nElemFr = length(nodesFr)
> nElemTo = length(nodesTo)
> 
> g <- graph::addNode(nodesAll, g)
> 
> setPosition(cw, nodesFr , c(400, 400, 400), c(100, 200, 300))
> setPosition(cw, nodesTo , c(600, 600, 600), c(100, 200, 300))
> Error in convertToR(xmlParse(node, asText = TRUE)) :
>   faultCode: 0 faultString: Failed to invoke method setNodesPositions in
> class tudelft.CytoscapeRPC.CytoscapeRPCCallHandler: null
> 
> setPosition(cw, nodesTo , c(600, 600, 600), c(100, 200, 300))
> Error in convertToR(xmlParse(node, asText = TRUE)) :
>   faultCode: 1001 faultString: Node with id: D could not be found.
> 
> Thanks
> --Fahim
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


-- 
Computational Biology
Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N. PO Box 19024 Seattle, WA 98109

Location: M1-B861
Telephone: 206 667-2793

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Optimization Question

2011-02-14 Thread Gabrielsen, Alexandros
Hi all,
 
This is my first optimization code and I am receiving the following error for the 
following function:
fn <- function(p) {
    263*log(sqrt(2*pi)*sd(test$A)) +
        sum(log(abs(c(test$A[-1], 1))^p[3])) +
        (sum(((test$A - p[1] + (p[2]+1)*c(test$A[-1], 1)))^2) /
             sum(sd(test$A)*(abs(c(test$A[-1], 1))^p[3])^2))/2
}

out <- optim(fn, p = c(0.1, 0.1, 2.5), method="BFGS", hessian=TRUE)

Error in optim(fn, p = c(0.1, 0.1, 2.5), method = "BFGS", hessian = TRUE) : 

non-finite finite-difference value [3]

 
I have tried multiple initial values; however, the error remains the same.
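
One check I suppose might narrow this down (untested) is whether fn stays
finite near the starting values, since the error message mentions a
non-finite finite-difference value:

p0 <- c(0.1, 0.1, 2.5)
is.finite(fn(p0))   # FALSE or NA here would explain the optim() error
out <- optim(par = p0, fn = fn, method = "BFGS", hessian = TRUE)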
 
Running R 2.12.1
 
Many Thanks!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] how to order POSIXt objects ?

2011-02-14 Thread JonC

I have a problem ordering a POSIXt object in descending order. Can
someone help please and let me know how to work around this? My goal is to
be able to order my data by DATE and then by descending TIME.

I have tried to include as much info as possible below. The problem stems
from trying to read in times from a CSV file. I have converted the character
time values to a POSIXt object using the strptime() function. I would ideally
like to sort using the order function as below.

test.sort <- order(test$DATE, -test$mytime)

However, when I try this I receive the error as below : 

Error in `-.POSIXt`(test2$mytime) : 
  unary '-' is not defined for "POSIXt" objects

To make this easier to understand I have pasted my example data below with a
list of R commands I have used. Any help or assistance would be appreciated.

> test2 <- read.csv("C:/Documents and Settings/Jonathan Cooke/My
> Documents/Downloads/test2.csv", sep=",")
> test2
DATE TIME
1 18/01/2011 08:00:01
2 18/01/2011 08:10:01
3 18/01/2011 08:20:01
4 18/01/2011 08:30:01
5 19/01/2011 08:00:01
6 19/01/2011 08:10:01
7 19/01/2011 08:20:01
8 19/01/2011 08:30:01

> test2$mytime <- strptime(test2$TIME,"%H:%M:%S")
> test2$mytime
[1] "2011-02-14 08:00:01" "2011-02-14 08:10:01" "2011-02-14 08:20:01"
"2011-02-14 08:30:01" "2011-02-14 08:00:01"
[6] "2011-02-14 08:10:01" "2011-02-14 08:20:01" "2011-02-14 08:30:01"

> test2
DATE TIME  mytime
1 18/01/2011 08:00:01 2011-02-14 08:00:01
2 18/01/2011 08:10:01 2011-02-14 08:10:01
3 18/01/2011 08:20:01 2011-02-14 08:20:01
4 18/01/2011 08:30:01 2011-02-14 08:30:01
5 19/01/2011 08:00:01 2011-02-14 08:00:01
6 19/01/2011 08:10:01 2011-02-14 08:10:01
7 19/01/2011 08:20:01 2011-02-14 08:20:01
8 19/01/2011 08:30:01 2011-02-14 08:30:01

> test2.sort <- order(test2$DATE, -test2$mytime)
Error in `-.POSIXt`(test2$mytime) : 
  unary '-' is not defined for "POSIXt" objects
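
The only idea I have had so far (untested) is to convert the times to
numeric, where unary '-' is defined, i.e.

test2.sort <- order(test2$DATE, -as.numeric(test2$mytime))

but I am not sure whether that is the right approach.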

It's at this stage where I have got stuck as I'm new to R and don't yet know
a way of getting around this error. Thanks in advance. 

JonC









-- 
View this message in context: 
http://r.789695.n4.nabble.com/how-to-order-POSIXt-objects-tp3304609p3304609.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to get warning about implicit factor to integer coercion?

2011-02-14 Thread WB Kloke
Is there a way in R (2.12.x) to avoid the implicit coercion of factors to integers
in the context of subscripts?

If this is not possible, is there a way to get at least a warning, if any
coercion of this type happens, given that the action of this coercion is almost
never what is wanted?
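
For illustration, the kind of accident I mean (the names here are made up):

x <- c(a = 1, b = 2, c = 3)
f <- factor("c", levels = c("c", "b", "a"))
x[f]                # indexes by the integer code: returns x[1], the 'a' element
x[as.character(f)]  # what is almost always intended: x["c"]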

Of course, in the rare case that as.integer() is applied explicitly onto a
factor, the warning is not needed, but certainly not as disastrous as in the
other case.

Probably, in a future version of R, an option similar to that for partial
matches of subscripts might be useful to change the default from maximal
unsafety to safe use.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Regular Expression

2011-02-14 Thread Deb Midya
Hi R users,
 
Thanks in advance.
 
I am using R-2.12.1 on Windows XP.
 
I am looking for some good literature on regular expressions. May I request 
your assistance, please?

Once again, thank you very much for the time you have given.
 
Regards,
 
Deb
 


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] package rgl installation failure due to png

2011-02-14 Thread xian-1 . zhang
Dear R list,

I am having problem installing the package rgl on a redhat system.

System info:

Linux lci4.eu.novartis.net 2.6.9-22.ELsmp #1 SMP Mon Sep 19 18:00:54 EDT 
2005 x86_64 x86_64 x86_64 GNU/Linux


R sessionInfo():

R version 2.12.1 (2010-12-16)
Platform: x86_64-unknown-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8   LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=C  LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8   LC_NAME=C
 [9] LC_ADDRESS=C   LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

Error msg when installing rgl:

 install.packages('rgl_0.92.798.tar.gz')
inferring 'repos = NULL' from the file name
* installing *source* package 'rgl' ...
checking for gcc... gcc -std=gnu99
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc -std=gnu99 accepts -g... yes
checking for gcc -std=gnu99 option to accept ISO C89... none needed
checking how to run the C preprocessor... gcc -std=gnu99 -E
checking for gcc... (cached) gcc -std=gnu99
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc -std=gnu99 accepts -g... (cached) yes
checking for gcc -std=gnu99 option to accept ISO C89... (cached) none 
needed
checking for libpng-config... yes
configure: using libpng-config
configure: using libpng dynamic linkage
checking for X... libraries /usr/X11R6/lib64, headers /usr/X11R6/include
checking GL/gl.h usability... yes
checking GL/gl.h presence... yes
checking for GL/gl.h... yes
checking GL/glu.h usability... yes
checking GL/glu.h presence... yes
checking for GL/glu.h... yes
checking for glEnd in -lGL... yes
checking for gluProject in -lGLU... yes
checking for freetype-config... yes
configure: using Freetype and FTGL
configure: creating ./config.status
config.status: creating src/Makevars
** libs
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c BBoxDeco.cpp -o BBoxDeco.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c Background.cpp -o Background.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c Color.cpp -o Color.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c Disposable.cpp -o Disposable.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c Light.cpp -o Light.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c LineSet.cpp -o LineSet.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c LineStripSet.cpp -o LineStripSet.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c Material.cpp -o Material.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c PointSet.cpp -o PointSet.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c PrimitiveSet.cpp -o PrimitiveSet.o
g++ -I/usr/people/rim/zhangx2s/software/R/lib64/R/include -DHAVE_PNG_H 
-I/usr/include/libpng12 -I/usr/X11R6/include -DHAVE_FREETYPE -Iext/ftgl 
-I/usr/include/freetype2 -Iext -I/usr/local/include   -g -O2 -fpic  -g -O2 
-c QuadSet.cpp -o QuadSet.o
g++ -I/usr/people/rim/zhangx2s/s

[R] Analyzing dissimilarity ratings with Multidimensional Scaling

2011-02-14 Thread Luca Turchet
Dear R-list members,
I need help with a Multidimensional Scaling analysis (MDS).
So far I have used the cmdscale() command in R, but I did not get the
perceptual map I would love to see, and I would like to know whether it is
possible to get it using R, and if yes, how.
I also had a look at the functions isoMDS() and sammon(), but with no luck.

I summarize the experiment I performed below, and I would ask for your help
in showing me how to analyze my results in R.

I simulated 6 surface materials at the auditory and visual levels, and I asked
subjects to evaluate on a 9-point Likert scale the degree of coherence between
the two stimuli. For example, there are trials where a metal surface was
simulated at the auditory level and the snow surface at the visual level, or
wood at both the auditory and visual levels.

The experiment had only 12 participants, so for each trial I have 12 responses
(no repeated measures involved).
In total there were 36 trials (the combinations of the 6 materials), and each
trial was evaluated only once by each of the 12 participants.


Basically, these are my data:

 WD   MT   SW   GR   SN   DL
WD 7.00 6.50 4.91 4.83 5.50 5.00
MT 7.33 6.91 2.08 3.16 4.25 3.25
SW 2.91 1.75 7.91 6.25 6.83 5.41
GR 2.91 2.66 6.25 6.41 7.25 6.75
SN 4.00 4.00 5.58 6.00 7.00 6.58
DL 3.91 3.08 5.16 6.25 6.50 6.83

The rows are the visual stimuli and the columns the auditory stimuli. Each
cell holds the average score for that trial (e.g. the trial GR-MT is 2.66,
that is, the average score given by participants to the trial where the
material "gravel" was presented at the visual level and the material "metal"
at the auditory level).


My first goal in the analysis process is to plot a perceptual map on which to
place the pairs of audio-visual stimuli (e.g. WD-WD, MT-DL, etc.) and see how
far the trials are from each other.
I used cmdscale in R but I did not get the wanted result. Any suggestion?

My second goal would be to find p-values assessing statistically significant
differences.

For example, I would like to understand whether the coherent couple of stimuli
SW-SW (which means "snow" at both the auditory and visual levels) produces
significantly different evaluations from the couple SW-MT (which means "snow"
at the auditory level and "metal" at the visual level).

Again, I would like to understand whether there is any significant difference
between all the couples of stimuli corresponding to solid surfaces (like the
couples MT-MT, MT-WD, WD-WD) and all the couples where a solid surface and an
aggregate surface are presented (like the couples MT-SN, or WD-GR, etc.).


I really thank anyone who can provide any suggestion or useful information.
I searched the R mailing list archives extensively but did not find anything
that fits my case.

Thanks in advance

Kind regards


Luca
-- 
---

"Music is a moral law:
It gives a soul to the Universe,
wings to the mind,
flight to the imagination,
a charm to sadness,
and life to everything.
It is the essence of order,
and leads to all that is good,
just and beautiful,
of which it is the invisible,
but nevertheless dazzling,
passionate, and eternal form".

Plato, 400 B.C. (from the Dialogues)

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Using filled.contour and contour functions together

2011-02-14 Thread Xavier Bodin

Thanks to you, and to David Winsemius who replied, I finally found a
solution, which works pretty well:

filled.contour(x,y,z, plot.axes = {axis(1); axis(2) ; contour(x,y,z2, add =
T); contour(x,y,z2, nlevels = 1, level = 0, add = T, lwd = 1.5)})


Xavier
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Using-filled-contour-and-contour-functions-together-tp3301033p3304904.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Optimal Y>=q cutoff after logistic regression

2011-02-14 Thread Frank Harrell

It is very seldom that such a cutoff is real and validates in another
dataset.  As described so well in Steyerberg's book Clinical Prediction
Models, there are many good ways to present models to non-statisticians. 
Nomograms and calibration curves with histograms of predicted probabilities
are two good ones.

There is a reason that the speedometer in your car doesn't just read "slow"
and "fast".

Frank

-
Frank Harrell
Department of Biostatistics, Vanderbilt University
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Optimal-Y-q-cutoff-after-logistic-regression-tp3304474p3305012.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] MCMC glmm

2011-02-14 Thread garciap

Hi to all the people,

I'm working with abundance data for some species that contain too many zero
values, and the factors are the ones typical of a BACI experiment
(Before-and-After-Control-Impact). Thus, these are two fixed factors. As the
data do not meet the normality and homogeneity-of-variances assumptions of
classical ANOVA, I'm trying to fit a zero-altered model using the MCMCglmm
library.
I've two questions:

1.- How can I include an interaction between the BA (before and after) and
the CI (control-impact) components in this kind of model? I searched the
notes available for the package but found no clear answer. My first approach
would be to write a formula like: Abundance ~ BA + CI + BA*CI.
2.- Even when I try to fit a model without interactions I can't do it
because I obtain the following error:
> fit<-MCMCglmm(Abundancia~BA+CI, random=NULL,
> family="zapoisson",data=Trucha)
Error in MCMCglmm(Abundancia ~ BA + CI, random = NULL, family = "zapoisson", 
: 
  please use idh(trait):units or us(trait):units or trait:units for error
structures involving multinomial data with more than 2 categories or
zero-infalted/altered/hurdle models

I don't know where the problem is; maybe it is because my original data are
organised as follows (obviously with much more data):

Abundance   BA   CI
        5    1    1
        3    2    1
        6    1    2
        8    2    2

Any idea or suggestion? Many thanks for your help and patience, best regards


Pablo
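
Putting the two pieces together, I guess the call should look something like
this (an untested sketch; the rcov term is what the error message asks for):

library(MCMCglmm)
Trucha$BA <- factor(Trucha$BA)   # 1 = before, 2 = after
Trucha$CI <- factor(Trucha$CI)   # 1 = control, 2 = impact
## BA*CI expands to BA + CI + BA:CI, i.e. main effects plus interaction;
## rcov = ~idh(trait):units is the error structure the message requires
## for zero-altered models
fit <- MCMCglmm(Abundancia ~ trait + BA*CI, random = NULL,
                rcov = ~idh(trait):units, family = "zapoisson",
                data = Trucha)
summary(fit)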
-- 
View this message in context: 
http://r.789695.n4.nabble.com/MCMC-glmm-tp3304916p3304916.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analyzing dissimilarity ratings with Multidimensional Scaling

2011-02-14 Thread Mike Marchywka



> My first goal in the analysis process is to print a perceptual map where to
> place the pairs of
> audio-visual stimuli (e.g. WD-WD, MT-DL, etc.) and see how far the trials
> are from each other.

I've been using heatmap for stuff like this.
You can get a nice picture this way, plus a quick visual
survey and dendrograms:

xm <- scan(file = "avm.txt")   # the 36 ratings, read as a vector
str(xm)
heatmap(matrix(xm, 6, 6))      # heatmap() needs a matrix, not a bare vector

I ran the above on your data and it visually looks
like there could be interesting patterns to test.
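
For the perceptual map itself, something along these lines may be a starting
point with cmdscale(). It is only a sketch: it assumes the coherence ratings
can be treated as similarities on the 9-point scale (so 9 - r is a
dissimilarity) and symmetrized by averaging:

r <- matrix(c(7.00, 6.50, 4.91, 4.83, 5.50, 5.00,
              7.33, 6.91, 2.08, 3.16, 4.25, 3.25,
              2.91, 1.75, 7.91, 6.25, 6.83, 5.41,
              2.91, 2.66, 6.25, 6.41, 7.25, 6.75,
              4.00, 4.00, 5.58, 6.00, 7.00, 6.58,
              3.91, 3.08, 5.16, 6.25, 6.50, 6.83),
            nrow = 6, byrow = TRUE,
            dimnames = list(c("WD","MT","SW","GR","SN","DL"),
                            c("WD","MT","SW","GR","SN","DL")))
d <- 9 - r               # high coherence -> small dissimilarity
d <- (d + t(d))/2        # cmdscale() wants a symmetric dissimilarity
fit <- cmdscale(as.dist(d), k = 2)
plot(fit, type = "n", xlab = "Dimension 1", ylab = "Dimension 2")
text(fit, labels = rownames(r))   # the materials, placed on the map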





  
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Optimal Y>=q cutoff after logistic regression

2011-02-14 Thread Viechtbauer Wolfgang (STAT)
That's definitely one for the fortune package!

Wolfgang

> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On Behalf Of Frank Harrell
> Sent: Monday, February 14, 2011 14:50
> To: r-help@r-project.org
> Subject: Re: [R] Optimal Y>=q cutoff after logistic regression
> 
> 
> It is very seldom that such a cutoff is real and validates in another
> dataset.  As described so well in Steyerberg's book Clinical Prediction
> Modeling there are many good ways to present models to non-statisticians.
> Nomograms and calibration curves with histograms of predicted
> probabilities are two good ones.
> 
> There is a reason that the speedometer in your car doesn't just read
> "slow" and "fast".
> 
> Frank
> 
> -
> Frank Harrell
> Department of Biostatistics, Vanderbilt University
> --
> View this message in context: http://r.789695.n4.nabble.com/Optimal-Y-q-
> cutoff-after-logistic-regression-tp3304474p3305012.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] censReg or tobit: testing for assumptions in R?

2011-02-14 Thread Terry Therneau
> I'm thinking of applying a censored regression model to
> cross-sectional data, using either the tobit (package survival) or the
> censReg function (package censReg). The dependent variable is left and
> right-censored.
  I assume you mean "survreg" in the survival package.  It's a shame
that censored gaussian regression has earned a unique label (tobit) that
makes people think it is something so very different.


> My hopefully not too silly question is this: I understand that
>heteroskedasticity and nonnormal errors are even more serious problems
>in a censored regression than in an ols-regression. But I'm not sure
>how to test for these assumptions in R? Is there a way to get to the
>residuals of censored regression models (given that corresponding
>functions for lm, such as rstandard, are not applicable)?

Actually it can be less of a problem, since the really large outliers
often turn into censored observations, limiting their leverage. 
For examination, there are good ideas in the article referenced
by ?residuals.survreg.
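
A minimal sketch of the sort of thing that is possible (simulated data,
made-up censoring limits):

library(survival)
set.seed(1)
x <- rnorm(100)
ystar <- 1 + 2*x + rnorm(100)    # latent response
y <- pmin(pmax(ystar, 0), 4)     # observed: censored below 0 and above 4
lo <- ifelse(y <= 0, NA, y)      # NA lower bound = left-censored
hi <- ifelse(y >= 4, NA, y)      # NA upper bound = right-censored
fit <- survreg(Surv(lo, hi, type = "interval2") ~ x, dist = "gaussian")
rdev <- residuals(fit, type = "deviance")
qqnorm(rdev)                     # eyeball the deviance residuals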

Terry Therneau

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] saving plots

2011-02-14 Thread Philipp Pagel
On Mon, Feb 14, 2011 at 12:59:13PM +0530, km wrote:
> Hi all,
> 
> Is there a way to save the currently displayed plot to  an image format just
> after we view it?
> I think this would be more intuitive as a user if I wish to save it just
> after I visualize the plot.
> 
> I am aware that we need to do some thing like this
> jpeg('somefilename.jpg')
> ... plot... commands...
> dev.off()

In addition to savePlot, which has already been recommended, you may
also want to look at dev.copy2eps and dev.copy2pdf. 
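
E.g. (a minimal sketch; the file name is arbitrary):

plot(rnorm(100))                         # look at the plot on screen first
dev.copy2pdf(file = "current_plot.pdf")  # then snapshot the active device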

cu
Philipp

-- 
Dr. Philipp Pagel
Lehrstuhl für Genomorientierte Bioinformatik
Technische Universität München
Wissenschaftszentrum Weihenstephan
Maximus-von-Imhof-Forum 3
85354 Freising, Germany
http://webclu.bio.wzw.tum.de/~pagel/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get warning about implicit factor to integer coercion?

2011-02-14 Thread Ista Zahn
Hi,
I think an example would be helpful. I'm not sure what behavior you
are referring to.

best,
Ista
On Mon, Feb 14, 2011 at 5:31 AM, WB Kloke  wrote:
> Is there a way in R (12.x) to avoid the implicit coercion of factors to 
> integers
> in the context of subscripts?
>
> If this is not possible, is there a way to get at least a warning, if any
> coercion of this type happens, given that the action of this coercion is 
> almost
> never what is wanted?
>
> Of course, in the rare case that as.integer() is applied explicitly onto a
> factor, the warning is not needed, but certainly not as disastrous as in the
> other case.
>
> Probably, in a future version of R, an option similar to that for partial
> matches of subscripts might be useful to change the default from maximal
> unsafety to safe use.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R 2.12.2 scheduled for February 25

2011-02-14 Thread Peter Dalgaard
This is to announce that we plan to release R version 2.12.2 on Friday,
February 25, 2011. (Mainly to sort out complex arithmetic issues with some 
compiler platforms.)

Those directly involved should review the generic schedule at
http://developer.r-project.org/release-checklist.html

The source tarballs will be made available daily (barring build
troubles) via

http://cran.r-project.org/src/base-prerelease/

For the R Core Team
Peter Dalgaard

-- 
Peter Dalgaard
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd@cbs.dk  Priv: pda...@gmail.com

___
r-annou...@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-announce

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to order POSIXt objects ?

2011-02-14 Thread jim holtman
'unclass' it first (assuming that it is POSIXct)

-unclass(mytime)
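
Applied to the example (a sketch; strptime() returns POSIXlt, so convert to
POSIXct first):

test2$mytime <- as.POSIXct(test2$mytime)
## date ascending, time descending; note DATE is still character here,
## which happens to sort correctly for these particular values
test2.sort <- test2[order(test2$DATE, -unclass(test2$mytime)), ]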

On Mon, Feb 14, 2011 at 3:55 AM, JonC  wrote:
>
> I have a problem ordering by descending magnitude a POSIXt object. Can
> someone help please and let me know how to work around this. My goal is to
> be able to order my data by DATE and then by descending TIME.
>
> I have tried to include as much info as possible below. The problem stems
> from trying to read in times from a CSV file. I have converted the character
> time values to a POSIXt object using the STRPTIME function. I would like
> ideally to sort using the order function as below.
>
> test.sort <- order(test$DATE, -test$mytime)
>
> However, when I try this I receive the error as below :
>
> Error in `-.POSIXt`(test2$mytime) :
>  unary '-' is not defined for "POSIXt" objects
>
> To make this easier to understand I have pasted my example data below with a
> list of R commands I have used. Any help or assistance would be appreciated.
>
>> test2 <- read.csv("C:/Documents and Settings/Jonathan Cooke/My
>> Documents/Downloads/test2.csv", sep=",")
>> test2
>        DATE     TIME
> 1 18/01/2011 08:00:01
> 2 18/01/2011 08:10:01
> 3 18/01/2011 08:20:01
> 4 18/01/2011 08:30:01
> 5 19/01/2011 08:00:01
> 6 19/01/2011 08:10:01
> 7 19/01/2011 08:20:01
> 8 19/01/2011 08:30:01
>
>> test2$mytime <- strptime(test2$TIME,"%H:%M:%S")
>> test2$mytime
> [1] "2011-02-14 08:00:01" "2011-02-14 08:10:01" "2011-02-14 08:20:01"
> "2011-02-14 08:30:01" "2011-02-14 08:00:01"
> [6] "2011-02-14 08:10:01" "2011-02-14 08:20:01" "2011-02-14 08:30:01"
>
>> test2
>        DATE     TIME              mytime
> 1 18/01/2011 08:00:01 2011-02-14 08:00:01
> 2 18/01/2011 08:10:01 2011-02-14 08:10:01
> 3 18/01/2011 08:20:01 2011-02-14 08:20:01
> 4 18/01/2011 08:30:01 2011-02-14 08:30:01
> 5 19/01/2011 08:00:01 2011-02-14 08:00:01
> 6 19/01/2011 08:10:01 2011-02-14 08:10:01
> 7 19/01/2011 08:20:01 2011-02-14 08:20:01
> 8 19/01/2011 08:30:01 2011-02-14 08:30:01
>
>> test2.sort <- order(test2$DATE, -test2$mytime)
> Error in `-.POSIXt`(test2$mytime) :
>  unary '-' is not defined for "POSIXt" objects
>
> It's at this stage where I have got stuck as I'm new to R and don't yet know
> a way of getting around this error. Thanks in advance.
>
> JonC
>
>
>
>
>
>
>
>
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/how-to-order-POSIXt-objects-tp3304609p3304609.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Jim Holtman
Data Munger Guru

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Regular Expression

2011-02-14 Thread Gabor Grothendieck
On Mon, Feb 14, 2011 at 4:13 AM, Deb Midya  wrote:
> Hi R users,
>
> Thanks in advance.
>
> I am using R-2.12.1 on Windows XP.
>
> I am looking for some good literature on Regular Expression. May I request 
> you to assist me please.

There are regular expression links on the gsubfn home page:
http://gsubfn.googlecode.com
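
Within R itself, ?regex documents the syntax, and a couple of toy calls go a
long way (hypothetical strings):

x <- c("apple123", "banana", "cherry42")
grepl("[0-9]+$", x)     # TRUE FALSE TRUE: which elements end in digits?
sub("[0-9]+$", "", x)   # "apple" "banana" "cherry": strip trailing digits
gsub("[^0-9]", "", x)   # "123" "" "42": keep only the digits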

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to order POSIXt objects ?

2011-02-14 Thread Hadley Wickham
It's a bit better to use xtfrm.
Hadley

On Monday, February 14, 2011, jim holtman  wrote:
> 'unclass' it first(assuming that it is POSIXct)
>
> -unclass(mytime)
>
> On Mon, Feb 14, 2011 at 3:55 AM, JonC  wrote:
>>
>> I have a problem ordering by descending magnitude a POSIXt object. Can
>> someone help please and let me know how to work around this. My goal is to
>> be able to order my data by DATE and then by descending TIME.
>>
>> I have tried to include as much info as possible below. The problem stems
>> from trying to read in times from a CSV file. I have converted the character
>> time values to a POSIXt object using the STRPTIME function. I would like
>> ideally to sort using the order function as below.
>>
>> test.sort <- order(test$DATE, -test$mytime)
>>
>> However, when I try this I receive the error as below :
>>
>> Error in `-.POSIXt`(test2$mytime) :
>>  unary '-' is not defined for "POSIXt" objects
>>
>> To make this easier to understand I have pasted my example data below with a
>> list of R commands I have used. Any help or assistance would be appreciated.
>>
>>> test2 <- read.csv("C:/Documents and Settings/Jonathan Cooke/My
>>> Documents/Downloads/test2.csv", sep=",")
>>> test2
>>        DATE     TIME
>> 1 18/01/2011 08:00:01
>> 2 18/01/2011 08:10:01
>> 3 18/01/2011 08:20:01
>> 4 18/01/2011 08:30:01
>> 5 19/01/2011 08:00:01
>> 6 19/01/2011 08:10:01
>> 7 19/01/2011 08:20:01
>> 8 19/01/2011 08:30:01
>>
>>> test2$mytime <- strptime(test2$TIME,"%H:%M:%S")
>>> test2$mytime
>> [1] "2011-02-14 08:00:01" "2011-02-14 08:10:01" "2011-02-14 08:20:01"
>> "2011-02-14 08:30:01" "2011-02-14 08:00:01"
>> [6] "2011-02-14 08:10:01" "2011-02-14 08:20:01" "2011-02-14 08:30:01"
>>
>>> test2
>>        DATE     TIME              mytime
>> 1 18/01/2011 08:00:01 2011-02-14 08:00:01
>> 2 18/01/2011 08:10:01 2011-02-14 08:10:01
>> 3 18/01/2011 08:20:01 2011-02-14 08:20:01
>> 4 18/01/2011 08:30:01 2011-02-14 08:30:01
>> 5 19/01/2011 08:00:01 2011-02-14 08:00:01
>> 6 19/01/2011 08:10:01 2011-02-14 08:10:01
>> 7 19/01/2011 08:20:01 2011-02-14 08:20:01
>> 8 19/01/2011 08:30:01 2011-02-14 08:30:01
>>
>>> test2.sort <- order(test2$DATE, -test2$mytime)
>> Error in `-.POSIXt`(test2$mytime) :
>>  unary '-' is not defined for "POSIXt" objects
>>
>> It's at this stage where I have got stuck as I'm new to R and don't yet know
>> a way of getting around this error. Thanks in advance.
>>
>> JonC
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> --
>> View this message in context: 
>> http://r.789695.n4.nabble.com/how-to-order-POSIXt-objects-tp3304609p3304609.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Jim Holtman
> Data Munger Guru
>
> What is the problem that you are trying to solve?
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

-- 
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Optimization Question

2011-02-14 Thread Ravi Varadhan
Your function is non-smooth and nasty looking.  You might want to set the
function value to a large positive number if an illegal arithmetic operation
is performed and `NaN' is returned.  

fn <- function(p) {
  ftemp <- 263*log(sqrt(2*pi)*sd(test$A)) +
    sum(log(abs(c(test$A[-1], 1))^p[3])) +
    (sum(((test$A - p[1] + (p[2]+1)*c(test$A[-1], 1)))^2) /
       sum(sd(test$A)*(abs(c(test$A[-1], 1))^p[3])^2))/2
  if (is.nan(ftemp)) ftemp <- 1.0e11  # replace NaN by a large penalty value
  return(ftemp)
}

Another option is to specify bounds on the parameters.  This will help avoid
illegal arithmetic.

If you are still unsuccessful after these changes, you might want to try
optimizing using "optimx" package, as it will try various optimization
tools.  Hopefully, some of them will be successful.

If you send the data test$A, we might be able to help you better.

Hope this helps,
Ravi.


---
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology School of Medicine Johns
Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Gabrielsen, Alexandros
Sent: Monday, February 14, 2011 7:17 AM
To: r-help@r-project.org
Subject: [R] Optimization Question

Hi all,
 
This is my first optimization code, and I am receiving the following error for
the following function:
fn <- function(p) {
  263*log(sqrt(2*pi)*sd(test$A)) +
    sum(log(abs(c(test$A[-1], 1))^p[3])) +
    (sum(((test$A - p[1] + (p[2]+1)*c(test$A[-1], 1)))^2) /
       sum(sd(test$A)*(abs(c(test$A[-1], 1))^p[3])^2))/2
}

out <- optim(fn, p = c(0.1, 0.1, 2.5), method="BFGS", hessian=TRUE)

Error in optim(fn, p = c(0.1, 0.1, 2.5), method = "BFGS", hessian = TRUE) : 

non-finite finite-difference value [3]

 
I have tried multiple initial values; however, the error remains the same.
 
Running R 2.12.1
 
Many Thanks!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] fit two-parameter lognormal distribution, l-moments

2011-02-14 Thread Tonja Krueger

Dear R helpers,

I would like to fit a two-parameter lognormal distribution to my data using 
l-moments. Is there a package that provides this feature? 

I used the “lmom”-package to fit the three-parameter lognormal distribution to 
my data as shown beneath. I would like something similar for the two-parameter 
lognormal distribution.

library(lmom)
data<-c(6.8044, 6.4422, 6.0900, 6.3978, 6.2156, 5.8734, 6.3112, 6.1590, 6.2368, 
6.1746, 6.0124, 6.2202, 5.7680, 5.8958, 6.5836, 5.9614, 5.9892, 6.2870)
y.data<-seq(5.6,6.8,0.01)

lmom <-samlmu(data)
lfit.lognorm<- pelln3(lmom)
lcdfln3<- cdfln3(y.data,lfit.lognorm)
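
(If I remember the lmom interface correctly, pelln3() accepts a fixed lower
bound, which reduces the fit to the two-parameter lognormal. A sketch,
untested on this data:)

lfit.ln2 <- pelln3(lmom, bound = 0)   # zeta fixed at 0: two-parameter lognormal
lcdfln2 <- cdfln3(y.data, lfit.ln2)
## equivalently, fit a normal distribution to the logged data:
## cdfnor(log(y.data), pelnor(samlmu(log(data))))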

Thank you, Tonja




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] sem problem - did not converge

2011-02-14 Thread Felipe
Can someone help me? I have tried several things and the model never converges.
I am fitting a confirmatory factor analysis model.



# Model
library(sem)
dados40.cov <- cov(dados40,method="spearman")
model.dados40 <- specify.model()
F1 ->  Item11, lam11, NA
F1 ->  Item31, lam31, NA
F1 ->  Item36, lam36, NA
F1 ->  Item54, lam54, NA
F1 ->  Item63, lam63, NA
F1 ->  Item65, lam55, NA
F1 ->  Item67, lam67, NA
F1 ->  Item69, lam69, NA
F1 ->  Item73, lam73, NA
F1 ->  Item75, lam75, NA
F1 ->  Item76, lam76, NA
F1 ->  Item78, lam78, NA
F1 ->  Item79, lam79, NA
F1 ->  Item80, lam80, NA
F1 ->  Item83, lam83, NA
F2 ->  Item12, lam12, NA
F2 ->  Item32, lam32, NA
F2 ->  Item42, lam42, NA
F2 ->  Item47, lam47, NA
F2 ->  Item64, lam64, NA
F2 ->  Item66, lam66, NA
F2 ->  Item68, lam68, NA
F2 ->  Item74, lam74, NA
F3 ->  Item3, lam3, NA
F3 ->  Item8, lam8, NA
F3 ->  Item18, lam18, NA
F3 ->  Item23, lam23, NA
F3 ->  Item28, lam28, NA
F3 ->  Item33, lam33, NA
F3 ->  Item38, lam38, NA
F3 ->  Item43, lam43, NA
F4 ->  Item9, lam9, NA
F4 ->  Item39, lam39, NA
F5 ->  Item5, lam5, NA
F5 ->  Item10, lam10, NA
F5 ->  Item20, lam20, NA
F5 ->  Item25, lam25, NA
F5 ->  Item30, lam30, NA
F5 ->  Item35, lam35, NA
F5 ->  Item45, lam45, NA
Item3 <-> Item3, e3,   NA
Item5 <-> Item5, e5,   NA
Item8 <-> Item8, e8,   NA
Item9 <-> Item9, e9,   NA
Item10 <-> Item10, e10,   NA
Item11 <-> Item11, e11,   NA
Item12 <-> Item12, e12,   NA
Item18 <-> Item18, e18,   NA
Item20 <-> Item20, e20,   NA
Item23 <-> Item23, e23,   NA
Item25 <-> Item25, e25,   NA
Item28 <-> Item28, e28,   NA
Item30 <-> Item30, e30,   NA
Item31 <-> Item31, e31,   NA
Item32 <-> Item32, e32,   NA
Item33 <-> Item33, e33,   NA
Item35 <-> Item35, e35,   NA
Item36 <-> Item36, e36,   NA
Item38 <-> Item38, e38,   NA
Item39 <-> Item39, e39,   NA
Item42 <-> Item42, e42,   NA
Item43 <-> Item43, e43,   NA
Item45 <-> Item45, e45,   NA
Item47 <-> Item47, e47,   NA
Item54 <-> Item54, e54,   NA
Item63 <-> Item63, e63,   NA
Item64 <-> Item64, e64,   NA
Item65 <-> Item65, e65,   NA
Item66 <-> Item66, e66,   NA
Item67 <-> Item67, e67,   NA
Item68 <-> Item68, e68,   NA
Item69 <-> Item69, e69,   NA
Item73 <-> Item73, e73,   NA
Item74 <-> Item74, e74,   NA
Item75 <-> Item75, e75,   NA
Item76 <-> Item76, e76,   NA
Item78 <-> Item78, e78,   NA
Item79 <-> Item79, e79,   NA
Item80 <-> Item80, e80,   NA
Item83 <-> Item83, e83,   NA
F1 <-> F1, NA,1
F2 <-> F2, NA,1
F3 <-> F3, NA,1
F4 <-> F4, NA,1
F5 <-> F5, NA,1
F1 <-> F2, F1F2, NA
F1 <-> F3, F1F3, NA
F1 <-> F4, F1F4, NA
F1 <-> F5, F1F5, NA
F2 <-> F3, F2F3, NA
F2 <-> F4, F2F4, NA
F2 <-> F5, F2F5, NA
F3 <-> F4, F3F4, NA
F3 <-> F5, F3F5, NA
F4 <-> F5, F4F5, NA


### I tried several correlations, such as hetcor and polychor from the
### polycor library


hcor <- function(data) hetcor(data, std.err=FALSE)$correlations
hetdados40=hcor(dados40)


dados40.sem <- sem(model.dados40, dados40.cov, nrow(dados40))
Warning message:
In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
vars,  :
  Could not compute QR decomposition of Hessian.
Optimization probably did not converge.

#

The same happens if I put hetdados40 in place of dados40.cov;
of course hetdados40 has 1s on the diagonal, but no 0s.

What should I do? I have tried several things...

All the eigenvalues are positive:

#

> eigen(hetdados40)$values
 [1] 14.7231030  4.3807378  1.6271780  1.4000193  1.0670784  1.0217670
 [7]  0.8792466  0.8103790  0.7397817  0.7279262  0.6909955  0.6589746
[13]  0.6237204  0.6055884  0.550  0.5712017  0.5469284  0.5215437
[19]  0.5073809  0.4892339  0.4644124  0.4485545  0.4372404  0.4290573
[25]  0.4270672  0.4071262  0.3947753  0.3763811  0.3680527  0.3560231
[31]  0.3537934  0.3402836  0.3108977  0.3099143  0.2819351  0.2645035
[37]  0.2548654  0.2077900  0.2043732  0.1923942
> eigen(dados40.cov)$values
 [1] 884020.98 337855.95 138823.30 126291.58  87915.21  79207.04  73442.71
 [8]  68388.11  60625.26  58356.54  55934.05  54024.00  50505.10  48680.26
[15]  46836.47  45151.23  43213.65  41465.42  40449.59  37824.73  37622.43
[22]  36344.34  35794.22  33959.29  33552.64  32189.94  31304.44  30594.85
[29]  30077.32  29362.66  26928.12  26526.72  26046.47  24264.50  23213.18
[36]  21503.97  20312.55  18710.97  17093.24  14372.21

#

First: there are no missing data.
Second: there are 40 variables and 1004 subjects, so the number of variables
should not be a problem either!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Confidence interval

2011-02-14 Thread Peter Ehlers

On 2011-02-13 18:31, Joshua Wiley wrote:

Hi,

The logical operators are actually vectorized, so I do not think you
need a loop.  Does this do what you want?

## Some data
set.seed(10)
dat<- matrix(rnorm(500, sd = 3), nrow = 80)

## Hypothetical confidence interval
ci<- c(-5, 5)

## Find the number of points outside interval
sum(dat<  ci[1] | dat>  ci[2], na.rm = TRUE)

Cheers,

Josh


Or you could use (no simpler) findInterval():

 fI <- findInterval(dat, sort(ci))

## to see what's produced:
 table(fI)
#  0   1   2
# 28 512  20

## the 1s indicate values inside ci, etc,
## so we want

 sum(fI != 1)
# [1] 48

Peter Ehlers



On Sun, Feb 13, 2011 at 4:26 PM, Syd88  wrote:


Hi,
I am trying to determine how many points fall outside the confidence interval
range.

This is the code I have so far but it does not work. Any help would be
appreciated.

count <- vector()
for (i in 1:nrow(dataname)) {
  if (dataname[i] < l.ci.post[i] | dataname[i] > u.ci.post[i]) {
    count[i] <- 1
  } else {
    count[i] <- 0
  }
}
## symbol: < or > - not sure if this is the right symbol though
--
View this message in context: 
http://r.789695.n4.nabble.com/Confidence-interval-tp3304258p3304258.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.







__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Problems with "Ctrl+r" on Dell machines

2011-02-14 Thread Dimitri Liakhovitski
Hello!

I and my friend have the latest R version (2.12.1). She has a Dell
laptop and I have a Dell desktop.
Both of us have problems using "Ctrl+r" when we are trying to run
a selection in R (a regular R script). Sometimes it runs, but
frequently it does not run at all, so that one has to highlight the
code again and run it again.

I also have R on an HP laptop and I never have the same problem on it.

Has anyone encountered this problem on Dell machines? Any advice on how
to fix it?

Thank you very much!

-- 
Dimitri Liakhovitski
Ninah Consulting
www.ninah.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Optimization Question

2011-02-14 Thread Ravi Varadhan
Hi Gabriel,

I played with your problem a little bit.  I tried both "optimx" and also the
Nelder-Mead function from my recent package "dfoptim".  The log-likelihood
is minimized at -Inf.  Therefore, I think that either there is a mistake in
the way your likelihood is coded, or that your model is flawed.  Please
check your code carefully.

require(optimx)

require(dfoptim)  # contains an improved Nelder-Mead algorithm

test <- read.csv("h:/test/gabrielsen_optim_test.csv", header=TRUE)

fn <- function(p) {
  ftemp <- 263*log(sqrt(2*pi)*sd(test$A)) +
    sum(log(abs(c(test$A[-1], 1))^p[3])) +
    (sum(((test$A - p[1] + (p[2]+1)*c(test$A[-1], 1)))^2) /
       sum(sd(test$A)*(abs(c(test$A[-1], 1))^p[3])^2))/2
  if (is.nan(ftemp)) ftemp <- 1.0e11  # replace NaN by a large penalty value
  return(ftemp)
}

out <- optimx(fn, p = c(0.1, 0.1, 2.5), lower=c(-Inf, -Inf, 0),
hessian=TRUE, control=list(all.methods=TRUE))

out1 <- optimx(fn, p = c(0.1, 0.1, 2.5), lower=c(-Inf, -Inf, 0),
upper=rep(10,3), hessian=TRUE, control=list(all.methods=TRUE))

out1 <- optimx(fn, p = c(0.1, 0.1, 2.5), lower=c(-Inf, -Inf, 0),
upper=rep(20,3), hessian=TRUE, control=list(all.methods=TRUE))

out.nm <- nmk(fn, par = c(0.1, 0.1, 2.5),control=list(trace=TRUE))


Hope this helps,
Ravi.

---
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology School of Medicine Johns
Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Ravi Varadhan
Sent: Monday, February 14, 2011 10:20 AM
To: 'Gabrielsen, Alexandros'; r-help@r-project.org
Subject: Re: [R] Optimization Question

Your function is non-smooth and nasty looking.  You might want to set the
function value to a large positive number if an illegal arithmetic operation
is performed and `NaN' is returned.  

fn <- function(p) {
  ftemp <- 263*log(sqrt(2*pi)*sd(test$A)) +
    sum(log(abs(c(test$A[-1], 1))^p[3])) +
    (sum(((test$A - p[1] + (p[2]+1)*c(test$A[-1], 1)))^2) /
       sum(sd(test$A)*(abs(c(test$A[-1], 1))^p[3])^2))/2
  if (is.nan(ftemp)) ftemp <- 1.0e11  # replace NaN by a large penalty value
  return(ftemp)
}

Another option is to specify bounds on the parameters.  This will help avoid
illegal arithmetic.

If you are still unsuccessful after these changes, you might want to try
optimizing using "optimx" package, as it will try various optimization
tools.  Hopefully, some of them will be successful.

If you send the data test$A, we might be able to help you better.

Hope this helps,
Ravi.


---
Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology School of Medicine Johns
Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Gabrielsen, Alexandros
Sent: Monday, February 14, 2011 7:17 AM
To: r-help@r-project.org
Subject: [R] Optimization Question

Hi all,
 
This is my first optimization code, and I am receiving the following error for
the following function:
fn <- function(p) {
  263*log(sqrt(2*pi)*sd(test$A)) +
    sum(log(abs(c(test$A[-1], 1))^p[3])) +
    (sum(((test$A - p[1] + (p[2]+1)*c(test$A[-1], 1)))^2) /
       sum(sd(test$A)*(abs(c(test$A[-1], 1))^p[3])^2))/2
}

out <- optim(fn, p = c(0.1, 0.1, 2.5), method="BFGS", hessian=TRUE)

Error in optim(fn, p = c(0.1, 0.1, 2.5), method = "BFGS", hessian = TRUE) : 

non-finite finite-difference value [3]

 
I have tried multiple initial values; however, the error remains the same.
 
Running R 2.12.1
 
Many Thanks!

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Test for equivalence

2011-02-14 Thread Greg Snow
Reading the original post, it is fairly clear that the original poster's 
question does not match the traditional test of equivalence, but rather is 
trying to determine "distinguishable or indistinguishable".  If the test in my 
suggestion is statistically significant (and note I did not suggest only 
testing the interaction), then that meets one possible interpretation of 
"distinguishable"; a non-significant result could mean either equivalence or 
low power, the combination of which could be an interpretation of 
"indistinguishable".

I phrased my response as a question in hopes that the original poster would 
think through what they really wanted to test and get back to us with further 
details.  It could very well be that my response is very different from what 
they were thinking, but explaining how it does not fit will better help us 
understand the real problem.
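
To make the TOST idea below concrete, a toy sketch (the margin a and the data
are made up):

a <- 1                          # equivalence margin, set on scientific grounds
set.seed(42)
diffA <- rnorm(8, mean = 0.2)   # hypothetical effects from experiment 1
diffB <- rnorm(8, mean = 0.1)   # hypothetical effects from experiment 2
ci <- t.test(diffA, diffB, conf.level = 0.90)$conf.int
all(ci > -a & ci < a)           # TRUE -> declare equivalence at the 5% level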

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: Albyn Jones [mailto:jo...@reed.edu]
> Sent: Sunday, February 13, 2011 9:53 PM
> To: Greg Snow
> Cc: syrvn; r-help@r-project.org
> Subject: Re: [R] Test for equivalence
> 
> testing the null hypothesis of no interaction is not the same as a
> test of equivalence for the two differences.  There is a literature on
> tests of equivalence.  First you must develop a definition of
> equivalence, for example the difference is in the interval (-a,a).
> Then, for example,  you test the null hypothesis that the difference
> is in [a,inf) or (-inf,-a] (a TOST, or two one sided tests).  One
> simple procedure: check to see if the 90% CI for the difference
> (difference of the differences or the interaction effect) is contained
> in the interval (-a,a).
> 
> albyn
> 
> Quoting Greg Snow :
> 
> > Does it make sense for you to combine the 2 data sets and do a 2-way
> > anova with treatment vs. control as one factor and experiment number
> > as the other factor?  Then you could test the interaction and
> > treatment number factor to see if they make a difference.
> >
> > --
> > Gregory (Greg) L. Snow Ph.D.
> > Statistical Data Center
> > Intermountain Healthcare
> > greg.s...@imail.org
> > 801.408.8111
> >
> >
> >> -Original Message-
> >> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> >> project.org] On Behalf Of syrvn
> >> Sent: Saturday, February 12, 2011 7:30 AM
> >> To: r-help@r-project.org
> >> Subject: [R] Test for equivalence
> >>
> >>
> >> Hi!
> >>
> >> is there a way in R to check whether the outcome of two different
> >> experiments is statistically distinguishable or indistinguishable?
> More
> >> preciously, I used the wilcoxon test to determine the differences
> >> between
> >> controls and treated subjects for two different experiments. Now I
> >> would
> >> like to check whether the two lists of analytes obtained are
> >> statistically
> >> distinguishable or indistinguishable
> >>
> >> I tried to use a equivalence test from the 'equivalence' package in
> R
> >> but it
> >> seems that this test is not applicable to my problem. The test in
> the
> >> 'equivalence' package just determines similarity between two
> conditions
> >> but
> >> I need to compare the outcome of two different experiments.
> >>
> >> My experiments are constructed as follows:
> >>
> >> Exp1:
> >> 8 control samples
> >> 8 treated samples
> >> -> determine significantly changes (List A)
> >>
> >> Exp2:
> >> 8 control samples
> >> 8 treated samples
> >> -> determine significantly changes (List B)
> >>
> >>
> >> Now i would like to check whether List A and List B are
> distinguishable
> >> or
> >> indistinguishable.
> >>
> >> Any advice is very much appreciated!
> >>
> >> Best,
> >> beginner
> >> --
> >> View this message in context: http://r.789695.n4.nabble.com/Test-
> for-
> >> equivalence-tp3302739p3302739.html
> >> Sent from the R help mailing list archive at Nabble.com.
> >>
> >> __
> >> R-help@r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide http://www.R-project.org/posting-
> >> guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> >

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] conditional value assignment

2011-02-14 Thread Frank Tamborello
ifelse worked like a charm for this newbie. Thanks, Dennis!

-Frank

On Feb 14, 2011, at 3:39 AM, Dennis Murphy wrote:

> Hi:
> 
> Wouldn't ifelse() work here?
> 
> tco <- with(df, ifelse(TargetColor == 'B', CannonOriB, CannonOriR))
> 
> ifelse() is vectorized, so there should be no need for a loop.
> 
> Test:
> > df <- data.frame(TargetColor = c('B', 'R'), CannonOriB = c(5, 5),
> +  CannonOriR = c(3, 3), stringsAsFactors = FALSE)
> > with(df, ifelse(TargetColor == 'B', CannonOriB, CannonOriR))
> [1] 5 3
> 
> HTH,
> Dennis
> 
> On Mon, Feb 14, 2011 at 1:02 AM, Frank Tamborello 
>  wrote:
> Dear R-Help,
> 
> I am trying to compute a new variable, let's call it "target cannon 
> orientation (tco)" based conditionally on old variables, "TargetColor," 
> "CannonOriB," and "CannonOriR." For every case in the data set, if 
> TargetColor is "B" then I want tco to equal the value for that case of 
> CannonOirB, else CannonOriR. I've tried writing for loops and functions that 
> I can feed to sapply. I suspect there must be a simple solution but I cannot 
> seem to get either incantation to perform the assignment. What would be a 
> good way to do this?
> 
> Example data:
> "TargetColor.1.18." "CannonOriB.1.18." "CannonOriR.1.18."
> "B" 5   3
> "R" 5   3
> 
> 
> Example assignment of tco
> "tco"
> 5
> 3
> 
> Thanks much!
> 
> Frank Tamborello, PhD
> W. M. Keck Postdoctoral Fellow
> School of Biomedical Informatics
> University of Texas Health Science Center, Houston
> https://xfiles.uth.tmc.edu/Users/ftamborello/public/index.html
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Newb Prediction Question using stepAIC and predict(), is R wrong?

2011-02-14 Thread Greg Snow
There is a section on environments on the help page ?formula, but that may not 
be completely clear to newer users.

Basically the idea is that when you specify a formula, the default place that R 
will look for the variables in the formula is the data or newdata argument.  So 
if you use those properly, then the data argument data frame will be used to 
fit the original model and the equivalent parts of the newdata data frame will 
be used when predicting.  But if you use a term like mydata$x in the formula, 
then R will look for something named `mydata$x` in the specified data frame, 
but not find it (it does not look for just 'x'), then continue in the search 
path until it finds the original data you used, bypassing the data you want it 
to use.
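
A small hypothetical demonstration:

mydata <- data.frame(x = 1:10, y = 2*(1:10) + rnorm(10))
newdat <- data.frame(x = 101:110)
bad <- lm(mydata$y ~ mydata$x)   # formula hard-wires the original vectors
predict(bad, newdata = newdat)   # silently ignores newdat: in-sample fits
good <- lm(y ~ x, data = mydata)
predict(good, newdata = newdat)  # predictions at x = 101..110, as intended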

It is kind of like trying to teach a robot to tie a knot in a rope, but rather 
than telling the robot to take the end of the rope that is currently in its 
left hand over the end currently in its right hand (so that it can tie a knot 
in any rope it has), you instead tell it to always use the rope that was in 
your hands at the time of giving it the instructions.

Hope this helps,

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of BSanders
> Sent: Sunday, February 13, 2011 9:19 PM
> To: r-help@r-project.org
> Subject: Re: [R] Newb Prediction Question using stepAIC and predict(),
> is R wrong?
> 
> 
> "So do not use '$' or '[..]' terms in model formulae - this is going to
> cause
> problems when it comes to predict, because your formula will not
> associate
> with the names it has in its formula in the new data frame.  When you
> think
> about it, this is obvious. "
> 
> I really don't understand why this is obvious.  I'm new to R
> programming,
> and not all that experienced in programming in general.  I would be
> grateful
> for any further explanation.
> 
> 
> --
> View this message in context: http://r.789695.n4.nabble.com/Newb-
> Prediction-Question-using-stepAIC-and-predict-is-R-wrong-
> tp3298569p3304421.html
> Sent from the R help mailing list archive at Nabble.com.
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to order POSIXt objects ?

2011-02-14 Thread Prof Brian Ripley
See the help page for order.  It has a similar example, and the 
general solution is to use xtfrm, not unclass.
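
E.g., for the example in question (a sketch):

## date ascending, time descending
test2.sort <- test2[order(test2$DATE, -xtfrm(test2$mytime)), ]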


On Mon, 14 Feb 2011, jim holtman wrote:


'unclass' it first(assuming that it is POSIXct)

-unclass(mytime)

On Mon, Feb 14, 2011 at 3:55 AM, JonC  wrote:


I have a problem ordering by descending magnitude a POSIXt object. Can
someone help please and let me know how to work around this. My goal is to
be able to order my data by DATE and then by descending TIME.

I have tried to include as much info as possible below. The problem stems
from trying to read in times from a CSV file. I have converted the character
time values to a POSIXt object using the STRPTIME function. I would like
ideally to sort using the order function as below.

test.sort <- order(test$DATE, -test$mytime)

However, when I try this I receive the error as below :

Error in `-.POSIXt`(test2$mytime) :
 unary '-' is not defined for "POSIXt" objects

To make this easier to understand I have pasted my example data below with a
list of R commands I have used. Any help or assistance would be appreciated.


test2 <- read.csv("C:/Documents and Settings/Jonathan Cooke/My
Documents/Downloads/test2.csv", sep=",")
test2

       DATE     TIME
1 18/01/2011 08:00:01
2 18/01/2011 08:10:01
3 18/01/2011 08:20:01
4 18/01/2011 08:30:01
5 19/01/2011 08:00:01
6 19/01/2011 08:10:01
7 19/01/2011 08:20:01
8 19/01/2011 08:30:01


test2$mytime <- strptime(test2$TIME,"%H:%M:%S")
test2$mytime

[1] "2011-02-14 08:00:01" "2011-02-14 08:10:01" "2011-02-14 08:20:01"
"2011-02-14 08:30:01" "2011-02-14 08:00:01"
[6] "2011-02-14 08:10:01" "2011-02-14 08:20:01" "2011-02-14 08:30:01"


test2

       DATE     TIME              mytime
1 18/01/2011 08:00:01 2011-02-14 08:00:01
2 18/01/2011 08:10:01 2011-02-14 08:10:01
3 18/01/2011 08:20:01 2011-02-14 08:20:01
4 18/01/2011 08:30:01 2011-02-14 08:30:01
5 19/01/2011 08:00:01 2011-02-14 08:00:01
6 19/01/2011 08:10:01 2011-02-14 08:10:01
7 19/01/2011 08:20:01 2011-02-14 08:20:01
8 19/01/2011 08:30:01 2011-02-14 08:30:01


test2.sort <- order(test2$DATE, -test2$mytime)

Error in `-.POSIXt`(test2$mytime) :
 unary '-' is not defined for "POSIXt" objects

It's at this stage where I have got stuck as I'm new to R and don't yet know
a way of getting around this error. Thanks in advance.

JonC









--
View this message in context: 
http://r.789695.n4.nabble.com/how-to-order-POSIXt-objects-tp3304609p3304609.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.





--
Jim Holtman
Data Munger Guru

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] When is *interactive* data visualization useful to use?

2011-02-14 Thread Greg Snow
There are some interactive graphics tools in the TeachingDemos package (tkBrush 
allows brushing, tkexamp helps you create your own interactive graphics, etc.).

There are also the iplots package, the rgl package (spinning in 3 dimonsions), 
'tkrplot' package, the fgui package, the playwith package, and the rpanel 
package that may be of interest.

Just a few things to look at.
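
For a quick taste of one of these, e.g. rgl (assuming it is installed):

library(rgl)
with(iris, plot3d(Sepal.Length, Sepal.Width, Petal.Length,
                  col = as.numeric(Species)))  # drag the window to spin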


-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111

[snip]

> These are very interesting and valid points. But which tools are
> recommended / usefull for interactive graphs for data evaluation? I
> somehow have difficulties getting my head around ggobi, and haven't yet
> tried out mondian (but I will). Are there any other ones (as we are ion
> the R list - which integrate with R) which can be recommended?
> 
> Rainer
> 
> >
> 
> 
> - --
> Rainer M. Krug, PhD (Conservation Ecology, SUN), MSc (Conservation
> Biology, UCT), Dipl. Phys. (Germany)
> 
> Centre of Excellence for Invasion Biology
> Natural Sciences Building
> Office Suite 2039
> Stellenbosch University
> Main Campus, Merriman Avenue
> Stellenbosch
> South Africa
> 
> Tel:+33 - (0)9 53 10 27 44
> Cell:   +27 - (0)8 39 47 90 42
> Fax (SA):   +27 - (0)8 65 16 27 82
> Fax (D) :   +49 - (0)3 21 21 25 22 44
> Fax (FR):   +33 - (0)9 58 10 27 44
> email:  rai...@krugs.de
> 
> Skype:  RMkrug
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to order POSIXt objects ?

2011-02-14 Thread jim holtman
Thanks.  I did not even know about that function; will have to use it
in the future.   That is the good news/bad news about R; so many
things to learn about.
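
For the archive, a minimal sketch of the xtfrm() route, using the test2 data
from the post quoted below. xtfrm() returns a numeric vector that sorts the
same way as its argument, so unary minus is safe for POSIXlt as well as
POSIXct:

test2.sort <- test2[order(test2$DATE, -xtfrm(test2$mytime)), ]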

On Mon, Feb 14, 2011 at 11:58 AM, Prof Brian Ripley
 wrote:
> See the help page for order.  It has a similar example, and the general
> solution is to use xtfrm, not unclass.
>
> On Mon, 14 Feb 2011, jim holtman wrote:
>
>> 'unclass' it first (assuming that it is POSIXct)
>>
>> -unclass(mytime)
>>
>> On Mon, Feb 14, 2011 at 3:55 AM, JonC  wrote:
>>>
>>> I have a problem ordering by descending magnitude a POSIXt object. Can
>>> someone help please and let me know how to work around this. My goal is
>>> to
>>> be able to order my data by DATE and then by descending TIME.
>>>
>>> I have tried to include as much info as possible below. The problem stems
>>> from trying to read in times from a CSV file. I have converted the
>>> character
>>> time values to a POSIXt object using the STRPTIME function. I would like
>>> ideally to sort using the order function as below.
>>>
>>> test.sort <- order(test$DATE, -test$mytime)
>>>
>>> However, when I try this I receive the error as below :
>>>
>>> Error in `-.POSIXt`(test2$mytime) :
>>>  unary '-' is not defined for "POSIXt" objects
>>>
>>> To make this easier to understand I have pasted my example data below
>>> with a
>>> list of R commands I have used. Any help or assistance would be
>>> appreciated.
>>>
>>>> test2 <- read.csv("C:/Documents and Settings/Jonathan Cooke/My
>>>> Documents/Downloads/test2.csv", sep=",")
>>>> test2
>>>
>>>        DATE     TIME
>>> 1 18/01/2011 08:00:01
>>> 2 18/01/2011 08:10:01
>>> 3 18/01/2011 08:20:01
>>> 4 18/01/2011 08:30:01
>>> 5 19/01/2011 08:00:01
>>> 6 19/01/2011 08:10:01
>>> 7 19/01/2011 08:20:01
>>> 8 19/01/2011 08:30:01
>>>
>>>> test2$mytime <- strptime(test2$TIME,"%H:%M:%S")
>>>> test2$mytime
>>>
>>> [1] "2011-02-14 08:00:01" "2011-02-14 08:10:01" "2011-02-14 08:20:01"
>>> "2011-02-14 08:30:01" "2011-02-14 08:00:01"
>>> [6] "2011-02-14 08:10:01" "2011-02-14 08:20:01" "2011-02-14 08:30:01"
>>>
>>>> test2
>>>
>>>        DATE     TIME              mytime
>>> 1 18/01/2011 08:00:01 2011-02-14 08:00:01
>>> 2 18/01/2011 08:10:01 2011-02-14 08:10:01
>>> 3 18/01/2011 08:20:01 2011-02-14 08:20:01
>>> 4 18/01/2011 08:30:01 2011-02-14 08:30:01
>>> 5 19/01/2011 08:00:01 2011-02-14 08:00:01
>>> 6 19/01/2011 08:10:01 2011-02-14 08:10:01
>>> 7 19/01/2011 08:20:01 2011-02-14 08:20:01
>>> 8 19/01/2011 08:30:01 2011-02-14 08:30:01
>>>
>>>> test2.sort <- order(test2$DATE, -test2$mytime)
>>>
>>> Error in `-.POSIXt`(test2$mytime) :
>>>  unary '-' is not defined for "POSIXt" objects
>>>
>>> It's at this stage where I have got stuck as I'm new to R and don't yet
>>> know
>>> a way of getting around this error. Thanks in advance.
>>>
>>> JonC
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://r.789695.n4.nabble.com/how-to-order-POSIXt-objects-tp3304609p3304609.html
>>> Sent from the R help mailing list archive at Nabble.com.
>>>
>>> __
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>>
>>
>> --
>> Jim Holtman
>> Data Munger Guru
>>
>> What is the problem that you are trying to solve?
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> --
> Brian D. Ripley,                  rip...@stats.ox.ac.uk
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford,             Tel:  +44 1865 272861 (self)
> 1 South Parks Road,                     +44 1865 272866 (PA)
> Oxford OX1 3TG, UK                Fax:  +44 1865 272595



-- 
Jim Holtman
Data Munger Guru

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Test for equivalence

2011-02-14 Thread Albyn Jones
Reading the original post it was clear to me that the poster was looking for
a test of equivalence, but obviously there was room for interpretation!

albyn
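
A minimal sketch of the 90%-CI check described in the exchange below (the
samples d1, d2 and the equivalence margin a are hypothetical):

a  <- 0.5                                 # equivalence margin, chosen a priori
d1 <- rnorm(8, 0.1); d2 <- rnorm(8, 0.2)  # two hypothetical sets of differences
ci <- t.test(d1, d2, conf.level = 0.90)$conf.int
equivalent <- ci[1] > -a && ci[2] < a     # TOST: 90% CI contained in (-a, a)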

On Mon, Feb 14, 2011 at 09:46:13AM -0700, Greg Snow wrote:
> Reading the original post it is fairly clear that the original poster's 
> question does not match with the traditional test of equivalence, but rather 
> is trying to determine "distinguishable or indistinguishable".  If the test 
> in my suggestion is statistically significant (and note I did not suggest 
> only testing the interaction) then that meets one possible interpretation of 
> "distinguishable", a non-significant result could mean either equivalence or 
> low power, the combination of which could be an interpretation of 
> "indistinguishable".
> 
> I phrased my response as a question in hopes that the original poster would 
> think through what they really wanted to test and get back to us with further 
> details.  It could very well be that my response is very different from what 
> they were thinking, but explaining how it does not fit will better help us 
> understand the real problem.
> 
> -- 
> Gregory (Greg) L. Snow Ph.D.
> Statistical Data Center
> Intermountain Healthcare
> greg.s...@imail.org
> 801.408.8111
> 
> 
> > -Original Message-
> > From: Albyn Jones [mailto:jo...@reed.edu]
> > Sent: Sunday, February 13, 2011 9:53 PM
> > To: Greg Snow
> > Cc: syrvn; r-help@r-project.org
> > Subject: Re: [R] Test for equivalence
> > 
> > testing the null hypothesis of no interaction is not the same as a
> > test of equivalence for the two differences.  There is a literature on
> > tests of equivalence.  First you must develop a definition of
> > equivalence, for example the difference is in the interval (-a,a).
> > Then, for example,  you test the null hypothesis that the difference
> > is in [a,inf) or (-inf,-a] (a TOST, or two one sided tests).  One
> > simple procedure: check to see if the 90% CI for the difference
> > (difference of the differences or the interaction effect) is contained
> > in the interval (-a,a).
> > 
> > albyn
> > 
> > Quoting Greg Snow :
> > 
> > > Does it make sense for you to combine the 2 data sets and do a 2-way
> > > anova with treatment vs. control as one factor and experiment number
> > > as the other factor?  Then you could test the interaction and
> > > treatment number factor to see if they make a difference.
> > >
> > > --
> > > Gregory (Greg) L. Snow Ph.D.
> > > Statistical Data Center
> > > Intermountain Healthcare
> > > greg.s...@imail.org
> > > 801.408.8111
> > >
> > >
> > >> -Original Message-
> > >> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> > >> project.org] On Behalf Of syrvn
> > >> Sent: Saturday, February 12, 2011 7:30 AM
> > >> To: r-help@r-project.org
> > >> Subject: [R] Test for equivalence
> > >>
> > >>
> > >> Hi!
> > >>
> > >> is there a way in R to check whether the outcome of two different
> > >> experiments is statistically distinguishable or indistinguishable?
> > More
> > >> preciously, I used the wilcoxon test to determine the differences
> > >> between
> > >> controls and treated subjects for two different experiments. Now I
> > >> would
> > >> like to check whether the two lists of analytes obtained are
> > >> statistically
> > >> distinguishable or indistinguishable
> > >>
> > >> I tried to use a equivalence test from the 'equivalence' package in
> > R
> > >> but it
> > >> seems that this test is not applicable to my problem. The test in
> > the
> > >> 'equivalence' package just determines similarity between two
> > conditions
> > >> but
> > >> I need to compare the outcome of two different experiments.
> > >>
> > >> My experiments are constructed as follows:
> > >>
> > >> Exp1:
> > >> 8 control samples
> > >> 8 treated samples
> > >> -> determine significantly changes (List A)
> > >>
> > >> Exp2:
> > >> 8 control samples
> > >> 8 treated samples
> > >> -> determine significantly changes (List B)
> > >>
> > >>
> > >> Now i would like to check whether List A and List B are
> > distinguishable
> > >> or
> > >> indistinguishable.
> > >>
> > >> Any advice is very much appreciated!
> > >>
> > >> Best,
> > >> beginner
> > >> --
> > >> View this message in context: http://r.789695.n4.nabble.com/Test-
> > for-
> > >> equivalence-tp3302739p3302739.html
> > >> Sent from the R help mailing list archive at Nabble.com.
> > >>
> > >> __
> > >> R-help@r-project.org mailing list
> > >> https://stat.ethz.ch/mailman/listinfo/r-help
> > >> PLEASE do read the posting guide http://www.R-project.org/posting-
> > >> guide.html
> > >> and provide commented, minimal, self-contained, reproducible code.
> > >
> > > __
> > > R-help@r-project.org mailing list
> > > https://stat.ethz.ch/mailman/listinfo/r-help
> > > PLEASE do read the posting guide http://www.R-project.org/posting-
> > guide.html
> > > and provide commented, minimal

Re: [R] how to overlay a map of US on image plot

2011-02-14 Thread Greg Snow
You may want to use the ggplot2 package for this (see ?coord_map), it can 
combine maps and other plots and does a lot of the thinking about scaling and 
projections for you.
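
In base graphics, the maps package can also draw the outline on top of an
existing image.plot (a minimal sketch; lon, lat and precip are hypothetical
stand-ins for the poster's data):

library(fields)               # provides image.plot
library(maps)
image.plot(lon, lat, precip)  # gridded precipitation on lon/lat axes (degrees)
map("usa", add = TRUE)        # overlay the US outline on the current plot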

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of cassie jones
> Sent: Sunday, February 13, 2011 7:15 PM
> To: r-help@r-project.org
> Subject: [R] how to overlay a map of US on image plot
> 
> Dear all,
> 
> I have a data set which has latitude, longitude (not in increasing
> order)
> and observed precipitation corresponding to those locations. I am using
> image.plot command for that. But I want to overlay a map of US on that
> plot.
> I know we can use map('usa') to draw it, but I dont know how can I
> overlay
> it on the plot of the precipitation data. Can anyone please help me
> with it?
> 
> 
> Thanks in advance.
> 
> Cassie.
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] xyplot text sizing

2011-02-14 Thread Greg Snow
Modifying the 4th example on the help page for tkexamp (TeachingDemos package) 
may help with exploring the effects of the different parameters and deciding on 
a set to use.
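
As a concrete starting point, the size of the per-panel strip labels can also
be set directly in the call (a minimal sketch on built-in data):

library(lattice)
xyplot(Sepal.Length ~ Sepal.Width | Species, data = iris,
       par.strip.text = list(cex = 0.7))  # shrink the panel name text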

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Jannis
> Sent: Monday, February 14, 2011 4:48 AM
> To: Nick Torenvliet
> Cc: R-Help
> Subject: Re: [R] xyplot text sizing
> 
> Trellis graphs can be a pain regarding their parameters ;-)
> 
> Try to run trellis.par.get() after you produced the plots and try to
> figure out (by playing around with them) which parameter corresponds to
> your text size (I would guess some of the par.sub.text or par.main.text
> parameters). Use trellis.par.set() to set the according value.
> 
> Good luck with trial and error!
> 
> HTH
> Jannis
> 
> On 02/13/2011 08:59 PM, Nick Torenvliet wrote:
> > Hi all,
> >
> > I'm using
> >
> > xyplot(closingDataXTS)
> >
> > To get a page with any number of seperate plots on it (as many plots
> as
> > columns in closingDataXTS).
> >
> > Each plot is named according to colnames(closingDataXTS).
> >
> > I would like to control the size of the text each plot name appears
> in.
> >
> > I've seen a number of solutions for similar problems that point me in
> the
> > direction of trellis.par.get and trellis.par.set but have been unable
> to put
> > anything together that works.
> >
> > Any ideas for me?
> >
> > Nick
> >
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to order POSIXt objects ?

2011-02-14 Thread Jeff Newmiller
a) POSIXt represents the idea of a datetime. POSIXct is a compact 
representation (number of seconds since 1970-01-01 00:00:00 GMT) of this 
idea. POSIXlt is an inefficient but convenient representation (using 
nine separate components) of this idea. Either POSIXct or POSIXlt may be 
treated as a POSIXt, but you cannot create a POSIXt as such... it has to 
be one of the other two types. strptime() returns a POSIXlt, which is not so 
efficient for sorting and so forth.


b) Because the POSIXct representation is always referenced to GMT, and 
the POSIXlt representation always indicates the timezone and daylight 
savings time, you may find these representations difficult to work with 
for "simple" problems. If your times come from multiple timezones, or 
include the daylight savings time shift (and you need to work with times 
around the autumn changeover) then you have to use POSIXct or POSIXlt. 
Otherwise, you may find that Date or chron classes are easier to work with.


c) Note that a column of data that only includes time should probably be 
converted to "difftime" using as.difftime. Your code is filling in the 
current date on your strptime result, which can get you into trouble 
later if you import other times on other dates and then try to compare 
times you imported previously. There is an as.numeric(z,units="secs") 
that you can use to obtain sortable values on the fly.


tc <- textConnection( "DATE,TIME
18/01/2011,08:00:01
18/01/2011,08:10:01
18/01/2011,08:20:01
18/01/2011,08:30:01
19/01/2011,08:00:01
19/01/2011,08:10:01
19/01/2011,08:20:01
19/01/2011,08:30:01
")
dta <- read.csv( tc, as.is=TRUE )   # as.is: keep the columns as character
close( tc )
dta$DATE <- as.Date( dta$DATE, format="%d/%m/%Y" )   # date-only class
dta$TIME <- as.difftime( dta$TIME )                  # time of day as difftime
dta.sorted <- dta[order( dta$DATE, -as.numeric( dta$TIME ) ), ]  # DATE up, TIME down


JonC wrote:

I have a problem ordering by descending magnitude a POSIXt object. Can
someone help please and let me know how to work around this. My goal is to
be able to order my data by DATE and then by descending TIME.

I have tried to include as much info as possible below. The problem stems
from trying to read in times from a CSV file. I have converted the character
time values to a POSIXt object using the STRPTIME function. I would like
ideally to sort using the order function as below.

test.sort <- order(test$DATE, -test$mytime)

However, when I try this I receive the error as below : 

Error in `-.POSIXt`(test2$mytime) : 
  unary '-' is not defined for "POSIXt" objects


To make this easier to understand I have pasted my example data below with a
list of R commands I have used. Any help or assistance would be appreciated.

  

test2 <- read.csv("C:/Documents and Settings/Jonathan Cooke/My
Documents/Downloads/test2.csv", sep=",")
test2


DATE TIME
1 18/01/2011 08:00:01
2 18/01/2011 08:10:01
3 18/01/2011 08:20:01
4 18/01/2011 08:30:01
5 19/01/2011 08:00:01
6 19/01/2011 08:10:01
7 19/01/2011 08:20:01
8 19/01/2011 08:30:01

  

test2$mytime <- strptime(test2$TIME,"%H:%M:%S")
test2$mytime


[1] "2011-02-14 08:00:01" "2011-02-14 08:10:01" "2011-02-14 08:20:01"
"2011-02-14 08:30:01" "2011-02-14 08:00:01"
[6] "2011-02-14 08:10:01" "2011-02-14 08:20:01" "2011-02-14 08:30:01"

  

test2


DATE TIME  mytime
1 18/01/2011 08:00:01 2011-02-14 08:00:01
2 18/01/2011 08:10:01 2011-02-14 08:10:01
3 18/01/2011 08:20:01 2011-02-14 08:20:01
4 18/01/2011 08:30:01 2011-02-14 08:30:01
5 19/01/2011 08:00:01 2011-02-14 08:00:01
6 19/01/2011 08:10:01 2011-02-14 08:10:01
7 19/01/2011 08:20:01 2011-02-14 08:20:01
8 19/01/2011 08:30:01 2011-02-14 08:30:01

  

test2.sort <- order(test2$DATE, -test2$mytime)

Error in `-.POSIXt`(test2$mytime) : 
  unary '-' is not defined for "POSIXt" objects


It's at this stage where I have got stuck as I'm new to R and don't yet know
a way of getting around this error. Thanks in advance. 


JonC












__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How can I slightly offset plots?

2011-02-14 Thread Greg Snow
Those types of plots can be very hard to read.  A better approach would be to 
look at the lattice package or faceting in the ggplot2 package.
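
For example, a minimal lattice sketch that gives each series its own small
panel instead of offsetting overlaid plots (made-up data):

library(lattice)
d <- data.frame(x = rnorm(300), y = rnorm(300),
                g = gl(3, 100, labels = paste("series", 1:3)))
xyplot(y ~ x | g, data = d)  # one panel per group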

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Carly Huitema
> Sent: Monday, February 14, 2011 5:45 AM
> To: r-help@r-project.org
> Subject: [R] How can I slightly offset plots?
> 
> Dear R help contributers,
> 
> I have several x,y scatter plots and I would like to plot them
> slightly offset, similar to:
> http://www.mathworks.com/matlabcentral/fx_files/24368/2/content/style4.jpg
> 
> I've looked all over for this answer with no luck.
> Just a function name or previous example would probably be enough for
> me to figure it out.
> 
> Thank-you in advance,
> Carly
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] autocorrelation function plots

2011-02-14 Thread Peter Ehlers

In reply to a question from Fangyi He re the confidence limit
lines in an acf plot,

On 2011-02-13 22:06, BSanders wrote:



I believe it's actually 2/sqrt(n)   where n is the sample size...
Interesting..


The question was actually about the default confidence level,
which is 0.95 (see ?plot.acf which Fangyi He would have
discovered had s/he consulted ?acf). Hence, the lines
are at +/- 1.96/sqrt(n). The function that produces them is
presumably simply abline(h = ).

Peter Ehlers
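
A quick illustration of those limits (a minimal sketch):

x <- rnorm(200)
acf(x)  # default ci = 0.95, so the dashed lines sit at +/- 1.96/sqrt(n)
abline(h = c(-1, 1) * qnorm(0.975) / sqrt(length(x)), col = "red", lty = 3)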

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How can I slightly offset plots?

2011-02-14 Thread David Winsemius


On Feb 14, 2011, at 12:19 PM, Greg Snow wrote:

Those types of plots can be very hard to read.  A better approach  
would be to look at the lattice package or faceting in the ggplot2  
package.


This is the lattice example:
http://lmdvr.r-forge.r-project.org/figures/figures.html
http://dsarkar.fhcrc.org/lattice/book/images/Figure_14_03_stdBW.png



--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111



-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
project.org] On Behalf Of Carly Huitema
Sent: Monday, February 14, 2011 5:45 AM
To: r-help@r-project.org
Subject: [R] How can I slightly offset plots?

Dear R help contributers,

I have several x,y scatter plots and I would like to plot them
slightly offset, similar to:
http://www.mathworks.com/matlabcentral/fx_files/24368/2/content/style4.jpg

I've looked all over for this answer with no luck.
Just a function name or previous example would probably be enough for
me to figure it out.

Thank-you in advance,
Carly

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-
guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Return list of data frames.

2011-02-14 Thread Alberto Negron
Hi,

I've got a problem with a function trying to return 2 data frames in a list.
The code is as follow:

function(x) {

# code

MyList<- list("a"=df.a,"b"=df.b)
return(MyList)

}

and I got the following message:

Error in list_to_dataframe(res, attr(.data, "split_labels")) :
  Results must be all atomic, or all data frames

At first instance I thought it may be down that both df have same variable
names so I changed var names in df.a - but I am still getting the same error
then I changed my lIst to  MyList<- list(df.a,df.b)  just to try something
different and still getting the same issue.

Now I wondering if the error is down to the difference in size for each data
frame df.a (r=500,c=8) and df.b(r=1,c=4) - if this is the case then I don'
know how to fix it.

Any ideas?

Sorry, I've got one more question, once I return the list from my function
how can I split out my list to get the original 2 data frames or how can I
access each one independently, i.e. assign them to a data frame again.. if
that makes sense.

I am using 2.12.1 on Win7

Regards,

Alberto

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Error en lm.fit NA/NaN/Inf en llamada a una función externa

2011-02-14 Thread agent dunham

Hello, I am new to R, and I'm trying to fit a linear model. I did the
following and obtained this result; can anybody help? Thanks, 

> logdftodos7925vi <- log(dftodos7925vi)
> logALTURA7925<- log(dftodos7925$ALTURA7917)

> logtodos7925.lm <- lm (logALTURA7925~., data= logdftodos7925vi)
Error en lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) : 
  NA/NaN/Inf en llamada a una función externa (arg 1)
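
(The Spanish-locale error translates as "NA/NaN/Inf in a call to a foreign
function (arg 1)".) A likely cause, an assumption rather than a confirmed
diagnosis, is that log() of zero or negative values produced -Inf or NaN.
A minimal check:

sapply(logdftodos7925vi, function(x) sum(!is.finite(x)))  # bad values per column
range(dftodos7925vi)  # any values <= 0 will break log()
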
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Error-en-lm-fit-NA-NaN-Inf-en-llamada-a-una-funcion-externa-tp3305201p3305201.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] CDF of Sample Quantile

2011-02-14 Thread Bentley Coffey
I need to calculate the probability that a sample quantile will exceed a
threshold given the size of the iid sample and the parameters describing the
distribution of each observation (normal, in my case). I can compute the
probability with brute force simulation: simulate a size N sample, apply R's
quantile() function on it, compare it to the threshold, replicate this MANY
times, and count the number of times the sample quantile exceeded the
threshold (dividing by the total number of replications yields the
probability of interest). The problem is that the number of replications
required to get sufficient precision (3 digits say) is so HUGE that this
takes FOREVER. I have to perform this task so much in my script (searching
over the sample size and repeated for several different distribution
parameters) that it takes too many hours to run.

I've searched for pre-existing code to do this in R and haven't found
anything. Perhaps I'm missing something. Is anyone aware of an R function to
compute this probability?

I've tried writing my own code using the fact that R's quantile() function
is a linear combination of 2 order statistics. Basically, I wrote down the
mathematical form of the joint pdf for the 2 order statistics (a function of
the sample size and the distribution parameters) then performed a
pseudo-Monte Carlo integration (i.e. using Halton Draws rather than R's
random draws) over the region where the sample quantile exceeds the
threshold. In theory, this should work and it takes about 1000 times fewer
clock cycles to compute than the Brute Force approach. My problem is that
there is a significant discrepancy between the results using Brute Force and
using this more efficient approach that I have coded up. I believe that the
problem is numerical error but it could be some programming bug; regardless,
I have been unable to locate the source of this problem and have spent over
20 hours trying to identify it this weekend. Please, somebody help!!!

So, again, my question: is there code in R for quickly evaluating the CDF of
a Sample Quantile given the sample size and the parameters governing the
distribution of each iid point in the sample?

Grateful for any help,

Bentley
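
For a single order statistic the CDF is actually available in closed form via
the binomial identity P(X_(k) <= t) = P(at least k of n observations <= t),
which avoids simulation entirely. A minimal sketch for iid normal data; note
that R's default quantile() interpolates between two order statistics, so this
is exact only for a single order statistic:

porder <- function(t, k, n, mu = 0, sigma = 1) {
  p <- pnorm(t, mean = mu, sd = sigma)   # P(one observation <= t)
  1 - pbinom(k - 1, size = n, prob = p)  # P(Binomial(n, p) >= k) = P(X_(k) <= t)
}
1 - porder(1.5, k = 90, n = 100)  # P(90th order statistic of n = 100 exceeds 1.5)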

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to order POSIXt objects ?

2011-02-14 Thread Mike Marchywka






> Date: Mon, 14 Feb 2011 00:55:12 -0800
> From: jon_d_co...@yahoo.co.uk
> To: r-help@r-project.org
> Subject: [R] how to order POSIXt objects ?
>
>
> I have a problem ordering by descending magnitude a POSIXt object. Can
> someone help please and let me know how to work around this. My goal is to
> be able to order my data by DATE and then by descending TIME.
>
> I have tried to include as much info as possible below. The problem stems
> from trying to read in times from a CSV file. I have converted the character
> time values to a POSIXt object using the STRPTIME function. I would like
> ideally to sort using the order function as below.


I have been using the approach I stuck with elsewhere ,
convert them to numerics and sort that way. Not sure if
there is a better way but I'm used to long ints of "time since
jan 1 1970 " etc. 
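
A minimal sketch of that route; note strptime() returns POSIXlt, so convert to
POSIXct before taking seconds:

t.num <- as.numeric(as.POSIXct(test2$mytime))     # seconds since 1970-01-01
test2.sort <- test2[order(test2$DATE, -t.num), ]  # DATE ascending, TIME descending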

>
> test.sort <- order(test$DATE, -test$mytime)
>
> However, when I try this I receive the error as below :
>
> Error in `-.POSIXt`(test2$mytime) :
> unary '-' is not defined for "POSIXt" objects
>
> To make this easier to understand I have pasted my example data below with a
> list of R commands I have used. Any help or assistance would be appreciated.
>
> > test2 <- read.csv("C:/Documents and Settings/Jonathan Cooke/My
> > Documents/Downloads/test2.csv", sep=",")
> > test2
> DATE TIME
> 1 18/01/2011 08:00:01
> 2 18/01/2011 08:10:01
> 3 18/01/2011 08:20:01
> 4 18/01/2011 08:30:01
> 5 19/01/2011 08:00:01
> 6 19/01/2011 08:10:01
> 7 19/01/2011 08:20:01
> 8 19/01/2011 08:30:01
>
> > test2$mytime <- strptime(test2$TIME,"%H:%M:%S")
> > test2$mytime
> [1] "2011-02-14 08:00:01" "2011-02-14 08:10:01" "2011-02-14 08:20:01"
> "2011-02-14 08:30:01" "2011-02-14 08:00:01"
> [6] "2011-02-14 08:10:01" "2011-02-14 08:20:01" "2011-02-14 08:30:01"
>
> > test2
> DATE TIME mytime
> 1 18/01/2011 08:00:01 2011-02-14 08:00:01
> 2 18/01/2011 08:10:01 2011-02-14 08:10:01
> 3 18/01/2011 08:20:01 2011-02-14 08:20:01
> 4 18/01/2011 08:30:01 2011-02-14 08:30:01
> 5 19/01/2011 08:00:01 2011-02-14 08:00:01
> 6 19/01/2011 08:10:01 2011-02-14 08:10:01
> 7 19/01/2011 08:20:01 2011-02-14 08:20:01
> 8 19/01/2011 08:30:01 2011-02-14 08:30:01
>
> > test2.sort <- order(test2$DATE, -test2$mytime)
> Error in `-.POSIXt`(test2$mytime) :
> unary '-' is not defined for "POSIXt" objects
>
> It's at this stage where I have got stuck as I'm new to R and don't yet know
> a way of getting around this error. Thanks in advance.
>
> JonC
>
>
>
>
>
>
>
>
>
> --
> View this message in context: 
> http://r.789695.n4.nabble.com/how-to-order-POSIXt-objects-tp3304609p3304609.html
> Sent from the R help mailing list archive at Nabble.com.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] RCytoscape setPosition error

2011-02-14 Thread Paul Shannon
Hi Fahim,

There is an easily resolved omission in the sample code you sent.  
 
Please note that the 3rd line below, 'displayGraph', does not appear in 
your email.  This method transmits your graph from R to Cytoscape, creating a 
Cytoscape network.  

  cw <- CytoscapeWindow ('smallExample', graph=RCytoscape::makeSimpleGraph())
  displayGraph (cw)  # this command sends your R graph to Cytoscape.  I 
believe you left this out...
  layout (cw, 'jgraph-spring')
  redraw (cw)
  setPosition (cw, 'A', 50, 80)

If you want to place 'A' directly between 'B' and 'C', try this:

  pos.b <- getPosition (cw, 'B')[[1]]# where is B?
  pos.c <- getPosition (cw, 'C')[[1]]# where is c?
  new.x <- (pos.b$x + pos.c$x)/2 # calculate the x coordinate for A
  new.y <- (pos.b$y + pos.c$y)/2 # and the y
  setPosition (cw, 'A', new.x, new.y)# place A mid-way between B and C

Please try these steps, ensure that they work for you, then we can then look at 
the next spot where trouble arises for you.

Also, please attach your sessionInfo.  It may be wise to update your code to 
the latest version of the package.

Cheers!

- Paul

On Feb 14, 2011, at 5:25 AM, Martin Morgan wrote:

> On 02/13/2011 11:28 AM, Fahim M wrote:
>> Hi
>> Can some one please point out where i am wrong.
>> 
>> I am trying to position set of nodes column-wise in cytoscape using
>> RCytoscape
>> AD
>> BE
>> CF
> 
> Hi Fahim -- please ask questions about Bioconductor packages on the
> Bioconductor mailing list
> 
> http://bioconductor.org/help/mailing-list/
> 
> and include packageMaintainer('RCytoscape') in the post.
> 
> Martin
> 
>> 
>> ---
>> 
>> g <- new ('graphNEL', edgemode='undirected')
>> cw <- CytoscapeWindow ('smallExample', graph=RCytoscape::makeSimpleGraph())
>> layout (cw, 'jgraph-spring')
>> redraw(cw)
>> 
>> nodesFr = c('A', 'B', 'C')
>> nodesTo =c('D', 'E', 'F')
>> nodesAll = union(nodesFr, nodesTo)
>> 
>> nElemFr = length(nodesFr)
>> nElemTo = length(nodesTo)
>> 
>> g <- graph::addNode(nodesAll, g)
>> 
>> setPosition(cw, nodesFr , c(400, 400, 400), c(100, 200, 300))
>> setPosition(cw, nodesTo , c(600, 600, 600), c(100, 200, 300))
>> Error in convertToR(xmlParse(node, asText = TRUE)) :
>>  faultCode: 0 faultString: Failed to invoke method setNodesPositions in
>> class tudelft.CytoscapeRPC.CytoscapeRPCCallHandler: null
>> 
>> setPosition(cw, nodesTo , c(600, 600, 600), c(100, 200, 300))
>> Error in convertToR(xmlParse(node, asText = TRUE)) :
>>  faultCode: 1001 faultString: Node with id: D could not be found.
>> 
>> Thanks
>> --Fahim
>> 
>>  [[alternative HTML version deleted]]
>> 
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
> 
> 
> -- 
> Computational Biology
> Fred Hutchinson Cancer Research Center
> 1100 Fairview Ave. N. PO Box 19024 Seattle, WA 98109
> 
> Location: M1-B861
> Telephone: 206 667-2793

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Spatstat - envelope, Thomas process and Cramer-von Mises

2011-02-14 Thread Jeff Fang
Hi all,

I am using "spatstat" to investigate the spatial structure of some plant
populations, and I want to detect these patterns with an IPP and a Thomas
process based on the pair-correlation function. I know the function "pcfinhom"
is available to characterize the IPP, but I have no idea how to use the pcf
with a Thomas process. Additionally, generating simulation envelopes using
these two null models is another problem for me. Cramer-von Mises (CvM) tests
can assess the curve-wise significance of deviations from a null hypothesis,
but are there any functions in the spatstat package that can do this work?

I am not very familiar with R, so my problem might be simple. I hope anyone
with experience of this kind of work can give me some assistance.

Many thanks


Jeff
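
One possible route, sketched minimally (the exact argument forms are
assumptions; check ?kppm and ?envelope in your spatstat version):

library(spatstat)
X   <- rThomas(kappa = 10, sigma = 0.05, mu = 5)  # example clustered pattern
fit <- kppm(X, ~1, "Thomas")                      # fit a Thomas cluster process
E   <- envelope(fit, fun = pcf, nsim = 99)        # pcf envelopes under the fit
plot(E)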

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] sem problem - did not converge

2011-02-14 Thread Felipe Bhering

Can someone help me? I am fitting a confirmatory factor analysis model, and I
have tried several things but it never converges.

# Model
library(sem)
dados40.cov <- cov(dados40,method="spearman")
model.dados40 <- specify.model()
F1 ->  Item11, lam11, NA
F1 ->  Item31, lam31, NA
F1 ->  Item36, lam36, NA
F1 ->  Item54, lam54, NA
F1 ->  Item63, lam63, NA
F1 ->  Item65, lam55, NA
F1 ->  Item67, lam67, NA
F1 ->  Item69, lam69, NA
F1 ->  Item73, lam73, NA
F1 ->  Item75, lam75, NA
F1 ->  Item76, lam76, NA
F1 ->  Item78, lam78, NA
F1 ->  Item79, lam79, NA
F1 ->  Item80, lam80, NA
F1 ->  Item83, lam83, NA
F2 ->  Item12, lam12, NA
F2 ->  Item32, lam32, NA
F2 ->  Item42, lam42, NA
F2 ->  Item47, lam47, NA
F2 ->  Item64, lam64, NA
F2 ->  Item66, lam66, NA
F2 ->  Item68, lam68, NA
F2 ->  Item74, lam74, NA
F3 ->  Item3, lam3, NA
F3 ->  Item8, lam8, NA
F3 ->  Item18, lam18, NA
F3 ->  Item23, lam23, NA
F3 ->  Item28, lam28, NA
F3 ->  Item33, lam33, NA
F3 ->  Item38, lam38, NA
F3 ->  Item43, lam43, NA
F4 ->  Item9, lam9, NA
F4 ->  Item39, lam39, NA
F5 ->  Item5, lam5, NA
F5 ->  Item10, lam10, NA
F5 ->  Item20, lam20, NA
F5 ->  Item25, lam25, NA
F5 ->  Item30, lam30, NA
F5 ->  Item35, lam35, NA
F5 ->  Item45, lam45, NA
Item3 <-> Item3, e3,   NA
Item5 <-> Item5, e5,   NA
Item8 <-> Item8, e8,   NA
Item9 <-> Item9, e9,   NA
Item10 <-> Item10, e10,   NA
Item11 <-> Item11, e11,   NA
Item12 <-> Item12, e12,   NA
Item18 <-> Item18, e18,   NA
Item20 <-> Item20, e20,   NA
Item23 <-> Item23, e23,   NA
Item25 <-> Item25, e25,   NA
Item28 <-> Item28, e28,   NA
Item30 <-> Item30, e30,   NA
Item31 <-> Item31, e31,   NA
Item32 <-> Item32, e32,   NA
Item33 <-> Item33, e33,   NA
Item35 <-> Item35, e35,   NA
Item36 <-> Item36, e36,   NA
Item38 <-> Item38, e38,   NA
Item39 <-> Item39, e39,   NA
Item42 <-> Item42, e42,   NA
Item43 <-> Item43, e43,   NA
Item45 <-> Item45, e45,   NA
Item47 <-> Item47, e47,   NA
Item54 <-> Item54, e54,   NA
Item63 <-> Item63, e63,   NA
Item64 <-> Item64, e64,   NA
Item65 <-> Item65, e65,   NA
Item66 <-> Item66, e66,   NA
Item67 <-> Item67, e67,   NA
Item68 <-> Item68, e68,   NA
Item69 <-> Item69, e69,   NA
Item73 <-> Item73, e73,   NA
Item74 <-> Item74, e74,   NA
Item75 <-> Item75, e75,   NA
Item76 <-> Item76, e76,   NA
Item78 <-> Item78, e78,   NA
Item79 <-> Item79, e79,   NA
Item80 <-> Item80, e80,   NA
Item83 <-> Item83, e83,   NA
F1 <-> F1, NA,1
F2 <-> F2, NA,1
F3 <-> F3, NA,1
F4 <-> F4, NA,1
F5 <-> F5, NA,1
F1 <-> F2, F1F2, NA
F1 <-> F3, F1F3, NA
F1 <-> F4, F1F4, NA
F1 <-> F5, F1F5, NA
F2 <-> F3, F2F3, NA
F2 <-> F4, F2F4, NA
F2 <-> F5, F2F5, NA
F3 <-> F4, F3F4, NA
F3 <-> F5, F3F5, NA
F4 <-> F5, F4F5, NA


### I tried several correlations, such as hetcor and polychor from the polycor
library


hcor <- function(data) hetcor(data, std.err=FALSE)$correlations
hetdados40=hcor(dados40)


dados40.sem <- sem(model.dados40, dados40.cov, nrow(dados40))
Warning message:
In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
vars,  :
  Could not compute QR decomposition of Hessian.
Optimization probably did not converge.

#

The same happens if I put hetdados40 in place of dados40.cov;
of course hetdados40 has 1s on the diagonal, but no 0s.

What should I do? I have tried several things...

All the eigenvalues are positive:

#

> eigen(hetdados40)$values
 [1] 14.7231030  4.3807378  1.6271780  1.4000193  1.0670784  1.0217670
 [7]  0.8792466  0.8103790  0.7397817  0.7279262  0.6909955  0.6589746
[13]  0.6237204  0.6055884  0.550  0.5712017  0.5469284  0.5215437
[19]  0.5073809  0.4892339  0.4644124  0.4485545  0.4372404  0.4290573
[25]  0.4270672  0.4071262  0.3947753  0.3763811  0.3680527  0.3560231
[31]  0.3537934  0.3402836  0.3108977  0.3099143  0.2819351  0.2645035
[37]  0.2548654  0.2077900  0.2043732  0.1923942
> eigen(dados40.cov)$values
 [1] 884020.98 337855.95 138823.30 126291.58  87915.21  79207.04  73442.71
 [8]  68388.11  60625.26  58356.54  55934.05  54024.00  50505.10  48680.26
[15]  46836.47  45151.23  43213.65  41465.42  40449.59  37824.73  37622.43
[22]  36344.34  35794.22  33959.29  33552.64  32189.94  31304.44  30594.85
[29]  30077.32  29362.66  26928.12  26526.72  26046.47  24264.50  23213.18
[36]  21503.97  20312.55  18710.97  17093.24  14372.21

#


There are no missing data, and with 40 variables and 1004 subjects the
number of variables should not be a problem either!
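
One standard thing to try, an assumption rather than a verified fix: the raw
covariances above are huge (around 1e5), which can defeat the optimizer, so
rescale the items first and refit:

dados40.sc  <- as.data.frame(scale(dados40))  # mean 0, sd 1 for every item
dados40.sem <- sem(model.dados40, cov(dados40.sc), nrow(dados40))
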
-- 
View this message in context: 
http://r.789695.n4.nabble.com/sem-problem-did-not-converge-tp3305200p3305200.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] censReg or tobit: testing for assumptions in R?

2011-02-14 Thread E Hofstadler

Thanks for your response!

Terry Therneau wrote:

  I assume you mean "survreg" in the survival package.  It's a shame
that censored gaussian regression has earned a unique label (tobit) that
makes people think it is something so very different.


(yes, what I meant was the function tobit() from the AER package, which 
however interfaces the survreg function from the survival package:

http://rgm2.lab.nig.ac.jp/RGM2/func.php?rd_id=AER:tobit)


My hopefully not too silly question is this: I understand that

heteroskedasticity and nonnormal errors are even more serious problems
in a censored regression than in an ols-regression. But I'm not sure
how to test for these assumptions in R? Is there a way to get to the
residuals of censored regression models (given that corresponding
functions for lm, such as rstandard, are not applicable)?


Actually it can be less of a problem, since the really large outliers
often turn into censored observations, limiting their leverage. 
For examination, there are good ideas in the article referenced by
?residuals.survreg.


Many thanks for this hint, I will definitely check it out.
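
A minimal sketch of pulling such residuals (the model and data are
hypothetical):

library(AER)                            # tobit(), a wrapper for survival::survreg
fit <- tobit(y ~ x1 + x2, data = dat)   # hypothetical censored regression
r <- residuals(fit, type = "deviance")  # deviance residuals for diagnostics
qqnorm(r); qqline(r)                    # rough check of the normality assumption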

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Parzen estimator of fractional degree of difference, d

2011-02-14 Thread Fologo Dubois
Does R have a function for Parzen degree of differencing estimator? I am 
referring to the non-parametric kernel density estimator set forth by Parzen in 
Parzen (1983)

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] txtProgressBar examples?

2011-02-14 Thread Scott Chamberlain
Dear R users, 


I am curious whether someone could direct me towards websites/tutorials on 
using progress bars (txtProgressBar especially) in R. I can't seem to figure 
them out. 


Thanks very much, Scott 
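
For the archive, a minimal working example with the base utils functions:

pb <- txtProgressBar(min = 0, max = 100, style = 3)  # style 3: percentage bar
for (i in 1:100) {
  Sys.sleep(0.01)           # stand-in for real work
  setTxtProgressBar(pb, i)  # advance the bar to step i
}
close(pb)
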
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R not working after loading lattice

2011-02-14 Thread Rita Carreira

Hello!
Happy Valentine's Day...

After I loaded the package lattice in R, it did not work. Then when I closed R 
and restarted it, R started with an error message and here is the feedback that 
I got from the R Console:

Loading required package: Hmisc
Loading required package: survival
Loading required package: stats
Loading required package: graphics
Loading required package: splines
Error in loadNamespace(i[[1L]], c(lib.loc, .libPaths())) : 
  there is no package called 'lattice'
Error: package 'Hmisc' could not be loaded

I cannot do anything in R now; it does not even read the data in. When I try 
it, it gives me the following error:

> source(.trPaths[5], echo=TRUE, max.deparse.length=150)
Error in source(.trPaths[5], echo = TRUE, max.deparse.length = 150) : 
  object '.trPaths' not found

What is the solution? Should I just reinstall R and make sure I do not load 
lattice?

Thank you all!
Rita
P.S. I am very new to R and I still have trouble understanding some of the 
language and how it all works. I've been using it for about 1.5 weeks.
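
For the archive: the error suggests that the lattice installation is broken or
missing rather than R itself. A first thing to try, an assumption rather than
a confirmed fix:

install.packages("lattice")  # reinstall the package into your library
library(lattice)             # then try loading it again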


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analyzing dissimilarity ratings with Multidimensional Scaling

2011-02-14 Thread Luca Turchet
Dear Mike,
thanks a lot for your answer. Unfortunately the way you kindly suggested is
not suitable
to solve my problem.

Indeed, as I said, I need to display the distances between the proposed trials;
for example, I would like to see a 2D plot showing how far the trial MT-MT is
from the trial MT-SW (that is, how far the evaluations of the audio-visual
trial with metal-metal are from those of the trial with metal-snow).

Does anyone have any other suggestion for building such a perceptual map in R?

In addition, how can I get some information regarding the significant
differences between the trials?
If I used ANOVA it would be simple to get those p-values; I wonder whether it
is appropriate to use it here...

Help!


Luca
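
For the archive, a minimal classical-MDS sketch with cmdscale(), assuming the
6x6 dissimilarity matrix xm discussed in the thread (the labels are
placeholders):

d   <- as.dist(matrix(xm, 6, 6))
fit <- cmdscale(d, k = 2)  # classical (metric) MDS in 2 dimensions
plot(fit, type = "n", xlab = "Dim 1", ylab = "Dim 2")
text(fit, labels = paste("trial", 1:6))  # replace with e.g. MT-MT, MT-SW, ...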




On Mon, Feb 14, 2011 at 3:23 PM, Mike Marchywka wrote:

>
>
>
> > My first goal in the analysis process is to print a perceptual map where
> to
> > place the pairs of
> > audio-visual stimuli (e.g. WD-WD, MT-DL, etc.) and see how far the trials
> > are from each other.
>
> I've been using heatmap for stuff like this.
> You can get a nice picture this way and get quick visual
> survey and dendrograms,
>
> xm<-scan(file="avm.txt")
> str(xm)
> heatmap(xm)
> heatmap(matrix(xm,6,6))
>
> I ran the above on your data and it visually looks
> like there could be interesting patterns to test.
>
>
>
>
>
>




-- 
---

"Music is a moral law:
It gives a soul to the Universe,
wings to the mind,
flight to the imagination,
a charm to sadness,
and life to everything.
It is the essence of order,
and leads to all that is good,
just and beautiful,
of which it is the invisible,
but nevertheless dazzling,
passionate, and eternal form".

Plato, 400 B.C. (from the Dialogues)

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to group data by day

2011-02-14 Thread Michela Ferron
Hi everybody,

I'm a beginner in R and I'm having a hard time grouping my data by day.
The data are in this format:

id; words
2005-07-07T09:59:56Z; 35
2005-07-07T10:01:39Z; 13
2005-07-08T10:02:22Z; 1
2005-07-09T10:03:16Z; 23
2005-07-10T10:04:23Z; 39
2005-07-10T10:04:39Z; 15

I've transformed the date strings into dates with the function:
london$id <- transform(london$id, as.Date(london$id, 
format="%Y-%m-%d%T%H:%M:%S%Z"))
and it seems to work.

Now I would like to add a new "day" variable to group data by day, like this:

id; words; day
2005-07-07T09:59:56Z; 35; 1
2005-07-07T10:01:39Z; 13; 1
2005-07-08T10:02:22Z; 1; 2
2005-07-09T10:03:16Z; 23; 3
2005-07-10T10:04:23Z; 39; 4
2005-07-10T10:04:39Z; 15; 4

How can I do that?

Many thanks!

Michela
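
For the archive, a minimal sketch of one way to do this (the parsing format is
an assumption based on the timestamps shown):

london <- data.frame(id = c("2005-07-07T09:59:56Z", "2005-07-07T10:01:39Z",
                            "2005-07-08T10:02:22Z", "2005-07-09T10:03:16Z",
                            "2005-07-10T10:04:23Z", "2005-07-10T10:04:39Z"),
                     words = c(35, 13, 1, 23, 39, 15),
                     stringsAsFactors = FALSE)
london$date <- as.Date(london$id, format = "%Y-%m-%dT%H:%M:%SZ")
london$day  <- as.integer(factor(london$date))  # 1, 1, 2, 3, 4, 4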

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Transforming relational data

2011-02-14 Thread mathijsdevaan

Hi,

I have a large dataset with info on individuals (B) that have been involved
in projects (A) during multiple years (C). The dataset contains three
columns: A, B, C. Example:
   
   A  B  C
1 1  a  1999
2 1  b  1999
3 1  c  1999
4 1  d  1999
5 2  c  2001
6 2  d  2001
7 3  a  2004
8 3  c  2004
9 3  d  2004

I am interested in how well all the individuals in a project know each
other. To calculate this team familiarity measure I want to sum the
familiarity between all individual pairs in a team. The familiarity between
each individual pair in a team is calculated as the summation of each pair's
prior co-appearance in a project divided by the total number of team
members. So the team familiarity in project 3 = (1/4+1/4) + (1/4+1/4+1/2) +
(1/4+1/4+1/2) = 2.5: a has been in project 1 (of size 4) with c and d ->
1/4+1/4; c has been in project 1 (of size 4) with a and d -> 1/4+1/4; and c
has been in project 2 (of size 2) with d -> 1/2.

I think that the best way to do it is to transform the data into an edgelist
(each pair in one row/two columns) and then creating two additional columns
for the strength of the familiarity and the year of the project in which the
pair was active. The problem is that I am stuck already in the first step.
So the question is: how do I go from the current data structure to a list of
projects and the familiarity of its team members?

Your help is very much appreciated. Thanks!
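
For the archive, a minimal sketch of the edge-list step only (an assumption
about one possible approach, untested against the full data):

df <- data.frame(A = c(1,1,1,1,2,2,3,3,3),
                 B = c("a","b","c","d","c","d","a","c","d"),
                 C = c(1999,1999,1999,1999,2001,2001,2004,2004,2004),
                 stringsAsFactors = FALSE)
edges <- do.call(rbind, lapply(split(df, df$A), function(p)
  data.frame(t(combn(sort(p$B), 2)),  # all member pairs within the project
             year = p$C[1],
             weight = 1 / nrow(p),    # this pair's familiarity contribution
             stringsAsFactors = FALSE)))
names(edges)[1:2] <- c("i", "j")
# summing weight over each pair's earlier-year rows then gives the
# familiarity inputs described above
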
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Transforming-relational-data-tp3305398p3305398.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Analyzing dissimilarity ratings with Multidimensional Scaling

2011-02-14 Thread Mike Marchywka

> Date: Mon, 14 Feb 2011 17:55:50 +0100
> Subject: Re: [R] Analyzing dissimilarity ratings with Multidimensional
> Scaling
> From: turchet.l...@gmail.com
> To: marchy...@hotmail.com
> CC: r-help@r-project.org
>
> Dear Mike,
> thanks a lot for your answer. Unfortunately the way you kindly
> suggested is not suitable
> to solve my problem.
>
> Indeed as said, I need to display the distances between the proposed trials,
> for example I would like to see a 2D plot where I can see how far the
> trial MT-MT is from the trial MT-SW (that is, how far the evaluations of the
> audio-visual trial with metal-metal are from those of the trial with metal-snow)

I thought the dendrogram was what you wanted, but perhaps you can just
use daisy instead. The last example I sent had bad edits and may not have
run, but the code below does create a scatterplot with zero points on the
diagonal etc. Presumably you can define your own functions appropriate for
your data (I've never used some of this before, which is why I'm trying it;
caveat emptor).

library("cluster")
library("scatterplot3d")
ddm <- daisy(matrix(xm, 6, 6), metric = "euclidean")   # xm from the earlier scan()
str(ddm)
dasm <- as.matrix(ddm)
# long format: one row per cell, with row (id), column (col) and value (dist)
nddf <- reshape(data.frame(dasm), varying = c("X1","X2","X3","X4","X5","X6"),
                v.names = "dist", timevar = "col", times = 1:6,
                direction = "long")
str(nddf)
scatterplot3d(nddf$col, nddf$id, nddf$dist, type = "h")   # zeros on the diagonal
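
If a flat 2D perceptual map is the goal, classical MDS in the stats package
may be closer to what you want; a sketch on the same matrix (the trial labels
here are placeholders):

d2 <- as.dist(matrix(xm, 6, 6))   # treat the 6x6 values as distances
fit <- cmdscale(d2, k = 2)        # classical metric MDS
plot(fit, type = "n", xlab = "Dim 1", ylab = "Dim 2")
text(fit, labels = c("MT-MT", "MT-SW", "MT-DL", "WD-WD", "SW-SW", "DL-DL"))

isoMDS() in package MASS does the non-metric version if the ratings are only
ordinal.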




>
> Does anyone have any other suggestion to build such perceptual map in R?
>
> In addition, how can I get some information regarding the significant
> differences
> between the trials?
> If I used the ANOVA would be simple to get those p-values. I wonder if
> it is the
> case to use it...
>
> Help!
>
>
> Luca
>
>
>
>
> On Mon, Feb 14, 2011 at 3:23 PM, Mike Marchywka
> > wrote:
>
>
>
> > My first goal in the analysis process is to print a perceptual map where to
> > place the pairs of
> > audio-visual stimuli (e.g. WD-WD, MT-DL, etc.) and see how far the trials
> > are from each other.
>
> I've been using heatmap for stuff like this.
> You can get a nice picture this way and get quick visual
> survey and dendrograms,
>
> xm<-scan(file="avm.txt")
> str(xm)
> heatmap(xm)
> heatmap(matrix(xm,6,6))
>
> I ran the above on your data and it visually looks
> like there could be interesting patterns to test.
>
>
>
>
>
>
>
>
>
> --
> ---
>
> "Music is a moral law:
> It gives a soul to the Universe,
> wings to the mind,
> flight to the imagination,
> a charm to sadness,
> and life to everything.
> It is the essence of order,
> and leads to all that is good,
> just and beautiful,
> of which it is the invisible,
> but nevertheless dazzling,
> passionate, and eternal form".
>
> Plato, 400 B.C. (from the Dialogues)
>
  
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] CDF of Sample Quantile

2011-02-14 Thread Jonathan P Daily
If I understand this, you have a value x, or a vector of values x, and you
want to know the value of the normal CDF at x?

I assume you are drawing from rnorm for your simulations, so look at the 
other functions listed when you ?rnorm.

HTH
--
Jonathan P. Daily
Technician - USGS Leetown Science Center
11649 Leetown Road
Kearneysville WV, 25430
(304) 724-4480
"Is the room still a room when its empty? Does the room,
 the thing itself have purpose? Or do we, what's the word... imbue it."
 - Jubal Early, Firefly

r-help-boun...@r-project.org wrote on 02/14/2011 09:58:09 AM:

> [image removed] 
> 
> [R] CDF of Sample Quantile
> 
> Bentley Coffey 
> 
> to:
> 
> r-help
> 
> 02/14/2011 01:58 PM
> 
> Sent by:
> 
> r-help-boun...@r-project.org
> 
> I need to calculate the probability that a sample quantile will exceed a
> threshold given the size of the iid sample and the parameters describing 
the
> distribution of each observation (normal, in my case). I can compute the
> probability with brute force simulation: simulate a size N sample, apply 
R's
> quantile() function on it, compare it to the threshold, replicate this 
MANY
> times, and count the number of times the sample quantile exceeded the
> threshold (dividing by the total number of replications yields the
> probability of interest). The problem is that the number of replications
> required to get sufficient precision (3 digits say) is so HUGE that this
> takes FOREVER. I have to perform this task so much in my script 
(searching
> over the sample size and repeated for several different distribution
> parameters) that it takes too many hours to run.
> 
> I've searched for pre-existing code to do this in R and haven't found
> anything. Perhaps I'm missing something. Is anyone aware of an R 
function to
> compute this probability?
> 
> I've tried writing my own code using the fact that R's quantile() 
function
> is a linear combination of 2 order statistics. Basically, I wrote down 
the
> mathematical form of the joint pdf for the 2 order statistics (a 
function of
> the sample size and the distribution parameters) then performed a
> pseudo-Monte Carlo integration (i.e. using Halton Draws rather than R's
> random draws) over the region where the sample quantile exceeds the
> threshold. In theory, this should work and it takes about 1000 times 
fewer
> clock cycles to compute than the Brute Force approach. My problem is 
that
> there is a significant discrepancy between the results using Brute Force 
and
> using this more efficient approach that I have coded up. I believe that 
the
> problem is numerical error but it could be some programming bug; 
regardless,
> I have been unable to locate the source of this problem and have spent 
over
> 20 hours trying to identify it this weekend. Please, somebody help!!!
> 
> So, again, my question: is there code in R for quickly evaluating the 
CDF of
> a Sample Quantile given the sample size and the parameters governing the
> distribution of each iid point in the sample?
> 
> Grateful for any help,
> 
> Bentley
> 
>[[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide 
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sem problem - did not converge

2011-02-14 Thread Jeremy Miles
You have a fairly large and complex model there.  This sort of model
(almost) always causes problems.

I would try fitting one factor at a time.  That might help you to
narrow down the problem.  If one factor doesn't converge, the whole
model won't converge.

You might also consider joining the structural equation modeling list
- semnet.  This isn't really a sem (the package) or R problem, it's a
more general SEM (the approach) problem.

Jeremy






Chen, F., Bollen, K., Paxton, P., Curran, P.J., & Kirby, J. (2001).
Improper solutions in structural equation models: Causes,
consequences, and strategies. Sociological Methods and Research, 29,
468-508.

On 14 February 2011 07:34, Felipe Bhering  wrote:
>
> Can someone help me? I have tried several things and it never converges.
>
> # Model
> library(sem)
> dados40.cov <- cov(dados40,method="spearman")
> model.dados40 <- specify.model()
> F1 ->  Item11, lam11, NA
> F1 ->  Item31, lam31, NA
> F1 ->  Item36, lam36, NA
> F1 ->  Item54, lam54, NA
> F1 ->  Item63, lam63, NA
> F1 ->  Item65, lam55, NA
> F1 ->  Item67, lam67, NA
> F1 ->  Item69, lam69, NA
> F1 ->  Item73, lam73, NA
> F1 ->  Item75, lam75, NA
> F1 ->  Item76, lam76, NA
> F1 ->  Item78, lam78, NA
> F1 ->  Item79, lam79, NA
> F1 ->  Item80, lam80, NA
> F1 ->  Item83, lam83, NA
> F2 ->  Item12, lam12, NA
> F2 ->  Item32, lam32, NA
> F2 ->  Item42, lam42, NA
> F2 ->  Item47, lam47, NA
> F2 ->  Item64, lam64, NA
> F2 ->  Item66, lam66, NA
> F2 ->  Item68, lam68, NA
> F2 ->  Item74, lam74, NA
> F3 ->  Item3, lam3, NA
> F3 ->  Item8, lam8, NA
> F3 ->  Item18, lam18, NA
> F3 ->  Item23, lam23, NA
> F3 ->  Item28, lam28, NA
> F3 ->  Item33, lam33, NA
> F3 ->  Item38, lam38, NA
> F3 ->  Item43, lam43, NA
> F4 ->  Item9, lam9, NA
> F4 ->  Item39, lam39, NA
> F5 ->  Item5, lam5, NA
> F5 ->  Item10, lam10, NA
> F5 ->  Item20, lam20, NA
> F5 ->  Item25, lam25, NA
> F5 ->  Item30, lam30, NA
> F5 ->  Item35, lam35, NA
> F5 ->  Item45, lam45, NA
> Item3 <-> Item3, e3,   NA
> Item5 <-> Item5, e5,   NA
> Item8 <-> Item8, e8,   NA
> Item9 <-> Item9, e9,   NA
> Item10 <-> Item10, e10,   NA
> Item11 <-> Item11, e11,   NA
> Item12 <-> Item12, e12,   NA
> Item18 <-> Item18, e18,   NA
> Item20 <-> Item20, e20,   NA
> Item23 <-> Item23, e23,   NA
> Item25 <-> Item25, e25,   NA
> Item28 <-> Item28, e28,   NA
> Item30 <-> Item30, e30,   NA
> Item31 <-> Item31, e31,   NA
> Item32 <-> Item32, e32,   NA
> Item33 <-> Item33, e33,   NA
> Item35 <-> Item35, e35,   NA
> Item36 <-> Item36, e36,   NA
> Item38 <-> Item38, e38,   NA
> Item39 <-> Item39, e39,   NA
> Item42 <-> Item42, e42,   NA
> Item43 <-> Item43, e43,   NA
> Item45 <-> Item45, e45,   NA
> Item47 <-> Item47, e47,   NA
> Item54 <-> Item54, e54,   NA
> Item63 <-> Item63, e63,   NA
> Item64 <-> Item64, e64,   NA
> Item65 <-> Item65, e65,   NA
> Item66 <-> Item66, e66,   NA
> Item67 <-> Item67, e67,   NA
> Item68 <-> Item68, e68,   NA
> Item69 <-> Item69, e69,   NA
> Item73 <-> Item73, e73,   NA
> Item74 <-> Item74, e74,   NA
> Item75 <-> Item75, e75,   NA
> Item76 <-> Item76, e76,   NA
> Item78 <-> Item78, e78,   NA
> Item79 <-> Item79, e79,   NA
> Item80 <-> Item80, e80,   NA
> Item83 <-> Item83, e83,   NA
> F1 <-> F1, NA,    1
> F2 <-> F2, NA,    1
> F3 <-> F3, NA,    1
> F4 <-> F4, NA,    1
> F5 <-> F5, NA,    1
> F1 <-> F2, F1F2, NA
> F1 <-> F3, F1F3, NA
> F1 <-> F4, F1F4, NA
> F1 <-> F5, F1F5, NA
> F2 <-> F3, F2F3, NA
> F2 <-> F4, F2F4, NA
> F2 <-> F5, F2F5, NA
> F3 <-> F4, F3F4, NA
> F3 <-> F5, F3F5, NA
> F4 <-> F5, F4F5, NA
>
>
> ### I tried several correlations, such as hetcor and polychor from the
> polycor library
>
>
> hcor <- function(data) hetcor(data, std.err=FALSE)$correlations
> hetdados40=hcor(dados40)
>
>
> dados40.sem <- sem(model.dados40, dados40.cov, nrow(dados40))
> Warning message:
> In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
> vars,  :
>  Could not compute QR decomposition of Hessian.
> Optimization probably did not converge.
>
> #
>
> The same happens if I put hetdados40 in place of dados40.cov;
> of course hetdados40 has 1s on the diagonal, but no 0s.
>
> What should I do? I tried several things...
>
> All values positive..
>
> #
>
>> eigen(hetdados40)$values
>  [1] 14.7231030  4.3807378  1.6271780  1.4000193  1.0670784  1.0217670
>  [7]  0.8792466  0.8103790  0.7397817  0.7279262  0.6909955  0.6589746
> [13]  0.6237204  0.6055884  0.550  0.5712017  0.5469284  0.5215437
> [19]  0.5073809  0.4892339  0.4644124  0.4485545  0.4372404  0.4290573
> [25]  0.4270672  0.4071262  0.3947753  0.3763811  0.3680527  0.3560231
> [31]  0.3537934  0.3402836  0.3108977  0.3099143  0.2819351  0.2645035
> [37]  0.2548654  0.2077900  0.2043732  0.1923942
>> eigen(dados40.cov)$values
>  [1] 884020.98 337855.95 138823.30 126291.58  87915.21  79207.04  73442.71
>  [8]  68388.11  60625.26  58356.54  55934.05  54024.00  50505.10  48680.26
> [15]  46836.4

Re: [R] CDF of Sample Quantile

2011-02-14 Thread Duncan Murdoch

On 14/02/2011 9:58 AM, Bentley Coffey wrote:

I need to calculate the probability that a sample quantile will exceed a
threshold given the size of the iid sample and the parameters describing the
distribution of each observation (normal, in my case). I can compute the
probability with brute force simulation: simulate a size N sample, apply R's
quantile() function on it, compare it to the threshold, replicate this MANY
times, and count the number of times the sample quantile exceeded the
threshold (dividing by the total number of replications yields the
probability of interest). The problem is that the number of replications
required to get sufficient precision (3 digits say) is so HUGE that this
takes FOREVER. I have to perform this task so much in my script (searching
over the sample size and repeated for several different distribution
parameters) that it takes too many hours to run.

I've searched for pre-existing code to do this in R and haven't found
anything. Perhaps I'm missing something. Is anyone aware of an R function to
compute this probability?

I've tried writing my own code using the fact that R's quantile() function
is a linear combination of 2 order statistics. Basically, I wrote down the
mathematical form of the joint pdf for the 2 order statistics (a function of
the sample size and the distribution parameters) then performed a
pseudo-Monte Carlo integration (i.e. using Halton Draws rather than R's
random draws) over the region where the sample quantile exceeds the
threshold. In theory, this should work and it takes about 1000 times fewer
clock cycles to compute than the Brute Force approach. My problem is that
there is a significant discrepancy between the results using Brute Force and
using this more efficient approach that I have coded up. I believe that the
problem is numerical error but it could be some programming bug; regardless,
I have been unable to locate the source of this problem and have spent over
20 hours trying to identify it this weekend. Please, somebody help!!!

So, again, my question: is there code in R for quickly evaluating the CDF of
a Sample Quantile given the sample size and the parameters governing the
distribution of each iid point in the sample?


I think the answer to your question is no, but I think it's the wrong 
question.  Suppose Xm:n is the mth sample quantile in a sample of size 
n, and you want to calculate P(Xm:n > x).  Let X be a draw from the 
underlying distribution, and suppose P(X > x) = p.  Then the event Xm:n > x
is the same as the event that out of n independent draws of X, at least 
n-m+1 are bigger than x:  a binomial probability.  And R can calculate 
binomial probabilities using pbinom().
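
A sketch of that calculation for the normal case (the function name is mine):

# P(X(m) > x) for the m-th order statistic of an iid N(mean, sd) sample of
# size n: at most m-1 of the n draws may fall at or below x
p.os.exceeds <- function(x, m, n, mean = 0, sd = 1)
  pbinom(m - 1, size = n, prob = pnorm(x, mean, sd))

# quick check against brute force, for the 13th of 25 (the sample median):
set.seed(42)
mean(replicate(1e4, sort(rnorm(25))[13] > 0.2))   # simulated
p.os.exceeds(0.2, m = 13, n = 25)                 # exact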


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Bron-Kerbosch algorithm

2011-02-14 Thread Yan Jiao
Dear R users

 

I need to solve the problem of finding all cliques in a graph; is there an R
package implementing the Bron-Kerbosch algorithm?

 

Many thanks

 

Yan


**
This email and any files transmitted with it are confide...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to group data by day

2011-02-14 Thread Mikhail Titov
It depends on what you would like to get at the end. Perhaps you don't
necessarily need this type of numbering; for instance, if you'd like to
calculate a daily average.

london$id <- as.Date(london$id)

For sum by day you could use, let's say, this

aggregate(words~id,london,FUN=sum)

If you really want what you've asked:

london$one <- 1
z <- aggregate(one ~ id, london, FUN = sum)          # rows per day (sorted by id)
london$day <- rep(seq(along.with = z$one), z$one)    # assumes rows sorted by date
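
Or in one line, assuming the rows are already in chronological order:

london$day <- match(london$id, unique(london$id))

(as.numeric(factor(london$id)) numbers the days by sorted date whatever the
row order.)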

Mikhail


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Michela Ferron
> Sent: Monday, February 14, 2011 11:09 AM
> To: r-help@r-project.org
> Subject: [R] How to group data by day
> 
> Hi everybody,
> 
> I'm a beginner in R and I'm having a hard time grouping my data by day.
> The data are in this format:
> 
> id; words
> 2005-07-07T09:59:56Z; 35
> 2005-07-07T10:01:39Z; 13
> 2005-07-08T10:02:22Z; 1
> 2005-07-09T10:03:16Z; 23
> 2005-07-10T10:04:23Z; 39
> 2005-07-10T10:04:39Z; 15
> 
> I've transformed the date strings into dates with the function:
> london$id <- transform(london$id, as.Date(london$id, format="%Y-%m-
> %d%T%H:%M:%S%Z"))
> and it seems to work.
> 
> Now I would like to add a new "day" variable to group data by day, like
> this:
> 
> id; words; day
> 2005-07-07T09:59:56Z; 35; 1
> 2005-07-07T10:01:39Z; 13; 1
> 2005-07-08T10:02:22Z; 1; 2
> 2005-07-09T10:03:16Z; 23; 3
> 2005-07-10T10:04:23Z; 39; 4
> 2005-07-10T10:04:39Z; 15; 4
> 
> How can I do that?
> 
> Many thanks!
> 
> Michela
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Bron-Kerbosch algorithm

2011-02-14 Thread Kjetil Halvorsen
You could have tried to type
RSiteSearch("Bron-Kerbosch")
into R. As it is, that does not give any hits.

But in packages graph or igraph (on CRAN) there should
be some algorithm.

Kjetil

On Mon, Feb 14, 2011 at 4:33 PM, Yan Jiao  wrote:
> Dear R users
>
>
>
> I need to solve the problem of finding all cliques in a graph; is there an R
> package implementing the Bron-Kerbosch algorithm?
>
>
>
> Many thanks
>
>
>
> Yan
>
>
> **
> This email and any files transmitted with it are confide...{{dropped:10}}
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] problem running scripts

2011-02-14 Thread jose Bartolomei

 
Dear all,


I have encountered an odd situation.


I have various R scripts interconnected via the source () function.


After examining the results I noticed that not all the functions or procedures
within a script were actually executed, especially in the longest script
(about 180 lines).


Then I ran every script individually (not using source()), selecting all
(Ctrl + a) and running the selection (Ctrl + r).


R read every line in my script but, again, not all of the procedures were 
executed.


For example:


X$DATEACCI<-as.Date(X$DATEACCI) stayed as a factor.


Or 
 
X$AgeG below was not created:


ageg<-c(0, 4, 11, 50, Inf)


X$AgeG<-cut(X$AGE, ageg, include.lowest=TRUE,
labels=c("0-4", "5-11", "12-50", "51+"))




The data set X is approximately dim(345000, 33) for year one, but I will need to
run the scripts on 10 years.


I tried it using attach() and the function use() from epicalc, but that did not
work either.


Nonetheless, if I ran every line of my script individually, everything was OK.




Any insight on this? 
What have I missed from the R manuals?


I am using R 2.12.0 embedded within eclipse Helios via StatET 0.9.1 


In addition I ran the scripts using R console: 2.12.0, 32 bit (not within 
eclipse)


I conducted the same procedure on two different HP Z400s running Windows 7 with
4 GB of RAM.


I'll try under Linux at home; if the result differs I'll report it.


Thanks,


Jose

  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get warning about implicit factor to integer coercion? Example.

2011-02-14 Thread WB Kloke
Ista Zahn  psych.rochester.edu> writes:

> 
> Hi,
> I think an example would be helpful. I'm not sure what behavior you
> are referring to.
> 
> best,

Here is an example:

If a data.frame of experimental results is read from an external
file, a column of strings, e.g. subject codes, is converted to a
factor by default.

If I have a second table containing additional data for the subjects
(or a big pool of subjects) with subject code as rownames, then
sometimes I want to add data looked up from the 2nd table to the
data.frame, possibly to use them as additional covariates.

The wrong way to do is:

df$age <- table2[df$Subject,"age"]

This doesn't work as expected because df$Subject is silently converted
to a number, which is not meaningful for table2 except when the table
happens to list the subjects in the order of levels of df$Subject.

So only
df$age <- table2[levels(df$Subject)[as.numeric(df$Subject)],"age"]
or
df$age <- table2[as.character(df$Subject),"age"]

is expected to work correctly, as is explained in the factor() doc.

This has bitten me more than once, and it will bite the users I am
going to evangelize R to, too.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Bron-Kerbosch algorithm

2011-02-14 Thread Peter Ehlers

On 2011-02-14 11:50, Kjetil Halvorsen wrote:

You could have tried to type
RSiteSearch("Bron-Kerbosch")
into R. As it is, that does not give any hits.


Try it without the hyphen and you're pointed to
the RBGL package.

Peter Ehlers



But in packages graph or igraph (on CRAN) there should
be some algorithm.

Kjetil

On Mon, Feb 14, 2011 at 4:33 PM, Yan Jiao  wrote:

Dear R users



I need to solve the problem of finding all cliques in a graph; is there an R
package implementing the Bron-Kerbosch algorithm?



Many thanks



Yan


**
This email and any files transmitted with it are confide...{{dropped:10}}

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Return list of data frames.

2011-02-14 Thread Ista Zahn
Hi Alberto,
I don't think the error is coming from the code you've shown us
(although quoting the list names is a little strange).

This works for me:

f <- function() {
df.a <- data.frame(x=sample(letters, 100, replace=TRUE), y=rnorm(100))
df.b <- data.frame(z=factor(letters), y = factor(1:26))
MyList<- list("a"=df.a, "b"=df.b)
return(MyList)
}

f()
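
To your second question: the returned data frames can be pulled back out of
the list by name, e.g.

res <- f()
df.a <- res$a        # or res[["a"]]
df.b <- res[["b"]]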

Please follow the posting guide, and include a short reproducible example.

Best,
Ista

On Mon, Feb 14, 2011 at 11:57 AM, Alberto Negron
 wrote:
> Hi,
>
> I've got a problem with a function trying to return 2 data frames in a list.
> The code is as follow:
>
> function(x) {
>
> # code
>
> MyList<- list("a"=df.a,"b"=df.b)
> return(MyList)
>
> }
>
> and I got the following message:
>
> Error in list_to_dataframe(res, attr(.data, "split_labels")) :
>  Results must be all atomic, or all data frames
>
> At first instance I thought it may be down that both df have same variable
> names so I changed var names in df.a - but I am still getting the same error
> then I changed my lIst to  MyList<- list(df.a,df.b)  just to try something
> different and still getting the same issue.
>
> Now I wondering if the error is down to the difference in size for each data
> frame df.a (r=500, c=8) and df.b (r=1, c=4) - if this is the case then I don't
> know how to fix it.
>
> Any ideas?
>
> Sorry, I've got one more question, once I return the list from my function
> how can I split out my list to get the original 2 data frames or how can I
> access each one independently, i.e. assign them to a data frame again.. if
> that makes sense.
>
> I am using 2.12.1 on Win7
>
> Regards,
>
> Alberto
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to get warning about implicit factor to integer coercion? Example.

2011-02-14 Thread Ista Zahn
Hi, thanks for the example. I agree that this is dangerous -- that is
one of the reasons why I use (and teach others to use) merge() instead
of relying on sorting when joining data.frames.

table2$Subject <- rownames(table2)
df2 <- merge(df, table2)

will work, even when Subject is stored as a character in table2 and as
a factor in df. merge() is a reliable and safe way to perform this
type of operation. I do agree that it would be good for [ to issue a
warning when the indices are not numeric or character vectors, but
training yourself (and others) to use merge() will help avoid these
problems.

Best,
Ista

On Mon, Feb 14, 2011 at 2:10 PM, WB Kloke  wrote:
> Ista Zahn  psych.rochester.edu> writes:
>
>>
>> Hi,
>> I think an example would be helpful. I'm not sure what behavior you
>> are referring to.
>>
>> best,
>
> Here is an example:
>
> If a data.frame of experimental results is read from an external
> file, a column of strings, e.g. subject codes, is converted to a
> factor by default.
>
> If I have a second table containing additional data for the subjects
> (or a big pool of subjects) with subject code as rownames, then
> sometimes I want to add data looked up from the 2nd table to the
> data.frame, possibly to use them as additional covariates.
>
> The wrong way to do is:
>
> df$age <- table2[df$Subject,"age"]
>
> This doesn't work as expected because df$Subject is silently converted
> to a number, which is not meaningful for table2 except when the table
> happens to list the subjects in the order of levels of df$Subject.
>
> So only
> df$age <- table2[levels(df$Subject)[as.numeric(df$Subject)],"age"]
> or
> df$age <- table2[as.character(df$Subject),"age"]
>
> is expected to work correctly, as is explained in the factor() doc.
>
> This has bitten me more than once, and it will bite the users I am
> going to evangelize R to, too.
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Ista Zahn
Graduate student
University of Rochester
Department of Clinical and Social Psychology
http://yourpsyche.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] txtProgressBar examples?

2011-02-14 Thread Greg Snow
Did you try running the examples on the help page ?txtProgressBar ?

Basically the txtProgressBar command creates the progress bar, then in the loop 
the setTxtProgressBar command updates the amount of progress for that bar.
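
A minimal sketch of that pattern (the Sys.sleep() is a stand-in for real
work):

pb <- txtProgressBar(min = 0, max = 100, style = 3)
for (i in 1:100) {
  Sys.sleep(0.01)            # do some work
  setTxtProgressBar(pb, i)   # then report how far along we are
}
close(pb)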

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Scott Chamberlain
> Sent: Monday, February 14, 2011 11:33 AM
> To: r-help
> Subject: [R] txtProgressBar examples?
> 
> Dear R users,
> 
> 
> I am curious if someone could direct me towards websites/tutorials for
> uses of progress bars (especially) in R. I can't seem to figure them
> out.
> 
> 
> Thanks very much, Scott
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Apply Function To Each Row of Matrix

2011-02-14 Thread Stuart Jaffe
Hi,

I have a matrix with about 3000 rows, and 1 column. The contents of the matrix 
are stock symbols (IBM, AAPL, etc) for instance. I also have a function that 
takes a single stock symbol as an argument, parses the text of Google Finance 
and returns the related stock symbols that the page has listed into a CSV file. 
For example, IBM would return the following in a CSV file:
 rels
1  ORCL
2   HPQ
3  MSFT
4  CSCO
5   EMC
6  DELL
7   SAP
8  INTC
9SY
10   CA

I'm trying to use sapply (or any of the apply functions) to loop through the
matrix of 3000 stock symbols and pass them through to my function. I get the
following errors when using sapply:

1: In file(con, "r") : only first  element of 'description' argument used
2: In if (file == "") file <- stdout() else if (is.character(file)) { ... :
  the condition has length > 1 and only the first element will be used
3: In file(file, ifelse(append, "a", "w")) :
  only first element of 'description' argument used

I think it has something to do with the fact that I'm trying to pass a vector
as an argument into a function which only accepts a scalar argument. I've also
tried "do.call" but can't get any results. How do I loop through the matrix in
order to pass each individual stock symbol as an argument into the function?
Thanks so much for your help.
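
A sketch of the looping part, assuming the lookup function is called
getRelated() and the 3000 x 1 matrix is called tickers (both names are
placeholders):

symbols <- as.character(tickers[, 1])   # plain character vector, one symbol each
res <- lapply(symbols, getRelated)      # one scalar symbol per call
names(res) <- symbols

Passing one element per call keeps file() from ever seeing more than one file
name at a time, which is what the warnings above complain about.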



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] CDF of Sample Quantile

2011-02-14 Thread Bentley Coffey
Yeah, I think that you don't understand me. You suggest:

1 - pnorm(Threshold,mean,sd) = Probability that rnorm(1,mean,sd) > Threshold


I want to know:

Probability that quantile(rnorm(n,mean,sd),prob) > Threshold

I use rnorm() to simulate a sample of size n and then I compute the
statistic from that sample using quantile(). Like all statistics, that
quantile stat (which is a weighted average of 2 order statistics) is a
function of the realized data and hence has a sampling distribution. I want
to compute the cdf of that sampling distribution. Even the David and
Nagaraja _Order Statistics_ text in my library does not have a closed-form
cdf for that statistic...
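
For what it's worth, since the estimator is a weighted average of the two
bracketing order statistics, those give exact bounds via pbinom(); a sketch
(j as in the type-7 definition; the names are mine):

qtl.cdf.bounds <- function(t, n, prob, mean = 0, sd = 1) {
  j <- floor((n - 1) * prob) + 1    # lower of the two bracketing order stats
  p <- pnorm(t, mean, sd)
  c(lower = pbinom(j - 1, n, p),    # = P(X(j)   > t) <= P(estimate > t)
    upper = pbinom(j,     n, p))    # = P(X(j+1) > t) >= P(estimate > t)
}
qtl.cdf.bounds(0.2, n = 25, prob = 0.5)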


On Mon, Feb 14, 2011 at 2:20 PM, Jonathan P Daily  wrote:

> If I understand this, you have a value x, or a vector of values x, and you
> want to know the CDF that this value is drawn from a normal distribution?
>
> I assume you are drawing from rnorm for your simulations, so look at the
> other functions listed when you ?rnorm.
>
> HTH
> --
> Jonathan P. Daily
> Technician - USGS Leetown Science Center
> 11649 Leetown Road
> Kearneysville WV, 25430
> (304) 724-4480
> "Is the room still a room when its empty? Does the room,
>  the thing itself have purpose? Or do we, what's the word... imbue it."
> - Jubal Early, Firefly
>
> r-help-boun...@r-project.org wrote on 02/14/2011 09:58:09 AM:
>
> > [image removed]
> >
> > [R] CDF of Sample Quantile
> >
> > Bentley Coffey
> >
> > to:
> >
> > r-help
> >
> > 02/14/2011 01:58 PM
> >
> > Sent by:
> >
> > r-help-boun...@r-project.org
> >
> > I need to calculate the probability that a sample quantile will exceed a
> > threshold given the size of the iid sample and the parameters describing
> the
> > distribution of each observation (normal, in my case). I can compute the
> > probability with brute force simulation: simulate a size N sample, apply
> R's
> > quantile() function on it, compare it to the threshold, replicate this
> MANY
> > times, and count the number of times the sample quantile exceeded the
> > threshold (dividing by the total number of replications yields the
> > probability of interest). The problem is that the number of replications
> > required to get sufficient precision (3 digits say) is so HUGE that this
> > takes FOREVER. I have to perform this task so much in my script
> (searching
> > over the sample size and repeated for several different distribution
> > parameters) that it takes too many hours to run.
> >
> > I've searched for pre-existing code to do this in R and haven't found
> > anything. Perhaps I'm missing something. Is anyone aware of an R
> function to
> > compute this probability?
> >
> > I've tried writing my own code using the fact that R's quantile()
> function
> > is a linear combination of 2 order statistics. Basically, I wrote down
> the
> > mathematical form of the joint pdf for the 2 order statistics (a
> function of
> > the sample size and the distribution parameters) then performed a
> > pseudo-Monte Carlo integration (i.e. using Halton Draws rather than R's
> > random draws) over the region where the sample quantile exceeds the
> > threshold. In theory, this should work and it takes about 1000 times
> fewer
> > clock cycles to compute than the Brute Force approach. My problem is
> that
> > there is a significant discrepancy between the results using Brute Force
> and
> > using this more efficient approach that I have coded up. I believe that
> the
> > problem is numerical error but it could be some programming bug;
> regardless,
> > I have been unable to locate the source of this problem and have spent
> over
> > 20 hours trying to identify it this weekend. Please, somebody help!!!
> >
> > So, again, my question: is there code in R for quickly evaluating the
> CDF of
> > a Sample Quantile given the sample size and the parameters governing the
> > distribution of each iid point in the sample?
> >
> > Grateful for any help,
> >
> > Bentley
> >
> >[[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Test for equivalence

2011-02-14 Thread syrvn

Hi!

First of all, thank you all very much for your input. I am sorry, but I
haven't yet had the time to reply to all of your messages. I will give you a
more detailed description of my problem within the next 2 days!
problem within the next 2 days!

Many thanks again.

Best,
syrvn
-- 
View this message in context: 
http://r.789695.n4.nabble.com/Test-for-equivalence-tp3302739p3305890.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] MCMC glmm

2011-02-14 Thread Ben Bolker
garciap  usal.es> writes:

> I'm working with abundance data of some species, containing many zero
> values, and the factors are the ones typical in a BACI experiment
> (Before-and-After-Control-Impact). Thus, these are two fixed factors. As the
> data do not hold the normality and homogeneity-of-variances assumptions
> of classical ANOVA, I'm trying to fit a zero-altered model using the MCMCglmm
> library.
> I've two questions:
> 
> 1.- How can I include an interaction between the BA (before and after) and
> the CI (control-impact) components in this kind of model? I've searched the
> available notes but found no clear answer. My first
> approach would be to write a formula like: Abundance~BA+CI+BA*CI.
> 2.- Even when I try to fit a model without interactions I can't do it
> because I obtain the following error:
> > fit<-MCMCglmm(Abundancia~BA+CI, random=NULL,
> > family="zapoisson",data=Trucha)
> Error in MCMCglmm(Abundancia ~ BA + CI, random = NULL, family = "zapoisson", 
> : 
>   please use idh(trait):units or us(trait):units or trait:units for error
> structures involving multinomial data with more than 2 categories or
> zero-infalted/altered/hurdle models

  Quick, not necessarily complete answers:

 (1) BA*CI (which is equivalent to BA+CI+BA:CI) is the right
syntax for the before-after by control-impact interaction.

 (2) MCMCglmm fits zero-altered models by constructing an augmented
set of response variables + predictor variables. This is a little 
tricky: I strongly recommend that you look at p. 100 and following of
 vignette("CourseNotes",package="MCMCglmm") and come back with
further questions after you've read it ...
  As mentioned therein, if you don't have random effects then it
will be considerably easier to fit your model using the functions
in the pscl package.
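  For example, a sketch with your variable names (assuming pscl is installed):

library(pscl)
fit <- hurdle(Abundancia ~ BA * CI, data = Trucha, dist = "poisson")
summary(fit)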

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sem problem - did not converge

2011-02-14 Thread John Fox
Dear Felipe,

Without the data or the input covariance or correlation matrix it's not 
possible to say much. sem() would have complained if the input moment matrix 
weren't positive-definite, so your check of the eigenvalues of the matrix isn't 
providing additional information. 

If you haven't already done so, you might try modifying the par.size argument, 
as described in ?sem, or specifying the argument debug=TRUE to see if you can 
detect where the optimization is going off the rails. You might also try 
supplying your own start values for some or all of the parameters. 

It may just be that the problem is ill-conditioned and that sem() isn't capable 
of maximizing the likelihood.

I hope this helps,
 John


John Fox
Sen. William McMaster Prof. of Social Statistics
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
http://socserv.mcmaster.ca/jfox/
On Mon, 14 Feb 2011 13:44:53 -0200
 Felipe Bhering  wrote:
>  Can someone help me? I have tried several things and it never converges.
> I am making a confirmatory factor analysis model.
>
> [The full model specification, hetcor code, sem() warning, and eigenvalue
> output are quoted from the original message above.]

Re: [R] How to get warning about implicit factor to integer coercion?

2011-02-14 Thread Bill.Venables
Your complaint is based on what you think a factor should be rather than what
it actually is and how it works.  The trick with R (BTW I think it's version
2.12.x rather than 12.x at this stage...) is learning to work *with* it as it
is rather than making it work the way you would like it to.

Factors are a bit tricky.  They are numeric objects, even if arithmetic is
inhibited.

> f <- factor(letters)
> is.numeric(f)  ## this looks strange
[1] FALSE
> mode(f)## but at a lower level
[1] "numeric"

Take a simple example.  

> x <- structure(1:26, names = sample(letters))
> x
 h  o  u  w  l  z  a  j  e  n  k  i  s  v  t  g  f  x  c  b  y  d  m  q  p  r 
 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 

If you use the factor f as an index, it behaves as a numeric vector of indices:

> x[f]
 h  o  u  w  l  z  a  j  e  n  k  i  s  v  t  g  f  x  c  b  y  d  m  q  p  r 
 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 

That's just the way it is.  That's the reality.  You have to learn to deal with 
it. 

Sometimes a factor behaves as a character string vector, when no other 
interpretation would make sense.  e.g.

> which(f == "o")
[1] 15
>

but in other cases they do not.  In this case you can make the coercion 
explicit of course, if that is your bent:

> which(as.character(f) == "o")
[1] 15
> 

but here there is no need.  There are cases where you *do* need to make an
explicit coercion, though, if you want it to behave as a character string 
vector, and indexing is one:

> x[as.character(f)]
 a  b  c  d  e  f  g  h  i  j  k  l  m  n  o  p  q  r  s  t  u  v  w  x  y  z 
 7 20 19 22  9 17 16  1 12  8 11  5 23 10  2 25 24 26 13 15  3 14  4 18 21  6 
>  

If you want factors to behave universally as character string vectors, the 
solution is not to use factors at all but to use character string vectors 
instead.  You can get away with a surprising amount this way.  e.g. character 
string vectors used in model formulae are silently coerced to factors, anyway.  
What you need to learn is how to read in data frames keeping the character 
string columns "as is" and stop them from being made into factors at that 
stage.  That is a lesson for another day...
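
(The short version of that lesson: read.table() and read.csv() take a
stringsAsFactors = FALSE argument, e.g.

dat <- read.csv("results.csv", stringsAsFactors = FALSE)  # file name is a placeholder

and options(stringsAsFactors = FALSE) changes the default for a whole
session.)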

Bill Venables.


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of WB Kloke
Sent: Monday, 14 February 2011 8:31 PM
To: r-h...@stat.math.ethz.ch
Subject: [R] How to get warning about implicit factor to integer coercion?

Is there a way in R (12.x) to avoid the implicit coercion of factors to integers
in the context of subscripts?

If this is not possible, is there a way to get at least a warning, if any
coercion of this type happens, given that the action of this coercion is almost
never what is wanted?

Of course, in the rare case that as.integer() is applied explicitly onto a
factor, the warning is not needed, but certainly not as disastrous as in the
other case.

Probably, in a future version of R, an option similar to that for partial
matches of subscripts might be useful to change the default from maximal
unsafety to safe use.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] hazard.ratio command:

2011-02-14 Thread Angel Russo
Hi All,

In the absence of any reply, I am posting a slightly modified question. What is
"x" in hazard.ratio in the command below?

example(hazard.ratio)

binscores<-cut(scores.train,c(-1000,-1,1,1000),c("low","intermediate","high"))

*

hazard.ratio(x=?, surv.time=train$ProgFreeSurv,
surv.event=train$PfSStatus, strat=binscores)

*

The input "x" is documented as "x: a vector of risk predictions"; what does
that mean? I have computed the scores (binscores) above from a 100-gene
signature derived from expression data. I need to compute hazard ratios for
the groups (low and high) defined using the "cut" command above.
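
My current reading (possibly wrong) is that x is simply the per-patient
risk-score vector the groups were cut from, i.e.

hazard.ratio(x = scores.train, surv.time = train$ProgFreeSurv,
             surv.event = train$PfSStatus, strat = binscores)

which would give the hazard ratio per unit increase of the score, stratified
by the binned groups.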

Thanks, Angel.

--

I am interested in calculating the hazard ratio of the stratified groups
I defined.

library(survcomp)
binscores <-
cut(scores.train,c(-1000,-1,1,1000),c("low","intermediate","high"))

dd <- data.frame("surv.time"=OS, "surv.event"= status, "strat"=binscores)
km.coxph.plot(formula.s=Surv(surv.time, surv.event) ~ strat)

How do I calculate the hazard ratio between the groups defined as above?
Thanks very much,
Angel

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A Math question

2011-02-14 Thread Kjetil Halvorsen
or even better:

http://mathoverflow.net/

On Sun, Feb 13, 2011 at 8:02 PM, David Winsemius  wrote:
>
> On Feb 13, 2011, at 4:47 PM, Maithula Chandrashekhar wrote:
>
>> Dear all, I admit this is not anything to do R and even with
>> Statistics perhaps. Strictly speaking this is a math related question.
>> However I have some reasonable feeling that experts here would come up
>> with some elegant suggestion to my question.
>>
>> Here my question is: What is sum of all Integers? I somewhere heard
>> that it is Zero as positive and negative integers will just cancel
>> each other out. However want to know is it correct?
>
> There are more appropriate places to pose such questions:
> http://math.stackexchange.com/
>
>
>
> David Winsemius, MD
> West Hartford, CT
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A Math question

2011-02-14 Thread Tsjerk Wassenaar
Also, googling around one will find the question has been asked (and
answered) already:

http://ask.metafilter.com/25060/Whats-the-sum-of-all-integers

Cheers,

Tsjerk

On Tue, Feb 15, 2011 at 1:33 AM, Kjetil Halvorsen
 wrote:
> or even better:
>
> http://mathoverflow.net/
>
> On Sun, Feb 13, 2011 at 8:02 PM, David Winsemius  
> wrote:
>>
>> On Feb 13, 2011, at 4:47 PM, Maithula Chandrashekhar wrote:
>>
>>> Dear all, I admit this is not anything to do R and even with
>>> Statistics perhaps. Strictly speaking this is a math related question.
>>> However I have some reasonable feeling that experts here would come up
>>> with some elegant suggestion to my question.
>>>
>>> Here my question is: What is sum of all Integers? I somewhere heard
>>> that it is Zero as positive and negative integers will just cancel
>>> each other out. However want to know is it correct?
>>
>> There are more appropriate places to pose such questions:
>> http://math.stackexchange.com/
>>
>>
>>
>> David Winsemius, MD
>> West Hartford, CT
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Tsjerk A. Wassenaar, Ph.D.

post-doctoral researcher
Molecular Dynamics Group
* Groningen Institute for Biomolecular Research and Biotechnology
* Zernike Institute for Advanced Materials
University of Groningen
The Netherlands

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Defining functions inside loops

2011-02-14 Thread Eduardo de Oliveira Horta
Hello again.

Let me try something a little more intricate. Let's say instead of
forcing evaluation of 'i' I'd want to force evaluation of a vector;
for example:
s <- c( 0.2, 0.45, 0.38, 0.9)
f  <- lapply(1:10, function(i)local({ force(i) ; function(x)x^2+s[i]}))
rm(s)
f[[1]](0.1)
Error in f[[1]](0.1) : object 's' not found

Any thoughts?
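
One workaround: copy the needed value into the local environment, so that
each closure is self-contained (a sketch):

s <- c(0.2, 0.45, 0.38, 0.9)
f <- lapply(seq_along(s), function(i) local({
  si <- s[i]                # copy the value now, while 's' still exists
  function(x) x^2 + si
}))
rm(s)
f[[1]](0.1)                 # 0.21, even though 's' is gone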

Best regards,

Eduardo

> sessionInfo()
R version 2.11.1 (2010-05-31)
x86_64-pc-mingw32

locale:
[1] LC_COLLATE=Portuguese_Brazil.1252  LC_CTYPE=Portuguese_Brazil.1252
[3] LC_MONETARY=Portuguese_Brazil.1252 LC_NUMERIC=C
[5] LC_TIME=Portuguese_Brazil.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] Revobase_4.2.0   RevoScaleR_1.1-1 lattice_0.19-13

loaded via a namespace (and not attached):
[1] grid_2.11.1   pkgXMLBuilder_1.0 revoIpe_1.0   tools_2.11.1
[5] XML_3.1-0

> On Mon, Nov 15, 2010 at 7:10 PM, William Dunlap  wrote:
>> You could make f[[i]] be function(t)t^2+i for i in 1:10
>> with
>>     f <- lapply(1:10, function(i)local({ force(i) ; function(x)x^2+i}))
>> After that we get the correct results
>>    > f[[7]](100:103)
>>    [1] 10007 10208 10411 10616
>> but looking at the function doesn't immediately tell you
>> what 'i' is in the function
>>    > f[[7]]
>>    function (x)
>>    x^2 + i
>>    
>> You can find it in f[[7]]'s environment
>>    > get("i", envir=environment(f[[7]]))
>>    [1] 7
>>
>> The call to force() in the call to local() is not
>> necessary in this case, although it can help in
>> other situations.
>>
>> Bill Dunlap
>> Spotfire, TIBCO Software
>> wdunlap tibco.com
>>
>>> -Original Message-
>>> From: r-help-boun...@r-project.org
>>> [mailto:r-help-boun...@r-project.org] On Behalf Of Eduardo de
>>> Oliveira Horta
>>> Sent: Monday, November 15, 2010 12:50 PM
>>> To: r-help@r-project.org
>>> Subject: [R] Defining functions inside loops
>>>
>>> Hello,
>>>
>>> I was trying to define a set of functions inside a loop, with
>>> the loop index
>>> working as a parameter for each function. Below I post a
>>> simpler example, as
>>> to illustrate what I was intending:
>>>
>>> f<-list()
>>> for (i in 1:10){
>>>   f[[i]]<-function(t){
>>>     f[[i]]<-t^2+i
>>>   }
>>> }
>>> rm(i)
>>>
>>> With that, I was expecting that f[[1]] would be a function
>>> defined by t^2+1,
>>> f[[2]] by t^2+2 and so on. However, the index i somehow
>>> doesn't "get in" the
>>> function definition on each loop, that is, the functions
>>> f[[1]] through
>>> f[[10]] are all defined by t^2+i. Thus, if I remove the
>>> object i from the
>>> workspace, I get an error when evaluating these functions.
>>> Otherwise, if
>>> don't remove the object i, it ends the loop with value equal
>>> to 10 and then
>>> f[[1]](t)=f[[2]](t)=...=f[[10]](t)=t^2+10.
>>>
>>> I am aware that I could simply put
>>>
>>> f<-function(u,i){
>>>   f<-t^2+i
>>> }
>>>
>>> but that's really not what I want.
>>>
>>> Any help would be appreciated. Thanks in advance,
>>>
>>> Eduardo Horta
>>>
>>>       [[alternative HTML version deleted]]
>>>
>>> __
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] A Math question

2011-02-14 Thread Steve Lianoglou
On Sun, Feb 13, 2011 at 4:47 PM, Maithula Chandrashekhar
 wrote:
> Dear all, I admit this is not anything to do R and even with
> Statistics perhaps. Strictly speaking this is a math related question.
> However I have some reasonable feeling that experts here would come up
> with some elegant suggestion to my question.
>
> Here my question is: What is sum of all Integers?

You're right -- this question doesn't belong here, but if there was
ever a question that the following answer was appropriate for, surely
this must be it. So:

I'm pretty sure it's 42.

-- 
Steve Lianoglou
Graduate Student: Computational Systems Biology
 | Memorial Sloan-Kettering Cancer Center
 | Weill Medical College of Cornell University
Contact Info: http://cbio.mskcc.org/~lianos/contact

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] R, Ubuntu, package installation with non-zero exit status

2011-02-14 Thread B77S

All:
I have been looking through the string of posts regarding this same issue,
but I haven't been able to fix this problem. 
I am running Ubuntu 10.04 64-bit, R version 2.10.1.
I cannot install certain packages (e.g. "vegetarian") and each time it says
basically the same thing (regardless of the package): 

... leaving stuff out ..

Error in library("vegetarian") : there is no package called 'vegetarian'
Execution halted

The downloaded packages are in
‘/tmp/RtmphcJ8mn/downloaded_packages’
Warning message:
In install.packages("vegetarian") :
  installation of package 'vegetarian' had non-zero exit status

###

I have tried the following:
1) ensured I have 'build-essential' Ubuntu package installed (I do)
2) attempted install.packages as root (sudo R)

to no avail
  
-- 
View this message in context: 
http://r.789695.n4.nabble.com/R-Ubuntu-package-installation-with-non-zero-exit-status-tp3305989p3305989.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Parzen fractional degree of difference estimator

2011-02-14 Thread Fologo Dubois
Does R have a function for the Parzen fractional degree of differencing
estimator? I am referring to the non-parametric kernel density estimator
set forth by Parzen (1983).




  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

