On Sat, 7 Feb 2015, Mike Miller wrote:
res <- residuals( model )
resStd <- ( res - mean( res, na.rm=TRUE ) ) / sd( res, na.rm=TRUE )
Another issue is how to make the theoretical quantiles for the normal
distribution. There are a few methods:
https://www.statsdirect.co
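To finish the comparison, a minimal sketch (assuming model is the fitted
model object from the code above): overlay the density of the standardized
residuals on the N(0,1) density, or use a normal QQ plot, which asks the
same question.
res <- residuals( model )
resStd <- ( res - mean( res, na.rm=TRUE ) ) / sd( res, na.rm=TRUE )
# density of standardized residuals with the N(0,1) density dashed on top
plot( density( resStd, na.rm=TRUE ), main="Standardized residuals vs N(0,1)" )
curve( dnorm(x), add=TRUE, lty=2 )
# a normal QQ plot of the same residuals
qqnorm( resStd ); qqline( resStd )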
On Mon, 2 Feb 2015, Mikael Olai Milhøj wrote:
I'm having trouble trying to plot the density of the residuals against
the standard normal distribution N(0,1). (I'm trying to see if my
residuals are well-behaved).
I know how to calculate the standardized residuals (I guess that there
may be a
Kevin Thorpe wrote,
"Moral of story, computers do what you tell them, not what you meant."
But hope springs eternal! Of course this aphorism neatly explains every
problem I've ever run across while using a computer. But maybe someday
they'll make a computer that understands *me*!
Peter Dalgaard
I've got to remember to use more spaces. Here's the basic problem:
These are the same:
v< 1
v<1
But these are extremely different:
v< -1
v<-1
This mistake can get you even inside of a function call like this:
v <- -2:2
which( v<1 )
[1] 1 2 3
which( v<-1 ) # oops, I meant v< -1 not v<-1
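With the spaces in place the comparison does what was meant (same v as above):
which( v < -1 )
[1] 1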
First, a very easy question: What is the difference between using
what="character" and what=character() in scan()? What is the reason for
the character() syntax?
I am working with some character vectors that are up to about 27.5 million
elements long. The elements are always unique. Specif
Thanks, Jeff. You really know the packages. I searched and I guess I
didn't use the right terms. That package seems to do exactly what I
wanted.
Mike
On Tue, 13 Jan 2015, Jeff Newmiller wrote:
On Tue, 13 Jan 2015, Mike Miller wrote:
I have many pairs of data frames each with about 15 million records each
and about 10 million records in common. They are sorted by two of their
fields and will be merged by those same fields.
The fact that the data are sorted could be used to greatly speed up a
merge, but I have the impressi
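A sketch of one way to exploit the sort order, assuming the data.table
package (the excerpt does not name the package Jeff suggested, so this is
an assumption; df1, df2 and the field names are hypothetical):
library(data.table)
A <- as.data.table(df1)    # hypothetical input data frames
B <- as.data.table(df2)
setkey(A, field1, field2)  # keying records the sort order
setkey(B, field1, field2)
merged <- merge(A, B)      # keyed merge joins on field1, field2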
On January 13, 2015 10:51:18 AM PST, Mike Miller wrote:
On Fri, 9 Jan 2015, Duncan Murdoch wrote:
On 09/01/2015 5:32 PM, Erin Hodgess wrote:
Hello again.
Here is another question that I am puzzled about: I had the
(incorrect) impression that if I had Rtools on a Windows machine I
could use any tar.gz package. However, that is not true.
I
-----Original Message-----
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Mike Miller
Sent: Monday, 5 January 2015 1:03 p.m.
To: R-Help List
Subject: [R] counting sets of consecutive integers in a vector
I have a vector of sorted positive integer values (e.g., positive integers
after applying sort() and unique()). For example, this:
c(1,2,5,6,7,8,25,30,31,32,33)
I want to make a matrix from that vector that has two columns: (1) the
first value in every run of consecutive integer values, and (2) the last
value in every run.
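A sketch of one way to build that matrix from the breaks in the sequence:
v <- c(1,2,5,6,7,8,25,30,31,32,33)
b <- c( 0, which( diff(v) != 1 ), length(v) )  # run boundaries
cbind( first = v[ head(b, -1) + 1 ], last = v[ b[-1] ] )
#      first last
# [1,]     1    2
# [2,]     5    8
# [3,]    25   25
# [4,]    30   33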
On Sun, 4 Jan 2015, Duncan Murdoch wrote:
On 04/01/2015 5:13 PM, Mike Miller wrote:
The help doc for readBin writeBin tells me this:
Handling R's missing and special (Inf, -Inf and NaN) values is discussed
in the ‘R Data Import/Export’ manual.
So I go here:
http://cran.r-project.org/doc/manuals/r-release/R-data.html#Special-values
Unfortunately, I don't really understand th
the posting guide and hence reading the help page first helps:
"Possible sizes are 1, 2, 4 and possibly 8 for integer or logical vectors,
and 4, 8 and possibly 12/16 for numeric vectors."
On Sun, 4 Jan 2015, Duncan Murdoch wrote:
On 04/01/2015 12:31 AM, Mike Miller wrote:
Live: OO#.. Dead: OO#.. Playing
Research Engineer (Solar/BatteriesO.O#. #.O#. with
/Software/Embedded Controllers) .OO#. .OO#. rocks...1k
---
Sent from my phone.
It's an IEEE standard format:
http://en.wikipedia.org/wiki/Half-precision_floating-point_format#IEEE_754_half-precision_binary_floating-point_format:_binary16
This is what I see:
writeBin(vec , con, size=2 )
Error in writeBin(vec, con, size = 2) : size 2 is unknown on this machine
I'm not su
On Fri, 2 Jan 2015, Duncan Murdoch wrote:
On 01/01/2015 10:05 PM, Mike Miller wrote:
This is how big those errors are:
512*.Machine$double.eps
[1] 1.136868e-13
Under other conditions you also were seeing errors of twice that, or
1024*.Machine$double.eps. It might not be a coincidence
On Thu, 1 Jan 2015, Duncan Murdoch wrote:
On 01/01/2015 1:21 PM, Mike Miller wrote:
I understand that it's all about the problem of representing digital
numbers in binary, but I still find some of the results a little
surprising, like that list of numbers from the table() output.
0e+00 0.00e+00 -1.136868e-13
Or am I missing something else in what Mike Miller is seeking to do?
Ted.
On 01-Jan-2015 19:58:02 Mike Miller wrote:
I'd have to say thanks, but no thanks, to that one! ;-) The problem is
that it will take a long time and it will give the same answer.
On Thu, 1 Jan 2015, Duncan Murdoch wrote:
On 01/01/2015 1:21 PM, Mike Miller wrote:
On Thu, 1 Jan 2015, Duncan Murdoch wrote:
On 31/12/2014 8:44 PM, David Winsemius wrote:
On Dec 31, 2014, at 3:24 PM, Mike Miller wrote:
x.in <- scan(text="
2 0.325 1.12 1.9 1.
", what="")
padding <- c(".000", "000", "00", "0", "")
x.pad <- paste(x.in, padding[nchar(x.in)], sep="")
x.nodot <- sub(".", "", x.pad, fixed=TRUE)
x <- as.integer(x.n
On Thu, 1 Jan 2015, Duncan Murdoch wrote:
On 31/12/2014 8:44 PM, David Winsemius wrote:
On Dec 31, 2014, at 3:24 PM, Mike Miller wrote:
This is probably a FAQ, and I don't really have a question about it, but I
just ran across this in something I was working on:
as.integer(1000*1.003)
[1] 1002
I didn't expect it, but maybe I should have. I guess it's about the
machine precision added to the fact that as.integer always rounds toward zero.
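The usual fix is to round before converting, because as.integer() truncates
toward zero:
1000*1.003                      # stored as 1002.9999999999999, though it prints as 1003
as.integer( 1000*1.003 )        # truncated: 1002
as.integer( round(1000*1.003) ) # rounded first: 1003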
On Thu, 25 Dec 2014, Bert Gunter wrote:
You persist in failing to read the docs!
"the docs" -- do those exclude those I have been quoting and linking to?
Moreover, neither Hadley Wickham, nor anyone else, is the authoritative
source for R usage (other than for the (many!) packages he, himsel
On Thu, 25 Dec 2014, David Winsemius wrote:
On Dec 25, 2014, at 1:04 AM, Mike Miller wrote:
On Thu, 25 Dec 2014, Mike Miller wrote:
I was going to ask a question about it how to test that an object is a
vector, but then I found this:
"is.vector() does not test if an object is a vector. Instead it returns
TRUE only if the object is a vector with no attributes apart from names.
On Thu, 25 Dec 2014, Duncan Murdoch wrote:
On 25/12/2014 1:57 PM, Mike Miller wrote:
I do think I get what is going on with this, but why should I buy into
this conceptualization? Why is it better to say that a matrix *is* a
vector than to say that a matrix *contains* a vector? The latter
RFGLS" package (CCing).
Best,
Uwe Ligges
On 25.12.2014 10:04, Mike Miller wrote:
On Thu, 25 Dec 2014, peter dalgaard wrote:
On 25 Dec 2014, at 08:15 , Mike Miller wrote:
"is.vector returns TRUE if x is a vector of the specified mode having
no attributes other than names. It returns FALSE otherwise."
So that means that a vector in R has no attributes other
On Thu, 25 Dec 2014, Mike Miller wrote:
On Thu, 25 Dec 2014, Jeff Newmiller wrote:
You have written a lot, Mike, as though we did not know it. You are not
the only one with math and multiple computing languages under your belt.
I'm not assuming that you and Bert don't know things, but I do expect to
have a wider audience -- when I
I just wanted to put this out there. It's just some of my observations
about things that happen with R, or happened in this particular
investigation. There were definitely some lessons for me in this, and
maybe that will be true of someone else. The main thing I picked up is
that it is good
On Wed, 24 Dec 2014, Bert Gunter wrote:
You are again misinterpreting because you have not read the docs,
although this time I will grant that they are to some extent misleading.
First of all, a matrix _IS_ a vector:
a <- matrix(1:4, 2,2)
a[3] ## vector indexing works because it is a vector
On Wed, 24 Dec 2014, Jeff Newmiller wrote:
On December 24, 2014 6:49:47 PM PST, Mike Miller wrote:
On Wed, 24 Dec 2014, Mike Miller wrote:
Also, regarding the sacred text, "x A numeric." is a bit terse. The
same text later refers to length(x), so I suspect that "A numeric" is
short for "A numeric vector", but that might not mean "a vector of
On Wed, 24 Dec 2014, Nordlund, Dan (DSHS/RDA) wrote:
For your character vector example, this will get you the counts.
table(charvec)[charvec]
Hope this is helpful,
It does help, Dan! I came up with the same idea and expanded on it a bit
to work properly with other kinds of vectors:
as.v
It's slightly trickier with an integer vector:
intvec <- c(4,4,5,6,6,6)
table( intvec )[intvec]
intvec
<NA> <NA> <NA> <NA> <NA> <NA>
  NA   NA   NA   NA   NA   NA
as.vector(table( intvec )[as.character(intvec)])
[1] 2 2 1 3 3 3
So I think this will always work for vectors of either type:
as.vector(table( as.character(vec) )[as.character(vec)])
R 3.0.1 on Linux 64...
I was working with someone else's code. They were using ave() in a way
that I guess is nonstandard: Isn't FUN always supposed to be a variant of
mean()? The idea was to count for every element of a factor vector how
many times the level of that element occurs in the f
s of class "matrix"
Best,
Yixuan
2014-05-02 4:48 GMT-04:00 Berend Hasselman :
On 02-05-2014, at 09:17, Mike Miller wrote:
I have a symmetric matrix, X, and I just want the first K eigenvectors
(those associated with the K largest eigenvalues). Clearly, this works:
eigs <- eigen( X, symmetric=TRUE )
K_eigenvectors <- eigs$vectors[ , 1:K ]
K_eigenvalues <- eigs$values[ 1:K ]
rm(eigs)
In order to do that, I have t
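For just the first K eigenpairs of a large symmetric matrix, a partial
decomposition avoids computing the rest. A sketch using RSpectra (an
assumption: the excerpt does not show which package was recommended, but
Yixuan maintains RSpectra, the successor of rARPACK):
library(RSpectra)
res <- eigs_sym( X, k = K, which = "LM" )  # K largest-magnitude eigenvalues
K_eigenvectors <- res$vectors
K_eigenvalues  <- res$values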
On Tue, 22 Apr 2014, William Dunlap wrote:
For me that other software would probably be Octave. I'm interested if
anyone here has read in these files using Octave, or a C program or
anything else.
I typed 'octave read binary file' into google.com and the first hit was
the Octave help file f
After saving a file like so...
con <- gzcon(file("file.gz", "wb"))
writeBin(vector, con, size=2)
close(con)
I can read it back into R like so...
con <- gzcon(file("file.gz", "rb"))
vector <- readBin(con, integer(), 4800, size=2, signed=FALSE)
close(con)
...and I'm wondering what other programs might
On Mon, 17 Mar 2014, Duncan Murdoch wrote:
On 14-03-17 6:22 PM, Mike Miller wrote:
Thanks! Another thing I've figured out: Use of "drop0trailing=T" in
format() fixes the .0 stuff that I didn't like:
write.table(format(data[1:10,], digits=5, trim=T, drop0trail
On Mon, 17 Mar 2014, Berend Hasselman wrote:
On 17-03-2014, at 21:03, Mike Miller wrote:
data[,c(5:9,11,13,17:21)] <- signif(data[,c(5:9,11,13,17:21)], digits=5)
Then write.table(data) does what I'd want. It works better than format().
Example:
data2 <- data
data2[,c(5
On Sun, 16 Mar 2014, Duncan Murdoch wrote:
On 14-03-16 2:13 AM, Mike Miller wrote:
I always knew there was some numerical reason why I was getting very
long stretches of 9s or 0s in the write.table() output, but my concern
is really with how to prevent that from happening. So the question
On Sat, 15 Mar 2014, peter dalgaard wrote:
On 15 Mar 2014, at 20:54 , Mike Miller wrote:
$ cat data1.txt
0.005
0.00489
I don't know why it shows 17 digits and doesn't round to 15, but it is showing
that the numbers are different, for some reason.
Aiding my weakenin
On Sat, 15 Mar 2014, Rui Barradas wrote:
I haven't followed this thread since its start but I think you now have
a case for FAQ 7.31. See inline below.
Try
(1-0.995) - 0.005
[1] 4.336809e-18
(2-1.995) - 0.005
[1] -1.066855e-16
Hope this helps,
Yes, that does show the problem well, but
Having just learned a few tricks from you guys, this might be the neatest
way to show the issue:
write.table(c(1-0.995, 2-1.995), row.names=F, col.names=F)
0.005
0.00489
options(digits) only works with write(), and not with write.table():
options(digits=7)
write(c(1-0.995, 2-1.995))
On Sat, 15 Mar 2014, peter dalgaard wrote:
I don't think so. I think some of your numbers differ sufficiently from
numbers with only a few digits to the right of the decimal that
write.table needs to write them with increased precision. You didn't
read them like that, did you? You did some
Thanks for all of the helpful comments and ideas.
Best,
Mike
--
Michael B. Miller, Ph.D.
Minnesota Center for Twin and Family Research
Department of Psychology
University of Minnesota
http://scholar.google.com/citations?user=EV_phq4AAAAJ
On Sat, 15 Mar 2014, Duncan Murdoch wrote:
On 14-03-14 11:03 PM, Mike M
On Fri, 14 Mar 2014, Duncan Murdoch wrote:
On 14-03-14 8:59 PM, Mike Miller wrote:
That's not current, b
What I'm using:
R version 3.0.1 (2013-05-16) -- "Good Sport"
Copyright (C) 2013 The R Foundation for Statistical Computing
Platform: x86_64-unknown-linux-gnu (64-bit)
According to some docs, options(digits) controls numerical precision in
output of write.table(). I'm using the default value f
In the bash shell we can use PROMPT_COMMAND="history -a" to tell bash to
always append the last command to the history file. It will do this with
every command so that if bash crashes or an ssh connection is lost, the
command history will still be available in the history file.
With R, I see
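A sketch of one way to approximate that behavior in R, assuming an
interactive session in which savehistory() is supported:
# save the command history after every top-level command
addTaskCallback( function(expr, value, ok, visible) {
  try( savehistory("~/.Rhistory"), silent=TRUE )
  TRUE  # returning TRUE keeps the callback registered
} )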
On Tue, 6 Aug 2013, David Winsemius wrote:
Look at the code. You are attributing behavior to `summaryBy` that
should be ascribed to `print.data.frame`, and to `format.data.frame`.
Your function is returning a numeric vector and getting displayed by
`print.default`.
Thanks! That's the thing
I received two additional suggestions, one off-list, both appended below.
Both helped me to learn a bit more about how to get what I want.
First, the aggregate() function, in package:stats, provides the
numbers I needed, but I don't like the output format as much as I liked
the format fro
Summary of my question:
"I have a 5-way factorial design, two levels per factor, so 32 cells, and
I mostly just want the means and standard deviations for the contents of
every cell. Similarly, it would be nice to also have the range and maybe
some percentiles, if there is a function that wou
I'm looking for recommendations for a good way to do this. There must be
a good function in some package...
I have a 5-way factorial design, two levels per factor, so 32 cells, and I
mostly just want the means and standard deviations for the contents of
every cell. I can write a for loop, bu
Prof Brian Ripley wrote:
Maybe not. On a Unix-alike see ?Signals. If you can find the pid of
the R process and it is still running (and not e.g. suspended),
kill -USR1 <pid>
will save the workspace and history.
Original query:
On Tue, 2 Oct 2012, Mike Miller wrote:
I connected from my deskt
I connected from my desktop Linux box to a Linux server using ssh in an
xterm, but that xterm was running in Xvnc. I'm running R on the server in
that xterm (over ssh). Something went wrong with Xvnc that has caused it
to hang, probably this bug:
https://bugs.launchpad.net/ubuntu/+source/vnc
On Thu, 23 Aug 2012, Gary Dong wrote:
I'm wondering if the gls function reports pseudo R. I do not see it by
summary(). If the package does not report, can I calculate it in this
way?
Adjusted pseudo R squared = 1 - [(Loglik(beta) - k ) / Loglik(null)] where
k is the number of IVs.
We've b
On Sun, 29 Apr 2012, Daniel Nordlund wrote:
I don't know what the OP is really trying to accomplish yet, and I am
not motivated (yet) to try to figure it out. However, all this
"flooring" and "ceiling) and "rounding" is not necessary for generating
uniform random integers. For N integers in
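A sketch of that point: sample() draws uniform random integers directly.
x <- sample( 0:10, 60, replace=TRUE )  # 60 uniform draws from {0,...,10}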
On Mon, 30 Apr 2012, Vale Fara wrote:
ok, what to do is to generate two sets (x,y) of integer uniform random
numbers so that the following condition is satisfied: the sum of the
numbers obtained in x,y matched two by two (first number obtained in x
with first number obtained in y and so on) is
On Fri, 27 Apr 2012, Vale Fara wrote:
I am working with lotteries and I need to generate two sets of uniform
random numbers.
Requirements:
1) each set has 60 random numbers
random integers?
2) random numbers in the first set are taken from an interval (0-10),
whereas numbers in the second s
On Thu, 8 Sep 2011, William Dunlap wrote:
Use gzcon() to make a compressed connection and any function that write
to a connection will write compressed data. E.g.,
> con <- gzcon(file("tempfile.junk", "wb"))
> x <- as.integer(rep(c(-127, 1, 127), c(3,2,1)))
> writeBin(x, con, size=1)
> close(con)
On Thu, 8 Sep 2011, Duncan Murdoch wrote:
On 11-09-07 6:25 PM, Mike Miller wrote:
I'm getting the impression from on-line docs that R cannot work with
single-precision floating-point numbers, but that it has a pseudo-mode for
single precision for communication with external programs.
I don't mind that R is using doubles internally, but what about storage?
If all I need to s
I'm curious about what would cause this (see below), if it isn't a joke.
Is it possible that it didn't look ridiculous in the deleted HTML but the
text looked bad? It's almost unreadable. I guess the HTML gets deleted
because it is a waste of space, but I received a 14 MB message from this
li
On Fri, 24 Jun 2011, wang peter wrote:
The aa file is:
x 1 NA 2
y 1 NA 3
and the R program is:
aa<-read.table("aa",row.names=1)
bb<-cor(t(aa),method = "pearson",use="pairwise.complete.obs")
bb
  x y
x 1 1
y 1 1
I am confused why the Pearson correlation coefficient between x and y is 1.
You have
On Wed, 22 Jun 2011, Jim Silverton wrote:
I am generating 1,000 replicates of 10,000 of these 2 x 3 tables but R
cannot seem to save it. It's over 1 Gig. Any ideas on how I can store
this large amount of data? Should I use a list or a matrix?
Is English your first language? If so, you can pro
On Thu, 23 Jun 2011, David Duffy wrote:
I am interested in simulating 10,000 2 x 3 tables for SNPs data with
the Hardy Weinberg formulation. Is there a quick way to do this? I am
assuming that the minor allelle frequency is uniform in (0.05, 0.25).
rmultinom() with HWE expectations
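A sketch of that suggestion (the group sizes of 100 are an assumption; the
excerpt does not give David Duffy's exact code):
p <- runif( 1, 0.05, 0.25 )                 # minor allele frequency
hwe <- c( p^2, 2*p*(1-p), (1-p)^2 )         # HWE genotype probabilities
tab <- t( cbind( rmultinom(1, 100, hwe),    # group 1
                 rmultinom(1, 100, hwe) ) ) # group 2: one 2 x 3 table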
I'm als
Resending to correct bad subject line...
On Mon, 20 Jun 2011, Jim Silverton wrote:
On Mon, 20 Jun 2011, Jim Silverton wrote:
I am using plink on a large SNP dataset with a .map and .ped file. I want
to get some sort of file, say a list of all the SNPs that plink is saying
that I have. Any ideas on how to do this?
All the SNPs you have are listed in the .map file. An easy way
On Tue, 3 May 2011, Christian Schulz wrote:
On Mon, 2 May 2011, P Ehlers wrote:
Use str_sub() in the stringr package:
require(stringr) # install first if necessary
s <- "abcdefghijklmnopqrstuvwxyz"
str_sub(s, c(1,12,17), c(3,15,-1))
#[1] "abc""lmno" "qrstuvwxyz"
Thanks. That's very close to what I'm looking for, but i
On Mon, 2 May 2011, Gabor Grothendieck wrote:
On Mon, May 2, 2011 at 10:32 PM, Mike Miller wrote:
On Tue, 3 May 2011, Andrew Robinson wrote:
try substr()
OK. Apparently, it allows things like this...
substr("abcdef",2,4)
[1] "bcd"
...which is like this:
echo "abcdef" | cut -c2-4
But that doesn't use a delimiter, it only does character-based cutting,
and it is very limited. With "c
The R "cut" command is entirely different from the UNIX "cut" command.
The latter retains selected fields in a line of text. I can do that kind
of manipulation using sub() or gsub(), but it is tedious. I assume there
is an R function that will do this, but I don't know its name. Can you
tell
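A sketch of delimiter-based field extraction, akin to "cut -d: -f2,4" in
the shell (the input line is hypothetical):
line <- "a:b:c:d:e"
fields <- strsplit( line, ":", fixed=TRUE )[[1]]
paste( fields[ c(2,4) ], collapse=":" )
# [1] "b:d"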
On Fri, 29 Apr 2011, Duncan Murdoch wrote:
On 29/04/2011 7:41 PM, Miao wrote:
Can anyone help on gsub() in R? I have a string like something below, and
wanted to delete all the strings with leading backslash, including
"\xa0On",
"\023, "\xab", and many others. How should I write a regular
Rob--
Your biostatistician has not disagreed with the rest of us about anything
except for his preferred name for the test. He wants to call it the
Freeman-Halton test, some people call it the Fisher-Freeman-Halton test,
but most people call it the Fisher Exact test -- all are the same test.
On Fri, 29 Apr 2011, David Winsemius wrote:
On Apr 29, 2011, at 1:29 PM, Mike Miller wrote:
On Fri, 29 Apr 2011, Giovanni Petris wrote:
Well, but the original poster also refers to 0.2 and 0.8 as "expected
min and max", in which case we are back to a joke...
Well, he is a lot better with English than I am with Mandarin. He seemed
to like the truncated normal answers, so we'll let t
p_L <- pnorm(L, mean=m, sd=s)
p_U <- pnorm(U, mean=m, sd=s)
x <- qnorm(runif(n, p_L, p_U), mean=m, sd=s)
Or it could be written on one line:
x <- qnorm(runif(n, pnorm(L, mean=m, sd=s), pnorm(U, mean=m, sd=s)), mean=m, sd=s)
Mike
On Thu, 28 Apr 2011, Mike Miller wrote:
Good point. It would be absurdly inefficient if the upper and lower
limits on the interval of interest were, say, 0.2 and 0.201 instead of 0.2
and 0.8. Here's what I think is probably the best general approach:
Compute the CDF for the upper and lower limits of the interval and
generate unifo
On Thu, 28 Apr 2011, viostorm wrote:
I have read the help page, or at least ?fisher.exact
I looked a bit on the Internet, and I guess it is applicable to > 2x2. I had
spoken to a biostatistician here who is quite excellent and was adamant
with me I could not do > 2x2.
I found this:
http://math
On Fri, 29 Apr 2011, Thomas Lumley wrote:
On Fri, Apr 29, 2011 at 8:01 AM, Mike Miller wrote:
On Thu, 28 Apr 2011, viostorm wrote:
I'm using fisher.exact on a 4x2 table and it seems to work.
Does anyone know exactly what is going on? I thought fisher.exact is
only for 2x2 tables.
You were wrong. I'm sure there's nothing wrong with the program. You
will find that with bigger tabl
On Mon, 25 Apr 2011, Mark Heckmann wrote:
I use a function that inserts line breaks ("\n" as escape sequence)
according to some criterion when there are blanks in the string. e.g.
"some text \nand some more text".
What I want now is another form of a blank, so my function will not
insert a
I sometimes get errors of this form:
Error: cannot allocate vector of size 13.8 Gb
I've also seen "Gb" used in R documents. Is "Gb" being used to refer to
gigabyte? We usually refer to bytes and gigabytes when discussing memory
usage, but the lowercase "b" more often refers to bits. Accord
I note that "current implementations of R use 32-bit integers for integer
vectors," but I am working with large arrays that contain integers from 0
to 3, so they could be stored as unsigned 8-bit integers. Can R do this?
(FYI -- This is for storing minor-allele counts for genetic studies.
Ther
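A sketch of one way to hold 0-3 counts in one byte per element, using raw
vectors (the counts shown are hypothetical):
g <- c(0L, 1L, 3L, 2L)  # minor-allele counts
r <- as.raw(g)          # one byte per element in memory, and via writeBin(r, con)
as.integer(r)           # convert back when arithmetic is needed
# [1] 0 1 3 2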
(Apologies to the cc-list: I'm resending from a different address because
I didn't realize it was going to r-help.)
I'm also not an expert on this topic. I just wanted to list a couple of
ways that non-PD matrices might arise. I'll just add now a couple of
pointers:
First, I believe the term "semipositive definite" is considered ambiguous
because in some literature it means that the matrix the smallest
e
Is there an R function for computing a variance-covariance matrix that
guarantees that it will have no negative eigenvalues? In my case, there
is a *lot* of missing data, especially for a subset of variables. I think
my tactic will be to compute cor(x, use="pairwise.complete.obs") and then
pr
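A sketch of the projection step, assuming the Matrix package (nearPD()
returns the nearest positive semi-definite correlation matrix):
library(Matrix)
C <- cor( x, use="pairwise.complete.obs" )
C_pd <- as.matrix( nearPD(C, corr=TRUE)$mat )  # eigenvalues now >= 0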
On Sun, 30 Jan 2011, David Winsemius wrote:
On Jan 30, 2011, at 6:02 AM, Alex Smith wrote:
Thank you for all your input but I'm afraid I don't know what the final
conclusion is. I will have to check the eigenvalues to see if any are
negative. Why would setting them to zero make a difference? Sorr
On Mon, 24 Jan 2011, Peter Ehlers wrote:
10 x 10 strikes me as pretty near the limit of usefulness of a pairs
plot. You might want to investigate the xysplom() function in pkg:HH.
You'll have to write your own panel function, possibly subsetting your
data with the scale() function.
The 10 x
On Mon, 24 Jan 2011, David Winsemius wrote:
On Jan 24, 2011, at 6:49 PM, Mike Miller wrote:
I make plenty of scatterplots, especially using scatterplot.matrix from
library(car). One thing I don't know how to do is determine which points
are plotted last. Sometimes I plot a large number of points for multiple
groups represented by different colors.
I would like to guarantee that poi
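A sketch of one way to guarantee that: reorder the rows so the group of
interest is drawn last and overplots the rest (group and the color choices
are hypothetical):
ord <- order( group == "B" )  # FALSE sorts first, so group "B" plots last
plot( x[ord], y[ord], col = ifelse( group[ord] == "B", "red", "grey" ), pch=16 )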
Thanks for sharing the interesting information about cran2deb. I was
unaware of that project (but I did know that Dirk E. had been doing Octave
and R binaries for Debian for years). Dirk Eddelbuettel (you spelled his
name correctly) and Charles Blundell gave a talk at UseR! 2009...
http://di
On Sun, 14 Mar 2010, Jonathan Baron wrote:
Just to make this thoroughly confusing, I will say that I am very happy
with Fedora
Just to make this less confusing: choose Ubuntu. I say this because it is
easy to use, has great repositories and it is the most popular Linux
distro, so it should
##
# Syntax highlighting for R #
# by Stephen Haptonstahl #
# March 15, 2009 #
# http://srh.ucdavis.edu/drupal/node/20 #
# edited by Mike Miller #
syntax "R" &qu