It's documented in the Encodings section of ?file:
"As from R 3.0.0 the encoding "UTF-8-BOM" is accepted for reading and
will remove a Byte Order Mark if present (which it often is for files
and webpages generated by Microsoft applications). If it is required
(it is not recommended) when writing i
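For example, a minimal sketch of reading such a file with that encoding (the file name "withbom.csv" is just a placeholder):
dat <- read.csv("withbom.csv", fileEncoding = "UTF-8-BOM")  # strips a leading byte-order mark, if present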
Hi,
@David: Thanks for the explanation of why this does not work. This of
course makes sense theoretically.
However in a recent discussion
(http://stats.stackexchange.com/questions/107448/spatial-distance-between-cluster-means)
it was stated that "the 'reversals problem' of centroid method is not
> I'm wondering if you understand that RMySQL is only an interface to
> MySQL, and that MySQL itself needs to be installed separately for your OS.
>
> --
> David
Hi David.
As I wrote in my first mail, MySQL (MariaDB) is installed from the tar
file. Please, if you want to help me, re-read my first mai
A quick Google Scholar search turned up the paper below. It has been
cited 38 times, so that should give you plenty of references.
More than likely you are looking for use cases in your own context;
correct?
http://goo.gl/Vcx4j1
On Fri, Jul 11, 2014 at 3:17 AM, Katherine Gobin
wrote:
> Dear
Hi Lars,
Graph it:
plot(pred_leuk)
will show that some of the survival curves do not reach 0.5 before you run out
of data. Thus the median is not estimated and you get NA.
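For example, a minimal sketch (assuming pred_leuk came from survfit() in the survival package; "group" and "yourdata" are placeholders):
library(survival)
fit <- survfit(Surv(time, status) ~ group, data = yourdata)
plot(fit)
abline(h = 0.5, lty = 2)  # any curve that never drops below this line has no estimable median
print(fit)                # such groups show NA in the "median" column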
Chris
-----Original Message-----
From: Lars Bishop [mailto:lars...@gmail.com]
Sent: Thursday, July 10, 2014 6:23 AM
To:
Dear all,
I need some help with plotting boxplots in groups. I have a
file of the following format:
-------------------------
POS   DISO   NODE_CAT
-------------------------
A     20     Hubs
C     30     Nonhubs
B     50
Dear Anupam,
Try
boxplot(DISO ~ POS * NODE_CAT, data = yourdata)
Another option would be the last example in ?boxplot
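Or, a self-contained toy version shaped like the table you posted (the values are made up):
toy <- data.frame(
  POS      = rep(c("A", "B", "C"), each = 4),
  DISO     = c(20, 25, 22, 28, 50, 45, 48, 52, 30, 35, 32, 28),
  NODE_CAT = rep(c("Hubs", "Nonhubs"), times = 6)
)
boxplot(DISO ~ POS * NODE_CAT, data = toy)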
HTH,
Jorge.-
On Fri, Jul 11, 2014 at 4:38 PM, anupam sinha
wrote:
> Dear all,
> I need some help with plotting boxplots in groups. I have a
> file of the followin
The easiest workaround is the one you included in your original posting.
Specify k= and not h=. Examine the dendrogram and decide how many clusters are
at the level you want. You could add guidelines to the dendrogram with abline()
to make it easier to see the number of clusters at various heigh
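A minimal sketch of that approach, using a built-in data set in place of your own:
hc <- hclust(dist(USArrests))
plot(hc)                               # examine the dendrogram
abline(h = c(50, 100, 150), lty = 2)   # guideline heights to compare
groups <- cutree(hc, k = 4)            # cut at a fixed number of clusters rather than a height
table(groups)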
Dear Judith,
Michael and David's suggestion (and my earlier suggestion) that you use a
method appropriate to the distribution of your data is no doubt good advice,
but it remains to be explained why candisc() produced an error.
If you do the following, you'll see that many of your variables are
i
Hello all,
I have a data frame filled with senders and recipients. Some of the senders
have multiple rows with different recipients and I want to merge those
rows. For example I have
a...@email.com b...@email.com
a...@email.com c...@email.com d...@email.com
r...@email.com f...@em
Hi Ryan,
We can't tell from your example what structure your original data are
in, nor what your output is intended to look like, or for that matter
how you got from one to the other. Please don't post in HTML because
it gets mangled!
Using dput() to provide your example data is the best thing, b
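For example (mydf is a placeholder for your data frame):
dput(head(mydf, 10))  # paste the output into your post so readers can rebuild the object exactly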
If I may ask, how can I know the path to the font file(s) used by R when it
generates the png files with the Helvetica font?
Thanks
On Thu, Jul 10, 2014 at 12:21 AM, David Winsemius
wrote:
>
> On Jul 9, 2014, at 7:47 PM, Sébastien Bihorel wrote:
>
> > Hi,
> >
> > I have this set of R scripts
I am running a mixed effects model with random intercepts:
fit.courseCross <- lme(fixed = zGrade ~ Rep + ISE + P7APrior + Female + White +
                         HSGPA + MATH + Years + Course + Course*P7APrior,
                       random = ~ 1 | SID,
                       data = Master.complete[Master.complete$Course != "P7A", ])
where all varia
Dear Katie,
the code looks all right. On a standard example, everything works fine, e.g.:
# example(plm)
# coefs<-names(coef(zz))
# lht(zz, coefs)
# lht(zz, coefs, vcov=vcovHC) # "arellano" is the default HC method anyway
As the error message says, your case is somehow ill-conditioned and the vc
Hi there!
I have a huge data file of 600 columns and 360 samples:
data <- read.table("small.txt", header = TRUE, sep = "\t", dec = ".",
row.names=1)
The txt file (compiled with Excel) shows me only numbers; however, R
gives me the structure of ANY column as "factor".
When I try "stringsAsFactors
On Jul 11, 2014, at 9:15 AM, Tim Richter-Heitmann
wrote:
> Hi there!
>
> I have huge datafile of 600 columns 360 samples:
>
> data <- read.table("small.txt", header = TRUE, sep = "\t", dec = ".",
> row.names=1)
>
> The txt.file (compiled with excel) is showing me only numbers, however R
>
On Jul 11, 2014, at 2:36 PM, Marc Schwartz wrote:
>
> On Jul 11, 2014, at 9:15 AM, Tim Richter-Heitmann
> wrote:
>
>> Hi there!
>>
>> I have huge datafile of 600 columns 360 samples:
>>
>> data <- read.table("small.txt", header = TRUE, sep = "\t", dec = ".",
>> row.names=1)
>>
>> The txt
Hi John,
I don't think your x2 is right, but who knows?
One possible approach would be:
R> lapply(split(x2$recipients, unique(x2$sender)), paste, collapse=", ")
$`a...@email.com`
[1] "b...@email.com, f...@email.com"
$`r...@email.com`
[1] "c(\"c...@email.com\", \"d...@email.com\"), h...@email.co
It is hard to diagnose without looking at the file. For example
readLines("small.txt", n=5)
would print out the first five lines that might show problems with wrapping the
lines. What does dim(data) give you? Are you getting all 360 samples and 600
columns? You could also try using the colClass
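Something along these lines, assuming the first column holds row names and the remaining 600 are meant to be numeric:
readLines("small.txt", n = 5)   # inspect the raw lines for wrapping or stray separators
dim(data)                        # do you really get 360 rows and 600 columns?
data <- read.table("small.txt", header = TRUE, sep = "\t", dec = ".",
                   row.names = 1,
                   colClasses = c("character", rep("numeric", 600)))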
> data <- read.table("small.txt", header = TRUE, sep = "\t", dec = ".",
> row.names=1)
> ...
> Factor w/ 358 levels "0,123111694",..: 11 14 50 12 38 44 13 76 31 30
It looks like your data file used commas for the decimal point. Is that right?
You used dec="." when reading it; does dec="," work b
Full Disclosure: I am not an expert on this and this requires an
expert's answer.
But my understanding is that inference in mixed effect models is an
entirely nontrivial matter -- i.e. exactly what you want must be
clearly defined (what variance components to include) and the
distributions of the
Hi everyone,
Since metafor doesn't have its own list, I hope this is the correct place
for this posting; my apologies if there is a more appropriate list.
I'm conducting a meta-analysis where I would like to determine the
correlation between plasticity in leaf traits and climate. I'm calculating
Is there an algorithm to find sparse cuts in a graph?
I am looking to find the best cut, i.e. the one that minimizes the sparsity of the graph.
Hi R-helpers,
Are there any packages that can do the "zeta-squared transformation"?
The Zeta-squared transformation comes from the following article:
Standardizing Variables in Multiplicative Choice Models
Lee G. Cooper and Masao Nakanishi
Journal of Consumer Research, Vol. 10, No. 1 (Jun., 1983), pp. 96
Have you tried
RSiteSearch("zeta squared")
?
Someone may recognize this, but it never hurts to communicate where you have
already looked.
---
Jeff Newmiller