---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of ?
Assuming your sample data is called dta:
> table(dta$Results, dta$Analysis)
          A B C
    1-5   1 1 0
    20-50 1 0 0
    4-7   0 0 1
    8-9   0 1 1
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: r-help-boun...@r-project.org [m
, 4.5), pch=19)
axis(1, 1:4, LETTERS[1:4])
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Luigi Marongiu
Sent: Wed
Don't the functions metaMDSdist() and metaMDSredist() that are documented on
the metaMDS manual page give you the distance matrix? If you want to compute
the distances based on a single axis, you could use vegdist().
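Something along these lines (an untested sketch; comm stands for your community matrix and mds_fit for the metaMDS() result):
library(vegan)
d1 <- metaMDSdist(comm, distance = "bray")   # the dissimilarities metaMDS() works from
d2 <- vegdist(comm, method = "bray")         # or compute a dissimilarity matrix directly
# distances along a single ordination axis (Euclidean on the axis 1 scores)
d3 <- vegdist(mds_fit$points[, 1, drop = FALSE], method = "euclidean")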
David C
-Original Message-
From: r-help-boun...@r-project.org [mailto
. I am not
aware of a package that has a function to produce either of these.
Huberty, Carl J. and Stephen Olejnik. 2006. Applied MANOVA and Discriminant
Analysis. Second Edition. Wiley-Interscience.
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Mes
r the number of
observations or one of your columns is a linear combination of (can be
predicted exactly from) the others.
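Two quick checks for that (a sketch; X is assumed to be your predictor matrix):
ncol(X) > nrow(X)      # more columns than observations?
qr(X)$rank < ncol(X)   # TRUE if some column is a linear combination of the others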
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
Fro
].
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Denis Kazakiewicz
Sent: Monday, September 1, 2014 5:27 AM
To: r-he
probably take
the mean of the 100 runs as a reasonable estimate. If the estimates are quite
variable, you should probably use more than 40 runs by setting T=100 or an even
larger number. Multiple runs should then be more similar to one another.
-
David L Carlson
1.
col 1 is perfectly correlated with col 2
col 3 is perfectly correlated with col 4
col 5 is perfectly correlated with col 6
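One way to locate such pairs (illustrative only; X is assumed to hold your numeric columns):
cc <- cor(X)
which(abs(cc) > 0.9999 & upper.tri(cc), arr.ind = TRUE)   # row/col indices of (near-)perfect pairs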
David C
From: Patzelt, Edward [mailto:patz...@g.harvard.edu]
Sent: Tuesday, September 2, 2014 9:21 AM
To: David L Carlson
Cc: R-help@r-project.org
Subject: Re:
Another approach using barplot:
barplot(table(cut(art, breaks= -1:19, labels=0:19)))
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-
labels = c(expression(E[g]), "E", expression(E[j]),
"E", expression(E[t])), padj=1, mgp=c(3, .1, 0))
# Check alignment
abline(h=.7, xpd=TRUE, lty=3)
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX
.
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Tal Galili
Sent: Wednesday, September 3, 2014 5:24 AM
To: W Bra
/questions/6127/which-permutation-test-implementation-in-r-to-use-instead-of-t-tests-paired-and
David C
From: wbradleyk...@gmail.com [mailto:wbradleyk...@gmail.com] On Behalf Of W
Bradley Knox
Sent: Wednesday, September 3, 2014 9:20 AM
To: David L Carlson
Cc: Tal Galili; r-help@r-project.org
Subject: Re
egating function is not working properly with mine.
Any other thoughts?
Simon
On Aug 18, 2014, at 10:44 AM, David L Carlson wrote:
> Another approach using reshape2:
>
>> library(reshape2)
>> # Construct data/ add column of row numbers
>> set.seed(42)
>> mydf
help@r-project.org
Subject: Re: [R] depth of labels of axis
On Sep 3, 2014, at 10:05 PM, Jinsong Zhao wrote:
> On 2014/9/3 21:33, Jinsong Zhao wrote:
>> On 2014/9/2 11:50, David L Carlson wrote:
>>> The bottom of the expression is set by the lowest character (which can
>>
There may be a specialized package for this in bioconductor, but it seems that
you could just use aggregate() to calculate the means for each population and
then use the results of that in dist().
?aggregate
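Roughly like this (untested; a data frame dat with a grouping column `population` and numeric measurement columns is assumed):
means <- aggregate(. ~ population, data = dat, FUN = mean)
d <- dist(means[, -1])
attr(d, "Labels") <- as.character(means$population)   # label the distances by population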
-
David L Carlson
Department of Anthropology
Texas
on Kiss [mailto:sjk...@gmail.com]
Sent: Friday, September 5, 2014 10:22 AM
To: David L Carlson
Cc: r-help@r-project.org
Subject: Re: [R] Turn Rank Ordering Into Numerical Scores By Transposing A Data
Frame
HI, of course.
The a mini-version of my data-set is below, stored in d2. Then the code I'
contour, etc.
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Saptarshi Guha
Sent: Mo
at1$b)
> table(dat1$new)
1 2 3
5 5 5
> table(dat1$b)
A1 A2 B1
5 5 5
If b is not a factor in your table, make it one: see ?factor
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
frame (or matrix) with at least 4 columns.
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Mar
2 18042 46
[10,] 19220 17550 50
[11,] 107 23 24 19 112 19 25 20 349
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
F
mpler imputation methods. Package VIM has a number of options of which
nearest neighbor and hot deck might work well with your data.
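For example (a sketch only; dat stands for your data frame with missing values):
library(VIM)
imp.knn <- kNN(dat, k = 5)   # k-nearest-neighbour imputation
imp.hd  <- hotdeck(dat)      # hot deck imputation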
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message---
just index the return value:
> let <- letters[1:4]
> let[menu(let)]
1: a
2: b
3: c
4: d
Selection: 3
[1] "c"
Or a bit more polished:
> cat("Choice: ", let[menu(let)], "\n")
1: a
2: b
3: c
4: d
Selection: 4
Choice: d
---------
ith simulated p-value (based on 2000
replicates)
data: TT
X-squared = 7919.632, df = NA, p-value = 0.0004998
-----
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From:
Factor w/ 36 levels "1","10","11",..: 1 12 23 31 32 33 34 35 36 2
...
$ station : chr "A" "A" "A" "A" ...
It seems strange that the discharge and year would be factors and station would
be character.
-
(... which is what fit
is.
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Donia Smaali Bouhlila
Sent: Thursday, September 18, 201
------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Ivan Calandra
Sent: Tuesday, September 23, 2014 8:12 AM
To: r-help@r
Read the documentation for cutree(). You will have to decide how many clusters
you want to use since agnes() provides results for everything from n clusters
(where n is the number of observations) to 1 cluster.
?cutree
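Something like this, assuming ag is your agnes() result and 4 is the number of clusters you settle on (illustrative):
library(cluster)
grp <- cutree(as.hclust(ag), k = 4)
table(grp)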
-
David L Carlson
Department of
Another approach
fun <- function(i, dat=x) {
grp <- rep(1:(nrow(dat)/i), each=i)
aggregate(dat[1:length(grp),]~grp, FUN=sum)
}
lapply(2:6, fun, dat=TT)
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX
First, use stringsAsFactors=FALSE with the read.csv() function. That will
prevent the conversion to factors. Then try to convert date and time to
datetime objects.
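A sketch of what I mean (the column names and format string are assumptions about your file):
dat <- read.csv("mydata.csv", stringsAsFactors = FALSE)
dat$datetime <- as.POSIXct(paste(dat$date, dat$time),
                           format = "%Y-%m-%d %H:%M:%S")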
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 7
iginal four
variables
> new2 <- predict(lm(rowMeans(cbind(d1, d2, d3, d4))~pca$scores[,1]))
> lines(new2, col="red")
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
From: r-help-b
7
Since you want a single variable to stand for all four, you could scale new to
the mean:
> newd <- new*mean(d.svd$v[,1])
> head(newd)
[1] 130.9300 114.3972 120.3884 119.9340 116.1588 122.3983
-----
David L Carlson
Department of Anthropology
Texa
869 0.0097 0.01142 0.01219 ...
$ upper : num 1 1 0.997 0.992 0.989 ...
$ lower : num 0.987 0.966 0.959 0.947 0.941 ...
Since res is a list containing the columns you want plus other information, we
need to extract the needed columns from res and then combine those columns into
a dat
You need to use plain text, not html in your email. Your data are scrambled
(see below). It is better to send your data using the R dput() function:
dput(StartSignals)
dput(MainData)
dput(StopSignals)
-
David L Carlson
Department of Anthropology
Texas A&M University
o for X1 "2014-01-05" is both a
start and stop date (value 1.04) and the second start/end would be "2014-01-11"
to "2014-01-13" (values .43, 1.51, .26). What do you mean by compounding?
David C
-Original Message-
From: Pooya Lalehzari [mailto:plalehz...@plati
0.00 0.00
10 2014-01-10 0.00 0.00 1.68 0.98 0.00
11 2014-01-11 0.43 0.00 1.98 1.46 0.00
12 2014-01-12 1.51 0.78 1.63 0.46 1.84
13 2014-01-13 0.26 0.34 0.34 0.97 1.13
David C
-Original Message-
From: Pooya Lalehzari [mailto:plalehz...@platinumlp.com]
Sent: Tuesday, October 7, 2014 8:06 PM
To
How about
> do.call(cbind, lapply(env, as.vector))
[,1] [,2]
[1,] 0.00 0.00
[2,] 0.05 0.15
[3,] 0.00 0.00
[4,] 20.00 15.00
[5,] 0.00 0.00
[6,] 0.10 0.20
[7,] 50.00 45.00
[8,] 0.00 0.00
[9,] 0.00 0.00
-----
David L Carlson
Departm
Actually Jeff Laake's can be made even shorter with
sapply(mat_list, as.vector)
David C
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Evan Cooch
Sent: Thursday, October 9, 2014 7:37 AM
To: Evan Cooch; r-help@r-project.org
Subjec
ain=paste("Plot of", x[1], "with", x[2])))
NULL
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Beh
("winters-pdf.pdf")
> ellipses(mean=mn, var=vr, r=r, steps=72, thinRatio=NULL, aspanel=FALSE,
+ col='red', lwd=2)
> dev.off()
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-
5.40, Inf),
labels=c("S", "I1", "I2", "F"), right=FALSE)
No loops, no ifelse's. Anything below 3 will
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-
ct.org] On
Behalf Of David L Carlson
Sent: Friday, October 17, 2014 9:15 AM
To: Monaly Mistry; r-help@r-project.org
Subject: Re: [R] assigning letter to a column
I think it is doing exactly what you have told it to do, but that is probably
not what you want it to do.
First, you do not need a loop
presented in the file, analyze them as character strings.
-------------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Note that you do not have to create the vector of 1's (TRUE) and 0's (FALSE) if
you know the index values:
> j <- c(2, 4, 6)
> a[j, j]
     [,1] [,2] [,3]
[1,]    8   20   32
[2,]   10   22   34
[3,]   12   24   36
==========
David L. Carlson
Department of
Why not just
> library (alphahull)
> DT=data.frame(x=c(0.25,0.25,0.75,0.75),y=c(0.25,0.75,0.75,0.25))
> Hull <- ahull(DT, alpha = 0.5)
> TEST<- data.frame(x=c(0.25,0.5),y=c(0.5,0.5))
> apply(TEST, 1, function(x) inahull(Hull, x))
[1] FALSE TRUE
----
.
$ V19: num 0 7 2 27 2 0 0 80 30 0 ...
$ V20: num 0 0 0 0 0 0 0 0 0 0 ...
$ V21: num NA NA NA NA NA NA NA NA NA NA ...
$ V22: num NA NA NA NA NA NA NA NA NA NA ...
$ V23: num NA NA NA NA NA NA NA NA NA NA ...
$ V24: num NA NA NA NA NA NA NA NA NA NA ...
$ V25: num NA NA NA NA NA NA NA NA NA
You can avoid the call to cmdscale() by supplying your own starting configuration
(see the manual page for the y= argument). You could still hit other barriers
within isoMDS() or insufficient memory on your computer.
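Something like this (untested; d is your distance object and init an n x 2 matrix of starting coordinates):
library(MASS)
fit <- isoMDS(d, y = init, k = 2)   # uses your configuration instead of the cmdscale() start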
-
David L Carlson
Department of Anthropology
$ group: Factor w/ 3 levels "A","B","C": 1 1 1 2 2 2 2 3
$ value: num 1 3 2 2 2 4 4 1
I changed df to dfa since df() is the density function for the F distribution.
R is not likely to get confused, but you might.
Then read the manual page on ave() to see why these wo
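For reference, the ave() idiom looks like this (a sketch based on the str() output above):
dfa$grpmean <- ave(dfa$value, dfa$group, FUN = mean)   # group mean repeated for each row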
).
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Gerrit Eichner
Sent: Wednesday, November 12, 2014 8:06 AM
To: David Studer
C
ing NA
> mean(get(txt))
[1] 5.5
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of John Posner
Sent: Thursday, November 13,
=TRUE)
axis(1, 0:4, c("Smoke", "-", "+", "-", "+"), line=2,
lwd=0, cex.axis=1.25, xpd=TRUE)
David
-Original Message-
From: Olivier [mailto:olivier.lerou...@ymail.com]
Sent: Monday, November 17, 2014 4:39 PM
To: David L Carlson
Subje
ou will need to read the manual pages for barplot() and axis() and
the page on graphical parameters par(). In particular, you will have to
allocate more space at the bottom of the plot if you want to add more lines.
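For example (untested sketch; height and extra.labels are placeholders for your data):
op <- par(mar = c(8, 4, 4, 2) + 0.1)   # default bottom margin is 5.1 lines
bp <- barplot(height)                   # bp holds the bar midpoints
axis(1, at = bp, labels = extra.labels, line = 4, lwd = 0)
par(op)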
-
David L Carlson
Department of Anthropo
Look at the circular package more carefully. It accepts both degrees and radians, but you
have to create a circular object with circular() to specify what kind of
circular data you have. Then you can plot and get circular statistics on your
data.
-
David L Carlson
No. Just use the circular() function to specify that your data are in degrees
and clockwise and the graph will be labeled that way.
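For example (a sketch; deg stands for your vector of bearings in degrees):
library(circular)
ang <- circular(deg, units = "degrees", template = "geographics")   # degrees, clockwise from north
plot(ang)
mean(ang)   # circular mean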
David C (I was beginning to think that this thread was only for Davids).
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-proj
15 1251 2011 GR.3.1, GR.3.8
16 1801 2011 GR.3.8
-------------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
94462_3
EU686593_2 94.4 95.6 94.8
JN166322_2 95.3 96.5 95.9
EU491340_2 96.5 97.7 96.0
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
Fro
Use predict.tree() with type="class" on the object returned by tree(). Then use
that to construct a cross tabulation against the original data. If you provide
a reproducible example with data, I could be more specific.
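In outline it would be something like this (names are placeholders since I have not seen your data):
library(tree)
fit  <- tree(Class ~ ., data = dat)
pred <- predict(fit, newdata = dat, type = "class")
table(Predicted = pred, Observed = dat$Class)   # cross tabulation of predicted vs. observed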
-----
David L Carlson
Dep
42 72 44 90 43
3 37 32 44 48 71 46 89 42
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Beh
You probably want function persp3d() in package rgl. You have to define the
formula as a function and specify the limits correctly, but you were close:
> library(rgl)
> persp3d(function(x, y) x^2+y^2, xlim=c(-3, 3), ylim=c(-3, 3))
-
David L Carlson
Depa
rsicolor",..: 1 1 1 1 1 1 1 1
1 1 ...
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Sam Albers
Sent: Tuesday, August 25, 20
No regex, and not much less complicated than your original, but a different
approach:
> i <- seq(1, nchar(str), 2)
> paste0(mapply(substr, str, i, i), collapse="")
[1] "ACEG"
---------
David L Carlson
Department of Anthropology
Texas A&
("ID","Nom_ech"))
If this is the expected outcome, the problem is the NA values in repet. I
changed them to 0 since you did not have any 0 entries in the data (otherwise
you could use 999 or some other value that does not occur in the data). Change
them back after running resha
pagne-Ardenne
GEGENAA - EA 3795
CREA - 2 esplanade Roland Garros
51100 Reims, France
+33(0)3 26 77 36 89
ivan.calan...@univ-reims.fr
https://www.researchgate.net/profile/Ivan_Calandra
On 08/09/15 16:23, David L Carlson wrote:
> I have not followed this thread closely, but this seems to work:
The Wikipedia article gives a simple formula based on the number of discordant
pairs. You can get that from the ConDisPairs() function in package DescTools.
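Roughly (a sketch, assuming two ordinal variables x and y; check the element names on your version of DescTools):
library(DescTools)
cd <- ConDisPairs(table(x, y))
cd$C   # concordant pairs
cd$D   # discordant pairs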
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-
= .4.
$pi.c and $pi.d are used to compute C and D as follows:
> sum((xy * cd$pi.c))/2
[1] 6
> sum((xy * cd$pi.d))/2
[1] 4
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: Ragia Ibrahim [mailto:ragi...@hotmail.com]
Sent: Saturday, Septe
e Var 0.287 0.530 0.712 0.878 1.000
> pc$sdev^2
   Comp.1    Comp.2    Comp.3    Comp.4    Comp.5
1.4362072 1.2145055 0.9068555 0.8315685 0.6108632
> # Now the sum of the squared loadings equals the
> # squared standard deviation (aka the eigenvalues)
---
726
Cumulative Proportion 0.2872414 0.5301425 0.7115137 0.8778274 1.000
David
-Original Message-
From: Marcelo Kittlein [mailto:kittl...@mdp.edu.ar]
Sent: Monday, September 14, 2015 1:28 PM
To: David L Carlson
Subject: Re: [R] Error in principal component loadings calculation
Thanks David
hist(x)
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Rui Barradas
Sent: Tuesday, September 15, 2015 12:41 PM
To: ted.hard...@wl
You could use match() and avoid ifelse():
> dat <- data.frame(ASB = c(LETTERS[1:3]), Flow=c(11.51, 9.2, 10.5),
> stringsAsFactors=FALSE)
> cat <- LETTERS[1:3]
> mult <- c(.1, .15, .2)
> dat$Flow * mult[match(dat$ASB, cat)]
[1] 1.151 1.380 2.100
-----
" , "'+'(5,", "sqrt(")
> sapply(parse(text=paste0(mult[match(dat$ASB, cat)], dat$Flow, ")")), eval)
[1] 23.02000 14.2 3.24037
David
-Original Message-
From: Bert Gunter [mailto:bgunter.4...@gmail.com]
Sent: Tuesday, September 15, 2015 4
"Med" "Max"
> tbl
  wool tension breaks.Min breaks.Med breaks.Max
1    A       L         25         51         70
2    B       L         14         29         44
3    A       M         12         21         36
4    B       M         16         28         42
5    A       H
There is a version of sample especially for integers:
> Q1 <- matrix(sample.int(4, 200, replace=TRUE), 200)
> str(Q1)
int [1:200, 1] 1 4 4 2 3 3 4 4 2 3 ...
-----
David L Carlson
Department of Anthropology
Texas A&M University
College Station,
t you want.
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of John McKown
Sent: Friday, September 18, 2015 9:31 AM
To:
18 19
13 18 19
> dst2[idx, idx]
         13       18       19
13       NA 2.272407 3.606054
18 2.272407       NA 1.578150
19 3.606054 1.578150       NA
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
You defined x and y in your original email as:
> x<-rnorm(20)
> y<-rnorm(20)
>
> mm<-as.matrix(cbind(x,y))
>
> dst<-(dist(mm))
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
TRUE
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David Winsemius
Sent: Saturday, September 26, 2015 11:07 AM
To: Michael Eisenring
Cc: 'r-help
ichael.eisenr...@gmx.ch]
Sent: Monday, September 28, 2015 10:40 AM
To: David L Carlson
Cc: David Winsemius; 'r-help'
Subject: Aw: RE: [R] How to get significance codes after Kruskal Wallis test
Dear David,
Thanks for your answer.
Another member of the R list pointed out that one can actua
0 10
-----
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Dimitri
Liakhovitski
Sent: Wednesday, September 30, 2015 10:22 A
4 22 20 30 29
1961 26 9 10 18 18 11 18 14 24 28 30 31
1962 22 14 19 2 18 19 27 26 26 29 15 28
1963 27 17 15 4 9 23 16 24 19 28 30 22
1964 15 25 9 13 19 14 23 20 24 30 25 27
1965 13 21 12 10 21 24 22 21 28 23 28
There is a simple way to get closer to how a floating point number is stored in
R with dput():
> dput(min(dataset$gpa))
1.8997615814
> dput(dataset$gpa[290])
1.8997615814
So you can see, the minimum is not 1.9, just very close to 1.9.
-
D
.
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Marco Otoya
Chavarria
Sent: Tuesday, October 6, 2015 10:15 AM
To: r-help@r-project
4
44 5 - 80
45 5 - 90
46 6 - 60
47 6 - 72
48 6 - 80
49 6 - 90
50 7 - 70
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Beha
0 0.00 0.00
8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
9 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
This puts everything on the diagonal and upper triangle. To get the lower
triangle just use
> tbl <- xtabs(~X2+X1, mat2)
-----
David L Carlson
Depa
d1.plt$cols, the column coordinates.
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Luca Meyer
Sent: Thursday, October
9 -87.6847
> n <- nrow(Places)
> distVincentyEllipsoid(Places[1:(n-1), 3:2], Places[2:n, 3:2])
[1] 1753420 3763369 1544119 2794013
> bearing(Places[1:(n-1), 3:2], Places[2:n, 3:2])
[1] -158.98140 -66.71221 -11.57217 90.36231
David L. Carlson
Department of Anthropology
Texas A&M Uni
1 6 7 0
2 4 0 6
3 0 2 4
If the 0's are a problem:
> tbl[tbl==0] <- NA
> print(tbl, na.print=NA)
           rater.id
observation 1 2 3
          1 6 7
          2 4   6
          3   2 4
David L. Carlson
Department of Anthropology
0
[73] 21000 96000 189000 84000 24 432000 189000 432000 729000
# Or a bit more compactly
> dd <- aa[x[, c(2, 1)]] * bb[x[, c(4, 2)]] * cc[x[, c(1, 3)]]
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Messa
9.0 12 -0.635
4 F NA 9 NA
5 M 5.8 1 2.002
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Martin Canon
Sent: Sunday, October 25, 2015 7:03 AM
To: R help
Subject
Thanks for this. I knew that na.exclude() existed, but never understood how to
combine it with naresid().
David C
-Original Message-
From: William Dunlap [mailto:wdun...@tibco.com]
Sent: Monday, October 26, 2015 1:46 PM
To: David L Carlson
Cc: Martin Canon; R help
Subject: Re: [R] y2z
> signif(x, 4)
[1] 0.9148
> library(MASS)
> fractions(x)
[1] 7334/8017
> dput(x) # But x is still the same
0.914806043496355
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf
em out, but then we have 4th: 1, 2, 3; 5th: 1, 2, 3; 6th:
1, 2, 3, 4, etc so you must have some additional rule in mind to get your
answer.
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Messa
na(Zdf[, i]), arr.ind=TRUE)
+ Zdf[, i][miss]<- mn[miss[, 1]]
+ }
>
> Zdf
ID A1 A2 A3 B1 B2 B3 C1 C2 C3 C4
1 b 4 5 4.5 2.0 3.0 4 5 1 3 3
2 c 4 5 1.0 3.5 3.0 4 5 1 3 2
3 d 3 5 1.0 1.0 2.5 4 5 1 3 2
4 e 4 5 4.0 5.0 4.5 4 5 1 3 2
-
I think you can use predict.psych() in package psych. Since you analyzed a
correlation matrix with fa() it does not have access to the original data.
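If you can rerun fa() on the raw data, something like this should work (untested; dat and newdat are placeholders):
library(psych)
fit <- fa(dat, nfactors = 2)        # fit on the data, not just the correlation matrix
head(fit$scores)                    # factor scores for the original observations
sc <- predict(fit, data = newdat)   # scores for new observations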
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-
lways Crystal Reports format, unless the program is using the same
reporting engine."
If it is a Crystal Reports file, the portion needed would likely have to be
converted to .csv unless
-----
David L Carlson
Department of Anthropology
Texas A&M University
C
d circle) if above the line and
# 1 (open circle) if below the line
sym <- ifelse(outputs, 16, 1)
plot(y~x, data, pch=sym)
abline(a=fin_hyp$constant, b=fin_hyp$slope)
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: R-help [mailto:r-help-boun...@
$x + z[2])
> outputs
[,1] [,2] [,3]
[1,] FALSE TRUE FALSE
[2,] FALSE TRUE FALSE
[3,] FALSE TRUE TRUE
[4,] TRUE TRUE FALSE
[5,] TRUE TRUE TRUE
[6,] TRUE TRUE TRUE
The first column is the result for the first equation (row in fin_hyp) and so
on.
David L. Carlson
Department of Anthropol
addmargins(xtabs(~Country+STATUS, test), 2)
       STATUS
Country L W Sum
  FRA   2 3   5
  GER   1 3   4
  SPA   2 1   3
  UNK   1 2   3
  USA   1 2   3
I'll let you figure out how to get the last column.
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Me
I used your code but deleted sep="\t" since there were no tabs in your email
and added the fill= argument I mentioned before.
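For reference, that is just (file name illustrative):
dat <- read.table("yourfile.txt", header = TRUE, fill = TRUE)   # fill = TRUE pads short rows with NA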
David
Original message
From: Ashta
Date: 11/14/2015 6:40 PM (GMT-06:00)
To: David L Carlson
Cc: R help
Subject: Re: [R] Ranking
Thank
an R script
file. Given the cost of a single business license for STATA in the US for
moderate-sized datasets of $1,195 (perpetual) versus the cost of R, $0
(perpetual), it would seem to be worth the bother.
-
David L Carlson
Department of Anthropology
Texas A
y J H Maindonald - https://cran.r-project.org/doc/contrib/usingR.pdf
R for Beginners by E Paradis -
https://cran.r-project.org/doc/contrib/Paradis-rdebuts_en.pdf
The R Guide by W J Owen -
https://cran.r-project.org/doc/contrib/Owen-TheRGuide.pdf
-
David L Carlson
Depar