created when you used one of the read.*() functions. Use str(samples) to see
what you are dealing with.
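For example (a minimal sketch, assuming samples came from read.csv(); the data
here are made up):

samples <- read.csv(text = "id,value\n1,3.2\n2,4.8")  # stand-in for your file
str(samples)
# 'data.frame':   2 obs. of  2 variables:
#  $ id   : int  1 2
#  $ value: num  3.2 4.8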
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help [mailto:r-
32
----
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Engin YILMAZ
Sent: Sunday, October 29, 2017 1:49 PM
To: Ek Esawi
Cc: r-help@r-p
0.67403447 1.79825459
The decimal point is at the |
-2 | 5
-2 | 4
-1 |
-1 | 432000
-0 | 87755
-0 | 442110
0 | 001244
0 | 556789
1 | 113
1 | 5788
> # Success
Depending on your operating system, you may also be able to save the output
with File | S
'
Please let me know if I have used the function in the right way.
Thank you
Priya
On Wednesday, 1 November 2017 9:32 PM, David L Carlson
wrote:
Let's try a simple example.
> # Create a script file of commands
> # Note we must print the results of quantile explicitly
>
dates[3] - dates[1])
Time difference of 2 days
# But
> with(myData, p_dates[3] - p_dates[1])
Error in p_dates[3] - p_dates[1] :
non-numeric argument to binary operator
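A likely cause, sketched here with made-up values, is that p_dates holds
character strings rather than Date objects; converting with as.Date() before
subtracting avoids the error:

p_dates <- c("2017-01-01", "2017-01-02", "2017-01-03")  # hypothetical values
as.Date(p_dates[3]) - as.Date(p_dates[1])
# Time difference of 2 days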
-
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From
, but
ks.test() just sees that you have provided two samples, not one sample and
values along a cumulative distribution.
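The two forms, in a small sketch with simulated data:

set.seed(1)
x <- rnorm(30)
ks.test(x, rnorm(30))                  # two-sample test: compares two samples
ks.test(x, "pnorm", mean = 0, sd = 1)  # one-sample test against a theoretical CDF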
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
Fro
s a symmetric matrix. Just like
cor()
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of varin sacha via
R-help
Sent: Sunday, December 10, 2017 11:07 AM
To: Rui Barradas ; R-help Mailing L
There is also a way to do this without a loop:
> strsplit(x, "")
[[1]]
[1] "t" "e" "s" "t" "i" "n" "g"
# Or if you just want the vector
> strsplit(x, "")[[1]]
[1] "t" "e" "s&q
In addition to stem() in the graphics package, there are other implementations
of stem-and-leaf plots that add features, such as stem.leaf() in package
aplpack, which also includes a function to produce back-to-back stem-and-leaf
plots.
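For example (random data; the aplpack calls are sketched from memory, so check
the package documentation):

set.seed(1)
x <- rnorm(40)
stem(x)                                      # base graphics stem-and-leaf display
# install.packages("aplpack")                # if not already installed
# aplpack::stem.leaf(x)                      # enhanced stem-and-leaf
# aplpack::stem.leaf.backback(x, rnorm(40))  # back-to-back version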
David L
ble: Petal.Width
meansd n
setosa 0.246 0.105 50
versicolor 1.326 0.198 50
virginica 2.026 0.275 50
-------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-h
      3  4  5  6  7  8  9 10 11
[3,]  3  4  5  6  7  8  9 10 11 12
[4,]  4  5  6  7  8  9 10 11 12 13
[5,]  5  6  7  8  9 10 11 12 13 14
------------
David L Carlson
Department of Anthropolog
pace) == 0 && !missing(namespace),
fixNamespaces = c(dummy = TRUE, default = TRUE))
------------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help On Behalf Of Bond, Stephen
Try the help files:
?factor
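A quick illustration of what the help page covers (made-up data):

x <- c("low", "high", "low", "medium")
f <- factor(x, levels = c("low", "medium", "high"))  # control the level order
f
# [1] low    high   low    medium
# Levels: low medium high
table(f)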
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help On Behalf Of Saif Tauheed
Sent: Monday, April 9, 2018 11:29 AM
To: r-help@r-project
useful to
know if the missing values are concentrated in particular rows or columns so
that eliminating a few rows and columns could substantially reduce the
percentage of missing values.
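One way to check that, as a sketch (df stands in for your data frame):

rowSums(is.na(df))   # number of missing values in each row
colSums(is.na(df))   # number of missing values in each column
mean(is.na(df))      # overall proportion of missing values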
David L Carlson
Department of Anthropology
Texas A&M Univer
AM
To: David L Carlson
Subject: RE: nMDS with R: missing values
Dear Prof Carlson,
Thank you for your reply. I'm using 'vegan' with 'vegdist' and 'bray'. I have a
selection of datasets that cover different time periods (converted to
z-scores), so a record that start
debug.
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help On Behalf Of Lorenzo Isella
Sent: Wednesday, April 18, 2018 8:47 AM
To: r-help@r-project.org
Subject: [R] Reason
la to BloodPressure~Age (which makes more sense than predicting age from blood
pressure), or change the plot command to plot(BloodPressure, Age, ...) and swap
the xlab and ylab labels.
------------
David L Carlson
Department of Anthropology
Texas A&M Uni
: 5 obs. of 3 variables:
# $ Sites : Factor w/ 5 levels "Site1","Site2",..: 1 2 3 4 5
# $ temp : num 14 15 12 12.5 17
# $ precip: Factor w/ 4 levels "20","high","low",..: 2 3 4 1 1
David L Carlson
Departm
Or add the type column first and then rbind:
x <- list(A=data.frame(x=1:2, y=3:4),B=data.frame(x=5:6,y=7:8))
x2 <- do.call(rbind, lapply(names(x), function(z)
data.frame(type=z, dat[[z]])))
----
David L Carlson
Department of Anthropology
Tex
.frame")
# Generate the row indices that we need so we can vectorize the logical
operation:
idxa <- rep(1:4, each=2)
idxb <- rep(1:2, 4)
ab <- (a[idxa, ] & b[idxb, ]) == b[idxb, ]
c <- cbind(idxa, idxb)[apply(ab, 1, all), ]
c
#      idxa idxb
# [1,]    2    1
# [2,]    2    2
# [
Typo: dat[[z]] should be x[[z]]:
x2 <- do.call(rbind, lapply(names(x), function(z)
data.frame(type=z, x[[z]])))
x2
  type x y
1    A 1 3
2    A 2 4
3    B 5 7
4    B 6 8
David C
-Original Message-
From: R-help On Behalf Of David L Carlson
Sent: Wednesday, May 2, 2018 3:51 PM
clude all of the R
code you are using.
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help On Behalf Of Jayaganesh
Anbuganapathy
Sent: Tuesday, May 8, 2018 11:10 PM
To:
The tutorial is from the mathewanalytics.com website, but this post is missing.
Have you contacted the author at mathewanalyt...@gmail.com?
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Orig
",
"West")), .Names = c("Year", "Product", "Sales", "Region"), row.names = c(NA,
-15L), class = "data.frame")
It is not clear what you want in your new data frame. This one has 5 years of
data for each tape brand and you seem to want o
sent reasonably close?
What should it look like after it is transformed?
David C
From: nguy2952 University of Minnesota
Sent: Friday, June 1, 2018 1:57 PM
To: David L Carlson
Subject: Re: [R] Regroup and create new dataframe
Hi,
This is not an assignment for school.
This is a project at WORK
No HTML! Copy the list using Reply-All.
The data frame group_PrivateLabel does not contain variables called
Product_Name or Region.
David C
From: nguy2952 University of Minnesota
Sent: Friday, June 1, 2018 2:13 PM
To: David L Carlson
Subject: Re: [R] Regroup and create new dataframe
Hi
I think the OP does not realize that head() and tail() do not print anything
themselves. They extract the first or last values/rows, and only when the result
is not assigned to an object at the top level is it automatically passed to
print().
Redefining print.data.frame would also fix that problem.
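A small demonstration of the difference:

for (i in 1:2) head(mtcars, 2)          # prints nothing inside the loop
for (i in 1:2) print(head(mtcars, 2))   # explicit print() shows the rows
x <- head(mtcars, 2)                    # assigned, so nothing is printed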
David L. Carlson
Department of
Also look at the DescTools package for the functions KendallTauA(),
KendallTauB(), and StuartTauC().
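For example (a sketch with made-up vectors; see the DescTools help pages for the
full argument lists):

# install.packages("DescTools")
library(DescTools)
x <- c(1, 2, 2, 3, 4)
y <- c(2, 2, 3, 3, 5)
KendallTauA(x, y)
KendallTauB(x, y)   # corrects for ties
StuartTauC(x, y)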
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help On Behalf Of Jeff Reic
larger magnitudes will
determine the groups more than the variables with the smaller magnitudes.
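A minimal sketch of the usual remedy, standardizing the columns first (dat is a
hypothetical numeric data frame):

dat_scaled <- scale(dat)                # each column now has mean 0, sd 1
km <- kmeans(dat_scaled, centers = 3)   # groups no longer dominated by scale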
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Jeff Reichman
Sent: Thursday, June 21, 201
at2
#            A         B
# 1 0.9148060 0.4577418
# 2 0.9370754 0.7191123
# 4 0.8304476 0.2554288
# 5 0.6417455 0.4622928
# 8 0.134 0.1174874
# 9 0.6569923 0.4749971
# 10 0.7050648 0.5603327
Notice that whichever one we use, the row numbers match the original data frame.
David L. C
This is your answer:
> str(hold)
Classes 'summaryDefault', 'table' Named num [1:6] -2.602 0.636 1.514 1.54
2.369 ...
..- attr(*, "names")= chr [1:6] "Min." "1st Qu." "Median" "Mean" ...
hold is a table of named numbers, i.e. a vector with a names attribute. It is
not a data.frame so it does
41083 1.000 0.9342174
disp -0.8138289 0.9342174 1.000
> Y$n
     mpg cyl disp
mpg    5   5    5
cyl    5   5    5
disp   5   5    5
> Y$P
            mpg        cyl       disp
mpg          NA 0.02010207 0.09368854
cyl  0.02010207         NA 0.02005248
disp 0.09368854
(m), type.convert, as.is=TRUE))
# Ulrik's solution but without the pipes. Shows why you need two as_tibble() calls
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-b
After creating ppdat and ppdat$Valbin, aggregate() will get you the churn
proportions:
> aggregate(Churn~Valbin, ppdat, mean)
Valbin Churn
1 (20.9,43.7] 0.833
2 (43.7,66.3] 0.000
3 (66.3,89.1] 0.500
David L. Carlson
Department of Anthropology
Texas A&M Uni
important information.
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Santiago Bueno
Sent: Monday, April 17, 2017 10:29 PM
To: R-help@r-project.org
Subject: [R] R help
I need help with R,
could also use
Cum_RespRate <- (cum_R/cum_n)*100
-----
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of BR_emai
lto:b...@dmstat1.com]
Sent: Thursday, April 20, 2017 4:31 PM
To: David L Carlson ; r-help@r-project.org
Subject: Re: [R] Looking for a package to replace xtable
David:
All is perfect, almost - after I ran your corrections.
Is there a way I can have more control of the column names, i.e.,
not be restrict
1's and 0's and those need to be randomized, sample(data) will do it for you.
Then those numbers are replicated 10 times. Why not just select 500 values
using rbinom() initially?
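That is, something along these lines (the probability is chosen arbitrarily
here):

set.seed(42)
y <- rbinom(500, size = 1, prob = 0.5)  # 500 random 0/1 values in one step
table(y)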
David C
-Original Message-
From: BR_email [mailto:b...@dmstat1.com]
Sent: Friday, April 21, 201
I've attached a modification of your script file (called .txt so it doesn't get
stripped). See if this does what you want.
David C
-Original Message-
From: Bruce Ratner PhD [mailto:b...@dmstat1.com]
Sent: Friday, April 21, 2017 3:46 PM
To: David L Carlson
Cc: r-help@r-p
u wish to include
fix(choose)
# Your list of variables will be the vector mycols
mycols <- choose$cols[choose$select==1]
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of BR_email
Sent:
5
5    y    y    0
6    z    y   NA
7    x    z   67
8    y    z   23
9    z    z    0
> moredata.df[order(moredata.df$Freq, decreasing=TRUE), ]
  Var1 Var2 Freq
7    x    z   67
8    y    z   23
4    x    y    5
1    x    x    0
5    y    y    0
9    z    z    0
2    y    x   NA
3    z
     y    y    0
6    z    y   NA
7    x    z   67
8    y    z   23
9    z    z    0
David C
From: abo dalash [mailto:abo_d...@hotmail.com]
Sent: Sunday, April 30, 2017 10:09 AM
To: David L Carlson ; r-help@R-project.org
Subject: Re: [R] Finding nrows with specefic values&converting a matrix
0 NA
$ z: int 67 23 0
> data.frame(as.table(as.matrix(mydf)))
  Var1 Var2 Freq
1    x    x    0
2    y    x   NA
3    z    x   NA
4    x    y    5
5    y    y    0
6    z    y   NA
David C
From: abo dalash [mailto:abo_d...@hotmail.com]
Sent: Sunday, April 30, 2017 11:13 AM
To: David L Car
No. You are not using the correct command. Time to read the manual:
?write.table
You will find the answer to your question by looking at the alternate forms of
write.*().
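For instance (mydf stands in for your data frame; the file names are arbitrary):

write.csv(mydf, file = "mydf.csv", row.names = FALSE)
write.csv2(mydf, file = "mydf2.csv")   # ';' separator, ',' as decimal mark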
David L. Carlson
Department of Anthropology
Texas A&M University
From: abo dalash [mailto:abo_d...@hotmail.com]
erly
formatted and contains the number of records you think it does?
David C
From: abo dalash [mailto:abo_d...@hotmail.com]
Sent: Sunday, April 30, 2017 7:20 PM
To: David L Carlson
Subject: Re: [R] Finding nrows with specefic values&converting a matrix into a
table
I have been trying to
You could try installing package ExtDist and using distribution Beta_ab in that
package.
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun.
even though it
does not appear in the data:
> set.seed(42)
> x <- sample.int(6, 10, replace=TRUE)
> table(x)
x
1 2 4 5 6
1 1 3 3 2
> y <- factor(x, levels=1:6)
> y
[1] 6 6 2 5 4 4 5 1 4 5
Levels: 1 2 3 4 5 6
---------
David L Carlson
Department
What Rui said, but as important, you have four columns in your data called
"town", "year", "revenue", and "supply". You do not have a column called
"time".
-----
David L Carlson
Department of Anthropology
T
Actually, r is a vector, not an index value. You need
apply(compare_data, 1, function(r) cor(r, t(test_data)))
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mail
Actually, not using apply() would be faster and simpler
cor(t(compare_data), t(test_data))
David C
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David L Carlson
Sent: Friday, May 12, 2017 10:48 AM
To: Ismail SEZEN ; Micha Silver
Cc: R-help@r
M F 52 18 28
# 26 4 5 F F 33 73 22
# 27 4 6 F F 33 66 29
# 28 4 7 F F 33 18 47
# 34 5 6 F F 73 66 55
# 35 5 7 F F 73 18 7
# 42 6 7 F F 66 18 14
--
icates
dta12 <- data.frame(dta12, dsim=as.vector(dsim)) # Typo was here
dta12 <- dta12[, c("ID1", "ID2", "gender1", "gender2", "age1", "age2", "dsim")]
dta12
David C
-Original Message-
From: R-help [mailto:r-help-bo
64 65 66 67 68 69 70
$ X8 : int [1:2, 1:5] 71 72 73 74 75 76 77 78 79 80
$ X9 : int [1:2, 1:5] 81 82 83 84 85 86 87 88 89 90
$ X10: int [1:2, 1:5] 91 92 93 94 95 96 97 98 99 100
---------
David L Carlson
Department of Anthropology
Texas A&M University
College St
est[, "id"]), test[, "id"], sample, size=1)
test[indx, ]
#      xcor ycor id
# [1,]    4    6  1
# [2,]    4    2  2
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
15
[4,] 47.15747 44.87350 48.67912
[5,] 48.28083 51.24702 44.78204
[6,] 45.69531 45.71741 48.25982
[7,] 47.42731 43.86328 55.15668
[8,] 54.55450 55.67621 47.28236
[9,] 56.42899 47.26354 51.90019
[10,] 50.89833 41.99718 50.46564
[11,] 55.81824 51.63207 53.83847
[12,] 50.88440 53.68807 44.30
] [,3] [,4] [,5]
# 25 - 34 11 15 NA NA NA
# 25 - 77 15 85 NA NA NA
# 34 - 39 11 NA NA NA NA
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...
lts) <- c("mealAcode", "mealBcode", "id")
Results
This pre-allocates space for a million rows so it should be even faster, but it
will fail if there are more rows, so guess high.
There are some specialized packages such as data.table and dplyr in R that
might b
19 50 21
# 5 1981 2 1 17 49 25
# 6 1981 2 2 20 47 23
# 7 1981 2 3 21 52 27
The attached .png image file shows you how to send plain text emails to r-help
using gmail.
-
David L Car
How about?
Trade <- xtabs(FLOW ~ iso_o + iso_d + year, dta)
Gives you a 3d table with FLOW as the cell entry. Then
apply(Trade, 1:2, sum, na.rm=TRUE)
Gives you a 2d table with the total flow
David L. Carlson
Department of Anthropology
Texas A&M University
-Original Message-
Refer to columns by position rather than name and everything is simpler:
for (i in 2:4 ) {
test[, i] <- test[, i] + test[, i-1]
}
Note your approach fails on the first line since you start with i=1 and there
is no Day0. Another approach that is simpler:
t(apply(test, 1, cumsum))
Davi
ngth(dfr.sp)
# We get 8 groups, 4x2
-------------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Jim Lemon
Sent: Thursday, June 1, 2017 2:53 AM
To: carrie w
al page for function read.csv(). One of the problems with
spreadsheets is that these extra spaces are not readily apparent.
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help
It looks like your printouts are based on the R summary() function? The
function lists the number of cases in the 5 largest categories when the
variable is coded as a FACTOR.
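A small illustration of that behavior:

x <- c("a", "b", "a", "c", "a")
summary(x)           # character vector: only length, class, and mode
summary(factor(x))   # factor: a count for each category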
David C
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David L Carlson
Sent
f2(z, 2)
all.equal(z1, z2)
# [1] TRUE
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Bert Gunter
Sent: Thursday,
erence: 0.444 >"
[2] "Mean relative difference: 0.1262209"
> all.equal(f(z,4),f2(z,4))
[1] "Attributes: < Component “dim”: Mean relative difference: 0.5714286 >"
[2] "Mean relative difference: 0.5855162"
David C
-Original Message-
Fro
have it appeared to do what I want.
Thanks again,
-Roy
On Jun 1, 2017, at 12:22 PM, David L Carlson wrote:
My error. Clearly I did not do enough testing.
z <- array(1:24,dim=2:4)
all.equal(f(z,1),f2(z,1))
[1] TRUE
all.equal(f(z,2),f2(z,2))
[1] TRUE
all.equal(f(z,3),f2(z,3))
[1] "
(DFM <- data.frame(DFM, tmat[idx, ]))
# obs      start        end   D       bin t1 t2 t3 t4 t5
# 1 1 2015-02-01 2017-01-01 700 [500,Inf) 0 0 0 0 0
# 2 2 2010-04-11 2011-01-01 265 [200,300) 0 0 1 -1 -1
# 3 3 2006-01-04 2007-05-03 484 [400,500) 0 0 0 0 1
# 4 4 2007-10-
m1 100 300 - -
# m2 - - - -
# m3 - - 400 -
# m4 - - - -
# m5 - - - -
class(MN)
# [1] "xtabs" "table"
# MN is a table. If you want a data.frame
MN <- as.data.frame.matrix(MN)
class(MN)
# [1] "data.frame"
---
that index would be to use by():
idx <- as.vector(by(Daily, Daily$wyr, function(x) rownames(x)[which.max(x$Q)]))
Daily[idx, ]
-----
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
", diag=F)
text(1, dim(M)[1] - .1, "mpg", srt=90, xpd=TRUE)
# Replace first row/colnames if you will be using M later
colnames(M)[1] <- "mpg"
rownames(M)[1] <- "mpg"
-
David L Carlson
Department of Anthropology
Texas A&
> str(Test)
'data.frame': 3 obs. of 2 variables:
$ TransitDate: Date, format: "2013-04-01" "2013-06-01" ...
$ CargoTons : int 50 40 30
> Test
  TransitDate CargoTons
1  2013-04-01        50
2  2013-06-01        40
3  2013-07-01        30
-
;ID" "Ageclass"
> levels(ind.davis$Ageclass) <- c("Adult", "Juvenile", "Sub-adult")
> levels(ind.davis$Ageclass)
[1] "Adult" "Juvenile" "Sub-adult"
> str(ind.davis)
'data.frame': 10 obs. of 2 va
dnom # These are the alternating denominators
[1] 150 200
>
> for (i in res) {
+ r <- i %% 2 + 1
+ s <- seq_len(i-1)
+ L[i] <- abs(sum(L[s] * rows[r, s]))/ dnom[r]
+ }
> L
[1] 0.1000 0.1333 0.0667 0.0889 0. 0.14814815
0.18518519
[8] 0.24691358 0.3086
How about
> difftime(LAI_simulation$Date, LAI_simulation$Date[1], units="days")
Time differences in days
[1] 0 1 2 3 4 5 6 7 8 9 10 11 12 13
-----
David L Carlson
Department of Anthropology
Texas A&M University
College Stati
.4519231    NA    NA    NA
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Thomas Adams
Sent: Wednesday, July 5, 2017 1
n are limited to the resolution of a computer screen, which is pretty low. You
probably want to plot to a graphics device that writes to a file
so that you can specify a higher resolution. The R command
?device
should get you started.
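One common approach, sketched here (the file name and size are arbitrary):

png("myplot.png", width = 6, height = 4, units = "in", res = 300)
plot(rnorm(100))
dev.off()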
-
David L Carlson
Department of Anthrop
re more columns than rows.
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Sanjay Kumar
Jaiswal
Sent: Tuesday, July
at only one site. I suspect there is a problem with your data or with the
way you have coded the data.
David L Carlson
-Original Message-
From: Sanjay Kumar Jaiswal [mailto:jaiswa...@tut.ac.za]
Sent: Tuesday, July 18, 2017 4:27 PM
To: David L Carlson
Subject: RE: [R] Redundanc
.
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Tom D. Harray
Sent: Friday, July 28, 2017 7:56 AM
To: r-help@r-project.org
Sub
it, but no one seems to be able to provide an authoritative citation
before proceeding to demonstrate that it is false.
-----
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [
but would use
some measure of association/correlation.
--------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Suz
y are still character strings.
--------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of greg holly
Sent: Tuesday, September 19, 2017 10:21 AM
To: Duncan Murdoch
     Yes     11  14  13  80
3rd  No      35 387  17  89
     Yes     13  75  14  76
Crew No       0 670   0   3
     Yes      0 192   0  20
David L Carlson
Department of Anthropology
The default margins are set as lines below, left, top, and right using
mar=c(5.1, 4.1, 4.1, 2.1). Just change the top margin to something like 1.1:
par(mfrow=c(1,2), mar=c(5.1, 4.1, 1.1, 2.1))
---
David L. Carlson
Department of Anthropology
Texas A&M Univer
dom number:
> rnd()
[1] 4.036111
> rnd()
[1] 3.88048
> rnd()
[1] 3.984268
> rnd()
[1] 3.808441
> rnd()
[1] 4.219925
------------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Mes
Just change the separator:
data(Titanic)
Titanic.df <- as.data.frame(Titanic)
boxplot(Freq~Class*Sex, Titanic.df, cex.axis=.6, sep="\n")
See attached .png.
----
David L Carlson
Department of Anthropology
Texas A&M University
College Stati
Minor modification:
fff <- function(x) as.numeric(chartr(",", ".", x))
BX <- sapply(AX, fff)
Or this keeps the original data frame:
AX[, 1:2] <- sapply(AX[, 1:2], fff)
--------
David L Carlson
Department of Anthropology
Texas A&
e tables by hand
> write.csv(dat1[Samples$A[ , 1:10], ], row.names=FALSE, file="Test.csv")
--------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77843-4352
-Original Message-
From: R-help [mailto:r-help-b
4 1
20014 1
20050 0
4391076 19990 0
20000 0
20010 0
20059 1
You should read these manual pages:
?dput
?aggregate
?xtabs
?ftable
David L Carlson
Departm
> sapply(test, gsub, pattern=",", replacement=";")
       C1      C2
"a;b;c;d" "g;h;f"
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Mes
1]],
+ ylab=lbls[colnos[i, 2]])
+ }
This plots all of the unique pairs.
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org]
ts 3 groups as well:
> plot(density(data_mat))
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Lorenzo Isella
Sent: Mon
ot;a" "A"
1.1 "a" "A"
2 "b" "A"
2.1 "b" "A"
3 "c" "A"
4 "d" "A"
> rownames(ddd.new) <- NULL # Optional - get rid of row names
> head(ddd.new)
a b
[1,] "a&qu
X3
8 4.905341 6.035104 5.089833
9 7.018424 4.391074 2.006910
14 4.721211 2.585792 6.399737
16 5.635950 5.205999 6.302543
18 2.343545 5.758163 6.038506
19 2.559533 4.273295 5.920729
20 6.320113 3.631719 5.720878
23 4.828083 6.444101 5.623518
-
David L Carlson
Departm
directory you want and then click the More tab and select "Set As
Working Directory."
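The command-line equivalent, with a hypothetical path:

setwd("C:/Users/me/project")   # adjust to your own folder
getwd()                        # confirm the change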
------------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@
23
-----
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Brittany Demmitt
Sent: Monday, June 20, 2016 12:15 PM
To: r-help@r-project.org
Subject: [R]
08
Warning message:
In chisq.test(rbind(c(transitions1), c(transitions2))) :
Chi-squared approximation may be incorrect
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help
=",value,",...)")))
}
}
Running this code will create the function.
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r
50), ylabs=rep("", 10))
---------
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of Shane Carey
Sent: Wednesday, June 22, 2016 7:39 AM
To: r-help@r-project
s. But labeling them is not
easy since the coordinates are based on the columns:
> par("usr")
[1] -6.705729 7.179791 -6.705729 7.179791
David C
-Original Message-
From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of David L Carlson
Sent: Wednesday, June 22, 201