I want to assign the value of rmean from a survfit object to a variable so
that it can be used in subsequent calculations, which is also what I
interpreted the original poster to want. I did not understand Dr. Therneau's
answer to the original poster, so I figured I would provide a simple example
No cigar. This is what I get and my session info. Any suggestions?
>
> library(survival)
> library(ISwR)
> dat.s <- Surv(melanom$days,melanom$status==1)
> fit <- survfit(dat.s~1)
> print(fit, print.rmean=TRUE)
Call: survfit(formula = dat.s ~ 1)
records n.max n.start events *rm
Well, that took a bit of detective work! Thanks. I am still not doing
something right here in my efforts to implement the "easy way". Can you
point out my error?
>
> library(survival)
> library(ISwR)
> dat.s <- Surv(melanom$days,melanom$status==1)
> fit <- survfit(dat.s~1)
> print(fit, print.rmea
Thank you for the reply Dr. Winsemius. Can you take your answer a step
further and, in the context of the simple, reproducible example, illustrate
how it is done? I would appreciate it.
Tom
--
View this message in context:
http://r.789695.n4.nabble.com/simple-save-question-tp3429148p3663645.htm
Here is a worked example. Can you point out to me where rmean is stored in
temp? Thanks.
Tom
> library(survival)
> library(ISwR)
>
> dat.s <- Surv(melanom$days,melanom$status==1)
> fit <- survfit(dat.s~1)
> plot(fit)
> summary(fit)
Call: survfit(formula = dat.s ~ 1)
time n.risk n.event survi
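For later readers, a sketch of one way to pull rmean out into a variable (using the lung data shipped with survival rather than melanom, so it is self-contained; the exact column label has varied across survival versions, hence the grep):

```r
library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = lung)
# print(fit, print.rmean = TRUE) reads its numbers from the summary table:
tab <- summary(fit)$table
tab  # a named vector; look for "rmean" (older versions label it "*rmean")
rmean <- unname(tab[grep("rmean", names(tab))[1]])
rmean
```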
Below is a SIMEX object that was generated with the "simex" function from the
"simex" package applied to a logistic regression fit. From this mountain of
information I would like to extract all of the values summarized in this
line:
.. ..$ variance.jackknife: num [1:5, 1:4] 1.684 1.144 0.85 0.62
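A simex fit is stored as a list-like object, so the matrix shown by str() should be reachable with ordinary $ extraction. A sketch, assuming the fit from the post is in a variable called mod (hypothetical name):

```r
library(simex)
names(mod)                    # lists the stored components
vj <- mod$variance.jackknife  # the num [1:5, 1:4] matrix from str()
vj[nrow(vj), ]                # e.g. its last row
```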
I recall that my problem on Windows was related to having a number of stray
versions of GTK+ installed. I went back and deleted all versions and
reinstalled the latest GTK+ and that seemed to fix things. However, when I
went to do any work of substance ggobi locked up and became unresponsive.
Neve
Seems to be a relevant and reasonable question to me.
How about
library(TeachingSampling)
SupportWR(20,4)
Tom
I know what "get a bigger sample" means. I have no clue what "ask a more
statistically meaningful question" means. Can you elaborate a bit?
Tom
Concisely, here is what I am trying to do:
#I take a random sample of 300 measurements. After I have the measurements
#I post-stratify them to 80 type A measurements and 220 type B measurements.
#These measurements tend to be lognormally distributed so I fit them to
#determine the geometric me
I can take the results of a simulation with one random variable and generate
an empirical interval that contains 95% of the observations, e.g.,
x <- rnorm(10000)
quantile(x,probs=c(0.025,0.975))
Is there an R function that can take the results from two random variables
and generate an empirical
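Since the post is cut off here, a sketch of two options a reader might want: marginal quantiles per variable (which give a rectangular region), or a normal-theory data ellipse via the car package as one way to get a joint region:

```r
set.seed(1)
x <- rnorm(5000)
y <- rnorm(5000)
# marginal 95% limits for each variable (a rectangular region):
apply(cbind(x, y), 2, quantile, probs = c(0.025, 0.975))
# a joint region: 95% data ellipse (assumes approximate bivariate normality)
library(car)
dataEllipse(x, y, levels = 0.95)
```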
I have looked through the new "Complex Surveys" book and the documentation
for the "survey" package and it appears to me that there are no functions in
"survey" that help one to design a sampling scheme. For example, in the book
section 2.8 discusses the design of stratified samples, but there is
I don't want to hijack the thread here, but since you mentioned "hover pop-up
help" can you suggest a way to turn it OFF, totally and completely?
Tom LaBone
Uwe Ligges-3 wrote:
>
>
>
> Tom La Bone wrote:
>> I have a measurement of 8165.666 and an uncertainty of 338.9741 (the
>> units of
>> both are unimportant). I can easily round the uncertainty to two
>> significant
>> digits with signif(338.9741,2),
I have a measurement of 8165.666 and an uncertainty of 338.9741 (the units of
both are unimportant). I can easily round the uncertainty to two significant
digits with signif(338.9741,2), which gives 340. Is there a function in R
that will take 8165.666 and round it to be consistent with its uncert
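I am not aware of a built-in function for this, but a sketch with base functions works: round the uncertainty to two significant digits, then round the measurement to the same decimal place:

```r
round.to.uncert <- function(x, u, sig = 2) {
  u2 <- signif(u, sig)                  # 338.9741 -> 340
  digits <- sig - 1 - floor(log10(u2))  # decimal place of u2's last significant digit
  round(x, digits)
}
round.to.uncert(8165.666, 338.9741)  # 8170
```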
My concern is that the two tests give different DW statistics for the
weighted fit and very different p-values for the same DW statistic for the
unweighted fit. Is there a "right" answer here?
Allow me to reword this question. I have performed two fits to the same set
of data below: a weighted fit and an unweighted fit. I performed the
Durbin-Watson tests on each fit using "dwtest" and "durbin.watson". For a
given fit (weighted or unweighted), should both dwtest and durbin.watson be
giv
Should "dwtest" and "durbin.watson" be giving me the same DW statistic and
p-value for these two fits?
library(lmtest)
library(car)
X <- c(4.8509E-1,8.2667E-2,6.4010E-2,5.1188E-2,3.4492E-2,2.1660E-2,
3.2242E-3,1.8285E-3)
Y <- c(2720,1150,1010,790,482,358,78,35)
W <- 1/Y^2
fit <- lm(Y ~ X, weights = W)
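For anyone hitting the same puzzle, a self-contained sketch of the comparison (the formula and weights here are my reading of the truncated post; in recent versions of car the function is called durbinWatsonTest):

```r
library(lmtest)
library(car)
X <- c(4.8509e-1, 8.2667e-2, 6.4010e-2, 5.1188e-2, 3.4492e-2, 2.1660e-2,
       3.2242e-3, 1.8285e-3)
Y <- c(2720, 1150, 1010, 790, 482, 358, 78, 35)
W <- 1/Y^2
fit.uw <- lm(Y ~ X)
fit.w  <- lm(Y ~ X, weights = W)
dwtest(fit.uw)            # lmtest: analytic p-value
durbinWatsonTest(fit.uw)  # car: bootstrapped p-value
```

The two packages compute the DW statistic from the residuals in the same way, but they get their p-values differently (lmtest uses an analytic approximation, car bootstraps), which is one reason the reported p-values need not agree for the same statistic.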
I want to print characters from the symbol font (or perhaps even Wingdings)
in an rgl 3d plot, but I am having no luck. So, what do I have to do in
order to get this snippet to print out a character from the symbol font?
library(rgl)
open3d()
text3d(1,1,1,"a",adj=c(0.5,0.5),cex=10,family="symbol"
The help page and vignette for summary.rq(quantreg) mention that there are
three different bootstrap methods available for the se="bootstrap" argument,
but I can't figure out how to select a particular method. For example, if I
want to use the "xy-pair bootstrap", how do I indicate this in the call to
summary.rq?
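If I remember the quantreg plumbing correctly, extra arguments to summary.rq are passed through to boot.rq, whose bsmethod argument selects the bootstrap flavor ("xy" being the xy-pair bootstrap); treat the exact option names as version-dependent and check ?boot.rq:

```r
library(quantreg)
data(engel)
fit <- rq(foodexp ~ income, data = engel, tau = 0.5)
summary(fit, se = "boot", bsmethod = "xy")  # xy-pair bootstrap
```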
I know this is easy, but I am stumped:
> gsub("0","K","8.00+00")
[1] "8.KK+KK"
> gsub("+","K","8.00+00")
Error in gsub("+", "K", "8.00+00") : invalid regular expression '+'
In addition: Warning message:
In gsub("+", "K", "8.00+00") :
regcomp error: 'Invalid preceding regular expression'
I d
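The resolution later readers will want: + is a regular-expression metacharacter, so either escape it or turn pattern matching off with fixed = TRUE:

```r
gsub("+", "K", "8.00+00", fixed = TRUE)  # "8.00K00"
gsub("\\+", "K", "8.00+00")              # same result, escaping instead
```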
I just installed Ubuntu 9.04 but there does not seem to be repository for
binaries for this version. Are there going to be such repositories set up in
the near future?
Tom
What do folks think about the c/gsl/Apophenia combination as one of the other
stat packages?
http://modelingwithdata.org/
http://apophenia.sourceforge.net/
Tom
Greg Snow-2 wrote:
>
>
> I also recommend that people doing statistics know at least the basics of
> 3 different stats packages
> Tom La Bone wrote:
>> I can't seem to find MASS for the latest version of R. Is it coming or
>> has
>> the name of the package changed?
>>
>> Tom
>
> Where did you look?
>
> It is in the sources (as part of VR), and also in my (SUSE)
I can't seem to find MASS for the latest version of R. Is it coming or has
the name of the package changed?
Tom
Back in 2005 I had been doing most of my work in Mathcad for over 10 years.
For a number of reasons I decided to switch over to R. After much effort and
help from you folks I am finally "thinking" in R rather than thinking in
Mathcad and translating to R. Anyway, the only task I still us
The following code generates 85% and 95% bivariate normal confidence
ellipses using the data.ellipse routine in the car package. Can anyone
suggest a routine in R that will tell me the confidence ellipse that would
intersect the green point, or more generally, any specified point on the
plot?
To
The mle2 function (bbmle library) gives an example something like the
following in its help page. How do I access the coefficients, standard
errors, etc in the summary of "a"?
> x <- 0:10
> y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
> LL <- function(ymax=15, xhalf=6)
+ -sum(stats::dpois(y, lambda=ymax/(1+x/xhalf), log=TRUE))
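Assuming the fit object is created with a <- mle2(LL) as in the help page, the summary is an S4 object and coef() on it should return the coefficient matrix (a sketch; accessor behavior can vary between bbmle versions):

```r
library(bbmle)
x <- 0:10
y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
LL <- function(ymax = 15, xhalf = 6)
  -sum(stats::dpois(y, lambda = ymax/(1 + x/xhalf), log = TRUE))
a <- mle2(LL)
coef(a)                   # point estimates from the fit itself
ctab <- coef(summary(a))  # matrix with Estimate, Std. Error, z value, Pr(z)
ctab[, "Std. Error"]
```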
I have two collections of dates and I want to figure out what dates they have
in common. This is not giving me what I want (I don't know what it is giving
me). What is the best way to do this?
Tom
> data1
[1] "1948-02-24 EST" "1949-04-12 EST" "1950-05-29 EDT" "1951-05-21 EDT"
[5] "1951-12-20 E
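A sketch of what likely went wrong and one fix: intersect() drops the Date/POSIXct class (it returns the underlying numbers), and mixing EST/EDT POSIXct values can also defeat equality tests, so converting to Date and subsetting with %in% is usually safer:

```r
d1 <- as.Date(c("1948-02-24", "1949-04-12", "1950-05-29", "1951-05-21"))
d2 <- as.Date(c("1950-05-29", "1951-12-20"))
d1[d1 %in% d2]                                     # keeps the Date class
as.Date(intersect(d1, d2), origin = "1970-01-01")  # intersect() needs re-classing
```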
Duncan Murdoch-2 wrote:
>
> Whoops, sorry, Uwe's advice is right. Text help is only the default
> when run outside a console; I use Cygwin, which doesn't look like a
> console to R.
>
I am using eclipse with StatET to run R under Ubuntu 8.04 and Windows XP
Pro. I would like both implementat
I would like to use Rterm in Windows XP and have the help files appear in
text format in the terminal rather than in the html popup window. For
example, I would like to enter help(lm) and get the text to appear in the
terminal window. Can anyone suggest a way to do this? Thanks and Happy Hew
Year.
Gavin Simpson wrote:
>
>
> It says that the two arguments have different numbers of observations.
> The reason for which should now be pretty obvious as you provided a
> single Date whereas airquality has 153 observations.
>
Thanks. I did look at ?transform but I was a bit confused because t
I would like to add a column to the airquality dataset that contains the date
1950-01-01 in each row. This method does not appear to work:
> attach(airquality)
> data1 <- transform(airquality,Date=as.Date("1950-01-01"))
Error in data.frame(list(Ozone = c(41L, 36L, 12L, 18L, NA, 28L, 23L, 19L, :
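A sketch of two work-arounds, on the reading that data.frame() here refused to recycle the length-one Date (the attach() is also unnecessary):

```r
# simplest: column assignment recycles the scalar to every row
data1 <- airquality
data1$Date <- as.Date("1950-01-01")
# or keep transform(), but supply a full-length vector explicitly
data2 <- transform(airquality, Date = rep(as.Date("1950-01-01"), nrow(airquality)))
head(data1$Date)
```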
Look at the sand package, which is available at
http://www.csm.ornl.gov/esh/statoed/
and the NADA package, which is available from CRAN. One or both may have
items of interest.
Tom
Zita wrote:
>
> Hi.
>
> I am looking for a function for left-truncated data.
> I have one data set with 2
Can someone recommend a package in R that will perform a two-sample
Kolmogorov–Smirnov test on left censored data? The package "surv2sample"
appears to offer such a test for right censored data and I guess that I can
use this package if I flip my data, but I figured I would first ask if there
was
This is very nice also. I am going to use this approach in the future when I
use lm. However, I can't seem to get it to work the way I want with cenmle. I
will continue to experiment. Thanks folks for the suggestions.
Tom
David Winsemius wrote:
>
>
> On Nov 12, 2008, at 8:48 A
Oh, I like your answer better than mine! Thanks.
Tom
Richard Cotton wrote:
>
>> I figured it out. In case anyone else ever has this question -- given
> the
>> following output from cenmle:
>>
>> >fit.cen <- cenmle(obs, censored, groups)
>> > fit.cen
>>Value Std. Error
-value is (for example):
p.value <- summary(fit.cen)[[9]][[11]]
Tom
Tom La Bone wrote:
>
> The cenmle function is used to fit two sets of censored data and test if
> they are significantly different. I can print out the results of the
> analysis on the screen but can't see
The cenmle function is used to fit two sets of censored data and test if they
are significantly different. I can print out the results of the analysis on
the screen but can't seem to figure out how to access these results in R and
assign them to new variables, e.g., assign the slope calculated wit
lines of data, assigning values from the table to the 43,000 elements of a
vector took 6 seconds whereas assigning values from the table to 43,000
elements of a dataframe took 21 minutes. Why is there such a huge
difference?
Tom
Tom La Bone wrote:
>
> Assume that I have the dataframe
On Oct 14, 2008 at 10:58 AM, Tom La Bone
> <[EMAIL PROTECTED]>wrote:
>
>>
>> Assume that I have the dataframe "data1", which is listed at the end of
>> this
>> message. I want to count the number of lines that each person has for each
>> year. For e
Assume that I have the dataframe "data1", which is listed at the end of this
message. I want to count the number of lines that each person has for each
year. For example, the person with ID=213 has 15 entries (NinYear) for 1953.
The following bit of code calculates NinYear:
for (i in 1:length(data1$
Is there an elegant way in R to change a number reported as a less-than
number in text format, "<1" for example, to the numeric equivalent 1? I have
been trying to use as.numeric, but have not come up with anything clever
yet.
Tom
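One idiom that does the job: strip the "<" with sub() and then convert (it also leaves values without a "<" untouched):

```r
x <- c("<1", "2.5", "<0.3")
as.numeric(sub("<", "", x, fixed = TRUE))  # 1.0 2.5 0.3
```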
To see some useful discussions of this problem in various settings search
R-Help for the phrase "cannot allocate vector of size".
Tom
ram basnet wrote:
>
> Hi R users,
>
> I am doing multiscale bootstrapping for clustering through pvclust
> package. I have large data set (observations 182 a
I am running R 2.7.1 on an eeepc with the standard Xandros Linux. I
install/upgrade R binaries using synaptic from the
http://cran.r-project.org/bin/linux/debian/etch-cran/ repository. R 2.7.2
does not appear to be available from this repository. Will it be available
soon or is there another repos
>> on 08/08/2008 03:58 PM milton ruser wrote:
>>> Dear Prof. B.Ripley,
>>>
>>> If :
>>>> .Machine$sizeof.pointer
>>> [1] 4
>>>> 2*2*2*2
>>> [1] 16
>>>
>>> So I am running with 16 bits?
I think I installed 64-bit Ubuntu 8.04 and 64-bit R on my computer. I can't
seem to find anything that says "this is 64-bit Ubuntu" or "this is 64-bit
R". How do I tell what version of R I am running?
Tom
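A few checks from within R itself (the values shown in the comments are what a 64-bit build would typically report):

```r
.Machine$sizeof.pointer  # 8 on a 64-bit build of R, 4 on 32-bit
R.version$arch           # e.g. "x86_64" on 64-bit Ubuntu
sessionInfo()            # platform string, e.g. x86_64-pc-linux-gnu
```

At the shell, `uname -m` reports x86_64 when the Ubuntu kernel itself is 64-bit.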
After doing some reading about 64-bit systems and software I am still
somewhat uncertain about some things. I have a Dell Dimension XPS 400 with a
dual core Intel Pentium D 940 (3.2 GHz) and 4 Gb of memory. I currently dual
boot the system with XP Professional and Ubuntu 8.04 (32 bit). If I simply
I have distilled my bootstrap problem down to this bit of code, which
calculates an estimate of the 95th percentile of 7500 random numbers drawn
from a standard normal distribution:
library(boot)
per95 <- function(annual.data, b.index) {
  sample.data <- annual.data[b.index]
  return(quantile(sample.data, probs = 0.95))
}
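For reference, a self-contained version of the calculation with the truncated pieces filled in (the R = 999 and the percentile-interval choice are guesses at the original intent):

```r
library(boot)
per95 <- function(annual.data, b.index) {
  sample.data <- annual.data[b.index]
  quantile(sample.data, probs = 0.95)
}
set.seed(42)
x <- rnorm(7500)
b <- boot(x, per95, R = 999)
boot.ci(b, type = "perc")  # percentile bootstrap CI for the 95th percentile
```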
> The objects are a reasonable size and the memory size also
> seems reasonable. That is what I usually go by to see if there are
> large objects in my memory. If it was showing that R had 1.2GB of
> memory allocated to it, I wonder if there might be a memory leak
> somewhere.
>
> On
since that appears
> to be the only variable in your loop that might be growing.
>
> On Fri, Aug 1, 2008 at 12:09 PM, Tom La Bone <[EMAIL PROTECTED]>
> wrote:
>>
>>
>> I have a data file called inputdata.csv that looks something like this:
>>
I have a data file called inputdata.csv that looks something like this:
ID YearResult Month Date
1 71741954 103 540301
2 7174195443 540322
3 20924 1967 4 2 670223
4 20924
There appears to be a very promising response surface package being discussed
at useR-2008, but I have been unable to find the package on CRAN or contact
the authors.
www.statistik.uni-dortmund.de/useR-2008/abstracts/Sztendur+Diamond.pdf
Tom
Jinsong Zhao wrote:
>
> Hi,
>
> Is there a pack
On further experimentation I find that "points" (via points3d) serve my
purpose well (instead of the much prettier but more troublesome spheres).
The default "point" appears to be a square. Is there a way to make it a
circle?
Tom
Duncan Murdoch-2 wrote:
>
> On
After looking around a bit more I found the example I was looking for --
plotlm3d, which I found on the R wiki
http://wiki.r-project.org/rwiki/doku.php?id=graph_gallery:new-graphics
The original author was John Fox, and it was modified by Jose Claudio Faria
and Duncan Murdoch. Below is a si
Can anyone point me towards a tutorial on using the rgl graphics package?
Something with lots of examples would be nice. Thanks.
Tom
I tried to run the following example from section 4.1.4 of the "Scatterplot3d -
an R package for Visualizing Multivariate Data" vignette and got an error on
the part that plots the regression plane:
> library(scatterplot3d)
> data(trees)
> s3d <- scatterplot3d(trees, type = "h", color = "blue",
+
The following code was adapted from an example Vincent Zoonekynd gave on his
web site http://zoonek2.free.fr/UNIX/48_R/03.html:
n <- 1000
x <- rnorm(n)
qqnorm(x)
qqline(x, col="red")
op <- par(fig=c(.02,.5,.5,.98), new=TRUE)
hist(x, probability=T,
col="light blue", xlab="", ylab="", main="",
I believe that Deming regression also goes by the name of orthogonal
regression, which can be performed in R using pca methods. If you do a
search of the list for "orthogonal regression" you can see the previous
discussions of this topic.
Tom
Dexter Riley wrote:
>
> Hi all. Has anyone ever do
Is there a built-in function in R that will generate simultaneous confidence
and prediction bands for linear regression?
Tom
I am running R in the konsole of the Kate editor on an eeePC 900 (standard
Xandros OS). When a plot is generated it is initially too large to fit in
the screen, but if I click on the plot it automatically resizes to fit
properly. I have no idea if this is a default behavior.
Tom
Millo Giovann
I would like all three of these calculations to give an answer in days (or at
least the same units):
> as.POSIXct("1971-08-01 00:00:00") - as.POSIXct("1971-08-01 00:00:00")
Time difference of 0 secs
>
> as.POSIXct("1971-08-01 12:00:00") - as.POSIXct("1971-08-01 00:00:00")
Time difference of 12 hours
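For later readers: difftime() takes a units argument, and arithmetic on Date objects is always in days, so either of these keeps the units fixed:

```r
t1 <- as.POSIXct("1971-08-01 00:00:00")
t2 <- as.POSIXct("1971-08-01 12:00:00")
difftime(t2, t1, units = "days")               # Time difference of 0.5 days
as.Date("1971-08-02") - as.Date("1971-08-01")  # Time difference of 1 days
```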
FYI
This is a nice package that is not on CRAN for some reason. You can get it
at
http://www.csm.ornl.gov/esh/statoed/
Tom
Does R have a function for doing an ARAR analysis of a time series?
Tom
Thanks for all of the suggestions. The key here seems to be using the
"par" function to change the coordinate system like so:
plot(rnorm(100), rnorm(100))
op <- par("usr")
par(usr = c(0, 1, 0, 1))
text(0.5,0.5,"TEST")
par(usr = op)
Prof Ripley commented that this approach will also work on l
What is the simplest way to specify the location of text in a scatter plot
(created using the plot function) in relative terms rather than specific x-y
coordinates? For example, rather than putting text at (300,49) on a plot,
how do I put it 1/10 of the way over from the y axis and 1/2 of the way
This is a nice list:
http://www.amazon.com/Use-R/lm/RNFBA3UHW2M73/ref=cm_lmt_srch_f_1_rsrsrs0
Tom
kayj wrote:
>
> Hi All,
>
> I am a new user in R and I would like to buy a book that teaches me how
> to use R. In addition, I may need to do some advanced statistical
> analysis. Does anyone
Greetings,
For the following quarterly data I did a classical decomposition by hand in
a spreadsheet and got reasonably similar results using Minitab 15.
x
1 36
2 44
3 45
4 106
5 38
6 46
7 47
8 112
I am in a similar situation and found that Kate combined with Konsole
operates very much like Tinn-R. I actually use Kate for all interpreters
like Python, R, and Octave under linux.
Tom
Wade Wall wrote:
>
> Hi all,
>
> I know this question has been asked in the past, but I am wondering if
(courtesy of Vincent Goulet) did not require me to confirm that I wanted to
use R. Thanks everyone for the assistance.
Tom
Ben Bolker wrote:
>
> Tom La Bone gforcecable.com> writes:
>
>>
>>
>> I installed EMACS-ESS on Windows XP-Pro and it worked with R per
I installed EMACS-ESS on Windows XP-Pro and it worked with R perfectly right
out of the box. I was impressed with how easy it is to use (I normally use
Tinn-R). I then switched over to Ubuntu 7.10 and installed EMACS-ESS.
Everything worked the same as in Windows (which is my main reason for using
IMHO "The R Book" is far better than indicated in that review and should be
near the top of the list for beginners looking for a "manual" for R.
Tom
Katharine Mullen wrote:
>
> It was reviewed in the most recent R News
> (http://www.r-project.org/doc/Rnews/Rnews_2007-2.pdf).
>
> On Thu, 29 No
Is there a way to do a linear regression with lm (having one predictor
variable) and constrain the slope of the line to equal 1?
Tom
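The standard trick, for anyone finding this later: move the slope-1 term into an offset so lm estimates only the intercept (sketch with made-up data):

```r
set.seed(1)
x <- 1:20
y <- x + rnorm(20)
fit <- lm(y ~ 1 + offset(x))  # slope fixed at 1; only the intercept is fit
coef(fit)
confint(fit)
```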
Greetings,
I would like to use the "boot" function to generate a bootstrap confidence
interval for the slope in a SLR that has a zero intercept. My attempt to do
this is shown below. Is this the correct implementation of the boot function
to solve this problem? In particular, should I be doing an
When I run this calculation
library(ISwR)
library(simple.boot)
data(thuesen)
fit <- lm(thuesen$short.velocity~thuesen$blood.glucose)
summary(fit)
fit.sb <- lm.boot(fit,R=1000,rows=F)
summary(fit.sb)
I get the following error from the lm.boot routine:
'newdata' had 100 rows but va
Because it does. I should have looked ahead a few chapters in the book
before I asked the question. However, I can't seem to reproduce the values
of the hat matrix given by R for the weighted fit example I gave. Any
suggestions (other than looking ahead a few more chapters)?
Tom
I understand that the hat matrix is a function of the predictor variable
alone. So, in the following example why do the values on the diagonal of the
hat matrix change when I go from an unweighted fit to a weighted fit? Is the
function hatvalues giving me something other than what I think it is?
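A sketch of what is going on: for a weighted fit the hat matrix is H = W^(1/2) X (X'WX)^(-1) X' W^(1/2), so the leverages legitimately depend on the weights, and hatvalues() on a weighted lm should match that formula:

```r
set.seed(1)
x <- 1:10
y <- 2 * x + rnorm(10)
w <- 1/x
h.w <- hatvalues(lm(y ~ x, weights = w))
X <- cbind(1, x)
H <- diag(sqrt(w)) %*% X %*% solve(t(X) %*% (w * X)) %*% t(X) %*% diag(sqrt(w))
all.equal(unname(h.w), diag(H))  # should be TRUE
```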
Greetings,
I have been using lm to perform weighted linear regressions and resid to
extract the residuals. I happened upon some class notes on the internet that
described how one can specify type="pearson" in resid to extract the
weighted residuals. Where is this option documented? And if you k
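If it helps: the type argument is documented on the ?residuals.lm help page (as I recall), and for a weighted lm the "pearson" residuals are just the raw residuals scaled by sqrt(weights), which is easy to verify:

```r
set.seed(1)
x <- 1:10
y <- 2 * x + rnorm(10)
w <- 1/x
fit <- lm(y ~ x, weights = w)
all.equal(resid(fit, type = "pearson"), sqrt(w) * resid(fit))  # TRUE
```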