I get the following error out of R, on a newer Ubuntu installation.
Error in `axis()`:
! X11 font -adobe-helvetica-%s-%s-*-*-%d-*-*-*-*-*-*-*, face 1 at size 12 could
not be loaded
Backtrace:
1. graphics::matplot(...)
3. graphics::plot.default(...)
4. graphics (local) localAxis(...)
6. gr
"function" is a string or integer, then it
>> is taken as the piece to be extracted, so you should be able to do
>> something like:
>>
>> library(purrr)
>> map(fits, 'iter')
>> # or
>> map_int(fits, 'iter')
>> # or
>> m
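For concreteness, a minimal sketch of that advice; the fits list here is made up for illustration.
library(purrr)
fits <- list(list(iter = 5L, loglik = -12.3),
             list(iter = 12L, loglik = -10.1))
map(fits, "iter")      # list of the "iter" components: 5, 12
map_int(fits, "iter")  # the same, simplified to an integer vector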
Is there a convenient package that computes standard convergence summaries for an MCMC run? This is something that I likely knew once and have now forgotten.
More detail: I'm trying to understand the MCMC done by a particular model called Subtype and Stage Inference (SuStaIn), suffice it
I not uncommonly have the following paradigm
fits <- lapply(argument, function)
resulting in a list of function results. Often, the outer call is to mclapply, and the function encodes some long calculation, e.g. multiple chains in an MCMC. Assume for illustration that each function returns
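A sketch of that paradigm, with a made-up stand-in for the slow function (run_chain and its return value are hypothetical):
library(parallel)
run_chain <- function(seed) {
  set.seed(seed)
  list(iter = sample(50:100, 1), draws = rnorm(1000))  # pretend MCMC output
}
fits <- mclapply(1:4, run_chain, mc.cores = 2)  # one list element per chain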
I prefer the duplicated() function, since the final code will be clear to a future reader (particularly when I am that future reader).
last <- !duplicated(mydata$ID, fromLast=TRUE) # point to the last ID for each subject
mydata$data3[last] <- NA
Terry T.
(I read the list once a day in digest form.)
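A toy illustration of the idiom, with hypothetical data:
mydata <- data.frame(ID = c(1, 1, 2, 2, 2), data3 = 10:14)
last <- !duplicated(mydata$ID, fromLast = TRUE)  # TRUE at each ID's final row
mydata$data3[last] <- NA
mydata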
1-.428)*coef.
Yes, the "mean" component is the reference level for predict and survfit. If I could go back in time it would be labeled as "reference" instead of "mean". Another opportunity for me to make the documentation clearer.
Good questions,
Terry T
See ?coxph, in particular the new "nocenter" option.
Basically, the "mean" component is used to center later computations. This can be critical for continuous variables, avoiding overflow in the exp function, but is not necessary for 0/1 covariates. The fact that the default survival curve
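For a quick look at the component being discussed, a sketch using the lung data that ships with survival:
library(survival)
fit <- coxph(Surv(time, status) ~ age, data = lung)
fit$means   # the centering value; linear predictors are computed relative to it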
On 7/11/21 5:00 AM, r-help-requ...@r-project.org wrote:
Hello, is it kosher to call cox.zph on a svycoxph model fit? I see that
someone proposed a modified version of cox.zph that uses resid(fit,
'schoenfeld', **weighted=TRUE**).
https://stats.stackexchange.com/questions/265307/assessing-prop
Is there a complement to the methods function that will list all the defined methods for a class? One solution is to look directly at the NAMESPACE file for the package that defines it, and parse out the entries. I was looking for something built-in, i.e., easier.
--
Terry M Therneau
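For what it is worth, methods() itself accepts a class argument, which does exactly this; a small example:
library(survival)
methods(class = "coxph")   # every registered S3 method for the class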
I wrote: "I confess to being puzzled WHY the R core has decided on this
definition..."
After just a little more thought let me answer my own question.
a. The as.vector() function is designed to strip off everything extraneous and leave just the core. (I have a mental image of Jack Webb saying
I am late to this discussion -- I read R-help as a once-a-day summary. A few comments.
1. In the gene-discovery subfield of statistics (SNP studies, etc.) there is a huge multiple-testing problem. In defense, the field thinks in terms of thresholds like 1e-5 or 1e-10 rather than the .05 or
In one of my plot functions it is convenient to use clipping to restrict the range of some output. But at the end of the function I'd like to turn it off, i.e., so that a subsequent use of legend(), text() or whatever is not affected. I don't quite see how to do this -- it seems that the only w
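One answer, sketched: clip() takes x1, x2, y1, y2 in user coordinates, and par("usr") holds exactly that vector for the full plot region, so it can be used to reset the clipping.
plot(1:10)
clip(2, 8, 3, 7)                     # restrict output to a sub-region
abline(h = 5)                        # drawn only inside the clip box
do.call(clip, as.list(par("usr")))   # restore the full region
legend("topleft", legend = "not clipped")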
Martin,
A fun question.
Looking back at my oldest books, Feller (1950) used chi-square.
Then I walked down the hall to our little statistics library and looked at Johnson and Kotz, "Continuous Univariate Distributions", since each chapter therein has comments about the history of the distrib
This is an excellent question.
The answer, in this particular case, mostly has to do with the outlier time values. (I've never been convinced that the death at time 999 isn't really a misplaced code for "missing", actually.) If you change the knots used by the spline you can get quite dif
John,
The text below is cut out of a "how to write a package" course I gave at the R conference in Vanderbilt. I need to find a home for the course notes, because it had a lot of tidbits that are not well explained in the R documentation.
Terry T.
Model frames:
One of the first task
I've created a hex sticker for survival. How should that be added to the package directory? It's temporarily in man/figures on the github page.
Terry T.
(Actually, the idea was from Ryan Lennon. I liked it, and we found someone with actual graphical skills to execute it.)
You are correct that the survdiff routine only supports 'rho' of the
Fleming-Harrington G-rho tests. This is a function of age -- I wrote the
original code back when I was working with Tom (Fleming), and he was only using
1 parameter. Later he and David expanded the test to two parameters. Thi
is as much philosophy as statistics, and is perhaps best done over a beer.
Terry T.
From: Max Shell [archerr...@gmail.com]
Sent: Wednesday, January 17, 2018 10:25 AM
To: Therneau, Terry M., Ph.D.
Subject: Re: Time-dependent coefficients in a Cox model with categorical
This question likely has a 1 line answer, I'm just not seeing it. (2, 3, or 10 lines is fine too.)
For a vector I can do group <- match(x, unique(x)) to get a vector that labels each element of x. What is an equivalent if x is a data frame? The result does not have to be fast: the data set
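One possible sketch: paste each row into a single key string, then match as in the vector case. It assumes the separator never occurs inside a value.
x <- data.frame(a = c(1, 1, 2), b = c("u", "u", "v"))
key <- do.call(paste, c(x, sep = "\r"))
group <- match(key, unique(key))
group   # 1 1 2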
You are mixing up two of the steps in rpart. 1: how to find the best candidate split and
2: evaluation of that split.
With the "class" method we use the information or Gini criteria for step 1. The code
finds a worthwhile candidate split at 0.5 using exactly the calculations you outline. For
in tmerge(dat, dat, id = Number, death = event(dYears, death), BMF =
event(ptemp, :
tstart must be > tstop
Could you help me?
Thanks,
Ahalya.
On Wed, Sep 7, 2016 at 8:29 AM, Therneau, Terry M., Ph.D.
wrote:
On 09/07/2016 05:00 AM, r-help-requ...@r-project.org wrote:
Dear R-Team,
I hav
On 04/27/2017 09:53 AM, sesh...@mskcc.org wrote:
Thank you Drs. Therneau and Murdoch.
"Why not use coxph.fit?" -- My use case scenario is that I needed the Cox model
coefficients for resampled data. I was trying to reduce the computational overhead of
coxph.fit (since it will be repeated a larg
Let me summarize rather than repeat the entire thread:
An error report from a user (seshan) stumped me, and I asked for help here.
Duncan Murdoch picked up on fine details of the error message, i.e., that the error did
NOT come from within the survival package. That changes the whole tenor of
user and to the next dozen who will invariably contact me directly.
Thanks,
Terry Therneau
Forwarded Message
Subject: RE: survival package
Date: Wed, 26 Apr 2017 18:05:30 +
From: sesh...@mskcc.org
To: Therneau, Terry M., Ph.D.
Thank you for the quick response. The se
Thanks much Duncan. Having someone do the work for me is even better than a
function!
The cmatrix function will be to make contrast matrices BTW.
On 03/28/2017 08:11 AM, Duncan Murdoch wrote:
On 28/03/2017 8:53 AM, Therneau, Terry M., Ph.D. wrote:
I'm thinking of adding a new "cmatrix"
I'm thinking of adding a new "cmatrix" function/method to the survival package but before
I do I'd like to find out if any other packages already use this function name. The
obvious method is to look at the NAMESPACE file for each package in CRAN and read the
export list.
This is the kind of
to clarify my question.
Best,
Alfredo
-----Original Message-----
From: Therneau, Terry M., Ph.D. [mailto:thern...@mayo.edu]
You will need to give more detail of exactly what you mean by "prune using a validation set". The prune.rpart function will prune at any value you want; what I suspect
You will need to give more detail of exactly what you mean by "prune using a validation set". The prune.rpart function will prune at any value you want; what I suspect you are looking for is to compute the error of each possible tree, using a validation data set, then find the best one, and the
Look at the finegray command within the survival package; the competing risks vignette has coverage of it. The command creates an expanded data set with case weights, such that coxph() on the new data set = the Fine Gray model for the original data. Anything that works with coxph is valid on t
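A sketch of that workflow, loosely following the competing risks vignette (mgus2 ships with survival; fgstart/fgstop/fgstatus/fgwt are the columns finegray creates):
library(survival)
mgus2$etime <- with(mgus2, ifelse(pstat == 0, futime, ptime))
mgus2$event <- with(mgus2, factor(ifelse(pstat == 0, 2 * death, 1), 0:2,
                                  labels = c("censor", "pcm", "death")))
fgdata <- finegray(Surv(etime, event) ~ ., data = mgus2)
coxph(Surv(fgstart, fgstop, fgstatus) ~ age + sex,
      weights = fgwt, data = fgdata)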
This simple form of a hyperbola is not well known. I find it useful for change point
models: since the derivative is continuous it often behaves better in a maximizer.
h1 <- function(x, b, k=3) .5 * b * (x + sqrt(x^2 + k^2))
Function h1() has asymptotes of y=0 to the left of 0 and y=x to the
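A quick visual check of the shape (illustration only):
h1 <- function(x, b, k = 3) 0.5 * b * (x + sqrt(x^2 + k^2))
curve(h1(x, b = 1), from = -20, to = 20)
abline(h = 0, lty = 2)          # left asymptote, y = 0
abline(a = 0, b = 1, lty = 2)   # right asymptote, y = x (since b = 1)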
I have a process that I need to parallelize, and have a question about two
different ways to proceed. It is essentially an MCMC exploration where
the likelihood is a sum over subjects (6000 of them), and the per-subject
computation is the slow part.
Here is a rough schematic of the code using one
I'm looking for advice on which of the parallel systems to use.
Context: maximize a likelihood, each evaluation is a sum over a large number of
subjects (>5000) and each of those per subject terms is slow and complex.
If I were using optim the context would be
fit <- optim(initial.values, myfu
On 11/29/2016 05:00 AM, r-help-requ...@r-project.org wrote:
Independent censoring is one of the fundamental assumptions in the survival
analysis. However, I cannot find any test for it or any paper which discusses
how real that assumption is.
I would be grateful if anybody could point me to
Survival version 2.40 has been released to CRAN. Be warned that some users may see changes in results, however.
The heart of the issue can be shown with a simple example. Calculate the following simple set of intervals:
<<>>=
birth <- as.Date("1973/03/10")
start <- as.Date("1998/09/13")
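The message is truncated here, but the arithmetic it sets up can be completed as a sketch:
birth <- as.Date("1973/03/10")
start <- as.Date("1998/09/13")
start - birth                        # a difftime, in days
as.numeric(start - birth) / 365.25   # roughly 25.5 years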
I'm off on vacation and checking email only intermittently.
Wrt the offset issue, I expect that you are correct. This is not a case that I
had ever envisioned, and so was not on my "list" when writing the code and
certainly has no test case. That does not mean that it shouldn't work, just
t
On 09/07/2016 05:00 AM, r-help-requ...@r-project.org wrote:
Dear R-Team,
I have been trying to use the finegray routine that creates a special data set so that the Fine and Gray model can be fit. However, it does not seem to work.
Could you please help me with this issue?
Thanks,
Ahalya.
You have
You can ignore the message below. The maximizing routine buried within the frailty() command, itself buried within coxph(), is not the brightest. It sometimes gets lost but then finds its way again. The message is from one of those. It likely took a not-so-good update step, and too
On 08/20/2016 05:00 AM, Vinzenz wrote:
For some days I have been struggling with a problem concerning the 'survSplit' function of the package 'survival'. Searching the internet I have found a pretty good (German) description by Daniel Wollschläger of how to use survSplit:
The survSplit r
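For reference, a minimal survSplit call (a sketch, using the lung data from survival; the cutpoints are arbitrary):
library(survival)
lung2 <- survSplit(Surv(time, status) ~ ., data = lung,
                   cut = c(100, 200), episode = "tgroup")
head(lung2[, c("tstart", "time", "status", "tgroup")])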
I'm traveling so chasing this down more fully will wait until I get home.
Four points.
1. This is an edge case. You will notice that if you add "subset=1:100" to the
coxph call that the function works perfectly. You have to get up to 1000 or so
before it fails.
2. The exact partial likeli
A new version of the survival package has been released. The biggest change is stronger
support for multi-state models, which is an outgrowth of their increasing use in my own
practice. Interested users are directed to the "time dependent covariates" vignette for
discussion of the tmerge and su
2016-04-15 13:58 GMT+02:00 Therneau, Terry M., Ph.D.
<mailto:thern...@mayo.edu>>:
I'd like to get interaction terms in a model to be in another form. Namely,
suppose I
had variables age and group, the latter a factor with levels A, B, C, with
age *
group in the model.
I'd like to get interaction terms in a model to be in another form. Namely, suppose I had
variables age and group, the latter a factor with levels A, B, C, with age * group in the
model. What I would like are the variables "age:group=A", "age:group=B" and
"age:group=C" (and group itself of c
On 04/02/2016 05:00 AM, r-help-requ...@r-project.org wrote:
Hello,
I'm looking for a way in which R can make my life easier. Currently I'm using R to convert data from a dataframe to JSONs and then sending these JSONs to a REST API using a curl command in the terminal (I'm on a Mac).
I've
Thanks to David for pointing this out. The "time dependent covariates" vignette in the
survival package has a section on time dependent coefficients that talks directly about
this issue. In short, the following model is simply wrong:
coxph(Surv(time, status) ~ trt + prior + karno + I(karn
Failure to converge in a coxph model is very rare. If the program does not make it in 20
iterations it likely will never converge, so your control argument will do little.
Without the data set I have no way to guess what is happening. My first question,
however, is to ask how many events you
On 03/02/2016 05:00 AM, r-help-requ...@r-project.org wrote:
I'd very much appreciate your help in resolving a problem that I'm having with
plotting a spline term.
I have a Cox PH model including a smoothing spline and a frailty term as
follows:
fit<-coxph(Surv(start,end,exit) ~ x + pspl
For an interval censored poisson or lognormal, use survreg() in the survival package. (Or
if you are a SAS fan use proc lifereg). If you have a data set where R and SAS give
different answers I'd like to know about it, but my general experience is that this is
more often a user error. I am al
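A minimal interval-censored fit, sketched with toy data; with type = "interval2" an NA on either side marks an open-ended interval, and left == right is an exact event time.
library(survival)
left  <- c(1, 2, NA, 4, 6)
right <- c(2, 3,  5, 4, NA)
fit <- survreg(Surv(left, right, type = "interval2") ~ 1, dist = "lognormal")
summary(fit)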
I read the digest form which puts me behind, plus the last 2 days have been solid meetings
with an external advisory group so I missed the initial query. Three responses.
1. The clogit routine sets the data up properly and then calls a stratified Cox model. If
you want the survConcordance ro
How should the weights be treated? If they are multiple observation weights (a weight of
"3" is shorthand for 3 subjects) that leads to a different likelihood than sampling
weights ("3" means to give this one subject more influence). The clogit command can't
read your mind and so has chosen no
I expect that reading the result of print(fit.weib) will answer your question. If there
were any missing values in the data set, then the fit.weib$linear.predictors will be
shorter than the original data set,
and the printout will have a note about "...deleted due to missing".
The simplest sol
As a digest reader I am late to the discussion, but let me toss in 2 further
notes.
1. Three advantages of knitr over Sweave
a. The book "Dynamic documents with R and knitr". It is well written; sitting down for
an evening with the first half (70 pages) is a pretty good way to learn the pack
The error message states that there is an invalid value for the density. A long stretch
of code is not very helpful in understanding this. What we need are the definition of
your density -- as it would be written in a textbook. This formula needs to give a valid
response for the range -infini
Look at the rpart vignette "User written split functions". The code allows you to add
your own splitting method to the code (in R, no C required). This has proven to be very
useful for trying out new ideas.
The second piece would be to do your own cross-validation. That is, turn off the buil
Hi, I want to perform a survival analysis using survreg procedure from
survival library in R for a pareto distribution for a time variable, so I
set the new distribution using the following syntax:
library(foreign)
library(survival)
library(VGAM)
mypareto <- list(name='Pareto
On 10/28/2015 06:00 AM, r-help-requ...@r-project.org wrote:
Hello all!
I'm fitting a mixed effects cox model with the coxme function of the coxme package. I want to know the best way to check the model adequacy, since the function cox.zph does not work for coxme objects.
Thanks in advanc
On 10/14/2015 05:00 AM, r-help-requ...@r-project.org wrote:
I am trying to fit this data to a weibull distribution:
My y variable is:1 1 1 4 7 20 7 14 19 15 18 3 4 1 3 1 1 1
1 1 1 1 1 1
and x variable is:1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
19 20 21 22 23 24
The cutpoint is on the predictor, so the interpretation is the same as it is for any other
rpart model. The subjects with predictor < cutpoint form one group and those > cutpoint
the other. The cutpoint is chosen to give the greatest difference in "average y" between
the groups. For poisson "
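A one-split anova tree makes this concrete (sketch; cu.summary ships with rpart):
library(rpart)
fit <- rpart(Mileage ~ Weight, data = cu.summary, maxdepth = 1)
fit   # the printout shows the cutpoint and the mean response in each half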
I'd like to flatten a list from 2 levels to 1 level. This has to be easy, but
is currently opaque to me.
temp <- list(1:3, list(letters[1:3], duh= 5:8), zed=15:17)
Desired result would be a 4 element list.
[[1]] 1:3
[[2]] "a", "b", "c"
[[duh]] 5:8
[[zed]] 15:17
(Preservation of the names is n
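One base-R sketch: wrap the non-list elements, then let c() splice one level; the names do survive.
flatten1 <- function(x) {
  wrapped <- lapply(x, function(el) if (is.list(el)) el else list(el))
  do.call(c, wrapped)
}
flatten1(temp)   # a 4-element list with names "duh" and "zed" preserved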
This was an FDA/SAS bargain a long while ago. SAS made the XPT format publicly available
and unchanging in return for it becoming a standard for submission. Many packages can
reliably read or write these files. (The same is not true for other SAS file formats, nor
is xport the SAS default.)
the data is a "text/csv" field coming from an http POST request. This
is an internal service on an internal Mayo server and coded by our own IT department; this
will not be the first case where I have found that their definition of "csv" is not quite
standard.
Terry T.
I have a csv file from an automatic process (so this will happen thousands of times), for
which the first row is a vector of variable names and the second row often starts
something like this:
5724550,"000202075214",2005.02.17,2005.02.17,"F", .
Notice the second variable, which is a c
I've been away for a couple weeks and am now catching up on email.
The issue is that the coxme code does not have conversions built-in for all of the
possible types of sparse matrix. Since it assumes that the variance matrix must be
symmetric, the not-necessarily-symmetric dgCMatrix class is not
On 08/30/2015 05:00 AM, r-help-requ...@r-project.org wrote:
I'm unable to fit a parametric survival regression using survreg() in the survival package with
data in "counting-process" ("long") form.
To illustrate using a scaled-down problem with 10 subjects (with data placed on
the web):
As
I read this list a day late as a digest so my answers are rarely the first. (Which is
nice as David W answers most of the survival questions for me!)
What you are asking is reasonable, and in fact is common practice in the realm of
industrial reliability, e.g., Meeker and Escobar, Statistical
On 07/22/2015 06:02 PM, Rolf Turner wrote:
On 23/07/15 01:15, Therneau, Terry M., Ph.D. wrote:
3. Should you ever use it [i.e. Type III SS]? No. There is a very strong
inverse
correlation between "understand what it really is" and "recommend its
use". Stephen Senn ha
"Type III" is a peculiarity of SAS, which has taken root in the world. There are 3 main
questions wrt to it:
1. How to compute it (outside of SAS). There is a trick using contr.treatment coding that
works if the design has no missing factor combinations, your post has a link to such a
descri
that t(BB) %*% A = 0?
Peter
On Thu, Jul 16, 2015 at 10:28 AM, Therneau, Terry M., Ph.D.
wrote:
This is as much a mathematics as an R question, in the "this should be easy
but I don't see it" category.
Assume I have a full rank p by p matrix V (aside: V = (X'X)^{-1} for a
part
This is as much a mathematics as an R question, in the "this should be easy but I don't
see it" category.
Assume I have a full rank p by p matrix V (aside: V = (X'X)^{-1} for a particular setup),
a p by k matrix B, and I want to complete an orthogonal basis for the space with distance
functio
The difference is that survreg is using a maximum likelihood estimate (MLE) of the
variance and that lm is using the unbiased (MVUE) estimate of variance. For simple linear
regression, the former divides by "n" and the latter by "n-p". The difference in your
variances is exactly n/(n-p) = 10/8
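A numeric check of that relation (sketch, simulated data):
library(survival)
set.seed(42)
n <- 10
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
fit.lm <- lm(y ~ x)
fit.sr <- survreg(Surv(y) ~ x, dist = "gaussian")
summary(fit.lm)$sigma^2 * (n - 2) / n   # unbiased variance, rescaled by (n-p)/n
fit.sr$scale^2                          # the MLE: the same number, up to tolerance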
The help page for prmatrix states that it only exists for backwards compatibility and
strongly hints at using print.matrix instead.
However, there does not seem to be a print.matrix() function.
The help page for print mentions a zero.print option, but that does not appear to affect
matrices.
Frank,
I don't think there is any way to "fix" your problem except the way that I
did it.
library(survival)
tdata <- data.frame(y=c(1,3,3,5, 5,7, 7,9, 9,13),
x1=factor(letters[c(1,1,1,1,1,2,2,2,2,2)]),
x2= c(1,2,1,2,1,2,1,2,1,2))
fit1 <- lm( y ~ x1 * st
Frank,
I'm not sure what is going on. The following test function works for me in both 3.1.1 and 3.2, i.e., the second model matrix has fewer columns. As I indicated to you earlier,
the coxph code removes the strata() columns after creating X because I found it easier to
correctly create the
There are 90 .Rd files so this saved me substantial time.
Terry T.
On 06/04/2015 03:00 PM, Marc Schwartz wrote:
On Jun 4, 2015, at 12:56 PM, Therneau, Terry M., Ph.D.
wrote:
I'm checking the survival package and get the following error. How do I find
the offending line? (There are a LOT
I'm checking the survival package and get the following error. How do I find the offending
line? (There are a LOT of files in the man directory.)
Terry T.
--
* checking PDF version of manual ... WARNING
LaTeX errors when creating PDF version.
This typically indicates Rd proble
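One way to hunt for the offending file, sketched: render each Rd file on its own and report the failures.
rdfiles <- list.files("man", pattern = "\\.Rd$", full.names = TRUE)
for (f in rdfiles) {
  status <- system2("R", c("CMD", "Rd2pdf", "--no-preview", "--force", f),
                    stdout = FALSE, stderr = FALSE)
  if (status != 0) cat("LaTeX fails on:", f, "\n")
}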
You were not completely clear, but it appears that you have data where each subject has
results from 8 "trials", as a pair of variables is changed. If that is correct, then you
want to have a variance that corrects for the repeated measures. In R the glm command
handles the simple case but not
Your problem is that PatientID, FatherID, MotherID are factors. The authors of kinship2
(myself and Jason) simply never thought of someone doing this. Yes, that is an oversight.
We will correct it by adding some more checks and balances. For now, turn your id
variables into character or nume
On 04/21/2015 05:00 AM, r-help-requ...@r-project.org wrote:
Dear All,
I am in some difficulty with predicting 'expected time of survival' for each
observation for a glmnet cox family with LASSO.
I have two dataset 5 * 450 (obs * Var) and 8000 * 450 (obs * var), I
considered first one as t
The perils of backwards compatibility
During computation the important quantity is loglik + penalty. That is what is contained
in the third element of the loglik vector.
Originally that is also what was printed, but I later realized that for statistical
inference one wants the loglik an
I have no idea. A data set that generates the error would be very helpful to me. What is
the role of the last line BTW, the one with "1%" on it?
Looking at the code I would guess that the vector "tied" has an NA in it, but how that
would happen I can't see. There is a reasonable chance that i
Your work around is not as "easy" looking to me.
Survival times come in multiple flavors: left censored, right censored, interval censored,
left-truncated and right censored, and multi-state. Can you give me guidance on how each
of these should sort? If a sort method is added to the package i
First:
summary(ss.rpart1)
or summary(ss.rpart1, file="whatever")
The printout will be quite long since your tree is so large, so the second form may be
best followed by a perusal of the file with your favorite text editor. The file name of
"whatever" above should be something you choose, of
The pyears() and survexp() routines in the survival package are designed for
these
calculations.
See the technical report #63 of the Mayo Biostat group for examples
http://www.mayo.edu/research/departments-divisions/department-health-sciences-research/division-biomedical-statistics-infomatics/t
On 12/26/2014 05:00 AM, r-help-requ...@r-project.org wrote:
I want to analyse survival data using the Type I half logistic distribution. How can I go about it? The one installed in R in the survival package didn't include the distribution... or I need code to use maximum likelihood to estimate the par
On 12/23/2014 05:00 AM, r-help-requ...@r-project.org wrote:
Dear all,
I'm using the package "survival" for adjusting the Cox model with multiple
events (Prentice, Williams and Peterson Model). I have several covariates,
some of them are time-dependent.
I'm using the function"cox.zph" to check
Three responses to your question
1. Missing values in R are denoted by "NA". When reading in your data you want to use
the "na.strings" option so that the internal form of the data has missing values properly
denoted.
2. If this is done, then coxme will notice the missings and remove them
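For point 1, a one-line sketch ("mydata.csv" and the extra missing-value codes are hypothetical):
dat <- read.csv("mydata.csv", na.strings = c("NA", ".", ""))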
Use the coxme function (package coxme), which has the same syntax as lme4.
The frailty() function in coxph only handles the simple case of a random
intercept.
Terry Therneau
On 12/12/2014 05:00 AM, r-help-requ...@r-project.org wrote:
Hi,
I have a very simple Cox regression model in which I nee
Thanks Hadley
Terry T.
On 11/18/2014 08:47 AM, Hadley Wickham wrote:
Do you have a .Rbuildignore? If so, what's in it?
Hadley
On Tue, Nov 18, 2014 at 7:07 AM, Therneau, Terry M., Ph.D.
wrote:
I have a new package (local use only). R CMD check fails with a message I
haven't seen before
I have a new package (local use only). R CMD check fails with a message I
haven't seen before, and I haven't been able to guess the cause.
There are two vignettes, both of which have %\VignetteIndexEntry lines.
Same failure both under R-3.1.1 and R-devel, so it's me and not R. Linux OS.
Hints a
This is fixed in version 2.37-8 of the survival package, which has been in my "send to
CRAN real-soon-now" queue for 6 months. Your note is a prod to get it done. I've been
updating and adding vignettes.
Terry Therneau
On 11/05/2014 05:00 AM, r-help-requ...@r-project.org wrote:
I am receiv
Well duh -- type "c.Date" at the command prompt to see what is going on. I suspected I was being dense.
Now that the behavior is clear, can I follow up on David W's comment that redefining the c.Date function as
structure(c(unlist(lapply(list(...), as.Date))), class = "Date")
allows for a
I'm a bit puzzled by a certain behavior with dates. (R version 3.1.1)
> temp1 <- as.Date(1:2, origin="2000/5/3")
> temp1
[1] "2000-05-04" "2000-05-05"
> temp2 <- as.POSIXct(temp1)
> temp2
[1] "2000-05-03 19:00:00 CDT" "2000-05-04 19:00:00 CDT"
So far so good. On 5/4, midnight in Greenwich it
I've attached two functions used locally. (The attachments will be stripped off of the
r-help response, but the questioner should get them). The functions "neardate" and
"tmerge" were written to deal with a query that comes up very often in our medical
statistics work, some variety of "get the
I would have caught this tomorrow (I read the digest).
Some thoughts:
1. Skip the entire step of subsetting the death.kmat object. The coxme function knows how
to do this on its own, and is more likely to get it correct. My version of your code would be
deathdat.kmat <- 2* with(deathdat, ma
On 07/30/2014 05:00 AM, r-help-requ...@r-project.org wrote:
A while ago, I inquired about fitting excess relative risk models in R. This is
a follow-up about what I ended up doing in case the question pops up again.
While I was not successful in using standard tools, switching to Bayesian
mo
I missed this question.
1. For survreg.
help("predict.survreg") shows an example of drawing a survival curve
Adding a survfit method has been on my list for a long time, since it would make this
information easier to find.
2. intcox. I had not been familiar with this function. Even thoug
On 08/13/2014 08:38 AM, John Pura wrote:
Thank you for the reply. However, I think I may not have clarified what my
cases are. I'm studying the effect of radiation treatment (vs. none) on
survival. My cases are patients who received radiation and controls are those
who did not. I used a prop
Ok, I will try to do a short tutorial answer.
1. The score statistic for a Cox model is a sum of (x - xbar), where "x" is the covariate
vector of the subject who had an event, and xbar is the mean covariate vector for the
population, at that event time.
- the usual Cox model uses the mean of
On 08/13/2014 05:00 AM, John Purda wrote:
I am curious about this problem as well. How do you go about creating the
weights for each pair, and are you suggesting that we can just incorporate a
weight statement in the model as opposed to the strata statement? And Dr.
Therneau, let's say I hav
You are asking for a one sample test. Using your own data:
connection <- textConnection("
GD2 1 8 12 GD2 3 -12 10 GD2 6 -52 7
GD2 7 28 10 GD2 8 44 6 GD2 10 14 8
GD2 12 3 8 GD2 14 -52 9 GD2 15 35 11
GD2 18 6 13 GD2 20 12 7 GD2 23 -7 13
GD2 24 -52 9 GD2 26 -52 12
I've been off on vacation for a few days and so am arriving late to this
discussion.
Try ?print.survfit, and look at the print.rmean option and the discussion thereof in the
"Details" section of the page. It will answer your question, in more detail than you
asked. The option applies to sur