On Fri, 3 Sep 2021, Rich Shepard wrote:
On Thu, 2 Sep 2021, Jeff Newmiller wrote:
Regardless of whether you use the lower-level split function, or the
higher-level aggregate function, or the tidyverse group_by function, the
key is learning how to create the column that is the same for all records
corresponding to the time interval of
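[A minimal sketch, not part of the thread: the 'date' and 'flow' column names are made up. The point is the 'month' column, which takes the same value for every record in a given time interval, so split(), aggregate(), or group_by() can all use it.]
df <- data.frame(
  date = as.Date("2021-01-01") + 0:89,   # 90 daily records
  flow = runif(90)                       # made-up measurements
)
df$month <- format(df$date, "%Y-%m")     # same label for all rows in a month

aggregate(flow ~ month, data = df, FUN = mean)  # one summary row per month
# equivalently: split(df, df$month)  or  dplyr::group_by(df, month)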
On 9/3/21 12:59 PM, Bond, Stephen wrote:
>
> I looked at the nocenter and it says (-1,0,1) values, but it seems that any
> three-level factor is included in that (represented as 1,2,3 in R).
>
A factor is turned into a set of 0/1 dummy variables, so the nocenter applies. I will add
more cla
I looked at the nocenter and it says (-1,0,1) values, but it seems that any
three-level factor is included in that (represented as 1,2,3 in R).
Also, is the baseline curve now showing the reference level and not the
fictional .428 sex? If I predict the risk for a new row, should I multiply the
c
See ?coxph, in particular the new "nocenter" option.
Basically, the "mean" component is used to center later computations. This can be
critical for continuous variables, avoiding overflow in the exp function, but is not
necessary for 0/1 covariates. The fact that the default survival curve
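[A minimal sketch, not from the thread, of the centering behaviour described above; the lung data and the explicit nocenter value are illustrative only, see ?coxph in survival >= 3.2 for the exact semantics.]
library(survival)   # needs a version of survival with the 'nocenter' argument

## sex is coded 1/2 in lung; factor() turns it into a single 0/1 dummy column
fit <- coxph(Surv(time, status) ~ age + factor(sex),
             data = lung, nocenter = c(-1, 0, 1))

fit$means           # age is centered at its mean; the 0/1 dummy is left at 0
plot(survfit(fit))  # the "baseline" curve is evaluated at these values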
Hi,
Please help me understand what is happening with the means of a Cox model.
I have:
R version 4.0.2 (2020-06-22) -- "Taking Off Again"
Copyright (C) 2020 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)
getOption("contrasts")
unordered ord
Hi Richard:
Thank you very much for your help in this matter.
with thanks
abou
AbouEl-Makarim Aboueissa, PhD
Professor, Statistics and Data Science
Graduate Coordinator
Department of Mathematics and Statistics
University of Southern Maine
On Fri, Sep 3, 202
Hi Avi: good morning
Again, many thanks to all of you. I appreciate all that you are doing. You
are good. I did it in Minitab. It cost me a little bit more time, but it is
okay.
It was a little bit confusing for me to do it in R, because in Step 1 I
have to select a random sample of size n=204
Your question is ambiguous.
One reading is
n <- length(table$Data)            # total number of observations
m <- n %/% 3                       # size of each of the three groups
s <- sample(1:n, n)                # a random permutation of the indices
X <- table$Data[s[1:m]]            # first third
Y <- table$Data[s[(m+1):(2*m)]]    # second third
Z <- table$Data[s[(m*2+1):(3*m)]]  # last third (any remainder when n %% 3 != 0 is dropped)
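[For concreteness, a hedged sketch of the same split done with split(); the data frame and its Data column are placeholders here, with 204 rows as in the question.]
set.seed(2021)                                   # reproducible random split
table <- data.frame(Data = rnorm(204))           # made-up data, 204 rows
g <- sample(rep(c("X", "Y", "Z"), length.out = nrow(table)))
groups <- split(table$Data, g)                   # list with $X, $Y, $Z
sapply(groups, length)                           # 68 68 68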
On Fri, 3 Sept 2021 at 13:31, AbouEl-Makarim Aboueissa wrote:
>
> Dear All:
>
> How to split
Yes, even
> summary(NA_real_)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's
     NA      NA      NA     NaN      NA      NA       1
which is presumably because the mean is an empty sum (= 0) divided by a zero
count, and 0/0 = NaN.
Notice also the difference between
> mean(NA_real_)
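[Not part of the original message, but the distinction being drawn can be seen directly:]
mean(NA_real_)                # NA:  the NA propagates
mean(NA_real_, na.rm = TRUE)  # NaN: nothing is left, so the mean is 0/0
mean(numeric(0))              # NaN: empty sum divided by a zero count
summary(NA_real_)["Mean"]     # NaN, matching the summary output above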
Fair enough, I'll check the actual data to see if there are indeed any
NaN (which there should not be, since the data are categories, not generated
by arithmetic).
Thanks!
On Fri, Sep 3, 2021 at 8:26 AM PIKAL Petr wrote:
>
> Hi Luigi.
>
> Weird. But maybe it is the desired behaviour of summary when calculating