t;dfbeta", collapse = cluster, weighted =
TRUE) :
Wrong length for 'collapse'
I tried both 64 bit (R.3.1.0) and 32 bit (R.3.1.2) in Windows 7 64bit and get
the same errors
Inclusion of tt and cluster terms worked fine in R2.9.2-2.15.1 under Windows
Vista 32 bit and Ubuntu 64 bi
!
Christos Argyropoulos
> From: vicvoncas...@gmail.com
> Date: Sun, 1 Jan 2012 14:10:36 -0500
> To: dwinsem...@comcast.net
> CC: r-help@r-project.org
> Subject: Re: [R] R on Android
>
> If the phone is "rooted" one could hypothetically install from Debian
> repositories.
I believe there was a fairly recent exchange (within the last 6 months) about
linear measurement error models/errors-in-variables models/Deming
regression/total least squares/orthogonal regression. Terry Therneau provided
code for an R function that can estimate such models:
http://www.mail-arc
I'm not really sure I understand the question. Do you want to create a
function, which is defined as the integral of another function?
Something like:
> f1 <- function(x) sin(x)
> f2 <- function(x) cos(x)
>
> integral <- function(u, integrand) integrate(integrand, 0, u)
>
> integral(pi, f1)
2 with absolute error < 2.2e-14
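Note that integrate() returns a list rather than a number. If you need a plain
numeric function of the upper limit (e.g. for plotting), one option (a sketch,
not part of the original reply) is to extract $value and vectorize:

> integralv <- Vectorize(function(u, integrand) integrate(integrand, 0, u)$value, "u")
> integralv(c(pi/2, pi), f1)
[1] 1 2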
If the system is sparse and you have a really large cluster to play
with, then *maybe* PETSc/TAO is the right combination of tools
for your problem.
http://www.mcs.anl.gov/petsc/petsc-as/
Christos
value of X in the R code
> attached at the end of this email. Smaller values of x lead to greater
> discrepancies (e.g. compare X=0.1 vs. X=5.1).
>
> From my understanding of how HPDinterval works, the intervals returned by the
> two different invocations should be very similar. So what causes this
> discrepancy? Which one of the two intervals should be used?
Regards,
Christos Argyropoulos
R CODE
One possible way is the following:
> x <- c(0.49534, 0.80796, 0.93970, 0.8)
> count <- c(0, 33, 0, 4)
> x[count == 0]
[1] 0.49534 0.93970
> x[count > 0]
[1] 0.80796 0.80000
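As an aside (my addition, not in the original reply), split() partitions x by
the logical condition in a single call:

> split(x, count == 0)
$`FALSE`
[1] 0.80796 0.80000

$`TRUE`
[1] 0.49534 0.93970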
Christos
> Date: Tue, 6 Jul 2010 15:39:08 +0900
> From: gunda...@gmail.com
> To: r-h...@stat.math.ethz.ch
> Subject: [R] Conditional
e', but there is a function called
> > `adaptIntegrate' in the "cubature" package.
> >
> > Ravi.
> >
> > -----Original Message-----
> > From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org
> > ] On
> > Behalf Of
Function adapt in the adapt package, maybe?
> Date: Thu, 1 Jul 2010 05:30:25 -0700
> From: sarah_sanche...@yahoo.com
> To: r-help@r-project.org
> Subject: [R] Double Integration
>
> Dear R helpers
>
> I am working on bivariate Normal distribution probabilities. I need to
> double integrate
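For a concrete illustration (a sketch with assumed parameters, not the
poster's actual problem): P(X < 1, Y < 1) for a standard bivariate normal with
correlation 0.5 can be computed directly with pmvnorm(), or by numerically
double-integrating the density:

library(mvtnorm)
sigma <- matrix(c(1, 0.5, 0.5, 1), 2)
pmvnorm(upper = c(1, 1), sigma = sigma)   # direct computation via mvtnorm

library(cubature)
## same probability by brute-force double integration over a wide box
adaptIntegrate(function(u) dmvnorm(u, sigma = sigma),
               lowerLimit = c(-10, -10), upperLimit = c(1, 1))$integral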
Hi Raoul,
I presume you need these summaries for a table of descriptive statistics for a
thesis/report/paper ("Table 1", as it is informally known among medical
researchers). If this is the case, then specify method="reverse" in the call
to summary.formula. In the following small example, I create 4 groups of
patients
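A minimal sketch of such an example (simulated data; the variable names are my
own, not the original poster's):

library(Hmisc)
set.seed(1)
d <- data.frame(group = sample(paste0("G", 1:4), 200, replace = TRUE),
                age   = rnorm(200, 60, 10),
                sex   = sample(c("M", "F"), 200, replace = TRUE))
## grouping variable on the left, summarized variables on the right
summary(group ~ age + sex, data = d, method = "reverse")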
Look at the summary.formula function inside package Hmisc
Christos
> Date: Sat, 26 Jun 2010 05:17:34 -0700
> From: raoul.t.dso...@gmail.com
> To: r-help@r-project.org
> Subject: [R] Calculating Summaries for each level of a Categorical variable
>
>
> Hi,
>
> I have a dataset which has a categ
,then y. Ravi used sapply, which is good,
but it seems to me that Vectorize is easier.
Thanks for the help. I appreciate it.
Carrie
2010/6/23 Christos Argyropoulos
No, something else is going on here:
f <- function(x) {
  dmvnorm(c(0.6, 0.8), mean = c(0.75, 0.75/x)) * dnorm(x, mean = 0.6, sd = 0.15)
}
> f(1)
[1] 0.01194131
> x <- seq(-2, 2, .15)
> f(x)
Error in dmvnorm(c(0.6, 0.8), mean = c(0.75, 0.75/x)) :
  mean and sigma have non-conforming size
But ...
> sapply(x, f)
works, because sapply() passes the elements of x to f one at a time.
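The Vectorize() route mentioned elsewhere in the thread amounts to the same
thing (a sketch; assumes the mvtnorm package is loaded for dmvnorm):

> fv <- Vectorize(f)
> fv(x)   # evaluates f element-wise, like sapply(x, f)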
> > Rather than try to distinguish between a quadratic and a more
> > general relationship, it might be easier to fit the "f3" model and note
> > the resulting degrees of freedom; if it is close to 2, then the data have
> > essentially told you that a quadratic function is appropriate
summarizing inferences. Note that I do not recommend fitting
the quadratic after a GAMM has suggested this relationship :)
One last thing you should be aware of concerns the numerical performance of
gamm (versus its cousin gamm4); the lme4/lmer machinery used by gamm4 is much,
much faster and numerically more stable for large datasets.
Hi,
You should use sapply/lapply for such operations.
> r <- round(runif(10, 1, 10))
> head(r)
[1] 3 7 6 3 2 8
> filt <- function(x, thres) ifelse(x < thres, x, thres)
> system.time(r2 <- sapply(r, filt, thres = 5))
   user  system elapsed
   3.36    0.00    3.66
> head(r2)
[1] 3 5 5 3 2 5
To return a list, replace "sapply" with "lapply".
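As an aside (my addition): this particular capping operation is already
vectorized in base R, so no apply loop is needed at all:

> r2 <- pmin(r, 5)   # same result as sapply(r, filt, thres = 5)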
ggest.
Christos
> Subject: RE: [R] Popularity of R, SAS, SPSS, Stata...
> Date: Sun, 20 Jun 2010 21:11:14 -0400
> From: muenc...@utk.edu
> To: argch...@hotmail.com
>
>
>
> >-----Original Message-----
> >From: r-help-boun...@r-project.org
> [mailto:r-help-boun..
Hi,
Are you sure these are Date objects and not strings? For example:
> d1 <- runif(10, 0, 100)
> d2 <- runif(10, 0, 200)
> df <- data.frame(d1 = as.Date("2010-6-20") + d1,
+                  d2 = as.Date("2009-6-20") + d2)
> df
          d1         d2
1 2010-09-23 2009-06-30
2 2010-06-25 2009-08-21
3 2010-10-10 2009-08-04
4
How about getting download statistics for R-base from the different CRAN
mirrors?
This should (in principle) allow one to estimate the total number of people
who intended to use R at some point in their life.
It may even be possible to analyze those numbers for temporal trends, since the
da
Hi,
The error message you are getting (probably) means that the algorithm did not
converge. Did you check for convergence? (Look at the "fail" element of the
returned lrm object.)
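For example (a sketch; the fit below is hypothetical):

> fit <- lrm(y ~ x, data = mydata)   # hypothetical lrm fit
> fit$fail                           # TRUE means the fit did not converge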
Christos
    0%    25%    50%    75%   100%
 54.00  67.50  77.50  90.25 112.00
Christos Argyropoulos
> Date: Fri, 18 Jun 2010 21:02:41 -0700
> From: jwiley.ps...@gmail.com
> To: r-help@r-project.org
> Subject: [R] quantile() depends on order of probs?
>
> Hello
Hi,
mod.poly3$coef / sqrt(diag(mod.poly3$var))
will give you the Wald statistics, so
2 * pnorm(abs(mod.poly3$coef / sqrt(diag(mod.poly3$var))), lower.tail = FALSE)
will yield the corresponding p-values.
Christos Argyropoulos
(look at the REML chapter in the manual).
Christos Argyropoulos
> Date: Mon, 3 May 2010 23:18:28 +0200
> From: duta...@gmail.com
> To: r-help@r-project.org
> Subject: [R] extended Kalman filter for survival data
>
> Dear all,
>
> I'm looking for an implementation
something like the following:
"df: degrees of freedom; one can specify df rather than knots; bs() then
chooses df - degree - 1 knots at suitable quantiles of x (which will ignore
missing values) if the inte
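To illustrate the df-versus-knots bookkeeping (a sketch, not from the original
message): with the default degree = 3 and intercept = FALSE, bs() picks
df - degree interior knots from quantiles of x.

library(splines)
x <- seq(0, 1, length = 101)
dim(bs(x, df = 5))            # 101 x 5 basis; 5 - 3 = 2 interior knots
attr(bs(x, df = 5), "knots")  # the quantile-based interior knots (1/3, 2/3)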
Can you give an example of what the Python code is supposed to do?
Some of us are not familiar with Python, and the R code is not particularly
informative. You seem to encode information in both the values and the names of
the elements of the vector "d". If this is the case, why don't you create
nding on the type of outcome variable
specified. It can also create "TeX" versions of these tables which
can be imported (e.g. through htlatex) into MSWord and OpenOffice.
Cheers,
Christos Argyropoulos
> From: rui...@gmail.com
> Date: Sat, 1 May 2010 01:04:19 +0800
>
I presume you want to use such tables to summarize baseline information (a.k.a.
"Table 1" in medical papers).
Try the Hmisc package ... it will do the tables and statistics for you and save
them as TeX (which you can import into your favorite Office-like program after
running htlatex).
li
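A sketch of that workflow (object and file names are my own):

library(Hmisc)
set.seed(1)
d <- data.frame(group = sample(c("A", "B"), 100, replace = TRUE),
                age   = rnorm(100, 60, 10))
s <- summary(group ~ age, data = d, method = "reverse")
latex(s, file = "table1.tex")   # then run htlatex on table1.tex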
   user  system elapsed
 0.2146  0.0248  0.2403
It is ~50% slower than the "f2" solution given previously ... but it also gives
you a 2D map of where the matches are.
These are stored in the "res" variable; use with care with very big datasets.
I wonder whether it is possible to reduce the memory footprint with bit-level
operations ...
Christos Argyropoulos
ggplot2 should work (resize to get the plot to the dimensions you need for the
paper)
library(Hmisc)
library(pscl)
library(ggplot2)
## data
data("bioChemists", package = "pscl")
fm_pois <- glm(art ~ ., data = bioChemists, family = poisson)
summary(fm_pois)
### pull out rate-ratios and 95% CIs
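One way that step might continue (a sketch; the object names below are my
own):

rr <- data.frame(term = names(coef(fm_pois)),
                 RR   = exp(coef(fm_pois)),
                 exp(confint.default(fm_pois)))   # Wald 95% CIs, RR scale
names(rr)[3:4] <- c("lo", "hi")
ggplot(rr[-1, ], aes(term, RR, ymin = lo, ymax = hi)) +  # drop the intercept
  geom_pointrange() +
  coord_flip()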
So ...
are you trying to figure out whether your data has a substantial number of
outliers that call into question the adequacy of the normal distribution for
your data?
If this is the case, note that you cannot individually check the values (as you
are doing) without taking into account the "
statement to the values estimated in the
first step. Then I could compare the (m=2) vs. (m=3) models with ANOVA, as the
2 models are properly nested within each other.
Any other ideas?
Hi,
The evd package has most of the functions one would need for the analysis of
extreme values, so you should consider giving it a try.
By the way, your vector of numerical values is not valid; there are a couple
of values with repeated decimal-point separators.
Regards,
Christos Argyropoulos
library(lattice)
library(MASS)   # for the Cars93 data
splom(~ Cars93[, 5:8] | Origin, data = Cars93,
      panel = function(x, y, ...) {
        panel.splom(x, y, ...)
        dum <- format(cor(x, y, use = "complete", method = "kendall"), digits = 2)
        panel.text(30, 40, bquote(tau == .(dum)), font = 2)
      }, pscales = 0, col = "gray")
Any suggestions?
Christos Argyropoulos
> Date: Sat, 31 Jan 2009 09:25:26 -0600
> From: iver...@biostat.wisc.edu
> To: argch...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] This site may harm your computer - Google warning about cran website
>
> What OS/browser versions are you guys using?
Google's Safe Browsing Diagnostic page (which gives diagnostics about any page
that may contain malware) does not seem to be working at the moment, so the
problems that it found with CRAN are not available for review.
Does anyone else have the same problems?
Christos Argyropoulos
University of Pittsburgh Medical Center
Each of the two integrals (g1, g2) seems to be divergent (or at least is
considered to be so by R :) )
Try this:
z <- c(80, 20, 40, 30)
f1 <- function(x, y, z) { dgamma(cumsum(z)[-length(z)], shape = x, rate = y) }
g1 <- function(y, z) { integrate(function(x) { f1(x = x, y = y, z = z) }, 0.1,
0.5, rel.tol
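As an aside (my addition): integrate() can report a suspected divergence
instead of stopping with an error, via stop.on.error = FALSE:

> out <- integrate(function(x) 1/x, 0, 1, stop.on.error = FALSE)
> out$message
[1] "the integral is probably divergent"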
my.t <- function(x, y, ...) { c(t.test(x, y, ...))[1:3] }
q2 <- mapply(my.t, as.data.frame(x), as.data.frame(y))
q2
Good luck!
Christos Argyropoulos
University of Pittsburgh Medical Center
integration facilities with
this (and possibly other) numerical integration methods.
Christos Argyropoulos
University of Pittsburgh Medical Center
Hello,
I was hoping that someone could answer a few questions for me (the background
is given below):
1) Can coxph accept an interaction between a covariate and a frailty term?
2) If so, is it possible to
a) test the model in which the covariate and the frailty appear as main terms
using the
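For context, a frailty main-effects model in the survival package looks like
this (a sketch using the bundled lung data, not the poster's setup):

library(survival)
fit <- coxph(Surv(time, status) ~ age + frailty(inst), data = lung)
summary(fit)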
" and how 'transform'
works are accurate, both statements should produce the same output.
I got the same behaviour in Windows XP Pro 32-bit (running R v 2.7) and Ubuntu
Hardy (running the same version of R).
Thanks
Christos Argyropoulos
University of Pittsburgh Medical