So I'm in a stochastic simulations class and I'm having issues with the amount
of time it takes to run the Ising model.
I usually don't like to attach the code I'm running, since it will probably
make me look like a fool, but I figure it's the best way to find any bits
where I can speed up the run time.
As
Hello.
How are you?
My name is Arsalan. I'm from Iran.
I want to write a program that does a random walk (one variable and two
variables).
Please help me.
Thank you
Hi,
Have a look at the directlabels package; it does just that for lattice
and ggplot2.
HTH,
baptiste
On 26 October 2010 08:02, Jeffrey Spies wrote:
> Hi, all,
>
> Let's say I have some time series data--10 subjects measured 20
> times--that I plot as follows:
>
> library(ggplot2)
> dat <- dat
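A minimal sketch of baptiste's suggestion (the data frame `dat` and its columns
time, value and subject are assumptions here, since the quoted code above is cut
off; directlabels is on R-Forge, as noted later in the thread):
library(ggplot2)
library(directlabels)
p <- ggplot(dat, aes(x = time, y = value, colour = subject)) + geom_line()
direct.label(p, "last.points")  # label each line at its end point instead of using a legend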
Maybe you should take a look at the view all button.
From there you can guess the next link:
http://www.etintelligence.com/etig/et500/et500Ranking.jsp?param=1&msg=1&year=2010&rslt=500
this should give you the whole list
Bart
I have many emails such as the one below, where someone gives thanks for
being helped but the help or answer that was given is excluded. I
believe, perhaps I am wrong, that emails such as this should not be
copied to the list. Such emails only fill up our mailboxes and we DO
NOT gain anything from them.
Thanks David!
Indeed the printout is perfect. But this image (produced at a higher
resolution) should appear in a publication. I will ask the author to
check whether her copy of the manuscript is acceptable.
Anyway, thanks for the pointer to pdf doc.
Thanks Baptiste!
The problem with lattice (or, more likely, with my ignorance) is that it
does not accept NA values.
Ciao!
mario
On 26-Oct-10 07:38, baptiste auguie wrote:
Hi,
As an alternative, maybe you could use lattice::panel.levelplot.raster
which I think doesn't have t
Hi
I agree with this statement. Especially for more complicated problems with a
longer history of answers, it can be quite confusing when the history itself
is missing from some emails. Besides, reading help-list questions and answers
is a convenient and quick way to learn R. I do it myself even after several
Hi everyone,
Can you please help me on how to estimate the Mahalanobis distance in a
large matrix
Thanks
Ivone
Hi Dr. Vilchis,
On Mon, Oct 25, 2010 at 7:11 PM, L. Ignacio Vilchis
wrote:
> Hi all,
>
> Could anybody who has successfully installed the rgdal package on r64 on a
> mac help me out? I have downloaded all of the needed frameworks (gdal, proj,
> etc); I am just having trouble getting the correct
Greetings, fellow R enthusiasts!
We have some problems converting a computer routine written initially for
Gauss to estimate a Markov Regime Switching analysis with Time Varying
Transition Probability. The source code in Gauss is here:
http://www.econ.washington.edu/user/cnelson/markov/programs/hmt
Thanks for your reply,
My main issue is that I don't have any equations to generate the data, just
a bunch of points, each corresponding to a polygon.
I was looking in package "sp" and there is a function to calculate areas (
areapl()), but not for intersecting polygons. Is there any other packag
Hello everybody,
Is there a way to add a subtitle to a lattice key?
It is important for me that the subtitle be linked to the key,
because those graphs are produced on a daily temporal scale,
and the number of rectangles in the key may be different from day to
day.
Thank you,
Alexandru Du
I am trying to create an array of date-time objects using the strptime
function.
The first entry is in the form "1/1/1981 0:00" and I use
strptime(datetime, "%m/%d/%Y %H:%M"), which gives "1981-01-01 EST".
How do I alter this to give me "1981-01-01 0:00 EST"?
Thanks in advance,
Doug
On 10/26/2010 04:50 PM, Michael D wrote:
So I'm in a stochastic simulations class and I'm having issues with the amount
of time it takes to run the Ising model.
I usually don't like to attach the code I'm running, since it will probably
make me look like a fool, but I figure it's the best way I can
Hi
r-help-boun...@r-project.org wrote on 26.10.2010 10:55:57:
> Hi everyone,
>
> Can you please help me on how to estimate the Mahalanobis distance in a
> large matrix
Did you by chance try ?mahalanobis
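For example, a minimal sketch on made-up data (note that mahalanobis() returns
*squared* distances, which appears to be Ivone's point further down the thread):
x  <- matrix(rnorm(10000 * 5), ncol = 5)   # stand-in for the large matrix
d2 <- mahalanobis(x, center = colMeans(x), cov = cov(x))
d  <- sqrt(d2)                             # ordinary Mahalanobis distances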
Regards
Petr
>
> Thanks
>
> Ivone
Hi,
I would like to set a constraint on the fixed effect estimates in a GLM
model, such as b1=b2. Is this possible in the glm package? Similarly I would
like to set some to equal zero too. I have tried searching the documentation
for this package, but I can't find anything on this.
Thanks in adv
jonas garcia googlemail.com> writes:
>
> Thanks for your reply,
>
> My main issue is that I don't have any equations to generate the data, just
> a bunch of points, each corresponding to a polygon.
>
> I was looking in package "sp" and there is a function to calculate areas (
> areapl()), but
Dear all,
By default the glm function in the stats package uses IWLS. How can I fit a glm
model using the BFGS algorithm?
Justin BEM
BP 1917 Yaoundé
Tél (237) 76043774
For instance, for logistic regression you can do something like this:
# simulate some data
x <- cbind(1, runif(100, -3, 3), rbinom(100, 1, 0.5))
y <- rbinom(100, 1, plogis(c(x %*% c(-2, 1, 0.3))))
# BFGS from optim(): negative log-likelihood to minimize
fn <- function (betas, y, x) {
    -sum(dbinom(y, 1, plogis(c(x %*% betas)), log = TRUE))
}
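(The example is cut off here; a plausible continuation, with starting values at
zero assumed, would be:)
opt <- optim(c(0, 0, 0), fn, y = y, x = x, method = "BFGS", hessian = TRUE)
opt$par   # estimates; roughly comparable to glm(y ~ x - 1, family = binomial)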
On 10/25/2010 09:37 PM, Gabor Grothendieck wrote:
On Tue, Oct 26, 2010 at 12:28 AM, Bob Cunningham wrote:
I have time-series data from a pair of inexpensive self-logging 3-axis
accelerometers (http://www.gcdataconcepts.com/xlr8r-1.html). Since I'm not
sure of the vibration/shock spectrum I
> From: ggrothendi...@gmail.com
> Date: Tue, 26 Oct 2010 00:37:05 -0400
> To: flym...@gmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] Time series data with dropouts/gaps
>
> On Tue, Oct 26, 2010 at 12:28 AM, Bob Cunningham wrote:
> > I have t
Here is a different solution:
library(gpclib)
p1 <- as(poly1, "gpc.poly")
p2 <- as(poly2, "gpc.poly")
area.poly(p2) + area.poly(p1) - area.poly(union(p1,p2))
I.e., take areas of both polygons and subtract the union (check
plot(union(p1,p2)) ) to get the area of the intersection.
greetings,
R
like this:
format(strptime(datetime, "%m/%d/%Y %H:%M"), "%m/%d/%Y %H:%M %Z")
remko
Dear Jonas,
I already had to deal with such an issue.
You can use the joinPolys function from the package PBSmapping, with "INT"
as the operation.
The maptools package has functions SpatialPolygons2PolySet and
PolySet2SpatialPolygons to switch between formats suitable for sp or
PBSmapping.
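A rough sketch of that route (assuming poly1 and poly2 are SpatialPolygons
objects, as in the gpclib answer elsewhere in the thread):
library(maptools)
library(PBSmapping)
ps1 <- SpatialPolygons2PolySet(poly1)
ps2 <- SpatialPolygons2PolySet(poly2)
inter <- joinPolys(ps1, ps2, operation = "INT")  # intersection of the two polygons
calcArea(inter)                                  # area of the overlap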
Hope thi
Dear List,
I am looking to plot error bars on a line using dispersion.
I have values for the upper and the lower bounds; however, I am unsure
how to plot different values for the upper CI and the lower CI.
I have been using
dispersion(1:35, sim, simCI, col = "red")
where there are 35 poi
Hi all,
I have a total newbie question, but I could really use some help.
I need to read in this file:
SampleIDDisease
E-CBIL-28-raw-cel-1435145228.cel1
E-CBIL-28-raw-cel-1435145451.cel2
E-CBIL-28-raw-cel-1435145479.cel2
E-CBIL-28-raw-cel-143514513
Dear experts,
I am using the LASSO method (a shrinkage and selection method for linear
regression) to create regression models. I am using the package "glmnet"
My aim is to calculate an overall p-value for a certain regression model
with a certain lambda.
I would like to do this with a likelih
2010/10/26 amindlessbrain
>
> (I'm not sure why the disease column isn't showing up as a tab here, but it
> is separated by "\t" in my file.)
>
You've got a double tab space. I don't know if there is a prettier way, but
paste this:
pd <- read.delim("new_treat.txt", sep = "")
--
Have a nice day
baptiste auguie googlemail.com> writes:
>
> Hi,
>
> Have a look at the directlabels package; it does just that for lattice
> and ggplot2.
>
Note that this package is on r-forge, not CRAN
http://directlabels.r-forge.r-project.org/ [very nice examples]
install.packages("directlabels",repos=
Remko Duursma gmail.com> writes:
>
> Here is a different solution:
>
> library(gpclib)
> p1 <- as(poly1, "gpc.poly")
> p2 <- as(poly2, "gpc.poly")
>
> area.poly(p2) + area.poly(p1) - area.poly(union(p1,p2))
>
> I.e., take areas of both polygons and subtract the union (check
> plot(union(p1,p2)
Hi
In that case you have to wait and see whether some more capable people will try to
answer you. Besides, there are some clustering methods which maybe you can
use directly.
??cluster
Regards
Petr
"Ivone Figueiredo" wrote on 26.10.2010 12:36:20:
> Hi
>
> Yes we tried but this function Returns the squ
On Oct 25, 2010, at 11:17 PM, Arsalan Fathi wrote:
Hello.
How are you?
My name is Arsalan. I'm from Iran.
I want to write a program that does a random walk (one variable and two
variables).
Please help me.
You should learn to use a search engine that is specific to R. Here's
an example:
htt
I'm importing a lot of text tables of data (from Latent Gold) that include
hashes in some of the column names ("Cluster#1", "Cluster#2", etc.). Is
there an easy way to strip the offending hashes out before pushing the text
into a table or data frame? I thought I'd use gsub, e.g., but can't figur
On Tue, Oct 26, 2010 at 10:27 AM, David Winsemius
wrote:
>
> On Oct 25, 2010, at 11:17 PM, Arsalan Fathi wrote:
>
>> Hello.
>> How are you?
>> My name is Arsalan. I'm from Iran.
>> I want to write a program that does a random walk (one variable and two
>> variables).
>> Please help me.
>
> You shoul
A call to read.table(..., sep = "", ...) reads in any length of whitespace
as the delimiter. On your sample text it read in a 2 column dataframe.
--
Jonathan P. Daily
Technician - USGS Leetown Science Center
11649 Leetown Road
Kearneysville WV, 25430
(304) 724-4
Hi everyone!
I have two questions:
1) How do I obtain variable (attribute) importance using e1071::svm (or other
svm methods)?
2) How do I validate the results of the svm?
Currently I am using the following code to determine the error.
library(ipred)
for(i in 1:20) error.model1[i]<-
errorest(Spec
I've looked at the Kim/Nelson gauss code before, and I applaud your
effort to convert it to R.
I'm happy to have a look at it for you if you are willing to share your example.
-Whit
On Tue, Oct 26, 2010 at 4:13 AM, Houge wrote:
>
> Greetings, fellow R enthusiasts!
>
> We have some problems conve
Morning, all.
Right now, I have R installed on a 32-bit Ubuntu with PAE enabled, and
I can see more than 4 GB of memory available in the system monitor. My
question is: might this 32-bit R take advantage of the extra memory
and handle large data?
Thank you so much!
wensui
On 26/10/2010 10:33 AM, Donald Braman wrote:
I'm importing a lot of text tables of data (from Latent Gold) that includes
hashes in some of the column names ("Cluster#1", "Cluster#2", etc.). Is
there an easy way to strip the offending hashes out before pushing the text
into a table or data frame?
The constraint b1=b2 in a model such as b0 + b1 x1 + b2 x2 + b3 x3 implies the
model b0 + b1 (x1 + x2) + b3 x3, so just add x1 and x2 (call this x12) and fit the
model b0 + b1 x12 + b3 x3, and you have imposed the constraint that b1=b2. To
impose the constraint that b3=0, just fit the model without variable x3.
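In glm() terms, a minimal sketch (the variable names x1, x2, x3 and the data
frame `dat` are placeholders, not from the original post):
# impose b1 = b2 by entering the sum of x1 and x2 as one predictor
fit12  <- glm(y ~ I(x1 + x2) + x3, data = dat)
# additionally impose b3 = 0 by simply dropping x3
fit120 <- glm(y ~ I(x1 + x2), data = dat)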
Thank you - I will try this solution as well.
Sent via DROID X
-Original message-
From: Petr PIKAL
To: David Herzberg
Cc: Adrienne Wootten , "r-help@r-project.org"
Sent: Tue, Oct 26, 2010 06:43:09 GMT+00:00
Subject: Re: [R] Conditional looping over a set of variables in R
Hi
r-hel
I am looking for an approximation for the sum of two lognormal variables.
I found some papers suggesting the power lognormal or Pearson type IV,
but I wonder if there is anything ready-made in R?
Best regards,
Ryszard
I have a vector of monthly log asset price returns. I would like to use the boot
package to sample one-year returns, and then calculate confidence intervals on
the loss distribution only. More concretely, I would like to say something like
"99% of LOSSES (not RETURNS) are above cutoff X."
If the v
Caveats and disclaimers:
I am quite happy to undertake self-teaching if directed to a relevant
prior posting and welcome such
direction. I have programming and statistical training/experience which
I would characterize as Masters level.
Thank you for reading and replying to this post. It is very
"A call to read.table(..., sep = "", ...) reads in any length of whitespace
as the delimiter. On your sample text it read in a 2 column dataframe. "
Thanks! That works for the file, but when I enter my next line of code it
doesn't work. I'm not sure if this is the problem, or if the next line
If I try that I get this:
Error in scan(file, what = "", sep = sep, quote = quote, nlines = 1, quiet =
TRUE, :
invalid 'sep' value: must be one byte
?
Try this:
Lines <- "SampleIDDisease
E-CBIL-28-raw-cel-1435145228.cel1
E-CBIL-28-raw-cel-1435145451.cel2
E-CBIL-28-raw-cel-1435145479.cel2
E-CBIL-28-raw-cel-1435145132.cel3
E-CBIL-28-raw-cel-1435145417.cel3
E-CBIL-28-raw-cel-1435145301.cel2
E-
Hi
I need some help getting results from multiple linear models into a dataframe.
Let me explain the problem.
I have a dataframe with ejection fraction results measured over a number of
quartiles and grouped by base_study.
My dataframe (800 different base_studies) looks like
> afvtprelvefs
base
Right, I forgot to mention to use header = T.
--
Jonathan P. Daily
Technician - USGS Leetown Science Center
11649 Leetown Road
Kearneysville WV, 25430
(304) 724-4480
"Is the room still a room when its empty? Does the room,
the thing itself have purpose? Or do we
That's one of the things I tried, but it didn't work. I get the following
error when I do that:
Error in read.table(file = "don.5.clusters.txt", header = TRUE, comment.char
= "", :
more columns than column names
If I remove the hashes by other means, I don't get that error.
On Tue, Oct 26
On Oct 26, 2010, at 7:50 AM, Viechtbauer Wolfgang (STAT) wrote:
The constraint b1=b2 in a model such as b0 + b1 x1 + b2 x2 + b3 x3
implies the model b0 + b1 (x1 + x2) + b3 x3, so just add x1 and x2 (call
this x12) and fit the model b0 + b1 x12 + b3 x3 and you have imposed
the constraint that b1
Hello!
I am sorry if it's a naive/wrong question. But can one run a
regression with weights using lme?
Thank you!
--
Dimitri Liakhovitski
Ninah Consulting
www.ninah.com
In a simple plot, when I call plot(), is it possible to zoom in or out, or is this
not
possible at all?
Best Regards
Alex
On Oct 26, 2010, at 8:08 AM, Small Sandy (NHS Greater Glasgow & Clyde)
wrote:
Hi
I need some help getting results from multiple linear models into a
dataframe.
Let me explain the problem.
I have a dataframe with ejection fraction results measured over a
number of quartiles and grouped
Hi Alex,
After you have created the plot I do not know of a way to zoom (in
base graphics), but you can always use the xlim and ylim arguments to
focus on a particular area (or effectively zoom) when you create
the plot. For instance,
plot(x = 1:10, y = 1:10)
plot(x = 1:10, y = 1:10, xlim = c(4, 6), ylim = c(4, 6))
I think that this would be possible if you save the graph to a scalable
format.
Try looking into:
?postscript
?xfig
?pdf
--
Jonathan P. Daily
Technician - USGS Leetown Science Center
11649 Leetown Road
Kearneysville WV, 25430
(304) 724-4480
"Is the room still a
Hi Dimitri,
The lme function is not in the lme4 package, so there is some
confusion there. But you can use weights with the lmer function in
lme4. ?lmer tells you that weights are specified the same way as in
the lm function, and refers you to ?lm for details.
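A minimal sketch of that (the weight vector `w` is a placeholder; the model
terms are borrowed loosely from your later post):
library(lme4)
mix.lmer <- lmer(DV ~ a + b + c + d + (1 | group), data = mydata, weights = w)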
HTH,
Ista
On Tue, Oct 26, 2010 at 11
On Oct 26, 2010, at 8:30 AM, Alaios wrote:
In a simple plot, when I call plot(), is it possible to zoom in or out, or
is this not
possible at all?
Zoom? Do you mean restrict the region plotted to specific ranges?
xlim and ylim arguments provide that facility.
--
David.
Best Regards
Alex
On Tue, Oct 26, 2010 at 8:38 AM, Jonathan P Daily wrote:
> I think that this would be possible if you save the graph to a scalable
> format.
This is true to an extent. I have not checked on postscript or xfig,
but at least for PDF, even though you can "zoom"/blow the picture up,
you still have f
Try this:
read.table('don.5.clusters.txt', header = TRUE, comment.char = '', quote =
'')
On Tue, Oct 26, 2010 at 1:15 PM, Donald Braman wrote:
> That's one of the things I tried, but which didn't work. I get the
> following
> error when I do that:
>
> Error in read.table(file = "don.5.clusters.
Hello all,
I wish to learn a version control system for managing my R (data analysis)
projects.
I know of SVN and GitHub, and wonder if there is any reason why I
should prefer one over the other (or any other platform). An example of
a reason would be if it would make it easier for me t
On 26/10/2010 11:30 AM, Alaios wrote:
In a simple plot, when I call plot(), is it possible to zoom in or out, or is this not
possible at all?
There's no general support for that, but you could conceivably write it
yourself using getGraphicsEvent. The example code there adjusts xlim
and ylim accordingly.
On 26/10/2010 12:16 PM, Tal Galili wrote:
Hello all,
I wish to learn a version control system for managing my R (data analysis)
projects.
I know of SVN and github, and wonder if there is any reason for which I
should prefer the one over the other (or any other platform). An example for
a reason
I would still recommend
vector_of_column_number <- apply(yourdata, 1, match, x=1)
as the simplest way if you only want the number of the
column that has the first 1 or "1" (the call works as is
for both numeric and character data). Rows which have no
1s will return a value of NA.
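To illustrate on a small made-up data frame (not from the original thread):
yourdata <- data.frame(a = c(0, 1, 0), b = c(1, 0, 0), c = c(0, 1, 1))
apply(yourdata, 1, match, x = 1)   # 2 1 3: the column holding the first 1 in each row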
Anything wron
Many thanks for the help.
I assumed that I would need to account for the variables in the model, even
though I wish to assign a zero coefficient to them. I've looked at the
offset function, but does this not just fix the coefficient at 1 rather than 0?
How would I specify a zero coefficient for more
Thanks a lot - it's very helpful.
On Tue, Oct 26, 2010 at 11:37 AM, Ista Zahn wrote:
> Hi Dimitri,
> The lme function is not in the lme4 package, so there is some
> confusion there. But you can use weights with the lmer function in
> lme4. ?lmer tells you that weights are specified the same way
We can further generalize this:
Suppose we want to constrain the parameters such that:
b2 = a * b1
b3 = a * b1
We can do the following:
fit.a <- glm(y ~ I(x1 + a * x2 + a * x3), data = , ...)
For a fixed value of `a', we compute the log-likelihood of `fit.a'. This is
the profile likelihood for `a'.
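A minimal sketch of that profiling step (the grid limits and the data frame
`dat' are assumptions):
a.grid <- seq(-2, 2, by = 0.05)
logL   <- sapply(a.grid, function(a)
    as.numeric(logLik(glm(y ~ I(x1 + a * x2 + a * x3), data = dat))))
a.hat  <- a.grid[which.max(logL)]   # value of `a' with the highest profile log-likelihood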
For a quick exploration of the plot you can use the zoomplot function in the
TeachingDemos package. But for production graphs it is better to explicitly
set the xlim and ylim parameters in creating the plot up front.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthc
The caret package has answers to all your questions.
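For instance, a rough sketch of both pieces with caret (the data set and method
choice here are illustrative assumptions, not from the thread):
library(caret)
fit <- train(Species ~ ., data = iris, method = "svmRadial",
             trControl = trainControl(method = "cv", number = 10))
fit$results   # cross-validated accuracy, i.e. one way to validate the SVM
varImp(fit)   # variable (attribute) importance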
> -Original Message-
> From: r-help-boun...@r-project.org
> [mailto:r-help-boun...@r-project.org] On Behalf Of Neeti
> Sent: Tuesday, October 26, 2010 10:42 AM
> To: r-help@r-project.org
> Subject: [R] to determine the variable importa
Thanks for your help, Gabor. That would be exactly what I am looking for. If I
use your code I get the nice representation I am looking for. However, when
I try to apply the code in the same fashion to my case, it does not produce
the x-axis. I believe the problem hinges on the following warning message
On Tue, Oct 26, 2010 at 12:56 PM, Manta wrote:
>
> Thanks for your help Gabor. That would be exactly what I am looking for. If I
> use your code I get the nice representation I am looking for. However, when
> I try to apply the code in the same fashion to my case, it does not produce
> the x-axis.
On Oct 26, 2010, at 11:22 AM, Duncan Murdoch wrote:
> On 26/10/2010 12:16 PM, Tal Galili wrote:
>> Hello all,
>>
>> I wish to learn a version control system for managing my R (data analysis)
>> projects.
>>
>> I know of SVN and github, and wonder if there is any reason for which I
>> should pref
Thanks David
That's great
As a matter of interest, to get a data frame by studies why do you have to do
fitsdf <- as.data.frame(t(as.data.frame(fits)))
Why doesn't
fitsdf <- as.data.frame(t(fits))
work?
Sandy Small
From: David Winsemius [dwinsem...@comc
Hello,
and sorry for asking a question without the data - hope it can still
be answered:
I've run two things on the same data:
# Using lme:
mix.lme <- lme(DV ~ a+b+c+d+e+f+h+i, random = ~ e+f+h+i |
group, data = mydata)
# Using lmer
mix.lmer <- lmer(DV
~a+b+c+d+(1|group)+(e|group)+(f|grou
On Oct 26, 2010, at 9:27 AM, David Smith wrote:
Many thanks for the help.
You could express your thanks by including context the next time you
present a follow-up (as requested in the Posting Guide). Only a
minority of readers view this list on Nabble, so we don't see the web
delivered
On Oct 26, 2010, at 10:22 AM, Small Sandy (NHS Greater Glasgow &
Clyde) wrote:
Thanks David
That's great
As a matter of interest, to get a data frame by studies why do you
have to do
fitsdf <- as.data.frame(t(as.data.frame(fits)))
The apply family of functions often return results rot
Marc is exactly right about people having strong opinions.
R-forge is really the _only_ reason to consider using svn.
git is where the world is headed. This video is a little old:
http://www.youtube.com/watch?v=4XpnKHJAok8, but does a good job
getting the point across.
Hg is a good alternative,
Try this:
sapply(by(x, x$basestudy, lm, formula = ef ~ quartile), coef)
On Tue, Oct 26, 2010 at 1:08 PM, Small Sandy (NHS Greater Glasgow & Clyde) <
sandy.sm...@nhs.net> wrote:
> Hi
>
> I need some help getting results from multiple linear models into a
> dataframe.
> Let me explain the problem.
Hello all,
I could use some help installing the ncdf4 package in R (under CentOS 5.4).
I've installed R-2.12.0, zlib-1.2.5, hdf5-1.8.4-patch1 and NetCDF4.1.1 from
source. Make check reports all tests have passed for all of these programs.
When I issue an 'install.packages(c('ncdf4')) in R, comp
On Fri, 2010-10-22 at 15:39 +0200, Claudia Beleites wrote:
> On 10/22/2010 03:15 PM, DrCJones wrote:
>
> Being a chemist, it seemed natural to me to put the i after the concentration
> brackets into a subscript - though you didn't say you want that.
>
> A more "correct" expression would be:
>
shaticus wrote:
>
> Hello all,
>
> I could use some help installing the ncdf4 package in R (under CentOS
> 5.4).
>...
> When I issue an 'install.packages(c('ncdf4')) in R, compilation succeeds
> but
> I receive the following error during the loading phase of installation:
>
> ** building package in
On Fri, 2010-10-22 at 05:54 -0700, Penny Adversario wrote:
> I am doing cluster analysis on 8768 respondents on 5 lifestyle
> variables and am having difficulty constructing a dissimilarity matrix
> which I will use for PAM. I always get an error: “cannot allocate
> vector of size 293.3 Mb” even
Hi all,
I have a problem with this code... as it generates an error in R.
z<-predict(dat.fit, newdata=grd)
I saw a post on R forum about this ["chfactor.c", line 130: singular matrix in
function LDLfactor() ] error and tried pretty much anything I could read about
it and still hav
Hello Masters,
I run the loess() function to obtain locally weighted regressions, given that
lowess() can't handle NAs, but it doesn't
significantly improve my situation; actually, loess()'s performance leaves
me much puzzled.
I attach my simple experiment below.
#--SCRIPT---
Hello,
I was wondering if anyone knew of a function that fits haplotype data into a
Cox proportional hazards model. I have computed my haplotype frequencies
using the haplo.stats package. I have also been using the haplo.glm function,
but this is a linear regression and is not quite what I am look
I have an update on where the issue is coming from.
I commented out the code for "pos[k+1] <- M[i,j]" and the if statement for
time = 10^4, 10^5, 10^6, 10^7 and the storage and everything ran fast(er).
Next I added back in the "pos" statements and still runtimes were good
(around 20 minutes).
So
Hi:
When it comes to split, apply, combine, think plyr.
library(plyr)
ldply(split(afvtprelvefs, afvtprelvefs$basestudy),
function(x) coef(lm (ef ~ quartile, data=x, weights=1/ef_std)))
         .id (Intercept)   quartile
1 CBP0908020    20.92140 3.38546887
2 CBP0908021    29.31632      0.013
> git is where the world is headed. This video is a little old:
> http://www.youtube.com/watch?v=4XpnKHJAok8, but does a good job
> getting the point across.
And lots of R users are using github already:
http://github.com/languages/R/created
Hadley
--
Assistant Professor / Dobelman Family Juni
On Oct 26, 2010, at 11:15 AM, Gavin Simpson wrote:
On Fri, 2010-10-22 at 15:39 +0200, Claudia Beleites wrote:
On 10/22/2010 03:15 PM, DrCJones wrote:
Being a chemist, it seemed natural to me to put the i after the
concentration
brackets into a subscript - though you didn't say you want
?loess
use this instead:
fit <- loess(b~a)
lines(a, predict(fit))
--
Jonathan P. Daily
Technician - USGS Leetown Science Center
11649 Leetown Road
Kearneysville WV, 25430
(304) 724-4480
"Is the room still a room when its empty? Does the room,
the thing itself
On Oct 26, 2010, at 8:57 AM, sr500 wrote:
Hello,
I was wondering if anyone knew of a function that fits haplotype
data into a
cox proportional hazard model. I have computed my Haplotype
frequencies
using the haplo.stats package. I have also been using the haplo.glm
function
but this is
On Tue, Oct 26, 2010 at 11:55 AM, Dennis Murphy wrote:
> Hi:
>
> When it comes to split, apply, combine, think plyr.
>
> library(plyr)
> ldply(split(afvtprelvefs, afvtprelvefs$basestudy),
> function(x) coef(lm (ef ~ quartile, data=x, weights=1/ef_std)))
Or do it in two steps:
models <- d
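(The message is cut off here; a plausible sketch of the two-step version, using
dlply and ldply, would be:)
library(plyr)
models <- dlply(afvtprelvefs, "basestudy",
                function(x) lm(ef ~ quartile, data = x, weights = 1/ef_std))
coefs  <- ldply(models, coef)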
Is it possible to get survfit to produce the survival line for a single stratum,
like
preddow <- survfit(modall,newdata=newdat,se.fit=F,strata=2)
# the strata argument is being ignored in the call above
Or even get a more economical/faster calculation of the hazard directly from
the coxph object
Hello
I am trying to use sbrier in ipred but got an error message (see below).
Can someone help?
---
I. function()
{
library(ipred)
library(survival)
set.seed(12345)
age <- rnorm(30, 50, 10)
stime <- rexp(30)
cens <- runif(30,.5,2)
sevent <- as.numeric(stime <= cens)
stime <- pmin(stime,
On Tue, Oct 26, 2010 at 12:27 PM, Dimitri Liakhovitski
wrote:
> Hello,
> and sorry for asking a question without the data - hope it can still
> be answered:
> I've run two things on the same data:
> # Using lme:
> mix.lme <- lme(DV ~a+b+c+d+e+f+h+i, random = random = ~ e+f+h+i|
> group, data = m
Hi everyone,
Why am I having such a tough time finding a way to put an mlogit summary
table into LaTeX? Everything I read says that using Sweave and LaTeX is the
most sophisticated, dynamic way to get output, but it appears very limited
in this respect. I'm just starting out with Sweave and LaTeX
Thanks David!
After setting the LD_LIBRARY_PATH variable (to /usr/local/lib), R was able to
successfully install the ncdf4 package.
On Tue, Oct 26, 2010 at 2:24 PM, David Pierce [via R] <
ml-node+3014258-455613790-200...@n4.nabble.com
> wrote:
> shaticus wrote:
>
> >
> > Hello all,
> >
> > I coul
On 2010-10-26 11:48, Jonathan P Daily wrote:
?loess
use this instead:
fit<- loess(b~a)
lines(a, predict(fit))
I don't think that will work when there are incomplete cases,
in which case 'a' and predict(fit) may not correspond.
I think that it's always best to define a set of predictor
values
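Along those lines, a minimal sketch (the vectors a and b with NAs are assumed):
fit  <- loess(b ~ a, na.action = na.exclude)
newa <- seq(min(a, na.rm = TRUE), max(a, na.rm = TRUE), length.out = 100)
lines(newa, predict(fit, newdata = data.frame(a = newa)))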
On 10/25/2010 8:56 PM, Daisy Englert Duursma wrote:
Hello,
If I have a dataframe:
example(data.frame)
zz<-c("aa_bb","bb_cc","cc_dd","dd_ee","ee_ff","ff_gg","gg_hh","ii_jj","jj_kk","kk_ll")
ddd<- cbind(dd, group = zz)
and I want to divide the column named group by the "_", how would I do this?
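The reply is missing here; one common approach (a sketch, not necessarily the
answer that was given) is strsplit():
parts <- do.call(rbind, strsplit(as.character(ddd$group), "_", fixed = TRUE))
ddd$group1 <- parts[, 1]
ddd$group2 <- parts[, 2]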