This is a 'rather than re-invent the wheel' post. Has anyone out there
re-written combn so that it can be parallelized - with multicore, snow, or
otherwise? I have a job that requires large numbers of combinations, and
rather than get all of the index values, then crank it through mclapply, I
was
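A sketch of the get-the-indices-then-mclapply approach described above ('do_one' is a hypothetical stand-in for the real per-combination job, and parallel::mclapply is the modern home of multicore's mclapply):

library(parallel)
x <- letters[1:10]
idx <- combn(seq_along(x), 3, simplify = FALSE)   # list of index vectors
do_one <- function(i) paste(x[i], collapse = "")  # placeholder work
res <- mclapply(idx, do_one, mc.cores = 2)  # forks; use mc.cores = 1 on Windows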
Hrm. I have to admit, I don't entirely understand how to use the scaling,
and it seems like a lot of unneeded extra code. It is what it is, though.
The documentation on scaling is somewhat opaque. Do you have a clear
explanation of what it is and how to use it in this instance? Perhaps e
OK, I see how this would work for a rolling mean with continuous
observations. However, my actual problem is a bit more complex. I have a
number of observations, not necessarily evenly spaced. So, the data I'm
working with looks more like what the following would produce:
set.seed(2003)
sites<-
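A hypothetical reconstruction of the kind of data described (several sites, unevenly spaced sampling times; all names here are made up):

set.seed(2003)
dat <- data.frame(
  site  = rep(c("A", "B"), each = 15),
  time  = c(sort(sample(1:100, 15)), sort(sample(1:100, 15))),
  value = rnorm(30)
)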
Interesting. While this will work for a single site, for multiple sites or
other grouping variables I'm left with the same problem of efficiently going
from one data frame to the other.
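One hedged sketch of a way around the grouping problem, using the simulated 'dat' above: split by site, apply zoo::rollmean within each group, then reassemble.

library(zoo)
by_site <- lapply(split(dat, dat$site), function(d) {
  data.frame(site = d$site[1],
             time = rollmean(d$time, k = 3),    # window-center times
             roll = rollmean(d$value, k = 3))   # 3-point rolling mean
})
smoothed <- do.call(rbind, by_site)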
David Winsemius wrote:
>
>
>
> library(zoo)
> ?rollmean
>
> David Winsemius, MD
> Heritage Labo
Wow. That's actually quite beautiful. Grazie - both to you and the authors
of the language.
Henrique Dallazuanna wrote:
>
> Try:
>
>
> a <- function(x) m <<- 4
>
> On Mon, Jun 15, 2009 at 3:10 PM, Jarrett Byrnes
> wrote:
>
>
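The key is the superassignment operator '<<-', which assigns in an enclosing environment rather than the function's local frame. A minimal illustration:

m <- 1
a <- function(x) m <<- 4  # '<<-' walks up the enclosing environments
a(0)
m  # now 4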
No worries. I've actually switched to using Rgraphviz from Bioconductor, and
the results are pretty great (after the initial head-pounding to get it
set up).
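A minimal sketch of the Rgraphviz route (both 'graph' and 'Rgraphviz' are Bioconductor packages; the toy food-web nodes are invented):

library(graph)
library(Rgraphviz)
nodes <- c("algae", "grazer", "predator")
edges <- list(algae    = list(edges = "grazer"),
              grazer   = list(edges = "predator"),
              predator = list())
g <- new("graphNEL", nodes = nodes, edgeL = edges, edgemode = "directed")
plot(g, "dot")  # hierarchical layout, as graphviz's dot produces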
Gábor Csárdi-2 wrote:
>
> On Mon, May 4, 2009 at 8:31 PM, jebyrnes wrote:
>>
>> Nearly. The algorithm turn
simple,
I'd be willing to take a whack at it.
Gábor Csárdi-2 wrote:
>
> Jarrett,
>
> the 'igraph' package has a layout called layout.reingold.tilford that
> is designed for trees, there is a slight chance that it is good enough
> for you.
>
> Best,
>
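A sketch of that suggestion, using the function names from igraph releases of that era (a made-up binary tree stands in for a real food web):

library(igraph)
g <- graph.tree(15, children = 2, mode = "out")  # small rooted tree
lay <- layout.reingold.tilford(g, root = 1)      # tree layout
plot(g, layout = lay)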
I've been using sna to work with some networks, and am trying to visualize
them easily. My networks are hierarchical (food webs). None of the layout
engines I've tried with gplot seems to plot hierarchical networks the way
one would with dot from Graphviz. While I could do all of this by
outpu
Hey, Dorothee. SEM does not use the S-B chi-squared index. If you want to
use it, however, I have some methods written for the Satorra-Bentler
chi-square test in the sem.additions package over at R-Forge.
To install it directly within R, type
install.packages("sem.additions", repos = "http://R-Forge.R-project.org")
Have you thought about using AIC weights? As long as you are not considering
models where you drop your random effects, calculating AIC values (or AICc
values) and doing multimodel inference is one way to approach your problem.
If you are fitting models with and without random effects, it gets t
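The Akaike-weight arithmetic itself is short; a sketch with stand-in models (packages such as MuMIn automate this):

fits  <- list(lm(mpg ~ wt, data = mtcars),
              lm(mpg ~ wt + hp, data = mtcars),
              lm(mpg ~ wt * hp, data = mtcars))
aics  <- sapply(fits, AIC)
delta <- aics - min(aics)                        # AIC differences
w     <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike weights
round(w, 3)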
Not that I know of. However, remember, the RAM format is just a matrix.
It's fairly simple to write some code to scan through and make changes to
models. For example, here's something to delete a specified path. It
should be fairly simple to run through a set of paths, delete them
piecewise, f
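A sketch of the idea, assuming the model is the three-column path/parameter/start-value matrix that sem::specifyModel() produces ('delete_path' is a made-up helper name):

model <- rbind(c("x1 -> y1",  "b1", NA),
               c("x2 -> y1",  "b2", NA),
               c("y1 <-> y1", "v1", NA))
delete_path <- function(model, path) {
  # keep every row whose path (column 1) is not the one being deleted
  model[model[, 1] != path, , drop = FALSE]
}
smaller <- delete_path(model, "x2 -> y1")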
Michael,
As a general rule of thumb (I believe this is in Jim Grace's book, if not
others), one should use 10-20 observations per variable. If you have 5
variables and 18 observations, you should probably be a bit suspicious of
your results. That said, if some of your paths are indeed non-signific
I often just download the source, find the appropriate function, create a
file with an alternate version of it (e.g., plotMeans becomes plot.Means),
and modify it to suit. I load all custom functions like that from my
.Rprofile. I guess you _could_ recompile the library, but that might break oth
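A sketch of that .Rprofile setup (the '~/R/custom' directory is a hypothetical choice):

# in ~/.Rprofile: source every file of modified functions at startup
local({
  files <- list.files("~/R/custom", pattern = "\\.R$", full.names = TRUE)
  for (f in files) source(f)
})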
As a followup, what about writing ANOVA tables and such to files? I keep
getting odd errors converting an ANOVA object to a data frame.
Mark Wardle wrote:
>
> I would look at Sweave, particularly outputting to LaTeX. Then have a
> look at the xtable or Hmisc's latex() functions.
>
> I
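On the follow-up question, one sketch that sidesteps the conversion errors is to coerce the ANOVA object to a data frame explicitly before writing:

library(xtable)
fit <- lm(mpg ~ wt + hp, data = mtcars)
tab <- as.data.frame(anova(fit))
print(xtable(tab), file = "anova_table.tex")  # LaTeX, for Sweave
write.csv(tab, "anova_table.csv")             # or a plain csv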
l). In this
case, one needs to look at the coefficients and set up the appropriate
contrasts by hand.
At least, I think so. I have not yet found a way to use mcp for factorial
designs. Does one exist?
-Jarrett
jebyrnes wrote:
>
> Quick question about the usage of glht. I'm worki
I believe you're looking for summary(). So
my.glm <- glm(Response ~ TrtA * TrtB)
summary(my.glm)
will give you p-values for each parameter. Similarly, anova(my.glm) will
give you p-values for the likelihood-ratio chi-square statistic for each
factor using sequential tests. You can also use Ano
getting hung up.
Chuck Cleland wrote:
>
> On 3/5/2008 3:19 PM, jebyrnes wrote:
>> Indeed, but isn't each of the cell means also an evaluation of the effect
>> of one factor at a specific level of another factor? Is this an issue of
>> "Tomato, tomahto"?
Indeed, but isn't each of the cell means also an evaluation of the effect of
one factor at a specific level of another factor? Is this an issue of
"Tomato, tomahto"?
I guess my question is: if I want to know whether each of those is different
from 0, should I use the 48 df from the full model,
Ah, I see. So, if I want to test whether each simple effect is different
from 0, I would do something like the following:
cm2 <- rbind(
  "A:L" = c(1, 0, 0, 0, 0, 0),
  "A:M" = c(1, 1, 0, 0, 0, 0),
  "A:H" = c(1, 0, 1, 0, 0, 0),
  "B:L" = c(1, 0, 0, 1, 0, 0),
  "B:M" = c(1, 1, 0, 1, 1, 0),
  "B:H" = c(1, 0, 1, 1, 0, 1))  # last row completes the pattern (assumed)
Huh. Very interesting. I haven't really worked with manipulating contrast
matrices before, save to do a priori contrasts. Could you explain the matrix
you laid out just a bit more, so that I can generalize it to my case?
Chuck Cleland wrote:
>
>
> One approach would be to use glht() in t
Sounds like you want to use the filehash package, which was written for just
such problems:
http://yusung.blogspot.com/2007/09/dealing-with-large-data-set-in-r.html
http://cran.r-project.org/web/packages/filehash/index.html
or maybe the ff package:
http://cran.r-project.org/web/packages/ff/index.html
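Basic filehash usage is only a few lines (a sketch; the database name is arbitrary):

library(filehash)
dbCreate("mydb")                 # one-time: create the on-disk database
db <- dbInit("mydb")
dbInsert(db, "x", rnorm(1e6))    # stored on disk, not held in memory
y <- dbFetch(db, "x")            # pulled back only when needed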
So, Site is nested in Location, and Location is nested in Region. And you
are looking at how density varies. Let's think about this from the point of
view of a model with varying intercepts.
You have some mean density in your study. That mean will deviate by site,
location, and region. Each of w
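In lme4 syntax, that varying-intercept structure looks something like the sketch below ('density', 'Region', 'Location', and 'Site' are assumed column names; the data are simulated stand-ins):

library(lme4)
set.seed(1)
mydata <- data.frame(
  Region   = rep(c("R1", "R2"), each = 40),
  Location = rep(paste0("Loc", 1:8), each = 10),
  Site     = rep(paste0("S", 1:16), each = 5)
)
mydata$density <- rnorm(nrow(mydata), mean = 10)
# intercepts deviating by region, location within region, site within location
fit <- lmer(density ~ 1 + (1 | Region/Location/Site), data = mydata)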
Here's a variant on what has been proposed, but simpler. It relies on the
fact that plot() doesn't need a formula (~).
a <- 1:100
b <- seq(1, length(a), 5)  # every fifth index: 20 values
plot(1:20, a[b])
Alternatively, if you were using a data frame, as long as you knew the column
names, you could do something like
plot(my.dat
Rather than go over all of this again, see here
http://wiki.r-project.org/rwiki/doku.php?id=guides:lmer-tests
The trick is to use the fitted() function, not predict(), to get your fitted
values. You should then be able to use that vector of values in just the
same way that you use your current mean values as below.
Darren Norris wrote:
>
>
> lmodel<-with(a_dataframe,lm(mean_ind~sin(time*2*pi)+cos(tim
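A self-contained illustration of the fitted() advice, with simulated periodic data (the variable names are made up, not Darren's):

set.seed(1)
time <- seq(0, 3, length.out = 60)
y <- 2 * sin(time * 2 * pi) + rnorm(60, sd = 0.3)
fit <- lm(y ~ sin(time * 2 * pi) + cos(time * 2 * pi))
plot(time, y)
lines(time, fitted(fit), col = "red")  # fitted(), not predict()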
Have you tried making a list of data frames instead? So
data.list <- list()
for (i in 1:20) {
  g <- sample(rep(LETTERS[1:2], each = 10))
  # make a name
  a.name <- paste("combination", i, sep = "")
  # add it to the list of data frames ('tab' here is your existing data)
  data.list[[a.name]] <- data.frame(tab, g)
}
This shoul
Working with lists of models gets stranger. I've actually found that, with
lmer objects in a list, as long as I use them as in the following example:
all.models <- c(my.lmer, my.lmer2)
summary(all.models[[1]])
Everything is fine. However, if, instead, I had lm objects or glm objects,
the sam
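If the problem is with lm or glm objects, the usual fix is to build the collection with list() rather than c(): c() splices plain-list objects like lm fits into one long list, while list() keeps each model intact.

fit1 <- lm(mpg ~ wt, data = mtcars)
fit2 <- glm(am ~ wt, family = binomial, data = mtcars)
all.models <- list(fit1, fit2)   # list(), not c()
summary(all.models[[1]])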