[R] compiling, linking and including multiple FORTRAN subroutines in R
Although there are a number of documents describing library development, in which compiling and using C or FORTRAN source code is discussed, such documents are too detailed and complicated for my limited needs (i.e. I can't follow them!). I am looking for a worked example (i.e. the series of commands etc.) that does the following from within a WINDOWS environment.

Given a series (i.e. more than one) of files containing FORTRAN source code for subroutines:

1. How to compile and link these subroutines using the g77 compiler that is included in the R Tools (one main subroutine will call a series of other subroutines) to produce a .DLL file.

2. How to call this compiled subroutine, linked to the others, from within an R function, so that one can pass the arguments to the subroutine and get the returned values.

I am familiar with FORTRAN programming and with R but not with the interface of the two. Thanks for any help or leads.

Bill Shipley
bill.ship...@usherbrooke.ca

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
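For what it's worth, a minimal sketch of the whole round trip, assuming Rtools is installed and on the PATH; the file name `dbl.f` and subroutine name `dblvec` are hypothetical:

```r
## dbl.f (FORTRAN 77) -- a toy subroutine that doubles a vector in place:
##       subroutine dblvec(x, n)
##       integer n
##       double precision x(n)
##       integer i
##       do 10 i = 1, n
##          x(i) = 2.0d0 * x(i)
## 10    continue
##       end
##
## Compile and link from the Windows command prompt; R CMD SHLIB drives the
## compiler and produces dbl.dll (additional .f files can simply be listed
## after the first one, and they are linked together into a single DLL):
##   R CMD SHLIB dbl.f
dyn.load("dbl.dll")
out <- .Fortran("dblvec", x = as.double(1:5), n = as.integer(5))
out$x   # the modified vector comes back in the x component of the list
dyn.unload("dbl.dll")
```

Note that `.Fortran` passes every argument by reference and returns a list of the (possibly modified) arguments, so "output" values are simply arguments the subroutine overwrites.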
[R] specifying lme function with a priori hypothesis concerning between-group variation in slopes
I want to specify a 2-level mixed model using the lme function in order to test an a priori hypothesis about the between-group values of the slopes, but don't know how to do this. Here is the problem.

Consider first the case of a single group. The model is: Y_i = a + b*X_i + error, where i indexes the different values of X and Y in this group. The a priori hypothesis about the slope is b = K. This is easily tested with a t-test (b - K = 0).

Now imagine that there are j groups. For each group j the model is: Y_ij = a_j + b_j*X_ij + error. Both the intercepts (a) and the slopes (b) are allowed to vary between groups. The a priori (null) hypothesis of interest involves the between-group values of the slopes and is: b_j = K_j, where K_j is specified a priori for each group j based on theoretical considerations but whose values differ between groups.

This is clearly a mixed-model problem. I know how to specify the model in lme, but I don't know how to set up the inferential test that b_j = K_j for all j groups versus the alternative hypothesis that b_j is not equal to K_j for at least one group. Any help in explaining how to do this using the lme function in R is appreciated. Thanks.

Bill Shipley
Département de biologie
Université de Sherbrooke
Sherbrooke (Québec) J1K 2R1
(819) 821-8000, 62079
(819) 821-8049 (Fax)

NEW! Shipley, B. (2010). From plant traits to vegetation structure: Chance and selection in the assembly of ecological communities. Cambridge University Press. http://www.amazon.com/Plant-Traits-Vegetation-Structure-Communities/dp/0521133556/ref=sr_1_3?ie=UTF8&s=books&qid=1260148938&sr=1-3
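One possible (not necessarily canonical) approach is to subtract the hypothesized group-specific slopes from Y first: under H0, the transformed response has no remaining group-specific slope on X, which can be checked with a likelihood-ratio test. A sketch, where the data frame `dat` with columns `Y`, `X`, `group` and the named vector `K` of a priori slopes are all hypothetical names:

```r
library(nlme)

## Under H0 (b_j = K_j for every group j), Ystar = Y - K_j * X has neither a
## fixed slope on X nor group-to-group variation in that slope.
dat$Ystar <- dat$Y - K[as.character(dat$group)] * dat$X

## Null model: no slope at all on X (fit by ML so the models are comparable).
m0 <- lme(Ystar ~ 1, random = ~ 1 | group, data = dat, method = "ML")

## Alternative: a fixed slope plus group-varying slope deviations.
m1 <- lme(Ystar ~ X, random = ~ X | group, data = dat, method = "ML")

## A significant likelihood-ratio statistic is evidence against b_j = K_j
## for at least one group.
anova(m0, m1)
```

The same idea can also be expressed with an offset term, but the explicit subtraction keeps the hypothesis visible in the code.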
[R] superposing barplots having different scales
Hello. I know how to make a bar plot in which a numeric y variable is plotted against some grouping variable X (say, groups A, B, C) when this grouping variable is subdivided into each of two subgroups; so the bars would be: (group A subgroup 1) beside (group A subgroup 2), then (group B subgroup 1) beside (group B subgroup 2), and so on. This is done using the beside=TRUE argument in the barplot() function.

However, I want to make a slightly different type of bar plot in which the numerical values of the two subgroups (subgroups 1 and 2) are measured on very different scales. I want to use a different scale to define the numerical y values for each subgroup, so the graph would have one scale on the left-hand y-axis and a second scale on the right-hand y-axis. I cannot simply superimpose two bar plots, because I have to make sure that the subgroup bars are beside each other.

Bill Shipley
North American Editor, Annals of Botany
Département de biologie
Université de Sherbrooke
Sherbrooke (Québec) J1K 2R1 Canada
(819) 821-8000, poste 62079
(819) 821-8049 FAX
http://pages.usherbrooke.ca/jshipley/recherche/
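One way to keep the beside=TRUE layout while using two scales is to rescale subgroup 2 onto subgroup 1's scale before plotting, and then draw a second axis on the right that labels the original subgroup-2 units. A sketch with toy data (all values hypothetical):

```r
## Toy data: subgroup 1 and subgroup 2 measured in very different units.
y1 <- c(A = 12, B = 18, C = 9)       # plotted against the left axis
y2 <- c(A = 0.4, B = 0.7, C = 0.2)   # plotted against the right axis

sc <- max(y1) / max(y2)              # factor mapping y2 onto y1's scale

op <- par(mar = c(5, 4, 4, 4))       # leave room for the right-hand axis
barplot(rbind(y1, y2 * sc), beside = TRUE,
        col = c("grey30", "grey70"), ylab = "subgroup 1 scale")

## Right-hand axis labelled in the original subgroup-2 units.
ticks <- pretty(y2)
axis(4, at = ticks * sc, labels = ticks)
mtext("subgroup 2 scale", side = 4, line = 2)

legend("topright", legend = c("subgroup 1", "subgroup 2"),
       fill = c("grey30", "grey70"))
par(op)
```

Because both subgroups pass through a single barplot() call, the paired bars stay beside each other; only the axis annotation pretends there are two scales.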
[R] Type I or III SS with mixed model function lme
Hello, I have come across a result that I cannot explain, and am hoping that someone else can provide an answer. A student fitted a mixed model using the lme function:

out <- lme(fixed = Y ~ A + B + A:B, random = ~ 1 | Site)

Y is a continuous variable while A and B are factors. The data set is balanced, with the same number of observations in each combination of A and B. There are two hierarchical levels: Site and plots nested in Site. He tried two different ways of getting the ANOVA table: anova(out) and anova(out, type = "marginal"). Since the data are balanced, these two ways should (I think) give the same output, since they correspond to Type I and Type III sums of squares in the SAS terminology. At least, this is the case with normal (i.e. not mixed) linear models. However, he finds very different results from these two types of ANOVA tables. Why?

Bill Shipley
North American Editor, Annals of Botany
[R] solution to differences in sequential and marginal ANOVA using a mixed model
Yesterday I posted the following question to the help list. Thanks to John Fox (copied below) who pointed out the solution.

Original question: I have come across a result that I cannot explain, and am hoping that someone else can provide an answer. A student fitted a mixed model using the lme function: out <- lme(fixed = Y ~ A + B + A:B, random = ~ 1 | Site). Y is a continuous variable while A and B are factors. The data set is balanced with the same number of observations in each combination of A and B. There are two hierarchical levels: Site and plots nested in site. He tried two different ways of getting the ANOVA table: anova(out) and anova(out, type = "marginal"). Since the data were balanced, these two ways should (I think) give the same output, since they correspond to Type I and III sums of squares in the SAS terminology. At least, this is the case with normal (i.e. not mixed) linear models. However, he finds very different results from these two types of ANOVA tables. Why?

Response of John Fox: Dear Bill, I expect that the problem is in the contrasts that your student used for A and B, though I haven't thought specifically about the context of a mixed model. If he or she used the default contr.treatment(), then the contrasts for different factors (and the interaction) are not orthogonal in the row basis of the model matrix and hence are not orthogonal, even for balanced data. Using, e.g., contr.sum() should provide A, B, and A:B contrasts that are orthogonal to each other.

Indeed, changing to an appropriate type of contrast did solve the problem! My problem was in forgetting that R uses treatment contrasts by default, while S-PLUS uses Helmert contrasts by default (which would have worked as well as sum contrasts).
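Concretely, the fix amounts to switching to sum-to-zero contrasts before fitting; a sketch (the data frame `dat` and its factors are hypothetical names):

```r
library(nlme)

## Make sum-to-zero contrasts the session default for unordered factors;
## with balanced data these are orthogonal across A, B, and A:B.
options(contrasts = c("contr.sum", "contr.poly"))

out <- lme(fixed = Y ~ A + B + A:B, random = ~ 1 | Site, data = dat)
anova(out)                      # sequential (Type I) table
anova(out, type = "marginal")   # marginal (Type III) table; now agrees

## Alternatively, set the contrasts on an individual factor only:
## contrasts(dat$A) <- contr.sum(nlevels(dat$A))
```

Note that with sum contrasts the individual coefficients change meaning (deviations from the grand mean rather than from a baseline level), even though the fitted model is the same.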
[R] avoiding termination of nls given convergence failure
Hello. I have a script in which I repeatedly fit a nonlinear regression to a series of data sets using nls and the port algorithm from within a loop. The general structure of the loop is:

for(i in 1:n){
  # extract relevant vectors of dependent and independent variables
  # estimate starting values for Amax and Q.LCP
  fit <- nls(photosynthesis ~ fit.Mitcherlich(irradiance, Amax, LCP, Q.LCP),
             data = temp,
             start = list(Amax = Astart, Q.LCP = x, LCP = 33),
             control = list(maxiter = 100, tol = 5e-4),
             na.action = na.omit, trace = TRUE,
             algorithm = "port", lower = c(0, 0, 0))
}

Despite trying to estimate good starting values, the nls function occasionally experiences problems with convergence. When this happens the function stops and prints an error message, thus preventing the loop from continuing. Is there some way of detecting the convergence problem, while preventing the nls function from stopping when this happens, so that the loop can continue?

Bill Shipley
North American Editor, Annals of Botany
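Wrapping the nls() call in tryCatch() turns a convergence error into an ordinary value that can be tested, so the loop keeps going. A sketch using the object names from the question (temp, Astart, x, fit.Mitcherlich are assumed to exist as described):

```r
fits <- vector("list", n)   # one slot per data set; NULL marks a failure

for (i in 1:n) {
  ## ... extract vectors and estimate Astart, x for data set i as before ...
  fits[[i]] <- tryCatch(
    nls(photosynthesis ~ fit.Mitcherlich(irradiance, Amax, LCP, Q.LCP),
        data = temp,
        start = list(Amax = Astart, Q.LCP = x, LCP = 33),
        control = list(maxiter = 100, tol = 5e-4),
        na.action = na.omit,
        algorithm = "port", lower = c(0, 0, 0)),
    error = function(e) {
      ## Record which fit failed and why, then return NULL instead of
      ## letting the error terminate the loop.
      message("data set ", i, " did not converge: ", conditionMessage(e))
      NULL
    })
}

converged <- !sapply(fits, is.null)   # logical index of successful fits
```

The older try() function works similarly (test the result with inherits(result, "try-error")), but tryCatch() makes the error handling explicit.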