On Tue, 09 May 2017 10:00:17 -0700, Jeff Newmiller <jdnew...@dcn.davis.ca.us> wrote:
> This boils down to the fact that some "my ways" are more effective
> in the long run than others... but I really want to address the
> complaint
>
> "... sometimes tedious to rebuild my environment by reexecuting
> commands in the history"
>
> by asserting that letting R re-run a script that loads my functions
> and packages (though perhaps not the data analysis steps) is always
> very fast and convenient to do explicitly. I almost never use the
> history file feature, because I type nearly every R instruction I
> use into a script file and execute/edit it until it does what I
> want. I keep functions in a separate file or package, and steps
> dealing with a particular data set in their own file that uses
> source() to load the functions, even when I am executing the lines
> interactively. My goal is to regularly re-execute the whole script,
> so that when tomorrow/next year/whenever someone notices something
> was wrong, I can re-execute the sequence without following the dead
> ends I went down the first time (as using a history file would), and
> I don't have a separate clean-up-the-history-file step to go through
> to create it. When I have confirmed that the script still works as
> it did before, I can find where the analysis/data problem went wrong
> and fix it.

My usual work with R is probably a bit different from yours. As I said
before, I work on many projects (often simultaneously), but the work
itself is routine. For that I have my "super function", the one I want
to reload every time R starts, currently about 250 lines of code. It is
always work in progress: almost every project makes me edit it in some
way.

But before I can apply my function I need to prepare the data, e.g.
fetch it from a database or from csv files, rename the columns of
data.frames, and so on. This is all tedious and not worth putting into
scripts, because these steps are very specific to the project and are
rarely needed more than once. Sometimes one or two data records in a
project I worked on a few days earlier turn out to be wrong and need to
be changed. That is why I want to keep the data: changing the
data.frame directly is much easier than starting from scratch.
Meanwhile my function has evolved, but the .RData file still contains
the old version, which is bad.

However, I found a solution! .Last() does get executed before saving
here, too. I had simply forgotten that I need to call rm() with pos=1,
i.e. rm(myfun, pos=1), because otherwise rm() tries to delete myfun
from within the environment of the function .Last(), where it does not
exist. I changed my .Rprofile to:

    .First <- function() {
      ## load the current version of the function into the workspace
      assign("myfun", eval(parse(file = "~/R/myfun.R")), pos = 1)
    }

    .Last <- function() {
      ## remove the function (and these hooks) before the workspace is saved
      rm(.First, pos = 1)
      rm(myfun, pos = 1)
      rm(.Last, pos = 1)
    }

and everything works as I want it to. So no design flaw, but still way
too complicated in my opinion.

Thanks to everybody who came up with suggestions.

Ralf
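P.S. For anyone who wants to see the pos=1 detail in isolation: below
is a minimal standalone sketch (not my actual .Rprofile; myfun and the
helper names drop_default/drop_global are just stand-ins for
illustration) showing why rm() inside a function needs pos=1, or
equivalently envir=globalenv(), to remove an object from the workspace.

    ## rm() looks in the calling environment by default, so inside a
    ## function it cannot see objects that live in the workspace.
    myfun <- function(x) x + 1    # stand-in for the real function

    drop_default <- function() {
      ## only warns ("object 'myfun' not found"): rm() searches the
      ## function's own evaluation environment, not the workspace
      rm(myfun)
    }

    drop_global <- function() {
      ## pos = 1 is the workspace (.GlobalEnv); envir = globalenv()
      ## would work the same way
      rm(myfun, pos = 1)
    }

    drop_default()
    exists("myfun")    # still TRUE
    drop_global()
    exists("myfun")    # FALSE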