Re: [R] run r script in r-fiddle
Try that source statement here -- it is running R 3.4.1: https://www.tutorialspoint.com/execute_r_online.php On Mon, Oct 30, 2017 at 11:14 AM, Suzen, Mehmet wrote: > Note that, looks like r-fiddle runs R 3.1.2. > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Portable R in zip file for Windows
I believe that the ordinary Windows installer for R can produce a portable result by choosing the appropriate configuration options from the offered screens when you run the installer. Be sure to enter the desired path in the Select Destination Location screen, choose Yes on the Startup options screen, and ensure that all boxes are unchecked on the Select additional tasks screen. On Wed, Jan 24, 2018 at 10:11 PM, Juan Manuel Truppia wrote: > I read a message from 2009 or 2010 where it mentioned the availability of R > for Windows in a zip file, no installation required. It would be very > useful for me. Is this still available somewhere? > > Thanks > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Portable R in zip file for Windows
Can you clarify what the nature of the security restriction is? If you can't run the R installer then how it is that you could run R? That would still involve running an external exe even if it came in a zip file. Could it be that the restriction is not on running exe files but on downloading them? If that is it then there are obvious workarounds (rename it not to have an exe externsion or zip it using another machine, upload to the cloud and download onto the restricted machine) but it might be safer to just ask the powers that be to download it for you. You probably don't need a new version of R more than once a year. On Thu, Jan 25, 2018 at 3:04 PM, Juan Manuel Truppia wrote: > What is wrong with you guys? I asked for a zip, like R Studio has for > example. Totally clear. > I cant execute exes. But I can unzip files. > Thanks Gabor, I had that in mind, but can't execute the exe due to security > restrictions. > Geez, really, treating people who ask questions this way just makes you > don't want to ask a single one. > > > On Thu, Jan 25, 2018, 11:19 Gabor Grothendieck > wrote: >> >> I believe that the ordinary Windows installer for R can produce a >> portable result by choosing the appropriate configuration options from the >> offered screens when you run the installer Be sure to enter the desired >> path in the Select Destination Location screen, choose Yes on the >> Startup options screen and ensure that all boxes are unchecked on the >> Select additional tasks screen. >> >> On Wed, Jan 24, 2018 at 10:11 PM, Juan Manuel Truppia >> wrote: >> > I read a message from 2009 or 2010 where it mentioned the availability >> > of R >> > for Windows in a zip file, no installation required. It would be very >> > useful for me. Is this still available somewhere? >> > >> > Thanks >> > >> > [[alternative HTML version deleted]] >> > >> > __ >> > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >> > https://stat.ethz.ch/mailman/listinfo/r-help >> > PLEASE do read the posting guide >> > http://www.R-project.org/posting-guide.html >> > and provide commented, minimal, self-contained, reproducible code. >> >> >> >> -- >> Statistics & Software Consulting >> GKX Group, GKX Associates Inc. >> tel: 1-877-GKX-GROUP >> email: ggrothendieck at gmail.com -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Plotting quarterly time series
Using Achim's d this also works to generate z where FUN is a function used to transform the index column and format is also passed to FUN. z <- read.zoo(d, index = "time", FUN = as.yearqtr, format = "Q%q %Y") On Sun, Jan 28, 2018 at 4:53 PM, Achim Zeileis wrote: > On Sun, 28 Jan 2018, p...@philipsmith.ca wrote: > >> I have a data set with quarterly time series for several variables. The >> time index is recorded in column 1 of the dataframe as a character vector >> "Q1 1961", "Q2 1961","Q3 1961", "Q4 1961", "Q1 1962", etc. I want to produce >> line plots with ggplot2, but it seems I need to convert the time index from >> character to date class. Is that right? If so, how do I make the conversion? > > > You can use the yearqtr class in the zoo package, converting with > as.yearqtr(..., format = "Q%q %Y"). zoo also provides an autoplot() method > for ggplot2-based time series visualizations. See ?autoplot.zoo for various > examples. > > ## example data similar to your description > d <- data.frame(sin = sin(1:8), cos = cos(1:8)) > d$time <- c("Q1 1961", "Q2 1961", "Q3 1961", "Q4 1961", "Q1 1962", > "Q2 1962", "Q3 1962", "Q4 1962") > > ## convert to zoo series > library("zoo") > z <- zoo(as.matrix(d[, 1:2]), as.yearqtr(d$time, "Q%q %Y")) > > ## ggplot2 display > library("ggplot2") > autoplot(z) > > ## with nicer axis scaling > autoplot(z) + scale_x_yearqtr() > > ## some variations > autoplot(z, facets = Series ~ .) + scale_x_yearqtr() + geom_point() > autoplot(z, facets = NULL) + scale_x_yearqtr() + geom_point() > > > >> __ >> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >> https://stat.ethz.ch/mailman/listinfo/r-help >> PLEASE do read the posting guide >> http://www.R-project.org/posting-guide.html >> and provide commented, minimal, self-contained, reproducible code. >> > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
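For completeness, a minimal end-to-end sketch of the read.zoo() route, reusing the column names (sin, cos, time) and the "Q%q %Y" format from the thread above; the toy data are the same as in Achim's example.

  library(zoo)
  library(ggplot2)

  ## example data: two series plus a character quarter label
  d <- data.frame(sin = sin(1:8), cos = cos(1:8),
                  time = paste0("Q", rep(1:4, 2), " ", rep(1961:1962, each = 4)),
                  stringsAsFactors = FALSE)

  ## read.zoo() applies FUN to the index column, passing format through to it
  z <- read.zoo(d, index = "time", FUN = as.yearqtr, format = "Q%q %Y")

  ## ggplot2 display with quarter-aware axis labels
  autoplot(z) + scale_x_yearqtr()

(The code above has been indented 2 spaces so that line wrapping is easy to spot.)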
Re: [R] Equivalent of gtools::mixedsort in R base
split any mixed columns into letter and number columns and then order can be used on that: DF <- data.frame(x = c("a10", "a2", "a1")) o <- do.call("order", transform(DF, let = gsub("\\d", "", x), no = as.numeric(gsub("\\D", "", x)), x = NULL)) DF[o,, drop = FALSE ] On Mon, Mar 12, 2018 at 12:15 AM, Sebastien Bihorel wrote: > Hi, > > Searching for functions that would order strings that mix characters and > numbers in a "natural" way (ie, "a1 a2 a10" instead of "a1 a10 a2"), I found > the mixedsort and mixedorder from the gtools package. > > Problems: > 1- mixedorder does not work in a "do.call(mixedorder, mydataframe)" call like > the order function does > 2- gtools has not been updated in 2.5 years > > Are you aware of an equivalent of this function in base R or a another > contributed package (with correction of problem #1)? > > Thanks > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
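Wrapped into a small helper (a sketch only; mixedorder2 is a made-up name), the same letter/number split can be called like order() on a single character vector:

  mixedorder2 <- function(x) {
    ## order by the letter part first, then by the numeric part
    order(gsub("\\d", "", x), as.numeric(gsub("\\D", "", x)))
  }

  x <- c("a10", "a2", "a1", "b1")
  x[mixedorder2(x)]
  ## [1] "a1"  "a2"  "a10" "b1"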
Re: [R] Take average of previous weeks
There is no `value` column in the `dput` output shown in the question so using `tmin` instead note that the `width=` argument of `rollapply` can be a list containing a vector of offsets (-1 is prior value, -2 is value before that, etc.) and that we can use `rollapplyr` with an `r` on the end to get right alignment. See `?rollapply` library(dplyr) library(zoo) roll <- function(x, k) rollapplyr(x, list(-seq(1:k)), mean, fill = NA) df %>% group_by(citycode) %>% mutate(mean2 = roll(tmin, 2), mean3 = roll(tmin, 3), mean4 = roll(tmin, 4)) %>% ungroup (The code above has been indented 2 spaces so you can identify inadvertent line wrapping by the email system.) On Sun, Mar 25, 2018 at 10:48 AM, Miluji Sb wrote: > Dear all, > > I have weekly data by city (variable citycode). I would like to take the > average of the previous two, three, four weeks (without the current week) > of the variable called value. > > This is what I have tried to compute the average of the two previous weeks; > > df = df %>% > mutate(value.lag1 = lag(value, n = 1)) %>% > mutate(value .2.previous = rollapply(data = value.lag1, > width = 2, > FUN = mean, > align = "right", > fill = NA, > na.rm = T)) > > I crated the lag of the variable first and then attempted to compute the > average but this does not seem to to what I want. What I am doing wrong? > Any help will be appreciated. The data is below. Thank you. > > Sincerely, > > Milu > > dput(droplevels(head(df, 10))) > structure(list(year = c(1970L, 1970L, 1970L, 1970L, 1970L, 1970L, > 1970L, 1970L, 1970L, 1970L), citycode = c(1L, 1L, 1L, 1L, 1L, > 1L, 1L, 1L, 1L, 1L), month = c(1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, > 2L, 3L), week = c(1L, 2L, 3L, 4L, 5L, 5L, 6L, 7L, 8L, 9L), date = > structure(c(1L, > 2L, 3L, 4L, 5L, 5L, 6L, 7L, 8L, 9L), .Label = c("1970-01-10", > "1970-01-17", "1970-01-24", "1970-01-31", "1970-02-07", "1970-02-14", > "1970-02-21", "1970-02-28", "1970-03-07"), class = "factor"), > value = c(-15.035, -20.478, -22.245, -23.576, -8.840995, > -18.497, -13.892, -18.974, -15.919, -13.576)), .Names = c("year", > "citycode", "month", "week", "date", "tmin"), row.names = c(NA, > 10L), class = "data.frame") > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
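A quick check of roll() on a short made-up vector (not the poster's data): each value is the mean of the k observations before it, with NA where fewer than k earlier observations exist.

  library(zoo)

  roll <- function(x, k) rollapplyr(x, list(-seq(1:k)), mean, fill = NA)

  roll(c(1, 2, 3, 4, 5), 2)
  ## [1]  NA  NA 1.5 2.5 3.5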
Re: [R] Extract function parameters from a R expression
If you specifically want to know which packages were loaded by the script then using a vanilla version of R (i.e. one where only base packages are loaded): vanilla_search <- search() source("myRprg.R") setdiff(search(), vanilla_search) On Wed, Jun 20, 2018 at 4:08 AM, Sigbert Klinke wrote: > Hi, > > I have read an R program with > > expr <- parse("myRprg.R") > > How can I extract the parameters of a specifc R command, e.g. "library"? > > So, if myprg.R containes the lines > > library("xyz") > library("abc") > > then I would like to get "xyz" and "abc" back from expr. > > Thanks in advance > > Sigbert > > -- > https://hu.berlin/sk > > > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
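If the script should not be executed at all, a sketch that inspects the parsed expressions directly; it assumes the library() calls are top level, as in the example, and parse(text = ...) stands in for parse("myRprg.R").

  exprs <- as.list(parse(text = 'library("xyz")\nlibrary("abc")\nx <- 1'))

  ## keep only top-level calls to library() and pull out their first argument
  libs <- Filter(function(e) is.call(e) && identical(e[[1]], as.name("library")), exprs)
  sapply(libs, function(e) as.character(e[[2]]))
  ## [1] "xyz" "abc"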
Re: [R] Combine by columns a vector with another vector that is constant across rows
Try Reduce: Reduce(cbind, vec, 1:5) On Tue, Jul 3, 2018 at 9:28 AM, Viechtbauer, Wolfgang (SP) wrote: > Hi All, > > I have one vector that I want to combine with another vector and that other > vector should be the same for every row in the combined matrix. This > obviously does not work: > > vec <- c(2,4,3) > cbind(1:5, vec) > > This does, but requires me to specify the correct value for 'n' in > replicate(): > > cbind(1:5, t(replicate(5, vec))) > > Other ways that do not require this are: > > t(sapply(1:5, function(x) c(x, vec))) > do.call(rbind, lapply(1:5, function(x) c(x, vec))) > t(mapply(c, 1:5, MoreArgs=list(vec))) > > I wonder if there is a simpler / more efficient way of doing this. > > Best, > Wolfgang > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Combine by columns a vector with another vector that is constant across rows
or this variation if you don't want the first column to be named init: Reduce(cbind2, vec, 1:5) On Tue, Jul 3, 2018 at 10:46 AM, Gabor Grothendieck wrote: > Try Reduce: > > Reduce(cbind, vec, 1:5) > > On Tue, Jul 3, 2018 at 9:28 AM, Viechtbauer, Wolfgang (SP) > wrote: >> Hi All, >> >> I have one vector that I want to combine with another vector and that other >> vector should be the same for every row in the combined matrix. This >> obviously does not work: >> >> vec <- c(2,4,3) >> cbind(1:5, vec) >> >> This does, but requires me to specify the correct value for 'n' in >> replicate(): >> >> cbind(1:5, t(replicate(5, vec))) >> >> Other ways that do not require this are: >> >> t(sapply(1:5, function(x) c(x, vec))) >> do.call(rbind, lapply(1:5, function(x) c(x, vec))) >> t(mapply(c, 1:5, MoreArgs=list(vec))) >> >> I wonder if there is a simpler / more efficient way of doing this. >> >> Best, >> Wolfgang >> >> __ >> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >> https://stat.ethz.ch/mailman/listinfo/r-help >> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html >> and provide commented, minimal, self-contained, reproducible code. > > > > -- > Statistics & Software Consulting > GKX Group, GKX Associates Inc. > tel: 1-877-GKX-GROUP > email: ggrothendieck at gmail.com -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
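For comparison, a plain-matrix sketch of the same task (same vec as in the question) that builds the constant block in one step instead of reducing:

  vec <- c(2, 4, 3)

  ## repeat vec across the rows, then prepend the varying column
  cbind(1:5, matrix(vec, nrow = 5, ncol = length(vec), byrow = TRUE))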
Re: [R] How to calculate Rolling mean for a List object
Try this code which does not use rollapply: w <- 3 Mean <- function(L) Reduce("+", L) / length(L) lapply(w:length(Data), function(i) Mean(Data[seq(to = i, length = w)])) On Sun, May 14, 2017 at 6:44 PM, Christofer Bogaso wrote: > Hi again, > > I am looking to find a way on how to calculate Rolling average for the > elements of a list. For example consider below object 'Data'. This is > a list, where each elements are a Matrix. Basically, I am trying to > get Rolling average of those Matrices with rolling window as 5. > > Data = structure(list(`2017-03-01` = structure(c(1.24915216491479e-06, > -2.0209685810767e-06, -6.64165527006046e-06, -2.0209685810767e-06, > 3.26966891657893e-06, 1.07453495291747e-05, -6.64165527006046e-06, > 1.07453495291747e-05, 3.53132196103035e-05), .Dim = c(3L, 3L)), > `2017-03-02` = structure(c(0.00863066441403338, -7.25585852047094e-05, > -0.000950715788640005, -7.25585852047094e-05, 6.10004981580403e-07, > 7.99273256915577e-06, -0.000950715788640005, 7.99273256915577e-06, > 0.000104726642980084), .Dim = c(3L, 3L)), `2017-03-03` = > structure(c(0.000785677680557358, > 0.000283148300122928, 0.000170319078518317, 0.000283148300122928, > 0.000102043066573597, 6.13808419844048e-05, 0.000170319078518317, > 6.13808419844048e-05, 3.6921741860797e-05), .Dim = c(3L, > 3L)), `2017-03-06` = structure(c(0.000100715163251975, > 1.80035062425799e-06, > -5.05489732985851e-07, 1.80035062425799e-06, 3.21824665284709e-08, > -9.03596565752718e-09, -5.05489732985851e-07, -9.03596565752718e-09, > 2.53705461922188e-09), .Dim = c(3L, 3L)), `2017-03-07` = > structure(c(0.000640065014281149, > -0.000110994847091752, -0.000231235438845606, -0.000110994847091752, > 1.92478198402357e-05, 4.00989612058198e-05, -0.000231235438845606, > 4.00989612058198e-05, 8.35381203238728e-05), .Dim = c(3L, > 3L)), `2017-03-08` = structure(c(7.72648041923266e-06, > -2.11571338014623e-05, > 7.82052544997182e-06, -2.11571338014623e-05, 5.79337921544145e-05, > -2.14146538093767e-05, 7.82052544997182e-06, -2.14146538093767e-05, > 7.91571517626794e-06), .Dim = c(3L, 3L)), `2017-03-09` = > structure(c(4.43321118550061e-05, > 1.90242249279913e-05, 5.68672547310199e-05, 1.90242249279913e-05, > 8.16385953582618e-06, 2.44034267661023e-05, 5.68672547310199e-05, > 2.44034267661023e-05, 7.29467766214148e-05), .Dim = c(3L, > 3L)), `2017-03-10` = structure(c(0.000100081081692311, > 1.39245218598852e-05, > 2.0935583168872e-05, 1.39245218598852e-05, 1.93735225227204e-06, > 2.91281809264057e-06, 2.0935583168872e-05, 2.91281809264057e-06, > 4.3794355057858e-06), .Dim = c(3L, 3L)), `2017-03-14` = > structure(c(7.82185299651879e-06, > -3.05963602958646e-05, -4.65590052688468e-05, -3.05963602958646e-05, > 0.00011968228804236, 0.000182122586662866, -4.65590052688468e-05, > 0.000182122586662866, 0.000277139058045361), .Dim = c(3L, > 3L)), `2017-03-15` = structure(c(4.02156693772954e-05, > -2.2362610665311e-05, > -2.08706726432905e-05, -2.2362610665311e-05, 1.24351120722764e-05, > 1.16054944222453e-05, -2.08706726432905e-05, 1.16054944222453e-05, > 1.08312253240602e-05), .Dim = c(3L, 3L)), `2017-03-16` = > structure(c(2.64254966198469e-05, > 5.78730550194069e-06, 5.0445603894268e-05, 5.78730550194069e-06, > 1.26744656702641e-06, 1.10478196556107e-05, 5.0445603894268e-05, > 1.10478196556107e-05, 9.62993804379875e-05), .Dim = c(3L, > 3L)), `2017-03-17` = structure(c(0.000138433807049962, > 8.72005344938308e-05, > 0.00014374477881467, 8.72005344938308e-05, 5.49282966209652e-05, > 9.05459570205481e-05, 0.00014374477881467, 
9.05459570205481e-05, > 0.000149259504428865), .Dim = c(3L, 3L)), `2017-03-20` = > structure(c(3.92058275846982e-05, > 1.24332187386233e-05, -1.24235553811814e-05, 1.24332187386233e-05, > 3.94290690251335e-06, -3.93984239286701e-06, -1.24235553811814e-05, > -3.93984239286701e-06, 3.93678026502162e-06), .Dim = c(3L, > 3L)), `2017-03-21` = structure(c(0.000407544227952838, > -6.22427018306449e-05, > 1.90596071859105e-05, -6.22427018306449e-05, 9.50609446890975e-06, > -2.9109023406881e-06, 1.90596071859105e-05, -2.9109023406881e-06, > 8.91360007491622e-07), .Dim = c(3L, 3L)), `2017-03-22` = > structure(c(0.000220297355944482, > 0.000282600064158173, 8.26030839524992e-05, 0.000282600064158173, > 0.000362522718077154, 0.00010596421697645, 8.26030839524992e-05, > 0.00010596421697645, 3.09729976068491e-05), .Dim = c(3L, > 3L)), `2017-03-23` = structure(c(1.19559010537042e-05, > 3.56054556562106e-05, > 5.51130473489473e-06, 3.56054556562106e-05, 0.000106035376739222, > 1.64130261253175e-05, 5.51130473489473e-06, 1.64130261253175e-05, > 2.54054292892148e-06), .Dim = c(3L, 3L)), `2017-03-24` = > structure(c(0.000573948692221572, > -7.36566239512158e-05, 5.40736580500709e-05, -7.36566239512158e-05, > 9.45258404700116e-06,
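To see what the lapply loop produces, a toy run with small made-up matrices and window w = 3: each element of the result is the element-wise mean of the current and two preceding matrices.

  Mean <- function(L) Reduce("+", L) / length(L)
  w <- 3

  Data <- list(matrix(1, 2, 2), matrix(2, 2, 2), matrix(3, 2, 2), matrix(4, 2, 2))
  lapply(w:length(Data), function(i) Mean(Data[seq(to = i, length = w)]))
  ## first element: all entries 2 (mean of matrices 1-3); second: all entries 3 (mean of 2-4)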
Re: [R] rollapply() produces NAs
Maybe you want this. It computes VaRfun(r[c(i-500, i-1)]) for each i for which the argument to r makes sense. rollapply(r, width = list(c(-500, -1)), FUN = VaRfun) On Sat, May 27, 2017 at 5:29 PM, Sepp via R-help wrote: > Hello, > I am fairly new to R and trying to calculate value at risk with exponentially > decreasing weights.My function works for a single vector of returns but does > not work with rollapply(), which is what I want to use. The function I am > working on should assig exponentially decreasing weights to the K most recent > returns and then order the returns in an ascending order. Subsequently it > should pick the last return for which the cumulative sum of the weights is > smaller or equal to a significance level. Thus, I am trying to construct a > cumulative distribution function and find a quantile. > This is the function I wrote: > VaRfun <- function(x, lambda = 0.94) { > #create data.frame and order returns such that the lates return is the first > df <- data.frame(weight = c(1:length(x)), return = rev(x)) K <- nrow(df) > constant <- (1-lambda)/(1-lambda^(K))#assign weights to the returnsfor(i > in 1:nrow(df)) {df$weight[i] <- lambda^(i-1) * constant}#order > returns in an ascending order df <- df[order(df$return),] > #add the cumulative sum of the weights df$cum.weight <- cumsum(df$weight) > #calculate value at risk VaR <- -tail((df$return[df$cum.weight <= .05]), 1) > signif(VaR, digits = 3)} > It works for a single vector of returns but if I try to use it with > rollapply(), such as > rollapply(r, width = list(-500, -1), FUN = VaRfun), > it outputs a vector of NAs and I don't know why. > Thank you for your help! > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
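A stripped-down illustration of the width = list(...) form, using a toy series and mean() instead of VaRfun: the offsets are taken relative to each position, so c(-2, -1) means the two observations immediately before it.

  library(zoo)

  r <- zoo(c(5, 3, 8, 1, 9, 4))
  rollapply(r, width = list(c(-2, -1)), FUN = mean)
  ## values at positions 3..6: the mean of the two preceding observations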
Re: [R] Package sqldf in R and dates manipulation
See FAQ #4 on the sqldf github home page. On Fri, Aug 11, 2017 at 9:21 AM, Mangalani Peter Makananisa wrote: > Dear all, > > I recently read the book " R data preperation and manipulation using sqldf > package" by Djoni Darmawikarta > However, I have a problem with manipulation of dates using this package, I do > not get the expected results. Do I need to install some packages to be able > to subset the data by dates in sqldf? > > I am not getting Djoni Darmawikarta email address. > > Please see the practice data attached and advise, > > Kind regards, > > Mangalani Peter Makananisa (0005786) > South African Revenue Service (SARS) > Specialist: Statistical Support > TCEI_OR (Head Office) > Tell: +2712 422 7357, Cell: +2782 456 4669 > > > product = read.csv('D:/Users/S1033067/Desktop/sqldf prac/sqlprac.csv', > na.strings = '', header = T) > head(product) > library(sqldf) > sqldf() > > # out-put > >> sqldf("select * from product > + where (date(Launch_dt) >= date('01-01-2003')) > + ") > [1] P_CodeP_NameLaunch_dt Price > <0 rows> (or 0-length row.names) >> > > Please Note: This email and its contents are subject to our email legal > notice which can be viewed at > http://www.sars.gov.za/Pages/Email-disclaimer.aspx > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
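A minimal sketch of what FAQ #4 boils down to with the default SQLite backend (made-up data; the column names are hypothetical): a Date column reaches SQLite as days since 1970-01-01, so compare it against that number rather than against a date string.

  library(sqldf)

  product <- data.frame(Launch_dt = as.Date(c("2002-06-01", "2003-05-15")),
                        Price = c(10, 12))

  ## the numeric value SQLite actually sees for the cutoff date
  cutoff <- as.numeric(as.Date("2003-01-01"))
  fn$sqldf("select * from product where Launch_dt >= $cutoff")
  ## returns only the 2003 row, with Launch_dt converted back to Date on the way out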
Re: [R] Case statement in sqldf
2018-03-3 in your code should be 2018-03-31. The line then'201415' needs to be fixed. When posting please provide minimal self-contained examples. There was no input provided and library statements not relevant to the posted code were included. Fixing the invalid date and bad line, getting rid of those library statements that are unnecessary and providing some test input, it works for me for the input shown. (Note that it would NOT work if we omitted library(RH2) since the default sqlite back end does not have date types and does not know that an R date -- which is sent to sqlite as the number of days since 1970-01-01 -- corresponds to a particular character string; however, the H2 database does have date types. See FAQ #4 on the sqldf github home page for more info. https://github.com/ggrothendieck/sqldf ) This works: library(sqldf) library(RH2) cr <- data.frame(ReportDate = as.Date("2017-09-11")) # input cr2 = sqldf(" select ReportDate , case when ReportDate between '2012-04-01' and '2013-03-31' then '2012_13' when ReportDate between '2013-04-01' and '2014-03-31' then '2013_14' when ReportDate between '2014-04-01' and '2015-03-31' then '2014_15' when ReportDate between '2015-04-01' and '2016-03-31' then '2015_16' when ReportDate between '2016-04-01' and '2017-03-31' then '2016_17' when ReportDate between '2017-04-01' and '2018-03-31' then '2017_18' else null end as FY from cr where ReportDate >= '2012-04-01' ") giving: > cr2 ReportDate FY 1 2017-09-11 2017_18 Note that using as.yearqtr from zoo this alternative could be used: library(zoo) cr <- data.frame(ReportDate = as.Date("2017-09-11")) # input fy <- as.integer(as.yearqtr(cr$ReportDate) + 3/4) transform(cr, FY = paste0(fy-1, "_", fy %% 100)) giving: ReportDate FY 1 2017-09-11 2017_18 On Mon, Sep 11, 2017 at 4:05 AM, Mangalani Peter Makananisa wrote: > Hi all, > > > > I am trying to create a new variable called Fiscal Year (FY) using case > expression in sqldf and I am getting a null FY , see the code below . > > >> +then '2017_18' else null>> South African Revenue >> Service (SARS)>> Specialist: Statistical Support>> TCEI_OR (Head Office)>> >> Tell: +272 422 7357, Cell: +2782 456 4669>> >> http://www.sars.gov.za/Pages/Email-disclaimer.aspxemail: ggrothendieck at >> gmail.with > Please advise me as to how I can do this mutation. > > > > library(zoo) > > library(lubridate) > > library(stringr) > > library(RH2) > > library(sqldf) > > > > cr$ReportDate = as.Date(cr$ReportDate, format ='%Y-%m-%d') > > > >> cr2 = sqldf(" select ReportDate > > + , case > > +when ReportDate between '2012-04-01' and > '2013-03-31' > > +then '2012_13' > > +when ReportDate between '2013-04-01' and > '2014-03-31' > > +then '2013_14' > > +when ReportDate between '2014-04-01' and > '2015-03-31' > > +then'201415' > > +when ReportDate between '2015-04-01' and > '2016-03-31' > > +then '2015_16' > > +when ReportDate between '2016-04-01' and > '2017-03-31' > > +then '2016_17' > > +when ReportDate between '2017-04-01' and > '2018-03-3' > > +end as FY > > + from cr > > + where ReportDate >= '2012-04-01' > > + ") > > > > Thanking you in advance > > > > Kind regards, > > > > Mangalani Peter Makananisa (0005786) > > > > > > Disclaimer > > Please Note: This email and its contents are subject to our email legal > notice which can be viewed at -- Statistics & Software Consulting GKX Group, GKX Associates Inc. 
tel: 1-877-GKX-GROUP __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Convert data into zoo object using Performance analytics package
Depending on how you created df maybe your code has the column names wrong. In any case these 4 alternatives all work. Start a fresh R session and then copy and paste this into it. library(zoo) u <- "https://faculty.washington.edu/ezivot/econ424/sbuxPrices.csv"; fmt <- "%m/%d/%Y" # 1 sbux1.z <- read.csv.zoo(u, FUN = as.yearmon, format = fmt) # 2 df <- read.csv(u) sbux2.z <- read.zoo(df, FUN = as.yearmon, format = fmt) # 3 df <- read.csv(u) names(head(df)) ## [1] "Date" "Adj.Close" sbux3.z <- zoo(df$Adj.Close, as.yearmon(df$Date, fmt)) # 4 df <- read.csv(u) sbux4.z <- zoo(df[[2]], as.yearmon(df[[1]], fmt)) On Mon, Sep 18, 2017 at 7:36 AM, Upananda Pani wrote: > Dear All, > > While i am trying convert data frame object to zoo object I am > getting numeric(0) error in performance analytics package. > > The source code i am using from this website to learn r in finance: > https://faculty.washington.edu/ezivot/econ424/returnCalculations.r > > # create zoo objects from data.frame objects > dates.sbux = as.yearmon(sbux.df$Date, format="%m/%d/%Y") > dates.msft = as.yearmon(msft.df$Date, format="%m/%d/%Y") > sbux.z = zoo(x=sbux.df$Adj.Close, order.by=dates.sbux) > msft.z = zoo(x=msft.df$Adj.Close, order.by=dates.msft) > class(sbux.z) > head(sbux.z) >> head(sbux.z) > Data: > numeric(0) > > I will be grateful if anybody would like to guide me where i am making the > mistake. > > With best regards, > Upananda Pani > > > -- > > > You may delay, but time will not. > > > Research Scholar > alternative mail id: up...@iitkgp.ac.in > Department of HSS, IIT KGP > KGP > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] symbolic computing example with Ryacas
Here are some more examples: library(Ryacas) x <- Sym("x") yacas("x:=2") Eval(x*x) ## [1] 4 # vignette has similar example y <- Sym("y") Eval(Subst(y*y, y, 3)) ## [1] 9 # demo("Ryacas-Function") has similar example to this f <- function(z) {} body(f) <- yacas(expression(z*z))[[1]] f(4) ## [1] 16 On Tue, Sep 19, 2017 at 2:08 PM, Vivek Sutradhara wrote: > Thanks for the response. Yes, I did study the vignette but did not > understand it fully. Anyway, I have tried once again now. I am happy to say > that I have got what I wanted. > > library(Ryacas) > x <- Sym("x");U <- Sym("U");x0 <- Sym("x0");C <- Sym("C") > my_func <- function(x,U,x0,C) { > return (U/(1+exp(-(x-x0)/C)))} > FirstDeriv <- deriv(my_func(x,U,x0,C), x) > PrettyForm(FirstDeriv) > #slope <- yacas("Subst(x,x0),deriv(my_func(x,U,x0,C), x)") > slope <- Subst(FirstDeriv,x,x0) > #PrettyForm(slope) - gives errors > PrettyForm(Simplify(slope)) > > I was confused by the references to the yacas command. Now, I have chosen > to omit it. Then I get what I want. > Thanks, > Vivek > > 2017-09-19 16:04 GMT+02:00 Bert Gunter : > >> Have you studied the "Introduction to Ryacas" vignette that come with the >> package? >> >> Cheers, >> Bert >> >> >> >> Bert Gunter >> >> "The trouble with having an open mind is that people keep coming along and >> sticking things into it." >> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip ) >> >> On Tue, Sep 19, 2017 at 2:37 AM, Vivek Sutradhara >> wrote: >> >>> Hi all, >>> I am trying to implement the following matlab code with Ryacas : >>> >>> syms U x x0 C >>> >>> d1=diff(U/(1+exp(-(x-x0)/C)),x); >>> >>> pretty(d1) >>> >>> d2=diff(U/(1+exp(-(x-x0)/C)),x,2); >>> >>> pretty(d2) >>> >>> solx2 = solve(d2 == 0, x, 'Real', true) >>> >>> pretty(solx2) >>> >>> slope2=subs(d1,solx2) >>> >>> >>> I have tried the following : >>> >>> library(Ryacas) >>> >>> x <- Sym("x");U <- Sym("U");x0 <- Sym("x0");C <- Sym("C") >>> >>> my_func <- function(x,U,x0,C) { >>> >>> return (U/(1+exp(-(x-x0)/C)))} >>> >>> FirstDeriv <- deriv(my_func(x,U,x0,C), x) >>> >>> PrettyForm(FirstDeriv) >>> >>> slope <- yacas("Subst(x,x0),deriv(my_func(x,U,x0,C), x)") >>> >>> PrettyForm(slope) >>> >>> >>> I don't understand how I should use the Subst command. I want the slope of >>> the first derivative at x=x0. How do I implement that? >>> >>> I would appreciate any help that I can get. >>> >>> Thanks, >>> >>> Vivek >>> >>> [[alternative HTML version deleted]] >>> >>> __ >>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >>> https://stat.ethz.ch/mailman/listinfo/r-help >>> PLEASE do read the posting guide http://www.R-project.org/posti >>> ng-guide.html >>> and provide commented, minimal, self-contained, reproducible code. >>> >> >> > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] require help
Assuming the input data.frame, DF, is of the form shown reproducibly in the Note below, to convert the series to zoo or ts: library(zoo) # convert to zoo z <- read.zoo(DF) # convert to ts as.ts(z) # Note: DF <- structure(list(year = c(1980, 1981, 1982, 1983, 1984), cnsm = c(174, 175, 175, 172, 173), incm = c(53.4, 53.7, 53.5, 53.2, 53.3), with = c(60.3, 60.5, 60.2, 60.1, 60.7)), .Names = c("year", "cnsm", "incm", "with"), row.names = c(NA, -5L), class = "data.frame") On Sat, Sep 16, 2017 at 8:10 AM, yadav neog wrote: > oky.. thank you very much to all of you > > > On Sat, Sep 16, 2017 at 2:06 PM, Eric Berger wrote: > >> You can just use the same code that I provided before but now use your >> dataset. Like this >> >> df <- read.csv(file="data2.csv",header=TRUE) >> dates <- as.Date(paste(df$year,"-01-01",sep="")) >> myXts <- xts(df,order.by=dates) >> head(myXts) >> >> #The last command "head(myXts)" shows you the first few rows of the xts >> object >>year cnsmincmwlth >> 1980-01-01 1980 173.6527 53.3635 60.3013 >> 1981-01-01 1981 175.4613 53.6929 60.4980 >> 1982-01-01 1982 174.5724 53.4890 60.2358 >> 1983-01-01 1983 171.5070 53.2223 60.1047 >> 1984-01-01 1984 173.3462 53.2851 60.6946 >> 1985-01-01 1985 171.7075 53.1596 60.7598 >> >> >> On Sat, Sep 16, 2017 at 9:55 AM, Berend Hasselman wrote: >> >>> >>> > On 15 Sep 2017, at 11:38, yadav neog wrote: >>> > >>> > hello to all. I am working on macroeconomic data series of India, which >>> in >>> > a yearly basis. I am unable to convert my data frame into time series. >>> > kindly help me. >>> > also using zoo and xts packages. but they take only monthly >>> observations. >>> > >>> > 'data.frame': 30 obs. of 4 variables: >>> > $ year: int 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 ... >>> > $ cnsm: num 174 175 175 172 173 ... >>> > $ incm: num 53.4 53.7 53.5 53.2 53.3 ... >>> > $ wlth: num 60.3 60.5 60.2 60.1 60.7 ... >>> > -- >>> >>> Second try to do what you would like (I hope and think) >>> Using Eric's sample data >>> >>> >>> zdf <- data.frame(year=2001:2010, cnsm=sample(170:180,10,replace=TRUE), >>> incm=rnorm(10,53,1), wlth=rnorm(10,60,1)) >>> zdf >>> >>> # R ts >>> zts <- ts(zdf[,-1], start=zdf[1,"year"]) >>> zts >>> >>> # turn data into a zoo timeseries and an xts timeseries >>> >>> library(zoo) >>> z.zoo <- as.zoo(zts) >>> z.zoo >>> >>> library(xts) >>> z.xts <- as.xts(zts) >>> z.xts >>> >>> >>> Berend Hasselman >>> >>> > Yadawananda Neog >>> > Research Scholar >>> > Department of Economics >>> > Banaras Hindu University >>> > Mob. 9838545073 >>> > >>> > [[alternative HTML version deleted]] >>> > >>> > __ >>> > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >>> > https://stat.ethz.ch/mailman/listinfo/r-help >>> > PLEASE do read the posting guide http://www.R-project.org/posti >>> ng-guide.html >>> > and provide commented, minimal, self-contained, reproducible code. >>> >>> __ >>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >>> https://stat.ethz.ch/mailman/listinfo/r-help >>> PLEASE do read the posting guide http://www.R-project.org/posti >>> ng-guide.html >>> and provide commented, minimal, self-contained, reproducible code. >>> >> >> > > > -- > Yadawananda Neog > Research Scholar > Department of Economics > Banaras Hindu University > Mob. 
9838545073 > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
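Continuing from the Note's DF, a quick check that the conversions give an annual series:

  library(zoo)
  z <- read.zoo(DF)   # the year column becomes the index
  tsp(as.ts(z))       # start 1980, end 1984, frequency 1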
Re: [R] Time series: xts/zoo object at annual (yearly) frequency
Maybe one of these is close enough: xts(c(2, 4, 5), yearqtr(1991:1993)) as.xts(ts(c(2, 4, 5), 1991)) or if you want only a plain year as the index then use the zoo, zooreg or ts class: library(zoo) zoo(c(2, 4, 5), 1991:1993) zooreg(c(2, 4, 5), 1991) ts(c(2, 4, 5), 1991) On Fri, Oct 6, 2017 at 2:56 AM, John wrote: > Hi, > >I'd like to make a time series at an annual frequency. > >> a<-xts(x=c(2,4,5), order.by=c("1991","1992","1993")) > Error in xts(x = c(2, 4, 5), order.by = c("1991", "1992", "1993")) : > order.by requires an appropriate time-based object >> a<-xts(x=c(2,4,5), order.by=1991:1993) > Error in xts(x = c(2, 4, 5), order.by = 1991:1993) : > order.by requires an appropriate time-based object > > How should I do it? I know that to do for quarterly or monthly time > series, we use as.yearqtr or as.yearmon. What about annual? > >Thanks, > > John > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Concatenate two lists replacing elements with the same name.
Try this: Reduce(modifyList, list(x, y, z)) On Tue, Jul 19, 2016 at 12:34 PM, Luca Cerone wrote: > Dear all, > I would like to know if there is a function to concatenate two lists > while replacing elements with the same name. > > For example: > > x <- list(a=1,b=2,c=3) > y <- list( b=4, d=5) > z <- list(a = 6, b = 8, e= 7) > > I am looking for a function "concatfun" so that > > u <- concatfun(x,y,z) > > returns: > > u$a=6 > u$b=8 > u$c=3 > u$d=5 > u$e=7 > > I.e. it combines the 3 lists, but when names have the same value it > keeps the most recent one. > > Does such a function exists? > > Thanks for the help, > > Cheers, > Luca > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
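Run on the three lists from the question, the Reduce call gives exactly the requested merge, with later lists winning on repeated names:

  x <- list(a = 1, b = 2, c = 3)
  y <- list(b = 4, d = 5)
  z <- list(a = 6, b = 8, e = 7)

  u <- Reduce(modifyList, list(x, y, z))
  unlist(u)
  ## a b c d e
  ## 6 8 3 5 7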
Re: [R] C/C++/Fortran Rolling Window Regressions
Just replacing lm with a faster version would speed it up. Try lm.fit or even faster is fastLm in the RcppArmadillo package. On Thu, Jul 21, 2016 at 2:02 PM, jeremiah rounds wrote: > Hi, > > A not unusual task is performing a multiple regression in a rolling window > on a time-series.A standard piece of advice for doing in R is something > like the code that follows at the end of the email. I am currently using > an "embed" variant of that code and that piece of advice is out there too. > > But, it occurs to me that for such an easily specified matrix operation > standard R code is really slow. rollapply constantly returns to R > interpreter at each window step for a new lm. All lm is at its heart is > (X^t X)^(-1) * Xy, and if you think about doing that with Rcpp in rolling > window you are just incrementing a counter and peeling off rows (or columns > of X and y) of a particular window size, and following that up with some > matrix multiplication in a loop. The psuedo-code for that Rcpp > practically writes itself and you might want a wrapper of something like: > rolling_lm (y=y, x=x, width=4). > > My question is this: has any of the thousands of R packages out there > published anything like that. Rolling window multiple regressions that > stay in C/C++ until the rolling window completes? No sense and writing it > if it exist. > > > Thanks, > Jeremiah > > Standard (slow) advice for "rolling window regression" follows: > > > set.seed(1) > z <- zoo(matrix(rnorm(10), ncol = 2)) > colnames(z) <- c("y", "x") > > ## rolling regression of width 4 > rollapply(z, width = 4, >function(x) coef(lm(y ~ x, data = as.data.frame(x))), >by.column = FALSE, align = "right") > > ## result is identical to > coef(lm(y ~ x, data = z[1:4,])) > coef(lm(y ~ x, data = z[2:5,])) > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
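A sketch of the lm.fit() variant dropped into the rolling-window code quoted above (same toy z): the coefficients match the lm() version but skip the formula overhead.

  set.seed(1)
  library(zoo)

  z <- zoo(matrix(rnorm(10), ncol = 2))
  colnames(z) <- c("y", "x")

  ## rolling regression of width 4, fitting each window with lm.fit
  rollapplyr(z, width = 4,
             function(m) coef(lm.fit(cbind(Intercept = 1, x = m[, "x"]), m[, "y"])),
             by.column = FALSE)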
Re: [R] C/C++/Fortran Rolling Window Regressions
I would be careful about making assumptions regarding what is faster. Performance tends to be nonintuitive. When I ran rollapply/lm, rollapply/fastLm and roll_lm on the example you provided rollapply/fastLm was three times faster than roll_lm. Of course this could change with data of different dimensions but it would be worthwhile to do actual benchmarks before making assumptions. I also noticed that roll_lm did not give the same coefficients as the other two. set.seed(1) library(zoo) library(RcppArmadillo) library(roll) z <- zoo(matrix(rnorm(10), ncol = 2)) colnames(z) <- c("y", "x") ## rolling regression of width 4 library(rbenchmark) benchmark(fastLm = rollapplyr(z, width = 4, function(x) coef(fastLm(cbind(1, x[, 2]), x[, 1])), by.column = FALSE), lm = rollapplyr(z, width = 4, function(x) coef(lm(y ~ x, data = as.data.frame(x))), by.column = FALSE), roll_lm = roll_lm(coredata(z[, 1, drop = F]), coredata(z[, 2, drop = F]), 4, center = FALSE))[1:4] test replications elapsed relative 1 fastLm 1000.221.000 2 lm 1000.723.273 3 roll_lm 1000.642.909 On Thu, Jul 21, 2016 at 3:45 PM, jeremiah rounds wrote: > Thanks all. roll::roll_lm was essentially what I wanted. I think maybe > I would prefer it to have options to return a few more things, but it is > the coefficients, and the remaining statistics you might want can be > calculated fast enough from there. > > > On Thu, Jul 21, 2016 at 12:36 PM, Achim Zeileis > wrote: > >> Jeremiah, >> >> for this purpose there are the "roll" and "RcppRoll" packages. Both use >> Rcpp and the former also provides rolling lm models. The latter has a >> generic interface that let's you define your own function. >> >> One thing to pay attention to, though, is the numerical reliability. >> Especially on large time series with relatively short windows there is a >> good chance of encountering numerically challenging situations. The QR >> decomposition used by lm is fairly robust while other more straightforward >> matrix multiplications may not be. This should be kept in mind when writing >> your own Rcpp code for plugging it into RcppRoll. >> >> But I haven't check what the roll package does and how reliable that is... >> >> hth, >> Z >> >> >> On Thu, 21 Jul 2016, jeremiah rounds wrote: >> >> Hi, >>> >>> A not unusual task is performing a multiple regression in a rolling window >>> on a time-series.A standard piece of advice for doing in R is >>> something >>> like the code that follows at the end of the email. I am currently using >>> an "embed" variant of that code and that piece of advice is out there too. >>> >>> But, it occurs to me that for such an easily specified matrix operation >>> standard R code is really slow. rollapply constantly returns to R >>> interpreter at each window step for a new lm. All lm is at its heart is >>> (X^t X)^(-1) * Xy, and if you think about doing that with Rcpp in rolling >>> window you are just incrementing a counter and peeling off rows (or >>> columns >>> of X and y) of a particular window size, and following that up with some >>> matrix multiplication in a loop. The psuedo-code for that Rcpp >>> practically writes itself and you might want a wrapper of something like: >>> rolling_lm (y=y, x=x, width=4). >>> >>> My question is this: has any of the thousands of R packages out there >>> published anything like that. Rolling window multiple regressions that >>> stay in C/C++ until the rolling window completes? No sense and writing it >>> if it exist. 
>>> >>> >>> Thanks, >>> Jeremiah >>> >>> Standard (slow) advice for "rolling window regression" follows: >>> >>> >>> set.seed(1) >>> z <- zoo(matrix(rnorm(10), ncol = 2)) >>> colnames(z) <- c("y", "x") >>> >>> ## rolling regression of width 4 >>> rollapply(z, width = 4, >>> function(x) coef(lm(y ~ x, data = as.data.frame(x))), >>> by.column = FALSE, align = "right") >>> >>> ## result is identical to >>> coef(lm(y ~ x, data = z[1:4,])) >>> coef(lm(y ~ x, data = z[2:5,])) >>> >>> [[alternative HTML version deleted]] >>> >>> __ >>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >>> https://stat.ethz.ch/mailman/listinfo/r-help >>> PLEASE do read the posting guide >>> http://www.R-project.org/posting-guide.html >>> and provide commented, minimal, self-contained, reproducible code. >>> >>> > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com
Re: [R] stfrtime function not returning proper results through sqldf package in R
To be precise it's SQLite that does not have date and time data types. If you use an sqldf backend such as H2 that does have such types then sqldf will pass them as such. In the case of R's "Date" class such objects are passed to SQLite as numbers since that is what SQLite can understand but they are passed as dates to H2 (and other backends if they have a date type). Also note that after sqldf passes a date to SQLite as a number then when SQLite passes it back to sqldf then sqldf knows that it was originally a date (due to the column name being the same) and coerces it to Date class again. In the example below sqldf passed the number of days to 2000-01-01 since the UNIX epoch to SQLite. SQLite then processed it and passed it back as a number again. Then sqldf realized that it was originally a Date because the column name is still d and the original column d was of "Date" class and so coerces the number from SQLite to Date. There are a limited number of circumstances where this heuristic works but they are sufficient that it's often transparent even though SQLite has no date and time types. > library(sqldf) > DF <- data.frame(d = as.Date("2000-01-01")) > sqldf("select d+1 as d from DF") # return next day d 1 2000-01-02 At the same time iif you really need to do serious date processing on the SQL side it's much easier with an sqldf backend such as H2 that actually supports date and time types and no heuristic is needed by sqldf and no user workarounds on the R side are needed. On Fri, Sep 16, 2016 at 10:15 AM, Jeff Newmiller wrote: > SQLite only understands certain fundamental data types, and neither Date nor > POSIXct types are among them. They get stored as their internal numeric > representations. > > The internal numeric representations of Date and POSIXct are incompatible. > You are sending Dates to SQLite and trying to then interpret it as POSIXct by > handing that numeric to strftime. > > Note that within R the Date and POSIXct types are made sort-of compatible by > internal checking of class attributes that are not stored in SQLite. They are > still only sort-of compatible because Date has no concept of time zone and > always assumes GMT rather than local time when being converted. > > I recommend retrieving the stored Date value as a Date value into R so that > strftime can recognize how to interpret it. If you need to handle time as > well as date you may find that converting to character first before > converting to POSIXct with an appropriate time zone behaves with least > surprises. > -- > Sent from my phone. Please excuse my brevity. 
> > On September 16, 2016 6:23:48 AM PDT, PIKAL Petr > wrote: >>Hi Peter >> >>The devil is in detail >> >>Data from OP had different format and was transferred to Date object by >>as.Date, which results in incorrect values (and NA if not transferred) >>df <- data.frame(Date = >>c("2013/05/25","2013/05/28","2013/05/31","2013/06/01","2013/06/02", >>"2013/06/05","2013/06/07"), Quantity = c(9,1,15,4,5,17,18)) >>df$Date<-as.Date(df$Date) >>cbind(df, sqldf("select strftime( '%m', Date) from df")) >> >>Data formatted according to your example transferred to Date object by >>as data, again incorrect result >>df2 <- data.frame(Date = >>c("2013-05-25","2013-05-28","2013-05-31","2013-06-01","2013-06-02", >>"2013-06-05","2013-06-07"), Quantity = c(9,1,15,4,5,17,18)) >>df2$Date<-as.Date(df2$Date) >>cbind(df2, sqldf("select strftime( '%m', Date) from df2")) >> >>Data formatted according to your example but **not** changed to Dates, >>correct result >>df3 <- data.frame(Date = >>c("2013-05-25","2013-05-28","2013-05-31","2013-06-01","2013-06-02", >>"2013-06-05","2013-06-07"), Quantity = c(9,1,15,4,5,17,18)) >>cbind(df3, sqldf("select strftime( '%m', Date) from df3")) >> >>so sqldf is a bit peculiar about required input values and does not >>know how to handle Date objects. >> >>Cheers >>Petr >> >> >>> -Original Message- >>> From: peter dalgaard [mailto:pda...@gmail.com] >>> Sent: Friday, September 16, 2016 2:45 PM >>> To: PIKAL Petr >>> Cc: Manohar Reddy ; R-help >> project.org> >>> Subject: Re: [R] stfrtime function not returning proper results >>through sqldf >>> package in R >>> >>> Presumably, sqldf does not know about Date object so passes an >>integer that >>> gets interpreted as who knows what... >>> >>> This seems to work: >>> >>> > df <- data.frame(date=as.character(Sys.Date()+seq(0,180,,10))) >>> > cbind(df, sqldf("select strftime( '%m', date) from df")) >>> date strftime( '%m', date) >>> 1 2016-09-1609 >>> 2 2016-10-0610 >>> 3 2016-10-2610 >>> 4 2016-11-1511 >>> 5 2016-12-0512 >>> 6 2016-12-2512 >>> 7 2017-01-1401 >>> 8 2017-02-0302 >>> 9 2017-02-2302 >>> 10 2017-03-1503 >>> >>> -pd >>> >>> >>> >>> On 16 Sep 2016
[R] Fwd: stfrtime function not returning proper results through sqldf package in R
1. Convert the date from R's origin to the origin used by SQLite's strftime function and then be sure you are using the correct SQLite strftime syntax: library(sqldf) sqldf("select strftime('%m', Date + 2440588.5) month from log") 2. Alternately use the H2 backend which actually supports dates unlike SQLite which only supports functions that interpret certain numbers and character strings as dates. library(RH2) # if RH2 is loaded sqldf will default to the H2 database library(sqldf) sqldf("select month(Date) from log") Note that the first time you use sqldf with RH2 in a session it will load java which is time consuming but after that it should run ok. Note: See 1. the sqldf home page which has more info on dates and times: https://github.com/ggrothendieck/sqldf 2. the sqldf help page: ?sqldf 3. the SQLite date and time function page which explains SQLite's strftime function https://www.sqlite.org/lang_datefunc.html 4. the H2 documentation: http://www.h2database.com On Fri, Sep 16, 2016 at 12:35 AM, Manohar Reddy wrote: > Hi , > > > > I have data something looks like below (or PFA), but when I’m extracting > month using *strftime* function through *sqldf* library ,it’s returning > below results but it’s not returning exact results ,it supposed to return > 05,05,05,06,06,06.Can anyone please guide me how to do that with *strftime* > function. > > > > Thanks in advance. > > > > Quiries : > > > library(scales) > > # load data: > log <- data.frame(Date = > c("2013/05/25","2013/05/28","2013/05/31","2013/06/01","2013/06/02","2013/06/05","2013/06/07"), > Quantity = c(9,1,15,4,5,17,18)) > > > # convert date variable from factor to date format: > log$Date <- as.Date(log$Date, > "%Y/%m/%d") # tabulate all the options here > str(log) > > > > > > > > Manu. > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
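A runnable version of option 2 with the data from the question (it assumes the RH2 package and a Java runtime are installed):

  library(RH2)    # loading RH2 makes H2 the default sqldf backend
  library(sqldf)

  log <- data.frame(Date = as.Date(c("2013/05/25", "2013/05/28", "2013/06/01"),
                                   "%Y/%m/%d"),
                    Quantity = c(9, 1, 4))

  sqldf("select month(Date) mon, Quantity from log")
  ## months 5, 5 and 6 for the three rows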
Re: [R] Toronto CRAN mirror 403 error?
On Fri, May 29, 2015 at 10:12 PM, Mark Drummond wrote: > I've been getting a 403 when I try pulling from the Toronto CRAN mirror > today. > > http://cran.utstat.utoronto.ca/ > > Is there a contact list for mirror managers? > See the cran_mirrors.csv file in R.home("doc") of your R distribution. __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
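A short sketch of that lookup, offered only as a hint: getCRANmirrors() in utils reads the same file when local.only = TRUE, and the column names Name, URL and Maintainer are assumed from the usual CRAN_mirrors.csv layout.
m <- getCRANmirrors(local.only = TRUE)   # uses the CSV shipped in R.home("doc")
subset(m, grepl("Toronto", Name), select = c(Name, URL, Maintainer))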
Re: [R] help for lay person assisting R user with disability
On Thu, Jun 18, 2015 at 10:32 AM, Courtney Bryant wrote: > Good Morning, > I am currently working with a disabled R user who is a student here at > CMU. The student has both sight and mobility issues. The student has > asked for an assistant who is well versed in R to enter data for her, which > we are having a hard time finding. I would like information from R > developers/users about how/how well R interfaces with Excel (an easier > skill set to find!) In your opinion, could it be as easy as uploading > data from excel into R? > > Also, do you know of a way to enlarge the R interface or otherwise assist > in making the program accessible to a low vision person? My limited > understanding leads me to believe that screen magnifiers like zoom text > don't work particularly well. If you have information on that, I would > very much appreciate it. > > Thanks for your help and for bearing with me! > Courtney > > 1. If the data file is in the form of a rectangular table with rows and columns and the first row is a header row then if, in Excel, it is saved as a .csv file it can be read into R like this: DF <- read.csv("/Users/JoeDoe/myspreadsheet.csv") 2. The openxlsx, readxl (and a number of other packages) can alternetely be used to directly read in an xls or xlsx file, e.g. install.packages("readxl") library(readxl) DF <- read_excel("/Users/JoeDoe/myspreadsheet.xlsx") 3. The Windows magnifier that comes with Windows does work with R. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
On Thu, Jun 18, 2015 at 10:32 AM, Courtney Bryant wrote: > Good Morning, > I am currently working with a disabled R user who is a student here at > CMU. The student has both sight and mobility issues. The student has > asked for an assistant who is well versed in R to enter data for her, which > we are having a hard time finding. I would like information from R > developers/users about how/how well R interfaces with Excel (an easier > skill set to find!) In your opinion, could it be as easy as uploading > data from excel into R? > > Also, do you know of a way to enlarge the R interface or otherwise assist > in making the program accessible to a low vision person? My limited > understanding leads me to believe that screen magnifiers like zoom text > don't work particularly well. If you have information on that, I would > very much appreciate it. > > Thanks for your help and for bearing with me! > Courtney > > 1. If the data file is a rectangular table whose first row is a header row, then saving it from Excel as a .csv file lets it be read into R like this: DF <- read.csv("/Users/JoeDoe/myspreadsheet.csv") 2. Alternatively, the openxlsx and readxl packages (and a number of others) can be used to read an xls or xlsx file directly, e.g. install.packages("readxl") library(readxl) DF <- read_excel("/Users/JoeDoe/myspreadsheet.xlsx") 3. The Magnifier utility that comes with Windows does work with R. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Spline Graphs
Sorry if this appears twice but I am not sure the first attempt got through. This is not much to go on but here is a short self contained example which creates a longitudinal data frame L in long form from the built in data frame BOD and then plots it as points and splines using lattice: L <- rbind(cbind(Id = 1, BOD), cbind(Id = 2, BOD)) library(lattice) xyplot(demand ~ Time | Id, L, type = c("p", "spline")) On Mon, Jun 29, 2015 at 2:28 AM, deva d wrote: > I wish to analyse longitudinal data and fit spline graphs to it looking to > the data pattern. > > can someone suggest some starting point, and package in R to be used for > it. > > what would be the requirement for structuring the raw data. > > ** > > *Deva* > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
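For a single series the same idea can be sketched in base graphics as well; this is only a rough analogue (stats::spline interpolates through the points, which may differ from what lattice's type = "spline" fits):
# plot the points, then overlay an interpolating spline through them
plot(demand ~ Time, BOD)
lines(spline(BOD$Time, BOD$demand, n = 200))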
Re: [R] : Ramanujan and the accuracy of floating point computations - using Rmpfr in R
You can do that with bc if you pass the entire expression to bc(...) in quotes but in that case you will have to use bc notation, not R notation so, for example, exp is e and atan is a. > library(bc) > bc("e(sqrt(163)*4*a(1))") [1] "262537412640768743.2500725971981856888793538563373369908627075374103782106479101186073116295306145602054347" On Fri, Jul 3, 2015 at 3:01 PM, Ravi Varadhan wrote: > Thank you all. I did think about declaring `pi' as a special constant, > but for some reason didn't actually try it. > Would it be easy to have the mpfr() written such that its argument is > automatically of extended precision? In other words, if I just called: > mpfr(exp(sqrt(163)*pi, 120), then all the constants, 163, pi, are > automatically of 120 bits precision. > > Is this easy to do? > > Best, > Ravi > > From: David Winsemius > Sent: Friday, July 3, 2015 2:06 PM > To: John Nash > Cc: r-help; Ravi Varadhan > Subject: Re: [R] : Ramanujan and the accuracy of floating point > computations - using Rmpfr in R > > > On Jul 3, 2015, at 8:08 AM, John Nash wrote: > > > > > > > > > > Third try -- I unsubscribed and re-subscribed. Sorry to Ravi for extra > > traffic. > > > > In case anyone gets a duplicate, R-help bounced my msg "from non-member" > (I changed server, but not email yesterday, > > so possibly something changed enough). Trying again. > > > > JN > > > > I got the Wolfram answer as follows: > > > > library(Rmpfr) > > n163 <- mpfr(163, 500) > > n163 > > pi500 <- mpfr(pi, 500) > > pi500 > > pitan <- mpfr(4, 500)*atan(mpfr(1,500)) > > pitan > > pitan-pi500 > > r500 <- exp(sqrt(n163)*pitan) > > r500 > > check <- "262537412640768743.25007259719818568887935385..." > > savehistory("jnramanujan.R") > > > > Note that I used 4*atan(1) to get pi. > > RK got it right by following the example in the help page for mpfr: > > Const("pi", 120) > > The R `pi` constant is not recognized by mpfr as being anything other than > another double . > > > There are four special values that mpfr recognizes. > > — > Best; > David > > > > It seems that may be important, > > rather than converting. > > > > JN > > > > On 15-07-03 06:00 AM, r-help-requ...@r-project.org wrote: > > > >> Message: 40 > >> Date: Thu, 2 Jul 2015 22:38:45 + > >> From: Ravi Varadhan > >> To: "'Richard M. Heiberger'" , Aditya Singh > >> > >> Cc: r-help > >> Subject: Re: [R] : Ramanujan and the accuracy of floating point > >> computations - using Rmpfr in R > >> Message-ID: > >> <14ad39aaf6a542849bbf3f62a0c2f...@dom-eb1-2013.win.ad.jhu.edu> > >> Content-Type: text/plain; charset="utf-8" > >> > >> Hi Rich, > >> > >> The Wolfram answer is correct. > >> http://mathworld.wolfram.com/RamanujanConstant.html > >> > >> There is no code for Wolfram alpha. You just go to their web engine > and plug in the expression and it will give you the answer. > >> http://www.wolframalpha.com/ > >> > >> I am not sure that the precedence matters in Rmpfr. Even if it does, > the answer you get is still wrong as you showed. > >> > >> Thanks, > >> Ravi > >> > >> -Original Message- > >> From: Richard M. Heiberger [mailto:r...@temple.edu] > >> Sent: Thursday, July 02, 2015 6:30 PM > >> To: Aditya Singh > >> Cc: Ravi Varadhan; r-help > >> Subject: Re: [R] : Ramanujan and the accuracy of floating point > computations - using Rmpfr in R > >> > >> There is a precedence error in your R attempt. You need to convert > >> 163 to 120 bits first, before taking > >> its square root. 
> >> > exp(sqrt(mpfr(163, 120)) * mpfr(pi, 120)) > >> 1 'mpfr' number of precision 120 bits > >> [1] 262537412640768333.51635812597335712954 > >> > >> ## just the last four characters to the left of the decimal point. > tmp <- c(baseR=8256, Wolfram=8744, Rmpfr=8333, wrongRmpfr=7837) > tmp-tmp[2] > >> baseRWolfram Rmpfr wrongRmpfr > >> -488 0 -411 -907 > >> You didn't give the Wolfram alpha code you used. There is no way of > verifying the correct value from your email. > >> Please check that you didn't have a similar precedence error in that > code. > >> > >> Rich > >> > >> > >> On Thu, Jul 2, 2015 at 2:02 PM, Aditya Singh via R-help < > r-help@r-project.org> wrote: > Ravi > > I am a chemical engineer by training. Is there not something like law > of corresponding states in numerical analysis? > > Aditya > > > > -- > On Thu 2 Jul, 2015 7:28 AM PDT Ravi Varadhan wrote: > > >> Hi, > >> > >> Ramanujan supposedly discovered that the number, 163, has this > interesting property that exp(sqrt(163)*pi), which is obviously a > transcendental number, is real close to an integer (close to 10^(-12)). > >> > >> If I compute this using the Wolfram alpha engine, I get: > >> 262537412640768743.25007259719818568887935385... > >> > >> When I do this in R 3.1.1
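Pulling the working approach out of the quoted discussion above into one place, a minimal Rmpfr sketch; the key point is to build pi at the working precision rather than converting R's double pi:
library(Rmpfr)
prec <- 500                          # precision in bits
pi500 <- Const("pi", prec)           # full-precision pi; 4 * atan(mpfr(1, prec)) also works
exp(sqrt(mpfr(163, prec)) * pi500)   # Ramanujan's near-integer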
Re: [R] How to create a time series object with time only (no date)
On Sun, Dec 21, 2014 at 12:09 AM, ce wrote: > > Dear all, > > I want to create a time series object from 00:00:00 to 23:59:00 without dates > ? > I can't figure it out with xts ? > This uses zoo, rather than xts: library(zoo) library(chron) tt <- seq(times("00:00:00"), times("23:59:00"), by = times("00:01:00")) dat <- 1:1440 z <- zoo(dat, tt) -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
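A continuation of the example above, for illustration only: because the index is an ordinary chron "times" vector, the usual zoo subsetting tools should apply, e.g. window() for a slice of the day.
library(zoo)
library(chron)
tt <- seq(times("00:00:00"), times("23:59:00"), by = times("00:01:00"))
z <- zoo(1:1440, tt)
window(z, start = times("09:00:00"), end = times("09:05:00"))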
Re: [R] ave(x, y, FUN=length) produces character output when x is character
On Thu, Dec 25, 2014 at 1:57 PM, Mike Miller wrote: > I do think I get what is going on with this, but why should I buy into this > conceptualization? Why is it better to say that a matrix *is* a vector than > to say that a matrix *contains* a vector? The latter seems to be the more > common way of thinking but such things. Even in R you've had to construct > two different definitions of "vector" to deal with the inconsistency created > by the "matrix is a vector" way of thinking. So there must be something > really good about it that I am not understanding (and I'm not being > facetious or ironic!) I think its the idea that in R all data objects are vectors (for some notion of vector) in the sense that all Lisp objects are lists, all APL objects are arrays and all tcl objects are character strings. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
On Thu, Dec 25, 2014 at 1:57 PM, Mike Miller wrote: > I do think I get what is going on with this, but why should I buy into this > conceptualization? Why is it better to say that a matrix *is* a vector than > to say that a matrix *contains* a vector? The latter seems to be the more > common way of thinking but such things. Even in R you've had to construct > two different definitions of "vector" to deal with the inconsistency created > by the "matrix is a vector" way of thinking. So there must be something > really good about it that I am not understanding (and I'm not being > facetious or ironic!) I think it's the idea that in R all data objects are vectors (for some notion of vector) in the sense that all Lisp objects are lists, all APL objects are arrays and all tcl objects are character strings. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Evaluating a formula
On Fri, Jan 16, 2015 at 3:16 AM, philippe massicotte wrote: > Hi all. > > How we evaluate a formula in R? > > Ex.: > > params <- list(a = 2, b = 3) > x <- seq(1,10, length.out = 100) > > func1 <- as.formula("y ~ a*x^2 + b*x") > > ##How to evaluate func1 using x and the params list > ??? > > > Thank you in advance, > Phil Remove the lhs of the formula giving fo; then use fn$ from gsubfn to turn fo into a function, func, and call it using do.call. library(gsubfn) fo <- formula(sub(".*~", "~", deparse(func1))) func <- fn$identity(fo) do.call(func, c(list(x = x), params)) -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
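A base-R alternative to the gsubfn approach above, sketched with the same inputs from the question: evaluate the right-hand side of the formula with the params list (plus x) supplying the values.
params <- list(a = 2, b = 3)
x <- seq(1, 10, length.out = 100)
func1 <- as.formula("y ~ a*x^2 + b*x")
y <- eval(func1[[3]], c(list(x = x), params))   # func1[[3]] is the right-hand side
head(y)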
Re: [R] Dropping time series observations
On Sat, Jan 31, 2015 at 2:16 PM, Mikael Olai Milhøj wrote: > Hi > > Is there an easy way to drop, for instance, the first 4 observation of a > time series object in R? I have tried to google the answer without any > luck. > > To be more clear: > Let us say that I have a time seres object x which data from 2000Q1 to > 2014Q4 but I only want the data from 2001Q1 to 2014Q4.How do I remove the > first four elements? > > By using x[5:?] R no longer recognize it as a ts.object. > 1. We could convert it to a zoo series, drop the first 4 and convert back: For example, using the built in presidents ts series: library(zoo) as.ts(tail(as.zoo(presidents), -4)) 1a. This would work too: library(zoo) as.ts(as.zoo(presidents)[-(1:4)]) 2. Using only base R one can use window. Note that start(presidents) returns the pair c(year, quarter), which is not suitable for adding years to, so use the numeric start time tsp(presidents)[1] instead. Since 4 observations is one cycle (the frequency of the presidents dataset is 4): window(presidents, start = tsp(presidents)[1] + 1) or in terms of 4: window(presidents, start = tsp(presidents)[1] + 4 * deltat(presidents)) Here deltat is the time between observations so we want to start 4 deltat's later. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
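A quick check of approach 2 with the built in presidents series, for illustration: the result should still be a ts object, start at 1946 Q1, and be exactly 4 quarterly observations shorter.
x <- window(presidents, start = tsp(presidents)[1] + 1)
class(x)                          # "ts"
start(x)                          # 1946 1
length(presidents) - length(x)    # 4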
Re: [R] Superscript in legend without using expression function
On Sat, Feb 7, 2015 at 4:57 PM, jgui001 wrote: > I am plotting three sets of data on a single graph, and doing around 100+ > graphs. > I can use the expression function to superscript the 2 but that seems to > force me to manually put in the R squared values. Is there away around this? > > This code will show what it should look like this but with the 2 > superscripted > > r1<-c(0.59,0.9,0.6) > plot(1:6) > legend("topleft", > legend=c(paste("G1 r=",r1[1]), paste("G2 r=",r1[2]), paste("G3 r=",r1[3]))) Replace the legend statement with: leg <- as.list(parse(text = sprintf("G%d~r^2=%.2f", 1:3, r1))) legend("topleft", legend = leg) -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] sqldf() difference between R 3.1.2 and 3.0.1
On Wed, Feb 11, 2015 at 9:45 AM, Doran, Harold wrote: > I have a function written and tested using R 3.0.1 and sqldf_0.4-7.1 that > works perfectly. However, using this same code with R 3.1.2 and sqldf_0.4-10 > yields the error below that I am having a difficult time deciphering. Hence, > same code behaves differently on different versions of R and sqldf(). > > Error in sqliteSendQuery(con, statement, bind.data) : > error in statement: no such column: V1 > > > Reproducible example below as well as complete sessionInfo all provided below. > > > My function and code using the function are below. > > dorReader <- function(dorFile, layout, sepChar = '\n'){ > sepChar <- as.character(sepChar) > dorFile <- as.character(dorFile) > layout$type2 <- ifelse(layout$type == 'C', 'character', > > ifelse(layout$type == 'N', 'numeric', 'Date')) > dor <- file(dorFile) > attr(dor, "file.format") <- list(sep = sepChar) > getVars <- paste("select", >paste("substr(V1, ", layout$Start, ", ", > layout$Length, ") '", layout$Variable.Name, "'", > collapse = ", "), "from dor") > dat <- sqldf(getVars) > > classConverter <- function(obj, types){ > out <- lapply(1:length(obj),FUN = > function(i){FUN1 <- switch(types[i],character = as.character,numeric = > as.numeric,factor = as.factor, Date = as.character); FUN1(obj[,i])}) > names(out) <- colnames(obj) > as.data.frame(out) > } > dat <- classConverter(dat, layout$type2) > names(dat) <- layout$Variable.Name > dat > } > > ### contents of fwf file 'sample.txt' > 1234567 > 1234567 > 1234567 > 1234567 > 1234567 > 1234567 > 1234567 > 1234567 > > layout <- data.frame("Variable.Name" =c('test1', 'test2'), "Length" = c(3,4), > "Start" =c(1,4), "End" = c(3,7), "type" = c('N', 'N')) > > tmp <- dorReader('sample.txt', layout) sqldf is documented to use the sqliteImportFile defaults for file.format components. It may be that RSQLite 1.0 has changed the default for header in sqliteImportFile. Try replacing your statement that sets file.format with this: attr(dor, "file.format") <- list(sep = sepChar, header = FALSE) __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Substring replacement in string
On Fri, Feb 27, 2015 at 5:19 PM, Alrik Thiem wrote: > I would like to replace all lower-case letters in a string that are not part > of certain fixed expressions. For example, I have the string: > > "pmin(pmax(pmin(x1, X2), pmin(X3, X4)) == Y, pmax(Z1, z1))" > > Where I would like to replace all lower-case letters that do not belong to > the functions "pmin" and "pmax" by 1 - toupper(...) to get > > "pmin(pmax(pmin(1 - X1, X2), pmin(X3, X4)) == Y, pmax(Z1, 1 - Z1))" > Assuming x is the input string: gsub("(\\b[a-oq-z][a-z0-9]+)", "1-\\U\\1", x, perl = TRUE) ## [1] "pmin(pmax(pmin(1-X1, X2), pmin(X3, X4)) == Y, pmax(Z1, 1-Z1))" -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Substring replacement in string
Replace the + (i.e. 1 or more) in the pattern with a * (i.e. 0 or more): x <- "pmin(pmax(pmin(a,B),pmin(a,C,d))==Y,pmax(E,e))" gsub("(\\b[a-oq-z][a-z0-9]*)", "1-\\U\\1", x, perl = TRUE) giving: [1] "pmin(pmax(pmin(1-A,B),pmin(1-A,C,1-D))==Y,pmax(E,1-E))" Here is a visualization of the regular expression: https://www.debuggex.com/i/5ByOCQS2zIdPEf-f.png On Sat, Feb 28, 2015 at 8:16 AM, Alrik Thiem wrote: > Dear Gabor, > > Many thanks. Works like a charm, but I can't get it to work with > > "pmin(pmax(pmin(a,B),pmin(a,C,d))==Y,pmax(E,e))" > > i.e., with strings where there're no integers following the components in the > pmin/pmax functions. Could this be generalized to handle both cases? > > Best wishes, > Alrik > > -Ursprüngliche Nachricht- > Von: Gabor Grothendieck [mailto:ggrothendi...@gmail.com] > Gesendet: Samstag, 28. Februar 2015 13:35 > An: Alrik Thiem > Cc: r-help@r-project.org > Betreff: Re: [R] Substring replacement in string > > On Fri, Feb 27, 2015 at 5:19 PM, Alrik Thiem wrote: >> I would like to replace all lower-case letters in a string that are not part >> of certain fixed expressions. For example, I have the string: >> >> "pmin(pmax(pmin(x1, X2), pmin(X3, X4)) == Y, pmax(Z1, z1))" >> >> Where I would like to replace all lower-case letters that do not belong to >> the functions "pmin" and "pmax" by 1 - toupper(...) to get >> >> "pmin(pmax(pmin(1 - X1, X2), pmin(X3, X4)) == Y, pmax(Z1, 1 - Z1))" >> > > Assuming x is the input string: > > gsub("(\\b[a-oq-z][a-z0-9]+)", "1-\\U\\1", x, perl = TRUE) > ## [1] "pmin(pmax(pmin(1-X1, X2), pmin(X3, X4)) == Y, pmax(Z1, 1-Z1))" > > > > -- > Statistics & Software Consulting > GKX Group, GKX Associates Inc. > tel: 1-877-GKX-GROUP > email: ggrothendieck at gmail.com > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
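A note on why the pattern works, with a small check on both example strings from this thread: the class [a-oq-z] deliberately skips the letter p, so words beginning with p (pmin, pmax) are never matched, and \\U in the replacement (which requires perl = TRUE) upper-cases the captured word.
pat <- "(\\b[a-oq-z][a-z0-9]*)"
s1 <- "pmin(pmax(pmin(x1, X2), pmin(X3, X4)) == Y, pmax(Z1, z1))"
s2 <- "pmin(pmax(pmin(a,B),pmin(a,C,d))==Y,pmax(E,e))"
gsub(pat, "1-\\U\\1", c(s1, s2), perl = TRUE)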
Re: [R] Extract month and year as one column
On Fri, Mar 13, 2015 at 11:36 AM, Kumsaa wrote: > How could I extract both month and year from a date? I know how to > separately extract both using lubridate package: > > df$month <- month(df$date) > df$year<- year(df$date) > > I wish to extract year and month as one column > >> dput(mydf) > structure(list(date = structure(c(14975, 14976, 14977, 14978, > 14979, 14980, 14981, 14982, 14983, 14984, 14985, 14986, 14987, > 14988, 14989, 15340, 15341, 15342, 15343, 15344, 15345, 15346, > 15347, 15348, 15349, 15350, 15351, 15352, 15353, 15354), class = "Date"), > temp = c(6.5140544091004, 3.69073712745884, 3.04839429519466, > 9.16988228171461, -1.17176248610603, 2.88216040747883, > 4.98853844809017, > 4.07520306701834, 9.82902813943658, 2.79305715971987, 8.04721677924611, > 7.50667729759095, 2.91055000121842, 1.65559895014064, 4.8019596483372, > 16.2567986804179, 13.3352908067145, 16.6955807821108, 6.28373374879922, > 6.97181051627531, 5.74282686202818, 4.37018386569785, 12.5725962512824, > 4.6583055309578, 8.76457542037641, 10.7070862034423, 12.84023567151, > 5.78620621848167, 5.98643374478599, 13.0993210289842)), .Names = > c("date", > "temp"), row.names = c(NA, 30L), class = "data.frame") > The zoo package has a "yearmon" class that represents dates as year and month with no day: > library(zoo) > transform(mydf, yearmon = as.yearmon(date)) date temp yearmon 1 2011-01-01 6.514054 Jan 2011 2 2011-01-02 3.690737 Jan 2011 3 2011-01-03 3.048394 Jan 2011 4 2011-01-04 9.169882 Jan 2011 5 2011-01-05 -1.171762 Jan 2011 6 2011-01-06 2.882160 Jan 2011 etc. __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
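A follow-on sketch showing why a yearmon column is convenient: it groups naturally by month, e.g. for a monthly mean temperature. The dates and temperatures below are made-up stand-ins for the posted data.
library(zoo)
mydf <- data.frame(date = as.Date(c("2011-01-01", "2011-01-02", "2012-01-01", "2012-01-02")),
                   temp = c(6.5, 3.7, 16.3, 13.3))
mydf$yearmon <- as.yearmon(mydf$date)
aggregate(temp ~ yearmon, mydf, mean)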
Re: [R] plotmath and logical operators?
Try this: plot(1) tmp <- x >= 3 ~ "&" ~ y <= 3 mtext(tmp) On Mon, Aug 20, 2018 at 5:00 PM MacQueen, Don via R-help wrote: > > I would like to use plotmath to annotate a plot with an expression that > includes a logical operator. > > ## works well > tmp <- expression(x >= 3) > plot(1) > mtext(tmp) > > ## not so well > tmp <- expression(x >= 3 & y <= 3) > plot(1) > mtext(tmp) > > Although the text that's displayed makes sense, it won't be obvious to my > non-mathematical audience. > > I'd appreciate suggestions. > > > I've found a work-around that gets the annotation to look right > tmpw <- expression(paste( x >= 3, " & ", y <= 3) ) > plot(1) > mtext(tmpw) > > > But it breaks my original purpose, illustrated by this example: > > df <- data.frame(x=1:5, y=1:5) > tmp <- expression(x >= 3 & y <= 3) > tmpw <- expression(paste( x >= 3, " & ", y <= 3) ) > with(df, eval(tmp)) > [1] FALSE FALSE TRUE FALSE FALSE > with(df, eval(tmpw)) > [1] "FALSE & TRUE" "FALSE & TRUE" "TRUE & TRUE" "TRUE & FALSE" "TRUE > & FALSE" > > Thanks > -Don > > -- > Don MacQueen > Lawrence Livermore National Laboratory > 7000 East Ave., L-627 > Livermore, CA 94550 > 925-423-1062 > Lab cell 925-724-7509 > > > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
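A rough sketch tying the two uses from the question together: keep one unevaluated condition for subsetting the data, and use the formula form above purely for display.
df <- data.frame(x = 1:5, y = 1:5)
cond <- quote(x >= 3 & y <= 3)
eval(cond, df)                    # FALSE FALSE TRUE FALSE FALSE
plot(1)
tmp <- x >= 3 ~ "&" ~ y <= 3      # display form, as suggested above
mtext(tmp)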
Re: [R] looking for formula parser that allows coefficients
Some string manipulation can convert the formula to a named vector such as the one shown at the end of your post. library(gsubfn) # input fo <- y ~ 2 - 1.1 * x1 + x3 - x1:x3 + 0.2 * x2:x2 pat <- "([+-])? *(\\d\\S*)? *\\*? *([[:alpha:]]\\S*)?" ch <- format(fo[[3]]) m <- matrix(strapplyc(ch, pat)[[1]], 3) m <- m[, colSums(m != "") > 0] m[2, m[2, ] == ""] <- 1 m[3, m[3, ] == ""] <- "(Intercept)" co <- as.numeric(paste0(m[1, ], m[2, ])) v <- m[3, ] setNames(co, v) ## (Intercept) x1 x3 x1:x3 x2:x2 ## 2.0-1.1 1.0-1.0 0.2 On Tue, Aug 21, 2018 at 6:46 PM Paul Johnson wrote: > > Can you point me at any packages that allow users to write a > formula with coefficients? > > I want to write a data simulator that has a matrix X with lots > of columns, and then users can generate predictive models > by entering a formula that uses some of the variables, allowing > interactions, like > > y ~ 2 + 1.1 * x1 + 3 * x3 + 0.1 * x1:x3 + 0.2 * x2:x2 > > Currently, in the rockchalk package, I have a function simulates > data (genCorrelatedData2), but my interface to enter the beta > coefficients is poor. I assumed user would always enter 0's as > place holder for the unused coefficients, and the intercept is > always first. The unnamed vector is too confusing. I have them specify: > > c(2, 1.1, 0, 3, 0, 0, 0.2, ...) > > I the documentation I say (ridiculously) it is easy to figure out from > the examples, but it really isnt. > It function prints out the equation it thinks you intended, thats > minimum protection against user error, but still not very good: > > dat <- genCorrelatedData2(N = 10, rho = 0.0, > beta = c(1, 2, 1, 1, 0, 0.2, 0, 0, 0), > means = c(0,0,0), sds = c(1,1,1), stde = 0) > [1] "The equation that was calculated was" > y = 1 + 2*x1 + 1*x2 + 1*x3 > + 0*x1*x1 + 0.2*x2*x1 + 0*x3*x1 > + 0*x1*x2 + 0*x2*x2 + 0*x3*x2 > + 0*x1*x3 + 0*x2*x3 + 0*x3*x3 > + N(0,0) random error > > But still, it is not very good. > > As I look at this now, I realize expect just the vech, not the whole vector > of all interaction terms, so it is even more difficult than I thought to get > the > correct input.Hence, I'd like to let the user write a formula. > > The alternative for the user interface is to have named coefficients. > I can more or less easily allow a named vector for beta > > beta = c("(Intercept)" = 1, "x1" = 2, "x2" = 1, "x3" = 1, "x2:x1" = 0.1) > > I could build a formula from that. That's not too bad. But I still think > it would be cool to allow formula input. > > Have you ever seen it done? > pj > -- > Paul E. Johnson http://pj.freefaculty.org > Director, Center for Research Methods and Data Analysis http://crmda.ku.edu > > To write to me directly, please address me at pauljohn at ku.edu. > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] looking for formula parser that allows coefficients
Also here is a solution that uses formula processing rather than string processing. No packages are used. Parse <- function(e) { if (length(e) == 1) { if (is.numeric(e)) return(e) else setNames(1, as.character(e)) } else { if (isChar(e[[1]], "*")) { x1 <- Recall(e[[2]]) x2 <- Recall(e[[3]]) setNames(unname(x1 * x2), paste0(names(x1), names(x2))) } else if (isChar(e[[1]], "+")) c(Recall(e[[2]]), Recall(e[[3]])) else if (isChar(e[[1]], "-")) { if (length(e) == 2) -1 * Recall(e[[2]]) else c(Recall(e[[2]]), -Recall(e[[3]])) } else if (isChar(e[[1]], ":")) setNames(1, paste(e[-1], collapse = ":")) } } # test fo <- y ~ 2 - 1.1 * x1 + x3 - x1:x3 + 0.2 * x2:x2 Parse(fo[[3]]) giving: x1x3 x1:x3 x2:x2 2.0 -1.1 1.0 -1.0 0.2 On Wed, Aug 22, 2018 at 11:50 AM Paul Johnson wrote: > > Thanks as usual. I owe you more KU decorations soon. > On Wed, Aug 22, 2018 at 2:34 AM Gabor Grothendieck > wrote: > > > > Some string manipulation can convert the formula to a named vector such as > > the one shown at the end of your post. > > > > library(gsubfn) > > > > # input > > fo <- y ~ 2 - 1.1 * x1 + x3 - x1:x3 + 0.2 * x2:x2 > > > > pat <- "([+-])? *(\\d\\S*)? *\\*? *([[:alpha:]]\\S*)?" > > ch <- format(fo[[3]]) > > m <- matrix(strapplyc(ch, pat)[[1]], 3) > > m <- m[, colSums(m != "") > 0] > > m[2, m[2, ] == ""] <- 1 > > m[3, m[3, ] == ""] <- "(Intercept)" > > co <- as.numeric(paste0(m[1, ], m[2, ])) > > v <- m[3, ] > > setNames(co, v) > > ## (Intercept) x1 x3 x1:x3 x2:x2 > > ## 2.0-1.1 1.0-1.0 0.2 > > On Tue, Aug 21, 2018 at 6:46 PM Paul Johnson wrote: > > > > > > Can you point me at any packages that allow users to write a > > > formula with coefficients? > > > > > > I want to write a data simulator that has a matrix X with lots > > > of columns, and then users can generate predictive models > > > by entering a formula that uses some of the variables, allowing > > > interactions, like > > > > > > y ~ 2 + 1.1 * x1 + 3 * x3 + 0.1 * x1:x3 + 0.2 * x2:x2 > > > > > > Currently, in the rockchalk package, I have a function simulates > > > data (genCorrelatedData2), but my interface to enter the beta > > > coefficients is poor. I assumed user would always enter 0's as > > > place holder for the unused coefficients, and the intercept is > > > always first. The unnamed vector is too confusing. I have them specify: > > > > > > c(2, 1.1, 0, 3, 0, 0, 0.2, ...) > > > > > > I the documentation I say (ridiculously) it is easy to figure out from > > > the examples, but it really isnt. > > > It function prints out the equation it thinks you intended, thats > > > minimum protection against user error, but still not very good: > > > > > > dat <- genCorrelatedData2(N = 10, rho = 0.0, > > > beta = c(1, 2, 1, 1, 0, 0.2, 0, 0, 0), > > > means = c(0,0,0), sds = c(1,1,1), stde = 0) > > > [1] "The equation that was calculated was" > > > y = 1 + 2*x1 + 1*x2 + 1*x3 > > > + 0*x1*x1 + 0.2*x2*x1 + 0*x3*x1 > > > + 0*x1*x2 + 0*x2*x2 + 0*x3*x2 > > > + 0*x1*x3 + 0*x2*x3 + 0*x3*x3 > > > + N(0,0) random error > > > > > > But still, it is not very good. > > > > > > As I look at this now, I realize expect just the vech, not the whole > > > vector > > > of all interaction terms, so it is even more difficult than I thought to > > > get the > > > correct input.Hence, I'd like to let the user write a formula. > > > > > > The alternative for the user interface is to have named coefficients. 
> > > I can more or less easily allow a named vector for beta > > > > > > beta = c("(Intercept)" = 1, "x1" = 2, "x2" = 1, "x3" = 1, "x2:x1" = 0.1) > > > > > > I could build a formula from that. That's not too bad. But I still think > > > it would be cool to allow formula input. > > > > > > Have you ever seen it done? > > > pj > > > -- > > > Paul E. Johnson http://pj.freefaculty.org > > > Director, Center for Research Methods and Data Analysis > > > http://crmda.ku.edu > > > > > > To write to me
Re: [R] looking for formula parser that allows coefficients
The isChar function used in Parse is: isChar <- function(e, ch) identical(e, as.symbol(ch)) On Fri, Aug 24, 2018 at 10:06 PM Gabor Grothendieck wrote: > > Also here is a solution that uses formula processing rather than > string processing. > No packages are used. > > Parse <- function(e) { > if (length(e) == 1) { > if (is.numeric(e)) return(e) > else setNames(1, as.character(e)) > } else { > if (isChar(e[[1]], "*")) { >x1 <- Recall(e[[2]]) >x2 <- Recall(e[[3]]) >setNames(unname(x1 * x2), paste0(names(x1), names(x2))) > } else if (isChar(e[[1]], "+")) c(Recall(e[[2]]), Recall(e[[3]])) > else if (isChar(e[[1]], "-")) { > if (length(e) == 2) -1 * Recall(e[[2]]) > else c(Recall(e[[2]]), -Recall(e[[3]])) > } else if (isChar(e[[1]], ":")) setNames(1, paste(e[-1], collapse = ":")) > } > } > > # test > fo <- y ~ 2 - 1.1 * x1 + x3 - x1:x3 + 0.2 * x2:x2 > Parse(fo[[3]]) > > giving: > > x1x3 x1:x3 x2:x2 > 2.0 -1.1 1.0 -1.0 0.2 > On Wed, Aug 22, 2018 at 11:50 AM Paul Johnson wrote: > > > > Thanks as usual. I owe you more KU decorations soon. > > On Wed, Aug 22, 2018 at 2:34 AM Gabor Grothendieck > > wrote: > > > > > > Some string manipulation can convert the formula to a named vector such as > > > the one shown at the end of your post. > > > > > > library(gsubfn) > > > > > > # input > > > fo <- y ~ 2 - 1.1 * x1 + x3 - x1:x3 + 0.2 * x2:x2 > > > > > > pat <- "([+-])? *(\\d\\S*)? *\\*? *([[:alpha:]]\\S*)?" > > > ch <- format(fo[[3]]) > > > m <- matrix(strapplyc(ch, pat)[[1]], 3) > > > m <- m[, colSums(m != "") > 0] > > > m[2, m[2, ] == ""] <- 1 > > > m[3, m[3, ] == ""] <- "(Intercept)" > > > co <- as.numeric(paste0(m[1, ], m[2, ])) > > > v <- m[3, ] > > > setNames(co, v) > > > ## (Intercept) x1 x3 x1:x3 x2:x2 > > > ## 2.0-1.1 1.0-1.0 0.2 > > > On Tue, Aug 21, 2018 at 6:46 PM Paul Johnson wrote: > > > > > > > > Can you point me at any packages that allow users to write a > > > > formula with coefficients? > > > > > > > > I want to write a data simulator that has a matrix X with lots > > > > of columns, and then users can generate predictive models > > > > by entering a formula that uses some of the variables, allowing > > > > interactions, like > > > > > > > > y ~ 2 + 1.1 * x1 + 3 * x3 + 0.1 * x1:x3 + 0.2 * x2:x2 > > > > > > > > Currently, in the rockchalk package, I have a function simulates > > > > data (genCorrelatedData2), but my interface to enter the beta > > > > coefficients is poor. I assumed user would always enter 0's as > > > > place holder for the unused coefficients, and the intercept is > > > > always first. The unnamed vector is too confusing. I have them specify: > > > > > > > > c(2, 1.1, 0, 3, 0, 0, 0.2, ...) > > > > > > > > I the documentation I say (ridiculously) it is easy to figure out from > > > > the examples, but it really isnt. > > > > It function prints out the equation it thinks you intended, thats > > > > minimum protection against user error, but still not very good: > > > > > > > > dat <- genCorrelatedData2(N = 10, rho = 0.0, > > > > beta = c(1, 2, 1, 1, 0, 0.2, 0, 0, 0), > > > > means = c(0,0,0), sds = c(1,1,1), stde = 0) > > > > [1] "The equation that was calculated was" > > > > y = 1 + 2*x1 + 1*x2 + 1*x3 > > > > + 0*x1*x1 + 0.2*x2*x1 + 0*x3*x1 > > > > + 0*x1*x2 + 0*x2*x2 + 0*x3*x2 > > > > + 0*x1*x3 + 0*x2*x3 + 0*x3*x3 > > > > + N(0,0) random error > > > > > > > > But still, it is not very good. 
> > > > > > > > As I look at this now, I realize expect just the vech, not the whole > > > > vector > > > > of all interaction terms, so it is even more difficult than I thought > > > > to get the > > > > correct input.Hence, I'd like to let the user write a formula. > > > > > > > > The alternative for the user interface is to have named coefficients. > > > > I can more or less easily allow a named vector for beta > >
Re: [R] year and week to date - before 1/1 and after 12/31
Replace the week in the date with week 2, say -- a week in which nothing will go wrong and then add or subtract the appropriate number of weeks. d <- c('2016 00 Sun', '2017 53 Sun', '2017 53 Mon') # test data as.Date(sub(" .. ", "02", d), "%Y %U %a") + 7 * (as.numeric(sub(" (..) ...", "\\1", d)) - 2) ## [1] "2015-12-27" "2017-12-31" "2018-01-01" On Tue, Oct 16, 2018 at 10:23 AM peter salzman wrote: > > hi, > > to turn year and week into the date one can do the following: > > as.Date('2018 05 Sun', "%Y %W %a") > > however, when we want the Sunday (1st day of week) of the 1st week of 2018 > we get NA because 1/1/2018 was on Monday > > as.Date('2018 00 Mon',format="%Y %U %a") > ## 2018-01-01 > as.Date('2018 00 Sun',format="%Y %U %a") > ## NA > > btw the same goes for last week > as.Date('2017 53 Sun',format="%Y %U %a") > ## 2017-12-31 > as.Date('2017 53 Mon',format="%Y %U %a") > ## NA > > So my question is : > how do i get > from "2018 00 Sun" to 2018-12-31 > and > from "2017 53 Mon" to 2018-01-01 > > i realize i can loop over days of week and do some if/then statements, > but is there a built in function? > > thank you > peter > > > > > > -- > Peter Salzman, PhD > Department of Biostatistics and Computational Biology > University of Rochester > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] R code for if-then-do code blocks
There is some discussion of approaches to this here: https://stackoverflow.com/questions/34096162/dplyr-mutate-replace-on-a-subset-of-rows/34096575#34096575 On Mon, Dec 17, 2018 at 10:30 AM Paul Miller via R-help wrote: > > Hello All, > > Season's greetings! > > Am trying to replicate some SAS code in R. The SAS code uses if-then-do code > blocks. I've been trying to do likewise in R as that seems to be the most > reliable way to get the same result. > > Below is some toy data and some code that does work. There are some things I > don't necessarily like about the code though. So I was hoping some people > could help make it better. One thing I don't like is that the within function > reverses the order of the computed columns such that test1:test5 becomes > test5:test1. I've used a mutate to overcome that but would prefer not to have > to do so. > > Another, perhaps very small thing, is the need to calculate an ID variable > that becomes the basis for a grouping. > > I did considerable Internet searching for R code that conditionally computes > blocks of code. I didn't find much though and so am wondering if my search > terms were not sufficient or if there is some other reason. It occurred to me > that maybe if-then-do code blocks like we often see in SAS as are frowned > upon and therefore not much implemented. > > I'd be interested in seeing more R-compatible approaches if this is the case. > I've learned that it's a mistake to try and make R be like SAS. It's better > to let R be R. Trouble is I'm not always sure how to do that. > > Thanks, > > Paul > > > d1 <- data.frame(workshop=rep(1:2,4), > gender=rep(c("f","m"),each=4)) > > library(tibble) > library(plyr) > > d2 <- d1 %>% > rownames_to_column("ID") %>% > mutate(test1 = NA, test2 = NA, test4 = NA, test5 = NA) %>% > ddply("ID", > within, > if (gender == "f" & workshop == 1) { > test1 <- 1 > test1 <- 6 + test1 > test2 <- 2 + test1 > test4 <- 1 > test5 <- 1 > } else { > test1 <- test2 <- test4 <- test5 <- 0 > }) > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Newbie Question on R versus Matlab/Octave versus C
R has many similarities to Octave. Have a look at: https://cran.r-project.org/doc/contrib/R-and-octave.txt https://CRAN.R-project.org/package=matconv On Mon, Jan 28, 2019 at 4:58 PM Alan Feuerbacher wrote: > > Hi, > > I recently learned of the existence of R through a physicist friend who > uses it in his research. I've used Octave for a decade, and C for 35 > years, but would like to learn R. These all have advantages and > disadvantages for certain tasks, but as I'm new to R I hardly know how > to evaluate them. Any suggestions? > > Thanks! > > --- > This email has been checked for viruses by Avast antivirus software. > https://www.avast.com/antivirus > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] [FORGED] Newbie Question on R versus Matlab/Octave versus C
This would be a suitable application for NetLogo. The R package RNetLogo provides an interface. In a few lines of code you get a simulation with graphics. On Mon, Jan 28, 2019 at 7:00 PM Alan Feuerbacher wrote: > > On 1/28/2019 4:20 PM, Rolf Turner wrote: > > > > On 1/29/19 10:05 AM, Alan Feuerbacher wrote: > > > >> Hi, > >> > >> I recently learned of the existence of R through a physicist friend > >> who uses it in his research. I've used Octave for a decade, and C for > >> 35 years, but would like to learn R. These all have advantages and > >> disadvantages for certain tasks, but as I'm new to R I hardly know how > >> to evaluate them. Any suggestions? > > > > * C is fast, but with a syntax that is (to my mind) virtually > >incomprehensible. (You probably think differently about this.) > > I've been doing it long enough that I have little problem with it, > except for pointers. :-) > > > * In C, you essentially have to roll your own for all tasks; in R, > >practically anything (well ...) that you want to do has already > >been programmed up. CRAN is a wonderful resource, and there's more > >on github. > > > > * The syntax of R meshes beautifully with *my* thought patterns; YMMV. > > > > * Why not just bog in and try R out? It's free, it's readily available, > >and there are a number of good online tutorials. > > I just installed R on my Linux Fedora system, so I'll do that. > > I wonder if you'd care to comment on my little project that prompted > this? As part of another project, I wanted to model population growth > starting from a handful of starting individuals. This is exponential in > the long run, of course, but I wanted to see how a few basic parameters > affected the outcome. Using Octave, I modeled a single person as a > "cell", which in Octave has a good deal of overhead. The program > basically looped over the entire population, and updated each person > according to the parameters, which included random statistical > variations. So when the total population reached, say 10,000, and an > update time of 1 day, the program had to execute 10,000 x 365 update > operations for each year of growth. For large populations, say 100,000, > the program did not return even after 24 hours of run time. > > So I switched to C, and used its "struct" declaration and an array of > structs to model the population. This allowed the program to complete in > under a minute as opposed to 24 hours+. So in line with your comments, C > is far more efficient than Octave. > > How do you think R would fare in this simulation? > > Alan > > > --- > This email has been checked for viruses by Avast antivirus software. > https://www.avast.com/antivirus > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Newbie Question on R versus Matlab/Octave versus C
Two additional comments: - depending on the nature of your problem you may be able to get an analytic solution using branching processes. I found this approach successful when I once had to model stem cell growth. - in addition to NetLogo another alternative to R would be the Julia language which is motivated to some degree by Octave but is actually quite different and is particularly suitable in terms of performance for iterative computations where one iteration depends on the prior one. On Mon, Jan 28, 2019 at 6:32 PM Gabor Grothendieck wrote: > > R has many similarities to Octave. Have a look at: > > https://cran.r-project.org/doc/contrib/R-and-octave.txt > https://CRAN.R-project.org/package=matconv > > On Mon, Jan 28, 2019 at 4:58 PM Alan Feuerbacher wrote: > > > > Hi, > > > > I recently learned of the existence of R through a physicist friend who > > uses it in his research. I've used Octave for a decade, and C for 35 > > years, but would like to learn R. These all have advantages and > > disadvantages for certain tasks, but as I'm new to R I hardly know how > > to evaluate them. Any suggestions? > > > > Thanks! > > > > --- > > This email has been checked for viruses by Avast antivirus software. > > https://www.avast.com/antivirus > > > > __ > > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > > https://stat.ethz.ch/mailman/listinfo/r-help > > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > > and provide commented, minimal, self-contained, reproducible code. > > > > -- > Statistics & Software Consulting > GKX Group, GKX Associates Inc. > tel: 1-877-GKX-GROUP > email: ggrothendieck at gmail.com -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Better use with gsub
On Fri, Aug 1, 2014 at 10:46 AM, Doran, Harold wrote: > I have done an embarrassingly bad job using a mixture of gsub and strsplit to > solve a problem. Below is sample code showing what I have to start with (the > vector xx) and I want to end up with two vectors x and y that contain only > the digits found in xx. > > Any regex users with advice most welcome > > Harold > > xx <- c("S24:57", "S24:86", "S24:119", "S24:129", "S24:138", "S24:163") > yy <- gsub("S","\\1", xx) > a1 <- gsub(":"," ", yy) > a2 <- sapply(a1, function(x) strsplit(x, ' ')) > x <- as.numeric(sapply(a2, function(x) x[1])) > y <- as.numeric(sapply(a2, function(x) x[2])) > library(gsubfn) > strapply(xx, "\\d+", as.numeric, simplify = TRUE) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 24 24 24 24 24 24 [2,] 57 86 119 129 138 163 -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
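A base-R alternative to the strapply call above, in case gsubfn is not available: regmatches/gregexpr pull out the digit runs and sapply reshapes them into the same 2-row matrix.
xx <- c("S24:57", "S24:86", "S24:119", "S24:129", "S24:138", "S24:163")
sapply(regmatches(xx, gregexpr("[0-9]+", xx)), as.numeric)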
Re: [R] Old g++ in Rtools
On Wed, Aug 6, 2014 at 2:46 AM, Rguy wrote: > I recently downloaded Rtools. I see the g++ version is > gcc version 4.6.3 20111208 (prerelease) (GCC) > > I also recently downloaded MinGW. Its version of g++ is > gcc version 4.8.1 (GCC) > > I believe that later versions of g++ provide better support for C++11. > Why does Rtools provide a version considerably older than the latest? > Any plans to update the version? > Is it bad practice to compile with a later version when interfacing with R? Later versions of gcc/g++ also support newer OpenMP features which 4.6.3 does not. There may be other C/C++ packages too that can't be used with Rtools until it is upgraded to a more recent version. By the way, there is a MinGW 4.8.2 distribution (slightly more recent than the one you downloaded) here: http://nuwen.net/mingw.html -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] dynamic runSum
On Thu, Aug 7, 2014 at 9:32 AM, amarjit chandhial wrote: > Hello, > runSum calculates a running sum looking back a fixed distance n, e.g. 20. > How do I calculate a dynamic runSum function for an xts object? > In > otherwords, I want to calculate a running sum at each point in time > looking back a variable distance. In this example, values governed by > the vector VL. The width argument in rollapplyr in the zoo package can be a vector. It can't be NA though so we have used 1 in those cases here and at the end used na.omit to get rid of the junk at the beginning: na.omit(rollapplyr(acf1, ifelse(is.na(acf1$VL), 1, acf1$VL), sum)) __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
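A self-contained illustration, since acf1 itself is not shown in the thread; x and VL below are made-up stand-ins for the poster's series and lookback vector.
library(zoo)
x  <- zoo(1:10)
VL <- c(1, 1, 2, 3, 2, 4, 1, 3, 5, 2)   # how many points to sum at each position
rollapplyr(x, VL, sum)                  # width varies point by point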
Re: [R] R Package for Text Manipulation
On Sat, Aug 9, 2014 at 8:15 AM, Omar André Gonzáles Díaz wrote: > Hi all, > > I want to know, where i can find a package to simulate the functions > "Search and Replace and "Find Words that contain - replace them with...", > that we can use in EXCEL. > > I've look in other places and they say: "Reshape2" by Hadley Wickham. How > ever, i've investigated it and its not exactly what i'm looking (it's main > functions are "cast" and "melt", sure you know them). > > May you help me please? I want to download data from Google Analytics and > clean it, what is the best approach? > > [[alternative HTML version deleted]] > 1. The gsubfn function in the gsubfn package can do that. These commands extract the words and then apply the function represented in formula notation in the second argument to them: library(gsubfn) # home page at http://gsubfn.googlecode.com s <- "The quick brown fox" # test data # replace the word quick with QUICK gsubfn("\\S+", ~ if (x == "quick") "QUICK" else x, s) ## [1] "The QUICK brown fox" # replace words containing o with ? gsubfn("\\S+", ~ if (grepl("o", x)) "?" else x, s) ## [1] "The quick ? ?" 2. It can also be done without packages: # replace quick with QUICK gsub("\\bquick\\b", "QUICK", s) ## [1] "The QUICK brown fox" # or the following which first split s into a vector of words and # operate on that pasting it back into a single string at the end words <- strsplit(s, "\\s+")[[1]] paste(replace(words, words == "quick", "QUICK"), collapse = " ") ## [1] "The QUICK brown fox" # replace words containing o with ?. Use `words` from above. paste(replace(words, grepl("o", words), "?"), collapse = " ") ## [1] "The quick ? ?" -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Sldf command returns negative value for date
On Thu, Aug 14, 2014 at 3:47 PM, Sneha Bishnoi wrote: > Hi All! > > I am trying to increment date column of data frame so as to merge it with > another data frame using sqldf: > my query is : > merge<-sqldf("select m.* ,e.* from mdata as m left join event as e on > date(m.Datest,'+1 day')=e.Start") > > The query returns null for all columns related to event table. > When I investigated further with query : > sqldf("select date(Datest,'+1 day')") from eventflight;") > gives me -ve valued dates like : -4671-02-15 > > However this works: > sqldf("select date(('2009-05-01'),'+1')") > > Dataframes are as follows: > mdata : > LOS Arrivals BookRange Datest > 1 1283 0-42009-05-01 > 1 1650 0-42009-05-08 > 1 1302 5-92009-05-15 > > event: > Event.Name Event.location Start End > BirthdayTexas (US) 2009-05-022009-05-03 > Anni Texas (US) 2009-05-09 2009-01-11 > > What am I doing wrong? This is a FAQ. See #4 here: http://sqldf.googlecode.com . The SQLite date function assumes its argument is a timestring but R "Date" class variables are transferred to SQLite as days since 1970-01-01 so just add 1. sqldf("select * from mdata as m left join event on Datest+1 = Start") -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
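A self-contained sketch of the same point, using made-up miniature versions of the poster's mdata and event tables: the Date columns arrive in SQLite as plain day counts, so the join condition is just Datest + 1 = Start.
library(sqldf)
mdata <- data.frame(Datest = as.Date(c("2009-05-01", "2009-05-08")), LOS = c(1, 1))
event <- data.frame(Start = as.Date(c("2009-05-02", "2009-05-09")),
                    Event.Name = c("Birthday", "Anni"))
sqldf("select * from mdata m left join event e on m.Datest + 1 = e.Start")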
Re: [R] Bus stop sequence matching problem
Try dtw. First convert ref to numeric since dtw does not handle character input. Then align using dtw and NA out repeated values in the alignment. Finally zap ugly row names and calculate loading: library(dtw) s1 <- as.numeric(stop_sequence$ref) s2 <- as.numeric(factor(as.character(stop_onoff$ref), levels(stop_sequence$ref))) a <- dtw(s1, s2) DF <- cbind(stop_sequence, stop_onoff[replace(a$index2, c(FALSE, diff(a$index2) == 0), NA), ])[-3] rownames(DF) <- NULL transform(DF, loading = cumsum(ifelse(is.na(on), 0, on)) - cumsum(ifelse(is.na(off), 0, off))) giving: seq ref on off loading 1 10 A 5 0 5 2 20 B NA NA 5 3 30 C NA NA 5 4 40 D 0 2 3 5 50 B 10 2 11 6 60 A 0 6 5 You will need to test this with more data and tweak it if necessary via the various dtw arguments. On Fri, Aug 29, 2014 at 8:46 PM, Adam Lawrence wrote: > I am hoping someone can help me with a bus stop sequencing problem in R, > where I need to match counts of people getting on and off a bus to the > correct stop in the bus route stop sequence. I have tried looking > online/forums for sequence matching but seems to refer to numeric sequences > or DNA matching and over my head. I am after a simple example if anyone can > please help. > > I have two data series as per below (from database), that I want to > combine. In this example “stop_sequence” includes the equence (seq) of bus > stops and “stop_onoff” is a count of people getting on and off at certain > stops (there is no entry if noone gets on or off). > > stop_sequence <- data.frame(seq=c(10,20,30,40,50,60), > ref=c('A','B','C','D','B','A')) > ## seq ref > ## 1 10 A > ## 2 20 B > ## 3 30 C > ## 4 40 D > ## 5 50 B > ## 6 60 A > stop_onoff <- > data.frame(ref=c('A','D','B','A'),on=c(5,0,10,0),off=c(0,2,2,6)) > ## ref on off > ## 1 A 5 0 > ## 2 D 0 2 > ## 3 B 10 2 > ## 4 A 0 6 > > I need to match the stop_onoff numbers in the right sto sequence, with the > correctly matched output as follows (load is a cumulative count of on and > off) > > desired_output <- data.frame(seq=c(10,20,30,40,50,60), > ref=c('A','B','C','D','B','A'), > on=c(5,'-','-',0,10,0),off=c(0,'-','-',2,2,6), load=c(5,0,0,3,11,5)) > ## seq ref on off load > ## 1 10 A 5 05 > ## 2 20 B - -0 > ## 3 30 C - -0 > ## 4 40 D 0 23 > ## 5 50 B 10 2 11 > ## 6 60 A 0 65 > > In this example the stop “B” is matched to the second stop “B” in the stop > sequence and not the first because the onoff data is after stop “D”. > > Any guidance much appreciated. > > Regards > Adam > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Operator proposal: %between%
On Thu, Sep 4, 2014 at 10:41 AM, Torbjørn Lindahl wrote: > Not sure if this is the proper list to propose changes like this, if it > passes constructive criticism, it would like to have a %between% operator > in the R language. > There is a between function in the data.table package. > library(data.table) > between function (x, lower, upper, incbounds = TRUE) { if (incbounds) x >= lower & x <= upper else x > lower & x < upper } and also in the tis package which works differently: > library(tis) > between function (y, x1, x2) { y <- unclass(y) x1 <- unclass(x1) x2 <- unclass(x2) small <- pmin(x1, x2) large <- pmax(x1, x2) (y >= small) & (y <= large) } In addition, SQL has a between operator > library(sqldf) > sqldf("select * from BOD where Time between 3 and 5") Time demand 13 19.0 24 16.0 35 15.6 __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
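For completeness, a usage sketch of the data.table version mentioned above; data.table also provides a %between% operator alongside the between function (see ?between).
library(data.table)
x <- c(1, 3, 5, 7)
between(x, 2, 6)       # FALSE TRUE TRUE FALSE (bounds included by default)
x %between% c(2, 6)    # operator form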
Re: [R] Import data from Excel to R
On Tue, Sep 9, 2014 at 4:48 PM, JAWADI Fredj wrote: > Hi > I am a New user of R. > Please, how to import data from Excel to R? > Thanks, > Best regards, > Fredj, > There are some ways listed here: https://web.archive.org/web/20131109195709/http://rwiki.sciviews.org/doku.php?id=tips:data-io:ms_windows&s=excel -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
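As a concrete starting point, here is a minimal sketch assuming the readxl package is installed; "myfile.xlsx" is a placeholder for your own file. Exporting the sheet to CSV and using read.csv() is another common route.
library(readxl)
dat <- read_excel("myfile.xlsx", sheet = 1)  # sheet can be a number or a name
head(dat)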
Re: [R] Using sqldf() to read in .fwf files
On Mon, Sep 15, 2014 at 12:09 PM, Doran, Harold wrote: > I am learning to use sqldf() to read in very large fixed width files that > otherwise do not work efficiently with read.fwf. I found the following > example online and have worked with this in various ways to read in the data > > cat("1 8.3 > 210.3 > 319.0 > 416.0 > 515.6 > 719.8 > ", file = "fixed") > > fixed <- file("fixed") > sqldf("select substr(V1, 1, 1) f1, substr(V1, 2, 4) f2 from fixed") > > I then applied this to my real world data problem though it yields the > following error message and I am not sure how to interpret this. > > dor <- file("dor") >> sqldf("select substr(V1, 1, 1) f1, substr(V1, 2, 4) f2 from dor") > Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, : > line 1 did not have 6 elements > > Looking at my .fwf. data in a text editor shows the data are structured as I > would expect. In fact, I can read in the first few lines of the file using > read.fwf and the data are as I would expect after being read into R. > We want it to regard the entire line as one field so specify sep= as some character not in the file. attr(fixed, "file.format") <- list(sep = ";") -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
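Putting the pieces of this thread together, a self-contained sketch of the fix looks like this (the toy fixed-width file is the one quoted above):
library(sqldf)
cat("1 8.3\n210.3\n319.0\n416.0\n515.6\n719.8\n", file = "fixed")
fixed <- file("fixed")
# have sqldf read each line as a single field by declaring a separator
# that does not occur in the file
attr(fixed, "file.format") <- list(sep = ";")
sqldf("select substr(V1, 1, 1) f1, substr(V1, 2, 4) f2 from fixed")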
Re: [R] Using sqldf() to read in .fwf files
On Mon, Sep 15, 2014 at 3:23 PM, Doran, Harold wrote: > Thank you, Gabor. This has seemingly resolved the issue. Perhaps a quick > follow up. Suppose I know that the 1st variable I am reading in is to be > numeric and the second is character. Can that be specified in the substr() > argument? > > sqldf("select substr(V1, 1, 1) f1, substr(V1, 2, 4) f2 from fixed") > Cast the numeric field to real: select cast(substr(V1, 1, 1) as real) ... __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
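In full, the statement from the previous message becomes (a sketch reusing the 'fixed' connection defined earlier in the thread; f1 comes back numeric, f2 character):
sqldf("select cast(substr(V1, 1, 1) as real) f1, substr(V1, 2, 4) f2 from fixed")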
Re: [R] Savitzky-Golay Smoother
On Fri, Sep 26, 2014 at 3:32 AM, Erick Okuto wrote: > Dear Paul and Henrik, > I have a time series with some missing data points that i need smoothed > using Savitzky-Golay filter. Related question was asked here > http://thr3ads.net/r-help/2012/11/2121748-Savitzky-Golay-filtering-with-missing-data > but no straight forward answer was posted. However, Henrik (cc'd here) did > ask related question on smoothing for reflectance here > http://thr3ads.net/r-help/2004/02/835137-Savitzky-Golay-smoothing-for-reflectance-data > which i have as well been unable to follow up. I will be glad if you could > assist me with some insights on the way forward or point to a relevant > source of help. Not Savitzky-Golay but if z is a time series then library(zoo) na.spline(z) will fill in NAs with spline curve fits. See ?na.spline There are other NA filling routines in zoo too: > ls(pattern = "^na[.]", "package:zoo") [1] "na.aggregate" "na.aggregate.default" "na.approx" [4] "na.approx.default""na.fill" "na.fill.default" [7] "na.locf" "na.locf.default" "na.spline" [10] "na.spline.default""na.StructTS" "na.trim" [13] "na.trim.default" "na.trim.ts" -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
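A tiny made-up illustration of the zoo NA-filling functions mentioned above:
library(zoo)
z <- zoo(c(1, 2, NA, NA, 5, 7, NA, 10), 1:8)  # toy series with gaps
na.spline(z)  # cubic spline through the observed points
na.approx(z)  # linear interpolation, for comparison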
Re: [R] Could someone recommend a package for time series?
On Mon, Sep 29, 2014 at 5:05 AM, jpm miao wrote: > Hi, > >I've not used R for about one year and don't know well about the updates > on the time series-related package. > > My primary job is to do economic/financial time series data analysis - > annual, monthly, daily, etc. I usually read data by the package > "XLConnect", which can read xls or xlsx files directly. It's excellent. > However I can't find a package to manipulate time series data. For example, > I just want to do an easy manipulation , e.g, to label the dates of the > data from , say, 1991M10 to 2014M07, and then extract part of the data, > say, 2005M01 to 2010M12 and do analysis. Is there any package work well for > my purpose? > > I sometimes need to aggregate monthly data to quarterly data and I find > "aggregate" function helpful. > > In the past I used packages xts, zoo and don't find it really user > friendly. Maybe I haven't mastered it; maybe there're some updates (which I > don't know) now. Could someone recommend a package or provide an example > (or just the document, I can read it) for my purpose? > The built in "ts" class works well with regularly spaced monthly and quarterly data but is less suitable for daily data since it cannot represent exact dates. What you have described is very easy to do with zoo and/or xts. If you are familiar with the core functions of R then zoo is pretty easy to use since nearly all its functions are methods of core generics allowing you to leverage your knowledge of R. See: vignette("zoo-design") for the design principles used and all 5 vignettes: vignette(package = "zoo").. xts (which works with zoo) could also be of interest as well as 139 other packages that work with zoo and/or xts which means that in many cases whatever functionality you need already exists. See the bottom of each of these pages for links to the other packages: http://cran.r-project.org/package=zoo http://cran.r-project.org/package=xts Regarding your problem, zoo does have built in yearmon and yearqtr classes for monthly and quarterly data. Here is an example which creates some daily test data, aggregates to monthly, extracts a subset and displays the first few data points. Then it aggregates to quarterly and displays the first few data points At the end it plots the data using zoo's classic graphics plot.zoo method. ( zoo also has direct support for ggplot2 (autoplot.zoo) and lattice graphics (xyplot.zoo).) library(zoo) # create test data, z tt <- seq(as.Date("2000-10-01"), as.Date("2013-12-31"), by = "day") z <- zoo(seq_along(tt), tt) # aggregate to monthly series and change time scale to year/month zm <- aggregate(z, as.yearmon, mean) # extract part of it zm0 <- window(zm, start = "2005-01", end = "2010-12") head(zm0) ## Jan 2005 Feb 2005 Mar 2005 Apr 2005 May 2005 Jun 2005 ## 1569.0 1598.5 1628.0 1658.5 1689.0 1719.5 # aggregate to quarterly zq <- aggregate(zm0, as.yearqtr, mean) head(zq) ## 2005 Q1 2005 Q2 2005 Q3 2005 Q4 2006 Q1 2006 Q2 ## 1598.500 1689.000 1780.833 1872.500 1963.500 2054.000 plot(zq) __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Count number of Fridays in a month
On Fri, Oct 10, 2014 at 7:28 AM, Abhinaba Roy wrote: > Hi R helpers, > > I want to write a function which will > > 1. Count the number of fridays in the current month ( to extract month from > given date) and also the number of fridays in the preceding month > > 2. Calculate the ratio of the number of fridays in current month to the > number of fridays in the preceding month > > 3. Return an integer value calculated as > ifelse(ratio>1,1,ifelse(ratio<1,-1,0)) > > The date which is passed is in the format *'31-may-2014'* > > So, given the date '31-may-2014' > > Number of fridays in May2014 = 5 > Number of fridays in Apr2014 = 4 > > Ratio = 5/4 >1 > Hence, the function will return a value 1 > > I want to call the function by passing '31-may-2014' as an argument If d is a "Date" class variable equal to a month end date, e.g. d <- as.Date("31-may-2014", format = "%d-%b-%Y") then this gives the ratio of the number of Fridays in d's month to the number of Fridays in the prior month: days <- seq(as.Date(cut(d - 32, "month")), d, "day") ratio <- exp(diff(log(tapply(format(days, "%w") == 5, format(days, "%Y%m"), sum)))) Now apply the formula in your point 3 to ratio and put it all in a function. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
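For completeness, the pieces can be wrapped up as requested in point 3 (a sketch; the function name is arbitrary):
friday_flag <- function(date_string) {
  d <- as.Date(date_string, format = "%d-%b-%Y")
  days <- seq(as.Date(cut(d - 32, "month")), d, "day")
  nfri <- tapply(format(days, "%w") == 5, format(days, "%Y%m"), sum)
  ratio <- exp(diff(log(nfri)))
  ifelse(ratio > 1, 1, ifelse(ratio < 1, -1, 0))
}
friday_flag("31-may-2014")  # 5 Fridays in May 2014 vs 4 in Apr 2014, so returns 1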
Re: [R] ggplot "scale_x_date" : to plot quarterly scale?
On Tue, Oct 14, 2014 at 3:36 AM, jpm miao wrote: > Hi, > > I am plotting time series by ggplot2, but I believe that my question > applies to other plotting tool as well. > > I want to make my x-axis the quarterly scale, e.g: > 2000Q1 2000Q2. > >However, scale_x_date and date_format("%m/%d") support all time formats > BUT QUARTERs > > library(scales) # to access breaks/formatting functions > dt + scale_x_date() > dt + scale_x_date(labels = date_format("%m/%d")) > 1. zoo has a "yearqtr" class and its ggplot2 interface includes scale_x_yearqtr() so: library(zoo) library(ggplot2) library(scales) # test data DF <- data.frame(date = seq(as.Date("2014-01-01"), length = 4, by = "3 months"), y = c(1, 4, 2, 3)) # convert date to yearqtr class DF2 <- transform(DF, date = as.yearqtr(date)) ggplot(DF2, aes(date, y)) + geom_line() + scale_x_yearqtr(format = "%YQ%q") 2. zoo also has a zoo method for ggplot2's autoplot generic so we could just convert DF to zoo and write: z <- zoo(DF$y, as.yearqtr(DF$date)) autoplot(z) + scale_x_yearqtr(format = "%YQ%q") In both cases if the format argument is omitted one gets a default of format = "%Y-%q". -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Ways to get all function signatures of a library?
On Wed, Oct 29, 2014 at 9:09 AM, Thorsten Jolitz wrote: > > Hi List, > > are there ways to get signatures of all functions of a library in a > format that is easy to process by a programm (list, xml or so)? > > The info about function name, return value and arguments (types) is all > there in the docs, but more in a human readable format embedded in much > extra information. How to extract it without writing a documentation > parser or so? I'm pretty sure the functionality exists, but did not find > it. > In general, R functions do not have argument and return types (and don't even have to have names) but maybe this would do: library(lattice) # need this for make.groups # load the package of interest library(zoo) DF <- do.call(make.groups, Map(function(x) names(formals(get(x))), ls("package:zoo"))) rownames(DF) <- NULL giving: > head(DF) data which 1 x as.Date 2... as.Date 3 x as.Date.numeric 4 origin as.Date.numeric 5... as.Date.numeric 6 x as.Date.ts -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] extracting every nth character from a string...
This uses a regular expression but is shorter: > gsub("(.).", "\\1", "ABCDEFG") [1] "ACEG" It replaces each successive pair of characters with the first of that pair. If there is an odd number of characters then the last character is not matched and therefore kept -- thus it works properly for both even and odd. On Sat, Sep 5, 2015 at 4:59 PM, Evan Cooch wrote: > Suppose I had the following string, which has length of integer multiple > of some value n. So, say n=2, and the example string has a length of (2x4) > = 8 characters. > > str <- "ABCDEFGH" > > What I'm trying to figure out is a simple, base-R coded way (which I > heuristically call StrSubset in the following) to extract every nth > character from the string, to generate a new string. > > So > > str <- "ABCDEFGH" > > new_str <- StrSubset(str); > > print(new_str) > > which would yield > > "ACEG" > > > Best I could come up with is something like the following, where I extract > every odd character from the string: > > StrSubset <- function(string) > { > paste(unlist(strsplit(string,""))[seq(1,nchar(string),2)],collapse="") } > > > Anything more elegant come to mind? Trying to avoid regex if possible > (harder to explain to end-users), but if that meets the 'more elegant' > sniff test, happy to consider... > > Thanks in advance... > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
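Wrapped up as the StrSubset() function the poster asked for (this covers the every-other-character case, n = 2, only):
StrSubset <- function(string) gsub("(.).", "\\1", string)
StrSubset("ABCDEFGH")  # "ACEG"
StrSubset("ABCDEFG")   # "ACEG"  (odd length handled too)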
Re: [R] How to read CSV from web?
Your read.csv call works for me under Windows on "R version 3.2.2 Patched (2015-08-25 r69180)" but not on "R version 3.1.3 Patched (2015-03-16 r68169)". Suggest you upgrade your R installation and try again. If you are on Windows and don't want to upgrade right now, an alternative is to issue this command first: setInternet2() Also note that header = TRUE and sep = "," are the defaults for read.csv, so those arguments can be optionally omitted. On Wed, Jul 29, 2015 at 6:37 AM, jpara3 wrote: > data<-read.csv(" > https://raw.githubusercontent.com/sjkiss/Survey/master/mlogit.out.csv > ",header=T,sep=",") > > > > -- > View this message in context: > http://r.789695.n4.nabble.com/How-to-read-CSV-from-web-tp4710502p4710513.html > Sent from the R help mailing list archive at Nabble.com. > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
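For example, on the Windows builds of R current at the time (setInternet2 has since been removed from R), the sequence would be:
setInternet2()  # switch to the Internet Explorer based download internals
data <- read.csv("https://raw.githubusercontent.com/sjkiss/Survey/master/mlogit.out.csv")
head(data)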
Re: [R] How to coerce a parameter in nls?
Express the formula in terms of simple operations like this: # add 0/1 columns ref.1, ref.2, ..., ref.6 dproot2 <- do.call(data.frame, transform(dproot, ref = outer(dproot$ref, seq(6), "==") + 0)) # now express the formula in terms of the new columns library(nlmrt) fitdp1<-nlxb(den ~ (Rm1 * ref.1 + Rm2 * ref.2 + Rm3 * ref.3 + Rm4 * ref.4 + Rm5 * ref.5 + Rm6 * ref.6)/(1+(depth/d50)^c), data = dproot2, start = c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, Rm5=1.01, Rm6=1, d50=20, c=-1), masked = "Rm6") where we used this input: Lines <- " depth den ref 1 20 0.573 1 2 40 0.780 1 3 60 0.947 1 4 80 0.990 1 5100 1.000 1 6 10 0.600 2 7 20 0.820 2 8 30 0.930 2 9 40 1.000 2 1020 0.480 3 1140 0.734 3 1260 0.961 3 1380 0.998 3 14 100 1.000 3 1520 3.2083491 4 1640 4.9683383 4 1760 6.2381133 4 1880 6.5322348 4 19 100 6.5780660 4 20 120 6.6032064 4 2120 0.614 5 2240 0.827 5 2360 0.950 5 2480 0.995 5 25 100 1.000 5 2620 0.4345774 6 2740 0.6654726 6 2860 0.8480684 6 2980 0.9268951 6 30 100 0.9723207 6 31 120 0.9939966 6 32 140 0.9992400 6" dproot <- read.table(text = Lines, header = TRUE) On Mon, Sep 21, 2015 at 12:22 PM, Jianling Fan wrote: > Thanks Prof. Nash, > > Sorry for late reply. I am learning and trying to use your nlmrt > package since I got your email. It works good to mask a parameter in > regression but seems does work for my equation. I think the problem is > that the parameter I want to mask is a group-specific parameter and I > have a "[]" syntax in my equation. However, I don't have your 2014 > book on hand and couldn't find it in our library. So I am wondering if > nlxb works for group data? > Thanks a lot! > > following is my code and I got a error form it. > > > fitdp1<-nlxb(den~Rm[ref]/(1+(depth/d50)^c),data=dproot, > + start =c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, > Rm5=1.01, Rm6=1, d50=20, c=-1), > + masked=c("Rm6")) > > Error in deriv.default(parse(text = resexp), names(start)) : > Function '`[`' is not in the derivatives table > > > Best regards, > > Jianling > > > On 20 September 2015 at 12:56, ProfJCNash wrote: > > I posted a suggestion to use nlmrt package (function nlxb to be precise), > > which has masked (fixed) parameters. Examples in my 2014 book on > Nonlinear > > parameter optimization with R tools. However, I'm travelling just now, or > > would consider giving this a try. > > > > JN > > > > > > On 15-09-20 01:19 PM, Jianling Fan wrote: > >> > >> no, I am doing a regression with 6 group data with 2 shared parameters > >> and 1 different parameter for each group data. the parameter I want to > >> coerce is for one group. I don't know how to do it. Any suggestion? > >> > >> Thanks! > >> > >> On 19 September 2015 at 13:33, Jeff Newmiller > > >> wrote: > >>> > >>> Why not rewrite the function so that value is not a parameter? > >>> > >>> > --- > >>> Jeff NewmillerThe . . Go > >>> Live... > >>> DCN:Basics: ##.#. ##.#. Live > >>> Go... > >>>Live: OO#.. Dead: OO#.. > Playing > >>> Research Engineer (Solar/BatteriesO.O#. #.O#. with > >>> /Software/Embedded Controllers) .OO#. .OO#. > >>> rocks...1k > >>> > >>> > --- > >>> Sent from my phone. Please excuse my brevity. > >>> > >>> On September 18, 2015 9:54:54 PM PDT, Jianling Fan > >>> wrote: > > Hello, everyone, > > I am using a nls regression with 6 groups data. I am trying to coerce > a parameter to 1 by using a upper and lower statement. but I always > get an error like below: > > Error in ifelse(internalPars < upper, 1, -1) : > (list) object cannot be coerced to type 'double' > > does anyone know how to fix it? 
> > thanks in advance! > > My code is below: > > > > > dproot > > depth den ref > 1 20 0.573 1 > 2 40 0.780 1 > 3 60 0.947 1 > 4 80 0.990 1 > 5100 1.000 1 > 6 10 0.600 2 > 7 20 0.820 2 > 8 30 0.930 2 > 9 40 1.000 2 > 1020 0.480 3 > 1140 0.734 3 > 1260 0.961 3 > 1380 0.998 3 > 14 100 1.000 3 > 1520 3.2083491 4 > 1640 4.9683383 4 > 1760 6.2381133 4 > 1880 6.5322348 4 > 19 100 6.5780660 4 > 20 120 6.6032064 4 > 2120 0.614 5 > 2240
Re: [R] How to coerce a parameter in nls?
Just write out the 20 terms. On Mon, Sep 21, 2015 at 10:26 PM, Jianling Fan wrote: > Hello, Gabor, > > Thanks again for your suggestion. And now I am trying to improve the > code by adding a function to replace the express "Rm1 * ref.1 + Rm2 * > ref.2 + Rm3 * ref.3 + Rm4 * ref.4 + Rm5 * ref.5 + Rm6 * ref.6" because > I have some other dataset need to fitted to the same model but with > more groups (>20). > > I tried to add the function as: > > denfun<-function(i){ >for(i in 1:6){ > Rm<-sum(Rm[i]*ref.i) > return(Rm)} > } > > but I got another error when I incorporate this function into my > regression: > > >fitdp1<-nlxb(den ~ denfun(6)/(1+(depth/d50)^c), >data = dproot2, > start = c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, > Rm5=1.01, Rm6=1, d50=20, c=-1), > masked = "Rm6") > > Error in deriv.default(parse(text = resexp), names(start)) : > Function 'denfun' is not in the derivatives table > > I think there must be something wrong with my function. I tried some > times but am not sure how to improve it because I am quite new to R. > > Could anyone please give me some suggestion. > > Thanks a lot! > > > Jianling > > > On 22 September 2015 at 00:43, Gabor Grothendieck > wrote: > > Express the formula in terms of simple operations like this: > > > > # add 0/1 columns ref.1, ref.2, ..., ref.6 > > dproot2 <- do.call(data.frame, transform(dproot, ref = outer(dproot$ref, > > seq(6), "==") + 0)) > > > > # now express the formula in terms of the new columns > > library(nlmrt) > > fitdp1<-nlxb(den ~ (Rm1 * ref.1 + Rm2 * ref.2 + Rm3 * ref.3 + Rm4 * > ref.4 + > > Rm5 * ref.5 + Rm6 * ref.6)/(1+(depth/d50)^c), > > data = dproot2, > > start = c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, Rm5=1.01, > Rm6=1, > > d50=20, c=-1), > > masked = "Rm6") > > > > where we used this input: > > > > Lines <- " depth den ref > > 1 20 0.573 1 > > 2 40 0.780 1 > > 3 60 0.947 1 > > 4 80 0.990 1 > > 5100 1.000 1 > > 6 10 0.600 2 > > 7 20 0.820 2 > > 8 30 0.930 2 > > 9 40 1.000 2 > > 1020 0.480 3 > > 1140 0.734 3 > > 1260 0.961 3 > > 1380 0.998 3 > > 14 100 1.000 3 > > 1520 3.2083491 4 > > 1640 4.9683383 4 > > 1760 6.2381133 4 > > 1880 6.5322348 4 > > 19 100 6.5780660 4 > > 20 120 6.6032064 4 > > 2120 0.614 5 > > 2240 0.827 5 > > 2360 0.950 5 > > 2480 0.995 5 > > 25 100 1.000 5 > > 2620 0.4345774 6 > > 2740 0.6654726 6 > > 2860 0.8480684 6 > > 2980 0.9268951 6 > > 30 100 0.9723207 6 > > 31 120 0.9939966 6 > > 32 140 0.9992400 6" > > > > dproot <- read.table(text = Lines, header = TRUE) > > > > > > > > On Mon, Sep 21, 2015 at 12:22 PM, Jianling Fan > > wrote: > >> > >> Thanks Prof. Nash, > >> > >> Sorry for late reply. I am learning and trying to use your nlmrt > >> package since I got your email. It works good to mask a parameter in > >> regression but seems does work for my equation. I think the problem is > >> that the parameter I want to mask is a group-specific parameter and I > >> have a "[]" syntax in my equation. However, I don't have your 2014 > >> book on hand and couldn't find it in our library. So I am wondering if > >> nlxb works for group data? > >> Thanks a lot! > >> > >> following is my code and I got a error form it. 
> >> > >> > fitdp1<-nlxb(den~Rm[ref]/(1+(depth/d50)^c),data=dproot, > >> + start =c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, > >> Rm5=1.01, Rm6=1, d50=20, c=-1), > >> + masked=c("Rm6")) > >> > >> Error in deriv.default(parse(text = resexp), names(start)) : > >> Function '`[`' is not in the derivatives table > >> > >> > >> Best regards, > >> > >> Jianling > >> > >> > >> On 20 September 2015 at 12:56, ProfJCNash wrote: > >> > I posted a suggestion to use nlmrt package (function nlxb to be > >> > precise), &
Re: [R] How to coerce a parameter in nls?
Or if you really can't bear to write out 20 terms have R do it for you: # number of terms is the number of unique values in ref column nterms <- length(unique(dproot$ref)) dproot2 <- do.call(data.frame, transform(dproot, ref = outer(dproot$ref, seq(nterms), "==") + 0)) # construct the formula as a string terms <- paste( sprintf("Rm%d*ref.%d", 1:nterms, 1:nterms), collapse = "+") fo <- sprintf("den ~ (%s)/(1+(depth/d50)^c)", terms) library(nlmrt) fm <- nlxb(fo, data = dproot2, masked = "Rm6", start = c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, Rm5=1.01, Rm6=1, d50=20, c=-1)) On Tue, Sep 22, 2015 at 7:04 AM, Gabor Grothendieck wrote: > Just write out the 20 terms. > > On Mon, Sep 21, 2015 at 10:26 PM, Jianling Fan > wrote: > >> Hello, Gabor, >> >> Thanks again for your suggestion. And now I am trying to improve the >> code by adding a function to replace the express "Rm1 * ref.1 + Rm2 * >> ref.2 + Rm3 * ref.3 + Rm4 * ref.4 + Rm5 * ref.5 + Rm6 * ref.6" because >> I have some other dataset need to fitted to the same model but with >> more groups (>20). >> >> I tried to add the function as: >> >> denfun<-function(i){ >>for(i in 1:6){ >> Rm<-sum(Rm[i]*ref.i) >> return(Rm)} >> } >> >> but I got another error when I incorporate this function into my >> regression: >> >> >fitdp1<-nlxb(den ~ denfun(6)/(1+(depth/d50)^c), >>data = dproot2, >> start = c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, >> Rm5=1.01, Rm6=1, d50=20, c=-1), >> masked = "Rm6") >> >> Error in deriv.default(parse(text = resexp), names(start)) : >> Function 'denfun' is not in the derivatives table >> >> I think there must be something wrong with my function. I tried some >> times but am not sure how to improve it because I am quite new to R. >> >> Could anyone please give me some suggestion. >> >> Thanks a lot! >> >> >> Jianling >> >> >> On 22 September 2015 at 00:43, Gabor Grothendieck >> wrote: >> > Express the formula in terms of simple operations like this: >> > >> > # add 0/1 columns ref.1, ref.2, ..., ref.6 >> > dproot2 <- do.call(data.frame, transform(dproot, ref = outer(dproot$ref, >> > seq(6), "==") + 0)) >> > >> > # now express the formula in terms of the new columns >> > library(nlmrt) >> > fitdp1<-nlxb(den ~ (Rm1 * ref.1 + Rm2 * ref.2 + Rm3 * ref.3 + Rm4 * >> ref.4 + >> > Rm5 * ref.5 + Rm6 * ref.6)/(1+(depth/d50)^c), >> > data = dproot2, >> > start = c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, Rm5=1.01, >> Rm6=1, >> > d50=20, c=-1), >> > masked = "Rm6") >> > >> > where we used this input: >> > >> > Lines <- " depth den ref >> > 1 20 0.573 1 >> > 2 40 0.780 1 >> > 3 60 0.947 1 >> > 4 80 0.990 1 >> > 5100 1.000 1 >> > 6 10 0.600 2 >> > 7 20 0.820 2 >> > 8 30 0.930 2 >> > 9 40 1.000 2 >> > 1020 0.480 3 >> > 1140 0.734 3 >> > 1260 0.961 3 >> > 1380 0.998 3 >> > 14 100 1.000 3 >> > 1520 3.2083491 4 >> > 1640 4.9683383 4 >> > 1760 6.2381133 4 >> > 1880 6.5322348 4 >> > 19 100 6.5780660 4 >> > 20 120 6.6032064 4 >> > 2120 0.614 5 >> > 2240 0.827 5 >> > 2360 0.950 5 >> > 2480 0.995 5 >> > 25 100 1.000 5 >> > 2620 0.4345774 6 >> > 2740 0.6654726 6 >> > 2860 0.8480684 6 >> > 2980 0.9268951 6 >> > 30 100 0.9723207 6 >> > 31 120 0.9939966 6 >> > 32 140 0.9992400 6" >> > >> > dproot <- read.table(text = Lines, header = TRUE) >> > >> > >> > >> > On Mon, Sep 21, 2015 at 12:22 PM, Jianling Fan >> > wrote: >> >> >> >> Thanks Prof. Nash, >> >> >> >> Sorry for late reply. I am learning and trying to use your nlmrt >> >> package since I got your email. It works good to mask a parameter in >> >> regression but seems does work for my equation. 
I think the problem
Re: [R] How to coerce a parameter in nls?
You may have to do without masking and switch back to nls. dproot2 and fo are from prior post. # to mask Rm6 omit it from start and set it explicitly st <- c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, Rm5=1.01, d50=20, c=-1) Rm6 <- 1 fm.nls <- nls(fo, dproot2, start = st) AIC(fm.nls) summary(fm.nls) On Tue, Sep 22, 2015 at 12:46 PM, Jianling Fan wrote: > Hello Prof. Nash, > > My regression works good now. But I found another problem when I using > nlxb. In the output, the SE, t-stat, and p-value are not available. > Furthermore, I can't extract AIC from the output. The output looks > like below: > > Do you have any suggestion for this? > > Thanks a lot! > > Regards, > > nlmrt class object: x > residual sumsquares = 0.29371 on 33 observations > after 9Jacobian and 10 function evaluations > namecoeff SE tstat pval > gradientJSingval > Rm1 1.1162NA NA NA > -3.059e-13 2.745 > Rm2 1.56072NA NA NA > 1.417e-131.76 > Rm3 1.09775NA NA NA > -3.179e-13 1.748 > Rm4 7.18377NA NA NA > -2.941e-12 1.748 > Rm5 1.13562NA NA NA > -3.305e-13 1.076 > Rm61 M NA NA NA > 0 0.603 > d50 22.4803NA NA NA > 4.975e-13 0.117 > c -1.64075NA NA NA > 4.12e-12 1.908e-17 > > > > On 21 September 2015 at 13:38, ProfJCNash wrote: > > I've not used it for group data, and suspect that the code to generate > > derivatives cannot cope with the bracket syntax. If you can rewrite the > > equation without the brackets, you could get the derivatives and solve > that > > way. This will probably mean having a "translation" routine to glue > things > > together. > > > > JN > > > > > > On 15-09-21 12:22 PM, Jianling Fan wrote: > >> > >> Thanks Prof. Nash, > >> > >> Sorry for late reply. I am learning and trying to use your nlmrt > >> package since I got your email. It works good to mask a parameter in > >> regression but seems does work for my equation. I think the problem is > >> that the parameter I want to mask is a group-specific parameter and I > >> have a "[]" syntax in my equation. However, I don't have your 2014 > >> book on hand and couldn't find it in our library. So I am wondering if > >> nlxb works for group data? > >> Thanks a lot! > >> > >> following is my code and I got a error form it. > >> > >>> fitdp1<-nlxb(den~Rm[ref]/(1+(depth/d50)^c),data=dproot, > >> > >> + start =c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65, > >> Rm5=1.01, Rm6=1, d50=20, c=-1), > >> + masked=c("Rm6")) > >> > >> Error in deriv.default(parse(text = resexp), names(start)) : > >>Function '`[`' is not in the derivatives table > >> > >> > >> Best regards, > >> > >> Jianling > >> > >> > >> On 20 September 2015 at 12:56, ProfJCNash wrote: > >>> > >>> I posted a suggestion to use nlmrt package (function nlxb to be > precise), > >>> which has masked (fixed) parameters. Examples in my 2014 book on > >>> Nonlinear > >>> parameter optimization with R tools. However, I'm travelling just now, > or > >>> would consider giving this a try. > >>> > >>> JN > >>> > >>> > >>> On 15-09-20 01:19 PM, Jianling Fan wrote: > > > no, I am doing a regression with 6 group data with 2 shared parameters > and 1 different parameter for each group data. the parameter I want to > coerce is for one group. I don't know how to do it. Any suggestion? > > Thanks! > > On 19 September 2015 at 13:33, Jeff Newmiller < > jdnew...@dcn.davis.ca.us> > wrote: > > > > > > Why not rewrite the function so that value is not a parameter? > > > > > > > --- > > Jeff NewmillerThe . . Go > > Live... > > DCN:Basics: ##.#. ##.#. > Live > > Go... > > Live: OO#.. Dead: OO#.. 
> > Playing > > Research Engineer (Solar/BatteriesO.O#. #.O#. with > > /Software/Embedded Controllers) .OO#. .OO#. > > rocks...1k > > > > > > > --- > > Sent from my phone. Please excuse my brevity. > > > > On September 18, 2015 9:54:54 PM PDT, Jianling Fan > > wrote: > >> > >> > >> Hello, everyone, > >> > >> I am using a nls regression with 6 groups data. I am trying to > coerce > >> a parameter to 1 by using a upper and lower statement. but I always > >> get an error like below: > >> > >> Error in ifelse(internalPars < upper, 1, -1) : > >>
Re: [R] flatten a list
> do.call(c, lapply(temp, function(x) if (is.list(x)) x else list(x))) [[1]] [1] 1 2 3 [[2]] [1] "a" "b" "c" $duh [1] 5 6 7 8 $zed [1] 15 16 17 On Tue, Sep 29, 2015 at 11:00 AM, Therneau, Terry M., Ph.D. < thern...@mayo.edu> wrote: > I'd like to flatten a list from 2 levels to 1 level. This has to be easy, > but is currently opaque to me. > > temp <- list(1:3, list(letters[1:3], duh= 5:8), zed=15:17) > > Desired result would be a 4 element list. > [[1]] 1:3 > [[2]] "a", "b", "c" > [[duh]] 5:8 > [[zed]] 15:17 > > (Preservation of the names is not important) > > Terry T > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Most appropriate function for the following optimisation issue?
Yes, it's the projection of S onto the subspace orthogonal to B which is: X <- S - B%*%B / sum(B*B) and is also implied by Duncan's solution since that is what the residuals of linear regression are. On Tue, Oct 20, 2015 at 1:00 PM, Paul Smith wrote: > On Tue, Oct 20, 2015 at 11:58 AM, Andy Yuan wrote: > > > > Please could you help me to select the most appropriate/fastest function > to use for the following constraint optimisation issue? > > > > Objective function: > > > > Min: Sum( (X[i] - S[i] )^2) > > > > Subject to constraint : > > > > Sum (B[i] x X[i]) =0 > > > > where i=1…n and S[i] and B[i] are real numbers > > > > Need to solve for X > > > > Example: > > > > Assume n=3 > > > > S <- c(-0.5, 7.8, 2.3) > > B <- c(0.42, 1.12, 0.78) > > > > Many thanks > > I believe you can solve *analytically* your optimization problem, with > the Lagrange multipliers method, Andy. By doing so, you can derive > clean and closed-form expression for the optimal solution. > > Paul > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Most appropriate function for the following optimisation issue?
Correction. Yes, it's the projection of S onto the subspace orthogonal to B which is: X <- S - (B%o%B) %*% S/ sum(B*B) and is also implied by Duncan's solution since that is what the residuals of linear regression are. On Tue, Oct 20, 2015 at 1:11 PM, Gabor Grothendieck wrote: > Yes, it's the projection of S onto the subspace orthogonal to B which is: > > X <- S - B%*%B / sum(B*B) > > and is also implied by Duncan's solution since that is what the residuals > of linear regression are. > > On Tue, Oct 20, 2015 at 1:00 PM, Paul Smith wrote: > >> On Tue, Oct 20, 2015 at 11:58 AM, Andy Yuan wrote: >> > >> > Please could you help me to select the most appropriate/fastest >> function to use for the following constraint optimisation issue? >> > >> > Objective function: >> > >> > Min: Sum( (X[i] - S[i] )^2) >> > >> > Subject to constraint : >> > >> > Sum (B[i] x X[i]) =0 >> > >> > where i=1…n and S[i] and B[i] are real numbers >> > >> > Need to solve for X >> > >> > Example: >> > >> > Assume n=3 >> > >> > S <- c(-0.5, 7.8, 2.3) >> > B <- c(0.42, 1.12, 0.78) >> > >> > Many thanks >> >> I believe you can solve *analytically* your optimization problem, with >> the Lagrange multipliers method, Andy. By doing so, you can derive >> clean and closed-form expression for the optimal solution. >> >> Paul >> >> __ >> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >> https://stat.ethz.ch/mailman/listinfo/r-help >> PLEASE do read the posting guide >> http://www.R-project.org/posting-guide.html >> and provide commented, minimal, self-contained, reproducible code. >> > > > > -- > Statistics & Software Consulting > GKX Group, GKX Associates Inc. > tel: 1-877-GKX-GROUP > email: ggrothendieck at gmail.com > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
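A quick numerical check with the data from the question, confirming both the zero constraint and the regression connection (a no-intercept fit of S on B):
S <- c(-0.5, 7.8, 2.3)
B <- c(0.42, 1.12, 0.78)
X <- S - (B %o% B) %*% S / sum(B * B)
sum(B * X)                                     # essentially zero, so the constraint holds
all.equal(c(X), unname(resid(lm(S ~ B - 1))))  # equals the residuals of the no-intercept fit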
Re: [R] Linear regression with a rounded response variable
This could be modeled directly using Bayesian techniques. Consider the Bayesian version of the following model where we only observe y and X. y0 is not observed. y0 <- X b + error y <- round(y0) The following code is based on modifying the code in the README of the CRAN rcppbugs R package. library(rcppbugs) set.seed(123) # set up the test data - y and X are observed but not y0 NR <- 1e2L NC <- 2L X <- cbind(1, rnorm(10)) y0 <- X %*% 1:2 y <- round(y0) # for comparison run a normal linear model w/ lm.fit using X and y lm.res <- lm.fit(X,y) print(coef(lm.res)) ##x1x2 ## 0.9569366 1.9170808 # RCppBugs Model b <- mcmc.normal(rnorm(NC),mu=0,tau=0.0001) tau.y <- mcmc.gamma(sd(as.vector(y)),alpha=0.1,beta=0.1) y.hat <- deterministic(function(X,b) { round(X %*% b) }, X, b) y.lik <- mcmc.normal(y,mu=y.hat,tau=tau.y,observed=TRUE) m <- create.model(b, tau.y, y.hat, y.lik) # run the Bayesian model based on y and X cat("running model...\n") runtime <- system.time(ans <- run.model(m, iterations=1e5L, burn=1e4L, adapt=1e3L, thin=10L)) print(apply(ans[["b"]],2,mean)) ## [1] 0.9882485 2.0009989 On Wed, Oct 21, 2015 at 10:53 AM, Ravi Varadhan wrote: > Hi, > I am dealing with a regression problem where the response variable, time > (second) to walk 15 ft, is rounded to the nearest integer. I do not care > for the regression coefficients per se, but my main interest is in getting > the prediction equation for walking speed, given the predictors (age, > height, sex, etc.), where the predictions will be real numbers, and not > integers. The hope is that these predictions should provide unbiased > estimates of the "unrounded" walking speed. These sounds like a measurement > error problem, where the measurement error is due to rounding and hence > would be uniformly distributed (-0.5, 0.5). > > Are there any canonical approaches for handling this type of a problem? > What is wrong with just doing the standard linear regression? > > I googled and saw that this question was asked by someone else in a > stackexchange post, but it was unanswered. Any suggestions? > > Thank you, > Ravi > > Ravi Varadhan, Ph.D. (Biostatistics), Ph.D. (Environmental Engg) > Associate Professor, Department of Oncology > Division of Biostatistics & Bionformatics > Sidney Kimmel Comprehensive Cancer Center > Johns Hopkins University > 550 N. Broadway, Suite -E > Baltimore, MD 21205 > 410-502-2619 > > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com [[alternative HTML version deleted]] __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Optim() and Instability
Tyipcally the parameters being optimized should be the same order of magnitude or else you can expect numerical problems. That is what the fnscale control parameter is for. On Sat, Nov 14, 2015 at 10:15 AM, Lorenzo Isella wrote: > Dear All, > I am using optim() for a relatively simple task: a linear model where > instead of minimizing the sum of the squared errors, I minimize the sum > of the squared relative errors. > However, I notice that the default algorithm is very sensitive to the > choice of the initial fit parameters, whereas I get much more stable > (and therefore better?) results with the BFGS algorithm. > I would like to have some feedback on this (perhaps I made a mistake > somewhere). > I provide a small self-contained example. > You can download a tiny data set from the link > > https://www.dropbox.com/s/tmbj3os4ev3d4y8/data-instability.csv?dl=0 > > whereas I paste the script I am using at the end of the email. > Any feedback is really appreciated. > Many thanks > > Lorenzo > > > > min.perc_error <- function(data, par) { > with(data, sum(((par[1]*x1 + par[2]*x2+par[3]*x3 - > y)/y)^2)) >} > > par_ini1 <- c(.3,.1, 1e-3) > > par_ini2 <- c(1,1, 1) > > > data <- read.csv("data-instability.csv") > > mm_def1 <-optim(par = par_ini1 >, min.perc_error, data = data) > > mm_bfgs1 <-optim(par = par_ini1 >, min.perc_error, data = data, method="BFGS") > > print("fit parameters with the default algorithms and the first seed > ") > print(mm_def1$par) > > print("fit parameters with the BFGS algorithms and the first seed ") > print(mm_bfgs1$par) > > > > mm_def2 <-optim(par = par_ini2 >, min.perc_error, data = data) > > mm_bfgs2 <-optim(par = par_ini2 >, min.perc_error, data = data, method="BFGS") > > > > > print("fit parameters with the default algorithms and the second seed > ") > print(mm_def2$par) > > print("fit parameters with the BFGS algorithms and the second seed ") > print(mm_bfgs2$par) > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Optim() and Instability
I meant the parscale parameter. On Sat, Nov 14, 2015 at 10:30 AM, Gabor Grothendieck wrote: > Tyipcally the parameters being optimized should be the same order of > magnitude or else you can expect numerical problems. That is what the > fnscale control parameter is for. > > On Sat, Nov 14, 2015 at 10:15 AM, Lorenzo Isella > wrote: >> Dear All, >> I am using optim() for a relatively simple task: a linear model where >> instead of minimizing the sum of the squared errors, I minimize the sum >> of the squared relative errors. >> However, I notice that the default algorithm is very sensitive to the >> choice of the initial fit parameters, whereas I get much more stable >> (and therefore better?) results with the BFGS algorithm. >> I would like to have some feedback on this (perhaps I made a mistake >> somewhere). >> I provide a small self-contained example. >> You can download a tiny data set from the link >> >> https://www.dropbox.com/s/tmbj3os4ev3d4y8/data-instability.csv?dl=0 >> >> whereas I paste the script I am using at the end of the email. >> Any feedback is really appreciated. >> Many thanks >> >> Lorenzo >> >> >> >> min.perc_error <- function(data, par) { >> with(data, sum(((par[1]*x1 + par[2]*x2+par[3]*x3 - >> y)/y)^2)) >>} >> >> par_ini1 <- c(.3,.1, 1e-3) >> >> par_ini2 <- c(1,1, 1) >> >> >> data <- read.csv("data-instability.csv") >> >> mm_def1 <-optim(par = par_ini1 >>, min.perc_error, data = data) >> >> mm_bfgs1 <-optim(par = par_ini1 >>, min.perc_error, data = data, method="BFGS") >> >> print("fit parameters with the default algorithms and the first seed >> ") >> print(mm_def1$par) >> >> print("fit parameters with the BFGS algorithms and the first seed ") >> print(mm_bfgs1$par) >> >> >> >> mm_def2 <-optim(par = par_ini2 >>, min.perc_error, data = data) >> >> mm_bfgs2 <-optim(par = par_ini2 >>, min.perc_error, data = data, method="BFGS") >> >> >> >> >> print("fit parameters with the default algorithms and the second seed >> ") >> print(mm_def2$par) >> >> print("fit parameters with the BFGS algorithms and the second seed ") >> print(mm_bfgs2$par) >> >> __ >> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >> https://stat.ethz.ch/mailman/listinfo/r-help >> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html >> and provide commented, minimal, self-contained, reproducible code. > > > > -- > Statistics & Software Consulting > GKX Group, GKX Associates Inc. > tel: 1-877-GKX-GROUP > email: ggrothendieck at gmail.com -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Optim() and Instability
Since some questioned the scaling idea, here are runs first with scaling and then without scaling. Note how much better the solution is in the first run (see arrows). It is also evident from the data > head(data, 3) y x1 x2 x3 1 0.660 20 7.0 1680 2 0.165 5 1.7 350 3 0.660 20 7.0 1680 > # run 1 - scaling > str(optim(par = c(1,1, 1), min.perc_error, data = data, + control = list(parscale = c(1, 1, 0.0001 List of 5 $ par: num [1:3] 0.030232 0.024411 -0.000113 $ value : num 0.653 <= $ counts : Named int [1:2] 180 NA ..- attr(*, "names")= chr [1:2] "function" "gradient" $ convergence: int 0 $ message: NULL > # run 2 - no scaling > str(optim(par = c(1,1, 1), min.perc_error, data = data)) List of 5 $ par: num [1:3] 0.6305 -0.1247 -0.0032 $ value : num 473 <= $ counts : Named int [1:2] 182 NA ..- attr(*, "names")= chr [1:2] "function" "gradient" $ convergence: int 0 $ message: NULL On Sat, Nov 14, 2015 at 10:32 AM, Gabor Grothendieck wrote: > I meant the parscale parameter. > > On Sat, Nov 14, 2015 at 10:30 AM, Gabor Grothendieck > wrote: >> Tyipcally the parameters being optimized should be the same order of >> magnitude or else you can expect numerical problems. That is what the >> fnscale control parameter is for. >> >> On Sat, Nov 14, 2015 at 10:15 AM, Lorenzo Isella >> wrote: >>> Dear All, >>> I am using optim() for a relatively simple task: a linear model where >>> instead of minimizing the sum of the squared errors, I minimize the sum >>> of the squared relative errors. >>> However, I notice that the default algorithm is very sensitive to the >>> choice of the initial fit parameters, whereas I get much more stable >>> (and therefore better?) results with the BFGS algorithm. >>> I would like to have some feedback on this (perhaps I made a mistake >>> somewhere). >>> I provide a small self-contained example. >>> You can download a tiny data set from the link >>> >>> https://www.dropbox.com/s/tmbj3os4ev3d4y8/data-instability.csv?dl=0 >>> >>> whereas I paste the script I am using at the end of the email. >>> Any feedback is really appreciated. >>> Many thanks >>> >>> Lorenzo >>> >>> >>> >>> min.perc_error <- function(data, par) { >>> with(data, sum(((par[1]*x1 + par[2]*x2+par[3]*x3 - >>> y)/y)^2)) >>>} >>> >>> par_ini1 <- c(.3,.1, 1e-3) >>> >>> par_ini2 <- c(1,1, 1) >>> >>> >>> data <- read.csv("data-instability.csv") >>> >>> mm_def1 <-optim(par = par_ini1 >>>, min.perc_error, data = data) >>> >>> mm_bfgs1 <-optim(par = par_ini1 >>>, min.perc_error, data = data, method="BFGS") >>> >>> print("fit parameters with the default algorithms and the first seed >>> ") >>> print(mm_def1$par) >>> >>> print("fit parameters with the BFGS algorithms and the first seed ") >>> print(mm_bfgs1$par) >>> >>> >>> >>> mm_def2 <-optim(par = par_ini2 >>>, min.perc_error, data = data) >>> >>> mm_bfgs2 <-optim(par = par_ini2 >>>, min.perc_error, data = data, method="BFGS") >>> >>> >>> >>> >>> print("fit parameters with the default algorithms and the second seed >>> ") >>> print(mm_def2$par) >>> >>> print("fit parameters with the BFGS algorithms and the second seed ") >>> print(mm_bfgs2$par) >>> >>> __ >>> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >>> https://stat.ethz.ch/mailman/listinfo/r-help >>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html >>> and provide commented, minimal, self-contained, reproducible code. >> >> >> >> -- >> Statistics & Software Consulting >> GKX Group, GKX Associates Inc. 
>> tel: 1-877-GKX-GROUP >> email: ggrothendieck at gmail.com > > > > -- > Statistics & Software Consulting > GKX Group, GKX Associates Inc. > tel: 1-877-GKX-GROUP > email: ggrothendieck at gmail.com -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] fancy linear model and grouping
Try the mclust package: library(mclust) temp.na <- na.omit(temp) fm <- Mclust(temp.na) g <- fm$classification plot(temp.na, pch = g, col = g) On Tue, Feb 2, 2016 at 6:35 AM, PIKAL Petr wrote: > Dear all > > I have data like this > >> dput(temp) > > temp <- structure(list(X1 = c(93, 82, NA, 93, 93, 79, 79, 93, 93, 85, > 82, 93, 87, 93, 92, NA, 87, 93, 93, 93, 74, 77, 87, 93, 82, 87, > 75, 82, 93, 92, 68, 93, 93, 73, NA, 85, 81, 79, 75, 87, 93, NA, > 87, 87, 85, 92, 87, 92, 93, 87, 87, NA, 69, 87, 93, 87, 93, 87, > 82, 79, 87, 93, 87, 80, 87, 87, 87, 92, 93, 69, 76, 87, 82, 93, > 82, NA, 54, 87, 77, 73, 93, 82, 73, 93, 92, 82, 77, 93, 87, 75, > 87, 87, 87, 60, 92, 87, 87, NA, 77, 78), X2 = c(224, 624, NA, > 224, 224, 642, 642, 224, 224, 599, 622, 224, 239, 224, 225, NA, > 239, 224, 224, 224, 688, 657, 239, 224, 624, 239, 672, 254, 224, > 225, 499, 224, 224, 692, NA, 599, 627, 642, 677, 239, 224, NA, > 239, 239, NA, 375, 239, 375, 224, 239, 239, NA, 299, 239, 224, > 239, 224, 239, 621, 642, 239, 224, 239, 638, 239, 239, 239, 225, > 224, 299, 672, 239, 618, 224, 620, NA, 626, 239, 657, 693, 224, > 624, 693, 224, 225, 621, 657, 224, 239, 673, 239, 239, 239, 569, > 224, 239, 239, NA, 657, 651)), .Names = c("X1", "X2"), row.names = c(NA, > -100L), class = "data.frame") >> > > You can see there are 3 distinct linear relationships of those 2 variables. > > plot(1/temp[,1], temp[,2]) > > Is there any simple way how to evaluate such data without grouping variable? > I know that in case I have proper grouping variable I can evaluate it with > lme and get intercepts and/or slopes. > > My question is: > > Does anybody know about a way/package/function which can give me appropriate > grouping of such data or which can give me separate slope/intercept for each > set. > > I hope I expressed my problem clearly. > > Best regards > Petr > > > > Tento e-mail a jakékoliv k němu připojené dokumenty jsou důvěrné a jsou > určeny pouze jeho adresátům. > Jestliže jste obdržel(a) tento e-mail omylem, informujte laskavě neprodleně > jeho odesílatele. Obsah tohoto emailu i s přílohami a jeho kopie vymažte ze > svého systému. > Nejste-li zamýšleným adresátem tohoto emailu, nejste oprávněni tento email > jakkoliv užívat, rozšiřovat, kopírovat či zveřejňovat. > Odesílatel e-mailu neodpovídá za eventuální škodu způsobenou modifikacemi či > zpožděním přenosu e-mailu. > > V případě, že je tento e-mail součástí obchodního jednání: > - vyhrazuje si odesílatel právo ukončit kdykoliv jednání o uzavření smlouvy, > a to z jakéhokoliv důvodu i bez uvedení důvodu. > - a obsahuje-li nabídku, je adresát oprávněn nabídku bezodkladně přijmout; > Odesílatel tohoto e-mailu (nabídky) vylučuje přijetí nabídky ze strany > příjemce s dodatkem či odchylkou. > - trvá odesílatel na tom, že příslušná smlouva je uzavřena teprve výslovným > dosažením shody na všech jejích náležitostech. > - odesílatel tohoto emailu informuje, že není oprávněn uzavírat za společnost > žádné smlouvy s výjimkou případů, kdy k tomu byl písemně zmocněn nebo písemně > pověřen a takové pověření nebo plná moc byly adresátovi tohoto emailu > případně osobě, kterou adresát zastupuje, předloženy nebo jejich existence je > adresátovi či osobě jím zastoupené známá. > > This e-mail and any documents attached to it may be confidential and are > intended only for its intended recipients. > If you received this e-mail by mistake, please immediately inform its sender. > Delete the contents of this e-mail with all attachments and its copies from > your system. 
> If you are not the intended recipient of this e-mail, you are not authorized > to use, disseminate, copy or disclose this e-mail in any manner. > The sender of this e-mail shall not be liable for any possible damage caused > by modifications of the e-mail or by delay with transfer of the email. > > In case that this e-mail forms part of business dealings: > - the sender reserves the right to end negotiations about entering into a > contract in any time, for any reason, and without stating any reasoning. > - if the e-mail contains an offer, the recipient is entitled to immediately > accept such offer; The sender of this e-mail (offer) excludes any acceptance > of the offer on the part of the recipient containing any amendment or > variation. > - the sender insists on that the respective contract is concluded only upon > an express mutual agreement on all its aspects. > - the sender of this e-mail informs that he/she is not authorized to enter > into any contracts on behalf of the company except for cases in which he/she > is expressly authorized to do so in writing, and such authorization or power > of attorney is submitted to the recipient or the person represented by the > recipient, or the existence of such authorization is known to the recipient > of the person represented by the recipient. > __ > R-help@r-project.org mailing l
Re: [R] fancy linear model and grouping
Try the EEV model with 3 clusters where temp is the large dataset: Mclust(temp, 3, modelNames = "EEV") On Tue, Feb 2, 2016 at 8:13 AM, PIKAL Petr wrote: > Hi > > Thanks, it work for my example, which is actually a subset of a bigger data > (4000 rows) with the same characteristics. For the whole problem it does not > give correct clustering. I tried to set G to 3 but it did not help either. > > I attached the whole dataset (dput) that you can use, however after quick > tour through Mclust it seems to me that it is designed for slightly different > problem. > > Here is the result with whole data. > > fm <- Mclust(temp) > g <- fm$classification > plot(1/temp[,1], temp[,2], pch = g, col = g) > > I will go through the docs more thoroughly, to be 100% sure I did not miss > anything. > > Cheers > Petr > > >> -Original Message- >> From: Gabor Grothendieck [mailto:ggrothendi...@gmail.com] >> Sent: Tuesday, February 02, 2016 1:20 PM >> To: PIKAL Petr >> Cc: R Help R >> Subject: Re: [R] fancy linear model and grouping >> >> Try the mclust package: >> >> library(mclust) >> temp.na <- na.omit(temp) >> fm <- Mclust(temp.na) >> g <- fm$classification >> plot(temp.na, pch = g, col = g) >> >> >> >> On Tue, Feb 2, 2016 at 6:35 AM, PIKAL Petr >> wrote: >> > Dear all >> > >> > I have data like this >> > >> >> dput(temp) >> > >> > temp <- structure(list(X1 = c(93, 82, NA, 93, 93, 79, 79, 93, 93, 85, >> > 82, 93, 87, 93, 92, NA, 87, 93, 93, 93, 74, 77, 87, 93, 82, 87, 75, >> > 82, 93, 92, 68, 93, 93, 73, NA, 85, 81, 79, 75, 87, 93, NA, 87, 87, >> > 85, 92, 87, 92, 93, 87, 87, NA, 69, 87, 93, 87, 93, 87, 82, 79, 87, >> > 93, 87, 80, 87, 87, 87, 92, 93, 69, 76, 87, 82, 93, 82, NA, 54, 87, >> > 77, 73, 93, 82, 73, 93, 92, 82, 77, 93, 87, 75, 87, 87, 87, 60, 92, >> > 87, 87, NA, 77, 78), X2 = c(224, 624, NA, 224, 224, 642, 642, 224, >> > 224, 599, 622, 224, 239, 224, 225, NA, 239, 224, 224, 224, 688, 657, >> > 239, 224, 624, 239, 672, 254, 224, 225, 499, 224, 224, 692, NA, 599, >> > 627, 642, 677, 239, 224, NA, 239, 239, NA, 375, 239, 375, 224, 239, >> > 239, NA, 299, 239, 224, 239, 224, 239, 621, 642, 239, 224, 239, 638, >> > 239, 239, 239, 225, 224, 299, 672, 239, 618, 224, 620, NA, 626, 239, >> > 657, 693, 224, 624, 693, 224, 225, 621, 657, 224, 239, 673, 239, 239, >> > 239, 569, 224, 239, 239, NA, 657, 651)), .Names = c("X1", "X2"), >> > row.names = c(NA, -100L), class = "data.frame") >> >> >> > >> > You can see there are 3 distinct linear relationships of those 2 >> variables. >> > >> > plot(1/temp[,1], temp[,2]) >> > >> > Is there any simple way how to evaluate such data without grouping >> variable? I know that in case I have proper grouping variable I can >> evaluate it with lme and get intercepts and/or slopes. >> > >> > My question is: >> > >> > Does anybody know about a way/package/function which can give me >> appropriate grouping of such data or which can give me separate >> slope/intercept for each set. >> > >> > I hope I expressed my problem clearly. >> > >> > Best regards >> > Petr >> > >> > > > > Tento e-mail a jakékoliv k němu připojené dokumenty jsou důvěrné a jsou > určeny pouze jeho adresátům. > Jestliže jste obdržel(a) tento e-mail omylem, informujte laskavě neprodleně > jeho odesílatele. Obsah tohoto emailu i s přílohami a jeho kopie vymažte ze > svého systému. > Nejste-li zamýšleným adresátem tohoto emailu, nejste oprávněni tento email > jakkoliv užívat, rozšiřovat, kopírovat či zveřejňovat. 
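For completeness, a runnable sketch of that call on the temp data frame posted above (rows with NAs dropped first, as in the earlier reply; the plot uses the same 1/X1 versus X2 display as in the thread):

library(mclust)
temp.cc <- na.omit(temp)                          # Mclust does not accept missing values
fm <- Mclust(temp.cc, G = 3, modelNames = "EEV")  # force 3 clusters with the EEV covariance model
g <- fm$classification
plot(1/temp.cc[, 1], temp.cc[, 2], pch = g, col = g)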
Re: [R] Create macro_var in R
See Example 5. Insert Variables on the sqldf home page. https://github.com/ggrothendieck/sqldf On Wed, Feb 3, 2016 at 2:16 PM, Amoy Yang via R-help wrote: > First, MVAR<-c("population) should be the same as "population'". Correct? > You use tab[[MVAR]] to refer to "population" where double [[...]] removes > double quotes "...", which seemingly work for r-code although it is tedious > in comparison direct application in SAS %let MVAR=population. But it does not > work for sqldef in R as shown below. > >> key<-"pop" >> library(sqldf) >> sqldf("select grade, count(*) as cnt, min(tab[[key]]) as min, > + max(pop) as max, avg(pop) as mean, median(pop) as median, > + stdev(pop) as stdev from tab group by grade") > Error in sqliteSendQuery(con, statement, bind.data) : > error in statement: near "[[key]": syntax error > > > > > On Wednesday, February 3, 2016 12:40 PM, "ruipbarra...@sapo.pt" > wrote: > > > Hello, > > You can't use tab$MVAR but you can use tab[[MVAR]] if you do MVAR <- > "population" (no need for c()). > > Hope this helps, > > Rui Barradas > Citando Amoy Yang via R-help : > population is the field-name in data-file (say, tab). MVAR<-population takes > data (in the column of population) rather than field-name as done in SAS: > %let MVAR=population; > In the following r-program, for instance, I cannot use ... tab$MVAR...or > simply MVAR itself since MVAR is defined as "population" with double quotes > if using MVAR<-c("population") > >On Wednesday, February 3, 2016 11:54 AM, Duncan Murdoch > wrote: > > > On 03/02/2016 12:41 PM, Amoy Yang via R-help wrote: > There is a %LET statement in SAS: %let MVAR=population; Thus, MVAR can be > used through entire program. > In R, I tried MAVR<-c("population"). The problem is that MAVR comes with > double quote "" that I don't need. But MVAR<-c(population) did NOT work > out. Any way that double quote can be removed as done in SAS when creating > macro_var? > Thanks in advance for helps! > R doesn't have a macro language, and you usually don't need one. > > If you are only reading the value of population, then > > MAVR <- population > > is fine. This is sometimes the same as c(population), but in general > it's different: c() will remove some attributes, such as > the dimensions on arrays. > > If you need to modify it in your program, it's likely more complicated. > The normal way to go would be to put your code in a function, and have > it return the modified version. For example, > > population <- doModifications(population) > > where doModifications is a function with a definition like > > doModifications <- function(MAVR) { > # do all your calculations on MAVR > # then return it at the end using > MAVR > } > > Duncan Murdoch > > > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide > http://www.R-project.org/posting-guide.htmland provide commented, minimal, > self-contained, reproducible code. > > > > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. 
tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
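For reference, the substitution idiom that Example 5 describes is gsubfn's fn$ prefix, which interpolates R variables into the SQL string. A minimal sketch, assuming a data frame tab with columns grade and pop:

library(sqldf)
key <- "pop"
fn$sqldf("select grade, count(*) as cnt,
                 min($key) as min, max($key) as max, avg($key) as mean
          from tab group by grade")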
Re: [R] sqldf --Warning message:
sqldf does not use Tk so you can ignore this. On Fri, Feb 19, 2016 at 12:32 PM, Divakar Reddy wrote: > Dear R users, > > I'm getting Waring message while trying to load "sqldf" package in R3.2.3 > and assuming that we can ignore this as it's WARNING Message and not an > error message. > Can you guide me if my assumption is wrong? > > >> library(sqldf); > Loading required package: gsubfn > Loading required package: proto > Loading required package: RSQLite > Loading required package: DBI > Warning message: > no DISPLAY variable so Tk is not available > >> version _ > platform x86_64-redhat-linux-gnu > version.string R version 3.2.3 (2015-12-10) >> > > Thanks, > Divakar > Phoenix,USA > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
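If the warning itself is a nuisance (for example in batch logs), one workaround, assuming a gsubfn version that supports the option, is to ask gsubfn for its pure-R engine before sqldf is attached:

options(gsubfn.engine = "R")  # skip the tcltk-based engine, so Tk/DISPLAY is never touched
library(sqldf)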
Re: [R] Functional programming?
This manufactures the functions without using eval by using substitute to substitute i-1 and a[i] into an expression for the body, which is then assigned to the body of the function: hh <- vector("list", 5) hh[[1]] <- f(a[1]) for(i in 2:5) { hh[[i]] <- hh[[1]] body(hh[[i]]) <- substitute(hh[[iprev]](x) * g(ai)(x), list(iprev = i-1, ai = a[i])) } all.equal(h[[5]](.5), hh[[5]](.5)) # test ## [1] TRUE This uses quote to define the body of h[[i]] as a call object and then substitutes in the values of i-1 and a[i], assigning the result to the body of h[[i]]. h <- vector("list", 5) h[[1]] <- f(a[1]) for(i in 2:5) { h[[i]] <- h[[1]] body(h[[i]]) <- do.call(substitute, list(quote(h[[iprev]](x) * g(ai)(x)), list(iprev = i-1, ai = a[i]))) } On Wed, Mar 2, 2016 at 11:47 AM, Roger Koenker wrote: > I have a (remarkably ugly!!) code snippet (below) that, given > two simple functions, f and g, generates > a list of new functions h_{k+1} = h_k * g, k= 1, …, K. Surely, there are > vastly > better ways to do this. I don't particularly care about the returned list, > I'd be happy to have the final h_K version of the function, > but I keep losing my way and running into the dreaded: > > Error in h[[1]] : object of type 'closure' is not subsettable > or > Error: evaluation nested too deeply: infinite recursion / > options(expressions=)? > > Mainly I'd like to get rid of the horrible, horrible paste/parse/eval evils. > Admittedly > the f,g look a bit strange, so you may have to suspend disbelief to imagine > that there is > something more sensible lurking beneath this minimal (toy) example. > > f <- function(u) function(x) u * x^2 > g <- function(u) function(x) u * log(x) > set.seed(3) > a <- runif(5) > h <- list() > hit <- list() > h[[1]] <- f(a[1]) > hit[[1]] <- f(a[1]) > for(i in 2:5){ > ht <- paste("function(x) h[[", i-1, "]](x) * g(", a[i], ")(x)") > h[[i]] <- eval(parse(text = ht)) > hit[[i]] <- function(x) {force(i); return(h[[i]] (x))} > } > x <- 1:99/10 > plot(x, h[[1]](x), type = "l") > for(i in 2:5) > lines(x, h[[i]](x), col = i) > > Thanks, > Roger > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Linear Regressions with constraint coefficients
This is a quadratic programming problem that you can solve using either a quadratic programming solver with constraints or a general nonlinear solver with constraints. See https://cran.r-project.org/web/views/Optimization.html for more info on what is available. Here is an example using a nonlinear least squares solver and non-negative bound constraints. The constraint that the coefficients sum to 1 is implied by dividing them by their sum and then dividing the coefficients found by their sum at the end: # test data set.seed(123) n <- 1000 X1 <- rnorm(n) X2 <- rnorm(n) X3 <- rnorm(n) Y <- .2 * X1 + .3 * X2 + .5 * X3 + rnorm(n) # fit library(nlmrt) fm <- nlxb(Y ~ (b1 * X1 + b2 * X2 + b3 * X3)/(b1 + b2 + b3), data = list(Y = Y, X1 = X1, X2 = X2, X3 = X3), lower = numeric(3), start = list(b1 = 1, b2 = 2, b3 = 3)) giving the following non-negative coefficients which sum to 1 that are reasonably close to the true values of 0.2, 0.3 and 0.5: > fm$coefficients / sum(fm$coefficients) b1 b2 b3 0.18463 0.27887 0.53650 On Tue, Apr 26, 2016 at 8:39 AM, Aleksandrovic, Aljosa (Pfaeffikon) wrote: > Hi all, > > I hope you are doing well? > > I’m currently using the lm() function from the package stats to fit linear > multifactor regressions. > > Unfortunately, I didn’t yet find a way to fit linear multifactor regressions > with constraint coefficients? I would like the slope coefficients to be all > inside an interval, let’s say, between 0 and 1. Further, if possible, the > slope coefficients should add up to 1. > > Is there an elegant and not too complicated way to do such a constraint > regression estimation in R? > > I would very much appreciate if you could help me with my issue? > > Thanks a lot in advance and kind regards, > Aljosa Aleksandrovic > > > > Aljosa Aleksandrovic, FRM, CAIA > Quantitative Analyst - Convertibles > aljosa.aleksandro...@man.com > Tel +41 55 417 7603 > > Man Investments (CH) AG > Huobstrasse 3 | 8808 Pfäffikon SZ | Switzerland > > > -Original Message- > From: Kevin E. Thorpe [mailto:kevin.tho...@utoronto.ca] > Sent: Dienstag, 26. April 2016 14:35 > To: Aleksandrovic, Aljosa (Pfaeffikon) > Subject: Re: Linear Regressions with constraint coefficients > > You need to send it to r-help@r-project.org however. > > Kevin > > On 04/26/2016 08:32 AM, Aleksandrovic, Aljosa (Pfaeffikon) wrote: >> Ok, will do! Thx a lot! >> >> Please find below my request: >> >> Hi all, >> >> I hope you are doing well? >> >> I’m currently using the lm() function from the package stats to fit linear >> multifactor regressions. >> >> Unfortunately, I didn’t yet find a way to fit linear multifactor regressions >> with constraint coefficients? I would like the slope coefficients to be all >> inside an interval, let’s say, between 0 and 1. Further, if possible, the >> slope coefficients should add up to 1. >> >> Is there an elegant and not too complicated way to do such a constraint >> regression estimation in R? >> >> I would very much appreciate if you could help me with my issue? >> >> Thanks a lot in advance and kind regards, Aljosa Aleksandrovic >> >> >> >> Aljosa Aleksandrovic, FRM, CAIA >> Quantitative Analyst - Convertibles >> aljosa.aleksandro...@man.com >> Tel +41 55 417 7603 >> >> Man Investments (CH) AG >> Huobstrasse 3 | 8808 Pfäffikon SZ | Switzerland >> >> >> -Original Message- >> From: Kevin E. Thorpe [mailto:kevin.tho...@utoronto.ca] >> Sent: Dienstag, 26. 
April 2016 14:28 >> To: Aleksandrovic, Aljosa (Pfaeffikon); r-help-ow...@r-project.org >> Subject: Re: Linear Regressions with constraint coefficients >> >> I believe I approved a message with such a subject. Perhaps there was >> another layer that subsequently rejected it after that. I didn't notice any >> unusual content. Try again, making sure you send the message in plain text >> only. >> >> Kevin >> >> On 04/26/2016 08:16 AM, Aleksandrovic, Aljosa (Pfaeffikon) wrote: >>> Do you know where I get help for my issue? >>> >>> Thanks in advance and kind regards, >>> Aljosa >>> >>> >>> Aljosa Aleksandrovic, FRM, CAIA >>> Quantitative Analyst - Convertibles >>> aljosa.aleksandro...@man.com >>> Tel +41 55 417 7603 >>> >>> Man Investments (CH) AG >>> Huobstrasse 3 | 8808 Pfäffikon SZ | Switzerland >>> >>> -Original Message- >>> From: R-help [mailto:r-help-boun...@r-project.org] On Behalf Of >>> r-help-ow...@r-project.org >>> Sent: Dienstag, 26. April 2016 14:10 >>> To: Aleksandrovic, Aljosa (Pfaeffikon) >>> Subject: Linear Regressions with constraint coefficients >>> >>> The message's content type was not explicitly allowed >>> > > > -- > Kevin E. Thorpe > Head of Biostatistics, Applied Health Research Centre (AHRC) > Li Ka Shing Knowledge Institute of St. Michael's Hospital > Assistant Professor, Dalla Lana School of Public Health > University of Toronto > email: kevin.tho...@utoronto.ca Tel: 416.864.5776 Fax: 416.864.3016 > > This email has been sent by a member of the Man group (“M
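For the quadratic-programming route mentioned at the top of this reply, here is a sketch using solve.QP from the quadprog package on the same simulated X1, X2, X3 and Y as above; for least squares Dmat is X'X and dvec is X'Y, the sum-to-one condition is the single equality constraint and the non-negativity bounds follow it:

library(quadprog)
X    <- cbind(X1, X2, X3)
Dmat <- crossprod(X)                  # t(X) %*% X
dvec <- drop(crossprod(X, Y))         # t(X) %*% Y
Amat <- cbind(rep(1, 3), diag(3))     # columns are constraints: sum(b) = 1, then each b_i >= 0
bvec <- c(1, rep(0, 3))
qp   <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
qp$solution                           # constrained coefficients, comparable to the nlxb fit above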
Re: [R] Approximate taylor series
Regress on a multivariate polynomial: lm(y ~ polym(x1, x2, x3, x4, degree = 3)) See ?polym __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
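A self-contained sketch of that idea with simulated data (the variables x1..x4 and the response below are made up for illustration):

set.seed(1)
n  <- 200
x1 <- runif(n); x2 <- runif(n); x3 <- runif(n); x4 <- runif(n)
y  <- sin(x1) + x2 * exp(x3) - x4^2 + rnorm(n, sd = 0.1)   # smooth surface plus noise
fm <- lm(y ~ polym(x1, x2, x3, x4, degree = 3))            # all polynomial terms up to total degree 3
summary(fm)$r.squared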
Re: [R] Linear Regressions with constraint coefficients
The nls2 package can be used to get starting values. On Thu, Apr 28, 2016 at 8:42 AM, Aleksandrovic, Aljosa (Pfaeffikon) wrote: > Hi Gabor, > > Thanks a lot for your help! > > I tried to implement your nonlinear least squares solver on my data set. I > was just wondering about the argument start. If I would like to force all my > coefficients to be inside an interval, let’s say, between 0 and 1, what kind > of starting values are normally recommended for the start argument (e.g. > Using a 4 factor model with b1, b2, b3 and b4, I tried start = list(b1 = 0.5, > b2 = 0.5, b3 = 0.5, b4 = 0.5))? I also tried other starting values ... Hence, > the outputs are very sensitive to that start argument? > > Thanks a lot for your answer in advance! > > Kind regards, > Aljosa > > > > Aljosa Aleksandrovic, FRM, CAIA > Quantitative Analyst - Convertibles > aljosa.aleksandro...@man.com > Tel +41 55 417 76 03 > > Man Investments (CH) AG > Huobstrasse 3 | 8808 Pfäffikon SZ | Switzerland > > -Original Message- > From: Gabor Grothendieck [mailto:ggrothendi...@gmail.com] > Sent: Dienstag, 26. April 2016 17:59 > To: Aleksandrovic, Aljosa (Pfaeffikon) > Cc: r-help@r-project.org > Subject: Re: [R] Linear Regressions with constraint coefficients > > This is a quadratic programming problem that you can solve using either a > quadratic programming solver with constraints or a general nonlinear solver > with constraints. See https://cran.r-project.org/web/views/Optimization.html > for more info on what is available. > > Here is an example using a nonlinear least squares solver and non-negative > bound constraints. The constraint that the coefficients sum to 1 is implied > by dividing them by their sum and then dividing the coefficients found by > their sum at the end: > > # test data > set.seed(123) > n <- 1000 > X1 <- rnorm(n) > X2 <- rnorm(n) > X3 <- rnorm(n) > Y <- .2 * X1 + .3 * X2 + .5 * X3 + rnorm(n) > > # fit > library(nlmrt) > fm <- nlxb(Y ~ (b1 * X1 + b2 * X2 + b3 * X3)/(b1 + b2 + b3), > data = list(Y = Y, X1 = X1, X2 = X2, X3 = X3), > lower = numeric(3), > start = list(b1 = 1, b2 = 2, b3 = 3)) > > giving the following non-negative coefficients which sum to 1 that are > reasonably close to the true values of 0.2, 0.3 and 0.5: > >> fm$coefficients / sum(fm$coefficients) > b1 b2 b3 > 0.18463 0.27887 0.53650 > > > On Tue, Apr 26, 2016 at 8:39 AM, Aleksandrovic, Aljosa (Pfaeffikon) > wrote: >> Hi all, >> >> I hope you are doing well? >> >> I’m currently using the lm() function from the package stats to fit linear >> multifactor regressions. >> >> Unfortunately, I didn’t yet find a way to fit linear multifactor regressions >> with constraint coefficients? I would like the slope coefficients to be all >> inside an interval, let’s say, between 0 and 1. Further, if possible, the >> slope coefficients should add up to 1. >> >> Is there an elegant and not too complicated way to do such a constraint >> regression estimation in R? >> >> I would very much appreciate if you could help me with my issue? >> >> Thanks a lot in advance and kind regards, Aljosa Aleksandrovic >> >> >> >> Aljosa Aleksandrovic, FRM, CAIA >> Quantitative Analyst - Convertibles >> aljosa.aleksandro...@man.com >> Tel +41 55 417 7603 >> >> Man Investments (CH) AG >> Huobstrasse 3 | 8808 Pfäffikon SZ | Switzerland >> >> >> -Original Message- >> From: Kevin E. Thorpe [mailto:kevin.tho...@utoronto.ca] >> Sent: Dienstag, 26. 
April 2016 14:35 >> To: Aleksandrovic, Aljosa (Pfaeffikon) >> Subject: Re: Linear Regressions with constraint coefficients >> >> You need to send it to r-help@r-project.org however. >> >> Kevin >> >> On 04/26/2016 08:32 AM, Aleksandrovic, Aljosa (Pfaeffikon) wrote: >>> Ok, will do! Thx a lot! >>> >>> Please find below my request: >>> >>> Hi all, >>> >>> I hope you are doing well? >>> >>> I’m currently using the lm() function from the package stats to fit linear >>> multifactor regressions. >>> >>> Unfortunately, I didn’t yet find a way to fit linear multifactor >>> regressions with constraint coefficients? I would like the slope >>> coefficients to be all inside an interval, let’s say, between 0 and 1. >>> Further, if possible, the slope coefficients should add up to 1. >>> >>> Is there an elegant and not too complicated way to do such a constraint >>>
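A sketch of that, reusing the simulated X1, X2, X3 and Y from the earlier post in this thread; a two-row data frame gives the box over which nls2 searches, and the best grid point can then be handed to nlxb as its start argument (the 0.1 lower edge simply avoids dividing by a zero coefficient sum on the grid):

library(nls2)
st  <- data.frame(b1 = c(0.1, 1), b2 = c(0.1, 1), b3 = c(0.1, 1))   # rows = lower and upper bounds
fm0 <- nls2(Y ~ (b1 * X1 + b2 * X2 + b3 * X3)/(b1 + b2 + b3),
            data = list(Y = Y, X1 = X1, X2 = X2, X3 = X3),
            start = st, algorithm = "brute-force")
coef(fm0)    # candidate starting values for nlxb()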
Re: [R] Clean method to convert date and time between time zones keeping it in POSIXct format
This involves mucking with the internals as well but it is short: structure(T1, tzone = "UTC") On Mon, May 9, 2016 at 9:24 AM, Arnaud Mosnier wrote: > Dear UseRs, > > I know two ways to convert dates and time from on time zone to another but > I am pretty sure that there is a better (cleaner) way to do that. > > > Here are the methods I know: > > > ## The longest way ... > > T1 <- as.POSIXct("2016-05-09 10:00:00", format="%Y-%m-%d %H:%M:%S", > tz="America/New_York") > > print(T1) > > T2 <- as.POSIXct(format(T1, tz="UTC"), tz="UTC") # format convert it to > character, so I have to convert it back to POSIXct afterward. > > print(T2) > > > > ## The shortest but probably not the cleanest ... > > attributes(T1)$tzone <- "UTC" > > print(T1) > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
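A quick check of that idiom using the same T1 as below; the underlying instant is unchanged and only the displayed zone moves:

T1 <- as.POSIXct("2016-05-09 10:00:00", tz = "America/New_York")
T2 <- structure(T1, tzone = "UTC")
T2                 # 2016-05-09 14:00:00 UTC
difftime(T1, T2)   # 0 secs: same moment, different display zone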
Re: [R] break string at specified possitions
Here are two ways that do not use any packages: s <- paste(letters, collapse = "") # test input substring(s, first, last) ## [1] "abcde" "fghij" "klmnopqrs" read.fwf(textConnection(s), last - first + 1) ## V1V2V3 ## 1 abcde fghij klmnopqrs On Wed, May 11, 2016 at 4:12 PM, Jan Kacaba wrote: > Dear R-help > > I would like to split long string at specified precomputed positions. > 'substring' needs beginings and ends. Is there a native function which > accepts positions so I don't have to count second argument? > > For example I have vector of possitions pos<-c(5,10,19). Substring > needs input first=c(1,6,11) and last=c(5,10,19). There is no problem > to write my own function. Just asking. > > Derek > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
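If only the end positions are known (the pos vector in the question), the matching first argument can be computed rather than typed; a small sketch:

pos   <- c(5, 10, 19)
first <- c(1, head(pos, -1) + 1)
last  <- pos
s <- paste(letters, collapse = "")
substring(s, first, last)
## [1] "abcde" "fghij" "klmnopqrs"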
Re: [R] Finding starting values for the parameters using nls() or nls2()
If you are not tied to that model the SSasymp() model in R could be considered and is easy to fit: # to plot points in order o <- order(cl$Area) cl.o <- cl[o, ] fm <- nls(Retention ~ SSasymp(Area, Asym, R0, lrc), cl.o) summary(fm) plot(Retention ~ Area, cl.o) lines(fitted(fm) ~ Area, cl.o, col = "red") __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
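For readers without the original cl data, the same code runs on simulated values (made-up Area/Retention numbers following an asymptotic curve):

set.seed(1)
cl <- data.frame(Area = runif(50, 0, 10))
cl$Retention <- 30 - 25 * exp(-0.8 * cl$Area) + rnorm(50, sd = 1)
cl.o <- cl[order(cl$Area), ]
fm <- nls(Retention ~ SSasymp(Area, Asym, R0, lrc), data = cl.o)
summary(fm)
plot(Retention ~ Area, cl.o)
lines(fitted(fm) ~ Area, cl.o, col = "red")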
Re: [R] Split strings based on multiple patterns
Replace newlines and colons with a space since they seem to be junk, generate a pattern to replace the attributes with a comma and do the replacement and finally read in what is left into a data frame using the attributes as column names. (I have indented each line of code below by 2 spaces so if any line starts before that then it's been wrapped around by the email and needs to be adjusted.) attributes <- c("Water temp", "Waterbody type", "Water pH", "Conductivity", "Water color", "Water turbidity", "Manmade", "Permanence", "Max water depth", "Primary substrate", "Evidence of cattle grazing", "Shoreline Emergent Veg(%)", "Fish present", "Fish species") ugly2 <- gsub("[:\n]", " ", ugly) pat <- paste(gsub("([[:punct:]])", ".", attributes), collapse = "|") ugly3 <- gsub(pat, ",", ugly2) dd <- read.table(text = ugly3, sep = ",", strip.white = TRUE, col.names = c("", attributes))[-1] On Fri, Oct 14, 2016 at 7:16 PM, Joe Ceradini wrote: > Afternoon, > > I unfortunately inherited a dataframe with a column that has many fields > smashed together. My goal is to split the strings in the column into > separate columns based on patterns. > > Example of what I'm working with: > > ugly <- c("Water temp:14: F Waterbody type:Permanent Lake/Pond: Water > pH:Unkwn: > Conductivity:Unkwn: Water color: Clear: Water turbidity: clear: > Manmade:no Permanence:permanent: Max water depth: <3: Primary > substrate: Silt/Mud: Evidence of cattle grazing: none: > Shoreline Emergent Veg(%): 1-25: Fish present: yes: Fish species: unkwn: no > amphibians observed") > ugly > > Far as I can tell, there is not a single pattern that would work for > splitting this string. Splitting on ":" is close but not quite consistent. > Each of these attributes should be a separate column: > > attributes <- c("Water temp", "Waterbody type", "Water pH", "Conductivity", > "Water color", "Water turbidity", "Manmade", "Permanence", "Max water > depth", "Primary substrate", "Evidence of cattle grazing", "Shoreline > Emergent Veg(%)", "Fish present", "Fish species") > > So, conceptually, I want to do something like this, where the string is > split for each of the patterns in attributes. However, strsplit only uses > the 1st value of attributes > strsplit(ugly, attributes) > > Should I loop through the values of "attributes"? > Is there an argument in strsplit I'm missing that will do what I want? > Different approach altogether? > > Thanks! Happy Friday. > Joe > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] abline with zoo series
Recessions are typically shown by shading. The zoo package has xblocks for this purpose. If app1 is your zoo object then: plot(app1) tt <- time(app1) xblocks(tt, tt >= "1990-07-01" & tt <= "1991-03-31", col = rgb(0.7, 0.7, 0.7, 0.5)) # transparent grey See ?xblocks for more info. On Thu, Nov 24, 2016 at 10:03 PM, Erin Hodgess wrote: > Hello! Happy Thanksgiving to those who are celebrating. > > I have a zoo series that I am plotting, and I would like to have some > vertical lines at certain points, to indicate US business cycles. Here is > an example: > > app1 <- get.hist.quote(instrument="appl", > start="1985-01-01",end="2016-08-31", quote="AdjClose", compression="m") > #Fine > plot(app1,main="Historical Stock Prices: Apple Corporation") > #Still Fine > #Now I want to use abline at July 1990 and March 1991 (as a start) for > business cycles. I tried v=67 and v="1990-07", no good. > > I have a feeling that it's really simple and I'm just not seeing it. > > Any help much appreciated. > > Thanks, > Erin > > > -- > Erin Hodgess > Associate Professor > Department of Mathematical and Statistics > University of Houston - Downtown > mailto: erinm.hodg...@gmail.com > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
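A self-contained version with made-up monthly data, for readers who do not want to run the get.hist.quote download:

library(zoo)
set.seed(1)
tt <- seq(as.Date("1989-01-01"), by = "month", length.out = 48)
z  <- zoo(cumsum(rnorm(48)), tt)
plot(z)
xblocks(time(z), time(z) >= as.Date("1990-07-01") & time(z) <= as.Date("1991-03-31"),
        col = rgb(0.7, 0.7, 0.7, 0.5))   # shade the recession window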
Re: [R] Matrix
Assuming that the input is x <- 1:4, try this one-liner:

> embed(c(0*x[-1], x, 0*x[-1]), 4)
     [,1] [,2] [,3] [,4]
[1,]    1    0    0    0
[2,]    2    1    0    0
[3,]    3    2    1    0
[4,]    4    3    2    1
[5,]    0    4    3    2
[6,]    0    0    4    3
[7,]    0    0    0    4

On Mon, Mar 6, 2017 at 11:18 AM, Peter Thuresson wrote: > Hello, > > Is there a function in R which can transform, let say a vector: > > c(1:4) > > to a matrix where the vector is repeated but "projected" +1 one step down for > every (new) column. > I want the output below from the vector above, like this: > > p<-c(1,2,3,4,0,0,0,0,1,2,3,4,0,0,0,0,1,2,3,4,0,0,0,0,1,2,3,4) > > matrix(p,7,4) > -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] nlmrt problems - No confInt, NA StdErr, t-, or p-values
You can use wrapnls from the nlmrt package to get an nls object. Run it instead of nlxb. It runs nlxb followed by nls so that the output is an nls object. Then you can use all of nls' methods. On occasion that fails even if nlxb succeeds since the nls optimization can fail independently of nlxb. Also, it does not show the output from nlxb, only from the final nls, so you could alternatively run nlxb and then run nls2 from the nls2 package after that. nls2 can compute the nls object at a particular set of coefficients so no second optimization that could fail is done. Here is an example that uses nls2 to generate starting values for nlxb, then runs nlxb and then uses nls2 again to get an nls object so that it can then use nls methods (in this case fitted) on it. http://stackoverflow.com/questions/42511278/nls-curve-fit-singular-matrix-error/42513058#42513058 On Tue, Mar 21, 2017 at 6:57 AM, DANIEL PRECIADO wrote: > Dear list, > > I want to use nlxb (package nlmrt) to fit different datasets to a gaussian, > obtain parameters (including standard error, t-and p-value) and confidence > intervals. > > nlxb generates the parameters, but very often results in NA standard > error,t-and p-values. Furthermore, using confint() to obtain the confidence > intervals generates a : Error in vcov.default(object) : object does not > have variance-covariance matrix" erro. > > Can someone indicate why is nlxb generating NAs (when nls has no problem with > them) and how to obtain confidence intervals from an nlmrt object? > > Thanks > > > [[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. -- Statistics & Software Consulting GKX Group, GKX Associates Inc. tel: 1-877-GKX-GROUP email: ggrothendieck at gmail.com __ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
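As a concrete illustration of that last approach (the model below is a made-up exponential decay rather than the poster's Gaussian, and current nlmrt/nls2 argument conventions are assumed): fit with nlxb, then hand its coefficients to nls2 with the "brute-force" algorithm, which simply builds an nls object at that single point so that the usual nls methods become available:

library(nlmrt)
library(nls2)
set.seed(1)
dd <- data.frame(x = 1:20)
dd$y <- 5 * exp(-0.3 * dd$x) + rnorm(20, sd = 0.05)
fo  <- y ~ a * exp(-b * x)
fm1 <- nlxb(fo, data = dd, start = list(a = 1, b = 1))                       # robust Marquardt fit
fm2 <- nls2(fo, data = dd, start = fm1$coefficients, algorithm = "brute-force")  # nls object at those coefficients
fitted(fm2)
summary(fm2)   # nls methods now work on fm2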
Re: [R] barchart with bars attached to y=0-line
You could do it without a panel function using xyplot and type "h" : df2 <- transform(df, Compound = factor(abbreviate(Compound))) xyplot(Ratio ~ Compound | Class, df2, type = c("h", "g"), lwd = 7, par.settings = list(grid.pars = list(lineend = 1))) On Wed, Jul 16, 2008 at 6:48 AM, Henning Wildhagen <[EMAIL PROTECTED]> wrote: > Dear R users, > > i am using the following code to produce barcharts with lattice: > > Compound<-c("Glutamine", "Arginine", "Glutamate", "Glycine", "Serine", > "Glucose", "Fructose", "Raffinose", > "Glycerol", "Galacglycerol", "Threitol", "Galactinol", "Galactitol") > > > Class<-c("aminos","aminos","aminos","aminos","aminos","sugars","sugars","sugars","glycerols","glycerols","sugar > alcohols","sugar alcohols","sugar alcohols") > > set.seed(5) > Ratio<-rnorm(13, 0.5, 3) > > df<-data.frame(Compound, Class,Ratio) > > library(lattice) > > P<-barchart(data=df,Ratio~Compound|Class) > > However, I would like to have the bars attached to an imaginary y=0-line so > that they go up if Ratio>0 and down if Ratio<0. > I saw some older entries in the mailing list supposing to use > panel.barchart. However, I did not fully understand the usage of this > function. Also, it was annouced to add an option to "barchart" to make it > easier to get this type of plot. > > Has anyone an idea what the easiest solution might be? > > A second problem is concerning the display of the "Compound"-objects in > accordance with the conditioning variable "Class". In other words: Is it > possible to use the whole space of each panel for only those > "Compound"-objects which belong to the "Class" displayed in that particular > panel? Of course, for this purpose the panels have to be "detached" to > allow appropiate labling of the x-axis. > > Many thanks for your help, > > Henning > > > -- > > > >[[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] date to decimal date conversion
Its not clear from your post what the decimal portion is supposed to represent since Feb is not 35% of the way through the year but in any case see R News 4/1 and the full page table at the end of that article, in particular, for some date idioms that you can use. On Wed, Jul 16, 2008 at 7:41 AM, Yogesh Tiwari <[EMAIL PROTECTED]> wrote: > Hello R Users, > > I want to convert date (yr, mo, day, hr, min, sec) to decimal date, > > for example: > > Date (in six columns): > > yr mo dy hr minsec > 1993 02 13 05 52 00 > > Converted to Decimal date : > > 1993.3542 > > How to write a small code in R so I can convert six column date to decimal > date > > Many thanks, > > Yogesh > > -- > Yogesh K. Tiwari (Dr.rer.nat), > Scientist, > Indian Institute of Tropical Meteorology, > Homi Bhabha Road, > Pashan, > Pune-411008 > INDIA > >[[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
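If the decimal part is meant to be the elapsed fraction of the year, one way to compute it (a sketch, assuming that definition) is:

d     <- as.POSIXct("1993-02-13 05:52:00", tz = "GMT")
yr    <- as.numeric(format(d, "%Y"))
start <- as.POSIXct(paste0(yr,     "-01-01"), tz = "GMT")
end   <- as.POSIXct(paste0(yr + 1, "-01-01"), tz = "GMT")
yr + as.numeric(difftime(d, start, units = "secs")) /
     as.numeric(difftime(end, start, units = "secs"))
## about 1993.118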
Re: [R] RSQLite maximum table size
It depends on what the values for certain variables were set to when it was compiled. See: http://www.sqlite.org/limits.html On Wed, Jul 16, 2008 at 4:14 PM, Vidhu Choudhary <[EMAIL PROTECTED]> wrote: > Hi All, > I am trying to make write a table in RSQLite. And I get the error mentioned > below > > mat<-as.data.frame(matrix(rnorm(n=24400),nrow=244000,ncol=1000)) >> dbWriteTable(con, "array", mat) > [1] FALSE > *Warning message: > In value[[3]](cond) : > RS-DBI driver: (error in statement: too many SQL variables)* > > Can someone please tell me what is maximum size of a table( max number of > rows and cols) we can have in RSQLite and how big the database can grow > > Thank you > Regards > Vidhu > >[[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] finding "chuncks" of data that have no NA s
na.contiguous.zoo() will return the longest stretch of non-NA data. It's a zoo method of the na.contiguous generic in the core of R. rle(!is.na(rowSums(coredata(z)))) will find all stretches. On Fri, Jul 18, 2008 at 6:47 AM, stephen sefick <[EMAIL PROTECTED]> wrote: > I have a data frame that is 122 columns and 7ish rows it is a zoo > object, but could be easily converted or read in as something else. It is > multiparameter multistation water quality data - there are a lot of NA s. I > would like to find "chuncks" of data that are free of NA s to do some > analysis. All of the data is numeric. Is there a way besides graphing to > find these NA less "chuncks". I did not include data because of the size of > the data frame, and because I don't know exactly how to tackle this > problem. I will send a subset of the data to the list if requested and when > I get to work. As for reproducible code I am not entirly sure how to go > about this, so that too is missing. > Sorry for breaking the rules this early in the morning, > > Stephen > > -- > Let's not spend our time and resources thinking about things that are so > little or so large that all they really do for us is puff us up and make us > feel like gods. We are mammals, and have not exhausted the annoying little > problems of being mammals. > > -K. Mullis > >[[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
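A small illustration of both idioms on made-up data with scattered NAs:

library(zoo)
z <- zoo(cbind(a = c(1, NA, 3, 4, 5, NA, 7),
               b = c(1,  2, 3, 4, NA, 6, 7)), 1:7)
na.contiguous(z)                      # the single longest run of complete rows (rows 3-4 here)
rle(!is.na(rowSums(coredata(z))))     # run lengths of complete (TRUE) and incomplete (FALSE) rows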
Re: [R] How to cut data elements included in a text line
strapply in gsubfn finds matches of the indicated regexp, in this case a \t followed by one or more minus, dot or digit, and with backref of -1 it passes the backreference, i.e. portion of the match within (..), to the function in the third arg. See http://gsubfn.googlecode.com library(gsubfn) strapply(x, "\t([-.0-9]+)", as.numeric, backref = -1)[[1]] On Fri, Jul 18, 2008 at 12:44 PM, Christine A. <[EMAIL PROTECTED]> wrote: > > Hello, > > assume I have an "unstructured" text line from a connection. Unfortunately, > it is in string format: > > R> x > [1] "\talpha0\t-0.638\t0.4043\t0.4043\t-2.215\t-0.5765\t-0.137\t501\t2000" > > > How can I extract the data included in this string object "x" in order to > get the elements for the parameter vector called "alpha0", i.e. > -0.638 0.4043 0.0467 0.4043 -2.215 -0.5765 -0.137 501 > > > Any hints how to handle this would be appreciated. > Best regards, > Christine > > > -- > Christine Adrion, Dipl.-Stat., MPH > > Ludwig-Maximilians-Universitaet Muenchen > IBE - Institut fuer Medizinische Informations- > verarbeitung, Biometrie und Epidemiologie > Marchioninistr. 15 > D- 81377 Muenchen > > Tel.: +49 (0)89 7095 - 4483 > eMail:[EMAIL PROTECTED] > web: http://ibe.web.med.uni-muenchen.de > > > > > > > > > -- > View this message in context: > http://www.nabble.com/How-to-cut-data-elements-included-in-a-text-line-tp18533319p18533319.html > Sent from the R help mailing list archive at Nabble.com. > >[[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Change font-face in title
Try this: plot(1, type="l", xlab="Wellenlänge [nm]", col="darkslategray", main = ~ Spektrum ~ italic(Deschampsia ~ caespitosa)) On Sat, Jul 19, 2008 at 11:31 AM, Albin Blaschka <[EMAIL PROTECTED]> wrote: > > > Dear List, > > Is there a possibility to change the font-face for a part of the title of a > plot? > > For example I have the following... > > plot(nirs, type="l", xlab="Wellenlänge [nm]", col="darkslategray", > main = "Spektrum Deschampsia caespitosa") > > ...and I would like to change the part of the title-string "Deschampsia > caespitosa" to italics? Is that possible? If yes, how? > > The only possibility which came to my mind was "tricking" with the subtitle > or with using text()...but that would be ugly... > > System: Both Linux and Windows, R-Version 2.7.1 > > Thanks in advance, > Albin Blaschka > > -- > - > | Albin Blaschka, Mag. rer.nat - Salzburg, Austria > | http://www.albinblaschka.info http://www.thinkanimal.info > | It's hard to live in the mountains, hard, but not hopeless! > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Change font-face in title
Or this which looks slightly better: plot(1, type="l", xlab="Wellenlänge [nm]", col="darkslategray", main = ~ Spektrum ~ italic("Deschampsia caespitosa")) On Sat, Jul 19, 2008 at 11:56 AM, Gabor Grothendieck <[EMAIL PROTECTED]> wrote: > Try this: > > plot(1, type="l", xlab="Wellenlänge [nm]", col="darkslategray", > main = ~ Spektrum ~ italic(Deschampsia ~ caespitosa)) > > On Sat, Jul 19, 2008 at 11:31 AM, Albin Blaschka > <[EMAIL PROTECTED]> wrote: >> >> >> Dear List, >> >> Is there a possibility to change the font-face for a part of the title of a >> plot? >> >> For example I have the following... >> >> plot(nirs, type="l", xlab="Wellenlänge [nm]", col="darkslategray", >> main = "Spektrum Deschampsia caespitosa") >> >> ...and I would like to change the part of the title-string "Deschampsia >> caespitosa" to italics? Is that possible? If yes, how? >> >> The only possibility which came to my mind was "tricking" with the subtitle >> or with using text()...but that would be ugly... >> >> System: Both Linux and Windows, R-Version 2.7.1 >> >> Thanks in advance, >> Albin Blaschka >> >> -- >> - >> | Albin Blaschka, Mag. rer.nat - Salzburg, Austria >> | http://www.albinblaschka.info http://www.thinkanimal.info >> | It's hard to live in the mountains, hard, but not hopeless! >> >> __ >> R-help@r-project.org mailing list >> https://stat.ethz.ch/mailman/listinfo/r-help >> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html >> and provide commented, minimal, self-contained, reproducible code. >> > __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] extracting colnames to label plots in a function
See ?paste and the collapse argument, in particular: plot(d, main = paste(paste(colnames(x), collapse = " "), paste(colnames(y), collapse = " "), sep = " - ")) Also your function sets na.method but never uses it and leaves the par settings changed afterwards. See ?par. It could also benefit from the use of ?match.arg Also please include drivers that call the posted function so one can run them in a reproducible manner. On Sat, Jul 19, 2008 at 12:04 PM, stephen sefick <[EMAIL PROTECTED]> wrote: > #this is my little function that I would like to use the column names of the > x and y arguments in the function. I would like it to read > #site1-site2 how would I do this > diff.temp <- function(x, y ,use="pairwise.complete.obs") > { >na.method <- pmatch(use, c("all.obs", "complete.obs", >"pairwise.complete.obs")) >par(mfrow=c(2,1)) >d <- (x-y) >plot(d, main="paste(colnames(x))-paste(colnames(y))") >plot(density(na.omit(coredata(d > } > > -- > Let's not spend our time and resources thinking about things that are so > little or so large that all they really do for us is puff us up and make us > feel like gods. We are mammals, and have not exhausted the annoying little > problems of being mammals. > > -K. Mullis > >[[alternative HTML version deleted]] > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. > __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] extracting colnames to label plots in a function
On Sat, Jul 19, 2008 at 12:42 PM, stephen sefick <[EMAIL PROTECTED]> wrote: > I can fix the par settings. I am new to function writing. I would like to > use the na.method in the > d<- (x-y) > > how? > > I don't know what a driver is sorry for my ignorance 1. As stated in the prior post its code and data that calls your function. With only your function if someone wants to answer your question they have to come up with their own data and then write it out and then write out the call simply to verify what it does. You could do that work for us and also clarify precisely what it is that should be returned by calculating it by hand, if feasible.Reproducible as requested at the bottom of every message to r-help means that we can copy the code from your post and just paste it into our running R session and see the result without having to come up with more code ourselves or having to manipulate it in any way. 2. Regarding na.method see ?switch and try this: d <- switch(na.method, all.obs = ..., ... ) 3. Regarding par try this: on.exit(par(op)) op <- par(...whatever...) 4. Regarding match.arg follow the first example in the examples section of ?match.arg . Note that the choices are best placed in the formal parameter list, not in the match.arg function. > > On Sat, Jul 19, 2008 at 12:36 PM, Gabor Grothendieck > <[EMAIL PROTECTED]> wrote: >> >> See ?paste and the collapse argument, in particular: >> >> plot(d, main = paste(paste(colnames(x), collapse = " "), >>paste(colnames(y), collapse = " "), sep = " - ")) >> >> Also your function sets na.method but never uses it and leaves the par >> settings >> changed afterwards. See ?par. It could also benefit from the use of >> ?match.arg >> >> Also please include drivers that call the posted function so one can >> run them in >> a reproducible manner. >> >> >> On Sat, Jul 19, 2008 at 12:04 PM, stephen sefick <[EMAIL PROTECTED]> >> wrote: >> > #this is my little function that I would like to use the column names of >> > the >> > x and y arguments in the function. I would like it to read >> > #site1-site2 how would I do this >> > diff.temp <- function(x, y ,use="pairwise.complete.obs") >> > { >> >na.method <- pmatch(use, c("all.obs", "complete.obs", >> >"pairwise.complete.obs")) >> >par(mfrow=c(2,1)) >> >d <- (x-y) >> >plot(d, main="paste(colnames(x))-paste(colnames(y))") >> >plot(density(na.omit(coredata(d >> > } >> > >> > -- >> > Let's not spend our time and resources thinking about things that are so >> > little or so large that all they really do for us is puff us up and make >> > us >> > feel like gods. We are mammals, and have not exhausted the annoying >> > little >> > problems of being mammals. >> > >> > -K. Mullis >> > >> >[[alternative HTML version deleted]] >> > >> > __ >> > R-help@r-project.org mailing list >> > https://stat.ethz.ch/mailman/listinfo/r-help >> > PLEASE do read the posting guide >> > http://www.R-project.org/posting-guide.html >> > and provide commented, minimal, self-contained, reproducible code. >> > > > > > -- > Let's not spend our time and resources thinking about things that are so > little or so large that all they really do for us is puff us up and make us > feel like gods. We are mammals, and have not exhausted the annoying little > problems of being mammals. > > -K. Mullis __ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
Re: [R] Non-linearly constrained optimisation
The Ryacas package is an interface between R and the computer algebra system, yacas. It includes an R to yacas parser/translator and an OpenMath/XML to R parser/translator. See: http://ryacas.googlecode.com On Sat, Jul 19, 2008 at 11:03 PM, Stuart Nettleton <[EMAIL PROTECTED]> wrote: > Tolga, > Your issue seems to be a common one at present. While I am relatively new to > R (and would welcome being corrected), I haven't been able to find an > existing module to parse algebraic equations and build acyclic networks (for > the objective function and each constraint) to submit to solving routines > (such as optim, BB, Patrick Burns' genopt from S Poetry, Algencan etc). > Certainly there are the components to build one, for example topological > sort packages like mathgraph and Carter Butts' network. I have implemented > acyclic networks in three major projects and so I have started to > contemplate the missing package in both R and Mathematica. At this point I > am significantly further advanced in Mathematica, building from Eric > Swanson's excellent perturbationAIM package. Yet it seems slightly odd to me > that the required functionality hasn't been developed in R up to this point > in time. Many people need this functionality, the network algorithms have > been around for forty years and there are many solvers, even open source > ones like ipopt. Of course, the "big guns" in this field are GAMS and AMPL > and it is perhaps their overwhelming presence or respect for the developers > of these packages that has led R developers to be somewhat cautious about > releasing code in this area. However, there are already alternatives. For a > quasi open source version of AMPL you could use Dr Ampl > (http://www.gerad.ca/~orban/drampl/ ) or write your problem in GAMS or AMPL > format and submit to the Neos server either directly > (http://neos.mcs.anl.gov/neos/) or by using pyneos.py > (www.gerad.ca/~orban/pyneos/pyneos.py). I really hope this gets worked out > in R at some stage! > Stuart > > On Sun, 20 Jul 2008 08:10:35 +1000, <[EMAIL PROTECTED]> wrote: > >> Dear R Users, >> I am looking for some guidance on setting up an optimisation in R with >> non-linear constraints. >> >> Here is my simple problem: >> - I have a function h(inputs) whose value I would like to maximise >> - the 'inputs' are subject to lower and upper bounds >> - however, I have some further constraints: I would like to constrain the >> values for two other separate function f(inputs) and g(inputs) to be >> within >> certain bounds >> >> This means the 'inputs' must not only lie within the bounds specified by >> the 'upper' and 'lower' bounds, but they must also not take on values such >> that f(inputs) and g(inputs) take on values outside defined values. h, f >> and g are all non-linear. >> >> I believe constroptim would work if f and g were linear. Alas, they are >> not. Is there any other way I can achieve this in R ? >> >> Thanks in advance, >> Tolga >> >> Generally, this communication is for informational purposes only >> and it is not intended as an offer or solicitation for the purchase >> or sale of any financial instrument or as an official confirmation >> of any transaction. In the event you are receiving the offering >> materials attached below related to your interest in hedge funds or >> private equity, this communication may be intended as an offer or >> solicitation for the purchase or sale of such fund(s). 
Re: [R] estimating volume from xyz points
On Sat, Jul 19, 2008 at 5:21 PM, milton ruser <[EMAIL PROTECTED]> wrote: > Dear all, > > I have several sets of x-y-z points and I need to estimate the volume that > encompasses all my points. > Recently I got some advice to show the "convex hull" of my points using > the geometry package (see code below). > But now I need to calculate the volume of my set of points.

See ?convhulln, also in the geometry package.

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
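A minimal sketch of the suggestion above, asking convhulln() for areas and volumes via its "FA" option (the random points are purely illustrative, and the exact way the option is passed can differ between versions of the geometry package):

library(geometry)
set.seed(1)
pts <- matrix(runif(300), ncol = 3)     # 100 illustrative points in the unit cube
hull <- convhulln(pts, options = "FA")  # "FA" asks qhull for generalised areas and volumes
hull$vol                                # volume enclosed by the convex hull of the points
hull$area                               # surface area of the hull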
Re: [R] Conditionally Updating Lattice Plots
Try this:

library(lattice)

fancy.lm <- function(x, y, fit = TRUE, resid = TRUE){
  model <- lm(y ~ x)
  y.pred <- predict(model) # Compute residuals for plotting
  res.x <- as.vector(rbind(x, x, rep(NA, length(x)))) # NAs induce breaks in line
  res.y <- as.vector(rbind(y, y.pred, rep(NA, length(x)))) # after Fig 5.1 of DAAG (clever!)
  plot(xyplot(y ~ x, pch = 20))
  trellis.focus()
  if (fit) panel.abline(model, col = "red")
  if (resid) panel.xyplot(res.x, res.y, col = "lightblue", type = "l")
  trellis.unfocus()
}

x <- jitter(c(1:10), factor = 5)
y <- jitter(c(1:10), factor = 10)
fancy.lm(x, y, fit = TRUE, resid = TRUE)

On Sun, Jul 20, 2008 at 12:44 PM, Bryan Hanson <[EMAIL PROTECTED]> wrote: > Hi All... > > I can't seem to find an answer to this in the help pages, archives, or > Deepayan's Lattice Book. > > I want to do a Lattice plot, and then update it, possibly more than once, > depending upon some logical options. Code below; it produces a second plot > page when the second update is called, from which I would infer that you > can't update the update or I'm not calling it correctly. I have a nagging > sense too that the "real" way to do this is with a non-standard use of > panel.superpose but I don't quite see how to do that from available > examples. > > TIA for any suggestions, Bryan > > > Example: a function then, the call to the function > > fancy.lm <- function(x, y, fit = TRUE, resid = TRUE){ > > model <- lm(y ~ x) > > y.pred <- predict(model) # Compute residuals for plotting > res.x <- as.vector(rbind(x, x, rep(NA, length(x)))) # NAs induce breaks in > line > res.y <- as.vector(rbind(y, y.pred, rep(NA, length(x)))) # after Fig 5.1 of > DAAG (clever!) > > p <- xyplot(y ~ x, pch = 20, > panel = function(...) { > panel.xyplot(...) # not strictly necessary if I understand correctly > }) > > plot(p, more = TRUE) > > if (fit) { > plot(update(p, more = TRUE, > panel = function(...){ > panel.xyplot(...) > panel.abline(model, col = "red") > }))} > > if (resid) { > plot(update(p, more = TRUE, > panel = function(...){ > panel.xyplot(res.x, res.y, col = "lightblue", type = "l") > }))} > > } > > x <- jitter(c(1:10), factor = 5) > y <- jitter(c(1:10), factor = 10) > fancy.lm(x, y, fit = TRUE, resid = TRUE) > > > Session Info >> sessionInfo() > R version 2.7.1 (2008-06-23) > i386-apple-darwin8.10.1 > > locale: > en_US.UTF-8/en_US.UTF-8/C/C/en_US.UTF-8/en_US.UTF-8 > > attached base packages: > [1] datasets grid grDevices graphics stats utils methods > [8] base > > other attached packages: > [1] fastICA_1.1-9 DescribeDisplay_0.1.3 ggplot_0.4.2 > [4] RColorBrewer_1.0-2 reshape_0.8.0 MASS_7.2-42 > [7] pcaPP_1.5 mvtnorm_0.9-0 hints_1.0.1-1 > [10] mvoutlier_1.3 robustbase_0.2-8 lattice_0.17-8 > [13] rggobi_2.1.9 RGtk2_2.12.5-3 > > loaded via a namespace (and not attached): > [1] tools_2.7.1 > > > > > __ > R-help@r-project.org mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code.

__ R-help@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-help PLEASE do read the posting guide http://www.R-project.org/posting-guide.html and provide commented, minimal, self-contained, reproducible code.
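For completeness, latticeExtra (not mentioned in the thread) offers another way to build a lattice plot up conditionally: layer() wraps panel code so it can be added to a trellis object with "+", and everything is drawn on one page when the object is printed. A minimal sketch, assuming latticeExtra is installed; the data and logical flags simply mirror those in the original post:

library(lattice)
library(latticeExtra)   # provides layer() for composing trellis objects with "+"
x <- jitter(1:10, factor = 5)
y <- jitter(1:10, factor = 10)
fit <- TRUE; resid <- TRUE
p <- xyplot(y ~ x, pch = 20)
if (fit)   p <- p + layer(panel.lmline(x, y, col = "red"))
if (resid) p <- p + layer(panel.segments(x, y, x, fitted(lm(y ~ x)), col = "lightblue"))
p   # printing the object draws all accumulated layers on a single page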