# ls -l `which curl-config`
-rwxr-xr-x 1 root root 6327 Oct 20 15:25 /usr/bin/curl-config
On Friday 01 November 2013 20:21:36 Michael Hannon wrote:
> The error message doesn't seem to refer to the tmp directory. What do you
> get from:
>
> ls -l `which curl-config`
>
> -- Mike
>
>
> On
The error message doesn't seem to refer to the tmp directory. What do you
get from:
ls -l `which curl-config`
-- Mike
On Fri, Nov 1, 2013 at 7:43 PM, Rainer Schuermann wrote:
> I'm trying to install.packages( "RCurl" ) as root but get
> ERROR: 'configure' exists but is not executable
>
>
I'm trying to install.packages( "RCurl" ) as root but get
ERROR: 'configure' exists but is not executable
I remember having had something like this before on another machine and
tried in bash what is described here, which helped me then:
http://mazamascience.com/WorkingWithData/?p=1185
# mkdir ~/
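A minimal sketch of that workaround, assuming the error is caused by a temporary directory mounted noexec as in the linked post (the ~/tmp path is only an example):
## Sketch of the TMPDIR workaround, assuming /tmp is mounted noexec.
## TMPDIR must be set before R starts, e.g. from the shell:
##   mkdir -p ~/tmp && TMPDIR=~/tmp R
## Inside that session the package is then built somewhere executable:
Sys.getenv("TMPDIR")        # should now point at ~/tmp
install.packages("RCurl")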
wudadan wrote
> Dear R users,
>
> I wonder if there is a way that I can plot time series data which are in
> a wide format like this:
>
> CITY_NAME 2000Q1 2000Q2 2000Q3 2000Q4 2001Q1 2001Q2 2001Q3 2001Q4 2002Q1 2002Q2
> CITY1 100.5210
On 1 November 2013 11:06, IZHAK shabsogh wrote:
> below is code to compute a Hessian matrix; I need to generate 29
> different matrices, for example the first
You may consider using the numDeriv (Numerical Derivatives) package for that instead; see:
http://cran.r-project.org/web/packages/numDeriv/v
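A minimal, untested sketch of that suggestion, with a made-up two-parameter function f and placeholder data x1 and x2 standing in for the poster's own code:
library(numDeriv)

## Placeholder objective with two parameters; replace with the real function.
f <- function(theta, x1, x2) (x1 - theta[1])^2 + (x2 - theta[2])^2

x1 <- rnorm(29)              # example data, one value per observation
x2 <- rnorm(29)
theta0 <- c(0, 0)            # point at which the Hessians are evaluated

## One 2 x 2 Hessian per observation: H[[1]] is M1, ..., H[[29]] is M29.
H <- lapply(seq_along(x1), function(i)
  hessian(func = f, x = theta0, x1 = x1[i], x2 = x2[i]))
length(H)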
Dear R users,
I wonder if there is a way that I can plot time series data which are in a
wide format like this:
CITY_NAME 2000Q1 2000Q2 2000Q3 2000Q4 2001Q1 2001Q2 2001Q3 2001Q4 2002Q1 2002Q2
CITY1 100.5210 101.9667 103.24933 104.050
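One possible sketch using base graphics, assuming the data sit in a data frame called wide with CITY_NAME in the first column (the numbers below are invented so the example runs):
## Plot each city as one line over the quarters.
wide <- data.frame(CITY_NAME = c("CITY1", "CITY2"),
                   "2000Q1" = c(100.5, 98.2), "2000Q2" = c(101.9, 99.0),
                   "2000Q3" = c(103.2, 99.8), "2000Q4" = c(104.1, 100.4),
                   check.names = FALSE)
m <- as.matrix(wide[, -1])                 # cities in rows, quarters in columns
matplot(t(m), type = "l", lty = 1, xaxt = "n", xlab = "quarter", ylab = "value")
axis(1, at = seq_len(ncol(m)), labels = colnames(m))
legend("topleft", legend = wide$CITY_NAME, col = seq_len(nrow(m)), lty = 1)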
William Dunlap writes:
> You can bullet-proof it a bit by making sure that length(formula)==3
> before assuming that formula[[2]] is the response. If length(formula)==2
> then there is no response term, only predictor terms. E.g., replace
> resp <- frm[[2]]
> with
> resp <- if (length(fr
Dear all,
I am trying to make a series of waffle plot-like figures for my data to
visualize the ratios of amino acid residues at each position. For each one
of 37 positions, there may be one to four different amino acid residues. So
the data consist of the positions, what residues are there, and t
Hi,
You may try:
dat1 <- read.table(text="
Friend1,Friend2
A,B
A,C
B,A
C,D",sep=",",header=TRUE,stringsAsFactors=FALSE)
indx <- as.vector(outer(unique(dat1[,1]),unique(dat1[,2]),paste))
res <-
cbind(setNames(read.table(text=indx,sep="",header=FALSE,stringsAsFactors=FALSE),paste0("Friend",1:2)),
Hi Richard
Untested: perhaps add some dummy factors with NA, give them " " as their
labels, and set the colour of the lines to 0 or "transparent".
I think that I used it partly for the same reason and in addition I was
combining 2 purposes with the groups and wanted to split them
Duncan
-Original
On 11/01/2013 08:22 AM, Magnus Thor Torfason wrote:
Sure,
I was attempting to be concise by boiling it down to what I saw as the root
issue, but you are right, I could have taken it a step further. So here goes.
I have a set of around 20M string pairs. A given string (say, A) can
either
You can bullet-proof it a bit by making sure that length(formula)==3
before assuming that formula[[2]] is the response. If length(formula)==2
then there is no response term, only predictor terms. E.g., replace
resp <- frm[[2]]
with
resp <- if (length(frm)==3) frm[[2]] else NULL
(or call st
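A small usage sketch of that guard; the helper name get_response is only for illustration:
get_response <- function(frm) if (length(frm) == 3) frm[[2]] else NULL

get_response(y ~ x + z)    # the symbol y
get_response(~ x + z)      # NULL: one-sided formula, no response term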
Hi David,
thanks for your quick answer!
David Winsemius writes:
> On Oct 31, 2013, at 1:27 PM, Andreas Leha wrote:
>
>> Hi all,
>>
>> what is the recommended way to quickly (and without much burden on the
>> memory) extract the response from a formula?
>
> If you want its expression value its
Hi,
Check whether this works:
vec1 <- c( 'eric', 'JOHN', 'eric', 'JOHN', 'steve', 'scott', 'steve', 'scott',
'JOHN', 'eric')
vec2 <- c( 'eric', 'JOHN', 'eric', 'eric', 'JOHN', 'JOHN', 'steve', 'steve',
'scott', 'scott')
vec3 <- c( 'eric', 'eric', 'JOHN', 'eric', 'JOHN', 'JOHN', 'steve', 'ste
On Nov 1, 2013, at 11:16 AM, cesar garcia perez de leon wrote:
> Dear all,
>
> We are conducting a study with a set of covariates and a time-to-event
> outcome.
> Covariates b1 and b3 violate proportionality.
Can you describe the basis for that statement?
> We applied a coxph with a "tt"
Hi Jean,
nevertheless this page "R-bloggers" looks really interesting, so I'll
work through the tutorial.
Thanks again for recommending this website.
Best regards
Claudia
Zitat von "Adams, Jean" :
Claudia,
I have not worked through the example myself. Since you seem to be getting
errors,
> I have no specific expertise here, but I just wanted to point out that
> this sounds like a losing strategy long term: As new packages and
> newer versions of packages come out that fix bugs and add features,
> you'll be unable to use them because you'll be stuck with 2.15.3. I
> suggest you bi
Hello,
Dr. Simon Wood told me how to force a cubic spline to pass through a
point. The code is as follows. Does anyone know how I can change the code
to force the first derivative to be a certain value? For example, the first
derivative of the constrained cubic spline equals 2 at point (0,
Hi Jim,
that works nice.
Thanks again!
Have a nice weekend, best regards
Claudia
Zitat von Jim Lemon :
On 10/31/2013 03:04 AM, palad...@trustindata.de wrote:
Hi Jim,
that's the second time that you have helped me in a short while, so thanks a lot!
But it seems to me quite laborious and error-pron
Hi Jean,
thanks again for your response. As I told you, I did the downloads and
double-checked that I selected the right directory.
But I noticed just now what happened:
The command in the example is :
eurMap <- readShapePoly(fn="NUTS_2010_60M_SH/Shape/data/NUTS_RG_60M_2010")
But it should be :
A sample of my data looks like this.
Header: Time Sender Receiver
1 1 2
1 1 3
2 2 1
2 2 1
3 1 2
3 1 2
There are 3 time periods (sessions) and the edgelists between nodes.
I
Dear all,
We are conducting a study with a set of covariates and a time-to-event
outcome.
Covariates b1 and b3 violate proportionality. We applied a coxph with a tt
term to evaluate the nature of time dependence.
Call:
coxph(formula = Surv(start - 1, stop, outcome) ~ tt(b1) +
b5 +
... but you may be interested in this:
http://andywoodruff.com/blog/why-are-choropleth-mercator-maps-bad-because-we-said-so/
Cheers,
Bert
On Fri, Nov 1, 2013 at 12:18 PM, Adams, Jean wrote:
> Claudia,
>
> I have not worked through the example myself. Since you seem to be getting
> errors, perh
Claudia,
I have not worked through the example myself. Since you seem to be getting
errors, perhaps a different example would help. Here are some more
choropleth maps (although these use US states rather than European
countries).
http://blog.revolutionanalytics.com/2009/11/choropleth-challenge-
Jason,
Thank you for your reply. Interesting ... so you think the 'classes' in the
error message "The combined number of values in at least one class..." refer
to the CDF bins rather than the sub-population classes that I defined.
That makes sense as I only defined two classes (!). I
I use the spsurvey package a decent amount. The cont.cdftest function bins the
CDF in order to perform the test, which I think is the root of the problem.
Unfortunately, the default is 3, which is the minimum number of bins.
I would contact Tom Kincaid or Tony Olsen at NHEERL WED directly to ask
If you want complex roots, there is a post by Ravi Varadhan from
2010, a reprint of which I found quickly by a google search at
http://r.789695.n4.nabble.com/finding-complex-roots-in-R-td2541514.html
On 1-Nov-13, at 11:20 AM, Don McKenzie wrote:
If you just want the nth root of X, use X^(1/
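For the specific case of n-th roots of a single number, a minimal sketch of the same idea with base R's polyroot(), which returns all n complex roots of z^n - x = 0:
## All n complex n-th roots of x, as the roots of the polynomial z^n - x.
nth_roots <- function(x, n) polyroot(c(-x, rep(0, n - 1), 1))
nth_roots(-256, 8)    # eight complex 8th roots of -256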
If you just want the nth root of X, use X^(1/n)
> x <- 256
> x^(1/8)
[1] 2
> x <- -256
> x^(1/8)
[1] NaN
It appears that you get the positive real root.
Is this all you wanted?
On 1-Nov-13, at 11:11 AM, Gary Dong wrote:
Dear R users,
I wonder if R has a default function that I can use to
Dear R users,
I wonder if R has a default function that I can use to do extraction of
roots.
Here is an example:
X N
2.5 5
3.4 7
8.9 9
6.4 1
2.1 0
1.1 2
I want to calculate Y = root(X)^N, where N repre
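A minimal sketch under the reading Y = X^(1/N), element by element (the N = 0 row gives Inf; adjust if a different meaning of root(X)^N is intended):
d <- data.frame(X = c(2.5, 3.4, 8.9, 6.4, 2.1, 1.1),
                N = c(5, 7, 9, 1, 0, 2))
d$Y <- d$X^(1 / d$N)    # vectorised N-th root, row by row
d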
On Nov 1, 2013, at 10:03 AM, Gary Dong wrote:
> Dear R users,
>
> I wonder how I can use R to identify the max value of each row, the column
> number, and the column name:
>
> For example:
>
> a <- data.frame(x = rnorm(4), y = rnorm(4), z = rnorm(4))
>
>> a
> x y z
> 1 -0.7
Yeah, now it works. Thanks a lot, William and everyone who helped me. This
forum is really helpful for beginners like me. :)
Mano.
On Fri, Nov 1, 2013 at 3:54 PM, William Dunlap wrote:
> You are not using the inv_ecdf function that Rui sent. His was
>
>inv_ecdf_orig <-
>
>fun
Hi,
Try:
cbind(a, do.call(rbind, apply(a, 1, function(x) {
  data.frame(max = max(x),
             max.col.num = which.max(x),
             max.col.name = names(a)[which.max(x)],
             stringsAsFactors = FALSE)
})))  ## assuming a unique max for each row
A.K.
On Friday, November 1, 2013 1:05 PM, Gary Dong wrote:
Dear R users,
I wonder
?which.max should start you down the right path
Clint Bowman            INTERNET: cl...@ecy.wa.gov
Air Quality Modeler     INTERNET: cl...@math.utah.edu
Department of Ecology   VOICE:    (360) 407-6815
PO Box 47600            FAX:      (360)
On Nov 1, 2013, at 6:50 AM, Ryan wrote:
> Good day all.
>
> I am hoping you can help me (and I did this right). I've been working in R
> for a week now, and have encountered a problem with forecast.lm().
>
> I have a list of 12 variables, all type = double, with 15 data entries.
> (I imported
Dear R users,
I wonder how I can use R to identify the max value of each row, the column
number, and the column name:
For example:
a <- data.frame(x = rnorm(4), y = rnorm(4), z = rnorm(4))
> a
x y z
1 -0.7289964 0.2194702 -2.4674780
2 1.0889353 0.3167629 -0.9208548
3 -0.6
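A minimal sketch along the lines of the which.max suggestions above, assuming each row has a unique maximum:
a <- data.frame(x = rnorm(4), y = rnorm(4), z = rnorm(4))
idx <- apply(a, 1, which.max)          # column number of each row's maximum
cbind(a,
      max.value    = apply(a, 1, max),
      max.col.num  = idx,
      max.col.name = names(a)[idx])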
Have you looked into the 'igraph' package?
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
> Behalf
> Of Magnus Thor Torfason
> Sent: Friday, November 01, 2013 8:23 AM
> To: r-help@
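A minimal sketch of how igraph could be used here, treating each equivalent pair as an edge and taking connected components as the equivalence classes (the pairs data frame is illustrative only):
library(igraph)

pairs <- data.frame(a = c("A", "B", "D"),
                    b = c("B", "C", "E"))
g  <- graph_from_data_frame(pairs, directed = FALSE)
cl <- components(g)
split(names(cl$membership), cl$membership)   # strings grouped into classes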
Hi Duncan,
Thanks for that template. Not quite the solution I was hoping for, but
that works!
Richard
On Thu, Oct 31, 2013 at 3:47 PM, Duncan Mackay wrote:
> Hi Richard
>
> If you cannot get a better suggestion this example from Deepayan Sarkar may
> help.
> It is way back in the archives and I
Hi
Yes, you are right. This gives the number of zeroes, not the max number of
consecutive zeroes.
Regards
Petr
> -Original Message-
> From: arun [mailto:smartpink...@yahoo.com]
> Sent: Friday, November 01, 2013 2:17 PM
> To: R help
> Cc: PIKAL Petr; Carlos Nasher
> Subject: Re: [R] Count number
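A minimal sketch of one way to get the maximum run of consecutive zeroes per ID instead, using rle() (the small ID/x data frame below is made up to mirror the thread's layout):
max_zero_run <- function(x) {
  r <- rle(x == 0)
  if (any(r$values)) max(r$lengths[r$values]) else 0L
}
data <- data.frame(ID = rep(1:2, each = 5),
                   x  = c(1, 0, 0, 0, 1,  0, 1, 0, 0, 1))
with(data, sapply(split(x, ID), max_zero_run))   # 3 for ID 1, 2 for ID 2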
(Inline)
On Fri, Nov 1, 2013 at 7:33 AM, Tstudent wrote:
>
> Uwe Ligges statistik.tu-dortmund.de> writes:
>
>>
>> Install a recent version of tawny that does not depend on the other package?
>
>
>
> The most recent version is this:
> http://cran.r-project.org/web/packages/tawny/index.html
>
> I
Thanks a lot Achim!
This helped a lot. I do not have exactly what I want yet, but I now have
promising ideas to gather my data and find what I'm looking for
(especially as.numeric(x, units = "hours")).
Regards,
Sartene Bel
> Message of 31/10/13 at 08:48
> From: "Achim Zeileis"
> To: sart...
Uwe Ligges statistik.tu-dortmund.de> writes:
>
> Install a recent version of tawny that does not depend on the other package?
The most recent version is this:
http://cran.r-project.org/web/packages/tawny/index.html
I can install it, but I can't load it without the parser package.
It seems true for 2.15
Good day all.
I am hoping you can help me (and I did this right). I've been working in
R for a week now, and have encountered a problem with forecast.lm().
I have a list of 12 variables, all type = double, with 15 data entries.
(I imported them from tab delimited text files, and then formatted
Sure,
I was attempting to be concise by boiling it down to what I saw as the
root issue, but you are right, I could have taken it a step further. So
here goes.
I have a set of around 20M string pairs. A given string (say, A)
can either be equivalent to another string (B) or not. If A
I am building a horizontal bar plot using ggplot2 - see the code below.
A couple of questions:
1. On the right side of the graph the value labels are inside the bars. How
could I move them to be outside the bars - the way they are on the left
side?
2. How can I make sure that the scale on my X axi
You are not using the inv_ecdf function that Rui sent. His was
inv_ecdf_orig <-
function (f)
{
x <- environment(f)$x
y <- environment(f)$y
approxfun(y, x)
}
(There is no 'xnew' in the environment of f.)
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
From:
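A small usage sketch of inv_ecdf_orig as defined above, on simulated data only:
set.seed(1)
f <- ecdf(rlnorm(500, meanlog = 9.73, sdlog = 2.04))
g <- inv_ecdf_orig(f)    # approximate quantile function built from f
g(0.5)                   # roughly the sample median
f(g(0.5))                # close to 0.5 when round-tripped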
All,
I've used the excellent package, spsurvey, to create spatially balanced samples
many times in the past. I'm now attempting to use the analysis portion of the
package, which compares CDFs among sub-populations to test for differences in
sub-population metrics.
- My data (count data) have
You could also try:
library(plyr)
newdf <- function(.data, ...) {
eval(substitute(data.frame(...)), .data, parent.frame())
}
x1 <- ddply(mtcars,.(cyl,gear), newdf, mgp=t(quantile(mpg)),hp=t(quantile(hp)))
#(found in one of the google group discussions)
#or
library(data.table)
dt1 <- data.
Install a recent version of tawny that does not depend on the other package?
Best,
Uwe Ligges
On 01.11.2013 12:10, Tstudent wrote:
I have R version 2.15.3. When I try to load it:
library(tawny)
I receive this response:
package ‘parser’ could not be loaded
The parser package is not on C
The release version of tawny has no such dependency and builds just fine on
CRAN. Try updating that instead.
Michael
On Nov 1, 2013, at 7:10, Tstudent wrote:
>
>
> I have R version 2.15.3. When I try to load it:
>
> library(tawny)
>
> I receive this response:
>
> package ‘parser’ could n
It would be nice if you followed the posting guidelines and at least
showed the script that was creating your entries now so that we
understand the problem you are trying to solve. A bit more
explanation of why you want this would be useful. This gets to the
second part of my tag line: Tell me w
Hi,
Try:
do.call(data.frame,c(x,check.names=FALSE))
A.K.
Hello,
I'm using the aggregate function in R 3.0.2. If I run the instruction
x <- aggregate(cbind(mpg, hp) ~ cyl + gear, data = mtcars, quantile), I get
as a result the following data.frame:
cyl gear mpg.0% mpg.25% mpg.50% mpg.75%
Pretty much what the subject says:
I used an env as the basis for a Hashtable in R, based on information
that this is in fact the way environments are implemented under the hood.
I've been experimenting with doubling the number of entries, and so far
it has seemed to be scaling more or less l
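For reference, a minimal sketch of the env-as-hash-table pattern being described; the key names and the size hint are arbitrary:
h <- new.env(hash = TRUE, size = 1e6L)        # hashed environment
assign("key1", 42, envir = h)
exists("key1", envir = h, inherits = FALSE)   # TRUE
get("key1", envir = h)                        # 42
length(ls(h, all.names = TRUE))               # current number of entries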
Daniel,
You can see better what is going on if you look at
as.list(x)
There you can see that cyl and gear are vectors but mpg and hp are matrices.
You can rearrange them using the do.call() function
x2 <- do.call(cbind, x)
dim(x2)
Jean
On Fri, Nov 1, 2013 at 7:08 AM, Daniel Fernandes wrote:
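A worked sketch combining the suggestions in this thread, using the aggregate call from the question (the mpg.0% ... hp.100% names come from the quantile output):
x <- aggregate(cbind(mpg, hp) ~ cyl + gear, data = mtcars, FUN = quantile)
str(as.list(x))                                  # mpg and hp are 5-column matrices
x2 <- do.call(data.frame, c(x, check.names = FALSE))
str(x2)                                          # 12 plain columns: cyl, gear, mpg.0%, ..., hp.100%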
Lorenzo,
I may be able to help you get started. You can use the XML package to grab
the information off the internet.
library(XML)
mylines <- readLines(url("http://bit.ly/1coCohq"))
closeAllConnections()
mylist <- readHTMLTable(mylines, asText=TRUE)
mytable <- mylist1$xTable
However, when I l
I think this gives a different result than the one OP asked for:
df1 <- structure(list(ID = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L), x = c(1, 0,
0, 1, 0, 0, 0, 1, 2, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0)), .Names = c("ID",
"x"), row.names = c(N
Hi,
Try this:
Lines1 <- readLines(textConnection("Peak Usage : init:2359296,
used:15859328, committed:15892480,max:50331648Current Usage : init:2359296
used:15857920,committed:15892480,max:50331648|---|
Peak Usage : init:2359296, used:15859328,
committed:15892480,max:50
Hello,
I'm using the aggregate function in R 3.0.2. If I run the instruction
x <- aggregate(cbind(mpg, hp) ~ cyl + gear, data = mtcars, quantile), I get
as a result the following data.frame:
cyl gear mpg.0% mpg.25% mpg.50% mpg.75% mpg.100% hp.0% hp.25% hp.50% hp.75% hp.100%
4   3    21.5   21.5
I have R version 2.15.3. When I try to load it:
library(tawny)
I receive this response:
package ‘parser’ could not be loaded
The parser package is not on CRAN anymore; it seems to be a dead project!
http://cran.r-project.org/web/packages/parser/index.html
If I try to manually install parser_0.1.tar.
Below is code to compute a Hessian matrix; I need to generate 29 different
matrices, one per observation: for example, the first elements of x1 and x2
are used to generate, say, matrix M1, and the second elements of x1 and x2
give matrix M2, up to matrix M29, corresponding to the total number of observations
Thank you very much, guys - it worked beautifully.
On Thu, Oct 31, 2013 at 7:55 AM, John Kane wrote:
> At a guess, don't use colour.
>
> John Kane
> Kingston ON Canada
>
>
> > -Original Message-
> > From: dimitri.liakhovit...@gmail.com
> > Sent: Wed, 30 Oct 2013 14:11:37 -0400
> > To: r
Thanks.
I converted my data structure (that is most of the confusion in my case)
into a data frame and then applied this function
y <- apply(y, 1, function(z) str_extract(z, "Current.*?[/|]"))
to get
"Current Usage : init:2359296, used:15857920, committed:15892480,
max:50331648|"
Mohan
F
try this:
> x <- rbind("Peak Usage: init:2359296, used:15859328,
> committed:15892480,max:50331648Current Usage : init:2359296,
> used:15857920,committed:15892480, max:50331648|---|")
> apply(x, 1, function(a) sub("(Current.*?[/|]).*", "\\1", a))
[1] "Peak Usage: init:235
Thanks, Bill & Duncan. Actually I tried values which are inside the defined
region. Please find the extracted script below:
> xnew<-rlnorm(seq(0,400,1), meanlog=9.7280055, sdlog=2.0443945)
> f <- ecdf(xnew)
> y <- f(x)
> y1<-f(200)## finding y for a given xnew value of
2
You can override the legend aesthetics, e.g.,
ggplot(df,aes(x=Importance,y=Performance,fill=PBF,size=gapsize))+
geom_point(shape=21,colour="black")+
scale_size_area(max_size=pointsizefactor) +
scale_fill_discrete(guide = guide_legend(override.aes = list(size = 4)))
Best,
Ista
On Thu,
Hi Thomas,
It depends whether you'd like to include all levels of each column in every
column. For including all values you could try something like this:
isAllDifferent <- function(z) !any(duplicated(z))
myData <- data.frame(Friend1=c("a", "a", "b", "c"), Friend2=c("b", "c
You could use the data.table package
require(data.table)
DT <- data.table(Friend1 = sample(LETTERS, 10, replace = TRUE), Friend2 =
sample(LETTERS, 10, replace = TRUE), Indicator = 1)
ALL <- data.table(unique(expand.grid(DT)))
setkey(ALL)
OTHERS <- ALL[!DT]
OTHERS[, Indicator := 0]
RESULT <- rbi
Hi
Another option is a sapply/split/sum construction:
with(data, sapply(split(x, ID), function(x) sum(x==0)))
Regards
Petr
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Carlos Nasher
> Sent: Thursday, October 31, 2013 6:46
I have data that looks like this:
Friend1, Friend2
A, B
A, C
B, A
C, D
And I'd like to generate some more rows and another column. In the new
column I'd like to add a 1 beside all the existing rows. That bit's
easy enough.
Then I'd like to add rows for all the possible directed combination
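One possible sketch of that step, building every directed pair with expand.grid() and flagging the observed ones with 1/0 (self-pairs are dropped here; keep them if they are wanted):
obs <- data.frame(Friend1 = c("A", "A", "B", "C"),
                  Friend2 = c("B", "C", "A", "D"),
                  stringsAsFactors = FALSE)
nms <- sort(unique(c(obs$Friend1, obs$Friend2)))
all <- expand.grid(Friend1 = nms, Friend2 = nms,
                   KEEP.OUT.ATTRS = FALSE, stringsAsFactors = FALSE)
all <- all[all$Friend1 != all$Friend2, ]                 # drop self-pairs
all$Indicator <- as.integer(paste(all$Friend1, all$Friend2) %in%
                            paste(obs$Friend1, obs$Friend2))
head(all)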
Hi,
I have a data frame with one column and several rows of the form:
"Peak Usage: init:2359296, used:15859328, committed:15892480,
max:50331648Current Usage : init:2359296, used:15857920,
committed:15892480, max:50331648|---|"
I tested the regex
Current.*?[\|]
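A minimal sketch of applying that regex in base R with regexpr()/regmatches() (perl = TRUE so the lazy *? behaves as intended; the string below is abbreviated from the question):
txt <- paste0("Peak Usage: init:2359296, used:15859328, committed:15892480, ",
              "max:50331648Current Usage : init:2359296, used:15857920, ",
              "committed:15892480, max:50331648|---|")
regmatches(txt, regexpr("Current.*?\\|", txt, perl = TRUE))
## [1] "Current Usage : init:2359296, used:15857920, committed:15892480, max:50331648|"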
There is no R code following
On 01 Nov 2013, at 05:34, Li Bowen wrote:
> Hi,
>
> I have been trying to build R with optimized BLAS library.
>
> I am using a Ubuntu 13.10 x86_64 desktop, on which I am able to build R
> with openblas without any problem:
>
> #BEGIN_SRC sh
> ./configure --enabl