<- x*fact(x-1)
> + print(sys.call())
> + cat(sprintf("X is %i\n",x))
> + print(ans)
> + }
>> fact(4)
> fact(x - 1)
> X is 0
> [1] 1
> fact(x - 1)
> X is 1
> [1] 1
> fact(x - 1)
> X is 2
> [1] 2
> fact(x - 1)
> X is 3
> [1] 6
> fact(4)
e({cat(sprintf("x= %i\n",x));return}),print=T)
> [1] "fact"
>> fact(4)
> Tracing fact(4) on entry
> x= 4
> Tracing fact(x - 1) on entry
> x= 3
> Tracing fact(x - 1) on entry
> x= 2
> Tracing fact(x - 1) on entry
> x= 1
> Tracing fact(x - 1) on
> mylist <- c( 2,1,3,5,4 )   <<< make a vector of numbers
> sort(mylist)
[1] 1 2 3 4 5 <<< in sorted order
> mylist <- c( "this", "is", "a", "test")
> sort(mylist)
[1] "a"    "is"   "test" "this" <<< in sorted order
> order(mylist)
[1] 3 2 4 1
On Fri, Apr 17, 2009 at 10:12 PM, Brendan Morse wrote:
> ...I would like to automatically generate a series of matrices and
> give them successive names. Here is what I thought at first:
>
> t1<-matrix(0, nrow=250, ncol=1)
>
> for(i in 1:10){
> t1[i]<-rnorm(250)
> }
>
> What I intended was
Judging from the traffic on this mailing list, a lot of R beginners
are trying to write things like
assign( paste( "myvar", i), ...)
where they really should probably be writing
myvar[i] <- ...
Do we have any idea where this bizarre habit comes from?
-s
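A minimal sketch of the contrast (variable names and data invented for illustration):

```r
# Anti-pattern: a crop of unrelated global variables t1, t2, t3, ...
for (i in 1:3) assign(paste("t", i, sep=""), rnorm(250))

# Better: one indexed container that is easy to loop over later
ts <- vector("list", 3)
for (i in 1:3) ts[[i]] <- rnorm(250)
length(ts[[2]])   # 250
```

The list version keeps everything in one object, so later code can iterate with lapply/sapply instead of reconstructing names with paste.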
On Wed, Apr 22, 2009 at 3:28 AM, Andreas Wittmann
wrote:
> i try to integrate lgamma from 0 to Inf.
Both gamma and log are positive and monotonically increasing for large
arguments.
What can you conclude about the integrability of log(gamma(x))?
-s
There are certainly formulas for solving polynomials numerically up to 4th
degree non-iteratively, but you will almost certainly get better results
using iterative methods.
Even the much more trivial formula for the 2nd degree (quadratic) is tricky
to implement correctly and accurately, see:
* Ge
On Tue, Jan 5, 2010 at 5:25 PM, Carl Witthoft wrote:
> quote:
> > There are certainly formulas for solving polynomials numerically up to
> 4th degree non-iteratively, but you will almost certainly get better results
> using iterative methods.
>
> I must be missing something here. Why not use t
I have a modest-size XML file (52MB) in a format suited to xmlToDataFrame
(package XML).
I have successfully read it into R by splitting the file 10 ways then
running xmlToDataFrame on each part, then rbind.fill (package plyr) on the
result. This takes about 530 s total, and results in a data.fram
read.table gives idiosyncratic results when the input is formatted
strangely, for example:
read.table(textConnection("a'b\nc'd\n"),header=FALSE,fill=TRUE,sep="",quote="'")
=> "c'd" "a'b" "c'd"
read.table(textConnection("a'b\nc'd\nf'\n'\n"),header=FALSE,fill=TRUE,sep="",quote="'")
=> "f'" "\n
I don't understand why 'quantile' works in this case:
> tt <- rep(c('a','b'),c(10,3))
> sapply(0:6/6,function(q) quantile(tt,probs=q,type=1))
0% 16.7% 33.3% 50% 66.7% 83.3% 100%
"a" "a" "a" "a" "a" "b" "b"
and also
> qua
Is there R software available for doing approximate matching of personal
names?
I have data about the same people produced by different organizations and
the only matching key I have is the name. I know that commercial solutions
exist, and I know I could code this from scratch, but I'd prefer to bu
If I have 2 n-dimensional arrays, how do I compose them into a n+1-dimension
array?
Is there a standard R function that's something like the following, but that
gives clean errors, handles all the edge cases, etc.
abind <- function(a,b) structure( c(a,b), dim = c(dim(a), 2) )
m1 <- array(1:6,c(
Is there a generic binary search routine in a standard library which
a) works for character vectors
b) runs in O(log(N)) time?
I'm aware of findInterval(x,vec), but it is restricted to numeric vectors.
I'm also aware of various hashing solutions (e.g. new.env(hash=TRUE) and
fastmatch), but
-s
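Absent such a routine, a hand-rolled version is short; a sketch (the function name bsearch is my own):

```r
# Binary search on a sorted character vector: O(log N) comparisons.
# Returns the index of a match, or NA if x is absent.
bsearch <- function(x, vec) {
  lo <- 1L; hi <- length(vec)
  while (lo <= hi) {
    mid <- (lo + hi) %/% 2L
    if (vec[mid] == x) return(mid)
    if (vec[mid] < x) lo <- mid + 1L else hi <- mid - 1L
  }
  NA_integer_
}
vec <- c("apple", "banana", "cherry", "date")   # must already be sorted
bsearch("cherry", vec)   # 3
```

Note that `<` on strings uses locale collation, so the vector must be sorted under the same collation the search uses.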
On Wed, Apr 6, 2011 at 12:59, Martin Morgan wrote:
> On 04/04/2011 01:50 PM, William Dunlap wrote:
>
>> -Original Message-
>>> From: r-help-boun...@r-project.org
>>> [mailto:r-help-boun...@r-project.org] On Behalf Of Stavros Macrakis
>
I have a file of data where each line is a series of name-value pairs, but
where the names are not necessarily the same from line to line, e.g.
a=1,b=2,d=5
b=4,c=3,e=3
a=5,d=1
I would like to create a data frame which lines up the data in the
corresponding columns. In this case, this wo
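A base-R sketch of the alignment (input strings copied from the example above; helper names are mine):

```r
lines <- c("a=1,b=2,d=5", "b=4,c=3,e=3", "a=5,d=1")
rows <- lapply(strsplit(lines, ","), function(ps) {
  kv <- do.call(rbind, strsplit(ps, "="))     # 2-column matrix: name, value
  setNames(as.numeric(kv[, 2]), kv[, 1])
})
cols <- unique(unlist(lapply(rows, names)))   # all column names, in order seen
mat <- do.call(rbind, lapply(rows, function(r) unname(r[cols])))  # NA where absent
df <- as.data.frame(mat)
names(df) <- cols
df   # columns a b d c e, with NAs for the missing pairs
```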
ote:
> Use plyr::rbind.fill? That does match up columns by name.
> Hadley
>
> On Thu, Jul 28, 2011 at 5:23 PM, Stavros Macrakis
> wrote:
> > I have a file of data where each line is a series of name-value pairs,
> but
> > where the names are not necessarily the same f
> Last time, I was told that I couldn't list my R package and associated
papers as a research activity with
> substantial impact because it was outside my official scope of work.
(Even though I wrote it so I could
> *do* my work.)
That seems wrong. My impression is that "method" papers were frequ
Has anyone here implemented Jon Kleinberg's burst detection algorithm
("Bursty and Hierarchical Structure in Streams"
http://www.cs.cornell.edu/home/kleinber/bhs.pdf)?
I'd rather not reimplement if there's already running code available
Thanks,
-s
That won't work because R has special rules for evaluating things in the
function position. Examples:
*OK*
min(1:2)
"min"(1:2)
f<-min; f(1:2)
do.call(min,list(1:2))
do.call("min",list(1:2)) # do.call converts string->function
*Not OK*
("min")(1:2) # string in function positi
On Thu, Dec 18, 2008 at 1:37 PM, Wacek Kusnierczyk <
waclaw.marcin.kusnierc...@idi.ntnu.no> wrote:
> Kenn Konstabel wrote:
> >> ...foo({x = 2})
> ...
>
> This is legal but doesn't do what you probably expect -- although
> > documentation for `<-` says the value (returned by <-) is 'value' i.e.
> >
What is the equivalent for formatted tabular output of the various very
sophisticated plotting tools in R (plot, lattice, ggplot2)?
In particular, I'd like to be able to produce formatted Excel spreadsheets
(using color, fonts, borders, etc. -- probably via Excel XML) and formatted
HTML tables (id
David, Tobias,
Thanks for your pointers to the various HTML and OpenOffice tools. I will
look into them. odfWeave looks particularly promising since "OpenOffice can
be used to export the document to MS Word, rich text format, HTML, plain
text or pdf formats." It looks as though I have to learn
You might consider using the 'bit' library and use two bits per base. You
could then wrap this in an object with appropriate functions (bit.`[`,
etc.).
-s
On Wed, Dec 24, 2008 at 10:26 AM, Gundala Viswanath wrote:
> Dear all,
>
> What's the R way to compress the string into smaller 2
Sorry, I meant
`[.gene`
where gene would be your new class.
-s
On Wed, Dec 24, 2008 at 11:00 AM, Stavros Macrakis wrote:
> You might consider using the 'bit' library and use two bits per base. You
> could then wrap this in an object with appropriat
Any opinions on the list about these courses?
Are they addressed to business analysts who are whizzes at Excel?
To programmers?
To statisticians?
To mathematicians?
Has anyone on the list attended them?
Did they find them more useful than working through a book or some online
resource?
On Sat, Dec 27, 2008 at 1:32 PM, Sean Zhang wrote:
> My question: How to use a character vector that records object names as
> function input argument?
> ...
> I asked this question very recently and was advised to use get(). get()
> works when passing one single object name. but it does not work
How about
c(rbind(A,B,C))
a<-1:5
> b<-11:15
> c<-21:25
> rbind(a,b,c)
[,1] [,2] [,3] [,4] [,5]
a    1    2    3    4    5
b 11 12 13 14 15
c 21 22 23 24 25
> c(rbind(a,b,c))
[1] 1 11 21 2 12 22 3 13 23 4 14 24 5 15 25
On Mon, Dec 29, 2008 at 4:01 AM, ykank wrote:
I believe this does what you want:
m[-sample(which(m[,1]<8 & m[,2]>12),2),]
Analysis:
Get a boolean vector of rows fitting criteria:
m[,1]<8 & m[,2]>12
What are their indexes?
which(...)
Choose two among those indexes:
sample(...,2)
Choose all except the selected rows from the or
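A toy run of the recipe above (matrix values invented so that exactly rows 1 and 3 qualify):

```r
m <- cbind(c(5, 9, 3, 7, 12), c(15, 20, 14, 10, 18))
which(m[, 1] < 8 & m[, 2] > 12)                   # 1 3
m2 <- m[-sample(which(m[, 1] < 8 & m[, 2] > 12), 2), ]
nrow(m2)                                          # 3 rows remain
```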
On Tue, Dec 30, 2008 at 12:53 PM, hadley wickham wrote:
> On Tue, Dec 30, 2008 at 10:21 AM, baptiste auguie wrote:
>> I thought this was a good candidate for the plyr package, but it seems that
>> l*ply functions are meant to operate only on separate list elements:...
>> Perhaps a new case to co
On Tue, Dec 30, 2008 at 8:44 AM, wrote:
> It is no homework. It is part of a project where a binary matrix, whose 1s
> represent the position of the highest DWT coefficients energy, is used as a
> template to extract signal features.
> The approach I am following requires each row of the binary
On Wed, Dec 31, 2008 at 6:12 AM, Michael Pearmain wrote:
> summary(z.out.1)
> summary(s.out.1)
> hist(s.out.1$qi$ev)...
> This seemed a rather long winded way of doing things to me and a simple for
> loop should handle this, as later i want it to be dynamic for a number of
> groups so my new code
> On Tue, 30 Dec 2008, m.u.r. wrote:
>> according to the documentation, the xlim parameter should bound the
>> range of the function being plotted, and is returned as the extreme
On Wed, Dec 31, 2008 at 4:18 AM, Prof Brian Ripley
replied:
> Where does it say that?
True, it doesn't say that. As
On Tue, 30 Dec 2008, m.u.r. wrote in thread [R] plot.stepfun xlim:
> foo <- stepfun(0.5, c(1, 0));
On Wed, Dec 31, 2008 at 4:18 AM, Prof Brian Ripley
replied:
> Why are you adding two blank commands via the semicolons?
The R parser (2.8.0 Windows) does not seem to have the concept of
blank comma
On Wed, Dec 31, 2008 at 12:44 PM, Guillaume Chapron
wrote:
>> m[-sample(which(m[,1]<8 & m[,2]>12),2),]
> Supposing I sample only one row among the ones matching my criteria. Then
> consider the case where there is just one row matching this criteria. Sure,
> there is no need to sample, but the ins
I think there's a pretty simple solution here, though probably not the
most efficient:
t(sapply(split(a,a$ID),
         function(q) with(q, c(ID=unique(ID), x=unique(x), y=max(y)-min(y)))))
Using 'unique' instead of min or [[1]] has the advantage that if x is
in fact not time-invariant, this gives an err
runif appears to give 31 bits of precision, but this isn't mentioned
in the documentation page. The R numeric type supports 53 digits of
precision, and other numeric functions (sin, etc.) give full-precision
results. So I'd think that either runif should give full precision or
its documentation sh
There is another undocumented glitch in sample:
sample(2^31-1,1) => OK
sample(2^31 ,1) => Error
I suppose you could interpret "sampling takes place from '1:x' " to
mean that 1:x is actually generated, but that doesn't work as an
explanation either; on my 32-bit Windows box, 1:(2^29) giv
On Fri, Jan 2, 2009 at 4:03 PM, Duncan Murdoch wrote:
> On 02/01/2009 2:45 PM, Stavros Macrakis wrote:
>> ...So I'd think that either runif should give full precision or
>> its documentation should mention this limitation.
>
> It refers to the .Random.seed page for detai
On Fri, Jan 2, 2009 at 5:56 PM, Duncan Murdoch wrote:
> I don't agree. If you add too much technical detail to a topic, then people
> don't "work through it". I'd say the r pages generally give enough
> detail now, but not too much. If you add every detail that might interest
> someone somewher
R's variable passing mechanism is not call by value, but a mixture of
unevaluated arguments (like the obsolete Lisp FEXPR) and call-by-need.
It is like FEXPR in that the function can capture the unevaluated
argument (using 'substitute'). But it is like call-by-need in that
normal use of the argum
Watch the operator precedences. In R (and many other languages)
-1^2 == -(1^2) == -1
Perhaps you intended:
(-1)^2 == 1
On Sat, Jan 3, 2009 at 3:32 PM, wrote:
> I had a question about the basic power functions in R.
>
> For example from the R console I enter:
>
> -1 ^ 2
> [1] -1
>
> b
On Sat, Jan 3, 2009 at 7:02 PM, wrote:
> R's interpreter is fairly slow due in large part to the allocation of
> argument lists and the cost of lookups of variables, including ones
> like [<- that are assembled and looked up as strings on every call.
Wow, I had no idea the interpreter was so awf
On Sun, Jan 4, 2009 at 4:50 PM, wrote:
> On Sun, 4 Jan 2009, Stavros Macrakis wrote:
>> On Sat, Jan 3, 2009 at 7:02 PM, wrote:
>>> R's interpreter is fairly slow due in large part to the allocation of
>>> argument lists and the cost of lookups of variables,
I
Thanks for the explanations of the internals.
I understand about the 'redefining log' problem in the interpreter,
but I wasn't aware of the NAMED counter. In both cases, beyond static
analysis, dynamic Java compilers do a pretty good job, but I don't
know if Java bytecodes are suitable for R, and
On Mon, Jan 5, 2009 at 6:30 AM, wrote:
> My question is motivated by dealing with pairs of values, like (3,12.5),
> (16,2.98), and so on
> that are mapped to a cartesian plain (Time, Frequence)
> I miss C multidimensional arrays. I am trying to simulate the 3rd dimension
> by declaring a matri
? `break`
On Tue, Jan 6, 2009 at 7:58 PM, kayj wrote:
> I was wondering if there is anything that breaks a loop in R
>
[[alternative HTML version deleted]]
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
>
> "Some people familiar with R describe it as a supercharged version of
> Microsoft's Excel spreadsheet software..."
>
It is easy to ridicule this line from the NYT article. But this is not only
a very sensible comment by a smart reporter, but also one that is good for
R:
It is good for R beca
On Thu, Jan 8, 2009 at 5:52 AM, Christian Kamenik <
christian.kame...@giub.unibe.ch> wrote:
>
> 'Apply' is a great thing for running functions on rows or columns of a
> matrix:
>
> X <- rnorm(20, mean = 0, sd = 1)
> dim(X) <- c(5,4)
> apply(X,2,sum)
>
> Is there a way to use apply for excluding ro
On Thu, Jan 8, 2009 at 1:26 AM, Robert Wilkins wrote:
> ...The user interface for R, otherwise known as the S programming language
> has the same origins as C and Unix
We could take this one step further, and note that C's design (its "user
interface"?) was based on BCPL, which was developed
On Sun, Jan 11, 2009 at 9:04 PM, Jörg Groß wrote:
> ...now I want to get the mean and sd, as long as the column is not of type
> factor.
> ...But how can I check this, if I don't know the column name?
>
...
> is.factor(d[1])
> produces "FALSE".
>
Try is.factor(d[[1]]). Remember that in R, x[...
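The distinction in a two-line example (data frame invented for illustration):

```r
d <- data.frame(f = factor(c("a", "b")), x = 1:2)
is.factor(d[1])    # FALSE: d[1] is a one-column data frame
is.factor(d[[1]])  # TRUE:  d[[1]] is the column itself
```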
I was wondering if there were any R packages supporting fast set intersection.
I am aware of base::intersect, bit::`&`, and sets::set_intersection,
each of which has its advantages. base::intersect is
space-efficient and accepts arbitrary unsorted lists of arbitrary base
types. sets::set_int
On Sun, Jan 18, 2009 at 2:22 PM, Wacek Kusnierczyk
wrote:
>> x <- c("abcdef", "defabc", "qwerty")
>> ...[find] all elements where the word 'abc' does not appear (i.e. 3 in this
>> case of 'x').
> x[-grep("abc", x)]
> which unfortunately fails if none of the strings in x matches the pattern,
> i
I'm rather confused by the semantics of factors.
When applied to factors, some functions (whose results are elements of
the original factor argument) return results of class factor, some
return integer vectors, some return character vectors, some give
errors. I understand some but not all of this
Given a vector of reference strings Ref and a vector of test strings
Test, I would like to find elements of Test which do not contain
elements of Ref as \b-delimited substrings.
This can be done straightforwardly for length(Ref) < 6000 or so (R
2.8.1 Windows) by constructing a pattern like \b(a|b|
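For small Ref the construction looks like this (example strings are mine; base R's regex engine supports \b):

```r
Ref  <- c("cat", "dog")
Test <- c("the cat sat", "catalog entry", "no match here")
pat  <- paste("\\b(", paste(Ref, collapse = "|"), ")\\b", sep = "")
Test[!grepl(pat, Test)]   # "catalog entry" "no match here"
```

The \b anchors keep "cat" from matching inside "catalog"; the scaling problem above is that the alternation pattern grows with length(Ref).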
Though there are certainly some *ir*rational reasons for IT
departments' behavior, there are also many rational reasons that IT
departments try to control the software running in their
organizations.
Condescendingly assuming that the IT department is run by idiots whose
decisions are ruled by emot
A first step that would make the current Web page look much better
would be to anti-alias the demonstration graphic. The current graphic
makes R graphics seem (falsely!) to be very primitive. I'm afraid I
don't know how to do the anti-aliasing myself.
Replacing the fixed-width, typewriter-style f
Perhaps rather than globally saying it is "utter nonsense" you would
care to refute what you think is wrong about it?
-s
PS "Tyrants"? Wow, we are really dramatizing life at work now
On Mon, Feb 2, 2009 at 3:14 PM, Rolf Turner wrote:
>
> On 2/02/2009, at 4:29 PM, Murray Coop
Take a look at the run-length encoding function rle. I believe
rle(k)$lengths gives you exactly what you want.
-s
On Wed, Feb 4, 2009 at 10:19 AM, axionator wrote:
> Hi all,
> I've a vector with entries, which are all of the same type, e.g. string:
> k <- c("bb", "bb", "bb", "aa", "
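With a vector of that shape (values here invented):

```r
k <- c("bb", "bb", "bb", "aa", "aa", "cc")
rle(k)$lengths   # 3 2 1
rle(k)$values    # "bb" "aa" "cc"
```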
Since both comparison and negation are well-defined for time differences, I
wonder why abs and division are not defined for class difftime. This
behavior is clearly documented on the man page: "limited arithmetic is
available on 'difftime' objects"; but why? Both are natural, semantically
sound, an
The R FAQ is very helpful about installing R on various Linuxes, but doesn't
seem to discuss the advantages of one distribution over another. I am new
to Linux (though not to Unix!), and would appreciate some guidance from
those with experience.
I plan to set up a headless Linux x86 server for th
Thanks for your help!
You mention amd64 -- I didn't realize that AMD and Intel were different for
this purpose. I will actually be installing on a VM on top of an Intel
box. Does that change things?
Thanks again,
-s
On Mon, Feb 9, 2009 at 8:03 AM, clion wrote:
>
> this is good, but it doesn't solve my main problem (which I unfortunately
> din't post - very sorry )
> I would like to filter may data , for example by:
>
> dat.sub<-dat[dat$age>10 & dat$NoCaps>2,]
>
> So I need a column where the number of Captur
On Mon, Feb 9, 2009 at 12:36 PM, Neotropical bat risk assessments <
neotropical.b...@gmail.com> wrote:
> Read a string of data and had this message during a plot run.
>
> Warning message:
> closing unused connection 3 (Lines)
>
> Not sure what this means or if it should be of concern.
>
This simp
On Tue, Feb 10, 2009 at 7:51 AM, Gabor Grothendieck
wrote:
> ...Also while Maxima is more sophisticated in terms of algorithms,
Glad to hear it... (I first worked on Maxima in 1971...)
> yacas is actually more sophisticated from the viewpoint of its language which
> borrows ideas from both impe
On Wed, Feb 11, 2009 at 12:32 PM, Greg Snow wrote:
> ...The c-style of /* */ allows both types and you can comment out part of a
> line, but it is not simple to match and has its own restrictions. Friedl in
> his regular expressions book takes 10 pages to develop a pattern to match
> these (an
On Wed, Feb 11, 2009 at 6:20 AM, Robin Hankin wrote:
>> library(Brobdingnag)
>> exp(1000)/(exp(1007)+5)
> [1] NaN
>
>> as.numeric(exp(as.brob(1000))/(exp(as.brob(1007))+5))
> [1] 0.000911882
Though brob is certainly useful in many cases, it can't substitute for
thinking about numeric issues (roun
On Wed, Feb 11, 2009 at 4:38 PM, Wacek Kusnierczyk
wrote:
> Stavros Macrakis wrote:
>> For example:
>>> x<-40; log(exp(x)+1)-x
>> [1] 0
>>> x<-as.brob(40); log(exp(x)+1)-x
>> [1] -exp(-Inf)
>> The correct answer is about 4e-18. Perhaps Ryacas
On Thu, Feb 12, 2009 at 4:28 AM, Gavin Simpson wrote:
> When I'm testing the speed of things like this (that are in and of themselves
> very quick) for situations where it may matter, I wrap the function call in a
> call
> to replicate():
>
> system.time(replicate(1000, svd(Mean_svd_data)))
>
> t
On Fri, Feb 13, 2009 at 10:47 AM, Gabor Grothendieck
wrote:
> See ?get and try:
Interesting. I hadn't paid attention to the 'mode' argument before.
Where would it be advisable to use anything but mode='any' or mode='function'?
-s
Combining the various approaches on the list, here's a simple
one-liner that puts the NAs at the end:
t(apply(mat,1,function(r) { dr<-duplicated(r); c( r[!dr],
rep(NA,sum(dr)) ) } ))
If you don't care where the NAs are, the following is a tad shorter
and perhaps clearer:
mat[ t(apply(mat
On Fri, Feb 13, 2009 at 12:25 PM, Berwin A Turlach
wrote:
> On Fri, 13 Feb 2009 11:11:28 -0500
> Stavros Macrakis wrote:
>> Where would it be advisable to use anything but mode='any' or
>> mode='function'?
>
> I guess the answer to this question is mor
rhaps clearer:
mat[ t(apply(mat,1,duplicated)) ] <- NA # modifies mat
-s
On Fri, Feb 13, 2009 at 12:49 PM, Stavros Macrakis
wrote:
> Combining the various approaches on the list, here's a simple
> one-liner that puts the NAs at the end:
>
>
Assuming your matrix is:
mm <- matrix(runif(6*6),6,6)
And your blocks are defined by integers or factors:
cfact <- c(1,1,1,2,3,3)
rfact <- c(1,1,1,2,2,3)
Then the following should do the trick:
matrix(tapply(mm, outer(rfact,cfact,paste), mean),
length(unique(rfact)))
On Mon, Feb 16, 2009 at 4:23 PM, Martin Morgan wrote:
> Stavros Macrakis writes:
> >matrix(tapply(mm, outer(rfact,cfact,paste), mean),
> > length(unique(rfact)))
>
> or the variant
>
> idx <- outer(rfact, (cfact - 1) * max(rfact), "+")
There seems to be a problem in the way `+` is dispatched for
POSIXt/difftime (R 2.8.0 Windows).
With the following definitions:
t0 <- as.POSIXct('2009-01-01 00:00')
halfhour.mins <- as.difftime(30,units='mins')
I would have thought that the straightforward answer to Suresh's
question woul
On Mon, Feb 16, 2009 at 7:52 PM, Bert Gunter wrote:
> I suppose the clean way to do this would be to define a cartesian product of
> two factors with the induced lexicographic order (is there a standard
> function for doing this?):"
>
> Of course. ?interaction.
Perhaps my specification was unclea
I recently traced a bug of mine to the fact that cumsum(s)[length(s)]
is not always exactly equal to sum(s).
For example,
x<-1/(12:14)
sum(x) - cumsum(x)[3] => 2.8e-17
Floating-point addition is of course not exact, and in particular is
not associative, so there are various possible r
Here is one kind of weighted quantile function.
The basic idea is very simple:
wquantile <- function( v, w, p )
{
ord <- order(v)   # compute the ordering once, before v is overwritten
v <- v[ord]
w <- w[ord]
v[ which.max( cumsum(w) / sum(w) >= p ) ]
}
With some more error-checking and general clean-up, it looks like this:
# Simple weigh
Some minor improvements and corrections below
# Simple weighted quantile
#
# v A vector of sortable observations
# w A numeric vector of positive weights
# p The quantile 0<=p<=1
#
# Nothing fancy: no interpolation etc.; NA cases not thought through
wquantile <- function(v,w=rep(1,length(v)),p
Hmm. Why not use the same method to guarantee the same result? Or at
least document the possibility that cumsum(x)[length(x)] != sum(x)...
that seems like an easy trap to fall into.
-s
On Wed, Feb 18, 2009 at 11:39 AM, Martin Maechler
wrote:
>>>>>> "
Duncan, Berwin, Martin,
Thanks for your thoughtful explanations, which make perfect sense.
May I simply suggest that the non-identity between last(cumsum) and
sum might be worth mentioning in the cumsum doc page?
-s
On Thu, Feb 19, 2009 at 9:50 AM, Jorge Ivan Velez
wrote:
> mydata$trt<-with(mydata,paste(diet,vesl,sep=""))
Besides the above (good!) solution, you might want to understand why
your original solution didn't work:
>> > mydata$trt<-ifelse(mydata$diet=="C" && mydata$vesl=="A", "CA",
>> +
Inspired by the exchange between Rolf Turner and Wacek Kusnierczyk, I
thought I'd clear up for myself the exact relationship among the
various sequence concepts in R, including not only generic vectors
(lists) and atomic vectors, but also pairlists, factor sequences,
date/time sequences, and diffti
On Tue, Feb 24, 2009 at 3:10 PM, Sean Zhang wrote:
> ...Want to delete many variables in a dataframe
> df<-data.frame(var.a=rnorm(10), var.b=rnorm(10),var.c=rnorm(10))
> df[,'var.a']<-NULL #this works for one single variable
> df[,c('var.a','var.b')]<-NULL #does not work for multiple variab
On Tue, Feb 24, 2009 at 3:01 PM, Bert Gunter wrote:
> Nothing wrong with prior suggestions, but strictly speaking, (fully) sorting
> the vector is unnecessary.
>
> y[y > quantile(y, 1- p/length(y))]
>
> will do it without the (complete) sort. (But sorting is so efficient anyway,
> I don't think yo
On Mon, Feb 23, 2009 at 9:52 PM, Fox, Gordon wrote:
> This is a seemingly simple problem - hopefully someone can help.
> Problem: we have two integers. We want (1) all the common factors, and
> (2) all the possible products of these factors. We know how to get (1),
> but can't figure out a general
"L'esprit de l'escalier" strikes again
An even simpler statement of your original problem:
Find the factors that A and B have in common.
If A and B are fairly small (< 1e7, say), a very direct approach is:
which( ! (A %% 1:min(A,B)) & !(B %% 1:min(A,B)) )
Is this "brute forc
Argh! The second (concise) version should have |, not & !!!
-s
On 2/24/09, Stavros Macrakis wrote:
> "L'esprit de l'escalier" strikes again
>
> An even simpler statement of your original problem:
>
> Find the factors that A and B ha
On Wed, Feb 25, 2009 at 9:25 AM, Fox, Gordon wrote:
> The tricky part isn't finding the common factors -- we knew how to do
> that, though not in so concise a fashion as some of these suggestions.
> It was finding all their products without what I (as a recovered Fortran
> programmer) would call "
It does seem sensible that median and quantile would work for the
POSIXct, Date, and other classes for which they are logically
well-defined, but strangely enough, they do not (except for odd-length
input). The summary function has a special case (summary.POSIXct)
which does the straightforward, o
On Sun, Mar 15, 2009 at 4:12 PM, diegol wrote:
> ...This could be done in Excel much tidier in my opinion (especially the
> range_aux part), element by element (cell by cell)...
If you'd do it element-by-element in Excel, why not do it
element-by-element in R?
Create a table with the limits of t
On Sun, Mar 15, 2009 at 6:34 PM, diegol wrote:
>> If you'd do it element-by-element in Excel, why not do it
>> element-by-element in R?
>
> Well, actually I was hoping for a vectorized solution so as to avoid
> looping.
That's what I meant by element-by-element. A vector in R corresponds
to a ro
Using cut/split seems like gross overkill here. Among other things,
you don't need to generate labels for all the different ranges.
which(x<=range)[1]
seems straightforward enough to me, but you could also use the
built-in function findInterval.
-s
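For comparison, findInterval on illustrative breakpoints:

```r
breaks <- c(0, 10, 100, 1000)           # must be sorted non-decreasingly
findInterval(c(5, 50, 5000), breaks)    # 1 2 4: interval index of each value
```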
read to be easier to follow. For example
> I could replace the last four lines for only:
>
>product <- x*percent
>ifelse(product< minimum, minimum, product)
>
> But I believe you refer to the cut/split functions rather. I agree that
> "which(x<=range)[1]"
Greg,
Thanks for helping this user.
I assume you mean the permn function in the combinat package? For a
new user (including me), it is not obvious how to get from "the
permutations function in the Combinations package" to that.
I see there is also a function gtools::permutations. The gtools
pac
On Tue, Mar 17, 2009 at 5:24 AM, Gavin Simpson wrote:
> On Mon, 2009-03-16 at 18:43 -0400, Stavros Macrakis wrote:
>> ... no way to find relevant functions, and no way of knowing which one to
>> use if there is more than one.
> That is what the Task Views are meant to add
On Tue, Mar 17, 2009 at 12:41 PM, Greg Snow wrote:
> No, I meant the Combinations package, it is apparently an Omegahat package
> (http://www.omegahat.org/Combinations/). It looks similar to the permn
> function as far as the usage goes, but the documentation includes additional
> information
MATLAB's nthroot calculates the real nth root. For positive a, you
can use a^(1/b); for negative a, b must be odd for the result to be
real, and you can use -abs(a)^(1/b). So you could write:
nthroot <- function(a,b) ifelse( b %% 2 == 1 | a >= 0,
sign(a)*abs(a)^(1/b), NaN)
This is a so-called V
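A few sanity checks of the one-liner above:

```r
nthroot <- function(a, b) ifelse(b %% 2 == 1 | a >= 0,
                                 sign(a) * abs(a)^(1/b), NaN)
nthroot(27, 3)     # ~3
nthroot(-27, 3)    # ~-3
nthroot(-16, 4)    # NaN: a negative number has no real even-order root
```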
On Fri, Mar 20, 2009 at 8:18 PM, science! wrote:
> I am aware that it is easily possible to create var names on the fly. e.g.
> assign(paste("m",i,sep=""),j)
> but is it possible to assign dataframes to variables created on the fly?
>
> e.g.
> If I have a dataframe called master and I wanted to su