Stefan Evert wrote:
> A couple of remarks on vQ's naive benchmark:
>
>> f.rep = function(n, m) replicate(n, rnorm(m))
>
> I suppose you meant
>
> f.rep = function(n, m) replicate(n, mean(rnorm(m)))
>
> which doesn't make a substantial speed difference, though.
indeed, thanks; i've already pos
Dear R-listers,
I am building a prediction model starting with many, many variables. I
use the 'stepAIC' procedure in the MASS package, and the model building
process takes several hours to complete. At the very end, I am
occasionally confronted with warnings like this one:
Warning messages:
1: I
i have the following code - assimilating the maximum annual discharge each
year from a daily discharge record from year 1989-2005.
m <- read.table("D:/documents/5 stations/01014000.csv", sep =",")
z <- zoo(m[,4],as.Date(as.character(m[,3]), "%m/%d/%Y"))
x <- aggregate(z, floor(as.numeric(as.yea
Hi,
I am looking for two ways to speed up my computations:
1. Is there a function that efficiently computes the 'sandwich product' of
three matrices, say, ZPZ'
2. Is there a function that efficiently computes the determinant of a
positive definite symmetric matrix?
Thanks,
S.A.
Dear R-experts,
Need your help.
Dear R- Experts,
Seek your help.
I created a time sequence using:
x[i] <-chron(dates, tt, format=c(dates="y-m-d", tt="h:m:s"))
first element in the list is displayed as: (09-01-01 00:00:00)
Further elements are:
(09-01-01 00:01:00)
(09-01-01 00:02:00)
(09-0
Michael Kubovy wrote:
Dear r-helpers,
I want to show that time is flowing CCW in the following:
require(circular)
len <- 8
labl <- as.character(c(0, 1, 1, 1, 0, 0, 1, 0))
r <- circular(2*pi* (rep(c(1, 3, 6), each = 200)/len + rnorm(600, 0,
0.025)))
r.dens <- density(r, bw = 25, adjust = 4, k
On Tue, 17 Feb 2009, Nathan S. Watson-Haigh wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
I'm trying to add some Fortran 90 code to an existing package.
When I compile and load the file manually like:
SHELL> R CMD SHLIB file.f90
R> dyn.load("file.so")
I can use the .Fortran() fine. How
Dear all ,
Is there any subset regression (subset selection
regression) package in R other than "leaps"?
Thanks and regards
Alex
__
R-help@r-project.org mailing list
https://stat.ethz.ch/
it would be a bit more helpful if we knew more info regarding these
matrices, for instance is P diagonal, etc. In any case, you could have a
look at
crossprod()
# and
tcorssprod()
and, for the determinant maybe
prod(eigen(mat, symmetric = TRUE, only.values = FALSE)$values)
# or
prod(diag(chol
Ignore this message. I have already solved the problem.
Regards,
Suresh
Suresh_FSFM wrote:
>
> Dear R-experts,
>
> Need your help.
>
> Dear R- Experts,
>
> Seek your help.
>
> I created a time sequence using:
> x[i] <-chron(dates, tt, format=c(dates="y-m-d", tt="h:m:s"))
> first eleme
sorry, in my previous e-mail it should be
tcrossprod()
# and
prod(eigen(mat, symmetric = TRUE, only.values = TRUE)$values)
Best,
Dimitris
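Pulling the thread's suggestions together, a small sketch (matrix sizes and the weight matrix P are illustrative, not from the original posts):

```r
set.seed(1)
Z <- matrix(rnorm(4 * 5), 4, 5)
P <- diag(5)                          # pretend P is a 5x5 weight matrix
S1 <- Z %*% P %*% t(Z)                # naive sandwich product Z P Z'
S2 <- tcrossprod(Z %*% P, Z)          # tcrossprod(A, B) computes A %*% t(B)
all.equal(S1, S2)                     # TRUE
# determinant of a positive definite symmetric matrix via Cholesky:
M <- crossprod(Z) + diag(5)           # guaranteed positive definite
all.equal(prod(diag(chol(M)))^2, det(M))   # TRUE
```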
Shimrit Abraham wrote:
Hi,
I am looking for two ways to speed up my computations:
1. Is there a function that efficiently computes the 'sandwich product'
I ran into a similar issue with a simple benchmark the other day,
where a plain loop in Lua was faster than vectorised code in R ...
hmm, would you be saying that r's vectorised performance is overhyped?
or is it just that non-vectorised code in r is slow?
What I meant, I guess, was (apar
Thanks for your suggestions.
I'll try to implement what you suggested.
Perhaps the following information can help you to think of alternative ways
to speed up computations:
I am coding the Kalman Filter in R because I have certain requirements that
are not provided by the packages regarding Stat
William Simpson wrote:
I have data in a format like this:
names  sex  sexview  num  rating    rt
ahl4f    m        f   56    -108  2246
ahl4f    m        f   74      85  1444
ahl4f    m        f   52     151  1595
ahl4f    m        f
Stefan Evert wrote:
>
>> hmm, would you be saying that r's vectorised performance is overhyped?
>> or is it just that non-vectorised code in r is slow?
>
> What I meant, I guess, was (apart from a little bit of trolling) that
> I'd had misconceptions about the speed differences between loops and
>
See ?which.max :
library(zoo)
z <- read.zoo("myfile.dat", sep = ",", format = "%m/%d/%Y")
f <- function(x) time(x)[which.max(x)]
ix <- tapply(z, floor(as.yearmon(time(z))), f)
z[ix]
On Tue, Feb 17, 2009 at 3:58 AM, CJ Rubio wrote:
>
> i have the following code - assimilating the maximum annual
Hello,
I have a sequence of numbers:
seq(1:50)
and I would like to have all the possible combinations of these numbers
without repeating any combination:
11, 12, 13, ... ,22,23,24,...
How can I do it?
Best,
Dani
--
Daniel Valverde Saubí
Grup de Biologia Molecular de Llevats
Facultat de Veter
Try this:
apply(expand.grid(1:50, 1:50), 1, paste, collapse = '')
On Tue, Feb 17, 2009 at 8:37 AM, Dani Valverde wrote:
> Hello,
> I have a sequence of numbers:
> seq(1:50)
> and I would like to have all the possible combinations of these numbers
> without repeating any combination:
> 11, 12, 1
Hi Dani,
see ?combn .
combn(1:50,2)
gives you all combinations as matrix.
you can do something like
apply(combn(1:50,2),2, paste, sep="", collapse="")
to get concatenated results.
hth.
Dani Valverde schrieb:
Hello,
I have a sequence of numbers:
seq(1:50)
and I would like to have all the possible c
Hi,
I'm trying to calculate the residuals in a one-sided AR(p) model:
sum_j phi_j (X_{t-j} - EX) = epsilon_t
My X is an ARMA(1,1)-model, which can be represented as an AR(infty) model.
I calculate the order of the model with AIC and the parameters with the
Yule-Walker method.
For the residuals, I t
Thanks for your help,
I managed to get it working like this:
fml<-as.formula(paste("y ~", paste(PCnames, collapse="+")))
y = class.ind(grp[outgroup])
z1=multinom(formula = fml, data=data.frame(scores))
Daniel Crouch
**QUOTE:
Forget eval(parse(text = ))
See
?as.formul
Hi Dylan,
>> Am I trying to use contrast.Design() for something that it was not
>> intended for? ...
I think Prof. Harrell's main point had to do with how interactions are
handled. You can also get the kind of contrasts that Patrick was interested
in via multcomp. If we do this using your artifi
On 2/16/2009 10:18 PM, Dylan Beaudette wrote:
> On Mon, Feb 16, 2009 at 5:28 PM, Patrick Giraudoux
> wrote:
>> Greg Snow a écrit :
>>> One approach is to create your own contrasts matrix:
>>>
>>>
mycmat <- diag(8)
mycmat[ row(mycmat) == col(mycmat) + 1 ] <- -1
mycmati <- solve(mycma
Thank you for the lightening replies.
I tested various corStruct objects (?corClasses) using the nlme
package and all work flawlessly.
My best regards to all...
Constantine Tsardounis
Eik Vettorazzi wrote:
Hi Dani,
see ?combn .
combn(1:50,2)
gives you all combinations as matrix.
you can do something like
apply(combn(1:50,2),2, paste, sep="", collapse="")
note that you could simplify the above by
combn(50, 2, paste, collapse = "")
Best,
Dimitris
to get concatenated results.
Hi Dylan, Chuck,
>> contrast(l, a=list(f=levels(d$f)[1:3], x=0), b=list(f=levels(d$f)[4],
>> x=0))
There is a subtlety here that needs to be emphasized. Setting the
interacting variable (x) to zero is reasonable in this case, because the
mean value of rnorm(n) is zero. However, in the real wor
Hi,
I'm trying to match peaks between chromatographic runs.
I'm able to match peaks when they are chromatographed with the same method,
but not when different methods are used and spectra come into
play.
While searching I found the ALS package, which should be useful for my
applicati
Peter Jepsen DCE.AU.DK> writes:
> [snip]
> ... I would save valuable time if
> the procedure would stop automatically when it encounters any anomaly
> that causes it to warn me. Is this possible? And if so, how?
>
Maybe options(warn=2) will do what you want.
Ben Bolker
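A minimal illustration of the warn = 2 idea (the coercion warning here is just a stand-in for stepAIC's warnings):

```r
options(warn = 2)                      # promote every warning to an error
res <- tryCatch(as.numeric("abc"),     # normally a warning, now it stops
                error = function(e) conditionMessage(e))
options(warn = 0)                      # restore the default afterwards
res                                    # the converted warning message
```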
Thank you Gabor Grothendieck for your message.
I would surely like to say, that if someone wants to assume AR(1)
residuals, running the regression y ~ x, could run
gls(y~x, correlation = corAR1(0, ~1))
Constantine Tsardounis
http://www.costis.name
Hello list,
I am wondering if a joining "one-to-many" can be done a little bit easier. I
tried merge function but I was not able to do it, so I end up using for and if.
Suppose you have a table with locations, each location repeated several times,
and some attributes at that location. The se
On Tue, Feb 17, 2009 at 8:33 AM, Monica Pisica wrote:
>
> Hello list,
>
> I am wondering if a joining "one-to-many" can be done a little bit easier. I
> tried merge function but I was not able to do it, so I end up using for and
> if.
>
> Suppose you have a table with locations, each location re
on 02/17/2009 08:33 AM Monica Pisica wrote:
> Hello list,
>
> I am wondering if a joining "one-to-many" can be done a little bit
easier. I tried merge function but I was not able to do it, so I end up
using for and if.
>
> Suppose you have a table with locations, each location repeated
several t
Hi r-help!
Consider the following data-frame:
   var1 var2 var3
1     3    1    4
2     2    2    3
3     2    2    3
4     4    4   NA
5     4    3    5
6     2    2    3
7     3    4    3
How can I get R to convert this into the following?
Value 1 2 3 4 5
var1  0 3 2 2 0
var2
Try merge(t1, t2)
On Tue, Feb 17, 2009 at 9:33 AM, Monica Pisica wrote:
>
> Hello list,
>
> I am wondering if a joining "one-to-many" can be done a little bit easier. I
> tried merge function but I was not able to do it, so I end up using for and
> if.
>
> Suppose you have a table with locatio
Hi,
after updating to foreign version 0.8-32, I experienced the following error
when I tried to load an SPSS file:
Error in inherits(x, "factor") : object "cp" not found
In addition: Warning message:
In read.spss("***l.sav", use.value.labels = TRUE, to.data.frame = TRUE) :
***.sav: File-ind
I need to uninstall R 2.7.1 from my Mac. What is the best way to uninstall
it? Simply delete the R icon in the Applications folder?
Or is it more involved?
TIA,
Anjan
--
=
anjan purkayastha, phd
bioinformatics analyst
whitehead institute for biomedical research
nine ca
Hi Monica,
merge(t1, t2) works on your example. So why don't you use merge?
HTH,
Thierry
ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature
and Forest
Cel biometrie, methodolo
This is on the Mac FAQ:
http://cran.cnr.berkeley.edu/bin/macosx/RMacOSX-FAQ.html#How-can-R-for-Mac-OS-X-be-uninstalled_003f
HTH,
--sundar
On Tue, Feb 17, 2009 at 7:17 AM, ANJAN PURKAYASTHA
wrote:
> I need to uninstall R 2.7.1 from my Mac. What is the best way to uninstall
> it? Simply delete t
Ok,
I feel properly ashamed. I suppose my "real" data is a little bit different
than my toy data (although i don't know how) because i did try the merge
function as simple as merge(t1, t2) and it did not work. Maybe a reset of my
session will solve my problems and more coffee my confusion.
Ag
Hi!
I came across R just a few days ago since I was looking for a toolbox
for cox-regression.
I've read
"Cox Proportional-Hazards Regression for Survival Data
Appendix to An R and S-PLUS Companion to Applied Regression" from John Fox.
As described therein plotting survival-functions works wel
Hi,
See ?survfit.object
if fit is the object you get using survfit,
fit$surv will give you the survival probability.
Best,
arthur
Bernhard Reinhardt wrote:
Hi!
I came across R just a few days ago since I was looking for a toolbox
for cox-regression.
I've read
"Cox Proportional-Hazards Reg
Harry Haupt wrote:
> Hi,
> after updating to foreign version 0.8-32, I experienced the following error
> when I tried to load a SPSS file:
>
> Error in inherits(x, "factor") : object "cp" not found
> In addition: Warning message:
> In read.spss("***l.sav", use.value.labels = TRUE, to.data.fra
A couple of weeks ago I asked how it is possible to run an R script (not a
function) passing some parameters.
Someone suggested the function "commandArgs()".
I read the on-line help and found no clarifying example. Therefore I do not
know how to use it appropriately.
I noticed this function retur
on 02/17/2009 09:06 AM Hans Ekbrand wrote:
> Hi r-help!
>
> Consider the following data-frame:
>
>    var1 var2 var3
> 1     3    1    4
> 2     2    2    3
> 3     2    2    3
> 4     4    4   NA
> 5     4    3    5
> 6     2    2    3
> 7     3    4    3
>
> How can I get R to convert
Prof Ripley:
Many thanks - it did indeed say it cannot find fGarch after I tried your
advice - but a completely clean re-install did the trick.
John
On Tue, Feb 17, 2009 at 1:09 AM, Prof Brian Ripley wrote:
> Start R with --vanilla, or rename your saved workspace (.RData).
> Then
>
> library(fG
Hi,
is it possible to wrap text using Hershey fonts? "\n" does not work!
Thanks in advance,
Martina
I seriously doubt that a survfit object could only contain that
information. I suspect that you are erroneously thinking that what
print.survfit offers is the entire story.
What does str(survfit(, data=) ) show you?
> data(aml)
> aml.mdl <- survfit(Surv(time, status) ~ x, data=aml)
# this is
Try this:
A <- 1
B <- 2
C <- 3
source("myfile.R")
Now the code in myfile can access A, B and C.
On Tue, Feb 17, 2009 at 10:55 AM, wrote:
> A couple of weeks ago I asked how it is possible to run an R script (not a
> function) passing some parameters.
> Someone suggested the function "commandA
Peter Dalgaard wrote:
>
> Yes, something in the logic appears to have gotten garbled.
>
> It's in this part of read,spss:
>
> if (is.character(reencode)) {
> cp <- reencode
> reencode <- TRUE
> }
> else if (codepage <= 500 || codepage >= 2000) {
> attr(rval,
On 2/17/2009 10:55 AM, mau...@alice.it wrote:
A couple of weeks ago I asked how it is possible to run an R script (not a
function) passing some parameters.
Someone suggested the function "commandArgs()".
I read the on-line help and found no clarifying example. Therefore I do not
know how to use
is there some sort of R function which can advise me of the best ARIMA(p,q,r)
model to use based on the Schwarz criterion, e.g. for p=0-5, q=0, r=0-5,
or for example p+r < 5?
or is this something I will have to write my own code for?
Thanks Emma
--
View this message in context:
http://www.n
Hi all,
I've managed to get JAGS working on my Ubuntu Hardy Linux with a 32-bit
computer and AMD processors using R 2.8.1. JAGS is great. I've read that
JAGS is the fastest, but that hasn't been my experience. At any rate, I
have more experience with WinBUGS under Windows and would like a versi
Hi Bernhard,
I'm wondering what you will expect to get in "dividing" two proportional
survival curves from a fitted cox model.
Anyway, you can provide a newdata object to the survfit function
containing any combination of cofactors you are interested in and then
use summary, eg:
fit <- coxph(
Hi Dylan, Chuck,
Mark Difford wrote:
>> Coming to your question [?] about how to generate the kind of contrasts
>> that Patrick wanted
>> using contrast.Design. Well, it is not that straightforward, though I may
>> have missed
>> something in the documentation to the function. In the past I hav
Hoi Bart,
I think you're right that ALS should be applicable to this problem.
Unfortunately in writing I see that there is a bug when the spectra are
NOT constrained to nonnegative values (the package has been used to my
knowledge only in fitting multiway mass spectra thus far, where this
constrai
I'm trying to create a fairly basic map using R. What i want to get is the
map of the country with circles representing a count of students in each
state.
What I've done so far is as following -
map("state")
symbols(data1$count,circles=log(data1$count)*3,fg=col,bg=col,add=T,inches=F)
this gives
see auto.arima in the forecast package.
On Tue, Feb 17, 2009 at 10:20 AM, emj83 wrote:
>
> is there some sort of R function which can advise me of the best ARIMA(p,q,r)
> model to use based on the Schwarz criterion, e.g. for p=0-5, q=0, r=0-5,
> or for example p+r < 5?
>
> or is this something
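A sketch of the auto.arima() suggestion (the lh series from the datasets package is only an example; ic = "bic" selects by the Schwarz criterion):

```r
library(forecast)
fit <- auto.arima(lh, ic = "bic", max.p = 5, max.q = 5, stepwise = FALSE)
fit                                    # the order chosen by BIC
```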
Paul Heinrich Dietrich wrote:
Hi all,
I've managed to get JAGS working on my Ubuntu Hardy Linux with a 32-bit
computer and AMD processors using R 2.8.1. JAGS is great. I've read that
JAGS is the fastest, but that hasn't been my experience. At any rate, I
have more experience with WinBUGS und
Hi All,
I am looking at applications of percentiles to time sequenced data. I had
just been using the quantile function to get percentiles over various
periods, but am more interested in if there is an accepted (and/or
R-implemented) method to apply weighting to the data so as to weigh recent
dat
Two places that have worked examples leap to mind:
--- Sarkar's online accompaniment to his book:
http://lmdvr.r-forge.r-project.org/figures/figures.html
Thumbing through the hard copy I see Figure 6.5 might be of interest.
--- Addicted to R's graphics gallery:
http://addictedtor.free.fr/graphiques
Hello list,
Thanks in advance for any help.
I have many (approx 20) files that I have merged. For example
d1<-read.csv("AlleleReport.csv")
d2<-read.csv("AlleleReport.csv")
m1 <- merge(d1, d2, by = c("IND", intersect(colnames(d1), colnames(d2))),
all = TRUE)
m2 <- merge(m1, d3, by = c("IND",
It is an issue specific to 0.8-32 and some files (most likely those
with some (not all) Windows codepages declared).
We are trying to collect together some examples, and will update
foreign accordingly later in the week.
On Tue, 17 Feb 2009, Harry Haupt wrote:
Hi,
after updating to foreign
> Hello list,
>
> I am sorry for the previous half post. I accidentily hit send. Thanks
> again in advance for any help.
>
> I have many (approx 20) files that I have merged. Each data set contains
> rows for individuals and data in 2 - 5 columns (depending upon which data
> set). The individua
I do know that Harrell's Quantile function in the Hmisc package will
allow quantile estimates from models. Whether it is general enough to
extend to time series, I have no experience and cannot say.
--
David Winsemius
On Feb 17, 2009, at 11:57 AM, Brigid Mooney wrote:
Hi All,
I am lookin
You need to give the symbols function the locations where you want the centers
of the circles to be. Some datasets with map information also have centers of
the states that you can use; for the USA, there is the state.center dataset
that may work for you, or the maptools package function get.Pc
Jessica L Hite/hitejl/O/VCU vcu.edu> writes:
> I am attempting to run a glm with a binomial model to analyze proportion
> data.
> I have been following Crawley's book closely and am wondering if there is
> an accepted standard for how much is too much overdispersion? (e.g. change
> in AIC has an
Thanks Greg,
do you know where i can find the state.center dataset that you mention?
On Tue, Feb 17, 2009 at 12:28 PM, Greg Snow wrote:
> You need to give the symbols function the locations where you want the
> centers of the circles to be. Some datesets with map information also have
> center
Hi,
I am getting an error compiling R-devel on a SuSE 64-bit
architecture. The cp attribute is sending 'trusted.lov'
and an error. This is a sample of the output:
> make[3]: Entering directory
>`/lustre/people/schaffer/R-devel/src/library/base'
>building package 'base'
>make[4]
Hi All,
I am trying to run several linear regressions and print out the summary and
the anova results on top of
each other for each model. Below is a sample program that did not work. Is
it possible to print the
anova below the summary of lm in one file?
thanks for your help
#
Thanks very much, exactly what I need.
Oliver
On Feb 16, 10:36 pm, Marc Schwartz wrote:
> on 02/16/2009 07:51 PM Oliver wrote:
>
>
>
> > hi,
>
> > I am a R beginner. One thing I notice is that when do graphing is,
>
> > if I want to draw two figures in a row such as this:
>
> > par(mfrow = c(1, 2))
Hi guys,
I have a tricky problem that I'd appreciate your help with.
I have two categorical variables, say varA and varB and an associated
frequency Freq for combinations of the levels of varA and varB. This was
created with a table() call.
I'd now like to make panel plots of the frequency. I can
On Tue, 17 Feb 2009, Ben Bolker wrote:
Jessica L Hite/hitejl/O/VCU vcu.edu> writes:
I am attempting to run a glm with a binomial model to analyze proportion
data.
I have been following Crawley's book closely and am wondering if there is
an accepted standard for how much is too much overdisper
" did not work."
Might that mean errors? Care to share?
Running that through my wetware R interpreter, I think I am seeing you
ask for creation of models with variable outcomes from the first 100
columns of "data". And then you are specifying a formula that includes
a weird mixture o
Thanks for the clarification.
I actually had MASS open to that page while
I was composing my reply but forgot to mention
it (trying to do too many things at once) ...
Ben Bolker
Prof Brian Ripley wrote:
> On Tue, 17 Feb 2009, Ben Bolker wrote:
>
>> Jessica L Hite/hitejl/O/VCU vcu.edu> writ
It should be in the datasets package that is automatically loaded with R (at
least my copy), try ?state and you should see the help for it and others.
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
From: Alina Sheyman [mailto:ali
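A rough sketch of the pattern being described (the data set and circle scaling are illustrative; state.center and state.x77 ship with base R's datasets package):

```r
library(maps)
map("state")
# circle centers from state.center, sizes from an example variable:
symbols(state.center$x, state.center$y,
        circles = sqrt(state.x77[, "Population"]), inches = 0.15,
        fg = "blue", bg = "lightblue", add = TRUE)
```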
The sink() command stops the sinking, so you send the lm output to the file,
then stop the sinking before printing out the anova result. So the simplest
thing to try is to put the first sink (with the filename and append=T) before
you start the loop, remove all calls to sink within the loop, th
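The pattern Greg describes, sketched with made-up models and an illustrative file name:

```r
sink("all_models.txt", append = TRUE)  # open the sink once, before the loop
for (k in 1:3) {
  fit <- lm(mpg ~ poly(hp, k), data = mtcars)
  print(summary(fit))                  # summary first ...
  print(anova(fit))                    # ... then the anova, same file
}
sink()                                 # close it once, after the loop
```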
Thanks for pointing me to the quantreg package as a resource. I was hoping
to be able to address one quick follow-up question...
I get slightly different variants between using the rq function with formula
= mydata ~ 1 as I would if I ran the same data using the quantile function.
Example:
Roger Koenker
Department of Economics, University of Illinois, Champaign, IL 61820
url: www.econ.uiuc.edu/~roger    email: rkoen...@uiuc.edu
vox: 217-333-4558    fax: 217-244-6678
On Feb 17, 2009, at 1:58 PM, Brigid Mooney wrote:
Than
Thanks a lot for your help, it worked.
Greg Snow-2 wrote:
>
> The sink() command stops the sinking, so you send the lm output to the
> file, then stop the sinking before printing out the anova result. So the
> simplest thing to try is to put the first sink (with the filename and
> append=T)
this is the error message that I was getting
"In sink() ... : no sink to remove"
I got it to work. Thanks for the help.
David Winsemius wrote:
>
> " did not work."
>
> Might that mean errors? Care to share?
>
> Running that through my wetware R interpreter, I think I am seeing you
Alex Roy wrote:
>
> Dear all ,
> Is there any subset regression (subset selection
> regression) package in R other than "leaps"?
>
>
RSiteSearch("{subset regression}") doesn't turn up much other than
special-purpose tools for ARMA models etc. What does leaps not do that y
Hi Bob - your suggestion worked out great... Many thanks!
Also, thanks everyone for the other suggestions!
Bob McCall wrote:
>
> Look in the package "forecast" for the function "Arima". It will do what
> you want. It's different than arima function in the stats package.
> Bob
>
> Pele wrote
Dear R users,
I would like to fit cross classified or multiple membership logistic models
or a 3 level hierarchical logistic model using the Umacs package. Can anyone
advise me on how to proceed or better point me to examples of how it's done.
Regards,
--
Luwis Diya,
Leuven Biostatistics and St
Hello:
I would like to sum every x columns of a dataframe for each row. For instance,
if x is 10, then for dataframe df, this function will sum the first ten elements
together and then the next ten:
sapply(list(colnames(df)[1:10], colnames(df)[11:20]),function(x)apply( df[,x],
1, sum))
If the
Alex Roy gmail.com> writes:
>
> Dear all ,
> Is there any subset regression (subset selection
> regression) package in R other than "leaps"?
Lars and Lasso are other 'subset selection' methods, see the corresponding
packages 'lars' and 'lasso2' and its description in The Elem
I recently traced a bug of mine to the fact that cumsum(s)[length(s)]
is not always exactly equal to sum(s).
For example,
x<-1/(12:14)
sum(x) - cumsum(x)[3] => 2.8e-17
Floating-point addition is of course not exact, and in particular is
not associative, so there are various possible r
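A short demonstration of the discrepancy; sum() accumulates in extended precision on most platforms, so the two results can differ by about one ulp, and comparisons should use a tolerance:

```r
x <- 1/(12:14)
sum(x) - cumsum(x)[3]                      # on the order of 1e-17, not 0
isTRUE(all.equal(sum(x), cumsum(x)[3]))    # TRUE: compare with a tolerance
```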
On 17/02/2009 4:42 PM, mwestp...@worldbank.org wrote:
Hello:
I would like to sum every x columns of a dataframe for each row. For instance,
if x is 10, then for dataframe df, this function will sum the first ten elements
together and then the next ten:
sapply(list(colnames(df)[1:10], colnames(
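One generic way to sum every 10 columns per row (a sketch; the grouping comes from integer division of the column index):

```r
df <- as.data.frame(matrix(1:100, nrow = 5))   # 5 rows, 20 columns
grp <- (seq_along(df) - 1) %/% 10              # 0 for cols 1-10, 1 for 11-20
sapply(split(seq_along(df), grp), function(j) rowSums(df[j]))
```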
Hi friends,
I have a question about printing a pretty big matrix.
As you can see below, the matrix wasn't shown in R at full size
(11x11), but was split into three smaller blocks (11x4, 11x4, 11x3).
I'm wondering if there is a way to show the whole matrix with dimension
11x11,
On 17/02/2009 5:31 PM, phoebe kong wrote:
Hi friends,
I have a question about printing a pretty big matrix.
As you can see below, the matrix wasn't shown in R at full size
(11x11), but was split into three smaller blocks (11x4, 11x4, 11x3).
I'm wondering if there is a way to
Hello all,
I am just wondering if any of you are doing most of your scripting
with Python instead of R's programming language and then calling
the relevant R functions as needed?
And if so, what is your experience with this and what sort of
software/library do you use in combination with Python
Here is one kind of weighted quantile function.
The basic idea is very simple:
wquantile <- function( v, w, p )
{
  o <- order(v)      # compute the ordering once ...
  v <- v[o]
  w <- w[o]          # ... so the weights stay aligned with their values
  v[ which.max( cumsum(w) / sum(w) >= p ) ]
}
With some more error-checking and general clean-up, it looks like this:
# Simple weigh
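A self-contained usage sketch of the same idea (note that the ordering permutation must be computed once and applied to both vectors; the data and weights are made up):

```r
wquantile <- function(v, w, p) {
  o <- order(v)                # one permutation for both vectors
  v <- v[o]
  w <- w[o]
  v[which.max(cumsum(w) / sum(w) >= p)]
}
wquantile(c(10, 12, 11, 15, 14), w = 1:5, p = 0.5)   # → 14
```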
On Tue, Feb 17, 2009 at 10:00:40AM -0600, Marc Schwartz wrote:
> on 02/17/2009 09:06 AM Hans Ekbrand wrote:
> > Hi r-help!
> >
> > Consider the following data-frame:
> >
> >    var1 var2 var3
> > 1     3    1    4
> > 2     2    2    3
> > 3     2    2    3
> > 4     4    4   NA
> > 5     4
Esmail Bonakdarian wrote:
I am just wondering if any of you are doing most of your scripting
with Python instead of R's programming language and then calling
the relevant R functions as needed?
No, but if I wanted to do such a thing, I'd look at Sage:
http://sagemath.org/
It'll give you acc
2009/2/17 Esmail Bonakdarian :
> Hello all,
>
> I am just wondering if any of you are doing most of your scripting
> with Python instead of R's programming language and then calling
> the relevant R functions as needed?
I tend to use R in its native form for data analysis and modelling,
and pytho
Hello!
On Tue, Feb 17, 2009 at 5:58 PM, Warren Young wrote:
>
> Esmail Bonakdarian wrote:
>>
>> I am just wondering if any of you are doing most of your scripting
>> with Python instead of R's programming language and then calling
>> the relevant R functions as needed?
>
> No, but if I wanted to
I have the following dataframe:
ad <- data.frame(dates, av, sn$SectorName)
colnames(ad) <- c("Date", "Value", "Tag")
which has data (rows 10 to 20, for example) as follows:
         Date     Value  Tag
10 2008-01-16  -0.20875   Co
Hello,
I tried to use ylim=c(x,y) in a plot.ts().
This was ignored.
How can I achieve it to create such graphics?
Ciao,
Oliver Bandel
Hi dear list,
I wonder if somebody can help me with this. I have a text file with
300 rows and around 30 columns and I need to insert a column that
has the number 1 in every row. This new column should be placed
between columns 6 and 7.
As an example: I would want to insert a column (consitin
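One way to do the insertion being asked about (file names and separator are illustrative):

```r
d  <- read.table("myfile.txt")                   # 300 rows, ~30 columns
d2 <- cbind(d[, 1:6], new = 1, d[, 7:ncol(d)])   # column of 1s after column 6
write.table(d2, "myfile_out.txt", row.names = FALSE, col.names = FALSE)
```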
On Tue, Feb 17, 2009 at 6:05 PM, Barry Rowlingson
wrote:
> 2009/2/17 Esmail Bonakdarian :
> When I need to use the two together, it's easiest with 'rpy'. This
> lets you call R functions from python, so you can do:
>
> from rpy import r
> r.hist(z)
wow .. that is pretty straight forward, I'll
I wonder if an R package has a function that calculates the following.
Let Y be a multivariate normal random vector. For example, let Y have 4
dimensions. I want to calculate
P(Y1 < Z1, Y2 < Z2, Y3 > Z3, Y4 > Z4).
There are R functions to do the calculation if all the inequalities
are of the t
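One option (a sketch, assuming the 'mvtnorm' package is acceptable): pmvnorm() handles mixed lower/upper bounds directly, with illustrative thresholds and covariance:

```r
library(mvtnorm)
z <- c(1.0, 0.5, -0.2, 0.3)                 # illustrative thresholds
S <- diag(4)                                # illustrative covariance matrix
# P(Y1 < z1, Y2 < z2, Y3 > z3, Y4 > z4):
pmvnorm(lower = c(-Inf, -Inf, z[3], z[4]),
        upper = c(z[1], z[2], Inf, Inf),
        mean = rep(0, 4), sigma = S)
```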