Sorry. I am not sure how to post a link. Basically the legend looks
like:
* * * * * * * * * * *
-4 -3 -2 -1 0 1 2 3 4
Where ' * ' are colored boxes that are right next to each other. Kind of
like a gradient.
On Mon, Jan 30, 2012 at 3:29 PM, R. Michael Weylandt wrote:
Server stripped the
How would I create a legend that looks like the attached image?
Basically all of the color boxes are right next to each other and the
text is below. This kind of arrangement allows for many more items in
the legend. Using the legend() method seems to top out at about 14 items
(that will fit i
I would like to make the following faster:
df <- NULL
for(i in 1:length(s))
{
df <- rbind(df, cbind(names(s[i]), time(s[[i]]$series),
as.vector(s[[i]]$series), s[[i]]$category))
}
names(df) <- c("name", "time", "value", "category")
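A minimal sketch of the usual speed-up (illustrative, not from the thread; it assumes each s[[i]] carries a $series time series and a $category, as in the code above): build the pieces in a list and bind once instead of growing df inside the loop.
pieces <- lapply(names(s), function(nm) {
  el <- s[[nm]]
  data.frame(name = nm,
             time = as.numeric(time(el$series)),
             value = as.vector(el$series),
             category = el$category,
             stringsAsFactors = FALSE)
})
df <- do.call(rbind, pieces)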
I have the following scenario:
> m <- matrix(1:4, ncol=2)
> m
     [,1] [,2]
[1,]    1    3
[2,]    2    4
> apply(m, 2, sum)
[1] 3 7
> apply(m, 1, sum)
[1] 4 6
So I can apply to rows *or* columns. According to the documentation
(?apply)
MARGIN a vector giving the subscripts which the functio
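For reference, a small restatement of what MARGIN selects (base R only):
m <- matrix(1:4, ncol = 2)
apply(m, 1, sum)   # MARGIN = 1: the function is applied over rows    (same as rowSums(m))
apply(m, 2, sum)   # MARGIN = 2: the function is applied over columns (same as colSums(m))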
I have an R script with the following applicable lines:
xshort <- window(s, start=st, end=ed)
. . .
xshort <- ts(xshort, frequency=1, start=1)
. . .
m1 <- m2 <- m3 <- m4 <- m5 <- m6 <- NULL
m1 <- tslm(xshort ~ trend)
I get an error:
Currently I have a for loop executing functions and at the end I get a
message like:
There were 50 or more warnings (use warnings() to see the first 50)
If I do what it says and type warnings(), I get 50 messages like:
2: In !is.na(x) & !is.na(rowSums(xreg)) :
longer object length is not a
Thank you.
I am glad I asked. It wasn't giving answers that I expected and now I
know why.
Rather than pull in another package just for this functionality, I will
just reassign the frequency by generating a new time series like:
dswin <- window(ds, start=..., end=...)
dswin <- ts(dswin, fre
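The general pattern being described, sketched with made-up dates and a made-up new frequency:
ds    <- ts(rnorm(48), start = c(2008, 1), frequency = 12)   # illustrative series
dswin <- window(ds, start = c(2009, 1), end = c(2010, 12))
dswin <- ts(as.vector(dswin), frequency = 4, start = 1)      # re-wrap with a new frequency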
I noticed that the text() command adds text to a plot. Is there a way to
either make the plot blank or add text to a "blank sheet". I would like
to "plot" a page that contains just text, no plot lines, labels, etc.
Suggestions?
Kevin
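One way to do this in base graphics (a minimal sketch, not part of the original thread): open an empty canvas and then place the text on it.
plot.new()                                     # blank page: no axes, box, or labels
text(0.5, 0.5, "Any text you like", cex = 1.5) # coordinates run from 0 to 1 by default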
I have the following code:
c <- file("c:/temp/r/SkuSalesModel.br", "rb")
s <- unserialize(c)
close(c)
rm(c)
And it worked as late as yesterday. Today when I came in I get the
following error:
Error in .Call("R_unserialize", connection, refhook, PACKAGE = "base") :
negative length vectors ar
True, porting old C and Fortran code to C# or F# would be a pain and probably
riddled with errors but it is not too soon to start looking to see if there is
a better way. There have been numerous ports of LAPACK, BLAS, etc. to C#. Maybe
they could be leveraged.
Maybe just allowing packages to b
I am trying to check the results from an Eigen decomposition and I need to
force a scalar multiplication. The fundamental equation is Ax = lx, where 'l'
is the eigenvalue and x is the corresponding eigenvector.
'R' returns the eigenvalues as a vector (e <- eigen(A); e$values
Forgive me if I misunderstand a basic eigensystem, but when I present the
following matrix to most any other linear algebra system:
1 3 1
1 2 2
1 1 3
I get an answer like:
//$values
//[1] 5.00e+00 1.00e+00 -5.536207e-16
//$vectors
// [,1] [,2] [,3]
//[1,
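A small sketch of the kind of check being described, using the matrix quoted above (the comparison itself is illustrative, not from the thread):
A <- matrix(c(1, 3, 1,
              1, 2, 2,
              1, 1, 3), nrow = 3, byrow = TRUE)
e <- eigen(A)
e$values                                              # roughly 5, 1, 0 (the last only up to rounding)
A %*% e$vectors[, 1] - e$values[1] * e$vectors[, 1]   # ~ 0 if the eigenpair is consistent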
From the notes I see that for 2.11 Hmisc is not supported and the suggestion
is made to build from source. I am on a Windows 7 platform and I got all of
the tools and successfully built 'R' from source. I changed to gnuwin32 and
entered make all recommended. Even though the tar.gz (source) v
I believe John Fox offers another solution in his book.
Kevin
Daniel Wollschlaeger wrote:
> * On Mo, 1. Mar 2010, Ista Zahn wrote:
>
> > I've posted a short explanation about this at
> > http://yourpsyche.org/miscellaneous that you might find helpful. I'm a
>
> As someone who's also stru
Thanks to Marc Schultz I found the documentation on the "factors" attribute
under ?term.object. It states:
factors: A matrix of variables by terms showing which variables appear
in which terms. The entries are 0 if the variable does not
occur in the term, 1 if it does occur an
I am sorry but I didn't see "factors" mentioned in this documentation.
Kevin
Henrique Dallazuanna wrote:
> See ?terms
>
> On Mon, Mar 22, 2010 at 2:08 PM, wrote:
> > I noticed that when I fit a linear model using 'lm' there is an attribute
> > called "factors" that is added to the "te
I noticed that when I fit a linear model using 'lm' there is an attribute
called "factors" that is added to the "term". It doesn't seem to appear for
'model.matrix', just 'lm'. I have been unable to find where it gets constructed
or what it means. It looks like a two-dimensional array that I may
Perhaps those more in the know than I could clarify some confusion. In the
ANOVA 'R' code I see:
mss <- sum(if (is.null(w)) object$fitted.values^2 else w *
object$fitted.values^2)
if (ssr < 1e-10 * mss)
warning("ANOVA F-tests on an essentially perfect fit are unreliable"
Thank you.
The documentation indicates, as you said, that if there is not an exact
match then the next element is chosen. But it does not cover the case where
there is an exact match but no value to be returned (the '=,' case). From
what you indicate this is treated as if it was not a
In browsing the source I see the following construct:
res <- switch(type, working = , response = r, deviance = ,
pearson = if (is.null(object$weights))
r
else r * sqrt(object$weights), partial = r)
I understand that 'switch' will execute the code that is matched
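A small, self-contained illustration of the fall-through rule that construct relies on (not the original glm code): an empty right-hand side means "use the next non-empty alternative".
type <- "working"
r    <- 1:3
switch(type,
       working  = ,        # falls through to "response"
       response = r,
       deviance = ,        # falls through to "pearson"
       pearson  = r * 2,
       partial  = r)
# returns 1 2 3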
I am sorry I still don't understand.
In the example you give the 'assign' vector is a vector of length 6 and there
are indeed 6 columns in the data frame. But the formula only has two variables,
namely 'Month' and 'Wind'. Where do the values of the 'assign' vector come
from? I see '0 1 1 1 1 2'
Would someone be so kind as to explain in English what the ANOVA code
(anova.lm) is doing? I am having a hard time reconciling what the text books
have as a brute force regression and the formula algorithm in 'R'. Specifically
I see:
p <- object$rank
if (p > 0L) {
p1 <- 1L:p
I read in the documentation for split:
‘split’ divides the data in the vector ‘x’ into the groups defined by ‘f’.
But I am still unclear as to its function. Take for example:
x <- 1:4
split(x, c(0,1))
$`0`
[1] 1 3
$`1`
[1] 2 4
I am not clear on how this result is reached.
Thank you.
Kevin
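A short sketch of why that result appears: the grouping argument is recycled to the length of x before the split is made.
x <- 1:4
f <- c(0, 1)
rep(f, length.out = length(x))   # 0 1 0 1  -- what split() actually uses
split(x, f)                      # so 1 and 3 land in group "0", 2 and 4 in group "1"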
I am just curious. Every once in a while I see an attribute attached to an
object called "assign". What meaning does this have? For example:
dist ~ speed, data=cars
forms a matrix like:
num [1:50, 1:2] 1 1 1 1 1 1 1 1 1 1 ...
- attr(*, "dimnames")=List of 2
..$ : chr [1:50] "1" "2" "3" "4
I am testing 'qr' with an admittedly contrived matrix and I am getting
different results than I am from another package. The matrix that I am using is:
x <- matrix(seq(.1, by=.1, length.out=12), 4)
So the whole test is:
x <- matrix(seq(.1, by=.1, length.out=12), 4)
qr(x)
And the output from 'R
Thank you for the tip. I was used to inserting write statements and was
surprised when it didn't work, and reading this section I see that I shouldn't
have been doing this anyway.
One more question. Is there another call that I can use to print out a
2-dimensional array? Since FORTRAN stores as c
I found the problem but not a solution. It turns out if I add the following
lines to dqrdc2.f I get the error:
write(*,300) ldx,n,p
300 format(3i4)
I don't get a compile error but I get the seemingly unrelated error in linking
R.DLL
I guess the question now is, "How do I add a simpl
I am trying to build R-2.9.2 from source on a Windows 7 machine. I have
installed all of the requisite software and followed the instructions. I also
could have sworn that I had a successful build. But now I get the following
error.
gcc -std=gnu99 -shared -s -mwindows -o R.dll R.def console.o d
I give up. Maybe it is my search (Windows) but I cannot seem to find the
definition of the F77_CALL or F77_NAME macros. Either there are too many
matches or the search just doesn't find it. For example where is the source for:
F77_CALL(dpotri)
?
Thank you.
Kevin
I have a small request regarding this "append" feature. As it is now, when the
data is appended to the file so is the header. I would like the header to be
written only once, with subsequent appends adding just the data.
Doable?
Kevin
Patrick Connolly wrote:
> On Tue, 15-Dec-2009 at 01:55PM +0100, Gu
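A minimal sketch of one way to get that behaviour with write.table (the file name and data frame here are hypothetical):
f <- "results.csv"
write.table(df, f, sep = ",", row.names = FALSE,
            col.names = !file.exists(f),   # header only when the file is new
            append    =  file.exists(f))   # append data on later calls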
I have data that I feed into shapiro.test and jarque.bera.test, yet they seem
to disagree. What do I use for a decision?
For my data set I have p.value of 0.05496421 returned from the shapiro.test and
0.882027 returned from the jarque.bera.test. I have included the data set below.
Thank you.
I looked at the descriptions for uniroot and optimize and they are somewhat
different but the book reference is the same and I am wondering if there are
reasons to pick one over the other?
Thank you.
Kevin
Hello,
In reading the loess description I see:
span: the parameter alpha which controls the degree of smoothing.
The default seems to be 0.75. Would it be possible to expand on this description
so I can avoid trial and error? Can I increase this past 'span' > 1?
Qualitatively, to what degree ch
This is probably an even more basic question, but shapiro.test returns both the
statistic (w) and the significance (pw) of the statistic. For this test the
null hypothesis is that the distribution is not normal, so very small values of
pw would mean that there is very little chance that the distri
This is a very simple question but I couldn't form a site search query that
would return a reasonable result set.
Say I have a vector:
x <- c(0,2,3,4,5,-1,-2)
I want to replace all of the values in 'x' with the log of x. Naturally this
runs into problems since some of the values are negative
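A minimal sketch of one possible treatment, replacing in place and leaving NA where the log is undefined (what to actually do with non-positive values is a modelling decision not covered in the thread):
x <- c(0, 2, 3, 4, 5, -1, -2)
x <- ifelse(x > 0, log(x), NA)   # log only where it is defined, NA elsewhere
x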
I have downloaded all of the tools and read the readme's that I know about but
I am still getting the following error when I try to build from source:
C:\Program Files (x86)\R\R-2.9.2\src\gnuwin32>make all recommended
make[1]: `Rpwd.exe' is up to date.
cp -p etc/Makeconf etc/Rcmd_environ etc/Rcon
I have an array of data.frame(s) that I would like to smooth with loess one at
a time. The array is 'master' and the two variables I am interested in are
Period and Quantity. So my first attempt at calling loess is:
loess(Quantity ~ Period, master[[i]])
But I get the following error:
Error: N
I am running R 2.9.2 and creating a PDF that I am trying to open with Adobe
Reader 9.2 but when I try to open it the reader responds with
"There was an error opening this document. The file is damaged and cannot be
repaired.:
I am using the R command(s):
pdf(file="cat.pdf", title="Historical
Hello,
I have seen much discussion on Date. But I can't seem to do this simple
operation. I can convert a string to a date:
d <- as.Date(DATE, format="%m/%d/%Y")
But what I want to do is extract the year and month so I can construct an
element in a ts object. Ideally I would like to see d$year
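One common way to get the year and month, sketched with a made-up date:
d   <- as.Date("03/15/2012", format = "%m/%d/%Y")    # illustrative value
yr  <- as.numeric(format(d, "%Y"))                   # 2012
mon <- as.numeric(format(d, "%m"))                   # 3
ts(0, start = c(yr, mon), frequency = 12)            # usable as the start of a monthly ts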
Hello,
I asked a question about the most likely process to follow if, after a
time-series fit is performed, the residuals are found to be non-normal. One
person responded and offered to help if I supplied a sample data set.
Unfortunately, now that I have a sample, I have lost the email address.
I am having a hard time interpreting the results of the 'shapiro.test' for
normality. If I do ?shapiro.test I see two examples using rnorm and runif. When
I run the test using rnorm I get a wide variation of results. Most of this may
be from the variability of rnorm, small sample size (limited to 50
This is kind of a general question about methodology more than anything. But I
was looking for some advice. I have fit a time-series model and feel pretty
confident that I have taken this model (exponential smoothing) as far as it
will go. In other words looking at the data and the fitted curves
I recently installed R 2.9.2 on a new Windows platform. Everything seemed to
install OK. I then downloaded the latest Tinn-R (2.3.2.3 I think) and as I
have always done I selected R -> Configure -> Permanent. I was greeted with a
dialog box asking me for a mirror site. I don't remember this pr
I have a misunderstanding on the residuals function in 'R'. In the stats
package the residuals for the output of a HoltWinters fit is
residuals.HoltWinters and the source looks like:
> stats:::residuals.HoltWinters
function (object, ...)
object$x - object$fitted[, 1]
>
If I execute the follo
If I look in the stats package for the 'R' source code for predict.HoltWinters
I see the following lines:
vars <- function(h) {
psi <- function(j) object$alpha * (1 + j * object$beta) +
(j%%f == 0) * object$gamma * (1 - object$alpha)
var(residuals(object)) * if (o
Thank you for looking into this. It turns out the problem was "You are
misinterpreting R_HOME. . . " I thought R_HOME was where I installed R, not the
directory where I was trying to compile the source. Once I moved the "extra"
stuff that RTools.exe installed in what I thought was the R installat
I know I am going to catch a lot of comments for this question but I am really
stuck. If there is some written documentation that I have missed please
redirect me.
I want to build 'R' from source on a Windows platform. The main reasons are
that I want to check out and debug some existing packa
If I am on Windows and don't have GDB it doesn't look like it is possible. Any
tips?
Kevin
roger koenker wrote:
> At points of total desperation you can always consider
> the time-honored, avuncular advice -- RTFM,
> in this case Section 4.4 of Writing R Extensions.
>
>
> url:www.e
This may be asking more than can be reasonably expected. But, any tips on
debugging the 'C' and Fortran code without trying to put together all the tools
to compile from source?
Kevin
Erik Iverson wrote:
> This article might help:
>
> http://www.biostat.jhsph.edu/~rpeng/docs/R-debug-too
Simple question:
Why doesn't the following work? Or what 'R' rule am I missing?
tclass <- "Testing 1 2 3"
if(tclass == "Testing 1 2 3")
{
cat("Testing", tclass, "\n")
}
else
{
cat(tclass, "\n")
}
I get an error 'else' is unexpected.
Thank you.
Kevin
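A sketch of the standard workaround: typed at the console (rather than inside a function or block), an 'else' on its own line arrives after the 'if' has already been parsed as complete, so keep it on the same line as the closing brace.
tclass <- "Testing 1 2 3"
if (tclass == "Testing 1 2 3") {
  cat("Testing", tclass, "\n")
} else {                         # 'else' must follow '}' on the same line at top level
  cat(tclass, "\n")
}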
It has to be related to 'cat' because the output of 'cat' is truncated. I am
just trying to find out some possible reasons as to why it is truncated. I have
been unable to form an array like the one in the test program. Do you think there is
something else that is gobbling up the output from cat that
So then I am to assume that the output of 'cat' can be truncated by passing it
"bad" arrays. That is the only difference between the "reproducible" code you
show and mine. It is just a theory, but say that the components array is not
dimensioned for 4 elements. It seems a little strange if that
I have a statement:
cat("myforecast ETS(", paste(object$components[1], object$components[2],
object$components[3], object$components[4], sep = ","), ") ", n, "\n")
That generates:
cast ETS( A,N,N,FALSE ) 3
Any guesses as to why the first 5 letters are truncated/missing?
Kevin
I tried optim using the SANN algorithm. To start things out I tried the example
of solving the "traveling salesman" problem as given in the documentation. The
example works just fine. But if I comment out the line:
set.seed(123) # chosen to get a good soln relatively quickly
More often than not
Is the output of residuals() the studentized residuals or just the residuals?
Dieter Menne wrote:
> Giam Xingli nus.edu.sg> writes:
>
> >I hope I can get advice regarding the calculation of leverage values or
> >studentized residual values of a non-linear regression model. It seem
I was trying to understand some of the source in optim.c and in the SANN
source I see:
SETCADR(OS->R_gcall, x);
PROTECT_WITH_INDEX(s = eval(OS->R_gcall, OS->R_env), &ipx);
REPROTECT(s = coerceVector(s, REALSXP), ipx);
if(LENGTH(s) != n)
error(_("candi
I have a list of data.frames
> str(bins)
List of 19217
$ 100026:'data.frame': 1 obs. of 6 variables:
..$ Sku : chr "100026"
..$ Bin : chr "T149C"
..$ Count: int 108
..$ X: int 20
..$ Y: int 149
..$ Z: chr "3"
$ 100030:'data.frame': 1 obs. of 6 variables:
...
As y
I have decided to use this SANN approach to my problem but to keep the run time
reasonable instead of 20,000 variables I will randomly sample this space to get
the number of variables under 100. But I want to do this a number of times. Is
there someone who could help me set up WINBUGS to repeat
I have a question on the function 'embed'. I ran the example
x <- 1:10
embed(x, dimension=3)
This gives the output:
     [,1] [,2] [,3]
[1,]    3    2    1
[2,]    4    3    2
[3,]    5    4    3
[4,]    6    5    4
[5,]    7    6    5
[6,]    8    7    6
[7,]    9    8    7
[8,]   10    9    8
Sorry I sent a description of the function I was trying to minimize but I must
not have sent it to this group (and you). Hopefully with this clearer
description of my problem you might have some suggestions.
It is basically a warehouse placement problem. You have a warehouse that has
many item
Thank you, I had not considered using "gradient" in this fashion. Now, as an
add-on question: you (and others) have suggested using SANN. Does your answer change
if instead of 100 "variables" or bins there are 20,000? From the documentation
L-BFGS-B is designed for a large number of variables. But
It would in the strictest sense be non-linear since it is only defined for
discrete interface values for each variable. And in general it would be
non-linear anyway. If I only have three variables which can take on values
1, 2, 3 then f(1,2,3) could equal 0 and f(2,1,3) could equal 10.
Thank you f
I have an optimization question that I was hoping to get some suggestions on
how best to go about solving it. I would think there is probably a package that
addresses this problem.
This is an ordering optimization problem. It is best described with a simple
example. Say I have 100 "bins" each wit
I was feeling masochistic the other day and we have been having some weird
memory problems so I started digging into the source for L-BFGS-B. In the
lbgfsb.c file I see the following code:
/* Cholesky factorization of (2,2) block of wn. */
F77_CALL(dpofa)(&wn[*col + 1 + (*col + 1) * wn_d
Sorry to be so dense but the article that you suggest does not give any
information on how the arguments are packed up. I look at the call:
val <- .Internal(fmin(function(arg) -f(arg, ...), lower, upper, tol))
and then with the help of this article I find do_fmin in optimize.c:
SEXP attribute_h
At the risk of appearing ignorant, why is the following true?
o <- cbind(rep(1,3),rep(2,3),rep(3,3))
var(o)
     [,1] [,2] [,3]
[1,]    0    0    0
[2,]    0    0    0
[3,]    0    0    0
and
mean(o)
[1] 2
How do I get mean to return an array similar to var? I would expect in the
above example a
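A small sketch of the usual way to get a per-column mean to set beside var()'s covariance matrix:
o <- cbind(rep(1, 3), rep(2, 3), rep(3, 3))
colMeans(o)         # 1 2 3  -- one mean per column
apply(o, 2, mean)   # the same result via apply
mean(o)             # a single number (2): mean() collapses the whole matrix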
I was trying to find source for optimize and I ran across
function (f, interval, ..., lower = min(interval), upper = max(interval),
maximum = FALSE, tol = .Machine$double.eps^0.25)
{
if (maximum) {
val <- .Internal(fmin(function(arg) -f(arg, ...), lower,
upper, tol))
I am having a hard time understanding just what 'sweep' does. The documentation
states:
Return an array obtained from an input array by sweeping out a summary
statistic.
So what does it mean "sweeping out a summary statistic"?
Thank you.
Kevin
I am using the XML package to form a rather large XML file.
It seems to go OK until the file length gets larger than about 30Mb; then it
seems that the last tokens of the xml file are unmatched. It is almost like the
file hasn't been flushed because the file is OK with the exception of the last
I am trying to update the packages that I have installed but I get the
following warning messages:
package 'tseries' successfully unpacked and MD5 sums checked
Warning: cannot remove prior installation of package 'tseries'
bundle 'forecasting' successfully unpacked and MD5 sums checked
Warning: c
Thank you for your reply. I will try this. The initial few rows in the .dat file
look like:
Year,DayOfYear,Sku,Quantity,CatId,Category,SubCategory
2009,1,100051,1,10113,"MEN","Historical men's"
2009,1,100130,1,10638,"ACCESSORIES & MAKEUP","ALL Kids Accessories"
2009,1,100916,1,10222,"WOMEN","TV & M
This hopefully is trivial. I am trying to reshape the data using the reshape
package.
First I read in the data:
a2009 <- read.csv("Total2009.dat", header = TRUE)
Then I trim it so that it only contains the columns that I am interested in:
m2009 <- melt(a2009, id.var=c("DayOfYear","Category",
I am struggling a bit with this function 'hatvalues'. I would like a little
more understanding than taking the black box and using the values. I looked at
the Fortran source and it is quite opaque to me. So I am asking for some help
in understanding the theory. First, I take the simplest case
I am not clear on what is happening with parscale in optim. It seems that
scaling the parameters will produce unpredictable results in a non-linear
function (which is the purpose of optim, right?).
The documentation states:
parscale
A vector of scaling values for the parameters. Optimization is pe
Thank you. I saw the source. But I am not sure how to get from
.Internal(optim(...)) to fmingr.
Kevin
Katharine Mullen wrote:
> see the fmingr function in src/main/optim.c
> (https://svn.r-project.org/R/trunk/src/main/optim.c)
>
> On Wed, 25 Feb 2009 rkevinbur...@charter.net wrote:
>
>
Yes. But I found out the file permissions were set so that the configuration
had a null effect, and the configuration silently ignored the fact that the file
could not be written to.
Thank you.
Kevin
Leandro Marino wrote:
> Did you do the configuration of Tinn-R after the installation
I have a question on scope/reference/value type of variables with 'R'.
The issue came up first when I looked at the arima code.
I see code like:
myupARIMA <- function(mod, phi, theta) {
. . . .
mod
}
Then
armafn <- function(p, trans) {
. . . .
Z <- upAR
FYI. I found the solution. My RProfile.site file could not be written to
because of permissions. When I selected the "Configure/Permanent" option in
Tinn-R it was silently ignoring the fact that the file could not be written to.
When I adjusted the permissions, all was well.
Thank you.
Kevin
I am running an R script with Tinn-R (2.2.0.1) and I get the error message
Error in source(.trPaths[4], echo = TRUE, max.deparse.length = 150) :
object ".trPaths" not found
Any solutions?
Thank you.
Kevin
I have read that when the gradient function is not supplied (is null) then
first-order differencing is used to find the differential. I was trying to
track this down for my own information but I ran into .Internal(optim.). I
was not sure where to look next to see the function that is automat
I was looking at the 'R' code associated with arima. I see the following:
upARIMA <- function(mod, phi, theta) {
p <- length(phi)
q <- length(theta)
mod$phi <- phi
mod$theta <- theta
r <- max(p, q + 1)
if (p > 0)
mod$T[1:p, 1] <- phi
I was noticing mainly sign differences among the solutions to QR
decomposition. For example, in R:
> x <- matrix(c(12,-51,4,6,167,-68,-4,24,-41),nrow=3,byrow=T)
> x
     [,1] [,2] [,3]
[1,]   12  -51    4
[2,]    6  167  -68
[3,]   -4   24  -41
> r <- qr(x)
> r$qr
[,1] [,2] [,3]
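A short sketch of a sign-independent check (illustrative; the factorization is only unique up to the signs of the columns of Q and rows of R, so comparing the factors directly across packages can mislead):
x <- matrix(c(12, -51, 4, 6, 167, -68, -4, 24, -41), nrow = 3, byrow = TRUE)
d <- qr(x)
Q <- qr.Q(d)
R <- qr.R(d)
Q %*% R   # recovers x (up to rounding), whatever sign convention is used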
I am new to 'R' and also new to the concept of a 'Hessian' with non-linear
optimization. I would like to avoid going through all of the reference articles
given with ?optim, as access to a library is not handy. Would someone be able to
enlighten me on what is in the Hessian matrix if 'hessian = TR
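A tiny sketch of what hessian = TRUE gives back, using a made-up quadratic objective: a numerically differentiated matrix of second derivatives of the objective at the solution.
f   <- function(p) sum((p - c(1, 2))^2)     # illustrative objective
fit <- optim(c(0, 0), f, hessian = TRUE)
fit$par       # close to c(1, 2)
fit$hessian   # close to 2 * diag(2) for this quadratic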
Thank you, that helps a lot. Now the question is how do I know that it is in the
'stats' package? getNativeSymbolInfo doesn't seem to find 'RTSconv'.
Kevin
Sundar Dorai-Raj wrote:
> You're missing that "R_TSConv" is an R object. You can use
> stats:::R_TSConv to see the value. Not sure how
Let me get more specific. I think if this can be answered then I can translate
the information to other calls. In the arima 'R' code there is a reference to
.Call(R_TSconv, a, b)
If from the console I type:
> .Call(R_TSConv, c(1,-1), c(1,-1))
I get:
Error: object "R_TSConv" not found
If I do
In the scripts I see lots of calls like .Call(.). But if I replace this
exact string in my own R script I get 'not found'. I was wondering what the
scoping rules are for these functions. How, just for testing (I don't want to
build a full-blown package), do I use .Call and .C or .Fortr
You could look into ' try' and set it up to catch errors and do the appropriate
thing in your error handler. I don't have the exact syntax at hand right now
but looking at ?try or ?tryCatch I think will do what you want.
Kevin
Alexandra Almeida wrote:
> Hi everybody!
>
> I´m with a prob
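A minimal sketch of the try/tryCatch idea mentioned above (the wrapper name is made up):
safe_log <- function(x) tryCatch(log(x), error = function(e) NA)
safe_log(10)    # 2.302585
safe_log("a")   # the error is caught and NA is returned instead of stopping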
Sorry I didn't give the proper initialization of j. But you are right j should
also be an array of 5. So x[j + 5] would return 5 values.
So if the array returned from 'ifelse' is the same dimension as the test (h), then
are all the values of h being tested? So since h, as you say, has no dimensions
I am having a hard time understanding what is happening with ifelse.
Let me illustrate:
h <- numeric(5)
p <- 1:5
j <- floor(j)
x <- 1:1000
ifelse(h == 0, x[j+2], 1:5)
[1] 2 3 4 5 6
My question is, "shouldn't this be returning 25 numbers?" It seems that the
ifelse should check 5 values of h for
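A short sketch of the length rule (j is initialised here only for illustration, since it was missing in the post, so the numbers differ): ifelse() returns a result the same length as its test argument.
h <- numeric(5)                 # five zeros, so the test h == 0 has length 5
j <- 1:5                        # illustrative initialisation
x <- 1:1000
ifelse(h == 0, x[j + 2], 1:5)   # length 5: one value per element of the test (here 3 4 5 6 7)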
This was just an illustration. It is the warning message that I don't
understand. The warning says "number of items to replace is not a multiple of
replacement length". The way I look at it 10 is a multiple of 20.
Kevin
Sarah Goslee wrote:
> The lengths are different, particularly the le
I have a question on whether a warning message is valid or if I just don't
understand the process. Let me illustrate via some R code:
x <- 1:20
i <- x %% 2 > 0
y <- rep(1,20)
x[i] <- y
Warning message:
In x[i] <- y :
number of items to replace is not a multiple of replacement length
But it st
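A restatement of what the lengths are in that example (the counts, not the values, are what trigger the warning):
x <- 1:20
i <- x %% 2 > 0        # logical index: the 10 odd positions
length(x[i])           # 10 slots to fill
y <- rep(1, 20)        # 20 replacement values
x[i] <- y              # 10 is not a multiple of 20, hence the warning;
                       # the first 10 values of y are used anyway
x[i] <- rep(1, 10)     # matching lengths: no warning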
This is definitely a newbie question but from the documentation I have not been
able to figure out what the partial sort option on the sort method does. I have
read and re-read the documentation and looked at the examples but for some
reason it doesn't register. Would someone attempt to explain