Hi Sebastian,
here are examples with ggplot2 and base graphics.
http://stackoverflow.com/questions/3777174/plotting-two-variables-as-lines-using-ggplot2-on-the-same-graph
http://stackoverflow.com/questions/17150183/r-plot-multiple-lines-in-one-graph
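For instance, a minimal two-series sketch along the lines of those links (the data frame and column names are assumptions):
library(ggplot2)
# hypothetical data: one x column and two series to draw as lines
df <- data.frame(x = 1:10, y1 = (1:10)^2, y2 = (1:10)^1.5)
ggplot(df, aes(x = x)) +
  geom_line(aes(y = y1, colour = "y1")) +
  geom_line(aes(y = y2, colour = "y2"))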
You may also impress your audience by using
Hi Rosa,
you may take advantage of the extremevalues package.
https://cran.r-project.org/web/packages/extremevalues/extremevalues.pdf
An example:
set.seed(1023)
v3 <- c(rnorm(100, 0, 0.2), rnorm(5, 4, 0.1), rnorm(5, -4, 0.1))
v4 <- sample(v3, length(v3))
nam <- as.character(1:length(v4))
# the data frame construction and the getOutliers() call below are assumptions,
# since the original message is truncated at this point
df <- data.frame(name = nam, value = v4)
library(extremevalues)
getOutliers(df$value)   # flags the injected extreme values as outliers
Hi Michael,
On top of all suggestions, I can mention the following packages for linear
programming problems:
https://cran.r-project.org/web/packages/linprog/linprog.pdf
https://cran.r-project.org/web/packages/lpSolve/lpSolve.pdf
https://cran.r-project.org/web/packages/clue/clue.pdf
(see cl
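As a concrete illustration, a minimal lpSolve sketch (the toy objective and constraints are assumptions):
library(lpSolve)
# maximize 3x + 2y subject to x + y <= 4 and x + 3y <= 6
obj <- c(3, 2)
con <- matrix(c(1, 1,
                1, 3), nrow = 2, byrow = TRUE)
res <- lp("max", obj, con, c("<=", "<="), c(4, 6))
res$solution   # optimal values of x and y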
You may look at:
http://rseek.org/?q=community%20detection
--
Best,
GG
Hi,
I was able to replicate the solution suggested by William for the
data.frame class, but not for the data.table class.
For data.table, I had to make some minor changes, as shown below.
library(data.table)
a <- 1:10
b <- c("a","b","c","d","e","f","g","h","i","j")
c <- seq(1.1, .2, length.out = 10)   # length.out value assumed; the original line is truncated
df1 <- data.frame(ID = c(1,1,2,2,3,3,4,4,5,5),
A = c(1,0,5,1,1,NA,0,3,2,7),
B = c(2,3,NA,3,4,NA,1,0,5,NA))
df2 <- data.frame(ID = c(1,2,3,4,5),
A = c(1,6,1,3,9),
B = c(5,3,4,1,5))
m <- match(df1$ID, df2$ID)
sel <- c("A",
Starting from this data frame:
my.df <- data.frame(num = 1:5, let = letters[1:5])
> my.df
num let
1 1 a
2 2 b
3 3 c
4 4 d
5 5 e
>
and inserting a blank row (NAs row) for each one of my.df rows.
na.df <- data.frame(num = NA, let = NA)
my.df <- do.call(rbind, lapply(seq_len(nrow(my.df)),
                 function(i) rbind(my.df[i, ], na.df)))   # completed (original line truncated); lapply keeps column classes
Since the aggregate S3 method for class formula already has na.action =
na.omit,
## S3 method for class 'formula'
aggregate(formula, data, FUN, ...,
subset, na.action = na.omit)
I think that, to deal with NA's, the following is enough:
aggregate(Value~ID, dta, max)
Moreover, passing na.r
idvalues <- data.frame(ID = c(1, 1, 2, 2, 3, 4, 4, 4, 5, 5),
                       Value = c(0.69, 0.31, 0.01, 0.99, 1.00, NA, 0, 1, 0.5, 0.5))
aggregate(Value~ID, data=idvalues, max)
ID Value
1 1 0.69
2 2 0.99
3 3 1.00
4 4 1.00
5 5 0.50
--
Best,
GG
You may investigate a solution based on regular expressions.
Some tutorials to help:
http://www.regular-expressions.info/rlanguage.html
http://www.endmemo.com/program/R/grep.php
http://biostat.mc.vanderbilt.edu/wiki/pub/Main/SvetlanaEdenRFiles/regExprTalk.pdf
https://rstudio-pubs-static.s3.ama
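For instance, a quick grep()/sub() sketch (the sample strings and patterns are assumptions):
x <- c("abc123", "def", "ghi456")
grepl("[0-9]+", x)                        # TRUE FALSE TRUE
sub("([a-z]+)([0-9]+)", "\\2-\\1", x)     # "123-abc" "def" "456-ghi"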
Please see updates to df2 assignment as shown below.
library(xts) # primary
#library(tseries) # Unit root tests
library(ggplot2)
library(vars)
library(grid)
dt_xts <- xts(x = 1:10, order.by = seq(as.Date("2016-01-01"),
                                       as.Date("2016-01-10"), by = "1 day"))
col
Some tutorials and examples may help.
http://www.zoology.ubc.ca/~kgilbert/mysite/Miscellaneous_files/R_MakingMaps.pdf
http://coulmont.com/cartes/rcarto.pdf
https://pakillo.github.io/R-GIS-tutorial/
http://www.milanor.net/blog/maps-in-r-plotting-data-points-on-a-map/
https://www.youtube.com/wat
Guessing that you may want to take a look at:
http://adv-r.had.co.nz/Expressions.html
https://www.opencpu.org/
Anyway, as David wrote, it is too vague for specific hints.
--
Best,
GG
Generically:
http://rseek.org/?q=predictive+maintenance
and among those:
https://rpubs.com/Simionsv/97830
http://blog.revolutionanalytics.com/2016/03/predictive-maintenance.html
--
Best,
GG
A more specific reproducible example.
set.seed(1023)
library(arules)
# starting from a data frame whose fields are characters (see stringsAsFactors = FALSE), as asked
products <- c("P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9", "P10")
# the second column, the sampling sizes and the transactions step below are assumptions,
# since the original message is truncated at this point
mydf <- data.frame(user = sample(LETTERS[1:20], 100, replace = TRUE),
                   product = sample(products, 100, replace = TRUE),
                   stringsAsFactors = FALSE)
trans <- as(lapply(split(mydf$product, mydf$user), unique), "transactions")
summary(trans)
Ryan,
>From "decompose()" source code, two conditions can trigger the error message:
"time series has no or less than 2 periods"
based on the frequency value, specifically:
1. f <= 1
2. length(na.omit(x)) < 2 * f
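As a minimal sketch of the second condition (the toy series is an assumption):
x <- ts(rnorm(10), frequency = 12)   # only 10 points, so less than 2 full periods
try(decompose(x))                    # stops with "time series has no or less than 2 periods"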
It appears to me that your reproducible code has a typo; it shou
# your code
Subject<- c("2", "2", "2", "3", "3", "3", "4", "4", "5", "5", "5", "5")
dates <- seq(as.Date('2011-01-01'),as.Date('2011-01-12'), by = 1)
deps <- c("A", "B", "C", "C", "D", "A", "F", "G", "A", "F", "A", "D")
df <- data.frame(Subject, dates, deps)
df
final <- c("2 2011-01-02B", "2 2011
mydf <- data.frame(d1 = LETTERS[1:10], d2 = letters[11:20])
> str(mydf)
'data.frame': 10 obs. of 2 variables:
$ d1: Factor w/ 10 levels "A","B","C","D",..: 1 2 3 4 5 6 7 8 9 10
$ d2: Factor w/ 10 levels "k","l","m","n",..: 1 2 3 4 5 6 7 8 9 10
>
library(arules)
trans1 <- as(mydf, "transactions")
As a "quick solution", I would explore the use of stat_smooth() and then
extract fit data from,
as herein shown:
library(ggplot2)
p <- qplot(hp, wt, data=mtcars) + stat_smooth(method="loess")
p
ggplot_build(p)$data[[2]]
      x      y   ymin   ymax     se PANEL group colour fill ...
Yes, I think it is worth evaluating what is available at:
https://cran.r-project.org/web/views/HighPerformanceComputing.html
and, as a thesis, tackling a "real-life" use case where the R language and some of
those High Performance Computing packages
are used to solve the problems at hand.
Or you may implement
Some good references:
https://www.otexts.org/fpp
http://link.springer.com/book/10.1007%2F978-0-387-88698-5
http://www.statoek.wiso.uni-goettingen.de/veranstaltungen/zeitreihen/sommer03/ts_r_intro.pdf
Best,
--
GG
From: Francesco Perugini [mailto:francesco.perug...@yahoo.it]
Sent: Tuesday, 2 February 2016 09:24
To: Giorgio Garziano
Subject: Re: [R] R Sig-Geo group - loop for creating spatial matrix
Dear Giorgio,
thanks a lot for your reply.
From here now, I want to implement the Global G test for spatial
autoco
https://cran.r-project.org/web/packages/tsintermittent/tsintermittent.pdf
Best,
--
GG
You may handle that as a list of "nb" objects.
library(spdep)
example(columbus)
coord <- coordinates(columbus)
z <- c(1,2,3,4,5,6,7,8,9)
neighbors.knn <- list()
for (val in z) {
  neighbors.knn <- c(neighbors.knn,
                     list(knn2nb(knearneigh(coord, val, longlat = FALSE), sym = FALSE)))
}
class(neighbors.knn)
This tutorial may help:
http://faculty.washington.edu/ezivot/econ424/Working%20with%20Time%20Series%20Data%20in%20R.pdf
See pages 20 and 27 for your specific issue.
Best,
--
GG
To: Giorgio Garziano; r-help@r-project.org
Subject: RE: [R] cross correlation of filtered Time series
Thank you,
May I know the reason for keeping the frequency as 60? The package
says it expects frequency in hertz; how does it work if I input daily data?
The correlation max shown by
Hello,
I think that the package "seewave" may help you. See corenv() function.
https://cran.r-project.org/web/packages/seewave/seewave.pdf
library(seewave)
sst.ts <- ts(dc$sst, frequency=60)
t2m.ts <- ts(dc$t2m, frequency=60)
corenv(sst.ts, t2m.ts)
corenv.res <- corenv(sst.ts, t2m.ts, plot=FALSE)
A <- matrix(c(1,2,3,4),2,2)
B <- matrix(A, nrow=14, ncol=14)
> B
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14]
[1,]    1    3    1    3    1    3    1    3    1     3     1     3     1     3
[2,]    2    4    2    4    2    4    2    4    2     4     2     4     2     4
My suggestion is to inspect the VaRplot source code and, with the help of
debug() if necessary,
verify how ylim turns out with your data.
> VaRplot
function (alpha, actual, VaR, title = paste("Daily Returns and Value-at-Risk
\nExceedances\n",
"(alpha=", alpha, ")", sep = ""), ylab
I think that the rle() function may help you to tackle the problem in a more
general way.
https://stat.ethz.ch/R-manual/R-devel/library/base/html/rle.html
Using William's suggested series:
x <- c(2,2,3,4,4,4,4,5,5,5,3,1,1,0,0,0,1,1,1)
> x
[1] 2 2 3 4 4 4 4 5 5 5 3 1 1 0 0 0 1 1 1
rle.x <- rle(x)
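The run lengths and run values can then be inspected (these two lines are an addition, since the original message is truncated here):
rle.x$lengths   # length of each run
rle.x$values    # value of each run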
Some hints at the following link where the "order.by requires an appropriate
time-based object" error is commented.
http://stackoverflow.com/questions/23224142/converting-data-frame-to-xts-order-by-requires-an-appropriate-time-based-object
--
GG
Hi Marna,
here is another example that should appear more similar to your scenario
than my previous one.
x <- 1:100
y1 <- x*x
g1 <- rep("y1", 100)
df1 <- as.data.frame(cbind(x, y1), stringsAsFactors=FALSE)
df1 <- as.data.frame(cbind(df1, g1))
colnames(df1)<- c("x", "value", "variable")
y2
Hi Marna,
I prepared this toy example that should help you.
x <- 1:100
y <- x*x
avg <- mean(y)
avg.v <- rep(avg,100) # your average column data
df <- as.data.frame(cbind(x, y, avg.v))
library(ggplot2)
ggplot(data=df[,-3], aes(x=x, y=y)) + geom_line() +
geom_line(data=df[,c(1,3)], color='b
You can do it this way, for example:
geom_line(linetype="dashed", size=1, colour="blue")
Further info at:
http://docs.ggplot2.org/current/geom_line.html
--
GG
I think this may help.
my_assign <- function(operand, value) {
  assignment <- paste(operand, value, sep = "<-")
  e <- parse(text = assignment)
  eval.parent(e)
}
a <- rep(0,5)
> a
[1] 0 0 0 0 0
my_assign("a[2]", 7)
> a
[1] 0 7 0 0 0
my_assign("a[4]", 12)
> a
[1] 0 7 0 12 0
--
GG
Load the dplyr library to use arrange() for the ordering, in this case:
library(dplyr)
result.order <- arrange(result, d, version, a, b, c)
dim(result.order)
[1] 3000    5
head(result.order)
             d version            a              b     c
1 -2.986456069       1 0.2236414154 0.004258038663 1.089
Further, to be more specific about checkpointing with R, I may suggest the
following readings.
Look for "checkpoint models" in:
http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/8/docs-website/h2o-docs/booklets/DeepLearning_Vignette.pdf
Look for "checkpointing" i
I found some information about parallel processing in R that might be of
interest to you:
http://topepo.github.io/caret/parallel.html
http://www.vikparuchuri.com/blog/parallel-r-model-prediction-building/
https://www.r-project.org/nosvn/conferences/useR-2013/Tutorials/kuhn/user_caret_2up.pdf
htt
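Those references typically boil down to registering a parallel backend before training; a minimal sketch (the number of workers is an assumption):
library(doParallel)
cl <- makePSOCKcluster(2)   # assumption: 2 worker processes
registerDoParallel(cl)
# ... run the model training here; caret picks up the registered backend ...
stopCluster(cl)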
You forgot to put the comma after "-intrain" in the following assignment:
testing <- spam[-intrain, ]
"make" is one of the data columns of spam dataset.
> colnames(spam)
[1] "make"
--
GG
About my previous answer, I should have used glm() in place of
lm(), as the response is binomial.
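A minimal sketch of what I mean (the data frame and variable names are assumptions):
# assuming a binary response y and predictors x1, x2 in a data frame df
fit <- glm(y ~ x1 + x2, data = df, family = binomial)
summary(fit)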
--
GG
I would tackle the problem in the following way:
lm.model <- lm(z~ x + y, data=m)
summary(lm.model)
Call:
lm(formula = z ~ x + y, data = m)
Residuals:
         Min           1Q       Median           3Q          Max
 -0.34476713  -0.09571506  -0.01786731   0.05225554   0.51693389
Coefficients:
library(data.table)
dat <- as.data.table(matrix(100, nrow=1, ncol=100))
colnames(dat) <- gsub("V", "i", colnames(dat))
--
GG
I may suggest this quick guide:
http://gastonsanchez.com/work/webdata/getting_web_data_r4_parsing_xml_html.pdf
and the following link:
http://www.r-datacollection.com/
I apologize for not being more specific.
--
GG
You may use the "caret" package.
At the following link there are 2-class and 3-class examples:
http://www.inside-r.org/node/86995
--
GG
And in case you would like to explore a supervised approach, I may
suggest the use of knn(), fed with a training set determined by your
expected cluster assignments.
Some "quick code" to show what I mean:
z <- as.data.frame(cbind(scale(x), scale(y)))
colnames(z) <- c("x"
I think the inspection of the "stopifnot()" source code may help.
> stopifnot
function (...)
{
n <- length(ll <- list(...))
if (n == 0L)
return(invisible())
mc <- match.call()
for (i in 1L:n) if (!(is.logical(r <- ll[[i]]) && !anyNA(r) &&
all(r))) {
ch <- de
my_convert <- function(col) {
  # rows that look like dd.mm.yyyy dates (dots escaped in the pattern)
  v <- grep("[0-9]{2}\\.[0-9]{2}\\.[0-9]{4}", col)
  # rows that look like numbers containing a comma
  w <- grep("[0-9]+,[0-9]+", col)
  col2 <- col
  if (length(v) == length(col)) {
    col2 <- as.Date(col, format = "%d.%m.%Y")   # %Y for 4-digit years
  } else if (length(w) == length(col)) {
    col2 <- as.numeric(gsub(",", "", col))
  }
  col2
}
Decrease the "tol" parameter specified into the "is.non.singular.matrix() call,
for example as:
m <- matrix(c( 1.904255e-12, -1.904255e-12, -8.238960e-13, -1.240294e-12,
-1.904255e-12, 3.637979e-12, 1.364242e-12, 1.818989e-12,
-8.238960e-13, 1.364242e-12, 4.80998
My educated guess:
package: RSGHB
https://cran.r-project.org/web/packages/RSGHB/RSGHB.pdf
http://www.inside-r.org/packages/cran/RSGHB/docs/doHB
Wondering if the ltm package may also help.
An R package search is available at:
http://rseek.org/
My apologies if the abovementioned packages do not f
Looking at the source code of the package drc, there is something that may
somehow explain what
you are experiencing:
file: plot.drc.R, function addAxes(), lines 543-626
ceilingxTicks <- ceiling(log10(xaxisTicks[-1]))
...
xaxisTicks <- c(xaxisTicks[1], 10^(unique(ceilingxTicks)))
xLabels <-
Last "+theme_bw()" to be deleted.
Try this:
ggplot(data1, aes(x=x1, y=y1))+
geom_point()+
geom_smooth(method="glm", family="gaussian",aes(linetype="equation1"))+
geom_smooth(aes(x=x1, y=y1, linetype="equation2"),data=data2, method="glm",
family="gaussian")+
scale_linetype_manual(values =
Try to look at the link:
http://www.cookbook-r.com/Graphs/Legends_(ggplot2)/
Also consider:
help(theme)
legend.background    background of legend (element_rect; inherits from rect)
legend.margin        extra space added around legend (unit)
legend.key           background underneath legend keys (element_rect
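A minimal usage sketch of those theme() options (the plot and the fill colours are assumptions):
library(ggplot2)
p <- ggplot(mtcars, aes(wt, mpg, colour = factor(cyl))) + geom_point()
p + theme(legend.background = element_rect(fill = "grey90"),
          legend.key = element_rect(fill = "white"))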
First, your code has flaws in the assignment of NA and in passing na.rm=TRUE to
colMeans().
It should be:
test2 <- test
for (i in 1:ncol(test)) { test2[which.min(test[,i]),i]=NA}
print(test2)
  samp1 samp2 samp3
1    60    60    NA
2    50    60    65
3    NA    90    65
4    90    NA    90
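Then the column means, skipping the NA's (this line is an addition, as the original message is truncated here):
colMeans(test2, na.rm = TRUE)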
It appears to be a numerical precision issue introduced while computing the
"end" value of the time series,
if it is not already specified as a ts() input parameter.
You may want to download the R source code:
https://cran.r-project.org/src/base/R-3/R-3.2.2.tar.gz
and look into R-3.2.2\src\librar
From the R prompt, input:
memory.limit()
which returns the amount of memory available to R.
Then, before allocating that vector, run:
library(pryr)
mem_used()
to see the current memory in use.
It should hold that: memory.limit() - mem_used() >= 2.4 GBytes.
--
GG
I may suggest this tutorial:
http://www.stat.berkeley.edu/~nolan/stat133/Fall05/lectures/profilingEx.html
and this discussion:
http://stackoverflow.com/questions/3650862/how-to-efficiently-use-rprof-in-r
which inspired this example:
Rprof("profile1.out", line.profiling=TRUE)
for (i in 1:10) mad(runif(1e5))   # loop body assumed; the original message is truncated here
Rprof(NULL)
summaryRprof("profile1.out")
It is likely the "p" variable is not defined in your R environment.
Inside your function model.LIDR, the variable "p" is used before being
initialized
in any of the environments reachable by the search path, included the model.LIDR
function environment.
The remedy is to define and initiali
Would this be fine?
foo <- function(df) {
x <- df[, 1, drop = FALSE]
available <- rev(letters[(letters %in% colnames(df)) == FALSE])
colnames(x) <- available[1]
dfOut <- data.frame(df, x)
dfOut
}
Data <- data.frame(x = c(1, 2), y = c(3, 4))
foo(Data)
x y z
1 1 3 1
2 2 4 2
--
GG
Good question.
> str(Empl[c(2,4)])
List of 2
$ family:List of 3
..$ spouse: chr "Fred"
..$ children : num 3
..$ child.ages: num [1:3] 4 7 9
$ family:List of 3
..$ spouse: chr "Mary"
..$ children : num 2
..$ child.ages: num [1:2] 14 17
>
> Empl[c(2,4)][1]
$family
$family$spou
It is likely that you have some list structure where you should not.
Check the class of the elements of your matrices, to see if any list class
shows up.
It is not clear from your code what "y" is that you pass to densregion(...).
Anyway, one way to reproduce your error is the following:
# this does not work
> a <- lis
>From the "densregion" help page I can read that:
z is a matrix of densities on the grid defined by x and y,
with rows corresponding to elements of x
and columns corresponding to elements of y.
So in your scenario z must be a 3 rows x 100 columns matrix, if you like to
take advantage of densregio
Similarly to what can be read in help(qqplot), but using the Rayleigh
distribution:
library(VGAM)
p <- ppoints(100)
x <- qrayleigh(p)
y <- rrayleigh(100)
qqplot(x,y)
qqline(y, distribution = function(p) qrayleigh(p),
prob = c(0.1, 0.6), col = 2)
--
GG
About the "distance = NA" issue, please see if this comment helps:
https://lists.gking.harvard.edu/pipermail/matchit/2011-January/000174.html
Furthermore, the Mahalanobis NA distance values are hard-wired in the code,
file matchit.R:
## no distance for full mahalanobis matching
if(fn1=="di
Replacing na.omit() with !is.na() appears to improve performance in terms of execution time.
rm(list=ls())
test1 <- (rbind(c(0.1,0.2),0.3,0.1))
rownames(test1)=c('y1','y2','y3')
colnames(test1) = c('x1','x2');
test2 <- (rbind(c(0.8,0.9,0.5),c(0.5,0.1,0.6)))
rownames(test2) = c('y2','y5')
colnames(te
I reworked Frank Schwidom's solution to make it shorter than its original
version.
test1 <- (rbind(c(0.1,0.2),0.3,0.1))
rownames(test1)=c('y1','y2','y3')
colnames(test1) = c('x1','x2');
test2 <- (rbind(c(0.8,0.9,0.5),c(0.5,0.1,0.6)))
rownames(test2) = c('y2','y5')
colnames(test2) = c(
library(dplyr)
df <- data.frame(z = rep(c("A", "B")), x = 1:6, y = 7:12) %>%
arrange(z)
temp <- reshape(df, v.names = c("x", "y"), idvar = c("x", "y"), timevar = "z",
direction = "wide")
lA <- na.omit(temp[,c("x.A", "y.A")])
lB <- na.omit(temp[,c("x.B", "y.B")])
df.long <- as.data.frame(cbind(lA,
Intel i5 Windows-7 64-bit 16GB RAM.
GG
From: Maram SAlem [mailto:marammagdysa...@gmail.com]
Sent: Thursday, 1 October 2015 14:12
To: Giorgio Garziano
Cc: r-help@r-project.org
Subject: Re: [R] (subscript) logical subscript too long
Thanks a lot Giorgio, I used
memory.limit(size=4096)
but got
don't be silly!: your machine has a 4Gb address limit
I'm working on my Ph.D. thesis and I have a huge code of which this
Check your memory size by:
memory.limit()
try to increase it by:
memory.limit(size=4096)
From: Maram SAlem [mailto:marammagdysa...@gmail.com]
Sent: Thursday, 1 October 2015 13:22
To: Giorgio Garziano
Cc: r-help@r-project.org
Subject: Re: [R] (subscript) logical subscript too long
Thanks
It should be:
log <- (rowSums(ED) <= (n - m))
Compare the following two values:
length(log)
nrow(w)
--
GG
The "pronoun dot" is used in conjunction with %>% in dplyr (which imports
magrittr).
See page 9, paragraph "Placing lhs elsewhere in rhs call" of the document:
https://cran.r-project.org/web/packages/magrittr/magrittr.pdf
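For instance (a minimal sketch; the model formula is an assumption):
library(magrittr)
# the dot places the left-hand side at a chosen argument of the right-hand side call
mtcars %>% lm(mpg ~ wt, data = .)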
--
GG
This works:
filter(mydata, complete.cases(mydata))
About dplyr "pronoun dot", see:
http://www.r-bloggers.com/dplyr-0-2/
--
GG
library(quantmod)
getSymbols("YHOO")
chartSeries(YHOO, theme="white")
b1 <- addBBands(50,2)
b1@params$colors$bg.col <- "#FFFFFF"   # colour value assumed; the original "#FF" appears truncated
b1
b2 <- addBBands(100,2)
b2@params$colors$bg.col <- "#FFFFFF"   # colour value assumed; the original "#FF" appears truncated
b2
See if this may help:
http://stackoverflow.com/questions/11872879/finding-out-which-functions-are-called-within-a-given-function
--
GG
Also worth mentioning:
temp <- list(1:3, list(letters[1:3], duh= 5:8), zed=15:17)
library(rlist)
list.flatten(temp)
[[1]]
[1] 1 2 3
[[2]]
[1] "a" "b" "c"
$duh
[1] 5 6 7 8
$zed
[1] 15 16 17
---
Giorgio Garziano
If you need further info on flattening a list, check this out:
http://stackoverflow.com/questions/8139677/how-to-flatten-a-list-to-a-list-without-coercion/8139959#8139959
n <- 100                      # assumed; the beginning of this example is missing
long <- runif(n, -180, 180)   # reconstructed from the truncated "80)" fragment
lat <- runif(n,-90, 90)
size <- runif(n, 1,5)
data <- cbind(long, lat, size)
data <- as.data.frame(data)
gpl <- ggmap(map) +
geom_point(data = data, aes(x = long, y = lat), size=data$size,
alpha=1, color="blue", s
if (!is.null(v)) {   # reconstructed; the beginning of this example is missing
  m[rowname, colname] <- list(v)
} else {
  m[rowname, colname] <- NA
}
}
}
> m
x1 x2 x3
y1 0.1 0.2 NA
y2 Numeric,2 Numeric,2 0.9
y3 0.1 0.1 NA
y5 0.5 0.6 0.1
> m["y2",]
$x1
[1] 0.3 0.8
$
Try this:
X<-c("A","B","C","D","E")
Y<-c(0,1,2,3,4)
for (i in 0:3) {
Y<-Y+i
data<-data.frame(X,Y)
fe.flag <- file.exists("test.csv")
write.table(data, "test.csv", row.names = FALSE, col.names = !fe.flag,
sep=";", append = fe.flag)
}
white", type='line', name="")
chartSeries(YHOO, type='line', theme='white')
b <- BBands(HLC(YHOO))
addTA(b, legend=NULL)
--
Giorgio Garziano
RColorBrewer,\nreadxl, reshape, rggobi, RGtk2Extras, ROCR, RODBC,
rpart,\nrpart.plot, SnowballC, stringr, survival, timeDate, tm,\nverification,
wskm, XML, pkgDepTools, Rgraphviz"
pack["rattle", "Imports"]
[1] NA
In general, the package installation by RStudio is strai
it may help.
Giorgio Garziano
By "interested in any kind of deviation from randomness", I mean that
I would like to apply all "randtests" R package randomness tests, if
they give reliable results for {-1, 1} sequences.
Good suggestion, thanks.
-Original Message-
From: Jeff Newmiller [mailto:jdnew...@dcn.davis.ca.us]
Sent: Friday, 25 September 2015 18:49
To: Giorgio Garziano; r-help@r-project.org
Subject: Re: [R] Randomness tests
You are way off topic for this list. Perhaps stats.stackexchange.com
I am interested in any kind of deviation from randomness.
I would like to know if the fact that a time series can take values only from
the set {-1, 1} restricts
the type of randomness tests that can be done.
--
Giorgio Garziano
In general, if you want to plot a trading indicator computed by quantmod
independently,
you can do:
wpr <- addWPR(n=300)
plot(wpr@TA.values, type='l')   # or any other R plotting approach you like
since the trading indicator values of any quantmod indicator are stored in @TA.values.
Gio
Hi,
to test the randomness of a time series whose values can only be +1 and -1,
are all of the following
randomness tests applicable or only some of them?
cox.stuart.test
difference.sign.test
bartels.rank.test
rank.test
runs.test
Tests provided by the randtests R package.
Thanks.
Giorgio Garziano
/covariance-matrix.aspx
About what is outlined in the book reference I mentioned, I shall open a separate
thread
if needed.
Thanks.
---
Giorgio
Genoa, Italy
From: Tsjerk Wassenaar [mailto:tsje...@gmail.com]
Sent: Sunday, 10 May 2015 22:31
To: Giorgio Garziano
Cc: r-help@r-project.org
Subject
Σ x_1² / (N-1)
. . .
Σ x_n² / (N-1)
Reference: "Time Series Analysis and Its Applications – With R Examples", Springer,
§7.8 "Principal Components", pp. 468-469.
Cheers,
Giorgio
From: Tsjerk Wassenaar [mailto:tsje...@gmail.com]
Sent: Sunday, 10 May 2015 22:11
To: Giorgio Garziano
Cc: r-help@r
<- (1/(n-1)) * data.center %*% t(data.center)
--
Giorgio Garziano
-Original Message-
From: David Winsemius [mailto:dwinsem...@comcast.net]
Sent: Sunday, 10 May 2015 21:27
To: Giorgio Garziano
Cc: r-help@r-project.org
Subject: Re: [R] Variance-covariance matrix
On May 10, 2015, at 4:27
Hi,
I am looking for an R package providing variance-covariance matrix
computation for univariate time series.
Please, any suggestions?
Regards,
Giorgio
Ok. Thanks for your explanation.
Cheers,
Giorgio Garziano
-Original Message-
From: Rolf Turner [mailto:r.tur...@auckland.ac.nz]
Sent: Monday, 27 April 2015 09:24
To: Giorgio Garziano; r-help@r-project.org
Subject: Re: [R] Question about base::rank results
On 26/04/15 20:17, Giorgio
x <- c(12,34,15,77,78,22)   # reconstructed from the output below; the start of this line is missing
> x[rank(x)]
[1] 12 77 34 78 22 15 (?)
> x <- c(12,34,77,15,78)
> x[rank(x)]
[1] 12 77 15 34 78 (?)
Please, any feedback? Thanks.
BR,
Giorgio Garziano