It looks to me like you are giving the name of a directory where the file name
of a tar.gz (package source) is required. R prefers to be given the package
file rather than an extracted directory of files.
Most packages on Windows are used in their binary form ("zip" file), so you
would not spe
Is there any way to forecast/predict a time series using wavelets? Is
there any package with that functionality included?
Hi Juli,
What you can do is to make your outlier remover into a function like this:

remove_outlier_by_sd <- function(x, nsd=3) {
  meanx <- mean(x, na.rm=TRUE)
  sdx <- sd(x, na.rm=TRUE)
  return(x[abs(x - meanx) < nsd*sdx])
}
Then apply the function to your data frame ("table")
newDA <- sapply(DA, remove_outlier_by_sd)
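A self-contained sketch of the approach (the data frame DA below is made-up demonstration data, not Juli's table):

```r
# Self-contained version of the outlier remover, applied column-wise
# with sapply over an invented data frame.
remove_outlier_by_sd <- function(x, nsd = 3) {
  meanx <- mean(x, na.rm = TRUE)
  sdx <- sd(x, na.rm = TRUE)
  x[abs(x - meanx) < nsd * sdx]
}

set.seed(42)
DA <- data.frame(a = c(rnorm(20), 50), b = rnorm(21))
newDA <- sapply(DA, remove_outlier_by_sd)
# Columns may end up with different lengths after trimming, in which
# case sapply returns a list rather than a matrix.
str(newDA)
```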
Hi Lida,
Given that this is such a common question and the R FAQ doesn't really
answer it, perhaps a brief explanation will help. In R the factor class is
a sort of combination of the literal representation of the data and a
sequence of integer codes beginning at 1, with levels that are alphabetically
ordered by defa
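A small sketch of that internal representation:

```r
# A factor stores character labels plus integer codes; the levels
# default to alphabetical order, and the codes index into the levels.
f <- factor(c("b", "a", "c", "a"))
levels(f)        # "a" "b" "c"
as.integer(f)    # 2 1 3 1
as.character(f)  # "b" "a" "c" "a"
```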
Hi Hanna,
Not within lattice, but you could try this:
plot(as.numeric(tmp1$lot[tmp1$trt=="trt1"]),
tmp1$result[tmp1$trt=="trt1"],xlab="lot",
ylab="Values",type="p",axes=FALSE,
col="red",pch=1,ylim=c(68,120))
abline(v=1:9,col="lightgray",lty=2)
box()
library(plotrix)
axis(1,at=1:9,labels=as.char
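The reply above is cut off; for the right-hand axis the original question asked about, here is a hedged sketch in base graphics (the scale, colours, and values are invented stand-ins, not Hanna's data):

```r
# Sketch: a second y axis on the right with its own scale, tick marks
# and colour, using base graphics and made-up demonstration values.
plot(1:9, seq(70, 118, length.out = 9), axes = FALSE,
     xlab = "lot", ylab = "Values", col = "red", pch = 1)
axis(1, at = 1:9, labels = paste0("lot", 1:9))
axis(2)
right_at <- pretty(c(70, 118))
axis(4, at = right_at, labels = right_at / 10,
     col = "blue", col.axis = "blue")             # right-hand axis
mtext("Values/10", side = 4, line = 2, col = "blue")
box()
```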
Hello. I'm trying to install the R package Rssa from a directory:
install.packages("d:\\Rssa\\", repos=NULL, type="source")
But I have an error
* installing *binary* package 'Rssa' ...
Warning: running command 'cp -R . "E:/Program Files/R/R-3.1.3/library/Rssa"
|| ( tar cd - .| (cd "E:/Program Files/R/R-
This is pretty clearly a homework question and this list does not do
people's homework for them.
You need to learn more about the basics of R syntax. Read "An
Introduction to R", readily available from the R web site.
cheers,
Rolf Turner
--
Technical Editor ANZJS
Department of Statistics
This belongs in an email to the Rsubread maintainers, e.g.
maintainer("Rsubread"), or on the Bioconductor mailing list.
---
Jeff Newmiller
Hi Rsubread developers and R users,
I am using R version 3.1 and the associated "Rsubread" Bioconductor package.
The "subjunc" function allows the user to output junction reads in the
following format,
#Chr, StartLeftBlock, EndRightBlock, Junction_Name, nSupport, Strand,
StartLeftBlock, EndRi
Hi all,
I plotted a dotplot based on the data and code below. I would
like to add another y axis on the right with a different colour, different
tick marks and a different label. Can anyone give some help? Thanks very
much!
Hanna
> tmp1
result lot trt trtsymb trtcol
1 98 lot1 tr
On Fri, Sep 11, 2015 at 04:42:07PM +0200, peter dalgaard wrote:
> Or change the data format slightly and use indexing:
>
> > l
>       key    val
> [1,] "1.1" "1"
> [2,] "1.9" "1"
> [3,] "1.4" "15000"
> [4,] "1.5" "2"
> [5,] "1.15" "25000"
> > v <- l[,2]
> > names(v) <- l[,1]
>
massmatics,
You are trying to take the mean/sd of an entire data.frame and therefore
you receive an error. You must do some form of subset and take the mean of
the 'breaks' column. This can be done a few ways (as with almost anything
in R).
AM.warpbreaks2 <- subset(AM.warpbreaks, breaks <= 30)
?tapply
or better yet
?ave ## a wrapper for tapply
allows it to be done without extraction.
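For instance, with the built-in warpbreaks data discussed elsewhere in this thread, the two approaches look like this (a sketch, not the poster's exact code):

```r
# tapply returns one summary per group; ave returns a vector as long as
# its input, which is convenient for attaching group means back onto
# the data frame without any extraction.
data(warpbreaks)
group_means <- with(warpbreaks, tapply(breaks, wool, mean))
warpbreaks$wool_mean <- with(warpbreaks, ave(breaks, wool))
group_means
head(warpbreaks)
```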
Cheers,
Bert
Bert Gunter
"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
-- Clifford Stoll
On Fri, Sep 11, 2015 at 9:02 AM, Tom Wright wrote:
> O
On Fri, 2015-09-11 at 07:48 -0700, massmatics wrote:
> AM.warpbreaks<=30
The above command is not returning what you expected; which part of the
AM.warpbreaks data frame is expected to be <= 30?
Effectively you are using a two stage process.
1) Create a logical vector identifying rows in the datafr
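The truncated explanation above can be sketched end-to-end on the built-in warpbreaks data (assuming AM.warpbreaks was a copy of it):

```r
# Two-stage subsetting: build a logical vector, then use it to index rows.
data(warpbreaks)
keep <- warpbreaks$breaks <= 30       # 1) one TRUE/FALSE per row
AM.warpbreaks2 <- warpbreaks[keep, ]  # 2) extract only the matching rows
mean(AM.warpbreaks2$breaks)           # mean of breaks among kept rows
```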
On 11 Sep 2015, at 15:00 , Ana Raquel Felizardo Rodrigues
wrote:
> Hi,
>
> I have a really dumb question about hist(). Is it possible to define class
> widths independently from breaks? I have breaks=c(0,2,100,max(mydata)), but
> would like the 3 bars to have the same width and to label x ax
Thank you Jim and Bert for your suggestions.
Following is the final version used:
### Original tiny test data from Aldi Kraja, 9.11.2015.
### Purpose: split A into elements 1 and 2; not interested in the 3rd element
of A. Assign elements one and two to vectors C and D of the same data.frame.
### Do simi
Hey,
I want to remove outliers, so I tried to do this:
# 1 define mean and sd
sd.AT_ZU_SPAET <- sd(AT_ZU_SPAET)
mitt.AT_ZU_SPAET <- mean(AT_ZU_SPAET)
#
sd.Anzahl_BAF <- sd(Anzahl_BAF)
mitt.Anzahl_BAF <- mean(Anzahl_BAF)
#
sd.Änderungsintervall <- sd(Änderungsintervall)
mitt.Änderungsintervall <-
Hi all,
Please, I need help with the following. I use the tm package for text mining
purposes. Everything works fine until the stage of trying to draw a dendrogram.
R gives this message (see the end of the script):
Error in graphics:::plotHclust(n, merge, height, order(x$order), hang, :
invalid d
I need help on 4b please :(
4.) ‘Warpbreaks’ is a built-in dataset in R. Load it using the function
data(warpbreaks). It consists of the number of warp breaks per loom, where a
loom corresponds to a fixed length of yarn. It has three variables namely,
breaks, wool, and tension.
a.) Write a co
Hi,
I have a really dumb question about hist(). Is it possible to define class
widths independently from breaks? I have breaks=c(0,2,100,max(mydata)), but
would like the 3 bars to have the same width and to label the x axis with 0, 2 and
>100. Can anyone help me?
Thank you all!
Ana Raquel Rodrigue
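One common workaround, sketched with made-up data standing in for mydata: bin with cut() and draw the counts with barplot(), which gives equal-width bars regardless of class width:

```r
# Equal-width bars for unequal classes: tabulate the classes yourself,
# then let barplot() draw one bar per class. mydata is invented here.
set.seed(1)
mydata <- c(runif(50, 0, 2), runif(30, 2, 100), runif(20, 100, 500))
classes <- cut(mydata, breaks = c(0, 2, 100, max(mydata)),
               labels = c("0", "2", ">100"))
barplot(table(classes), ylab = "Frequency")
```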
So this is the question that I need help on. :(
4.) ‘Warpbreaks’ is a built-in dataset in R. Load it using the function
**data(warpbreaks)**. It consists of the number of warp breaks per loom,
where a loom corresponds to a fixed length of yarn. It has three variables
namely, breaks, wool, and tensi
In case you were wondering, I ended up writing a function that calls
spline(). The resample() function wasn't suitable for my requirement.
But, thanks for steering me in the right direction!
--
View this message in context:
http://r.789695.n4.nabble.com/Is-there-a-time-series-resampling-func
The Wikipedia article gives a simple formula based on the number of discordant
pairs. You can get that from the ConDisPairs() function in package DescTools.
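A hedged sketch of that lookup, assuming the DescTools package is installed (the toy table here is invented):

```r
# Count concordant/discordant pairs from a two-way table with
# DescTools::ConDisPairs; $C and $D hold the pair counts.
library(DescTools)
tab <- table(c(1, 1, 2, 2, 3), c(1, 2, 2, 3, 3))
cd <- ConDisPairs(tab)
cd$C  # concordant pairs
cd$D  # discordant pairs
```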
-
David L Carlson
Department of Anthropology
Texas A&M University
College Station, TX 77840-4352
Please reply to the list, not just me. I've added the list address to
my own reply.
On Fri, Sep 11, 2015 at 11:04 AM, Ghada Almousa wrote:
> If you don't help me , why put help and subscription
> this is not a homework it's question I searched in the web but there is
> no answer
The participa
You need to read the Introduction to R, paying particular attention to the
factors data type, which is designed for this problem.
You should also be aware that on this list failure to include a small example
of your problem in R, using plain text email (a setting in your email program),
often l
It's correct.
You need to spend some time with an R tutorial -- there are many on
the web -- and learn about how R handles factors in modeling.
?factor
Cheers,
Bert
Bert Gunter
"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
-- Clifford Stoll
Hi dear experts,
I have a general question in R about categorical variables such as
Gender (Male or Female).
If I have this column in my data and want to fit a regression model or feed
the data to the seqmeta package (singlesnp, skat meta), would you please let
me know whether I should code them first ( ma
Or change the data format slightly and use indexing:
> l
      key    val
[1,] "1.1" "1"
[2,] "1.9" "1"
[3,] "1.4" "15000"
[4,] "1.5" "2"
[5,] "1.15" "25000"
> v <- l[,2]
> names(v) <- l[,1]
> x <- c("1.9", "1.9", "1.1", "1.1", "1.4", "1.4", "1.5", "1.5", "1.5",
+ "1.5")
> v[x]
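A self-contained version of the indexing approach above (same toy values), runnable as-is:

```r
# Named-vector lookup: the names act as keys, so subsetting by name
# performs the join in one step.
l <- cbind(key = c("1.1", "1.9", "1.4", "1.5", "1.15"),
           val = c("1", "1", "15000", "2", "25000"))
v <- l[, "val"]
names(v) <- l[, "key"]
x <- c("1.9", "1.9", "1.1", "1.1", "1.4", "1.4", "1.5", "1.5", "1.5", "1.5")
unname(v[x])
# "1" "1" "1" "1" "15000" "15000" "2" "2" "2" "2"
```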
Repeating your post won't help. Writing a good question with sample
data and the code you've tried, as well as describing your *specific*
difficulties will.
Without a reproducible example that includes some sample data provided
using dput() (fake is fine), the code you used, and some clear idea of
On Tuesday, September 8, 2015, Ghada Almousa wrote:
> I have project to study and analysis clusters algorithm in R
> "K-mean, Hierarchical, Density based and EM"
> I want to calculate
> Cluster instance , number of iteration , sum of squared error SSE and the
> accuracy for each cluster algorithm
You are being over-optimistic with your starting values, and/or with constraints
on the parameter space.
Your fit is diverging in sigma for some reason known only to
nonlinear-optimizer gurus...
For me, it works either to put in an explicit constraint or to reparametrize
with log(sigma).
E.g.
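The example is cut off above; here is a minimal sketch of the log(sigma) reparameterisation (the model f, the data, and the starting values are invented stand-ins, not the poster's fit):

```r
# Optimise over s = log(sigma) so that sigma = exp(s) is positive by
# construction; no explicit box constraint is then needed.
set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 2 * x + 1 + rnorm(50, sd = 0.5)
negloglik <- function(par) {
  a <- par[1]; b <- par[2]
  sigma <- exp(par[3])                    # par[3] is log(sigma)
  -sum(dnorm(y, mean = a * x + b, sd = sigma, log = TRUE))
}
fit <- optim(c(1, 0, 0), negloglik)
exp(fit$par[3])  # estimated sigma, always positive
```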
Hi everyone,
I have a problem with maximum-likelihood-estimation in the following
situation:
Assume a functional relation y = f(x) (the specific form of f should be
irrelevant). For my observations I assume (for simplicity) white noise,
such that hat(y_i) = f(x_i) + epsilon_i, with the epsilon_i
On Thu, Sep 10, 2015 at 10:30:50PM -0700, Bert Gunter wrote:
> ?match
>
> as in:
>
> > y <- lk_up[match(x,lk_up[,"key"]),"val"]
> > y
> [1] "1" "1" "1" "1" "15000" "15000" "2"
> [8] "2" "2" "2"
> ...
Aye -- thank you very much!
Peace,
david
--
David H. Wolfski
Dear Jim,
Thank you for your reply and pointing this out. I thought about it and then
I forgot. I have computed the weekly average (and max and min). The data is
below. Again I computed the max/min/mean by each year, so each file
contains data for one year. Can I modify the code I used for count?
Hi
if you have a full set of data, with no observations missing, you can split your
data frame e.g. by
split(yourdata, trunc((seq_len(nrow(yourdata)) - 1)/6))
and use lapply or for cycle.
If you need to cut according to dates you can use
?cut
POSIX date.
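Sketched on a toy data frame (18 rows, chunks of 6; yourdata is a made-up stand-in):

```r
# Split a data frame into consecutive 6-row chunks, then summarise each
# chunk; note the parentheses: divide *before* trunc so every 6 rows
# share one group id.
yourdata <- data.frame(val = 1:18)
chunks <- split(yourdata, trunc((seq_len(nrow(yourdata)) - 1) / 6))
sapply(chunks, function(d) mean(d$val))
# chunk means: 3.5, 9.5, 15.5
```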
Cheers
Petr
Hi Shouro,
While I have enjoyed the continuing discussion on this particular message
(repression may have been a Galtonian slip), there is a lingering doubt in
my mind. You say that you want to categorize the weekly temperatures for
cities in bins of about 5.6 degrees (centigrade?). In almost all o
"that rely on profusion of dummies" :)
+1
John Kane
Kingston ON Canada
> -Original Message-
> From: r.tur...@auckland.ac.nz
> Sent: Fri, 11 Sep 2015 12:22:38 +1200
> To: dwinsem...@comcast.net
> Subject: Re: [R] [FORGED] Re: Help with Binning Data
>
> On 11/09/15 11:57, David Winsemius
Dear R-users,
from a data frame of half-hourly wind observations (direction and speed, three
years of data), I need to evaluate the first main direction for each period of
three hours (h03, h06, h09, h12, h15, h18, h21, h24 of each day).
The command "windRose" of the package "openair" creates a data
thanks Petr. It shall work ;)
On Fri, Sep 11, 2015 at 4:34 AM, PIKAL Petr wrote:
> Hi
>
> you need to merge those two data frames with a column indicating given set.
>
> Without data it is only a guess but
>
> a$set <- "a"
> b$set <- "b"
> complete <- rbind(a, b)
> p <- ggplo
Hi
you need to merge those two data frames with a column indicating given set.
Without data it is only a guess but

a$set <- "a"
b$set <- "b"
complete <- rbind(a, b)
p <- ggplot(complete, aes(x=distance, y=intensity, colour=set))
p + geom_smooth(method = "loess", size = 1, span=0.01) + xlab("distance")
Apologies for the HTML; it shouldn't have happened. I would like to use the
dummies as independent variables in a regression. I did manage to count
observations in a given range using the following code:
for (i in filelist) {
# i <- filelist[1]
tmp1 <- as.data.table(read.csv(i, sep=",")
Dear Petr,
thank you very much, it helped. On a side note, if I have 2 plots and 2
loess curves (as below), is there any way in ggplot2 to overlay these 2
graphs for "a" and "b"? Many thanks again!
qplot(data=a,distance,intensity)+geom_smooth(method = "loess", size = 1,
span=0.01)+xlab("dist
Hi
based on your data, using a logarithmic y scale may give you the desired
result.
http://stackoverflow.com/questions/4699493/transform-only-one-axis-to-log10-scale-with-ggplot2
Or you can recalculate intensity to scale 100-0 (or any other suitable scale).
?rescale
Cheers
Petr
From: Bogdan